• Marchesk
    4.6k
Artificial neural networks have experienced a lot of success in recent years with image recognition, speech analysis, and game playing. What's interesting is that the networks aren't programmed specifically to recognize images or sounds, but rather learn to do so given a training data set. Once trained, the network can then recognize new shapes or sounds, similar to but not exactly the same as the ones it was trained on.

A simple example would be learning to recognize handwritten digits. Supervised learning is when the training data has labels, such as "3" for any version of a handwritten three in the data. Unsupervised learning is when the network learns to recognize patterns in data that carries no labels.
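The distinction can be sketched in a few lines. This is a toy illustration only, with invented one-dimensional "images" (just brightness values) standing in for digits: the same data is learned once with labels and once without.

```python
# Toy sketch, not a real digit recognizer. Data and names are invented.

# Supervised: each "image" (a single brightness value) comes with a label.
labeled_data = [(0.1, "thin stroke"), (0.2, "thin stroke"),
                (0.8, "thick stroke"), (0.9, "thick stroke")]

def supervised_classify(x, data):
    # Predict the label of the nearest labeled example.
    return min(data, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised: only raw values; a tiny 2-means pass finds groups itself.
def two_means(xs, steps=10):
    lo, hi = min(xs), max(xs)  # initial centroids
    for _ in range(steps):
        a = [x for x in xs if abs(x - lo) <= abs(x - hi)]
        b = [x for x in xs if abs(x - lo) > abs(x - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return lo, hi

print(supervised_classify(0.15, labeled_data))  # -> thin stroke
print(two_means([0.1, 0.2, 0.8, 0.9]))          # centroids near 0.15 and 0.85
```

In both cases the network (or here, the toy procedure) ends up grouping the data the same way; the difference is only whether the grouping is named in advance.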

A hierarchical neural network has different layers at which patterns are recognized and built up to match a complicated shape like a face, or spoken sentences. Last year, Google's DeepMind was fed three days' worth of YouTube video thumbnails, containing 20,000 different objects that humans would recognize. But there were no labels, so this was unsupervised.

Human faces and cats were two of the categories DeepMind learned to successfully recognize. This is of philosophical note, because it confirms that there are objective, mind-independent patterns in the data for a neural network to find. Otherwise, unsupervised learning shouldn't work at all, particularly not at recognizing the same shapes and sounds that human minds do.

    In summary, neural networks are discovering patterns that our visual and auditory systems find, without any programming telling them to do so, in the case of unsupervised learning. This lends support to mind-independent shapes and sounds out there in the world that we evolved to see and hear. And this would be direct, because neural networks have no mental content to act as intermediaries.

    It also runs counter to Kant's claims, at least at the level of raw perception, because if DeepMind can recognize cat patterns in video data, then it can't be the case that cats are merely phenomena. They must be part of the noumena, since the neural networks have not been trained or programmed to recognize space, time, or any categories of thought.

    The cats really are there in Youtube videos.
  • fishfry
    3.4k
    So you're saying that left to themselves, neural networks will spend their time looking at cat videos?
  • sime
    1.1k


All forms of pattern recognition involve a priori representational assumptions. Unsupervised learning is no different. In fact machine learning nicely vindicates neo-Kantian ideas of perceptual judgement, even if not in terms of the same fundamental categories, nor from categories derived from introspective transcendental arguments. (Kant was, after all, targeting philosophical skepticism about the self and the possibility of knowledge of the external world, not the scientific problem of understanding the behavioural aspects of mental functioning, which is a merely empirical affair.)

Yet from this neo-Kantian perspective, consider for example a nearest-neighbour image classifier consisting of nothing more than a disordered collection of images. Without any a priori assumptions it is impossible even to talk about this image collection as containing a pattern, let alone to classify new images with respect to that pattern.

    This was the basic observation of David Hume. Raw observation data by itself cannot justify empirical judgements as claims to knowledge. Kant merely pointed out that raw observations alone cannot even constitute empirical judgements, which as machine learning nicely illustrates, requires innate judgement in the form of synthetic-a priori responses.

    Generalisation from experience requires metrics of similarity for perceptual pattern matching, together with categories of perception for filtering the relevant information to be compared. Neural networks don't change this picture, even if perceptual filters are partially empirically influenced. Decisions still need to be made about the neural architecture, its width, depth, the neural activation responses on each layer, the anticipatory patterns of neurons and so on.

    All of this constitutes "synthetic a priori" processing.
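The architectural decisions listed above are ordinary hyperparameters, all fixed before any training datum arrives. A minimal sketch (the numbers are illustrative, not from any real system):

```python
import random

# Depth, widths, and activation are all designer decisions made prior
# to learning: the "synthetic a priori" of the machine, in sime's terms.
DEPTH, WIDTHS = 2, [4, 3, 2]                 # a priori architecture
relu = lambda v: [max(0.0, x) for x in v]    # a priori response function

random.seed(0)  # untrained weights: learning has not yet begun
weights = [[[random.uniform(-1, 1) for _ in range(WIDTHS[i])]
            for _ in range(WIDTHS[i + 1])] for i in range(DEPTH)]

def forward(x):
    for layer in weights:
        x = relu([sum(w * xi for w, xi in zip(row, x)) for row in layer])
    return x

# The shape of any possible output (here, 2 values) is already fixed
# by the design, before a single image has been seen.
print(forward([1.0, 0.5, -0.5, 2.0]))
```

Training only adjusts the weights; it never revisits these framing choices.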
  • Marchesk
    4.6k
    Precisely, cats all the way down.
  • Marchesk
    4.6k
Decisions still need to be made about the neural architecture, its width, depth, the neural activation responses on each layer, the anticipatory patterns of neurons and so on. — sime

    You have a point there. What if we let the cats train the neural networks?

    Even if this doesn't count as a critique of Kantianism, it does count against skepticism. And it shows how rudimentary perception can work on a direct realist account.
  • apokrisis
    7.3k
    This would still be proof of indirect realism. The network, after all, still has to learn to see/recognise patterns.

    Note also how the learning depends on there being an a-priori hierarchical organisation. So that rather supports Kant's point that notions of spacetime must be embedded to get the game going.

    Hierarchical organisation works by imposing a local~global dichotomy, or symmetry-breaking, on the data. At the hardware level, there is a separation of the transient impressions - a succession of training images - from the developing invariances extracted by the high-level feature-detecting "neurons".

    So space and time are built in by the hierarchical design. The ability to split experience into general timeless/placeless concepts vs specific spatiotemporal instances of those concepts is already a given embedded in the hardware design, not something that the machine itself develops.
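The local~global split being described can be caricatured in a few lines. Below, a succession of "frames" plays the role of transient impressions, and a single high-level check extracts what is invariant across them (a deliberately crude sketch, with invented binary data):

```python
# Transient impressions: a succession of tiny binary "frames".
frames = [[1, 0, 1], [1, 1, 1], [1, 0, 0], [1, 1, 0]]

def invariance(frames):
    # A "high-level neuron" fires (1) for a pixel only if that pixel is
    # identical in every frame - i.e. timeless across the succession.
    n = len(frames)
    return [int(sum(col) in (0, n)) for col in zip(*frames)]

print(invariance(frames))  # -> [1, 0, 0]: only the first pixel is invariant
```

The point of the sketch: the separation of fleeting instance from lasting generality is not something this system learns; it is built into the two-level design before any frame arrives.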

    So a neural network can certainly help make Kant more precise. We can see that judgements of space and time have a deeper root - the hierarchical organisation that then "processes" the world, the thing-in-itself, in a particular natural fashion.

    The key is the way hierarchies enforce local~global symmetry-breakings on "data". And what emerges as a hierarchical modelling system interacts with the messy confusion of a "real world" is a separation in which both the global conceptions, and the local instances, become "clear" in tandem. They co-arise.

    Perception involves being able to see "that cat there", a judgement grounded in the matching development of a generalised capacity for categorising the world in terms of the long-run concept of "a cat".

    So the particular experience of a cat is as much indirect as the generalised notion of a cat-like or feline thing. Each is grounded in the other as part of a hierarchy-enforced symmetry-breaking. Neither arises in any direct fashion from the world itself. Even the crisp impression of some actual cat is a mental construction, as it is the hierarchy which both abstracts the timeless knowledge of cat-like and also locates some particular instance of that categorised experience to a place and a time (and a colour, and aesthetic response, etc, etc).

The success of machine learning should at least give idealists and dualists pause for thought. Even a little bit of hierarchical "brain organisation" manages to do some pretty "mind-like" stuff.

    But the psychological and neurological realism found in DeepMind supports the indirect realist. After all, the patterns of activity inside the machine look nothing like the cat pictures.

    It is also worth noting that DeepMind extracts generality from the particulars being dumped on it. The training images may be random and thus "unsupervised". But in fact a choice of "what to see" is already embedded by the fact some human decided to point a camera and post the result to YouTube. The data already carries that implicit perceptual structure.

    So what would be more impressive is next step hardware that can also imagine particular cats based on its accumulated knowledge of felines. That would then be a fully two-way system that can start having sensory hallucinations, or dreams, just like me and you. The indirectness of its representational relationship with the real world would then be far more explicit, less hidden in the choice of training data, as well as the hierarchical design of its hardware.
  • fdrake
    6.6k
    So what would be more impressive is next step hardware that can also imagine particular cats based on its accumulated knowledge of felines. That would then be a fully two-way system that can start having sensory hallucinations, or dreams, just like me and you. The indirectness of its representational relationship with the real world would then be far more explicit, less hidden in the choice of training data, as well as the hierarchical design of its hardware. — @apokrisis

    I'm just going to leave the Painting Fool here. Neural nets can already learn ontologies (operational stratifications of concepts). It's probably being worked on somewhere in a lab at the minute.
  • apokrisis
    7.3k
    Yep. I wasn't going to mention it again for the umpteenth time but the architectural approaches that particularly impress me have been Grossberg's Adaptive Resonance Theory and Friston's Bayesian Brain.

    These are anticipation-based approaches to cognition and awareness. So imagination is not a tacked on extra. It is the basis on which anything happens.

    Realism is truly indirect as the brain is a hierarchical system attempting to predict its input. And the better practised it gets at that, the more it can afford to ignore "the real world".
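The anticipation-based picture attributed here to Friston can be sketched as a toy loop (the constants are made up, and this is a cartoon of predictive processing, not Friston's actual model): the system keeps a running prediction of its input and attends only to the prediction error.

```python
def predictive_agent(signal, rate=0.5):
    """Track a signal by prediction; return the surprise at each step."""
    prediction, surprises = 0.0, []
    for s in signal:
        error = s - prediction        # what the model failed to anticipate
        surprises.append(abs(error))
        prediction += rate * error    # nudge the model toward the input
    return surprises

# A steady world becomes ignorable: surprise decays toward zero,
# so the better practised the agent, the less the input matters.
print(predictive_agent([1.0] * 6))
```

After a few steps the unchanging input generates almost no error, which is one way of cashing out "affording to ignore the real world".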
  • Marchesk
    4.6k
Realism is truly indirect as the brain is a hierarchical system attempting to predict its input. And the better practised it gets at that, the more it can afford to ignore "the real world". — apokrisis

Direct realism means awareness of mind-independent objects instead of some mental intermediary. For the hierarchical system to be indirect, our perceptual awareness would be of the hierarchy instead of the object that's being detected using the hierarchy.
  • apokrisis
    7.3k
Direct realism means awareness of mind-independent objects instead of some mental intermediary. — Marchesk

    We've been over this before. Let's just say that your version of direct realism wants to skip over the reality of the psychological processes involved, although I would also myself want to be more direct than any representationalist. So the terms can fast lose any general distinctions as we strive for our personal nuances.

    But by the same token, your claims to prove direct realism and deny Kantian psychology are weakened to the degree you don't have a hard definition of what direct vs indirect is about here.

    For the record, here is Wiki trying to firm up the definitions....

    In philosophy of mind, naïve realism, also known as direct realism or common sense realism, is the idea that the senses provide us with direct awareness of objects as they really are. Objects obey the laws of physics and retain all their properties whether or not there is anyone to observe them.[1] They are composed of matter, occupy space and have properties, such as size, shape, texture, smell, taste and colour, that are usually perceived correctly.

In contrast, some forms of idealism claim that no world exists apart from mind-dependent ideas, and some forms of skepticism say we cannot trust our senses. Naïve realism is known as direct as against indirect or representative realism when its arguments are developed to counter the latter position, also known as epistemological dualism;[2] that our conscious experience is not of the real world but of an internal representation of the world.

    So for example, do you think the world is really coloured? That red and green are properties of the world rather than properties of the mind?

    I prefer to argue the more complex story myself - that colours are properties of a pragmatic or functional mind~world relation. So this is an enactive, ecological or embodied approach. And it has its surprising consequences. Like the point I made about the intention of the brain being to maximise its ability to "ignore the world". We see the physics of the thing-in-itself in terms of reds and greens because such an "untrue" representation is the most useful or efficient one.

    Direct realism presumes the brain just grasps reality without effort because reality is "lying right there". The thing-in-itself forces itself upon our awareness due to its recalcitrant nature.

    Representationalism then points out that there is still considerable effort needed by the brain. It has all those computational circuits for a reason. But still, representationalism shares the underlying belief that the goal of re-presenting reality must be veridical. The mental intermediary in question is not the weaving of a solipsistic fiction, but a presentation of an actual world.

    Then as I say, I take the position in-between these two extremes of "indirect realism" (Ie: the realism that at least agrees there is a mediating psychological process). And that embodied realism is what Grossberg and Friston in particular have been at the forefront of modelling. They get it, and the philosophical consequences.

    So I hope that clears the air and we can get back to the interesting point that really caught my attention.

The success of neural network architectures at doing mind-like things is down to their hierarchical organisation. So in trying to define the simplest self-teaching or pattern recognising machine, computer science has found it is in fact having to hardwire in some pretty Kantian conditions.

It may not seem obvious, but hierarchical structure - a local~global dichotomy - already wires in spatiotemporal "co-ordinates". It sets up a distinction between local transient impressions vs global lasting concepts from the get-go. Like a newborn babe, that is the order the machine is already seeking to discover in the "blooming, buzzing confusion" of its assault from a random jumble of YouTube images.

For the hierarchical system to be indirect, our perceptual awareness would be of the hierarchy instead of the object that's being detected using the hierarchy. — Marchesk

    Maybe you understand the Kantian position far less well than I was assuming?

    A cognitive structure is required to do any perceiving. That is the (apparently chicken and egg) paradox that leads to idealism, social constructionism, and all those other dread philosophical positions. :)

    So no. My point is that in seeking the minimal or most fundamental a-priori structure, a neurological hierarchy is already enough. It already encodes the crucial spatiotemporal distinction in the most general possible way.

    The fact that it is so damned hard for folk to see their own hierarchical perceptual organisation shouldn't be surprising.

    Thought places space, objects, and even time (or change) as being "out there" in the world. But why would we place our own conceptual machinery in the world as the further objects of our contemplation - except when doing science and metaphysics?

    It is the same as with "qualia". We just see red. We don't see ourselves seeing red as a further perceptual act. At best, we can only reveal the hidden fact of the psychological processing involved.

As in when asking people if they can imagine a greeny-red, or a yellowy-blue. A yellowy-red or a greeny-blue is of course no problem. We are talking orange and turquoise. But the peculiarities of human opponent colour processing pathways mean that some mixtures are structurally impossible.

    Likewise, blackish-blue, blackish-green and blackish-red are all standard mixtures (for us neurotypicals), yet never blackish-yellow (ie: what we see as just "pure brown").

    So the hidden structure of our own neural processing can always be revealed ... indirectly. We can impute its existence by the absence of certain experience. There are strategies available.

    But I'm baffled that you should reply as if we ought to "see the hierarchy" as if it were another perceptual object. We would only be able to notice its signature imprinted on every act of perception.

    And that is what we do already in making very familiar psychological distinctions - like the difference between ideas and impressions, or generals and particulars, or memory and awareness, or habit and attention, or event and context.

    The structure of the mind is hierarchical - always divided between the local and global scales of a response. The division then becomes the source of the unity. To perceive, ideas and impressions must become united in the one act.

    So the structure is certainly there if you know how to look. Psychological science targets that mediating structure. And computer science in turn wants to apply that knowledge to build machines with minds.

    That is why the right question is what kind of mediating structure, or indirect realism, does computer science now seem to endorse.

    I see you want to make the argument that remarkably little a-priori structure seems needed by a neural net approach like DeepMind. Therefore - with so little separating the machine from its world - a mindful relation is far more direct than many might have been making out.

    A fair point.

    But then my response is that even if the remaining a-priori structure seems terribly simple - just the starter of a set of hierarchically arranged connections - that is already a hell of a lot. Already an absolute mediating structure is in place that is about to impose itself axiomatically on all consequent development of the processing circuitry.
  • creativesoul
    12k
The wiki article offers a version of direct realism that is indistinguishable from naive realism...

    :-O

    That's a pretty sad basis to start critiquing a robust/nuanced position involving direct perception.
  • apokrisis
    7.3k
The wiki article offers a version of direct realism that is indistinct from naive realism... — creativesoul

    Yep. And that is the point. The OP certainly comes off as an exercise in naive realism. You can't both talk about a mediating psychological machinery and then claim that is literally "direct".

    If Marchesk intends direct realism to mean anti-representationalism, then that is something else in my book. I'm also strongly anti-representational in advocating an ecological or embodied approach to cognition.

But I'm also anti-direct realism to the degree that this is an assumption that "nothing meaningful gets in the way of seeing the world as it actually is". My argument is that the modelling relation the mind wants with the world couldn't even have that as its goal. The mind is all about finding a way to see the self in the world.

    What we want to see is the world with ourselves right there in it. And that depends on a foundational level indirectness (cut and paste here my usual mention of Pattee's epistemic cut and the machinery of semiosis).

    So this is a philosophical point with high stakes, not a trivial one - especially if we might want to draw strong conclusions from experiments in machine pattern recognition as the OP hopes to do.

    There just cannot be a direct experience of the real world ... because we don't even have a direct connection to our real selves. Our experience of experience is mediated by learnt psychological structure.

The brain models the world. And that modelling in large part involves the creation of the self that can stand apart from the world so as to be acting in that world.

    To chew the food in our mouth, we must already have the idea that our tongue is not part of the mixture we want to be eating. That feat is only possible because of an exquisite neural machinery employing forward modelling.

    If "I" know as a sensory projection how my tongue is meant to feel in the next split second due to the motor commands "I" just gave, then my tongue can drop right out of the picture. It can get cancelled away, leaving just the experience of the food being chewed.

    So my tongue becomes invisible by becoming the part of the world that is "really me" and "acting exactly how I intended". The world is reduced to a collection of objects - perceptual affordances - by "myself" becoming its encompassing context.

The most direct experience of the world is the "flow state" where everything I want to happen just happens exactly as I want it. It was always like that on the tennis court. ;) The backhand would thread down the line as if I owned its world. Or if in fact it was going to miss by an inch, already I could feel that unpleasant fact in my arm and racquet strings.

    Which is another way to stress that the most "direct" seeming experience - to the level of flow - is as mediated as psychological machinery gets. It takes damn years of training to get close to imposing your will on the flight of a ball. You and the ball only become one to the degree you have developed a tennis-capable self which can experience even the ball's flight and landing quite viscerally.

    So direct realism, or even weak indirect realism, is doubly off the mark. The indirectness is both about the ability of the self to ignore the world (thus demonstrating its mastery over the world) and also the very creation of this self as the central fact of this world. Self and world are two sides of the same coin - the same psychological process that is mediating a modelling relation.
  • creativesoul
    12k
Yep. And that is the point. The OP certainly comes off as an exercise in naive realism. You can't both talk about a mediating psychological machinery and then claim that is literally "direct". — apokrisis

    Perhaps not, but one could easily provide solid justificatory ground for talking about how mediating psychological machinery is existentially contingent upon direct perception.


There just cannot be a direct experience of the real world ... because we don't even have a direct connection to our real selves. Our experience of experience is mediated by learnt psychological structure. — apokrisis

    Nonsense. Physiological sensory perception is as direct a connection as one could reasonably hope for.
  • creativesoul
    12k
Perception involves being able to see "that cat there", a judgement grounded in the matching development of a generalised capacity for categorising the world in terms of the long-run concept of "a cat". — apokrisis

    Physiological sensory perception is prior to language on my view.
  • apokrisis
    7.3k
    If you want to argue your case with actual perceptual examples then go right ahead.

    What is direct about motion detection or hue perception I wonder? Are you going to begin by talking about the transduction of “sensory messages” at the level of receptors?
  • creativesoul
    12k
Physiological sensory perception is prior to language on my view. — creativesoul

    Do you agree?
  • apokrisis
    7.3k
Physiological sensory perception is prior to language on my view. — creativesoul

    Well of course. Animals have minds and selfhood. Our models of perception have been built from experiments on cats and monkeys mostly.
  • creativesoul
    12k
Perception involves being able to see "that cat there", a judgement grounded in the matching development of a generalised capacity for categorising the world in terms of the long-run concept of "a cat". — apokrisis

    Physiological sensory perception is prior to language on my view.
    — creativesoul

    Do you agree?
— creativesoul

Well of course. Animals have minds and selfhood. Our models of perception have been built from experiments on cats and monkeys mostly. — apokrisis

    What then did you mean by "perception" in the first quote above?
  • creativesoul
    12k
Our models of perception have been built from experiments on cats and monkeys mostly. — apokrisis

    Yeah, I mean because that's what we have direct access to as compared/contrasted/opposed to our own selves...

    What on earth are you talking about?
  • Marchesk
    4.6k
Yep. And that is the point. The OP certainly comes off as an exercise in naive realism. You can't both talk about a mediating psychological machinery and then claim that is literally "direct". — apokrisis

I meant direct in the philosophical sense, where direct realists argue that perception is a matter of being directly aware of mind-independent objects out there in the world, and not some mentally constructed idea in the head.

    That there is neurological/cognitive machinery for perceiving objects directly is understood. That machinery is only a problem if it generates a mediating idea.

    The direct realist debates always go off the rails on these points. That's why we get arguments about how objects aren't in the head, or light takes time to travel, and therefore direct realism can't be the case.
  • creativesoul
    12k
I meant direct in the philosophical sense, where direct realists argue that perception is a matter of being directly aware of mind-independent objects out there in the world, and not some mentally constructed idea in the head. — Marchesk

    I'm a realist who argues in favor of direct physiological sensory perception. I'm not sure if I'd say/argue that direct perception requires awareness of that which is being perceived. Awareness requires an attention span of some sort. Bacteria directly perceive. I find no justification for saying that bacteria are aware of anything at all...
  • creativesoul
    12k
That there is neurological/cognitive machinery for perceiving objects directly is understood. That machinery is only a problem if it generates a mediating idea. — Marchesk

    Only if it generates an idea that mediates the physiological sensory perception itself...

    Doing that first requires becoming aware of such a thing. Language is required for becoming aware of one's own physiological sensory perception. Language is not required for being born with neurological/cognitive machinery(physiological sensory perception).

    Thus, drawing correlations, associations, and/or connections between 'objects' of physiological sensory perception can result in a mediating idea and still pose no problem whatsoever for a direct realist like myself. The attribution and/or recognition of causality is one such correlation/association/connection.

    One can learn about what happens when one touches fire without ever having generated an idea that mediates one's own physiological sensory perception. One cannot learn what happens when one touches fire without attributing/recognizing causality.
  • apokrisis
    7.3k
What on earth are you talking about? — creativesoul

    As usual I have no clue what you are on about. Did you think I would argue that sensory level, and then linguistic level knowledge of the world is indirect, but that scientific knowledge is direct?

    All knowledge would be indirect in the semiotic sense I’ve described.
  • creativesoul
    12k
Perception involves being able to see "that cat there", a judgement grounded in the matching development of a generalised capacity for categorising the world in terms of the long-run concept of "a cat". — apokrisis

    Physiological sensory perception is prior to language on my view.
    — creativesoul

    Do you agree?
— creativesoul

Well of course. Animals have minds and selfhood. Our models of perception have been built from experiments on cats and monkeys mostly. — apokrisis

    What then did you mean by "perception" in the first quote above?
  • apokrisis
    7.3k
    A pigeon can make the same perceptual discrimination. Human perception is of course linguistically scaffolded and so that takes it to a higher semiotic level.
  • apokrisis
    7.3k
    Remind me. How does your particular definition of direct realism account for hallucinations and illusions? How do I see a mental image? In what way is a wavelength really green? So on and so forth....
  • Marchesk
    4.6k
In what way is a wavelength really green? — apokrisis

    It's not. I would favor a direct scientific realist account of perception. But in any case, one could argue that smell, sound, color are how we experience the world directly.

How does your particular definition of direct realism account for hallucinations and illusions? — apokrisis

    It's not my definition, and I don't know whether direct realism is true. But it occurred to me that if neural networks are a crude approximation for how our perception works, then they do favor realism about the patterns being detected.

I don't know whether any neural network can be said to have illusions or hallucinations. Possibly illusions. Sometimes there are notable failures where it recognizes the wrong pattern, despite otherwise having a high degree of success.
  • apokrisis
    7.3k
It's not. I would favor a direct scientific realist account of perception. But in any case, one could argue that smell, sound, color are how we experience the world directly. — Marchesk

    How could we argue that the world is coloured as we “directly experience” it when science assures us it is not?

    Sure, phenomenally, our impressions of red seem direct. We just look and note the rose is red. That’s the end of it. But science tells us that it isn’t in fact the case. There is mediation that your philosophical position has to incorporate to avoid the charge of naive realism.

But it occurred to me that if neural networks are a crude approximation for how our perception works, then they do favor realism about the patterns being detected. — Marchesk

    But your argument seemed to be that unsupervised neural net learning is evidence for just how unmediated perception would be. So that might be an argument for a high degree of directness in that sense, yet it remains also an acceptance of indirectness as the basic condition.

    If awareness is mediated by a psychological process, then by definition, it ain’t literally direct.
  • creativesoul
    12k
A pigeon can make the same perceptual discrimination. Human perception is of course linguistically scaffolded and so that takes it to a higher semiotic level. — apokrisis

    Pigeon perception is not linguistically scaffolded. They have no concept of "cat".

    You need to sort out the incoherence and/or equivocation in your usage of the term "perception".
  • apokrisis
    7.3k
    Something to do with thought/belief I take it? LOL.
  • Harry Hindu
    5.1k
    Still using those "direct" and "indirect" terms - as if they really mean anything, Apo?

How could we argue that the world is coloured as we "directly experience" it when science assures us it is not? — apokrisis
    Your experience is part of the world, no?

Welcome to The Philosophy Forum!