Direct realism means awareness of mind-independent objects instead of some mental intermediary. — Marchesk
We've been over this before. Let's just say that your version of direct realism wants to skip over the reality of the psychological processes involved, although I would also myself want to be more direct than any representationalist. So the terms can quickly lose any general distinction as we strive for our personal nuances.
But by the same token, your claims to prove direct realism and deny Kantian psychology are weakened to the degree you don't have a hard definition of what direct vs indirect is about here.
For the record, here is Wiki trying to firm up the definitions....
In philosophy of mind, naïve realism, also known as direct realism or common sense realism, is the idea that the senses provide us with direct awareness of objects as they really are. Objects obey the laws of physics and retain all their properties whether or not there is anyone to observe them.[1] They are composed of matter, occupy space and have properties, such as size, shape, texture, smell, taste and colour, that are usually perceived correctly.
In contrast, some forms of idealism claim that no world exists apart from mind-dependent ideas, and some forms of skepticism say we cannot trust our senses. Naïve realism is known as direct as against indirect or representative realism when its arguments are developed to counter the latter position, also known as epistemological dualism;[2] that our conscious experience is not of the real world but of an internal representation of the world.
So for example, do you think the world is really coloured? That red and green are properties of the world rather than properties of the mind?
I prefer to argue the more complex story myself - that colours are properties of a pragmatic or functional mind~world relation. So this is an enactive, ecological or embodied approach. And it has its surprising consequences. Like the point I made about the intention of the brain being to maximise its ability to "ignore the world". We see the physics of the thing-in-itself in terms of reds and greens because such an "untrue" representation is the most useful or efficient one.
Direct realism presumes the brain just grasps reality without effort because reality is "lying right there". The thing-in-itself forces itself upon our awareness due to its recalcitrant nature.
Representationalism then points out that there is still considerable effort needed by the brain. It has all those computational circuits for a reason. But representationalism still shares the underlying belief that the re-presentation of reality must be veridical. The mental intermediary in question is not the weaving of a solipsistic fiction, but a presentation of an actual world.
Then, as I say, I take the position in-between these two extremes of "indirect realism" (i.e. the realism that at least agrees there is a mediating psychological process). And that embodied realism is what Grossberg and Friston in particular have been at the forefront of modelling. They get it, and the philosophical consequences.
So I hope that clears the air and we can get back to the interesting point that really caught my attention.
The success of neural network architectures at doing mind-like things is down to their hierarchical organisation. So in trying to define the simplest self-teaching or pattern-recognising machine, computer science has found that it is in fact having to hardwire in some pretty Kantian conditions.
It may not seem obvious, but hierarchical structure - a local~global dichotomy - already wires in spatiotemporal "co-ordinates". It sets up a distinction between local transient impressions vs global lasting concepts from the get-go. Like a newborn babe, that is the order the machine is already seeking to discover in the "blooming, buzzing confusion" of its assault by a random jumble of YouTube images.
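To make the local~global point concrete, here is a toy sketch (my own illustration, not any particular architecture such as DeepMind's): a stack of pooling stages in which each level summarises ever-larger neighbourhoods of its input. The split between local transient detail at the bottom and global lasting context at the top is wired in by the hierarchy itself, before any learning happens.

```python
import numpy as np

# Toy illustration of a processing hierarchy. Each level averages
# neighbouring pairs, so resolution halves and "receptive field" size
# doubles at every step: local detail lives at the bottom, global
# context at the top -- a local~global dichotomy built in a priori.
def build_hierarchy(signal, levels=3):
    """Return a list of representations, from most local to most global."""
    reps = [signal]
    for _ in range(levels):
        s = reps[-1]
        # pair-wise averaging: each unit pools two units from below
        s = (s[0::2] + s[1::2]) / 2
        reps.append(s)
    return reps

x = np.arange(8, dtype=float)  # a "raw impression" of 8 samples
for level, rep in enumerate(build_hierarchy(x)):
    print(level, rep)
```

Running this on the ramp `[0..7]` shows the top level reduced to a single global summary (`3.5`) while the bottom retains every local sample; the same connectivity pattern, before any weights are learned, already enforces the impressions-vs-concepts distinction.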
For the hierarchal system to be indirect, our perceptual awareness would be of the hierarchy instead of the object that's being detected using the hierarchy. — Marchesk
Maybe you understand the Kantian position far less well than I was assuming?
A cognitive structure is required to do any perceiving. That is the (apparently chicken and egg) paradox that leads to idealism, social constructionism, and all those other dread philosophical positions.
:)
So no. My point is that in seeking the minimal or most fundamental a priori structure, a neurological hierarchy is already enough. It already encodes the crucial spatiotemporal distinction in the most general possible way.
The fact that it is so damned hard for folk to see their own hierarchical perceptual organisation shouldn't be surprising.
Thought places space, objects, and even time (or change) as being "out there" in the world. But why would we place our own conceptual machinery in the world as the further objects of our contemplation - except when doing science and metaphysics?
It is the same as with "qualia". We just see red. We don't see ourselves seeing red as a further perceptual act. At best, we can only reveal the hidden fact of the psychological processing involved.
As in when asking people if they can imagine a greeny-red, or a yellowy-blue. A yellowy-red or a greeny-blue is of course no problem. We are talking orange and turquoise. But the peculiarities of human opponent colour-processing pathways mean some mixtures are structurally impossible.
Likewise, blackish-blue, blackish-green and blackish-red are all standard mixtures (for us neurotypicals), yet never blackish-yellow (i.e. what we see as just "pure brown").
So the hidden structure of our own neural processing can always be revealed ... indirectly. We can impute its existence by the absence of certain experience. There are strategies available.
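The opponent-channel point can be sketched in a few lines. This is a simplified linear model in the spirit of Hering-style opponency, not the actual retinal or LGN physiology; the function name and the particular coefficients are my own illustrative assumptions.

```python
# Simplified Hering-style opponent-channel model (a linear sketch,
# not the real physiology; coefficients are illustrative).
def opponent_channels(r, g, b):
    """Map cone-like RGB input onto three opponent axes."""
    luminance = (r + g + b) / 3       # white~black axis
    red_green = r - g                 # positive = reddish, negative = greenish
    blue_yellow = b - (r + g) / 2     # positive = bluish, negative = yellowish
    return luminance, red_green, blue_yellow

# Orange (a reddish-yellow) is expressible: red_green > 0, blue_yellow < 0.
print(opponent_channels(1.0, 0.5, 0.0))
```

The structural impossibility then falls out of the encoding: a "greeny-red" would need the `red_green` channel to be simultaneously positive and negative, which no input values can produce. The hidden processing reveals itself exactly as an absence of certain experiences.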
But I'm baffled that you should reply as if we ought to "see the hierarchy" as if it were another perceptual object. We would only be able to notice its signature imprinted on every act of perception.
And that is what we do already in making very familiar psychological distinctions - like the difference between ideas and impressions, or generals and particulars, or memory and awareness, or habit and attention, or event and context.
The structure of the mind is hierarchical - always divided between the local and global scales of a response. The division then becomes the source of the unity. To perceive, ideas and impressions must become united in the one act.
So the structure is certainly there if you know how to look. Psychological science targets that mediating structure. And computer science in turn wants to apply that knowledge to build machines with minds.
That is why the right question is what kind of mediating structure, or indirect realism, computer science now seems to endorse.
I see you want to make the argument that remarkably little a priori structure seems needed by a neural net approach like DeepMind's. Therefore - with so little separating the machine from its world - a mindful relation is far more direct than many might have been making out.
A fair point.
But then my response is that even if the remaining a priori structure seems terribly simple - just a starter set of hierarchically arranged connections - that is already a hell of a lot. An absolute mediating structure is already in place, about to impose itself axiomatically on all consequent development of the processing circuitry.