Do you say that the mind is analogous to software? If so, that would paint a rather inert picture of the mind. In this context I would rather say that software is the instructions for the brain. One problem is, how do we write those? — Querius
No, I mean the mind IS software. According to known physics, it can't be anything else. Consciousness is a software feature, and the software programs itself. — tom
So, our intentions, deliberations and thoughts are direct instructions for neurons. Neurons listen in and understand our mental stuff directly and know what to do? No problemo? — Querius
Well, in order to function, hardware does require translation of high-level programming languages, so this analogy seems inapt.
"The programmer need not concern herself with the way in which the hardware enables her program to run." — Pierre-Normand
Because a compiler — translator — bridges the gap. Right?
If so, how does downward causation work? How do we get from the intention to raise one’s arm to neurons which act in accord with that intention? — Querius
Our thoughts are not instructions for neurons at all. The intentional contents of our beliefs and intentions aren't directed at neurons. — Pierre-Normand
Excusez moi? In order to be functional, to act how and when they need to act, transistors in computers do need software instructions. — Querius
They're typically directed at objects and states of affairs in the world. Our neurons need not be told what to do any more than transistors in computers need be told by the software what to do. — Pierre-Normand
You forget about the role of software information, which is part of the global structure. — Querius
The installed software is a global structural property of the suitably programmed computer. What it is that the transistors are performing -- qua logical operations -- is a function of the context within which they operate (i.e. how they're connected with one another and with the memory banks and input devices). Their merely physical behavior is governed only by the local conditions and the laws of physics, regardless of the global structure of the computer. — Pierre-Normand
You are mistaken. No computer can run programming language/source code directly; translation to machine code is always necessary, unless, of course, you start with machine code. However, our deliberations, thoughts and intentions are anything but ‘machine code’. Behold the gap. — Querius
The hardware need only be suitably structured in order to deal adequately with the software instructions; it need not have instructions translated to it. If the high-level code needs to be compiled or interpreted before it is run, it's only because the hardware is general purpose and its native instruction set isn't able to run the code directly. — Pierre-Normand
Again, you are mistaken; it is exactly that. — Querius
The task of the compiler (or interpreter), though, isn't to translate high-level instructions into a language that it understands. — Pierre-Normand
Such a level of understanding is not at issue here. What transistors need are clear instructions. Obviously they don’t need to 'understand' anything else, let alone their position in the scheme of things. — Querius
The neurons need not understand what their individual role is in the underlying causal chain any more than transistors in a computer need understand anything about their electrical "inputs". — Pierre-Normand
The translation problem — from deliberations and intentions to instructions for neurons — persists. — Querius
To be clear, I am not saying that the hardware/software analogy furnishes a good or unproblematic model for the body/mind relationship. The purpose of the analogy is quite limited. It is intended to convey how top-down causation can be understood to operate unproblematically ... — Pierre-Normand
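To make the disputed compiled-versus-interpreted point concrete, here is a minimal sketch in Python (the toy machine, its instruction names, and the compile_expr helper are illustrative assumptions, not anything either poster proposed). The stack machine runs only its native instructions directly; the high-level expression must first be translated down to them, which is Querius's point, while the machine itself needs no further translation once it is suitably structured, which is Pierre-Normand's.

    # A toy stack machine: its "native instruction set" is all the hardware
    # can run directly (hypothetical names, for illustration only).
    def run(program):
        stack = []
        for op, *args in program:
            if op == "PUSH":
                stack.append(args[0])
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
        return stack.pop()

    # A tiny "compiler": translates a high-level nested expression such as
    # ("+", 2, ("*", 3, 4)) into the machine's native instructions.
    def compile_expr(expr):
        if not isinstance(expr, tuple):
            return [("PUSH", expr)]
        op, lhs, rhs = expr
        return compile_expr(lhs) + compile_expr(rhs) + [({"+": "ADD", "*": "MUL"}[op],)]

    high_level = ("+", 2, ("*", 3, 4))       # the "source code"
    machine_code = compile_expr(high_level)  # the translation step at issue
    print(run(machine_code))                 # -> 14; only this form runs directly

An interpreter differs only in doing the same translation on the fly, one instruction at a time, rather than all at once in advance.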
If so, how does downward causation work? How do we get from the intention to raise one’s arm to neurons which act in accord with that intention? — Querius
The neurons don't need to act in accord with the intention since the intention isn't directed at the neurons. — Pierre-Normand
According to you, neurons don't need to act in accord with the intention to raise one's arm ....
Unless you are willing to retract this claim, our discussion ends here. — Querius
To be clear, I am not saying that the hardware/software analogy furnishes a good or unproblematic model for the body/mind relationship. The purpose of the analogy is quite limited. It is intended to convey how top-down causation can be understood to operate unproblematically, in both cases, without any threat of causal overdetermination or violation of the causal closure of the lower level domain. — Pierre-Normand
The tendency to make this conflation is a core target in Bennett and Hacker's Philosophical Foundations of Neuroscience. But if you don't like having your preconceptions challenged, suit yourself. — Pierre-Normand
The abstract mind, instantiated on the computationally universal brain, decides to move an arm. It does not know the mechanism of how this is performed, because it does not need to. The mechanism involves layers of sub-conscious neuronal control systems, which eventually result in the appropriate nerve signals to the appropriate muscles. — tom
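Read purely as a layered-control claim, tom's picture can be caricatured in a few lines of Python (every name below is hypothetical): the top level issues only an abstract goal and contains no muscle-level detail at all.

    # Toy rendering of the layered picture: intention at the top, nerve
    # signals at the bottom, mechanism hidden from the intending level.
    MOTOR_PLANS = {"raise arm": ["deltoid", "biceps"]}  # sub-conscious lookup

    def intend(goal):
        # "decides to move an arm": no knowledge of how it is performed
        return [nerve_signal(m) for m in MOTOR_PLANS[goal]]

    def nerve_signal(muscle):
        # lowest layer: the appropriate signal to the appropriate muscle
        return "signal -> " + muscle

    print(intend("raise arm"))  # ['signal -> deltoid', 'signal -> biceps']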
I view consciousness as indivisible — Querius
I don't mind having my preconceptions challenged, if you don't mind elaborating? — tom
That is the well-known philosophical conundrum of the 'subjective unity of experience'. There is a vast literature on that, but it remains mysterious. — Wayfarer
The problem with 'mind as software' is that it surely is an analogy. It isn't literally the case, because software is code that is executed on electro-mechanical systems, in response to algorithms input by programmers. The mind may be 'like' software, but it is not actually software, as has been argued by numerous critics of artificial intelligence. — Wayfarer
The purpose of the digital computer analogy was also to show that, in this case too, individual transistors, or logic gates, or even collections of them, need not have the high-level software instructions "translated" to them: the implementation of the high-level software specification is a matter of the whole computer being structured in such a way that its molar behavior (i.e. the input/output mapping) simply accords with that specification. — Pierre-Normand
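Pierre-Normand's "global structural property" point admits the same sort of toy rendering (Python again, names again hypothetical): each gate below computes only from its local inputs and never receives an instruction mentioning "XOR", yet the wiring of the whole makes the circuit's input/output mapping accord with the XOR specification.

    # Each gate's behavior is fixed by its local inputs alone (local physics);
    # nothing is ever "translated to" an individual gate.
    def nand(a, b):
        return 0 if (a and b) else 1

    # The global structure, i.e. how the gates are wired together, is what
    # makes the whole circuit satisfy the high-level spec "output = a XOR b".
    def circuit(a, b):
        m = nand(a, b)
        return nand(nand(a, m), nand(b, m))

    for a in (0, 1):
        for b in (0, 1):
            assert circuit(a, b) == (a ^ b)  # molar behavior accords with the spec

What each gate "is doing", qua logical operation, is settled by its place in the wiring, not by any message sent to it.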
Computers are not "of this world", so they can be used as devices to freely imagine worlds.
Brains are devices constrained by a world. But in making that relationship structurally complex, brains gain the functional degrees of freedom that we call autonomy and subjective cohesion. (The freedom to actually ignore the world being a central one, as I argued.) — apokrisis
the mind does have its strong central division into habit and attention. Everything that can be dealt with without clear conscious knowledge gets sorted out in 150 to 300 milliseconds by "automatic" habit. Then anything left over becomes a focus of "conscious" attentional processing - which takes 300 to 700 milliseconds to form and stabilise. With attention we are now talking about reportable awareness as - having managed to remove so much unnecessary sensory detail from the picture - we have a small enough "point of view" to retain as a persisting state of working memory. — apokrisis
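Taken as an information-processing claim, the two-track picture could be sketched like this (the timing figures come from the post above; the handler names and the habit dictionary are assumptions, not any actual model):

    # Caricature of the two tracks: fast habit absorbs what it can, and only
    # the leftovers escalate to slow, reportable attentional processing.
    HABIT_MS = (150, 300)
    ATTENTION_MS = (300, 700)

    def process(stimulus, habits):
        if stimulus in habits:                      # sorted out automatically
            return habits[stimulus], HABIT_MS
        return "attend: " + stimulus, ATTENTION_MS  # held in working memory

    habits = {"red light": "brake", "doorknob": "grasp"}
    print(process("red light", habits))    # ('brake', (150, 300))
    print(process("novel noise", habits))  # ('attend: novel noise', (300, 700))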
This has some relationship with the famous Libet experiments, doesn't it? They showed that the body moves before the subject is aware that they want to move it. — Wayfarer
So part of the habit-level planning for a routine action is the general broadcast of an anticipatory motor image. As part of the unity of experience, the sensory half of our brain has to be told that our hand is suddenly going to move in a split second or so. And the reason for that is so "we" can discount that movement as something "we" intended. We ignore the sensation of the moving hand in advance - and so then we can tell if instead the world caused our hand to move. A fact far more alarming and deserving of our attention. — apokrisis
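The discounting mechanism described here is close to what control theory calls a forward model with an efference copy; a minimal sketch of just the discounting logic (all names and the tolerance value are hypothetical):

    # Broadcast a prediction of the sensation our own intended movement will
    # cause, then attend only to the unpredicted remainder.
    def needs_attention(predicted_motion, actual_motion, tolerance=0.1):
        surprise = abs(actual_motion - predicted_motion)
        return surprise > tolerance  # world-caused movement is the alarming case

    # Hand moves because "we" intended it: prediction matches, so it is ignored.
    print(needs_attention(predicted_motion=1.0, actual_motion=1.05))  # False

    # Hand moves with no anticipatory motor image: the mismatch grabs attention.
    print(needs_attention(predicted_motion=0.0, actual_motion=1.0))   # True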
So Libet was a Catholic and closet dualist. — apokrisis
But again this is reductionist to the extent that you're treating the subject - namely the human - in a biologistic way — Wayfarer
As far as free will (or won't) is concerned, the point from the perspective of a humanistic philosophy is not understanding the determinative causes of human actions from an abstract or theoretical point of view, but what freedom of action means. — Wayfarer
Isn't that 'the genetic fallacy'? Anyway, I'm Buddhist and an outed dualist. — Wayfarer
But again this is reductionist to the extent that you're treating the subject - namely the human - in a biologistic way - explaining human nature in terms of systems, reactions, models, and so on. It's adequate on one level of description, but not on others. — Wayfarer
All modelling is reductionist ... even if it is a reduction to a four-causes holistic naturalism. And as I say, even the brain is a reductionist modeller, focused on eliminating the unnecessary detail from its "unified" view of the world. The brain operates on the same principle of less is more. — apokrisis
To explain human behaviour, you then have to turn to the new level of semiosis which is linguistic and culturally evolving. So you can't look directly to biology for the constraints that make us "human" - the social ideas and purposes that shape individual psychologies. You do have to shift to an anthropological level of analysis to tell that story. — apokrisis
I don't understand the bad reputation which reductionism has received. If it's the way toward a good clear understanding, then where's the problem? — Metaphysician Undercover
a dualist reductionist would not meet the same problem. The dualist allows non-spatial substance. — Metaphysician Undercover
I don't see this need. We hear people talking, we read books. These are perceptual activities. Why can't we treat them like any other perceptual activity? — Metaphysician Undercover
If there are such philosophers as 'reductionist dualists', I would be interested to hear about them. — Wayfarer
You meant conceptual activities really, didn't you? — apokrisis
Or at least some of us read books and listen to people talk to gain access to the group-mind. It kind of defines the line between crackpot and scholar. — apokrisis
No, I meant that hearing people speak and reading books are acts of sensation. Don't you agree? — Metaphysician Undercover
I don't read books, or speak to people to gain access to any "group-mind". — Metaphysician Undercover
I'm pretty sure I'm dualist, and apokrisis has repeatedly affirmed that I'm reductionist, so where does that leave me? — Metaphysician Undercover
Chalmers? — apokrisis
Chalmers has admitted to being a dualist, but I don't know if he's admitted to being a physicalist. I suppose he and Searle and others of that ilk, take issue with materialism but at the same time, they don't want to defend any kind of traditional dualism. (Need to do more reading.) — Wayfarer