• tom
    1.5k
    Do you say that the mind is analogous to software? If so, that would paint a rather inert picture of the mind. In this context I would rather say that software is the instructions for the brain. One problem is, how do we write those? — Querius

    No, I mean the mind IS software. According to known physics, it can't be anything else. Consciousness is a software feature, and the software programs itself.
  • Metaphysician Undercover
    13.2k
    No, I mean the mind IS software. According to known physics, it can't be anything else. Consciousness is a software feature, and the software programs itself. — tom

    That, I think, is a very strange notion. You do realize, don't you, that physics neither claims to understand the mind nor attempts to understand it. So to make your determination of what the mind "is" according to known physics would be a highly unusual, and fallible, thing to do.
  • Pierre-Normand
    2.4k
    So, our intentions, deliberations and thoughts are direct instructions for neurons. Neurons listen in and understand our mental stuff directly and know what to do? No problemo? — Querius

    Our thoughts are not instructions for neurons at all. The intentional contents of our beliefs and intentions aren't directed at neurons. They're typically directed at objects and states of affairs in the world. Our neurons need not be told what to do any more than transistors in computers need be told by the software what to do. The installed software is a global structural property of the suitably programmed computer. What it is that the transistors are performing -- qua logical operations -- is a function of the context within which they operate (i.e. how they're connected with one another and with the memory banks and input devices). Their merely physical behavior is governed only by the local conditions and the laws of physics, regardless of the global structure of the computer.
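
    To make this concrete, here is a minimal sketch (in Python; the gate wirings are invented purely for illustration): the very same local rule, embedded in different global structures, realizes different logical functions, without any individual gate being "told" what the circuit as a whole computes.

    def nand(a: int, b: int) -> int:
        # Local rule: a NAND gate responds only to its own two inputs,
        # whatever the global structure it happens to be embedded in.
        return 0 if (a and b) else 1

    def and_circuit(a: int, b: int) -> int:
        # Global structure 1: two NAND gates wired so the whole realizes AND.
        x = nand(a, b)
        return nand(x, x)

    def or_circuit(a: int, b: int) -> int:
        # Global structure 2: three NAND gates wired so the whole realizes OR.
        return nand(nand(a, a), nand(b, b))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, and_circuit(a, b), or_circuit(a, b))

    Which logical operation a given gate is performing, qua logical operation, depends on the wiring; its local physical behavior is identical in both circuits.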

    Well, in order to function, hardware does require translation of high-level programming language, so this analogy seems inapt.

    The hardware need only be suitably structured in order to deal adequately with the software instructions; it does not need to have the instructions translated for it. High-level code needs to be compiled or interpreted before it runs only in cases where the hardware is general purpose and its native instruction set isn't able to run the code directly.

    I think part of the trouble in conveying the significance of my hardware/software analogy stems from the fact that the term "hardware" is highly ambiguous. The term can refer to the material constitution of the computer qua physical system, which merely obeys the laws of physics. It can also refer to the computer qua implementation of a program that transforms significant inputs into significant outputs. Understood in the latter sense, the hardware, in virtue of its structure, already embodies a specific algorithm -- an abstract mapping from input sequences to output sequences (including the possibility of non-termination). It can be represented formally by a Turing machine. When a definite part of the input to the hardware (perhaps accessed by the CPU from an internal memory store) encodes a virtual machine, then non-native code can be interpreted. In that case, the hardware in conjunction with the interpreter can be conceived as embodying a higher-level algorithm.
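
    A toy sketch of that last point (Python; the "hardware" step-function and the table encoding are invented for illustration, not a model of any real machine): the hardware's low-level behavior is fixed once and for all, yet when part of its input encodes a transition table -- a rudimentary "virtual machine" -- it thereby runs non-native code.

    def hardware(memory):
        # Fixed low-level behavior: repeatedly apply whatever transition
        # table sits at the front of memory to the data behind it.
        table, data = memory
        state = "start"
        out = []
        for symbol in data:
            state, emit = table[(state, symbol)]
            out.append(emit)
        return out

    # The "virtual machine" is just more input: this table makes the fixed
    # hardware behave as a running-parity machine over the bits that follow.
    parity_vm = {
        ("start", 0): ("start", 0), ("start", 1): ("flip", 1),
        ("flip", 0): ("flip", 1), ("flip", 1): ("start", 0),
    }
    print(hardware((parity_vm, [1, 0, 1, 1])))   # -> [1, 1, 0, 1]

    The hardware-plus-table composite embodies a higher-level algorithm (running parity) that is nowhere to be found in the fixed step-function alone.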

    "The programmer need not concern herself with the way in which the hardware enables her program to run." -- PN

    Because a compiler — translator — bridges the gap. Right?

    That's right. But that's a special case. The task of the compiler (or interpreter) isn't to translate high-level instructions into a language that the hardware understands. The hardware understands nothing, of course. Rather, the task of the compiler merely is to transform the structure of the input such that, when this transformed input is mapped by the low-level hardware to the corresponding output, the result of the computation accords with the specification of the high-level program. The composite system consisting of that hardware, augmented with the compiler, constitutes a virtual machine that behaves in exactly the same way (though maybe less efficiently) as more complex hardware that would have the high-level language as its native instruction set.
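
    A minimal sketch of that claim (Python again; the expression language and instruction names are made up for the example): the "compiler" merely restructures a high-level expression into a native instruction sequence, and the composite of compiler plus unintelligent stack machine behaves exactly as if the expression language were its native instruction set.

    def compile_expr(e):
        # Restructure a high-level expression tree into native stack code.
        if isinstance(e, int):
            return [("PUSH", e)]
        op, left, right = e
        return compile_expr(left) + compile_expr(right) + [(op, None)]

    def run_native(program):
        # The "hardware": a fixed mapping from instruction sequences to a
        # result, with no grasp of what high-level program it is realizing.
        stack = []
        for op, arg in program:
            if op == "PUSH":
                stack.append(arg)
            else:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b if op == "ADD" else a * b)
        return stack.pop()

    # High-level specification: (2 + 3) * 4 should yield 20.
    print(run_native(compile_expr(("MUL", ("ADD", 2, 3), 4))))   # -> 20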

    So, in a way, a mature human being understands natural language thanks to her overall physiological structure embodying a "virtual machine" that has this language as its native language. The neurons need not understand what their individual roles are in the underlying causal chain any more than transistors in a computer need understand anything about their electrical "inputs". The relevant level of information processing (or of human natural language understanding) resides in the overall structure and behavior of the whole computer, including its compiler/interpreter (or of the whole human being).

    To be clear, I am not saying that the hardware/software analogy furnishes a good or unproblematic model for the body/mind relationship. The purpose of the analogy is quite limited. It is intended to convey how top-down causation can be understood to operate unproblematically, in both cases, without any threat of causal overdetermination or violation of the causal closure of the lower-level domain.
  • Querius
    37

    Our thoughts are not instructions for neurons at all. The intentional contents of our beliefs and intentions aren't directed at neurons.
    If so, how does downward causation work? How do we get from the intention to raise one’s arm to neurons which act in accord with that intention?
    They're typically directed at objects and states of affairs in the world. Our neurons need not be told what to do any more than transistors in computers need be told by the software what to do.
    Excusez-moi? In order to be functional, to act how and when they need to act, transistors in computers do need software instructions.
    The installed software is a global structural property of the suitably programmed computer. What it is that the transistors are performing -- qua logical operations -- is a function of the context within which they operate (i.e. how they're connected with one another and with the memory banks and input devices). Their merely physical behavior is governed only by the local conditions and the laws of physics, regardless of the global structure of the computer.
    You forget about the role of software information, which is part of the global structure.
    The hardware need only be suitably structured in order to deal adequately with the software instructions; it does not need to have the instructions translated for it. High-level code needs to be compiled or interpreted before it runs only in cases where the hardware is general purpose and its native instruction set isn't able to run the code directly.
    You are mistaken. No computer can run a programming language/source code directly; translation to machine code is always necessary, unless, of course, you start with machine code. However, our deliberations, thoughts and intentions are anything but 'machine code'. Behold the gap.
    The task of the compiler (or interpreter) isn't to translate high-level instructions into a language that the hardware understands.
    Again, you are mistaken: it is exactly that.
    The neurons need not understand what their individual roles are in the underlying causal chain any more than transistors in a computer need understand anything about their electrical "inputs".
    Such a level of understanding is not at issue here. What transistors need are clear instructions. Obviously they don’t need to 'understand' anything else, let alone their position in the scheme of things.
    To be clear, I am not saying that the hardware/software analogy furnishes a good or unproblematic model for the body/mind relationship. The purpose of the analogy is quite limited. It is intended to convey how top-down causation can be understood to operate unproblematically ...
    The translation problem — from deliberations and intentions to instructions for neurons — persists.
  • Pierre-Normand
    2.4k
    If so, how does downward causation work? How do we get from the intention to raise one's arm to neurons which act in accord with that intention? — Querius

    The neurons don't need to act in accord with the intention since the intention isn't directed at the neurons. If my intention is to grasp a glass of water standing on a table before me, what would it mean for my neurons to act in accord with this intention? Neurons are blind and impotent. The electrical activity of my neurons must, for sure (and this is what you must mean), be such as to enable the suitable muscular contractions so that my hand will move towards the glass, etc. This neural activity may sustain a sensorimotor feedback loop that realizes, in context, my voluntarily taking hold of the glass of water, specifically in conditions where I had both the opportunity and an intelligible reason to do so.

    Such a molar activity of my body/brain/environment may have become possible, as I matured, through the progressive tuning and habituation of the underlying physiological processes. Those processes need only sustain, though not control, higher-level sensorimotor feedback loops as well as (at an even higher level) proclivities to act in a manner that is rationally suited to the circumstances. So, both the learning of the abilities and their actualization are always top-down causal processes, running from the intelligible and perceptible circumstances of our embodiment in the natural and cultural world to the tuning and (conditional) actualization of the underlying mindless (albeit adaptive) enabling responses from our low-level physiology. Our proclivities to be thus behaviorally trainable (which we share with other animals) and to be acculturated (which we don't) are of course evolved.

    There is no need for the person, her body, or her mind to instruct neurons on what to do, since it is rather their function -- through their mindless and automatic (previously selected and/or tuned and/or habituated) low-level activities -- to enable whole persons to move about in a manner that is expressive of rationality and of sensorimotor competence. Likewise, in a simpler case, the cells of the heart are collectively organized in such a way as to enable the whole organ to pump blood without its being necessary for the heart to individually "instruct" its own cells about what to do. The cells respond directly to local variations in blood flow, adrenalin, electric potential, etc., in such a manner that the result of the overall dynamics is adaptive.

    I'll comment later on the role of the compiler/interpreter in the computer analogy.
  • Querius
    37
    The neurons don't need to act in accord with the intention since the intention isn't directed at the neurons. — Pierre-Normand

    According to you, neurons don't need to act in accord with the intention to raise one's arm ....

    Unless you are willing to retract this claim, our discussion ends here.
  • Pierre-Normand
    2.4k
    According to you, neurons don't need to act in accord with the intention to raise one's arm ....

    Unless you are willing to retract this claim, our discussion ends here.
    Querius

    In a sense they do (metaphorically) and in another sense (literally) they don't. Which is why I took pains to disambiguate the two senses, charitably ascribed to you the reasonable one, and attempted to warn you against the easy conflation. The tendency to make this conflation is a core target in Bennett and Hacker, Philosophical Foundations of Neuroscience. But if you don't like having your preconceptions challenged, suit yourself. I may keep on answering some of your already stated questions and challenges. You are of course free not to respond.
  • Metaphysician Undercover
    13.2k
    To be clear, I am not saying that the hardware/software analogy furnishes a good or unproblematic model for the body/mind relationship. The purpose of the analogy is quite limited. It is intended to convey how top-down causation can be understood to operate unproblematically, in both cases, without any threat of causal overdetermination or violation of the causal closure of the lower-level domain. — Pierre-Normand

    I think that the whole hardware/software analogy is an unproductive distraction. What is at issue here are the electrical processes of the computer, what that energy is doing. The hardware and software work together in unison to control what that energy is doing, so there is no real division here to speak of.

    But if we look at the neurological activity of the human being now, what the energy is doing, we can understand a real division which Querius points to. There are controls over the activity which are clearly aspects of the physical neurological system. But then we also have immaterial ideas, in the mind, which appear to exercise some control. So if our neurological activity is proceeding according to the constraints of the physical system, how is it that with our minds, and the use of immaterial ideas, we can decide which direction to go?
  • tom
    1.5k
    The tendency to make this conflation is a core target in Bennett and Hacker, Philosophical Foundations of Neuroscience. But if you don't like having your preconceptions challenged, suit yourself. — Pierre-Normand

    I don't mind having my preconceptions challenged, if you don't mind elaborating?
  • Wayfarer
    22.6k
    The abstract mind, instantiated on the computationally universal brain, decides to move an arm. It does not know the mechanism of how this is performed, because it does not need to. The mechanism involves layers of sub-conscious neuronal control systems, which eventually result in the appropriate nerve signals to the appropriate muscles. — tom

    The problem with 'mind as software' is that it surely is an analogy. It isn't literally the case, because software is code that is executed on electro-mechanical systems, in response to algorithms input by programmers. The mind may be 'like' software, but it is not actually software, as has been argued by numerous critics of artificial intelligence.

    I view consciousness as indivisible — Querius

    That is the well-known philosophical conundrum of the 'subjective unity of experience'. There is a vast literature on that, but it remains mysterious.

    Past a certain point, I just don't think it is possible to explain the nature of mind, because the mind is prior to, and the source of, any explanation about any subject whatever, including explanation of the nature of mind. Just as it is not really possible to explain why natural or scientific laws exist, it is also not possible to explain the basic operations of reason, as any explanation will have to make use of the very faculty which it is seeking to explain. In this case, it's a matter of 'knowing you don't know' being preferable to 'thinking you know something you don't.'
  • Pierre-Normand
    2.4k
    I don't mind having my preconceptions challenged, if you don't mind elaborating? — tom

    My comment was directed at Querius, who resolved to stop reading my post past the first sentence unless I issued a retraction. The rest of the post was of course an elaboration. Querius thought my assertion that my neurons don't need themselves to act "in accord" with my intentions -- as opposed to their activity, as I explained, merely enabling a molar (high-level) bodily behavior that itself constitutes my enacting those intentions in dynamic interaction with my environment -- was incredible. Was there something in my explanation that also rubbed against your view regarding the "mind/brain" relationship, which I propose may be viewed as a matter of a high-level to low-level structural (i.e. "implementation") relationship (as opposed to a boss/employee relationship, say)?

    The purpose of the digital computer analogy was to show that, in this case also, individual transistors, or logic gates, or even whole collections of them -- i.e., the CPU -- need not have the high-level software instructions "translated" to them when the implementation of the high-level software specification is a matter of the whole computer being structured in such a way that its molar behavior (i.e. the input/output mapping) simply accords with that specification. In cases where the code is compiled or interpreted, the CPU need not know what virtual machine, if any, is being implemented.
  • apokrisis
    7.3k
    That is the well-known philosophical conundrum of the 'subjective unity of experience'. There is a vast literature on that, but it remains mysterious. — Wayfarer

    It's not that mysterious once you accept that the unity is mostly being unified by what it successfully ignores. (Which is also what makes the computer analogies being used here fairly inappropriate.)

    So attentional processing "unifies" by singling out the novel or surprising. And it does that by suppressing everything else that can be treated as predictable, routine, or irrelevant.

    Well I say attention "does it". But of course it is anticipatory modelling and established habit that defines in advance the predictable, routine, or irrelevant. So attention-level processing only has some mopping up to do.

    Thus the mind does have its strong central division into habit and attention. Everything that can be dealt with without clear conscious knowledge gets sorted out in 150 to 300 milliseconds by "automatic" habit. Then anything left over becomes a focus of "conscious" attentional processing - which takes 300 to 700 milliseconds to form and stabilise. With attention we are now talking about reportable awareness as - having managed to remove so much unnecessary sensory detail from the picture - we have a small enough "point of view" to retain as a persisting state of working memory.

    So when it comes to something like the question of how does one lift one's arm, the usual way is without even attentionally deliberating. Attention is usually focused in anticipatory fashion on some general goal - like getting the cornflakes off the top shelf. Then habit slots in all the necessary muscular actions without need for higher level thought or (working memory) re-presentation. It is only if something goes wrong that we need to stop and think - start forming some different plan, like going to get a chair because our fingers can't in fact reach.

    So - as I have argued through the thread - the key is the hierarchical and semiotic organisation of top down constraints over bottom up degrees of freedom. And even a simple action like lifting an arm is highly complex in terms of its many levels of such organisation.

    I can lift a hand fast and reflexively if I put it on a hot stove. Pain signals only have to loop as far as the spine to trigger a very unthinking response.

    Then I can lift the hand in a habitual way because I am intending in a general way to have my routine breakfast.

    Or then I can lift my hand in a very artificial way - as perhaps in a laboratory experiment where I am wired up with electrodes and I'm being asked to notice when my intention to lift the arm first "entered my conscious awareness".

    At this point, it is all now about some researcher's model of "freewill" and the surprise that a familiar cultural understanding about the "indivisibility of consciousness" turns out to be quite muddled and wrong.

    Not that that will change any culturally prevalent beliefs however. As I say, the mind is set up to be excellent at ignoring things as a matter of entrenched habit. A challenge to preconceptions may cause passing irritation, but it is very easy for prejudice to reassert itself. If - like Querius - you don't like the answer to a question, you just hurry on to another question that you again only want the one answer to.
  • Pierre-Normand
    2.4k
    The problem with 'mind as software' is that it surely is an analogy. — Wayfarer

    I quite agree. Its usefulness lies in helping to clear up some issues regarding inter-level material-realization vs. functional-level causal relationships and the threat of causal over-determination that always lurks. Its drawback is that it encourages what Susan Hurley called the sandwich model of the mind, which portrays mental operations as being located in a linear causal chain mediating (i.e. being sandwiched) between raw sensory "inputs" on one side and bodily actions (raw motor "outputs") on the other side.
  • apokrisis
    7.3k
    The purpose of the digital computer analogy also was to show that, in this case also, individual transistors, or logic gates, or even collections of them, need not have the high level software instructions "translated" to them in the case where the implementation of this high level software specification is a matter of the whole computer being structured in such a way that its molar behavior (i.e. the input/output mapping) simply accords with the high level specification. — Pierre-Normand

    Real computers are structured in hierarchical fashion. So once you start to talk about operating systems, languages, compilers, instruction sets, microcode and the rest, you are talking about something quite analogous to the organic situation where the connection from "software" to "hardware" is a multi-level set of constraints. Functions are translated from the level of programmes to the level of physical actions in such a way that the two realms are materially or energetically disconnected. What the software can "freely imagine" is no longer limited by what the hardware can "constrainedly do".

    Where the computational analogy fails is that there is nothing coming back the other way. The physics doesn't inform the processing. There is no actual interaction between sign and matter as all the hierarchical organisation exists to turn the material world into a machine that can be ignored. That elimination of bottom-up efficient/material cause is then of course why the software can be programmed with the formal/final fantasies of us humans. We can make up the idea of a world and run it on the computer.

    So the computer metaphor - at least the Universal Turing Machine version - only goes so far. The organic reality is rather different in that there is a true interaction between sign and matter going on over all the hierarchical levels. Of course, this is more like a neural network or Bayesian brain architecture. But still, there is a world of difference between a computer - a machine designed to divorce the play of symbols from the play of matter - and a mind/brain, which is all about creating a hierarchically complex, and ecologically constrained, system of interaction between the two forms of play.

    Computers are not "of this world" so can be used as devices to freely imagine worlds.

    Brains are devices constrained by a world. But in making that relationship structurally complex, brains gain the functional degrees of freedom that we call autonomy and subjective cohesion. (The freedom to actually ignore the world being a central one, as I argued.)
  • Pierre-Normand
    2.4k
    Computers are not "of this world" so can be used as devices to freely imagine worlds.

    Brains are devices constrained by a world. But in making that relationship structurally complex, brains gain the functional degrees of freedom that we call autonomy and subjective cohesion. (The freedom to actually ignore the world being a central one, as I argued.)
    apokrisis

    Yes, I broadly agree. The interplay of worldly dynamic constraints and the freedom to imagine (and, centrally, to plan actions) is explained in relation to the faculty of memory in an interesting way by Arthur Glenberg in his paper "What memory is for", Behavioral and Brain Sciences, 20(1), 1997.

    Here is the abstract:

    "Let's start from scratch in thinking about what memory is for, and consequently, how it works. Suppose that memory and conceptualization work in the service of perception and action. In this case, conceptualization is the encoding of patterns of possible physical interaction with a three-dimensional world. These patterns are constrained by the structure of the environment, the structure of our bodies, and memory. Thus, how we perceive and conceive of the environment is determined by the types of bodies we have. Such a memory would not have associations. Instead, how concepts become related (and what it means to be related) is determined by how separate patterns of actions can be combined given the constraints of our bodies. I call this combination “mesh.” To avoid hallucination, conceptualization would normally be driven by the environment, and patterns of action from memory would play a supporting, but automatic, role. A significant human skill is learning to suppress the overriding contribution of the environment to conceptualization, thereby allowing memory to guide conceptualization. The effort used in suppressing input from the environment pays off by allowing prediction, recollective memory, and language comprehension. I review theoretical work in cognitive science and empirical work in memory and language comprehension that suggest that it may be possible to investigate connections between topics as disparate as infantile amnesia and mental-model theory."
  • Wayfarer
    22.6k
    the mind does have its strong central division into habit and attention. Everything that can be dealt with without clear conscious knowledge gets sorted out in 150 to 300 milliseconds by "automatic" habit. Then anything left over becomes a focus of "conscious" attentional processing - which takes 300 to 700 milliseconds to form and stabilise. With attention we are now talking about reportable awareness as - having managed to remove so much unnecessary sensory detail from the picture - we have a small enough "point of view" to retain as a persisting state of working memory. — apokrisis

    This has some relationship with the famous Libet experiments, doesn't it? They showed that the body moves before the subject is aware that they want to move it. A lot of people seem to interpret that as a blow against free will, but I see it as more an indicator of the limited role of the ego: a lot of what we do is pre-conscious, and the process of 'thinking about it' lags by a brief period of time, partially because it's not always necessary, and partially because there's a lot of work involved. But that doesn't mean that we are not free agents. When we are 'unconsciously competent' at something, when it becomes 'second nature', then we can perform it without necessarily thinking about it.

    This video has some interesting points to make about that (John Haugeland and Hubert Dreyfus both appear):

  • apokrisis
    7.3k
    This has some relationship with the famous Libet experiments, doesn't it? They showed that the body moves before the subject is aware that they want to move it. — Wayfarer

    Yep. So what the experiments illustrate is that we have "free won't", rather than freewill. As long as we aren't being hurried into an impulsive reaction, we can - the prefrontal "we" of voluntary level action planning - pay attention to the predictive warning of what we are about to do, and so issue a cancel order.

    So part of the habit-level planning for a routine action is the general broadcast of an anticipatory motor image. As part of the unity of experience, the sensory half of our brain has to be told that our hand is suddenly going to move in a split second or so. And the reason for that is so "we" can discount that movement as something "we" intended. We ignore the sensation of the moving hand in advance - and so then we can tell if instead the world caused our hand to move. A fact far more alarming and deserving of our attention.

    So Libet was a Catholic and closet dualist. That rather shaped how he, as an experimenter, reported his work. The popular understanding of what was found is thus rather confused.

    If you turn it around, you can see that instead we live in the world in a way where we are attentionally conscious of what we mean to do in the next second or so. Then at a faster operating habitual level, the detail gets slotted in - which includes this "reafference" or general sensory warning of what it is shortly going to feel like because our hand is going to suddenly move "of its own accord". But don't panic anyone ... in fact just ignore it. Only panic if the hand fails to get going, or if perhaps there is some other late breaking news that means mission abort - like now seeing the red tail spider lurking by the cornflakes packet.

    So the Libet experimental situation was extremely artificial - the opposite of ecologically natural. But it got huge play because it went to the heart of some purely cultural concerns over "the instantaneous unity of consciousness" and "the human capacity for freewill".
  • Wayfarer
    22.6k
    So part of the habit-level planning for a routine action is the general broadcast of an anticipatory motor image. As part of the unity of experience, the sensory half of our brain has to be told that our hand is suddenly going to move in a split second or so. And the reason for that is so "we" can discount that movement as something "we" intended. We ignore the sensation of the moving hand in advance - and so then we can tell if instead the world caused our hand to move. A fact far more alarming and deserving of our attention. — apokrisis

    But again this is reductionist to the extent that you're treating the subject - namely the human - in a biologistic way - explaining human nature in terms of systems, reactions, models, and so on. It's adequate on one level of description, but not on others. As far as free will (or won't) is concerned, the point from the perspective of a humanistic philosophy is not understanding the determinative causes of human actions from an abstract or theoretical point of view, but what freedom of action means. That is the point of the clip: an important part of what makes us human is that we care about something. In your case, you care about philosophy of biology, which drives you to explore it, enlarge the boundaries of it, and so on. Very good. But I don't know if that necessarily has a biological origin or rationale.

    So Libet was a Catholic and closet dualist. — apokrisis

    Isn't that 'the genetic fallacy'? Anyway, I'm Buddhist and an outed dualist. ;)
  • apokrisis
    7.3k
    But again this is reductionist to the extent that you're treating the subject - namely the human - in a biologistic way — Wayfarer

    All modelling is reductionist ... even if it is a reduction to four causes holistic naturalism. And as I say, even the brain is a reductionist modeller, focused on eliminating the unnecessary detail from its "unified" view of the world. The brain operates on the same principle of less is more.

    As far as free will (or won't) is concerned, the point from the perspective of a humanistic philosophy is not understanding the determinative causes of human actions from an abstract or theoretical point of view, but what freedom of action means. — Wayfarer

    Yep. But that is covered by my point that neuroscience only covers the basic machinery. To explain human behaviour, you then have to turn to the new level of semiosis which is linguistic and culturally evolving. So you can't look directly to biology for the constraints that make us "human" - the social ideas and purposes that shape individual psychologies. You do have to shift to an anthropological level of analysis to tell that story.

    (And I agree that the majority of neuroscientists - especially those with books to sell - don't get that limitation on what biology can explain.)

    Isn't that 'the genetic fallacy'? Anyway, I'm Buddhist and an outed dualist. — Wayfarer

    As it happened, Libet told me about his dualistic "conscious mental field" hypothesis before he actually published it in 1994. So I did quiz him in detail about the issue of his personal beliefs and how that connected to the way he designed and reported his various earlier brain stimulation and freewill experiments.

    So I am not making some random ad hominem here. It is a genuine "sociology of science" issue. Both theists and atheists, reductionists and holists, are social actors and thus can construct their work as a certain kind of "performance".

    And believe me, the whole of philosophy of mind/mind science came to seem to me a hollow public charade for this reason. For the last 50 years (starting from the AI days) it has been a massive populist sideshow. Meanwhile those actually doing real thinking - guys like Stephen Grossberg or Karl Friston - stayed well under the radar (largely because they saw the time-wasting populist sideshow for what it was as well).
  • Wayfarer
    22.6k
    So you can't look directly to biology for the constraints that make us "human" - the social ideas and purposes that shape individual psychologies. You do have to shift to an anthropological level of analysis to tell that story. — apokrisis

    Thank you.

    As it happened, Libet told me... — apokrisis

    :-*
  • Metaphysician Undercover
    13.2k
    But again this is reductionist to the extent that you're treating the subject - namely the human - in a biologistic way - explaining human nature in terms of systems, reactions, models, and so on. It's adequate on one level of description, but not on others. — Wayfarer

    All modelling is reductionist ... even if it is a reduction to four causes holistic naturalism. And as I say, even the brain is a reductionist modeller, focused on eliminating the unnecessary detail from its "unified" view of the world. The brain operates on the same principle of less is more. — apokrisis

    I don't understand the bad reputation which reductionism has received. If it's the way toward a good clear understanding, then where's the problem? I can see how a monist materialist reductionist would meet a dead end in the quest for understanding, at the Planck level where the material world becomes unintelligible, and this would appear as the limit to intelligibility; but a dualist reductionist would not meet the same problem. The dualist allows non-spatial substance.

    To explain human behaviour, you then have to turn to the new level of semiosis which is linguistic and culturally evolving. So you can't look directly to biology for the constraints that make us "human" - the social ideas and purposes that shape individual psychologies. You do have to shift to an anthropological level of analysis to tell that story. — apokrisis

    I don't see this need. We hear people talking, we read books. These are perceptual activities. Why can't we treat them like any other perceptual activity? Why do you feel the need to look to something else, like social ideas, cultural constraints, to understand what is just another perceptual activity? In reality this is just the individual interpreting what one hears and reads, just like we interpret any act of sensation. The only difference is that when we interpret these sensations, speech and writing, we assign a special type of meaning to them because we recognize that they come from other minds.
  • Wayfarer
    22.6k
    I don't understand the bad reputation which reductionism has received. If it's the way toward a good clear understanding, then where's the problem? — Metaphysician Undercover

    Reductionism, to put it bluntly, is 'nothing but-ism'. You may think you're a human being, endowed with inalienable rights, but in actual fact you're 'nothing but':
    • 'survival machines—robot vehicles blindly programmed to preserve the selfish molecules known as genes' (Richard Dawkins);
    • the behavior of a vast assembly of nerve cells and their associated molecules (Francis Crick);
    • a 'moist robot' (Daniel Dennett);
    • an accidental collocation of atoms (Bertrand Russell).

    The dualist allows non-spatial substance. — Metaphysician Undercover

    Reductionists are generally materialist. If there are such philosophers as 'reductionist dualists', I would be interested to hear about them.
  • apokrisis
    7.3k
    I don't understand the bad reputation which reductionism has received. If it's the way toward a good clear understanding, then where's the problem? — Metaphysician Undercover

    I always say it is fine in itself. It is only bad in the sense that two causes is not enough to model the whole of things, so reductionism - as a tale of mechanical bottom-up construction - fails once we get towards the holistic extremes of modelling. You need a metaphysics of all four causes to talk about the very small, the very large, and the very complex (the quantum, the cosmological, the biotic).

    a dualist reductionist would not meet the same problem. The dualist allows non-spatial substance. — Metaphysician Undercover

    Yep. Olde fashioned magick! Dualism is just failed reductionism doubling down to make a mystery of both mind and matter.

    I don't see this need. We hear people talking, we read books. These are perceptual activities. Why can't we treat them like any other perceptual activity? — Metaphysician Undercover

    You meant conceptual activities really, didn't you? :)

    Or at least some of us read books and listen to people talk to gain access to the group-mind. It kind of defines the line between crackpot and scholar.
  • Metaphysician Undercover
    13.2k
    If there are such philosophers as 'reductionist dualists', I would be interested to hear about them. — Wayfarer


    I'm pretty sure I'm dualist, and apokrisis has repeatedly affirmed that I'm reductionist, so where does that leave me?

    You meant conceptual activities really, didn't you? — apokrisis

    No, I meant that hearing people speak and reading books are acts of sensation. Don't you agree? And how the individual neurological system deals with these acts of sensation can be understood just like any other act of sensation. We can refer to those concepts of attention and habituation, which you like. Why should we refer to some concept of social constraints in order to understand these acts of sensation? The act of sensation is not being constrained by some aspect of society; it is just a matter of the individual focusing one's attention.

    Or at least some of us read books and listen to people talk to gain access to the group-mind. It kind of defines the line between crackpot and scholar. — apokrisis

    I don't read books, or speak to people to gain access to any "group-mind". Whatever that is, it sounds like a crack-pot idea to me.
  • apokrisis
    7.3k
    No, I meant that hearing people speak and reading books are acts of sensation. Don't you agree? — Metaphysician Undercover

    Of course not. All my senses actually see is squiggles of black marks. My cat sees the same thing.

    To interpret marks as speaking about ideas is something very different. It is to be constrained not by the physics of light and dark patterns but by a communal level of cultural meaning.

    So without being a substance dualist, the semiotician has all the explanatory benefits of there being "two worlds" - the one of sign, the other of matter.

    I don't read books, or speak to people to gain access to any "group-mind". — Metaphysician Undercover

    Exactly. I mean who needs a physics textbook to know about physics, or a neuroscience textbook to know about brains? Just make the damn shit up to suit yourself.
  • apokrisis
    7.3k
    Reductionists are generally materialist. If there are such philosophers as 'reductionist dualists', I would be interested to hear about them. — Wayfarer

    Chalmers?
  • Wayfarer
    22.6k
    I'm pretty sure I'm dualist, and apokrisis has repeatedly affirmed that I'm reductionist, so where does that leave me? — Metaphysician Undercover

    Trying to answer Apokrisis' question, I guess ;-)

    I'm highly sympathetic to dualism, but I think everyone is flummoxed by the idea of how 'res cogitans' could be a 'non-extended substance', because the very idea of 'non-extended substance' appears self-contradictory. (I think I know how to resolve that, but I am never able to explain it.)

    Chalmers? — apokrisis

    Chalmers has admitted to being a dualist, but I don't know if he's admitted to being a physicalist. I suppose he and Searle and others of that ilk take issue with materialism but, at the same time, they don't want to defend any kind of traditional dualism. (Need to do more reading.)
  • Pierre-Normand
    2.4k
    Chalmers has admitted to being a dualist, but I don't know if he's admitted to being a physicalist. I suppose he and Searle and others of that ilk take issue with materialism but, at the same time, they don't want to defend any kind of traditional dualism. (Need to do more reading.) — Wayfarer

    I haven't kept up with Chalmers's recent views. He's recently endorsed a sort of functionalist view of the mind that accommodates externalist and "extended mind" theses regarding cognitive functions and propositional content. His joint paper with Andy Clark regarding Otto and his notebook ("The extended mind", Analysis, 58(1): 7–19) explicitly brackets out issues of phenomenal consciousness ("what it's like" questions; qualia and such). So, he may have remained an epiphenomenalist regarding the phenomenal content of consciousness. This, together with the idea of the intelligibility of P-zombies, is a position that seems to founder on Wittgensteinian considerations regarding the necessary publicity ("publicness"?) of the criteria that ground our abilities to learn and to understand the meanings of words that purport to refer to felt sensations.
  • tom
    1.5k
    The problem with 'mind as software' is that it surely is an analogy. It isn't literally the case, because software is code that is executed on electro-mechanical systems, in response to algorithms input by programmers. The mind may be 'like' software, but it is not actually software, as has been argued by numerous critics of artificial intelligence. — Wayfarer

    That all computationally universal hardware is equivalent is not an analogy. The human brain has to be computationally universal. That's not an analogy either.

    The mind isn't LIKE software; it IS software. The human mind is constantly changing - creating knowledge - by programming itself. It is also a type of software we don't understand yet.
  • tom
    1.5k
    This has some relationship with the famous Libet experiments, doesn't it? They showed that the body moves before the subject is aware that they want to move it. — Wayfarer

    And "everyone" touts Libets 1980s experiments as evidence of the absence of free will, and ignores his 1990s experiments, where he demonstrated a mechanism for it.