• thewonder
    1.4k

    I sort of agree with Dennett's postulation. The "persuasive illusion", however, simply is consciousness. There's nothing illusory about it.

    I don't think that he adequately answers Chalmers's objections, though.

    Has anyone in the Philosophy of Mind taken up Merleau-Ponty? I think that the problem will begin to disappear as embodied cognition becomes more prevalent.

    Edit: To avoid starting another thread, I just wanted to bring up that bivalves can feel pain and therefore have consciousness. This, for me, partially resulted in a crisis of faith as a vegetarian, but I think it may offer something useful for anyone concerned with the Philosophy of Mind. That a decentralized network can still be conscious has interesting implications for the field.
  • Wayfarer
    22.6k
    bivalves can feel pain and therefore have consciousness — thewonder

    They are, therefore, subjects of experience, not simply objects, even if very simple examples. Life could be seen as the emergence of the subjective. And there’s no place in Dennett’s philosophy for the subjective.

    The scientific revolution of the 17th century, which has given rise to such extraordinary progress in the understanding of nature, depended on a crucial limiting step at the start: It depended on subtracting from the physical world as an object of study everything mental – consciousness, meaning, intention or purpose. The physical sciences as they have developed since then describe, with the aid of mathematics, the elements of which the material universe is composed, and the laws governing their behavior in space and time.

    We ourselves, as physical organisms, are part of that universe, composed of the same basic elements as everything else, and recent advances in molecular biology have greatly increased our understanding of the physical and chemical basis of life. Since our mental lives evidently depend on our existence as physical organisms, especially on the functioning of our central nervous systems, it seems natural to think that the physical sciences can in principle provide the basis for an explanation of the mental aspects of reality as well — that physics can aspire finally to be a theory of everything.

    However, I believe this possibility is ruled out by the conditions that have defined the physical sciences from the beginning. The physical sciences can describe organisms like ourselves as parts of the objective spatio-temporal order – our structure and behavior in space and time – but they cannot describe the subjective experiences of such organisms or how the world appears to their different particular points of view. There can be a purely physical description of the neurophysiological processes that give rise to an experience, and also of the physical behavior that is typically associated with it, but such a description, however complete, will leave out the subjective essence of the experience – how it is from the point of view of its subject — without which it would not be a conscious experience at all.

    So the physical sciences, in spite of their extraordinary success in their own domain, necessarily leave an important aspect of nature unexplained.
    — Thomas Nagel

    Mind and Cosmos

    Which is, in my opinion, the same issue as that stated in ‘the hard problem of consciousness’.
  • thewonder
    1.4k

    I don't know that I would say that Dennett necessarily rejects subjectivity, but then again, I honestly haven't read much Dennett, so I couldn't tell you much about him either way.

    Nagel makes a good point about the physical sciences. I had always thought of the hard problem of consciousness as a critique of AI, but I may have just conflated a set of theories at the time.

    I don't think that Dennett's critique is necessarily on point, but I do sort of agree that qualia do not necessarily refute physicalism. Qualia can just describe aspects of physical states.
  • bongo fury
    1.6k
    At this point, would the people collectively manifest the consciousness of the original brain, as a whole, the way it would have manifested inside the person? — simeonz

    Do you here allude to, or have you just re-invented, the China brain?

    Also relevant is this speculative theory of the composition of consciousnesses. It also attempts to quantify the kind of complexity of processing with which you (likewise) appear to be proposing to correlate a spectrum of increasingly vivid consciousness.
  • PoeticUniverse
    1.3k
    Qualia can just describe aspects of physical states. — thewonder

    Especially since it is immediately sequential to certain brain results, and contains and represents those products in a unified way, interrelating all the objects and providing seamless continuity with the previous brain analyses.
  • simeonz
    310
    Do you here allude to, or have you just re-invented, the China brain? — bongo fury
    It shows you how philosophically illiterate I am. At least the Wikipedia article doesn't say - "first proposed by ancient Chinese philosophers". :)

    Also relevant is this speculative theory of the composition of consciousnesses. It also attempts to quantify the kind of complexity of processing with which you (likewise) appear to be proposing to correlate a spectrum of increasingly vivid consciousness. — bongo fury
    Interesting. Thank you.
  • simeonz
    310
    Edit: To avoid starting another thread, I just wanted to bring up that bivalves can feel pain and therefore have consciousness. This, for me, partially resulted in a crisis of faith as a vegetarian, but I think it may offer something useful for anyone concerned with the Philosophy of Mind. That a decentralized network can still be conscious has interesting implications for the field. — thewonder
    Just to explicate something to which you may have alluded here with the vegetarianism remark: if the hypothesis that bivalves can feel pain is true (which doesn't seem particularly implausible in principle, and I wouldn't eat them either), why wouldn't other reactive systems, such as those of plants, feel pain at some reduced level of awareness?
  • thewonder
    1.4k

    That, I am unsure of. Plants respond to things in nature. Why shouldn't plants feel? I would bet that the fact that any living thing responds to stimuli means that it does in some way feel. In order not to starve, however, you do just have to make an arbitrary distinction. For me, sentience extends only as far as decentralized networks, because I can't figure out how to get the nutrients that I need to survive otherwise.
  • simeonz
    310

    I am a vegetarian as well, and I have made a similar arbitrary commitment. What I mean is that, at least logically, I cannot deny that plants may have some degree of feeling.

    In a discussion of this nature, concerning generalizations of consciousness, I think one cannot make hard assertions. I am just trying to examine (for my own sake) the consistency of the arguments and the value of the statements. But validation, by whatever means become available (if they become available), will probably not happen in my lifetime.
  • TheHedoMinimalist
    460
    This neatly distinguishes a strong eliminativism (ascribing consciousness to nothing) from mere identity-ism (ascribing consciousness to some things, some brain states). The former would be what causes horrified reactions from many (see above), and the latter is accepted by Terrapin (I think), and @TheHedoMinimalist (I think). — bongo fury

    You’re right, I misunderstood what eliminativism was. I was surprised to learn that some eliminativist philosophers even deny the existence of pain. This is certainly not what I believe. I think the proper term for my position is actually functionalism.

    Doesn't ascribing consciousness to any machines with "software" set the bar a bit low? Are you at all impressed by Searle's Chinese Room objection? — bongo fury

    I would like to object to Searle’s Chinese Room objection with the following argument:

    P1: If an AI is capable of learning through exposure to stimuli, then it likely has some mental understanding of what it has learned.

    P2: Some AI are capable of learning through exposure to stimuli.

    C: Therefore, those AI likely have some mental understanding of what they have learned.
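
    (As an aside on form only: the argument is a straightforward modus ponens, so its validity isn't really in question; everything rides on whether P1 is true. A minimal sketch in Lean, purely to display the form, with hypothetical stand-in predicates Learns and Understands that aren't defined anywhere in this thread:)

        -- Form of the argument only; `Learns` and `Understands` are
        -- hypothetical stand-ins for P1's antecedent and consequent.
        variable (AI : Type) (a : AI)
        variable (Learns Understands : AI → Prop)

        -- p1 plays the role of P1, p2 of P2; the conclusion C
        -- follows by modus ponens (applying p1 at a to p2).
        example (p1 : ∀ x, Learns x → Understands x) (p2 : Learns a) :
            Understands a :=
          p1 a p2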

    To explain why I think P1 is true, imagine that you are training a dog. The dog listens to your commands and learns to respond appropriately to them. You would likely believe that the dog is indeed capable of hearing. Similarly, if there was an AI that could learn Mandarin by interacting with Mandarin speakers without the need to pre-code the knowledge of Mandarin into the AI, then it is likely capable of some type of mental understanding of Mandarin. It’s not clear to me why we have more reason to believe that the dog is capable of mental understanding when learning something but not an AI program.

    To show that P2 is true, I would like to mention that AI programs which are capable of learning through interaction already exist. For example, AlphaZero is an AI program which taught itself how to play chess while only being pre-coded with the rules of chess. I imagine that it must be capable of some sort of mental understanding to constantly improve its strategy and adapt its playing style to beat every human and AI player. It might even receive positive and negative reinforcement through the experience of positive emotion after winning a game and negative emotion after losing a game. Overall, I would say that we likely already have AI with some mental capacity.
  • TheHedoMinimalist
    460
    I would like to start by mentioning that bongo fury corrected me in his earlier comment about the definition of eliminative materialism. I would say that my view is more properly called functionalism rather than eliminative materialism.

    The human brain has greater overall capacity for information processing than that of other animal species. Both have (in general) greater analytical performance compared to plants. Doesn't it follow that animals are more conscious than plants? — simeonz

    Yes, I think it does because they are capable of more autonomous action and more complex decision making.

    Plants, on the other hand, are capable of some sophisticated behavior (both reactive and non-reactive), if their daily and annual routines are considered in their own time scale. Doesn't that make them more conscious than, say, dirt? — simeonz

    Well, I’m not sure if plants have mental activity of any sort. This is because plants do not seem to be capable of autonomous action or decision making which is remotely similar to that of humans. They also probably do not possess sufficient energy to support something like mental activity. Plants are more likely to have mental activity than dirt though. This is because dirt doesn’t seem to be sufficiently compact to form an embodied entity which could support a mind.

    But is dirt completely unconscious? Particles cannot capture a substantial amount of information, because their states are too few, but they have reactions as varying as can be expected. After all, their position-momentum state is the only "memory" of past "observations" that they possess. But it isn't trivial. One could ask why they wouldn't be considered capable of a microscopic amount of awareness. Not by virtue of having a mass, but because of their memory and responses. If not, there has to be some specific point in the scale of structural and behavioral complexity at which we consider awareness to become manifested. — simeonz

    I don’t think that the view that particles could have some microscopic amount of mental activity can be completely dismissed, but I think the most reliable hypotheses about the types of things which are conscious come from the most certain beliefs which we hold about our own consciousness. I cannot know if you are really conscious, but my best educated guess is that you are, since you are the same type of thing as me (a human). Furthermore, it seems that animals are capable of experiencing certain things as well. This is because if I tell my dog that it’s time to eat, he will respond by running to his food bowl. This implies that he is capable of listening the same way that I and other people are. AI programs also display characteristics indicative of mental activity. For example, if I am speaking to an AI chatbot which seems to respond to me as though it is reading and comprehending what I am saying, then it’s hard for me to conclude that the AI bot is less likely to have mental activity than an animal without simply being prejudiced in my judgements. On the other hand, I don’t observe plants or particles performing tasks which I can recognize as being indicative of the presence of mental activity. Therefore, my best educated guess is that only humans, animals, and AI have mental activity.

    How many neurons (or similar structures) would we need to create an organism whose behavior can be considered minimally sentient - five, five hundred, five million, etc? — simeonz

    This is difficult to precisely answer but I would make an educated guess and say enough to form a microscopic insect. I don’t think that my theory has to explain everything precisely in order to be a plausible theory. The same epistemic difficulties exist for the binary view of consciousness which you accept. The binary view also has to explain which things or beings are conscious. It also has to explain why animals should be considered just as conscious as humans or whether humans with serious neurological issues are just as conscious. Accepting the spectrum view would allow you to demonstrate that human beings are more complex in their mental activity than animals.

    I would like to illustrate how I think societies and ecosystems are similar with respect to consciousness using a thought experiment. Suppose that we use a person for each neuron in the brain, and give each person orders to interact with the rest like a neuron would, but using some pre-arranged conventional means of human interaction. We instruct each individual what corresponding neuron state it has initially, such that it matches the one from a living brain (taken at some time instant). Then we also feed the peripheral signals to the central nervous system, as the real brain would have experienced them. At this point, would the people collectively manifest the consciousness of the original brain, as a whole, the way it would have manifested inside the person? Or to put it differently, do eliminative materialists allow for consciousness nesting? — simeonz

    So, I would like to start by distinguishing between an unrealistic thought experiment and an absurd thought experiment, and to explain why I feel the thought experiment you are presenting me with is part of the latter category and why only the former category is relevant for most metaphysical discussions. Unrealistic thought experiments cannot possibly occur in the real world but are still within the realm of possibility. For example, if you have a theory that a star will always be bigger than a planet, then I could ask you to imagine a planet that is bigger than a star. Because this scenario appears to be within the realm of possibility, it has some point to make even if we cannot ever find a planet which is bigger than a star. An absurd thought experiment, on the other hand, is not only unrealistic but is also not within the realm of possibility. For example, if you have a theory that all round shapes have no sides and I respond by asking whether round squares have sides, then I am giving you an absurd question/scenario, because round squares are not within the realm of possibility.

    The reason why I think that the thought experiment you are providing me is absurd is because humans cannot remotely behave like neurons while maintaining their identity as humans or even humanoid creatures. This is because humans would have to carry out interactions as rapidly as neurons do, with unrealistically perfect synchronization. This would require humans to have radically different body shapes and brain composition. In other words, they would have to look like giant neurons. Thus, the resulting creature would not resemble an ecosystem or a social system. It would just be a giant monster. I cannot even properly imagine your thought experiment, similarly to how I cannot imagine a round square, and so I will have to abstain from responding.
  • bongo fury
    1.6k
    I would like to start by mentioning that bongo fury corrected me — TheHedoMinimalist

    No correction intended, I was just trying to orient myself on the Wikipedia map of positions. (I think I'm happy here, but in some respects also here and here.)

    I would say that my view is more properly called functionalism rather than eliminative materialism. — TheHedoMinimalist

    Yes, point taken. You are more likely to ascribe consciousness to non-fleshy as well as fleshy brains. (?)

    Thanks to you and @simeonz for the Chinese room arguments. I will be pleased to respond later, for what it's worth.

    Hi.

    ... the classical Turing test is outdated, because it limits the scope of the observations to static behavior. — simeonz

    Wasn't that Searle's point? That the test was useless already, because an obvious zombie (an old-style symbolic computer) would potentially pass it?

    Not that everyone then or now finds it obvious that an old-style computer would be a zombie, but the man-in-the-room was meant to pump the intuition of obvious zombie-ness. That was my understanding, anyway, and I suppose I tend to raise the Chinese room just to gauge whether the intuition has any buoyancy. Lately I gauge that it doesn't, much.

    In particular, does materialism deny awareness and self-awareness as a continuous spectrum for systems of different complexity?
    — simeonz

    They do not deny that it is a spectrum but they don’t have to think that it begins on a molecular level or that all objects are part of the spectrum.
    — TheHedoMinimalist

    I agree, and I don't know that I could without a rather clear intuition that all current machines are complete zombies.

    How many neurons (or similar structures) would we need to create an organism whose behavior can be considered minimally sentient - five, five hundred, five million, etc?
    — simeonz

    This is difficult to precisely answer but I would make an educated guess and say enough to form a microscopic insect.
    — TheHedoMinimalist

    I must say, I find it easy to intuit that all insects are complete zombies, largely by comparing them with state of the art robots, which I likewise assume are unconscious (non-conscious if you prefer). I admit there is an element of slippery slope logic here - probably affecting both "sides" and turning them into extremists: the "consciousness deniers" (if such the strong eliminativists be) and the "zombie-deniers" (panpsychists if they deserve the label).

    I agree it is interesting to poll our educated guesses (or to dispute) as to where the consciousness "spectrum" begins (and zombie-ness or complete and indisputable non-consciousness ends). I vote mammals.

    Related to that, it might be useful to poll our educated guesses (or to dispute) as to where the zombie "spectrum" ends (and consciousness or complete and indisputable non-zombie-ness begins). I vote humans at 6 months.
  • simeonz
    310
    I would like to start by mentioning that bongo fury corrected me in his earlier comment about the definition of eliminative materialism. I would say that my view is more properly called functionalism rather than eliminative materialism. — TheHedoMinimalist
    I understand. I cannot imagine how eliminative materialists would deny the phenomenology of senses, considering that senses are central to logical empiricism, which I thought was their precursor, but I am not qualified to speak.

    Well, I’m not sure if plants have mental activity of any sort. This is because plants do not seem to be capable of autonomous action or decision making which is remotely similar to that of humans. They also probably do not possess sufficient energy to support something like mental activity. Plants are more likely to have mental activity than dirt though. This is because dirt doesn’t seem to be sufficiently compact to form an embodied entity which could support a mind. — TheHedoMinimalist
    I would like to contribute to my earlier point with a link to a video displaying vine-like climbing of plants on the surrounding trees in the jungle. While I understand that your argument is not only about appearances, and I agree that our analytico-synthetic skills greatly surpass those of plant life, it still seems unfair to me to award not even a fraction of our sentience to those complex beings.

    This is difficult to precisely answer but I would make an educated guess and say enough to form a microscopic insect. I don’t think that my theory has to explain everything precisely in order to be a plausible theory. — TheHedoMinimalist
    My thinking here is probably inapplicable to philosophy, but I always entertain the idea of a hypothetical method of measurement, a system of inference, and conditions for reproducibility. If we were to observe that our muscles strain when we lift things, and conclude that there is a force compelling objects to the ground, this assertion wouldn't be implausible. Yet it wouldn't have the aforementioned explanative and analytical qualities. But I acknowledge that philosophy is different from the natural sciences.

    The same epistemic difficulties exist for the binary view of consciousness which you accept. — TheHedoMinimalist
    I don't accept any view at present. I am examining the various positions from a logical standpoint. But, speaking out of sentiment, I am leaning more towards a continuum theory.

    The reason why I think that the thought experiment you are providing me is absurd is because humans cannot remotely behave like neurons while maintaining their identity as humans or even humanoid creatures. This is because humans would have to carry out interactions as rapidly as neurons do, with unrealistically perfect synchronization. — TheHedoMinimalist
    The peripheral input could be fed in as slowly as necessary to allow a relaxed scale of time that is comfortable for the human beings involved. This doesn't slow the brain down relative to the sense stimuli it receives, only to time proper. But real time does not appear relevant for the experiment.
  • simeonz
    310
    Wasn't that Searle's point? That the test was useless already, because an obvious zombie (an old-style symbolic computer) would potentially pass it? — bongo fury
    I don't know, really. But according to Wikipedia:
    The Chinese room argument holds that a digital computer executing a program cannot have a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave.
    This seems to me to suggest that John Searle wanted to reject machine sentience in general.
    I agree it is interesting to poll our educated guesses (or to dispute) as to where the consciousness "spectrum" begins (and zombie-ness or complete and indisputable non-consciousness ends). I vote mammals.

    Related to that, it might be useful to poll our educated guesses (or to dispute) as to where the zombie "spectrum" ends (and consciousness or complete and indisputable non-zombie-ness begins). I vote humans at 6 months.
    — bongo fury
    For me personally, the value of the discussion is the inspection of the logical arguments used for a given position and the examination of its distinguishing qualities. Without some kind of method of validation, meaning any kind of quality control, it is difficult to commit. I would like a scale that starts at nothing, increases progressively with the analytico-synthetic capacity of the emergent structures, and reaches its limit at a point of total comprehension, or has no limit. It simply would make interpretations easier.
  • Wayfarer
    22.6k
    Eliminative materialists are firmly grounded in what has been described as 'neo-Darwinian materialism': the mind can be understood solely in neurological terms, as being produced by the physical brain, which in turn can be understood in terms of the adaptive necessity that has shaped its evolution over hundreds of millions of years.

    through the microscope of molecular biology, we get to witness the birth of agency, in the first macromolecules that have enough complexity to ‘do things.’ ... There is something alien and vaguely repellent about the quasi-agency we discover at this level — all that purposive hustle and bustle, and yet there’s nobody home ...

    ...Love it or hate it, phenomena like this exhibit the heart of the power of the Darwinian idea. An impersonal, unreflective, robotic, mindless little scrap of molecular machinery is the ultimate basis of all the agency, and hence meaning, and hence consciousness, in the universe.
    — Daniel Dennett

    Daniel Dennett, Darwin’s Dangerous Idea: Evolution and the Meanings of Life (New York: Simon and Schuster, 1995), 202-3.

    The philosophy of mind based on this view is that the mind is simply the harmonised output of billions of neurons, which produces the illusion of subjectivity.

    The clever thing about this argument is that it echoes (probably unconsciously) other philosophies that declare the illusory nature of the self (which in some sense Christian and Buddhist philosophy also does). But the downside is that the human sense of being responsible agents also becomes part of the machinery of illusion. This is why Daniel Dennett only half-jokingly says that humans are really robots:

    I was once interviewed in Italy and the headline of the interview the next day was wonderful. I saved this for my collection it was... "YES we have a soul but it's made of lots of tiny robots" and I thought that's exactly right. Yes we have a soul, but it's mechanical. But it's still a soul, it still does the work that the soul was supposed to do. It is the seat of reason. It is the seat of moral responsibility. It's why we are appropriate objects of punishment when we do evil things, why we deserve the praise when we do good things. It's just not a mysterious lump of wonder stuff... that will out-live us. — Daniel Dennett

    "Atheism Tapes, part 6", BBC TV documentary.

    So in the same way that Richard Dawkins says that the Universe exhibits the 'appearance of design', so too do human beings exhibit the 'appearance of agency'. But really, there is neither design nor agency, except in the sense of molecular behaviour that creates the illusion of them.
  • Mww
    4.9k


    Well spoken.

    It is odd, though, that such illusory architecture should be the fundamental prerequisite for the human condition.
  • Wayfarer
    22.6k
    It is odd, though, that such illusory architecture should be the fundamental prerequisite for the human condition. — Mww

    The jealous god dies hard. :smile:
  • TheHedoMinimalist
    460
    I must say, I find it easy to intuit that all insects are complete zombies, largely by comparing them with state of the art robots, which I likewise assume are unconscious (non-conscious if you prefer). — bongo fury

    Fair enough, I mostly suspect that insects are conscious because they are capable of moving. They also appear afraid when I try to squash them.
  • TheHedoMinimalist
    460
    I would like to contribute to my earlier point with a link to a video displaying vine-like climbing of plants on the surrounding trees in the jungle. While I understand that your argument is not only about appearances, and I agree that our analytico-synthetic skills greatly surpass those of plant life, it still seems unfair to me to award not even a fraction of our sentience to those complex beings. — simeonz

    Well, I suppose that there’s some significant autonomous action from some plants so some plants might indeed have some mental activity. It’s hard for me to say but I’ll have to do more research and think about this topic more.

    I don't accept any view at present. I am examining the various positions from a logical standpoint. But, speaking out of sentiment, I am leaning more towards a continuum theory. — simeonz

    Fair enough, I should have been more careful about ascribing a viewpoint to you.

    The peripheral input could be fed in as slowly as necessary to allow a relaxed scale of time that is comfortable for the human beings involved. This doesn't slow the brain down relative to the sense stimuli it receives, only to time proper. But real time does not appear relevant for the experiment. — simeonz

    Well, in that case, it’s not clear to me if the giant being made of human neurons would have any mental activity, because he would be thinking and acting extremely slowly. This is because the slow speed of the human neurons would imply that the giant being would be taking an eternity to perform even a really basic cognitive task like responding to a stimulus. His extreme slowness and largeness would make him seem more like a mountain that can make gradual movements rather than a being of any sort. This would make mental activity seem less likely for this being, since it’s probably not necessary for such basic functions. Otherwise, we might as well conclude that a lifeless rock like Mars is conscious because it’s capable of micro-movements like tectonic plate activity. So, we could say that there must be a minimum speed of processing for mental activity to occur, just as water molecules have to move rapidly in order for water to boil. Even if you think that functionalism does imply that the giant being would be conscious, functionalism would still be just as plausible a theory.
  • bongo fury
    1.6k
    In particular, does materialism deny awareness and self-awareness as a continuous spectrum for systems of different complexity? — simeonz

    They do not deny that it is a spectrum but they don’t have to think that it begins on a molecular level or that all objects are part of the spectrum. — TheHedoMinimalist

    This is still the question, for me. I think the OP is quite right that consciousness denial and zombie denial will both tend to lead to replacement of the vague binary (conscious/non-conscious) with an unbounded spectrum/continuum of umpteen grades (of consciousness by whatever name).

    I always suspect that (replacement of heap/non-heap by as many different grades of heap as we can possibly distinguish) is a step backwards.

    In this case my complaint against the unbounded spectrum is,

    • you build lots of AI, confident that all of it is conscious in some degree or other, but you could well be wrong. Hence my aversion to zombie denial.

    • we don't get to understand what consciousness is / how it works. We remark sagely that it is all an illusion... but by the way quite true, and why worry about it...
      But I want to understand my conscious states. Hence my behaviourism: my skepticism about the folk psychology of consciousness, the inner words and pictures.


    This seems to me to suggest that John Searle wanted to reject machine sentience in general. — simeonz

    Apparently not, at least not by way of the Chinese room. He does say he suspects consciousness is inherently biological, but for other reasons.

    ... I mostly suspect that insects are conscious because they are capable of moving. They also appear afraid when I try to squash them. — TheHedoMinimalist

    But wouldn't they appear that way if they were zombie robot insects?... if you can imagine such a thing... could zombie actors help? :lol:
    https://www.imdb.com/title/tt0088024/
  • TheHedoMinimalist
    460
    But wouldn't they appear that way if they were zombie robot insects?... if you can imagine such a thing... could zombie actors help? — bongo fury

    I think that the appearance of consciousness is some evidence for consciousness. Insects could be zombies but they could also be conscious. The fact that they are capable of moving and looking afraid means that there is greater evidence of insects being conscious than there is evidence that they are zombies. My knowledge of my own experience while I’m behaving a certain way provides evidence that the observation of such behavior patterns likely indicates consciousness. To make an analogy, I don’t have to taste a particular piece of candy to have a reasonable belief that it is sweet. My past experiences with similar candies would suffice as evidence for a hypothesis that the candy is more likely to be sweet than non-sweet. Similarly, my past experience of having behavioral patterns and seeing that they are influenced by my mental activity provides evidence for the hypothesis that insects are more likely to be conscious than zombies. Why do you think they are more likely to be zombies?
  • simeonz
    310
    Otherwise, we might as well conclude that a lifeless rock like Mars is conscious because it’s capable of micro-movements like tectonic plate activity. — TheHedoMinimalist
    But Mars has no analytico-synthetic capacity, just dynamism. Even in its own time scale, it wouldn't appear as sentient as human beings. I do entertain the panpsychic idea that simple matter possesses awareness, but of very tenuous quality, negligible by human standards. Mars does manifest adaptation, but it does not engender assumptions of a complex underlying model of reality.

    The brain from the thought experiment (the China brain idea, as bongo fury pointed out) includes all marks of sentience that Mars does not have - great memorization, information processing, and responsiveness (through simulated peripheral output). The time scale is off, but I do not see how this affects the assumption of awareness. In the post-Einsteinian world, time is flexible, especially when acceleration is involved, so I wouldn't relate time and sentience directly.
  • simeonz
    310
    I always suspect that (replacement of heap/non-heap by as many different grades of heap as we can possibly distinguish) is a step backwards. — bongo fury
    I understand, but what is the alternative? If anything less than a million neurons is declared not conscious according to some version of materialism, then that one neuron somehow introduces an immense qualitative difference, which is not apprehensible in the materialist world, where the behavior will be otherwise almost unchanged - i.e. there will be no substantial observable effect. At the million scale, normal genetic variations or aging would be sufficient to toggle the presence of consciousness, without significant functional changes otherwise.

    I can better understand such a claim at a smaller scale, however. One might argue that sentience requires a number of discrete aspects, which might imply a minimal quantity of retention and processing units inside a sentient reasoning apparatus. But the claim would be in the dozens, not the thousands or millions, I imagine.
  • simeonz
    310
    In this post, I will try to summarize my intuition so far as to what the differences are between pantheism and eliminative materialism. The list is tentative and some items may not apply to all variants of eliminative materialism (as I am not even competent to say for sure), but here it goes.

    1) Theist attitude. Eliminative materialists have a stoic disposition conceptually, whereas pantheists have a reverential or deifying one.
    2) The subjective. Eliminative materialists may deny the subjective as an intrinsic property of nature. Instead, they might argue that it is an emergent phenomenon, compelled by biology, by adaptation.
    3) Metaphysics. Eliminative materialists might argue that existence does not inherently pose further questions. That metaphysical inquiries can be explained by the inability of the human species to articulate harmony (broadly speaking) with their environment and themselves.

    A few personal remarks.

    I do not think that the subjective can be considered more illusory than, say, hunger is. It has an evolutionary role. I do appreciate that not all awareness has to be bound to one's self, however. While researching this, I found out that octopuses have a distributed nervous system, such that their tentacles individually possess a sense of their environment. Yet the organism has a unified sense of self-preservation, which might imply that its intelligence still operates under a singular concept of self. I am reasonably accepting of space-time relationalism, which to some extent implies the same attitude.

    Regarding metaphysical questions - I see them as a struggle for total comprehension, which to me is an essential duty for a sentient being. They may not always produce constructive answers, but they create productive attitudes.
  • simeonz
    310
    The philosophy of mind based on this view is that the mind is simply the harmonised output of billions of neurons, which produces the illusion of subjectivity. — Wayfarer
    Reading from the quotes that you kindly provided, I am left wondering what "illusion" means in this context. I understand the general sentiment expressed, and I can see how Daniel Dennett might reject subjectivity as its own substance (mind-body dualism) or intrinsic property (panpsychism). But I do not see how the determination and differentiation of one's self can be considered any more an illusion than other biologically compelled emotions - like hunger or willfulness. We don't call those dysfunctional. In the end, I am not sure that Daniel Dennett considers the concept of self dysfunctional either, since he does accept the consequences of being a subject.
  • TheHedoMinimalist
    460
    The brain from the thought experiment (the China brain idea, as bongo fury pointed out) includes all marks of sentience that Mars does not have - great memorization, information processing, and responsiveness (through simulated peripheral output). The time scale is off, but I do not see how this affects the assumption of awareness. In the post-Einsteinian world, time is flexible, especially when acceleration is involved, so I wouldn't relate time and sentience directly. — simeonz

    If the brain from the thought experiment is supposed to have all the marks of sentience, then I would have to disagree with that thought experiment. Perhaps my example of Mars wasn’t a very good example though. I think the marks of sentience require that the information processing and responsiveness happen in a somewhat timely manner. If the giant being takes literally like 1000 years to respond to a stimulus like having water quickly thrown at him because the human neurons are taking forever to follow their instructions, then it’s hard to imagine what the giant being would even experience. Would he experience 1000 years of neutral emotion followed by 900000 years of being pissed off because someone threw water at him, and then would he experience the emotional states associated with calming down for another 200000 years? It’s kinda hard to imagine that such slow responses would be influenced by mental states. Unless, that is, the being experiences time really fast. But how would he experience time fast with such a slow brain? Having a slow brain doesn’t seem to make time go fast. So, I think it’s more plausible to think that the being is simply not conscious.
  • Wayfarer
    22.6k
    I do not see how the determination and differentiation of one's self can be considered any more an illusion than other biologically compelled emotions - like hunger or willfulness. We don't call those dysfunctional. — simeonz

    But then, they don't claim to be philosophically important. They're what we have in common with all animals. Dennett is quite happy to grant us animal nature.

    I am not sure Daniel Dennett considers the concept of self dysfunctional either, since he does accept the consequences of being a subject. — simeonz

    But he doesn't, really. He says we appear to be subjects, but the appearance of subjectivity is, in reality, the sum of millions of mindless processes.
  • bongo fury
    1.6k
    Similarly, my past experience of having behavioral patterns and seeing that they are influenced by my mental activity provides evidence for the hypothesis that insects are more likely to be conscious than zombies. Why do you think they are more likely to be zombies? — TheHedoMinimalist

    Because of my past observations of robots whose behavior, while suggestive of mental influence, was soon explained by a revelation: either that there was no robot after all, because a human actor operated from inside; or of mechanics and software inside, whose operation I recognised as obviously non-conscious. The twist in the Chinese room, I guess, is to present a human (Searle) who is then revealed to be, in relation to the outer behaviour of the creature, a mere machine himself.

    None of this can impress you if you have lost all intuition of the non-consciousness of even simple machines. I'm not sure how to remedy that, although...

    ... see the motor car analogy, below, and...

    ... also, at least bear in mind that arguments for "other minds" (arguing by analogy with one's own behaviour and private experience) were (I'm betting... will check later) designed to counter the very healthiest intuition of zombie-ness, which might otherwise have inclined us to doubt consciousness in even the most sophisticated (e.g. biological) kinds of machines (our friends and family).

    So you might at least see that your intuition of zombie-ness is likely to have depleted drastically from a previous level, before your interest in AI perhaps? Not that that justifies the intuition. Perhaps zombie denial is to be embraced.

    I always suspect that (replacement of heap/non-heap by as many different grades of heap as we can possibly distinguish) is a step backwards.
    — bongo fury
    I understand, but what is the alternative?
    — simeonz

    Short answer: my pet theory here.

    More generally, how about this silly allegory... Post-apocalypse, human society is left with no knowledge or science but plenty of perfectly formed motor vehicles, called "automobiles", and a culture that disseminates driving skills through a mythology about the spiritual power of "automotivity" or some such.

    Predictably, a primitive science attempts to understand and build machines with true "automotivity". The fruits of this research are limited to sail-powered and horse-powered vehicles, and there is much debate as to whether true automotivity reduces ultimately to mere sail-power, so that car engines will eventually be properly understood as complicated sail-systems. And even now the philosophers remark sagely that engines may appear to be automotive, but the appearance of automotivity is, in reality, the sum of millions of sailing processes.

    Do we hope that this society replaces its vague binary (automotive/non-automotive) with an unbounded spectrum, and stops worrying about whether automotivity is achieved in any particular vehicle that it builds, because everything is guaranteed automotive in some degree?

    I told you it was silly.
  • TheHedoMinimalist
    460
    The twist in the Chinese room, I guess, is to present a human (Searle) who is then revealed to be, in relation to the outer behaviour of the creature, a mere machine himself. — bongo fury

    I’m not really understanding how this twist is relevant. If the Chinese AI was pre-programmed with knowledge of Chinese, then sure, I agree it is likely simply following its instructions (of course, you could never simply pre-program a machine to speak perfect Chinese). The Chinese AI would instead have to be programmed to learn Chinese through interactions with Chinese speakers, because it’s impossible to simply hard-code the knowledge of Chinese into the AI. It would probably require you to type more lines of code than the number of atoms in the universe. But I actually think that being able to follow very complicated instructions would also require consciousness. The question of whether the AI or the human really understands Chinese is seemingly irrelevant, because the functionalist could simply claim that the ability to follow the really complicated instructions mentioned in the thought experiment would require you to mentally understand those instructions in some way. Just as the human in the thought experiment cannot follow his instructions without mentally understanding them, the AI couldn’t do so either.

    Predictably, a primitive science attempts to understand and build machines with true "automotivity". The fruits of this research are limited to sail-powered and horse-powered vehicles, and there is much debate as to whether true automotivity reduces ultimately to mere sail-power, so that car engines will eventually be properly understood as complicated sail-systems. And even now the philosophers remark sagely that engines may appear to be automotive, but the appearance of automotivity is, in reality, the sum of millions of sailing processes. — bongo fury

    Well, I actually don’t consider cars to be autonomous or to have consciousness as a whole. I think the car sensors are probably conscious and self-driving cars might be conscious as a whole. So, let me ask you a question. If the post-apocalyptic world had self-driving cars, how would the reductionist sages of that world explain them in terms of simpler mechanical processes? How would they even explain a seat belt sensor through simple mechanical processes?
  • Wayfarer
    22.6k
    self-driving cars might be conscious — TheHedoMinimalist

    If you ran into one, do you think you would owe it an apology?