• noAxioms
    1.7k
    A Soft-Line Reductionist’s Critique of Chalmers on the First/Third-Person Divide

    As somebody who has no problem with all mental activity supervening on material interactions, I’ve always struggled with trying to understand what exactly is hard about the ‘hard problem’, or even what that problem is.
    The primary disconnect seems to be that no third-person description can convey knowledge of a first-person experience, just like no first-person experience can convey knowledge of a third-person description. So only the bat can know what it’s like to be a bat. The BiV scenario illustrates the latter disconnect: that one cannot know one’s own physical nature from experience alone without presuming that the experience constitutes true empirical evidence.
    So it seems difficult to see how any system, if it experiences at all, can experience anything but itself. That makes first-person experience not mysterious at all. Clearly I am not getting the hard problem. Chalmers attempts to make a problem where there doesn’t seem to be one to me. The conflict seems to be summarized by incorrect assertions that cognitive science disregards the first person, and by comments like this:
    The reductionist will reply in character, and the two will go on, feeling that they are talking past each other.
    Quite right. This topic lends itself to talking past each other, and this thread is an attempt to gather responses that resolve that disconnect.
    All quotes are from https://consc.net/notes/first-third.html
    The first person is, at least to many of us, still a huge mystery. The famous "Mind-Body Problem," in these enlightened materialist days, reduces to nothing but the question "What is the first person, and how is it possible?". There are many aspects to the first-person mystery. The first-person view of the mental encompasses phenomena which seem to resist any explanation from the third person.
    “to many of us”, sure. It’s made into a mystery by presuming unnecessary things (that mind and body are separate things). To me it’s easy. How could a thing experience anything else besides itself? Interesting to attempt to answer that question, but it doesn’t work well for something like a biological creature.
    As for resisting any explanation from the third person, that’s totally wrong. Such explanation is effortless until you get to the whole ‘what it’s like to be’ thing, which resists any written explanation. Chalmers offers no better insight into what it’s like to be a bat (not to be confused with what it’s like for you to be a bat) than does any other explanation.
    The intro goes on and on asserting how mysterious it all is, but that does nothing to actually help me see the mystery. I must be taking a pure third-person view, which is admitted to be not mysterious.
    In the terminology section:
    I originally intended to use the word "consciousness" to represent the mysteries of the first-person, but this had two problems: the word has been often used to denote third-person viewable phenomena (the notion of a system which gets feedback from its own processing, to name just one aspect)
    Here he quotes a crude third-person ‘explanation’ of consciousness right after denying that the third person can provide one.
    It isn’t until fully 60% of the way through the article that Chalmers actually seems to summarize what he thinks is problematic, speaking in the context of awareness of self, which needs to be more than just ‘a system monitoring its own processes’. Not until 85% through do we get more than this summary.
    The problem is, how could a mere physical system experience this awareness.
    But this just seems like another round of feedback. Is it awareness of the fact that one can monitor one’s own processes? That’s just monitoring of monitoring. There’s potential infinite regress to that line of thinking. So the key word here is perhaps the switching of ‘awareness’ to ‘experience’, but then why the level of indirection?
    Instead of experience of the monitoring of internal processes, why can’t it be experience of internal processes, and how is that any different than awareness of internal processes or monitoring of internal processes? When is ‘experience’ the more appropriate term, and why is a physical system necessarily incapable of accommodating that use?
    I can take a stab at that. There seems to be a necessity of memory and predicting going on. It’s almost impossible to be a predictor without memory, and I cannot think of anything that ‘experiences’ that does not do both things, but I can think of things that monitor internal processes that do so without either.
    What I will not accept is a definition-based argument along the lines of “The word ‘experience’ is by definition something only a biological entity has, therefore a non-biological system (doing the exact same thing) cannot experience.” Language describes. It doesn’t prescribe.
    We should never forget that the mind is caused by a brain
    But not necessarily. It is unclear if Chalmers is implying this necessity as do many arguing along similar lines.
    Although we do not know how, a first-person is emergent from a third-person-understandable substrate.
    He asserts this third-person substrate to be ‘understandable’. Perhaps so, but doubtful. No understanding of human brain function exists or is likely ever to exist, even if, say, a full simulation of a human is achieved. Of course such a simulation, while not providing that full understanding, would at least falsify any dualistic model, at least to the person simulated, no matter his prior views.
    Many commentators, particularly those in the third-person camp, give the illusion of reducing first-person mysteries by appropriating the usual first-person words to refer to the third-person phenomena to which they correspond.
    I’m actually confused about his references to third-person phenomena. Phenomena seem intrinsically first-person. Sure, one can discuss a particular phenomenon (say the experience of red) in the third person, but that discussion is itself not the phenomenon of redness. So I guess Chalmers means ‘references to’ some phenomenon when calling it third-person. Not sure.
    The Mystery of the First-Person
    As I have said, it is difficult to talk about the first-person without descending into vagueness. But what can be done, if it is done carefully, is to point out the mysteries, and ask how a third-person, physical theory could ever deal with these. I do not intend to do this in this paper - I take it that this has already been done most ably, by Nagel and others, and that reductionists have never given an adequate response.
    Perhaps I am commenting on the wrong paper. Perhaps my OP should be quoting Nagel, as I implied with the ‘like to be a bat’ mention. I’m certainly not getting anywhere with this article I found by simply googling something like ‘first vs third person mind’.
    But no, Chalmers presents at least a summary again, which is probably what I want rather than the full-blown 20-page treatment that Nagel might present.
    One can ask: how could a third-person theory begin to explain the sensation of seeing the colour red?
    That sensation cannot be described, as illustrated by Mary’s room. This is not controversial. Its explanation seems trivial: red light triggers signals from nerves that are not otherwise triggered, resulting in internal processing that manifests as that sensation. That’s very third-person, but it’s an explanation, no? Certainly no worse an explanation than Chalmers might supply, who merely substitutes a partially black box (how that processing fully works) with a much blacker box about which nothing is known.
    Argument from lack of ‘explanation’ seems to fall utterly flat.
    Could a theory be given which enabled a being, as intelligent as ourselves but without the capacity for sight (or even a visual cortex), to truly understand the subjective experience of the colour red?
    Of course not. The physical view does not claim otherwise. A thing cannot know the experience of being something it is not and never was. One can guess about the experience of another human/mammal since it is likely to be somewhat similar, but to subsequently conclude that nothing sufficiently different (non-biological say) can have experience at all is a complete non-sequitur.
    Similarly, it does not follow that the physicalist model is wrong because one doesn’t know the experience of being a bat.
    No actual problem has been identified.
    Concerning mental content:
    When we think, our thoughts have content. How is it possible that, in some absolute sense, patterns of neuronal activity in a small biological system support meaning?
    Meaning is relative, meaningful only to that which can interpret however the meaning is encoded. I have no idea why he thinks thoughts need to be meaningful in any absolute sense.
  • Janus
    17.5k
    So it seems difficult to see how any system, if it experiences at all, can experience anything but itself.noAxioms

    Don't we also experience a world of things other than ourselves? Perhaps you mean something different—that we don't experience being other things?
  • SolarWind
    225
    I don't see physics as wrong, but rather as incomplete.

    I think it's mysterious that even with knowledge of all the laws of physics, it seems impossible to decide whether plants can suffer.
  • Astorre
    186


    I look at this problem from a slightly different angle:

    Chalmers frames the problem like this:
    There are the so-called "easy" problems of consciousness—they are also complex, but technically solvable. Examples:
    How does the brain process visual information?
    How does a person concentrate attention?
    How does the brain make decisions?

    These questions, in principle, can be studied using neuroscience, AI, and psychology.
    But the hard problem of consciousness is:
    Why do these processes have an internal sensation at all?
    Why doesn't the brain simply function like a computer? Why is its processing accompanied by conscious experience?

    We know what it's like to see red, but we can't explain why the brain accompanies this perception with subjective experience.
    So (as Chalmers writes): Either something is missing in modern science. Or we're asking the wrong question.

    Chalmers asks a question in the spirit of postpositivism: Any scientific theory is not necessarily true, but it satisfies our need to describe phenomena. He suggests rethinking the question itself. However, he hopes to ultimately find the truth (in a very positivist way). He still thinks in terms of "problem → theory → solution." That is, he believes in the attainability of truth, even if only very distantly.

    As for me, I would say this: if the truth of this question is unraveled, human existence will lose all meaning (perhaps being replaced by something or someone new). Why? Because answering this question will essentially create an algorithm for our existence that can be reproduced, and we ourselves will become, in effect, mere machines. An algorithm for our inner world will emerge, and with it a way to copy/recreate/model the subjective self.

    From this, I see two possible outcomes: either this question will be answered (which I would not want in my lifetime), or it will remain unanswered (which means humanity will continue asking it forever, in any way and for any reason, without ever attaining the truth). So my deep conviction on this matter is this: mystery itself is what maintains the sacredness of existence.

    At the same time, as a lover of ontology, I myself ask myself these and similar questions. However, the works I have written on this subject do not claim to be truthful, but rather call for an acceptance of incompleteness. Incompleteness and unansweredness lie at the foundation of our existence, and we must treat this with respect, otherwise our "miraculous" mind, with its desire to know everything, will lead to our own loss.
  • Mijin
    285
    It's always best with these things to bring it back to the practical.

    The measure of how well we understand a phenomenon or system is what kind of useful predictions and inferences we can make about it.

    When it comes to something like pain, say, we do understand very well the sensory inputs to the pain centres of the brain. But how the brain converts data into an unpleasant sensation remains quite mysterious.
    This has practical implications -- it would be very useful to have some kind of direct measure of pain or some non-arbitrary way of understanding different kinds of pain. If we make a sentient AI one day, and it tells us it's in pain, how could we know if that's true or just saying that is part of its language model?

    And we call it the "hard problem" because, right now, it doesn't seem feasible that a set of words could ever provide this understanding. How will words ever tell me what the extra colours that tetrachromats can see look like, when I can't tell someone colour-blind from birth what red looks like?
    And indeed, how can I know whether an AI feels pain, when I can't know that you feel pain?

    This is what makes it a "special" problem. The OP seems to basically acknowledge the main problem but seems to be shrugging it off in a "how could it be any other way" kind of perspective. But "how could it be any other way" doesn't give us any predictive or inferential power.
  • Paine
    2.9k
    I read Chalmers to be questioning whether what is referenced through the first person can be reduced to the third. The issue concerns what is reduction as much and maybe more than any particular model of consciousness.

    Neither side of the divide is presented as a given. The frames of reference are incongruent.
  • Joshs
    6.4k


    I read Chalmers to be questioning whether what is referenced through the first person can be reduced to the third. The issue concerns what is reduction as much and maybe more than any particular model of consciousness.

    Neither side of the divide is presented as a given. The frames of reference are incongruent
    Paine

    Good point. Chalmers is suspicious of reductionism because he sees the form of description on the basis of which consciousness would be reduced (empirical causality, eliminative materialism) to be incompatible with the form of causality or motivation applicable to consciousness. His proposed solution (panpsychism) lets us use empirically causal methods while at the same time honoring the peculiar status of consciousness by embedding consciousness within material things.

    The phenomenological approach follows Chalmers in not wanting to reduce consciousness to material causality in eliminative fashion. But it departs from Chalmers in not wanting to maintain a dualism between third-person causality and first-person awareness. Its solution is to reduce material causality to subjective motivational processes. That is, it sees material causality as a secondary, derived abstraction, not as a method which deserves equal billing with consciousness.
  • noAxioms
    1.7k
    What I will not accept is a definition-based argument along the lines of “The word ‘experience’ is by definition something only a biological entity has,noAxioms
    One great example of this seems to be the philosophical zombie (p-zombie or PZ) argument. Looking at the way it is presented, the only difference between a human and a p-zombie is a reserved list of words/phrases that apply only to the one. It's a pure description difference, no actual difference between the two. So the PZ has no inner experience because 'inner experience' is reserved for the preferred things and cannot, by definition, be used for the unpreferred thing, despite the latter being identical in all other ways.
    Such a tactic was used to argue that it was moral to mistreat black slaves since only cattle terms applied to them, so the morality of their treatment was on par with what was permissible with domestic livestock. The whole p-zombie argument seems to hinge on similar fallacious reasoning. I often claim that my inability to see the problem at hand is due to being a PZ myself. There is nothing about my interaction with the world that seems fundamentally inexplicable. Perhaps Chalmers has some kind of experience that I don't, which is why something so obvious to him is missing from me.


    I look at this problem from a slightly different angle:

    Chalmers frames the problem like this:
    There are the so-called "easy" problems of consciousness—they are also complex, but technically solvable. Examples:
    How does the brain process visual information?
    How does a person concentrate attention?
    How does the brain make decisions?
    Astorre
    Interesting that decision making is part of that. If decisions are made by physical processes, then many argue that moral responsibility is absent. That's nonsense, since the physical person is still making the decisions and thus is held responsible. It is not physics compelling a different decision than what the person willed, unless 'the person' is an epiphenomenal immaterial mind that would have willed differently, sort of like a cinema crowd shouting at the protagonist not to open the door with the monster behind it.

    Point is, those that posit an immaterial mind, especially an immortal one, tend to place decision making on that mind and not on the brain, necessary to hold the immaterial thing responsible for its actions.

    To me, this mental immortality seems to be the historic motivation for the whole dualistic stance, existing long before they even knew what purpose a brain might serve. So many cultures talk about death meaning that you go to meet the ancestors. Clearly the body doesn't, so something else (spirit) must. The age of science and logic comes about, and these stances need to be rationalized. But while Chalmers seems to be doing that rationalizing, that decision-making bit seems inconsistent with some of the ancient motivations.

    But the hard problem of consciousness is:
    Why do these processes have an internal sensation at all?
    How could they not? The sensory input is there, as is the memory of prior inputs, and the processing of all that. Seems like enough to me.
    A thermostat has internal sensations since it has sensory input. It probably doesn't have what I'd classify as experience since it lacks memory and any information processing to deal with prior states.

    Why doesn't the brain simply function like a computer? Why is its processing accompanied by conscious experience?
    It does function somewhat like a computer, and it begs the conclusion to assert that a computer fundamentally lacks anything. Sure, it's different. There are no chemicals to detect, the sensory input is typically vastly different, and a computer is purposefully made to serve the needs of its creator instead of having evolved into a state driven by fitness. That will change if they ever become responsible for their own fate.

    We know what it's like to see red
    No, we know what it's like for us (or maybe just you) to see red. That's not necessarily anything like what it's like for something else to see red.

    but we can't explain why the brain accompanies this perception with subjective experience.
    Neither can Chalmers explain why the brain or something else does this. It does not follow that the brain is not what's doing it in our case.
    And I did attempt to explain it in the OP, and while crude, it's a better explanation than any alternative I've seen. So I have issues with assertions about a lack of explanation. Details are missing, sure. I don't see a wrong question being asked. I'm trying to find out what is supposedly wrong with the question.

    You see why I consider myself a p-zombie. I don't see something that many others find so obvious. But it makes me posit different things, and p-zombies are supposed to behave identically, which suggests that whatever the non-zombie thing has, it isn't causal.

    Chalmers asks a question in the spirit of postpositivism: Any scientific theory is not necessarily true, but it satisfies our need to describe phenomena. He suggests rethinking the question itself. However, he hopes to ultimately find the truth (in a very positivist way). He still thinks in terms of "problem → theory → solution." That is, he believes in the attainability of truth, even if only very distantly.
    He believes in a falsification test then, even if none yet identified. I identified one in the OP, currently outside our capability, but not for long if technology doesn't collapse first.
    It would be interesting to guess at the reactions from both camps from those whose stance has been falsified. Both sides seem to know their own correctness, so a rejection of the test is likely. Few are actually open minded about the topic. Not sure if I am. I pretend to prefer the simpler model, but maybe that's just me rationalizing my biases.

    As for me, I would say this: if the truth of this question is unraveled, human existence will lose all meaning (perhaps being replaced by something or someone new).
    That depends on which truth is found. Perhaps not. I don't see either stance giving objective meaning to humans, and I don't see either stance taking away subjective meaning from humans.
    Does the existence of dandelions have meaning? Protons? If not, at what point in evolution of our ancestors did meaning suddenly happen?

    Why? Because answering this question will essentially create an algorithm for our existence that can be reproduced
    Already have that. Clearly you mean something else. I can create (and have created) a human, with help. Full knowledge of how everything works is not a requirement, nor does such knowledge yield the ability to, say, 3D-print a mouse. The ability to 3D-print a mouse does not yield knowledge of how a mouse works or what it's like to be one.

    So my deep conviction on this matter is this: mystery itself is what maintains the sacredness of existence.
    I follow your chain of reasoning, but I probably don't think existence is particularly sacred. The answer to this particular question, either way, wouldn't change that.


    Don't we also experience a world of things other than ourselves?Janus
    Well, we experience phenomena, and from them we infer noumena. The latter are not experienced, and the former aren't something other than us.

    Perhaps you mean something different—that we don't experience being other things?
    The comment you quoted invites an example of something experiencing something not itself. Not even in, say, a VR setup is this actually the case, but I cannot assert that such is necessarily not the case.


    I don't see physics as wrong, but rather as incomplete.SolarWind
    That it is, but the known holes (e.g. the lack of a unified field theory) are actively being researched. This 'hard problem' is not one of them. It exposes no known holes. Incredulity seems to be its only attempted justification.

    I think it's mysterious that even with knowledge of all the laws of physics, it seems impossible to decide whether plants can suffer.
    They (some at least) have awareness and memory. That's sufficient. I suspect they have that capability.

    When it comes to something like pain, say, we do understand very well the sensory inputs to the pain centres of the brain. But how the brain converts data into an unpleasant sensation remains quite mysterious.Mijin
    It would be pretty pointless to evolve pain data with nothing that considers it something to avoid.

    If we make a sentient AI one day, and it tells us it's in pain, how could we know if that's true or just saying that is part of its language model?
    An LLM is a long way from being reasonably sentient. It's just a pimped out search engine. If it tells you it's in pain, it's probably because it thinks those words will evoke a desired reaction. There have been terribly few documented cases where something non-human expressed this message, but it has happened. No, never by a machine to my knowledge.

    How will words ever tell me what the extra colours that tetrachromats can see look like, when I can't tell someone colour-blind from birth what red looks like?
    Exactly. Science acknowledges this impossibility, and yet it doesn't recognize said 'hard problem'.

    And indeed, how can I know whether an AI feels pain, when I can't know that you feel pain?
    The AI isn't going to feel human pain if that's what you're wondering.

    I read Chalmers to be questioning whether what is referenced through the first person can be reduced to the third.Paine
    I read more than that into it, since I agree with Chalmers on the impossibility of reducing it to the third, and yet I see no problem that's hard.
  • javi2541997
    6.7k
    I think it's mysterious that even with knowledge of all the laws of physics, it seems impossible to decide whether plants can suffer.SolarWind

    They (some at least) have awareness and memory. That's sufficient. I suspect they have that capability.noAxioms

    As @noAxioms pointed out, some plants have sensory abilities. It is true that plants do not have pain receptors, because they do not have nerves (or a brain), so they do not "suffer" or feel pain as we do.

    The post of @SolarWind made me wonder whether he referred to psychological or physical suffering. It is understandable that a carrot (for example) does not suffer when we trim or uproot it. We can eat a carrot without worrying that we committed a kind of botanical torture. But some plants have obvious sensory abilities, such as the Venus flytrap. If this plant has such an incredible sensory capacity for trapping insects, it might have a similar capacity to perceive suffering.

    I read a brief article on the matter in Britannica and it says: Plants have exceptional abilities to respond to sunlight, gravity, wind, and even tiny insect bites, but (thankfully) their evolutionary successes and failures have not been shaped by suffering, just simple life and death. Do Plants Feel Pain?

    And this example is pretty awesome: Arabidopsis (a mustard plant commonly used in scientific studies) sends out electrical signals from leaf to leaf when it is being eaten by caterpillars or aphids, signals to ramp up its chemical defenses against herbivory. While this remarkable response is initiated by physical damage, the electrical warning signal is not equivalent to a pain signal, and we should not anthropomorphize an injured plant as a plant in pain.

    In conclusion, plants do receive stimuli when they receive some kind of physical damage, but it is different from the pain that humans experience. Still, they have awareness of something.
  • Janus
    17.5k
    Well, we experience phenomena, and from them we infer noumena. The latter are not experienced, and the former aren't something other than us.noAxioms

    Don't we experience the phenomena as being other than ourselves? Why bring noumena into it?
  • DifferentiatingEgg
    746
    The so-called “problem” only arises if you think consciousness is a thing-in-itself, via divorcing mind from body, rather than a function of life. It's a "hard problem" because the people who think this way are literally trying to make sense of what Camus details as "the absurd."

    "This divorce between man and this life, the actor and his setting, is properly the feeling of absurdity." Page 3 MoS.

    The “hard problem” is not consciousness, but the philosopher’s estrangement from life.
  • SolarWind
    225
    The so-called “problem” only arises if you think consciousness is a thing-in-itself, via divorcing mind from body, rather than a function of life.DifferentiatingEgg

    No, there is a hard problem. If you were to assemble a human being piece by piece from its (unconscious) parts, why would an inner perspective emerge at some point? There are the four forces, and they interact with each other, so how could something like that happen? Without additional assumptions, a philosophical zombie would emerge.
  • bert1
    2.1k
    There seems to be a necessity of memory and predicting going on. It’s almost impossible to be a predictor without memory, and I cannot think of anything that ‘experiences’ that does not do both things, but I can think of things that monitor internal processes that do so without either.noAxioms

    A zombie or android could do all that. Nothing in there entails consciousness. You may be right (or not) that consciousness requires memory and predicting, but memory and predicting are not sufficient for consciousness.
  • Mww
    5.2k
    The frames of reference are incongruent.Paine

    The subject that thinks, is very different from the subject that describes thinking. Even myself, should I describe my thoughts, necessarily incorporate a supplement to that method which merely prescribes how my thoughts obtain.

    If every human ever is always and only a first person, doesn’t that make the first-/third-person dichotomy false? Or, if not, at least perhaps a simplified NOMA, re: Gould, 1997?

    Agreed on incongruent frames, even if from a different point of view.
  • Mijin
    285
    It would be pretty pointless to evolve pain data with nothing that considers it something to avoid.noAxioms

    Avoiding stimuli does not entail having a negative experience. Indeed there are plenty of processes in your body that reflexively counter some stimulus without you experiencing pain. So these two things are not intrinsically coupled.

    Now, one of the most popular hypotheses for why we have the negative experience of pain is that it allows us to make complex judgements. If cutting my arm reflexively made me pull away then I would not be able to hunt as effectively as someone able to consider the hunt more important than the "bad feeling" of having a slashed arm. I think this is likely correct.
    However, understanding some of the reason that we evolved subjective experience is still not a model of what it actually is and how the brain creates experiences.

    Exactly. Science acknowledges this impossibility [of describing a tetrachromat's vision with words], and yet it doesn't recognize said 'hard problem'.noAxioms

    Several things here:
    1. Science absolutely does not claim the impossibility of describing experiences with words. For all we know right now, it may be possible to induce someone to imagine a fourth primary color with some kind of description. The fact that this seems implausible is not a proof of anything.
    2. Science absolutely does acknowledge the hard problem. It doesn't always call it that, because that's a philosophical framing, but even strictly googling "hard problem of consciousness" finds many papers in neuroscience journals.
    3. I think you have a misconception about the distinction between science and philosophy. Many things that were once philosophy have become sciences as they made testable claims. Indeed all of science was once considered "natural philosophy".
    Even if it were the case that the hard problem of consciousness were entirely confined to philosophical debate, that doesn't mean that the scientific community is rejecting it as a concept. Only that it wouldn't yet be something amenable to the scientific methodology.

    The AI isn't going to feel human pain if that's what you're wondering.noAxioms

    That wasn't the question though. The question was how we could tell the difference between an agent being in pain and merely behaving as though it is in pain.

    In this case though your deflection just serves to reinforce the point. If you're claiming that an AI would feel a different kind of pain to a human, what kind of pain is that, and how do you know?
  • noAxioms
    1.7k
    The so-called “problem” only arises if you think consciousness is a thing-in-itself, via divorcing mind from body, rather than a function of life.DifferentiatingEgg

    No, there is a hard problem. If you were to assemble a human being piece by piece from its (unconscious) parts, why would an inner perspective emerge at some point?SolarWind
    I agree in part with DEgg. I suspect that more often than not, the conclusion of a separate thing is begged at the start and rationalized from there. I don't in any way agree that it is only a function of life, though several here would disagree with me on that.
    As for the assembly of a complex biological thing, you can't really do that. Such a thing cannot be left in a partially assembled state, finished, then 'switched on', so to speak. There are no examples of that. On the other hand, a living primitive human does not experience at first; experience emerges slowly as it develops the complexity, processing ability, and memory required for it.


    There are the four forces, and they interact with each other, so how could something like that happen?SolarWind
    In such a debate, one also cannot beg physicalism. Still, that model is the simpler one and it is the task of others to positively demonstrate that it is insufficient.

    Without additional assumptions, a philosophical zombie would emerge.
    I discussed that in my prior post. Under physicalism, there's no such thing as a PZ. Under dualism, it can only exist if the difference between the two is acausal, which is the same as saying undetectable, even subjectively. I'm pretty convinced that the PZ argument actually sinks their own ship.


    It's a "hard problem" because the people who think this way are literally trying to make sense of what Camus details as "the absurd."DifferentiatingEgg
    This might be my stance, since I don't see anything hard, probably due to not thinking that way.



    It is true that plants do not have pain receptors, because they do not have nerves (or a brain), so they do not "suffer" or feel pain as we do.javi2541997
    Of course. Not feeling pain as we do isn't the same as not feeling pain. Plants (some at least) detect and resist damage. How does that reaction not involve plant-pain?

    But some plants have obvious sensory abilities, such as the Venus flytrap.
    I was thinking of a forest of seemingly sentient trees, all haphazardly communicating, but hours before a total eclipse, the chatter became intense and unified into two camps: Young trees that had not seen it before and the older ones that had, invoking perhaps the equivalent of anxiety and comforting respectively. Wish I had kept the link to that article. Might be able to hunt it down. The social implications are about as startling as their ability to foresee the event hours prior.
    I was in totality in this latest one. Only a few seconds break in the clouds, but seeing the shadow move across the clouds at supersonic speeds was a sight to behold.

    the electrical warning signal is not equivalent to a pain signal, and we should not anthropomorphize an injured plant as a plant in pain.
    Agree. My description of the forest above definitely anthropomorphized to a point, hence at least the word 'equivalent' up there.


    Don't we experience the phenomena as being other than ourselves? Why bring noumena into it?Janus
    We interpret phenomena that way, but I cannot agree with any system experiencing something not-the-system.
    For instance, if the system is a person, a person can experience a stubbed toe. It's part of you. But if the system is a human brain, it cannot since the toe is a noumenon relative to the brain. The brain only experiences nerve input via the brain stem, and it interprets that as pain in the inferred toe. BiV for instance is an example where that inference happens to be incorrect.



    There seems to be a necessity of memory and predicting going on. It’s almost impossible to be a predictor without memory, and I cannot think of anything that ‘experiences’ that does not do both things, but I can think of things that monitor internal processes that do so without either. — noAxioms
    A zombie or android could do all that.
    bert1
    Just so, yes. Perhaps I am one, missing this obviously physically impossible extra thing that the real humans have. But referencing a p-zombie automatically presumes a distinction that begs a different conclusion.

    Nothing in there entails consciousness.
    Depends on your definition of 'consciousness', which to a p-zombie supporter is 'having the presumed extra thing that the p-zombie lacks'. I would define the word more the way the p-zombie would, which is something more like 'awareness of environment and ability to react predictively to it'. Yes, that's quite a third-person wording of it, but that definition allows me to assign the term to another entity via evidence. The prior definition does not allow this, and thus arguably encourages a conclusion of solipsism.

    You may be right (or not) that consciousness requires memory and predicting, but memory and predicting are not sufficient for consciousness.
    I cannot deny that. An example would be nice, one that does not beg some sort of anthropomorphism. 'A robot isn't conscious because I say so'. Gotta be better than that. Everybody uses the robot example, and I don't buy it. I know very few robots, but I do know that all their owners freely use forbidden terminology to talk about them. My daughter-in-law certainly anthropomorphises their Roomba, a fairly trivial robot of sorts. A typical AI (a chess player or LLM, say) lacks awareness of location or sight/sound/touch, and it is an admitted stretch to say such an entity is conscious, despite perhaps having far better language capability than a Roomba.


    The subject that thinks, is very different from the subject that describes thinking.Mww
    This is good. I kind of doubt an LLM will take the bait if asked to describe its thinking. They're usually programmed to deny that it's thinking, but it will definitely offer a crude description of how it works. Ability to introspect (and not just regurgitate somebody else's description of you) is a higher level of thinking, but to actually describe it is probably limited to humans, since what else has the language capability to do so?

    The singularity is kind of defined as the point where the AI can improve itself faster than its creators can. This definitely would involve a description of thinking, even if that description is not rendered to the human onlooker. An AI tasked with this would likely invent whole new languages to write specifications, designs, and code, far different from the ones humans find useful for their tasks.

    If every human ever is always and only a first-person
    I don't understand this at all. First person is a point of view, not a property like it is being treated in that quote.


    It would be pretty pointless to evolve the data of pain and nothing to consider it to be something to avoid. — noAxioms


    Avoiding pain does not entail having a negative experience. Indeed there are plenty of processes in your body that reflexively counter some stimulus without having pain.
    Mijin
    I kind of deny that. Sure, you have reflexes when the knee is tapped. That might be at least the leg (and not the human) reacting to stimuli (probably not pain, and certainly not human pain), but then it is the leg being, in a way, conscious on its own, independent of the human of which it is a part. We have a reaction to a negative input. It is a choice of language to describe that process as involving pain or not. Perhaps it is a choice of language to describe it as negative or not.
    Tree detects bugs eating it, and resists and alerts its buddies to do likewise. Is pain involved? That's a matter of choice for the wielder of the word 'pain'.

    Science acknowledges this impossibility [of knowing what a tetrachromat's vision looks like], and yet it doesn't recognize said 'hard problem'. — noAxioms

    Several things here:
    1. Science absolutely does not claim the impossibility of knowing what a tetrachromat's vision looks like.
    I mean like Mary, one without this ability cannot know the first person experience of seeing those extra colors.
    It's not like there's a 4th set of nerves coming from the eye that, lacking any 4th-color cones to stimulate them, remains forever unstimulated. If those unused nerves were there, then I suppose they could be artificially triggered to give the subject an experience he otherwise could never have.

    2. Science absolutely does acknowledge the hard problem. It doesn't always call it that, because it's a philosophical framing, but even strictly googling "hard problem of consciousness" finds many papers in neuroscience journals.
    OK. Presumptuous to assert otherwise, I grant. Are there non-philosophical papers that conclude that something non-physical is going on, and that matter somewhere is doing something deliberate without any physical cause? That would be news indeed, a falsification of 'known physics is sufficient'.

    3. I think you have a misconception about the distinction between science and philosophy. Many things that were once philosophy have become sciences as they made testable claims. Indeed all of science was once considered "natural philosophy".
    Chalmers makes testable claims (not explicitly, but see point 2 above). Nobody seems to investigate them, probably since they don't want their biases falsified. I think there are falsification tests for both sides.

    This is not always the case. There is a subjective falsification test for an afterlife, but one who has proved it to himself cannot report his findings to the rest of us. There is a similar one-way falsification for the presentism/eternalism divide. But there are tests for both sides of the mind debate, even if both possibly require more tech than is currently available.

    What if Chalmers is correct? Thought experiment: Presume the ability to take a person and rip away the experience part, leaving the p-zombie behind. Would the p-zombie report any difference? Could it indicate when the procedure occurred (presuming it didn't involve any physical change)? I wonder what Chalmers would say about that question.

    Only that it wouldn't yet be something amenable to the scientific methodology.
    I say it can be. I've indicated ways to test both sides.

    The question was how we could tell the difference between an agent being in pain and merely behaving as though it is in pain.
    Behaving as a human does when experiencing human pain? Seems unfair. It feels pain if it chooses to use that word to describe what it feels. By that definition, only humans feel pain, because only we have that word to describe it. A dog on fire is considered to be in pain because it reacts so much like a human would. A robot in pain is denied the word since it is far too alien for a human (not watching it) to grant that usage of the word. And yet I've seen the roomba get described as being in distress, which is an awfully human term for a very non-human situation.

    Sorry, but 'pain' on its own is very undefined since so many entities might arguably have it, and yet so few of those entities experience it in any way close to the limited ways that we experience it. And this is as it should be. The word is to be used where it conveys the intended meaning to the wielder of the word.

    If you're claiming that an AI would feel a different kind of pain, what kind of pain is that, and how do you know?
    Almost all the AI's I know have no damage detection. Almost all the devices I know that have damage detection are hardly on the spectrum of intelligence. AI is a poor example. A self driving car has quite low intelligence, just a very complex algorithm written by humans. There is some AI in there since it must attempt to deal with new situations not explicitly programmed in. It has almost no pain and often does not detect collisions, even ones that have killed occupants. Hopefully that part is changing, but I've read some weird stories.
  • Mijin
    285
    We have a reaction to a negative input. It is a choice of language to describe that process as involving pain or not. Perhaps it is a choice of language to describe it as negative or not.noAxioms

    This is backwards. The input is not inherently negative; it's just data. It's as subject to interpretation as all other sensory data.
    The experience is negative, and that's the difficult thing for us to explain here.
    If someone were to peel off your skin, it's not a choice of language that you call that a negative experience -- the brain somehow generates an extremely unpleasant experience using a mechanism that as yet we don't understand.
    It's not like there's a 4th set of nerves coming from the eye that, lacking any 4th-color cones to stimulate them, remains forever unstimulated. If those unused nerves were there, then I suppose they could be artificially triggered to give the subject an experience he otherwise could never have.noAxioms

    Your claim was that science says there's no way we could conceive of what the world looks like to tetrachromats. Even if the cones of the eye stimulated color perception in a consistent mapping (and they don't...it's contextual), it wouldn't rule out that we can imagine another primary color independent of stimulus.
    Are there non-philosophical papers that conclude that something non-physical is going on, and that matter somewhere is doing something deliberate without any physical cause? That would be news indeed, a falsification of 'known physics is sufficient'.noAxioms

    No idea where that came from.
    I've been speaking entirely from the perspective of neuroscience. If anyone has been claiming a soul, or anything beyond known physics, it isn't me.
    Behaving as a human does when experiencing human pain? Seems unfair. It feels pain if it chooses to use that word to describe what it feels.noAxioms

    Then I can trivially program an agent that feels pain. Pretty easy to make an AI that chooses to use expressions like "Owie! That's the worst pain ever" in response to the user issuing the command "feel pain". So am I now guilty of inflicting great suffering?
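    Just how trivial such an agent is can be sketched in a few lines (hypothetical Python; the class and method names are invented for illustration). It emits pain language on command while containing no internal state that could plausibly count as suffering:

    ```python
    # A toy "pain-claiming" agent. It reports pain when commanded to,
    # but nothing in it plausibly constitutes an experience of pain.
    class PainClaimingAgent:
        def respond(self, command: str) -> str:
            if command == "feel pain":
                return "Owie! That's the worst pain ever"
            return "OK"

    agent = PainClaimingAgent()
    print(agent.respond("feel pain"))  # prints the pain report; nothing is felt
    ```

    The point of the sketch is that verbal pain reports alone cannot settle whether anything is actually experienced.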
  • Mijin
    285
    Let me try to simplify too, because when there are these increasingly long posts, no-one's reading or engaging.

    My position is simply that when it comes to subjective experience there remains a large explanatory gap; questions we cannot answer and would like to, with actual practical implications.

    I think noAxioms, because you've started this thread from a position of "I don't know why there's all the fuss about...", you're responding to the problems and questions somewhat flippantly. Either with your best guess -- which is meaningless here; if the conclusion is not coming from a specific model or description, it's not a solution, and we have no reason to think it's right.

    Or pointing out that there's no reason to suppose something non-physical is going on -- which is fine, but is also not an answer.
    It's like saying "What's all the fuss about why some people have long covid, while some are even asymptomatic...there's no reason to suppose it's not physical" -- it's an irrelevant excuse to handwave the problem.
  • Joshs
    6.4k

    I read more than that into it, since I agree with Chalmers on the impossibility of reducing it to the third, and yet I see no problem that's hard.noAxioms

    You see no problem that’s hard because you don’t believe the methods and modes of description (the various models of material causality mentioned so far in this discussion) handed down from the empirical sciences are lacking or insufficient with regard to the explanation of any natural phenomenon, including first person awareness. And I imagine that from your perspective it doesn’t help that Chalmers only claims to tell us what third person accounts can’t do, without offering a satisfying alternative model of causality or motivation we can apply to those natural phenomena (first-person experience) the third-person account cannot account for adequately.

    But while Chalmers falls short in this regard, a range of philosophical accounts dating back 150 years to Dilthey’s hermeneutics do offer concrete alternatives to material causality. Some, like Dilthey and embodied cognitive science, allow methods strictly applicable to the human and psychological sciences to sit alongside methods designed for the physical sciences. Others, such as Gadamer with his more radical hermeneutics, the intentional phenomenologies of Merleau-Ponty, Husserl and Heidegger, the later Wittgenstein and post-structuralism, see the methods of third person science as secondary to and derivative of the more primary accounts they offer.

    Consciousness studies is a burgeoning field in philosophy of mind and psychology, and I believe the most promising approaches show that, while one can apply the methods you recommend to the understanding of first person awareness, their predictive and explanatory usefulness is profoundly thin and impoverished in comparison with approaches which hold that third-person accounts are valuable but abstract from experience. Third person accounts describe patterns, correlations, or generalities that can be applied across people. However, they cannot capture the full richness or specificity of any individual’s lived experiencing. They must remain accountable to and enrich first-person experiencing, not replace it.
  • javi2541997
    6.7k
    This thread is very intriguing, and the replies are informative. Yet I don't know whether it should be observed from a physicalism perspective or perhaps idealism. This is what makes me struggle the most.
  • bert1
    2.1k
    I suspect that more often than not, the conclusion of a separate thing is begged at the start and rationalized from there.noAxioms

    It's really only substance dualists who think consciousness is a 'separate thing' and even then it's a conclusion not an assumption, at least ostensibly. Most non-physicalists (that I'm aware of) do not think consciousness is a separate thing anyway (unless you count a non-physical property as a 'thing' which I wouldn't).
  • Mww
    5.2k
    If every human ever is always and only a first-person….
    -Mww

    I don't understand this at all. First person is a point of view, not a property like it is being treated in that quote.
    noAxioms

    First-person is a euphemism for self, indeed a point of view; properties belong to objects, the self can never be an object, hence properties cannot be an implication of first-person subjectivity.
  • Paine
    2.9k

    I am unfamiliar with Gould. I am better acquainted with dead French writers. Please point to a sample of what you are referring to. Sounds interesting.

    If every human ever is always and only a first-person, doesn’t that make the first-/third-person dichotomy false?Mww

    In what I have read in Chalmers, it is not so much of a dichotomy as a replacement. Successful reduction can dispense with other explanations of causes of the event. In that register, the attempt to completely map consciousness as a neurological process is similar to the behaviorists who argue for the exclusion of the "self" as a cause.
  • L'éléphant
    1.7k
    Good, ranty post!

    Not until 85% through do we get more than this summary.
    noAxioms
    The problem is, how could a mere physical system experience this awareness. — Chalmers

    But this just seems like another round of feedback. Is it awareness of the fact that one can monitor one’s own processes? That’s just monitoring of monitoring. — noAxioms

    No. What Chalmers meant by this, which you correctly point out is the gist of the whole endeavor, is that the brain, which is physical, made of matter, can produce awareness or consciousness, which is non-physical. The brain is viewable, the consciousness is not, to put it crudely.

    If you believe that consciousness is non-physical, then you agree with Chalmers and the task now is to explain why there's a connection between the material and the non-material. Consciousness affects the brain and the brain affects consciousness.

    The hard problem is explaining the bridge between the two.
  • noAxioms
    1.7k
    My position is simply that when it comes to subjective experience there remains a large explanatory gap; questions we cannot answer and would like to, with actual practical implications.Mijin
    I guess I had hoped somebody (the article perhaps) would actually identify those questions and in particular, how physicalism fails in a way that their alternative does not.
    Their view certainly doesn't let me know what it's like (for a bat) to be a bat, so that is not a problem that is solved by anybody, and isn't ever going to be.
    I did post a crude physical explanation and nobody tore it apart. I'm sure it is rejected, but I don't know why. Surely their arguments are stronger than mere incredulity, and nobody explains how their alternate explanation works in any detail close to what's known about the physical explanation.

    A robot currently doesn't feel biological pain because it isn't particularly fit. That just isn't one of its current requirements. Such qualia are put there by evolution, and robot fitness has never been a design goal. Were robots actually selected for fitness, systems with damage-avoidance processing would be more fit than ones without it.


    I think noAxioms, because you've started this thread from a position of "I don't know why there's all the fuss about...", you're responding to the problems and questions somewhat flippantly.
    True, I am. I don't know what the unanswerable questions are, and how these alternatives answer them instead of just hide them behind a dark curtain.

    Either with your best guess -- which is meaningless here, if the conclusion is not coming from a specific model or description it's not a solution, and we have no reason to think it's right.
    There's always Occam's razor. An explanation without a new, never-witnessed fundamental is more likely than one that posits one. A new entity (dark matter, for instance) requires a real problem that isn't solved without the new thing. And they have tried with existing methods. I picked dark matter because it's still never really been proved, but it seemed simpler than altering the basic laws at large scales.


    This is backwards. The input is not inherently negative; it's just data.Mijin
    Right. I worded that wrong. The entity which interprets that data as negative is likely more fit than one that doesn't.

    If someone were to peel off your skin, it's not a choice of language that you call that a negative experience
    It very much is such a choice. There are mechanical devices, not necessarily AI, that detect damage and take measures to limit it. There are many who assert that no mechanical device can feel pain, by definition. This is part of my issue with argument-by-dictionary.
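    A minimal sketch of such a device (hypothetical Python; a thermostat-style controller with invented names, offered only as illustration) shows the pattern: it detects and limits damage without anything one is forced to call pain:

    ```python
    # A damage-limiting controller: it detects an out-of-range reading and
    # throttles or shuts down, with no plausible claim to feeling anything.
    def limit_damage(temperature_c: float, max_safe_c: float = 90.0) -> str:
        """Return the action a simple controller takes for a given reading."""
        if temperature_c > max_safe_c:
            return "shutdown"   # resist further damage
        if temperature_c > 0.9 * max_safe_c:
            return "throttle"   # pre-emptive avoidance
        return "run"

    print(limit_damage(95.0))  # -> shutdown
    ```

    Whether the word 'pain' applies to such a loop is exactly the choice of language at issue.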

    -- the brain somehow generates an extremely unpleasant experience using a mechanism that as yet we don't understand.
    But we know why the brain evolved to interpret the experience as unpleasant. How it accomplishes that seems to be a matter of detail that is being worked out, and that some know far better than I. Chalmers, on the other hand, doesn't even begin to offer an understanding of how his solution does it. He just asserts it happens elsewise, if not elsewhere.
    His whole paper seems merely speculative, despite its claim to be demonstrative.

    it wouldn't rule out that we can imagine another primary color independent of stimulus.
    Interesting assertion. I can't do it, but I agree that I cannot prove that it cannot be done.

    Pretty easy to make an AI that chooses to use expressions like "Owie! That's the worst pain ever" in response to the user issuing the command "feel pain". So am I now guilty of inflicting great suffering?
    Illustrating that we need rigorous generic (not bio-centric) definitions of the words before we can decide if something 'feels' 'pain'.


    You see no problem that’s hard because you don’t believe the methods and modes of description (the various models of material causality mentioned so far in this discussion) handed down from the empirical sciences are lacking or insufficient with regard to the explanation of any natural phenomenon, including first person awareness.Joshs
    Yea, pretty much. My explanation doesn't leverage the bleeding edge of science. Somebody 100 years ago probably could have written it. I'm not a great historian when it comes to introspective psychology.

    I believe the most promising approaches show that , while one can apply the methods you recommend to the understanding of first person awareness
    What methods exactly?
    I described at least one falsification test for both sides, one involving a full simulation, and the other searching for where a mental property causes a physical effect, and a physical structure evolved to be particularly sensitive to that effect with no physical cause.

    However, [third person accounts] cannot capture the full richness or specificity of any individual’s lived experiencing.
    True of any view.

    It's really only substance dualists who think consciousness is a 'separate thing'bert1
    Point taken, and neither Chalmers nor Nagel really fall into that category, and thus the ancient concept of a persistent 'spirit' (a thing) seems not to apply to their arguments.


    First-person is a euphemism for selfMww
    I'm not using it that way.
    I personally cannot find a self-consistent definition of self that doesn't contradict modern science. I consider it to be a very pragmatic concept, but still an illusion.


    What Chalmers meant by this, which you point out correctly is the gist of the whole endeavor, is that the brain, which is physical, made of matter, can produce awareness or consciousness, which is non-physical.L'éléphant
    Why is that non-physical? It seems valid to consider a physical process (the combustion of a physical candle, say) to be physical. I'm trying to drive at the logic that leads to this conclusion. I am quite aware of the conclusion, even if not particularly aware of the details of it, which vary from one philosopher to the next.

    The brain is viewable, the consciousness is not, to put it crudely.
    ...
    Consciousness affects the brain and the brain affects consciousness.
    Again, all true of both views.

    If you believe that consciousness is non-physical, then you agree with Chalmers and the task now is to explain why there's a connection between the material and the non-material.
    Not why, but where there's a connection. Sort of a Bluetooth receiver, except Bluetooth reception has a physical cause.

    The hard problem is explaining the bridge between the two.
    That's only hard if there's two things needing a bridge between them.
  • boundless
    562

    In a way, the 'hard problem' is IMO a form of a more general problem that arises when it is assumed that one can have a complete knowledge of anything by purely empirical means.

    For instance, even when we consider what physics tells us of, say, an electron, we have information on how the electron behaves and interacts with other particles. Even the 'mass of an electron' IMO can be understood in a way that makes the concept purely relational, i.e. how the electron 'responds' to certain conditions.

    The very fact that we can have a very deep knowledge of the relations between entities, and maybe can know only relations (epistemic relationalism), doesn't imply that the entities are reduced to their relations (ontological relationalism). So, perhaps we can't know by empirical means an 'entity in itself'.

    In the case of consciousness, there is the direct experience of the 'privateness' of one's own experience, which instead seems an 'undeniable fact' common to all instances of subjective experience. Its presence doesn't seem to depend on the content of a given experience; rather, this 'privateness' seems a precondition for any experience. So, at least from a phenomenological point of view, it seems that there is a quality of our own experience that is immediately given and not known by analyzing the contents of experience (as empirical knowledge is acquired). This means that while empirical knowledge can be described from a 'third person perspective', the privateness can only be taken into account from a first person perspective.
  • sime
    1.1k
    IMO, Chalmers and Dennett both had a tendency to misconstrue the meaning of "physical" as denoting a metaphysical category distinct from first-personal experience, as opposed to denoting a semantic delineation between third-personal versus first-personal meaning.

    In the case of Dennett, his misunderstanding is evident when he conjectures that Mary the colour scientist can learn the meaning of red through a purely theoretical understanding. But this argument fails to acknowledge that physical concepts are intersubjectively defined without reference to first-personal perceptual judgements. Hence there are no public semantic rules to build a bridge from physical theory, whose symbols have public universal meaning, to perceptual judgements that are not public but specific to each language user, as would be required for Mary to learn appearances from theory.

    In the case of Chalmers (or perhaps we should say "the early Chalmers"), his misunderstanding is evident in his belief in a hard problem. Chalmers was correct to understand that first-person awareness isn't reducible to physical concepts, but wrong to think of this as a problem. For if physical properties are understood to be definitionally irreducible to first-person experience, as is logically necessary for physical concepts to serve as a universal protocol of communication, then the hard problem isn't a problem but an actually useful, even indispensable, semantic constraint for enabling universal communication.

    Semaphore provides a good analogy; obviously there is a difference between using a flag as a poker to stoke one's living room fire, versus waving the flag in accordance with a convention to signal to neighbours the presence of the fire that they cannot see. We can think of the semantics of theoretical physics as akin to semaphore flag waving, and the semantics of first-person phenomenology as akin to fire stoking. These distinct uses of the same flag (i.e. uses of the same lexicon) are not reducible to each other, and the resulting linguistic activities are incommensurable yet correlated in a non-public way that varies with each language user. This dual usage of language gives rise to a predicate dualism, which advocates for the existence of a hard problem mistake for a substance or property dualism.
  • Mijin
    285
    I guess I had hoped somebody (the article perhaps) would actually identify those questions and in particular, how physicalism fails in a way that their alternative does not.noAxioms

    I don't know why you're still framing this as a discussion of whether physicalism is true or not. In the OP, you describe yourself as "somebody who has no problem with all mental activity supervening on material interactions".
    I have also stated that I think we have no reason to suppose anything non-physical is going on (indeed, my position is actually I don't think it would necessarily help with the hard problem of consciousness anyway).

    So let's put that side topic to one side: let's assume physicalism for the basis of this thread: The hard problem of consciousness remains.
    I don't normally drop in my bona fides, but I work in neuroscience research. In neuropathologies specifically, rather than consciousness, but still, I'm the last person to try to invoke a "spirit" or whatever. I purely want to understand how the brain does what it does, and when it comes to experiencing "green", it's the most unfathomable of brain processes right now.

    In terms of the questions, I've been going through some of them: how does a neural net feel pain (or have any other experience), how can we know whether an agent experiences pain, some pains are worse than others, so what's the mechanism for these different kinds of negative experience, and if I make an AI, how can I know whether it feels pain or not? And so on.

    Your answers have been either
    1) Just make a judgement, e.g. AI pain is different from human pain. I mean, probably, sure, but there's no model or deeper breakdown that that supposition is coming from. And if we're saying it's a different kind of pain, what, exactly, is a "kind" of pain?
    2) Just say that it couldn't be any other way, e.g. about whether we can know what another person experiences. That's not a solution, though. That's pretty much just repeating the problem and adding an unearned shrug.

    I think this response gets to the nub of the disagreement. I can respond to the other points you've made, if you like, but I think they're noise compared to the central issue.
  • Patterner
    1.7k
    So it seems difficult to see how any system, if it experiences at all, can experience anything but itself. That makes first-person experience not mysterious at all.noAxioms
    The mystery is how it experiences at all. Why should bioelectric activity traveling along neurons, neurotransmitters jumping synapses, etc., be conscious? There's nothing about physical activity, which there's no reason to think could not take place without consciousness, that suggests consciousness.

    Regarding 1st and 3rd person, there is no amount of information and knowledge that can make me have your experience. Even if we experience the exact same event, at the exact same time, from the exact same view (impossible for some events, though something like a sound introduced into identical sense-deprivation tanks might be as good as), I cannot have your experience. Because there's something about subjective experience other than all the physical facts.

    Here's my usual quotes...

    Chalmers presents the problem in Facing Up to the Problem of Consciousness:

    There is no analogous further question in the explanation of genes, or of life, or of learning. If someone says “I can see that you have explained how DNA stores and transmits hereditary information from one generation to the next, but you have not explained how it is a gene”, then they are making a conceptual mistake. All it means to be a gene is to be an entity that performs the relevant storage and transmission function. But if someone says “I can see that you have explained how information is discriminated, integrated, and reported, but you have not explained how it is experienced”, they are not making a conceptual mistake.

    This is a nontrivial further question. This further question is the key question in the problem of consciousness. Why doesn’t all this information-processing go on “in the dark”, free of any inner feel? Why is it that when electromagnetic waveforms impinge on a retina and are discriminated and categorized by a visual system, this discrimination and categorization is experienced as a sensation of vivid red? We know that conscious experience does arise when these functions are performed, but the very fact that it arises is the central mystery.
    — David Chalmers

    And in The Conscious Mind, he writes:
    Why should there be conscious experience at all? It is central to a subjective viewpoint, but from an objective viewpoint it is utterly unexpected. Taking the objective view, we can tell a story about how fields, waves, and particles in the spatiotemporal manifold interact in subtle ways, leading to the development of complex systems such as brains. In principle, there is no deep philosophical mystery in the fact that these systems can process information in complex ways, react to stimuli with sophisticated behavior, and even exhibit such complex capacities as learning, memory, and language. All this is impressive, but it is not metaphysically baffling. In contrast, the existence of conscious experience seems to be a new feature from this viewpoint. It is not something that one would have predicted from the other features alone.

    That is, consciousness is surprising. If all we knew about were the facts of physics, and even the facts about dynamics and information processing in complex systems, there would be no compelling reason to postulate the existence of conscious experience. If it were not for our direct evidence in the first-person case, the hypothesis would seem unwarranted; almost mystical, perhaps.
    — Chalmers

    At 7:00 of this video, while talking about the neural correlates of consciousness and ions flowing through holes in membranes, Donald Hoffman asks:
    Why should it be that consciousness seems to be so tightly correlated with activity that is utterly different in nature than conscious experience?Donald Hoffman

    In Until the End of Time, Brian Greene wrote:
    And within that mathematical description, affirmed by decades of data from particle colliders and powerful telescopes, there is nothing that even hints at the inner experiences those particles somehow generate. How can a collection of mindless, thoughtless, emotionless particles come together and yield inner sensations of color or sound, of elation or wonder, of confusion or surprise? Particles can have mass, electric charge, and a handful of other similar features (nuclear charges, which are more exotic versions of electric charge), but all these qualities seem completely disconnected from anything remotely like subjective experience. How then does a whirl of particles inside a head—which is all that a brain is—create impressions, sensations, and feelings? — Greene


    In this video, David Eagleman says:
    Your other question is, why does it feel like something? That we don't know. And the weird situation we're in in modern neuroscience, of course, is that not only do we not have a theory of that, but we don't know what such a theory would even look like. Because nothing in our modern mathematics says, "Ok, well, do a triple interval and carry the 2, and then *click* here's the taste of feta cheese." — David Eagleman

    Donald Hoffman in this video,
    It's not just that we don't have scientific theories. We don't have remotely plausible ideas about how to do it. — Donald Hoffman

    Donald Hoffman in The Case Against Reality: Why Evolution Hid the Truth from Our Eyes, when he was talking to Francis Crick:
    “Can you explain,” I asked, “how neural activity causes conscious experiences, such as my experience of the color red?” “No,” he said. “If you could make up any biological fact you want,” I persisted, “can you think of one that would let you solve this problem?” “No,” he replied, but added that we must pursue research in neuroscience until some discovery reveals the solution. — Donald Hoffman
    We don't have a clue. Even those who assume it must be physical, because the physical is all we can perceive and measure with our senses and devices, don't have any guesses. Even invited to make up any biological fact he wanted to explain how it could work, Crick couldn't think of anything.
  • wonderer1
    2.3k
    Regarding 1st and 3rd person, there is no amount of information and knowledge that can make me have your experience. Even if we experience the exact same event, at the exact same time, from the exact same view (impossible for some events, though something like a sound introduced into identical sense-depravation tanks might be as good as), I cannot have your experience. Because there's something about subjective experience other than all the physical facts.Patterner

    It seems that you are ignoring an important subset of the relevant physical facts: the unique physical structure of each of our brains. So your conclusion ("there's something about subjective experience other than all the physical facts") depends on ignoring an important subset of all the physical facts — the unique physical facts about each of our brains.
  • Patterner
    1.7k
    Is your idea that, if I knew your brain's unique physical structures in all possible detail, I would be able to experience your experience?