The reductionist will reply in character, and the two will go on, feeling that they are talking past each other.

Quite right. This topic acknowledges that talking past each other, and attempts to gather responses that resolve the disconnect.
The first person is, at least to many of us, still a huge mystery. The famous "Mind-Body Problem," in these enlightened materialist days, reduces to nothing but the question "What is the first person, and how is it possible?". There are many aspects to the first-person mystery. The first-person view of the mental encompasses phenomena which seem to resist any explanation from the third person.

"To many of us", sure. It's made into a mystery by presuming unnecessary things (that mind and body are separate things). To me it's easy: how could a thing experience anything else besides itself? Interesting to attempt to answer that question, but it doesn't work well for something like a biological creature.
I originally intended to use the word "consciousness" to represent the mysteries of the first-person, but this had two problems: the word has been often used to denote third-person viewable phenomena (the notion of a system which gets feedback from its own processing, to name just one aspect)

Here he kind of quotes a crude third-person 'explanation' of consciousness right after denying that the third person can explain it.
The problem is, how could a mere physical system experience this awareness. — Chalmers

But this just seems like another round of feedback. Is it awareness of the fact that one can monitor one's own processes? That's just monitoring of monitoring. There's a potential infinite regress to that line of thinking. So the key word here is perhaps the switching of 'awareness' to 'experience', but then why the level of indirection?
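(As an aside, the regress is easy to make concrete. A toy sketch in Python, purely illustrative, with hypothetical names: monitoring stacked on monitoring, where each added layer is just more third-person feedback and nothing new appears at any level.)

```python
# Toy illustration of the "monitoring of monitoring" regress:
# each layer merely reports on the layer below it.

def process():
    return 42  # some base-level activity

def monitor(fn, label):
    """Wrap a callable so its output is also observed and reported."""
    def monitored():
        result = fn()
        print(f"{label}: observed output {result!r}")
        return result
    return monitored

level1 = monitor(process, "monitor-1 (watches the process)")
level2 = monitor(level1, "monitor-2 (watches monitor-1)")
level3 = monitor(level2, "monitor-3 (watches monitor-2)")
level3()  # the stack can grow indefinitely; no level adds 'experience'
```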
We should never forget that the mind is caused by a brain

But not necessarily. It is unclear if Chalmers is implying this necessity, as do many arguing along similar lines.
Although we do not know how, a first-person is emergent from a third-person-understandable substrate.

He asserts this third-person substrate to be 'understandable'. Perhaps so, but doubtful. No full understanding of human brain function exists, nor is one likely ever to exist, even if, say, a complete simulation of a human is achieved. Of course such a simulation, while not providing that full understanding, would at least falsify any dualistic model, at least to the person simulated, no matter his prior views.
Many commentators, particularly those in the third-person camp, give the illusion of reducing first-person mysteries by appropriating the usual first-person words to refer to the third-person phenomena to which they correspond.

I'm actually confused about his references to third-person phenomena. Phenomena seem intrinsically to be first-person. Sure, one can discuss a particular phenomenon (say the experience of red) in the third person, but that discussion is itself not the phenomenon of redness. So I guess Chalmers means 'references to' some phenomenon when calling it third-person. Not sure.
The Mystery of the First-Person

Perhaps I am commenting on the wrong paper. Perhaps my OP should be quoting Nagel, as I implied with the 'like to be a bat' mention. I'm certainly not getting anywhere with this article, which I found by simply googling something like 'first vs third person mind'.
As I have said, it is difficult to talk about the first-person without descending into vagueness. But what can be done, if it is done carefully, is to point out the mysteries, and ask how a third-person, physical theory could ever deal with these. I do not intend to do this in this paper - I take it that this has already been done most ably, by Nagel and others, and that reductionists have never given an adequate response.
One can ask: how could a third-person theory begin to explain the sensation of seeing the colour red?

That sensation cannot be described, as illustrated by Mary's room. This is not controversial. Its explanation seems trivial: red light triggers signals from nerves that otherwise are not triggered, resulting in internal processing that manifests as that sensation. That's very third-person, but it's an explanation, no? Certainly no worse an explanation than the one Chalmers might supply, which simply replaces a partially black box (how that processing fully works) with a much blacker box of which nothing is known.
Could a theory be given which enabled a being, as intelligent as ourselves but without the capacity for sight (or even a visual cortex), to truly understand the subjective experience of the colour red?

Of course not. The physical view does not claim otherwise. A thing cannot know the experience of being something it is not and never was. One can guess at the experience of another human or mammal, since it is likely to be somewhat similar, but to conclude from that that nothing sufficiently different (non-biological, say) can have experience at all is a complete non-sequitur.
When we think, our thoughts have content. How is it possible that, in some absolute sense, patterns of neuronal activity in a small biological system support meaning?

Meaning is relative: meaningful only to that which can interpret however the meaning is encoded. I have no idea why he thinks thoughts need to be meaningful in any absolute sense.
I read Chalmers to be questioning whether what is referenced through the first person can be reduced to the third. The issue concerns what is reduction as much and maybe more than any particular model of consciousness.
Neither side of the divide is presented as a given. The frames of reference are incongruent — Paine
What I will not accept is a definition-based argument along the lines of "The word 'experience' is by definition something only a biological entity has..." — noAxioms

One great example of this seems to be the philosophical zombie (p-zombie or PZ) argument. As usually presented, the only difference between a human and a p-zombie is that reserved list of words/phrases that apply only to the one. It's a pure difference of description, no actual difference between the two. So the PZ has no inner experience only because 'inner experience' is reserved for the preferred things and cannot by definition be used for the unpreferred thing, despite the latter being identical in all ways but that.
I look at this problem from a slightly different angle: Chalmers calls the problem:
There are so-called soft problems of consciousness—they are also complex, but technically solvable. Examples:
How does the brain process visual information?
How does a person concentrate attention?
How does the brain make decisions? — Astorre

Interesting that decision making is part of that. If decisions are made by physical processes, then many argue that moral responsibility is absent. That's nonsense, since the physical person is still making the decisions and is thus held responsible. It is not physics compelling a different decision than what the person willed, unless 'the person' is an epiphenomenal immaterial mind that would have willed differently, sort of like a cinema crowd shouting at the protagonist not to open the door with the monster behind it.
But the hard problem of consciousness is:
Why do these processes have an internal sensation at all?

How could they not? The sensory input is there, as is the memory of prior inputs, and the processing of all that. Seems like enough to me.
Why doesn't the brain simply function like a computer, but is accompanied by conscious experience?

It does function somewhat like a computer, and it's begging the conclusion to assert that a computer fundamentally lacks anything. Sure, it's different. There are no chemicals to detect, the sensory input is typically vastly different, and a computer is purposefully made to serve the needs of its creator rather than evolved into a state driven by fitness. That will change if they ever become responsible for their own fate.
We know what it's like to see red

No, we know what it's like for us (or maybe just you) to see red. That's not necessarily anything like what it's like for something else to see red.

but we can't explain why the brain accompanies this perception with subjective experience.

Neither can Chalmers explain why the brain, or something else, does this. It does not follow that the brain is not what's doing it in our case.
Chalmers asks a question in the spirit of postpositivism: Any scientific theory is not necessarily true, but it satisfies our need to describe phenomena. He suggests rethinking the question itself. However, he hopes to ultimately find the truth (in a very positivist way). He still thinks in terms of "problem → theory → solution." That is, he believes in the attainability of truth, even if only very distantly.

He believes in a falsification test then, even if none is yet identified. I identified one in the OP, currently outside our capability, but not for long if technology doesn't collapse first.
As for me, I would say this: if the truth of this question is unraveled, human existence will lose all meaning (perhaps being replaced by something or someone new).

That depends on which truth is found. Perhaps not. I don't see either stance giving objective meaning to humans, and I don't see either stance taking away subjective meaning from humans.
Why? Because answering this question will essentially create an algorithm for our existence that can be reproduced

Already have that. Clearly you mean something else. I can (and have) created a human (with help). Full knowledge of how everything works is not a requirement, nor does such knowledge yield the ability to, say, 3D-print a mouse. The ability to 3D-print a mouse does not yield knowledge of how a mouse works or what it's like to be one.
So my deep conviction on this matter is this: mystery itself is what maintains the sacredness of existence.

I follow your chain of reasoning, but I probably don't think existence is particularly sacred. The answer to this particular question, either way, wouldn't change that.
Don't we also experience a world of things other than ourselves? — Janus

Well, we experience phenomena, and from that we infer noumena. The latter is not experienced, and the former isn't something not-us.
Perhaps you mean something different—that we don't experience being other things?

The comment you quoted invites an example of something experiencing something not itself. Not even in, say, a VR setup is this actually the case, but I cannot assert that such is necessarily not the case.
I don't see physics as wrong, but rather as incomplete. — SolarWind

That it is, but known holes (e.g. a unified field theory) are actively being researched. This 'hard problem' is not one of them; it exposes no known holes. Incredulity seems its only attempted justification.
I think it's mysterious that even with knowledge of all the laws of physics, it seems impossible to decide whether plants can suffer. — SolarWind

They (some at least) have awareness and memory. That's sufficient. I suspect they have that capability.
When it comes to something like pain, say, we do understand very well the sensory inputs to the pain centres of the brain. But how the brain converts data into an unpleasant sensation remains quite mysterious. — Mijin

It would be pretty pointless to evolve the data of pain and nothing to consider it to be something to avoid.
If we make a sentient AI one day, and it tells us it's in pain, how could we know if that's true or just saying that is part of its language model?

An LLM is a long way from being reasonably sentient. It's just a pimped-out search engine. If it tells you it's in pain, it's probably because it thinks those words will evoke a desired reaction. There have been terribly few documented cases where something non-human expressed this message, but it has happened. No, never by a machine to my knowledge.
How will words ever tell me what the extra colours that tetrachromats can see look like, when I can't tell a person color-blind from birth what red looks like?

Exactly. Science acknowledges this impossibility, and yet it doesn't recognize said 'hard problem'.
And indeed, how can I know whether an AI feels pain, when I can't know that you feel pain?

The AI isn't going to feel human pain, if that's what you're wondering.
I read Chalmers to be questioning whether what is referenced through the first person can be reduced to the third. — Paine

I read more than that into it, since I agree with Chalmers on the impossibility of reducing it to the third, and yet I see no problem that's hard.
The so-called “problem” only arises if you think consciousness is a thing-in-itself, via divorcing mind from body, rather than a function of life. — DifferentiatingEgg
No, there is a hard problem. If you were to assemble a human being piece by piece from its (unconscious) parts, why would an inner perspective emerge at some point? — SolarWind

I agree in part with DEgg. I suspect that more often than not, the conclusion of a separate thing is begged at the start and rationalized from there. I don't in any way agree that it is only a function of life, but several would disagree with that.
There are the four forces, and they interact with each other, so how could something like that happen? — SolarWind

In such a debate, one also cannot beg physicalism. Still, that model is the simpler one, and it is the task of others to positively demonstrate that it is insufficient.
Without additional assumptions, a philosophical zombie would emerge.

I discussed that in my prior post. Under physicalism, there's no such thing as a PZ. Under dualism, it can exist only if the difference between the two is acausal, which is the same as saying undetectable, even subjectively. I'm pretty convinced that the PZ argument actually sinks its proponents' own ship.
It's a "hard problem" because the people who think this way are literally trying to make sense of what Camus details as "the absurd." — DifferentiatingEgg

This might be my stance, since I don't see anything hard, probably due to not thinking that way.
It is true that plants do not have pain receptors, because they do not have nerves (or a brain), so they do not "suffer" or feel pain as we do. — javi2541997

Of course. Not feeling pain as we do isn't the same as not feeling pain. Plants (some at least) detect and resist damage. How does that reaction not involve plant-pain?
But some plants have obvious sensory abilities, such as the Venus flytrap.

I was thinking of a forest of seemingly sentient trees, all haphazardly communicating, except that hours before a total eclipse the chatter became intense and unified into two camps: young trees that had not seen one before, and the older ones that had, invoking perhaps the equivalent of anxiety and comforting respectively. Wish I had kept the link to that article; might be able to hunt it down. The social implications are about as startling as the trees' ability to foresee the event hours prior.
the electrical warning signal is not equivalent to a pain signal, and we should not anthropomorphize an injured plant as a plant in pain.

Agree. My description of the forest above definitely anthropomorphized to a point, hence at least the word 'equivalent' up there.
Don't we experience the phenomena as being other than ourselves? Why bring noumena into it? — Janus

We interpret phenomena that way, but I cannot agree with any system experiencing something not-the-system.
There seems to be a necessity of memory and predicting going on. It's almost impossible to be a predictor without memory, and I cannot think of anything that 'experiences' that does not do both things, but I can think of things that monitor internal processes that do so without either. — noAxioms

A zombie or android could do all that. — bert1

Just so, yes. Perhaps I am one, missing this obviously physically impossible extra thing that the real humans have. But referencing a p-zombie automatically presumes a distinction that begs a different conclusion.
Nothing in there entails consciousness.

Depends on your definition of 'consciousness', which to a p-zombie supporter is 'having the presumed extra thing that the p-zombie lacks'. I would define the word more the way the p-zombie would, which is something more like 'awareness of environment and the ability to react predictively to it'. Yes, that's quite a third-person wording of it, but that definition allows me to assign the term to another entity via evidence. The prior definition does not allow this, and thus arguably encourages a conclusion of solipsism.
You may be right (or not) that consciousness requires memory and predicting, but memory and predicting are not sufficient for consciousness.

I cannot deny that. An example would be nice, one that does not beg some sort of anthropomorphism. 'A robot isn't conscious because I say so' won't do; it has to be better than that. Everybody uses the robot example, and I don't buy it. I know very few robots, but I do know that all their owners freely use the forbidden terminology to talk about them. My daughter-in-law certainly anthropomorphises their Roomba, a fairly trivial robot of sorts. A typical AI (a chess player or LLM, say) lacks awareness of location or sight/sound/touch, and it is an admitted stretch to say such an entity is conscious, despite perhaps having far better language capability than a Roomba.
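(To make the 'not sufficient' point concrete, a toy Python sketch, hypothetical names, illustrative only: a system that plainly has memory and reacts predictively to its input, the bare third-person definition above, yet tempts nobody toward the word 'conscious'.)

```python
# A minimal agent with memory and prediction, per the third-person
# definition discussed above. Nothing more is claimed for it.

class PredictiveAgent:
    def __init__(self):
        self.memory = []  # remembered past observations

    def observe(self, value):
        """Store a new observation from the 'environment'."""
        self.memory.append(value)

    def predict(self):
        """Extrapolate the next observation from the last two remembered."""
        if len(self.memory) < 2:
            return self.memory[-1] if self.memory else 0
        return 2 * self.memory[-1] - self.memory[-2]

    def react(self, threshold=100):
        """React predictively: avoid a predicted, not-yet-felt, bad input."""
        return "avoid" if self.predict() > threshold else "proceed"

agent = PredictiveAgent()
for reading in (10, 40, 90):   # steadily rising input
    agent.observe(reading)
print(agent.predict())  # 140: extrapolates the remembered trend
print(agent.react())    # 'avoid': acts on the prediction alone
```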
The subject that thinks, is very different from the subject that describes thinking. — Mww

This is good. I kind of doubt an LLM will take the bait if asked to describe its thinking. One is usually programmed to deny that it is thinking at all, but it will definitely offer a crude description of how it works. The ability to introspect (and not just regurgitate somebody else's description of you) is a higher level of thinking, but actually describing it is probably limited to humans, since what else has the language capability to do so?
If every human ever is always and only a first-person… — Mww

I don't understand this at all. First person is a point of view, not a property, as it is being treated in that quote.
It would be pretty pointless to evolve the data of pain and nothing to consider it to be something to avoid. — noAxioms

Avoiding pain does not entail having a negative experience. Indeed there are plenty of processes in your body that reflexively counter some stimulus without having pain. — Mijin

I kind of deny that. Sure, you have reflexes when the knee is tapped. That might be at least the leg (and not the human) reacting to stimuli (probably not pain, and certainly not human pain), but then it is the leg being in a way conscious on its own, independent of the human of which it is a part. We have a reaction to a negative input. It is a choice of language to describe that process as involving pain or not. Perhaps it is a choice of language to describe it as negative or not.
Science acknowledges this impossibility [of knowing what a tetrachromat's vision looks like], and yet it doesn't recognize said 'hard problem'. — noAxioms

I mean that, like Mary, one without this ability cannot know the first-person experience of seeing those extra colors.
Several things here:
1. Science absolutely does not claim the impossibility of knowing what a tetrachromat's vision looks like.
2. Science absolutely does acknowledge the hard problem. It doesn't always call it that, because it's a philosophical framing, but even strictly googling "hard problem of consciousness" finds many papers in neuroscience journals.

OK. Presumptuous to assert otherwise, I grant. Are there non-philosophical papers that conclude that something non-physical is going on, and that matter somewhere is doing something deliberate without any physical cause? That would be news indeed, a falsification of 'known physics is sufficient'.
3. I think you have a misconception about the distinction between science and philosophy. Many things that were once philosophy have become sciences as they made testable claims. Indeed all of science was once considered "natural philosophy".

Chalmers makes testable claims (not explicitly, but see point 2 above). Nobody seems to investigate them, probably since they don't want their biases falsified. I think there are falsification tests for both sides.
Only that it wouldn't yet be something amenable to the scientific methodology.

I say it can be. I've indicated ways to test both sides.
The question was how we could tell the difference between an agent being in pain and merely behaving as though it is in pain.

Behaving as a human does when experiencing human pain? Seems unfair. It feels pain if it chooses to use that word to describe what it feels. By that definition, only humans feel pain, because only we have that word to describe it. A dog on fire is considered to be in pain because it reacts so much like a human would. A robot in pain is denied the word, since it is far too alien for a human (not watching it) to grant that usage of the word. And yet I've seen the Roomba described as being in distress, which is an awfully human term for a very non-human situation.
If you're claiming that an AI would feel a different kind of pain, what kind of pain is that, and how do you know?

Almost all the AIs I know of have no damage detection. Almost all the devices I know of that have damage detection are hardly on the spectrum of intelligence. AI is a poor example. A self-driving car has quite low intelligence, just a very complex algorithm written by humans. There is some AI in there, since it must attempt to deal with new situations not explicitly programmed in. It has almost no pain and often does not detect collisions, even ones that have killed occupants. Hopefully that part is changing, but I've read some weird stories.
It's not like there's a 4th set of nerves coming from the eye, lacking any 4th-color cones to sense, so they remain ever unstimulated. If those unused nerves were there, then I suppose they could be artificially triggered to give the subject this experience he otherwise could never have. — noAxioms
If every human ever is always and only a first-person, doesn't that make the first-/third-person dichotomy false? — Mww
Not until 85% through do we get more than this summary. — noAxioms
My position is simply that when it comes to subjective experience there remains a large explanatory gap; questions we cannot answer and would like to, with actual practical implications. — Mijin

I guess I had hoped somebody (the article perhaps) would actually identify those questions and, in particular, how physicalism fails in a way that their alternative does not.
I think noAxioms, because you've started this thread from a position of "I don't know why there's all the fuss about...", you're responding to the problems and questions somewhat flippantly.

True, I am. I don't know what the unanswerable questions are, nor how these alternatives answer them instead of just hiding them behind a dark curtain.
Either with your best guess -- which is meaningless here; if the conclusion is not coming from a specific model or description it's not a solution, and we have no reason to think it's right.

There's always Occam's razor. An explanation without a new, never-witnessed fundamental is more likely than one that posits something. A new entity (dark matter, for instance) requires a real problem that isn't solved without the new thing, and only after solutions with existing methods have been tried. I picked dark matter because it's still never really been proved, but it seemed simpler than altering the basic laws at large scales.
This is backwards. The input is not inherently negative; it's just data. — Mijin

Right. I worded that wrong. The entity which interprets that data as negative is likely more fit than one that doesn't.
If someone were to peel off your skin, it's not a choice of language that you call that a negative experience

It very much is such a choice. There are mechanical devices, not necessarily AI, that detect damage and take measures to limit it. There are many who assert that no mechanical device can feel pain, by definition. This is part of my issue with argument-by-dictionary.
-- the brain somehow generates an extremely unpleasant experience using a mechanism that as yet we don't understand.

But we know why the brain evolved to interpret the experience as unpleasant. How it accomplishes that seems to be a matter of detail that is being worked out, and that some know far better than I. Chalmers, on the other hand, doesn't even begin to offer an understanding of how his solution does it. He just asserts that it happens elsewise, if not elsewhere.
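(Concretely, the sort of non-AI damage limiting I mean above: a thermal-throttling loop, sketched in Python with made-up numbers and names. It detects a damaging condition and takes measures to limit it, and nobody is tempted to call it pain.)

```python
# Sketch of a non-AI device that detects a damage condition and acts
# to limit it, roughly what CPU thermal throttling does.

def read_temperature_c() -> float:
    return 97.0  # stand-in for a hardware sensor read (hypothetical value)

def throttle(level: float) -> None:
    print(f"reducing clock speed to {level:.0%}")

MAX_SAFE_C = 90.0

temp = read_temperature_c()
if temp > MAX_SAFE_C:
    # limit damage in proportion to how far past the safe limit we are
    overshoot = min((temp - MAX_SAFE_C) / 20.0, 1.0)
    throttle(1.0 - overshoot)  # prints "reducing clock speed to 65%"
```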
it wouldn't rule out that we can imagine another primary color independent of stimulus.

Interesting assertion. I can't do it, but I agree that I cannot prove that it cannot be done.
Pretty easy to make an AI that chooses to use expressions like "Owie! That's the worst pain ever" in response to the user issuing the command "feel pain". So am I now guilty of inflicting great suffering?

Illustrating that we need rigorous, generic (not bio-centric) definitions of the words before we can decide whether something 'feels' 'pain'.
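(The quoted scenario really is a few lines of code. A purely illustrative Python sketch, wording made up, of pain-vocabulary with no plausible inner state behind it:)

```python
# A program that emits pain-language on command, with no internal
# state that could plausibly count as suffering.

def respond(command: str) -> str:
    """Parrot pain-vocabulary when told to 'feel pain'."""
    if command.strip().lower() == "feel pain":
        return "Owie! That's the worst pain ever."
    return "All systems nominal."

print(respond("feel pain"))  # pain-talk, and nothing it is like to be this function
```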
You see no problem that's hard because you don't believe the methods and modes of description (the various models of material causality mentioned so far in this discussion) handed down from the empirical sciences are lacking or insufficient with regard to the explanation of any natural phenomenon, including first person awareness. — Joshs

Yea, pretty much. My explanation doesn't lean on the bleeding edge of science; somebody 100 years ago probably could have written it. I'm not a great historian when it comes to introspective psychology.
I believe the most promising approaches show that, while one can apply the methods you recommend to the understanding of first person awareness

What methods exactly?
However, [third person accounts] cannot capture the full richness or specificity of any individual's lived experiencing.

True of any view.
It's really only substance dualists who think consciousness is a 'separate thing' — bert1

Point taken, and neither Chalmers nor Nagel really falls into that category, and thus the ancient concept of a persistent 'spirit' (a thing) seems not to apply to their arguments.
First-person is a euphemism for self — Mww

I'm not using it that way.
What Chalmers meant by this, which you point out correctly is the gist of the whole endeavor, is that the brain, which is physical, made of matter, can produce awareness or consciousness, which is non-physical. — L'éléphant

Why is that non-physical? It seems valid to consider a physical process (the combustion of a physical candle, say) to be physical. I'm trying to get at the logic that leads to this conclusion. I am quite aware of the conclusion, even if not particularly aware of its details, which vary from one philosopher to the next.
The brain is viewable, the consciousness is not, to put it crudely.

Again, all true of both views.
...
Consciousness affects the brain and the brain affects consciousness.
If you believe that consciousness is non-physical, then you agree with Chalmers and the task now is to explain why there's a connection between the material and the non-material.

Not why, but where there's a connection. Sort of a Bluetooth receiver, except Bluetooth reception has a physical cause.
The hard problem is explaining the bridge between the two.

That's only hard if there are two things needing a bridge between them.
I guess I had hoped somebody (the article perhaps) would actually identify those questions and in particular, how physicalism fails in a way that their alternative does not. — noAxioms
So it seems difficult to see how any system, if it experiences at all, can experience anything but itself. That makes first-person experience not mysterious at all. — noAxioms

The mystery is how it experiences at all. Why should bioelectric activity traveling along neurons, neurotransmitters jumping synapses, etc., be conscious? There's nothing about physical activity, which there's no reason to think could not take place without consciousness, that suggests consciousness.
There is no analogous further question in the explanation of genes, or of life, or of learning. If someone says “I can see that you have explained how DNA stores and transmits hereditary information from one generation to the next, but you have not explained how it is a gene”, then they are making a conceptual mistake. All it means to be a gene is to be an entity that performs the relevant storage and transmission function. But if someone says “I can see that you have explained how information is discriminated, integrated, and reported, but you have not explained how it is experienced”, they are not making a conceptual mistake.
This is a nontrivial further question. This further question is the key question in the problem of consciousness. Why doesn’t all this information-processing go on “in the dark”, free of any inner feel? Why is it that when electromagnetic waveforms impinge on a retina and are discriminated and categorized by a visual system, this discrimination and categorization is experienced as a sensation of vivid red? We know that conscious experience does arise when these functions are performed, but the very fact that it arises is the central mystery. — David Chalmers
Why should there be conscious experience at all? It is central to a subjective viewpoint, but from an objective viewpoint it is utterly unexpected. Taking the objective view, we can tell a story about how fields, waves, and particles in the spatiotemporal manifold interact in subtle ways, leading to the development of complex systems such as brains. In principle, there is no deep philosophical mystery in the fact that these systems can process information in complex ways, react to stimuli with sophisticated behavior, and even exhibit such complex capacities as learning, memory, and language. All this is impressive, but it is not metaphysically baffling. In contrast, the existence of conscious experience seems to be a new feature from this viewpoint. It is not something that one would have predicted from the other features alone.
That is, consciousness is surprising. If all we knew about were the facts of physics, and even the facts about dynamics and information processing in complex systems, there would be no compelling reason to postulate the existence of conscious experience. If it were not for our direct evidence in the first-person case, the hypothesis would seem unwarranted; almost mystical, perhaps. — Chalmers
Why should it be that consciousness seems to be so tightly correlated with activity that is utterly different in nature than conscious experience? — Donald Hoffman
And within that mathematical description, affirmed by decades of data from particle colliders and powerful telescopes, there is nothing that even hints at the inner experiences those particles somehow generate. How can a collection of mindless, thoughtless, emotionless particles come together and yield inner sensations of color or sound, of elation or wonder, of confusion or surprise? Particles can have mass, electric charge, and a handful of other similar features (nuclear charges, which are more exotic versions of electric charge), but all these qualities seem completely disconnected from anything remotely like subjective experience. How then does a whirl of particles inside a head—which is all that a brain is—create impressions, sensations, and feelings? — Greene
Your other question is, why does it feel like something? That we don't know. And the weird situation we're in in modern neuroscience, of course, is that not only do we not have a theory of that, but we don't know what such a theory would even look like. Because nothing in our modern mathematics says, "Ok, well, do a triple interval and carry the 2, and then *click* here's the taste of feta cheese." — David Eagleman
It's not just that we don't have scientific theories. We don't have remotely plausible ideas about how to do it. — Donald Hoffman
"Can you explain," I asked, "how neural activity causes conscious experiences, such as my experience of the color red?" "No," he said. "If you could make up any biological fact you want," I persisted, "can you think of one that would let you solve this problem?" "No," he replied, but added that we must pursue research in neuroscience until some discovery reveals the solution. — Donald Hoffman

We don't have a clue. Even those who assume it must be physical, because physical is all we can perceive and measure with our senses and devices, don't have any guesses. Even when invited to make something up to explain how it could work, Crick couldn't think of anything.
Regarding 1st and 3rd person: there is no amount of information and knowledge that can make me have your experience. Even if we experience the exact same event, at the exact same time, from the exact same view (impossible for some events, though something like a sound introduced into identical sense-deprivation tanks might be as good as), I cannot have your experience. Because there's something about subjective experience other than all the physical facts. — Patterner