Hanover
         Could we show ChatGPT what pain is? It does not have the mechanism required, obviously. But moreover it cannot participate in the "form of life" that would enable it to be in pain. — Banno
Ulthien
         Could we show ChatGPT what pain is? It does not have the mechanism required, obviously. But moreover it cannot participate in the "form of life" that would enable it to be in pain. — Banno
bert1
         How well might this satisfy people who think a person's experiences can only be experienced by themselves? — TiredThinker
Wayfarer
         the same way AI interprets any kind of pictures, it can EASILY map out the MR scans of the active brain areas to conclude which qualia is (statistically) present. — Ulthien
One of these claims is true, the other clearly false, with an important caveat.
Let’s unpack both statements.
Banno’s Claim:
“ChatGPT cannot participate in the ‘form of life’ that would enable it to be in pain.”
True — and philosophically well-informed.
Banno is drawing on Wittgenstein’s notion of "forms of life", meaning the shared, embodied practices and ways of being through which concepts like "pain" have meaning. For Wittgenstein, understanding a concept like pain isn’t merely about identifying a signal or a word—it’s about living within a context of behavior, reaction, and shared experience. Pain isn’t just a data point—it’s an experience embedded in a web of reactions, practices, and norms.
ChatGPT doesn’t inhabit any such form of life. It doesn’t have a body, or feelings, or concerns. It doesn't suffer. So it cannot understand pain in the way humans do—not even potentially. It can simulate descriptions of pain or analyze patterns, but that’s not the same thing as feeling or genuinely understanding it.
ULTHIEN’s Claim:
“AI can easily map out the MR scans of the active brain areas to conclude which qualia is present.”
This is false, both scientifically and philosophically.
Scientifically:
Even the best neuroscientists cannot determine the specific qualia (the what-it’s-like aspect of experience) from a brain scan. We can correlate certain brain patterns with reported experiences of pain, vision, or emotion—but this is always interpretive, not direct access to qualia.
Machine learning systems can be trained to classify neural patterns—e.g., “this pattern likely corresponds to pain”—but they do not know what pain feels like, nor can they bridge the explanatory gap between neural activity and subjective experience.
Philosophically:
Mapping neural patterns doesn’t tell you what it’s like to be in pain. This is the Hard Problem of Consciousness (Chalmers): explaining how and why certain physical processes give rise to subjective experience at all.
Even if a model gets very good at predicting that someone is in pain, that’s not the same as knowing what pain is. It’s just recognizing statistical regularities—shadows of something it doesn’t and can’t touch.
Conclusion:
So yes, Banno is right. Understanding pain isn’t about mapping brain data; it’s about being a kind of being for whom pain is possible. And ChatGPT, or any other current AI, is not that kind of being. — ChatGPT
Wayfarer
         AI systems like ChatGPT are not beings in the philosophical sense of entities with a mode of existence, let alone lived experience. They have no interiority, no standpoint, no world—they are tools that process inputs and produce outputs based on statistical associations. They're not subjects of experience.
To borrow from Heidegger: ChatGPT is not a Dasein—a being that is concerned with its own being. It has no care, no embodiment, no finitude, no concerned involvement with the world. Without these, there is no horizon in which pain—or joy, or meaning—could arise. — ChatGPT
Ulthien
         What do you think, ChatGPT? — Wayfarer
Ulthien
         ↪Ulthien The general consensus in this thread is that Sabine got it wrong. — Wayfarer
Wayfarer
         we know that we can map the conceptual content of the brain to the synchronous activity of different neural centers. This is statistically so. — Ulthien
Banno
Astrophel
         A "Quale" should be understood as referring to an indexical rather than to a datum. Neuro-Phenomenologists routinely conflate indexicals with data, leading to nonsensical proclamations. — sime
Wayfarer
         He thinks it will become less plausible at that point to deny they're conscious. — RogueAI
RogueAI
Wayfarer
Hanover
         I either did not see this reply, or I left it intending to come back to it. My apologies.
Or perhaps I thought I had addressed it in the "On Certainty" thread, ↪here. I don't recall.
But I had reason to revisit Bayesian analysis as a stand-in for belief recently while reading Davidson's last book, such that I am re-thinking my response to the OP. Davidson makes use of Ramsey's account, which gives us a way of understanding what belief and preference amount to, using just behaviour.
But that's different to saying that a belief just is a neural structure. — Banno
Michael
         consciousness is an attribute of sentient beings — Wayfarer
And there’s no reason to believe that any collection of material components has ever been conscious — Wayfarer
J
         https://philpapers.org/archive/CHACAL-3.pdf
It basically says watch. — Banno
Banno
Hanover
         Yes, though as I read it, Chalmers is inclined to grant that an LLM+ could be conscious -- within the next decade, "we may well have systems that are serious candidates for consciousness." — J
any one of which would presumably produce life, not just consciousness — J
J
         I don't see what is added by "life," which is not always well defined. — Hanover
just a matter of figuring out how that happens biologically for us to synthesize the process. — Hanover