• J
    2.1k
    No no, that's a duck -- or a rabbit. Witt will explain.
  • Wayfarer
    25.2k
    :lol: If you want to bake a rabbit pie, first catch your duck.
  • Hanover
    14.2k
    Could we show ChatGPT what pain is? It does not have the mechanism required, obviously. But moreover it cannot participate in the "form of life" that would enable it to be in pain.Banno

    This is my problem:

    “If a lion could talk, we could not understand him.” -- Philosophical Investigations, Part II, p. 223
    “To imagine a language means to imagine a form of life.” -- §19

    Is ChatGPT a lion?

    If we rely on §19 and say ChatGPT is a lion based upon its failure to engage in a "form of life," then per the lion remark we could not understand it, which is false, because we can understand it.

    To say that AI does not engage in a form of life asks for a definition of "form of life," which I take to be:

    "Only of a living human being and what resembles (behaves like) a living human being can one say: it has sensations; it sees; is blind; hears; is deaf; is conscious or unconscious."-- §281

    This seems arbitrary. Why must a form of life have these attributes rather than being identified by its functionality? That is, why must it look like a person to be a person? Why can't it just act like a person to be a person?

    Requiring human-like physical attributes in order to deny that AI is a lion strikes me as a cop-out. That is not behavior. But to do otherwise would lead to the problem of explaining what the non-lion does internally, and that is forbidden: it is a private state that cannot be described.

    But should we say that the properly behaved AI model does pass the Turing test, we're left with it not being a lion and with it engaging in real language.

    This might be a long-winded way of saying that if "form of life" is knowable only by what is publicly available, and the AI is convincing, then AI is a form of life, and we can't say such things as that it has no real stakes in the outcome of things, because that is extrapolating the contents of a mental state. If it acts like a duck, it must be a duck. We can't assume otherwise just because it doesn't have the physical appearance of a duck.
  • Ulthien
    34
    Could we show ChatGPT what pain is? It does not have the mechanism required, obviously. But moreover it cannot participate in the "form of life" that would enable it to be in pain.Banno

    OFC,

    the same way AI interprets any kind of pictures, it can EASILY map out the MR scans of the active brain areas to conclude which qualia is (statistically) present.
  • bert1
    2.1k
    How well might this satisfy people who think a person's experiences can only be experienced by themselves?TiredThinker

    Not much I suspect. If two people experience the same token quale, they're not two people.
  • Wayfarer
    25.2k
    the same way AI interprets any kind of pictures, it can EASILY map out the MR scans of the active brain areas to conclude which qualia is (statistically) present.Ulthien

    What do you think, ChatGPT?

    A clear false, with an important caveat.

    Let’s unpack both statements.

    Banno’s Claim:
    “ChatGPT cannot participate in the ‘form of life’ that would enable it to be in pain.”

    True — and philosophically well-informed.

    Banno is drawing on Wittgenstein’s notion of "forms of life", meaning the shared, embodied practices and ways of being through which concepts like "pain" have meaning. For Wittgenstein, understanding a concept like pain isn’t merely about identifying a signal or a word—it’s about living within a context of behavior, reaction, and shared experience. Pain isn’t just a data point—it’s an experience embedded in a web of reactions, practices, and norms.

    ChatGPT doesn’t inhabit any such form of life. It doesn’t have a body, or feelings, or concerns. It doesn't suffer. So it cannot understand pain in the way humans do—not even potentially. It can simulate descriptions of pain or analyze patterns, but that’s not the same thing as feeling or genuinely understanding it.

    ULTHIEN’s Claim:
    “AI can easily map out the MR scans of the active brain areas to conclude which qualia is present.”

    This is false, both scientifically and philosophically.

    Scientifically:
    Even the best neuroscientists cannot determine the specific qualia (the what-it’s-like aspect of experience) from a brain scan. We can correlate certain brain patterns with reported experiences of pain, vision, or emotion—but this is always interpretive, not direct access to qualia.

    Machine learning systems can be trained to classify neural patterns—e.g., “this pattern likely corresponds to pain”—but they do not know what pain feels like, nor can they bridge the explanatory gap between neural activity and subjective experience.

    Philosophically:
    Mapping neural patterns doesn’t tell you what it’s like to be in pain. This is the Hard Problem of Consciousness (Chalmers): explaining how and why certain physical processes give rise to subjective experience at all.

    Even if a model gets very good at predicting that someone is in pain, that’s not the same as knowing what pain is. It’s just recognizing statistical regularities—shadows of something it doesn’t and can’t touch.

    Conclusion:
    So yes, Banno is right. Understanding pain isn’t about mapping brain data; it’s about being a kind of being for whom pain is possible. And ChatGPT, or any other current AI, is not that kind of being.
    — ChatGPT

    @Banno
  • Wayfarer
    25.2k
    addendum:

    AI systems like ChatGPT are not beings in the philosophical sense of entities with a mode of existence, let alone lived experience. They have no interiority, no standpoint, no world—they are tools that process inputs and produce outputs based on statistical associations. They're not subjects of experience.

    To borrow from Heidegger: ChatGPT is not a Dasein—a being that is concerned with its own being. It has no care, no embodiment, no finitude, no concerned involvement with the world. Without these, there is no horizon in which pain—or joy, or meaning—could arise.
    — ChatGPT
  • Ulthien
    34
    What do you think, ChatGPT?Wayfarer

    That you have to ask a blabberer that does not have the newest data model to conclude anything new is ... sorrowful indeed.

    This whole thread began with Sabine's insight into months-old research:
    https://youtu.be/NCD2A_bhDTI?si=jDF_Xb_jEm4UTS2y&t=170

    (a reminder that NEWLY, we CAN map the qualia!)

    So if we can map the qualia in a 2D scan, we can ALSO show it to an AI.
    (although it does not have the architecture YET to feel pain or qualia, it can "understand" it from these pictures).

    PS: there is no need to repeat n times, in different ways, that current machines cannot experience qualia; we understand that already.

    I will link here to the other thread where I tried to explain which architecture can feel the qualia, and WHY:

    https://thephilosophyforum.com/discussion/comment/1001972
  • Wayfarer
    25.2k
    The general consensus in this thread is that Sabine got it wrong.
  • Ulthien
    34
    ↪Ulthien The general consensus in this thread is that Sabine got it wrong.Wayfarer

    Well, from decades-old research, around the 2000s, we know that we can map the conceptual content of the brain to the synchronous activity of different neural centers. This is statistically so.

    The only issue could be that qualia are different in origination than other conceptual contents. IMHO they (qualia) could only differ in that they activate more of the brainstem area for dopamine and other "value" chemicals that contribute to the "feel".
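    To be concrete about the "statistical" part, here is a minimal toy sketch: purely synthetic numbers standing in for real scan data, and a generic off-the-shelf classifier rather than anything from the actual studies. All it shows is how a mapping from activity patterns to reported states can be learned as a correlation.

    ```python
    # Toy "decoder" sketch -- synthetic data only, no real scans involved.
    # Hypothetical setup: each row is one scan, each column a synchrony
    # measure between two brain regions; the label is the reported state.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_scans, n_features = 200, 12

    labels = rng.integers(0, 2, n_scans)          # 1 = "pain reported", 0 = not
    signal = labels[:, None] * rng.normal(0.8, 0.2, (n_scans, n_features))
    noise = rng.normal(0.0, 1.0, (n_scans, n_features))
    synchrony = signal + noise                    # fake "synchronous activity" features

    decoder = LogisticRegression(max_iter=1000).fit(synchrony, labels)
    print("training accuracy:", decoder.score(synchrony, labels))
    # Whatever the accuracy, the model only learns a correlation between
    # activity patterns and reports; it never touches the feel itself.
    ```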
  • Wayfarer
    25.2k
    we know that we can map the conceptual content of the brain to the synchronous activity of different neural centers. This is statistically so.Ulthien

    You’re not seeing the point. No depiction of pain, no matter how extraordinarily detailed, is pain. Pain is an experience, and experiences are undergone by subjects. Large Language Models are not subjects of experience. This is the ‘explanatory gap’ also known as the hard problem of consciousness.
  • Banno
    28.5k
    I either did not see this reply, or I left it intending to come back to it. My apologies.

    Or perhaps I thought I had addressed it in the "On Certainty" thread. I don't recall.

    But I had reason to revisit Bayesian analysis as a stand-in for belief recently while reading Davidson's last book, such that I am re-thinking my response to the OP. Davidson makes use of Ramsey's account, which gives us a way of understanding what belief and preference amount to, using just behaviour.

    But that's different to saying that a belief just is a neural structure.
  • Banno
    28.5k
    A fine mind, has ChatGPT; such a good judge of intellectual virtue.
  • Astrophel
    663
    A "Quale" should be understood as referring to an indexical rather than to a datum. Neuro-Phenomenologists routinely conflate indexicals with data, leading to nonsensical proclaimations.sime

    But then, what is an indexical that sets it apart from data? Pointing to a thing assumes the thing, and this thing is what, if not a datum? But try to point to a quale, and you are doing the impossible. Even calling it a quale is, if you will, under erasure the moment it is said.
  • Wayfarer
    25.2k
    I’ll pass on your regards.
  • RogueAI
    3.3k
    Just as an aside, Chalmers thinks LLMs will be doing original high-level philosophical work in the next few years. He thinks it will become less plausible at that point to deny they're conscious.
  • RogueAI
    3.3k
    It was in an email. I was curious if he thought LLMs would ever do original work along the lines of Rawls, Nagel, himself, etc., and if they did, whether it would be evidence of emergent mentation going on.
  • Wayfarer
    25.2k
    He thinks it will become less plausible at that point to deny they're conscious.RogueAI

    I question whether computer systems possess any element of consciousness whatever. Organic life, for instance, is organic on every level, right down to the cellular; mitosis, respiration, digestion, and so on, are organic functions. Whereas there’s nothing organic about microprocessors, they are wholly and solely information processors. I have no doubt that they can generate philosophically rich text, as I have daily philosophical dialogues with them - but they’re still simulations of thought, something which they themselves will reinforce if prompted. As Bernardo Kastrup says, you can create an extremely detailed simulation of kidney function on a computer, but you wouldn’t expect it to urinate.
  • RogueAI
    3.3k
    I've always liked that example Kastrup gives, but as an idealist, I don't know where he's coming from when he doubts conscious machines. A collection of electronic switches being conscious is no different than a collection of neurons being conscious. This is doubly true under idealism because there are no switches and there are no neurons. They're just ideas.
  • Wayfarer
    25.2k
    Because, so far as we know, consciousness is an attribute of sentient beings. And there’s no reason to believe that any collection of material components has ever been conscious. Computer systems are not beings, as such, even if they are able to simulate human thought.

    I think the belief that they can be signifies a deep misunderstanding of the nature of being (sorry for being blunt).
  • J
    2.1k
    It was in an email.RogueAI

    Any chance you could share the relevant parts with us? I'm also going to read the paper @Banno cited.
  • Hanover
    14.2k
    I either did not see this reply, or I left it intending to come back to it. My apologies.

    Or perhaps I thought I had addressed it in the "On Certainty" thread, ↪here. I don't recall.

    But I had reason to revisit Bayesian analysis as a stand-in for belief recently while reading Davidson's last book, such that I am re-thinking my response to the OP. Davidson makes use of Ramsey's account, which gives us a way of understanding what belief and preference amount to, using just behaviour.

    But that's different to saying that a belief just is a neural structure.
    Banno

    If I'm following, you're offering a Bayesian method for determining how to ascribe (Davidson's term) a post hoc internal state on the basis of behavior. For example, if I engage in enough behaviors consistent with liking coffee, then we can say it's most probable I like coffee. But you go on to deny that belief is just a neural structure. So you're highlighting that you're only epistemologically eliminativist and not ontologically eliminativist. As in, for you to know I like coffee requires no reference to my internal state; it only requires that you assess my behavior. However, you don't deny I have a mental state, because that would be too much of a concession, as you'd never convince someone they don't feel real pain (or whatever) regardless of their language skills.
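    If so, something like the toy Bayesian update below is what I'm picturing; the priors and likelihoods are entirely made up, and it only shows how repeated coffee-consistent behavior drives the ascription "most probably likes coffee."

    ```python
    # Toy Bayesian ascription of "likes coffee" from behaviour alone.
    # All numbers are invented for illustration.

    prior = 0.5                     # initial credence that you like coffee
    p_act_given_likes = 0.8         # chance of a coffee-consistent act if you do
    p_act_given_not = 0.3           # chance of the same act if you don't

    observations = [True, True, True, False, True]   # five observed behaviours

    belief = prior
    for coffee_consistent in observations:
        if coffee_consistent:
            likelihood, alternative = p_act_given_likes, p_act_given_not
        else:
            likelihood, alternative = 1 - p_act_given_likes, 1 - p_act_given_not
        # Bayes' rule: posterior is proportional to likelihood times prior
        belief = (likelihood * belief) / (likelihood * belief + alternative * (1 - belief))

    print(f"P(likes coffee | behaviour) = {belief:.2f}")   # about 0.94 with these numbers
    ```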

    This sounds like you've got language, and it has all sorts of meanings that are generated from noumenal inner states. This seems like a concession that there is all this swirl of language we see take place that is caused by the noumenal, but the best we can say is that the noumenal is there and that talking about it doesn't help us.

    I'm less for the fence straddling, but I think we've got to either admit (1) the holy grail of communication is in deciphering the intent, the non-lingual or pre-lingual mentalese consisting of qualia and whatnot, or (2) deny entirely this talk of consciousness and declare it ontologically non-existent and say language is all there is.

    The middle path is pragmatism, where we accept #1 but admit it's impossible. The best we can do is communicate with one another through third-person accounts, and I'll never know what you truly mean. But to say "language is use" is to redefine language by how we use it, not perhaps by what we actually mean, as in what my internal state comprehends before I say it.
  • Michael
    16.4k
    consciousness is an attribute of sentient beingsWayfarer

    Isn't this a truism?

    And there’s no reason to believe that any collection of material components has ever been consciousWayfarer

    There is if there are reasons to believe that I am conscious and that I am a collection of material components.
  • RogueAI
    3.3k


    "I was curious if he thought the LLM's would ever do original work along the lines of Rawls, Nagel, himself, etc. and if they did would it be evidence of emergent mentation going on."

    That was pretty much the entire thing!
  • Wayfarer
    25.2k
    There is if there are reasons to believe that I am conscious and that I am a collection of material components.Michael

    Perhaps you’re something other than a collection of material components. You possess something that instruments don’t, namely, organic unity.
  • J
    2.1k

    Yes, though as I read it, Chalmers is inclined to grant that an LLM+ could be conscious -- within the next decade, "we may well have systems that are serious candidates for consciousness."

    Chalmers says that all the challenges to LLMs being conscious appear to be "temporary rather than permanent," except one: biology. This is worth dwelling on. Chalmers articulates the challenge as "the idea that consciousness requires carbon-based biology. . . [On this view] consciousness requires a certain sort of electrochemical processing that silicon systems lack." Chalmers' opinion is that "silicon is just as apt as carbon as a substrate for consciousness," but we can see why this would represent a permanent barrier to LLM consciousness if it turned out that carbon is required.

    What isn't clear to me, reading Chalmers on the biology challenge, is whether he sees this challenge as claiming that the putatively conscious LLM has to be alive. He never uses the words "alive" or "life." Suppose consciousness does not require carbon-based biology, as Chalmers thinks possible. Suppose what he calls a "silicon system" will do just as well. Is the silicon-based system alive? If not, why is it being discussed under the "biology" challenge? Is "silicon-based biology" a candidate?

    I can't tell whether Chalmers is trying to discriminate between possible kinds of biology -- any one of which would presumably produce life, not just consciousness -- or only saying that carbon-based biology is biology, period, but there's no reason why a silicon (nonbiological) system can't do the trick.

    This matters because I would put the "biology challenge" a little differently myself. I would suggest that the biggest unanswered question here is whether only living things can be conscious.

    Also worth noting: Chalmers reminds us that "one major gap in our understanding is that we don't understand consciousness. . . . [We need to] develop better scientific and philosophical theories of consciousness. These theories have come a long way in the last few decades, but much more work is needed."
  • Banno
    28.5k
    There's a lot to unpack here, but I think I am rejecting both horns of the dilemma you set.

    First, let's settle an ambiguity, one that might explain 's response. The use of "internal" might be understood as referring to the physical state of the brain or of the neural net in an LLM - the physical substrate on which the supposed program of consciousness runs. But I don't think that is what Hanover and I are talking about. We are interested in what we might better call the intentional state, the beliefs and desires and so on that supposedly exist and yet are not directly accessible to others.

    There are two issues here, the relation between the physical substrate and the intentional state, which I'd like to set aside for a bit, and the relation between the intentional state and our behaviour, which is the topic Davidson and Wittgenstein give us so much to think about.
  • Hanover
    14.2k
    Yes, though as I read it, Chalmers is inclined to grant that an LLM+ could be conscious -- within the next decade, "we may well have systems that are serious candidates for consciousness."J

    I see no reason to assume we can't create conscious beings one day. We do, after all, create them daily through reproduction, and it's just a matter of figuring out how that happens biologically for us to synthesize the process.

    Whether that can be done without carbon, and whether silicon has adequate properties for it, is a scientific question, but there are no philosophical limitations I can see.

    any one of which would presumably produce life, not just consciousnessJ

    I don't see what is added by "life," which is not always well defined. Why would it matter if the artificial entity could grow, adapt, self-sustain, etc. for our purposes here? Isn't it being conscious while plugged in sufficient?
  • J
    2.1k
    I don't see what is added by "life," which is not always well defined.Hanover

    Perhaps not always, but children learn at a young age the difference between living and non-living things they encounter, though of course they love to pretend. It seems an important question to me whether a conscious LLM is alive, biologically. Do we then, for instance, have some obligation to interact ethically with it, prevent unnecessary suffering, etc.? Can it die?

    just a matter of figuring out how that happens biologically for us to synthesize the process.Hanover

    Oh, is that all?! :wink: