• frank
    18.2k
    Understanding is no more internal than eating. It depends on some biological processes that happen under the skin, among other things that don't, but this doesn't license your appeals to the internal that you make with reference to perception and meaning. Synaptic transmission is no more meaningful than peristalsis.Jamal

    It's very important to know the difference between an internal voice and an external one, or a real face and a hallucination. For some crazy people, the only way to tell is by way of rational assessment. The magic detector that everyone else has isn't working.

    If yours is working, you know the difference between internal and external. You don't need meds.
  • Hanover
    14.6k
    "the [non-existent] present king of France," is a referent to an idea in your head.Harry Hindu

    Under this understanding, then so is the cat. That is, the cat is out there, the image is in here, and the reference is to the image in your head. And that is your metaphysical account, but that's not Wittgenstein's because his isn't a metaphysical account. His is a grammatical account, describing how language operates within our forms of life, and that attempts to use language to explain the metaphysical misunderstand the role of language.

    If you want to refer to mental objects and qualia and whatnot, you're not forbidden from it, but I'd think he'd just assert that "qualia" is however you use the word. Your position seems to be that the utterance of any word creates a referent.
    "Public usage" as in using scribbles to point to objects and events in the world. If you are not pointing to anything with your scribbles that do not ultimately resolve down to things that are not scribbles (as in the case of "freedom" and "aboutness"), then it no longer qualifies as "public usage". It is "private usage".Harry Hindu

    Usage of the term is public behavior. To the extent you argue I can't appeal to what is in your head when you say "freedom," you are correct. What I can appeal to is how you use the term in a public way, which is really the heart of the beetle argument. We cannot see the beetle, we cannot confirm whether we both speak of the same beetle, and no amount of talking about the beetle will assist us in that regard. It is for that reason that we concern ourselves with the use of the term "beetle" and not the beetle itself.
  • NOS4A2
    10k
    My take is skeptical, according to your definition.

    For the simple reason that machines are not biological, they do not have similar structures, components, parts, or what have you, to any organism, let alone humans. If they do not have similar structures, they do not act in similar ways to humans. If they do not act in similar ways, they should not be described in anthropomorphic terms. In that way, it cannot be said there is knowledge, consciousness, learning, thinking, or any human-like acts involved in anything these machines are doing.

    In my opinion the field requires new terms to describe the activity of AI; or if these terms are already available and established, they should be used instead.
  • apokrisis
    7.7k
    The Wittgensteinian approach (and I could be very wrong here, so please anyone chime in) does not suggest there is not an internally recognized understanding of the word when the user uses it, but it only suggests that whatever that is is beyond what can be addressed in language. That would mean that whatever "understanding" is amounts to our public criteria for it.Hanover

    From the neurocognitive view, understanding means anticipation. Forming the right expectations. So if not meaning as demonstrated by use, then meaning demonstrated by preparedness.

    I hear “apple”, I get ready to react accordingly. My attention is oriented in that particular direction.
  • Hanover
    14.6k
    From the neurocognitive view, understanding means anticipation. Forming the right expectations. So if not meaning as demonstrated by use, then meaning demonstrated by preparedness.

    I hear “apple”, I get ready to react accordingly. My attention is oriented in that particular direction.
    apokrisis

    I think this is compatible with meaning is use as long as you're describing public manifestations. If preparedness is a qualitative state it's not compatible, but if preparedness is standing, staring, moving or doing something in a particular way then it would be compatible.
  • apokrisis
    7.7k
    It's very important to know the difference between an internal voice and an external one, or a real face and a hallucination.frank

    Again, this is about cognition being anticipation-based processing. Forming expectancies that intercept the unfolding of the world even before it happens. We know it is us thinking our thoughts because we form the motor patterns that already prime our sensory circuits to expect exactly these words in our heads. But when someone else speaks, it feels different, as we are having to guess what might be said and assimilate that to what actually gets said.

    So that is the goal for AI that goes beyond just LLMs. Switch to an anticipatory-processing architecture that lives in the world in real time.
  • Hanover
    14.6k
    For the simple reason that machines are not biological, they do not have similar structures, components, parts, or what have you, to any organism, let alone humans. If they do not have similar structures, they do not act in similar ways to humans.NOS4A2

    I don't see how you arrive at the second sentence from the first.

    In the Shoutbox, the conversation was about water pumps. If I have a pump that operates off of suction versus one off an impeller, but both move the same quantity of water at the same rate, why can't I describe them similarly as water pumps, concerning myself only with the relevant result of the pumps' behavior, which is the pumping out of water? Why must their output be declared of different types and categories simply because their unseen parts perform the intermediate tasks very differently?

    Also, given that we have no idea how it is that human cognition occurs, and all we know is that somehow it arises as the final behavior of brains, what provides us the ability to know that the physical acts leading to cognition within two different humans' brains are at all alike? That seems speculative, and I would assume correct only to a point given the variations from one person to the next.
  • Hanover
    14.6k
    Switch to an anticipatory-processing architecture that lives in the world in real time.apokrisis

    Doesn't it do this with auto-pilot airplanes and self-driven vehicles? ChatGPT isn't a good example of this because it has no inputs other than a person typing or speaking to it, but there are examples of AI receiving data directly from the world. For example, an airplane could receive data about a distant storm and divert or change altitudes.
  • apokrisis
    7.7k
    I think this is compatible with meaning is use as long as you're describing public manifestations.Hanover

    I’m not too fussed with making the psychological science conform to the Wittgenstein model.

    But I would note preparedness is also being ready ahead of time, knowing what to ignore. So meaning is also inaction. Meaning is what you don’t do as you have already dismissed it in advance.

    Again, this is a central fact of neurobiology that is quite absent from LLMs. The brain is set up on the basic principle of learning to ignore the world as much as possible, as almost everything about the world has already been predicted as being about to happen, or dismissed as unimportant if it does happen.

    The more we understand ahead of the moment, the less we need to figure out in the heat of any moment. The natural goal of a brain is to have zero response as that means it was completely successful in its desire to remain completely unsurprised by what the world could throw at it.

    This is the Bayesian Brain model of cognition. Hinton’s Helmholtz machine or even before that, Grossberg’s ART neural network architecture from the 1980s.
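
    As a toy illustration of that "zero response is success" principle (a made-up scalar example in Python, not a sketch of any of those actual architectures): an agent keeps a running prediction, updates it on each prediction error, and only "responds" when the error is big enough to count as surprise. A world it has already learned provokes almost no response.

        # Hypothetical toy sketch of anticipation-based processing.
        # The agent's goal is to end up with nothing left to respond to.

        def run(world_signal, learning_rate=0.2, surprise_threshold=0.5):
            prediction = 0.0
            responses = 0
            for observed in world_signal:
                error = observed - prediction          # prediction error ("surprise")
                if abs(error) > surprise_threshold:    # only unexpected input provokes a response
                    responses += 1
                prediction += learning_rate * error    # update so this is expected next time
            return responses

        print(run([1.0] * 50))        # a repetitive world: a handful of early responses, then silence
        print(run([1.0, -1.0] * 25))  # a volatile world: surprise, and so responses, never stop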

    So the AI community knows the architecture it would want to copy. And it knows LLMs ain’t it. The surprise is just how useful LLMs can be as a new technology if you are willing to scale their simple ability just to predict the next likely step when trained on a static data set.

    Living in a dynamical world in real time is quite another level of challenge.
  • Hanover
    14.6k
    So the AI community knows the architecture it would want to copy.apokrisis

    This surprises me, although my knowledge of the subject is limited to your last 2 posts, so there's that. That is, you described how certain information needs to be ignored and that can be based upon past experience and statistical models. Why wouldn't an LLM do well at that and how is that not already occurring in self-driving vehicles? They are responding to real world situations without being overwhelmed with irrelevant data and I would assume being able to anticipate based upon statistical models.

    So, where you say the AI community knows that LLMs can't do what they need it to, where is this documented? What is the cite for that?
  • apokrisis
    7.7k
    Doesn't it do this with auto-pilot airplanes and self-driven vehicles?Hanover

    Sure. Cybernetics has been with us since the first AI revolution of the 1950s.

    What the history of AI should tell us is that the architectural issues are not that complicated to understand. And even the most rudimentary implementations of some kind of neural network can be surprisingly powerful. Back-prop networks once seemed as big a breakthrough as LLMs.

    But we’ve been at AI for 70 years now and LLMs are as far as we have got. That should also tell you something.
  • Hanover
    14.6k
    But we’ve been at AI for 70 years now and LLMs are as far as we have got. That should also tell you something.apokrisis

    It seems like in the past few months we've gotten very far, but I realize things were happening in the background before I became aware of them. But I see our progress as tremendous, not minimal as maybe you're suggesting.
  • apokrisis
    7.7k
    So, where you say the AI community knows that LLMs can't do what they need it to, where is this documented? What is the cite for that?Hanover

    This is a decent summary making quite a stir given that the LLM hype bubble could be about to bust the stock market.



    But I see our progress as tremendous, not minimal as maybe you're suggesting.Hanover

    Hah. Call me old and jaded. But I was around for the second AI revolution hype bubble of the 1980s. I spent time in the labs to learn about what was going on.

    Neural networks had become a damp squib, but Japan had launched its fifth generation computer initiative and the US was all up in a competitive frenzy about parallel processing and symbolic computing as being about to rewrite the landscape.

    And who remembers any of that?
  • NOS4A2
    10k


    I don't see how you arrive at the second sentence from the first.

    The things involved and the movements they make are different. It’s like saying submarines swim.
  • apokrisis
    7.7k
    If I have a pump that operates off of suction versus one off an impeller … Why must their output be declared of different types and categories simply because their unseen parts perform the intermediate tasks very differently?Hanover

    Doesn't it do this with auto-pilot airplanes and self-driven vehicles?Hanover

    Your error is conflating behavior and consciousness. Your argument is that if a machine acts like a human, it thinks like a human. The pragmatic Turing argument.

    But cybernetic autopilots are machines driving machines in a machine-like world. AI can appear to be doing just great in a world already made as machine-like as possible by humans.

    Just having a world with doors, windows, steps and paths is such a huge reduction of complexity that a robot should be able to navigate it.

    Or as is the case with LLMs running a call centre, a synthesised voice following a pattern-matching script can push the buttons of an unhappy human customer in a way that hopefully guides them down to a happy landing spot. Zero actual intelligence or sapience need be involved.

    New technology fits into the previous technology that already composes our now highly mechanised lives. The more we behave like automatons in our machine-styled worlds, the easier we can add the new levels of life automation.

    We don’t ask our machines to rub sticks and build the fire to cook the antelope they just speared and butchered. We just ask them to jam the ready meal in the microwave and press the button. And to appear to understand us when we ask for the chicken option rather than the lamb.
  • hypericin
    1.9k


    I'm struggling with this.

    To me there is a gap between behavior and internality. We are embodied creatures, and our behaviors and internal states are deeply intertwined. But this fact about us doesn't imply a necessary connection.

    Pain for us seems intrinsically aversive, and is associated with avoidance and the other behaviors you mentioned. But then there are masochists. Do they experience inverted pain/pleasure? No, almost certainly they reinterpret the sensation of pain positively*. Or, consider the religious fanatic who detests and avoids anything suggestive of bodily pleasure. Or, imagine someone born without pain (a real and horrible condition) who has learned the behavioral concomitants of pain, and faithfully mimics the yelps, cries, and help seeking, believing them to be no more than social norms surrounding bodily harm.

    None of this would be possible if sensations and their accompanying behaviors were inseparable, as you seem to suggest.

    *I experienced something similar. A tooth died, and it was unbelievably painful. It was evening, so I had to endure until the dentist opened the next morning. Somehow, in desperation, I managed to reinterpret the pain as a kind of neutral life force, and I was able to sleep through the night!
  • apokrisis
    7.7k
    Pain for us seems intrinsically aversive, and is associated with avoidance and the other behaviors you mentioned. But then there are masochists. Do they experience inverted pain/pleasure? No, almost certainly they reinterpret the sensation of pain positively*.hypericin

    But there is then a neurobiological account of how this can be so. Pain as a trigger for an aversive response is hardwired into the brainstem. Hit the right stimulation threshold and the pain reflex fires.

    That is good enough to save the life of a frog. But as brains grew more complex, a hierarchy of levels of processing was built atop that basic reflex circuit.

    In particular, a mammalian brain develops a frontal lobe area, the anterior cingulate cortex, that is able to weigh up input from multiple directions. It can take into account your need to sometimes ignore pain to get through the thing you have to be doing. Or to ignore the pain in advance as you already can expect the "hurt" and so suppress it at the brainstem level. Or even play the trick of dissociating and directing your attention to thoughts of being on a beach in Hawaii. Let the imagery turn up the dial in the lower brain pleasure circuits instead.

    Masochism becomes a more extreme kind of reframing: learning to find sexual excitement in the general arousal that a dread of imminent pain is going to create. Any arousal can be good arousal if you are in the frame of mind to read a mix of pleasure and pain in that well-familiarised and routinised way.

    So we understand the architectural principles at work. Organisms start with a simple functional base of behaviours. An orientation reflex that – without thought and perhaps even without feeling – can make the instant decision about whether to approach or retreat from some source of environmental stimulation.

    Even a prawn is built to make snap decisions about which way it wants to move. It is about the most basic cognitive act. Even bacteria have to be able to do it. But bacteria have no brains to speak of. And prawns are likely too primitive to have pain in any way we would think of it. They would react like our foot jumps when our knees are hit by a little rubber hammer.

    But then animals grow more intelligent by adding levels and levels of cognitive complexity. You wind up with a higher brain centre like the anterior cingulate which has the top-down circuitry to modify the lower brain reflexes, either ramping up the signal, so that the slightest brush in a scary dark room can give you a heart attack, or damping it down so that you can undergo major surgery while imagining being on a beach in Hawaii – a vision made overwhelmingly vivid because you have been "put under" by the silky words of a hypnotist.

    So again, we have a good understanding of the biological architecture and logic of brains. And thus we can see just how far off LLMs are from any true biological realism.
  • Hanover
    14.6k
    The things involved and the movements they make are different. It’s like saying submarines swim.NOS4A2

    No, it's not. The example I provided had dissimilar methods for achieving the same result. The submarine example has dissimilar methods for achieving dissimilar results.

    The question is whether Z can result from method X or Y. Your argument is that it cannot, because Z will necessarily be different if it arises from X as opposed to Y. That doesn't follow. The same thing can arise from different processes.
  • Hanover
    14.6k
    Your error is conflating behavior and consciousness. Your argument is that if a machine acts like a human, it thinks like a human. The pragmatic Turing argument.apokrisis

    Not really. I'm only saying that it seems possible to create an AI system that works within a complex environment such that it must anticipate next events and therefore react as if human. I'm not suggesting its methods for achieving the human-like conduct would be anything close to the methods used by actual humans. I accept it's entirely mimicry. I just don't see why it can't be done, and would be interested in some citations to that limitation based upon your comment that this limitation is well known in the AI industry. I'm not claiming you're wrong, but that seems an important limitation and I was interested in where that might be discussed in more detail.
  • hypericin
    1.9k



    I like this account. Clearly, AIs are far from biologically realistic. What I dispute is that biological realism, or physical embodiment, is necessary for subjective experience (granted that any such experience possessed by LLMs would be radically different from our own).

    Moreover, I even dispute the idea that AI is not embodied in the relevant sense. LLMs, like animals, receive stimulus and respond to it. It's just that the stimulus and response is all words in their case. The fact that this verbal "environment" they interact in is virtual, ungrounded in the material world, doesn't seem especially pertinent here.

    @Pierre-Normand
  • apokrisis
    7.7k
    The question is whether Z can result from method X or Y. Your argument is that it cannot, because Z will necessarily be different if it arises from X as opposed to Y. That doesn't follow. The same thing can arise from different processes.Hanover

    You are rehashing the multirealisability thesis from philosophy of mind.

    Computer science – being about Turing machines – would seem to support the idea that some software routine can be implemented on any kind of machine that implements the essential Turing architecture. Therefore if consciousness is at root a matter of computing, then consciousness could be implemented on a contraption of tin cans and string if connected the right way.

    But biology argues that consciousness (or even swimming) evolved. And so was realisable only given what was a drastic reduction in the space of realisable physical structures.

    By the time you get to mammals with legs and tails, it is already too late for dolphins to arise that swim using engines and propellors. And by the time you get to the biological complexity of a brain, it is too late for Turing machines to be the way the job of sentience is getting done.

    The computer scientist will respond that the biologist can't prove that computer technology won't ever be properly conscious. Anything is possible, right?

    But that is now a long way from the original bold claim – that the very impressive multirealisability of universal Turing computation says silicon can be just as good as carbon, just give the science time – to the reverse claim that, well, biology can't be absolutely sure that the mechanical version of intelligence won't eventually perform convincingly enough to leave us thinking it has become a difference that makes no pragmatic difference.

    I accept it's entirely mimicry. I just don't see why it can't be done, and would be interested in some citations to that limitation based upon your comment that this limitation is well known in the AI industry.Hanover

    OK. As well as Karpathy, there is Richard Sutton. The limitations of LLMs are well aired.



    But these guys still tend to believe AGI is just around the corner. So computer science will get there – depending on how you then define "there".

    However, the less these guys know about actual biology and neurobiology, the more glibly they can think it is only a matter of some key architectural tweaks and a lot of compute scaling, and we will have conscious machines. Genuine minds rather than artful facsimiles.

    But as I say, if we keep redefining "there" to mean machines living in a machine world, then you could perhaps legitimately think of AI as the next semiotic step in the evolution of life. The scaling not of Turing computation but of Peirce's semiotic modelling relation, in which humans and their machines converge on a Wittgensteinian way of life that is uber-mechanised. A new level of the entropic superorganism.

    Our factories and offices already turned us into blue collar and white collar workers. Industrialising our social realm has been turning us into the mindless cogs of our economic systems – the growth loop arising out of capital connected to entropy.

    So the closer we get to living this mindless existence, the less of a gap AGI will have to bridge.

    Anyone interested in philosophising will be wondering how that little dynamic will play out. If AI is all about automating every job that humans do, including the thinking, then what happens to the residue of hopes, fears and desires that leaves – the messy biological stuff that silicon hasn't got around to simulating, and might be wondering whether it is all that important in a context where there is only this new direct cybernetic loop between capital flows and natural resource consumption?

    What kind of worthwhile human society could coexist with actual AGI? I'm not seeing that step sketched out. Unless you count Saudi Arabia's Neom as a vision of the future.

    The whole consciousness thing is rather a red herring when it comes to AI. The issue is how much are we prepared to sacrifice the ecology of our planet in the rush to mechanise biological functions?

    Computer scientists have been hyping up machine intelligence ever since Turing proved the multirealisability thesis for information technology, and photolithography appeared to remove the practical limits on circuit board miniaturisation.

    But hardly anyone seems to have a first clue about what "consciousness" really is according to biological science. The organic has gone completely missing from the collective conversation.

    The mind just is "an output". And machines are really good at "outputting". Organisms seem to come with too many opinions and contingencies. Looked at from an engineer's point of view, biology is really quite bad at the "outputting" business. Most of it has to learn by dying. How are you going to scale something as dumb as that?

    Sure, LLMs are a really big headline. The US stock market is about double the size of the US economy now, with LLM hype responsible for 80% of the stock gains. So it's got to be legit, right? Already too big to fail.

    And just like no one wants to hear from party-pooping climate scientists, no one wants to hear from biologists or neuroscientists or anthropologists who might have awkward opinions on the matter.
  • frank
    18.2k
    I'm only saying that it seems possible to create an AI system that works within a complex environment such that it must anticipate next events and therefore react as if human.Hanover

    Computers execute if/then commands; they can continuously sample the environment looking for patterns. What else might there be to anticipation than that?
  • apokrisis
    7.7k
    Moreover, I even dispute the idea that AI is not embodied in the relevant sense. LLMs, like animals, receive stimulus and respond to it. It's just that the stimulus and response is all words in their case. The fact that this verbal "environment" they interact in is virtual, ungrounded in the material world, doesn't seem especially pertinent here.hypericin

    Being embodied in some kind of world does get you towards being a semiotic system. So as I have said, yes, AI could be like life and mind in implementing a modelling relation of that sort – a relation where the information of a model is regulating the entropification of the world. Creating the physical conditions that perpetuate its "mental" existence.

    So if there is an algorithm that connects humans and information technology, it will be that Peircean or biosemiotic one. And this then becomes the yardstick for measuring AI's claimed progress. Is it modelling "its" world in a way that makes its world a place that is perpetuating its own embodied existence?

    So an LLM lives in its world of "the most likely bit string" to follow whatever bit string it has just been prodded with. If it does "a good job" at predicting these follow-on bit strings, then it will find that it not only survives but flourishes. Money will be thrown at building more data centres and more power stations.
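
    To put that picture in the plainest possible terms, here is a toy sketch (a tiny Markov chain fit to a made-up corpus, not an LLM; every string in it is invented for illustration) of a model whose whole world is "what is likely to come next, given what I was just prodded with":

        import random
        from collections import Counter, defaultdict

        # Hypothetical miniature of the "most likely next string" world:
        # fit to a static corpus, then only ever continue a prompt.

        corpus = "the cat sat on the mat and the dog sat on the rug".split()

        # "Training": count which word follows which in the fixed data set.
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def continue_prompt(word, length=5):
            out = [word]
            for _ in range(length):
                options = follows.get(out[-1])
                if not options:
                    break
                words, counts = zip(*options.items())
                out.append(random.choices(words, weights=counts)[0])  # sample a likely next step
            return " ".join(out)

        print(continue_prompt("the"))  # e.g. "the cat sat on the rug"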

    But what kind of consciousness or sentience would you expect to discover if you could poke your own head into an LLM's world? Perhaps about the same as thrusting your head into an ant colony with all its busyness and remarkably coordinated behaviour, but little actual thinking, feeling, imagining or whatever we would consider being the phenomenology one might expect as a human scale subject living in our neural models of the world as we expect it to be and how we would wish it to become.

    Bit strings pinging back and forth. The space of this bit string pinging magically growing larger and bigger all the time as – somewhere invisibly – the US debt is being cranked up, investor fever is swelling, land is being bulldozed for extra data centres and power stations.

    So how much is an LLM in control of anything that actually matters to its continued existence? How much is it really embodied in a modelling relation with its world?

    Biology is smart enough to regulate the physics that makes it at the quantum nanoscale. Chemistry is being told exactly what to do by an army of information-controlled molecular machinery. Individual protons are being pumped to order and sustaining membrane electric potentials that are proportionately like bottled lightning.

    That is what being embodied looks like. Being self-interested at the level of our electrons and protons.

    And how far does the self-interest of LLMs extend by comparison? Turing machine principles tell us already that multirealisability means that physical hardware and power supply issues don't matter at all.

    So sure, some kind of semiosis is going on with LLMs. But then again, not really. It is all just humans amplifying human things by employing fossil-fuel powered technological aids.

    We have reached a state of factual world modelling where it would be really nice to have one giant database of everything we might ever have randomly said when attempting to speak intelligently about our world. And our relation to it. And now LLMs can search that database of training data with algorithms that spit out pixels on a screen or squawks from a speaker which will push the right buttons when they are interpreted by organisms with the right kind of brains to make actual meaningful sense of these newly outputted bit strings.

    We would really like to believe in this fantasy of conscious machines. But doesn't the yawning gap start to seem obvious, even if some kind of artificial semiosis might be realisable? If there was anyone around wanting it enough to pay for its existence.
  • Pierre-Normand
    2.8k
    To me there is a gap between behavior and internality. We are embodied creatures, and our behaviors and internal states are deeply intertwined. But this fact about us doesn't imply a necessary connection.hypericin

    Much hangs on what one takes to be the sort of necessity (conceptual, logical, evidential, etc.?) that connects mentalistic concepts to the normal manifestations of what they signify. Although my thinking about this has been much influenced by Bennett and Hacker's discussion in their book Philosophical Foundations of Neuroscience, even Hacker (who wrote most of the philosophical arguments) didn't get it quite right. There are very many angles I wanted to take for addressing your comments about pain, and I wasn't sure where to begin, or where to end without getting too deep into abstruse philosophical weeds.

    I queried GPT-5 for help and its gloss on the matter puts so much meat around the philosophical bone that I can claim few of its suggestions (even with full disclosure) as mere elaborations or unpacking of my own. So, I can't make much use of them in crafting an original response without this constituting a prohibited use of AI on ATF. What I can do, though, is to disclose my prompt (that you can skip if you want) and endorse GPT-5's response. I think it's also, for this thread, a relevant illustration of the capacity LLMs have, when suitably prompted, to generate ampliative responses that aren't merely agreeable rewordings of the theses expressed by their users.

    USER:

    Here is an interesting reply by "hypericin" to my comments about pain qualia:

    [quotation of hypericin's post snipped]

    There are multiple angles I could envision taking for arguing that Hypericin's examples don't sever the internal conceptual link between mental attributes and their normal manifestations. (Unpacking "normal" already does some of the work). One simple way is to say that the criteria are defeasible, but I am not overly fond of this gloss since it's a sort of compromising move (one that Hacker makes, by the way, without using the word "defeasible," in TPFoN with Bennett, in a way that sort of contradicts his insistence on logical or grammatical necessity) that salvages part of the Cartesian internalist picture that it seeks to displace. The problem with the idea of defeasible criteria has been highlighted by McDowell in "Criteria, Defeasibility and Knowledge" among other places. The more fruitful idea, the way in which Wittgenstein meant to distinguish criteria from mere symptoms (in The Blue and Brown Books, I think), and hence also to distinguish conceptual links from merely contingent causal ones, is to think of the former as *indefeasible* in a disjunctive sort of way. It's not that we can infer pain from its habitual behavioral manifestations defeasibly. It's rather that we often (or even occasionally, it doesn't matter how infrequently) see its manifestation directly and are not merely seeing a symptom of it. The apparent exceptions are explainable by the fallibility of our capacity to identify pain in others from its criterial manifestation, because we sometimes mistake misleading behaviors for the satisfaction of the criterion. But I also want to address Hypericin's points in a direct way that isn't encumbered by overly abstruse philosophical jargon from disjunctivist-fallibilist epistemology, and this is a fine needle to thread.

    Another angle that I mean to keep insisting on is to highlight the impoverishment that accrues to our already existing (albeit sometimes tacit, but that surfaces in phenomenological work, or literary exploration of the relevant features of our form of life) understanding of those "mental states" as features not of "behavior" thinly construed as behavioristic bodily movements ("responses" in S-R causal chains) but rather of our embodied/social *lives*. Since the inchoately dualistic picture Hypericin's imagination is captured by is hostile to the eliminativist implications of reductionistic S-R conceptions of human mindedness, this sort of angle would also highlight an area of agreement with him.

    GPT-5's useful gloss

    GPT-5 (with thinking mode enabled):

    Totally—hypericin is pressing the right intuitions, but they don’t break the constitutive tie you’re pointing to. Here’s a reply you can drop in that threads the needle without heavy jargon:

    "I don’t mean that a single bit of behavior travels in lockstep with a single inner feel. The claim is weaker and more structural: what we call pain is fixed by a pattern of roles in a human form of life—aversive tone, protective/postural tendencies, learning updates, complaint/help-seeking, practical reasoning that treats the state as a defeater, etc. Those roles are multi-track and context-sensitive; no one manifestation is mandatory on every occasion.

    Your cases fit that elasticity rather than severing the link. Masochists don’t have “pleasure-pain inversions”; they pursue pain under descriptions and contexts that attach countervailing goods (control, intimacy, transgression), often with training that modulates how the aversion is taken up—but the basic nociceptive/protective system still does its job (try surprising them with a hot stove outside the scaffolded context). Ascetics/fanatics revalue pleasures by higher-order commitments; again, the bodily pull hasn’t inverted so much as been overridden. CIP cases (congenital insensitivity to pain) illustrate the point from the other side: if someone lacks the interoceptive/protective guidance, they can mimic the social choreography of pain, but they don’t thereby have pain—precisely because the regulative role is missing. And your tooth story is classic reappraisal: meaning-making can dampen or reshape the aversive feel enough to sleep, but that’s modulation of a still-recognizable state, not a flip to “pleasant pain.”

    So I’m not arguing “sensation and behavior are inseparable” in a behaviorist, one-to-one way. I’m saying the concept is anchored in a disjunctive family of criteria that show up across time and context—some expressive, some physiological, some rational-practical. Edge cases are intelligible as edge cases because the wider web still holds. When you stipulate an android that perfectly matches the web of roles across contexts and histories and then ask us to imagine that internally the state is pleasant, the stipulation cancels itself: either the pleasant state wouldn’t play those roles, or if it reliably does, then you’ve simply taught me a new use of “pleasant.”

    In short: I grant a “gap” if by that we mean our fallible access to one another’s minds at a moment-to-moment granularity. I deny a gap at the level that matters for concept-possession. Pain is not fixed by a private feel plus optional consequences; it’s fixed by how a lived body and a social animal takes and treats a state across its activities. Your examples display the plasticity of that taking-and-treating, not an independence from it."

    If you want an even crisper capstone: Cases like masochism and reappraisal show that the aversive signal can be normatively repurposed; they don’t show that pain is conceptually detachable from its aversive, action-guiding role.


    * * *

    On edit: The phrase "practical reasoning that treats the state as a defeater" that GPT-5 used was a bit obscure to me so I asked GPT-5 to clarify:

    GPT-5 on pain states as defeaters
    GPT-5:

    By “practical reasoning that treats the state as a defeater,” I meant this:

    When pain is present, it normally functions as a reason that cancels or silences the force of other reasons you were acting on. It doesn’t just add information; it re-sets what is rational to do next.

    — You have a plan: “carry the hot pan to the table.”
    Pain (burn) arrives → it defeats the standing reason to keep carrying; the rational move flips to “drop/put it down, protect the hand.”

    — You intend to finish a run.
    Sharp ankle pain appears → it defeats the reason to continue; the apt response becomes “stop, offload weight, seek help.”

    — In social terms: pain licenses complaint/help-seeking and reorients priorities (protect, rest, medicate) even if they conflict with prior commitments (“I promised to help move the couch”).

    Two nuances:

    — This “defeater” role is structural, not behaviorist. It lives in how agents reason: pain typically excludes certain options from further consideration (Raz would say it can act like an exclusionary reason) or gives a strong pro tanto reason against continuing.

    — Context can override or reframe (athlete finishing a race, controlled BDSM, martyrdom), but those are intelligible precisely because we recognize pain’s default role as a reason-stopper—overriding it requires special, higher-order reasons or scaffolding.

    So the phrase marks pain’s place inside deliberation: it’s part of the pattern by which a human agent treats a state as a defeater of an ongoing plan or policy, prompting protection, withdrawal, or care.
  • Harry Hindu
    5.8k
    Under this understanding, then so is the cat. That is, the cat is out there, the image is in here, and the reference is to the image in your head. And that is your metaphysical account, but that's not Wittgenstein's because his isn't a metaphysical account. His is a grammatical account, describing how language operates within our forms of life, and that attempts to use language to explain the metaphysical misunderstand the role of language.

    If you want to refer to mental objects and qualia and whatnot, you're not forbidden from it, but I'd think he'd just assert that "qualia" is however you use the word. Your position seems to be that the utterance of any word creates a referent.
    Hanover
    It does create a referent to the cause of your utterance. Why did you utter anything? Effects carry information about their causes. Words carry information about the idea of the speaker and their intent to reference it with utterances.

    We can only ever use scribbles and utterances to refer to our mental states. Whether our mental states refer to an external world is a metaphysical position. As far as I know Witt never solved the issue of solipsism vs realism. It seems to me that he merely assumed the metaphysical position of realism in asserting that there are other humans that we publicly engage with.

    But none of this addresses the main point that you continue to side-step:
    If meaning is only determined by public use, then AI — which demonstrably uses terms in a manner consistent with public linguistic practices — does participate meaningfully in the language game. Its behavior is public, rule-governed, and indistinguishable in many linguistic contexts from that of human speakers.

    However, when we deny that AI “really understands,” we smuggle in precisely what Wittgenstein tried to bracket out — a private, internal criterion for meaning (something like “having qualia,” “having intentions,” or “experiencing aboutness”). That move reintroduces a metaphysical distinction between syntax and semantics that the meaning-is-use position avoids.

    Either we hold that meaning is use, and therefore AI genuinely uses language meaningfully within our shared form of life (albeit as a new kind of participant) or we insist that meaning requires some inner mental correlate — in which case we’ve abandoned the pure Wittgensteinian stance and re-entered the metaphysical terrain of intentionality and private experience.

    In other words, the “beetle in the box” problem cuts both ways: we can’t see the AI’s “beetle” any more than each other’s, yet we still treat human speech as meaningful. If public use is all that matters, then AI qualifies. If it’s not, then meaning isn’t just use — it’s tethered to something extra-linguistic after all.
  • Harry Hindu
    5.8k
    From the neurocognitive view, understanding means anticipation. Forming the right expectations. So if not meaning as demonstrated by use, then meaning demonstrated by preparedness.

    I hear “apple”, I get ready to react accordingly. My attention is oriented in that particular direction.
    apokrisis
    This is just another way of saying that we have a set of instructions for interpreting sensory data. Else what is an anticipation or expectation? How can we anticipate or expect anything if we do not have some information stored internally?
  • Harry Hindu
    5.8k
    Again, this is about cognition being anticipation-based processing. Forming expectancies that intercept the unfolding of the world even before it happens. We know it is us thinking our thoughts because we form the motor patterns that already prime our sensory circuits to expect exactly these words in our heads. But when someone else speaks, it feels different, as we are having to guess what might be said and assimilate that to what actually gets said.

    So that is the goal for AI that goes beyond just LLMs. Switch to an anticipatory-processing architecture that lives in the world in real time.
    apokrisis
    AI already does just that. ChatGPT typically ends with asking the user if they would like more information or an example of what was just said. It anticipates the needs of the user given the context of the conversation.
  • Harry Hindu
    5.8k
    Your error is conflating behavior and consciousness. Your argument is that if a machine acts like a human, it thinks like a human. The pragmatic Turing argument.apokrisis
    Well, yeah, P-Zombies will act differently than a human being because the causes of their behavior are different (no internal model of the world as the cause of one's behavior). AI acts differently not because it cannot think, but because it cannot act. It's just a language model in your computer, not a humanoid robot with senses like our own that interacts directly with the world and stores sensory information for future use (instructions for interpreting sensory data, or "understanding").
  • Pierre-Normand
    2.8k
    I’m not too fussed with making the psychological science conform to the Wittgenstein model.

    But I would note preparedness is also being ready ahead of time, knowing what to ignore. So meaning is also inaction. Meaning is what you don’t do as you have already dismissed it in advance.

    Again, this is a central fact of neurobiology that is quite absent from LLMs. The brain is set up on the basic principle of learning to ignore the world as much as possible, as almost everything about the world has already been predicted as being about to happen, or dismissed as unimportant if it does happen.

    The more we understand ahead of the moment, the less we need to figure out in the heat of any moment. The natural goal of a brain is to have zero response as that means it was completely successful in its desire to remain completely unsurprised by what the world could throw at it.

    This is the Bayesian Brain model of cognition. Hinton’s Helmholtz machine or even before that, Grossberg’s ART neural network architecture from the 1980s.

    So the AI community knows the architecture it would want to copy. And it knows LLMs ain’t it. The surprise is just how useful LLMs can be as a new technology if you are willing to scale their simple ability just to predict the next likely step when trained on a static data set.

    Living in a dynamical world in real time is quite another level of challenge
    apokrisis

    I’m fine with predictive coding together with precision-weighting as a story about the neural implementation of selective attention. But that's a sub-personal account. At the personal level, agents aren't filtering sense-data. They act within a normatively structured field of affordances where much of what is "ignored" never could even intelligibly show up as a reason. And note that LLMs already display strong task-relative ignoring without any sensorimotor loop. In transformers, attention is the mechanism (a soft precision filter over text at inference) that turns up words and phrases that help with the task and turns down the rest, while instruction/policy tuning supplies the sense of the task (i.e. the learned habits about which details should count as relevant in the first place).
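
    (For concreteness, here is a bare-bones sketch of that attention mechanism in Python/NumPy, with toy dimensions and random values rather than anything from a real model: each query token ends up with a normalised set of weights over the context, the "soft filter" doing the turning up and turning down.)

        import numpy as np

        # Toy scaled dot-product attention: a soft weighting over the context.

        def softmax(x, axis=-1):
            e = np.exp(x - x.max(axis=axis, keepdims=True))
            return e / e.sum(axis=axis, keepdims=True)

        def attention(Q, K, V):
            d = Q.shape[-1]
            scores = Q @ K.T / np.sqrt(d)        # relevance of each context token to each query
            weights = softmax(scores, axis=-1)   # turn up the helpful tokens, turn down the rest
            return weights @ V, weights          # output blends the value vectors by those weights

        rng = np.random.default_rng(0)
        tokens, d = 6, 8                         # six tokens, eight-dimensional embeddings
        Q, K, V = (rng.standard_normal((tokens, d)) for _ in range(3))
        out, w = attention(Q, K, V)
        print(w.round(2))                        # each row sums to 1: a soft filter over the context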

    So, yes, brains (and LLMs) are predictive of sensory inputs (and next tokens), but persons are practical reasoners, and assistants are policy-driven. Conflating levels makes it look as if cognition (in LLMs and humans!) just is prediction and we lose sight of why some affordances were rendered salient in preference to others. Better to say prediction is one very effective way brains implement an agent’s norm-guided engagement with an affordance-rich world, especially during execution of sensorimotor activity (including autonomic/endocrine coupling).

    The predictive story is fine as an efficiency account, but it explains the wrong kind of "ignoring." In chess, what I actually see are reasons for and against moves (pins, forks, weak squares), not the woodgrain of the bishop or the gloss of the board. Those latter features aren't "filtered inputs". They were never candidates because the game's norms make them irrelevant. The person-level task fixes what could even count as a signal. Only then do sub-personal mechanisms (attention, prediction) help track those task-relevant features. That's silence-by-commitment-to-norms rather than silence-by-prediction-over-sensory-inputs. In the case of LLMs, after delegated task selection and norm-governed deliberation have occurred, the task of executing in a dynamical world in real time is handed back to the embodied users who delegated parts of the cognitive task to begin with.
  • Harry Hindu
    5.8k
    But what kind of consciousness or sentience would you expect to discover if you could poke your own head into an LLM's world? Perhaps about the same as thrusting your head into an ant colony with all its busyness and remarkably coordinated behaviour, but little actual thinking, feeling, imagining or whatever we would consider being the phenomenology one might expect as a human scale subject living in our neural models of the world as we expect it to be and how we would wish it to become.apokrisis
    Don't you first need to solve the problem of why you can poke your head into someone else's brain and not see any consciousness or sentience at all - only the "remarkably coordinated behavior of neurons"? Your comments have way too many assumptions built into them. What makes neurons capable of thinking but silicon circuits not?