frank
Understanding is no more internal than eating. It depends on some biological processes that happen under the skin, among other things that don't, but this doesn't license your appeals to the internal that you make with reference to perception and meaning. Synaptic transmission is no more meaningful than peristalsis. — Jamal
Hanover
"the [non-existent] present king of France," is a referent to an idea in your head. — Harry Hindu
"Public usage" as in using scribbles to point to objects and events in the world. If you are not pointing to anything with your scribbles that do not ultimately resolve down to things that are not scribbles (as in the case of "freedom" and "aboutness"), then it no longer qualifies as "public usage". It is "private usage". — Harry Hindu
NOS4A2
apokrisis
The Wittgensteinian approach (and I could be very wrong here, so please anyone chime in) does not suggest there is no internally recognized understanding of the word when the user uses it, but only that whatever that is lies beyond what can be addressed in language. That would mean that whatever "understanding" is amounts to our public criteria for it. — Hanover
Hanover
From the neurocognitive view, understanding means anticipation. Forming the right expectations. So if not meaning as demonstrated by use, then meaning demonstrated by preparedness.
I hear “apple”, I get ready to react accordingly. My attention is oriented in that particular direction. — apokrisis
apokrisis
It's very important to know the difference between an internal voice and an external one, or a real face and a hallucination. — frank
Hanover
For the simple reason that machines are not biological, they do not have similar structures, components, parts, or what have you, to any organism, let alone humans. If they do not have similar structures, they do not act in similar ways to humans. — NOS4A2
Hanover
Switch to an anticipatory-processing architecture that lives in the world in real time. — apokrisis
apokrisis
I think this is compatible with "meaning is use" as long as you're describing public manifestations. — Hanover
Hanover
So the AI community knows the architecture it would want to copy. — apokrisis
apokrisis
Doesn't it do this with auto-pilot airplanes and self-driven vehicles? — Hanover
Hanover
But we’ve been at AI for 70 years now and LLMs are as far as we have got. That should also tell you something. — apokrisis
apokrisis
So, where you say the AI community knows that LLMs can't do what they need it to, where is this documented? What is the cite for that? — Hanover
But I see our progress as tremendous, not minimal as maybe you're suggesting. — Hanover
apokrisis
If I have a pump that operates off of suction versus one off an impeller … Why must their output be declared of different types and categories simply because their unseen parts perform the intermediate tasks very differently? — Hanover
Doesn't it do this with auto-pilot airplanes and self-driven vehicles? — Hanover
hypericin
apokrisis
Pain for us seems intrinsically aversive, and is associated with avoidance and the other behaviors you mentioned. But then there are masochists. Do they experience inverted pain/pleasure? No, almost certainly they reinterpret the sensation of pain positively*. — hypericin
Hanover
The things involved and the movements they make are different. It’s like saying submarines swim. — NOS4A2
Hanover
Your error is conflating behavior and consciousness. Your argument is that if a machine acts like a human, it thinks like a human. The pragmatic Turing argument. — apokrisis
hypericin
apokrisis
The question is whether Z can result from method X or Y. Your argument is that it cannot because Z will necessarily be different if from X as opposed to Y. That doesn't follow. The same thing can arise from different processes. — Hanover
I accept it's entirely mimicry. I just don't see why it can't be done, and would be interested in some citations for that limitation, based upon your comment that this limitation is well known in the AI industry. — Hanover
frank
I'm only saying that it seems possible to create an AI system that works within a complex environment such that it must anticipate next events and therefore react as if human. — Hanover
apokrisis
Moreover, I even dispute the idea that AI is not embodied in the relevant sense. LLMs, like animals, receive stimulus and respond to it. It's just that the stimulus and response is all words in their case. The fact that this verbal "environment" they interact in is virtual, ungrounded in the material world, doesn't seem especially pertinent here. — hypericin
Pierre-Normand
To me there is a gap between behavior and internality. We are embodied creatures, and our behaviors and internal states are deeply intertwined. But this fact about us doesn't imply a necessary connection. — hypericin
Harry Hindu
It does create a referent to the cause of your utterance. Why did you utter anything? Effects carry information about their causes. Words carry information about the idea of the speaker and their intent to reference it with utterances.

Under this understanding, then so is the cat. That is, the cat is out there, the image is in here, and the reference is to the image in your head. And that is your metaphysical account, but that's not Wittgenstein's, because his isn't a metaphysical account. His is a grammatical account, describing how language operates within our forms of life, and attempts to use language to explain the metaphysical misunderstand the role of language.
If you want to refer to mental objects and qualia and whatnot, you're not forbidden from it, but I'd think he'd just assert that "qualia" is however you use the word. Your position seems to be that the utterance of any word creates a referent. — Hanover
Harry Hindu
This is just another way of saying that we have a set of instructions for interpreting sensory data. Else what is an anticipation or expectation? How can we anticipate or expect anything if we do not have some information stored internally?

From the neurocognitive view, understanding means anticipation. Forming the right expectations. So if not meaning as demonstrated by use, then meaning demonstrated by preparedness.
I hear “apple”, I get ready to react accordingly. My attention is oriented in that particular direction. — apokrisis
Harry Hindu
AI already does just that. ChatGPT typically ends with asking the user if they would like more information or an example of what was just said. It anticipates the needs of the user given the context of the conversation.

Again, this is about cognition being about anticipation-based processing. Forming expectancies that intercept the unfolding of the world even before it happens. We know it is us thinking our thoughts because we form the motor patterns that already prime our sensory circuits that we should be hearing exactly these words in our heads. But when someone else speaks, it feels different as we are having to guess what might be said, and assimilate that to what actually gets said.
So that is the goal for AI that goes beyond just LLMs. Switch to an anticipatory-processing architecture that lives in the world in real time. — apokrisis
Harry Hindu
Well, yeah, P-Zombies will act differently than a human being because the causes of their behavior are different (no internal model of the world as the cause of one's behavior). AI acts differently not because it cannot think, but because it cannot act. It's just a language model in your computer, not a humanoid robot with senses like our own that interacts directly with the world and stores sensory information for future use (instructions for interpreting sensory data, or "understanding").

Your error is conflating behavior and consciousness. Your argument is that if a machine acts like a human, it thinks like a human. The pragmatic Turing argument. — apokrisis
Pierre-Normand
I’m not too fussed with making the psychological science conform to the Wittgenstein model.
But I would note preparedness is also being ready ahead of time, knowing what to ignore. So meaning is also inaction. Meaning is what you don’t do as you have already dismissed it in advance.
Again, this is a central fact of neurobiology that is quite absent from LLMs. The brain is set up on the basic principle of learning to ignore the world as much as possible, as almost everything about the world has already been predicted as being about to happen, or dismissed as unimportant if it does happen.
The more we understand ahead of the moment, the less we need to figure out in the heat of any moment. The natural goal of a brain is to have zero response as that means it was completely successful in its desire to remain completely unsurprised by what the world could throw at it.
This is the Bayesian Brain model of cognition. Hinton’s Helmholtz machine or even before that, Grossberg’s ART neural network architecture from the 1980s.
So the AI community knows the architecture it would want to copy. And it knows LLMs ain’t it. The surprise is just how useful LLMs can be as a new technology if you are willing to scale their simple ability just to predict the next likely step when trained on a static data set.
Living in a dynamical world in real time is quite another level of challenge. — apokrisis
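To make the contrast concrete, here is a minimal sketch of the anticipatory idea (my own toy illustration, not anything drawn from the Bayesian Brain literature, Grossberg's ART, or an actual LLM; the function name, learning rate and threshold are invented for the example). The agent keeps a running prediction of its input and only reacts when the prediction error, the surprise, is large enough to matter:

def predictive_agent(stream, learning_rate=0.1, surprise_threshold=0.5):
    # Running expectation of what the next observation should be.
    prediction = 0.0
    for observation in stream:
        error = observation - prediction          # prediction error, i.e. "surprise"
        if abs(error) > surprise_threshold:       # only surprising input demands a response
            print(f"react: expected {prediction:.2f}, got {observation:.2f}")
        prediction += learning_rate * error       # update expectations for next time
    return prediction

# A world that is mostly as expected produces almost no reactions:
predictive_agent([1.0] * 20 + [5.0] + [1.0] * 20)

Run on a stream that is almost always what it expects, the agent ends up reacting to almost nothing, which is the "zero response" goal described above. The next-token objective of an LLM is the reverse emphasis: it is trained offline on a static data set to always produce the next likely step, rather than to live quietly inside its own predictions.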
Harry Hindu
Don't you first need to solve the problem of why you can poke your head into someone else's brain and not see any consciousness or sentience at all - only the "remarkably coordinated behavior of neurons"? Your comments have way too many assumptions built into them. What makes neurons capable of thinking but silicon circuits not?

But what kind of consciousness or sentience would you expect to discover if you could poke your own head into an LLM's world? Perhaps about the same as thrusting your head into an ant colony with all its busyness and remarkably coordinated behaviour, but little actual thinking, feeling, imagining or whatever we would consider being the phenomenology one might expect as a human scale subject living in our neural models of the world as we expect it to be and how we would wish it to become. — apokrisis