Pierre-Normand
Showmee
Pierre-Normand
Regardless of how “human” large language models may appear, they remain far from genuine artificial intelligence. More precisely, LLMs represent a dead end in the pursuit of artificial consciousness. Their responses are the outcome of probabilistic computations over linguistic data rather than genuine understanding. When posed with a question, models such as ChatGPT merely predict the most probable next word, whereas a human truly comprehends the meaning of what she is saying. — Showmee
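For concreteness, here is a minimal sketch of what "predicting the most probable next word" amounts to mechanically. It assumes the Hugging Face transformers library and the small, openly available gpt2 checkpoint rather than any particular ChatGPT model, and the prompt is invented for illustration:

```python
# Minimal sketch of next-token prediction with an open checkpoint (gpt2),
# assuming the Hugging Face "transformers" library is installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The detective gathered everyone in the parlour and said the murderer was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The final position holds the model's distribution over the next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(probs, k=5)

for p, i in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(i)!r:>12}  p={p.item():.3f}")
```

Whether doing this well over a long, intricate context requires sensitivity to plot structure and to characters' states of mind, or only to surface linguistic patterns, is precisely what the rest of the discussion disputes.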
frank
Pierre-Normand
Superficially, one might think that the difference between an AI and a human is exactly that we do have private, hidden intent, and the AI doesn't. Something like this might be thought to sit behind the argument in the Chinese Room. There are plenty here who would think such a position defensible.
In a Wittgensteinian account, we ought to avoid the private, hidden intention; what counts is what one does.
We can't deduce that the AI does not have private sensations, any more than we can deduce this of our human counterparts. Rather, we seem to presume it. — Banno
Pierre-Normand
We don't know how the human mind works. Is there something special about the human hardware, something quantum for instance, that is key to consciousness? Or is it all in the organic "software"?
So how do we examine the question with a large chunk of information missing? How do you look at it? — frank
frank
It simply is a living body embedded in a natural and social niche. — Pierre-Normand
Metaphysician Undercover
In order for the model to produce this name as the most probable next word, it has to be sensitive to relevant elements in the plot structure, distinguish apparent from real clues, infer the states of mind of the depicted characters, etc. Sutskever's example is hypothetical but can be adapted to any case where LLMs successfully produce a response that can't be accounted for by mere reliance on superficial and/or short-range linguistic patterns. — Pierre-Normand
Pierre-Normand
This is not true. To predict the name of the murderer in the novel does not require that the LLM do any of that. It requires only that the LLM is able to predict the habits of the author. — Metaphysician Undercover
J
I’m drawing on Rödl’s Kantian distinction between knowledge from receptivity and knowledge from spontaneity. Empirical knowledge is receptive: we find facts by observation. But avowals like "I believe…" or "I intend…" are paradigms of spontaneous knowledge. We settle what to believe or do, and in settling it we know it not by peeking at a private inner state but by making up our mind (with optional episodes of theoretical or practical deliberation). — Pierre-Normand
Pierre-Normand
"I believe" and "I intend" are convenient examples to support this position, because they have no "content" apart from a kind of imprimatur on decision or action. But most mental life will not fit such an example. When I imagine a purple cow, I am, precisely, peeking at a private inner state to discover this. A (mental) purple cow is not a belief or an intention. It is an image of a purple cow. I've never understood how the Wittgensteinian public-criteria position can address this. What conceivable public criterion could there be that would tell me whether you are, at this moment, imagining a purple cow? (assuming you remain silent about it). — J
Harry Hindu
It seems to me that a starting point would be to define the terms we are using: "intelligence", "intent", "understand", "thought", etc.
Superficially, one might think that the difference between an AI and a human is exactly that we do have private, hidden intent, and the AI doesn't. Something like this might be thought to sit behind the argument in the Chinese Room. There are plenty here who would think such a position defensible. — Banno
J
However, in the case of a normally sighted person, how do you (or they) check that the purple cow that they are imagining is indeed imagined to be purple? — Pierre-Normand
Notice that nothing I've said about the public criteria on which the determination of the content of acts of imagination depends impugns the notion that the person imagining them has first person authority. She's the one to be believed when she claims that the cow she imagines looks "like that" while pointing at the public sample. — Pierre-Normand
I'll come back to the issues of public criteria for intentions, as they may apply to LLMs, later. — Pierre-Normand
Hanover
Astorre
Metaphysician Undercover
If the chatbot tells you who the murderer might be, and explains to you what the clues are that led it to this conclusion, and the clues are being explicitly tied together by the chatbot through rational chains of entailment that are sensitive to the significance of the clues in the specific narrative context, can that be explained as a mere reproduction of the habits of the author? What might such habits be? The habit to construct rationally consistent narratives? You need to understand a story in order to construct a rationally consistent continuation to it, I assume. — Pierre-Normand
Look at this Einstein riddle. Shortly after GPT-4 came out, I submitted it to the model and asked it to solve it step by step. It was thinking about it quite systematically and rationally but was also struggling quite a bit, making occasional small inattention mistakes that compounded and led it into incoherence. Repeating the experiment led it to approach the problem differently each time. If any habits of thought manifested by the chatbot were mere reproductions of the habits of thought of the people who wrote its training texts, they'd be general habits of rational deliberation. Periodically, I assessed the ability of newer models to solve this problem, and they were still struggling. The last two I tried (OpenAI o3 and Gemini 2.5 Pro, I think) solved the problem on the first try. — Pierre-Normand
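By way of contrast, here is how a conventional program settles a puzzle of this kind, by exhaustively checking candidate assignments against the clues. The three-house puzzle and its clues below are invented for illustration and are far simpler than the actual Einstein riddle:

```python
# Brute-force solver for a toy, made-up three-house puzzle in the style of the
# Einstein/zebra riddle. The clues are invented for illustration only.
from itertools import permutations

people = ["Ada", "Ben", "Cara"]
drinks = ["tea", "coffee", "milk"]
pets = ["dog", "cat", "fish"]

# Houses are positions 0 (left), 1 (middle), 2 (right).
for who in permutations(people):
    for drink in permutations(drinks):
        for pet in permutations(pets):
            if who[0] != "Ada":                     # 1. Ada lives in the leftmost house.
                continue
            if drink[1] != "coffee":                # 2. The coffee drinker lives in the middle house.
                continue
            if pet[who.index("Ben")] != "dog":      # 3. Ben keeps the dog.
                continue
            if pet[drink.index("tea")] != "fish":   # 4. The tea drinker keeps the fish.
                continue
            if pet[2] != "cat":                     # 5. The cat lives in the rightmost house.
                continue
            for house in zip(who, drink, pet):
                print(house)                        # the unique consistent assignment
```

The interest of the LLM case is that it has no such dedicated search procedure; it has to hold the constraints together across its step-by-step verbal working-through, which is where the small inattention mistakes mentioned above can compound.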
baker
Pierre-Normand
This is wrong, because if the LLM was trained on ordinary news texts, then this is also where it could learn about self-preservation. — baker
Pierre-Normand
You are changing the description now. Before, the description had the chatbot come up with a "name as the most probable next word". Now, the chatbot comes up with "who the murderer might be". Do you see the difference here? In the first case, you are talking about words, symbols, the "name". In the second case, you are talking about what the symbol stands for, "who". — Metaphysician Undercover
You need to understand a story in order to construct a rationally consistent continuation to it, I assume.
— Pierre-Normand
I don't think that's a correct assumption. All you need to be able to do is to carry on with the author's activity in a consistent way. One does not need to "understand the story" to produce a rationally consistent continuation of it. We have very good examples of this with human activities. When a person says "I am just a cog in the wheel", they are continuing the activity in a consistent way, without understanding what they are doing.
Sorry, I don't see the relevance. You'd have to explain how you think that this is relevant.
Pierre-Normand
I think my answer is that AI has no soul, and that's why it's not a person. I'm satisfied going mystical. — Hanover
Pierre-Normand
It's time to think not about neural networks, but about the internet garbage we consume and where we draw our inspiration. — Astorre
Pierre-Normand
I think this is the wrong question, though it's invited by the way I framed the problem. Better to have said, "What conceivable public criterion could there be that would tell me whether you are, at this moment, imagining what you believe to be a purple cow?" The point is not the accuracy of the image -- indeed, you may have got purple all wrong, or cows -- but the inaccessibility of the 1st person experience. — J
Metaphysician Undercover
Yes, you can do that, but the result of doing it is qualitatively (and measurably) different from what it is that LLMs do when they are prompted to impersonate a novelist or a physicist, say. An analogy that I like to employ is an actor who plays the role of J. Robert Oppenheimer in a stage adaptation of the eponymous movie (which I haven't yet seen, by the way!). If the actor has prepared for the role by reading lots of source material about Oppenheimer's life and circumstances, including his intellectual trajectory, but never studied physics at a level higher than middle school, say, and has to improvise when facing an unscripted question about physics asked by another actor who portrays a PhD student, he might be able to improvise a sciency-sounding soundbite that will convince those in the audience who don't know any better. Many earlier LLMs, up to GPT-3.5, often improvised/hallucinated such "plausible"-sounding answers to questions that they manifestly didn't understand (or misunderstood in funny ways). In order to reliably produce answers to unscripted questions that would be judged to be correct by PhD physicists in the audience, the actor would need to actually understand the question (and understand physics). That's the stage current LLMs are at (or very close to). — Pierre-Normand
It's relevant to displaying an LLM's successful deployment, with intelligent understanding, of its "System 2" thinking mode: one that is entirely reliant, at a finer grain of analysis, on its ability to generate not just the more "likely" but also the more appropriate next tokens one at a time. — Pierre-Normand
frank
Is mind a necessary condition for meaning? — RogueAI
Pierre-Normand
Even with our embeddedness taken into consideration, we still don't have a working theory of consciousness which we could use to assess AIs. Do we forge ahead using philosophical attitudes instead? — frank
Second question: analog-to-digital technology is relatively advanced at this time. If a system included an LLM, sight, hearing, pressure sensing, some robotic capability, and someone to talk to, do you think it would then be more likely to develop human-like sapience?
frank
In their book The Philosophical Foundations of Neuroscience (which has no fewer than five chapters on consciousness!), Peter Hacker and Maxwell Bennett (though it's mainly Hacker who wrote those parts) argue that philosophical inquiry into mentalistic concepts must come before their scientific investigation. My view is a bit less extreme, but I think both can go hand in hand. Our being able to duplicate some aspects of cognition in LLMs furnishes another tool for inquiry. — Pierre-Normand
By means of interoception and homeostatic regulation, the organism is continuously estimating and correcting its distance from viable conditions of life. — Pierre-Normand
This set of integrated regulative systems does not just furnish "emotional" experiences but also shapes what counts for us as a reason, what feels urgent, and which affordances even show up for us as intelligible in our environment. — Pierre-Normand
So, yes, you can add cameras, microphones, pressure sensors, and a mechanical body, and you get richer sensorimotor loops. But without a comparable system of interoceptive feedback and bodily stakes, where regulation of a living body constrains what matters to the system, the result is at best a proficient controller (like a tireless hyperfocused clothes-folding Optimus robot), not human-like sapience/sentience. — Pierre-Normand
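To make the contrast concrete, here is a toy sketch of the kind of interoceptive loop being described, in which deviation from a viable set point determines what shows up as salient. Every variable, threshold, and "affordance" in it is invented for illustration; the passage's point is that nothing comparable constrains what matters to an LLM:

```python
# Toy illustration (not a model of sentience): a homeostatic loop in which the
# error on an internal variable determines which external "affordance" is
# currently salient. All names and thresholds here are invented.
import random

SET_POINT = 37.0      # desired core temperature
VIABLE_BAND = 1.5     # deviations beyond this demand action

def salient_affordance(error: float) -> str:
    """Map the interoceptive error onto what currently 'matters'."""
    if abs(error) <= VIABLE_BAND:
        return "explore"                       # nothing urgent; other goals can surface
    return "seek warmth" if error < 0 else "seek shade"

temperature = 36.0
for step in range(10):
    temperature += random.uniform(-0.8, 0.8)   # environmental perturbation
    error = temperature - SET_POINT
    action = salient_affordance(error)
    # Acting reduces the deviation (a crude stand-in for behaviour plus physiology).
    if action == "seek warmth":
        temperature += 0.6
    elif action == "seek shade":
        temperature -= 0.6
    print(f"t={step}: temp={temperature:.1f}, error={error:+.1f}, salient: {action}")
```

Even this caricature makes the structural point: what the system attends to is driven by the state of a body it must keep within viable bounds, not merely by a prompt.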
Pierre-Normand
I disagree with this assessment. Not only is it possible to create a system that is intimately responding to and organizing its environment, we've long since accomplished that in telephony, which is governed by computers. If that kind of connection to the environment creates human-like sapience, we did it in the 1960s. — frank
sime
In order to fully dislodge the Cartesian picture, which Searle's internalist/introspective account of intentionally contentful mental states (i.e. states that have intrinsic intentionality) indeed seems not to have fully relinquished, an account of first person authority must be provided that is consistent with Wittgenstein's (and Ryle's and Davidson's) primary reliance on public criteria. — Pierre-Normand
Theoretical sentences: Describe things not directly observable, such as "Atoms are the basic building blocks of matter". They require complex background knowledge and cannot be verified by a simple, direct observation.
Observation categoricals: Sentences that involve a relationship between two events, often derived from theory and hypothesis together, such as "When the sun comes up, the birds sing".
Occasion sentences: Sentences that are sometimes true and sometimes false, like "It is raining". An observation sentence can also be an occasion sentence, as "It is cold" is true on some occasions and false on others.
"Myth of the museum" sentences: Traditional view of language where sentences are like labels for pre-existing meanings, which Quine rejects because it assumes meanings exist independently of observable behavior.
frank
I was not arguing that this was impossible. I was sort of cataloguing all of the different ways in which the organism and its natural and social environment need to be tightly integrated (and the subsystems themselves need to be integrated together) in order that meaningful and contentful sapience and sentience emerge. — Pierre-Normand