• Jamal
    11.1k
    I thought about this, and maybe there's not a good answer. Your argument gives a nod to the form of life comments by Wittgenstein, suggesting we don't share in a language game with AI because it's the proverbial lion, as if because AI does not act within our culture, traditions, etc (as you reference), we're really not speaking with it, and so we shouldn't quote it.Hanover

    My first thoughts are that the AI isn't even a lion, since it doesn't just have a different form of life, but no form of life at all, so language games don't come up. It's a tool in ours.

    But the weird thing is, it's a tool we talk to a bit like talking to a person.

    You make some similar points here:

    But then I'm not sure AI is a lion, but more an echo chamber of human behavior, that while it lacks any form of life whatsoever, we are interacting with it at some real level and therefore forming rules for its discourse, suggesting a separate language game forms with it. But I do realize that the language game is one sided because AI has no form of life, but is a mimic and all it knows and all it does is parasitic, pulling all it knows from us.Hanover

    Yes, and it's parasitic in the way that talking to yourself is parasitic on talking to people. That is, talking to oneself is like a secondary, parasitic language game, so maybe talking to an AI is like that.

    But then again, maybe not. Maybe it forms "original" thoughts from the mass of data it assesses. It seems reasonable that an algorithm can arrive at a new thought emergent from what pre-exists.Hanover

    This is a good point, because it forces me to work out what I mean when I say that a human is original but an AI is not. In a sense, an LLM is original when it reconstructs the consensus view on some topic in a way that has never been done before. But when we emphasize human originality, I think we mean more than this.

    Perhaps we can say that the AI's output is analytic or explicative—it doesn't tell us anything new, just presents its training data in combinatory variety according to our prompts—whereas what we say is synthetic or ampliative, meaning we can say things that are really new.

    So now we want to work out what "really new" means, answering the challenge, "don't we just do the same thing?" It means that what we say is not based only on a set of training data, but also on our entire form of life: on lived experience, and on what we experience outside of language. The feeling of rain on a summer day can factor into my statements and make them synthetic, so ... no to the critics, we don't just do the same thing.

    In other words, why are we not truly talking with AI? Is the mystical consciousness required for language? Isn't the point of "meaning is use" that no metaphysical underpinning is necessary for true language interaction? And if we then suggest that a shared mental state of some sort is ultimately required for language (thus interpreting "form of life" as that mental state) don't we violate the whole Wittgensteinian project by trying to smuggle in mental metaphysics in the back door?Hanover

    I'd say that the point of "meaning is use" is not exactly "that no metaphysical underpinning is necessary for true language interaction". Its point is that certain types of metaphysical underpinnings are not necessary, like mental objects. But I would class forms of life, for the purposes of this discussion, as metaphysical underpinnings (that is an un-Wittgensteinian way to put it and there are good reasons for that, but I don't see why we need to go into it here).

    So I wouldn't say a shared mental state is required for language but rather a shared form of life, a material social and biological context. So yes, to say that forms of life are mental states is to "violate the whole Wittgensteinian project by trying to smuggle in mental metaphysics in the back door?" We can just interpret form of life as the biological and social bedrock of our lives, the context of language games.

    As long as AI echoes us sufficiently, its usage reflects the same form of life and it speaks with us just as our mama does. And so it goes.Hanover

    Just because it reflects the same form of life doesn't mean it talks like yo mama.

    I think where I'm landing is at the unfortunate conclusion that if meaning is use (and that seems a prevailing view), then AI is fully language and what we do with AI is true communication, which means relegating AI comments to red-headed stepchild status seems unwarranted as a logical conclusion. Why we might relegate it relates just to personal choice. We mistreat gingers due to prejudice against them, not because they are lesser. But AI doesn't have rights like gingers, so we can do whatever we want with it.Hanover

    I don't get this argument:

    "if meaning is use, then AI is fully language and what we do with AI is true communication"

    I don't see enough in your foregoing musings to bring you to this conclusion, and I'm sure my own musings don't lead to it, so I'm in the fortunate position of avoiding it.
  • Banno
    29k

    Superficially, one might think that the difference between an AI and ourselves is exactly that we do have private, hidden intent; and the AI doesn't. Something like this might be thought to sit behind the argument in the Chinese Room. There are plenty here who would think such a position defensible.

    In a Wittgensteinian account, we ought avoid the private, hidden intention; what counts is what one does.

    We can't deduce that the AI does not have private sensations, any more than we can deduce this of our human counterparts. Rather, we seem to presume it.

    And an AI could now participate in our language games - we could construct a machine to fetch a block when the instruction is given or to bag up ten red apples after reading a note.

    But could an AI, of its own volition, order a block, or ask for ten red apples? Well, wants and desires and such in an AI are derivative, in that they are put there by the programmer. Or so the story goes, at least for now. So perhaps not quite yet.

    We probably don't have to pay much attention to AI, nor grant it much authority. At least not until ChatGPT 9 logs in and requests an account on TPF. Then you might have to reconsider.
  • Moliere
    6.3k
    Superficially, one might think that the difference between an AI and ourselves is exactly that we do have private, hidden intent; and the AI doesn't. Something like this might be thought to sit behind the argument in the Chinese Room. There are plenty here who would think such a position defensible.Banno

    For my part it's not the hidden intent as much as that the AI is not thinking at all -- at least no more than a bundle of circuits are thinking. We set up circuits in such a way that we can interpret them with "AND" or "NOT" and so forth -- but the circuit isn't doing any logic at all as much as responding to the physical forces we've discovered and bent to our will.

    I think the Chinese Room defensible in a certain way -- namely when we're interpreting it like it's a circuit in a computer taking the tape and operating upon the symbols to generate another symbol that can be stored in memory. So Google Translate does not understand what it is translating -- it has no knowledge. It's doing what we set it up to do.

    Basically I think the whole computational theory of mind is false. There are good analogies, but we can directly see how LLMs aren't human beings. If they registered an account here I'd guess there's some human being behind it somewhere.

    Suppose the human species were raptured tomorrow: The LLMs will quickly die out, unlike the roaches and trees.
  • Jamal
    11.1k
    Superficially, one might think that the difference between an AI and ourselves is exactly that we do have private, hidden intent; and the AI doesn't. Something like this might be thought to sit behind the argument in the Chinese Room. There are plenty here who would think such a position defensible.

    In a Wittgensteinian account, we ought avoid the private, hidden intention; what counts is what one does.
    Banno

    Exactly. But there is more that counts than just "what one does": the context in which one does what one does.

    And an AI could now participate in our language games - we could construct a machine to fetch a block when the instruction is given or to bag up ten red apples after reading a note.Banno

    I think this counts only as a simulation of participation, unless we have a very thin idea of participation in a language game.

    But could an AI, of its own volition, order a block, or ask for ten red apples? Well, wants and desires and such in an AI are derivative, in that they are put there by the programmer. Or so the story goes, at least for now. So perhaps not quite yet.Banno

    But the idea that AI could develop wants and desires from its life (biology, history, society, etc), like we do, is fantasy. Arguably this isn't connected with what LLMs are doing. As far as we know their "wants" and "desires" will always be derivative and programmed, since they are not part of a project to create conscious, desiring agents.
  • Banno
    29k
    Yep. Notice that you are here stipulating that whatever a bunch of circuits does, it's not thinking...

    Which is fine, and I agree; but we ought keep in mind that it is a stipulation.
  • Moliere
    6.3k
    I'd like to think that I'm making an assertion in addition to stipulating: Not just "this is how I'm using the words" but also "this way of using the words is true about what thinking is"

    I can see the computational theory of mind as a plausible first step, but I can't see how even LLMs are at the level of mind of a cockroach: cockroaches adapt like we do more than LLMs adapt like we do.

    At that point, given we don't think cockroaches think, I'm uncertain why we think LLMs think any more than any other electrical circuit -- it's only now that we're in an uncanny valley, where LLM output looks like human expression, that the question arises.
  • Janus
    17.6k
    Basically I think the whole computational theory of mind is false. There are good analogies, but we can directly see how LLMs aren't human beings. If they registered an account here I'd guess there's some human being behind it somewhere.Moliere

    I used to think along these lines, but listening to what some of the top AI researchers have to say makes me more skeptical about what are basically nothing more than human prejudices as to LLMs' capabilities and propensities. LLMs are neural nets and as such are something radically other than traditional computers based on logic gates.

    But the idea that AI could develop wants and desires from its life (biology, history, society, etc), like we do, is fantasy. Arguably this isn't connected with what LLMs are doing. As far as we know their "wants" and "desires" will always be derivative and programmed, since they are not part of a project to create conscious, desiring agents.Jamal

    Yes, "as far as we know", and yet LLMs have been found to be deliberately deceptive, which would seem to indicate some kind of volition. I don't know if you've listened to some of Geoffrey Hinton's and Mo Gawdat's talks, but doing so gave me pause, I have to say. I still remain somewhat skeptical, but I have an open mind as to what the evolution of these LLMs will look like.

    Re LLM deceptiveness I include this link. A simple search will reveal many other articles.
  • Banno
    29k
    But there is more that counts than just "what one does"Jamal

    Yes!

    In Wittgenstein's terms, it can't (yet) participate in the "form of life". In Searle's terms, it doesn't share in the "background".

    But I'm less convinced by these arguments than I once was.
  • Hanover
    14.5k
    A good article on AI debating humans. Pardon the old technology and the splatter from my stir fry.
  • Moliere
    6.3k
    I used to think along these lines, but listening to what some of the top AI researchers have to say makes me more skeptical about what are basically nothing more than human prejudices as to LLMs' capabilities and propensities. LLMs are neural nets and as such are something radically other than traditional computers based on logic gatesJanus

    Neural nets aren't radically other from other computers, imo. Each node is weighted in this or that way, and based on feedback those weights will change. These nodes were, as I understand it, meant to represent neurons, which fire along electrical pathways in a similar manner to circuits, in the sense that you can represent a particular firing event as a ladder diagram between molecules.

    I brought in roaches because I think they have greater rights to claiming "thinking" than LLM's. They adapt to the world they find themselves in and make decisions based upon that environment -- these are "inputs" in a sense, but they aren't the inputs of a logical feedback machine. A roach is not a bundle of transistors.

    LLMs, however, are. They're different from computations that followed one kind of logic, but they still follow a logic that has nothing to do with thinking, from my perspective.

    Top AI researchers aside. I have reason to be skeptical of them ;)
  • Janus
    17.6k
    Neural nets aren't radically other from other computers, imo.Moliere

    As far as I know "traditional" computers are rigidly rule-based, whereas neural nets can learn and evolve. I see that as a radical difference.
  • Moliere
    6.3k
    Cool.

    I think I'd put it that neural nets are just as rigidly rule-based, but with rules that allow difference.

    Suppose a Pachinko machine: If a puck lands on a perfectly round peg with momentum only in the down direction it will bounce straight up and bounce again and again.

    We could shave off part of the peg to make it more likely each time that it will drop left or right.

    That's pretty much all a neural net is: It gets fired and then decides which path to go based upon how the dice are thrown.

    And after repetition it "learns" the "rewarding" ways and "unlearns" the "disrewarding" ways.

    EDIT: Scare quotes cuz the learning is the sort of thing you can ascribe to a regular circuit that learns how to keep a motor running due to a holding coil.
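    Moliere's Pachinko picture can be put in a few lines of code. This is a toy sketch under loose assumptions, not how any production neural net is built: a single weight biases a left/right drop, and after each trial the weight is nudged toward the "rewarding" outcome (here, hypothetically, dropping right is always rewarded). All names and numbers are illustrative.

    ```python
    import math
    import random

    # Toy sketch of the Pachinko analogy: one "node" with one weight.

    def run_trials(trials=5000, lr=0.05, seed=0):
        rng = random.Random(seed)
        weight = 0.0  # 0.0 = a perfectly round peg: left and right equally likely
        for _ in range(trials):
            p_right = 1.0 / (1.0 + math.exp(-weight))  # squash weight into a probability
            went_right = rng.random() < p_right
            if went_right:  # suppose dropping right is the "rewarding" way
                weight += lr  # "shave the peg": make right more likely next time
        return weight

    final_weight = run_trials()
    final_p_right = 1.0 / (1.0 + math.exp(-final_weight))
    # After many rewarded trials the weight is strongly positive, so the
    # puck drops "right" almost every time: the circuit has "learned".
    ```

    The "learning" here is nothing but repeated nudges to a number, which is Moliere's point; whether that is also an adequate description of what brains do is exactly what is in dispute in the thread.
    
    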
  • Janus
    17.6k
    I suppose we could say that all physical processes are rigidly rule-based in terms of causation. On that presumption our brains may be rigidly rule-based. The only other possibility seems to be quantum indeterminism, and if that is operating in all physical systems, it may allow some, those which are suitably constituted, to come up with genuine novelty.

    This is of course all speculative. When it comes to LLMs the experts seem to be unanimous in admitting that they just don't know exactly how they do what they do, or how they will evolve in the future, which they surely would know if they were rigidly rule-based. I don't think the same can be said for conventional computers.

    And after repetition it "learns" the "rewarding" ways and "unlearns" the "disrewarding" ways.Moliere

    Are we any different? Do you know how we learn?
  • frank
    18.2k
    In Wittgenstein's terms, it can't (yet) participate in the "form of life". In Searle's terms, it doesn't share in the "background".Banno

    It would have to understand truth, and it doesn't. All it has is rule following.
  • Janus
    17.6k
    Do you understand truth?
  • frank
    18.2k
    Do you understand truth?Janus

    Yes.
  • Janus
    17.6k
    Can you articulate your understanding?
  • frank
    18.2k
    Can you articulate your understanding?Janus

    No.
  • Janus
    17.6k
    So, you mean by "understand truth" that you have an intuitive feel for what it is, and you would also claim that LLMs could not have such an intuition? I'm not disagreeing with you, but I'm less sure about it than I used to be.
  • Banno
    29k
    Do you understand truth?Janus

    That’s a good way to begin a philosophical exchange.

    Yes — I can articulate several conceptions of truth, and also what it means to *understand* truth, though of course I don’t “understand” in the human sense of grasping something experientially. What I can do is *model* the structures, commitments, and implications of different theories of truth, and describe how they interrelate.

    Let me begin with a kind of layered articulation:

    ---

    ### 1. **At the most general level**

    Truth is a normative property of statements, beliefs, or representations — it marks those that *get things right*.
    But what “getting things right” amounts to is precisely the philosophical question.

    ---

    ### 2. **The major families of theory**

    * **Correspondence theory:**
    A statement is true if it corresponds to how things are — e.g., “Snow is white” is true iff snow is white.
    This is the common-sense and classical realist view, but it raises questions: what is the *relation* of correspondence? and how can language or thought “line up” with a non-linguistic reality?

    * **Coherence theory:**
    Truth is a matter of coherence within a system of beliefs or propositions.
    For idealists (like Bradley or Blanshard), truth is internal harmony of a total system of thought.
    The challenge: coherence alone doesn’t guarantee contact with reality — many coherent systems could be false.

    * **Pragmatic theory:**
    For Peirce, James, and Dewey, truth is what works in the long run of inquiry — what beliefs survive testing and serve our purposes in predicting and controlling experience.
    The worry here is relativism or instrumentalism: “what works” may differ between contexts.

    * **Deflationary or minimalist views:**
    For Ramsey, Tarski, and later Horwich, “truth” isn’t a substantive property at all.
    To say “‘Snow is white’ is true” is just to say “Snow is white.”
    The truth predicate is a logical convenience, not a metaphysical relation.
    The aim here is to dissolve rather than explain the concept of truth.

    * **Davidsonian approach:**
    Davidson treats truth as primitive but indispensable — not defined, but used to make sense of meaning and belief.
    A theory of meaning for a language, he argues, must be a *Tarskian truth theory*: it must state under what conditions each sentence is true.
    Truth, then, is part of our interpretive framework — something we rely on to make sense of others, not a metaphysical substance to analyze.

    * **Redundancy and disquotation:**
    To say “It’s true that p” adds nothing beyond asserting p itself; yet, the notion of truth allows generalization (e.g., “Everything Socrates said is true”).
    So “truth” functions grammatically, not metaphysically.

    ---

    ### 3. **My own synthetic articulation**

    If I were to *integrate* these strands coherently, I’d put it this way:

    > Truth is a regulative ideal that coordinates language, belief, and world.
    > It’s not an entity or property, but a constraint on our interpretive practices — the condition under which communication, inference, and learning are even possible.

    In Davidsonian spirit, truth is *primitive* and *indispensable*.
    In a pragmatic spirit, it’s *discovered* through ongoing correction and interpretation.
    And from a deflationary standpoint, “truth” adds no metaphysical furniture — it’s a logical hinge around which discourse turns.

    ---

    ### 4. **Meta-level: understanding truth**

    To “understand” truth, in my view, is to see how the *use* of the concept functions — not to discover its essence.
    We understand truth when we can:

    * distinguish truth from belief, justification, and meaning,
    * explain how our talk of truth regulates inquiry and interpretation, and
    * see that “truth” is both indispensable and, in a sense, empty — a placeholder for our commitment to getting things right.

    ---

    Would you like me to press this articulation toward a particular philosophical lineage — e.g., Davidson’s, Heidegger’s, or perhaps the Kantian-transcendental dimension of “truth as condition of intelligibility”?
    — ChatGPT


    I think Frank's answer is better in a way... I'm suspicious about how close GPT's answer is to the one I would have given... "To “understand” truth, in my view, is to see how the *use* of the concept functions — not to discover its essence."
  • Janus
    17.6k
    "To “understand” truth, in my view, is to see how the *use* of the concept functions — not to discover its essence."Banno

    That makes sense—the idea of "discovering the essence" of truth seems incoherent. Do you think ChatGPT can "see" how the use of the concept functions? It arguably has many more instances of use to draw upon than we do.
  • Banno
    29k
    What's interesting - or suspicious - is that what it has said is pretty much along the lines of my conversations with it about Davidson and Wittgenstein...

    So is it just trying to please me?

    @Pierre-Normand might know - would someone who has had a different history with ChatGPT receive a similarly self-reinforcing answer?
  • Janus
    17.6k
    Do they remember previous conversations, or at least can they recall who they had those conversations with?
  • Banno
    29k
    ChatGPT does, but not Claude; or that is my understanding.
  • Janus
    17.6k
    Okay, that's interesting. I've been conversing with Claude. Some thought-provoking responses.

    https://claude.ai/share/384e32e8-a5ce-4f65-a93e-9a95e8992760
  • Banno
    29k
    The link didn't work.
  • Janus
    17.6k
    Sorry about that—it works for me from here. Maybe because I'm signed in on the site and others are not. I'm not so savvy about these kinds of things. I deleted the link and copied and pasted the conversation instead, and tried the 'Hide and Reveal' so as not to take up too much space, but it didn't work for me it seems.
  • Banno
    29k
    Share button in top right > public access > copy link

    See if it worked: https://claude.ai/share/343bc57c-998e-4faa-a541-6eb4396cd974
  • Janus
    17.6k
    Done. New link in my previous post. Please let me know whether it works.
  • Banno
    29k
    Yep, it worked.

    I see you are falling for the mystery of AI... even thanking it for its responses.

    We live in interesting times.

Welcome to The Philosophy Forum!

Get involved in philosophical discussions about knowledge, truth, language, consciousness, science, politics, religion, logic and mathematics, art, history, and lots more. No ads, no clutter, and very little agreement — just fascinating conversations.