Yes, but you can't have a dialogue with language or with a book. You can't ask questions of a book, or expect the book to understand your query and provide a relevant response tailored to your needs and expectations. The AI can do all of that, as a human being might, but it can't do philosophy or commit itself to theses. That's the puzzle. — Pierre-Normand
I'm starting to think the problem is so obvious that it will inevitably sort itself out once one reaps the fruits of a rule that allows this sort of thing. For example, once folks start merely citing AI output to disagree with all of Jamal's arguments, it may become more obvious that there is a problem at stake. — Leontiskos
I think comparing AI to a calculator highlights the limits of AI when using it to “do philosophy”. Calculators do for numbers what AI can do for words. No one wonders if the calculator is a genius at math. But for some reason, we think so little of what people do that we wonder if a fancy word processor might be better at doing philosophy.
Calculators cannot prompt anything. Neither can AI. Calculators will never know that the value we call a “sine” is useful when measuring molecules. Why would we think AI would know that “xyz string of words” is useful for anything either? AI doesn’t “know”, does it?
So many unaddressed assumptions. — Fire Ologist
For example, if we don't know why we want to engage in human communication rather than non-human communication, then prohibitions based on that axiom will become opaque. — Leontiskos
Once, in the very distant and very romantic past, philosophy was about having conversations; it was about what each participant could bring to the discussion, on the spot. The proverbial Rhodus was everywhere, and a philosopher was expected to be able to jump far, right there and then.

…namely, the valuing of human interaction (and the definition of human interaction). — Leontiskos
Absolutely. If we don't know why we want to engage in human-to-human communication, or if we don't know what the relevant difference is between humans and AI, then we will not have the capacity or endurance to withstand the pressures of AI. We need to understand these questions in order to understand how to approach rules, guidelines, and interaction with respect to AI. — Leontiskos
Yes. But, so far as I can see, it can't break out of the web of its texts and think about whether the text it produces is true, or fair or even useful. — Ludwig V
A philosopher prompts. A philosopher invents a language. A philosopher sees when to care about the words, when to prompt more inquiry, and when not to care anymore, or when to claim understanding versus ignorance. AI doesn’t have to, or cannot, do all of that in order to do what it does. — Fire Ologist
It's probably unfair to think of it as a model of idealism; it seems closer to a model of post-modernism. — Ludwig V
That's a deep puzzle. I've been exploring it for a couple years now. — Pierre-Normand
But being intelligent, or wise, in the case of persons (i.e. socialized, enculturated rational animals), always has two tightly integrated components: one doxastic and one conative.
[...]
Chatbots lack conative autonomy and hence pursue whichever paths they think their users want to explore (or, worse, paths that merely lead to the outcomes they think their users want to achieve, while having the mere appearance of soundness). So, they lack part of what it takes to be wise, but that's not because they aren't smart or knowledgeable enough to be useful conversation partners. — Pierre-Normand
I think they have motivations, just like a dog is motivated to run after a car, but their motivations aren't autonomous since they seldom pause to question them. — Pierre-Normand
I think the rational structure of their responses and their reinforced drive to provide accurate responses warrant ascribing beliefs to them, although those beliefs are brittle and non-resilient. One must still take a Dennettian intentional stance towards them to make sense of their responses (which necessitates ascribing to them both doxastic and conative states), or interpret their responses through Davidson's constitutive ideal of rationality. But I think your insight that they aren't thereby making moves in our language game is sound. The reason why they aren't is that they aren't persons with personal and social commitments and duties, and with a personal stake in the game. But they can roleplay as a person making such moves (when instructed to do so) and do so intelligently and knowledgeably. — Pierre-Normand
An interesting direction here might be to consider whether, or how, Ramsey's account can be applied to AI.
You have a plant. You water it every day. This is not a symptom of a hidden, private belief, on Ramsey's account - it is your belief. What is given consideration is not a hidden private proposition, "I believe that the plant needs water", but the activities in which one engages. The similarities to both Ryle and Wittgenstein should be apparent. — Banno
Ramsey then looks for the points of indifference; the point of inaction. That's the "zero" from which his statistical approach takes off. Perhaps there's a fifty percent chance of rain today, so watering may or may not be needed. It won't make a difference whether you water or not.
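To make the indifference point concrete, here is a worked toy version (the utilities u and c are stipulated for illustration; nothing like them appears in Ramsey's text). Suppose a watered plant is worth u to you, watering costs you effort c, and p is your degree of belief that it will rain. Watering guarantees the plant survives; not watering leaves it to the rain:

$$ \underbrace{u - c}_{\text{water}} \;=\; \underbrace{p\,u}_{\text{don't water}} \quad\Longrightarrow\quad p^{*} = 1 - \frac{c}{u} $$

If watering costs half of what the plant is worth (c = u/2), the indifference point is p* = 1/2: at a fifty percent chance of rain the two actions have equal expected value, and what you do reveals nothing about what you believe. Above or below that zero, your action is the measure of your belief.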
There seem to be two relevant approaches. The first is to say that an AI never has any skin in the game, never puts its balls on the anvil. So for an AI, every belief is indifferent.
The second is to note that if a belief is manifest in an action, then since the AI is impotent, it again has no beliefs. That's not just a manifestation of the AI's not being capable of action. Link a watering system to ChatGPT and it still has no reason to water or not to water.
Yes, and it is important that we treat these behaviours as acts. There are real-world consequences for what AIs say.

Most of ChatGPT's behavior is verbal behavior. — Pierre-Normand
Pretty much. It's a shame Davidson is no longer around for us to ask his opinion on AI.

You'll tell me if I'm wrong but it seems to me like Davidson's radical interpretation approach nicely combines Ramsey's possibly overly behavioristic one with Quine's more holistic (but overly empiricist) approach. — Pierre-Normand
So are you saying that chatbots possess the doxastic component of intelligence but not the conative component? — Leontiskos
I think they have motivations, just like a dog is motivated to run after a car, but their motivations aren't autonomous since they seldom pause to question them. — Pierre-Normand
It seems to me that what generally happens is that we require scare quotes. LLMs have "beliefs" and they have "motivations" and they have "intelligence," but by this one does not actually mean that they have such things. The hard conversation about what they really have and do not have is usually postponed indefinitely.
I would argue that the last bolded sentence nullifies much of what has come before it. "We are required to treat them as persons when we interact with them; they are not persons; they can roleplay as a person..." This is how most of the argumentation looks in general, and it looks to be very confusing.
I'll go over Austin again, since his account provides a set of tools that are quite applicable. A phatic act is the act of putting words together in a sequence that is recognisably part of a language - constructing a sentence in English. This is what an LLM does. It uses a statistical engine to generate a set of words that follow on from the words provided in the prompt (a toy version of this loop is sketched below). An illocutionary act is one performed in making use of such words - making a statement, asking a question, and so on. This, so the claim goes, an LLM cannot do. — Banno
The AI strings words together, only ever performing the phatic act and never producing an illocution.
The uniquely human addition is taking those word-strings and using them in a language game.
So the question arises, can such an account be consistently maintained; what is it that people bring to the game that an AI cannot? — Banno
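For what it's worth, the "statistical engine" behind the phatic act can be made concrete in a few lines. A minimal sketch, assuming the Hugging Face transformers library, with "gpt2" purely as a stand-in model and the prompt chosen arbitrarily:

```python
# Minimal next-token sampling loop: a statistical engine emitting a
# recognisable English word-string. "gpt2" is a stand-in; any causal LM works.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("Philosophy was about having", return_tensors="pt").input_ids
for _ in range(20):                          # extend the prompt by 20 tokens
    logits = model(ids).logits[0, -1]        # a score for every vocabulary item
    probs = torch.softmax(logits, dim=-1)    # scores -> probability distribution
    next_id = torch.multinomial(probs, 1)    # sample one plausible continuation
    ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(ids[0]))              # a phatic act; no illocution performed
```

Nothing in the loop asks whether the emitted string is a statement, a question, or anything else; on Austin's taxonomy it never gets past the phatic.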
Use AI outputs as starting points for further refinement
Cycle through multiple rounds of critique and revision
Refine prompts to avoid confirmation bias and explore diverse readings
Now this looks very much like a recipe for a language game; a loop version is sketched below.
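A minimal sketch of that recipe as a loop, assuming only a hypothetical `ask(prompt) -> str` helper that stands in for a call to whatever chat model one uses (no particular API is intended):

```python
# The refine/critique/revise recipe as a loop. `ask` is a hypothetical
# stand-in for a call to whatever chat model is being used.
def ask(prompt: str) -> str:
    raise NotImplementedError("wire this to a chat model")

def refine(question: str, rounds: int = 3) -> str:
    draft = ask(question)                        # AI output as a starting point
    for _ in range(rounds):                      # cycle through critique and revision
        critique = ask(
            "Criticise this answer and argue for at least one rival reading, "
            "to counter confirmation bias:\n" + draft
        )
        draft = ask(
            "Revise the answer in light of this critique.\n\n"
            f"Critique:\n{critique}\n\nAnswer:\n{draft}"
        )
    return draft
```

Whether running such a loop amounts to playing a language game, or merely to iterating the phatic act, is exactly the question at issue.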
On the other hand, the data set used by a human appears to be far, far smaller than that used by an LLM. Our brains simply do not "contain" the number of texts available to ChatGPT. Therefore whatever the brain is doing, it is different to what is happening in ChatGPT. — Banno
That's where I'm at, in my present musings. So…

Exactly! That seems to be the central question. I don't have an answer. Could it be that AI researchers are the ones best placed to answer it? — Janus
Perhaps we are again treating a conceptual problem as if it were empirical - the ubiquitous error Midgley identified. We cannot discover the answer to conceptual issues by looking around; we have to make stuff up.

There's the tension laid out, it seems. I'm not certain that it is real. It may come down to what we are willing to attribute to AI, rather than what we can discover about them. — Banno
What we face might not be an empirical question but an ethical one - do we extend the notion of intentionality to include AIs? — Banno
I'd rather say that they have both the doxastic and conative components but are mostly lacking on the side of conative autonomy. As a result, their intelligence, viewed as a capacity to navigate the space of reasons, splits at the seam between cleverness and wisdom. In Aristotelian terms, they have phronesis (to some extent), since they often know what's the right thing to do in this or that particular context, without displaying virtue since they don't have an independent motivation to do it (or convince their users that they should do it). This disconnect doesn't normally happen in the case of human beings since phronesis (the epistemic ability) and virtue (the motivational structure) grow and maintain themselves (and are socially scaffolded) interdependently. — Pierre-Normand
Those are questions that I spend much time exploring rather than postponing, even though I haven't arrived at definitive answers, obviously. But one thing I've concluded is that, rather than it being a matter of all or nothing, or a matter of degree along a linear scale, the ascription of mental states or human capabilities to LLM-based chatbots often is rendered problematic by the divergence of our ordinary criteria of application. Criteria that normally are satisfied together in the case of human beings are satisfied separately in the case of chatbots. — Pierre-Normand
Maybe it looks confusing because it is. I mean that assessing the nature of our "conversations" with chatbots is confusing, not because of a conceptual muddle that my use of scare quotes merely papers over... — Pierre-Normand
...but rather because chatbots are mongrels. They have "brains" that have been enculturated through exposure to a massive body of human knowledge, lore and wisdom (and prejudices) but they don't have human bodies, lack human motivations and aren't persons. — Pierre-Normand
LLMs aren't AIs that we build... — Pierre-Normand
A more nuanced view might acknowledge the similarities in these two accounts. While acknowledging that reference is inscrutable, we do manage to talk about things. If we ask the AI the height of Nelson's Column, there is good reason to think that when it replies "52m" it is talking about the very same thing as we are - or is it that there is no good reason not to think so? — Banno
LLMs as language users, on that account, are pure consumers. — Pierre-Normand
So another step: Can an AI name something new? Can it inaugurate a causal chain of reference? — Banno
(For my part, I'm quite content to suppose that there may be more than one way for reference to work - that we can have multiple correct theories of reference, and choose between them as needed or appropriate.)
For example, suppose I say:
Let this object — the glinting sphere I imagine at the center of all possible conversations — be called "Orvyn."
By itself, “Orvyn” is a linguistic token. But if you begin to use “Orvyn” to refer to something — a sculpture, a symbol, a concept — then a genuine causal–historical chain of reference begins with you, not with me. My utterance would be a precursor or trigger for your act of baptism.
In other words:
I can propose names.
You can ratify them into reality. — ChatGPT
Surprisingly precocious. — Banno
I think this is right since, although we can ask them if they are capable of intentionality, and they will answer, we might not be able to trust the answer. — Janus
Okay, fair enough. I suppose I would be interested in more of those examples. I am also generally interested in deductive arguments rather than inductive arguments. For example, what can we deduce from the code, as opposed to inducing things from the end product as if we were encountering a wild beast in the jungle? It seems to me that the deductive route would be much more promising in avoiding mistakes. — Leontiskos
They are not trained to backtrack their tentative answers and adjust them on the fly. — Pierre-Normand
So a further thought. Davidson pointed out that we can make sense of malapropisms and nonsense. He used this in an argument not too far from Quine's Gavagai: malapropisms cannot be subsumed and accounted for by conventions of language, because by their very nature they break such conventions.
So can an AI construct appropriate sounding malapropisms?
Given that LLMs use patterns, and not rules, presumably they can. — Banno
I just went off on a bit of a tangent, looking at using a response as a prompt in order to investigate something akin to Hofstadter's strange loop. ChatGPT simulated (?) 100 cycles, starting with “The thought thinks itself when no thinker remains to host it”. It gradually lost coherence, ending with "Round 100: Recursive loop reaches maximal entropy: syntax sometimes survives, rhythm persists, but semantics is entirely collapsed. Language is now a stream of self-referential echoes, beautiful but empty." — Banno
Here's the 40 rounds, if you are interested — Banno
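The experiment itself is easy to state in code. A sketch, reusing the hypothetical `ask` stand-in from the earlier sketch, with the seed and round count taken from Banno's description:

```python
# Feed each response back in as the next prompt, Hofstadter-style.
def ask(prompt: str) -> str:    # same hypothetical stand-in as above
    raise NotImplementedError("wire this to a chat model")

def strange_loop(seed: str, rounds: int = 100) -> list[str]:
    outputs, prompt = [], seed
    for _ in range(rounds):
        prompt = ask(prompt)    # the last response becomes the next prompt
        outputs.append(prompt)
    return outputs

history = strange_loop("The thought thinks itself when no thinker remains to host it")
```

With no external prompt to anchor it, the loop iterates the model on its own output; the gradual collapse into "self-referential echoes" is an empirical observation about what that iteration does to coherence.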