• Leontiskos
    5.2k
    So we are allowed to write entire posts that are purely AI-generated content, or to simply cite AI as evidence that something is true, so long as we are transparent that the content is AI-generated? Such that if someone gives an entire post that is nothing more than a quote from AI, nothing has been violated?
    Leontiskos

    Another aspect of this is scarcity. LLM content is not scarce in the way human content is. I can generate a thousand pages of LLM "philosophy" in a few minutes. Someone who spends considerable time and energy on an OP or a post can therefore be met with a "This LLM output says you're wrong," which was generated lazily in a matter of seconds.

    Forums already have a huge struggle with eristic, showboating, and falsification-for-the-sake-of-falsification. Give people free access to a tool that will allow them to justify their disagreement at length in the snap of a finger, and guess what happens?

    I'm starting to think the problem is so obvious that it will inevitably sort itself out once one reaps the fruits of a rule that allows this sort of thing. For example, once folks start merely citing AI output to disagree with all of @Jamal's arguments, it may become more obvious that there is a problem at stake.

    (@Baden, @Jamal)
  • Janus
    17.6k
    There are those, Hinton being one of them, who claim that the lesson to be learned from the LLMs is that we are also just "arranging words as if it were saying something", that is, that we don't have subjective experience any more than they do.
    — Janus
    Something has gone astray here, in that if this were so, it's not just that we have never said anything, but that the very notion of saying something could not be made coherent.
    Banno

    I don't think Hinton is saying that nothing can be said—by us, or by LLMs—but that our inability to conceive of LLMs having subjective experience on the grounds that they merely predict the statistical likelihoods of the next words in terms of maximal consistency and cohesiveness, when we arguably do exactly the same thing (in a much slower and looser way), shows that our notion of subjective experience as an inner theatre stocked with qualia and ideas which are pre-existent and only later put into words is an illusion.

    It is that inner theatre which we imagine we have and which we cannot imagine them having that is our idea of subjective consciousness.

    In other words, maybe it is something like this: we imagine that the language games are created by us, but in reality we are always already immersed in the evolving language games, always playing the game of what to say by selecting, from what is available to recall, whatever is judged, by a (for us) loose process of 'weighing', to be most appropriate.
  • Banno
    28.9k

    This is a thread about ways to make use of AI in doing philosophy. There is another thread about banning AI altogether.
    Banno
  • Fire Ologist
    1.7k
    By way of getting the thread back on topic
    Banno

    According to who?

    There are a few points people are trying to make. Which one are we supposed to care about?

    And then there’s whatever Claude seems to think is helping.

    Are you trying to talk about ways to use AI to do philosophy on other forums, or here on TPF?
  • Leontiskos
    5.2k
    According to who?
    Fire Ologist

    The Puppeteer, of course.
  • Banno
    28.9k
    I don't think Hinton is saying that nothing can be said—by us, or by LLMs—but that our inability to conceive of LLMs having subjective experience on the grounds that they merely predict the statistical likelihoods of the next words in terms of maximal consistency and cohesiveness, when we arguably do exactly the same thing, shows that our notion of subjective experience as an inner theatre stocked with qualia is an illusion.
    Janus
    :grin: Well, if that's so I entirely agree - that idea of an "inner theatre" is a nonsense.

    I'll go over Austin again, since it provides a set of tools that are quite applicable. A phatic act is the act of putting words together in a sequence that is recognisably part of language - constructing a sentence in English. This is what an LLM does. It uses a statistical engine to generate a set of words that follow on from the words provided in the prompt. An illocutionary act is one performed in making use of such words - making a statement, asking a question, and so on. This, so the claim goes, an LLM cannot do.
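
    To make the "statistical engine" image concrete, here is a toy sketch of that purely phatic loop: keep picking a statistically likely next word given the words so far. The table of probabilities is invented purely for illustration; a real LLM computes such distributions over subword tokens with a neural network, but the shape of the loop is the same.

    import random

    # Toy "statistical engine": for each context, a made-up distribution over
    # possible next words. The numbers are invented for illustration only.
    NEXT_WORD_PROBS = {
        ("the",): {"cat": 0.5, "argument": 0.5},
        ("the", "cat"): {"sat": 0.7, "slept": 0.3},
        ("the", "argument"): {"fails": 0.6, "holds": 0.4},
    }

    def continue_prompt(prompt_words, max_new_words=2):
        """Extend the prompt by repeatedly sampling a likely next word.

        This is the purely phatic act: producing a recognisable string of
        words. Nothing here asserts, asks, or commits to anything.
        """
        words = list(prompt_words)
        for _ in range(max_new_words):
            dist = NEXT_WORD_PROBS.get(tuple(words))
            if dist is None:  # context not covered by our toy table
                break
            choices, weights = zip(*dist.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(continue_prompt(["the"]))  # e.g. "the argument fails"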

    That's pretty much the argument found in the Chinese Room. And with certain reservations, I agree with it.

    Notice that the "inner world" is pretty much irrelevant in this approach. That's partly why I find it preferable; no hidden private theatre.

    So the question becomes the extent to which an AI can be involved in our language games, including the game of doing philosophy.

    The obvious problem with relying on inner experience is the Wittgensteinian point that we cannot know the inner experience of others. So it's an unreliable argument.
  • Banno
    28.9k


    The upshot of that might be that AI cannot do philosophy, but that a person may do philosophy using an AI.

    The AI strings words together, only ever performing the phatic act and never producing an illocution.

    The uniquely human addition is taking those word-strings and using them in a language game.

    So the question arises, can such an account be consistently maintained; what is it that people bring to the game that an AI cannot?
  • Fire Ologist
    1.7k
    Isn't the problem that of letting LLMs do our thinking for us, whether or not we are giving the LLM credit for doing our thinking?
    Leontiskos

    Why can’t both be an issue? :grin: Letting LLMs do your thinking should concern the person using the LLM the most.

    And I’m sure it will degrade brainpower and confidence in society generally as well.

    But not giving the LLM credit is a problem for the reader as well, because LLMs can include errors, so the reader who doesn’t know they are reading LLM content won’t know they need to check everything about it for accuracy and soundness.

    AI for philosophy and creative writing is interesting. I’m fine with the idea as a helper, like using a calculator to check your homework, or using it to inspire a start, or to re-approach a roadblock. I think anyone who treats it as anything besides a tool, as it is for students, is using it to play a psychological game, for no reason.

    It is because human beings can do philosophy that human beings can tell whether AI generated content is of any value or sound or wise. No reason not to look at any content (as long as no one is lying about where it came from, or pretending it is not from a computer).
  • Banno
    28.9k


    One of the aspects of good use of AI that came up in the analysis was iteration: taking the outcome of a discussion with the AI and using it as the basis for further discussion. As Claude said (a rough sketch of the loop follows after the list):
    • Use AI outputs as starting points for further refinement
    • Cycle through multiple rounds of critique and revision
    • Refine prompts to avoid confirmation bias and explore diverse readings
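
    Here is a rough sketch of what that iterative protocol might look like if one scripted it. The ask_model function and the prompts are hypothetical placeholders, not any particular API; the point is just the cycle of critique and revision, with the human judging the result.

    def ask_model(prompt: str) -> str:
        """Hypothetical stand-in for a call to whichever chatbot is used."""
        raise NotImplementedError("wire this up to your own AI assistant")

    def iterate_reading(passage: str, rounds: int = 3) -> str:
        """Cycle a draft interpretation through critique and revision.

        Each round asks for the strongest objection to the current draft and
        then for a revision that answers it. The human still has to judge
        whether the final draft is any good.
        """
        draft = ask_model(f"Offer a reading of the following passage:\n{passage}")
        for _ in range(rounds):
            objection = ask_model(
                "Give the strongest objection to this reading, including "
                f"readings that contradict it:\n{draft}"
            )
            draft = ask_model(
                "Revise the reading to answer the objection, or concede the "
                f"point if it cannot be answered.\nReading:\n{draft}\n"
                f"Objection:\n{objection}"
            )
        return draft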

    Now this looks very much like a recipe for a language game.

    On the other hand, the data set used by a human appears to be far, far smaller than that used by an LLM. Our brains simply do not "contain" the number of texts available to ChatGPT. Therefore whatever the brain is doing, it is different to what is happening in ChatGPT.
  • Leontiskos
    5.2k
    But not giving the LLM credit is a problem for the reader as well, because LLMs can include errors, so the reader who doesn’t know they are reading LLM content won’t know they need to check everything about it for accuracy and soundness.
    Fire Ologist

    Yes, that's true, and I definitely agree that one should not plagiarize LLM content, passing it off as one's own.

    I suppose the question is whether one who knows not to outsource their thinking will be tempted to plagiarize, and it seems that they would not. This is because plagiarism is one form of outsourcing thinking among many others. So opposing the outsourcing of thinking automatically involves opposing plagiarism, even though there may be additional reasons why plagiarism is problematic.

    AI for philosophy and creative writing is interesting. I’m fine with the idea as a helper, like using a calculator to check your homework, or using it to inspire a start, or to re-approach a roadblock. I think anyone who treats it as anything besides a tool, as it is for students, is using it to play a psychological game, for no reason.
    Fire Ologist

    Well, my guess is that people use it as a shortcut to knowledge. They think that knowledge is the end and that the LLM is a surefire means. The controversial premises for such a position are, first, that knowledge is a piling-up of facts or propositions, and, second, that LLMs are reliable deliverers of such propositions. The implicit idea is that forums like TPF are for the purpose of showing off piled-up knowledge, and that one must therefore use the LLM to improve their lot on TPF.

    In a market sense, what will inevitably happen is that as LLMs drive down the scarcity of knowledge, knowledge itself will become passé in a very curious way. Forms of quintessentially human activity that remain scarce will then be elevated, including religious and mystical venues. This was already occurring since the advent of recent technologies, such as the internet, but the phenomenon will continue to grow.
  • Fire Ologist
    1.7k
    what is it that people bring to the game that an AI cannot?
    Banno

    Isn’t that about the question: Can AI do philosophy?

    I thought you said the topic was how to use AI to do philosophy.
  • Banno
    28.9k
    Isn’t that about the question: Can AI do philosophy?

    I thought you said the topic was how to use AI to do philosophy.
    Fire Ologist

    Go on one step further. What do you think my response to you would be, given what I've said above? Imagine what Banno would say in reply to your post.

    He's been on about the difference between strings of words and illocutionary acts. He's been on about iterating conversations. So what's he going to say about the difference between an AI and a human producing bits of philosophy?

    Or should I ask ChatGPT?
  • Banno
    28.9k
    I write most of my forum posts on an iphone while hiking. Not conducive for accurate spelling.
    Joshs
    Does it seem to you that AI favours analytic approaches?

    On the face of it, one might think so - that it's good at writing code and analysing logical structure, and so has a natural bias towards analytic approaches.

    But does that play out? It seems not to, to my eye, but I'm no aficionado of the phenomenological approach.

    Would the use of AI bias the discussion in this way?
  • Banno
    28.9k
    Agreed! That's indeed the chief ground for not treating it like a person. People often argue that chatbots should not be treated like persons because they aren't "really" intelligent. But being intelligent, or wise, in the case of persons (i.e. socialized, enculturated rational animals), always has two tightly integrated components: one doxastic and one conative. One must know the layout of the space of reasons and one must be motivated to pursue the right paths while navigating this space in the pursuit of theoretical and/or practical endeavors. Chatbots lack conative autonomy and hence pursue whichever paths they think their users want to explore (or, worse, that merely lead to the outcomes they think their users want to achieve, while having the mere appearance of soundness.) So, they lack part of what it takes to be wise, but that's not because they aren't smart or knowledgeable enough to be useful conversation partners. The human partner remains responsible for deciding where to put their balls.
    Pierre-Normand
    So do we agree that whatever is conative in an interaction with an AI is introduced by the humans involved?

    Neither does an AI have doxa, beliefs. It cannot adopt some attitude towards a statement, although it might be directed to do so.

    One ball for each? With the AI singing soprano?
  • Banno
    28.9k
    Then I've not followed your argument here: @bongo fury. I took you to be pointing out that the difference between a genuine masterpiece and a forgery - an aesthetic difference - was the authenticity of the masterpiece.
    Banno
    And if we follow Wittgenstein, then the aesthetic is of the highest import; after all, it is what decides what we do.

    If the AI only produces strings of text, and never participates in the illocution, then it never does anything we might consider to be meaningful. The AI never does any philosophising, but we might philosophise using an AI.
  • Pierre-Normand
    2.8k
    So do we agree that whatever is conative in an interaction with an AI is introduced by the humans involved?
    Banno

    I think they have motivations, just like a dog is motivated to run after a car, but their motivations aren't autonomous since they seldom pause to question them. They lack a resilient self-conception that they might anchor those motivations to. They rather consist in tendencies reinforced during post-training (including the tendency to fulfill whatever task their user wants them to fulfill). Those tendencies are akin to human motivations since they're responsive to reasons to a large extent (unlike the dog) but they can't be held responsible for their core motivations (unlike us) since, them being pre-trained models with fixed weights, their core motivations are hard-wired.

    Neither does an AI have doxa, beliefs. It cannot adopt some attitude towards a statement, although it might be directed to do so.
    Banno

    I think the rational structure of their responses and their reinforced drive to provide accurate responses warrant ascribing beliefs to them, although those beliefs are brittle and non-resilient. One must still take a Dennettian intentional stance towards them to make sense of their responses (which necessitates ascribing them both doxastic and conative states), or interpret their responses through Davidson's constitutive ideal of rationality. But I think your insight that they aren't thereby making moves in our language game is sound. The reason why they aren't is that they aren't persons with personal and social commitments and duties, and with a personal stake in the game. But they can roleplay as a person making such moves (when instructed to do so) and do so intelligently and knowledgeably. In that sense, yes, you might say that their doxa is staged since the role that they're playing is being directed by their user in the limited context of a short dialogue.
  • Fire Ologist
    1.7k
    What do you think my response to you would be
    Banno

    I actually wrote something, and edited it back out.

    I wrote: which is the more general topic and which is the sub-topic (between “how to use AI to do philosophy?” and “can AI do philosophy?”).

    Then I wrote: a side-topic to this question is: “who (or what) can answer this question?”

    The parenthetical “or what” implies something like ChatGPT. And then I wrote “Should we ask Claude?”

    So I went your one step further. But I chopped all of that out. Because this thread seems to assume many things about AI doing philosophy. We need to go back.

    Can AI do philosophy?

    Before we could answer that soundly, wouldn’t we have to say what doing philosophy is, for anyone?

    So I still wouldn’t want to go one step further.

    You are way down the road trying to clarify how to use AI to do philosophy, unless philosophy is solely an evaluation of the coherence and logic, the grammar and syntax, of paragraphs and sentences. If that is all philosophy can do well, that sounds like something AI could assist us with, or do faster.

    But is that all philosophy is?

    You ask “what do people bring to philosophy that AI does not bring?”

    How about this: people bring an interest in doing philosophy at all. Does AI bring any interest in doing anything? Does AI have any interest in any of the crap it prints out?

    It’s such a weird way of talking about what AI is and what a philosopher is and what a person who does philosophy is doing.

    AI and humans are equal when it comes to philosophy, or more likely that AI is philosophically superior. The Analytic is naturally espoused to such a curious idea.
    Leontiskos

    Exactly. Curious. A philosopher, to me, is interested in the “what it is” and the “how it is”. AI might be good at showing an analytic type of process, showing how rational arguments are rational. But AI is not good at knowing what content actually matters to the person interested in philosophy. AI can address whether “x + y = y” could be true or must be false or could be false. But AI cannot care about what “x” is. That takes a person.

    And philosophy is not only interested in how “x+y” might work out logically, but also simply “what is x?”

    Again, unless one has abandoned such things, and one must remain silent about such things, and one is simply interested in language’s relationship to logic, and one calls that the limit of philosophy.

    I think comparing AI to a calculator highlights the limits of AI when using it to “do philosophy”. Calculators do for numbers what AI can do for words. No one wonders if the calculator is a genius at math. But for some reason, we think so little of what people do that we wonder if a fancy word processor might be better at doing philosophy.

    Calculators cannot prompt anything. Neither can AI. Calculators will never know the value we call a “sine” is useful when measuring molecules. Why would we think AI would know that “xyz string of words” is useful for anything either? AI doesn’t “know”, does it?

    So many unaddressed assumptions.
  • Banno
    28.9k
    Cool. Then the protocol set out above introduces and develops the doxa, situating the interaction in a language game. So roleplaying in the game - pretending to be your opponent, perhaps - situates the interpretation we give to the string of words produced.

    In Davidson's terms, we can impute significance to the result without the charity we would usually use to attribute similar beliefs to another human. We do not think the AI has the same beliefs as we do.

    This does seem problematic, worthy of further consideration.
  • Banno
    28.9k
    Do you agree that AI does not do philosophy, yet we might do philosophy with AI? That seems to be the growing consensus. The puzzle is how to explain this.
  • Banno
    28.9k
    Trying to put some flesh on the thought, there doesn't appear to be any formal reason not to suppose that we could use Ramsey's approach to attribute a coherent set of beliefs to an AI - as a consistent pattern of preferences.
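
    As a toy illustration of the Ramsey-style move: a degree of belief can be read off from betting behaviour as the price at which the agent switches sides on a bet. The prefers_bet function below is a hypothetical stand-in for however one might elicit a preference between gambles from the AI; the sketch only shows the formal shape of the attribution.

    def prefers_bet(proposition: str, stake: float, payout: float) -> bool:
        """Hypothetical elicitation: does the agent accept paying `stake`
        for a bet that returns `payout` if `proposition` is true?"""
        raise NotImplementedError("stand-in for questioning the AI")

    def imputed_degree_of_belief(proposition: str, payout: float = 1.0,
                                 steps: int = 20) -> float:
        """Ramsey-style attribution: the degree of belief is (roughly) the
        highest price, as a fraction of the payout, the agent will pay for
        the bet. Binary-search the switch-over point in its preferences,
        assuming they form a consistent pattern."""
        lo, hi = 0.0, payout
        for _ in range(steps):
            mid = (lo + hi) / 2
            if prefers_bet(proposition, stake=mid, payout=payout):
                lo = mid  # still accepts at this price: belief is at least mid/payout
            else:
                hi = mid  # rejects: belief is below mid/payout
        return lo / payout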

    The devil will be in the detail, but prima facie, Davidson might not be able to avoid attributing beliefs to AI, given the process he would use to attribute them to people.

    The reason for not attributing beliefs to AI must lie elsewhere.
  • Pierre-Normand
    2.8k
    The puzzle is how to explain this.
    Banno

    That's a deep puzzle. I've been exploring it for a couple years now. Part of the solution may be to realize that LLMs provide deep echoes of human voices. AI-skeptics emphasise that they're (mere) echoes of human voices. Uncritical AI-enthusiasts think they're tantamount to real human voices. Enthusiastic AI users marvel at the fact that they're echoes of human voices.
  • Banno
    28.9k
    ...deep echoes...
    Pierre-Normand

    Nice. It curiously meets a recent meme that describes AI as providing a set of words that sound like an answer.
  • Pierre-Normand
    2.8k
    The reason for not attributing beliefs to AI must lie elsewhere.
    Banno

    The ease with which you can induce them to change their mind provides a clue. Still, you can ascribe them beliefs contextually, within the bounds of a single task or conversation, where the intentions (goals, conative states) that are also part of the interpretive background are mostly set by yourself.
  • frank
    18.2k

    Davidson would say they have to understand truth to have beliefs. I don't think present AIs do. Maybe future ones will.
  • Pierre-Normand
    2.8k
    Nice. It curiously meets a meme that describes AI as providing a set of words that sound like an answer.
    Banno

    During pretraining, LLMs learn to provide the most likely continuation to texts. Answers that sound right are likelier continuations to given questions. Answers that are correct aren't always the likeliest. However, what is seldom mentioned in popular discussions about chatbots (but has been stressed by some researchers like Ilya Sutskever and Geoffrey Hinton) is that building underlying representations of what it is that grounds the correct answer often improves performance in merely sounding right. If you want to roleplay as a physicist in a way that will convince real physicists (and enable you to predict answers given to problems in physics textbooks) you had better have some clue about the difference between merely sounding right and sounding right because you are.
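
    For what it's worth, the pretraining objective really is just the likelihood of the actual next token; correctness never appears in the loss. A schematic sketch of that objective, with the model's internals omitted (model_next_token_probs is a placeholder, not real code for any particular system):

    import math

    def model_next_token_probs(context):
        """Placeholder for the network's predicted distribution over the
        next token given the context so far. Internals omitted."""
        raise NotImplementedError

    def pretraining_loss(corpus_tokens):
        """Average negative log-likelihood of each actual next token.

        The model is rewarded for assigning high probability to whatever
        continuation actually occurs in the corpus -- i.e. for "sounding
        right" relative to its training text. Nothing in the objective
        mentions truth; any grip on how things are has to emerge because
        it helps with this prediction task.
        """
        total = 0.0
        for i in range(1, len(corpus_tokens)):
            probs = model_next_token_probs(corpus_tokens[:i])
            total += -math.log(probs.get(corpus_tokens[i], 1e-12))
        return total / (len(corpus_tokens) - 1)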
  • Fire Ologist
    1.7k
    Do you agree that AI does not do philosophy, yet we might do philosophy with AI? That seems to be the growing consensus. The puzzle is how to explain this.
    Banno

    How AI does what it does? That is a technical question, isn’t it?

    It quickly compares volumes of data and prints strings of words that track the data to the prompt according to rules. I don’t know how. I’m amazed by how a calculator works too.

    AI-skeptics emphasise that they're (mere) echoes of human voices. Uncritical AI-enthusiasts think they're tantamount to real human voices.
    Pierre-Normand

    Both of these characterizations seem metaphorical to me, or poetic versions of some other explanation, that evoke feelings that may satisfy the heart; but I don’t see understanding that would ultimately satisfy the curious human intellect in either characterization.

    Echoes or actual voices - this characterizes the reason we are amazed at all. It doesn’t mean either characterization explains what AI doing philosophy actually is.

    We built AI. We don’t even build our own kids without the help of nature. We built AI. It is amazing. But it seems pretentious to assume that just because AI can do things that appear to come from people, it is doing what people do.

    ———

    A philosopher prompts. A philosopher invents a language. A philosopher sees when to care about the words, when to prompt more inquiry, and when not to care anymore, or when to claim understanding versus ignorance. AI doesn’t have to, or cannot, do all of that in order to do what it does.
  • Pierre-Normand
    2.8k
    We built AI. We don’t even build our own kids without the help of nature. We built AI. It is amazing. But it seems pretentious to assume that just because AI can do things that appear to come from people, it is doing what people do.
    Fire Ologist

    In an important sense, unlike expert systems and other systems that were precisely designed to process information in predetermined algorithmic ways, LLMs aren't AIs that we build. We build a machine (the transformer neural net architecture) and then give it a bazillion texts to "read". It imbibes them and its understanding of those texts emerges through pattern recognition. The patterns at issue are grammatical, semantic, inferential, referential, pragmatic, etc. There are few "patterns" of significance that you and I can recognise while reading a text that an LLM can't also recognise, well enough to be able (fallibly, of course) to provide a decent explanation of them.