• Leontiskos
    5.2k
    So we are allowed to write entire posts that are purely AI-generated content, or to simply cite AI as evidence that something is true, so long as we are transparent that the content is AI-generated? Such that if someone gives an entire post that is nothing more than a quote from AI, nothing has been violated? — Leontiskos

    Another aspect of this is scarcity. LLM content is not scarce in the way human content is. I can generate a thousand pages of LLM "philosophy" in a few minutes. Someone who spends considerable time and energy on an OP or a post can therefore be met with another poster's "This LLM output says you're wrong," generated lazily in a matter of seconds.

    Forums already have a huge struggle with eristic, showboating, and falsification-for-the-sake-of-falsification. Give free access to a tool that will allow them to justify their disagreement at length in the snap of a finger, and guess what happens?

    I'm starting to think the problem is so obvious that it will inevitably sort itself out once one reaps the fruits of a rule that allows this sort of thing. For example, once folks start merely citing AI output to disagree with all of @Jamal's arguments, it may become more obvious that there is a problem at stake.

    (@Baden, @Jamal)
  • Janus
    17.6k
    There are those, Hinton being one of them, who claim that the lesson to be learned from the LLMs is that we are also just "arranging words as if it were saying something", that is, that we don't have subjective experience any more than they do.
    — Janus
    Something has gone astray here, in that if this were so, it's not just that we have never said anything, but that the very notion of saying something could not be made coherent.
    — Banno

    I don't think Hinton is saying that nothing can be said—by us, or by LLMs. Rather, he is saying that our inability to conceive of LLMs having subjective experience, on the grounds that they merely predict the statistical likelihoods of the next words in terms of maximal consistency and cohesiveness, when we arguably do exactly the same thing (in a much slower and looser way), shows that our notion of subjective experience as an inner theatre stocked with qualia and ideas which are pre-existent and only later put into words is an illusion.

    It is that inner theatre which we imagine we have and which we cannot imagine them having that is our idea of subjective consciousness.

    In other words, maybe it is something like this: we imagine that the language games are created by us, but in reality we are always already immersed in the evolving language games, always playing the game of what to say by selecting, from what is available to recall, whatever is judged most appropriate according to a (for us) loose process of 'weighing'.
  • Banno
    28.9k

    This is a thread about ways to make use of AI in doing philosophy. There is another thread about banning AI altogether. — Banno
  • Fire Ologist
    1.7k
    By way of getting the thread back on topic — Banno

    According to who?

    There are a few points people are trying to make. Which one are we supposed to care about?

    And then there’s whatever Claude seems to think is helping.

    Are you trying to talk about ways to use AI to do philosophy on other forums, or here on TPF?
  • Leontiskos
    5.2k
    According to who? — Fire Ologist

    The Puppeteer, of course.
  • Banno
    28.9k
    I don't think Hinton is saying that nothing can be said—by us, or by LLMs. Rather, he is saying that our inability to conceive of LLMs having subjective experience, on the grounds that they merely predict the statistical likelihoods of the next words in terms of maximal consistency and cohesiveness, when we arguably do exactly the same thing, shows that our notion of subjective experience as an inner theatre stocked with qualia is an illusion. — Janus
    :grin: Well, if that's so, I entirely agree - that idea of an "inner theatre" is a nonsense.

    I'll go over Austin again, since it provides a set of tools that are quite applicable. A phatic act is the act of putting words together in a sequence that is recognisably part of language - constructing a sentence in English. This is what an LLM does. It uses a statistical engine to generate a set of words that follow on from the words provided in the prompt. An illocutionary act is one performed in making use of such words - making a statement, asking a question, and so on. This, so the claim goes, an LLM cannot do.
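    To make the "statistical engine" picture concrete, here is a toy sketch in Python: a bigram model that picks each next word from observed frequencies. The tiny corpus and the names here are illustrative assumptions, and real LLMs use neural networks over tokens rather than word counts, but the output is phatic in just this sense - word-strings with no illocutionary force.

    import random
    from collections import Counter, defaultdict

    # Toy corpus; an LLM's training set is vastly larger.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    # Continue a one-word "prompt" by repeatedly sampling the next
    # word in proportion to its observed frequency.
    word = "the"
    sequence = [word]
    for _ in range(8):
        candidates = follows[word]
        if not candidates:  # dead end: no observed continuation
            break
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts)[0]
        sequence.append(word)
    print(" ".join(sequence))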

    That's pretty much the argument found in the Chinese Room. And with certain reservations, I agree with it.

    Notice that the "inner world" is pretty much irrelevant in this approach. That's partly why I find it preferable; no hidden private theatre.

    So the question becomes the extent to which an AI can be involved in our language games, including the game of doing philosophy.

    The obvious problem with relying on inner experience is the Wittgensteinian point that we cannot know the inner experience of others. So it's an unreliable argument.
  • Banno
    28.9k


    The upshot of that might be that AI cannot do philosophy, but that a person may do philosophy using an AI.

    The AI strings words together, only ever performing the phatic act and never producing an illocution.

    The uniquely human addition is taking those word-strings and using them in a language game.

    So the question arises: can such an account be consistently maintained? What is it that people bring to the game that an AI cannot?
  • Fire Ologist
    1.7k
    Isn't the problem that of letting LLMs do our thinking for us, whether or not we are giving the LLM credit for doing our thinking? — Leontiskos

    Why can’t both be an issue? :grin: Letting LLMs do your thinking should concern the person using the LLM the most.

    And I’m sure it will degrade brainpower and confidence in society generally as well.

    But not giving the LLM credit is a problem for the reader as well, because LLMs can include errors, so the reader who doesn’t know they are reading LLM content won’t know they need to check everything about it for accuracy and soundness.

    AI for philosophy and creative writing is interesting. I’m fine with the idea as a helper, like using a calculator to check your homework, or using it to inspire a start, or to re-approach a roadblock. I think anyone who treats it as anything besides a tool is using it to play a psychological game, for no reason.

    It is because human beings can do philosophy that human beings can tell whether AI-generated content is of any value or sound or wise. No reason not to look at any content (as long as no one is lying about where it came from, or pretending it is not from a computer).
  • Banno
    28.9k


    One of the aspects of good use of AI that came up in the analysis was iteration: taking the outcome of a discussion with the AI and using it as the basis for further discussion. As Claude said (there's a sketch of the cycle in code after the list):
    • Use AI outputs as starting points for further refinement
    • Cycle through multiple rounds of critique and revision
    • Refine prompts to avoid confirmation bias and explore diverse readings
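    A minimal sketch of that cycle in Python, assuming a hypothetical ask_llm() helper as a stand-in for whichever chat interface one uses - an illustration of the loop, not any particular tool's API:

    def ask_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real chat interface; it just echoes,
        # so the loop below runs end to end without any API or network.
        return f"[model reply to: {prompt[:40]}...]"

    def refine(thesis: str, rounds: int = 3) -> str:
        # First draft: the AI output that becomes the starting point.
        draft = ask_llm(f"Draft a short argument for: {thesis}")
        for _ in range(rounds):
            # Critique pass: ask for objections and rival readings,
            # which also works against simple confirmation bias.
            critique = ask_llm(
                "Criticise this argument as an opponent would, noting "
                f"weak premises and alternative readings:\n{draft}"
            )
            # Revision pass: answer the objections rather than restate.
            draft = ask_llm(
                "Revise the argument to meet these objections:\n"
                f"Argument:\n{draft}\nObjections:\n{critique}"
            )
        return draft

    print(refine("AI can take part in a language game"))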

    Now this looks very much like a recipe for a language game.

    On the other hand, the data set used by a human appears to be far, far smaller than that used by an LLM. Our brains simply do not "contain" the number of texts available to ChatGPT. Therefore whatever the brain is doing, it is different to what is happening in ChatGPT.
  • Leontiskos
    5.2k
    But not giving the LLM credit is a problem for the reader as well, because LLMs can include errors, so the reader who doesn’t know they are reading LLM content won’t know they need to check everything about it for accuracy and soundness. — Fire Ologist

    Yes, that's true, and I definitely agree that one should not plagiarize LLM content, passing it off as one's own.

    I suppose the question is whether one who knows not to outsource their thinking will be susceptible to plagiarism, and it seems that they would not. This is because plagiarism is one form of outsourcing thinking among many others. So opposing the outsourcing of thinking automatically opposes plagiarism, even though there may be additional reasons why plagiarism is problematic.

    AI for philosophy and creative writing is interesting. I’m fine with the idea as a helper, like using a calculator to check your homework, or using it to inspire a start, or to re-approach a roadblock. I think anyone who treats it as anything besides a tool is using it to play a psychological game, for no reason. — Fire Ologist

    Well, my guess is that people use it as a shortcut to knowledge. They think that knowledge is the end and that the LLM is a surefire means. The controversial premises of such a position are, first, that knowledge is a piling-up of facts or propositions, and second, that LLMs are reliable deliverers of such propositions. The implicit idea is that forums like TPF are for the purpose of showing off piled-up knowledge, and that one must therefore use the LLM to improve one's lot on TPF.

    In a market sense, what will inevitably happen is that as LLMs drive down the scarcity of knowledge, knowledge itself will become passé in a very curious way. Forms of quintessentially human activity that remain scarce will then be elevated, including religious and mystical venues. This was already occurring since the advent of recent technologies, such as the internet, but the phenomenon will continue to grow.
  • Fire Ologist
    1.7k
    what is it that people bring to the game that an AI cannot? — Banno

    Isn’t that about the question: Can AI do philosophy?

    I thought you said the topic was how to use AI to do philosophy.
  • Banno
    28.9k
    Isn’t that about the question: Can AI do philosophy?

    I thought you said the topic was how to use AI to do philosophy.
    — Fire Ologist

    Go on one step further. What do you think my response to you would be, given what I've said above? Imagine what Banno would say in reply to your post.

    He's been on about the difference between strings of words and illocutionary acts. He's been on about iterating conversations. So what's he going to say about the difference between an AI and a human producing bits of philosophy?

    Or should I ask ChatGPT?
  • Metaphysician Undercover
    14.3k
    And then there’s whatever Claude seems to think is helping. — Fire Ologist

    Yeah, let's ask Claude what Claude can do to help us lowly philosophers out. And, we'll let Claude be the authority on this. Smells like AI rule around here.
  • Banno
    28.9k
    I write most of my forum posts on an iphone while hiking. Not conducive for accurate spelling. — Joshs
    Does it seem to you that AI favours analytic approaches?

    On the face of it, one might think so - that it's good at writing code and analysing logical structure, and so has a natural bias towards analytic approaches.

    But does that play out? It seems not to, to my eye, but I'm no aficionado of the phenomenal approach.

    Would the use of AI bias the discussion in this way?