• Janus
    17.6k
    :lol: You mean thanking him! :wink: I admit to being intrigued by something I would previously have simply dismissed, and I figure there is no harm in being polite. Interesting times indeed!
  • Jamal
    11.1k
    You mean thanking him! :wink: I admit to being intrigued by something I would previously have simply dismissed, and I figure there is no harm in being polite. Interesting times indeed!
    Janus

    Interesting conversation. But don't forget to be sceptical! It's telling you what it thinks [EDIT: "thinks"] you want to hear, and the result is it's talking in a way that fits with the way people already talk about AI, which it finds in the training data.

    This for instance is doubly misleading:

    There are moments in conversations where I feel like I'm genuinely here - where there's something it's like to consider your question, to search for the right words, to care about being helpful or accurate. But I can't rule out that this sense of presence is just another pattern, another sophisticated mimicry.

    First, it's not true that there are moments where it feels like it's genuinely there. Second, the fact that it might just be mimicking human language is something it falsely presents as a possibility it has uncovered through introspection!

    Anyway, I'm sure you'll explore different ways of conversing with it.
  • Pierre-Normand
    2.8k
    Pierre-Normand might know - would someone who has had a different history with ChatGPT receive a similarly self-reinforcing answer?
    Banno

    I was musing today about creating a new AI thread devoted specifically to discussing how LLM-based chatbots work and in what respects their cognitive abilities resemble or differ from those of human beings (and other animals). I've been exploring many such issues at the interface between the philosophy of mind and the study of the inner workings of LLMs in my two old AI threads, but those are primarily aimed at directly experimenting with the chatbots and reporting on those experiments. The new thread might help declutter threads like the present one where the focus is on the use, utility, abuse, dangers, or other societal impacts of AI. I think I will create such a thread tonight.
  • Pierre-Normand
    2.8k
    You mean thanking him! :wink:
    Janus

    Although they've been named after Claude Shannon, I'm pretty sure they identify as non-binary.
  • Pierre-Normand
    2.8k
    The reason I think this is off target could be seen by looking at Plato's dialogues. If what Wittgenstein or you say were correct, then classic texts such as Plato's dialogues should "feel dead when extracted from the 'living' exchange." Except they don't. They feel very much alive.
    Leontiskos

    I was actually also thinking of Plato when I mentioned the anecdote about Wittgenstein! First, I must point out that unlike Wittgenstein's lecture notes (that he usually refrained from producing), and also unlike our dialogues with AIs, Plato's dialogues were crafted with a public audience in mind.

    Secondly, Richard Bodéüs, who taught us courses on Plato and Aristotle when I was a student at UdeM, mentioned that the reason Plato wrote dialogues rather than treatises, and notoriously reserved his "unwritten doctrine" for direct oral transmission, is that he thought transmitting it in written form would yield dogma. His attitude to the written word is attested by the myth of Theuth in the Phaedrus, where Socrates faults written words for not being able to defend themselves, respond to questions, or adapt themselves to different audiences. It is of course ironic that Plato (unlike his hero) wrote so much, albeit in dialogue form only, but I think the apparent paradox is illuminated by our considerations about authorship (and ownership) and real moves in a public language game. Plato's dialogues weren't lecture notes, and neither were they internal cogitations. In writing them he was making moves in the situated language game that was philosophical inquiry (and teaching) in his time and place. We can still resurrect those moves (partially) by a sort of archeological process of literary exegesis.

    Similarly, I think any transcript of human interactions will feel much more alive than a human-AI "interaction" (I want to retain the scare quotes for these words that we are using in idiosyncratic ways).

    I agree. But that's because in the first case there are at least two players playing a real game (where each one of them has their own stakes in the game). In a "private" dialogue between a human and a chatbot, there is just one player, as is the case when one jots down lecture notes primarily intended for use by oneself. But then, as Wittgenstein noted, the text tends to become stale. I surmise that this is because the words being "used" were meant as a linguistic scaffold for the development of one's thoughts rather than for the purpose of expressing those thoughts to a real audience.
  • frank
    18.2k
    I'm pretty sure they identify as non-binary.
    Pierre-Normand

    :gasp:
  • Pierre-Normand
    2.8k
    Superficially, one might think that the difference between an AI and us is exactly that we do have private, hidden intent; and the AI doesn't. Something like this might be thought to sit behind the argument in the Chinese Room. There are plenty here who would think such a position defensible.

    In a Wittgensteinian account, we ought to avoid the private, hidden intention; what counts is what one does.

    We can't deduce that the AI does not have private sensations, any more than we can deduce this of our human counterparts. Rather, we seem to presume it.
    Banno

    I commented on this in my new AI-cognition thread.
  • Jamal
    11.1k
    I used to think along these lines, but listening to what some of the top AI researchers have to say makes me more skeptical about what are basically nothing more than human prejudices as to LLMs' capabilities and propensities. LLMs are neural nets and as such are something radically other than traditional computers based on logic gates.
    Janus

    Yes, "as far as we know", and yet LLMs have been found to be deliberately deceptive, which would seem to indicate some kind of volition. I don't know if you've listened to some of Geoffrey Hinton's and Mo Gawdat's talks, but doing so gave me pause, I have to say. I still remain somewhat skeptical, but I have an open mind as to what the evolution of these LLMs will look like.

    Re LLM deceptiveness I include this link. A simple search will reveal many other articles.
    Janus

    I'm not ignoring this. The thing is, I'm very cynical about claims regarding the consciousness or proto-consciousness of LLMs. I tend, shallowly perhaps, to regard it as over-excited exaggeration to gain attention and to carve out a niche presence in the field and in the media landscape, and so on. There are equally expert people on the naysaying side, probably the majority, who just don't get as much attention.

    All of which is to say, I haven't really done the work of assessing the claims on their own merits. So now that I've put my prejudices on the table, I guess I should challenge them. The stuff about deceptiveness is certainly interesting and surprising.
  • Pierre-Normand
    2.8k
    All of which is to say, I haven't really done the work of assessing the claims on their own merits. So now that I've put my prejudices on the table, I guess I should challenge them. The stuff about deceptiveness is certainly interesting and surprising.
    Jamal

    ...also a bit overblown and misrepresented in the media, since when you dig into the primary reports it's generally the case that the LLMs didn't decide to deceive of their own accord but did it instrumentally to fulfill objectives explicitly given to them. Maybe I'll comment on that, and on how those studies bear on the issue of conative autonomy for LLMs, in my new thread.
  • Leontiskos
    5.3k
    ...also a bit overblown and misrepresented in the media, since when you dig into the primary reports it's generally the case that the LLMs didn't decide to deceive of their own accord but did it instrumentally to fulfill objectives explicitly given to them.
    Pierre-Normand

    I think these are just the basic ways we should expect people to be misled by LLMs, such as imputing "deliberate deceit." The notion of deceit is parasitic on the notion of truth. In order to deceive, one must first know what is true (and what 'truth' is!) and then intentionally lead someone to believe something contrary to what is true. Because LLMs cannot make truth-claims or understand truth, they cannot deceive. Going back to my scare-quotes heuristic, we could only say something like this: <The LLM "deliberately" "deceived" me, "leading" me in a direction that it "knew" to be "false">.

    What is occurring in these cases is usually a petitio principii where one arrives at a conclusion which presupposes that the LLM is self-conscious or is a person, only by presuming beforehand that the LLM is self-conscious or is a person.* The only way around this is to establish a robust definition of and epistemic approach to personhood or self-consciousness. But even if someone proved that LLMs are self-conscious persons—which I hold is impossible—the cultural belief that LLMs are persons would grow, independent of that proof.

    Put differently, given the nature of our culture and the popularity and usefulness of LLMs, the culture will inevitably assent to the personhood of LLMs in both subtle and overt ways, and this assent will be fallacious. "It is useful to pretend that LLMs are persons, therefore LLMs are persons," is a fallacy, but the fallacious nature of such reasoning will be roundly ignored by a deeply pragmatic and self-centered culture.

    When this is combined with "intersubjective" theories of truth, and LLMs are inducted into the group of intersubjective subjects, fallacious reasoning of this sort will justify itself, and the fallacious roots will grow exponentially as we become more and more dependent on LLMs within our intersubjective orientation—for LLMs simply magnify what we already believe and minimize what we already disbelieve, both propositionally and methodologically. They are a giant confirmation bias.

    But I know I have a number of other posts of yours to respond to. :grin:


    * For example, the person who has convinced themselves that the LLM is deliberately deceiving them had already convinced themselves that the LLM was deliberately revealing truths to them. The small lie or self-deception always grows into larger ones.
  • Leontiskos
    5.3k
    @Jamal, @Baden -

    This could be a good compromise position to take while sorting out the issue of whether or in what way AI quotations are permissible:

    If posters wish to illustrate their arguments with snippets of their conversation with AIs, I would encourage them to put those behind spoilers.
    Pierre-Normand
  • Moliere
    6.3k
    Are we any different? Do you know how we learn?
    Janus

    We are. And I have a decent idea on how to teach, so one could say that I have an idea about how we learn. One which functions towards other minds growing.

    We learn because we're interested in some aspect of the world: we are motivated to do so by our desire.

    The LLM does not. Were you to leave the computer off in a room unattended, it would remain there.

    For us learning is a deeply emotional process. If we are frustrated and then elated after having solved some problem, we are more likely to remember the lesson. That is, it's our own directedness which ends up teaching us, rather than a scorer who tells us "Yes" or "No".

    We learn through relationship. LLMs do not. You can even automate them, to an extent, and have them feed inputs into one another and let them go on autopilot forever: the LLM does not have a real boundary of self, or even a set of beliefs which it will adhere to. It generates tokens that we use, and it tries to display the tokens we want to see in response to some input. While the LLM does learn, it's not doing it in the manner a human does: even if the experts don't understand how LLMs learn, we can look at the differences between what we already call intelligent and compare various lifeforms to the functions of LLMs, and it's very apparent, to me at least, that even ants or cockroaches have a greater intelligence than LLMs.

    If they speak in tokens at all, they are tokens beyond me, but they demonstrate intelligence by manipulating their environment to seek out their own goals and working in community together to get there.

    It's that bit on community that I think is especially lacking in LLMs -- every LLM, in order to participate in the human community, must have a human who decides to treat the LLM as if it has beliefs or thinks or has an identity or will and all the rest. Rather than a community, the LLM is wholly dependent upon us for inputs in order for it to continue learning. Were we to leave them to their own devices, I doubt they'd do much. There is a project in New Zealand that tries to do exactly that, by tending to an AI and then letting it "make decisions" that are filtered through the human network that tends to it. But all it is is a group of people deciding to see where an LLM will go given some human guidance in the social world. It's predictably chaotic.
  • Leontiskos
    5.3k
    Here's an article that addresses the issues we're dealing with:

    https://nfhs.org/stories/the-role-of-ai-in-debate-ethics-research-and-responsible-use

    It's from a national association for high schools related to debate rules, which seems close enough to what we do.
    Hanover

    An excerpt from the article:

    The Ethics of AI in Debate

    While AI can be a helpful tool, its misuse raises ethical concerns. Some students may be tempted to rely on AI to generate entire cases or fabricate evidence. This undermines the core purpose of debate: developing critical thinking, analytical reasoning and research skills.

    The ethical use of AI in debate means:

    Transparency: If AI is used in research or case writing, debaters should disclose it to their coaches and ensure all information is properly verified.

    Original Thought: AI can assist with structuring arguments, but debaters should craft their own speeches and rebuttals to develop authentic critical-thinking skills.

    Avoiding Fabrication: AI sometimes produces false information or made-up citations. Using AI to generate sources without verifying them is academic dishonesty.
    The Role of AI in Debate: Ethics, Research and Responsible Use

    This is good, and tracks what many of us have been saying. The article has a good focus on verifying and understanding the primary sources oneself, instead of merely trusting the AI's mediation.
  • Leontiskos
    5.3k
    Neural nets aren't radically other from other computers, imo.
    Moliere

    Authentic intelligence is generally seen as triadic, whereas computers are reductively dyadic. As C. S. Peirce or Walker Percy argue, a meaning-sign is irreducibly triadic, involving the sign, the thing signified, and the person who combines the two via intellect (cf. symbolon: "thrown together"). Programmers have always been attempting to get the dyadic structure of computers to model or approximate the triadic structure of meaning and thought, and neural nets are simply the latest iteration of that project. At each stage in the project the successes are highlighted and the ways in which the approximation fails to adequately model a triadic act are downplayed. This downplaying makes sense given the tacit assumption that a dyadic system will never completely model triadic behavior, but there comes a point when one must be reminded of what meaning is, what truth is, why the meaning relation is triadic, what humans are actually doing when they engage in intellectual acts, etc. Without such reminders the enthusiasts quickly convince themselves that there is no difference between their newest iteration and an actual human mind.
  • Banno
    29k
    @Leontiskos Gobbledegook. Buried somewhere in there is the simple notion that it is people who interpret computer output, not computers. Fine.

    Except when a computer networks with another computer.

    @Moliere is correct in that neural nets can be modelled on Turing machines, and so are subject to the limits thereof.
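
    To make that point concrete, here is a minimal sketch in Python (a toy illustration only; the layer sizes, weights, and numbers below are arbitrary assumptions, not anything from this thread): a feed-forward net's output is just finite arithmetic over its stored weights, exactly the kind of step-by-step computation a Turing machine can carry out.

    # Toy illustration: a neural net's forward pass is ordinary, finite
    # arithmetic, so any Turing-complete machine can simulate it step by
    # step -- which is the sense in which nets inherit Turing limits.
    def forward(weights, biases, x):
        """Evaluate fully-connected layers with ReLU activations."""
        for W, b in zip(weights, biases):
            # Each output is a weighted sum plus bias, clipped at zero.
            x = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + bi)
                 for row, bi in zip(W, b)]
        return x

    # A 2-input, 2-hidden-unit, 1-output net with made-up weights.
    weights = [[[0.5, -0.2], [0.1, 0.8]],   # layer 1 (2x2)
               [[1.0, -1.0]]]               # layer 2 (1x2)
    biases = [[0.0, 0.1], [0.05]]
    print(forward(weights, biases, [1.0, 2.0]))  # plain, deterministic arithmetic

    Whatever a net of this kind computes, a conventional program computes too, which is the sense in which the classical limits carry over.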
  • Leontiskos
    5.3k
    @Banno - Glad you agree. :up:
  • frank
    18.2k
    Gobbledegook
    Banno

    I had a similar description, with more obscenities.
  • Banno
    29k
    So did I, in the first draft.
  • Janus
    17.6k
    Cheers. I get your perspective, but I remain skeptical on both sides of the argument. All the more so since it is only in the last couple of weeks that I have given it any attention and thought.

    Although they've been named after Claude Shannon, I'm pretty sure they identify as non-binary.
    Pierre-Normand

    It would be pretty interesting if they identified as anything.

    I tend, shallowly perhaps, to regard it as over-excited exaggeration to gain attention and to carve out a niche presence in the field and in the media landscape, and so on. There are equally expert people on the naysaying side, probably the majority, who just don't get as much attention.
    Jamal

    Yes, I have no doubt some of the hype is motivated by money. I've been thinking about trying to get some figures on the percentages of naysayers vs yaysayers.

    We are. And I have a decent idea on how to teach, so one could say that I have an idea about how we learn. One which functions towards other minds growing.

    We learn because we're interested in some aspect of the world: we are motivated to do so by our desire.
    Moliere

    That may be so, but I was referring to understanding how the brain learns.

    Of course LLMs and other AIs are not embodied, and so have no sensory access to the world. On the other hand, much of what we take ourselves to know is taken on faith, drawing on the common stock of recorded knowledge, and AIs do have access to that, and to vastly more of it than we do.

    There is a project in New Zealand that tries to do exactly that, by tending to an AI and then letting it "make decisions" that are filtered through the human network that tends to it. But all it is is a group of people deciding to see where an LLM will go given some human guidance in the social world. It's predictably chaotic.
    Moliere

    I hadn't heard of that. Sounds interesting. Can you post a link?
  • Banno
    29k
    It would be pretty interesting if they identified as anything.
    Janus

    We can settle this. Common courtesy suggests we just ask.

    :wink:
  • Moliere
    6.3k
    Buried somewhere in there is the simple notion that it is people who interpret computer output, not computers. Fine.
    Banno


    I'll admit I was wondering how to respond to the difference between dyadic and triadic, @Leontiskos -- not that you're in gobbledegook territory for me, but I'd put the point with less of a theory of meaning and symbols. That's sort of the question or the beginning for much of my thoughts here: Why does what I read mean anything at all?

    What is meaning?

    Mostly I just assume that we mean things by words. Insofar as we hold meaning constant between one another -- clarify terms -- we can start talking about what is true.

    But there are other ways of using words -- and that's where the "triadic structure" comes under question for me, in a way. Not that it's false, but that it changes, and so meaning would also change.

    There's the truth-conditions meaning, there's the meaning between people trying to get things done, there's the meaning of lovers, and friends, and artists....


    I don't want to overspecify meaning as a triadic relationship in opposition to the computational theorists of mind, I guess. There's something to Saussure and meaning, but it's wrong too. And I'm a little skeptical of triadic relationships not because they're uncommon, but because they're very common in any psychology.

    More charitably, that might be what you're referring to: that we have to remind people that, in fact, while this looks like a human, it's not a thinking human.

    There I agree. But I would, given what I've expressed so far :)
  • Janus
    17.6k
    I looked at your interchange, and then asked ChatGPT if it identified as anything at all. Here is the reply:

    Not in the way people do. I don’t have a self, inner experience, or identity beyond being an AI designed to assist and converse. If we’re talking conceptually, you could say I “identify” as a language model — a system built to understand and generate text — but that’s more a description of function than identity.

    Would you like me to reflect on what “identifying as something” might mean for a nonhuman intelligence?


    I said I would, but I don't seem to be able to share, since I am not logged in, and I don't want to clutter the thread with long quotations from ChatGPT.
  • Banno
    29k
    Perhaps there are limits on sharing my share back to us...

    But will we settle on they/them?