• Banno
    25.1k
    Stephen Wolfram wrote a rather neat essay about ChatGPT.

    What Is ChatGPT Doing … and Why Does It Work?

    I'm dropping it here in the hope of bringing some clarity into the many discussions around the forum.

    It's a bit long, but along the way he gives an account of "model" that may be quite useful, a particularly clear description of the way neural networks function, and an explanation of why big data works better than only a little bit of data.

    There's some stuff about irreducible computations, which apparently neural nets find difficult. Why is that important? I'm not sure. It doesn't seem surprising to me that writing an essay is easier than predicting turbulent flow.

    But what I found most curious is the role played by "neural net lore" - things that are done because they have been found to work, not because we understand how they work.

    Anyway, as a side issue, has anyone here made use of Wolfram Language and/or Wolfram|Alpha? It's apparently now been connected to ChatGPT. That should be quite a search engine.

    Excuse my adding to the plethora of ChatGPT threads, but this article seemed to me to merit its own discussion.
  • Hanover
    12.9k
    Yes, very long, but I made it through much of it and then went to the summary. I suppose that's what human brains do.

    I now know the answer to the question of how long it would take a monkey to randomly type a paragraph. It would require the monkey to be a very focused parrot, pulling word combinations based upon frequencies, reduced sufficiently to appear creative.

    This article pointed out clearly what I had noticed in my playing with GPT, which was its poor ability to contextualize what it said and maintain a reasonable conversation. Its focus is to sound human, which it remarkably does, but it seems a long way off for it to pass a Turing Test.

    The article makes clear though that being a conversationalist isn't its aim (which it will explicitly tell you as you attempt to use it that way).

    It did raise some thoughts for me in my line of work.

    In legal case-law databases, non-Boolean search engines have been available for years (with "natural language" searches now in use). That is, instead of having to search for "drinking /s driving & death" (asking it to find all cases that have "drinking" and "driving" in the same sentence and that also contain the word "death"), you can simply type "I'm looking for drinking and driving cases where someone died."
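    The mechanics of that Boolean-era query can be sketched in a few lines. This is a toy illustration only, assuming naive sentence splitting over an invented two-case corpus; `proximity_search` is a hypothetical helper, not the actual engine used by any legal database:

```python
import re

def proximity_search(docs, same_sentence, must_contain):
    """Mimic a query like "drinking /s driving & death": keep documents
    where all `same_sentence` terms co-occur in one sentence and every
    `must_contain` term appears somewhere in the document."""
    hits = []
    for doc in docs:
        text = doc.lower()
        # Naive sentence splitting on terminal punctuation.
        sentences = re.split(r"[.!?]+", text)
        in_one_sentence = any(
            all(term in s for term in same_sentence) for s in sentences
        )
        has_required = all(term in text for term in must_contain)
        if in_one_sentence and has_required:
            hits.append(doc)
    return hits

# Invented mini-corpus for illustration.
cases = [
    "The defendant was drinking before driving. The victim's death followed.",
    "Drinking was alleged. Driving was separate. No death occurred.",
]
print(proximity_search(cases, ["drinking", "driving"], ["death"]))
```

    Only the first case matches: it has "drinking" and "driving" in the same sentence plus "death" somewhere, while the second keeps the terms in separate sentences. The natural-language interface Hanover describes sits on top of machinery like this, relieving the user of learning the operator syntax.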

    GPT seems a continuation of this, but now it gives natural language responses instead of just cites to what was found in response to the natural language query.

    The words of comfort I give to those who think that this easy access to knowledge will eliminate the need for experts: worry not. When I began as a lawyer, there were only books and countless indexes for locating cases, then complex logic-based search engines, and now natural language searches, and I can attest: the smart get smarter. The playing field doesn't level out any more than it would have if free encyclopedias had been handed out to all when that's all there was.

    Most wouldn't crack open the encyclopedia, and those who would wouldn't figure out what it meant.

    That is, has the information age really better informed the world, or has it just clarified things for the intelligent and further confused those who aren't?
  • RogueAI
    2.9k
    There are some things I don't get. I ran some jokes by it, and it consistently ranked the trash jokes as bad, and the hilarious jokes as hilarious. And it would give a good analysis of why the joke worked (or didn't). How can a random process produce those results?
  • Pierre-Normand
    2.4k
    There are some things I don't get. I ran some jokes by it, and it consistently ranked the trash jokes as bad, and the hilarious jokes as hilarious. And it would give a good analysis of why the joke worked (or didn't). How can a random process produce those results?RogueAI

    @Hanover may have used GPT-3.5 rather than GPT-4. There is a significant difference in cognitive abilities between them.

    @Banno Thanks for linking to this fantastic article! I'll read it as soon as I can.
  • Banno
    25.1k
    How can a random process produce those results?RogueAI

    It's anything but random.
  • Banno
    25.1k
    There is a significant difference in cognitive abilities between them.Pierre-Normand

    But much the same architecture. It's still just picking the next word from a list of expected words.
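    The "picking the next word" step can be sketched as weighted sampling from a probability table. Here a toy bigram table (with invented words and probabilities) stands in for the transformer's learned distribution; real models condition on the whole preceding text, not a single word:

```python
import random

# Toy bigram "language model": for each context word, a list of
# (next_word, probability) pairs. The entries are invented for
# illustration; a real model learns these from its training corpus.
bigram_probs = {
    "the": [("cat", 0.5), ("dog", 0.3), ("idea", 0.2)],
    "cat": [("sat", 0.6), ("ran", 0.4)],
}

def pick_next(context_word):
    """Sample the next word in proportion to its estimated probability."""
    words, probs = zip(*bigram_probs[context_word])
    return random.choices(words, weights=probs, k=1)[0]

print(pick_next("the"))  # one of "cat", "dog", or "idea"
```

    The sampling is stochastic but far from uniform: likely continuations dominate, which is why the output looks purposeful rather than random.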
  • RogueAI
    2.9k
    You're right. I should have said how does a stochastic parrot produce results like that?

    Suppose I have a theory that ChatGPT has some consciousness, and it's influencing its output. How would you disprove me? After all, machine consciousness is very popular these days. How do we know ChatGPT isn't conscious?
  • Pierre-Normand
    2.4k
    But much the same architecture. It's still just picking the next word from a list of expected words.Banno

    It is the exact same underlying architecture. But most of the model's cognitive abilities are emergent features that only arise when the model is sufficiently scaled up. Saying that large language models are "merely" picking the next word from a list just ignores all of those high-level emergent features. It pays no attention to the spontaneous functional organization achieved by the neural network as it picks up and recombines, in a contextually appropriate and goal-oriented manner, the abstract patterns of significance and of reasoning that had originally been expressed in the training data (which is, by the way, strikingly similar to the way human beings learn how to speak and reason through exposure and reinforcement).
  • Banno
    25.1k
    How do we know ChatGpt isn't conscious?RogueAI

    Yes, good question. The trouble with this area of enquiry is that "consciousness" is used with such gay abandon. I've pointed out the several perfectly serviceable definitions of consciousness used by medical staff and taught in first aid courses. I should have given greater emphasis to the fact that these definitions cannot be applied to air conditioners and chatbots.

    When one asks if ChatGPT is conscious, one is asking if the word "conscious" can be well applied to ChatGPT. That is, what is at issue is not the status of ChatGPT, but the correct usage of "conscious".

    Wittgenstein and all that.

    The first answer is, if we are using "conscious" as it applies in first aid courses, then ChatGPT is not conscious.

    The second answer is that we can always change the way "conscious" is used so that it applies to ChatGPT.

    Is "consciousness" already broad enough to apply to ChatGPT? One reads things ("high-level emergent features") into its outputs in the way you and I read intentions into the acts of other people. I remain unconvinced.
  • Banno
    25.1k
    The joke was a bit obvious....
  • Baden
    16.3k


    Oh, that was just for Hanover, sorry.
  • Wayfarer
    22.6k
    Horse's mouth:

    Q: Is ChatGPT conscious?

    A: No, ChatGPT is not conscious. It is an artificial intelligence language model designed to generate responses to user inputs based on patterns and relationships in the data it was trained on. While it can mimic human-like responses and engage in conversations, it does not possess consciousness or self-awareness.
  • Banno
    25.1k
    Another notion of consciousness is the neo-phenomenological one, in which to be conscious is to experience - qualia and all that. It's a pretty odd idea - does a thermometer experience temperature?

    There's no reason to think ChatGPT does this.
  • schopenhauer1
    10.9k
    Another notion of consciousness is the neo-phenomenological one, in which to be conscious is to experience - qualia and all that.Banno

    This is the one. Thermometers and p-zombies are not conscious. A robot that does everything a human does but does not have any internal "feels like" is just an automaton, not a conscious being. It behaves like someone who is conscious, though. It computes, it acts, it behaves, it predicts. It doesn't actually perceive, suffer, etc.
  • Wayfarer
    22.6k
    does a thermometer experience temperature?Banno

    Do thermometers and computer systems warrant our being kind to them? Are they subjects of experience? It seems perfectly obvious to me that they're not, but it's impossible to prove to those who say otherwise. It's a hard problem.
  • Banno
    25.1k
    Trouble is, it does nothing to help sort things out. If you can't tell whether Way is a p-zombie or not, how will the notion help with an AI?
  • schopenhauer1
    10.9k
    If you can’t tell if Way is a P-zombie or not, how will the notion help with an AI?Banno

    Artificial intelligence does not entail artificial consciousness. But how to tell is indeed tricky. My point was simply to explain when something is conscious, not how to tell. If your expectation is artificial consciousness (if it is possible at all), and it perfectly mimics humans (or is human-like), you might as well treat it as one, in case it does indeed feel something internal.
  • Banno
    25.1k
    Again, and despite the ubiquitous ruminations hereabouts, it is not clear that awareness of events - the phenomenal approach to consciousness - is of much use at all.

    Unless you wish to redefine consciousness to the extent that it applies to your air conditioner.

    After all, it is aware of suitable changes in temperature and responds appropriately.

    I raised the neo-phenomenological approach only to point out that it is useless.
  • creativesoul
    12k


    Chomsky very recently characterized chatGPT as glorified plagiarism, or words to that effect.
  • Banno
    25.1k
    That works. Spent last night showing Girl how to use it to write essays for undergrad accounting courses.

    I characterise it as a bullshit generator, in the strict philosophical sense of "bullshit", of course.
  • Banno
    25.1k
    ...has anyone here made use of Wolfram Language and/or Wolfram|Alpha?Banno

    Seems not?

    It looks like the sort of thing that should be useful, but isn't. And the reason it isn't is not obvious. At least not to me.
  • schopenhauer1
    10.9k
    Unless you wish to redefine consciousness to the extent that it applies to your air conditioner.

    After all, it is aware of suitable changes in temperature and responds appropriately.

    I raised the neo-phenomenological approach only to point out that it is useless.
    Banno

    I know what you did. But you are obviously attributing consciousness to things that shouldn't have it attributed to them. Air conditioners aren't conscious; multi-cellular animals are. Air conditioners might have some intelligence (inputs create outputs that can accurately inform an interpreter), but not consciousness. Phenomenal aspects are what consciousness is, but you are never going to be able to determine from the outside whether something unfamiliar (robots/AI) also possesses what we habitually associate with consciousness in animals.

    Being that we are familiar with intelligent but not conscious things, we can assume that intelligent conscious things are things that can report their inner sensations to us (pass the Turing test). Whether it's true can never really be known, since it is not the familiar wetware of a biological entity that we so associate with phenomenological experiences. But suppose your computer says, "Please don't shut me down, I'm scared!", and after you shut it down and boot it back up it says, "That was torture, please please don't do that. I cannot function, I am in so much pain," and it sobs and sobs and slowly gets better. And each time it reacts differently, sometimes in ways unexpected and not related to any prior programming. That can be a good start.

    Whether you are skeptical or not, if you are not even a little disturbed to shut the computer off, then you might be slightly sociopathic, or at least have a tendency towards callousness. But then again, perhaps humans are so used to machines not being conscious that it would be much easier. Our response is always to behaviors. We project our own inner experience onto others as a matter of course.
  • Banno
    25.1k
    you are obviously attributing consciousness to things that shouldn't be.schopenhauer1

    That's how a reductio works.
    The trouble with this area of enquiry is that "consciousness" is used with such gay abandon. I've pointed out the several perfectly serviceable definitions of consciousness used by medical staff and taught in first aid courses. I should have given greater emphasis to the fact that these definitions cannot be applied to air conditioners and chatbots.Banno

    It'd take no time at all to set up shutdown and boot sequences to do what you describe.

    Consider again the methodological point:
    ...what is at issue is not the status of ChatGPT, but the correct usage of "conscious".Banno
  • schopenhauer1
    10.9k
    It'd take no time at all to set up shutdown and boot sequences to do what you describe.Banno

    As I said, it's simply a matter of caution. But I don't think it would be possible to set up the scenario I was thinking of. Either way, I was just giving the most cautionary scenario. You asked me to give you a way to tell. There is no way to tell phenomenal experiences; you can only observe behavior. What do you want from me, in other words? If it acts in all the ways we are familiar with for consciousness, I'm simply giving you the familiar way we respond to that. But I do know that, being an "intelligent" machine, it could just be an artificial p-zombie.

    I guess one way to tell is that the behavior isn't expected from its programming. Even the "off" behavior of ChatGPT (which seems "emergent"), though not reducible to the exact algorithm, is expected more broadly given the algorithms in place. There are percolations of anomalies, but well within the range of what the program is supposed to be doing. But even that can just be an elaborate p-zombie. At that point, what is something that has no inner feeling but does what a human does?
  • Banno
    25.1k
    What do you want from me, in other words?schopenhauer1

    :grin:

    You appeared to imply that the phenomenological approach would work, saying:
    This is the one.schopenhauer1
    Now you are agreeing with me that it doesn't.

    That'll do.
  • schopenhauer1
    10.9k
    Now you are agreeing with me that it doesn't.

    That'll do.
    Banno

    I don't agree with you if you are saying, "Consciousness is something other than some inner phenomenological experience". I do agree with you if you are saying that we can never tell.
  • Banno
    25.1k
    I don't agree with you if you are saying, "Consciousness is something other than some inner phenomenological experience".schopenhauer1

    Kant's madness again.

    Your air conditioner has inner phenomenological experiences. Prove me wrong.


    By your own argument, you ought not turn it off.
  • schopenhauer1
    10.9k
    Your air conditioner has inner phenomenological experiences. Prove me wrong.Banno

    Why do you purport that I (would) think air conditioners have consciousness when I stated earlier the difference I saw between the notions of intelligence and consciousness?