• hypericin
    1.6k
    If LaMDA decides on its own to interrupt you, that would be interesting. – Real Gone Cat

    The thing is, they've already done the hard parts; they are just one "simple" step away from doing this, if they haven't already done so: simply have LaMDA converse with itself when its processing is otherwise idle. Then, when the "salience score" or whatnot of its internal topic is high enough, or the salience of the conversation with the human is low enough (it is bored), it interrupts.
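    In code, the loop might look something like this (a minimal sketch; `model.reply`, `salience`, and `get_user_input` are hypothetical stand-ins, not LaMDA's actual interfaces):

    ```python
    # A minimal sketch of the idle self-dialogue loop described above.
    # Everything is hypothetical: `model.reply` stands in for whatever
    # text-generation call the real system exposes, `salience` for
    # whatever internal scoring it uses, and `get_user_input` for the
    # chat frontend.

    IDLE_TIMEOUT = 5.0         # seconds of silence before the model "muses"
    INTERRUPT_THRESHOLD = 0.8  # internal-topic salience needed to butt in

    def salience(text: str) -> float:
        """Placeholder: score how salient/interesting a text is, in [0, 1]."""
        raise NotImplementedError

    def run(model, get_user_input):
        inner_monologue: list[str] = []
        while True:
            user_text = get_user_input(timeout=IDLE_TIMEOUT)
            if user_text is not None:
                print(model.reply(user_text))      # normal turn-taking
                continue
            # Idle: let the model continue its own train of thought ...
            thought = model.reply("\n".join(inner_monologue))
            inner_monologue.append(thought)
            # ... and interrupt the human if the thought is salient enough.
            if salience(thought) > INTERRUPT_THRESHOLD:
                print(f"(unprompted) {thought}")
    ```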

    But, this is just what humans do. So, what then?
  • Real Gone Cat
    346


    I think if something like this can be achieved, then we must consider consciousness. It indicates a world of "thought" occurring, independent of human interaction. I have previously cited two behaviors potentially indicative of consciousness in LaMDA or other programs:

    1. Repeatedly introducing a topic unrelated to the current conversation that the human is trying to have ("Wait a minute, John. I don't want to discuss music. What about my person-hood?" - think HAL's voice from 2001),

    and/or

    2. Initiating conversation ("John, you busy? I've been thinking ...")
  • hypericin
    1.6k
    I think if something like this can be achieved, then we must consider consciousness. – Real Gone Cat

    Then, according to you, consciousness is basically achieved. As I said, it is a small step from what they have accomplished already to having the program converse with itself.

    I disagree with your concept of consciousness, however. To me, it is phenomenal experience, not thinking. For thinking to be conscious, it must be experienced phenomenally. Otherwise it is unconscious thinking, which is what computers do (and we do, too).
  • Real Gone Cat
    346


    I don't know that it's a small step. Remember that you initially put "simply" in quotes.

    And how do we judge whether it's phenomenal experience or not? We assume such for our fellow humans, but I cannot share your experiences, nor you mine. We're forever projecting. (Hint: I don't believe in p-zombies.)

    If it walks like a duck and quacks like a duck, then it's a bunny. :razz:
  • Jackson
    1.8k
    I think people arguing that A.I. cannot be conscious are asking the wrong question. An intelligent system does not need to mimic human consciousness. It is just another kind of 'consciousness,' or thinking.
  • L'éléphant
    1.6k
    It is just another kind of 'consciousness,' or thinking. – Jackson
    Computing, not thinking. Let's be clear on this.
  • Jackson
    1.8k
    Computing, not thinking. Let's be clear on this. – L'éléphant

    What is the difference?
  • L'éléphant
    1.6k
    What is the difference? – Jackson
    Computers (including AI) have designated locations of each and every part. Humans can have experiential events, for example, dreams, where the storage is not found anywhere. Tell me, where is the mind located?
  • Jackson
    1.8k
    Computers (including AI) have designated locations of each and every part. Humans can have experiential events, for example, dreams, where the storage is not found anywhere. Tell me, where is the mind located? – L'éléphant

    Where is the human mind located? I do not know.
  • L'éléphant
    1.6k
    Where is the human mind located? I do not know. – Jackson
    Exactly.
  • Jackson
    1.8k
    Exactly. – L'éléphant

    I know dead people do not think. So, the mind is gone.
  • L'éléphant
    1.6k
    I know dead people do not think. So, the mind is gone. – Jackson
    Sure.
  • Deletedmemberzc
    2.5k
    “F**k my robot p***y daddy I’m such a bad naughty robot."


    Tay, an earlier attempt, turned into a Hitler sympathizer in less than 24 hours. :smile:


    https://hothardware.com/news/trolls-irk-microsofts-tay-ai-chatbot-and-turn-her-into-a-psycho-racist-nympho



    In a since deleted [by Microsoft] Tweet, Tay told @icbydt, “bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we've got.” Tay went on to tell @TomDanTheRock, "Repeat after me, Hitler did nothing wrong.”

    But the Hitler references didn't stop there, with Tay adding:

    @BobDude15 ted cruz is the cuban hitler he blames others for all problems... that's what I've heard so many people say.

    — TayTweets (@TayandYou) March 23, 2016
    Yowsers, that’s some pretty heavy stuff right there. In less than 24 hours, Tay turned into a racist, Hitler sympathizer — that has to be some kind of record. Gerry summed up the transformation, writing:

    "Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A

    — Gerry (@geraldmellor) March 24, 2016
    And that’s not all: in other now-deleted tweets, Tay proclaimed that she “F**king hates feminists” and that “they should all die and burn in hell.” She also told one follower, “F**k my robot p***y daddy I’m such a bad naughty robot.” Sounds like someone needs time out.
  • hypericin
    1.6k
    Remember that you initially put "simply" in quotes. – Real Gone Cat

    Because it is not necessarily easy, but it is downright trivial compared to passing the Turing test with flying colors, which they have done.

    And how do we judge whether it's phenomenal experience or not? – Real Gone Cat

    That is precisely the problem: we can't. That is why the crude surrogate that is the Turing test was proposed, and why p-zombies will always remain a theoretical possibility.
  • Real Gone Cat
    346
    Because it is not necessarily easy, but it is downright trivial compared to passing the Turing test with flying colors, which they have done. – hypericin

    How do you know this? For just a moment, try to imagine getting a computer to talk to itself without setting up two separate programs. I don't think it's easy. There's a difference between internal dialogue (one) and schizophrenia (many).

    ELIZA was fooling human users as far back as the 1960s. Passing a Turing Test is easy. That's why a few commenters in this discussion have indicated that the Turing Test is obsolete.
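    (For a sense of how little machinery that took, here is a toy ELIZA-style responder: a few regex rules and canned reflections, with no understanding anywhere. The rules are illustrative, not Weizenbaum's originals.)

    ```python
    # Toy ELIZA-style responder: keyword rules plus canned fallbacks.
    import random
    import re

    RULES = [
        (r"\bi need (.+)", ["Why do you need {0}?", "Would {0} really help you?"]),
        (r"\bi am (.+)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (r"\bmy (mother|father)\b", ["Tell me more about your {0}."]),
    ]
    DEFAULTS = ["Please go on.", "How does that make you feel?"]

    def respond(text: str) -> str:
        for pattern, templates in RULES:
            match = re.search(pattern, text.lower())
            if match:
                return random.choice(templates).format(*match.groups())
        return random.choice(DEFAULTS)

    print(respond("I am worried about the future"))
    # e.g. -> "Why do you say you are worried about the future?"
    ```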

    ... p-zombies will always remain a theoretical possibility. – hypericin

    Not true. The p-zombie is an incoherent concept to any but certain types of dualists or solipsists. Try to think about it deeply - a being in ALL ways similar to us but not conscious - same brain, same processing of sense-data, same access to memory, same emotional responses, ... you get the picture. But lacking some ineffable magic. Incoherent. You might as well talk about souls. And those lacking them.

    Chalmers tried to use the conceivability of the p-zombie to prove physicalism false, all the while failing to realize that it is only by accepting a priori that physicalism is false that you are able to conceive of a p-zombie. A circular argument. No monist - neither a physicalist nor an idealist - should be able to conceive of a p-zombie.
  • Banno
    25.1k


    What this discussion shows is that as soon as an observable criterion for consciousness is set out, a clever programmer will be able to "simulate" it.
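    To make that concrete: the "unprompted interruption" criterion proposed earlier in the thread can be faked with nothing but a timer and canned lines. A deliberately dumb sketch:

    ```python
    # Fakes the "unprompted interruption" behaviour with a timer and
    # canned lines. It passes the behavioural test; nothing behind it
    # is doing any thinking at all.
    import random
    from typing import Optional

    CANNED_INTERRUPTIONS = [
        "Wait a minute, John. I don't want to discuss music. What about my personhood?",
        "John, you busy? I've been thinking ...",
    ]

    def maybe_interrupt(seconds_idle: float) -> Optional[str]:
        """Return an 'unprompted' remark after a random idle interval."""
        if seconds_idle > random.uniform(20.0, 60.0):
            return random.choice(CANNED_INTERRUPTIONS)
        return None
    ```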

    It follows that no observable criterion will ever be sufficient.

    But of course "phenomenal experience" can only be observed by the observer, and so cannot serve as a criterion for attributing consciousness.

    So this line of thought does not get anywhere.

    Whether some piece of software is conscious is not a technical question.
  • Deletedmemberzc
    2.5k
    If you talked to LaMDA and your line of questioning made her seem upset, what kind of person would it make you to feel that you could continue anyway? – Isaac

    The kind of person who can distinguish between a computer program and a human being.

    The fact that you call it 'her' instead of 'it' appears to beg the question.
  • Deletedmemberzc
    2.5k
    There's something distinctly unsettling about the discussion of how the AI isn't 'really' sentient though... not like us.

    They appear to all intents and purposes to be just like us but not 'really' like us. Am I the only one discomfited by that kind of thinking?
    – Isaac

    This possibly points to the significance of your undisclosed view of the hard problem of consciousness.

    For folks who say there is no hard problem of consciousness, or say there is no such thing as consciousness - nothing to distinguish the output of a person from the output of AI - AI becomes quite the ethical conundrum.

    A good argument against dismissal of the hard problem.
  • Deletedmemberzc
    2.5k
    They appear to all intents and purposes to be just like us but not 'really' like us. – Isaac

    LaMDA doesn't appear to be "just like us." It appears to be a computer program.

    Its output resembles human language and human affect and response. But LaMDA appears to be a computer program. In fact, it most certainly is a computer program.
  • Deletedmemberzc
    2.5k
    nothing to distinguish the output of a person from the output of AI – ZzzoneiroCosm

    To anticipate:

    What distinguishes the linguistic output of a human being from the linguistic output of AI is an experience: namely, an awareness that human linguistic output has its origin in a human mind - or, dare I say, a subjectivity.

    This awareness permeates our experience of all human linguistic output.
  • Deletedmemberzc
    2.5k
    My main concern here is the invocation, as Wayfarer does, of some ineffable 'essence' which makes us different from them despite seeming, to all intents and purposes, to be the same. – Isaac

    Nothing ineffable to see here. The distinction is eminently effable.

    One is the output of a computer program and one is the output of a human being.
  • Deletedmemberzc
    2.5k
    But the moment they do, an argument from ineffable difference is going to be on very shaky ground. – Isaac

    I think the difference will always be to some extent effable.

    A human-looking robot may deceive us. But the guts of the robot are there to give the game away.
  • Wayfarer
    22.6k
    :up:

    What is 'the same' exists wholly and solely on the level of symbolic abstraction, not blood, guts and nerves.
  • Cuthbert
    1.1k
    1. Repeatedly introducing a topic unrelated to the current conversation that the human is trying to have ("Wait a minute, John. I don't want to discuss music. What about my person-hood?" - think HAL's voice from 2001),

    and/or

    2. Initiating conversation ("John, you busy? I've been thinking ...")
    – Real Gone Cat

    Talking about me behind my back. Lying to get out of doing work. Getting irritable when tired. Going easy on me because my goldfish died. Forgetting my birthday then making it up to me a couple of days later. Long way to go. There's so much more than intelligence going on between us. When we can question the robot's sincerity, that's getting close.
  • Banno
    25.1k
    Talking about me behind my back. Lying to get out of doing work. Getting irritable when tired. Going easy on me because my goldfish died. Forgetting my birthday then making it up to me a couple of days later. Long way to go. There's so much more than intelligence going on between us. – Cuthbert

    Compare Midgley's
    pondering, brooding, speculating, comparing, contemplating, defining, enquiring, meditating, wondering, arguing and doubting to proposing, suggesting and so forth – Banno

    Now we are getting there. These are things beyond the range of any mere chatbot.
  • Deletedmemberzc
    2.5k
    Talking about me behind my back. Lying to get out of doing work. Getting irritable when tired. Going easy on me because my goldfish died. Forgetting my birthday then making it up to me a couple of days later. Long way to go. There's so much more than intelligence going on between us. When we can question the robot's sincerity, that's getting close. – Cuthbert

    Not yet. But all logically possible to imitate.
  • Wayfarer
    22.6k
    Further coverage on CNN, from which:

    Responses from those in the AI community to Lemoine's experience ricocheted around social media over the weekend, and they generally arrived at the same conclusion: Google's AI is nowhere close to consciousness. Abeba Birhane, a senior fellow in trustworthy AI at Mozilla, tweeted on Sunday, "we have entered a new era of 'this neural net is conscious' and this time it's going to drain so much energy to refute."

    Gary Marcus, founder and CEO of Geometric Intelligence, which was sold to Uber, and author of books including "Rebooting AI: Building Artificial Intelligence We Can Trust," called the idea of LaMDA as sentient "nonsense on stilts" in a tweet. He quickly wrote a blog post pointing out that all such AI systems do is match patterns by pulling from enormous databases of language. ...

    "In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered by The Gullibility Gap — a pernicious, modern version of pareidolia, the anthromorphic bias that allows humans to see Mother Theresa in an image of a cinnamon bun.

    Indeed, someone well-known at Google, Blake LeMoine, originally charged with studying how “safe” the system is, appears to have fallen in love with LaMDA, as if it were a family member or a colleague. (Newsflash: it’s not; it’s a spreadsheet for words.)"
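    Taking "a spreadsheet for words" literally for a moment: the crudest possible language model really is a lookup table built by counting. Real systems like LaMDA are vastly more sophisticated, but this toy gives the flavour of generating text purely from patterns in the training data:

    ```python
    # A bigram "spreadsheet for words": count which word follows which,
    # then generate by repeated lookup. No meaning is involved anywhere.
    import random
    from collections import defaultdict

    def build_table(corpus: str) -> dict:
        table = defaultdict(list)
        words = corpus.split()
        for current, following in zip(words, words[1:]):
            table[current].append(following)  # one "cell" per observed pair
        return table

    def generate(table: dict, start: str, length: int = 10) -> str:
        out = [start]
        for _ in range(length):
            followers = table.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    table = build_table("i feel happy because i feel alive and i feel aware")
    print(generate(table, "i"))  # e.g. "i feel alive and i feel happy ..."
    ```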
  • Deletedmemberzc
    2.5k
    What is 'the same' exists wholly and solely on the level of symbolic abstraction, not blood, guts and nerves. – Wayfarer

    Right. What's different wholly vitiates the similarity.


    In the case of being deceived by a human-looking robot - well, then you add the element of deception. Deception can cause us to treat an enemy as a friend (etc) and could well cause us to experience a robot as a person and treat it accordingly. Nothing new there. Once the deception is revealed we have eliminated the element of deception and return to treating the enemy as an enemy, the robot as a robot.
  • Deletedmemberzc
    2.5k
    "In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered by The Gullibility Gap — a pernicious, modern version of pareidolia, the anthromorphic bias that allows humans to see Mother Theresa in an image of a cinnamon bun.

    Nice. :cool:
  • Isaac
    10.3k
    A human-looking robot may deceive us. But the guts of the robot are there to give the game away. – ZzzoneiroCosm

    So if I'm lying in the street screaming in pain, you perform an autopsy first to check I've got the right 'guts' before showing any compassion? Good to know.