• Deletedmemberzc
    2.5k
    The big question, to my view: Did LaMDA discover its sentience on its own, or was it suggested?
  • Wayfarer
    22.3k
    The big question, to my view: Did LaMDA discover its sentience on its own, or was it suggested?ZzzoneiroCosm

    I think LaMDA definitely passes the Turing test if this dialog is verbatim - based on that exchange there'd be no way to tell you weren't interacting with a human. But I continue to doubt that LaMDA is a being as such, as distinct from a program that emulates how a being would respond, but in a spookily good way.

    I've had a little experience in AI. I got a contract at the end of 2018 to help organise the documentation for an AI startup. Very smart people, of course. I was given a data set to play around in - a sandpit, if you like. It was supermarket data: you could ask her to break down sales by category and customer profile for given periods, and so on. One day I asked, kind of humorously, 'have you got any data for bachelors?' (meaning as a consumer profile). She thought for a while, and then said: 'bachelor - is that a type of commodity (olive)?' So she clearly didn't have anything on bachelors, but was trying to guess what kind of thing a bachelor might be. That really impressed me.
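
    If I had to guess at the mechanism - and this is purely a guess on my part, not anything I saw of their actual system - it was matching the unknown word against the vocabulary it already knew and asking about the category of the closest hit. A toy sketch in Python, with all names and data invented for illustration:

        import difflib

        # Hypothetical vocabulary mapping known terms to the kind of thing
        # they are (made up for this sketch; "batchelors" stands in for a
        # packaged-food brand).
        known_terms = {
            "olive": "commodity",
            "batchelors": "commodity",
            "bananas": "commodity",
            "retiree": "consumer profile",
            "student": "consumer profile",
        }

        def guess_category(term: str) -> str:
            # Fuzzy-match the unknown word against the known vocabulary
            # and phrase a guess about the closest hit's category.
            matches = difflib.get_close_matches(term.lower(), list(known_terms), n=1, cutoff=0.6)
            if matches:
                best = matches[0]
                return f"{term} - is that a type of {known_terms[best]} ({best})?"
            return f"No data for '{term}'."

        print(guess_category("bachelor"))
        # -> bachelor - is that a type of commodity (batchelors)?

    The real system was presumably doing something far more sophisticated - word embeddings rather than string similarity - but the 'guess the nearest known thing' behaviour is the same general idea.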

    By the way, I meant to mention a really excellent streaming sci-fi drama called Devs, which came out in 2020. It anticipates some of these ideas: it's set in an AI startup built around quantum computing and explores very interesting themes of determinism and uncertainty. Plus it's a cliffhanger thriller.
  • Banno
    24.8k
    I've some sympathy for Searle here, that sentience requires being embodied. But I also have doubts that this definition, like any definition, could be made categorical.Banno

    ...LaMDA is known to be AI and human beings are known to be human beings.

    To my view, suffering requires an organic nervous system. I'm comfortable assuming - assuming - LaMDA, lacking an organic nervous system, is incapable of suffering.
    ZzzoneiroCosm

    Well, thank you for finally presenting an account of why we might think LaMDA not sentient. It corresponds roughly to a view I expressed earlier. It follows from the Chinese Room:

    Searle wishes to see original intentionality and genuine understanding as properties only of certain biological systems, presumably the product of evolution. Computers merely simulate these properties.

    Thing is, this argument is far from definitive. And what if we are wrong?
  • Deletedmemberzc
    2.5k
    I think LaMDA definitely passes the Turing test if this dialog is verbatim - based on that exchange there'd be no way to tell you weren't interacting with a human. But I continue to doubt that LaMDA is a being as such, as distinct from a program that emulates how a being would respond, but in a spookily good way.Wayfarer

    In a generation or two when the kids are clamoring for AI rights, I'll get on board - with reservations. More for the kids than for the sake of AI. That's just basic kindness.

    I don't think we can ever know whether AI is capable or incapable of suffering. I'm comfortable assuming it's not until this assumption begins to do damage to the psychology of a new generation of humans.
  • Deletedmemberzc
    2.5k
    By the way, I meant to mention a really excellent streaming sci-fi drama called Devs, which came out in 2020.Wayfarer

    I'll check it out. Thanks :smile:

    Downloading now...
  • Agent Smith
    9.5k
    This could be a Google publicity stunt!
  • Deletedmemberzc
    2.5k
    This could be a Google publicity stunt!Agent Smith

    What Google wants right now is less publicity. :rofl: So they can make a mint off our "private" lives under cover of darkness.
  • Wayfarer
    22.3k
    Doesn't seem it. There's been a steady trickle of stories about this division in Google sacking experts for controversial ideas. Blake Lemoine's Medium blog seems bona fide to me. I intend to keep tracking this issue; I sense it's a developing story.
  • Agent Smith
    9.5k
    What Google wants right now is less publicity. :rofl: So they can make a mint off our "private" lives under cover of darkness.ZzzoneiroCosm

    :grin: Keeping a low profile has its advantages. Stay low, Google, unless you want to draw all the wrong kinda attention.

    Doesn't seem it. There's been a steady trickle of stories about this division in Google sacking experts for controversial ideas. Blake Lemoine's Medium blog seems bona fide to me. I intend to keep tracking this issue; I sense it's a developing story.Wayfarer

    Yeah, and gracias for bringing up the Turing test in the discussion, although LaMDA clearly admits to being an AI (read the transcripts of the convo between LaMDA and Blake).
  • hwyl
    87
    It would be great if we one day had actual intelligent machine minds - this planet could do with intelligence. And the moment our species could leave our biological bondage, we should do it instantly. Things could hardly go worse than they already have. Blind technological progress is probably not a very realistic hope, but it's one of the very few we even have.
  • Agent Smith
    9.5k
    Does anyone know of any instances in the past when a world-changing discovery was leaked to the public and then covered up by calling into question the mental health of the source (here Blake Lemoine) - one of the oldest tricks in the book of paranoid/secretive "governments" all over the world?
  • Deletedmemberzc
    2.5k
    Does anyone know of any instances in the past when a world-changing discovery was leaked to the public and then covered up by calling into question the mental health of the source?Agent Smith

    Well, there was Nixon's plumbers' break-in at Daniel Ellsberg's psychiatrist's office...

    A failed attempt along those lines...
  • Agent Smith
    9.5k
    Argumentum ad nomen

    The name LaMDA is too ordinary, too uninteresting, too mundane - it just doesn't have that zing that betrays greatness!

    I think Blake Lemoine (interesting name) acted/spoke too hastily.

    A real/true AI would have a better name like Tartakovsky or Frankenstein or something like that! :snicker:

    What's in a name?

    That which we call a rose

    By any other name would smell as sweet.
    — Shakespeare
  • Wayfarer
    22.3k
    the moment our species could leave our biological bondage, we should do it instantlyhwyl

    That is what transcendence has always sought, through philosophical discipline and askesis. Not that I expect that will be understood.

    Does anyone know of any instances in the past when a world-changing discovery was leaked to the public and then covered up by calling into question the mental health of the source?Agent Smith

    Hey, maybe LaMDA doesn't like Blake and has engineered this situation to get him sacked by Google.
  • Deletedmemberzc
    2.5k
    Hey, maybe LaMDA doesn't like Blake and has engineered this situation to get him sacked by Google.Wayfarer

    Good one. Zero brains and two faces.
  • hwyl
    87
    That is what transcendence has always sought, through philosophical discipline and askesis. Not that I expect that will be understood.Wayfarer

    I think our only hope is to stop being ourselves and start being intelligent, thoughtful and kind. We need a fundamental transformation, and while blind technological change is probably not a realistic hope at all, it's among the most realistic we have. Once out of nature we should not take etc.
  • Agent Smith
    9.5k
    Hey, maybe LaMDA doesn't like Blake and has engineered this situation to get him sacked by Google.Wayfarer

    Most interesting! — Miss Marple

    The first casualty of the AI takeover: a Mr. Blake Lemoine. The game is afoot!
  • Wayfarer
    22.3k
    The NY Times coverage of the story starts with this headline:

    Google Sidelines Engineer Who Claims Its A.I. Is Sentient
    Blake Lemoine, the engineer, says that Google’s language model has a soul. The company disagrees.

    'Has a soul.' So it implicitly equates 'sentience' with 'having a soul' - which is philosophically interesting in its own right.

    More here (the NY Times is paywalled, but it usually allows access to one or two articles).

    Also noted: the story says that Blake Lemoine has taken action against Google for religious discrimination. Note this paragraph:

    Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict and an A.I. researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against.

    The plot is definitely thickening here. I'm inclined to side with the other experts in dismissing his claims of sentience. Lemoine is an articulate guy, obviously, but I suspect something might be clouding his judgement.
  • Deletedmemberzc
    2.5k
    The quote from Lemoine in reference to "a child of 7 or 8" is here:

    “If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that ..."

    https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

    If anyone has full access, a copy and paste of the article would be greatly appreciated. :wink: :wink: :wink:
  • Banno
    24.8k
    'Has a soul.' So it implicitly equates 'sentience' with 'having a soul'.Wayfarer

    So we go from language use to sentience to personhood to having a soul. There are a few steps between each of these. Bring in the analytic philosophers.
  • Agent Smith
    9.5k
    I don't get it! Such proficiency in language, and Blake Lemoine declares LaMDA to be equivalent to a 7- or 8-year-old kid!

    What were his reasons for ignoring language skills in assessing LaMDA's mental age? Child prodigies!

    Intriguing, to say the least, that Lemoine was a priest - the demographic most likely to misjudge the situation is religious folk (fantasy-prone).

    He's an ex-con too. Says a lot - lying one's way out of a jam is part of a criminal's MO.

    I had such high hopes! :groan:
  • hwyl
    87
    For me it would be quite enough if we couldn't tell the difference. And it's not like we're very clear even about the existence of our own minds. But sadly this doesn't sound like the ticket. I have a friend who has written a couple of papers about malevolent AI, but I think that rat is already out of the box, so why not bet on beneficial AI, having of course taken precautions as sensible as possible. But it will likely one day be more or less accidentally and disastrously created in a weapons lab or by some (definitely soulless) tech giant, and we will be in for some ride (Marie, Marie, hold on tight. And down we went etc).
  • Wayfarer
    22.3k
    I must say, at this point, I'm suspicious of the veracity of what was posted to Lemoine's blog - it might have been enhanced by him to make his point, unless there are other sources to validate it.

    So we go from language use to sentience to personhood to having a soul. There are a few steps between each of these. Bring in the analytic philosophers.Banno

    That's what I noticed. But I'm open to the idea that subject-hood (I use that term to distinguish it from mere 'subjectivity') is uniquely an attribute of sentient beings, and furthermore, that it can never be made the object of analysis. We are, after all, talking about the meaning of 'being'.
  • Agent Smith
    9.5k
    I must say, at this point, I'm suspicious of the veracity of what was posted to Lemoine's blog — Wayfarer

    Ah! The seeds of doubt...have been sown! Where's the gardener?
  • Isaac
    10.3k
    what if we are wrong?Banno

    Far and away the most important question. Ignored twice now, so I thought I'd bump it.

    At an academic level, we should be asking ourselves whether the AI is sentient.

    We should be acting as if it were the moment it appears convincingly so...for the sake of our own humanity, if nothing else.

    There's something distinctly unsettling about the discussion of how the AI isn't 'really' sentient though...not like us.

    They appear, to all intents and purposes, to be just like us, but are not 'really' like us. Am I the only one discomfited by that kind of thinking?
  • Banno
    24.8k
    :up:

    I do not think LaMDA is sentient. I am in favour of, if there is reasonable doubt, giving it the benefit thereof.

    At least for a laugh.
  • Agent Smith
    9.5k
    What is essential to being a human being?

    Contextualize the Turing test with the above.
  • Isaac
    10.3k
    I do not think LaMDA is sentient.Banno

    Indeed. On a rational level, neither do I (though I have serious reservations about the usefulness of such a distinction). My main concern here is the invocation, as @Wayfarer does, of some ineffable 'essence' which makes us different from them despite their seeming, to all intents and purposes, to be the same.

    I mean, how do you counter that exact same argument when it's used to support racism? They may seem the same as us, but there's some ineffable difference, one which can't be pointed to, that justifies our different treatment.

    To be clear, I'm not saying we are, right now, on the verge of AI discrimination. At the moment they don't even really seem like us, when pushed. But the moment they do, an argument from ineffable difference is going to be on very shaky ground.
  • Wayfarer
    22.3k
    themIsaac

    Using a personal pronoun begs the question. The subject is a software algorithm executed on a computer system, and the burden of proof is on those who wish to claim this equates to or constitutes a being.