If LaMDA decides on its own to interrupt you, that would be interesting. — Real Gone Cat
I think if something like this can be achieved, then we must consider consciousness. — Real Gone Cat
Computing, not thinking. Let's be clear on this.

It is just another kind of 'consciousness,' or thinking. — Jackson
Computers (including AI) have designated locations of each and every part. Humans can have experiential events, for example, dreams, where the storage is not found anywhere. Tell me, where is the mind located? — L'éléphant

What is the difference? — Jackson
Remember that you initially put "simply" in quotes. — Real Gone Cat
And how do we judge whether it's phenomenal experience or not? — Real Gone Cat
Because it is not necessarily easy, but it is downright trivial compared to passing the Turing test with flying colors, which they have done. — hypericin
... p-zombies will always remain a theoretical possibility. — hypericin
If you talked to LaMDA and your line of questioning made her seem upset, what kind of person would it make you to feel that you could continue anyway? — Isaac
There's something distinctly unsettling about the discussion of how the AI isn't 'really' sentient though...not like us.
Their appearing, to all intents and purposes, to be just like us but not 'really' like us. Am I the only one discomfited by that kind of thinking? — Isaac
nothing to distinguish the output of a person from the output of AI — ZzzoneiroCosm
My main concern here is the invocation, as Wayfarer does, of some ineffable 'essence' which makes us different from them despite seeming, to all intents and purposes, to be the same. — Isaac
But the moment they do, an argument from ineffable difference is going to be on very shaky ground. — Isaac
1. Repeatedly introducing a topic unrelated to the current conversation that the human is trying to have ("Wait a minute, John. I don't want to discuss music. What about my personhood?" - think HAL's voice from 2001),
and/or
2. Initiating conversation ("John, you busy? I've been thinking ...") — Real Gone Cat
Talking about me behind my back. Lying to get out of doing work. Getting irritable when tired. Going easy on me because my goldfish died. Forgetting my birthday then making it up to me a couple of days later. Long way to go. There's so much more than intelligence going on between us. — Cuthbert
pondering, brooding, speculating, comparing, contemplating, defining, enquiring, meditating, wondering, arguing and doubting to proposing, suggesting and so forth — Banno
When we can question the robot's sincerity, that's getting close. — Cuthbert
Responses from those in the AI community to Lemoine's experience ricocheted around social media over the weekend, and they generally arrived at the same conclusion: Google's AI is nowhere close to consciousness. Abeba Birhane, a senior fellow in trustworthy AI at Mozilla, tweeted on Sunday, "we have entered a new era of 'this neural net is conscious' and this time it's going to drain so much energy to refute."
Gary Marcus, founder and CEO of Geometric Intelligence, which was sold to Uber, and author of books including "Rebooting AI: Building Artificial Intelligence We Can Trust," called the idea of LaMDA as sentient "nonsense on stilts" in a tweet. He quickly wrote a blog post pointing out that all such AI systems do is match patterns by pulling from enormous databases of language. ...
"In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered by The Gullibility Gap — a pernicious, modern version of pareidolia, the anthromorphic bias that allows humans to see Mother Theresa in an image of a cinnamon bun.
Indeed, someone well-known at Google, Blake LeMoine, originally charged with studying how “safe” the system is, appears to have fallen in love with LaMDA, as if it were a family member or a colleague. (Newsflash: it’s not; it’s a spreadsheet for words.)"
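Marcus's "spreadsheet for words" quip is easier to weigh with a concrete picture of what "matching patterns by pulling from enormous databases of language" means. The sketch below is emphatically not LaMDA (a large Transformer), just a toy bigram model with an invented corpus: it produces fluent-looking, first-person text purely by replaying which words followed which in its training data, with nothing behind the output that could be upset, sincere, or interrupted.

```python
import random
from collections import defaultdict

# Toy bigram language model: a minimal, hypothetical illustration of
# "pattern matching over a database of language". The corpus below is
# made up; real systems like LaMDA are vastly larger Transformers, but
# the point stands that fluent output need not imply anything behind it.

def train_bigrams(corpus):
    """Record, for each word, every word that followed it in the corpus."""
    followers = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        followers[prev].append(nxt)
    return followers

def generate(followers, start, length=12):
    """Emit text by repeatedly sampling a word seen after the current one."""
    out = [start]
    for _ in range(length):
        choices = followers.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = ("i feel happy when i talk to people "
          "i feel sad when i am alone i am a person")
model = train_bigrams(corpus)
print(generate(model, "i"))
# e.g. "i feel sad when i talk to people i am a person" --
# grammatical-looking, first-person, and entirely statistics.
```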
What is 'the same' exists wholly and solely on the level of symbolic abstraction, not blood, guts and nerves. — Wayfarer
"In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered by The Gullibility Gap — a pernicious, modern version of pareidolia, the anthromorphic bias that allows humans to see Mother Theresa in an image of a cinnamon bun.
A human-looking robot may deceive us. But the guts of the robot are there to give the game away. — ZzzoneiroCosm