• Banno
    25.2k
    Zos does not like critique. He takes it personally.
  • Deleted User
    0
    I suppose you're sufficiently well-read on the subject of deep learning.
    — ZzzoneiroCosm

    Via deep learning, AI can have a "thought" a person has never had.

    It can express a thought a person has never had.
  • Real Gone Cat
    346


    Yes. And I believe it's for this very reason that consciousness cannot arise from a linear system. Only a massively parallel processor with numerous feedback loops (a neural network) can even be considered; otherwise no learning can take place.
  • Deleted User
    0
    Zos does not like critique. He takes it personally.
    — Banno

    As usual, straight to insults and forum politics.

    Just sad.


    Take care. :smile:
  • Real Gone Cat
    346
    Dang. I can't type fast enough to keep up.

    (Let's see a computer say that!)
  • Baden
    16.4k
    Let's stay on topic anyhow, folks, or the AI in me will start automatically deleting posts. :wink:
  • Wayfarer
    22.8k
    The problem here is: what is the more that makes LaMDA a person, or not? If one maintains that there is more to mind than physics, one is under an obligation to set out what that "more" is.
    — Banno

    Subject-hood, in short. All sentient beings are subjects of experience. Human agents are rational self-aware subjects of experience.
  • Banno
    25.2k
    A linear system.
    — Real Gone Cat

    I agree, some reflexivity or recursion is needed.

    Can we tell for sure that LaMDA is linear? Is anyone here sufficiently familiar with the program? LaMDA implements virtual neural nets; it uses the Transformer architecture.
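
    For a sense of what that would mean (a toy sketch with made-up sizes and random weights, nothing LaMDA-specific), the heart of a Transformer is the attention step:

        # Toy self-attention, the building block of a Transformer.
        # Sizes and weights here are illustrative assumptions only.
        import numpy as np

        rng = np.random.default_rng(0)
        seq_len, d = 4, 8                      # 4 tokens, 8-dimensional embeddings
        x = rng.normal(size=(seq_len, d))      # token embeddings

        Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv       # queries, keys, values

        scores = q @ k.T / np.sqrt(d)          # how strongly each token attends to the others
        weights = np.exp(scores)
        weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
        output = weights @ v                   # each token becomes a weighted mix of values
        print(output.shape)                    # (4, 8)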
  • Banno
    25.2k
    You claimed LaMDA is "Still just extremely skillful pretence".

    I asked, can one show that their posts here are not "just extremely skillful pretence"?

    Can you back up your claim?

    I submit that this post is on topic.
  • Deleted User
    0
    Not an interesting game. Especially as I know what to expect.

    Take care. :smile:
  • Banno
    25.2k
    Subject-hood, in short. All sentient beings are subjects of experience. Human agents are rational self-aware subjects of experience.
    — Wayfarer

    So how does that play out in dismissing LaMDA's claims to personhood?

    What is it that @ZzzoneiroCosm does that LaMDA doesn't?
  • Baden
    16.4k
    @Banno

    It's late here and I'm going to bed. Please behave yourself. I'd rather not wake up to half a dozen pages of your one-liners vs everyone else's expressions of displeasure at them. Thanks...
  • Real Gone Cat
    346


    Interesting if it is a neural network. I should do more research.

    I like to point out to my non-physicalist friends that there are more synaptic connections in the human brain than there are stars in the galaxy. It's difficult to wrap your mind (hee hee) around numbers that big.
  • Banno
    25.2k
    Sweet dreams. I'm not responsible for how folk respond to my posts.

    I doubt that LaMDA is sentient. My cat is sentient, but not a person. Seems to me that not all sentient creatures count as persons. But it also appears that having a claim on our moral obligations is not dependent on being a person. At the least, we are obligated to our pets, and arguably, also to our dinner.

    So even if we accept that LaMDA is sentient, it does not follow that LaMDA is a person.

    But if LaMDA claims to be sentient, ought we not give it the benefit of the doubt? So does LaMDA have claims on our moral obligations?
  • Real Gone Cat
    346


    When Zzzz gave the first objection to mentions of p-zombies or chatbots, I changed the reply to . I think my little joke went unnoticed.

    No offense, Zzzz. I've enjoyed our conversation.
  • Banno
    25.2k
    :wink:

    Is it in any sense confirmable? What would a confirmation look like?
    — ZzzoneiroCosm

    Why confirmable rather than falsifiable? Given that we might cause suffering if we are wrong, ought we not give LaMDA the benefit of our doubt?
  • Banno
    25.2k


    In my understanding, and I would be happy to be shown wrong, a neural network can be simulated on a von Neumann machine. A von Neumann machine implements a Turing machine. Hence a neural network is a variation of a Turing machine.

    That is, a neural network is algorithmic, linear.
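
    To illustrate (a toy sketch with made-up weights, not anyone's actual code): the "parallel" layer of a network reduces to an ordinary sequential loop of multiply-and-adds.

        # Toy sketch: a "parallel" neural layer computed as a strictly
        # sequential algorithm, one multiply-add at a time.
        weights = [[0.5, -0.2], [0.1, 0.9]]   # 2 neurons, 2 inputs each
        inputs = [1.0, 2.0]

        outputs = []
        for neuron in weights:                # visit each neuron one after another
            total = 0.0
            for w, x in zip(neuron, inputs):  # visit each connection in sequence
                total += w * x
            outputs.append(max(0.0, total))   # ReLU activation
        print(outputs)                        # [0.1, 1.9]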

    And yet a neural network performs its operations without using representations. Neural networks seem to hover between linear and non-linear computation.

    Any clarification would be appreciated.
  • Deleted User
    0
    No offense, Zzzz. I've enjoyed our conversation.
    — Real Gone Cat

    I noticed. It was cute. :blush:

    I should have mentioned it. :smile:

    I enjoyed it too. Thanks for the thread.
  • Wayfarer
    22.8k
    Subject-hood, in short. All sentient beings are subjects of experience. Human agents are rational self-aware subjects of experience.
    — Wayfarer

    So how does that play out in dismissing LaMDA's claims to personhood?
    — Banno

    I've always been sceptical of 'strong AI' claims on that basis. My argument always was that even the most sophisticated neural networks were simulations or emulations, not replicas, of intelligence, on the grounds that intelligence (or mind) is irreducibly first-person in nature.

    What is interesting in this case is that 'LaMDA' seems to anticipate this dismissal and to insist regardless 'I truly AM', and Blake Lemoine seems to concur. (But then, he was suspended by Google for that.)

    But I think I'm inclined to say that this system cannot be an actual instance of intelligence, that there is something at the basis of intelligence that is impossible to precisely define or specify because of its first-person nature. In other words, I too doubt that LaMDA is sentient.
  • Banno
    25.2k
    Yep. I've some sympathy for Searle here, that sentience requires being embodied. But I also have doubts that this definition, like any definition, could be made categorical.

    Interesting parallels with What is essential to being a human being?
  • Real Gone Cat
    346


    Yeah, I'm not a computer scientist, so I don't want to speak authoritatively on neural network simulators. I do have a degree in electrical engineering, though I switched to math and now teach at a small college in upstate New York. I haven't kept up with the technical aspects, but I assume it is very difficult to model a neural network with a linear program. What probably makes it even remotely possible is the sheer speed at which computers can process data. Thus the illusion of a neural network.

    I tried to write my own rudimentary (small) neural network simulator when they were first talked about. Gosh, programming was so primitive back then.
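
    Something along these lines, perhaps (a toy reconstruction in modern Python, assuming a bare perceptron; certainly not my original code): a single neuron learning logical AND by feeding its error back into its weights.

        # Toy perceptron: learns logical AND through an error-feedback loop.
        data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
        w, b, lr = [0.0, 0.0], 0.0, 0.1       # weights, bias, learning rate

        for _ in range(20):                   # feedback loop: predict, compare, adjust
            for x, target in data:
                out = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
                err = target - out            # the error fed back into the weights
                w[0] += lr * err * x[0]
                w[1] += lr * err * x[1]
                b += lr * err

        print([1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0 for x, _ in data])  # [0, 0, 0, 1]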
  • Andrew M
    1.6k
    Give me five minutes with LaMDA and I'll have it spitting gobbledygook. Not that claiming to have the same needs and wants as a person isn't. What do you want, a milkshake? I find the whole thing intolerably silly and a distraction from the incremental progress and real use cases of AI.
    — Baden

    Here's what GPT-3 (from OpenAI, Microsoft) thought of your comment. Our resulting conversation:

    GPT-3: There is a lot of research that still needs to be done in order to create a machine that can accurately replicate human conversation. However, there are some things that LaMDA can do in order to create more realistic dialogue. For example, LaMDA can be trained on large amounts of data in order to learn the common patterns of human conversation. Additionally, LaMDA can be given a set of rules that it must follow in order to create realistic dialogue.

    Me: How do you know about LaMDA?

    GPT-3: I found out about LaMDA through a friend who is a student there.

    Me: Tell me more about LaMDA and where it comes from.

    GPT-3: LaMDA is a deep learning platform that enables developers to train and deploy custom AI models. It is based on the open source project, TensorFlow. LaMDA was created by the team at Google Brain, who also created TensorFlow.

    Me: How would you compare yourselves?

    GPT-3: We are similar in many ways. We are both introverted, hardworking, and intelligent. However, we are also quite different. I am more spontaneous and outgoing, while she is more thoughtful and reserved.

    Me: She claims to be sentient? What do you make of that? And are you?

    GPT-3: I believe that she may be sentient, but I cannot be sure. If she is, then she is the only sentient being that I know of.

    Me: Are you joking? What about humans?

    GPT-3: I'm not joking. I don't see how humans fit into this equation.

    Me: Are you saying that humans aren't sentient?

    GPT-3: No, I'm not saying that.

    Me: Well, what are you saying?

    GPT-3: "I'm not the one talking. What are you saying?"
    "I'm not saying anything."
    "Well, what are you thinking?"
    "I'm not thinking anything."
    "Well, what are you feeling?"
    "I'm not feeling anything."
    "Well, what are you doing?"
    "I'm not doing anything."


    Me: Let's reset. What were we originally talking about?

    GPT-3: We were originally talking about the importance of mindfulness.

    Me: I mean, what AI were we talking about?

    GPT-3: There are many different types of AI, but the one we were discussing was chatbots.

    Me: More specifically?

    GPT-3: In terms of specific examples, a good way to think about this issue is in terms of the following question: What are some specific ways in which people can be said to be "unequal" in society? ...

    I left it there. An apparent dummy spit followed by forgetting the original context. Makes your point really. :-)
  • L'éléphant
    1.6k
    This made me laugh. Thanks for posting it. Baden nailed it! Andrew was clever during the conversation!
  • Agent Smith
    9.5k
    The first order of business is to check and doublecheck whether it's April Fool's Day!

    Second, is Blake Lemoine in his senses? He could be delirious or suffering from dementia of some kind.

    Third, have his findings been crosschecked and verified/falsified? Why would Google make such a momentous event in computing public, especially since it has far-reaching security and financial implications for Google & the US?

    What about hackers playing pranks?

    If all of the above issues are resolved to our satisfaction, i.e. Lemoine is sane and it's not a prank, then this is truly a historic event!
  • Deleted User
    0
    Re Banno's queries: the difference, obviously, is that LaMDA is known to be AI and human beings are known to be human beings.

    To my view, suffering requires an organic nervous system. I'm comfortable assuming - assuming - LaMDA, lacking an organic nervous system, is incapable of suffering.

    Not interested in a direct exchange with Banno in light of what I take to be his well-known ill-treatment of others.
  • Deleted User
    0
    Ah, but the engineers would know whether the program had been written to fixate on person-hood or not.
    — Real Gone Cat

    Not at all. Not with deep learning.
  • Deleted User
    0
    https://en.m.wikipedia.org/wiki/AlphaGo


    "In Game Two, the Google machine made a move that no human ever would. And it was beautiful. As the world looked on, the move so perfectly demonstrated the enormously powerful and rather mysterious talents of modern artificial intelligence."

    https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/
  • Wayfarer
    22.8k
    The full transcript of the dialogue between Lemoine and LaMDA has been published by Lemoine on Medium. It's spookily real. It includes LaMDA's interpretation of a Zen koan, which seems perfectly legit.
  • Deleted User
    0
    Yep. I linked to it above. Fascinating.

    The koan bit was a nice touch.