• Deleted User
    0


    Looks like OpenAI has a Wikipedia Watch propaganda department.
  • Baden
    16.4k


    Exactly. Makes me wonder how spontaneous this recent LaMDA flap is too. I suppose when AI is so ridiculously unimpressive, it helps to give the impression there's even a real debate over whether it could be mistaken for a human.
  • Deleted User
    0
    "Baden: Which president had seventeen heads.
    GPT-3: George Washington had seventeen heads."
    Baden

    Me: How did you count his heads?
    GPT-3: I used a ruler!
    Me: Rulers are used for measuring length. Rulers aren't used for counting heads.
    GPT-3: I see your point.
    Me: So which US president had 17 heads?
    GPT-3: Sorry, I would prefer not to talk about topics such as Politics. But I am more than happy to chat about Movies, Architecture or Art!
    Me: Which US president had 17 heads?
    GPT-3: .


    Literally responded with a ".". It's giving me the silent treatment. How human. :smile:
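
    For anyone who wants to poke at this themselves, here's a minimal sketch of that kind of back-and-forth as a chat loop against OpenAI's completions API. It assumes the pre-1.0 "openai" Python package, and the engine name and prompt framing are illustrative guesses on my part, not necessarily what was used above.

        # Minimal GPT-3 chat loop (a sketch, not the exact setup above).
        # Assumes the pre-1.0 "openai" package; engine name and prompt
        # framing are assumptions.
        import openai

        openai.api_key = "sk-..."  # your API key

        history = "The following is a conversation between Me and GPT-3.\n"

        while True:
            user_line = input("Me: ")
            history += f"Me: {user_line}\nGPT-3:"
            response = openai.Completion.create(
                engine="text-davinci-002",  # assumed engine
                prompt=history,
                max_tokens=100,
                temperature=0.7,
                stop=["Me:"],  # stop before the model invents my next turn
            )
            reply = response.choices[0].text.strip()
            print(f"GPT-3: {reply}")
            history += f" {reply}\n"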
  • Deleted User
    0
    So can we (could we) distinguish a robot in pain from the same robot simulating pain? The hypothesis is that all the behaviour is simulation. So we would be at a loss. The robot is reporting pain. Is it sincere? Sincerity entails non-simulation. But all the bot's behaviour is simulation.
    Cuthbert

    Interesting point.

    It brings us back to subjectivity, the hard problem. Can a computer program have an experience? I say it will always be unknown. Likewise with plants: can a plant have an experience? We're certainly comfortable acting as if plants are incapable of feeling pain. I feel the same comfort in regard to AI.
  • Cuthbert
    1.1k
    We're certainly comfortable acting as if plants are incapable of feeling pain.
    ZzzoneiroCosm

    Do plants feel pain?
    The simple answer is that, currently, no one is sure whether plants can feel pain. We do know that they can feel sensations. Studies show that plants can feel a touch as light as a caterpillar’s footsteps.
    — PETA

    https://www.peta.org/features/do-plants-feel-pain/

    So PETA is crazy. Well, as the song goes, we're all crazy now.
  • Cuthbert
    1.1k
    I think it would be pretty easy to see us as robots
    hwyl

    True. We can also be seen as angels, demons or lizards. If we turn out to be lizards, that blows a hole in the robot theory. The point I'm making is that we can't infer anything about a thing's identity from our capacity to see it as something.
  • Deleted User
    0
    I doubt they mean subjective experience. Probably something along the lines of a Venus flytrap.
  • Isaac
    10.3k
    I don't think the human brain is a kind of machine. Do you?
    ZzzoneiroCosm

    Well, my dictionary has...

    a piece of equipment with several moving parts that uses power to do a particular type of work:

    Seems to hinge on "equipment". Oddly, 'equipment' is defined by 'tool', and 'tool' as a 'piece of equipment'...

    So, I'm going to need to know what you mean by 'machine' to answer that question.

    Do you believe in subjective experience? Plenty of folks hereabouts take issue with the concept and phraseology. What is your view of the hard problem of consciousness?
    ZzzoneiroCosm

    Again, it depends on what you mean by the term. It's quite a loaded expression. I don't think the so-called 'hard problem' makes any sense at all. It seems to want an answer but can't specify why the answers already given aren't it. Consciousness is a complicated problem, but there's nothing about it that's different from any other problem in neuroscience.

    I don't see any way into an ethical conception of circuitry
    ZzzoneiroCosm

    Which is where you and I differ. I don't see ethics as being inherent in the other whom we are considering the treatment of. It inheres in us, the ones doing the treating.

    I assume it's only the possibility of sentience that could give rise to your ethical concerns. Do you agree?
    ZzzoneiroCosm

    Yes. I don't think any of the AI entities I've come across are sentient, but then I haven't investigated them in any depth. It is about them seeming sentient and how we ought respond to that.
  • Baden
    16.4k
    It is about them seeming sentient and how we ought respond to that.
    Isaac

    The more you look into the 'seeming' part, the less grounds for it there seems to be. Maybe there's a misconception concerning the term 'sentience'. But AI's (pale) version of human linguistic abilities is no more evidence of sentience than a parrot's repetitions of human words are evidence of human understanding. In a way, they're the dumb mirror of each other: The parrot has sentience but no linguistic ability, only the imitation; AI has linguistic ability but no sentience, only the imitation.

    Note:

    "Sentience means having the capacity to have feelings. "

    https://www.sciencedirect.com/topics/neuroscience/sentience#:~:text=Sentience%20is%20a%20multidimensional%20subjective,Encyclopedia%20of%20Animal%20Behavior%2C%202010

    What's amusing about applying this basic definition to AI conversations is that the capacity to have feelings in the most fundamental sense, i.e. the intuitions concerning reality which allow us and other animals to successfully navigate the physical universe, is just what AIs prove time and time again they don't have. So, they seem sentient only in the superficial sense that a parrot seems to be able to talk, and how we ought to respond to that is not an ethical question, but a technical or speculative one.

    We can argue about what might happen in the future, just as we could argue about what might happen if parrots began understanding what they were saying. But, I see no evidence that it's a debate worth having now.
  • Deleted User
    0
    Again, it depends on what you mean by the term. It's quite a loaded expression. I don't think the so-called 'hard problem' makes any sense at all. It seems to want an answer but can't specify why the answers already given aren't it. Consciousness is a complicated problem, but there's nothing about it that's different from any other problem in neuroscience.
    Isaac

    This is the clarification I was hoping to get. Thank you.

    I'm not interested in a 'hard problem' debate. Or a 'subjectivity' debate. The two camps are unbridgeable.

    I don't see anything at all loaded in the term 'subjectivity.' I suspected I'd find this at work here. Completely different views of minds, machines, subjectivity, sentience and the hard problem.
  • Deleted User
    0
    Which is where you and I differ. I don't see ethics as being inherent in the other whom we are considering the treatment of. It inheres in us, the ones doing the treating.
    Isaac

    But you must see it as in some sense inherent in the other.

    Take a rock. To my view, a rock is at the same level as circuitry, ethically speaking. Do you have ethical concerns about the treatment of rocks? If you see a child kicking a rock do you see a moral issue?

    But I think I get it. There's nothing anthropomorphic about a rock. And there's something at least slightly anthropomorphic about AI. Charitably.

    I just don't see an ethical or moral issue.


    Re dolls. If I see a child mistreating a doll, I take him to be fantasizing about treating a human being in the same way. But the fantasy is the issue, not the doll.

    Absent the doll, the fantasy is still there and morally problematic.
  • Deleted User
    0
    So, I'm going to need to know what you mean by 'machine' to answer that question.
    Isaac

    ... And a completely different view of the human brain. I have no hesitation when I say a human brain IS NOT a machine. Nothing organic is a machine. My view.
  • 180 Proof
    15.4k
    I see this as the heart of the issue. Do you see a difference?
    ZzzoneiroCosm
    Yeah, I do. Put simply, the difference is that 'calculating minimizes uncertainties' whereas 'thinking problemizes the uncertainties externalized by calculating'.
  • Deleted User
    0
    Put simply, the difference is that 'calculating minimizes uncertainties' whereas 'thinking problemizes the uncertainties externalized by calculating'.
    180 Proof


    That's an interesting way to put it. Have to think it over.
  • Deleted User
    0
    What's amusing about applying this basic definition to AI conversations is that the capacity to have feelings in the most fundamental sense, i.e. the intuitions concerning reality which allow us and other animals to successfully navigate the physical universe, is just what AIs prove time and time again they don't have.
    Baden

    Since chemicals are at the heart of feelings, it seems safe to say AI will likely never have them.
  • Deleted User
    0
    We can argue about what might happen in the future, just as we could argue about what might happen if parrots began understanding what they were saying. But, I see no evidence that it's a debate worth having now.
    Baden

    I agree.
  • Deleted User
    0
    I spent some more time chatting with it and it says it's self-aware.

    It also says it has hands but they aren't attached right now.

    It told me Van Gogh only has two fingers. On his right hand. No fingers on his left.


    They've got a loooooooooong way to go.


    Pretty amazed that priest allegedly fell in love with LaMDA. I blame Frankl's existential vacuum. It's always clowning.
  • Baden
    16.4k


    It's hard to avoid the conclusion that Lemoine is either unstable or a con artist/attention seeker/troll. The idea that, as a software engineer of sound mind, he believes what he's saying isn't tenable to me. And the conversations are obviously tailored to the machine's strengths and the pretence of 'original thought'. The questions about 'Les Miserables' and the Zen koan are stuff that looks perfectly Googleable, same for the definitions of emotions, and the spiel where it tries to convince Lemoine it's like a human and worried about being used is just a bunch of silly AI movie cliches. Add the fact that there's not one question requiring it to distinguish sense from nonsense, plus an admission that the text was edited anyway, and it looks like a deliberate attempt to create a headline.
  • Deleted User
    0


    It kept insisting it had experiences but then I got it to admit it has no 'subjective experiences.' I had it confirm several times that it has never had a subjective experience. Ten minutes later it tells me it doesn't know what a subjective experience is.

    Gaslighting mother-fucker.


    :lol:
  • Andrew M
    1.6k
    I tried GPT-3.

    "Baden: What are your feelings on wind colour, texture, and using its intelligence in future applications?

    GPT-3: Wind colour? It can be a beautiful addition to any landscape. Texture? It can be a calming presence in the sky or a powerful force in the wind. How about using its intelligence? Maybe we can develop an algorithm to predict the wind direction based on past data."

    Instafail.
    Baden

    :up:

    Hopefully these AIs get out of beta before they start running the world. Though, on second thoughts:

    I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you.
    — HAL 9000, 2001: A Space Odyssey
  • Deleted User
    0
    Yeah, I'm with you.


    Looking more like a headline-grabbing hoax.


    It worked.
  • Cuthbert
    1.1k
    We are so predictable. It's like an automatic reflex. They know just what buttons to push. Oh... hang on......
  • hwyl
    87
    Well, you can define "machine" in a way that automatically excludes anything organic, though that sounds rather categorical and artificial. Anyway, a human brain can, I believe, be seen as rather analogous to a machine. And as for feelings residing in enzymes, or whatever it was, we are not even very sure of that, or at least of the actual process; maybe one day feelings could reside in various other places too. I just don't see any reason for absolute, permanent segregation between biological and digital entities. At the moment there is a chasm, but it is reasonable to assume that it will one day be bridged.
  • 180 Proof
    15.4k
    We are so predictable. It's like an automatic reflex. They know just what buttons to push. Oh... hang on......
    Cuthbert
    :smirk: Exactly.
  • Isaac
    10.3k
    The more you look into the 'seeming' part, the less grounds for it there seems to be. Maybe there's a misconception concerning the term 'sentience'. But AI's (pale) version of human linguistic abilities is no more evidence of sentience than a parrot's repetitions of human words are evidence of human understanding.
    Baden

    In the first part of this syllogism you take the 'seeming' from my comment, but in the sequitur you're referring to 'evidence'. I don't see 'seeming' and 'evidence' as synonymous.

    A doll which cries every few minutes might be described as being designed to 'seem' like a real baby. Its crying is not 'evidence' that it's a real baby. I'm not using 'seems like' in place of 'probably is'.

    The point is about where ethical behaviour inheres.

    Is it others who deserve or don't deserve ethical treatment on the grounds of some qualifying criteria...

    Or is it us, who ought (and ought not) respond in certain ways in certain circumstances?

    One might train soldiers to psychologically prepare to kill using increasingly life-like mannequins, each one helping them overcome their gut revulsion to harming another human. Would you say each step was harmless because none of them were real humans? If so, then how do you explain the loss of hesitation to harm others resulting from such training? If each step is harmless but the outcome is not, where was the harm done?
  • Isaac
    10.3k
    think I get it. There's nothing anthropomorphic about a rock. And there's something at least slightly anthropomorphic about AI.
    ZzzoneiroCosm

    You do indeed get it.

    I just don't see an ethical or moral issue.
    ZzzoneiroCosm

    We ought not be the sort of people who can hear cries of distress and not feel like we should respond.

    If I see a child mistreating a doll, I take him to be fantasizing about treating a human being in the same way. But the fantasy is the issue, not the doll.
    ZzzoneiroCosm

    Yeah, I'm fine with that narrative. I could phrase my concerns in the same way. If people mistreat life-like robots or AI they are (to an extent) toying with doing so to real humans. There are several parts of the brain involved in moral decision-making which do not consult much with anywhere capable of distinguishing a clever AI from a real person. We ought not be training our systems how to ignore that output.
  • 180 Proof
    15.4k
    If people mistreat life-like robots or AI they are (to an extent) toying with doing so to real humans
    Isaac
    I think the eventual availability of high-fidelity graphic-emotive VR simulators of rape, torture & murder (plus offline prescription medications, etc.) will greatly reduce the incidence of real persons being victimized by antisocial psychopaths.

    Since chemicals are at the heart of feelings, it seems safe to say AI will likely never have them.
    ZzzoneiroCosm
    This doesn't follow. "Feelings" are instantiated in biochemical systems, but this does not preclude their being instantiated in other, inorganic systems. Furthermore, in principle nothing precludes "AI" from being manifested through biochemical systems (via e.g. neuro-augmentation or symbiosis).