• Agent Smith
    9.5k
    Asimov's 3 Laws of Robotics

    First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    An AI (artificial intelligence) that passes the Turing test is, for all intents and purposes, indistinguishable from a human.

    Asimov's 3rd Law fails for a simple reason: implicit in it is the provision that robots can protect themselves against other robots. The catch is that robots won't be able to tell the difference between robots (AI) and humans, since robots/AI pass the Turing test.
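
    To make the objection concrete, here is a rough sketch (mine, in Python; the action fields and the is_human predicate are assumptions for illustration, nothing from Asimov) of the 3 laws as a priority-ordered filter. Notice that every clause bottoms out in a call to is_human, which is exactly the predicate the Turing-test premise takes away:

        # Hypothetical sketch: the 3 laws as short-circuiting, priority-ordered vetoes.
        def lawful(action, is_human):
            # First Law: the action must not injure any human.
            if any(is_human(victim) for victim in action["injures"]):
                return False
            # Second Law: the action must not disobey a human's order
            # (orders whose obedience would break the First Law are assumed
            # to have been filtered out already).
            refused = action.get("disobeys")  # agent whose order is being refused, if any
            if refused is not None and is_human(refused):
                return False
            # Third Law: avoid self-destruction unless a higher law demands it.
            if action.get("self_destructive") and not action.get("required_by_higher_law"):
                return False
            return True

        def is_human(agent):
            # The thread's premise: every agent passes the Turing test,
            # so no behavioral test can implement this predicate.
            raise NotImplementedError("indistinguishable from a human by hypothesis")

    With that stub, lawful blows up as soon as any clause needs a classification: a robot that cannot classify agents cannot even evaluate its own laws.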
  • john27
    693
    Asimov's 3rd Law fails for a simple reason: implicit in it is the provision that robots can protect themselves against other robots. The catch is that robots won't be able to tell the difference between robots (AI) and humans, since robots/AI pass the Turing test.
    Agent Smith

    An AI (artificial intelligence) that passes the Turing test is, for all intents and purposes, indistinguishable from a human.
    Agent Smith


    If a robot isn't distinguishable from a human, would it still be subject to Asimov's three laws of robotics?
  • Agent Smith
    9.5k
    If a robot isn't distinguishable from a human, would it still be subject to Asimov's three laws of robotics?
    john27

    Apparently not.
  • john27
    693
    Apparently not.
    Agent Smith

    Then perhaps we could say that robots who have passed the Turing exam are now exempt from Asimov's laws of robotics? As in, reserve the rules for those who do act robotically.
  • Agent Smith
    9.5k
    Then perhaps we could say that robots who have passed the Turing exam are now exempt from Asimov's laws of robotics? As in, reserve the rules for those who do act robotically.
    john27

    Yes.
  • john27
    693


    Then... I don't think Asimov's rules fail. They just apply to those to whom they're applicable.
  • Agent Smith
    9.5k
    They just apply to those to whom they're applicable.
    john27

    There are none to whom Asimov's laws apply. That's the point.
  • john27
    693


    Wouldn't they just apply to robots who haven't passed the Turing exam?
  • Agent Smith
    9.5k
    Wouldn't they just apply to robots who haven't passed the Turing exam?
    john27

    ALL robots pass the Turing test.
  • john27
    693
    ALL robots pass the Turing test.
    Agent Smith

    Really?! Wow, I didn't know that. Dang.
  • Agent Smith
    9.5k
    Really?! Wow, I didn't know that. Dang.
    john27

    Well, now you know.
  • Agent Smith
    9.5k
    A robot to which the 3 laws of robotics apply is one that is sentient enough for the 3 laws of robotics not to apply to it. What's a name for this kinda situation? Catch-22? To want to be declared insane is to prove that you're sane. :chin: There must be a better, fancier, more eloquent way to express this state of affairs. Are there real-world examples?

    A rule is meant for a certain category of people, but then people in that category are the exact kind of people the rule isn't meant for.

    :chin:
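
    If it helps to see the shape of it, here is the rule in rough first-order notation (my symbols, assumed just for this sketch: Bound(x) means the 3 laws bind x):

        \forall x\,(\mathrm{Robot}(x) \rightarrow \mathrm{Bound}(x))              % the laws bind all robots
        \forall x\,(\mathrm{Robot}(x) \rightarrow \mathrm{PassesTuring}(x))       % premise: ALL robots pass the test
        \forall x\,(\mathrm{PassesTuring}(x) \rightarrow \lnot\mathrm{Bound}(x))  % Turing-passers are exempt
        \therefore\ \lnot\exists x\,\mathrm{Robot}(x)                             % else Bound(x) \land \lnot Bound(x)

    The three premises are jointly satisfiable only if there are no robots at all: the rule's entry condition entails its own exemption, so its scope is empty.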
  • Raymond
    815
    An AI (artificial intelligence) that passes the Turing test is, for all intents and purposes, indistinguishable from a human.
    Agent Smith

    If that's so, how does the robot tell the difference between people ("human beings", "conscious rational agents", "neutral observers") and robots? Can he prevent a "logical, conscious, rational agent" from killing a robot? What if a robot turns against him?

    Or is this what you're asking? The root of the robot identity crisis...
  • Agent Smith
    9.5k
    While I am interested in knowing more about Asimov's 3 laws and how they stand up to deeper analysis, I'd also like to know a short phrase/name for situations like the one described in my last post, you know, like the Dunning-Kruger effect, the drunkard's search, rubber duck debugging, and so on.
  • Raymond
    815


    Breaking the law.
  • Agent Smith
    9.5k
    Breaking the law.
    Raymond

    No, that's not it.
  • Agent Smith
    9.5k
    A robot will intend to follow Asimov's laws, but then that means it doesn't have to follow Asimov's laws.

    Kavka's toxin puzzle: the prize is paid merely for intending (tonight) to drink the toxin (tomorrow), so once I've formed the intention and been paid, I no longer have any reason to actually drink it.

    Kavka-Asimov Paradox!
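
    The shared shape, in rough notation (mine, not Kavka's): the payoff attaches entirely to the intention at t_0, never to the act at t_1:

        \mathrm{payoff}\bigl(\mathrm{Intend}_{t_0}(\mathrm{Drink}_{t_1})\bigr) = \text{prize}
        \qquad
        \mathrm{payoff}\bigl(\mathrm{Drink}_{t_1}\bigr) = 0

    So at t_1 there is no reason left to drink; likewise, being the kind of agent the laws bind is what earns the exemption from them, so the binding never has to be honored.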
  • Real Gone Cat
    346
    Um...

    The 3 Laws of p-Zombies?
  • Raymond
    815

    First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    Agent Smith

    So the robot cannot harm, or by doing nothing cause harm. All his actions should be human-friendly. He cannot stab someone with a knife and must prevent a knife from being used to stab anyone. He should act as told as long as his actions don't harm people or prevent him from preventing harm. He must protect himself as long as he doesn't hurt people and the order is not to kill himself.

    When they are ordered to kill themselves, and killing themselves means people get killed, they cannot kill themselves. When protecting themselves from people trying to kill them, they should let themselves get killed. When protecting themselves from robots that are preventing harm to people, they should not protect themselves. When ordered to kill robots, they should obey only if the robots they kill are not involved in preventing harm.

    So neither the auto-destruct demand nor the kill command can be complied with if people get hurt. They can't protect themselves from the auto-destruct command if obeying it doesn't hurt people. All robots that are not involved in preventing pain can be destroyed like this. Robots that are involved in preventing pain cannot obey that order.

    When they don't know the difference between people and robots, the situation gets complicated. Should he kill himself if ordered? He must obey, and protecting himself is less important than obeying. But obeying is less important than preventing people from getting hurt. If he's a robot he should obey; if he's a person he may disobey. Looking in the mirror doesn't offer solace. "Who am I?" the robot asks. "Should I obey or should I disobey?"

    Asimov-Turing Pickalilly
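
    Raymond's walkthrough, rendered as a sketch (mine, in Python; the names and the is_human oracle are assumptions for illustration): a "destroy yourself" order resolved by walking the laws in priority order.

        # Hypothetical sketch of the self-destruct dilemma described above.
        def respond_to_self_destruct(order_giver, endangered_by_compliance, is_human):
            # First Law outranks everything: refuse if complying gets people hurt.
            if any(is_human(p) for p in endangered_by_compliance):
                return "refuse: self-destruction would harm humans"
            # Second Law outranks the Third: a human's order to self-destruct
            # must be obeyed, since self-preservation has the lowest priority.
            if is_human(order_giver):
                return "comply: obedience outranks self-preservation"
            # An order from a non-human carries no force; the Third Law takes over.
            return "refuse: only self-preservation is at stake"

    Every branch needs is_human. Strike that oracle (everyone passes the Turing test) and the robot is stuck before the first comparison, which is Raymond's "Who am I?" in code form.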
  • Agent Smith
    9.5k
    When they are ordered to kill themselves, and killing themselves means people get killed, they cannot kill themselves.
    Raymond

    :up: Interesting! Suicide is wrong: we, being robots/humans, are unable to harm humans/robots (Asimov's 1st law).

    auto-destruct
    Raymond

    Cellular Suicide (Apoptosis)