“Imagine asking a robot to pass you a screwdriver in a workshop. Based on current conventions the best way for a robot to pick up the tool is by the handle. Unfortunately, that could mean that a hugely powerful machine then thrusts a potentially lethal blade towards you, at speed. Instead, the robot needs to know what the end goal is, i.e., to pass the screwdriver safely to its human colleague, in order to rethink its actions.” — ToothyMaw
Then it is so for mutual benefit; to elaborate the machine of man by man, NOT for man, will be self-seeking in its awareness of the task. — Deus
I honestly don't see why a robot as intelligent as a human would necessarily exist in opposition to human goals merely for its intelligence, autonomy, or ability to accomplish tasks according to more general rules. — ToothyMaw
What if the (entirely benevolent) robot decides there are better goals? The Asimov laws are hardly ideal, and quickly lead to internal conflict. Human goals tend to center on the self, not on, say, humanity. The robot might decide humanity was a higher goal (as was done via a 0th law in Asimov's Foundation series). Would you want to live with robots with a 0th law?
I've been known to repeatedly suggest how humans are very much a slave to their biology, and also that this isn't always a bad thing, depending on the metric by which 'bad' is measured. A robot is no less a slave to its programming than we are slaves to our biology, I think.
A robot is no less a slave to its programming than we are slaves to our biology, I think. — noAxioms
I've been known to repeatedly suggest how humans are very much a slave to their biology, and also that this isn't always a bad thing, depending on the metric by which 'bad' is measured. — noAxioms
What if the (entirely benevolent) robot decides there are better goals? — noAxioms
Human goals tend to center on the self, not on, say, humanity. The robot might decide humanity was a higher goal — noAxioms
It could be programmed to consult humans before changing its goals, but that is kind of a cop-out; that could be discarded in a pinch if a quick decision is needed. — ToothyMaw
I'm thinking more of big, long-term decisions, not knee-jerk decisions like pulling somebody out of danger. Consulting the humans is probably the worst thing to do since the humans in such situations are not known for acting on the higher goals.
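[As an aside, a minimal sketch of that "consult the humans before changing goals, unless a quick decision is needed" rule might look like the following. Everything here is hypothetical illustration, not something proposed in the thread: the urgency threshold, the consult_humans stub, and the maybe_change_goal routine are made-up names.]

URGENCY_THRESHOLD = 0.9  # hypothetical cutoff above which the robot skips consultation

def consult_humans(proposed_goal: str) -> bool:
    # Stub: a real system would ask an operator and wait for an answer.
    answer = input(f"Approve goal change to '{proposed_goal}'? [y/n] ")
    return answer.strip().lower() == "y"

def maybe_change_goal(current_goal: str, proposed_goal: str, urgency: float) -> str:
    # The human veto holds except "in a pinch": high urgency discards the check,
    # which is exactly the cop-out the quoted post worries about.
    if urgency >= URGENCY_THRESHOLD:
        return proposed_goal
    if consult_humans(proposed_goal):
        return proposed_goal
    return current_goal

[The objection in the reply above is that the big, long-term decisions are precisely the ones where such a consultation step either gets bypassed or gets answered badly.]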
autonomous robots — ToothyMaw
What's your idea of 'robot'? An imitation human? Does it in any way attempt to imitate us like they do in Blade Runner (or to a lesser extent, in the Asimov universe)? My robots are like the ones I already see, like self-driving cars and such. What does the human do if his car refuses to take him to the office because the weather conditions are bad enough that it considers the task to be putting him in unreasonable danger? The guy gets fired for not being there, and/or he gets a car that doesn't override him and then he ends up in the hospital due to not being as good a driver as the robot. Now just scale that story up from an individual to far larger groups, something at which humans do not excel at all.
The greater good always wins out for me (I just hope I would have the courage to jump in front of the trolley if the time comes). — ToothyMaw
OK, but by what metric is 'the greater good' measured? I can think of several higher than 'max comfort for me', and most of them conflict with each other. But that's also the relativist in me. Absent a universal morality, it is still incredibly hard for an entirely benevolent entity to choose a path.
it is still incredibly hard for an entirely benevolent entity to choose a path. — noAxioms
What gets me stoked is this: the skill set the OP wishes robots to have may require computing power & programming complexity sufficient to make such robots sentient (re unintended consequences). — Agent Smith
The robots would refuse to comply if they're anything like us. — Agent Smith
there's a sophos who can predict the future accurately. — Agent Smith
Most humans are largely benevolent — ToothyMaw
Perhaps, but then they're also incredibly stupid, driven by short term goals seemingly designed for rapid demise of the species. So maybe the robots could do better.
Is society collapsing because we have somewhat benevolent entities, and some that are not at all benevolent, with the ability to destroy the human race fixated on waging a cold war with each other? — ToothyMaw
Take away all the wars (everything since say WW2) and society would arguably have collapsed already. Wars serve a purpose where actual long-term benevolent efforts are not even suggested.
we avoid absolute disaster because we are rational enough to realize that we all have skin in the game. — ToothyMaw
Disagree heavily. At best we've thus far avoided absolute disaster simply by raising the stakes. The strategy cannot last indefinitely.
I don't see why intelligent, autonomous robots wouldn't accept such a fact and coexist, or just execute their functions, alongside humans with little complaint because of this. — ToothyMaw
Maybe they get smarter than the humans and want to do better. I've honestly not seen it yet. The best AI I've seen (a contender for the Turing test) attempts to be like us, making all the same mistakes. A truly benevolent AI, smarter than any of us, would probably not pass the Turing test. Wrong goal.
may require computing power & programming complexity sufficient to make such robots sentient — Agent Smith
Pretty sure we're already at this point, unless you're working with a supernatural sentience definition.
Pretty sure we're already at this point, unless you're working with a supernatural sentience definition, — noAxioms
Perhaps, but then they're also incredibly stupid, driven by short term goals seemingly designed for rapid demise of the species. So maybe the robots could do better. — noAxioms
Take away all the wars (everything since say WW2) and society would arguably have collapsed already. Wars serve a purpose where actual long-term benevolent efforts are not even suggested. — noAxioms
Disagree heavily. At best we've thus far avoided absolute disaster simply by raising the stakes. The strategy cannot last indefinitely. — noAxioms
Maybe they get smarter than the humans and want to do better. I've honestly not seen it yet. The best AI I've seen (a contender for the Turing test) attempts to be like us, making all the same mistakes. A truly benevolent AI, smarter than any of us, would probably not pass the Turing test. Wrong goal. — noAxioms
I'm certain robots could do better, especially given we could mold them into just about anything we want, whether or not doing so is ethical. — ToothyMaw
Human ethics are based on human stupidity. I'd not let 'anything the humans want' to be part of its programming. Dangerous enough to just make it generic 'benevolent' and leave it up to the AI to determine what that means. If the AI does its job well, it will most certainly be seen as acting unethically by the humans. That's the whole point of not leaving the humans in charge.
DARPA actually is investigating Targeted Neuroplasticity Training for teaching marksmanship and such things. — ToothyMaw
That perhaps can improve skills. Can it fix stupid? I doubt the military has more benevolent goals than our hypothetical AI.
Can you back this up at all? — ToothyMaw
I said arguably, so I can only argue. I admit that most wars since have been political and have not really accomplished the kinds of effects I'm talking about. Population reduction by war seems not to have occurred much since WW2. Technology has been driven at an unnatural pace due to the cold war, and higher technology is much of what has driven us to our current predicament.
DARPA actually is investigating Targeted Neuroplasticity Training for teaching marksmanship and such things.
That perhaps can improve skills. Can it fix stupid? I doubt the military has more benevolent goals than our hypothetical AI. — noAxioms
Human ethics are based on human stupidity. I’d not let ‘anything the humans want’ to be part of its programming. — noAxioms
What do you consider to be acceptable ethics and/or meta-ethics? — ToothyMaw
Can't answer that since it seems to be dependent on a selected goal. Being human, I'm apparently too stupid to select a better goal. I'm intelligent enough to know that I should not be setting the goal.
Maybe the benevolent AI could come up with some good stuff after being created? — ToothyMaw
Right. But we'll not like it because it will contradict the ethics that come from our short-sighted human goals.
Can't answer that since it seems to be dependent on a selected goal. Being human, I'm apparently too stupid to select a better goal. I'm intelligent enough to know that I should not be setting the goal. — noAxioms
But I can think of at least three higher goals, each of which has a very different code of what's 'right'. — noAxioms
Right. But we'll not like it because it will contradict the ethics that come from our short-sighted human goals. — noAxioms
What are those goals? — ToothyMaw
The preservation of the human race
Bodily autonomy? The maximization of fulfillment of preferences? — ToothyMaw
These two already seem to be supported by some humans. I suspect they're both in conflict with almost any of the goals listed above. 'Future of human race' seems more in line with the beginnings of my list.