Perhaps, but then they're also incredibly stupid, driven by short term goals seemingly designed for rapid demise of the species. So maybe the robots could do better. — noAxioms
I'm certain robots could do better, especially since we could mold them into just about anything we want, whether or not doing so is ethical. Molding a human into even something as seemingly mundane as an infantryman is orders of magnitude harder than simply creating a robot capable of executing those functions once the programming is complete, and the robot would carry little or no trauma from seeing combat.
Speaking of which, the process of turning people into soldiers is so imperfect and slow that DARPA is actually investigating Targeted Neuroplasticity Training for teaching skills like marksmanship. And while the military always gets cutting-edge tech first, that kind of training could eventually offset our soon-to-be dependence on robots across many professions.
In fact, if we were all walking around big-brained with that extra plasticity, we could excel at the jobs that remain once robots take over the mundane work in the near future.
Sorry for getting a little off course there.
Take away all the wars (everything since say WW2) and society would arguably have collapsed already. Wars serve a purpose where actual long-term benevolent efforts are not even suggested. — noAxioms
Can you back this up at all? Not necessarily with studies or anything, but the Vietnam War, for example, didn't even accomplish its stated goal: it failed to prevent the spread of communism, which was the whole point of fighting it.
Disagree heavily. At best we've thus far avoided absolute disaster simply by raising the stakes. The strategy cannot last indefinitely. — noAxioms
I think you are right, at least partially. But when we talk about Biden and Putin, we are talking about rational people, or at least largely rational ones. Putin is, of course, a despicable war criminal, but I don't think he wants to see the demise of his country and everyone in it. And so long as he doesn't press the button, he is tacitly acknowledging that he has some idea, however twisted, of what he wants for humanity.
Maybe they get smarter than the humans and want to do better. I've honestly not seen it yet. The best AI I've seen (a contender for the Turing test) attempts to be like us, making all the same mistakes. A truly benevolent AI, smarter than any of us, would probably not pass the Turing test. Wrong goal. — noAxioms
Agreed. A truly benevolent AI need not be indistinguishable from a human.