In moral philosophy, historically there was a desire to externalize ethical behavior to make it determined, like a law—even if just a law I give myself (with Kant). If you follow the law, you are good, even if you just try for something good. These frameworks want the rules to be clear, so that judgment can be certain — Antony Nickles
The fact that sometimes we are not certain what the rules will be, or how they apply, or what to do when there are none, is cause for most to view the situation as impossible. — Antony Nickles
Now I’m not an AI expert, but we can’t seem to create rules or goals because AI is too unpredictable (and we want rules to tell us what will be right). And there is also much comparison to humans. But these moral frameworks imagine something special about us because the fulcrum of their judgment is choice (did I follow the rule, or go against it?). So the discussion of whether AI is special like us is actually a projection of our desire for ethical clarity.
More modern descriptions of morality focus on responsibility. We may not know what to do, but I am nevertheless answerable after it is done (even without rules). So then what ethics regarding AI turns on is identity. — Antony Nickles
we are not just judging outcomes, but also checking ourselves (à la Kant), because it would be tied to me, whether already determined bad or yet to be justified. If, however, mythically put, god no longer sees us, we have no moral realm at all. — Antony Nickles
I don't quite agree that many moral philosophers would consider you moral for following just any self-imposed rule, if you are saying that. — ToothyMaw
Doesn't it matter though if the AI can choose between effecting a moral outcome or a less moral outcome like one of us? …shouldn't we treat it like a human, if we must follow through with holding AIs responsible? — ToothyMaw
we can just change the programming so that it chooses the moral outcome next time, right? Its identity is that which we create. — ToothyMaw
It seems to me that we are the ones who need to be put in check morally, not so much the AIs we create. That isn't to say we shouldn't program it to be moral, but rather that we should exercise caution for the sake of everyone's wellbeing. — ToothyMaw
I agree; my point is that, in the way morality works, tying the AI and its outcomes to who let it loose is the best way to put us “in check morally”—like a serial number on a gun which can tell us who shot someone. — Antony Nickles
only a human can regulate themselves based on how they might be judged in a novel situation — Antony Nickles
But the distinct, actual terror of AI is that our knowledge cannot get in front of it to curtail it, to predict outcomes, because it can create capabilities and goals for itself—it is not limited to what we program it to do. It’s not: build a rocket. It’s: design a better rocket. And it can adopt means we don’t anticipate and determine an end we do not control and could not foresee. — Antony Nickles
In moral philosophy, historically there was a desire to externalize ethical behavior to make it determined, like a law—even if just a law I give myself (with Kant). . .
Now I’m not an AI expert, but we can’t seem to create rules or goals because AI is too unpredictable (and we want rules to tell us what will be right). — Antony Nickles
That [AI] can only consider novel situations based on already established laws is no different from how a human operates. — ToothyMaw
I don't see anything preventing an AI from wanting to avoid the internal threats to its current existence that would come from acting poorly in the kind of situation you consider truly moral. — ToothyMaw
But if it is a truly moral situation, we do not know what to do and no one has more authority to say what is right, so we are without the (predetermined, certain) means to judge what “acting poorly” in this situation would be. But AI cannot hold itself up as an example in stepping forward into the unknown in the way a person can. Or run from such a moment; could we even say: cowardly? — Antony Nickles
I suspect we will probably have to wait for 'AGI' to decide for itself whether or not to self-impose moral norms and/or legal constraints and what kind of ethics and/or laws it may create for itself – superseding human ethics & legal theories? – if it decides it needs them in order to 'optimally function' within (or without) human civilization. — 180 Proof
if we have anonymity, we don’t have any incentive to check ourselves — Antony Nickles
determined, like a law—even if just a law I give myself (with Kant). If you follow the law, you are good, even if you just try for something good. — Antony Nickles
But then your argument seems reducible to putting safeguards in place so we can all sleep better at night. . . and relieving ourselves of any moral responsibility for the results of bad actors. — Arne
without being bound to your word, who knows what is going to come out of your mouth. — Antony Nickles
I don't believe that ethics is characterized by rule following — 013zen
If an AI ever feels something that we might characterize as an internal conflict regarding what makes the most sense to do in a difficult situation that will affect people's lives in a differing but meaningful manner, then perhaps I might consider it capable of moral agency. — 013zen
but I am the only one who can bind me to my word. If you bind me to my word, you still do not know what is going to come out of my mouth. — Arne
I suspect we will probably have to wait for 'AGI' to decide for itself whether or not to self-impose moral norms and/or legal constraints. . . — 180 Proof
My point is that the 'AGI', not humans, will decide whether or not to impose on itself and abide by (some theory of) moral norms, or codes of conduct. — 180 Proof
But AGI is limited to knowledge, and so, structurally, it can only decide and choose based on information already made explicit that it is told or learns. — Antony Nickles

This is incorrect even for today's neural networks' and LLMs' generative algorithms, which clearly exhibit creativity (i.e. creating new knowledge or new solutions to old problems, e.g. the neural network AlphaGo's 'unprecedented moves' in its domination of Go grandmaster Lee Sedol in 2016). 'Human-level intelligence' entails creativity, so there aren't any empirical grounds (yet?) to doubt that 'AGI' will be (at least) as creative (i.e. capable of imagining counterfactuals and making judgments which exceed its current knowledge) as its makers. It will be able to learn whatever we can learn, and that, among all else, includes (if, for its own reasons, it chooses to learn) how to be a moral agent. — 180 Proof