Ethics: The Potential Advent of AGI

I see no massive issue with equating them in this context (it is not really relevant, as I am talking about something that is essentially capable of neither). I do not confuse them though
;)
to learn how to develop its own "objectives" and comply with those operational goals in order to function at or above the level of human metacognitive performance (e.g. normative eusociality³). — 180 Proof
and
We are (e.g. as I have proposed ↪180 Proof), and I expect AGI will learn from our least maladaptive attempts to "say what is and is not moral"³. — 180 Proof
These two points roughly outline my concerns - or rather, the unknown space between them. If we are talking about a capacity above and beyond human comprehension, then such a system may see that it is valid to extend morals and create its own to work by - all in a non-sentient state.
If it can create its own objectives then it can possibly supplant previously set parameters (the morality we dictated) by extending them in some seemingly subtle way that effectively bypasses them. How and why, I have no idea - but that is the point of something working far, far beyond our capacity to understand: we will not understand it.
As for this:
More approaches come from explicitly combining two or three of the approaches which you've mentioned in various ways. In my case, 'becoming a better person' is cultivated by 'acting in ways which prevent or reduce adverse consequences' to oneself and others (i.e. 'virtues' as positive feedback loops of 'negative utilitarian / consequentialist' practices). None of the basic approaches to ethics seems to do all the work which each respectively sets out to do, which is why (inspired by D. Parfit) I think they can be conceived of in combinations which compensate for each other's limitations. — 180 Proof
This is likely the best kind of thing we can come up with. I was more or less referring to Moral Realism, not Moral Naturalism, in what I said.
As we have seen throughout the history of human cultures, and in cultures present today, there is most certainly a degree of moral relativism. Herein lies the obvious problem of teaching AGI what is or is not right/good when, in a decade or two, we may well think our current thoughts on Morality are actually wrong/bad. If AGI is not sentient and sentience is required for Morality, then surely you can see the conundrum here? If Morality does not require sentience, then Moral Realism is correct, which would lead to the further problem of extracting what is correct (the Real Morality) from fallacies that pose as universal truths.
I grant that there are a hell of a lot of IFs and BUTs involved in this field of speculation, but nevertheless the extraordinary - and partly unintelligible - potential existential threat posed by the advent of AGI warrants some serious attention.
Currently we have widely differing estimations of when AGI will become a reality: some say in 5 years, whilst others say in 75 years. What I do know is that this estimate has dropped quite dramatically, from the seemingly impossible to a realistic proposition.