• I like sushi
    I see no massive issue with equating them in this context (it is not really relevant as I am talking about something that is essentially capable of neither). I do not confuse them though ;)

    to learn how to develop its own "objectives" and comply with those operational goals in order to function at or above the level of human metacognitive performance (e.g. normative eusociality³). (180 Proof)

    and

    We are (e.g. as I have proposed ↪180 Proof), and I expect AGI will learn from our least maladaptive attempts to "say what is and is not moral"³. (180 Proof)

    These two points roughly outline my concerns - or rather, the unknown space between them. If we are talking about a capacity above and beyond human comprehension then such a system may see that it is valid to extend morals and create its own to work by - all in a non-sentient state.

    If it can create its own objectives then it can possibly supplant previously set parameters (the morality we dictated) by extending them in some seemingly subtle way that effectively bypasses them. How and why I have no idea - but that is the point of something working far, far beyond our capacity to understand - because we will not understand it.

    As for this:

    More approaches come from explicitly combining two or three of the approaches which you've mentioned in various ways. In my case, 'becoming a better person' is cultivated by 'acting in ways which prevent or reduce adverse consequences' to oneself and others (i.e. 'virtues' as positive feedback loops of 'negative utilitarian / consequentialist' practices). None of the basic approaches to ethics seems to do all the work which each respectively sets out to do, which is why (inspired by D. Parfit) I think they can be conceived of in combinations which compensate for each other's limitations. (180 Proof)

    This is likely the best kind of thing we can come up with. I was more or less referring to Moral Realism, not Moral Naturalism, in what I said.

    As we have seen throughout the history of human cultures, and in cultures present today, there is most certainly a degree of moral relativism. Herein lies the obvious problem of teaching AGI what is or is not right/good when, in a decade or two, we may well think our current thoughts on Morality are actually wrong/bad. If AGI is not sentient and sentience is required for Morality then surely you can see the conundrum here? If Morality does not require sentience then Moral Realism is correct, which would lead to the further problem of extracting what is correct (the Real Morality) from fallacies that pose as universal truths.

    I grant that there are a hell of a lot of IFs and BUTs involved in this field of speculation, but nevertheless the extraordinary - and partly unintelligible - potential existential threat posed by the advent of AGI warrants some serious attention.

    Currently we have widely differing estimates of when AGI will become a reality. Some say 5 years, whilst others say 75 years. What I do know is that this estimate has dropped quite dramatically, from the seemingly impossible to a realistic proposition.
  • 180 Proof
    If AGI is not sentient and sentience is required for Morality then surely you can see the conundrum here? If Morality does not require sentience then Moral Realism is correct ... (I like sushi)
    Why do you assume there is any relation between "sentience" and "morality"?

    Does (e.g.) computing, protein folding, translating or regulating homeostasis "require sentience"? If not, then why does "morality"? If they do, however, then what are non-sentient machines doing when they perform such rules-bound (i.e. normative) functions?

    I was more or less referring to Moral Realism, not Moral Naturalism, in what I said. (I like sushi)
    Well, the latter (re: pragmatics) afaik is a subset of the former (re: semantics).

    https://plato.stanford.edu/entries/naturalism-moral/
  • I like sushi
    Why do you assume there is any relation between "sentience" and "morality"? (180 Proof)

    I do not. This was a speculative statement. I did state that sentience is not massively important to what I was focusing on here.

    Well, the latter (re: pragmatics) afaik is a subset of the former (re: semantics). (180 Proof)

    My mistake. What I meant was Morality in terms of being empirically validated - or something along those lines. I forget which family of Morality it is in philosophical jargon. Moral Absolutism, I believe? Hopefully you can appreciate the confusion :D