Note that language itself is the very Being in question. — Constance
I do not believe in the existence of objective categories; this includes moral or aesthetic values. — Ourora Aureis
A reaction to this would be ethical egoism, the ethical framework I follow. It declares that we ought to act according to our values, not the value judgements of others. In this way it seems similar to the idea of personal morality you hold. — Ourora Aureis
My understanding of morals doesn’t really fit in with those generally discussed here. — T Clark
The problem of identity is a real problem, but if we admit this problem into the equation, then there may be no “me” who could fail to prevent suffering either. — Fire Ologist
Because you cannot particularize this prevention of suffering in a particular “you” who doesn’t suffer, AN is acting ethically towards no one, no one who ever exists. — Fire Ologist
That life, or the creation of new life, is intrinsically a negative, regardless of any change to, or possible elimination of, what is currently held in the antinatalist mindset as "suffering" or "negative" — whether that conviction rests on the likelihood that even, say, a perfect utopia would naturally revert to a negative state, or on some other generally non-evidential belief. — Outlander
Life is way more than suffering. Maybe only human beings can recognize this. Why kill ourselves off because of a little suffering? — Fire Ologist
And I think I’ve said my piece. Antinatalism seems unnecessary if it is based simply on suffering, seems anti-ethical in that it puts ethics above ethical people, and simply ignores the joy in life. — Fire Ologist
One may experience something so alien to common sense and so deeply profound that it requires metaphysics to give an account of it. But the claim here is that the world as it is, in all its mundanity, itself possesses the basis for religious possibility: that in the common lies the uncommon metaethical foundation for ethics and religion. — Constance
Here, I want to show that this other world really is this one. — Constance
So here is a question that lies at the center of the idea of the OP: what if ethics were apodictic, like logic? This is what you could call an a priori question, looking into the essence of what is there in the world and determining what must be the case given what is the case. Logic reveals apodicticity, an emphatic or unyielding nature that is entirely intellectually coercive. I claim that ethics has this at its core. — Constance
Of course, this is right. It ALWAYS depends on the flexibility of the words we are using. When you start the car in the morning, are you "thinking" about starting the car, or is it just rote action? But you certainly CAN think about it. I think when a person enters an environment of familiarity, like a classroom or someone's kitchen, there is, implicit in all one sees, the discursive possibility that lies "at the ready," as when one asks me suddenly, doesn't that chef's knife look like what you have at home? I see it, and language is there, "ready to hand". For us, not cows and goats, but for us, there is language everywhere and in everything. — Constance
Why do you assume there is any relation between "sentience" and "morality"? — 180 Proof
Well, the latter (re: pragmatics) afaik is a subset of the former (re: semantics). — 180 Proof
to learn how to develop its own "objectives" and comply with those operational goals in order to function at or above the level of human metacognitive performance (e.g. normative eusociality³). — 180 Proof
We are (e.g. as I have proposed ↪180 Proof), and I expect AGI will learn from our least maladaptive attempts to "say what is and is not moral"³. — 180 Proof
More approaches come from explicitly combining, in various ways, two or three of the approaches you've mentioned. In my case, 'becoming a better person' is cultivated by 'acting in ways which prevent or reduce adverse consequences' to oneself and others (i.e. 'virtues' as positive feedback loops of 'negative utilitarian / consequentialist' practices). None of the basic approaches to ethics seems to do all the work which each respectively sets out to do, which is why (inspired by D. Parfit) I think they can be conceived of in combinations which compensate for each other's limitations. — 180 Proof
I can't think of any reason why AGI would ignore, or fail to comply with, eusocializing norms (i.e. morals) whether, in fact, we consider them "truths" or "fictions". — 180 Proof
This clarification is very helpful. AGI can independently use its algorithms to teach itself routines not programmed into it? — ucarr
At the risk of simplification, I take your meaning here to be concern about a powerful computing machine that possesses none of the restraints of a moral compass. — ucarr
AI doesn’t know why it is important to get to the finish line, what it means to do so in relation to overarching goals that themselves are changed by reaching the finish line, and how reaching the goal means different things to different people. — Joshs
Computation is not thought. — Joshs
Even though there are many things we don’t understand about how other organisms function, we don’t seem to have any problem getting along with other animals, and they are vastly more capable than any AGI. — Joshs
Yes – preventing and reducing² agent-dysfunction (i.e. modalities of suffering (disvalue)¹ from incapacity to destruction) facilitated by 'nonzero sum – win-win – resolutions of conflicts' between humans, between humans & machines and/or between machines.
¹moral fact
²moral truth (i.e. the moral fact of (any) disvalue functions as the reason for judgment and action / inaction that prevents or reduces (any) disvalue) — 180 Proof
That sounds kind of horrific. — ToothyMaw
I'm pretty certain AGI, or strong AI, does indeed refer to sentient intelligences, but I'll just go with your definition. — ToothyMaw
Making AI answerable to whatever moral facts we can compel it to discover doesn't resolve the threat to humanity, however; rather, it complicates it.
Like I said: what if the only discoverable moral facts are so horrible that we have no desire to follow them? What if following them would mean humanity's destruction? — ToothyMaw
Also: Asimov's 'Three Laws of Robotics' were deficient, and he pointed out the numerous contradictions and problems in his own writings. So, it seems to me we need something much better than that. I would have no idea where to start apart from what I have written above, which is definitely not sufficient. — ToothyMaw
If you are suggesting we root our ethical foundations for AGI in moral facts: even if we or the intelligences we might create could discover some moral facts, what would compel any superintelligences to abide by those facts, given that they have already surpassed us analytically? What might an AGI see when it peers into the moral fabric of the universe, and how might that change its (or others') behavior? And what if we do discover these moral facts and they are so repugnant or detrimental to humanity that we wish not to abide by them ourselves? — ToothyMaw