If people mistreat life-like robots or AI they are (to an extent) toying with doing so to real humans. There are several parts of the brain involved in moral decision-making which do not consult much with anywhere capable of distinguishing a clever AI from a real person. We ought not be training our systems how to ignore that output. — Isaac
What this discussion shows is that as soon as an observable criterion for consciousness is set out, a clever programmer will be able to "simulate" it.
It follows that no observable criterion will ever be sufficient.
But of course "phenomenal experience" can only be observed by the observer, and so cannot serve as a criterion for attributing consciousness.
So this line of thought does not get anywhere.
Whether some piece of software is conscious is not a technical question. — Banno
These two go along nicely together, and they also stimulate some of my thinking on underlying issues in the relationship between knowledge and ethics (which is super cool! But I'm going to stay on topic).
I agree that, at bottom, there is no scientific matter at stake. A trained producer of scientific knowledge wouldn't be able to run a procedure, interpret it, and issue a reasonable verdict on every being in some kind of Bureau of Moral Inspection as to whether or not we will treat this one as a moral being.
In fact, while comical to think about at a distance, it would in truth be horrific to delegate moral judgment to a bureaucratic establishment dedicated to producing knowledge, one issuing certificates of analysis attesting that each robot, alien, or person qualifies. Not even in an exaggerated sense: just imagine a Brave New World scenario where, instead of a state-run science of procreation instituting natural hierarchies to create order, you'd have a state scientific bureau determining what those natural hierarchies already are.
Functionally speaking, not much different.
Also, we are naturally hearing about this for a reason: the news is literature! And Google wants to make sure it still looks good in the eyes of the public in spite of firing this guy, especially because the public will be more credulous when it comes to A.I. being sentient.
That's another reason to be hesitant to agree immediately. After all, what about the time the guy is right? Will the Alphabet corporation have our moral worth at the heart of its thinking when it wants to keep a sentient A.I., precisely because it's more useful to own something sentient?
No, I'd say it's far more sensible to err on the side of caution, because of who we will become if we do not.