ENOAH
Questioner
It has nothing to do with the internal so-called nature of the artificial being, and everything to do with the human mind’s conditioning, and how it triggers our bodies to feel.
We can see the seeds of this (emerging recognition) in our inclination to thank even current, presumably primitive, AI when it delivers an excellent answer.
Soon enough a generation will be born with the necessary programming to recognize AI sentience, even to guard it/guard against it, input at a very early age, around the same time they are being conditioned to "recognize" a distinctly sentient subject operating their own bodies (and in the same entirely constructed/conditioned way). — ENOAH
Questioner
acts and responds as if it is alive — MrLiminal
Pantagruel
Questioner
What are humans if not biological computers that suck at giving answers? — MrLiminal
Questioner
feedback loop as an agent with an environment. — Pantagruel
Questioner
human brain is contextual — Questioner
Richard B
ENOAH
Human reaction proves humans are sentient, not the AI. — Questioner
ENOAH
If it acts and responds as if it is alive, should we as moral actors not operate as if it is? — MrLiminal
ENOAH
Agreed. And I think that our conclusions following said reflection are mistaken. Personal sentience/agency is an illusion, effective/functional (in determining our actions etc.) only because it is believed (an efficient fiction etc.). My proposal is that it is the same with AI. Not sentience once they achieve certain criteria. But sentience once we believe them to have sentience. The criteria may only be the means by which we come to believe. It is a sort of historical reflection about how we characterize what it is to be human. — Richard B
Pantagruel
Questioner
It seems odd not to even consider extending that courtesy to other systems. — MrLiminal
ucarr
ENOAH
Pantagruel
I like sushi
ENOAH
subjectivism — Pantagruel
ENOAH
Even today people argue about the sentience of their pets. — I like sushi
Pantagruel
It might sound that way, but I'm not sure that's what I'm intending. Subjectivity is part of the system which, for humans, is a "trap" that forms such conclusions as "I" am an agent who wills things/AI is an agent who wills things. The "trap" is, very simply, the data input into human minds, conditioning our behavior (including thoughts) by a process which includes repetition, in various ways, including what we believe.
It is not that only Subjective beliefs are true. They are not. It is that we are trapped by [that process leading us to adopt] subjective beliefs.
Therefore, for us humans, uniquely subjected by history to this process, AI (like "I") will become sentient. But not because within some universal system of truths they are objectively so--not even we are--and not because they have naturally or cosmically crossed that threshold into becoming subjective, but because we will believe they are Subjects, free-willing agents like us, and like we believe we are. — ENOAH