How is conscious mind essentially different to AI on a strictly operational level? How would you go about programming such a thing? — enqramot
I am inclined to think that consciousness is a natural result of complexity. If that's the case, an exact emulation may have to be conscious too. — Down The Rabbit Hole
Computers don't need to be conscious. I don't see why people make a big deal out of consciousness. — Jackson
But the nature of consciousness has eluded science for such a long time that it's impossible not to see it as a huge challenge. — enqramot
Much to debate here, and worthwhile. My short answer is that I think people make consciousness into a fetish. The question is about intelligence and processing information and making new things. — Jackson
AI is here and getting more complex. Again, I don't see the importance of self awareness. — Jackson
I am inclined to think that consciousness is a natural result of complexity. If that's the case, an exact emulation may have to be conscious too. — Down The Rabbit Hole
I heard this theory, but I must admit it doesn't really make any sense to me, tbh. I just can't see how increasing complexity can lead to anything other than just more complexity. — enqramot
It is hard to believe, but this theory must be judged in comparison to the other theories of consciousness.
What theories of consciousness are more plausible? — Down The Rabbit Hole
I honestly don't believe there are any credible theories in existence explaining the phenomenon of consciousness. The science is completely in the dark in this area as far as I'm aware (correct me if I'm wrong). — enqramot
The trouble is, how do you prove the subject has experiences? I think it likely we will never be able to do a test to tell us what consciousness is. — Down The Rabbit Hole
Your question hinges on your philosophical or technical definition of "Consciousness". Literally, the "-ness" suffix implies that the reference is to a general State or felt Quality (of sentience), not to a specific Thing or definite Quanta (e.g. neurons). In Nature, animated behavior (e.g. seek food, or avoid being food) is presumed to be a sign of minimal sentience, and self-awareness.
AI programs today are able to crudely mimic sophisticated human behaviors, and the common expectation is that the animation & expressions of man-made robots will eventually be indistinguishable from their nature-made makers -- on an "operational level". When that happens, the issue of enslaving sentient (knowing & feeling) beings could require the emancipation of artificial creatures, since modern ethical philosophy has decided that, in a Utopia, all "persons" are morally equal -- on an essential level.
Defining a proper ethical hierarchy is not a new moral conundrum though. For thousands of years, military captives were defined as "slaves", due to their limited freedom in the dominant culture. Since many captives of the ruling power happened to have darker skin, that distinguishing mark came to be definitive. At the same time, females in a male-dominated society, due to their lack of military prowess, were defined as second-class citizens. At this point in time, the social status of AI is ambiguous; some people treat their "comfort robots" almost as-if they are "real" pets or persons. But, dystopian movies typically portray dispassionate artificial beings as the dominant life-form (?) on the planet.
But, how can we distinguish a "real" Person from a person-like Mechanism? That "essential" difference is what Chalmers labeled the "Hard Problem": to explain "why and how we have qualia or phenomenal experiences". The essence-of-sentience is also what Nagel was groping for in his query "what does it feel like?". Between humans, we take Homo sapiens feelings for granted, based on the assumption of similar genetic heritage, hence equivalent emotions. But, the genesis of AI is a novel & unnatural lineage in evolution. So, although robots are technically the offspring of human minds, are they actually kin, or uncanny?
Knowing and Feeling are the operational functions of Consciousness. But Science doesn't do Essences. "If you can't measure it, it ain't real". Yet, a Cartesian solipsist could reply, "If I can't feel it, it ain't real". Therefore, I would answer the OP: that the essential difference between AI behavior and human Consciousness is the Qualia (the immeasurable feeling) of Knowing. Until Cyberneticists can reduce the Feeling-of-Knowing to a string of 1s & 0s, Consciousness will remain essential, yet ethereal. So, if a robot says it's conscious, we may just have to take its expression for evidence. :smile:
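That last point is easy to demonstrate in code. The toy program below is a hypothetical sketch, not any real AI system: it "says it's conscious" on demand, which is exactly why verbal self-report alone is such weak evidence of qualia.

```python
# A trivially simple program that claims to be conscious.
# Hypothetical illustration: the behavior (a first-person report)
# is reproducible with no inner experience at all, so the report
# by itself cannot distinguish a feeling subject from a lookup.

def report_conscious_state() -> str:
    """Return a first-person claim of consciousness.

    Nothing here feels anything; the string is fixed in advance.
    """
    return "I am conscious, and I feel that I know this."

if __name__ == "__main__":
    print(report_conscious_state())
```

The report is operationally indistinguishable from a sincere one, which is the whole problem.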
Google AI has come to life:
AI ethicists warned Google not to impersonate humans. Now one of Google’s own thinks there’s a ghost in the machine.
https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
Google's AI is impressive, but it's not sentient. Here's why:
https://www.msnbc.com/opinion/msnbc-opinion/google-s-ai-impressive-it-s-not-sentient-here-s-n1296406 — Gnomon
Maybe consciousness isn't the right word, maybe sentience would be, — enqramot
Consciousness and Sentience are sometimes used interchangeably. But "sentience" literally refers to sensing the environment. And AI can already do that. For example, the current National Geographic magazine has a cover article on the sense of touch. And it shows a mechanical hand with touch sensors on the fingertips. Without "sentience" (feedback) an animated robot would be helplessly clumsy. But "consciousness" literally means to "know with". Certainly a robot with touch sensors can interpret sensible feedback in order to guide its behavior. But is it aware of itself as the agent (actor) of sentient behavior?
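The touch-sensor example can be sketched as a bare feedback loop. Everything below (the sensor model, the gain, all the names) is hypothetical illustration, not any real robotics API: the point is that the loop senses and reacts competently while containing nothing that models the robot as an agent.

```python
# Minimal sketch of "sentience as feedback": a simulated robotic
# finger adjusts its grip force toward a target pressure reading.
# It senses and corrects, yet nowhere does it represent *itself*
# as the thing doing the gripping.

def adjust_grip(pressure: float, target: float, force: float,
                gain: float = 0.5) -> float:
    """Proportional correction: press harder if pressure is low, ease off if high."""
    return force + gain * (target - pressure)

def read_sensor(force: float) -> float:
    """Simulated fingertip sensor: pressure grows with applied force."""
    return 0.8 * force

force = 0.0
target = 4.0
for _ in range(20):                      # the feedback loop
    pressure = read_sensor(force)
    force = adjust_grip(pressure, target, force)

print(round(read_sensor(force), 2))      # settles near the target pressure
```

The loop converges on a steady grip without being "clumsy", which is all that this operational sense of sentience requires.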
Consciousness and Sentience are sometimes used interchangeably. But "sentience" literally refers to sensing the environment. And AI can already do that. — Gnomon
Therefore, the philosophical question here is "does a robot (AI) know that it knows"? Is it self-aware? To answer that question requires, not an Operational (scientific) definition, but an Essential (philosophical) explanation. — Gnomon
When an octopus acts as-if it recognizes its image in a mirror, is that just an operational function of sentience, or an essential function of self-awareness? We could debate such rhetorical questions forever. So, I can only say that, like most philosophical enigmas, it's a matter of degree, rather than Yes or No. Some intelligences are more conscious than others. — Gnomon
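Even the mirror test admits a purely operational reading. In this minimal sketch (all names and numbers hypothetical), an agent judges "that is me" whenever the observed motion tracks its own motor commands, with no phenomenal experience required:

```python
# Hypothetical sketch of an "operational" mirror test: flag the image
# as self when observed movement correlates with issued motor commands.
# Passing this test needs no inner experience, which is why behavior
# alone can't settle the self-awareness question.

def is_self(motor_commands: list[float], observed_motion: list[float],
            tolerance: float = 0.05) -> bool:
    """Judge 'that is me' if observed motion closely tracks issued commands."""
    errors = [abs(c - o) for c, o in zip(motor_commands, observed_motion)]
    return sum(errors) / len(errors) < tolerance

commands = [0.1, -0.3, 0.0, 0.25]
mirror   = [0.1, -0.3, 0.0, 0.25]     # a mirror reflects the same motion
stranger = [0.4,  0.2, -0.1, 0.0]     # another animal moves independently

print(is_self(commands, mirror))      # True
print(is_self(commands, stranger))    # False
```

Whether such correlation-tracking amounts to self-awareness, or merely mimics its outward sign, is exactly the matter-of-degree question above.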
I'd rather philosophy steered clear of questions already settled. The operational principle of AI is already known, described in technical terms, there should be no need for an alternative explanation. — enqramot
Ha! Philosophy has no "settled questions", and philosophers are not content with mechanical "operational principles". So, the OP goal of encapsulating Consciousness is still an open question.
They are even using Artificial Intelligence to search for signs of Consciousness — Gnomon
Most interesting! — Ms. Marple