• enqramot
    64
    By this I mean: do you think it would be (strictly theoretically) possible to emulate, say, a person with AI (common artificial intelligence, naturally not possessing any sort of consciousness, but instruction-driven, as deterministic as they get), given unlimited resources like RAM, CPU speed, etc.? ASSUMPTIONS: an average person is not totally unpredictable, somehow follows certain patterns (you could argue they're governed by the subconscious), and is limited in terms of imagination, memory, intelligence, you name it. All this creates this person's general character, their personality, which could be described in general terms. Programming such a person into an AI system would mean a much more detailed description of that person, down to the most minute details of their behaviour in various scenarios.
    What would be the stumbling block, do you think? How is a conscious mind essentially different from AI on a strictly operational level? How would you go about programming such a thing? What are conscious thoughts? Who creates them / how are they generated? Are they created inside the system or received from outside?
    What would be the implications of two entities, one conscious and one unconscious, displaying the same sort of behaviour?
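    To make the premise concrete, here is a toy sketch of such a fully deterministic "person". The class name, traits and canned responses are all invented for illustration; this is not a claim about how a real emulation would be built:

```python
import hashlib

class PersonModel:
    """A toy, fully deterministic 'person': behaviour is a fixed
    function of personality traits, accumulated memory, and the
    current stimulus -- 'as deterministic as they get'."""

    def __init__(self, traits: dict[str, float]):
        self.traits = traits          # e.g. {"patience": 0.3, "curiosity": 0.9}
        self.memory: list[str] = []   # everything the model has 'experienced'

    def respond(self, stimulus: str) -> str:
        # Same traits + same history + same stimulus always yield
        # the same response: hash the full state and pick an action.
        state = repr(sorted(self.traits.items())) + "|".join(self.memory) + stimulus
        digest = int(hashlib.sha256(state.encode()).hexdigest(), 16)
        options = ["agree", "object", "ask a question", "change the subject"]
        choice = options[digest % len(options)]
        self.memory.append(stimulus)  # experience changes future behaviour
        return choice
```

    Given identical traits and an identical history of stimuli, two copies of this model behave identically forever; they diverge only if their histories diverge. Whether any such description, however detailed, would amount to a mind is exactly the open question above.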
  • Down The Rabbit Hole
    530


    I am inclined to think that consciousness is a natural result of complexity. If that's the case, an exact emulation may have to be conscious too.
  • Gnomon
    3.7k
    How is a conscious mind essentially different from AI on a strictly operational level? How would you go about programming such a thing?enqramot
    Your question hinges on your philosophical or technical definition of "Consciousness". Literally, the "-ness" suffix implies that the reference is to a general State or felt Quality (of sentience), not to a specific Thing or definite Quanta (e.g. neurons). In Nature, animated behavior (e.g. seeking food, or avoiding being food) is presumed to be a sign of minimal sentience and self-awareness.

    AI programs today are able to crudely mimic sophisticated human behaviors, and the common expectation is that the animation & expressions of man-made robots will eventually be indistinguishable from their nature-made makers -- on an "operational level". When that happens, the issue of enslaving sentient (knowing & feeling) beings could require the emancipation of artificial creatures, since modern ethical philosophy has decided that, in a Utopia, all "persons" are morally equal -- on an essential level.

    Defining a proper ethical hierarchy is not a new moral conundrum though. For thousands of years, military captives were defined as "slaves", due to their limited freedom in the dominant culture. Since many captives of the ruling power happened to have darker skin, that distinguishing mark came to be definitive. At the same time, females in a male-dominated society, due to their lack of military prowess, were defined as second-class citizens. At this point in time, the social status of AI is ambiguous; some people treat their "comfort robots" almost as if they are "real" pets or persons. But, dystopian movies typically portray dispassionate artificial beings as the dominant life-form (?) on the planet.

    But, how can we distinguish a "real" Person from a person-like Mechanism? That "essential" difference is what Chalmers labeled the "Hard Problem": to explain "why and how we have qualia or phenomenal experiences". The essence-of-sentience is also what Nagel was groping for in his query "what is it like to be...?". Between humans, we take Homo sapiens feelings for granted, based on the assumption of similar genetic heritage, hence equivalent emotions. But the genesis of AI is a novel & unnatural lineage in evolution. So, although robots are technically the offspring of human minds, are they actually kin, or uncanny?

    Knowing and Feeling are the operational functions of Consciousness. But Science doesn't do Essences. "If you can't measure it, it ain't real". Yet, a Cartesian solipsist could reply, "If I can't feel it, it ain't real". Therefore, I would answer the OP: that the essential difference between AI behavior and human Consciousness is the Qualia (the immeasurable feeling) of Knowing. Until Cyberneticists can reduce the Feeling-of-Knowing to a string of 1s & 0s, Consciousness will remain essential, yet ethereal. So, if a robot says it's conscious, we may just have to take its expression for evidence. :smile:


    Google AI has come to life :
    AI ethicists warned Google not to impersonate humans. Now one of Google’s own thinks there’s a ghost in the machine.
    https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

    Google's AI is impressive, but it's not sentient. Here's why :
    https://www.msnbc.com/opinion/msnbc-opinion/google-s-ai-impressive-it-s-not-sentient-here-s-n1296406
  • enqramot
    64
    I am inclined to think that consciousness is a natural result of complexity. If that's the case, an exact emulation may have to be conscious too.Down The Rabbit Hole

    I heard this theory, but I must admit it doesn't really make any sense to me, tbh. I just can't see how increasing complexity can lead to anything other than just more complexity. Impossible to rule this out, of course, but suppose scientists achieve consciousness in this way: they'd still have no clue what consciousness is. To me it seems that the very idea of associating computers with consciousness is based solely on apparent similarity of what computers do to human thinking, disregarding the fact that the general principle is completely different. Sounds like wishful thinking - hoping that increasing complexity will somehow result in consciousness as a byproduct. It's like barking up the wrong tree entirely. We know how computers work - there's nothing mystical about them - and they are not conscious.
  • Jackson
    1.8k
    the very idea of associating computers with consciousness is based solely on apparent similarity of what computers do to human thinking,enqramot

    Computers don't need to be conscious. I don't see why people make a big deal out of consciousness.
  • enqramot
    64

    Good and long reply, but it's already 0230 local time so I'll take a stance tomorrow.
  • Manuel
    4.1k


    I think the problem here already lies in the premise, that consciousness is a kind of AI. Or that it could be somehow transcribed to AI terms and relevant technology.

    It seems to me that this is the immediate stumbling block. Brains are biological organs that have gone through billions of years of evolution. Machines are things people make, and they are far less sophisticated than brains; nothing that could be done with any technology comes close.

    There's also the issue that there are no machines in nature; we could choose to think of nature as a machine, but this would be misleading if taken literally.

    As for the general question, I think we don't know anywhere nearly enough about experience to even know where to begin on how to create consciousness. And we likely never will, given our cognitive limitations as natural beings.
  • enqramot
    64
    Computers don't need to be conscious. I don't see why people make a big deal out of consciousness.Jackson

    Absolutely. Computers don't do consciousness and that's their advantage. But the nature of consciousness has eluded science for such a long time that it's impossible not to see it as a huge challenge.
  • Jackson
    1.8k
    we don't know anywhere nearly enough about experience to even know where to begin on how to create consciousnessManuel

    I don't think people working in AI are even concerned with consciousness.
  • Jackson
    1.8k
    But the nature of consciousness has eluded science for such a long time that it's impossible not to see it as a huge challenge.enqramot

    Much to debate here, and worthwhile. My short answer is that I think people make consciousness into a fetish. The question is about intelligence and processing information and making new things.
  • enqramot
    64
    Much to debate here, and worthwhile. My short answer is that I think people make consciousness into a fetish. The question is about intelligence and processing information and making new things.Jackson

    Maybe consciousness isn't the right word, maybe sentience would be, but the fundamental difference between computers and me is that I am alive and can feel, whereas a computer is completely dead and about as conscious as a brick, no matter how much RAM it has, how fast its CPU is, or how complex the program it's currently running. I believe talking about consciousness in relation to computers can only be dictated by a marketing strategy. Selling a "sentient" computer will make you rich in no time.
  • Jackson
    1.8k
    I believe talking about consciousness in relation to computers can only be dictated by a marketing strategy.enqramot

    AI is here and getting more complex. Again, I don't see the importance of self awareness.
  • Manuel
    4.1k


    That's probably true. The issue then becomes: to what extent is it intelligible to create a separation between consciousness and intelligence?

    Because if these aren't thought about carefully, we might attribute a lot of intelligence to almost all living creatures. And that's tricky.
  • enqramot
    64
    AI is here and getting more complex. Again, I don't see the importance of self awareness.Jackson

    I don't see it either. Not in computers, that is. But I find the mechanism of it interesting in itself, although I believe science is no closer to solving it than it has ever been. Hard to even know where to start.
  • Jackson
    1.8k
    The issue then become, to what extent is it intelligible to create a separation between consciousness and intelligence?Manuel

    I separate them. Consciousness is a function of intelligence. But intelligence can function without consciousness.
  • Jackson
    1.8k
    science is no closer to solving it than it has ever beenenqramot

    AI is an engineering problem. No need to have a theory of consciousness.
  • Manuel
    4.1k


    Oh, that's interesting. It looks tricky to me, because: why use consciousness when intelligence can be way more efficient?
  • enqramot
    64
    AI is an engineering problem. No need to have a theory of consciousness.Jackson

    Sure, if we're talking about AI. But these are completely separate topics, AI and consciousness, and I was only considering whether one can emulate the other.
  • Jackson
    1.8k
    Oh, that's interesting. It looks tricky to me, because, why use consciousness when intelligence can be way more efficient?Manuel

    That is my point, yes.
  • enqramot
    64
    I think the problem here already lies in the premise, that consciousness is a kind of AI.Manuel

    I'm not making such a premise, merely considering to what extent behaviour of a conscious being can be described in terms of computer code.
  • Manuel
    4.1k


    Ah, then it is a good point.



    Ah well, if it's behavior, then that's a different thing. But behavior can be very misleading, in the sense that we must not confuse the data (which is what behavior is) with the theory (that which causes the behavior).
  • Jackson
    1.8k
    This is from Turing:
    "The view that machines cannot give rise to surprises is due, I believe, to a fallacy; this is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it." https://www.csee.umbc.edu/courses/471/papers/turing.pdf

    Here he is criticizing the idea that a machine cannot be creative.
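    Turing's point can be illustrated with a deliberately trivial example (the logistic map, chosen here purely for illustration): a deterministic rule simple enough to state in one line, whose long-run consequences still don't "spring into the mind" along with it:

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map: a completely specified, deterministic rule."""
    return r * x * (1.0 - x)

# Two starting points that agree to one part in a billion...
a, b = 0.2, 0.2 + 1e-9
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# ...soon disagree wildly: knowing the rule does not mean knowing
# all of its consequences in advance.
print(f"largest divergence over 60 steps: {max_gap:.3f}")
```

    The machine is doing nothing but following its instructions, yet its behaviour can still surprise the very person who wrote the rule, which is all Turing's argument requires.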
  • Down The Rabbit Hole
    530


    I am inclined to think that consciousness is a natural result of complexity. If that's the case, an exact emulation may have to be conscious too.Down The Rabbit Hole

    I heard this theory, but I must admit it doesn't really make any sense to me, tbh. I just can't see how increasing complexity can lead to anything other than just more complexity.enqramot

    It is hard to believe, but this theory must be judged in comparison to the other theories of consciousness.

    What theories of consciousness are more plausible?
  • enqramot
    64
    It is hard to believe, but this theory must be judged in comparison to the other theories of consciousness.

    What theories of consciousness are more plausible?Down The Rabbit Hole

    I honestly don't believe there are any credible theories in existence explaining the phenomenon of consciousness. Science is completely in the dark in this area as far as I'm aware (correct me if I'm wrong).
  • Down The Rabbit Hole
    530


    I honestly don't believe there are any credible theories in existence explaining the phenomenon of consciousness. Science is completely in the dark in this area as far as I'm aware (correct me if I'm wrong).enqramot

    The trouble is, how do you prove the subject has experiences? I think it likely we will never be able to do a test to tell us what consciousness is.
  • enqramot
    64
    The trouble is, how do you prove the subject has experiences? I think it likely we will never be able to do a test to tell us what consciousness is.Down The Rabbit Hole

    Exactly, I can't fathom how such a test could be possible.
  • enqramot
    64
    Your question hinges on your philosophical or technical definition of "Consciousness". [...] So, if a robot says it's conscious, we may just have to take its expression for evidence.Gnomon

    I wanted to make a comment but realised that basically I agree with everything you said and have nothing meaningful to add at this time, so I'll leave it at that.
  • Gnomon
    3.7k
    Maybe consciousness isn't the right word, maybe sentience would be,enqramot
    Consciousness and Sentience are sometimes used interchangeably. But "sentience" literally refers to sensing the environment. And AI can already do that. For example, the current National Geographic magazine has a cover article on the sense of touch. And it shows a mechanical hand with touch sensors on the fingertips. Without "sentience" (feedback), an animated robot would be helplessly clumsy. But "consciousness" literally means to "know with". Certainly a robot with touch sensors can interpret sensible feedback in order to guide its behavior. But is it aware of itself as the agent (actor) of sentient behavior?

    Therefore, the philosophical question here is "does a robot (AI) know that it knows"? Is it self-aware? To answer that question requires, not an Operational (scientific) definition, but an Essential (philosophical) explanation. All man-made machines have some minimal feedback to keep them on track. So, it's obvious that their functions are guided by operational feedback loops. And that is the basic definition of Cybernetics (self-controlled behavior). Which is why some AI researchers are satisfied with Operational Sentience, and don't concern themselves with Essential Consciousness. It's what Jackson calls an "engineering problem".
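    A minimal sketch of such an operational feedback loop, assuming nothing beyond a made-up thermostat (the names and numbers are illustrative, not any real control system):

```python
def thermostat(temp: float, setpoint: float = 21.0, gain: float = 0.5) -> float:
    """Minimal operational feedback: sense the error, output a corrective
    action. Nothing in this loop 'knows that it knows' anything."""
    return gain * (setpoint - temp)

# Simulate a cold room: each corrective action nudges the temperature
# toward the setpoint, purely by operational feedback.
temp = 15.0
for _ in range(20):
    temp += thermostat(temp)
```

    The loop senses, compares and corrects, i.e. it is "sentient" in the minimal operational sense above, yet there is no place in it where a feeling-of-knowing could reside.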

    But philosophers are not engineers. So, they are free to ask impractical questions that may never be answered empirically. When an octopus acts as if it recognizes its image in a mirror, is that just an operational function of sentience, or an essential function of self-awareness? We could debate such rhetorical questions forever. So, I can only say that, like most philosophical enigmas, it's a matter of degree, rather than Yes or No. Some intelligences are more conscious than others. So, it's hard to "encapsulate" Consciousness into a simple matter of fact.

    Ironically, the one asking such impractical rhetorical questions may be the most self-aware, and the most introspective & self-questioning. The behavior of Intelligent animals is typically pragmatic, and focused on short-term goals : food, sex, etc. They don't usually create art for art's sake. But, when they do, can we deny them some degree of self-consciousness? :smile:

    ELEPHANT SELF-PORTRAIT
  • enqramot
    64
    Consciousness and Sentience are sometimes used interchangeably. But "sentience" literally refers to sensing the environment. And AI can already do that.Gnomon

    Let's stick to "consciousness" then :)

    Therefore, the philosophical question here is "does a robot (AI) know that it knows"? Is it self-aware? To answer that question requires, not an Operational (scientific) definition, but an Essential (philosophical) explanation.Gnomon

    I'd rather philosophy steered clear of questions already settled. The operational principle of AI is already known and described in technical terms; there should be no need for an alternative explanation. By the same token, we could ask: "Is a wall aware of its existence?" To which one would be tempted to respond: "What an absurd idea!"

    When an octopus acts as if it recognizes its image in a mirror, is that just an operational function of sentience, or an essential function of self-awareness? We could debate such rhetorical questions forever. So, I can only say that, like most philosophical enigmas, it's a matter of degree, rather than Yes or No. Some intelligences are more conscious than others.Gnomon

    Yes, various animals (including humans) seem to be conscious to varying degrees. For instance, I feel like a zombie when I haven't had enough sleep, and my level of consciousness feels lower than when I'm in top shape. Some people, otoh, seem to run on autopilot for most of their lives. It would be interesting to scan them with a "consciousness-O-meter", should such a thing exist. Anyway, thanks for your input.
  • Gnomon
    3.7k
    I'd rather philosophy steered clear of questions already settled. The operational principle of AI is already known and described in technical terms; there should be no need for an alternative explanation.enqramot
    Ha! Philosophy has no "settled questions", and philosophers are not content with mechanical "operational principles". So, the OP goal of encapsulating Consciousness is still an open question.

    Nevertheless, pragmatic scientists are still working on a Consciousness Meter to update the crude EEGs and somewhat more sophisticated MRIs. They are even using Artificial Intelligence to search for signs of Consciousness in Natural Intelligences that appear to be anaesthetic (unconscious). However, they are not looking for philosophical essences, but for operational signs & symptoms. So, even then, the OP on The Philosophy Forum will go unanswered. :smile:


    Artificially intelligent consciousness meter :
    https://www.monash.edu/data-futures-institute/study/phd-scholarship/artificially-intelligent-consciousness-meter

    The hunt for hidden signs of consciousness in unreachable patients :
    https://www.technologyreview.com/2021/08/25/1031776/the-hunt-for-hidden-signs-of-consciousness-in-unreachable-patients/
  • Agent Smith
    9.5k
    They are even using Artificial Intelligence to search for signs of ConsciousnessGnomon

    Like how non-life (Mars rovers) looks for (signs of) life.

    Most interesting! — Ms. Marple