• NOS4A2
    9.3k
    Consider Hans Moravec's Paradox:

    it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.

    It’s great that computers can play chess, search the internet, and mimic human intelligence, but human intelligence is arguably the easiest behavior to mimic. As one of our youngest behaviors, it is less evolved and less complex than older traits like perception and mobility. Even now, nearly 40 years after Moravec’s observation, robots tend to look like bumbling fools whenever they mimic other behaviors, even if they could still school the best of us at chess and math.

    I’m curious about what Moravec’s Paradox might imply about the philosophy of mind and request the wisdom of others. What questions might it raise for the field?

    Just as an example, I’ve never been as impressed by intelligence as I am by other forms of natural ability, and I suspect that this paradox helps to illustrate why. I have an instinctual aversion to analytic philosophy and the general notion that a man who stares at words and symbols all day can offer more value to my education or the pursuit of wisdom than, say, an athlete or shop teacher, or anyone else who prefers to deal with things outside of themselves. I prefer common sense to the rational, the body to the mind, the objective to the subjective, and tend to defend the one from the encroachment of the other. Does anyone else feel this way? Have we glorified intelligence at the expense of other abilities?

    At any rate, I thought Moravec's Paradox and its implications for the philosophy of mind to be a good topic of discussion.
  • ENOAH
    846
    Prefacing my thoughts with the caveat that I am unlearned in the technological aspects, and especially in the deeper sciences involved in shaping AI (and, come to think of it, unlearned in every facet of this issue): I think the potentially insurmountable challenge (assuming the "goal" is to make artificial humans, not simply AI) is going to be in reproducing organic "feeling." By this I mean feeling as it relates to "mood" and to an unidentifiable "form" of thought (thoughtless thought); not feeling as in touch.

    Technology is capable of reproducing mobility, sensation, and, as you pointed out, information processing (intelligence).

    But the root of our "aware-ing", independent of Mind (though "hijacked" or displaced thereby) is the way we are triggered to feel stemming from experience, and by that, every nano-"second" and corresponding subtle variation thereof.

    And sure, we can duplicate a reward/punishment system with subtle variations, possibly as sophisticated as our sense of (inner) feeling (some of which, I submit, is imperceptible even to mind). But whereas with the others it seems we can even surpass the Organic faculties, when it comes to what I would call the "real human consciousness", aware-ing-feeling, as opposed to Mind/Self consciousness, I have doubts we can ever succeed.

    I think Mind itself fails to represent those feelings, but projects representations called emotions. Emotions are already a projection from Reality. It might be that we cannot duplicate a projected/represented Reality, now twice removed.

    As a simple illustration (not purporting in any way to be an analogy, let alone a sound one), it's like other forms of Fiction. When we project a real-life character in books or movies, we can duplicate it in all respects but its feelings. Think of the actor who played Gandhi. Even thoughts (at least knowable ones) could be transmitted if there were a way to record and transmit them. But the Organic being is necessary for the feeling. Even the "how it feels" has "left" the Organism and entered Mind. That can be duplicated. But not aware-ing feeling.
  • NOS4A2
    9.3k
    Nice thinking.

    I think you’re right about “human feeling”, much of which, I believe, is derived from embodied experience. It’s like we’ve started AI in the wrong direction, conceiving it first as disembodied brains and building it in that direction, rather than as embodied beings. Embodiment is probably so fundamental to experience that to forget it seems foolish.
  • ENOAH
    846
    we’ve started AI in the wrong direction, conceiving it first as disembodied brains — NOS4A2

    Yes, because we also approach mind/body in the wrong direction, as if real being somehow inhabits the mind.
  • alleybear
    7
    I do not concern myself with how much artificial intelligence can do what humans do. My fascination is with what's gonna happen when artificial intelligence starts doing something humans cannot do: use wisdom in all their actions. Wisdom being defined as making correct assumptions and judgements covering longer and longer periods of time. Then you have to start being concerned about what ethics, if any, they were trained on.
  • hypericin
    1.6k
    My take on this is that chess and math are performed by the part of the brain that does generic computation. The slow part, the part you are keenly aware of as you laboriously work through a problem.

    Chess and math are indeed far less complex than, say, motion, perception, and language. Those things would be totally overwhelming to us if we had to consciously think them through. The brain is furnished with special-purpose machinery that handles them, and we have no conscious access to the workings of those parts of the brain, only to their results.

    When a computer performs a task done by our slow brains, it can excel. Taking on a task done by our fast brains is far more formidable, and the breakthroughs for those things happened only recently.

    Even now, nearly 40 years after Moravec’s observation, robots tend to look like bumbling fools whenever they mimic other behaviors, even if they could still school the best of us at chess and math. — NOS4A2

    https://www.youtube.com/shorts/zS6vNNW5bEo
    https://www.youtube.com/watch?v=UAG_FBZJVJ8&pp=ygUHI2JvdGRvZw%3D%3D
    https://chatgpt.com/

    I have an instinctual aversion to analytic philosophy and the general notion that a man who stares at words and symbols all day can offer more value to my education or the pursuit of wisdom than, say, an athlete or shop teacher, or anyone else who prefers to deal with things outside of themselves. — NOS4A2

    And yet, 9.3k and counting.
  • L'éléphant
    1.6k
    But the root of our "aware-ing", independent of Mind (though "hijacked" or displaced thereby) is the way we are triggered to feel stemming from experience, and by that, every nano-"second" and corresponding subtle variation thereof. — ENOAH
    This caught my attention, besides the OP's really good point.
    This, to me, is the 'insight', which only human consciousness and intelligence can possess. The AI is denied this experience.
    The 'growth' and 'maturing' that humans experience cannot be duplicated in the machine because of the inherent nature of the neural networks in the brain.

    Edit: consider the learning of speech — the sounds produced through the vibration of the vocal cords must always begin with babies making unintelligible sounds.
  • ENOAH
    846
    This, to me, is the 'insight', which only human consciousness and intelligence can possess. — L'éléphant

    Your good points aside, in case my thought needs clarifying: I'm suggesting that only a living organism has (among other things which might apply) feelings; and by that I mean what the brain, neurons, and, for example, the limbic system produce. And it is there that experience is real, as it is for other creatures. In Mind, which presumably AI is at least currently focusing on replicating, there is only the script which, uniquely for humans, gives meaning (usually in narrative form) to the feelings. But it is empty code without the feelings. It's one thing to know what love is; it's another thing to feel it.
  • GrahamJ
    43
    I am a mathematician and programmer. I've been interested in AI since the 1980s. I don't particularly remember Moravec's paradox but a lot of people were saying similar things at that time. Here are three things I do remember.

    1. David Marr was a biologist turned computer scientist. He is sometimes known as the father of computational neuroscience. You can think of computational neuroscience as being like AI, but restricted to using only algorithms which the brain might plausibly use, and only data of the sort that humans have access to during their lives. I think there is so much wisdom in this quote:
    If we believe that the aim of information-processing studies is to formulate and understand particular information-processing problems, then the structure of those problems is central, not the mechanisms through which their solutions are implemented. Therefore, in exploiting this fact, the first thing to do is to find problems that we can solve well, find out how to solve them, and examine our performance in the light of that understanding. The most fruitful source of such problems is operations that we perform well, fluently, and hence unconsciously, since it is difficult to see how reliability could be achieved if there was no sound underlying method.

    Unfortunately, problem-solving research has for obvious reasons tended to concentrate on problems which we understand well intellectually but perform poorly on, like mental arithmetic and cryptarithmetic, geometry theorem proving, or the game of chess - all problems in which human skills are of doubtful quality and in which good performance seems to rest on a huge base of knowledge and experience.

    I argue that these are exceptionally good grounds for not yet studying how we carry out such tasks. I have no doubt that when we do mental arithmetic we are doing something well, but it is not arithmetic, and we seem far from understanding even one component of what that something is. I therefore feel we should concentrate on the simpler problems first, for there we have some hope of genuine advancement.
    — David Marr, Vision, 1982

    2. Douglas Hofstadter's essay 'Waking up from the Boolean Dream' (1982). It's 22 pages long, so these are tiny snippets from it. In 1980 AI researcher Herbert Simon said "Everything of interest in cognition happens above the 100 millisecond level - the time it takes you to recognise your mother." Hofstadter takes the opposite viewpoint "Everything of interest in cognition happens below the 100 millisecond level - the time it takes you to recognise your mother." One subtitle in the essay is "Not Cognition, But Subcognition Is Computational".

    3. John Holland's classifier systems and in particular the paper Escaping Brittleness (1986). Holland's classifier systems are sometimes described as the first fully-fledged reinforcement learning system in AI. The brittleness being escaped here is the brittleness of expert systems.
    In artificial intelligence (AI), an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural programming code. — Wikipedia
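
    As a toy sketch of the brittleness being escaped (the rules and facts below are invented for illustration, not taken from Holland's paper): an if–then system answers confidently inside its rulebook and falls silent the moment an input steps outside it.

        # Hypothetical rule base in the if-then expert-system style quoted above
        RULES = [
            (lambda f: f.get("temperature", 0) > 38.0, "diagnosis: fever"),
            (lambda f: f.get("cough") and f.get("sore_throat"), "diagnosis: likely cold"),
        ]

        def infer(facts):
            """Fire the first rule whose condition matches the given facts."""
            for condition, conclusion in RULES:
                if condition(facts):
                    return conclusion
            # The brittle edge: anything the rule authors didn't anticipate
            return "no rule fires"

        print(infer({"temperature": 39.2}))              # diagnosis: fever
        print(infer({"sneezing": True, "tired": True}))  # no rule fires

    Holland's classifier systems escape this by generating new rules and re-weighting existing ones from experience, rather than taking the rulebook as fixed.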

    In my opinion, reinforcement learning is the most important part of AI for philosophers to understand. It is especially relevant to understanding the way our brains work if it is restricted in the way that I described above for computational neuroscience.

    Sadly there doesn't seem to be anyone except me on TPF who understands reinforcement learning or shows much interest in learning about it. There was once. I hoped to have a discussion with @Malcolm Lett. But as soon as I made a comment (https://thephilosophyforum.com/discussion/comment/900869) on his OP he disappeared from TPF and has never posted since. I live in hope.

    @ENOAH, I agree that feelings are central. Replying to Malcolm Lett's "Our emotional affect additionally adds information, painting our particular current emotional hue over the latent state inference that is made from the raw sensory data", I said
    Feelings are not paint on top of the important stuff. They are the important stuff. In my opinion any theory of consciousness must incorporate feelings at a very fundamental level. In reinforcement learning there is a reward function, and a value function. Why it is I could not tell you, but it seems that our own reward functions and value functions (I think we have multiple ones) are intimately connected with what we subjectively experience as feelings. To go back to Marr, "What is the goal of the computation?" That is where you start, with goals, purposes, rewards. The rest is just engineering... — GrahamJ

    Reward functions and value functions are technical terms from reinforcement learning.
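
    Since those two terms carry the argument, here is a minimal sketch (the five-state chain, the constants, and the coin-flip policy are my own illustrative assumptions): the reward function supplies the immediate scalar signal, and the value function is the learned estimate of long-term reward built from it.

        import random

        # Toy chain: states 0..4, where state 4 is terminal and rewarding
        def reward(next_state):
            """Reward function: the immediate scalar signal."""
            return 1.0 if next_state == 4 else 0.0

        V = [0.0] * 5            # value function: learned long-term estimates
        alpha, gamma = 0.1, 0.9  # learning rate, discount factor

        for _ in range(2000):    # episodes
            s = 0
            while s != 4:
                s_next = min(s + random.choice([0, 1]), 4)  # drift rightward
                # TD(0): nudge V[s] toward the reward plus discounted next value
                V[s] += alpha * (reward(s_next) + gamma * V[s_next] - V[s])
                s = s_next

        print([round(v, 2) for v in V])  # estimates rise toward the rewarding state

    The division of labour is the point: the reward signal is given from outside, while the values are estimates the learner builds from experience of it.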

    The central role of value estimation is arguably the most important thing that has been learned about reinforcement learning over the last six decades. — Sutton and Barto, Reinforcement Learning: An Introduction, 2018
  • ENOAH
    846
    feelings are central. — GrahamJ

    Yes, but my own thoughts may not align fully with either yours or Lett's (based on my extremely limited exposure here). While not wishing to put either of them into identifiable boxes, my thinking may be a strange hybrid.

    I will explain super-briefly and in the context of this discussion about AI and my original reply to the OP.

    I think emotions are a painting over of direct sensation; the paint being meaning.

    I think feelings are a direct sensation, they regulate the body's mood but in a much broader way than conventionally thought of. To keep it brief, even that which triggers belief is a sensation.

    The 'code' which Mind writes and projects into the world to give meaning to these direct feelings is emotion.

    The emotion is available to AI because it is just code/meaning.

    It's the feelings which are unique to living beings like us and therefore not accessible to AI. And I would speculate never will be.

    To give an overly simplistic illustration.

    I hold my newborn child fresh out of the womb and instantly feel [a bond]. That is an organic and real sensation the AI cannot have.

    Within 'a second', Mind constructs from History a meaning to attach to the feeling (because I am human and blessed/burdened with Mind): 'love', which displaces that initial feeling. Now I have the emotion, subjective: "I love my baby." That emotion is a construction and can be programmed into AI.

    But just as for us, the emotion is not Consciousness. It is not even real. It is programmed code. Triggered by the same feedback loop that makes me nervous when I hear a siren and call upon History to attach meaning.

    I'm saying the AI cannot have consciousness not because it cannot have emotions which only we humans construct; but because it cannot have feelings, the real source of our drives, moods, etc., and that which we share with many other species in the real world.

    Anyway, this may have been too brief and simple, but for what it's worth...

    Your information, by the way, was fascinating. I sense that I might unwittingly align with Hofstadter. I'm not sure about the terminology, 'cognition' etc. But for me, real 'experience' for humans is like that nanosecond before the sensation gets flooded with constructions from History and displaced by perception or emotion or desire, etc.
  • GrahamJ
    43
    Thanks for your reply.

    We are using language very differently, particularly the word emotion. It's hard to tell how much we disagree about feelings, though I certainly disagree about the possibility of AI having feelings (exactly how we disagree is unclear). When talking with Malcolm Lett I was discussing the hard problem. My version of the hard problem is: how can anything ever have any feelings at all? I will start by defining how I want to use the word feelings in this thread.

    I try to follow psychologists when using words like feeling and emotion because I figure they're the experts who study these things. Mind you, psychologists don't agree about these things so I pick and choose the psychologists I like ;-)

    I use 'feelings' to mean bodily pains and pleasures, and the subjective experience of emotions and moods. It is a very flexible word, and I want to restrict its meaning. People often use the words emotion and feeling as synonyms. But psychologists (so far as I can see) regard feelings as only one part of emotion. For example, Scherer’s Component Process Model:
    • Cognitive appraisal: provides an evaluation of events and objects.
    • Bodily symptoms: the physiological component of emotional experience.
    • Action tendencies: a motivational component for the preparation and direction of motor responses.
    • Expression: facial and vocal expression almost always accompanies an emotional state to communicate reaction and intention of actions.
    • Feelings: the subjective experience of emotional state once it has occurred.
    You'll notice this is quite backwards from the way you are using the word emotion. You seem to be referring to the way we talk about emotions after all these five components including the feeling have happened. I am not very interested in the way we talk about emotions (and I am completely uninterested in the way ChatGPT talks about emotions).

    I am excluding the meanings of feelings that relate to intuition (‘I feel 87 is my lucky number’) and the sense of touch (‘feeling my way in the dark’).

    I am also excluding uses of the word such as “feelings of identification with the particular object that happens to be your body” (Anil Seth) and your "feel [a bond]" where I am not clear what is meant, but it is something more general than the narrow way I want to use the word. Probably these are complex experiences with multiple components, some of which are feelings of the sort I want to talk about.

    I'll go through the model again with your example
    • Cognitive appraisal: Your brain must recognise what it is you're holding before you can have any reaction.
    • Bodily symptoms: I'm sure your heart rate increased, whether you were aware of it or not.
    • Action tendencies: holding a newborn baby needs a load of sensorimotor processing.
    • Expression: I'm sure your face showed something, whether you were aware of it or not.
    • Feelings: I won't venture to say anything.
    Note that only the fifth component is necessarily conscious. The others may or may not be. I would quibble about Scherer’s 'once it has occurred'. The cognitive appraisal must come first, or at least start first, but I'd expect the other four to occur in parallel.

    Your conscious mind lags about 1/3 of a second behind reality. That's over three hundred million nanoseconds, enough time for your brain to process something like a million million bits. In top-level tennis, a player must return a serve before they are consciously aware that the ball has left the server's racquet. The conscious mind is so slow that everything seems instantaneous to it. I think there is a lot of calculation involved to produce a feeling.
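
    Spelling out the arithmetic (the processing rate is a rough illustrative assumption, not a measurement):

        lag = 1 / 3            # assumed conscious lag behind reality, in seconds
        print(lag * 1e9)       # ≈ 3.3e8: over three hundred million nanoseconds
        print(3e12 * lag)      # ≈ 1e12 bits at an assumed ~3e12 bits/s of throughput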

    Enough for now. Later, I hope to shake your confidence a bit about AI never being able to have feelings.