• NOS4A2
    9.2k
    Consider Hans Moravec's Paradox:

    it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.

    It’s great that computers can play chess, search the internet, and mimic human intelligence, but human intelligence is arguably the easiest behavior to mimic. As one of our youngest behaviors, it is less evolved and less complex than other traits, like perception and mobility. Even now, nearly 40 years after Moravec’s observation, robots tend to look like bumbling fools whenever they attempt those other behaviors, even though they can still school the best of us at chess and math.

    I’m curious about what Moravec’s Paradox might imply about the philosophy of mind and request the wisdom of others. What questions might it raise for the field?

    Just as an example, I’ve never been as impressed by intelligence as I am with other forms of natural ability, and I suspect that this paradox helps to illustrate why. I have an instinctual aversion to analytic philosophy and to the general notion that a man who stares at words and symbols all day can offer more value to my education or the pursuit of wisdom than, say, an athlete or a shop teacher, or anyone else who prefers to deal with things outside of themselves. I prefer common sense to the rational, the body to the mind, the objective to the subjective, and tend to defend the former from the encroachment of the latter. Does anyone else feel this way? Have we glorified intelligence at the expense of other abilities?

    At any rate, I thought Moravec's Paradox and its implications for the philosophy of mind would make a good topic of discussion.
  • ENOAH
    843
    Prefacing my thoughts with the caveat that I am unlearned in the technological aspects, and especially in the deeper sciences involved in shaping AI (and, come to think of it, unlearned in every facet of this issue): I think the potentially insurmountable challenge (assuming the "goal" is to make artificial humans, not simply AI) is going to be reproducing organic "feeling." By this I mean feeling related to "mood" and to an unidentifiable "form" of thought (thoughtless thought), not feeling as in touch.

    Technology is capable of reproducing mobility, sensation, and as you pointed out, information processing (intelligence).

    But the root of our "aware-ing," independent of Mind (though "hijacked" or displaced thereby), is the way we are triggered to feel by experience, and by that I mean every nano-"second" of it and each corresponding subtle variation.

    And sure, we can duplicate a reward/punishment system with subtle variations, possibly as sophisticated as our sense of (inner) feeling (some of which, I submit, is imperceptible even to mind). But whereas with the other faculties it seems we can even surpass the Organic, when it comes to what I would call "real human consciousness" as opposed to Mind/Self consciousness, the aware-ing-feeling, I have doubts we can ever succeed.

    I think Mind itself fails to represent those feelings, but projects representations called emotions. Emotions are already a projection from Reality. It might be that we cannot duplicate a projected/represented Reality, now twice removed.

    As a simple illustration (not purporting in any way to be an analogy, let alone a sound one), it's like other forms of Fiction. When we project a real-life character in books or movies, we can duplicate it in all respects but its feelings. Think of the actor who played Gandhi. Even thoughts (at least knowable ones) could be transmitted if there were a way to record and transmit them. But the Organic being is necessary for the feeling. Even the "how it feels" has "left" the Organism and entered Mind. That can be duplicated. But not aware-ing feeling.
  • NOS4A2
    9.2k
    Nice thinking.

    I think you’re right about “human feeling”, much of which I believe is derived from embodied experience. It’s like we’ve started AI in the wrong direction, conceiving it first as disembodied brains and building it out from there, rather than as embodied beings. Embodiment is probably so fundamental to experience that to forget it seems foolish.
  • ENOAH
    843
    "we’ve started AI in the wrong direction, conceiving it first as disembodied brains" (NOS4A2)

    Yes, because we also approach mind/body in the wrong direction, as if real being somehow inhabits the mind.