• Shawn
    13.3k
I've been wondering about a possible scenario in which AI in its truest form is a simulation of the human brain. I have no idea how to prevent such a form of AI from becoming psychotic or depressed or anxious, given that human nature can be quite... shitty; but one would hope that equipping such an AI with some ethical guidance would help.

This seems awfully similar to Asimov's Laws, and I was wondering how to equip such an AI with the capacity to tell good from bad, that is, how to have an ethically minded AI. I have no idea how one could prevent said AI from playing host to all manner of negative human emotions, and instead have it decide of its own accord which emotions to express or feel.

Therefore, how would you, or others who have thought about this, go about deciding which emotions are desirable for an AI to entertain and which are not?
  • BC
    13.6k
The emotive aspect of humans -- and of animal species down the line -- is part and parcel of being animals that feel threats viscerally, feel arousal, feel hunger, thirst, etc. It's biological. A.I. isn't biological; it is plastic, silicon, and semiconductors.

Emotions touch everything that humans do cognitively -- but set that aside. Presumably what A.I. would be is the cognitive aspect of humans -- thinking, remembering, calculating, associating, etc. Why would you (and how would you) give A.I. "emotion"? Emotion is characteristic of 'wet systems'; A.I. is dry. No blood, no mucus, no neurotransmitters, no lymph, no moisture. Just circuits and current.

    A.I. is, as the term states, "artificial".

    It has been suggested that the future of this "artificial intelligence" is not to duplicate the mind, but to enhance the performance of specific human intelligent activities. In a sense, various pieces of software do that now -- correcting spelling (and inserting totally irrelevant words), translating text (at a level that is much less than fluent but a lot better than nothing), finding routes, storing oceans of information on the Internet, etc. A.I. might take the form of a chip that could supplement a failing retina, or perhaps supply vision where eyeballs are missing altogether. Cochlear implants do that after a fashion for deafness. Etc. Etc. Etc.
  • Shawn
    13.3k


Yes, I know how this argument goes. But, if it helps, you can think of it as a kind of brain-in-a-vat scenario, except that this "brain" could self-optimize, or be inorganic, or be realized in silicon.
  • TheMadFool
    13.8k
Well, we have many theories of ethics, right? Three come to mind: Utilitarianism, Deontology, and Virtue Ethics. All are reasoned positions with specific postulates. Utilitarianism and Deontology are computable, but I think Virtue Ethics is too vague for computer programming.

Each theory, by itself, is incomplete. We have the trolley problem as regards Utilitarianism and the axe-murderer problem in Deontology; Virtue Ethics doesn't have a computable definition of virtue.

What I suggest is uploading all the ethical theories into the computer (your perfect replication of a human) and letting it decide on a case-by-case basis, just as we do.
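Something like the toy sketch below is roughly what I mean. It's only an illustration, assuming made-up scoring functions for the two theories I called computable and an arbitrary weighting between them; none of this is an established framework.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Action:
    name: str
    harms: int           # people harmed if the action is taken
    benefits: int        # people helped if the action is taken
    violates_duty: bool  # breaks a rule such as "do not kill"

# Hypothetical scorers for two theories. Virtue Ethics is left out,
# since it lacks a computable definition of virtue.
def utilitarian_score(a: Action) -> float:
    return float(a.benefits - a.harms)

def deontological_score(a: Action) -> float:
    return -100.0 if a.violates_duty else 0.0

THEORIES: Dict[str, Callable[[Action], float]] = {
    "utilitarianism": utilitarian_score,
    "deontology": deontological_score,
}

def decide(actions: List[Action], weights: Dict[str, float]) -> Action:
    """Pick the action with the best weighted score across all loaded theories."""
    def total(a: Action) -> float:
        return sum(weights[name] * score(a) for name, score in THEORIES.items())
    return max(actions, key=total)

# A trolley-style case: the chosen weights decide which theory dominates.
options = [
    Action("pull the lever", harms=1, benefits=5, violates_duty=True),
    Action("do nothing", harms=5, benefits=1, violates_duty=False),
]
print(decide(options, {"utilitarianism": 1.0, "deontology": 0.5}).name)
```

Of course, picking the weights (and deciding which features of an action matter) is where all the hard philosophy still lives, which is why the case-by-case judgment can't simply be automated away.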
  • ChrisH
    223
Seems to me that if you wanted to build an AI which was emotionally stable, morally consistent, and receptive to external guidance, then a precise simulation of a human brain probably wouldn't be the way to go.
  • Akanthinos
    1k


It always baffles me to realize how wide-ranging the trope of emotional exceptionalism is in our modern societies. Emotions are really at the low-level end of the cognitive spectrum. They are, in functional terms, nothing more than somewhat preset, ambiguous somatic cues, and very much lacking in sophistication. In general, they tend to point away from what we consider acceptable solutions to the situations that led to them. Anger is pretty much your body getting ready to cave a face in. Jealousy blinds you to your own hysterical need to possess the other. That is why emotional intelligence is much more important than emotivity.

But yes, an AI should and most likely will have some form of system equivalent to emotion. Since it won't be hardwired, it will likely be much more sophisticated than mammalian intelligence. You'll be able to program specific emotions tied to very specific or very general cases. But in structural terms, it will probably look like a belief statement held by the AI, just one it is not allowed to edit.
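As a rough illustration of that structural point, here is a toy sketch assuming a made-up belief store in which affective entries are simply installed as read-only; the names are invented for the example, not any real system.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass(frozen=True)
class Belief:
    content: str
    editable: bool  # affective "beliefs" get installed with editable=False

class BeliefStore:
    def __init__(self) -> None:
        self._beliefs: Dict[str, Belief] = {}

    def install(self, key: str, content: str, editable: bool = True) -> None:
        self._beliefs[key] = Belief(content, editable)

    def revise(self, key: str, content: str) -> None:
        old = self._beliefs[key]
        if not old.editable:
            raise PermissionError(f"'{key}' is hardwired and cannot be revised")
        self._beliefs[key] = Belief(content, old.editable)

store = BeliefStore()
store.install("aversion_to_harm", "harming people is to be avoided", editable=False)
store.install("favourite_colour", "blue")
store.revise("favourite_colour", "green")            # fine
# store.revise("aversion_to_harm", "harm is fine")   # raises PermissionError
```

The point is only structural: the "emotion" sits in the same store as ordinary beliefs, but the revision operation refuses to touch it.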
  • Noble Dust
    8k
"I have no idea how one could prevent said AI from playing host to all manner of negative human emotions, and instead have it decide of its own accord which emotions to express or feel." (Posty McPostface)

Me neither. That makes me very cautious about any sort of AI that is claimed to improve the human condition. AI is just yet more neutral tech with which flawed humans do flawed things. The tech is neutral; the application can be anything: good, bad, ugly. To me, the most important question is whether or not an actual AI-dominated world is imminent.