The Robot Who was Afraid of the Dark: Phobias

If we could program the computer to learn and adapt to information from its environment, evoking a fear response where that increases its probability of survival, then the only way a fear response in a robot could become a phobia is if it were a fault. Is that your position?
Yes, and that goes for all feelings. The robot may, through a faulty thought process, decide that it loved brick houses or hated cement even though it had no logical grounds.
Interesting. If the robot had previously learnt that eating pies was safe, but then saw a person die from eating a pie, is it fair for it to produce an extreme fear response when presented with a pie to eat, rather than weighting it at a 'be aware' or simple-avoidance level? You say not. You say the pairing of the pie with the fear response is unjustified [it is over-weighted] and therefore the code or wiring is faulty.
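To make the weighting question concrete, here is a minimal sketch of a graded threat scale in Python. Everything in it is hypothetical (the thresholds, the ThreatModel class, the update rule); the point is only that a proportionate update lands one ambiguous death at 'be aware', whereas copying the full severity into the weight after a single observation would be the over-weighting fault described above.

```python
# Hypothetical sketch of a graded fear-weighting scale.
# Thresholds and update rule are illustrative, not a real robot API.

AWARE, AVOID, PHOBIA = 0.3, 0.6, 0.9   # bands on a 0..1 threat weight

class ThreatModel:
    def __init__(self):
        self.weights = {}   # stimulus -> learned threat weight

    def observe(self, stimulus, severity, evidence):
        # Blend new evidence into the stored weight. A proportionate
        # update moves the weight gradually; writing `severity`
        # straight into the weight after one observation would be the
        # over-weighting fault discussed above.
        old = self.weights.get(stimulus, 0.0)
        alpha = min(max(evidence, 0.0), 1.0)   # trust in this one event
        self.weights[stimulus] = old + alpha * (severity - old)

    def response(self, stimulus):
        w = self.weights.get(stimulus, 0.0)
        if w >= PHOBIA:
            return "extreme fear"
        if w >= AVOID:
            return "avoid"
        if w >= AWARE:
            return "be aware"
        return "neutral"

model = ThreatModel()
model.observe("pie", severity=1.0, evidence=0.4)   # one ambiguous death
print(model.response("pie"))   # -> "be aware": weighted, but not phobic
```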
What I'm saying is that the robot should be able, or even prone, to be illogical. The robot may develop a pie phobia or maybe it won't; it all depends on whether the robot judges the situation correctly. Without the chance to misjudge or calculate situations badly, it would remain static, never change its personality, and always reply with the same answers. It would be as aware as a chatbot.
Perhaps if the robot did not understand how the person died (maybe the pie killed them), then the weighting is justified: a pie might kill it too. But if you then explained to the robot why the person actually died, should the robot reduce the weighting on the fear response, especially if it too could suffer the same fate? Is that the reasonable thing to do? Can we do it?
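Continuing the hypothetical ThreatModel sketch above, 'reduce the weighting after an explanation' could be cashed out as counter-evidence discounted by how much the robot trusts the speaker. The explain function and the trust parameter are made up for illustration:

```python
def explain(model, stimulus, trust):
    # Hypothetical: discount a learned threat weight when the cause of
    # the scary event is explained away. `trust` in [0, 1] is how much
    # the robot believes the explanation; as the reply below notes,
    # that is the robot's own judgment call.
    old = model.weights.get(stimulus, 0.0)
    model.weights[stimulus] = old * (1.0 - trust)

explain(model, "pie", trust=0.8)   # "he choked; the pie itself was fine"
print(model.response("pie"))       # weight drops 0.4 -> 0.08: "neutral"
```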
It's up to the robot to judge whether we are speaking the truth and whether it should listen. That may not be the reasonable thing to do, and that is the point.
What if reason and fear are not directly wired together? Do you wish to give the robot total control over all of its internal responses, rather than grouping them into subroutines with 'push me now' buttons on them? If everything went through reason, the robot would spend all day just trying to focus on walking across the living room.
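One toy reading of those 'push me now' buttons, assuming nothing about any real robot framework: a fast reflex layer whose subroutines fire on trigger conditions without deliberation, with the slow logical loop only as a fallback. All names below are invented.

```python
# Hypothetical two-layer control sketch: fast reflex subroutines with
# trigger conditions, and a slow deliberative path only as a fallback.

def flinch(percept):
    return "flinch away from " + percept["stimulus"]

def freeze(percept):
    return "freeze"

REFLEXES = [
    # (trigger condition, canned subroutine)
    (lambda p: p["threat"] >= 0.9, flinch),
    (lambda p: p["dark"] and p["threat"] >= 0.5, freeze),
]

def act(percept, deliberate):
    # Reflexes fire without reasoning (the 'push me now' buttons),
    # so the deliberative loop never has to micromanage walking
    # across the living room.
    for trigger, subroutine in REFLEXES:
        if trigger(percept):
            return subroutine(percept)
    return deliberate(percept)   # slow, logical path

percept = {"stimulus": "pie", "threat": 0.95, "dark": False}
print(act(percept, deliberate=lambda p: "keep walking"))   # reflex fires
```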
That is the problem. If the robot were totally logical, it would have total control over its feelings, and that is obviously not how real creatures work. And if reason is not connected with feelings, the feelings still have to be controlled by some programmed logical system. My solution is giving the robot a mostly logical thought process that sometimes misjudges and acts illogically. That way we get a bit of both.
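A 'mostly logical thought process that sometimes misjudges' could be as small as a noisy appraisal function: the logical estimate usually comes back unchanged, but occasionally gets perturbed, which is where idiosyncratic likes, hates, and phobias could take root. All the parameters below are invented for illustration.

```python
import random

def appraise(logical_estimate, error_rate=0.05, max_error=0.5):
    # Mostly logical judgment with a small chance of misjudging: with
    # probability `error_rate` the estimate is perturbed by up to
    # `max_error`, then clamped to [0, 1]. The rare bad judgment is
    # what lets the robot drift away from chatbot-like fixed answers,
    # sometimes into a pie phobia.
    if random.random() < error_rate:
        logical_estimate += random.uniform(-max_error, max_error)
    return min(max(logical_estimate, 0.0), 1.0)

# Most appraisals of the pie come back at the logical value; a rare
# misjudgment can push one into avoidance or even phobia territory.
print([round(appraise(0.4), 2) for _ in range(10)])
```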