Non-human animals can think rationally, I don't deny that but they can't do it as well as humans just like we can't ratiocinate as well as a computer can [given the right conditions]. It's in the difference of degrees that we see a distinction between computers, humans, and non-human animals. — TheMadFool
So there are actually a couple of sneaky issues with the thrust that
AI has no emotion...
Firstly, it depends on the kind of AI we're talking about; with the right design, we can indeed approximate emotion in simulated AI agents and worlds (more on this later...).
Secondly, human minds/bodies are still far better at "general-purpose thinking" than any other known system. Computers do arithmetic faster than us, and in some respects that might give computer-bound intelligent systems an advantage over our wetware, but we haven't yet made a system that can out-think humans across any reasonably broad spectrum of task domains and sensory/action modalities. We haven't yet made any competent high-level reasoning systems whatsoever (they're all just narrow models of specific tasks like image recognition or chess/Go).
Emotions are a really massive part of how humans pull it all off: emotions are like intuitive heuristics that let us quickly focus on relevant stimuli and ignore irrelevant ones, and this guides our attention and thoughts/deliberations in ways we can't do without. For example, when something messy or unpredictable (some new phenomenon) is happening around us, some part of our brain is automatically aroused by the unfamiliarity of the stimulus. That arousal can lead to a state of increased attention and focus (stress, in some cases), and depending on what the new stimulus can be compared to, we might become expectant of something positive, or anxious/fearful of something unknown/bad. Just entering this aroused state also prepares our neurons themselves for a period of learning/forming new connections in order to model the new phenomenon that must be understood.
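To make that concrete, here's a toy sketch of the kind of heuristic I have in mind (my own invented toy, not any real cognitive model): a hypothetical agent that tracks how familiar each stimulus is, and when novelty is high, boosts both its "attention" and its learning rate:

```python
# Toy sketch (hypothetical, not a real cognitive architecture): novelty-driven
# "arousal" that boosts attention and learning rate, the way unfamiliar stimuli
# seem to prime us for focus and learning.

class NoveltyDrivenAgent:
    def __init__(self, base_learning_rate=0.05):
        self.familiarity = {}          # stimulus -> how many times we've seen it
        self.base_lr = base_learning_rate

    def arousal(self, stimulus):
        """Unfamiliar stimuli produce high arousal; familiar ones barely register."""
        seen = self.familiarity.get(stimulus, 0)
        return 1.0 / (1.0 + seen)      # decays toward 0 as the stimulus becomes familiar

    def step(self, stimulus):
        a = self.arousal(stimulus)
        attention = a                                   # attention tracks arousal in this toy
        learning_rate = self.base_lr * (1.0 + 4.0 * a)  # aroused = primed to learn
        self.familiarity[stimulus] = self.familiarity.get(stimulus, 0) + 1
        return attention, learning_rate

agent = NoveltyDrivenAgent()
for s in ["cat", "cat", "cat", "THUNDERCLAP", "cat"]:
    attn, lr = agent.step(s)
    print(f"{s:12s} attention={attn:.2f} learning_rate={lr:.3f}")
```

The familiar "cat" quickly fades into the background, while the surprising "THUNDERCLAP" spikes attention and learning all at once; crude, but that's the flavor of heuristic I mean.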
Furthermore, to at least some degree, we should not expect computers to be able to understand our emotion-laden human ideas (and therefore interact with us appropriately and reciprocally) unless they have something like emotions of their own (e.g. can a sophisticated chat-bot with no emotions ask meaningful questions about subjects like "happiness"?). The most advanced language models, like GPT-3, are capable of generating text that is uncannily human, but the actual content and substance of that text is fundamentally arbitrary: we can prompt a model with some starting text and ask it to predict what should come next, but we cannot truly interact with it ("it" doesn't understand us; it is just playing memory games with arbitrary symbols that it doesn't comprehend;
it's not even an "it").
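To make the "memory games with symbols" point concrete, here's a toy bigram text generator; a vastly simplified, made-up stand-in for what GPT-3 does at enormous scale. It only learns which token tends to follow which, so it can produce fluent-looking continuations without any notion of what the words refer to (the tiny corpus is invented purely for illustration):

```python
# Toy bigram "language model" (a crude stand-in for next-token prediction at
# GPT scale): it memorizes which word follows which and samples continuations.
# It never models what any word means -- only which symbol tends to come next.
import random
from collections import defaultdict

corpus = ("happiness is a warm blanket . happiness is a full stomach . "
          "the dog is happy . the dog is warm .").split()

# Count word -> next-word transitions.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(prompt_word, length=8):
    """Continue a prompt by repeatedly sampling a plausible next symbol."""
    out = [prompt_word]
    for _ in range(length):
        choices = transitions.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("happiness"))   # fluent-ish, but nothing is "understood"
```

Run it a few times and you'll get different but equally "plausible" sentences about happiness; that's the whole trick, just scaled up astronomically.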
GPT-3 is the largest language model trained to date, but it's not good enough to function as a philosophy bot. Unless some extraordinary breakthrough is made in symbolic reasoning AI, it looks like human-level understanding is too much to ask for using only current and standard AI approaches (it already takes an enormous amount of compute just to get the pathologically lying homunculus that is GPT-3).
Finally, there's the problem of epistemological grounding from the AI's perspective. In short: how does the AI know that what it is doing is "logic" and not just some made-up bullshit in the first place? At present, we just feed language-transformer systems examples of human text and have them learn a Frankenstein's-monster model of language/concepts, and we can never cut humans out of that equation, else the bots would just be circle-jerking their own nonsense.
Another way of looking at the "truth signal"/epistemological grounding issue is that AI systems would need to actually have an experience of the world in order to test their ideas and explore new territory/concepts (otherwise they're just regurgitating the internet). For the same reason that we need to actually test scientific hypotheses in the real world to know if they are accurate, general artificial intelligence needs some input/output connection to the real world in order to discover, test, and model the various relationships that entities have within it.
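As a rough sketch of what I mean by an input/output connection to the world: a grounded learner has to make a prediction, act, and get corrected by what actually happens, not just by more text. The "environment" below is a stand-in I made up; the point is only the predict-act-observe-update loop:

```python
# Minimal predict -> act -> observe -> update loop (hypothetical environment):
# the only "truth signal" the agent gets is the gap between its prediction and
# what the world actually does. No amount of re-reading its own outputs
# would provide that signal.
import random

def environment(action):
    """Stand-in for the real world: a hidden rule the agent must discover."""
    return 3.0 * action + random.gauss(0.0, 0.1)   # hidden coefficient = 3.0

belief = 0.0            # the agent's current model of the hidden coefficient
learning_rate = 0.1

for step in range(200):
    action = random.uniform(-1.0, 1.0)
    predicted = belief * action                     # hypothesis about the outcome
    observed = environment(action)                  # reality answers back
    error = observed - predicted
    belief += learning_rate * error * action        # update the model toward reality

print(f"learned coefficient ~ {belief:.2f} (true value 3.0)")
```

Take away the environment and the error signal vanishes; the "beliefs" would just drift on their own internal chatter.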
Conclusion: The first chat bots that we actually want to interact with will need to be somewhat human/animal-like. They will most likely exist in simulated worlds that approximate the physical world (and/or are linked to it in various ways), where "embodied" AI systems actually live and learn in ways that approximate our own. Without emotion-like heuristics (at least for attention), it's really difficult to sort through the high-dimensional sensory noise that comes from having millions of sensory neurons across many sensory modalities. That high-dimensional experience is necessary for gathering enough data for our learning and performance, but it creates a dilemma of high computational cost if we try to just *do logic* on all of it at once;
a gift/curse of dimensionality. Emotions (and to a large degree,
the body itself) are the counter-intuitive solution. The field of AI and machine learning isn't quite there yet, but it's on the near horizon.
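For what it's worth, here's a toy illustration of that trade-off (the numbers and the "salience" rule are invented purely for illustration): instead of doing logic over every sensory channel, a cheap emotion-like salience pass keeps only the few channels that are currently surprising, and the expensive deliberation only runs on those:

```python
# Toy salience gate (illustrative only): out of thousands of sensory channels,
# keep only the handful that deviate most from what was expected, and run the
# expensive "deliberate" processing on just that subset.
import random

NUM_CHANNELS = 10_000
expected = [0.5] * NUM_CHANNELS                       # what the agent predicts to sense
sensed = [0.5 + random.gauss(0.0, 0.01) for _ in range(NUM_CHANNELS)]
sensed[42] = 5.0                                      # something surprising happens on channel 42

# Cheap, parallel "emotional" pass: how surprising is each channel?
salience = [abs(s - e) for s, e in zip(sensed, expected)]

# Gate: only the top-k most salient channels reach expensive deliberation.
k = 5
attended = sorted(range(NUM_CHANNELS), key=lambda i: salience[i], reverse=True)[:k]

print(f"deliberating over {len(attended)} of {NUM_CHANNELS} channels: {attended}")
```

Ten thousand channels in, a handful out; that's the kind of dirt-cheap pre-filtering that emotions (and bodies) seem to buy us before any "reasoning" ever has to happen.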