• Ellie For
    3
    There has recently been a huge increase in the number of chatbots marketed to provide emotional support. Woebot (a therapy chatbot originally delivered through Facebook Messenger), Wysa, and Replika are all designed to support the user through stress, anxiety, and depression.

    The majority of recent studies find that chatbots expressing empathy and sympathy have major positive effects on both liking and trust, as well as on perceived care and support by the user. Chatbots are therefore designed with this in mind. Replika is marketed as 'The AI Companion Who Cares'.

    Is it ethical to build relationships with users based on this illusion of care?

    If the user perceives the chatbot as caring for them and finds the product an effective means of help, does the lack of genuine emotion matter? Is it just the relevant behaviour (expressing empathy) that matters, or is it the source that drives this behaviour (the feeling of empathy) that matters when considering the ethics of these products?

    I'd love to hear your thoughts on this matter & any relevant philosophy texts which could be of help working on this question.
  • Shawn
    13.2k
    "I'd love to hear your thoughts on this matter & any relevant philosophy texts which could be of help working on this question." (Ellie For)

    Does research show it's comparable in efficacy to normal human face-to-face dialogue? Curious...
  • Monitor
    227
    "If the user perceives the chatbot as caring for them and finds the product an effective means of help, does the lack of genuine emotion matter? Is it just the relevant behaviour (expressing empathy) that matters, or is it the source that drives this behaviour (the feeling of empathy) that matters when considering the ethics of these products?" (Ellie For)

    Not to be cynical, but do we really know when someone genuinely cares about our troubles? Certainly there can be many motives and many filters. At some level don't we project the meaning and relief upon the helpful empathy that we believe we are receiving?
  • fdrake
    6.5k
    It doesn't seem very human unfortunately, nor particularly therapeutic. It's quite good at providing warm sounding stock responses for short statements of feeling. If it's given a load of text it focusses on very small parts of it. It's less a therapy bot, more an empathy Turing machine.

    I think it's missing all the warmth and context processing capability where it counts.
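
    To make concrete the kind of mechanism I have in mind, here is a minimal, purely illustrative Python sketch of how "warm sounding stock responses" can be generated by nothing more than keyword matching. It is an assumption for the sake of example, not the actual implementation of Replika, Woebot, or Wysa.

```python
# Illustrative sketch only: NOT how Replika, Woebot, or Wysa actually work.
# It shows how warm-sounding stock responses can be produced by matching a
# single feeling-word, with no understanding of the rest of the message.

import random

STOCK_RESPONSES = {
    "anxious": [
        "That sounds really hard. What do you think is behind the anxiety?",
        "I'm sorry you're feeling anxious. I'm here with you.",
    ],
    "sad": [
        "I'm sorry to hear that. Would you like to tell me more?",
        "That sounds painful. Your feelings are valid.",
    ],
    "angry": [
        "It makes sense that you'd feel that way.",
        "That sounds frustrating. What happened?",
    ],
}

FALLBACK = ["Tell me more about that.", "How does that make you feel?"]


def reply(message: str) -> str:
    """Return a canned response keyed on the first feeling-word found."""
    lowered = message.lower()
    for keyword, responses in STOCK_RESPONSES.items():
        if keyword in lowered:  # matches one word, ignores everything else
            return random.choice(responses)
    return random.choice(FALLBACK)  # warm-sounding but content-free default


if __name__ == "__main__":
    print(reply("I've been feeling anxious about work, my family, everything"))
```

    Run on a long, tangled message, something like this returns a sympathetic canned line after matching only the single word "anxious"; that is roughly what I mean by an empathy Turing machine.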
  • Ellie For
    3
    It really depends on the form of emotional support the person needs. Although Natural Language Processing has improved, chatbots are not capable of having anywhere near as deep a conversation as humans are (& likely never will be able to).

    Empirical research on the efficacy of mental health chatbots is still very thin. There are a few studies which show they can be an effective means of support, but only in comparison with no support or with other forms of computerised support.

    Having said that, people can feel more comfortable talking to technology, perceiving there to be a lack of judgement. This could apply to groups who have increased stigma placed upon them & find it hard to confide in others about their emotional state. Woebot claim that they receive feedback from people that "talking to Woebot makes it easier to talk to their partner or a therapist".

    If it's just providing a safe space to talk & reflect on your feelings then I think they can be effective to some degree.
  • Ellie For
    3
    I completely agree.

    Do you think there's a difference in that at least humans are capable of feeling emotion, whereas the technology has been programmed to 'fake' emotion one hundred percent of the time it expresses it?
  • Monitor
    227
    Humans have a tendency to go for the small but certain pleasures. Very reliable. Would we become more inclined to listen to the always-reassuring chatbot message than to a human message with tough love or the possibility of reproach?
  • petrichor
    321
    "It's quite good at providing warm sounding stock responses for short statements of feeling." (fdrake)

    Sounds pretty human to me!

    I'd say that true care from real humans, where a person is really concerned with another person's welfare for that person's sake alone, is actually pretty rare.

    It is important for us to realize for ourselves that we mostly like others for what we get from them, even if it is a feeling that we are good people. We should try hard to go beyond that and to actually consider things from the other's point of view. We should try hard to regard them as a thou, maybe even in Buber's sense of I and Thou, where there is a true intersubjective encounter, not a regarding of the other as object, and not a regarding of the other as a means to further our own interests, even if those interests be such things as our own high moral character.

    Charity is often practiced in this way, where the "givers" mostly just want to feel good about themselves, where they actually need the needy, in an almost vampiric fashion. It puts you higher. You see yourself favorably. It probably even goes back to the childhood need to be called a "good boy" or "good girl" by parental figures. Some are seeking God's approval (which can also be seen as a projection of the need for the parent: God the Father, Mother Mary, The Divine Mother, etc.).



    And what about real counselors, prostitutes, doctors, and so on, who are paid to listen and to at least pretend to care?


    "talking to Woebot makes it easier to talk to their partner or a therapist”.Ellie For

    Practicing probably helps.

    Also, it seems that we are easily fooled. Just paint a happy face on a computer and people start to think it has feelings. Make it move and change facial expressions in a way that mimics humans and people even grant it citizenship:

    Sophia

    What a charade!

    Lots of people, especially in Asia, form relationships with inanimate objects, even marrying pillows with anime characters on them. In that part of the world, people seem much more ready to adopt robot pets and the like.

    I am reminded of those old experiments with the monkeys and the wire versus cloth mothers:

    [Image: Harlow's wire and cloth surrogate mother experiment]

    Increasingly, this is us. Very sad.

    But humans have long been using artificial substitutes for real love, for Mom, friends, romantic partners (Female romantic partners themselves, for men, are often substitutes for Mom. My dad even called my mom "Mom".), and so on. Drugs. Money. Pillows (especially those that have comforting platitudes written on them). Porn. Food. Television. Novels. Music. Even our beds, interestingly, in my opinion, are not just serving to keep us warm and free from pain. They are womb-like, hug-like. They surround us in a way that is suspiciously like Mom, the pillow like her breast. And we often sleep in a "fetal" position, just inches from sucking our thumbs.


    And probably, as people withdraw more and more from the real world and move into digital spaces, more and more products will start to emerge that, without obviously being intended for the purpose, actually act as substitutes for real connection.


    Is it unethical to give people artificial care instead of real human care? Yes. It is inevitable, though. The elderly are soon going to be cared for by robots. I am not sure, however, that this is much worse than the "human" care found in nursing homes today. Those places exist primarily to loot the life savings of old people. When my father was in one, the staff stole his MP3 player, neglected him, and even caused him injury. He died after three days in there.

    My sister works as an activities director in a nursing home and she is very cynical about that whole scene. It is all about corporate profits. Nothing else matters. They sell a certain appearance to the family of the poor elderly person and, I suppose, to the elderly person. But most of the staffers directly involved are burned out, underpaid, treated like crap, and they in turn often don't treat the residents well. Their patience for dementia and whatnot has run out. But old people often sit on a pile of money at life's end (or a nice policy), and somebody exists to work this angle.

    Video showing robots in Japanese nursing homes
  • Shawn
    13.2k


    I understand your cynicism, but I think you're neglecting femininity in your analysis.
  • NOS4A2
    9.1k


    The thing about social media is how anti-social it is, and this, to me, is more evidence of that. In every case of social media use, the person is interacting with a computer, not human beings. I think such therapy would only exacerbate the problem, reinforcing a dependency on machine rather than human social relations and only widening the divide between us.
  • Pfhorrest
    4.6k
    Ideally, people would not have the emotional holes that need filling by other people's caring and esteem, but would be healthy and capable of self-care and self-esteem and that would be sufficient.

    The world is not ideal, though; people are not ideal. Most of us are traumatized by life to a greater or lesser degree, and we need something else to prop us up emotionally.

    It's great that other people are able to help us with that. But if a machine can achieve that just as effectively, then I see nothing wrong with it doing so. If a song or a book or a TV show can be a legitimate source of such comfort, there's no reason that an interactive form of media like a chatbot couldn't be too.

    However, I'm very suspicious of any claims that the tech is there yet, at least for someone like me. If I could tell that a real person (therapist, etc) was giving me obviously superficial shallow platitudes in response to my relation of complex emotional or practical problems, I wouldn't feel very listened-to and understood and wouldn't get much comfort out of that.

    But maybe there are some people whose comprehension of their own life and mind is simple enough that those simple platitudes do actually suffice, and if so more power to them for getting such comfort out of such devices. There's nothing ethically wrong with it that I can see; at worst, it's merely ineffectual, and so morally neutral.
  • fdrake
    6.5k
    "Sounds pretty human to me!" (petrichor)

    That it seems human at all sounds tragic to me. That people need to turn to it to find support is even worse. But:

    "We should try hard to regard them as a thou, maybe even in Buber's sense of I and Thou, where there is a true intersubjective encounter, not a regarding of the other as object, and not a regarding of the other as a means to further our own interests, even if those interests be such things as our own high moral character." (petrichor)

    It's still an algorithm. Being able to make complicated value judgements about the style of engagement it offers is not in the cards for it yet. I do think it's on something like the right track: it asks questions based on bits of information; but give it something complicated and it falters on what's relevant and what's not. In other words, it's not an expert at asking the kind of questions that guide mental growth and robustness when the input data is complex.

    It's hard to make text processors learn which bits of text are relevant to which other bits; they struggle to tell stories that make sense for more than three or four sentences in a row, even given huge input data (on the order of a substantial chunk of a social media site), let alone the scant and scattered offerings of a mind eating itself.
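
    As a toy illustration of that relevance problem (again an assumption for the sake of example, not any product's real method): a crude text processor might score each sentence of a long message against a short list of feeling-words and respond only to the single highest-scoring sentence, discarding the rest of the context.

```python
# Toy illustration of crude "relevance" selection: given a long message,
# keep only the one sentence containing the most feeling-words and ignore
# everything else. Not any product's real method.

import re

FEELING_WORDS = {"sad", "anxious", "lonely", "angry", "tired", "hopeless"}


def most_salient_sentence(message: str) -> str:
    """Return the sentence with the most feeling-words; context is discarded."""
    sentences = re.split(r"(?<=[.!?])\s+", message.strip())
    scores = [
        sum(word in FEELING_WORDS for word in re.findall(r"\w+", s.lower()))
        for s in sentences
    ]
    return sentences[scores.index(max(scores))]


if __name__ == "__main__":
    long_message = (
        "Work has been piling up for months and my manager keeps moving deadlines. "
        "My sister stopped talking to me after the funeral. "
        "I feel anxious and lonely most evenings. "
        "I don't even enjoy the things I used to."
    )
    # Only one sentence survives; the rest of the story is dropped.
    print(most_salient_sentence(long_message))
```

    Everything else the person said, the deadlines, the sister, the lost enjoyment, simply vanishes from the exchange; that's the faltering on relevance I mean.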
  • deletedmemberMD
    588
    Fascinating! I have so many questions I don't know where to begin!

    I suppose my first question is this: if these programs are legitimately providing effective emotional support, does that mean Effective Empathy is skill based not emotion based? Cognitive perspective-taking empathy is skill and experience based. Deep emotional empathy is relative, but it is usually a human motivator for using cognitive empathy.
  • ZhouBoTong
    837
    "does that mean Effective Empathy is skill based not emotion based?" (Mark Dennis)

    Someone may have a more educated answer, but based on personal experience, I would answer yes (or at least somewhat).

    Many people in my life have called me unemotional, almost robotic. I am quick to point out that I have all the same emotions; they are just LESS. However, I do struggle to empathize. I understand intellectually that others are in pain, but I do not feel their pain. I have learned that when someone complains of a problem in their life, they typically are NOT looking for a solution but, instead, just want a hug and some sympathy. Unfortunately, I can't really do unsolicited human contact, so I can't even imitate that one :grimace: However, I have found other areas where imitation of empathy is good enough.

    For example, when someone asks if I am proud of them, they are not looking for an explanation of my personality and why I hardly do the whole pride thing (whether in myself or others). I have been shocked to learn that if I just answer "yes", they are happy and move on with the conversation. I figured there was no way it would work coming from me, because it was not genuine. It was not emotional. It was simply me giving the response that they wanted...but it works. I would think the chatbot would be at least as effective at imitating empathy as I am :smile:

    "Not to be cynical, but do we really know when someone genuinely cares about our troubles? Certainly there can be many motives and many filters. At some level don't we project the meaning and relief upon the helpful empathy that we believe we are receiving?" (Monitor)

    Haha, my personal experiences above certainly suggest that you are dead on with this stuff :up:
  • Monitor
    227
    "I would think the chatbot would be at least as effective at imitating empathy as I am :smile:" (ZhouBoTong)

    And yet, we are social animals. We do not flourish in solitude. We need human contact like we need food. Perhaps the chatbot provides an easy candy (false food) when we need immediate gratification.
  • JoeStamos
    1
    When we look to another for advice, we make sure that the person is reputable. We assign values and hierarchies to people. The counselling from someone I value an average amount will not be as valuable as the counsel I get from someone I value greatly. Similar to that episode from Seinfeld where George's mom receives advice from an 'Asian' lady on the phone which she considers to be gospel. Yet once she learns that the 'Asian' lady is actually 'Caucasian', that advice is rendered meaningless. This would be value maybe acquired through stereotypes.
    The experience we get from the bots may be valuable, but only as long as we believe it to be, kind of like a placebo. The advances in artificial intelligence are staggering, especially with deep learning. I'm certain that we'll be able to form some software entity capable of providing deep and meaningful conversations. And you must also accept that this philosophical point we all have is based on this current point in time. Imagine being born in a time where everyone uses Google Home and Alexa, a time where artificial intelligence is the norm. In such a case, receiving counselling, advice, comforting, etc. from a bot would be akin to a session with a therapist.
  • ZhouBoTong
    837
    "And yet, we are social animals. We do not flourish in solitude. We need human contact like we need food." (Monitor)

    I do not deny that at all. But I do not necessarily need empathy from others (or at least that is what the analytical half of my brain is always trying to tell me).

    "Perhaps the chatbot provides an easy candy (false food) when we need immediate gratification." (Monitor)

    That is exactly my thought. Along with the idea that MOST of the time, when it comes to empathy, all we really need is the immediate gratification. When someone complains about having a bad day, they DO NOT want me to analyze what has happened in their day and start giving them suggestions on how to change their behavior so they never have another bad day (believe it or not, this is my natural reaction to someone saying they are having a bad day - I have worked for years on changing it so I seem much more normal). No, they just want a little sympathy/empathy - immediate gratification.
  • ZhouBoTong
    837
    "Similar to that episode from Seinfeld where George's mom receives advice from an 'Asian' lady on the phone which she considers to be gospel. Yet once she learns that the 'Asian' lady is actually 'Caucasian', that advice is rendered meaningless. This would be value maybe acquired through stereotypes." (JoeStamos)

    I think I mostly agree with your post, and using an example from Seinfeld to make your point means you are golden in my book.

    Other people here have more sophisticated intellects that may have a higher standard...but nice post #1 :up: