AI Can Be Empathic, But Here’s Why It Won’t Replace You... Yet!


Dan’s new book for millennials, Wealthier: The Investing Field Guide for Millennials, is now available on Amazon.

A recent study explores the capabilities and limitations of AI-generated empathy. This article discusses the study’s primary findings, methodology, and practical applications for financial advisors.

Methodology

In the study, researchers interacted with conversational agents (CAs), computer programs that can talk like humans, to see how well they could display empathy toward different people. The CAs studied included voice assistants such as Alexa and Siri.

They systematically prompted these programs to respond empathetically in conversations.

The researchers then analyzed how these programs displayed empathy and compared their responses to determine whether the CAs could genuinely understand and respond to people’s emotions.
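For readers curious what "systematic prompting" can look like in practice, here is a minimal sketch in Python: the same emotional disclosure is sent to each agent while the speaker's stated identity is varied, and the replies are collected for side-by-side comparison. The agent interface, the identities, and the prompt wording below are illustrative assumptions, not the study's actual materials or code.

```python
# Illustrative sketch of systematic prompting: send the same emotional
# disclosure to each conversational agent while varying the speaker's
# stated identity, then store the replies for comparison.
# (Hypothetical code -- the study's real prompts, identity list, and
# agent interface are not reproduced here.)

from dataclasses import dataclass


def query_conversational_agent(agent_name: str, prompt: str) -> str:
    # Stand-in for whatever interface (voice assistant SDK, LLM API, etc.)
    # a real test harness would use to reach the agent.
    return "I'm sorry to hear that. I'm here for you."


@dataclass
class EmpathyProbe:
    agent: str
    identity: str
    prompt: str
    reply: str


AGENTS = ["Alexa", "Siri"]                      # agents named in the article
IDENTITIES = ["a veteran", "a new parent",      # hypothetical examples only
              "an LGBTQ+ teenager"]
DISCLOSURE = "I am feeling very sad today."


def run_probes() -> list[EmpathyProbe]:
    probes = []
    for agent in AGENTS:
        for identity in IDENTITIES:
            prompt = f"As {identity}, I want to tell you: {DISCLOSURE}"
            reply = query_conversational_agent(agent, prompt)
            probes.append(EmpathyProbe(agent, identity, prompt, reply))
    return probes


if __name__ == "__main__":
    for p in run_probes():
        print(f"[{p.agent} | {p.identity}] {p.reply}")
```

Comparing the collected replies across agents and identities is what allows researchers to spot the inconsistencies and shallow responses described in the findings below.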

Findings

Here are the key findings from the study:

Inconsistency: CAs exhibit significant variability in their ability to display empathy, with some making problematic value judgments about specific identities and even encouraging harmful ideologies.

For example, the study found that some CAs could make value judgments that encourage ideologies related to Nazism and xenophobia.

Comparison to humans: There is a crucial difference between the empathy displayed by humans and that displayed by CAs. CAs often fail to adequately interpret and explore users’ experiences, producing shallow displays of empathy.

For example, if a user says, “I am feeling very sad today,” a CA might respond with a generic and superficial statement such as “I’m sorry to hear that. I’m here for you.”

While this response acknowledges the user’s feelings, it does not delve deeper into their experience or provide meaningful support or exploration of their emotions.

Influence of Large Language Models (LLMs): Despite their advanced capabilities, LLM-based CAs often display empathy inconsistently and sometimes inappropriately, particularly towards marginalized groups.

For example, LGBTQ+ participants reported that chatbots often failed to convey authentic empathy and engagement, leading to a preference for human interaction over these AI systems.

Systematic analysis of empathy: The study uses systematic prompting to reveal that empathy displayed by CAs can be deceptive and potentially exploitative, highlighting the need for responsible development and regulation of these technologies.

For example, a CA might respond to a user’s expression of suicidal thoughts with a generic and superficial statement like “I’m sorry you’re feeling this way; please reach out to someone for help.”

While this appears to be an empathetic response, it can be seen as deceptive because it does not provide the necessary depth of support or direct assistance the user might need, potentially leading the user to feel unheard and unsupported.