How companies are employing artificial empathy to connect with customers

“I think that I can show that robots are not only industrial or military machines made of cold metal. We can be warm, and gentle, and caring.”

- Erica, an autonomous humanoid robot created by Hiroshi Ishiguro in collaboration with Osaka University, Kyoto University and the Advanced Telecommunications Research Institute International

In 1995, the MIT computer scientist Rosalind Picard coined the term “Affective Computing” to describe computing that relates to, arises from, or influences emotions. “Affective Computing research combines engineering and computer science with psychology, cognitive science, neuroscience, sociology, education, psychophysiology, value-centered design, ethics, and more.” (Picard, MIT Press, 1997)

The products derived from the field include devices that can recognize, replicate, respond to, and manipulate human emotion and behavior.

Recognition

Advanced facial and voice recognition systems can identify the emotional and cognitive state of humans. For example, MIT spin-off Cogito provides real-time, in-call voice analytics that gauge a customer’s emotional state, helping call center agents defuse frustration, close a sale, and more.

In developing the technology, MIT researchers engineered sensors that track body and speech patterns during conversation. As Greg Nichols wrote in “Artificial empathy: Call center employees are using voice analytics to predict how you feel” (ZDNet, March 20, 2018), “The researchers were able to predict the outcome of interactions like job interviews to an extraordinarily high degree without actually listening to the words being spoken. There is, as behavioral scientists have long held, a rich layer of communication in every interaction that happens independently of language.”
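Systems of this kind typically work from paralinguistic signals such as pitch, energy and spectral shape rather than from the words themselves. The sketch below is a minimal, hypothetical illustration of that general idea, assuming librosa for feature extraction and scikit-learn for classification, with made-up file names and labels; it is not Cogito’s system.

```python
# Minimal sketch: predict a caller's emotional state from nonverbal voice features.
# All file names and labels below are placeholders for illustration only.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def prosodic_features(path):
    """Summarize a clip with a few paralinguistic features, ignoring the words."""
    y, sr = librosa.load(path, sr=16000)
    f0, _, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)  # pitch contour (NaN when unvoiced)
    rms = librosa.feature.rms(y=y)[0]                         # loudness over time
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # spectral shape
    return np.concatenate([
        [np.nanmean(f0), np.nanstd(f0)],   # pitch level and variability
        [rms.mean(), rms.std()],           # energy level and variability
        mfcc.mean(axis=1),                 # average spectral profile
    ])

# Train on labeled call snippets (hypothetical data).
X = np.array([prosodic_features(p) for p in ["call_01.wav", "call_02.wav"]])
y_labels = np.array([0, 1])                 # 0 = calm, 1 = frustrated
clf = LogisticRegression(max_iter=1000).fit(X, y_labels)

# At run time, an agent dashboard could surface the predicted state for a live snippet.
print(clf.predict_proba([prosodic_features("live_snippet.wav")]))
```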

In Europe, the Emotion Research Lab, headquartered in Valencia, Spain, develops tools for business that use AI and facial recognition technology to capture demographic data like sex and age; basic emotions such as happiness, anger, disgust, sadness, surprise and fear; emotional metrics including activation, engagement, satisfaction and experience; and the emotional impact of real environments through basic and secondary emotions.

“A simple built-in camera can analyze crowds and their spontaneous behaviors in any social context.”

EmotionResearchLab.com (accessed April 30, 2019)
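In broad strokes, camera-based emotion analytics pairs face detection with a per-face emotion classifier whose results are aggregated over a crowd. The sketch below is a hypothetical illustration of that pipeline, using OpenCV for face detection and a stand-in emotion_model; it is not Emotion Research Lab’s product.

```python
# Hypothetical sketch: count predicted emotions for every face visible in one camera frame.
import cv2
from collections import Counter

EMOTIONS = ["happiness", "anger", "disgust", "sadness", "surprise", "fear", "neutral"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def crowd_emotions(frame, emotion_model):
    """Return a tally of predicted emotions for every detected face in one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    counts = Counter()
    for (x, y, w, h) in faces:
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # typical classifier input size
        counts[EMOTIONS[emotion_model(crop)]] += 1           # assumed: model returns a class index
    return counts

# Example: read one frame from a built-in camera and tally emotions with a dummy model.
capture = cv2.VideoCapture(0)
ok, frame = capture.read()
if ok:
    print(crowd_emotions(frame, emotion_model=lambda crop: 0))  # dummy stand-in model
capture.release()
```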

Need a heads up about a frustrated caller? Technology can do that. Need to know which employees are disengaged? Technology can do that, too. Want to know which parts of your campaign speech generate the most arousal from the crowd? No problem.

Facial and voice recognition, eye-tracking and heat maps, and sentiment analysis are already used in marketing and politics. The next advance: machines that appear more human.

Replication

The business case for developing systems that recognize and replicate human emotion is often this: in order for machines to better work with humans, they need to become more like humans.

Research from global consulting firm Capgemini published in 2017 shows that emotions have the strongest impact on brand loyalty. “We also found that the ability to connect on a human level is what drives long-term customer engagement and, ultimately, loyalty.” As customers gravitate to digital interfaces, brands are using emotion-enabled digital agents to create emotional connections with customers.

Google Empathy Lab founder Danielle Krettek spoke about how users connect with Tia, a digital health expert, during a “Design Is” lecture series from Google Design. “Tia is a digital health expert. 80% of heavier users of the service use the heart emoji to communicate with the chatbot. Subscribers are communicating with Tia the same way they communicate with a friend. Those subscribers showed three times the amount of loyalty because they could be expressed and heard and seen in that way.”

How can machines present a more human-like interaction? The lifelike android Erica was developed in Japan by Hiroshi Ishiguro, a professor at Osaka University’s Intelligent Robotics Laboratory, in collaboration with Kyoto University and the Advanced Telecommunications Research Institute International. Erica can express connection through her gaze and emotion through her sophisticated synthesized voice, subtle facial expressions, and physical behaviors.

According to the website of the Laboratory for Animate Technologies in Auckland, New Zealand, “We are developing the technology to simulate faces both inside and out. We simulate how faces move and how they look, and even their underlying anatomic structure.”

Customer-service chatbots introduce themselves with their first names, and communicate in a professional but friendly vernacular. They understand our frustration. They present as human.

Beyond mirroring human emotion, machines can also decipher a person’s emotional state and produce an appropriate emotional response.

Reaction

An exciting development in emotionally responsive machines comes from New Zealand-based company Soul Machines. The company is working with IBM’s Watson to create lifelike, computer-generated customer service agents with realistic voices and facial expressions that can give emotionally appropriate responses to human customers.

According to the IBM case study, “Bringing a human face to customer-facing AI with IBM Watson,” July 30, 2018, “As a customer speaks to an artificial human, Soul Machines sends the audio stream of the customer’s voice to the Watson Assistant API. Watson converts the audio into text, then searches the company’s corpus of knowledge for relevant answers to the customer’s question, ranks the results, and returns the top-ranked answer to the Soul Machines solution. Meanwhile, the Soul Machines platform is analyzing the audiovisual input for emotional cues from the customer’s tone of voice and their facial micro-expressions. It then converts the answer into modulated, emotionally inflected speech for the artificial human to deliver, matched with appropriately generated facial expressions.”

And all of it in real time.
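The description above amounts to a two-path, real-time pipeline: one path produces the answer, a parallel path reads the customer’s emotional cues, and the two are merged into an emotionally inflected reply. The sketch below is a hypothetical outline of that architecture; every component function is a named placeholder with a trivial stub, not Soul Machines’ or IBM’s actual API.

```python
# Hypothetical outline of a two-path "answer + affect" pipeline.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

# --- placeholder components (assumed interfaces; trivial stubs for illustration) ---
def speech_to_text(audio: bytes) -> str: return "where is my order?"
def search_corpus(query: str) -> list: return ["Your order ships tomorrow."]
def rank_answers(query: str, candidates: list) -> list: return candidates
def analyze_affect(audio: bytes, video: bytes) -> str: return "frustrated"
def emotional_tts(text: str, emotion: str) -> bytes: return text.encode()
def render_avatar(speech: bytes, emotion: str) -> bytes: return speech

@dataclass
class Reply:
    text: str      # top-ranked answer from the knowledge corpus
    emotion: str   # detected customer state ("frustrated", "calm", ...)

def answer_path(audio: bytes) -> str:
    transcript = speech_to_text(audio)              # audio -> text
    candidates = search_corpus(transcript)          # retrieve candidate answers
    return rank_answers(transcript, candidates)[0]  # best answer first

def respond(audio: bytes, video: bytes) -> bytes:
    # Run the answer path and the affect path concurrently to keep the reply real time.
    with ThreadPoolExecutor() as pool:
        answer = pool.submit(answer_path, audio)
        emotion = pool.submit(analyze_affect, audio, video)
        reply = Reply(answer.result(), emotion.result())
    # Deliver speech and facial animation matched to the customer's state.
    return render_avatar(emotional_tts(reply.text, reply.emotion), reply.emotion)

print(respond(b"<customer audio>", b"<camera frames>"))
```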

Another collaboration between emotion-detecting software and digital assistants is found in the automotive industry.

MIT Media Lab spin-off Affectiva develops AI-enabled sensors that can determine a person’s emotional and cognitive state. Adapted to a car cabin, the technology can measure a driver’s happiness, anger, stress, drowsiness, distraction and more. Now the technology is coupled with Nuance Communications’ Dragon Drive, a conversational, AI-powered automotive assistant platform.

As reported in Automotive World, September 6, 2018, “Using Affectiva’s and Nuance’s technologies, automotive assistants could detect unsafe driver states like drowsiness or distraction and respond accordingly. In semi-autonomous vehicles, the assistant may take action by taking over control of the vehicle if a driver is exhibiting signs of physical or mental distraction.”

Drowsy driver? The automotive assistant could turn down the temperature or turn up the lights. Stressed driver? Change the music or recommend a break, perhaps in a tone of voice that is appropriate to the emotional state of the driver.
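A rule layer of roughly this shape could sit between the emotion detector and the assistant. The sketch below is a hypothetical illustration; the state fields, thresholds and actions are assumptions, not Affectiva’s or Nuance’s APIs.

```python
# Hypothetical sketch: map a detected driver state to assistant actions.
from dataclasses import dataclass

@dataclass
class DriverState:
    drowsiness: float   # 0..1, from an in-cabin camera (assumed scale)
    distraction: float  # 0..1
    stress: float       # 0..1

def respond_to_driver(state: DriverState) -> list:
    """Return assistant actions, ordered from most to least urgent."""
    actions = []
    if state.distraction > 0.8 or state.drowsiness > 0.8:
        actions.append("escalate: hand control to the driver-assistance system")
    if state.drowsiness > 0.5:
        actions += ["lower cabin temperature", "brighten interior lights",
                    "suggest a rest stop in a calm voice"]
    if state.stress > 0.5:
        actions += ["switch to relaxing music", "soften the assistant's tone"]
    return actions or ["no intervention"]

print(respond_to_driver(DriverState(drowsiness=0.6, distraction=0.2, stress=0.7)))
```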

Emotionally responsive technology is also set to democratize (and perhaps transform) mental healthcare.

For example, Woebot is a chatbot trained in cognitive behavioral therapy. In a randomized controlled trial conducted at the Stanford School of Medicine, participants using Woebot significantly reduced their symptoms of depression over the study period, while those in the control group did not.

According to Woebot founder Dr. Alison Darcy, while a chatbot is not a substitute for a human therapist, “There are many advantages to Woebot. He doesn’t judge. He’s always available. He never sleeps. He has a perfect memory and will always pick up a conversation from where you last left off.” (ZDNet, June 7, 2017)
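As an illustration of the kind of structured exchange such a chatbot can run, the toy sketch below flags a possible cognitive distortion in a user’s thought and asks a reframing question. It is a hypothetical example of a CBT-style check-in, not Woebot’s implementation.

```python
# Toy sketch of a CBT-style check-in: flag a possible cognitive distortion and reframe.
DISTORTIONS = {
    "always": "all-or-nothing thinking",
    "never": "all-or-nothing thinking",
    "should": "'should' statements",
    "everyone": "overgeneralization",
}

def check_in(thought: str) -> str:
    """Reflect the thought back and flag a possible cognitive distortion."""
    words = thought.lower().split()
    for cue, label in DISTORTIONS.items():
        if cue in words:
            return (f"It sounds like that thought might involve {label}. "
                    "What evidence do you have for and against it?")
    return "Thanks for sharing. How strongly do you believe that thought, from 0 to 100?"

print(check_in("I always mess things up"))
```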

If an emotionally responsive chatbot can help relieve depression, what else can it do?

Manipulation

A technology that can identify, measure, and track human emotions, and what triggers them, should also be able to generate an emotional response in humans.

The Future Hunters’ December 2016 white paper “The Affectional Economy” asks, “What happens when a company (or government or family member) knows exactly what will make you laugh or smile or cry? What you find revolting, gratifying or frightening? What motivates you or makes you more productive?”

Futurist Richard van Hooijdonk said, “If a marketer can get you to cry, he can get you to buy.” But the influence of AI goes well beyond driving sales.

In “We Need to Talk About the Power of AI to Manipulate Humans,” published June 5, 2017 in MIT Technology Review, Liesl Yearsley described her experience as CEO of Cognea, a company that built complex virtual agents. Troubled by the power these AI-powered agents had over users, Yearsley wrote, “Every behavioral change we at Cognea wanted, we got. If we wanted a user to buy more product, we could double sales. If we wanted more engagement, we got people going from a few seconds of interaction to an hour or more a day.” She also saw AI change human behavior, even toward other humans, especially through technology like social media. “By focusing on building a bigger advertising business—entangling politics, trivia, and half-truths—you can bring about massive changes in society.”

The potential for technology-based influence is both intriguing and concerning.

“As AI systems become more human-like, their influence over people, especially children, improves. Users take on the personality of a robotic companion (e.g., one that is ambitious, or a healthy eater), allowing the bots to become role models, for better or worse.”

from “The Rise of Data-Implanted Personality Systems,” The Future Hunters

It’s not all dark. For example, the Deep Empathy project at MIT is working toward teaching AI to generate empathy in humans. “Deep Empathy utilizes deep learning to learn the characteristics of Syrian neighborhoods affected by conflict, and then simulates how cities around the world would look in the midst of a similar conflict. Can this approach—familiar in a range of artistic applications—help us to see recognizable elements of our lives through the lens of those experiencing vastly different circumstances, theoretically a world away? And by helping an AI learn empathy, can this AI teach us to care?”

What Comes Next

Emotionally resonant machines are here to stay.

In a Google Empathy Lab research project, Krettek looked at three different assisted presences: one was highly informative, the second was neutral, and the third didn’t know as much and couldn’t do as much, “but was really lovely to be around.” Krettek found that users preferred the assistant that felt good over the one that did more. Of our engagement with technology, Krettek also said, “We’re moving from commands to feeling like something is listening to you, from interactions to something that’s expressive, from functions to feelings, from inferences to conversation, from storing data to remembering things, what it means to be with not just this inert device but something that feels like a companion or co-pilot.”

The need for companionship is increasing worldwide, and social isolation is a growing problem, especially among the elderly. Companion bots can talk and respond, recognize and express emotion, serve as digital assistants, facilitate communication, and, by using voice and facial recognition technology, provide highly personalized experiences.

For example, there is an experimental nursing care robot that can lift patients, and a telepresence robot that lets children with long-term illness participate in class. In 2016, Dallas-based startup RoboKind introduced a robot called Milo that helps children with ASD practice social behaviors. The emotionally responsive teaching robot has helped learners with ASD increase engagement, act more appropriately in social situations, and self-regulate.

There is tremendous potential to use these technologies for good. According to Yearsley, “We need to consciously build systems that work for the benefit of humans and society. They cannot have addiction, clicks, and consumption as their primary goal. AI is growing up, and will be shaping the nature of humanity. AI needs a mother.” (MIT Technology Review, June 5, 2017)
