AI systems claiming to 'read' emotions pose discrimination risks

Expert says technology deployed is based on outdated science and therefore is unreliable

Artificial intelligence (AI) systems that companies claim can “read” facial expressions are based on outdated science and risk being unreliable and discriminatory, one of the world’s leading experts on the psychology of emotion has warned.

Lisa Feldman Barrett, professor of psychology at Northeastern University, said that such technologies appear to disregard a growing body of evidence undermining the notion that the basic facial expressions are universal across cultures. As a result, such technologies – some of which are already being deployed in real-world settings – run the risk of being unreliable or discriminatory, she said.

“I don’t know how companies can continue to justify what they’re doing when it’s really clear what the evidence is,” she said. “There are some companies that just continue to claim things that can’t possibly be true.”

Her warning comes as such systems are being rolled out for a growing number of applications. In October, Unilever claimed that it had saved 100,000 hours of human recruitment time last year by deploying such software to analyse video interviews.

The AI system, developed by the company HireVue, scans candidates’ facial expressions, body language and word choice and cross-references them with traits that are considered to be correlated with job success.

Amazon claims its own facial recognition system, Rekognition, can detect seven basic emotions – happiness, sadness, anger, surprise, disgust, calmness and confusion. The EU is reported to be trialling software which purportedly can detect deception through an analysis of micro-expressions in an attempt to bolster border security.

“Based on the published scientific evidence, our judgment is that [these technologies] shouldn’t be rolled out and used to make consequential decisions about people’s lives,” said Feldman Barrett.

Speaking ahead of a talk at the American Association for the Advancement of Science’s annual meeting in Seattle, Feldman Barrett said the idea of universal facial expressions for happiness, sadness, fear, anger, surprise and disgust had gained traction in the 1960s after an American psychologist, Paul Ekman, conducted research in Papua New Guinea showing that members of an isolated tribe gave similar answers to Americans when asked to match photographs of people displaying facial expressions with different scenarios, such as “Bobby’s dog has died”.

However, a growing body of evidence has shown that beyond these basic stereotypes there is a huge range in how people express emotion, both across and within cultures.

In western cultures, for instance, people have been found to scowl only about 30% of the time when they’re angry, she said, meaning they move their faces in other ways about 70% of the time.

“There is low reliability,” Feldman Barrett said. “And people often scowl when they’re not angry. That’s what we’d call low specificity. People scowl when they’re concentrating really hard, when you tell a bad joke, when they have gas.”

The expression that is supposed to be universal for fear is the stereotype for a threat or anger face in Malaysia, she said. There are also wide variations within cultures in how people express emotions, while context, such as body language and who a person is talking to, is critical.

“AI is largely being trained on the assumption that everyone expresses emotion in the same way,” she said. “There’s very powerful technology being used to answer very simplistic questions.”


Hannah Devlin Science correspondent

