Scientists create online games to show risks of AI emotion recognition

Public can try pulling faces to trick the technology, while critics highlight human rights concerns

It is a technology that has been frowned upon by ethicists: now researchers are hoping to unmask the reality of emotion recognition systems in an effort to boost public debate.

Technology designed to identify human emotions using machine learning algorithms is a huge industry, with claims it could prove valuable in myriad situations, from road safety to market research. But critics say the technology not only raises privacy concerns, but is inaccurate and racially biased.

A team of researchers have created a website – emojify.info – where the public can try out emotion recognition systems through their own computer cameras. One game focuses on pulling faces to trick the technology, while another explores how such systems can struggle to read facial expressions in context.

Their hope, the researchers say, is to raise awareness of the technology and promote conversations about its use.

“It is a form of facial recognition, but it goes farther because rather than just identifying people, it claims to read our emotions, our inner feelings from our faces,” said Dr Alexa Hagerty, project lead and researcher at the University of Cambridge Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk.

Facial recognition technology, often used to identify people, has come under intense scrutiny in recent years. Last year the Equality and Human Rights Commission said its use for mass screening should be halted, saying it could increase police discrimination and harm freedom of expression.

But Hagerty said many people were not aware how common emotion recognition systems were, noting they were employed in situations ranging from job hiring and customer insight work to airport security, and even in education to see whether students are engaged or doing their homework.

Such technology, she said, was in use all over the world, from Europe to the US and China. Taigusys, a company that specialises in emotion recognition systems and has its main office in Shenzhen, says its systems have been used in settings ranging from care homes to prisons. And according to reports earlier this year, the Indian city of Lucknow is planning to use the technology to spot distress in women as a result of harassment – a move that has met with criticism, including from digital rights organisations.

While Hagerty said emotion recognition technology might have some potential benefits, these must be weighed against concerns about accuracy and racial bias, as well as whether the technology was even the right tool for a particular job.

“We need to be having a much wider public conversation and deliberation about these technologies,” she said.

The new project allows users to try out emotion recognition technology. The site notes that “no personal data is collected and all images are stored on your device”. In one game, users are invited to pull a series of faces to fake emotions and see if the system is fooled.

“The claim of the people who are developing this technology is that it is reading emotion,” said Hagerty. But, she added, in reality the system was reading facial movement and then combining that with the assumption that those movements are linked to emotions – for example, that a smile means someone is happy.
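
To make that two-step logic concrete, the sketch below shows the kind of pipeline Hagerty describes: detect a facial movement, then translate it into an emotion label through a fixed assumption. It is purely illustrative; the function names and the expression-to-emotion table are hypothetical, and it is not the code behind emojify.info or any commercial system.

```python
# Illustrative sketch of a naive emotion recognition pipeline.
# Names and values are hypothetical; no real detection model is included.

# Fixed assumption linking facial movements to emotions:
# the step the article describes as too simple.
EXPRESSION_TO_EMOTION = {
    "smile": "happy",
    "brow_furrow": "angry",
    "open_mouth": "surprised",
}


def detect_facial_movement(image_bytes: bytes) -> str:
    """Stand-in for a trained computer-vision model that reads facial movement."""
    # A real system would analyse the image; this placeholder always sees a smile.
    return "smile"


def label_emotion(image_bytes: bytes) -> str:
    """Map the detected movement to an emotion via the fixed lookup above."""
    movement = detect_facial_movement(image_bytes)
    # No account is taken of context or of a deliberately faked expression.
    return EXPRESSION_TO_EMOTION.get(movement, "neutral")


if __name__ == "__main__":
    # Prints "happy" regardless of what the person in front of the camera feels.
    print(label_emotion(b""))
```

The weakness Hagerty points to sits in the second step: the lookup treats a smile as proof of happiness, leaving no room for context or for an expression that has been deliberately faked.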

“There is lots of really solid science that says that is too simple; it doesn’t work quite like that,” said Hagerty, adding that even just human experience showed it was possible to fake a smile. “That is what that game was: to show you didn’t change your inner state of feeling rapidly six times, you just changed the way you looked [on your] face,” she said.

Some emotion recognition researchers say they are aware of such limitations. But Hagerty said the hope was that the new project, which is funded by Nesta (National Endowment for Science, Technology and the Arts), would raise awareness of the technology and promote discussion around its use.

“I think we are beginning to realise we are not really ‘users’ of technology, we are citizens in a world being deeply shaped by technology, so we need to have the same kind of democratic, citizen-based input on these technologies as we have on other important things in societies,” she said.

Vidushi Marda, senior programme officer at the human rights organisation Article 19, said it was crucial to press “pause” on the growing market for emotion recognition systems.

“The use of emotion recognition technologies is deeply concerning as not only are these systems based on discriminatory and discredited science, their use is also fundamentally inconsistent with human rights,” she said. “An important learning from the trajectory of facial recognition systems across the world has been to question the validity and need for technologies early and often – and projects that emphasise the limitations and dangers of emotion recognition are an important step in that direction.”

Contributor

Nicola Davis Science correspondent

The Guardian
