A beauty contest was judged by AI and the robots didn't like dark skin

The first international beauty contest decided by an algorithm has sparked controversy after the results revealed one glaring factor linking the winners

The first international beauty contest judged by “machines” was supposed to use objective factors such as facial symmetry and wrinkles to identify the most attractive contestants. After Beauty.AI launched this year, roughly 6,000 people from more than 100 countries submitted photos in the hopes that artificial intelligence, supported by complex algorithms, would determine that their faces most closely resembled “human beauty”.

But when the results came in, the creators were dismayed to see that there was a glaring factor linking the winners: the robots did not like people with dark skin.

Out of 44 winners, nearly all were white, a handful were Asian, and only one had dark skin. That’s despite the fact that, although the majority of contestants were white, many people of color submitted photos, including large groups from India and Africa.

The ensuing controversy has sparked renewed debates about the ways in which algorithms can perpetuate biases, yielding unintended and often offensive results.

When Microsoft released the “millennial” chatbot named Tay in March, it quickly began using racist language and promoting neo-Nazi views on Twitter. And after Facebook eliminated human editors who had curated “trending” news stories last month, the algorithm immediately promoted fake and vulgar stories on news feeds, including one article about a man masturbating with a chicken sandwich.

While the seemingly racist beauty pageant has prompted jokes and mockery, computer science experts and social justice advocates say that in other industries and arenas, the growing use of prejudiced AI systems is no laughing matter. In some cases, it can have devastating consequences for people of color.

Beauty.AI – which was created by a “deep learning” group called Youth Laboratories and supported by Microsoft – relied on large datasets of photos to build an algorithm that assessed beauty. While there are a number of reasons why the algorithm favored white people, the main problem was that the data the project used to establish standards of attractiveness did not include enough minorities, said Alex Zhavoronkov, Beauty.AI’s chief science officer.

Although the group did not build the algorithm to treat light skin as a sign of beauty, the input data effectively led the robot judges to reach that conclusion.

Winners of the Beauty.AI contest in the category for women aged 18-29. Photograph: http://winners2.beauty.ai/#win

“If you have not that many people of color within the dataset, then you might actually have biased results,” said Zhavoronkov, who said he was surprised by the winners. “When you’re training an algorithm to recognize certain patterns … you might not have enough data, or the data might be biased.”

The simplest explanation for biased algorithms is that the humans who create them have their own deeply entrenched biases. That means that despite perceptions that algorithms are somehow neutral and uniquely objective, they can often reproduce and amplify existing prejudices.

The Beauty.AI results offer “the perfect illustration of the problem”, said Bernard Harcourt, Columbia University professor of law and political science who has studied “predictive policing”, which has increasingly relied on machines. “The idea that you could come up with a culturally neutral, racially neutral conception of beauty is simply mind-boggling.”

The case is a reminder that “humans are really doing the thinking, even when it’s couched as algorithms and we think it’s neutral and scientific,” he said.

Civil liberty groups have recently raised concerns that computer-based law enforcement forecasting tools – which use data to predict where future crimes will occur – rely on flawed statistics and can exacerbate racially biased and harmful policing practices.

“It’s polluted data producing polluted results,” said Malkia Cyril, executive director of the Center for Media Justice.

A ProPublica investigation earlier this year found that software used to predict future criminals is biased against black people, which can lead to harsher sentencing.

“That’s truly a matter of somebody’s life is at stake,” said Sorelle Friedler, a professor of computer science at Haverford College.

A major problem, Friedler said, is that minority groups by nature are often underrepresented in datasets, which means algorithms can reach inaccurate conclusions for those populations and the creators won’t detect it. For example, she said, an algorithm that was biased against Native Americans could be considered a success given that they are only 2% of the population.

“You could have a 98% accuracy rate. You would think you have done a great job on the algorithm.”
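Friedler's arithmetic can be sketched with a toy calculation (the numbers below are illustrative, not drawn from any real system): a classifier that is right about every member of the majority but wrong about every member of a 2% minority still reports an overall accuracy of 98%.

```python
# Hypothetical illustration of Friedler's point: a model that is correct on
# the majority group and always wrong on a 2% minority still scores 98%.
population = 1000
minority = 20                     # 2% of the population
majority = population - minority

correct = majority                # every majority member classified correctly
accuracy = correct / population
print(f"Overall accuracy: {accuracy:.0%}")   # 98%

# Accuracy measured within the minority group alone:
minority_accuracy = 0 / minority
print(f"Minority accuracy: {minority_accuracy:.0%}")   # 0%
```

The headline number hides the failure entirely, which is why per-group accuracy, not overall accuracy, reveals this kind of bias.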

Friedler said there are proactive ways algorithms can be adjusted to correct for biases, whether by improving the input data or by implementing filters to ensure that people of different races receive equal treatment.
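One common way to "improve the input data" along the lines Friedler describes is to reweight training examples so that an underrepresented group contributes as much to training as the majority. A minimal sketch, using the same illustrative 98%/2% split as above (the group names and counts are hypothetical):

```python
# Hypothetical sketch of reweighting an imbalanced dataset: give each example
# a weight inversely proportional to its group's size, so every group
# contributes the same total weight to training.
from collections import Counter

groups = ["majority"] * 980 + ["minority"] * 20
counts = Counter(groups)
n_groups = len(counts)

weights = {g: len(groups) / (n_groups * c) for g, c in counts.items()}

# Each group now carries equal total weight:
print(weights["majority"] * counts["majority"])   # 500.0
print(weights["minority"] * counts["minority"])   # 500.0
```

This balanced weighting is a standard technique for class imbalance; it does not by itself guarantee fair outcomes, but it prevents a small group from being effectively ignored during training.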

Prejudiced AI programs aren’t limited to the criminal justice system. One study determined that significantly fewer women than men were shown online ads for high-paying jobs. Last year, Google’s photo app was found to have labeled black people as gorillas.

Cyril noted that algorithms are ultimately very limited in how they can help correct societal inequalities. “We’re overly relying on technology and algorithms and machine learning when we should be looking at institutional changes.”

Zhavoronkov said that when Beauty.AI launches another contest round this fall, he expects the algorithm will have a number of changes designed to weed out discriminatory results. “We will try to correct it.”

But the reality, he added, is that robots may not be the best judges of physical appearance: “I was more surprised about how the algorithm chose the most beautiful people. Out of a very large number, they chose people who I may not have selected myself.”


Sam Levin in San Francisco
