Do no harm, don't discriminate: official guidance issued on robot ethics

Robot deception, addiction and the possibility of AIs exceeding their remits noted as hazards that manufacturers should consider

Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institution has issued a more official version aimed at helping designers create ethically sound robots.

The document, BS 8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.

Welcoming the guidelines at the Social Robotics and AI conference in Oxford, Alan Winfield, a professor of robotics at the University of the West of England, said they represented “the first step towards embedding ethical values into robotics and AI”.

“As far as I know this is the first published standard for the ethical design of robots,” Winfield said after the event. “It’s a bit more sophisticated than Asimov’s laws – it basically sets out how to do an ethical risk assessment of a robot.”

The BSI document begins with some broad ethical principles: “Robots should not be designed solely or primarily to kill or harm humans; humans, not robots, are the responsible agents; it should be possible to find out who is responsible for any robot and its behaviour.”

It goes on to highlight a range of more contentious issues, such as whether an emotional bond with a robot is desirable, particularly when the robot is designed to interact with children or the elderly.

Noel Sharkey, emeritus professor of robotics and AI at the University of Sheffield, said this was an example of where robots could unintentionally deceive us. “There was a recent study where little robots were embedded in a nursery school,” he said. “The children loved it and actually bonded with the robots. But when asked afterwards, the children clearly thought the robots were more cognitive than their family pet.”

The code suggests designers should aim for transparency, but scientists say this could prove tricky in practice. “The problem with AI systems right now, especially these deep learning systems, is that it’s impossible to know why they make the decisions they do,” said Winfield.

Deep learning agents, for instance, are not programmed to do a specific task in a set way. Instead, they learn to perform a task by attempting it millions of times until they evolve a successful strategy – sometimes one that their human creators had not anticipated and do not understand.

The guidance even hints at the prospect of sexist or racist robots, warning against “lack of respect for cultural diversity or pluralism”.

“This is already showing up in police technologies,” said Sharkey, adding that technologies designed to flag up suspicious people to be stopped at airports had already proved to be a form of racial profiling.

Winfield said: “Deep learning systems are quite literally using the whole of the data on the internet to train on, and the problem is that that data is biased. These systems tend to favour white middle-aged men, which is clearly a disaster. All the human prejudices tend to be absorbed, or there’s a danger of that.”

In future medical applications, there is a risk that systems might be less adept when diagnosing women or ethnic minorities. There have already been examples of voice recognition software being worse at understanding women, or facial recognition programmes not identifying black faces as easily as white ones.

“We need a black box on robots that can be opened and examined,” said Sharkey. “If a robot is being racist, unlike a police officer, we can switch it off and take it off the street.”

The document also flags up broader societal concerns, such as “over-dependence on robots”, without giving designers a definitive steer on what to do about these issues.

“One form of this is automation bias: when you work with a machine for a certain length of time and it gives you the right answers, you come to trust it and become lazy. And then it gives you something really stupid,” said Sharkey.

Perhaps with an eye on the more distant future, the BSI also alerts us to the danger of rogue machines that “might develop new or amended action plans … that could have unforeseen consequences” and the potential “abrogation of legal responsibility” by robots.

Dan Palmer, head of manufacturing at BSI, said: “Using robots and automation techniques to make processes more efficient, flexible and adaptable is an essential part of manufacturing growth. For this to be acceptable, it is essential that ethical issues and hazards, such as the dehumanisation of humans or over-dependence on robots, are identified and addressed.

“This new guidance on how to deal with various robot applications will help designers and users of robots and autonomous systems to establish this new area of work.”

Contributor

Hannah Devlin, Science correspondent, The Guardian
