The Guardian view on the ethics of AI: it’s about Dr Frankenstein, not his monster | Editorial

Google’s ethical principles for the use of artificial intelligence are little more than a smokescreen, but they show that many engineers are rightly worried by the possible uses of the technology they’re developing

Frankenstein’s monster haunts discussions of the ethics of artificial intelligence: the fear is that scientists will create something that has purposes and even desires of its own and which will carry them out at the expense of human beings. This is a misleading picture because it suggests that there will be a moment at which the monster comes alive: the switch is thrown, the program run, and after that its human creators can do nothing more. They are left with guilt, perhaps, but no direct responsibility for what it goes on to do. In real life there will be no such singularity. Construction of AI and its deployment will be continuous processes, with humans involved and to some extent responsible at every step.

This is what makes Google’s declaration of ethical principles for its use of AI so significant, because it seems to be the result of a revolt among the company’s programmers. The senior management at Google saw the supply of AI to the Pentagon as a goldmine, if only it could be kept from public knowledge. “Avoid at ALL COSTS any mention or implication of AI,” wrote Google Cloud’s chief scientist for AI in a memo. “I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry.”

That, of course, is exactly what the company had been doing. Google had been subcontracting for the Pentagon on Project Maven, which was meant to bring the benefits of AI to war-fighting. Then the media found out and more than 3,000 of its own employees protested. Only two things frighten the tech giants: one is the stock market; the other is an organised workforce. The employees’ agitation led to Google announcing six principles of ethical AI, among them pledges not to build weapons systems, technologies whose purpose contravenes international principles of human rights, or surveillance technologies that violate internationally accepted norms. This still leaves a huge intentional exception: profiting from “non-lethal” defence technology.

Obviously we cannot expect all companies, still less all programmers, to show this kind of ethical fine-tuning. Other companies will bid for Pentagon business in the US: Google had to beat IBM, Amazon and Microsoft to gain the Maven contract. In China the state will find no shortage of people to work on its surveillance apparatus, which uses AI techniques in what may well be the world’s most sophisticated system for spying on a civilian population.

But in all these cases, the companies involved – which means the people who work for them – will be actively involved in maintaining, tweaking and improving the work. This opens an opportunity for consistent ethical pressure and for the attribution of responsibility to human beings and not to inanimate objects. Questions about the ethics of artificial intelligence are questions about the ethics of the people who make it and the purposes they put it to. It is not the monster, but the good Dr Frankenstein we need to worry about most.
