The Guardian view on artificial intelligence: human learning | Editorial

Computers can’t be held responsible for anything. So their owners and programmers must be

In a modern company like Amazon, almost all human activity is directed by computer programs. They not only monitor workers’ actions but are used to choose who should be employed. Yet it emerged last week that the company had scrapped an attempt to use artificial intelligence to select workers on the basis of their CVs, since the results consistently discriminated against women.

This is a welcome decision that illuminates two important facts about machine learning, the most widely used technique of AI at the moment. The technical or operational point is that these programs, no matter how fast they learn, can only learn from the data presented to them. If this data reflects historic patterns of discrimination, the results will perpetuate those patterns.

That’s what Amazon found: by training its AI with the records of those job applicants who had been hired in the past, who were overwhelmingly men, it taught the program to discriminate against applications from women. Since the program had access to immense amounts of data about the applicants, it was able to infer their sex from factors such as whether they had attended an all-women’s college. And since it had neither conscience nor consciousness, the machine behaved as if being female were a sign of inferiority, just as the industry it learned from had done.

This is an instance of a wider problem that has appeared in more sinister contexts, such as decisions over which prisoners should get parole. It is also one that is extremely hard to surmount. When you ask computers to detect patterns in data, which is the short description of machine learning, the patterns they find are usually genuine ones, even if we have not noticed them before.

This kind of mesh of inference is implicit in the way that language works, as Joanna Bryson, one of the authors of a ground-breaking study of the way that machine learning can expose the prejudices embedded in our use of language, points out. We can’t get away from it. Language encodes both the wisdom and the folly of all those who have used it before us. Patterns of language describe the way the world is, whether or not it ought to be that way. Distinguishing an “ought” from the “is” of usage therefore requires a sustained collective effort.

The technical aspects of the story are not the only salient ones. What matters for the future is the recognition that the responsible actor in the story was Amazon itself, the company, and not the AI it built and used. Discussions of AI too often proceed as if the technology will appear among us like the monolith in the film 2001: something alien and immensely powerful but immediately recognisable and clearly distinguished from the hominids around it. It’s not happening like that at all.

AI is already all around us and is always a hybrid or symbiotic system, made up of the humans who tend the programs and feed them data quite as much as the computers themselves. Companies such as Google or Amazon – and even traditional media and retailers – are now partly constituted by the operations of their computer systems. It is therefore essential that moral and legal responsibility be attached to the human parts of the system.

We hold Facebook or Google responsible for the results of their algorithms. The example of Amazon shows that this principle must be more widely extended. AI is among us already, and the companies, the people, and the governments who use it must be accountable for the consequences.
