The Guardian view on artificial intelligence: not a technological problem | Editorial

The dream of a computer system with godlike powers and the wisdom to use them well is merely a theological construct

The House of Lords report on the implications of artificial intelligence is a thoughtful document which grasps one rather important point: this is not only something that computers do. Machine learning is the more precise term for the technology that allows computers to recognise patterns in enormous datasets and act on them. But even machine learning doesn’t happen only inside computer networks, because these machines are constantly tended and guided by humans. You can’t say that Google’s intelligence resides either in its machines or in its people: it depends on both and emerges from their interplay. Complex software is never written to a state of perfection and then left to run for ever. It is constantly being tweaked, increasingly often as part of an arms race with other software or networks that are being used to outwit it. And at every step of the way, human bias and human perspectives are involved. It couldn’t be otherwise. The dream of a computer system with godlike powers and the wisdom to use them well is a theological construct, not a technological possibility.

The question, then, is which forms of bias and which perspectives are desirable, and which we should guard against. It is easy to find chilling examples – the Google image recognition program that couldn’t distinguish between black people and gorillas, because it had been trained on a dataset in which almost all the human faces were white or Asian; or the program used in many American jurisdictions to inform parole decisions, which turns out to be four times as likely to recommend that white criminals be freed as black ones when all other things are equal. Without human judgment we are helpless against the errors introduced by earlier human judgments. This has been known for some time, but the report discusses these dangers very clearly.

One thing that has changed in recent years is that a lot of the underlying technology has been democratised. What once required the resources of huge corporations can now be done by private individuals, either by using the publicly available networks of Amazon, Google and other giants, or simply by running cleverly designed software on private computers. Face recognition and voice recognition are both now possible in this way, and both will be used by malicious actors as well as benevolent ones. Most worries about the misuse of facial recognition software stem from its authoritarian use in places like China, where some policemen are already wearing facial recognition cameras, and concert-goers at large events are routinely scanned to see if they are of interest to the police. But the possibilities when such tools get into the hands of anarchists or apolitical bullies are also worrying.

We can’t step back into the past, and we can only predict the future in the broadest terms. The committee is right to suggest principles rather than detailed legislation. Since personal data can now be used, for good and ill, in ways that the people from whom it was gathered cannot possibly predict, the benefits of that use need to be widely shared. The report is important and right in its warnings against the establishment of “data monopolies”, in which four or five giant companies have access to almost all the information about everyone, and no one else does. It is also prescient to identify “data poverty” – where people do not have enough of an online presence to identify them credibly as humans to other computer networks – as a threat for the future. But neither the problems nor the solutions are purely technological. They demand political and social action.
