Researchers working for Google have produced a new kind of computer intelligence that can learn in ways less immediately dependent on its programmers than any previous model. It can, for instance, navigate its way through a map of the London underground without being explicitly instructed how to do so. For the moment, this approach is less efficient than the old-fashioned, more specialised forms of artificial intelligence, but it holds out promise for the future and, like all such conceptual advances in computer programming, it raises with new urgency the question of how society should harness these powers.
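To make concrete what "explicitly instructed" means here, consider a minimal Python sketch of the conventional approach the new system does without: a breadth-first search over a hand-coded station graph. The stations below are an invented toy fragment, not the real network; the point is only that every rule the program follows was written down in advance by a person.

```python
from collections import deque

# A toy fragment of an underground map as an adjacency list.
# Stations and connections are illustrative, not the real network.
TUBE = {
    "Oxford Circus": ["Bond Street", "Tottenham Court Road", "Green Park"],
    "Bond Street": ["Oxford Circus", "Baker Street"],
    "Tottenham Court Road": ["Oxford Circus", "Holborn"],
    "Green Park": ["Oxford Circus", "Victoria", "Westminster"],
    "Baker Street": ["Bond Street"],
    "Holborn": ["Tottenham Court Road", "Bank"],
    "Victoria": ["Green Park"],
    "Westminster": ["Green Park", "Bank"],
    "Bank": ["Holborn", "Westminster"],
}

def shortest_route(start, goal):
    """Breadth-first search: every step explicitly programmed."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in TUBE[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(" -> ".join(shortest_route("Victoria", "Bank")))
```

What the Google system learns to do without being told is, roughly, what this code does because it was told.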
Algorithms in themselves long predate computers. An algorithm is simply a sequence of instructions. Law codes can be seen as algorithms. The rules of games can be understood as algorithms, and nothing could be more human than making up games. Armies are perhaps the most completely algorithmic forms of social organisation. Yet too much contemporary discussion is framed as if the algorithmic workings of computer networks were something entirely new. It’s true that they can follow instructions at superhuman speed, with superhuman fidelity, and over unimaginable quantities of data. But these instructions don’t come from nowhere. Although neural networks might be said to write their own programs, they do so towards goals set by humans, using data collected for human purposes. If the data is skewed, even by accident, the computers will amplify injustice. If the measures of success the networks are trained against are themselves foolish, or worse, the results will be flawed in the same measure. Recent, horrifying examples include the use of algorithms to grade teachers in the US and to decide whether prisoners should be granted parole. In both cases, the effect has been to punish the poor simply for being poor.
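The mechanism is easy to demonstrate. Below is a deliberately crude Python sketch, with invented groups, numbers and a made-up "merit" variable: two populations of identical underlying merit, historical labels skewed against one of them, and a model that learns nothing more than the base rates of those labels. A real system is far more sophisticated, but it optimises the same skewed target.

```python
import random

random.seed(0)

# Two groups with IDENTICAL underlying merit, but the historical
# "success" labels we train against are skewed against group B
# (a stand-in for data collected under biased human judgments).
def historical_label(group, merit):
    bias = -0.2 if group == "B" else 0.0  # the skew, accidental or not
    return merit + bias > 0.5

population = [(random.choice("AB"), random.random()) for _ in range(10_000)]
training = [(g, m, historical_label(g, m)) for g, m in population]

# "Training": the simplest possible model, a per-group base rate.
rate = {}
for g in "AB":
    labels = [y for grp, _, y in training if grp == g]
    rate[g] = sum(labels) / len(labels)

print(f"Learned approval rate, group A: {rate['A']:.2f}")
print(f"Learned approval rate, group B: {rate['B']:.2f}")
# Both groups have the same merit distribution, yet the model
# faithfully reproduces the injustice baked into its labels.
```

Nothing in the training step is malicious, or even wrong on its own terms; the injustice enters through the target the humans chose.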
This kind of programming is, in the programmer Maciej Cegłowski’s phrase, like money-laundering for bias. But because the obnoxious determinations are made by computer programs, they seem to have an unassailable authority. We should not grant them this. Self-interest, as well as justice, is on the side of caution here. Algorithmic trading between giant banks is certainly to blame for such phenomena as “flash crashes”, and arguably played a part in the great financial disaster of 2008. But there is nothing inevitable about handing over to a computer the capacity to make decisions for us. There is always a human responsibility, and it belongs with the companies or organisations that make use of – or at least unleash – the powers of the computer networks. To pretend otherwise is like blaming the outbreak of the first world war on railway timetables and their effect on the mobilisation of armies.
The cure for the excesses of computerised algorithms is not in principle different from the remedies we have already discovered for algorithms embedded in purely human institutions. Expert claims must be scrutinised by outsiders and justified to sceptical, if intelligent and fair-minded, observers. There need to be social mechanisms for appealing against these judgments, and means of identifying their mistakes and preventing them from happening in future. The interests of the powerful must not be allowed to take precedence over the interests of justice and of society as a whole.