The Guardian view on killer robots: on the loose | Editorial

Lethal autonomous weapons are a reality, but the campaign to prevent their use is ours to win

The first meeting of the UN-backed group of experts, intended to begin work towards a ban on lethal autonomous weapons, was supposed to wrap up at the end of last week. But only days before it was due to start, it was cancelled: funding shortfalls were blamed. A lack of will feels the more likely explanation. Alarmed by the delay, on the day the meeting should have begun more than 100 of those most closely involved in developing the artificial intelligence on which such weapons would rely, led by Tesla’s Elon Musk and Alphabet’s Mustafa Suleyman, wrote a public letter of bleak warning: killer robots amount to a third revolution in warfare, the sequel to gunpowder and nuclear weapons. They are right. The only thing more frightening than a machine that can’t decide for itself who to kill is one that can.

But the technology is out there, within reach of scientists backed by the billions of dollars that the Pentagon’s Defense Advanced Research Projects Agency, or Darpa, has poured into the development of AI – and certainly matched by other, less transparent regimes. Some semi-autonomous weaponry is already deployed, such as the border-guarding system on the ceasefire line between North and South Korea. The process that critics such as the campaigning group Article 36 call the “bureaucratising” of weapons – in which targets are defined according to an explicit hierarchy – is under way.

Scientific discovery and technological advance are never unmade; and the history of asking states and their generals to abandon military advantage and behave morally, while not futile – there have been international laws of war for more than a century, and in parts of the world for very much longer – has had only limited success when applied to specific weapons at particular times. For example, for the first half of the 20th century the US upheld a ban on unrestricted submarine warfare. It lasted less than 24 hours after Pearl Harbor.

Yet the morality of merely distancing human involvement from conflict has been anxiously debated at least since the invention of the cannon. After the Austrians sent up balloons laden with bombs on timed fuses against the forces defending Venice in 1849 (the wind changed, to Austrian disadvantage), balloon bombing was outlawed at the first Hague Peace Conference in 1899. It was one of the earliest such bans, but like the later prohibition on the use of chemical weapons – a rare success for the League of Nations, in the Geneva Protocol of 1925 – it was the easier to negotiate because the weapon never had battle-winning potential. In contrast, attempts at a universal prohibition on bombing from planes, which had been used against military targets since the early years of the 20th century, never made progress, despite lip service being paid to the idea in the 1930s by both Stanley Baldwin and Hitler. And even where bans were in place, as one was against chemical weapons, the US used napalm in Korea and Vietnam before ratifying, in 1975, the beefed-up UN biological weapons convention that outlaws production, stockpiling and use. In the early years of the debate on the development of autonomous weapons, the Pentagon argued that technology must be the servant, not the master, of the soldier. Not any more.

Yet the weight of public opinion – even when it is not a majority view – can stiffen governments against the demands of military expediency and strategic choice. CND protests, from the Aldermaston marches of the 1950s to the Greenham Common peace camp in the 1980s, played their part in building a climate that made arms control a diplomatic objective. But for all the progress in controlling nuclear weapons that has been made since the first test ban treaty was signed in 1963, no nuclear power has ever given up its capacity to launch a nuclear attack.

Still, removing human intervention from the decision to kill raises the most profound questions – of international law and ethics, and, as the next issue of the International Institute for Strategic Studies journal Survival argues, wider questions of global peace and strategic stability. Campaigners believe that through the UN they can build a coalition: a platform for a sustained campaign to control autonomous weapons. They are heartened by the backing of the scientists who know most intimately how AI could be developed – and might learn to develop itself. By exploiting the power of stigma, they have won campaigns against anti-personnel mines and cluster bombs. The global order is more fragile than at any time since 1945, a hard place in which to build consensus. But theirs is an argument that must be made.

