Artificially intelligent weapons are transforming armies around the world - should we be worried? Matthew Campbell investigates the new arms race.
A white drone hovers high above a sunny Californian valley. Then a bigger black drone appears, mimicking its movements, stalking it. "It's like on those nature shows when a lion's chasing a wildebeest — you know it's not going to end well for the wildebeest," says Chris Brose of Anduril Industries, an American company.
He is showing off some of Anduril's latest products: the black drone suddenly darts upwards at 100mph (160km/h) and knocks the other whirring machine out of the sky. Preprogrammed to recognise and destroy unauthorised intruders by smashing into them in an act of drone suicide, it has done this without referring to any human.
The killer robots and "kamikaze" drones are here: the artificially intelligent weaponry of science fiction is now a reality — and is about to transform armies, navies and air forces around the world.
This has unleashed a new arms race. "Think about what the machinegun did to war in 1914 or what aircraft did in 1939," says Peter Singer, an American "futurist" and bestselling author. "Why would anyone expect that artificial intelligence and robotics are somehow going to have a lesser impact?" Eventually, he chuckles, there will be a Waze app for generals: "It'll say you'll have fewer casualties if you go that way."
Autonomous weapons are already on the battlefield: the use of Turkish drones to hunt military targets — armoured vehicles and soldiers — in Libya last year is thought to be one of the first examples of artificial intelligence being unleashed to kill on its own initiative. Launched with a few taps on a keyboard (and at a fraction of the cost of any traditional air force), similar "loitering munitions" were deployed by Azerbaijan to destroy most of Armenia's artillery and missile systems and some 40 per cent of its armoured vehicles in a war last year.
"Those numbers are astounding," Singer says. "Like being back in the early days of the tank."
The "hit" Israel was accused of late last year on the head of Iran's nuclear weapons programme, Mohsen Fakhrizadeh, may have been another example of death by robot. Fakhrizadeh was shot dead by a machinegun mounted in the back of a pick-up truck by the side of the road, the Iranians say. The hyper-accurate gun was switched on by an operator in a different country and programmed to open fire as soon as it recognised the approaching target. Fakhrizadeh was killed instantly, but his wife, sitting next to him, was unharmed.
For years "sentry" guns capable of opening fire by themselves against intruders have been deployed by South Korea along the Korean Demilitarised Zone and by Israel on the border of the Gaza Strip. An unarmed "robot dog" is already in service at one US air force base, carrying out perimeter patrols; a version armed with a remote-controlled assault rifle was unveiled at an army convention in the US last month. Cry havoc and let loose the dogs of war for real. Machines with an even greater degree of autonomy and lethal power — including swarms of miniature drones and unmanned battle vehicles — will soon proliferate on the battlefield. But can we trust them to go into battle on our behalf? Can we afford not to?
Risking machines instead of humans in combat has its attractions for military planners. The robots are cheap and expendable. "Here's the choice," says General Sir Richard Barrons, a former British army commander: "I can build a machine that can go into a dangerous place and kill the enemy or we can send your son — because that's the alternative. How do you feel now? People will say, you know what, that machine is a better alternative."
What is more, machines don't need holidays or payment. They don't get tired or disobey orders. "These systems do what they're programmed to do," Barrons adds. "They don't need regular training or miss a shot by failing to release the safety catch in all the excitement."
Machines have already outclassed seasoned fighter pilots in aerial combat. In August last year a US pilot identified only as "Banger", who has 2,000 hours of flying experience in an F-16, was consistently beaten by an algorithm in a flight simulator. Banger said afterwards that the algorithm was "not limited by the training and thinking that is engrained in an air force pilot". On the ground, Russian and US armoured vehicles that can roam the battlefield deciding for themselves when to attack — albeit within parameters set by humans — are rolling off the production line.
The technology makes many people uneasy. Robots defying their human masters have long been a Hollywood staple, most memorably in the 1968 film 2001: A Space Odyssey, where an astronaut locked outside the spacecraft says, "Open the pod bay doors, Hal," and the computer chillingly replies, "I'm sorry, Dave. I'm afraid I can't do that."
Some scientists have invoked science fiction nightmares to scare us away from automation. Imagine robots such as the unstoppable shape-shifting assassin in the Terminator films roaming the land in pursuit of prey; or machines that end up enslaving their masters. Or perhaps reality will follow the latest Bond film, which features killer nanobots that can target individuals by their DNA.
The late Stephen Hawking warned that AI could "spell the end of the human race", and Elon Musk says we are "summoning a demon" by pursuing "lethal autonomous weapons systems" — Laws, as they are known. Vladimir Putin, the Russian president, is more enthusiastic — and his words should give us cause for alarm: "Whoever leads in AI will rule the world," he has said.
Calls for a ban on autonomous weapons have intensified. "Machines killing people is an unacceptable direction of travel for us, we want to build a sense of moral agreement on that," says Richard Moyes, director of Stop Killer Robots, a group pushing for a ban and whose supporters have staged protests, dressed as robots, from London to Berlin. The Pope, too, has entered the fray in support of regulation, denouncing autonomous weapons for "lacking humanity and public conscience".
Many countries have signed up for regulation in a series of UN-sponsored talks in Geneva — but not Britain and a handful of other countries, including the US, China and Israel. Britain's official position is that it will never produce Laws. But it defines these as being "capable of understanding higher-level intent and direction". This sounds like an artful dodge: nobody really believes we are anywhere near producing generally intelligent robots, let alone human-like thought processes — or consciousness — in machines that might inspire them to take over.
The threat worrying activists comes from autonomous drones and other robot weapons that already exist and can be unleashed on the battlefield in automatic mode, having been programmed to select and destroy targets without awaiting orders from humans.
"Legal controls need to be applied to systems that exist now, not just to the Terminator in a thousand years time," says Moyes, who notes that some of the killer drones can operate autonomously for two hours or more. "When they are being used over a longer period of time and over a wider area, the user can have relatively little control over what actually is going to get targeted."
Loitering drones and other robot weapons use image recognition, accessing a library of images to determine whether an object is a tank or a car, or if a human being is a combatant or civilian. "One area of risk," a security source says, "is that while a T72 tank is a tank, someone wearing combat camouflage might not deserve to be shot."
The problem is that, technologically sophisticated as they are, robot weapons "don't have intuition, the ability to do lateral thinking or common sense", the security source says. At present the technology functions best in "cut-and-dried circumstances, for example when a missile is heading towards its ship".
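To make the source's point concrete, the logic being described boils down to a classifier score and a threshold: clear-cut matches are acted on, anything ambiguous is left alone or referred back to a person. The toy sketch below is purely illustrative and assumes invented class names, an invented confidence threshold and a made-up decision rule; it does not describe any real weapon's software.

```python
# Illustrative toy only: a hypothetical target-classification gate.
# Every name and number here is invented for this sketch; it does not
# reflect how any real loitering munition is programmed.

from dataclasses import dataclass

ENGAGEABLE = {"tank"}                 # "cut-and-dried" classes the system may act on
PROTECTED = {"car", "civilian"}       # classes it must never engage
AMBIGUOUS = {"person_in_camouflage"}  # camouflage alone does not make a lawful target

CONFIDENCE_THRESHOLD = 0.95           # arbitrary value for illustration


@dataclass
class Detection:
    label: str         # best-matching class from the image library
    confidence: float  # classifier's confidence in that label


def decide(detection: Detection) -> str:
    """Return 'engage', 'ignore' or 'refer_to_operator' for one detection."""
    if detection.confidence < CONFIDENCE_THRESHOLD:
        return "refer_to_operator"     # low confidence: a human must decide
    if detection.label in ENGAGEABLE:
        return "engage"
    if detection.label in PROTECTED:
        return "ignore"
    return "refer_to_operator"         # ambiguous classes always go back to a human


if __name__ == "__main__":
    for d in [Detection("tank", 0.99),
              Detection("car", 0.97),
              Detection("person_in_camouflage", 0.98),
              Detection("tank", 0.60)]:
        print(d.label, d.confidence, "->", decide(d))
```

The sketch also shows why campaigners worry: everything hangs on how the engageable and protected categories are drawn, and on a threshold that a human chooses long before the machine meets a real, messy battlefield.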
As the technology develops, Britain's next generation of combat aircraft, Tempest, will include an uncrewed version able to fly for many hours picking its own targets if unleashed in automatic mode. So will Russia's latest jet, christened Checkmate. Uncrewed submarines and autonomous mine-hunting ships are also in the pipeline. Recent satellite images showed that China's military is already building "uncrewed surface vessels" — in other words robot warships — at a secret base on the country's northeast coast.
"People will argue whether what we're doing is really autonomous," Barrons says, "because you've defined some parameters within which you are happy that they will kill the right target."
Either way, though, the technology is developing at a time when the world seems increasingly unstable and insecure — fertile terrain for a revolution in war-making. People's reservations about the ethics of killer robots will fade the more they feel threatened, Barrons argues. "If you are fighting the Chinese, as they arrive in Dover or on the shores of Taiwan you'll feel very differently about the risk of deploying automated weapons. Many objections to doing unpleasant things will fall away. We haven't had to think like that for some time, not since the end of the Cold War."
What worries military planners most is the prospect of automated systems being used as weapons of mass destruction by a state or terrorist group. Imagine a country unleashing millions of mini killer machines, each armed with a small explosive charge, into the enemy's cities to eliminate those selected by algorithms on the basis of their political beliefs, race or religion. This nightmarish prospect was highlighted by the 2017 YouTube video Slaughterbots. It showed swarms of small drones fitted with face-recognition systems and explosives being unleashed to seek out and kill selected individuals. Some of the mini drones are seen teaming up to blow a hole in a wall to gain access to targets in the US Congress.
"It was fictional," says Stuart Russell, a British artificial intelligence professor at the University of California, Berkeley, who created the film with funding from the Future of Life Institute, a group of scientists and technologists. "Our aim was to show what the logical consequences are of manufacturing weapons such as this." Now he feels vindicated. "Similar machines are already in use," he says, referring to the Libyan and Armenian battlefields.
Imagine what a well-resourced and genocidal dictator could do. "If a country is really serious there would be not much in the way of fielding hundreds of thousands and even millions of drones, I think that's a serious concern," Russell says.
He still hopes the spread of killer robots can be controlled. "People say, 'If we don't do it our enemies will.' It's a bit like the biological arms race in the 1960s and 1970s," Russell says. "But we stopped chemical weapons, we stopped biological weapons, we stopped landmines. I don't see why we shouldn't stop this."
Mary Wareham, arms advocacy director for Human Rights Watch, agrees. She says "delegating life-and-death decisions to machines on the battlefield is a step too far" that could result in "the further dehumanisation of warfare".
Yet the benefits of autonomous systems are so great that some powerful countries are resisting regulation on the grounds that competitors might cheat and gain an unbeatable advantage in war.
"There's absolutely no mileage in saying these things can't exist, because they do already," Barrons says. "And you can't stop people who want them from making them. You can't disinvent this stuff."
Other experts argue that working out where to draw the line between which robotic systems are allowed and which are not will prove impossible, especially since the technology is already being used in many aspects of modern military operations.
"Whether it's combat droids dogfighting far overhead or AI advising politicians and generals, we are on the cusp of artificially intelligent warfare," writes Kenneth Payne in I, Warbot, a recent book about military automation. The rapid development of AI military technology makes European and US security officials nervous about how countries involved in "existential" conflict (Israel, for example) might be prepared to apply "a whole raft of these technologies"; and how this might escalate tensions in the Middle East with catastrophic consequences. "It's a path where a lot of us don't want to go," an intelligence source says.
The new arms race has prompted an explosion of private sector research. Anduril Industries was founded in America in 2017 by Palmer Luckey, 29, a virtual reality headset designer and prominent Republican donor. Currently valued at US$4.6 billion, Anduril this year began supplying its Ghost drone to the UK's Royal Marines.
Chris Brose, the company's chief strategy officer, says the Ghost is used in conjunction with AI software called Lattice, which enables drones to "collaborate with each other, taking that cognitive burden off human operators" as they scan the battlefield for threats or "objects of interest" about which they have been programmed to alert their human masters. They can also be equipped with weapons and programmed to fire them automatically when they detect certain targets.
However, Brose believes "a lot of future warfare" will not be "kinetic" — a military euphemism for shooting. Just as important will be "the ability to scan the battlefield and conduct electronic warfare", jamming the other side's machines.
This is uncharted territory. The more machines are used in combat, the more war may become a contest between each side's robots. It could mean countries go to war more often — since machines are expendable — with the risk of provoking a wider conflagration. Or it could make war less ruinous in human terms.
Experts are sceptical, though, about the possibility of bloodless wars. "By that argument we could just settle all our disputes by playing tiddlywinks," Russell says. He warns that automation could make escalation more likely: machines might respond to a perceived attack with a real attack, setting off an all-out war. "We had multiple near misses with nuclear war, but fortunately there were humans in the loop somewhere making a decision."
Oliver Lewis, a former deputy director of the UK Government Digital Service and co-founder of Rebellion Defence, a company producing AI software for the military, also worries about accidental escalation. "You could get a situation where you have a machine written with one logic and another written with another logic which, regardless of human operators' intent, get locked into a death spiral, unable to de-escalate even if we intended them to."
He does not believe a ban or regulation will be agreed any time soon, making it all the more urgent for countries to work out how to de-escalate in what he refers to as the "likely" event of a high-stakes clash between rival automated systems in the near future.
So some humans are making a stand, a last-ditch defence against the machines. "This is about algorithms and computers making decisions about our lives," says Moyes of Stop Killer Robots. "We all have a stake in drawing a line here."
Written by: Matthew Campbell
© The Times of London