Conflicts

Should 'killer robots' be banned?

August 27, 2018

In wars of the future, life-or-death decisions may be made by machines — independent of human control. The development of autonomous weapons is moving fast. The UN in Geneva is trying to get countries to agree on a ban.

Robot Method II standing in a hall in South Korea
Image: picture-alliance/NurPhoto/Seung-il Ryu

Noisy mini-drones buzz through a lecture hall, home in on specific students and shoot them in the head. This is a scene from a fictional film posted on the video portal YouTube by opponents of autonomous weapons. Each smaller than a saucer, the drones use algorithms to identify their victims. Once they've locked on, the target cannot escape. The mini-drones gather their data from social media and are networked in swarms. In the fictitious story, they kill only people who've shared a certain critical video.

Just science fiction?

This alarming short, with the title "Slaughterbots," has had more than 2.5 million views since it was posted in November 2017. So is this just science fiction, alarmist paranoia? Not at all, says Thomas Küchenmeister, a campaigner for a ban on autonomous weapons. "It's only a small step away," he says.

Küchenmeister is the managing director of Facing Finance, a German organization that's part of the international campaign "Stop Killer Robots." He often visits weapons fairs to look at products and talk to manufacturers. Weapons with a certain degree of autonomy are already part of manufacturers' standard ranges, such as rockets that independently seek out possible targets and ultimately decide which ones to destroy. In principle, they're still operated by humans, but a soldier no longer gives the actual order to fire. Often, no soldier can stop them, either.


Conflict with international law

Küchenmeister sees this as "highly problematic," because "such a weapon cannot make a fine distinction between military and civilian vehicles" — yet this is required under international humanitarian law. Distinguishing between combatants and civilians is one of the most important rules of the "jus in bello," the law governing the conduct of warfare. It obliges the warring parties to afford civilians and civilian buildings the greatest possible protection.

With this in mind, the International Committee of the Red Cross (ICRC) defines autonomous weapons as weapons with autonomy in their critical functions of selecting and attacking targets. Autonomy here means operating without human intervention, and this is precisely the problem. What if the autonomous target selection on such a rocket not only destroys enemy missiles but also kills civilians? "We can't equip these weapons with a chip for international humanitarian law," Küchenmeister warns.

The unmanned aircraft 'Taranis' from British arms company BAE Systems has autonomous functions
Image: picture-alliance/AP Photo/BAE Systems

AI in the weapons industry

The degree of autonomy in weapons systems is steadily increasing, thanks to rapid progress in artificial intelligence (AI) and robotics. Machines are now capable of learning; they process experience by means of artificial neural networks loosely modeled on the human brain. The arms industry is making use of this. Weapons are becoming faster and more efficient, while the danger to the soldiers using them decreases. This is precisely what armies want. However, the boundaries are fluid: a robot that autonomously seeks, recognizes and defuses mines may be generally accepted, while a robot that autonomously seeks, recognizes and shoots people clearly contravenes international humanitarian law.

Negotiations under time pressure

But how do you apply international law to these new weapons? The international community has been arguing about this at UN headquarters in Geneva since 2014. The discussions are taking place within the framework of the UN Convention on Certain Conventional Weapons, or CCW. Initial informal discussions became official negotiations in 2017, with more than 70 countries participating, as well as scientists and NGOs.

From 27 to 31 August 2018, the topic under discussion is lethal autonomous weapons systems, LAWS for short. These could be robots that fight each other on the battlefield, for example. The killer drones from the video are also covered by this description. They don't exist yet, but they will in the near future.

There's little time left, yet the negotiations are not making progress. Broadly speaking, the international community is divided into three camps: opponents of a binding prohibition on autonomous weapons, supporters of one, and countries such as Germany and France that want to start off by finding a middle way.

Drones combined with AI could become dangerous weapons, for terrorists as well
Image: Imago/ITAR-TASS

AI rules the world

Countries that are investing a lot of money in the military use of artificial intelligence, such as the United States, Israel, Russia and Britain, are arguing against a ban. The essence of Russian President Vladimir Putin's message to a group of schoolchildren in September 2017 was that whoever leads the field in artificial intelligence rules the world. Putin worries that the US or China may gain supremacy. For some time now, progress in the field of artificial intelligence has been driving an arms race for the "cleverest" autonomous weapons.

The US government is currently massively increasing its defense budget. During the last round of negotiations in Geneva in April, it even presented autonomous weapons in a positive light, saying that they helped prevent the killing of civilians and "collateral damage" in wars. It argued that whereas a soldier is easily overwhelmed by the large amount of information on the battlefield, a computer maintains an overview and makes fewer mistakes. The US delegation explicitly warned against stigmatizing these weapons.

In favor of a ban

So far, 26 countries have called for a mandatory ban on autonomous weapons, to enthusiastic applause from civil society. More than 230 organizations and 3,000 individuals have signed a petition against autonomous weapons initiated by the American Future of Life Institute. They include leading researchers and business figures from the artificial intelligence sector, such as Tesla boss Elon Musk and the company Google DeepMind. The signatories have pledged not to support the development of lethal autonomous weapons systems and are calling for "strict international norms and laws" to ban them. They say the decision to kill a human being should never be delegated to a machine.

The international campaign 'Stop Killer Robots' has called for a ban on autonomous weapons since 2013
Image: Getty Images/AFP/C. Court

Germany: Yes, but not right now

In its coalition agreement, the German government has pledged to support a "worldwide proscription" of autonomous weapons. However, it believes that aiming directly for a ban at the talks in Geneva would be a tactical mistake, because opinions differ too widely.

Germany, along with France, is therefore opting for a middle way. The German government suggests a political declaration should be the first step, followed by a military code of conduct, with an agreement on a ban only as the final step. German diplomats believe this multistage approach has a chance of bridging the divides. The hope is that opponents of a ban will sign up to a nonbinding political declaration. Once certain standards have been established, they believe, it will be easier to take the next step towards a mandatory ban under international law.

The activists from the "Stop Killer Robots" campaign don't share this view. They're demanding that Germany, as a big and important European country, take on a leadership role and support an immediate ban. Thomas Küchenmeister believes other countries would then join in. "If the German government wants these weapons proscribed, it should show this — and that means taking responsibility."

Austria and Belgium have already come out in favor of a ban
Image: imago/ITAR-TASS

Time is running out

On one thing, though, both diplomats and activists agree: time is running out. This is also the view of the renowned American computer scientist Stuart Russell, who has been researching artificial intelligence for 35 years. He was the one who had the idea for the film "Slaughterbots." A scenario like this can still be stopped, Russell warns, "but the window is closing fast." Armies are already experimenting with swarms of drones, while others, including the German Bundeswehr, are developing weapons to defend against them. If the discussions in Geneva fail to make progress, Küchenmeister predicts that pressure from civil society will grow. An agreement banning autonomous weapons could then be reached outside the UN, as happened in the fight against landmines. The international campaign to outlaw landmines was awarded the Nobel Peace Prize in 1997.

