More than 1,000 leading scientists have signed an open letter warning against an AI arms race. It is only a "matter of time," they say, before dictators and warlords also get the technology.
How intelligent is Artificial Intelligence?
Stephen Hawking can draw attention to pretty much any scientific debate. And the debate over the safety of Artificial Intelligence is hardly a new one. So why the sudden resurgence?
Image: "Cheetah Robot image courtesy of Boston Dynamics
More dangerous than the atomic bomb?
Silicon Valley icon Elon Musk, founder of SpaceX and Tesla Motors, is famous for his warnings about Artificial Intelligence. Last summer he declared that AI was the greatest conceivable threat to our existence. Stephen Hawking isn't new to the discussion either, having famously called it the "worst mistake ever made."
Hysteria exaggerated?
Aren't AI robots more helpful than harmful, as in the recent Hollywood film "Chappie"? In it, a reprogramming gives the robot feelings and thoughts, and he helps humanity against an aggressive robotic police force.
High-speed drone flop
Recent events have shown, however, that not every instance of AI is without fault. On a test flight in the summer of 2011, the unmanned US military drone Falcon HTV-2 took just nine minutes to plunge into the ocean.
Not really all that new
Despite the renewed debate, AI in military systems is nothing new. For over two decades, machines and robotic components have been advancing military systems. One prime example: the Eurofighter.
Sci-Fi meets reality
Intelligent machines are getting more and more advanced - and, in many cases, operational. The four-legged robot BigDog can haul cargo over off-road terrain, ice and snow. Its developer, Boston Dynamics, was bought by Google.
This week's letter has made clear that the AI community is seeking ethical guidelines, and even political regulation, to set standards for how machines may be programmed. Only that, the signatories argue, can prevent the abuse of Artificial Intelligence - and keep it from getting out of control.
A group of scientists, philosophers and technology experts including Stephen Hawking and Apple co-founder Steve Wozniak issued a stern warning on Tuesday against a global arms race of "killer robots," or weaponized artificial intelligence (AI).
While proponents argue that using robotics in the arms industry can save human lives on the battlefield, the letter signed by the scientists and presented at the 2015 International Joint Conference on Artificial Intelligence in Buenos Aires painted a much bleaker picture. Not only could the weapons easily fall into the hands of dictators or warlords and be used to assist in atrocities like ethnic cleansing; their components also make them much easier to produce than nuclear weapons.
"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting," read the letter signed by around 1,000 global tech chiefs, adding that if any military pushes ahead with developing AI weapons, such as arms race is "virtually inevitable."
"It will only be a matter of time until they appear on the black market, and in the hands of terrorists, dictators wishing to better control their populace," the scientists cautioned.
"There are man ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people," the letter concluded.
Elon Musk, the co-founder of PayPal and CEO of Tesla Motors, urged the public to sign up to the campaign, tweeting: "If you're against a military AI arms race, please sign this open letter."