116 Specialists Including Elon Musk Call For Complete Ban on Killer Robots
First, Tesla CEO Elon Musk tweeted that artificial intelligence poses a bigger risk than a volatile North Korea. Now, Musk and other tech industry leaders are taking action: leading robotics and AI researchers want the United Nations to ban killer robots now -- before it's too late.
[Image Source: TerminatorWiki]
The fight is championed by Musk and Mustafa Suleyman, an executive at Alphabet (Google's parent company). In total, 116 specialists are calling for the ban, spanning 26 countries and a variety of disciplines within robotics and AI.
In the letter, the signers said: "Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."
"We do not have long to act. Once this Pandora’s box is opened, it will be hard to close."
If this sounds strange, it should. Musk has championed some of the biggest AI projects of recent years. Teslas run on a rapidly learning computer system (though one far from becoming the Terminator), and Suleyman co-founded DeepMind, the Google-owned company behind some of the most impressive AI systems in existence.
However, that doesn't mean these men don't put safety first. Recently, Suleyman's and Musk's AI ventures -- DeepMind and OpenAI, respectively -- partnered to research ways of making AI safer. Nor is this the first time Musk has voiced hesitancy about AI. In 2015, he joined physicist Stephen Hawking and Apple co-founder Steve Wozniak in calling for a ban on machinery that would kill without direct human command -- in short, autonomous killing machines. Musk even engaged in a Twitter spat with Facebook founder and CEO Mark Zuckerberg, saying, "His understanding of the subject is limited."
The letter serves as a focal point for discussion at the International Joint Conference on Artificial Intelligence (IJCAI), hosted this year in Melbourne and one of the leading forums for debating issues surrounding AI.
Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, noted in an interview with the Guardian that technology -- including AI -- is dual-use.
"Nearly every technology can be used for good and bad, and artificial intelligence is no different. It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis.
"However, the same technology can also be used in autonomous weapons to industrialize war. We need to make decisions today choosing which of these futures we want."
Ryan Gariepy, founder of Clearpath Robotics, said the danger of AI lies less in distant science fiction than in the weapons systems being developed right now.
“Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.”