Tech Leaders, Including Elon Musk and DeepMind Founders, Pledge Not to Build Killer Robots

A new pledge seeking to mitigate the dangers of using AI in weaponry has been signed by some of the world's leading AI experts.

The Future of Life Institute, a research organization that aims to safeguard life, today released a pledge signed by some of the biggest names in artificial intelligence (AI), including Elon Musk and the three founders of Google AI firm DeepMind.

The pledge, published at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, saw the AI leaders commit not to develop "lethal autonomous weapons."

The dangers of AI in military use

The text highlighted the increasingly dangerous role of AI in military affairs and the resulting need to distinguish between acceptable and unacceptable uses of the technology. It also addressed the moral implications of letting machines make decisions about taking human lives, as well as the potential dangers of such powerful tools.

As such, the pledge called on members of every field that influences AI, from tech firms to policymakers, to join its mission. The text emphasized that this step was especially important given the current absence of legal measures to keep the industry in check.

"We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons," reads the pledge.

Along with Musk and DeepMind's founders, the signatories include Skype co-founder Jaan Tallinn and world-renowned AI researchers Stuart Russell, Yoshua Bengio and Jürgen Schmidhuber. The pledge was announced by Max Tegmark, a physics professor at the Massachusetts Institute of Technology and president of the Future of Life Institute.

"I'm excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect," Tegmark said. "AI has huge potential to help the world - if we stigmatise and prevent its abuse."

Previous letters expressed similar concerns

Similar open letters were introduced at the 2017 IJCAI conference in Melbourne and the 2015 conference in Buenos Aires. The 2015 letter was also endorsed by British physicist Stephen Hawking, Apple co-founder Steve Wozniak and cognitive scientist Noam Chomsky, among others.

Meanwhile, in December 2016, 123 member nations of the UN's Review Conference of the Convention on Certain Conventional Weapons agreed to begin discussions on autonomous weapons. Since then, 26 countries, including China, have announced support for some type of ban.

Previous international conventions have banned biological and chemical weapons, and the pledge's signatories hope their letter will encourage a similar international agreement on AI weapons.

"Lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security," states the pledge.