Concerns about the potential of turning autonomous robots into deadly weapons are a case of too little too late, according to experts in the field. Leading figures in the Artificial Intelligence (AI) and robotics industry have joined forces to demand an outright ban on ‘killer robots’. The open letter to the UN is signed by Tesla and OpenAI CEO Elon Musk as well as 115 other experts including Alphabet's Mustafa Suleyman.
The group writes, “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
But observers have labeled the letter naive and pointed to examples where defense and robotics industries are already working hand in hand.
Taranis [Image Source: BAE Systems]
Examples include the Taranis, an unmanned combat aircraft system built by BAE Systems and others, as well as the autonomous SGR-A1 sentry gun made by Samsung and deployed along the South Korean border. The sentry gun is equipped with a machine gun and uses heat and motion sensors to find targets as far as 2 miles away.
The answer to this issue is not easy. Defining terms remains a massive problem within the robotics and AI fields. The confusion was well illustrated by the social media stoush between Musk and Facebook CEO Mark Zuckerberg over their differing understandings of AI.
Britain's Ministry of Defence said it would not support a pre-emptive ban on killer robots; while it currently has no plans to build autonomous weapons, it opposes a ban nonetheless. A Ministry spokesperson said, “It’s right that our weapons are operated by real people capable of making complex decisions and, even as they become increasingly high-tech, they will always be under human control.”
A discussion about weapons cannot happen without a broader discussion about the commodification of war generally. There is a fine line between private defense contractors and governments, one that has been blurred more than once in the history of humankind. Would it be possible to change the conversation from questioning the need for deadly robots to questioning the need for warfare generally? While this too is admittedly a naive response, the debate needs to be broad and open to find any answers at all.
On one hand, it can be argued that if there is to be combat, wouldn’t it be better for technology to do the killing and being killed rather than human soldiers? Could this even make warfare a ‘fairer’ fight? Robots might be less likely to hit the wrong target, and less at the mercy of fear, adrenaline and shock when making strategic decisions.
A Human Rights Watch report on killer robots, released in 2012, sees it the other way. “Distinguishing between a fearful civilian and a threatening enemy combatant requires a soldier to understand the intentions behind a human’s actions, something a robot could not do,” the report says, going on to note that “robots would not be restrained by human emotions and the capacity for compassion, which can provide an important check on the killing of civilians”.
No matter your view, this is an essential debate for our time.