Killer Robots Could Cause Mass Atrocities
Robots will be used as soldiers in the future. In fact, the military is already deploying some robots and drones. Intelligent, autonomous robots programmed by humans to target and kill could commit crimes in the future, unless a treaty ensures that robotics and Artificial Intelligence (AI) are used responsibly.
Having the technology available to build robots that kill does not mean we have to use it. As Spider-Man's Uncle Ben once said, "with great power comes great responsibility."
Experts in Machine Learning and military technology say it would be technologically possible to build robots that decide whom to target and kill without a human controller involved. As facial recognition and decision-making algorithms grow more powerful, creating such robots will only get easier.
The Risks of Killer Robots
Researchers in AI and public policy are trying to make the case that killer robots are a bad idea in real life. The creation of fully autonomous weapons would bring new technical and moral dilemmas.
Scientists and activists, for this reason, have pushed the United Nations and world governments to acknowledge the problem and consider a preemptive ban.
Can AI be used as a weapon?
The short answer is yes. Like any other existing technology, Artificial Intelligence can be used for good, but also to kill. Facial recognition and object recognition have improved considerably over the last few years. They have become vastly more accurate, yet they are far from perfect.
Facial recognition and object recognition are likely to become essential parts of the toolkit for lethal autonomous weapons (LAWS). However, these technologies are also surprisingly easy to fool if one really wants to fool them, as the sketch below illustrates.
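To make that concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known way to fool an image classifier with a tiny, often invisible perturbation. It assumes PyTorch and torchvision are installed and uses an off-the-shelf ResNet-18; it illustrates the general fragility of recognition models, not any specific weapon system.

```python
# FGSM sketch: nudge each pixel slightly in the direction that
# increases the classifier's loss, so the prediction can flip while
# the image looks essentially unchanged to a human.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_perturb(image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    image: (1, 3, H, W) float tensor, normalized for the model.
    label: the class to move away from, as a (1,) long tensor.
    epsilon: maximum per-pixel change; small values are hard to see.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).detach()

# Usage with a random stand-in image (a real attack would use a photo):
x = torch.randn(1, 3, 224, 224)
y = model(x).argmax(dim=1)                # model's original prediction
x_adv = fgsm_perturb(x, y)
print(y.item(), model(x_adv).argmax(dim=1).item())  # often disagree
```

If a model this easy to mislead were wired to a trigger, a perturbation would not need to fool a human observer, only the algorithm.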
Military robots: Present and future

Military robots are remote-controlled or autonomous robots and drones designed for military applications. They can be used for transport, search and rescue, and also for attack, with the implication of killing humans and even destroying cities.
Some of these systems are currently deployed, while others are under development. Some military robots are developed under strict secrecy, to prevent others from learning of their existence.
The United States already flies military drones over areas where it is at war or engaged in military operations. So far, human controllers decide when these drones fire.
Although lethal autonomous weapons do not quite exist yet (or so we think), the technology to replace human controllers with an algorithm that decides when and what to shoot does exist. Some AI researchers believe that LAWS, in the form of small drones, could be used as weapons in less than two years.
While today's drones transmit a video feed to a military base, where a human soldier decides whether the drone should fire on the target, a fully autonomous weapon would cut the soldier out of that decision: an algorithm would make it instead. In software terms, the change is startlingly small, as the hypothetical sketch below shows.
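The sketch below is a deliberately simplified, entirely hypothetical illustration of that difference; the Target class, function names, and confidence threshold are invented for this article and do not describe any real military system.

```python
# Contrast: routing the fire decision through a human operator versus
# through a confidence threshold. Swapping one function for the other
# is trivial in code; the consequences are not.
from dataclasses import dataclass

@dataclass
class Target:
    confidence: float   # classifier's confidence this is a valid target
    description: str

def human_decides(target: Target) -> bool:
    """Human-in-the-loop: a soldier reviews the feed and decides."""
    answer = input(f"Engage {target.description} "
                   f"(confidence {target.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

def algorithm_decides(target: Target, threshold: float = 0.95) -> bool:
    """Fully autonomous: a numeric threshold replaces human judgement;
    this is the exact step critics argue must never be delegated."""
    return target.confidence >= threshold

target = Target(confidence=0.97, description="vehicle near checkpoint")
decision = algorithm_decides(target)   # vs. human_decides(target)
print("fire" if decision else "hold")
```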
Lethal autonomous weapons (LAWS): Are humans at risk?
The era of machine-driven warfare is not far into the future. The military has been experimenting with robots that can join the battlefield and be used as killer weapons. The wars of the future would then be more high-tech and less human. What consequences would this bring for humanity?
The risk posed by lethal autonomous weapons (LAWS), also known as killer robots, is real. Some Artificial Intelligence (AI) researchers have plenty of reasons to support the consensus that the world should ban the development and deployment of lethal autonomous weapons.

The reason is pretty simple: military powers could mass-produce an army of killer robots quite cheaply. Humanity, however, could pay a high price; the manufacture and activation of killer robots would increase the likelihood of proliferation and mass killing.
Killer robots: A human is always responsible for any action a robot takes
At this point, the question arises: who should be responsible for a robot's actions? And what roboethics should be applied to lethal autonomous robots? During a war, or even a smaller conflict, things can get out of hand. Should killer robots assume total control?
Robots meant to be used in armed conflict as mechanical weapons embedded with Artificial Intelligence and Machine Learning should have an on/off switch of some sort; a minimal sketch of the idea follows.
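In software, such a switch amounts to a control loop that checks for a shutdown signal on every iteration and halts the moment one arrives. The Channel class below is a hypothetical stand-in for a hardened command link; everything here is illustrative only.

```python
# A kill-switch pattern: the robot's main loop polls a command channel
# and stops all activity as soon as a shutdown is requested.
import time

class Channel:
    """Stand-in for a secure command link to the operator."""
    def __init__(self):
        self._shutdown = False
    def request_shutdown(self):
        self._shutdown = True
    def shutdown_requested(self) -> bool:
        return self._shutdown

def control_loop(channel: Channel, max_steps: int = 5):
    for step in range(max_steps):
        # Check the switch before doing anything else on each cycle.
        if channel.shutdown_requested():
            print("shutdown signal received; halting all actuators")
            return
        print(f"step {step}: normal operation")
        time.sleep(0.1)

channel = Channel()
channel.request_shutdown()   # the operator flips the switch
control_loop(channel)        # the loop halts before acting
```

The hard problems, of course, are not the loop itself but securing the link and guaranteeing the robot cannot be modified to ignore it.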
Today, Isaac Asimov's Three Laws of Robotics present roboticists with more problems and conflicts than they solve.
Roboticists, philosophers, and engineers are engaged in an ongoing debate about machine ethics. Machine ethics, or roboethics, is a practical proposal for how to simultaneously engineer and provide ethical sanctions for robots.
Roboethics deals with the code of conduct that robot design engineers must implement in a robot's Artificial Intelligence. Who or what is going to be held responsible when, or if, an autonomous system malfunctions or harms humans?
The Three Laws of Robotics: Should they be applied to killer robots?
In 1942, science fiction writer Isaac Asimov introduced the Three Laws of Robotics, also known as Asimov's Laws, in his short story Runaround. Even though the Three Laws were part of a fictional world, they could be a good starting point for robot programming today.
If governments ever deploy lethal autonomous weapons, they should first make sure they can prevent killer robots designed for war zones from starting a war by themselves or causing mass atrocities, such as killing civilians.
Some may think that the following Laws of Robotics go against what a soldier should do. Perhaps that is the main point after all: perhaps a human should not hand the dirty work to a machine that is not yet able to assess individual situations and make a judgement. A toy software encoding of the Laws follows the list below.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
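As a thought experiment, the Three Laws can be written as an ordered rule check, as in the sketch below. The predicates (harms_human and the rest) are placeholders invented here; the fact that no real system can evaluate "harm" reliably is precisely why roboticists find the Laws under-specified as engineering.

```python
# Asimov's Three Laws as an ordered rule check. The boolean fields
# hide all the hard questions: deciding whether an action "harms a
# human" is exactly what machines cannot yet judge.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would this action injure a human?
    ordered_by_human: bool   # was it commanded by a human operator?
    protects_self: bool      # does it preserve the robot?

def permitted(action: Action) -> bool:
    # First Law: never harm a human (harm through inaction is not
    # modeled in this toy version).
    if action.harms_human:
        return False
    # Second Law: obey human orders that pass the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation only as a fallback.
    return action.protects_self

# A lawful order to fire on a person fails the First Law check:
print(permitted(Action(True, True, False)))    # False
print(permitted(Action(False, True, False)))   # True
```

Note that a killer robot obeying the First Law could never fire on a person at all, which is exactly the tension the article points to.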
Stop Killer Robots
The Campaign to Stop Killer Robots is a coalition of non-governmental organizations that seeks to preemptively ban lethal autonomous weapons (LAWS).
The United Nations (UN) Secretary-General has urged states to move expeditiously to address concerns over lethal autonomous weapon systems in his 2020 Report on the Protection of Civilians in Armed Conflict. This is the first UN protection of civilians report since 2013 to highlight general concerns over killer robots.
In the 2020 report, Secretary-General António Guterres finds that “all sides appear to be in agreement that, at a minimum, retention of human control or judgement over the use of force is necessary.” He also notes that “a growing number of Member States have called for a prohibition of LAWS.” Since November 2018, the UN Secretary-General has repeatedly expressed his desire for a new international treaty to ban killer robots.
The Campaign to Stop Killer Robots has commended the United Nations for urging states to agree on "limitations and obligations that should be applied to autonomy in weapons." States should launch negotiations now on a new international treaty to prohibit fully autonomous weapons while retaining meaningful human control over the use of force.
How to kill a robot
Most likely, manufacturers of lethal autonomous robots would ship their killer robots with an instruction manual offering an option to remotely insert a security code that deactivates the robot.
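If such a deactivation code existed, it would presumably be verified cryptographically rather than compared as plain text. The sketch below, using only Python's standard library, shows one plausible scheme: a per-robot code derived with an HMAC and checked with a constant-time comparison. The shared key and message format are assumptions for illustration.

```python
# Hypothetical deactivation-code check: derive a per-robot code from a
# factory-provisioned secret, and compare in constant time so the code
# cannot be guessed byte-by-byte via timing.
import hashlib
import hmac

SHARED_KEY = b"factory-provisioned-secret"   # hypothetical shared key

def expected_code(robot_id: str) -> bytes:
    """Derive the deactivation code for one specific robot."""
    return hmac.new(SHARED_KEY, robot_id.encode(), hashlib.sha256).digest()

def verify_deactivation(robot_id: str, presented_code: bytes) -> bool:
    """Constant-time comparison resists timing attacks on the code."""
    return hmac.compare_digest(expected_code(robot_id), presented_code)

# Usage: the operator transmits the code; the robot halts on a match.
code = expected_code("unit-042")
print(verify_deactivation("unit-042", code))          # True: deactivate
print(verify_deactivation("unit-042", b"\x00" * 32))  # False: ignore
```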
If that is not an option, a reliable way to kill a robot would be to use an electromagnetic pulse (EMP) to induce a current high enough to burn out the robot's circuitry, assuming, of course, that the killer robot is not protected by a Faraday cage.