Why Are We So Scared of Robots? 15 Experts Weigh in on What the Real Dangers Are
Whether you're excited about it or not, robots and artificial intelligence are an inevitable part of a future that we're fast approaching. Thanks to books and films littered with rogue A.I.s and malevolent robots, some people are understandably a bit frightened by the prospect of a world overrun by such technologies.
They're not alone, as many experts across fields as diverse as technology and economics are also expressing their fears over the rise of the robots. While these fears are certainly valid, it's important to note that these concerns are being voiced in the hopes that technology can be improved, not prohibited.
Here are just some of the pressing concerns regarding robots and A.I.s that experts want to see addressed before the new era of technology commences.
1. "I think autonomous weapons are extremely scary."- Jeff Bezos
Amazon boss, Jeff Bezos, has made his point clear when it comes to the dangers of automation. While he doesn't believe that automation presents any threat to human jobs or lives, he's less confident about autonomous weapons.
During last month's George W. Bush Presidential Forum on Leadership, Bezos suggested that there would need to be international treaties governing the use of autonomous weapons. These treaties, he believes, would regulate the use of such weapons and prevent hacking and misuse of technology.
2. “Smart devices right now have the ability to communicate and although we think we can monitor them, we have no way of knowing." - Professor Kevin Warwick
Deputy Vice Chancellor of Research at Coventry University, and so-called "cyborg professor", Kevin Warwick has spoken of his concern regarding communication between A.I. When, in 2017, two bots created by Facebook developed their own language and communicated in a way that was indecipherable to their creators, worries over A.I. communication began to surface.
While Warwick noted the significance of such an event, he was quick to warn others of the further implications of unmonitored communication that evolves independently beyond the scope of scientists and engineers.
Facebook evidently agreed, and shut down the experiment as soon as they realized the bots were conversing in a way that kept humans out of the loop.
To ensure the safe use of A.I., human monitoring of their interactions is essential, and is something many scientists will now be wary of moving forward.
3. "[A.I.] can make unfair and discriminatory decisions, replicate or develop biases, and behave in inscrutable and unexpected ways in highly sensitive environments that put human interests and safety at risk." - Sandra Wachter, Brent Mittelstadt, and Luciano Floridi
A 2017 paper by researchers Sandra Wachter, Brent Mittelstadt, and Luciano Floridi warned that one of the biggest concerns regarding A.I. isn't actually the technology itself, but the biases we transfer onto it. They suggested that in order to create technology that could adequately serve and protect all of mankind, it would have to be free from the biases we possess as humans.
They voiced their concerns regarding security robots that can report incidents to the authorities, and whether or not these robots will be programmed with the same racial biases we see across some aspects of human law enforcement. In order to create safe technology, we first have to examine our own social ills, lest we pass them onto our machines.
4. "The development of full artificial intelligence could spell the end of the human race." - Stephen Hawking
The late Stephen Hawking spoke openly and frequently about his fears surrounding advancements in robotic and A.I. technologies throughout his lifetime. He often expressed his belief that A.I. would eventually become so advanced that it would replace all human life.
It was his belief that A.I. would eventually reach a point of sophistication where it could update itself and evolve without human interference. While we're a long way from systems that can manage this level of intelligence, it's a worthy consideration for the engineers who are creating the technologies of tomorrow.
5. "AI is a fundamental risk to the existence of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not. " - Elon Musk
While Elon Musk is inarguably at the forefront of many cutting-edge and innovative technologies, that doesn't mean he doesn't have his own fears about the advancements of A.I. and autonomous tech. In fact, he believes that A.I. could present a very real threat to the continued survival of the human race.
As recently as last month, he warned that astronomical advancements in A.I. could see human beings enslaved by machines in the future. In the documentary Do You Trust This Computer?, Musk even went so far as to say he believed superintelligent machines would emerge within the next five years. Hopefully, the concerns of Musk, and others, will be addressed, and we won't have to worry about any Skynet-esque situations any time soon.
6. "I'm most worried not about smart AI but stupid AI." - Toby Walsh
Last year saw University of New South Wales Professor of A.I., Toby Walsh, sign a petition to the U.N. calling for a ban on "killer robots". The petition was also signed by the likes of Elon Musk, Mustafa Suleyman of Google's DeepMind, and other scientists and academics from around the world.
Despite his support of the petition, Walsh has clarified that his true fear is not of superintelligent A.I. but "stupid A.I." that is wielded without conscience or consideration of consequences.
He believes that if we are to continue in our race towards better robotics and A.I., it is imperative that we consider every eventuality and plan accordingly to ensure the safety of humans.
7. "It’s important that we be transparent about the training data that we are using, and are looking for hidden biases in it, otherwise we are building biased systems." - John Giannandrea
Another expert warning of the effect of human biases on tomorrow's technology is Google's John Giannandrea. He's just one of a steadily growing faction of scientists, academics, and engineers who are fearful of the biases we're programming into our tech.
In particular, Giannandrea is calling for full transparency in intelligent technologies to be used in medical decision making. He warns people not to trust smart black-box technologies without full disclosure of how they work or the deep-learning systems behind them.
8. "“The problem isn’t consciousness, but competence. You make machines that are incredibly competent at achieving objectives and they will cause accidents in trying to achieve those objectives.” - Stuart Russell
Vice chair of the World Economic Forum Council on robotics and artificial intelligence, Stuart Russell, believes that one of the biggest issues we must solve when it comes to A.I. and autonomous technologies is how goal-driven they are, and what they are capable of doing in order to accomplish that goal.
In 2016 he illustrated the issue using the example of an autonomous car stuck at a red light. If the car can hack into the traffic system and turn the light green, allowing it to complete its goal, it could potentially put lives at risk.
For Russell, and many others, it's not enough merely to ensure that robots will protect human lives directly - we must teach them to protect lives indirectly too. At present, the strict goal-driven nature of programming has a multitude of blindspots, which could pave the way for risks to human lives if left unchecked.
9. "Our research shows proof that even non-military robots could be weaponized to cause harm." - Cesar Cerrudo and Lucas Apa
Researchers from cybersecurity firm IOActive warned manufacturers of home security technologies last year of the vulnerabilities in their systems. The researchers, Cesar Cerrudo and Lucas Apa, went as far as to suggest that supposedly harmless robots in the home could be weaponized by hackers.
They're calling for an increase in security in home robotics and A.I.s before they become commonplace, otherwise homes across the world could be making themselves vulnerable to attack. The issue doesn't just affect homes, however. Industrial robots in factories have also been found to be vulnerable to hacking, meaning production lines and the quality of their outputs could be greatly compromised.
10. "Advanced optimization techniques and predictable patterns in the behavior of automated trading strategies could be used by insiders or by cyber-criminals to manipulate market prices.” - Mark Carney
One industry that typically isn't linked to robots is the financial sector, but leading figures in finance are speaking up now about their fears over automation. Bank of England Governor, and chair of the Financial Stability Board, Mark Carney, has warned that automation in the financial sector could allow hackers to tamper with the economy.
It's estimated that by 2025, 230,000 workers across global financial firms will lose their jobs to A.I. Carney and the FSB, however, warn that depending on new technologies to this degree could make the global financial system vulnerable to cyber attacks.
11. "It is really bad if people overall have more fear about what innovation is going to do than they have enthusiasm." - Bill Gates
Microsoft founder, Bill Gates, seems to be taking the position that we have nothing to fear but fear itself. Though Gates has agreed with many of Elon Musk's concerns surrounding robots and A.I., he has also voiced his opinion that people need to approach advancements in technology with enthusiasm, not outright fear.
While Gates admits to his fears over superintelligence, he seems to believe that if we plan accordingly and address these fears in advance, humankind will have nothing to worry about.
12. "Catastrophic distortion of the economy by artificial intelligences designed to make money for their owners." - Dr. Andras Kornai and Dr. Daniel Berleant
Last year TechEmergence conducted a survey of leading researchers and scientists in the fields of robotics and A.I. to find out what they believed were the biggest fears the public should be aware of. Doctors Andras Kornai and Daniel Berleant both pointed to vulnerabilities in the financial sector as a legitimate worry going unacknowledged by many.
Kornai worried that financial algorithms don't have the interests of humans in mind, as they are instead programmed to protect and increase profit at all costs. Berleant answered similarly, worrying that A.I. could be exploited to increase the wealth of a privileged few at the expense of the many.
13. "Unless we address the challenges of automation, social mobility could be further set back." - Sir Peter Lampl
Kornai and Berleant aren't the only ones who have pointed out how advancements in technology could further separate the haves from the have-nots. Sir Peter Lampl, of the Sutton Trust and of the Education Endowment Foundation, has pointed out that automation is a threat to working class manual laborers more than any other group.
He believes that automation could widen the gap between the upper and lower classes, as manual work is taken over by machines, leaving an entire class without the requisite tools to earn a living. He's calling for greater investment in "soft skills" like communication, which will set human labor apart from its robotic competitors and make human workers more valuable.
14. "Advances in artificial intelligence are going to create certain kinds of social problems or make them worse." - Jerry Kaplan
Bestselling author and A.I. expert, Jerry Kaplan, believes that advancements in technology will force us to examine pre-existing issues in our society, and for good reason. If technology is to be used responsibly and safely, every precaution has to be taken to prevent it from being exploited by those wishing to harm others.
In this regard, Kaplan views A.I. and robots not as a threat in and of themselves, but as a potentially dangerous tool that could be wielded by criminals and others. The only way to prevent this, according to Kaplan, is to address the underlying issues in our society.
15. "The more powerful the robot is, the higher the stakes are. If robots in the future have autonomy...that's a recipe for disaster." - Selmer Bringsjord
Selmer Bringsjord, a scientist from the Rensselaer Polytechnic Institute, is concerned both by a machine's ability to do harm, and the ability of humans to program a machine to do harm. It's certainly a difficult conundrum.
As expressed by other experts in this article, it's essential to safeguard against autonomous machines inadvertently harming humans in their attempts to fulfill their goals. Equally, we must determine that machines have no vulnerabilities that can be exploited or corrupted. Naturally, we have a long way to go before all of these issues are addressed and laid to rest. Until then, it's important for leaders in the fields of research, science, and technology to keep speaking about their concerns and suggesting ways to improve future technologies.