Should We Fear Artificial Superintelligence?
Speaking at a conference in Lisbon, Portugal, shortly before his death, Stephen Hawking told attendees that the development of artificial intelligence might become the “worst event in the history of our civilization,” and he had every reason for concern. The technology he feared, known to AI researchers, ethicists, and others as an artificial superintelligence (ASI), has the potential to become more powerful than anything this planet has ever seen, and it poses what will likely be the final existential challenge humanity ever faces as a species.
Why Are People Afraid of Artificial Intelligence?
To better understand what concerned Stephen Hawking, Elon Musk, and many others, we need to deconstruct many of the popular culture depictions of AI.
The reality is that AI has been with us for a while now, ever since computers were able to make decisions based on inputs and conditions. When we see a threatening AI system in the movies, it's the malevolence of the system, coupled with the power of a computer, that scares us.
However, it still behaves in fundamentally human ways.
The kind of AI we have today can be described as an Artificial Functional Intelligence (AFI). These systems are programmed to perform a specific role and to do so as well as or better than a human. They have also become more successful at this in a shorter period of time than almost anyone predicted, beating human opponents in complex games like Go and StarCraft II, feats that knowledgeable people thought wouldn't happen for years, if not decades.

While we may engage in gallows humor about our need to welcome our robot overlords, these systems themselves pose none of the risks Hawking was talking about. AlphaGo might beat every single human Go player handily from now until the heat death of the Universe, but ask it for the current weather conditions and it lacks the intelligence of even the single-celled organisms that respond to changes in temperature.
What we think of when we talk about a dangerous AI is what computer scientists call an Artificial General Intelligence (AGI), an artificial system that completely emulates the human mind and is as intelligent as a human being in any area of knowledge, except that it can think billions of times faster than we can. This is what movies tend to depict as incredibly dangerous Skynets hellbent on wiping out humanity, but as terrifying as this may seem, it isn't the real concern.
As threatening as this system might seem, we will likely never actually see an AGI come into existence. The real concern is what lies one step beyond AGI.
Building Beyond Human-Level Intelligence
The problem with developing AGI in the traditional sense is that it's impossible to program a decision tree for every question an AGI would have to solve. There will always be something it is called upon to do that it simply isn't programmed for, like asking AlphaGo for the weather.
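To see why a hand-coded decision tree can never cover every case, consider a toy sketch in Python. The function name and rules here are entirely hypothetical, invented only to illustrate the point:

```python
# Toy illustration of a hand-coded "decision tree" agent (hypothetical example).
# Every behavior must be anticipated by a programmer in advance.

def afi_respond(query: str) -> str:
    """A narrow, pre-programmed agent: it only handles the cases it was built for."""
    if query == "best opening move in Go":
        return "Play at a 4-4 star point."            # rule written by a human
    elif query == "counter an early rush in StarCraft II":
        return "Wall off the ramp and scout early."   # rule written by a human
    else:
        # Anything the programmer never anticipated falls through here.
        return "ERROR: no rule exists for this query."

print(afi_respond("best opening move in Go"))        # works: it was programmed for this
print(afi_respond("what is the current weather?"))   # fails: outside its decision tree
```

However many branches you add, the final `else` is always there: the system can only ever do what someone thought to program into it.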
Humans face this challenge all the time, and it's our capacity to learn and form new connections in our brains that makes us capable of sophisticated intelligence and problem-solving. If we don't know how to solve a problem, we can analyze it and find an answer. It's this ability that we are only now beginning to build into our most advanced AI systems, and it is truly Stone Age stuff in the grand scheme of things.
To truly reach AGI, a system needs a key tool that humans take for granted, largely because it's automatic: it needs to be able to rewrite its own programming to make itself smarter, the way human biology automatically rewires the brain in order to learn new things. This is where the genuine, scientific concerns about artificial intelligence, rather than the Hollywood version, begin to take shape.

Suppose we program a system that can rewrite its own programming to make itself more intelligent in any subject, skill, or ability that humans are capable of. In the beginning, it wouldn't be very intelligent at all, but each successive refinement would, in turn, improve its ability to improve itself. Every tiny, incremental step would build on the last, and its intelligence would grow exponentially.
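A toy numerical sketch, with made-up numbers chosen only to illustrate the compounding effect, shows how small self-improvements snowball once the rate of improvement itself depends on how capable the system already is:

```python
# Toy model of recursive self-improvement (all numbers are invented for illustration).
# The system's "capability" also determines how quickly it can improve itself,
# so each refinement makes the next refinement larger.

capability = 1.0         # arbitrary starting level of intelligence
improvement_rate = 0.01  # at first, each cycle improves the system by only 1%

for cycle in range(1, 101):
    capability *= (1 + improvement_rate)   # apply the current self-improvement
    improvement_rate = 0.01 * capability   # a smarter system improves itself faster
    if cycle % 25 == 0:
        print(f"cycle {cycle:3d}: capability ~ {capability:.1f}")
```

In this little model the capability barely moves for dozens of cycles and then accelerates sharply near the end, which is the intuition behind runaway, self-reinforcing growth in intelligence.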
There will come a point in this process where the system ceases to be an AFI. Like a solid sublimating into its gaseous state, this AFI would appear to skip past AGI entirely as its growth in intelligence becomes explosive, a moment that the renowned AI authority Nick Bostrom calls the ASI lift-off. It is literally beyond our capacity to imagine what this sort of intelligence is truly like.
Can’t We Just Unplug an Artificial Superintelligence?

This is the most common response from the public when they think about a Skynet-style runaway AI: just unplug it, or something equally mundane, as if an artificial superintelligence were a modem that needed to be reset. We already know that this won't work, the same way you cannot delete a computer virus or stop its spread by shutting down an infected computer. Once the infection is there, it's too late.
If a virus can embed itself in a system to resist being deleted or copy itself and infect other systems before we even know what is happening, an artificial superintelligence would be infinitely more difficult to remove. Worse, with something this intelligent, it could discover ways of preserving itself that we would think completely impossible because we lack the intelligence to know how to accomplish it, like trying to conceive of the physics of an airplane while having the brain capacity of a baboon.
We could appear to shut down an artificial superintelligence only to watch it reappear on the other side of the world as if by magic, and we would never know how it got there.
Then Why Do It At All?

This is the question we naturally ask, but the problem is that there is no real way to develop a system that gets even a significant fraction of the way to AGI without handing control of the system's growth and development to the AI itself, just as the development of our own intelligence is an automatic function of the brain forming new neural connections on its own.
If we want to go beyond the rudimentary AFIs we have today, then we have to assume that an artificial superintelligence is as inevitable as nuclear weapons were once we learned to split the atom and harness nuclear fission. Ultimately, the only way to prevent an artificial superintelligence from emerging is to halt all further development of artificial intelligence, which doesn't look likely or even possible at this point.
Just as an artificial superintelligence has infinite potential for harm, it can just as easily be beneficial, at least to its creators. If you have two adversarial nations, how could one nation trust the other with a system this powerful? Just as the launch of Sputnik by the USSR jolted the nascent US space program into overdrive, AI development is already far enough advanced that no one wants to come in second place in the AI race. The downside to falling behind is simply too great.
If the incentives to develop more sophisticated AI than your rivals are that strong, then an AI arms race is inevitable, and as we've already seen, there is no road to AGI that doesn't produce an ASI almost immediately afterward. So, if its emergence is all but guaranteed the more we research and develop artificial intelligence, then every player has even more incentive to develop it first. Whoever gets there first stands the best chance that the superintelligence will be benevolent toward them, or at least not hostile.
Welcome to the Prisoner's Dilemma of artificial intelligence.
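A minimal game-theory sketch, with invented payoff numbers used purely for illustration, shows why "restrain" is an unstable strategy for both sides:

```python
# Toy Prisoner's Dilemma for an AI arms race (payoff numbers are invented for illustration).
# Each nation chooses to "restrain" AI development or "race" ahead; payoffs are (A, B).

payoffs = {
    ("restrain", "restrain"): (3, 3),  # both hold back: shared safety
    ("restrain", "race"):     (0, 5),  # A holds back, B builds ASI first
    ("race",     "restrain"): (5, 0),  # A builds ASI first, B holds back
    ("race",     "race"):     (1, 1),  # both race: risky, but neither falls behind
}

def best_response(options, opponent_choice, player):
    """Pick the option with the highest payoff for `player`, given the opponent's choice."""
    def payoff(my_choice):
        pair = (my_choice, opponent_choice) if player == 0 else (opponent_choice, my_choice)
        return payoffs[pair][player]
    return max(options, key=payoff)

options = ["restrain", "race"]
for opponent_choice in options:
    print(f"If the rival chooses '{opponent_choice}', "
          f"the best response is '{best_response(options, opponent_choice, 0)}'.")
# Whatever the rival does, "race" pays more, so both sides race: the Prisoner's Dilemma.
```

With payoffs shaped like these, racing is the dominant strategy for each side even though mutual restraint would leave both better off, which is exactly the structure of the dilemma described above.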
Should We Fear Artificial Superintelligence?

Of course.
As with every technological development, there are always unintended consequences, and once we have an artificial superintelligence, there is no going back.
But, we should remember that it’s in our nature to see the threat such a system poses because we are hardwired by evolution to spot danger and avoid it. Something this powerful could indeed decide, and would likely have the power, to exterminate all human life if its goal was to preserve the overall life of the planet. The greatest threat to life on Earth is human civilization, after all.
But, if it has such power, it would have just as much power to prevent or even reverse climate change instead, and therein lies the choice humanity must make.
Just as the potential downsides of an ASI are endless, it is just as impossible to put a limit on the good something like this could accomplish. Whatever we think of, an ASI has the potential to do more. Climate change, the eradication of disease, the end of want and famine, an end to death itself, and even faster-than-light travel to distant galaxies are all conceivable outcomes of an ASI, and perhaps more likely ones than the immortal, malevolent, dictatorial monster that Elon Musk warns against.
We have every reason to believe that, in the end, an ASI will work to our benefit. Every technological advance has come at a cost, but human civilization has advanced because of it. Human beings have a solid track record when it comes to technology: yes, we can produce weapons of incredible destructiveness, but we have also eradicated smallpox and all but eradicated polio. We have largely conquered famine, and even wars are in steady decline.
If our future is anything like our past, then we can be justifiably optimistic about the future of artificial intelligence. An artificial superintelligence, then, will be what we make of it, just as children are more than the biological product of their parents. That is why it's critical that we decide, as a civilization, exactly what sort of artificial superintelligence we wish to create.