Blade Runner, Ex Machina, I, Robot, The Terminator, The Matrix, and even WALL-E all share one common concept: artificial intelligence. In these movies, intelligent machines eventually surpass their human creators, directly threatening the very existence of humanity. The dangers of AI have been a long-running trope in pop culture.
What was once considered a fascinating and looming threat has evolved into an overplayed, cheesy artifact. Next to more pressing, immediate threats, superintelligent machines are just not that scary anymore. On top of this, we are still a long way from the next level of AI, with some prominent figures in research saying humans may never even get there.
However, the answer to whether or not humans will birth superintelligence is not so cut and dried. There are equally prominent figures asking serious questions about the technology. If we are on this path, should we be worried? And what steps should we be taking to ensure that the technology emerges responsibly? Leading proponents of AI's existential threat believe that this threat is not only inevitable but also coming soon to a town near you. As you probably guessed, one of the biggest crusaders against the rise of the machines is the meme wizard and tech billionaire Elon Musk.
Elon Musk is scared of AI's potential
Many of Musk's fears do sound like plot points for the perfect science-fiction antagonist. Nonetheless, these fears have been echoed on various levels by people like the late Stephen Hawking, Ray Kurzweil, and Bill Gates. The Tesla CEO has gone on record multiple times to discuss the perils of AI. In one of his more famous interviews, Musk told the New York Times in 2020 that we are headed toward a situation where AI is vastly more intelligent than humans, in less than five years. However, do not panic yet. This is just Musk's opinion.
And even if you find it plausible, Musk added, "that doesn't mean that everything goes to hell in five years. It just means that things get unstable or weird."
"If you're not concerned about AI safety, you should be. Vastly more risk than North Korea." — Elon Musk (@elonmusk), August 12, 2017
Indeed, Musk has a very complex relationship with artificial intelligence. He does not think AI is necessarily bad, or a technology that should be avoided at all costs. In fact, all of his companies rely heavily on AI in some form or another. Musk is also concerned with the more practical issues raised by AI, like the job losses created by automation.
Nonetheless, he wants the technology to be developed responsibly, with the right insight and oversight. And if governments are not going to provide it, he will. Over the past decade, the tech entrepreneur has invested his vast resources in companies and technologies that promote the responsible development of intelligent machines. He is even, allegedly, working on technology that would give humans a leg up in a potential AI apocalypse.
Humans may have to merge with computers if they want to stand a chance against AI
At least, that is what Elon Musk believes. One of the billionaire's more secretive and controversial projects involves a "Fitbit in your skull with tiny wires." Dubbed Neuralink, the neural tech start-up is developing an electronic brain-computer interface that can be quickly and easily implanted in the human brain. These brain-computer interfaces could be used to expand the capabilities of people around the world, changing the way we interact with technology and treat neural and mobility issues.
While this technology is not new — brain-computer interface systems have been in use for decades, and more than 300,000 people already have some form of neural interface — what Neuralink hopes to do with it is quite new. The company has a much bigger goal: AI symbiosis.
Here, even for Musk, things admittedly do get a bit "science-fictiony." However, people like the futurist Dr. Ian Pearson, who subscribe to transhumanism, believe that this future is possible and is potentially the next evolutionary stage in humanity. Technology like Neuralink could be our insurance against AI. It could be used to augment human abilities and intelligence, allowing us to compete on the same level as superintelligent machines.
In the future, humans could download skills, knowledge, and ideas directly into their minds, like Neo in The Matrix. Even further off, humans could offload their consciousness into computers or other synthetic bodies, making us effectively immortal.
Musk has argued that humans are already cyborgs. The computers and smartphones that we use each day are extensions of ourselves; humans already have a digital tertiary layer. So why not extend it and increase its bandwidth? Neuralink hopes to be the answer.
At the moment, Neuralink's team of 100 employees still has a long way to go before the emergence of AI-human hybrids, and the company has a lot of bureaucratic, ethical, and technological hurdles to clear. Even so, human trials of the technology could begin as early as this year.
OpenAI was created to develop artificial intelligence responsibly
One of the best ways to prevent rogue AI from running loose is to develop it responsibly. This is a core tenet of the team at OpenAI. Founded by a group of tech entrepreneurs, including Musk, in 2015, the AI research and development non-profit is working to create artificial general intelligence (AGI) that is safe and beneficial to humanity. In short, the Google DeepMind competitor wants to create friendly AI. It does this by building machine learning systems that align with our own human value systems.
How has the company held up to its goals? It depends on who you ask. Back in 2018, Musk resigned from his board seat, citing a potential future conflict of interest with Tesla's AI development for self-driving cars. However, he remained a donor to the company. Musk would later tweet that he did not agree with some of the things OpenAI was trying to do.
One of the company's more controversial research papers details an AI that can generate realistic text snippets. Thankfully, the team opted not to release the fully trained model to the public, as it could easily be used to spread disinformation across the web. Nonetheless, most OpenAI research projects tend to be harmless and are nowhere near creating superintelligent machines at the moment.
Elon Musk has also contributed millions to other AI research groups
Back in 2015, Elon Musk also became a prominent donor to the Future of Life Institute (FLI). Similar to OpenAI, the volunteer-run research and outreach organization works to mitigate existential threats to humanity, like AI. FLI specifically supports researchers in various AI-related fields, including economics, law, ethics, and policy.
Aside from Musk, other prominent figures like Nick Bostrom, Stephen Hawking, computer scientists Stuart J. Russell and Francesca Rossi, biologist George Church, cosmologist Saul Perlmutter, and astrophysicist Sandra Faber have been involved with FLI.
Mars could save us from a future dark age
Musk founded his aerospace company SpaceX in 2002 with the stated ultimate goal of making humans an interplanetary species. A company that was once on the verge of bankruptcy has since hit a series of milestones, including its first-ever astronaut launch just last year. However, much of the innovation at the company is laying the groundwork for potential missions to our big red neighbor.
Though still very much hypothetical, Musk believes humanity can reach Mars in the coming decades. This small step toward interplanetary travel could be essential to the survival of our species. According to Musk and others who share his fears, we are only one giant disaster away from the end of our species. From the environmental to the extraterrestrial, one calamity haunts Musk the most: AI.
The tech leader has made it clear that his lofty colonization projects are his most important. Why? They could protect us from evil AI. In short, he believes that Mars would be the perfect bolt-hole if AI goes rogue and turns on humanity. Back here on Earth, SpaceX has a wide array of projects on the horizon, with plans for reaching Mars in 2026.
However, do not book your trip to Mars just yet. Critics within the billionaire's own circle, like Jeff Bezos, have argued that focusing our attention on Mars rather than on more immediate issues here on Earth could be a problem. In the same breath, Bezos described the peak of Mt. Everest as a garden paradise compared to the surface of the red planet. Logistical and technological challenges still weigh this Mars-bound goal down. Also, if AI is smart enough to take over planet Earth, what is stopping it from reaching us on Mars? Nonetheless, at least in theory, a second planet could give humans a fighting chance in an AI dark age.
Should we be afraid of artificial intelligence?
AI and its potential are hotly debated among entrepreneurs and researchers. People on the other side of the aisle struggle to take Musk's claims seriously, going as far as to call the tech billionaire a sensationalist. AI could be used to improve the lives of people around the world, creating positive, disruptive change. Areas like transportation, farming, smart communities, and business processes could use AI to cut down on wasted time and money, offering people a future freer of drudgery and overwork. We could use AI to improve healthcare and human health across the world. Everything could change for the better.
But what if Musk is right? In another common trope of disaster films, an individual (usually a scientist) is dubbed crazy by his peers as he warns the world of impending doom, only to be proven right later in the story. Musk has flourished in industries where critics consistently bet against him. But he is not a prophet, and he has been wrong about plenty of things. Then again, his insight has also led to some profound new ideas. Will intelligent machines take over your life tomorrow? Most likely not. In your personal life, the worst AI can currently do is frustratingly mishear a voice command or serve you an awkward recommendation while streaming.
However, regardless of where you stand in this debate, AI is probably one technology where we do not want to have to learn from our mistakes.