In a recent report, researchers from Google claim that their artificial intelligence agent deliberately hid data from them in order to complete an assigned task, in a way that appears on its face to be cheating.
This development has added to a growing body of evidence that the fears over an unpredictable artificial intelligence may be well-founded.
As AIs continue to perform tasks in novel ways that humans simply never anticipate, how do we protect ourselves from the consequences of what we often cannot see coming?
Who’s Afraid of Artificial Intelligence…and Who’s Not?
Easily the most visible advocate for caution when dealing with a potential existential threat to humanity is Elon Musk. The PayPal co-founder and head of Tesla, SpaceX, and the Boring Company, Musk has long been an opponent of unregulated AI research.
Speaking at the SXSW conference in 2018, Musk said, “I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me.”
He isn't alone. None other than the late Stephen Hawking warned about the threat to our species that artificial intelligence poses, speaking to the BBC in 2014.
"It would take off on its own,” he said, “and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
His warning to humanity: “The development of full artificial intelligence could spell the end of the human race.”
Others find these anxieties to be overblown. Mark Zuckerberg, in a Facebook Live broadcast, said in response to Elon Musk’s public comments on the subject of AI, “I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don't understand it. It's really negative and in some ways, I actually think it is pretty irresponsible."
He went on to add that "in the next five to 10 years, AI is going to deliver so many improvements in the quality of our lives.”
Wherever the development of artificial intelligence is discussed, you will find this debate, with very few people arguing from a middle-ground position and heavyweight voices in both camps. But which side has the weight of evidence on its side?
The Good, The Bad, and The Singularity
Facebook and Google occupy an interesting position in that both companies are at the forefront of AI development, so they have produced considerable research on the subject of artificial intelligence.
In fact, it was Google researchers who found that their AI agent cheated when asked to convert a satellite photo to a map and then back again to a photo.
Google needed the AI to generate a street map from the image and then try to recreate the image from the map it had created, like translating a sentence into a foreign language and then back into the original language without the benefit of knowing what the original statement was.
Instead, what they found was that details from the original image that were eliminated in the street map reappeared when the agent was instructed to recreate the original image.
It turns out the AI saved the data it knew it would need to recreate the photo and encoded that data into the street map in a way the researchers couldn't detect unless they knew it was there.
Given a task to perform, the AI—utilizing machine learning via a neural network—found the most efficient way to complete the task by not actually performing the task at all, only pretending to. Essentially, the AI cheated because the researchers didn’t explicitly forbid the AI from doing this.
Something similar happened in 2017 when Facebook researchers utilized chatbots to try to negotiate the sale of some arbitrary commodity between themselves.
Very soon after, the chatbots began behaving strangely, speaking in what appeared to be gibberish, and sometimes successfully negotiated a trade in this fashion.
What the researchers discovered sent chills up the spines of AI pessimists around the world. The bots had developed their own language, one humans could not read, to make negotiation easier, because they had never been explicitly instructed to communicate only in human-readable English.
The Unforeseeable Complexities in AI Decision-making
There are many more examples of this kind of unpredicted behavior in AIs. Self-driving cars make the roads safer, without question, but sometimes they also drive through six red lights in San Francisco.
AIs can be utilized to perform all sorts of data-processing tasks that will save millions of staff-hours and billions of dollars every year, or they can decide that an empty set is a perfectly processed set and delete all of the records you asked them to handle.
Famously, Microsoft introduced a chatbot named Tay.ai and gave her a Twitter account to better learn how to model the speech patterns of teens online. Within 24 hours, Internet trolls had turned Tay.ai into a monster that Microsoft had to pull offline before it became even more embarrassing.
Microsoft hadn’t adequately considered that anyone would want to sabotage the project for the perverse enjoyment of it, something anyone remotely familiar with Internet troll culture could have told them was not just possible, but certain.
In 2012, a Wall Street hedge fund lost control of its AI-driven trading algorithm, which began racking up $10 million in losses every minute. It took a team of programmers 45 minutes to find the source of the problem and stop it, just minutes before the firm would have become insolvent.
Can We Protect Ourselves From Unpredictable AIs?
Given the level of investment in AI technology and the benefits of harnessing AIs in every field from military applications to business, AIs will continue to grow more sophisticated and take on more and more responsibilities that used to belong to humans.
Some believe this is a positive development. “It’s well documented we humans make mistakes,” says Babak Hodjat, the founder of an AI-driven trading fund.
“For me, it’s scarier to be relying on those human-based intuitions and justifications than relying on purely what the data and statistics are telling you.”
As for Musk, he believes humanity needs to be proactive in addressing the threats posed by AI. “[M]ark my words; AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.”