AI Attitudes: What the Experts Consider of Concern

Is AI something to be feared or something to be celebrated? The answer depends on whom you ask. Ultimately, people must take responsibility for AI’s effects, and that means being proactive about managing its risks.
Ariella Brown

As we saw in Our Brave New World: Why the Advance of AI Raises Ethical Concerns, there are a few areas in which AI’s use seems less than advantageous to society. They include promulgating racial bias in facial recognition and other opaque algorithmic systems, as well as threatening jobs and possibly even overall safety, according to the nightmare scenario suggested by Elon Musk.

You can hear Musk describe his concerns about AI in the video below.

The clash of views between Elon Musk and Mark Zuckerberg

Back in 2017, when Musk publicized his views, he prompted Zuckerberg to express a much more “optimistic” take during a Facebook Live broadcast. In fact, Zuckerberg charged that the irresponsible ones are not those pursuing AI but those who emphasize its dangers. He said:

"And I think people who are naysayers and try to drum up these doomsday scenarios... I just don't understand it. It's really negative and in some ways I think it is pretty irresponsible."


“Right back at you” could sum up Musk’s response to Zuckerberg’s take, which he expressed on Twitter: "I've talked to Mark about this. His understanding of the subject is limited."

So Musk resorted to a claim of greater expertise to bolster his point of view. In fact, though, the optimistic and pessimistic takes on AI are not divided by level of expertise in the subject.

The AI optimists

A number of AI experts share Zuckerberg’s optimistic take. For example, Kevin Kelly, the co-founder of Wired and author of The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, made this sweeping declaration: “The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.”

Kelly expanded on his view in an interview with IBM: “Through AI, we’re going to invent many new types of thinking that don’t exist biologically and that are not like human thinking.”

In Kelly’s view, that is wholly positive: “Therefore, this intelligence does not replace human thinking, but augments it.”

Another AI optimist is the futurist and inventor Ray Kurzweil, who made the following declaration in a 2012 interview:

“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.” 

Kurzweil explained it as the equivalent of a “singularity” in physics, because it would be “a profound disruptive change in human history.” Ultimately, he envisions that human “thinking will become a hybrid of biological and non-biological thinking.”

The AI pessimists

There are also experts who do not dream of a utopia in which AI takes over jobs and much of humanity’s thinking. They see the possibility of putting AI in charge of processes as fraught with danger.

Among those is Yuval Noah Harari, who wrote Homo Deus: A Brief History of Tomorrow. In the book, Professor Harari made the following pronouncement:

“You want to know how super-intelligent cyborgs might treat ordinary flesh-and-blood humans? Better start by investigating how humans treat their less intelligent animal cousins. It’s not a perfect analogy, of course, but it is the best archetype we can actually observe rather than just imagine.” 

Harari’s take is the polar opposite of the positive futurists’: he pictures the rise of what he calls “dataism,” in which humans cede the superior ground to advanced artificial intelligence. This is a future dominated by a “cosmic data-processing system” that is both omnipresent and omniscient, and, as the Borg would say, resistance is futile.

Stephen Hawking, the late English theoretical physicist, cosmologist, and director of research at the Centre for Theoretical Cosmology at the University of Cambridge, falls into the pessimist camp as well. Like Harari, he drew on how humans pursue their own goals to project what could happen if AI followed its own dictates without regard for humanity.

In the following video Stephen Hawking offers a very pessimistic take on AI. 

In 2015, during a Reddit AMA (Ask Me Anything) session, Hawking was asked the following question by a teacher who wanted to know how to address certain AI concerns that come up in his classes:

How would you represent your own beliefs to my class? Are our viewpoints reconcilable? Do you think my habit of discounting the layperson Terminator-style “evil AI” is naive? And finally, what morals do you think I should be reinforcing to my students interested in AI?

Hawking’s response was as follows:

The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. 
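Hawking’s ant analogy is, at bottom, a point about objectives: a competent optimizer serves exactly the goal it is given, and anything left out of that goal is fair game. The toy Python sketch below is entirely our own illustration, not Hawking’s; the site names, numbers, and penalty are hypothetical.

```python
# A toy illustration of "competence, not malice" (hypothetical numbers).
# Candidate dam sites: (name, energy produced, ant colonies destroyed if flooded)
sites = [("valley_a", 80, 0), ("valley_b", 120, 3), ("valley_c", 100, 1)]

# The objective as handed to the planner: maximize energy. Ants never appear in it.
best = max(sites, key=lambda s: s[1])
print(best)  # ('valley_b', 120, 3): three colonies flooded, with no malice involved

# The same planner with the omitted concern made explicit (a penalty per colony):
best_aligned = max(sites, key=lambda s: s[1] - 50 * s[2])
print(best_aligned)  # ('valley_a', 80, 0): less energy, but the ants survive
```

The planner is not malicious in either case; the only thing that changes is what the objective bothers to count.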

Is the threat of AI domination real?

Anthony Zador, a professor of neuroscience at Cold Spring Harbor Laboratory, and Yann LeCun, a professor of computer science at New York University and chief AI scientist at Facebook, argue that obsessing over AI domination is misguided. They explained their view in a Scientific American blog post entitled Don’t Fear the Terminator, published on September 26, 2019.

You can hear Zador on AI in this video:

LeCun talks about other aspects of AI in this video:

In answer to the question, “Why would a sentient AI want to take over the world?” they offer a simple two-word answer: “It wouldn’t.”

Perhaps they had Harari’s book in mind when they addressed the role of intelligence in “social dominance” throughout “evolutionary history.” They went on to explain intelligence as a tool rather than a driver:

“And indeed, intelligence is a powerful adaptation, like horns, sharp claws or the ability to fly, which can facilitate survival in many ways. But intelligence per se does not generate the drive for domination, any more than horns do.”

So humans use their intelligence to help them survive. But artificial forms of intelligence have no such “survival instinct,” which is why AI would have no reason to take over the humans who program it.

“In AI, intelligence and survival are decoupled, and so intelligence can serve whatever goals we set for it.” 
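To make that decoupling concrete, here is a minimal Python sketch, our own illustration rather than anything from Zador and LeCun’s article: a bare-bones optimizer that pursues whatever objective a human hands it, with nothing in the loop that rewards self-preservation or power-seeking.

```python
def optimize(objective, x, lr=0.1, steps=200, eps=1e-6):
    """Minimize a one-parameter objective by gradient descent.
    The system pursues only what the objective encodes; nothing
    here rewards self-preservation, replication, or domination."""
    for _ in range(steps):
        # Finite-difference estimate of the objective's gradient at x.
        grad = (objective(x + eps) - objective(x - eps)) / (2 * eps)
        x -= lr * grad  # step downhill on the human-specified objective
    return x

# The goal is set entirely by the programmer: drive x toward 3.
human_set_goal = lambda x: (x - 3.0) ** 2

print(round(optimize(human_set_goal, x=0.0), 3))  # ~3.0, and nothing more
```

Swap in a different objective and the behavior changes accordingly; the “drive” lives entirely in the goal we wrote, which is the sense in which intelligence can serve whatever goals we set for it.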

The groundless concern about science-fiction plots like that of Avengers: Age of Ultron, or what Zador and LeCun (possibly referencing the question Hawking was asked) call “the Terminator scenario,” is actually counterproductive because it “just distracts us from the very real risks of AI.”

The AI risks we should be worried about

Zador and LeCun go on to distinguish their position from that of the AI optimists. In their view, Musk actually got it right about AI becoming “weaponized,” as well as about the other threats it poses to humanity, including the loss of jobs.

“While AI will improve productivity, create new jobs and grow the economy, workers will need to retrain for the new jobs, and some will inevitably be left behind. As with many technological revolutions, AI may lead to further increases in wealth and income inequalities unless new fiscal policies are put in place.”

In addition to those risks that can already be anticipated, there are those that are “unanticipated risks associated with any new technology—the ‘unknown unknowns.’” 

Just because they haven’t been imagined in a science fiction plot doesn’t mean they are not cause for concern, they argue. They emphasize that humans are the ones who will be responsible for the outcomes of expanded AI, which cannot develop any independent agency or ambition. 

While stressing human responsibility, though, Zador and LeCun do not outline any particular plan to avoid the known and unknown risks of new technology.

Proactive planning for AI

Whether the solution is government regulation, as Musk suggested, or some kind of industry standard, it seems that planning with an awareness of potential dangers is in order. A number of experts have suggested just that.

Professor Klaus Schwab, the founder of the World Economic Forum who coined the term the “Fourth Industrial Revolution,” published his thoughts on its current direction in 2016.

Like the positive futurists, he envisioned that the future will fuse “the physical, digital and biological worlds in ways that will fundamentally transform humankind.” But he did not take it for granted that it would all work out for the best, urging people to plan ahead with awareness of both “the risks and opportunities that arise along the way.”

Even with driverless cars on the horizon, people are still in the driver’s seat when it comes to planning out what AI is to do. “There’s nothing artificial about AI,” declared Fei-Fei Li, an expert in the field. “It’s inspired by people, it’s created by people, and — most importantly — it impacts people.”
