Looming AI apocalypse? Industry leaders say we should be worried about superintelligence
One would have to be living under a rock not to see how artificial intelligence (AI) is changing our world every day. While AI is playing an instrumental role in reducing human labor, some believe that the pace of its progress should terrify us.
We’re already seeing AI make its way into industry after industry, threatening to replace human workers.
One common argument against AI is that it could stunt the human mind and stall creative development: will AI become a crutch that diminishes our power to think and imagine? Plausible as that may sound, industry trailblazers argue that the biggest threat we face is human extinction, a theme that has been recurring for some time.
AI vs Humanity
One of the first people to sound the alarm, Geoffrey Hinton, quit his cushy Google job to speak freely about the risks of developing AI technology too fast. A month later, another machine learning pioneer, Professor Yoshua Bengio, said that he felt “lost” over his life’s work.
Both are signatories of a brief statement issued by the non-profit Center for AI Safety, warning that AI could lead to the extinction of the human race.
Hinton and Bengio, along with fellow signatories such as Sam Altman and Bill Gates, agree that we need new technological breakthroughs to control and steer AI systems much smarter than we are.
Should we be worried about superintelligence?
Also called artificial general intelligence (AGI), superintelligence is a system that can reason, plan, and learn from experience at the same level as humans, or possibly beyond it.
Even OpenAI, which took the world by storm last year with ChatGPT, believes that superintelligence could arrive this decade.
“It has happened many times before that species were wiped out by others that were smarter. We, humans, have already wiped out a significant fraction of all the species on Earth. That is what you should expect to happen as a less intelligent species – which is what we are likely to become, given the rate of progress of artificial intelligence. The tricky thing is, the species that is going to be wiped out often has no idea why or how,” said Max Tegmark, an AI researcher at Massachusetts Institute of Technology, in an interview with The Guardian.
The Guardian spoke to Brittany Smith, an associate fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, who said, “Far-future, speculative concerns often articulated in calls to mitigate ‘existential risk’ are typically focused on the extinction of humanity. If you believe there is even a small chance of that happening, it makes sense to focus some attention and resources on preventing that possibility.”
Call for robust regulations
On July 5, OpenAI announced a new project called Superalignment, staffed by a team of top machine learning engineers and researchers. Their task is to align superintelligent systems with human intent, with the aim of preventing the disempowerment of humanity, or even human extinction.
OpenAI has also announced ten $100,000 grants for anyone with promising ideas on how AI should be governed, in a bid to address bias and other harms, as Interesting Engineering reported earlier.
It’s not just private companies taking cognizance of the risks. The European Union has passed a landmark AI bill that aims to set privacy standards, impose transparency requirements, and levy fines for non-compliance.
In the US, one can expect stringent state data privacy laws, Federal Trade Commission rulemaking, and new AI standards from the National Institute of Standards and Technology.