Top AI researchers and CEOs unite in warning: 'risk of extinction' posed by AI demands global attention

In a concise 22-word statement, leading AI researchers and CEOs emphasize the urgent need to address the existential threat of AI.
Abdul-Rahman Oladimeji Bello


In a straightforward 22-word statement, a group of AI researchers, engineers, and CEOs has raised concerns about the existential threat posed by artificial intelligence. The statement urges that mitigating AI risks be treated as a priority alongside global-scale perils like pandemics and nuclear war.

The statement reads, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Notable signatories include Demis Hassabis, CEO of Google DeepMind, and Sam Altman, CEO of OpenAI. Turing Award recipients Geoffrey Hinton and Yoshua Bengio have also backed the warning, while Yann LeCun, who shared the award with them, has yet to sign on.

The statement, published by the San Francisco-based Center for AI Safety, contributes to the ongoing AI safety debate. Earlier this year, some endorsers of the 22-word message penned an open letter advocating for a six-month "pause" in AI development. However, the letter faced criticism due to varying opinions on the severity of AI risk and the proposed solution.

Dan Hendrycks, the executive director of the Center for AI Safety, explained that the brevity of the latest statement was intentional to avoid generating further disagreement. In addition, Hendrycks emphasized that proposing numerous potential interventions to mitigate the AI threat might dilute the message. Instead, the concise statement represents a collective concern from influential industry figures who have long harbored apprehensions about AI risk.

Arguments for and against the warning

While the debate is well-known, the details can be complex, often revolving around hypothetical scenarios in which AI systems surpass safe control limits. Proponents of the AI risk argument point to rapid advancements in technologies like large language models as evidence that future gains in machine intelligence may become uncontrollable.

On the other hand, skeptics highlight the current limitations of AI systems, such as the ongoing challenges in developing fully self-driving cars despite significant investment and effort.

Despite their differing perspectives, both AI risk advocates and skeptics acknowledge the present-day threats AI systems pose. These range from facilitating mass surveillance and flawed "predictive policing" algorithms to creating and spreading misinformation and disinformation.

As the debate continues, this concise yet impactful warning acts as a wake-up call to the world, urging us to address the challenges posed by AI and find ways to mitigate its potential risks. The future of AI and its impact on humanity hangs in the balance, emphasizing the need for cautious and foresighted navigation.
