Elon Musk, AI experts sign open letter for immediate pause of AI more powerful than GPT-4

The open letter currently has over a thousand signatories, including Stability AI CEO Emad Mostaque, Apple co-founder Steve Wozniak, New York University professor emeritus Gary Marcus, and Elon Musk.
Baba Tamim
Elon Musk. Getty Images

A joint letter signed by over a thousand influential people, calling for a pause on the development of all artificial intelligence (AI) systems more powerful than GPT-4, was made public on Wednesday.

The signatories demanded an immediate suspension of the development of AI systems more potent than GPT-4, with a "moratorium," or temporary halt, of at least six months, according to the letter.

"We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable and include all key actors," read the letter. 

"If such a pause cannot be enacted quickly, governments should step in and institute a moratorium." 

According to the open letter, extensive research has demonstrated, and leading AI labs have acknowledged, that AI systems with human-competitive intelligence may pose severe risks to society and humanity.

The letter has received 1,125 signatures since it was published by the Future of Life Institute (FLI), a nonprofit organization that works to reduce global catastrophic and existential risks facing humanity, particularly existential risk from advanced AI.

"A big deal: @elonmusk, Y. Bengio, S. Russell, @tegmark, V. Kraknova, P. Maes, @Grady_Booch, @AndrewYang, @tristanharris & over 1,000 others, including me, have called for a temporary pause on training systems exceeding GPT-4," Gary Marcus, a scientist and a leading voice in AI, tweeted on Wednesday. 

Worries about expanding AI abilities 

The capabilities of AI systems are expanding quickly as data and computing power grow, and large models are increasingly able to outperform humans in many fields. No single firm can predict what this means for our societies, the letter reasons. 

"Consider an example: You can use an AI model meant to discover new drugs to create pathogens instead," FLI wrote on Twitter, announcing the letter.

"This model will promptly generate over 40k pathogens - including VX, the deadliest nerve agent ever - in roughly six hours."

The institute was referring to an international security conference that examined how AI technologies built for drug discovery could be exploited to design biological weapons from scratch. "A thought experiment evolved into a computational proof," read the Nature journal article cited by FLI in the tweet thread to underscore the severity of the issue. 

With AI, humanity may look forward to a prosperous future. Having succeeded in developing robust AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society time to adapt, the letter noted. 

"Society has hit pause on other technologies with potentially catastrophic effects on society," concluded the letter. "We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall."
