Elon Musk's co-signed letter on AI research has problems
A letter co-signed by Elon Musk asking for a six-month moratorium on artificial intelligence (AI) research has become controversial. Musk was one of some 1,800 signatories who called for a "six-month pause" on the development of AI systems "more powerful than GPT-4". It now appears that some of the signatures on the letter were not real, while other observers have called the letter "unhinged."
Merely months after the debut of GPT-3.5, which powers the conversational chatbot ChatGPT, its developer OpenAI has launched GPT-4, which it claims is far superior: in addition to text, GPT-4 can process image inputs and is said to be more creative and accurate than its predecessor.
Although the sudden rise in AI's capabilities has raised concerns about how it will affect jobs, the likes of Elon Musk are worried about something far worse. In an interview with The New York Times, Musk said that he expected AI to become more sophisticated than humans in less than five years.
The flaws in the letter
The letter, which became a talking point last week after it called for governments to step in and halt development of the technology, was published by the Future of Life Institute (FLI), a non-profit organization funded by the Musk Foundation, The Guardian reported.
AI researchers have argued that the letter "treats a lot of questionable ideas as a given" but "ignores active harms right now." The researchers are worried that AI could be used to influence decision-making on climate change, nuclear war, and other existential threats, something the letter does not touch upon at all.

These risks, which arise from present-day AI systems in real scenarios, are a far more pressing problem than whatever might happen if AI reaches or surpasses human-level intelligence.
One of the researchers cited in the letter was flummoxed by the wording it used, such as "more powerful than GPT-4".
Others pointed out that the letter lacked sufficient verification protocols for signing: the list of signatories at one point included Chinese President Xi Jinping, who clearly had not signed it, as well as Meta's Chief AI Scientist Yann LeCun, who also denied signing.
The letter presented an opportunity for stakeholders to come together and set some ground rules for AI research, but it appears to have squandered that opportunity by amplifying only a narrow set of AI's ills.
As AI scientist Gary Marcus, who was also a signatory of the letter, put it later in a tweet, "this is perhaps the single best moment in history to bridge party lines to protect underrepresented people against the incursions of big tech. Let’s not squander it."