OpenAI CEO cautions AI like ChatGPT could cause disinformation, cyber-attacks

Society has a limited amount of time "to figure out how to react" and "regulate" AI, says Sam Altman.
Baba Tamim
OpenAI CEO Sam Altman speaks during a keynote address announcing ChatGPT integration for Bing at Microsoft in Redmond, Washington, on February 7, 2023.

JASON REDMOND/AFP via Getty Images 

OpenAI CEO Sam Altman has cautioned that his company's artificial intelligence technology, ChatGPT, poses serious risks as it reshapes society.

Regulators and society must be involved with the technology, he emphasized in an interview aired by ABC News on Thursday night. 

"I'm particularly worried that these models could be used for large-scale disinformation," Altman, 37, told ABC News.

"Now that they're getting better at writing computer code, [they] could be used for offensive cyber-attacks."

OpenAI released GPT-4, the latest iteration of its AI language model, less than four months after the original version debuted and became the fastest-growing consumer application in history.

Despite the risks, the OpenAI chief said ChatGPT might be "the greatest technology humanity has yet developed."

"We've got to be careful here," Altman warned. "I think people should be happy that we are a little bit scared of this."

Referring to GPT-4, Altman said that although the latest version was "not perfect," it had scored in the 90th percentile on the US bar exam and achieved a near-perfect score on the SAT math test taken by high school students.

He added that it could also create computer code in the majority of programming languages.

AI regulation a must before it's too late 

Concerns that AI technology will replace workers are widespread. According to a recent survey covered by Interesting Engineering, more than half of American businesses have adopted ChatGPT.

Roughly half of those companies said ChatGPT had already replaced some of their personnel, according to the survey of 1,000 business leaders conducted by Resumebuilder.com. 

Yet Altman emphasized that AI can only function with human guidance or input. He said he is more concerned about the people who will control the technology than about the technology itself.

"It waits for someone to give it an input," he said during the ABC interview. "This is a tool that is very much in human control." 

Altman noted that some people will disregard the safety restrictions OpenAI imposes. "Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it," he added. 

Tesla CEO Elon Musk has often warned that AI is more dangerous than nuclear weapons. Musk was also one of the early investors in OpenAI when it was still a non-profit organization.

"Compared to AI, progress with Neuralink will be slow and easy to assess, as there is large regulatory apparatus approving medical devices," Musk tweeted in December.