Bias is inherent in ChatGPT's database, says OpenAI CEO Sam Altman

In a candid conversation with Lex Fridman, Altman discussed all things AI and more.
Baba Tamim
Sam Altman, CEO of ChatGPT's parent company OpenAI.

Jovelle Tamayo/The Washington Post/Getty Images 

Sam Altman, the CEO of OpenAI, sat down with Lex Fridman for a frank discussion on a variety of subjects, including the contentious question of whether the language model GPT is "too woke" or prejudiced.

While acknowledging that the meaning of the term "woke" has shifted over time, Altman conceded that the model is biased and probably always will be, according to the interview published by Fridman on Saturday. 

Altman added that OpenAI significantly improved the GPT model between versions GPT-3.5 and GPT-4.

He expressed gratitude to critics who note the advancements made by OpenAI, while also acknowledging that much more needs to be done.

Altman also responded to Elon Musk’s criticism of OpenAI’s AGI safety research. He sympathized with Musk’s worries but wished Musk would concentrate more on the difficult work of addressing AI safety issues.

Under attack

“Elon is obviously attacking us some on Twitter right now on a few different vectors, and I have empathy because I believe he is understandably so really stressed about AGI safety. I’m sure there are some other motivations going on too, but that’s definitely one of them," he said.

“I definitely grew up with Elon as a hero of mine. You know, despite him being a jerk on Twitter or whatever, I’m happy he exists in the world, but I wish he would do more to look at the hard work.”

There is a significant distinction between artificial general intelligence (AGI) and artificial intelligence (AI). AGI is a machine that can understand or learn any intellectual task that a human can, whereas AI is a machine that excels at a specific task.

Altman also opened up about the probability of AI risks.

"I think a lot of the predictions, this is true for any new field, but a lot of the predictions about AI in terms of capabilities, in terms of what the safety challenges and the easy parts are going to be have turned out to be wrong," he said.

Turning to OpenAI’s objective of letting people control the models within broad bounds, Altman also spoke about the topic of “jailbreaking.”

"It kinda sucks being on the side of the company being jailbroken. We want the users to have a lot of control and have the models behave how they want within broad bounds. The existence of jailbreaking shows we haven’t solved that problem yet, and the more we solve it, the less need there will be for jailbreaking. People don’t really jailbreak iPhones anymore," said Altman.

In addition, more general subjects like the meaning of life, the difference between fact and fiction, and parenting tips were discussed. Altman emphasized the value of intellectual honesty, accepting that mistakes can be made, and growing from them.
