Sam Altman confirms OpenAI is not training GPT-5, but AI safety concerns persist
OpenAI's CEO and co-founder, Sam Altman, has revealed that the company is not currently training GPT-5, the would-be successor to GPT-4, the technology behind artificial intelligence (AI) chatbot sensation ChatGPT.
Altman made the comments during a discussion of the dangers posed by AI systems at a Massachusetts Institute of Technology (MIT) event, US technology news website The Verge reported on Friday.
The event was prompted by an open letter circulating in the tech community that asked laboratories like OpenAI to halt development of AI systems "more powerful than GPT-4" over safety concerns.
Altman criticized the Elon Musk-backed letter for "missing most technical nuance about where we need the pause," and made it clear that OpenAI is not currently training GPT-5.
He emphasized, however, that OpenAI is continuing to expand GPT-4's capabilities while weighing the safety implications of that work.
"We are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address and were totally left out of the letter," Altman stated.
Altman's remarks highlight a fundamental issue in the AI safety debate: the difficulty of measuring and tracking progress.
"Version numbers" are a misconception that has caused misunderstanding in the business by implying that numbered tech updates reflect clear, linear gains in capability.
Altman countered that just because OpenAI is not training GPT-5 right now does not mean it has stopped building other ambitious tools, such as connecting GPT-4 to the internet and further optimizing the model.
AI safety concerns
Altman's statement that OpenAI is not actively working on GPT-5 will offer little solace to people worried about AI safety. The company is still expanding GPT-4's capabilities, and other firms in the sector are building tools with similarly lofty goals.
Additionally, the likelihood that OpenAI will release a GPT-4.5 first, as it did with GPT-3.5, underscores how deceptive version numbers can be, The Verge said.
Discussions about AI safety should instead center on capabilities: what these systems can and cannot do today, and how that might change in the future.
Even if governments were to prohibit further AI development, society would still have to reckon with the systems already in widespread use, which are not yet fully understood.
The industry is still uncovering the capabilities of current AI systems, and the version number fallacy makes it hard to quantify and track that progress effectively, The Verge report noted.
Ultimately, conversations about AI safety should focus on what these systems are capable of, how they are likely to evolve over time, and how to address the safety concerns that come with maximizing their potential.