Google engineers had built ChatGPT-like AI years ago but executives blocked it
Years ago, Daniel De Freitas and Noam Shazeer, engineers at Google, developed a ChatGPT-like conversational chatbot that could discuss philosophy and TV shows and make puns.
However, company executives blocked it from being tested outside the company or released as a public demo, citing concerns that it did not meet company standards, the Wall Street Journal (WSJ) reported. The duo has since left the company.
Conversational chatbots are the shiny new thing in the tech industry, with companies looking to incorporate them across their products.
Microsoft has taken quite the lead as it looks to aggressively push ahead with the advantages arising from its partnership with OpenAI.
Google, on the other hand, has been on the back foot after its chatbot Bard not only made a much-delayed entry but also messed up during a demonstration.
Many employees called Google's attempts to release Bard "botched" and "rushed." However, as it appears from WSJ's report, Google seems to have been responding too slowly to what its engineers have been building.
How Google executives strangled its AI chatbot
Back in 2013, Google founder Larry Page brought in Ray Kurzweil, a pioneer in language processing models, who began working on multiple chatbots that were never released.
Facing employee backlash over the use of AI for military and surveillance purposes, Google announced a framework of seven AI principles to guide its work, which included testing them for safety.
De Freitas, who was working at Google's YouTube, started the AI chatbot that could mimic human conversations as a side project. The project was originally called Meena and was featured in a research paper in 2020, the WSJ said in its report.
Back then, Meena was trained on 40 billion words from social media conversations, while OpenAI's GPT-2, a predecessor of the GPT-3 model family that powers ChatGPT, had been trained on eight million web pages. OpenAI released a version for researchers to test, something Meena never got approval for.
Nevertheless, work on the project continued, and Noam Shazeer, a longtime engineer at Google's Brain, the AI research unit, joined the team.
The project was renamed LaMDA, short for Language Model for Dialogue Applications, and Shazeer incorporated the Transformer, the neural-network architecture that underpins powerful programs like ChatGPT.
Even though Google showcased LaMDA to the public, the chatbot itself was never released. It made the news when engineer Blake Lemoine called it sentient; he was later fired over his public disclosures.
De Freitas and Shazeer continued working on the chatbot and, in 2020, managed to integrate it into Google Assistant. But even as internal tests were conducted, Google's executives would not allow a public demo of the technology, frustrating the two engineers.
CEO Sundar Pichai intervened, asking the duo to continue working on LaMDA, but offered no assurance that the chatbot would be made public. The two left Google in 2021 and started their own company, which provides interactive chatbots that can role-play as figures like Socrates or Elon Musk.
A former employee told the WSJ that Google is struggling to strike a balance between taking risks and maintaining its thought leadership.
Even as Pichai urges his current employees to focus on "building a great product," the search engine giant may have waited too long to get it right.