AI could surpass humanity in next 10 years – OpenAI calls for guardrails
A week after appearing before a U.S. Senate committee to address the country’s concerns over artificial intelligence (AI), OpenAI CEO Sam Altman said Monday that a governance body is needed to mitigate the technology’s risks.
“Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” said the CEO in a blog post.
Given the spate of developments in the AI space, he hinted that future AI systems will be dramatically more capable than even Artificial General Intelligence (AGI) – a system that can reason, plan, and learn from experience at or above human level.
In fact, researchers at Microsoft, the tech giant that has invested billions of dollars in OpenAI, released a paper suggesting that GPT-4 has made great progress in applying common sense and can reason like humans. The paper drew sharp criticism as an ‘overreaching claim’ because the large language model still has multiple limitations.
Guardrails against superintelligence
Citing the example of nuclear energy, Altman said that we would soon need an AI governing body similar to the International Atomic Energy Agency (IAEA), which oversees the global nuclear safety and security framework to protect people and the environment from the harmful effects of radiation.
Calling for public input on the ‘existential risk’ posed by AI, Altman added, “We believe people around the world should democratically decide on the bounds and defaults for AI systems.”
OpenAI’s launch of its chatbot ChatGPT in November last year set off a generative AI revolution. Nearly every tech company has been scrambling to augment its existing products with AI or to build new AI tools in what is now a fiercely competitive market. While AI is set to impact almost all industries, concerns have been raised about its unknown potential. For example, what if the goals of a superintelligence do not align with human interests?
To address this, Altman appeared before Congress on May 16, seeking to further the technology's advantages while limiting its misuse. He suggested that the U.S. should consider stringent licensing and testing requirements for the development of AI models, as Interesting Engineering had reported earlier.