UN Security Council meets for first time to discuss artificial intelligence risks
For the first time, the United Nations Security Council met at its New York headquarters to discuss one of the most hotly debated issues of 2023: artificial intelligence.
Convened by the United Kingdom, the member nations discussed the risks and rewards inherent in this emerging technology, according to the press release.
Comparing AI to the printing press, Secretary-General of the United Nations António Guterres said that while it took over 50 years for printed books to become widely available across Europe, “ChatGPT reached 100 million users in just two months.”
This comes after the United Nations led the ‘AI for Good’ conference in Geneva, which saw the attendance of some of the world’s most advanced humanoid robots.
One of the AI-powered robots raised quite a few eyebrows at a press conference when it said that robots can run the world better than humans.
Call for global regulations
As countries around the world step up to build guardrails around AI, the Security Council also urged member states to develop national strategies for the responsible design, development, and use of AI, consistent with their obligations under international humanitarian law and human rights law.
Guterres also called on member states to agree on a global framework to regulate AI and to strengthen oversight mechanisms for such technologies in counter-terrorism measures.
Noting how AI systems can improve our understanding of biology but can also be used to construct biological weapons, co-founder of Anthropic Jack Clark said, “We cannot leave the development of artificial intelligence solely to private-sector actors.”
Anthropic is one of the leading AI companies currently in the world. Only last week the company launched Claude 2, a chatbot in direct competition with OpenAI’s ChatGPT.
Concerns about human extinction
A double-edged sword, AI is changing our world from the ground up. But the possibility of human extinction at the hands of an uncontrollable AI, a concern many have raised in the past, took center stage at the meeting.
Yi Zeng, of the Institute of Automation at the Chinese Academy of Sciences, said that both near-term and long-term AI will carry a risk of human extinction simply because “we haven’t found a way to protect ourselves from AI’s utilization of human weakness.”
He added that AI does not “know what we mean by human — [by] death and life.”
“This is why they, of course, cannot be trusted as responsible agents that can help humans to make decisions,” Zeng emphasized, stressing the need to ensure sufficient, effective, and responsible human control over all AI-enabled weapons systems.
“AI should never ever pretend to be human.”
Disinformation and hate speech
The advent of generative AI “could be a defining moment for disinformation and hate speech”, observed Guterres.
AI can be used to propagate hate speech via bots, but it can also help curtail online hate speech through automated detection.
Clark also noted that the world today lacks best practices for testing AI systems for issues such as discrimination, misuse, or safety. By developing robust and reliable evaluation systems, he said, governments can hold companies accountable and companies can earn the world’s trust.
Without such an investment, the international community runs the risk of handing over the future to a narrow set of private sector actors, he warned.