Time for the US to regulate AI? Tech CEOs and lawmakers meet
US lawmakers met with the who’s who of the tech industry on Wednesday to discuss regulations for artificial intelligence and potentially work towards a law that protects US citizens from the dangers of the technology.
In attendance were Tesla CEO Elon Musk, Meta CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai, NVIDIA CEO Jensen Huang, Microsoft CEO Satya Nadella, IBM CEO Arvind Krishna, former Microsoft CEO Bill Gates, and AFL-CIO labor federation President Liz Shuler, reported Reuters.
These tech behemoths are largely responsible for the breakneck pace of progress in AI. While the discussion at the forum centered mostly on generative AI, other areas of tech such as robotics, IoT, and natural language processing were also considered.
More than 60 senators took part in the AI Insight Forum on Capitol Hill, organized by Senate Majority Leader Chuck Schumer, a Democrat. What the US lawmakers seek are guardrails against dangers such as deepfake videos, election interference, and attacks on critical infrastructure.
What did the tech titans say?
Comparing AI regulation to sports, Musk told reporters, "It's important for us to have a referee." He added that a regulator would "ensure that companies take actions that are safe and in the interest of the general public." Musk was one of the signatories of an open letter calling for a six-month pause in the training of AI systems more powerful than OpenAI's GPT-4.
Zuckerberg said, “This is an emerging technology, there are important equities to balance here, and the government is ultimately responsible for that.” Adding a word of caution about the possible emergence of artificial general intelligence (AGI), he said, “Now, if at some point in the future these systems get close to the level of superintelligence, then these equities will shift and we’ll reconsider this approach.”
AGI refers to a hypothetical stage at which AI systems can think and act like human beings, or surpass human intelligence altogether. Many prominent scientists have warned about its possible implications.
Pichai called the event “productive” and stressed the need for the US to balance the “innovation side and building the right safeguards.” Pichai, along with OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, and Microsoft CEO Satya Nadella, met with President Joe Biden and Vice President Kamala Harris in May to discuss the responsibilities of big tech corporations in ensuring that their AI products are safe for public use.
After the Capitol Hill forum, Altman said, “We all share the same incentives of getting this right.” This isn’t his first formal appearance before US lawmakers: he testified at a Senate committee hearing in May, where he said the US should consider licensing and testing requirements for the development of AI models.
Is the US falling behind?
The European Union, on the other hand, has advanced its AI Act, which is set to become the world’s first comprehensive legal framework for regulating AI. Other countries, such as China, have also put AI regulations in place.
Senator Schumer said the EU moved “too fast” in its bid to pass a law. “If you go too fast, you can ruin things,” he said. Although the US doesn’t yet have a law governing artificial intelligence, the White House has published a white paper outlining principles that could shape future legislation.
Titled ‘The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,’ the white paper is based on five principles: Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Human Alternatives, Consideration and Fallback.
The blueprint is available in full on the White House website.