Regulators turn to old laws to tackle AI technology like ChatGPT
The European Union (EU) is at the forefront of drafting new AI rules that could set a global benchmark, but enforcing that legislation will take several years.
"In absence of regulations, the only thing governments can do is to apply existing rules," Massimiliano Cimnaghi, a European data governance expert at consultancy BIP, told Reuters.
This means that regulators are turning to laws already in place, such as data protection laws and safety regulations, to address concerns related to personal data protection and public safety. The need for regulation became evident when Europe's national privacy watchdogs, including Italian regulator Garante, took action against OpenAI's ChatGPT, accusing the company of violating the EU's General Data Protection Regulation (GDPR).
OpenAI responded by adding age-verification features and allowing European users to block their information from being used to train its model.
Nevertheless, the episode prompted data protection authorities in France and Spain to launch their own probes into OpenAI's compliance with privacy laws.
Consequently, regulators aim to apply existing rules covering copyright and data privacy to two key areas: the data fed into AI models and the content they generate.
Proposals for the AI Act

In the European Union, proposals for the AI Act will require companies like OpenAI to disclose any copyrighted material used to train their models, exposing them to potential legal challenges. However, proving copyright infringement may not be straightforward, as Sergey Lagodinsky, a politician involved in drafting the EU proposals, explains.
"It's like reading hundreds of novels before you write your own," he said. "If you actually copy something and publish it, that's one thing. But if you're not directly plagiarizing someone else's material, it doesn't matter what you trained yourself on."
Regulators are now urged to "interpret and reinterpret their mandates," says Suresh Venkatasubramanian, a former technology advisor to the White House. For instance, the U.S. Federal Trade Commission (FTC) has used its existing regulatory powers to investigate algorithms for discriminatory practices.
Similarly, French data regulator CNIL has started exploring how existing laws might apply to AI, considering provisions of the GDPR that protect individuals from automated decision-making.
As regulators adapt to the rapid pace of technological advances, some industry insiders call for increased engagement between regulators and corporate leaders.
Harry Borovick, general counsel at Luminance, a startup that uses AI to process legal documents, expresses concern over the limited dialogue between regulators and companies.
He believes regulators should adopt approaches that strike the right balance between consumer protection and business growth, arguing that the technology's future hinges on such cooperation.
While the development of regulations to govern generative AI is a complex task, regulators worldwide are taking steps to ensure the responsible use of this transformative technology.