China will regulate AI-generated content under a new set of rules

The rules will go into effect starting in 2023.
Brittney Grimes
AI concept with a flag of China.

Igor Kutyaev/iStock 

China has issued rules and guidelines regulating the use of artificial intelligence within the country. The regulations take a cautious approach to AI, covering trending AI chatbots such as ChatGPT, AI-generated art, the many uses of AI in the health care sector, and all forms of artificial intelligence in general.

The Cyberspace Administration of China (CAC), the country's internet regulator and censor, has issued a set of rules to follow when incorporating AI. The agency released guidelines on “deep synthesis,” and the regulatory measures will take effect on Jan. 10, 2023.

The conditions regarding AI, or “deep synthesis”

In the statement, the CAC refers to “deep synthesis” as technology that generates text, images, art, voiceovers, or videos of people. “Deep synthesis technology refers to the use of deep learning, virtual reality and other generative synthesis algorithms to produce text, images, audio, video, virtual scenes and other network information,” the agency stated on its website, as translated by Google Translate. The rules are a preventive measure against ethical issues and copyright infringement, an ongoing concern around the world that is often raised when creators use artificial intelligence systems.

“In recent years, the rapid development of deep synthesis technology, while serving user needs and improving user experience, has also been used by some lawbreakers to produce, copy, publish, disseminate illegal and bad information,” the CAC wrote in a statement. The agency said that the document was issued to “prevent and resolve security risks, as well as to promote the healthy development of deep synthesis services and improve the level of regulatory capabilities.”


The CAC and its regulations

The CAC is responsible for implementing policies on issues and concerns that arise in relation to internet use in China. Many of the provisions in the document address concerns raised globally as AI becomes more popular and mainstream. Common questions include: who should get credit for AI-generated art, and how can AI be unbiased when its datasets are supplied as input by people?

The rules state that individuals who use AI to generate work must label the content as AI-generated, adding identifiers to their work if any form of artificial intelligence was used. The label or mark must be included for any AI-generated writing, faces, voices changed by AI, or other alterations.

“Where deep synthesis service providers provide the following deep synthesis services that may cause confusion or misidentification among the public, they shall make conspicuous marks in reasonable locations and areas of the information content generated or edited, and remind the public of the deep synthesis situation,” the CAC said on its website.

Individuals will have to register when using AI

The agency also wants users to register if they are using AI, eliminating anonymity for AI-generated content within the country. People who want to create work with AI will have to register and verify their identity using their phone numbers, documents, or another form of authentication.

Users must provide “real identity information authentication,” meaning content produced using AI cannot be created under a pseudonym or false name. “Deep synthesis service providers shall lawfully conduct real identity information authentication for deep synthesis service users,” the agency wrote.

The CAC also stated that it must review the algorithms used in artificial intelligence systems, and it banned the spread of negative information using AI. “Deep synthesis service providers and technical supporters shall strengthen technical management, periodically reviewing, assessing, and verifying mechanisms for generating synthetic algorithms.”

The datasets would have to be reviewed before being used as input for AI, along with a review of the resulting output. The assessment could be conducted either by technical means or manually, according to the CAC.