Regulatory Approach and Framework for Generative AI in China

08/04/2025

 

Generative artificial intelligence (gen AI) is developing rapidly. Technological advances have significantly improved the quality and diversity of AI-generated content, driving wide-ranging applications across sectors. However, as gen AI tools continue to evolve and commercialise, challenges have emerged relating to ethics, safety and privacy, intellectual property, discrimination, and inappropriate content.

 

In response to the regulatory, legal, and ethical challenges posed by AI technologies, the Cyberspace Administration of China (CAC) and six other government departments jointly issued the Provisional Measures for the Administration of Generative Artificial Intelligence Services (Gen AI Measures) on 13 July 2023. This marked the world’s first dedicated regulation governing gen AI.

 

Most recently, the CAC, along with three other departments, issued the Measures for Identifying AI-Generated Synthetic Content (Identification Measures), which will come into effect on 1 September 2025. This reflects the continued deepening of China’s governance of gen AI.

 

These gen AI-related measures apply to any entity, domestic or foreign, that uses gen AI technology to provide services generating content such as text, images, audio, or video to the public within China. Companies that fall within the scope of the rules must understand the regulatory principles and requirements.

China’s AI governance framework

The Gen AI Measures adopt a regulatory approach rooted in the principles of inclusive encouragement and prudent supervision. The measures include a series of supportive provisions for the development of gen AI and impose a range of legal and compliance obligations on gen AI service providers, covering areas such as algorithm compliance, content compliance, training data compliance, data labeling compliance, and user rights protection.

 

Given the rapid evolution of gen AI technology and services, these regulations are designed to be implemented gradually and to remain forward-looking. The Gen AI Measures specifically set out the principle of tiered and categorised regulation of gen AI services, with specific rules and guidelines to be formulated over time by regulatory authorities based on the characteristics, risk profiles and development of gen AI technology and its applications in relevant industries and fields. The goal of this approach is to create an open and supportive environment that promotes innovation and progress, while balancing development and security through governance founded on well-defined rules.
 

Regulatory principles

The Gen AI Measures form an essential part of a foundational legal framework for the governance of AI. Other key regulations under the framework include:

  • the Administrative Provisions on Algorithm Recommendations for Internet Information Services (the Algorithms Regulation, effective from 1 March 2022),
  • the Administrative Provisions on Deep Synthesis for Internet Information Services (the Deep Synthesis Regulation, effective from 10 January 2023), 
  • the Interim Measures for Ethical Review of Science and Technology (effective from 1 December 2023), and
  • the Measures for Identifying AI-Generated Synthetic Content (the Identification Measures, effective from 1 September 2025).

While there is not yet a national AI law, the State Council did include the Draft AI Law in its Legislative Work Plans for both 2023 and 2024. The regulatory regime governing the research, development, deployment, and use of gen AI is expected to continue to evolve and become more comprehensive, systematic, and forward-looking. Companies should closely monitor legal and regulatory developments in this rapidly changing field.
 

Scope of application and foreign investment access

The application of China’s gen AI rules is far-reaching: foreign entities may also be required to comply with the regime if they provide gen AI-powered services, such as those generating text, images, audio, or video, to the public within the territory of China.

 

However, the regime does not apply to entities that do not offer services to the public in China. Industry associations, enterprises, educational and research institutions, public cultural institutions and relevant professional bodies that develop or use gen AI technologies without offering public-facing services in China are not covered by the Gen AI Measures.

 

Currently, China has not established a specific administrative licensing regime or rules restricting foreign investment access in the provision of gen AI services. However, if a gen AI service operates within regulated sectors, such as value-added telecommunications, internet culture operations, online audio-visual services, or broadcasting and television production, the service provider must comply with the applicable licensing requirements and any existing foreign investment restrictions relevant to these sectors.

Key regulatory requirements

1. Security assessment and the filing of algorithms and models

 

Algorithms and models serve as the foundation of gen AI technology. Services with public opinion attributes or social mobilisation capabilities are subject to stricter regulatory scrutiny. The Algorithms Regulation requires algorithm recommendation service providers (including gen AI service providers and technical supporters) with such attributes to file the relevant algorithms with the CAC or its local counterparts within 10 working days from the date of service provision. In addition, the Gen AI Measures require such gen AI service providers to conduct a security assessment and complete gen AI large model filing.

 

The Gen AI Measures do not clearly define "public opinion attributes or social mobilisation capabilities". Service providers should refer to the Regulations on Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilisation Capabilities, jointly issued by the CAC and the Ministry of Public Security in 2018. However, that regulation defines the term relatively broadly, so in practice a detailed, case-by-case analysis is required.

 

2. Training data requirements

 

The maturity of gen AI large models and the quality of generated content are highly dependent on the training data. As a result, the training process requires extensive data collection to ensure the models can perform with precision and generate accurate outputs. The Gen AI Measures place stringent requirements on training data, including:  

  • only using data and fundamental models from legitimate sources;
  • avoiding infringement of intellectual property rights; 
  • obtaining individuals’ consent or otherwise ensuring compliance with relevant regulations if personal information is involved;
  • taking effective measures to improve the quality of training data and enhance its authenticity, accuracy, objectivity and diversity; and
  • complying with other relevant laws and regulations, such as the Cybersecurity Law, Data Security Law and Personal Information Protection Law.

3. Data labeling

 

Data labeling refers to the process of annotating raw data to train machine learning models, and it is critical for improving the accuracy of gen AI models. The Gen AI Measures require service providers to develop clear, specific and operational labeling rules; conduct quality assessments and sampling checks of labeling accuracy; and provide appropriate training to labeling personnel to ensure labeling is conducted in a compliant and standardised manner.

 

4. Content regulation

 

Gen AI service providers bear the responsibilities of internet information content producers and are required to fulfil internet information security obligations. Where personal information is involved, they also bear the responsibilities of personal information processors and must meet personal information protection requirements.

In cases where illegal content is discovered, the service providers must promptly take actions such as stopping content generation and transmission, removing the content, optimising models through retraining, and reporting the issue to the relevant authorities. If users are found to be engaging in illegal activities via gen AI services, the service providers must take measures in accordance with the law and their service agreements, including issuing warnings, limiting functionality, suspending or terminating services, preserving relevant records, and reporting to authorities.

 

5. Content identification and review 

 

The obligation to label AI-generated content is intended to ensure transparency, allowing the public to know that certain content is AI-generated and to decide how much trust and weight to give it. The Gen AI Measures require service providers to label content such as images and videos in accordance with the Deep Synthesis Regulation, which delineates categories of content that require explicit labeling, such as intelligent dialogue, intelligent writing, synthesised human voices, voice cloning and face generation. The upcoming Identification Measures, effective from 1 September 2025, establish a comprehensive framework for labeling requirements throughout the entire content generation and distribution process.

 

6. User rights protection

 

The Gen AI Measures impose various obligations on gen AI service providers to protect user rights. These obligations include:

  • providing safe, stable, and continuous services to ensure normal user access;
  • clearly disclosing the target users, scenarios, and purposes of the service; 
  • educating users on the rational and lawful use of gen AI and implementing safeguards to prevent minors from over-reliance on or addiction to the services;
  • protecting input data and usage records;
  • responding promptly to individual requests to access, copy, correct, supplement, or delete their personal information, and avoiding collecting unnecessary personal information; and
  • establishing and improving complaint and reporting mechanisms, so that users have convenient channels for complaints, know the handling procedures and response times, and receive timely feedback on their complaints and reports.

7. Scientific ethical review

 

The Interim Measures for Ethical Review of Science and Technology, effective from 1 December 2023, place particular emphasis on the ethical risks of technologies such as gen AI. Institutions engaged in AI research involving ethically sensitive areas are required to establish ethics review committees and carry out ethical risk assessments and reviews. Additionally, both the Algorithms Regulation and the Deep Synthesis Regulation require the establishment of ethics review systems and the adoption of corresponding technical safeguards.

Key contacts / Authors

Yuhua YANG: yuhua.yang@thornhill-legal.com

April XIAO: april.xiao@thornhill-legal.com

Rhea YU: rhea.yu@thornhill-legal.com


