New AI Content Labelling Rules in China: What are they and how do they compare to the EU AI Act?

Written By

Toby Bond

Partner
UK

I'm a partner in our Intellectual Property Group. Having studied physics at university, I'm fascinated by technology and the way in which it is reshaping our world.

On March 14, 2025, the Cyberspace Administration of China and other relevant departments issued the Labeling Measures for Content Generated by Artificial Intelligence (the “Measures”), along with the mandatory national standard, Cybersecurity Technology—Labeling Methods for Content Generated by Artificial Intelligence (the “Methods”). These rules take effect on September 1, 2025, and build on the Provisions on the Administration of Deep Synthesis of Internet-Based Information Services (the “Deep Synthesis Provisions”), which came into effect on January 10, 2023 (see our summary here).

This article explains the Measures and compares them to the forthcoming EU rules on AI content labelling under the EU AI Act. 

The Measures impose obligations on four types of entities: AI content generation service providers, internet information content propagation service providers, App distribution platforms, and users.

1. AI content generation service providers

AI content generation service providers are defined as organisations or individuals that use artificial intelligence technologies (including through APIs) to provide services to the public for generating or synthesising text, images, audio, video, virtual scenes and other content (Article 3.7 of the Methods). These providers must include explicit and/or implicit labels in AI-generated synthetic content, depending on the circumstances.

Explicit labels: According to Article 4 of the Measures, explicit labels are required in the scenarios stipulated by paragraph 1 of Article 17 of the Deep Synthesis Provisions, which cover creation and/or editing services that may cause confusion or misunderstanding among the public with respect to (a) intelligent dialogue and intelligent writing; (b) synthetic voice and voice imitation; (c) facial generation, facial replacement, facial manipulation and posture manipulation; and (d) immersive realistic scene generation or editing. There is also a catch-all provision extending the requirement to any service that generates or significantly alters information in a way that may confuse or mislead the public. Exhibit B of the Methods adds three further circumstances that may be caught by paragraph 1 of Article 17: text-to-image generation; music generation; and text-to-video and image-to-video generation that may lead to confusion or misunderstanding among the public.

According to Article 4 of the Measures and the specific rules set out in the Methods, explicit labels may take the form of textual indicators such as "AI-generated" or icons, for example displaying "AI" at the beginning or end of the text. For images, the height of the explicit label text must be no less than 5% of the length of the image's shortest side. For audio content, explicit labels may involve spoken indicators (e.g. "AI-generated" read at a normal speaking speed of 120–160 words per minute) or rhythmic audio cues, such as the Morse code rhythm for "AI" (short-long-short-short). The AI content generation service provider must ensure that explicit labels remain attached to AI-generated content when it is downloaded, copied, exported or otherwise provided.
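
To make the image sizing rule concrete, the short Python sketch below computes the minimum label text height for a given image and encodes the "AI" Morse rhythm mentioned above. It is purely our own illustration of the 5% rule; the function name and rounding convention are assumptions, not anything prescribed by the Methods.

```python
import math

# Morse code for "AI": A = ".-", I = ".." -> short-long-short-short
AI_MORSE_RHYTHM = [".", "-", ".", "."]

def min_label_text_height(width_px: int, height_px: int) -> int:
    """Minimum height in pixels for an image's explicit label text.

    The Methods require the label text to be no less than 5% of the
    length of the image's shortest side; we round up to whole pixels.
    """
    shortest_side = min(width_px, height_px)
    return math.ceil(shortest_side * 0.05)

# Example: a 1920x1080 image needs label text at least 54 px high.
assert min_label_text_height(1920, 1080) == 54
```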

Implicit labels: Article 5 of the Measures builds upon the requirements set out in Article 16 of the Deep Synthesis Provisions by clarifying that implicit labels must be embedded in the metadata of all AI-generated synthetic content. The implicit labelling obligation is broader than Article 4: whereas explicit labels are required only for specific categories of AI-generated content that may lead to confusion or misunderstanding among the public, implicit labels must be applied to all AI-generated synthetic content, irrespective of whether it may cause such confusion. Implicit labels involve embedding metadata fields such as "AIGC" in files, along with the identity information of the content producer, such as a company’s unified social credit code or a citizen’s identification number, and a unique content identifier sourced from the AI content generation service provider. Article 5 also encourages, but does not mandate, the use of other implicit labels, such as digital watermarks.
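
As a rough illustration of what embedding such a metadata field might look like in practice, the sketch below writes an "AIGC" text chunk into a PNG file using the Pillow library. The payload layout (field names such as "ContentProducer" and "ProduceID") is our own simplification for illustration only, not the normative schema set out in the Methods.

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def add_implicit_label(src: str, dst: str,
                       producer_id: str, content_id: str) -> None:
    """Embed an implicit AI-generation label in a PNG's metadata.

    The "AIGC" field name follows the Methods; the payload keys below
    are illustrative placeholders, not the normative field names.
    """
    payload = json.dumps({
        "Label": 1,                      # flag: content is AI-generated
        "ContentProducer": producer_id,  # e.g. unified social credit code
        "ProduceID": content_id,         # unique content identifier
    }, ensure_ascii=False)

    meta = PngInfo()
    meta.add_text("AIGC", payload)
    with Image.open(src) as img:
        img.save(dst, pnginfo=meta)
```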

2. Internet information content propagation service providers 

According to Article 6 of the Measures, internet information content propagation service providers must take measures to regulate the distribution of AI-generated synthetic content. This obligation could apply to social media platforms, video-sharing platforms and news portal websites, among others. It includes verifying whether file metadata contains implicit labels. If the file metadata indicates that the content is AI-generated, the propagation service provider must (a) add its own company identifier and a unique content identifier to the implicit label in the file metadata, and (b) inform the public that the content is AI-generated by adding a prominent notice label around the published content.

If the metadata does not contain implicit labels but the user declares the content to be AI-generated, or if the metadata lacks implicit labels and the user makes no declaration but the propagation service provider nevertheless detects explicit labels or other traces of AI generation, the provider is still required to add its own metadata information and notify the public that the content is likely, or suspected, to be AI-generated. The propagation service provider must also provide the necessary labelling functions and remind users to proactively declare whether published content contains AI-generated synthetic content. The tiered logic is sketched in the code below.
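
A minimal Python sketch of how a platform might triage uploads under this tiered logic follows. The metadata key, tier wording and function signatures are our own assumptions for illustration, not anything prescribed by the Measures.

```python
from enum import Enum
from typing import Optional

class Notice(Enum):
    GENERATED = "AI-generated"
    POSSIBLY = "possibly AI-generated"
    SUSPECTED = "suspected AI-generated"

def classify_upload(metadata: dict, user_declared: bool,
                    traces_detected: bool) -> Optional[Notice]:
    """Tiered triage of uploaded content under Article 6 of the Measures.

    Returns the public notice a propagation platform would attach, or
    None where nothing suggests the content is AI-generated.
    """
    if "AIGC" in metadata:      # implicit label present in file metadata
        return Notice.GENERATED
    if user_declared:           # no implicit label, but user declares it
        return Notice.POSSIBLY
    if traces_detected:         # explicit labels / other traces detected
        return Notice.SUSPECTED
    return None

def append_platform_identifiers(metadata: dict, platform_id: str,
                                content_id: str) -> None:
    """Add the platform's own identifiers to the implicit label, as
    Article 6 requires before onward distribution."""
    metadata.setdefault("propagation", []).append(
        {"platform": platform_id, "content_id": content_id})
```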

In practice, questions arise as to whether a propagation service provider is required to implement explicit labelling that extends beyond the compliance obligations of an AI content generation service provider. On a literal interpretation of Article 6(1) of the Measures this could be the case, because a propagation service provider must add a prominent notice label around published content even where the AI-generated content carries only an implicit label. It is open to question whether this places an excessive burden on the propagation service provider, going beyond its primary duty to regulate the distribution of AI-generated synthetic content; further clarification may be required.

3. App distribution platforms 

According to Article 7 of the Measures, App distribution platforms are required to verify whether App providers offer AI-generated synthetic content services during the approval process for app listings or launches. If App providers offer such services, the distribution platform must review the relevant materials relating to the labelling of AI-generated content. The Measures do not specify what sorts of materials App distribution platforms need to review, but presumably App providers should at least provide materials showing how they implement the mandated explicit and implicit labelling of AI-generated content in accordance with the Measures.

4. Users 

According to Article 10 of the Measures, users who utilise internet information content propagation services to publish AI-generated synthetic content must proactively declare it as such, using the labelling functions provided by the service provider.

Article 10 of the Measures also prohibits any organisation or individual from maliciously deleting, altering, falsifying or concealing the labels applied to AI-generated synthetic content as stipulated in the Measures. It is likewise prohibited to provide tools or services that assist others in such malicious acts, or to use improper labelling methods to harm the legitimate rights and interests of others.

However, the Measures also provide some flexibility: if a user does not want AI-generated content to carry an explicit label (for example because its size and format do not align with the user’s aesthetic preferences), the user can apply to the service provider for content without an explicit label (Article 9). In this scenario, the service provider may contractually allocate the labelling obligation to the user and provide the content without explicit labelling, while maintaining the relevant logs for no less than six months.

Comparison with the EU

AI content labelling has also received a lot of attention in the European Union, where the transparency rules contained in Article 50 of the EU AI Act are due to come into effect on 2 August 2026. The AI Act’s content labelling rules apply at two levels: obligations on providers of AI systems and obligations on deployers of AI systems. An immediate difference from the Measures is that the AI Act does not impose direct AI content labelling obligations on other parties in the supply chain, i.e. internet information content propagation service providers and App distribution platforms.

Obligations on providers of AI systems

Article 50(2) requires providers of AI systems which generate synthetic audio, image, video or text content to ensure that the outputs of the system are marked in a machine-readable format and detectable as artificially generated or manipulated. While the language of Article 50(2) makes clear that the marking must be machine-readable, it leaves room for interpretation as to whether there are circumstances in which the marking must also be human-readable. For example, Recital 133 refers to the use of watermarks. It is currently unclear whether this means invisible watermarks (e.g. applied to the metadata of an image file) or could encompass an obligation to include a visible watermark, e.g. in an image file.

This potential uncertainty arises because the EU AI Act contains less detail than the Measures regarding the manner in which the marking must be applied. Rather than providing specific guidance, the AI Act includes a general requirement that providers ensure their technical solutions are effective, interoperable, robust and reliable as far as is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. The AI Act does, however, envisage that the European Commission will develop codes of practice which will provide further guidance on this marking requirement.

Obligations on deployers of AI systems

Article 50(4) requires deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake to disclose that the content has been artificially generated or manipulated. A deep fake is defined as AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful. This definition appears to be narrower than the concept, under Article 4 of the Measures, of content that may lead to confusion or misunderstanding among the public. It also covers only image, audio and video content, whereas the Measures also cover text content.

The proactive labelling obligation imposed on deployers by Article 50(4) also differs from the obligations imposed on users under the Measures, which focus more on preventing users from removing labels applied upstream by the content generation service provider. Article 50(4) is also narrower in scope, as a deployer under the AI Act excludes individuals acting in a personal, non-professional capacity (Article 3(4)). This contrasts with Article 10 of the Measures, which applies more broadly to all users of internet information content propagation services.

Overall, the framework provided by Articles 50(2) and 50(4) of the AI Act is currently less comprehensive and less well developed than the Measures. Some of that will change once the Commission publishes its Guidelines and a Code of Practice addressing these obligations, but structural differences in approach will remain. Both these structural differences and differences in the specifics of content labelling between the EU, China and other jurisdictions are likely to add significant complexity for organisations that offer AI content generation services in multiple jurisdictions, and for those wishing to use AI-generated content, e.g. as part of international advertising campaigns.

For more information regarding the transparency obligations contained in the EU AI Act, along with the Act’s other provisions, please see Bird & Bird’s comprehensive AI Act Guide, which is available here in both English and Chinese.

 

*The article was co-authored by Emma Ren – an Associate at Lawjay Partners.
