This newsletter summarises the latest developments in cybersecurity and data protection in China, with a focus on legislative, enforcement and industry developments in this area.
If you would like to subscribe to our newsletters and be notified of our events on China cybersecurity and data protection, please contact James Gong at james.gong@twobirds.com.
In November and December 2025, relevant regulatory authorities focused on key areas such as artificial intelligence, platform governance, and the protection of minors, and promoted the standardized development of the industry through a multi-dimensional approach encompassing legislation, enforcement, and industry guidance. While advancing technological progress and the digital empowerment of industrial innovation, regulators simultaneously strengthened the governance of cyberspace security and the online ecosystem. During this period, enforcement efforts in the areas of artificial intelligence and online platform governance were significantly intensified, demonstrating a strong regulatory focus:
Follow the links (underlined) below to view the official policy documents or public announcements.
The NDRC, in collaboration with SAMR and CAC, issued the Internet Platform Price Behaviour Rules, aiming to improve the price supervision mechanism for the platform economy and standardize price behaviours. The rules apply to the price behaviours of platform operators and intra-platform operators when selling goods or providing services through information networks. First, the rules clarify operators' independent pricing rights, stipulating that platforms must not interfere with merchant pricing through means such as traffic restrictions or algorithmic downgrading, and that the period for soliciting opinions on adding or changing fee items shall be no less than seven days. Second, the rules refine operators' price labelling practices, requiring platforms and merchants to clearly label prices, prominently display the rules for dynamic pricing and promotional activities, and refrain from charging fees that are not clearly marked. Third, the rules strictly prohibit improper price competition, banning below-cost sales, price discrimination, collusion to manipulate market prices using platform rules and algorithms, price gouging, price fraud and other improper behaviours, and explicitly prohibiting irregular price increases for livelihood and emergency supplies during emergencies. Fourth, the rules strengthen protection of consumers' price-related rights, requiring prominent prompts and convenient cancellation channels for automatic renewals and bundled services. Fifth, the rules improve supervision mechanisms, requiring platforms to establish price behaviour compliance management systems, with relevant departments conducting supervision according to their duties; minor violations that are promptly corrected and cause no harmful consequences may, in accordance with relevant provisions, be exempted from administrative penalties.
2. CAC issued negative list to standardize the conduct of internet celebrity accounts (26 December)
The CAC issued the Negative List of Conduct for Internet Celebrity Accounts, aiming to strengthen the routine management of internet celebrity accounts, guide them to consciously regulate their online conduct, and prevent inappropriate online words and deeds from causing negative impacts. The negative list explicitly prohibits spreading vulgar content, promoting negative value orientations, cultivating negative public personas centered on the glorification of ugliness, spreading false information, distorting the interpretation of policies and public events, encouraging group confrontation, arbitrarily doxxing others, organizing online quarrels and offline fights, collecting negative clues, calling on fans to gather, blackmailing for gain, operating without a licence, and covertly engaging in illegal and underground businesses. Platforms shall strictly fulfil their primary responsibilities on the basis of the negative list, further improve community rules and user agreements relating to the management of internet celebrity accounts, strengthen the management of conduct such as information release, live-streaming interaction, and topic creation by internet celebrity accounts in accordance with laws and agreements, and guide operators of internet celebrity accounts to use their influence reasonably and regulate their online words and deeds.
3. CAC solicited public opinions on draft measures to standardize the development and application of anthropomorphic AI interactive services (27 December)
The CAC solicited public opinions on the Interim Measures for the Administration of Anthropomorphic Artificial Intelligence Interactive Services, aiming to promote the healthy development and standardized application of anthropomorphic AI interactive services. The measures apply to products or services that use AI technology to simulate human personality traits, thinking patterns, and communication styles and provide emotional interaction to the public within the territory through text, images, audio, video, and other means. First, the measures clarify identity disclosure and prompting requirements, stipulating that providers shall prominently remind users that the interaction counterpart is AI, with pop-up reminders required upon first use, re-login, and continuous use exceeding 2 hours, to prevent cognitive confusion. Second, the measures delineate 8 major prohibited behaviours, strictly prohibiting the generation of content endangering national security and social order; obscene, gambling-related, or violent content; insulting or defamatory content; false promises; content involving suicide, self-harm, or harm to mental health; unreasonable decision-making content; and the inducement or extraction of classified or sensitive information, to uphold service bottom lines. Third, the measures strengthen protection for special groups, requiring the establishment of a minors' mode, provision of emotional companionship services to minors only with guardian consent and subject to guardian control functions, and prohibition of simulating elderly relatives or other specific relations to provide services; for minor and elderly users, providers shall require entry of guardian and emergency contact information during registration. Fourth, the measures regulate data management, requiring the adoption of data encryption, security auditing, access control, and other measures to protect interaction data, allowing users and guardians to apply for data deletion, and prohibiting the use of interaction data or users' sensitive personal information for model training without separate user consent.
4. SAMR solicited public opinions on draft guidelines to guide internet platform operators in antitrust compliance (15 November)
The SAMR solicited public opinions on the Antitrust Compliance Guidelines for Internet Platforms, aiming to guide platform operators to effectively prevent antitrust compliance risks, improve compliance management mechanisms, and maintain a fair competition order. First, the guidelines emphasize that antitrust compliance management shall adhere to the principles of pertinence, comprehensiveness, penetration, and continuity, and, based on domestic antitrust supervision and enforcement practice in the platform economy and drawing on relevant overseas experience, propose 8 novel monopoly risk scenarios reflecting the characteristics of platform economy industries, business models, and competition rules, including inter-platform algorithmic collusion, organizing or assisting intra-platform operators to reach monopoly agreements, unfairly high platform prices, below-cost sales by platforms, blocking and banning, “choose one from two,” “lowest price across the web,” and differential treatment by platforms. Second, the guidelines, in accordance with relevant laws and regulations, systematically set out the monopoly risks facing internet platforms, highlight key risks that may arise when platform operators make use of data and algorithms, technology, capital advantages, and platform rules, and guide platform operators to implement their primary responsibility to establish compliance management institutions and to improve systematic monopoly risk prevention and control systems covering compliance reporting, compliance training, compliance assessment, compliance supervision, and compliance management informatization. Third, the guidelines support and encourage platform operators to establish and improve full-chain compliance management systems covering the pre-event, in-event, and post-event stages, and to adopt methods such as platform rule review and algorithm screening at key links in specific scenarios to identify, assess, and prevent relevant risks.
The TC260 solicited public opinions on the Cybersecurity Standard Practice Guideline - Basic Security Requirements for Large Model All-in-one Machine Products and the Cybersecurity Standard Practice Guideline - Technical Specifications for Security Functions of Artificial Intelligence Acceleration Chips, aiming to address the risks and challenges brought by the rapid development of AI technology. The Basic Security Requirements for Large Model All-in-one Machine Products applies to relevant entities such as developers, manufacturers, deployers, and users of large model all-in-one machine products; it elaborates the technical architecture and functional modules of such products, identifies the potential security risks they face, and stipulates basic requirements covering system and hardware security, data security, model security, and application security. The Technical Specifications for Security Functions of Artificial Intelligence Acceleration Chips stipulates security function requirements for AI acceleration chips in 7 aspects (hardware security, interface security, firmware security, secure storage units, cryptographic mechanisms, fault detection and diagnosis, and data protection) and provides corresponding evaluation methods.
6. CAC released announcement on filed generative AI services, adding 73 filed AI services and 35 registered AI services (11 November)
The CAC issued the announcement on filed generative AI services from September to October 2025. From September to October, 73 generative AI services were newly filed with the national CAC; for generative AI applications or functions that directly call filed model capabilities through API interfaces or other methods, registrations are conducted by local CACs, with 35 newly completed registrations in this phase. As of November 1, 2025, a cumulative total of 611 generative AI services have completed filing, and 306 generative AI applications or functions have completed registration.
7. CAC notified two batches of typical cases regarding online chaos in the automobile industry, involving dissemination of false and untrue information, publication of derogatory information, etc. (12 November, 11 December)
The CAC notified two batches of typical cases regarding online chaos in the automobile industry, involving the dissemination of false and untrue information, malicious smearing and defamation of automobile enterprises and products, and other illegal and non-compliant behaviours. The accounts involved have been closed or otherwise disposed of in accordance with laws and platform agreements. Violations in the first batch include: publication of derogatory and false or untrue information infringing enterprises' goodwill and product reputation; and publication of unverified automobile sales data interfering with enterprises' normal production and operation. The second batch further highlights typical scenarios, including: deliberately compiling negative information about automobile enterprises, persistently sensationalizing automotive industry incidents, and smearing automobile enterprises and their product quality; arbitrarily publishing derogatory information to defame and attack automobile enterprises and entrepreneurs; producing and distributing automotive performance evaluation videos that selectively disclose product test data to draw misleading conclusions; distorting interpretations of publicly available corporate financial statements to maliciously undermine business performance; and, at key moments such as automobile enterprises' new product launches, exploiting trending topics to collectively publish homogenized content that smears product performance, complains about price positioning, and engages in comparative marketing to attract traffic.
8. CAC addressed chaos involving openly priced sales, traffic diversion, and implied provision of academic paper trading services, and closed relevant accounts (13 November)
The CAC addressed a batch of illegal and non-compliant internet accounts involving openly priced sales of academic paper trading services, traffic diversion for such trading, and implied provision of such services. Typical cases include: using promotional marketing language to openly price and sell academic paper trading, proxy submission, and proxy publication services; using veiled language such as “paper tutoring,” “journal consultation,” and “paper plagiarism reduction” to divert traffic to private domain groups for academic paper trading; and hinting at the provision of illegal services through topics, profiles, comment interactions, and other means. The relevant accounts have been lawfully closed.
The CAC addressed a batch of illegal and non-compliant internet accounts that used AI technology to impersonate public figures' images to publish marketing information in live streaming, short videos, and other formats, misleading netizens and suspected of false advertising and online infringement. At the same time, the CAC urged websites and platforms to issue governance announcements and carry out centralized clean-up and rectification, cumulatively removing more than 8,700 pieces of relevant illegal information and disposing of more than 11,000 accounts impersonating public figures. Moving forward, cyberspace authorities will continue to strengthen the responsibility of websites and online platforms and maintain high-pressure, strict control over the use of AI to impersonate public figures for live-streaming marketing. Malicious marketing accounts will be identified, disposed of, and exposed in batches to maintain a sound network ecosystem. The aforementioned accounts have been severely disposed of.
The CAC addressed a batch of mobile internet applications in connection with the failure of certain websites and platforms to effectively implement the labelling requirements for AI-generated and synthesized content. The main illegal and non-compliant circumstances are as follows. First, providers of AI generation and synthesis services failed to add explicit labels to generated and synthesized content; failed to add explicit labels in files when providing export functions for generated and synthesized content; failed to add implicit labels to file metadata containing the attribute information of the generated or synthesized content, the name or code of the service provider, the content number, and other production element information; and failed to position implicit labels properly. Second, providers of online information content dissemination services failed to carry out implicit label detection and to add prominent warning labels around published content as required; failed to add to the file metadata involved in the dissemination of generated or synthesized content the attribute information of such content, the name or code of the dissemination platform, the content number, and other dissemination element information; and failed to provide users with functions to declare generated or synthesized content. The CAC lawfully conducted regulatory interviews with the operators of the aforementioned applications, ordered rectification within a time limit, removed the applications from app stores and took them offline, and imposed other disposal measures. Websites and platforms shall strictly implement the requirements of the Measures for the Labelling of Artificial Intelligence Generated and Synthesized Content and strengthen capability building for mutual label recognition and content detection.
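By way of illustration only, the sketch below shows one way a service provider might embed an implicit label of the kind described above into an image file's metadata, using Python and the Pillow library. The chunk key and field names (AIGC-Label, AIGC, ServiceProvider, ContentID) are assumptions made for this example and are not the exact fields or formats prescribed by the labelling measures or their supporting standards.

```python
# Illustrative sketch only: embeds a hypothetical implicit AIGC label into a
# PNG file's text metadata using Pillow. Field names and values are assumed
# for demonstration, not the fields prescribed by the Labelling Measures.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def add_implicit_label(src_path: str, dst_path: str,
                       provider: str, content_id: str) -> None:
    """Write a hypothetical implicit label into a PNG's metadata."""
    label = {
        "AIGC": "true",               # attribute: content is AI-generated or synthesized
        "ServiceProvider": provider,  # name or code of the service provider
        "ContentID": content_id,      # content number assigned by the provider
    }
    img = Image.open(src_path)
    meta = PngInfo()
    # Store the label as a JSON string in a PNG tEXt chunk; the key is illustrative.
    meta.add_text("AIGC-Label", json.dumps(label, ensure_ascii=False))
    img.save(dst_path, pnginfo=meta)

# Example usage with hypothetical provider code and content number:
# add_implicit_label("output.png", "output_labelled.png", "PROVIDER-001", "20251101-0001")
```

An actual deployment would need to follow the specific metadata fields, formats, and carrier types set out in the labelling measures and the applicable national standards.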
11. CAC launched the “Clear and Bright · Addressing Chaos in Network Live-Streaming Tipping” special action to address low-quality group live-streaming inducing tipping, and continued to press websites and platforms to fulfil primary responsibilities (26 November, 3 December)
The CAC continued to advance the “Clear and Bright · Addressing Chaos in Network Live Streaming Tipping” special action, focusing on prominent problems such as low-quality group live streaming inducing tipping, false personas inducing tipping, inducing minors to tip, and stimulating irrational tipping by users. In the month since the launch of the special action, a cumulative total of more than 73,000 illegal live streaming rooms and more than 24,000 accounts have been disposed of, and websites and platforms have been guided to issue 28 batches of governance announcements and typical cases. Typical cases include: wearing revealing clothing and frequently making sexually suggestive gestures during live streaming to create a vulgar atmosphere and induce tipping; using discomforting punishment methods and making indecent sounds to induce users to engage in low-quality interactions with streamers through tipping; and deliberately displaying the tipping amounts of group live streaming participants and inducing users to send gifts to improve streamers' rankings through means such as amount comparisons and focusing the camera on sensitive body parts of highly ranked streamers.
The CAC notified the disposal of a batch of illegal and non-compliant internet celebrity accounts involved in inciting group confrontation, promoting undesirable values, and other violations. The typical violations include: long-term fabrication of remarks inciting group confrontation; continuous promotion of undesirable values such as flaunting wealth and worshipping money; publication of content promoting overseas pornographic films and prolonged use of foul language during live streaming; and tax evasion combined with flaunting wealth and worshipping money. The CAC has lawfully adopted measures against the aforementioned accounts, including closure, long-term muting, suspension of streaming, and suspension of profit-making permissions for live commerce.
The CAC released the announcement on the 2025 disposal of counterfeit and impersonating websites and platforms, having lawfully disposed of 1,418 counterfeit websites throughout the year, an increase of 1.7 times compared with 2024, including 317 websites impersonating state agencies, 323 impersonating state-owned enterprises and public institutions, 250 impersonating academic journals, 135 impersonating private enterprises, 61 impersonating financial institutions, and 332 fabricated or impersonated school websites. Typical cases of the above six categories include, in turn: fabricating a website in the name of the Ministry of Human Resources and Social Security, publishing false notices and setting homepage pop-ups such as “Declaration on 2025 Fiscal Personal Subsidies” to induce netizens to click and enter personal information for fraud; mirroring the online business hall website of China Petrochemical to induce consumers to recharge personal fuel cards online, with the recharges never actually credited, causing losses to netizens; setting up a website falsely in the name of the journal China Rural Health, purportedly supervised by the National Health Commission and sponsored by the China Rural Health Association, to publicly solicit contributions, defrauding authors of paper manuscripts, correspondence addresses, and other information and inducing payment of review fees; impersonating Qilu Pharmaceutical to operate a false corporate official website and application, inducing netizens to log in and purchase virtual currency under the pretext of business cooperation in order to defraud them of money; using a counterfeit SPD Bank official website to gain investors' trust and induce them to download and install false apps for illegal securities activities such as illegal stock recommendation and illegal margin financing; and fabricating non-existent academic education “schools” such as Guizhou Polytechnic Vocational and Technical College to publish false enrolment information and issue false university admission notices, misleading candidates and parents in order to defraud money.
The SPC released three typical cases on the online protection of minors and the punishment of related illegal and criminal acts, highlighting the protection of minors' legitimate rights and interests online, the regulation of online behaviour, and the punishment and prevention of crimes. In Case 1, an online store used a minor's portrait for product promotion without guardian consent, constituting infringement, and was ordered to apologize in writing and compensate for losses. This case clarifies that an online store's unauthorized use of a minor's portrait in business activities without explicit guardian consent constitutes infringement and attracts legal liabilities such as apology and compensation for losses. In Case 2, a minor published insulting and mocking remarks about a classmate in a class group and on social platforms, infringing the classmate's reputation rights, and the guardian was ordered to apologize in writing and compensate for losses for failure to fulfil guardianship duties. This case clarifies that a minor's insult or defamation of others likewise constitutes infringement of reputation rights, and that parents, as guardians, shall bear tort liability for infringement by their minor child. In Case 3, a minor, influenced by undesirable information on a short video platform, committed extortion and was lawfully sentenced to fixed-term imprisonment and fined; the court issued judicial suggestions to the platform regarding content review loopholes and the failure of its minors' mode, urging the platform to fulfil its primary responsibilities for the online protection of minors.
The SPC released the Sixth Top Ten Cases on Lawfully Safeguarding the Rights and Interests of Women and Children, one of which involved joint liability for online infringement where the dissemination of untrue videos on an online platform infringed a minor's personality rights. Specifically, videos containing a minor's portrait, name, social account, and other personal information, mixed with sexual rumours and solicitation advertisements, were produced and disseminated on social software, with viewership increasing rapidly in a short time. The court held that the relevant online service provider, in circumstances where the infringing content was “obviously identifiable” and the risk of dissemination and diffusion was high, failed to promptly adopt necessary measures such as keyword screening and manual review, and should bear joint liability with the infringing user, ordering it to pay compensation for mental distress and reasonable expenses incurred in defending rights. The case also emphasizes that, in addition to fulfilling post-event “notice-deletion” obligations, platforms shall bear pre-event obligations to take measures preventing infringement and in-event disposal obligations once they “know or should know” of the infringement; for illegal information seriously infringing minors' rights and interests, such as content involving minors' personal privacy and sexual rumours, platforms shall fulfil a higher duty of care in review, so as to promote the establishment of long-term mechanisms for network ecosystem governance.
16. Beijing CAC released announcement on registered generative AI services, adding 30 generative AI services that completed registration (19 December)
The Beijing CAC released the announcement on registered generative AI services. As of December 19, 2025, Beijing added 30 generative AI services that completed registration. Pursuant to the Interim Measures for the Administration of Generative Artificial Intelligence Services and relevant provisions, generative AI applications or functions that directly call filed large model capabilities through API interfaces or other methods are subject to registration administration and allowed to go online and provide services.
17. Shanghai CAC released announcement on registered generative AI services, adding 6 generative AI services that completed registration (31 December)
The Shanghai CAC released the announcement on registered generative AI services. As of December 31, 2025, Shanghai added 6 generative AI services that completed registration, with a cumulative total of 145 generative AI services registered.
The Shanghai CAC released the top ten typical disposal cases on optimizing the business network environment at the “Clear and Bright Pujiang · 2025” Network Ecosystem Governance Summary Activity. In 2025, creating a clear and bright network environment was included in version 8.0 of Shanghai's action plan for optimizing the business environment. Shanghai municipal and district cyberspace administrations, together with the public security, procuratorial, and relevant industry regulatory departments, focused on prominent problems such as “fabricating rumours, hyping old news, soft extortion, and black PR armies,” and carried out in depth the “Clear and Bright Pujiang e-Enterprise Companion” special action on optimizing the business network environment, lawfully disposing of a batch of typical cases. The ten representative disposal cases mainly cover three categories: first, severely addressing online chaos in key industries, focusing on the technology, automobile, cruise tourism, and other industrial fields; second, severely punishing illegal and criminal acts, including extortion and blackmail through public opinion, AI-generated rumours, and fake order brushing to fraudulently obtain subsidies; third, disposing of infringing accounts and information targeting enterprises, such as counterfeiting and impersonating well-known enterprise brands and maliciously smearing enterprise executives.
The Shanghai CAC carried out focused governance of “AI abuse” under the “Sword Pujiang · 2025” special enforcement work. During the special action, multiple measures were adopted to guide enterprises in the generative AI sector towards compliant operation: first, strengthening routine management, guiding local app stores to remove 54 apps that used generative AI technology in violation of regulations, and, in accordance with the Measures for the Labelling of Artificial Intelligence Generated and Synthesized Content, guiding expert teams to prepare a “Practical Analysis” answering enterprises' difficult questions; second, strengthening disposal and penalties, conducting “look-back” inspections of 26 generative AI service websites previously required to delist functions to prevent the recurrence of problems, and, for the first time, applying the Interim Measures for the Administration of Generative Artificial Intelligence Services to open cases against and penalize 3 websites that refused rectification; third, exempting first-time violators from penalties, urging 5 websites discovered for the first time to have problems such as failing to conduct security assessments as required by the state or illegally providing “paper ghost writing” functions to delist the relevant functions, and providing guidance on the filing of generative AI services.
20. CAC held symposium to advance the building of a community with a shared future in cyberspace to a higher level (22 December)
The CAC held the Symposium on the Tenth Anniversary of Building a Community with a Shared Future in Cyberspace in Beijing, aiming to advance the building of a community with a shared future in cyberspace to a higher level. The meeting pointed out that in 2015, General Secretary Xi Jinping personally attended the opening ceremony of the second World Internet Conference and delivered a keynote speech, creatively proposing the concept of building a community with a shared future in cyberspace. The meeting emphasized actively practising the Global Development Initiative, Global Security Initiative, Global Civilization Initiative, and Global Governance Initiative, striving to build a more prosperous and inclusive, peaceful and stable, open and inclusive, fair and just cyberspace, and further promoting the building of a community with a shared future in cyberspace to improve quality and efficiency and achieve steady and long-term progress. The meeting further emphasized that to deeply study and implement the concept of building a community with a shared future in cyberspace, it is necessary to deepen conceptual interpretation to make this important concept more deeply rooted in people's hearts; guide innovative practices and summarize and promote excellent practice cases; strengthen coordination and collaboration to better pool global forces for internet development and governance.
The CAC released the National E-Government Development Report (2014-2024), aiming to summarize the achievements and experience of China's e-government development from 2014 to 2024, analyze the current situation and challenges, and propose key tasks for the next stage. The report points out that over the past decade, various regions and departments have given full play to the leading and driving role of informatisation, solidly promoted steady progress in e-government, achieved an overall improvement in how informatisation empowers the performance of government duties, realized leapfrog development in the informatisation of government services, systematically restructured the government data resource system, and comprehensively optimized the e-government development environment. The report further points out that, looking to the “15th Five-Year Plan” period, national e-government development faces new circumstances and new requirements: it is necessary to pay more attention to benefiting the people, intelligent collaboration, empowerment and efficiency, and security and reliability; to build and improve a nationally integrated development pattern; to consolidate the foundations of intelligent and intensive support; to deepen the development and utilization of government data resources; to improve the inclusiveness and intelligence of government services; to regulate and guide the innovative application of frontier technologies; to strengthen cybersecurity barriers; to improve talent training systems; and to expand international cooperation and exchanges, so as to promote e-government development to a new level.
The Internet Society of China hosted the “AI and Security Forum” of the 2025 “AI+” Industry Ecosystem Conference, focusing on trends in AI security governance, with experts from various industries exchanging views on cutting-edge industry research and practical implementation experience. Experts pointed out the need to consolidate technical foundations, deepen integrated applications, and improve governance ecosystems; to focus on key aspects such as policy guidance, consolidating security foundations, and strengthening technological breakthroughs, so as to comprehensively consolidate the foundation for network and data security protection in the information and communications industry; and to adhere to the industry's mission positioning, continue forward-looking research in the AI security industry, and vigorously promote the construction of industrial infrastructure ecosystems and standard systems. The forum released the Cloud-based Intelligent Agent Security Development Research Report, which aims to provide reference and practical guidance on the security construction of intelligent agents for relevant entities such as cloud vendors, security vendors, and industry users.
The BCA, in collaboration with 8 e-commerce platforms including JD.com, Meituan, PDD, VIPSHOP, Douyin, Kuaishou, Rednote, and WeChat Live, formally signed the first national Commitment Letter on Promoting the Standardized Application of AI Technologies, aiming to establish enterprise self-discipline norms covering technical identification, content review, responsibility traceability, and other aspects. Over three months, the BCA conducted a special investigation through questionnaire surveys, experiential surveys, expert seminars, and other methods, identifying three prominent problems infringing consumer rights in the current application of AI synthesis technology: first, insufficient protection of consumers' right to know; second, abuse of synthesis technology generating consumption risks; and third, shortcomings in platform review mechanisms. In response to these problems, the commitment letter proposes six specific measures: “uphold legal bottom lines and clarify compliance boundaries,” “strengthen identification requirements to safeguard right to know,” “combine technical monitoring and manual review to block illegal dissemination,” “provide tool support to guide active compliance,” “severely crack down on identification falsification to safeguard institutional credibility,” and “strengthen training and empowerment to improve industry awareness,” so as to delineate “compliance red lines” for the application of AI technology, press platform enterprises to fulfil their primary responsibilities, and effectively safeguard consumers' legitimate rights and interests.
* With thanks to Olive Chang (Paralegal, Beijing) for help in drafting this update.