China TMT: Bi-monthly Update – September and October 2025 Issue

This newsletter summarises the latest developments in Technology, Media, and Telecommunications in China with a focus on the legislative, enforcement and industry developments in this area.

If you would like to subscribe for our newsletters and be notified of our events on China Technology, Media, and Telecommunications, please contact James Gong at james.gong@twobirds.com.

Key Highlights

In September and October 2025, regulatory authorities advanced the standardized development of key industries through legislation, enforcement, and industry guidance in areas such as artificial intelligence (“AI”), intelligent connected vehicles, platform governance, and the protection of minors, promoting technological and digital empowerment for industrial innovation, with particular emphasis on the integration of AI with existing industries:

  • Artificial Intelligence: At the legislative level, the Cybersecurity Standardization Technical Committee (“TC260”) issued technical specifications related to government affairs large models to guide the development and application of artificial intelligence large models in the government affairs field; the Ministry of Public Security (“MPS”) issued 19 public security industry standards, covering high-risk internet services such as online live streaming and online payments, promoting the upgrade of network security protection to intelligent standardization. On the enforcement front, the National Development and Reform Commission (“NDRC”) and the State Administration for Market Regulation (“SAMR”) continued to intensify enforcement on internet advertising supervision, investigating and handling multiple cases of using AI technology to release false advertisements; the Beijing Internet Court released eight typical cases in the artificial intelligence field, involving specific issues such as the legal attributes of AI-generated images, rights in AI-synthesized natural person voices, infringement of personal information rights by AI face-swapping, and the legal attributes and originality determination of virtual digital human images; the Beijing Municipal Administration for Market Regulation (“Beijing SAMR”) investigated and handled the first case of using AI technology to forge images of well-known public figures for false advertising promotion, demonstrating the determination of regulatory authorities to maintain market order in emerging technology environments. At the industry level, the Cyberspace Administration of China (“CAC”) and the NDRC issued guidelines to standardize AI large model applications in the government affairs field; TC260, in collaboration with the China National Computer Network Emergency Response Technical Team/Coordination Center (“CNCERT”), released a new version of the artificial intelligence security governance framework, which, based on the previous version, sorted and adjusted risk classifications, explored and proposed graded governance principles, and strengthened full-lifecycle technical governance measures; the Cyber Security Association of China (“CSCA”) released a document focusing on real challenges faced by artificial intelligence services for minors, proposing industry norms from seven aspects: health priority, content assurance, privacy protection, clear rules, transparency and trustworthiness, collaborative co-governance, and ethics first.
  • Intelligent Connected Vehicles: At the legislative level, the Beijing SAMR issued four local standards in the autonomous driving field, focusing on standardizing two core areas: closed testing venues for intelligent connected vehicles and vehicle-road-cloud integrated roadside infrastructure. On the enforcement front, the Ministry of Industry and Information Technology (“MIIT”) and five other departments jointly issued a notice, focusing on addressing three major issues: illegal profiteering, false promotion, and malicious defamation attacks.
  • Network Platform Governance: At the legislative level, the General Office of the State Council issued measures to standardize and strengthen electronic seal management, promoting in-depth application and mutual trust and recognition of electronic seals. On the enforcement front, the CAC launched multiple “Clear and Bright” special actions to address issues of maliciously provoking negative emotions and chaos in online live streaming rewards, creating a more civilized and rational online environment; the CAC investigated and handled cases of multiple internet media platforms disrupting the online ecosystem, urging platforms to fulfil primary responsibilities; the SAMR initiated an investigation against a certain large live e-commerce platform, emphasizing the primary responsibilities of e-commerce platforms, protecting the legitimate rights and interests of consumers and small and medium-sized merchants, and promoting improved compliance levels in the live e-commerce industry; the Hangzhou Municipal Administration for Market Regulation (“Hangzhou SAMR”) released ten major data protection enforcement cases, involving issues such as traffic hijacking, illegal data acquisition, infringement of data enterprise trade secrets, and “data grafting.” At the industry level, the MIIT solicited public opinions on the construction of the computing power standard system to provide institutional guarantees for the construction of a national integrated computing power network and further promote healthy and orderly industry development; the NDRC and five other departments jointly issued measures to strengthen the discovery and cultivation of digital innovation enterprises, promoting the emergence of more gazelle enterprises and unicorn enterprises in the digital economy field; the National Data Administration (“NDA”) approved construction plans for national digital economy innovation and development pilot zones in seven locations, requiring targeted pilot trials to be carried out.
  • Protection of Minors: At the legislative level, the CAC solicited opinions on measures for identifying network platform service providers with huge numbers of minor users and significant influence on minor groups, further strengthening online protection for minors and safeguarding their legitimate rights and interests. At the industry level, multiple large internet platforms in Shanghai released social responsibility reports on online protection for minors, showcasing to the public the work measures, achievements, and highlights of network platforms in fulfilling primary responsibilities and actively performing obligations for online protection of minors.

 

Follow the links below to view the official policy documents or public announcements.

Legislative Developments

1. The General Office of the State Council issued the measures to regulate the management and use of electronic seals (9 October) 

The General Office of the State Council issued the Measures for the Administration of Electronic Seals, aiming to strengthen the standardized management of electronic seals and promote their widespread application. The measures apply to administrative organs, enterprises and public institutions, social organizations, and other lawfully established organizations, clarifying that an electronic seal is data in a specific format based on cryptographic and related digital technologies, used for reliable electronic signatures on electronic documents, and that compliant electronic seals have the same legal effect as physical seals. In terms of production, filing and cancellation management, the measures require relevant entities to submit authentic, lawful and valid production materials as required, and production must meet requirements such as valid certificates, standardized formats and compliant images; in terms of use management, the measures stipulate that electronic seal use shall follow the principle of “whoever owns it controls it, whoever applies the seal is responsible”, ensuring data authenticity, integrity and non-repudiation, with process information records preserved for traceability; in terms of security management, the measures require relevant entities to establish information protection systems to ensure information security, and the construction, operation and maintenance of related information systems shall comply with cryptographic, network and data security standards.

2. CAC solicited public opinions on measures for identifying network platform service providers with a huge number of minor users and significant influence on minor groups (16 September)

The CAC solicited public opinions on the Measures for Identifying Network Platform Service Providers with a Huge Number of Minor Users and Significant Influence on Minor Groups, aiming to further consolidate the primary responsibility of network platforms for online protection of minors. The measures are intended to further implement the requirements of the Regulations on the Online Protection of Minors by refining the specific identification standards, identification procedures and related work requirements for network platform service providers with a huge number of minor users or significant influence on minor groups. When determining whether the number of minor users is “huge”, the measures distinguish whether the platform’s products or services are limited to minors and set clear numerical thresholds; when determining whether the platform has “significant influence” on minor groups, the measures comprehensively consider multiple factors including user scale and commercial influence, depth of minor usage, and content relevance. Identification shall in principle be conducted every three years, but may also be initiated under special circumstances. Network platform service providers shall submit a self-assessment report within 20 working days after receiving the notice.

3. TC260 issued a technical document to regulate security management of large model applications in government affairs scenarios (11 September)

The TC260 released the Security Specification for the Application of Large Models in Government Affairs, aiming to provide executable security baselines and evaluation methods for the introduction and use of large models in government affairs scenarios and to consolidate the primary responsibility of government departments. The document focuses on the full chain of “selection—deployment—operation—decommissioning”: in the selection stage, already-filed models shall be adopted, open-source models shall undergo license and integrity verification, API calls shall include certificate verification and prevention of shell impersonation, and the use of retrieval-augmented generation (RAG) technology based on external knowledge bases is encouraged to improve accuracy, timeliness and controllability; in the deployment stage, centralized construction and unified protection are required, and security testing shall be conducted on the software/hardware equipment and third-party tools required for large model deployment; in the operation stage, generated/synthesized content shall be marked, authoritative information shall undergo internal review, model limitations shall be prominently prompted with retention of manual takeover interfaces, and application operation logs shall be retained for no less than one year with regular audits. Appendix A of the specification sets forth “security guardrail” functional requirements, and Appendix B provides an application security testing guide, together forming an implementable security baseline for large model applications in government affairs.

4. MPS released a batch of public safety industry standards, covering areas such as online live streaming services, online payment services, big data system security extension requirements, and network security graded protection cloud computing evaluation guidelines (23 October)

The MPS has approved 19 public safety industry standards, including Internet Interactive Service Security Management Requirements Part 12: Online Live Streaming Services. Many of the standards relate directly to the basic requirements of network security graded protection and aim to promote the upgrade of network security protection toward intelligence and standardization. Among them, the mandatory standards primarily target high-risk internet interactive services such as online live streaming and online payment, requiring relevant service providers to strictly implement security management requirements; the recommended standards cover cutting-edge technology fields such as edge computing, big data systems, IPv6, blockchain, cloud computing, and 5G, providing specific security protection guidance for relevant enterprises and institutions.

5. Beijing SAMR released local standards to regulate safety management in the autonomous driving field (27 October)

The Beijing SAMR issued four local standards in the autonomous driving field, aiming to standardize two core areas: closed testing venues for intelligent connected vehicles and vehicle-road-cloud integrated roadside infrastructure. In the field of closed testing venues for intelligent connected vehicles, the Intelligent Connected Vehicles Closed Testing Venue Testing Technical Specifications Part 1: Passenger Cars focuses on L4-level driving automation function testing, adding new tests for unmanned operations, vehicle-cloud communication, and V2X communication security. The Intelligent Connected Vehicles Closed Testing Venue Testing Technical Specifications Part 2: Unmanned Delivery Vehicles focuses on two core aspects, safe passage and scenario adaptation, setting test items such as recognition of non-motorized lane markings, response to motorized-non-motorized isolation barriers, interaction with dense pedestrians and non-motorized vehicles, and conflict avoidance at roadside bus stops, especially for emergency conditions such as pedestrians crossing from behind obstructions and non-motorized vehicles illegally cutting in. The Vehicle-Road-Cloud Integrated Roadside Infrastructure Part 6: Information Security Technical Requirements proposes general technical requirements for information security: security function requirements cover identity authentication and identification, access control, vulnerability and malicious program protection, intrusion detection and defense, and cryptographic support; communication network security covers device access, network transmission, short-range wireless communication, and direct communication; system software upgrades cover requirements for the upgrade package itself, communication during upgrades, OTA upgrade devices, and the upgrade operation process; and data security covers integrity and confidentiality requirements. The Vehicle-Road-Cloud Integrated Roadside Infrastructure Part 7: Application Technical Requirements for Operation and Maintenance Management Systems stipulates the overall architecture, functional requirements, and performance requirements of the roadside intelligent infrastructure operation and maintenance management system, as well as data requirements for roadside intelligent infrastructure.

 

Enforcement Developments

6. CAC launched a “Clear and Bright” special action to address issues of maliciously provoking negative emotions (22 September)

The CAC organized departments at all levels to carry out the “Clear and Bright · Remediation of Malicious Provocation of Negative Emotions” special action, thoroughly addressing problems such as maliciously inciting confrontation and promoting violent hostility and other negative emotions. This special action focuses on social, short video, and live streaming platforms, comprehensively investigating key areas including topics, rankings, recommendations, bullet comments, and comment sections, with particular emphasis on remediating the following issues: provoking extreme confrontational emotions among groups, including labelling and hyping contradictions, “fandom” attacks and mutual denigration, and inciting confrontation against specific groups; promoting panic and anxiety emotions, covering the fabrication and dissemination of information about emergencies, the creation of rumours, and the “selling” of anxiety in areas such as employment and education; provoking cyber violence and hostility, involving scripted violent plots, dissemination of bloody content, AI-rendered violence, dangerous gimmicks in live streaming, and offline conflicts; excessively rendering pessimistic and negative emotions, such as advocating “uselessness theory”, one-sidedly amplifying negative cases, and spreading decadent content.

7. CAC launched a “Clear and Bright” special action to address chaos in online live streaming rewards (28 October) 

The CAC organized departments at all levels to carry out the “Clear and Bright · Remediation of Chaos in Online Live Streaming Rewards” special action, further strengthening the management of online live streaming rewards. This special action targets key areas prone to chaos in live streaming rewards, such as entertainment-oriented group streaming and private-domain live streaming, with focused remediation of the following four categories of issues: vulgar group streaming to induce rewards, including the use of indecent behaviour, vulgar gameplay, and discomfort-inducing methods; fraudulent personas to deceive rewards, including fabricating false personas, scripts, and AI content, or setting false interaction rules, promotional methods, and pretexts to deceive rewards; inducing minors to reward, including inducing minors to reward under various pretexts, assisting them in evading supervision in order to reward, knowingly inducing minors to reward, or disguising as or claiming to be minors to attract rewards from others; stimulating irrational rewards from users, such as platforms failing to set reward limits, determining rankings by rewards, or designing harmful PK mechanisms to stimulate irrational consumption.

8. CAC investigated and handled cases of multiple internet media platforms disrupting the online ecosystem, implementing primary responsibility for information content management (11 September, 20 September, 23 September)

The CAC investigated and handled cases of multiple internet media platforms disrupting the online ecosystem, aiming to urge websites and platforms to fulfil their primary and social responsibilities and effectively maintain a clean and healthy cyberspace. The handled cases include: one platform failed to implement its primary responsibility for information content management, frequently presenting in key hot search ranking sections multiple entries hyping celebrities’ personal dynamics and trivial matters and other undesirable information content; another platform prominently displayed in its main hot search rankings a large number of entries hyping celebrities’ personal dynamics and trivial matters and other undesirable information content; a third platform clustered in its main hot search rankings entries on extreme, sensitive and malignant cases sourced from non-authoritative departments or media, involving topics related to cyber violence and minors’ privacy. In each case, the platform was subjected to measures including interviews, orders to rectify within a time limit, and strict handling of responsible persons.

9. MIIT and five other departments jointly deployed a special action to remediate network chaos in the automotive industry (10 September)

The MIIT and five other departments jointly launched a remediation action against network chaos in the automotive industry, aiming to implement requirements for effectively regulating competition order in the new energy vehicle industry. This special remediation action will focus on addressing network chaos such as illegal profiteering, exaggerated and false promotion, and malicious defamation attacks, with particular emphasis on the following three categories of issues: first, illegal profiteering, including creating false content to hype negative topics about automakers for traffic monetization, conducting false or non-standard evaluations under the guise of “supervision” or “science popularization”, and using technology to produce false content with new types of “network water armies” for profit; second, exaggerated and false promotion, including making false or misleading claims about vehicle and power battery performance, functions, quality, and sales to deceive consumers, manipulating evaluators to use false data for rankings or selectively disclosing data and releasing incomplete rankings, and hyping topics during industry events to create adverse impact; third, malicious defamation attacks, including organizing “water armies” and “black PR” to disseminate negative information and smear competitors and their products in order to suppress rivals, as well as automaker executives engaging in online denigration and provocation.

10. SAMR initiated an investigation against a certain large live e-commerce platform, consolidating the primary responsibility of e-commerce platforms (19 September)

The SAMR filed a case for investigation into a certain large live e-commerce platform suspected of violating the E-Commerce Law and other laws and regulations, aiming to further consolidate the primary responsibility of e-commerce platforms, better protect the legitimate rights and interests of consumers and small and medium-sized merchants, and promote improved compliance levels in the live e-commerce industry. The SAMR pointed out that problems such as “false marketing and counterfeit/inferior products in violation of laws and regulations remain persistent” in the live e-commerce industry. In the next step, the SAMR will advance the case investigation in accordance with the law, and the investigation results will be announced to the public in a timely manner.

11. SAMR announced typical cases of illegal internet advertisements, in which a certain company was fined 1.2 million yuan for using AI technology to generate false character images to mislead consumers (16 October)

The SAMR announced ten typical cases of illegal internet advertisements, among which two cases involved using AI technology to generate false character images or imitate celebrity images to mislead consumers: A certain company published unapproved medical device advertisements on the internet, and in the advertisements used AI technology to generate false character images such as “Inheritor of the Thousand-Year Miao Formula Miao Ancient Gold Paste” and “56th Generation Inheritor of Miao Ancient Gold Paste,” while also using false information such as “Dedicated for middle-aged and elderly” in the advertisements, deceiving and misleading consumers, and was fined 1.2 million yuan; A certain company published ordinary food advertisements in the form of live broadcasts and short videos, in which AI technology was used to imitate the image of a certain famous host to promote the product, and claimed that the product has therapeutic effects, and was fined 200,000 yuan.

12. Hangzhou SAMR released typical cases of data protection enforcement, involving specific issues such as traffic hijacking and infringement of data enterprise trade secrets (15 October)

The Hangzhou SAMR has released the top ten cases of data protection administrative enforcement, covering aspects such as traffic hijacking, illegal data acquisition, infringement of data enterprise trade secrets, and “data grafting”. Typical cases include: A certain data company used technical means to insert link advertisements into the target customers’ browsers on other websites without the consent of the website operators, thereby hijacking traffic from other websites, and was fined 1.5 million yuan; A certain software technology company employed web crawler programs and other technical means to illegally acquire and use others’ data, providing technical support for others to replicate and operate stores, resulting in substantial substitution of other operators’ stores and platform services, and was fined 1.2 million yuan; A certain technology company used improper technical means to illegally acquire and store product transaction information from platforms such as Taobao and Tmall, infringing on the platforms’ trade secrets; at the same time, it engaged in the behaviour of fabricating product sales quantities through “buy A ship B” methods, calling interfaces to divert traffic to specified products, thereby organizing false transactions. The two behaviours together resulted in a total fine of 725,000 yuan; After resigning, Chen [First Name Withheld] used his former work key without authorization to log into the right holder’s system and downloaded all documents, including technical code, infringing on the right holder’s trade secrets, and was fined 300,000 yuan; A certain network technology company used technical means to illegally acquire data, fabricated infringement complaint reports to submit complaints to intellectual property platforms, hindering and disrupting the normal operation of network products or services legitimately provided by other operators, and was fined 600,000 yuan.

13. Beijing Internet Court released typical cases in the artificial intelligence field, involving legal attributes of AI text-to-image generation, infringement of personal information rights by AI face-swapping, and other specific issues (10 September)

The Beijing Internet Court has released eight typical cases in the artificial intelligence field, involving the legal attributes of AI text-to-image generation, rights related to AI processing of natural persons’ voices, infringement of personal information rights by AI face-swapping, allocation of burden of proof between network content service platforms and users, legal attributes and originality determination of virtual digital human images, and other specific issues. These cases respectively clarify: Content generated by individuals using artificial intelligence, if it meets the definition of a work, should be recognized as a work and protected under copyright law; Voices processed by artificial intelligence technology, as long as they possess identifiability, should be included within the scope of protection for the natural person’s voice rights; The key to whether “AI face-swapping” infringes on portrait rights lies in whether it has identifiability, and the collection, use, and analysis of personal information during the synthesis process constitutes processing of the plaintiff’s personal information; Network content service platforms, in scenarios of real-time creative text generation, bear a moderate obligation to explain the results of automated decision-making using algorithms; Unauthorized use of AI software to parody or distort others’ portraits constitutes infringement of their personality rights; Virtual digital humans, if they embody the unique aesthetic choices and judgments of the production team and meet the originality requirements for works, can be recognized as artistic works and protected under copyright law; Natural persons’ personality rights extend to their virtual images; Network service providers that, through algorithmic design, substantially participate in the generation and provision of infringing content should bear infringement liability as content service providers.

14. Beijing SAMR released an enforcement case, investigating and handling the first case of abusing AI technology to publish false advertisements (17 October)

The Beijing SAMR investigated and handled the first case of using AI technology for false advertising: a certain company used AI technology to edit videos of well-known CCTV hosts, added self-designed voice-over content, and published advertisements for ordinary food in the form of short videos on its own network video account, claiming that the food had medical effects, in violation of the relevant provisions of the Advertising Law of the People’s Republic of China; the company has since accepted the administrative penalty.

 

Industry Developments

15. CAC and NDRC issued the Guidelines for the Deployment and Application of Artificial Intelligence Large Models in the Government Affairs Field to regulate the deployment of artificial intelligence large models in the government affairs field (10 October)

The CAC and the NDRC released the Guidelines for the Deployment and Application of Artificial Intelligence Large Models in the Government Affairs Field, aiming to regulate and guide the development and application of artificial intelligence large models in the government affairs field. The guidelines consist of five parts: general requirements, application scenarios, standardized deployment, operation management, and supporting measures, mainly providing work orientation and basic reference for government departments at all levels in the deployment and application of artificial intelligence large models. Focusing on government services, social governance, office operations, and auxiliary decision-making, the guidelines propose four categories and 13 reference scenarios for explorable application of artificial intelligence large models; in terms of standardized deployment, the guidelines require government departments to reasonably select implementation paths based on actual work and scenario characteristics, carry out deployment in a coordinated and intensive manner, explore unified management and reuse, and continuously consolidate the data foundation; in terms of operation management, the guidelines require government departments to strengthen the operation management of artificial intelligence large models in the government affairs field, clarify application management requirements, continuously promote iterative optimization, solidly implement security management, and strictly fulfil confidentiality requirements.

16. MIIT solicited public opinions on the Guidelines for the Construction of the Computing Power Standard System to guide standard-setting in the computing power field (21 October)

The MIIT solicited public opinions on the Guidelines for the Construction of the Computing Power Standard System (2025 Edition), aiming to accelerate the construction of a national integrated computing power network, establish an advanced and applicable computing power standard system that meets industry development needs, and guide the formulation and revision of standards in the computing power field. In terms of construction approach, the guidelines intend to clarify the structure and framework of the computing power standard system. In terms of key directions, the guidelines focus on nine areas: basic general standards, computing facility standards, computing equipment standards, computing-network integration standards, computing interconnection standards, computing platform standards, computing application standards, computing security standards, and green low-carbon standards. The guidelines propose that, by 2027, more than 50 standards be formulated or revised across these areas, with more than 500 enterprises carrying out standard publicity, implementation and promotion.

17. NDRC and five other departments jointly issued measures to strengthen the cultivation of innovative enterprises in the digital economy (26 September)

The NDRC and five other departments jointly issued the Several Measures on Strengthening the Cultivation of Innovative Enterprises in the Digital Economy, aiming to accelerate the cultivation of innovative enterprises in the digital economy. The measures state that innovative enterprises in the digital economy are those that take data as the key production factor, with digital technology innovation, application scenario innovation, and data value innovation as the core driving forces, possessing high agility and high growth potential, and serving as important practitioners in developing new quality productive forces. The measures propose ten initiatives, mainly including: improving the source discovery mechanism for digital innovation enterprises; strengthening multi-dimensional data utilization support; strengthening the supply of computing power resources; enhancing original innovation capabilities; improving the mechanism for achievement transformation; strengthening the supply of scenarios and opportunities; strengthening overseas expansion services for enterprises; optimizing investment and financing services; establishing an open, inclusive and prudent innovation environment; and strengthening the construction of talent teams.

18. NDA approved construction plans for national digital economy innovation and development pilot zones in seven locations, promoting the development of the digital economy (15 October)

The NDA issued the Reply on the Construction Plans for National Digital Economy Innovation and Development Pilot Zones, approving the construction plans of seven locations, namely Tianjin Municipality, Hebei Province (Xiong’an New Area), Shanghai Municipality, Jiangsu Province, Zhejiang Province, Guangdong Province, and Sichuan Province. Focusing on seven aspects including market-oriented allocation reform of data elements, infrastructure construction, productive force layout and regional cooperation, deep integration of scientific and technological innovation and industrial innovation, scenario application expansion, international cooperation, and digital adaptation reform, the NDA extracted and formed a list of 158 reform items, requiring all locations to further highlight key points in accordance with the overall design and carry out targeted pilot trials. The NDA proposed that all locations shall, in accordance with the overall requirements for pilot zone construction and annual work priorities, timely summarize the progress and results of pilot tasks, analyse deficiencies and problems, combine digital economy operation monitoring and statistical accounting pilot tasks, simultaneously clarify high-frequency quantitative indicators, and adapt measures to local conditions to improve the digital economy monitoring indicator system. The NDA will conduct dynamic assessments and phased construction evaluations of the pilot zones in each location.

19. TC260, in collaboration with the CNCERT, released a governance framework to promote AI security governance (15 September)

The TC260, in collaboration with CNCERT, has officially released version 2.0 of the AI Security Governance Framework (the “Framework 2.0”). The Framework 2.0 adds the new principles of “trustworthy application and preventing loss of control” and introduces a new category of “security risks derived from AI applications”; it requires strengthening the assessment of security defects in foundation models and open-source models that propagate downstream; establishes a “circuit breaker” mechanism and “one-click control” measures; improves explicit/implicit marking and tracing mechanisms for synthetic content; explores the establishment of a consensus-based security risk grading methodology with corresponding differentiated prevention measures; builds an AI security evaluation system, conducting layered assessment tests on model algorithms, security performance, and specific scenarios; and encourages organizations to conduct AI security vulnerability crowd-testing activities. Following the division of the AI system development lifecycle, the Framework 2.0 sets out security guidelines for three stages: model algorithm research and development, application construction and deployment, and application operation management.

20. CSCA released a normative consensus, setting ethical boundaries for AI services for minors (24 September)

The CSCA, in collaboration with units from various industry sectors, has released the Consensus on Ethical Norms for Artificial Intelligence Services for Minors. Focusing on the real challenges faced by AI services for minors, it proposes industry norms from seven aspects: health priority, content assurance, privacy protection, clear rules, transparency and trustworthiness, collaborative governance, and ethics first. It advocates that AI service providers earnestly fulfil their social responsibilities in areas such as product design, content management, privacy protection, and psychological health guidance, to jointly create a safe, healthy, and trustworthy digital environment.

21. Shanghai MCEI released an action plan to promote high-quality development of the intelligent terminal industry (14 October)

The Shanghai Municipal Commission of Economy and Informatization (“Shanghai MCEI”) has released the Shanghai Action Plan for High-Quality Development of the Intelligent Terminal Industry (2026-2027), aimed at deeply implementing the national “AI+” initiative and promoting the high-quality development of the electronic information industry. The action plan proposes 20 specific measures across three key areas: In the area of building core terminal products, the action plan proposes to accelerate the development of AI computers, cultivate AI smartphone terminal brands, expand the scale of intelligent computing terminals, strengthen robot terminal capabilities, support leaps in smart glasses capabilities, nurture satellite internet terminal products, stimulate vitality in silver economy terminals, launch industrial terminal products, speed up research and development of future terminals, and innovate new consumer terminal products. In the area of laying the foundation for key terminal technologies, the action plan proposes to strengthen the layout of edge-side AI chips, enhance edge-side model performance, promote collaborative innovation between software and hardware, reinforce intelligent module capabilities, and advance the development of next-generation display technologies. In the area of optimizing the industrial ecosystem, the action plan proposes to enhance the competitiveness of Shanghai’s intelligent terminal brands, build large-scale production bases for intelligent terminals, accelerate the formation of industrial clustering effects, strengthen financial support for the industry, and encourage the application and promotion of intelligent terminal products.

22. Shanghai’s major internet platforms released social responsibility reports on online protection for minors, strengthening the level of protection for minors (10 October and 14 October) 

Shanghai’s major internet platforms released social responsibility reports on online protection for minors: A certain internet platform optimized its multi-dimensional fraud account identification model, implementing parallel intelligent and manual reviews to reduce fraud risks involving minors; a certain internet platform conducted full-chain interception and filtering of inappropriate content across the five stages of search, recommendation, browsing, transaction, and warning to create a “green” shopping experience for minors; two major internet platforms streamlined complaint and reporting channels involving minors and established growth protection platforms; a certain internet platform set up an “Energy Refuelling Station” and introduced psychological counselling services to focus on minors’ mental health; multiple internet platforms implemented more precise age-based recommendations, deeply cultivated high-quality content ecosystems, and created exclusive content pools for minors; a certain internet platform deepened anti-addiction measures and collaborative governance with parents; a certain internet platform launched the “Workplace Rising Stars” series of activities to enhance minors’ digital literacy; a certain internet platform donated to establish the “Dot Light Plan” special fund for youth protection, promoting projects such as network security education in schools and empowerment through innovation labs.
