Global Market Leaders: An Artificial Intelligence Overview

The Evolution and Imperatives of Artificial Intelligence Governance in China

From blueprint to implementation: an eight-year legislative sprint reaches a milestone

In early 2017, China’s New Generation Artificial Intelligence Development Plan set a clear mandate: establish preliminary legal, ethical and policy frameworks for AI by 2025. As that target year ends, the foundational architecture has largely taken shape. Core legislative pillars – the Cybersecurity Law (2017, amended 2025), the Data Security Law (2021) and the Personal Information Protection Law (2021) – are firmly in place, complemented by targeted rules governing algorithmic recommendation services (2022), deep synthesis (2023), generative AI (2023) and facial recognition (2025). In contrast to the European Union’s omnibus AI Act, China has adopted a problem-driven, iterative approach, moving quickly to address issues that arise alongside advances in AI. Throughout 2025, China has continued to advance AI governance, promoting and regulating development through a mix of laws, administrative regulations and policy instruments.

A decade after the “Internet Plus” Action Plan, China unveiled the “AI Plus” Action Plan in August 2025 to catalyse AI adoption across the economy and society. Positioned as a successor to top-level digital economy policy, the plan aims to integrate AI into technology, industry, consumption, public services, governance and international co-operation, outlining a three-step roadmap. The final stage, targeting 2030, envisions AI fully empowering high-quality development, with the application rate of next-generation intelligent terminals and intelligent agents exceeding 90%.

In parallel, the Cybersecurity Law was amended in October 2025 to introduce, for the first time at the statutory level, provisions promoting the safe development of AI – signalling a new phase of regulatory maturation. Regulatory activity has continued apace: China also enacted a regulation on facial recognition, a key AI application area, subjecting facial-recognition activities to a mandatory filing regime and bringing them squarely within the supervisory framework.

These developments reflect China’s fundamental principle of placing equal emphasis on development and security in cyberspace governance. In the AI domain, this means drawing clear boundaries through law while fostering industry through policy. Rather than a purely rights-based paradigm, a “development–security balance” characterises China’s AI governance model: the state actively promotes innovation and industrial application while building a robust legal and regulatory structure to mitigate potential risks to national security and social stability. The result is a distinctive, multi-layered system that moves with notable speed from high-level principles to enforceable, technical rules. For international businesses, understanding this trajectory is now a critical component of strategic risk management and market expansion.

Beyond balancing industry development with risk control, China’s approach to AI regulation is also marked by agile, scenario-specific oversight of use cases that may pose societal risks. For example, on 27 December 2025, the Cyberspace Administration of China released the Interim Measures for the Administration of Artificial Intelligence Personified Interaction Services (Draft for Comment), which seeks to regulate “products or services that use artificial intelligence technologies to simulate human personality traits, thinking patterns, and communication styles, and that engage in emotional interactions with the public within the People’s Republic of China through text, images, audio, video, or other means”. This initiative is not only a timely response to emerging business models such as AI companions and virtual idols, but also provides important guidance for the healthy, orderly and ethical development of the AI industry.

Establishing a supervisory system for AI safety

While earlier plans contemplated comprehensive, standalone AI legislation, recent legislative work has pivoted toward promoting the healthy development of AI within an existing rule-of-law architecture, with the State Council underscoring the need to build a legal system that ensures AI safety. In practice, AI compliance spans licensing and record-filing requirements, cybersecurity, data security, personal information protection, content governance and ethical norms. China is also researching and exploring systematic, unified legislative solutions for AI, alongside continued refinement of sectoral rules and a broader clean-up of regulations and normative documents.

China’s overarching objective is to maintain a “clean and bright” cyberspace – removing misinformation, deepfakes and fabricated news to protect users and public order. The Cybersecurity Law provides the backbone for content governance and enforcement. Complementing this, the Administrative Measures for AI Labeling and a mandatory national standard now require both visual and non-visual labels on AI-generated synthetic content – covering images, text, audio, video, virtual scenes and other outputs – to enhance user awareness and enable regulatory verification. Enforcement campaigns have been launched to curb the spread of false information produced by generative AI and rectify prominent algorithmic issues on online platforms.

Aligning with international concerns, China has reinforced data protection in AI. The regulatory framework reiterates data security and personal information protection obligations across algorithmic recommendation services, deep synthesis and generative AI – backstopped by the Personal Information Protection Law and the Data Security Law. Authorities have continued inspections and rectification efforts targeting apps and mini-programmes, while issuing targeted facial recognition rules. In 2025, China introduced scenario-specific instruments, including administrative provisions requiring entities that store facial information of more than 100,000 individuals to register with local cyberspace regulatory authorities, and a practice guide for personal information protection in face-recognition payment scenarios.

China has embedded AI into its enforcement regime to keep the industry both vigorous and compliant. The Cyberspace Administration of China has continued its multi-agency Qinglang (Clear and Bright) campaign, including a 2024 initiative focused on governance of typical algorithmic issues on online platforms – signalling the regulators’ sustained focus on AI-related risks and misuse.

Ethics remains integral to China’s AI governance. Foundational ethical frameworks call for human-in-the-loop control, as well as safety and controllability of AI systems, while the 2023 Measures for the Review of Science and Technology Ethics identify certain AI activities as high-risk and require institutional ethics review mechanisms. Together with earlier technical guidelines on AI ethical safety risk prevention, these instruments indicate that ethical considerations will weigh increasingly in China’s AI governance, in step with global trends.

The road ahead: implications and strategies

China’s achievements and resolve in AI development are widely recognised, and the world is closely watching how it will build a sound regulatory system for the sector. Looking ahead, China’s regulatory strategy for AI is expected to reflect the following trends.

Advancing international consensus on AI regulation

China helped facilitate the release of the Shanghai Declaration on Global AI Governance at the 2024 World Artificial Intelligence Conference and High-Level Meeting on Global AI Governance, and the Action Plan on Global AI Governance at the World Artificial Intelligence Conference 2025. These documents emphasise that “only by working together as a global community can we fully unlock the potential of AI while ensuring its safety, reliability, controllability, and fairness”. In addition, China proposed an AI Safety Governance Framework in both 2024 and 2025 to encourage the international community to reach an early consensus on a framework for AI safety governance. Going forward, China will continue to contribute its perspectives and proposals, working with the international community to identify optimal governance pathways for the AI era.

Promoting open-source development

China will promote open-source development under the principles of openness and shared benefit, fostering cross-border open-source communities, lowering technical barriers, and enabling broad global access to the benefits of AI.

The State Council’s Opinions on Deepening the Implementation of the “AI Plus” Initiative explicitly state: “Support the development of AI open-source communities; promote the open aggregation of models, tools, datasets, and other resources; and cultivate high-quality open-source projects. Establish and improve mechanisms for evaluating and incentivising AI open-source contributions, and encourage universities to incorporate open-source contributions into student credit recognition and faculty achievement evaluations. Support enterprises, universities, and research institutions in exploring new inclusive and efficient models of open-source application. Accelerate the creation of an open-source technology system and community ecosystem open to the world, and develop open-source projects and developer tools with international influence.”

Adopting an ethics-first strategy for AI development

As early as 2022, China’s Ministry of Foreign Affairs released the Position Paper on Strengthening the Ethical Governance of Artificial Intelligence, advancing the core concept of “people-centred, AI for good”, a principle further reflected in the Global AI Governance Initiative proposed by China in 2023. In August 2025, the Ministry of Industry and Information Technology and other departments solicited public comments on the Administrative Measures for Services for AI Science and Technology Ethics (Trial), which serve to refine and implement the Opinions on Strengthening the Governance of Science and Technology Ethics and the Measures for the Review of Science and Technology Ethics (Trial) in the AI domain.

Building sustained AI regulatory capacity underpinned by oversight of data, algorithms and computing power

China’s regulatory systems and capabilities in cybersecurity, data security, personal information protection and algorithm filing have reached an initial stage of maturity. Future AI regulatory capacity will need to integrate effectively with these existing mechanisms.

Strengthening protections for minors, the elderly, and other social groups

Protections for minors, the elderly and other social groups will be strengthened in the course of AI development, with particular emphasis on safeguarding cognitive well-being.

Recent enforcement actions related to AI, together with the Regulations on the Protection of Minors in Cyberspace and the Interim Measures for the Administration of Artificial Intelligence Personified Interaction Services (Draft for Comment), prioritise the rights and interests of minors and the elderly. They elevate protections against risks such as emotional dependency and social alienation, cognitive manipulation and value-shaping, and mental health and safety concerns.