The EU AI Act and National AI Standards: Risk of Fragmentation of the Internal Market
Bruno Lebrun and Wafa Lachguer of Janson discuss the EU AI Act, considering its purpose, its implications for the free movement of goods in the EU, and the risk of regulatory inconsistencies arising from the discretion left to member states to further regulate AI at national level.
Introduction
The free movement of goods is a cornerstone of the EU’s internal market that may be threatened by the rapid proliferation of artificial intelligence (AI) technologies across diverse economic sectors. Individual member states may attempt to regulate those technologies where they create risks for their citizens.
In this context, Regulation (EU) 2024/1689 (the “AI Act”) and the 2025 Commission Guidelines attempt to harmonise the EU legal framework and mitigate the risk that national regulatory divergence impedes the free movement of AI-powered goods and services.
Yet, even after the AI Act, the tendency of member states to maintain national AI standards raises concerns about market fragmentation, innovation disincentives, and compliance burdens. This article examines how the EU seeks to balance national regulatory initiatives against the need for mutual recognition and regulatory coherence within the internal market.
The AI Act: new harmonised standards for AI systems
General overview
Given their adaptability and cross-sectoral applications, AI systems demand a unified regulatory approach to prevent legal uncertainty and market distortions, which is the exact purpose of the AI Act.
Initially proposed in April 2021, the AI Act’s legislative process encountered delays due to technological advancements, most notably the emergence of generative AI tools such as ChatGPT, and the Act was eventually adopted on 13 June 2024.
From a purely internal market point of view, the AI Act reflects the EU’s commitment to full harmonisation, as outlined in the European Commission’s “Competitiveness Compass for the EU”, which emphasises the enforcement of uniform rules to prevent market fragmentation and ensure a level playing field.
Scope
As one of the most comprehensive AI regulations worldwide, the AI Act establishes a structured framework for AI policy, introducing compliance and transparency obligations founded on a risk-based approach.
The AI Act adopts a broad definition of AI systems to address the rapid evolution of these technologies. It focuses on key characteristics such as:
- autonomy;
- adaptability;
- inference (logic or machine-learning-based reasoning); and
- the ability to influence physical or virtual environments.
This definition aligns with the OECD framework and promotes international regulatory convergence and legal certainty (see the 2019 OECD Recommendation of the Council on Artificial Intelligence).
The AI Act applies:
- to all actors in the AI value chain; and
- to all AI systems whether embedded within a larger product or deployed as a standalone service.
Importantly, the AI Act follows the EU’s traditional extraterritorial approach: it applies to AI systems operated from outside the EU if their outputs are used within the EU.
Risk-based approach
The AI Act categorises AI systems based on their potential to cause harm: the higher the risk, the stricter the obligations on the AI supplier.
The Act distinguishes four risk levels:
- unacceptable risk;
- high risk;
- limited risk; and
- minimal risk.
For high-risk AI systems, the Act imposes strict compliance requirements, ensuring transparency, accountability and safety measures before deployment.
Harmonisation vs national discretion
The main objective: harmonisation
The AI Act attempts to prevent the fragmentation of the internal market with a single set of rules including key harmonisation mechanisms:
- CE marking for AI systems – compliance with harmonised standards enables AI-integrated products to bear the CE marking, signalling conformity with essential safety, health, and fundamental rights protections;
- mutual recognition – AI systems that comply with the AI Act and are lawfully marketed in one member state can circulate freely within the EU, preventing trade barriers and fostering the internal market;
- the 2025 Commission Guidelines, providing practical interpretations of AI prohibitions, ensuring consistent enforcement across member states; and
- standardisation bodies – the CEN, CENELEC and ETSI – in co-operation with the European Commission, develop technical standards aligned with the AI Act, preventing member states from imposing divergent national rules.
National AI standards
While the AI Act establishes harmonised rules, it nevertheless grants member states the discretion to introduce additional measures, particularly in sensitive domains. This dual approach may cause some regulatory fragmentation of the EU internal market.
If national authorities impose excessive compliance requirements, this can lead to regulatory inconsistencies across the internal market. For example, Germany could introduce additional safety requirements for autonomous vehicles beyond EU standards, forcing companies to develop country-specific product versions, thereby increasing costs and reducing market efficiency.

Such national rules could breach the prohibition on quantitative restrictions and measures having equivalent effect between member states, as firmly established in the CJEU’s well-settled Cassis de Dijon jurisprudence. National AI standards may therefore introduce legal uncertainty for businesses, particularly SMEs, and should be carefully monitored by market players and the European Commission.
“The AI Act harmonises AI regulation across the EU, ensuring the free movement of AI-powered goods.”
Still, the AI Act takes a proactive approach, anticipating challenges and establishing safeguards to prevent regulatory divergence. For instance, the AI Act does not apply to AI systems used exclusively for military, defence or national security purposes, but dual-use AI systems – those with both civilian and security-related applications – must still comply with the AI Act’s provisions for non-exempt uses.
Similarly, while member states may introduce stricter transparency and accountability requirements for sensitive AI applications like facial recognition and automated decision-making, these measures must remain proportionate and non-discriminatory. Moreover, the European Artificial Intelligence Board (EAIB), in collaboration with standardisation bodies and industry stakeholders, is expected to accelerate the development of harmonised AI standards, pre-empting the need for potentially divergent national regulations. Regular consultations and joint initiatives, as recommended in the 2025 Guidelines, facilitate this alignment and prevent unjustified restrictions. That is how the AI Act strikes a balance between allowing regulatory flexibility and maintaining a harmonised EU approach.
Conclusion
The AI Act harmonises AI regulation across the EU, ensuring the free movement of AI-powered goods. While it prevents market fragmentation, member states retain some discretion, which creates a risk of regulatory inconsistencies. Thus, strict enforcement and co-ordination are crucial to upholding mutual recognition and innovation-friendly policies. If effectively implemented, the AI Act will strengthen the EU’s regulatory framework for AI, while preserving the internal market and fundamental rights.