The designation of “high-risk” AI systems in the EU’s AI Act carries with it the most stringent obligations and is therefore where businesses need to focus their immediate compliance efforts. High-risk AI systems are generally those that pose a significant threat to health, safety, or fundamental rights. The Act broadly defines them in two ways:
- AI systems intended to be used as a safety component of a product, or that are themselves a product, covered by specific EU harmonisation legislation (e.g., medical devices, machinery, aviation) and requiring a third-party conformity assessment; and
- AI systems falling into specific predefined areas listed in Annex III of the Act, which include critical infrastructure management (road traffic, gas or water supply), education and vocational training (e.g., assessing learning outcomes), employment and worker management (e.g., recruitment, performance evaluation), access to essential public and private services (e.g., eligibility for public healthcare, patient triage systems, creditworthiness evaluation, and health or life insurance risk assessment), law enforcement, migration and border control, and the administration of justice and democratic processes.
For providers of high-risk AI systems, the compliance journey is extensive and continuous. A core obligation is establishing a risk management system, which isn’t a one-off assessment but an ongoing, iterative process. Providers must continuously identify, analyse, evaluate, and mitigate risks to health, safety, and fundamental rights throughout the AI system’s entire lifecycle, diligently considering both its intended use and any reasonably foreseeable misuse.
Furthermore, high-quality data governance is paramount, as the integrity of AI hinges on the data it uses. Providers must ensure their training, validation, and testing datasets meet stringent quality criteria, including relevance, representativeness, completeness, and accuracy, with a keen focus on mitigating biases.
Beyond data, comprehensive technical documentation is mandatory, providing the necessary information to assess the AI system’s compliance and facilitate post-market monitoring. This documentation includes intricate details on its design, development, algorithms, data, and training processes.
Ensuring transparency and information for deployers is also crucial; systems must be designed to be transparent, allowing deployers to understand their functioning, capabilities, and limitations. Clear instructions for use, including human oversight measures, are essential.
A key safeguard woven into the Regulation is human oversight. High-risk AI systems must be designed for effective oversight by human beings, ensuring that human operators can monitor, interpret outputs, intervene, or override decisions where necessary.
Moreover, systems must achieve appropriate levels of accuracy, robustness, and cybersecurity, performing consistently and being resilient against errors, faults, and malicious attacks.
Providers must also implement a documented quality management system covering all aspects of the AI system’s lifecycle, from design and development to testing, deployment, and post-market monitoring.
Before a high-risk AI system can even be placed on the market, it must undergo a conformity assessment; if this is successful, providers are required to issue an EU Declaration of Conformity and affix the CE marking to the product. Finally, most high-risk AI systems, and the use of such systems by public authorities, must be registered in an EU database before being placed on the market or deployed.
Deployers of high-risk AI systems, too, have obligations of their own. Public authorities and private entities using high-risk AI in sensitive areas (e.g., employment, credit scoring, law enforcement) need to carry out a fundamental rights impact assessment (FRIA) before deploying the system, assessing the specific risks arising in their context of use.
Even if a full FRIA isn’t mandated, deployers are generally expected to perform due diligence to ensure the high-risk AI system they acquire is compliant and that its use aligns with fundamental rights. Deployers are often responsible for maintaining human oversight, monitoring and reporting risks in the use of the AI system, and ensuring transparency towards affected individuals.
The deadlines for compliance are approaching rapidly. While the general application of the AI Act is set for 2 August 2026, specific provisions are already active or will become active sooner – by way of example, the prohibitions on unacceptable AI practices have applied since February 2025, whilst the obligations relating to General Purpose AI (these will be dealt with in our next article) came into force on 2 August 2025.
This article was first published in ‘The Sunday Times of Malta’ on 14/09/2025.