On 14 June 2023, the European Parliament concluded many months of negotiations and adopted its position on the proposal for the Artificial Intelligence Act (AI Act)[1]. The Parliament’s position represents a significant departure from the versions formulated to date – and this applies to both the European Commission’s version[2] presented in April 2021, and the Council’s version[3] adopted in December 2022. The most significant changes, primarily relating to definitions and the way AI systems are classified, are described below. These changes will have a direct impact on the obligations of providers, users (operators), importers, and distributors of AI systems.


An AI system redefined

In its position, the Parliament adopted an approach to defining an AI system that differs entirely from the earlier proposals. The current position is that an AI system is a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments. This is a much broader definition than that adopted to date, and closely resembles the definition adopted by the OECD in 2019[4] and used to some extent in US law[5].

The Council’s definition of a general purpose AI system has also been revised. The EP has now defined this as an AI system that can be used in or adapted to a wide range of applications for which it was not intentionally and specifically designed.


Developments regarding prohibited AI systems

In its position, the EP has significantly expanded the list of AI systems that pose an unacceptable risk and that are therefore prohibited from being placed on the market, put into service, or used.

The list of prohibited AI systems now also includes the following:

  • biometric categorization systems that categorize natural persons according to sensitive or protected attributes or characteristics or based on the inference of those attributes or characteristics;
  • systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
  • systems that infer the emotions of a natural person in the areas of law enforcement and border management, and in workplace and educational institutions;
  • systems that analyze recorded footage of publicly accessible spaces through ‘post’ remote biometric identification systems, unless they are subject to pre-judicial authorization and are strictly necessary with respect to a specific serious criminal offense.

The Parliament has also expanded and further specified the scope of systems already listed in this category, namely systems that deploy subliminal techniques, systems that exploit the vulnerabilities of specific groups, and social scoring systems. Rules on the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement have also been significantly modified.

Moreover, predictive systems based on profiling, location, or past criminal behavior, previously classified as high-risk, have been reclassified as prohibited. The scope of this rule has also been modified to some extent.


Changes regarding high-risk AI

There have also been major changes with regard to high-risk AI systems, which are now classified more narrowly. According to the European Parliament’s position, it is no longer sufficient for a particular AI system to be listed in annex III to the AI Act. To be classified as high-risk, it must also pose a significant risk of harm to health, safety, fundamental rights, or the environment, and the Commission will be required to issue guidelines addressing this in detail. Meanwhile, a provider that does not consider its AI system listed in annex III to pose a significant risk will be required to submit a reasoned notification to the national supervisory authority.

The list of systems and categories specified in annex III has itself also been modified and expanded. The classification applied to date has been broadened and fine-tuned, and will now also include AI systems intended to be used in the following ways:

  • to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems;
  • as safety components in the management and operation of critical digital infrastructure and of the supply of water, gas, heating, and electricity;
  • to influence the outcome of an election or referendum or the voting behavior of natural persons when exercising their vote;
  • by social media platforms designated as very large online platforms under the DSA, in their recommender systems, to recommend user-generated content available on the platform to recipients of the service.

There are also changes to the detailed obligations for high-risk AI systems relating to risk management systems, data and data governance, technical documentation, and incident record-keeping. In addition, the time limit for reporting serious incidents involving these systems to national supervisory authorities has been reduced from fifteen days to three. The EP has also made it compulsory to conduct a fundamental rights impact assessment before a high-risk AI system is put into service.


Foundation models and generative AI systems

The European Parliament also introduced the notion of foundation models, which in effect take over most of the obligations imposed in relation to general purpose AI systems under the Council’s proposal (obligations that were not contemplated at all in the Commission’s original proposal). In the EP’s position, a foundation model is “an AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks”.

A provider of a foundation model has to ensure that it complies with the requirements described below (among other requirements) before making it available on the market or putting it into service, regardless of whether the model is provided as a standalone model or embedded in an AI system or a product, and regardless of whether it is provided under a free and open-source license, as a service, or through other distribution channels:

  • identifying, reducing, and mitigating reasonably foreseeable risks to health, safety, fundamental rights, the environment, democracy, and the rule of law that may be caused by the model;
  • processing and incorporating into those models only datasets that are subject to appropriate data governance measures;
  • designing and developing the foundation model in order to achieve appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity;
  • designing and developing the foundation model, making use of applicable standards, to reduce energy and resource use and to increase energy efficiency and the overall efficiency of the system;
  • drawing up extensive technical documentation and intelligible instructions for use in order to enable downstream providers to comply with all of their obligations under the AI Act.

Meanwhile, providers of foundation models used as generative AI systems also have the following obligations:

  • to comply with transparency obligations;
  • to train, and where applicable, design and develop, their models in such a way as to ensure adequate safeguards against the generation of content in breach of Union law;
  • to document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law.


Other changes

In addition to proposals concerning the definition of an AI system and new rules on classification, the Parliament also proposed modifications to the AI Act proposal with respect to, for example, the following:

  • penalties for breach of obligations under the AI Act, which in the Parliament’s view should be higher;
  • the time limit for application of the AI Act, setting it at two years from the effective date rather than three;
  • a requirement that all AI systems and foundation models be created and used in accordance with principles of trustworthy AI;
  • a guaranteed right to request a clear and meaningful explanation of the role of an AI system in a decision-making procedure.


Further work

Now that the European Parliament has adopted its position, trilogues between the Commission, the Council, and the Parliament can begin. The first trilogue meeting was held on 14 June 2023, and the work will almost certainly intensify from the beginning of July, when Spain takes over the Presidency of the Council of the EU. Spain has said that it considers AI issues a priority and that it intends to use its Presidency to finalize the wording of the AI Act by the end of 2023.


_______________________________________________________

[1] https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/CJ40/DV/2023/05-11/ConsolidatedCA_IMCOLIBE_AI_ACT_EN.pdf

[2] https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF  

[3] https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf

[4] https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449#mainText

[5] https://www.congress.gov/bill/116th-congress/house-bill/6216/text#toc-H41B3DA72782B491EA6B81C74BB00E5C0