The EU Artificial Intelligence Act: Finding A Way for Technology to Serve Humanity | Italy
As the world’s first comprehensive AI law enters the final stages of its pathway to enactment, Rocco Panetta from Panetta Studio Legale charts its progress so far and argues that the latest amendments by the European Parliament go some way to shaping a regulatory framework fit to restore faith in this controversial technology.
With its vote in June 2023, the European Parliament took a decisive – and, it can be said without exaggeration, historic – step towards the approval of the world’s first horizontal and harmonised regulatory framework on AI. The EU is thus preparing to present itself once again on the global geopolitical chessboard as a regulatory power in the AI sector.
The EU Artificial Intelligence Act (the “AI Act”) will, for the first time, regulate the development of the AI phenomenon as such. This does not mean, however, that AI is currently unregulated: in many respects, it is already governed by important rules.
What Is the Relationship Between the AI Act and the GDPR?
This pre-existing safety net consists first and foremost of EU legislation on the protection and circulation of personal data and, in particular, the General Data Protection Regulation (GDPR). This is due to the inseparable interdependence that binds AI and data.
“The close link between the GDPR and AI will not be broken even once the EU Artificial Intelligence Act is finally approved.”
As proof of the extent to which data protection legislation already governs the AI phenomenon, one need only recall that data protection authorities have so far been the only authorities able to exercise coercive and highly persuasive powers of intervention over AI systems. The Italian Data Protection Authority (Garante per la protezione dei dati personali, or “the Garante”) has led the way in Europe, thanks to two well-known processing-blocking measures ordered against the chatbots Replika and ChatGPT.
This close link between the GDPR and AI will not be broken even once the AI Act is finally approved. In fact, the two pieces of legislation will regulate the complex technological phenomenon known as AI in a complementary way.
How Far Is the AI Act from Final Approval?
When trying to briefly trace the journey of the new AI Act, a few milestones can be identified. The first is the presentation of the European Commission’s (EC’s) proposal in April 2021, a highly anticipated moment that implemented the European AI strategy launched in 2018. From there, the other two EU institutions began working towards defining their “common positions” (general approach). The Council of the European Union’s came in December 2022, under the Czech Presidency, while the European Parliament’s negotiating position was finally confirmed by the June 2023 vote.
“The amendments to the EU Artificial Intelligence Act strike a balance between innovation and the protection of fundamental rights.”
The result achieved in the European Parliament now opens the door to a new phase of work – the so-called trilogue, whereby representatives of the three European institutions will meet to negotiate the final text. This will hopefully be published in the Official Journal of the European Union between the end of 2023 and the beginning of 2024 – ie, before the European elections next year.
What Amendments Have Already Been Made to the AI Act?
Despite the large number of changes proposed by the European Parliament (771 amendments were voted on), the regulatory architecture identified by the EC in its proposal remained valid and central. The objective of striking a balance between innovation and the protection of fundamental rights is thus confirmed.
The approach chosen is a risk-based one, with the identification of a series of risk classes on which to base bans (for AI systems posing an unacceptable risk), requirements, and obligations (aimed particularly, but not exclusively, at high-risk AI systems). All will be subject to a wide-ranging governance system and, importantly, a sanctioning regime – albeit one ideally counterbalanced by a number of rules supporting innovation.
Among the main points that the European Parliament’s amendments concern are:
- the choice of a new definition of “AI system”, opting for the formula developed by the OECD;
- the introduction of a list of general principles, including privacy and data governance, that shall apply to all AI systems;
- the expansion of prohibited AI practices to include new applications (eg, predictive policing systems);
- the decision to completely rule out the use of “real-time” remote biometric identification systems in publicly accessible spaces and to prohibit the use of “ex post” remote biometric identification systems in publicly accessible spaces, unless the latter are subject to prior judicial authorisation and are strictly necessary for a targeted search linked to a specific serious crime;
- the modification of the classification criteria and assumptions for high-risk AI systems;
- the inclusion of new obligations for providers of foundation models and generative AI systems (for example, the obligation to declare that content has been generated by AI);
- the inclusion of an obligation to conduct a fundamental rights impact assessment for high-risk AI systems;
- the establishment of the European Artificial Intelligence Office – instead of the European Artificial Intelligence Board proposed by the EC – to be entrusted with the task of monitoring the implementation of the regulation;
- the modification of certain aspects of the sanctioning system; and
- more extensive alignment with the GDPR rules.
The Outlook
The EU has thus struck a first blow that has been heard loud and clear across the member states, as well as overseas. Will this be enough to restore confidence in this extraordinary technology and its potential to serve humanity? Certainly, the road ahead is the right one, but further efforts may be needed to make AI environmentally friendly and work for the needs of many rather than just a few.