In 2020, SyRI, the system used by the Dutch government to predict which citizens were more likely to defraud the State, was not only banned for violating privacy protections but was also found to be discriminatory, as it routinely assigned high risk scores to immigrants, stigmatizing that population before the country's authorities. More recently, earlier this year, a suicide was recorded in Belgium that was induced by a chatbot built on the GPT-J language model, which suggested to its interlocutor that he take his own life in view of the problems and uncertainties he was facing.


These cases are among the reasons why, as early as 2017, the European Council urged the European Parliament to recognize the need to regulate artificial intelligence technologies (hereinafter "AI") in order to ensure a high level of data protection, the realization of digital rights, and compliance with ethical standards. Following that call, the European Parliament began work on a Regulation that would address, among other topics, the ethical issues surrounding AI, civil liability, privacy, and intellectual property.


In this regard, the specific objectives of the Regulation were outlined as follows: (i) to ensure that AI systems used in the European Union are safe and respect fundamental rights legislation; (ii) to provide legal certainty in order to promote investment and innovation in AI; (iii) to enhance governance and the effective enforcement of fundamental rights legislation and of the safety requirements applicable to AI systems; and (iv) to facilitate the development of a single market for lawful, safe, and trustworthy AI applications.


To achieve these objectives, the Parliament focuses on establishing a regulation that promotes the safety, transparency, traceability, and non-discrimination of AI systems. To that end, it classifies systems into different risk levels: unacceptable risk, high risk, and limited risk.


Unacceptable-risk systems are prohibited outright, as they are considered a threat to fundamental rights. High-risk systems are divided into two categories according to their field of use and are subject to the most extensive regulatory requirements. Limited-risk systems, which pose no imminent threat to users' rights, are regulated through minimum requirements aimed at ensuring that digital rights can be realized and exercised.


High-risk systems deserve particular attention, since the requirements governing their development and deployment represent the heaviest regulatory burden. They have also generated the most controversy around the Regulation: some argue that the cost of complying with all the requirements for these systems could deter technological development and even amount to a "danger to technological sovereignty" and to Europe's competitiveness in the technology field. Over the past two years, however, cases of discrimination and violations of fundamental rights attributable to AI systems have continued to emerge, underscoring the need to impose at least minimum limits on them. Accordingly, the main requirements the Regulation places on high-risk systems are: (i) adequate data governance, above all, to mitigate the risks of bias and discrimination; (ii) transparency of the algorithms, supported by documentation; and (iii) continuous human oversight that allows the system's decisions to be corrected without automation and to be halted at any time by means of a "stop" button.


From the Colombian perspective, it is worth noting that, according to the European Parliament, systems for the biometric identification and categorization of natural persons count among the high-risk systems. Such systems are increasingly used in Colombia, yet they are not subject to any specific regulation from the standpoint of personal data protection or data governance. For Colombia, then, this Regulation is an important point of reference for reflecting on the need to regulate these systems, both in terms of personal data protection and in terms of the other fundamental and digital rights that AI systems may affect.