Navigating the Challenges of AI and Personal Data Protection: A Cause for Concern? | Poland

Paulina Komorowska-Mrozik and Julia Olszewska of PricewaterhouseCoopers in Poland look at the challenges of protecting personal data, especially in light of the increased use of AI technology in a growing number of sectors. There is now a pressing need for a comprehensive approach to the interplay of AI solutions with personal data protection principles and data subjects’ rights.

Published on 15 September 2023

In the European Union, the General Data Protection Regulation (GDPR) holds a central role in governing the processing and use of personal data. It serves as the primary legislative framework ensuring free movement of personal data, while maintaining a high level of protection of fundamental rights and freedoms of natural persons (data subjects).

The GDPR sets high standards for data protection, and its principles also apply to all new data-driven technologies, including solutions based on AI algorithms. AI solutions rely on Big Data, characterised, among other things, by the massive volumes of data processed. The GDPR is intended to be technology-neutral, yet the use of Big Data in AI solutions poses several challenges for protecting the fundamental rights and freedoms of natural persons.

Lawfulness, transparency and fairness of personal data processing

The GDPR requires that entities which determine the means and purposes of processing personal data ensure that data subjects are aware of how, by whom and for what purposes their data is collected, used and otherwise processed (transparency principle). Natural persons should also be able to exercise their rights – eg, to request access to their data and obtain a copy of it, or to request deletion of their data.

Given the complexity of AI systems based on machine learning and the characteristics of Big Data, compliance with these obligations can be challenging.

Because AI systems process vast amounts of personal data, determining the full scope of data relating to a given person, and thus responding to data subjects’ requests, may be difficult. Importantly, however, such requests may not be regarded as unfounded or excessive solely because they are harder to handle than in non-AI environments. Observing the transparency principle can also be demanding, as the roles of multiple entities in complex data processing activities – and, consequently, their responsibilities – can be blurred. Additional safeguards, such as the data protection by design approach and the Data Protection Impact Assessment (DPIA), can help ensure compliance. Fairness can also be supported by taking a considered approach to data collection: collecting accurate data, limited to what is necessary to achieve the purposes of processing.

Purpose limitation, accuracy and minimisation principle

Balancing the core principles of personal data protection, such as purpose limitation, accuracy and minimisation, becomes a difficult task in the context of AI technology. Infringement of these principles could have more profound consequences than just regulatory non-compliance. AI systems that rely on biased, inaccurate or excessive datasets can perpetuate disparities in outcomes based on factors like race, gender or socio-economic status.

AI systems often require large and diverse datasets to optimise their performance, potentially straining the purpose limitation principle. In addition, the purpose of data processing in the learning phase will differ from that of the production (operational) phase, and the use of vast amounts of data by AI systems can conflict with the data minimisation principle. A solution could be the use of fictitious data and the application of pseudonymisation techniques during the AI learning and testing phases. The dynamic nature of AI algorithms also poses challenges to maintaining accurate and up-to-date personal data, potentially undermining the accuracy principle. To avoid this, AI systems should be monitored and audited regularly to detect inaccurate data, verify its integrity and enable correction or erasure.
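To illustrate the pseudonymisation technique mentioned above, the sketch below shows one common approach: replacing direct identifiers with keyed hashes before a training phase, so that records about the same person remain linkable for model training while re-identification requires a separately held key. This is a minimal, hypothetical example – the field names, key handling and dataset are illustrative only, and a real deployment would need proper key management and a broader assessment of re-identification risk.

```python
import hmac
import hashlib

# Hypothetical secret key: in practice it would be generated securely and
# stored separately from the pseudonymised dataset (eg, in a key vault),
# since whoever holds the key can re-link tokens to identifiers.
SECRET_KEY = b"replace-with-a-securely-generated-key"

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same token, so all records about
    one person stay linkable during training, but reversing the mapping
    requires the key held by the data controller.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustrative training records; the email column is a direct identifier
# that should not reach the learning phase in clear text.
records = [
    {"email": "anna@example.com", "age": 34, "outcome": "approved"},
    {"email": "anna@example.com", "age": 34, "outcome": "rejected"},
]

pseudonymised = [{**r, "email": pseudonymise(r["email"])} for r in records]

# Both records for the same person carry the same token,
# and the token no longer looks like an email address.
assert pseudonymised[0]["email"] == pseudonymised[1]["email"]
assert "@" not in pseudonymised[0]["email"]
```

Note that under the GDPR pseudonymised data is still personal data – unlike anonymised data – so this technique reduces risk but does not remove the processing from the regulation’s scope.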

Data breach risk

The use of AI systems carries risks related to data breaches, which could threaten the rights and freedoms of individuals through, for example, identity theft or cyberbullying. The complex nature of AI systems, coupled with extensive personal data processing, may increase vulnerability to unauthorised access or accidental leaks. In the event of a breach, one of the main challenges is collecting the background information needed to assess the risks of a personal data breach, such as the categories and approximate number of data subjects and personal data records concerned.

Does the EU AI Act respond to these challenges?

The legislative landscape in the EU is rapidly evolving in response to the advancement of AI technology. The EU has taken significant steps in this direction with the proposal of the AI Act, considered the world’s first comprehensive artificial intelligence regulation. The AI Act does not regulate privacy and personal data protection per se, expressly stating that it is without prejudice to the GDPR and that it aims to regulate AI systems while respecting the GDPR’s principles. It does, however, promote responsible AI development in the European Union, which – at least to some extent – translates into protection of the fundamental rights and freedoms of individuals.

The AI Act explicitly prohibits certain AI practices, including “real-time” remote biometric identification systems in publicly accessible spaces and “post” remote biometric identification systems, unless implemented under specific law enforcement circumstances with judicial authorisation. Moreover, biometric categorisation systems that employ sensitive characteristics such as gender, race, ethnicity and political orientation will also be prohibited. The regulation seeks to prevent the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases, as this action may infringe upon fundamental rights, in particular the right to privacy.

European data protection bodies, such as the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB), have also recently become more active in addressing AI risks. National supervisory authorities, such as the French CNIL or the UK ICO, are likewise engaged in the public debate – for example, by publishing guidelines on AI-based systems. These initiatives showcase the aspiration to foster further development of AI data-driven technologies in line with data protection principles, despite the challenges faced. Developing AI while upholding data protection is therefore feasible, but it requires a diligent and conscientious risk-based approach.

