The evolving legal landscape of Artificial Intelligence
Learn about the change in laws relating to AI and gain insights into the ever-evolving artificial intelligence landscape from a legal standpoint.
AI: the most transformative technology to date
The world is waking up to the possibilities, benefits, legal dangers, and limitations of Artificial Intelligence (AI). Today, we see companies and governments seeking to implement AI programs for candidate selection processes, employees striking against AI replacing their jobs and artists inviting their fans to use AI to create new songs.
At the root of this recent AI hype is generative AI. Generative AI models are trained on vast datasets to create remarkably 'human-sounding' content. Anyone with access to the internet can interact with ChatGPT, Bing and similar technologies, which produce human-like responses to whatever the user asks of them.
Jacob Turner of Fountain Court Chambers describes this as “the most transformative technology,” explaining that “we have never had a technology that does not just replicate human ingenuity, but goes beyond it.”
The lawyers who spoke with Chambers about the evolving legal landscape for AI warned about the legal risks associated with the use of the technology, especially those related to Intellectual Property (IP), Data Protection and Human Rights.
AI poses novel questions for Intellectual Property
Most of the companies that are building AI models rely on scraping data from the internet to feed their machine learning algorithms, which means the risk of infringing other companies’ intellectual property is a significant one.
Getty Images recently sued Stability AI, the company behind the AI art generator Stable Diffusion, for copying 12 million images to train its AI model without Getty's permission and without compensation. This dispute poses novel questions about copyright and asks lawyers to test the limits of current IP regulation.
The crucial issue is whether AI developers can be held liable for using copyrighted works to train their AI models. The courts must answer these difficult questions without any AI-specific laws or regulations in place, and the Getty lawsuit could set a crucial precedent on this point.
In the meantime, lawyers rely on existing IP laws to draw speculative conclusions about where the boundaries between AI and IP might lie. Some jurisdictions, including the UK and US, have begun to try to define those boundaries, but this is just the beginning. As generative AI takes off, the legal world is racing to catch up with technological developments and to regulate a practice area that is continually evolving.
Lawyers with expertise in these issues are already seeing how AI tests the boundaries of IP law, and are advising clients on AI-related patents, copyright and trademarks. Their consistent warning is that clients should be cautious of IP infringement when training AI models.
Putting human rights and data protection into AI
Whilst generative AI has been front-page news for only a few months, companies and governments have quietly been using AI models to help automate certain processes for more than a decade.
The French Government proposed using AI-powered mass surveillance during the 2024 Paris Olympic Games, and UK police forces have used live facial recognition technology at a Beyoncé concert in Cardiff, at football matches and during the King's Coronation.
Businesses have also started implementing AI models, using them to sift through job applicants’ CVs, and local councils have even used algorithmic models to determine who may be eligible for housing benefits. At face value, these might seem like simple examples of automating tedious and formulaic processes; however, lawyers and academics see the risk of direct and indirect discrimination in these AI-assisted decision-making processes.
For example, one widely cited study found that facial analysis technology could determine the gender of white male faces with 99% accuracy, but misclassified the gender of darker-skinned women as much as 35% of the time.
AI-specialist lawyers fear that, given the lack of transparency in AI-assisted processes, these inbuilt biases will lead to greater discrimination against minority groups. AI can make decisions that reflect the biases of the material on which it was trained, and it isn't always possible to interrogate its reasoning to work out whether its decisions are based on discriminatory factors. This means that increasing use of AI to aid or replace human decision-making could increase the level of bias and discrimination in society, rather than reducing it.
With no AI-specific regulation currently in place, it is difficult for academics and practitioners to provide guidance on the use of AI to assist or replace human decision-making. Instead, they rely on data protection regulations, such as the GDPR (General Data Protection Regulation), to warn against its misuse. In fact, Italy's data protection authority temporarily banned ChatGPT over concerns about data breaches and GDPR violations.
Currently, the GDPR provides the most detailed framework for applying data protection principles to the use of personal information in AI systems. This aims to provide clarity on issues such as explainability, transparency, discrimination and lack of human oversight. The EU AI Act also aims to provide guidelines on the use of AI systems by businesses. Despite this, security and data concerns remain widespread, and AI models continue to be targets for data breaches and GDPR violations.
Chambers & Partners’ coverage of the AI Law market
The legal AI market is growing, and lawyers and law firms are playing catch-up. Those who use AI, create AI or want to implement AI will increasingly need to seek advice from AI specialist lawyers and firms.
Law firms with expertise in AI law are invited to send submissions to our Artificial Intelligence – Global Market Leaders category for Chambers Global in Spring 2024.