How Does AI Work?


AI systems are capable of creating original pieces of work, ranging from music and video games to news articles. To do this, the AI tool often “scrapes” online content without the permission of the content owner.

UK legislation contains an exception for the copying of text and data, but only for the sole purpose of non-commercial research. However, in the modern world, AI is often used for commercial purposes.


Potential Injustice for Creators – Getty Images Inc (Getty) v Stability AI


This commercial use can create an injustice for creators, whose work is used without remuneration or permission. In what is likely to be a precedent-setting case, Getty has issued a claim against Stability AI for “scraping” millions of images from its website without Getty’s consent and using those images to unlawfully train and develop its AI model.


Getty therefore claims copyright infringement, database right infringement, trade mark infringement and passing off. An interim application hearing caught the attention of lawyers, in which Stability AI sought to strike out various issues within the Claimants’ claim. Ultimately, Stability was unsuccessful and the matter continues to a final hearing. The verdict will no doubt have far-reaching implications for creators and AI users.


UK Response


Recognising the need for clarity in this domain, the UK Government and working groups had been working towards a voluntary AI copyright code. This non-legislative and sector-led approach sought to focus on the following key principles:


  • safety, security and robustness,
  • appropriate transparency and explainability,
  • fairness,
  • accountability and governance,
  • contestability and redress.


The Communications and Digital Committee highlighted the need also to update copyright legislation to accommodate advancements in AI, and specifically noted that the Government “cannot sit on its hands” while large language model (LLM) developers exploit the works of rightsholders. The report also calls for measures to allow rightsholders to check training data for copyright breaches, and to encourage tech firms to pay for licensed content.


However, in February 2024 the UK Government announced that the code had been “shelved” for now, as the working group was unable to reach agreement on its terms.


EU Response


Whilst EU law no longer applies to the UK, the EU’s approach is still of interest. In March 2024, the European Parliament voted to approve the Artificial Intelligence Act, which sets out requirements relating to safety, compliance, and fundamental rights. Provisions are made for compliance with EU copyright law and for the publication of detailed summaries of the content used to train AI models. The Act is not yet in force in EU member states, and the UK has given no indication that it intends to adopt similar provisions.


UK and US Partnership


Instead, on 1 April 2024, the US and UK announced a new partnership to work jointly on research, safety evaluations and guidance for AI. We await updates on the progression of this partnership.


Company Autonomy


The current position on copyright rules leaves scope for companies to implement their own. Steam, a gaming platform, announced in January 2024 that all developers will now need to declare whether their game includes any AI-generated content, e.g. art, code, or sound, alongside a declaration that the use is not “illegal or infringing”. This is a step towards the transparency envisaged in the EU’s proposals.


Conclusion


Whilst we move closer to achieving the intended principles of safety, transparency and accountability, there remains a long way to go to ensure creators, especially those without significant funds to commence litigation, are protected. The uncertainty surrounding AI and copyright continues.