Artificial intelligence (‘AI’) is no longer a distant prospect in healthcare – it is here, fundamentally changing both clinical practice for individual healthcare professionals and the delivery and management of services by healthcare organisations. From AI-powered speech-to-text tools that assist with clinical note-taking, to machine learning algorithms enhancing radiology image analysis, and AI chatbots designed for patient triage or talking therapy, these innovations are increasingly shaping the future of healthcare in the UK.
The UK Government’s focus on AI is clear. The 2023 AI Regulation White Paper set out a framework for ‘pro-innovation’ regulation, and the NHS’s 10-Year Health Plan (2025–2035) places digital transformation, including AI, at the heart of its strategy. Reflecting this ambition, the NHS has introduced a world-first predictive AI system designed to proactively detect patient safety risks before they occur. More recently, the National Commission into the Regulation of AI in Healthcare, established by the Medicines and Healthcare products Regulatory Agency (‘MHRA’) and launched in September 2025, further highlights the growing regulatory appetite to keep pace with technological advances.
Yet, despite the promise of efficiency and innovation, the legal and regulatory risks can appear daunting. As a healthcare professional or organisation, how can the benefits of AI be harnessed without inviting legal and regulatory problems? This article explains some of the key legal and regulatory risks to help healthcare providers navigate these challenges.
The legal landscape for AI in healthcare
There is no single ‘AI law’ in the UK. Instead, healthcare providers and practitioners must navigate a patchwork of regulations and legal duties. AI sits at the intersection of:
- Legislation: The use of AI in healthcare is subject to a range of existing laws governing data protection, intellectual property, contracts, consumer protection, and other statutory obligations. Further, if the output of your AI system is intended to be used in the EU, the EU AI Act may also apply.
- Regulation: AI technologies that function as medical devices are regulated by the MHRA, which oversees conformity assessment, safety standards, and post-market surveillance. Additionally, healthcare providers must comply with registration and inspection requirements set by regulators such as the Care Quality Commission (‘CQC’) in England and equivalent bodies in the devolved nations.
- Good governance: Robust clinical governance frameworks are essential for the safe adoption of AI. This includes adherence to professional codes of conduct, such as those set by the General Medical Council (‘GMC’) and the Nursing and Midwifery Council (‘NMC’). Equally important are compliance with National Institute for Health and Care Excellence (‘NICE’) guidance, robust risk management protocols, and clear lines of accountability for clinical decisions involving AI support.
Ultimately, accountability for the use of AI in healthcare remains with clinicians and healthcare providers. This principle was emphasised in the high-profile case involving the Royal Free London NHS Foundation Trust (‘the Trust’) and Google DeepMind. In 2017, the Information Commissioner’s Office (‘ICO’) found that the Trust had breached data protection law by sharing 1.6 million patient records with DeepMind for an AI-powered app without ensuring adequate transparency or a proper legal basis.
Although this case centred on data governance rather than patient safety, it is a clear reminder that legal and regulatory responsibility cannot be outsourced to technology providers or AI systems. UK regulators such as the CQC, GMC, and ICO have repeatedly stressed that they expect healthcare professionals and organisations to maintain robust oversight over AI use and are prepared to take enforcement action where standards are breached.
Data protection
The use of AI systems in healthcare will often involve processing information relating to living, identifiable individuals; in other words, ‘personal data’. Further, personal data relating to health falls within a subset (‘special category data’) which attracts stricter protection under UK data protection law (the UK GDPR and the Data Protection Act 2018). As such, compliance with these key pieces of legislation is crucial.
Three key obligations stand out:
- Lawful basis for processing: Any use of personal data requires one of a limited range of ‘lawful bases’ to be in place, and the use of health data is more restricted still. Often, only express opt-in patient consent will suffice.
- Data minimisation and security: It is fundamentally important that sensitive information such as healthcare data is only recorded and retained where necessary, and that it is held in a secure manner. The ICO will expect healthcare providers to make their own assessments about the appropriateness of using an AI system provider, including the security systems implemented by the system provider. Selecting an AI provider based only on price and functionality would leave you vulnerable if the data security proved to be inadequate.
- Transparency: Patients must be told, in clear and accessible language, if an AI system is being used in their care and how their data is processed by that system. This information should be set out in the relevant privacy notice/policy, but it cannot simply be buried in a lengthy document that patients are unlikely to read; it must be genuinely understandable to the patient.
The ICO’s Guidance on AI and Data Protection is a very helpful guide for healthcare providers considering the implementation of AI tools.
Intellectual property rights
Key forms of intellectual property, such as copyright and, potentially, patent rights, may be created through the use of an AI system. For example, an AI auto-transcription tool which generates notes of a patient consultation produces a work which is capable of copyright protection. Who owns the rights in such works?
Data and other material which a user inputs into the AI system will typically remain owned by the user. Likewise, the outputs generated by the AI system will usually be owned by the user, but this will need to be checked in the relevant contract/terms.
Potentially more contentious are the rights which the AI system provider may reserve to use the user’s inputs and outputs for the benefit of the system as a whole and, potentially, of other users. Again, this will be a key point to check in the relevant contract/terms.
Contractual terms
AI systems and technologies will be provided subject to contractual terms put in place by the relevant supplier. Whilst such terms might be dense and time-consuming to review, it is crucial to understand them. For example, are there any restrictions on how the AI system can be used which are incompatible with your desired use? Under what circumstances might the supplier be able to withdraw the AI system from your use, and what happens to the data which you have submitted to the system prior to that point? What are your rights if the AI system does not work, or produces incorrect results? Understanding the contractual terms which apply to the AI system will be a very important factor in deciding whether or not to use the system.
If you have questions or concerns about the use of AI in healthcare, please contact Dan Tozer or Natasha Ricioppo.