Medical AI: A Cure with Legal Side Effects?
- The Rise of AI in Healthcare
In recent years, artificial intelligence (AI) has become a commonplace tool across various sectors, and healthcare is no exception. From algorithms that detect diseases with greater accuracy than an experienced radiologist[i], to systems that predict medical complications by analyzing thousands of medical records[ii], AI is profoundly transforming the way human health is diagnosed and understood.
The enthusiasm is understandable: faster results, lower margins of error, and potentially democratized access to quality diagnoses. However, behind this innovation lie unresolved legal questions. What happens if an algorithm makes a mistake? Who is responsible? Who owns the data used to train these systems? Can a company claim that its medical software "heals better" than a human professional?
- Intellectual Property, Data, and Liability: Who Is Responsible for Errors?
From the perspective of Peruvian intellectual property law, AI-generated results do not –at least in principle– qualify as protected works if there is no direct human creative involvement. However, when these diagnoses are part of commercial solutions, alternative protection mechanisms arise, such as trade secrets or rights over the underlying database.
This brings up the issue of algorithmic bias[iii]: if the training data is not representative –for example, if a model has been trained only on medical records from certain population groups– the diagnostic results may be inaccurate or even dangerous. This is another key legal dimension, as it affects both the product's reliability and the potential liability in case of harm.
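By way of illustration only, the following minimal Python sketch shows how this kind of bias can arise in practice. The data is entirely synthetic and the model (scikit-learn logistic regression) is an assumption chosen for brevity; nothing here reflects any real clinical system. The point is simply that a model trained exclusively on one group can be markedly less accurate for a group it never saw.

```python
# Hypothetical, synthetic illustration of training-data bias.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Simulate patients whose disease risk depends on two biomarkers,
    with group-specific weights (the relationship differs by group)."""
    X = rng.normal(size=(n, 2))
    logits = X @ weights
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Group A (well represented) and group B (under-represented, different physiology)
X_a, y_a = make_group(5000, np.array([2.0, 0.5]))
X_b, y_b = make_group(5000, np.array([0.5, 2.0]))

# Diagnostic model trained only on group A records
model = LogisticRegression().fit(X_a, y_a)

print("Accuracy on group A:", accuracy_score(y_a, model.predict(X_a)))
print("Accuracy on group B:", accuracy_score(y_b, model.predict(X_b)))
# The score on group B is typically much lower: the "same" diagnostic tool
# is less reliable for the population it never saw during training.
```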
In traditional medicine, the healthcare professional who provides a diagnosis is liable for their actions according to medical standards (lex artis). However, when AI is used as a support tool –or in some cases, as a system that autonomously proposes diagnoses– a new scenario of shared responsibility emerges among physicians, healthcare institutions, and technology developers.
The main challenge lies in the opacity of many AI models, especially those based on deep learning, which do not always make it possible to understand how a particular conclusion was reached –this is known as the "black box" problem[iv]. When a mistake occurs, this opacity complicates both the traceability of the failure and the assignment of responsibility.
Broadly speaking, three possible approaches to liability can be outlined:
- Medical liability: when AI acts as diagnostic support and the professional is responsible for accepting or rejecting its recommendation.
- Manufacturer liability: when the software is marketed as a product with a specific diagnostic accuracy, and accountability may arise under warranty or misleading advertising standards.
- Institutional liability: when healthcare providers integrate AI into their services without properly training their staff, or when they implement it poorly within their systems.
For now, most countries operate under legal frameworks that were not designed with AI in mind and must be applied by analogy, which creates uncertainty. In this context, transparency, clinical validation, and algorithm traceability will be essential not only to improve the technology, but also to ensure that legal systems can fairly assign responsibility when the inevitable occurs: a medical AI fails.
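As a rough, hypothetical illustration of what "algorithm traceability" could mean at the level of an individual case, the sketch below records, for each AI-assisted diagnosis, the model version, a fingerprint of the input, the system's output, and the reviewing physician's decision. All field names and values are invented for this example; they are not a regulatory standard or an existing product's schema.

```python
# Minimal, hypothetical audit record for an AI-assisted diagnosis.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DiagnosisAuditRecord:
    model_name: str          # which system produced the output
    model_version: str       # exact version/weights used
    input_fingerprint: str   # hash of the input data (no raw patient data stored)
    output: str              # the suggested diagnosis
    confidence: float        # reported confidence, if the model provides one
    reviewed_by: str         # physician who accepted or rejected the suggestion
    accepted: bool           # whether the recommendation was followed
    timestamp: str           # when the prediction was made (UTC)

def fingerprint(payload: dict) -> str:
    """Deterministic hash of the model input, so the exact case can be re-run later."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

record = DiagnosisAuditRecord(
    model_name="chest-ct-nodule-detector",   # hypothetical system
    model_version="2.3.1",
    input_fingerprint=fingerprint({"study_id": "CT-0042", "series": 3}),
    output="suspicious nodule, right upper lobe",
    confidence=0.87,
    reviewed_by="Dr. Example",
    accepted=True,
    timestamp=datetime.now(timezone.utc).isoformat(),
)

print(json.dumps(asdict(record), indent=2))
```

Records of this kind are what would later allow a court or regulator to reconstruct which version of the system said what, on which input, and who decided to follow it.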
- Advertising and Information Provided to Patients
Beyond technical and liability issues, the deployment of medical AI systems raises questions about how these products are presented to the public, particularly when diagnostic solutions are offered directly to patients or healthcare professionals.
In Peru, advertising in the healthcare sector is subject to strict legal regulations aimed at protecting public health, preventing consumer deception, and ensuring that information about the products and services offered is truthful, verifiable, and not misleading, given that the recipient is making decisions that may directly affect their health.
In the case of medical AI systems, misleading advertising scenarios may arise if users are led to blindly trust the algorithm or if the technology is compared to human medical performance without solid scientific evidence.
This becomes even more problematic when AI systems are marketed in environments lacking rigorous validation standards, since this can give an unfair advantage to companies with more aggressive and less ethical marketing strategies over those that are more cautious.
Thus, the regulation of commercial communication about medical AI should ensure that innovation is not built on exaggerated promises or at the consumer's expense, and that claims about a system rest not only on its successes but also on a clear account of its technical limitations.
- Conclusions and Perspectives
The incorporation of artificial intelligence-based applications in the field of medical diagnosis represents one of the most profound and promising transformations in the healthcare sector in recent decades. However, as with any disruptive innovation, its benefits come with substantial legal challenges that cannot be overlooked.
From the point of view of liability, the technical nature of medical AI requires an in-depth analysis of each individual case to ensure the correct attribution of damages in case of error, while in terms of advertising and consumer relations it is essential to avoid unfair or misleading practices that could undermine confidence in the health system.
In this context, current regulatory frameworks are, in many cases, insufficient. There is a need to move toward adaptive regulatory models that combine flexibility –to foster innovation– with clear safeguards –to protect patients’ rights and ensure market transparency–. Initiatives such as regulatory sandboxes and algorithmic traceability standards are steps in that direction.
The challenge for lawyers specializing in technology, healthcare, and competition is clear: to accompany the development of these tools with a critical, constructive, and multidisciplinary approach that ensures a proper and ethical use of this technology.
While medical AI can offer countless opportunities and practical applications with a direct impact on the health of the population, only proper attention to its legal implications will prevent it from becoming a new source of systemic risk.
[i] Abadia AF, Yacoub B, Stringer N, Snoddy M, Kocher M, Schoepf UJ, Aquino GJ, Kabakus I, Dargis D, Hoelzer P, Sperl JI, Sahbaee P, Vingiani V, Mercer M, Burt JR. Diagnostic Accuracy and Performance of Artificial Intelligence in Detecting Lung Nodules in Patients With Complex Lung Disease: A Noninferiority Study. Journal of Thoracic Imaging. 2022;37(3):154-161. doi:10.1097/RTI.0000000000000613
[ii] Kraljevic Z, Bean D, Shek A, Bendayan R, Hemingway H, Yeung JA, Deng A, Baston A, Ross J, Idowu E, Teo JT, Dobson RJB. Foresight-a generative pretrained transformer for modelling of patient timelines using electronic health records: a retrospective modelling study. Lancet Digit Health. 2024 Apr;6(4):e281-e290. doi: 10.1016/S2589-7500(24)00025-6. Erratum in: Lancet Digit Health. 2024 Oct;6(10):e680. doi: 10.1016/S2589-7500(24)00195-X. PMID: 38519155; PMCID: PMC11220626.
[iii] Min, A. (2023). Artificial Intelligence and Bias: Challenges, Implications, and Remedies. Journal of Social Research, 2(11), 3808–3817. https://doi.org/10.55324/josr.v2i11.1477
[iv] Bathaee, Y. (2018). The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law & Technology, 31, 889.