AI is penetrating healthcare faster than many realize. It can accelerate diagnostics, improve physicians' decision-making, and make cutting-edge care accessible even where specialists are lacking. At the same time, however, it raises questions that the legal framework must address: who will bear responsibility for damage caused by an algorithm, how should oversight of its decision-making be set up, and how can sensitive patient data be handled safely? European AI regulation already imposes concrete rules, and healthcare facilities must adapt to them. This article therefore focuses on the key legal aspects of using artificial intelligence in medicine – from regulation and personal data protection to liability for damages.

Artificial intelligence (AI) is already being used in healthcare facilities today, and in the near future it will be used increasingly often across a wide range of areas. The most common examples are assistance with diagnostics through the analysis of medical images or laboratory results, assessment of the risk of complications in individual patients, automated patient triage in emergency departments or pre-hospital care, and faster processing and evaluation of medical documentation. Artificial intelligence can save physicians time, increase the accuracy and consistency of diagnoses, and make specialized care accessible to patients even in regions where specialists might not otherwise be available. In the future, AI-based personalized medicine is also expected to develop, with treatment recommendations tailored to a patient's genetic and other individual characteristics.

Regulation of Artificial Intelligence

AI in healthcare has enormous potential, but also fundamental impacts on human health and safety. It is therefore subject to strict regulation in the European Union. The Artificial Intelligence Regulation (the AI Act) categorizes systems according to their risk level. AI systems used in healthcare that also meet the definition of medical devices or in vitro diagnostic medical devices under the relevant EU regulations fall into the category of high-risk systems. This classification entails a series of obligations for AI system providers (manufacturers), ranging from mandatory conformity assessment and requirements for training data quality, system robustness, and safety, to transparency requirements and detailed technical documentation. These are supplemented by obligations arising from medical device regulation – namely, demonstrating clinical safety and efficacy through studies.

Healthcare facilities implementing these solutions typically act as so-called deployers (professional users). In this role, they must maintain operational records, ensure appropriate staff training, monitor the system regularly, report defects and adverse events to the system provider, and guarantee that the system is always subject to human oversight. Some obligations – such as ensuring oversight – can be contractually transferred back to the AI system provider. The professional community is also discussing whether informed patient consent to the use of AI systems should be obtained, at least where the system is used experimentally, i.e., in the development and testing phase.

Questions arise regarding the categorization of some other solutions that are not medical devices. For example, patient triage systems will be high-risk when used in crisis situations. However, the precise definition of a crisis situation is not yet entirely clear. It may cover extraordinary events with high numbers of casualties, but it is uncertain whether the routine operation of hospital emergency departments also qualifies. Conversely, automated processing of medical documentation usually does not fall into the high-risk category. However, if another service were operated on top of the documentation, such as a chatbot providing advice to patients, this would be a limited-risk system, which carries its own set of obligations for providers and healthcare facilities.

Artificial Intelligence and Personal Data Protection

A separate and very significant area is the protection of personal data and data subject to medical confidentiality. Healthcare data falls within the special categories of personal data under the GDPR, specifically data concerning health. Such data may be processed when providing healthcare, but this exception does not automatically extend to processing by AI providers who want to use the data to further train or improve their systems. In such cases, healthcare facilities should very carefully address the conditions in their contracts with these providers, ideally prohibiting data transfer or permitting sharing exclusively in anonymized form. When deciding, however, the nature of the specific system and its intended purpose must be considered: from a functionality perspective, the more quality data a system has available, the more accurate and reliable its results.

Liability for Damages

Liability for damages is a crucial issue in the use of AI in healthcare. AI systems can significantly influence diagnostics, treatment decisions, and the organization of care. If it were not clearly established who is responsible for errors or damage caused by an AI system, patients would be left in uncertainty and might be unable to assert their rights to compensation. AI systems learn independently and make autonomous decisions, and even the system provider is often unable to explain why the system generated a particular output, i.e., why it decided as it did.

We believe that, in the provision of healthcare services, liability under the Czech Civil Code will apply, specifically under Section 2936, as has been the case so far. This provision states that whoever is obligated to perform something for someone and uses a defective thing in doing so shall compensate for damage caused by the defect of the thing. It expressly applies also to the provision of healthcare, social, veterinary, and other biological services. The liability is strict (objective), i.e., it applies regardless of fault. This means that a healthcare facility is liable to a patient for damage caused by a defective AI system even if it did not cause the defect or error itself. The difficulty with this provision lies in determining whether the thing, i.e., the AI system, is defective. An AI system or algorithm may not be defective at all, yet its output may still harm the patient: the system may "decide" based on erroneous data, or based on a poor interpretation of error-free data. Therefore, at least in the foreseeable future, the physician will probably have the final word and will have to establish the diagnosis and the treatment procedure lege artis.

If liability for damages were not assessed in this way, the patient could easily end up with no way to obtain compensation. However, since healthcare facilities do not have the systems and their outputs fully under their control, they should address damage compensation contractually with AI system providers. The contract should clearly determine who is responsible for what and to what extent – for example, for which AI system errors the provider will be liable to the healthcare facility. Such contractual provisions are crucial for AI systems, as in the event of a damage claim the provider could argue that it has no control over the system because of its learning function.

Healthcare facilities should actively verify whether their existing insurance coverage also extends to damage caused by AI systems. Traditional insurance products may not expressly cover the risks associated with new technologies, which could lead to disputes with insurers. Special insurance contracts or extensions of existing policies can ensure that AI-related risks are covered in the same way as other forms of professional liability.

Conclusion

Artificial intelligence in healthcare offers enormous opportunities to increase the efficiency and quality of patient care, but it also brings new legal challenges. Healthcare facilities should not fear using AI, but they must not underestimate preparation. The key is to pay particular attention to structuring contractual relationships with system providers, consistently defining liability for damages, and verifying adequate insurance coverage. Only careful preparation can ensure both that the interests of patients and healthcare facilities are protected and that artificial intelligence does not create distrust between patients and physicians.

This article was prepared by lawyer JUDr. Eva Fialová together with partner JUDr. Ing. Michal Matějka from the law firm PRK Partners, who specialize in information and communication technology law, personal data protection, and legal aspects of new technologies.