Ganado Advocates’ IP/TMT partner Paul Micallef Grimaud met with consultant nuclear medicine physician Dr. Andrew Mallia, Professor Alexiei Dingli, Senior Lecturer of Artificial Intelligence at the University of Malta, and entrepreneur and lawyer Dr. Gege Gatt to look at how AI is positively impacting the health sector and providing unprecedented levels of cure and health management, whilst also discussing the legal and ethical risks involved.

Listen to the full interview on Spotify:
https://open.spotify.com/episode/17MZarOp8DOTEeHfOt6M1W

Looking through the lens of a nuclear medicine physician whose profession revolves around the interpretation of medical images, Dr. Andrew Mallia is of the view that, if used cautiously, Artificial Intelligence (AI), and in particular deep learning and computer vision technology, can deliver the desired level of precision medicine. As explained by Dr. Mallia and further elaborated upon by Prof. Alexiei Dingli, these techniques enable the technology to highlight those details in medical images that require further investigation and analysis by the medical professional, cutting down on time and allowing for greater precision in the interpretation of the images.

This science is based on the machine’s acquired intelligence, which it derives from “studying” large quantities of images and processing them internally to form an understanding of what constitutes a “normal” image and what is “abnormal” and requires investigation.
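By way of illustration, the following is a minimal sketch, in Python using the PyTorch library, of how such a “normal” versus “abnormal” image classifier might be trained. The model, dataset and parameters are hypothetical stand-ins for illustration only, not the systems discussed in the interview.

import torch
import torch.nn as nn

class ScanClassifier(nn.Module):
    # A tiny convolutional network mapping a single-channel scan
    # to the probability that it is "abnormal".
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))  # raw logit

model = ScanClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in for a real labelled dataset of scans (0 = normal, 1 = abnormal).
scans = torch.randn(8, 1, 128, 128)           # eight synthetic 128x128 scans
labels = torch.randint(0, 2, (8, 1)).float()  # synthetic labels

for _ in range(5):  # the machine "studies" the labelled images
    optimiser.zero_grad()
    loss = loss_fn(model(scans), labels)
    loss.backward()
    optimiser.step()

# Estimated probability that the first scan is abnormal and warrants investigation.
print(torch.sigmoid(model(scans[:1])))

In practice the “studying” stage would run over many thousands of expertly labelled images rather than a handful of synthetic ones, which is precisely why access to large quantities of medical data, and the rules governing that access, matter so much.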

Dr. Mallia explains that studies have demonstrated that machines can perform these limited tasks better than humans, although no machine is infallible.

According to Dr. Gege Gatt, this could be explained by the very fact that human decisions are often affected by the personal circumstances of the individual making them. This was demonstrated, for instance, in a US study of judges adjudicating parole cases, which found that decisions were very often influenced by the judges’ subjective views, their personal mood and other extraneous circumstances.

Interestingly, however, humans still instinctively trust other humans more than machines that perform their tasks objectively. That said, the participants all agreed that transparency of the machine’s decision-making process is key to building trust in the technology, and it is not surprising that more emphasis is being placed on glass boxes (systems whose decision-making process can be followed through) rather than black boxes (systems whose output is not explainable).
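To make the glass-box idea concrete, the short sketch below shows one common transparency technique: a saliency map, which measures how strongly each pixel of a scan influenced the model’s output, so that a clinician can see which regions drove the decision. Again, the model and data here are hypothetical illustrations, assuming a classifier like the one sketched earlier.

import torch
import torch.nn as nn

# A stand-in for the trained classifier sketched earlier; any image model
# that outputs a single logit would serve the same purpose here.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)

scan = torch.randn(1, 1, 128, 128, requires_grad=True)  # one synthetic scan
model(scan).sum().backward()  # gradient of the decision with respect to each pixel

# The magnitude of each pixel's gradient is a crude measure of its influence;
# overlaying it on the scan shows the clinician where the model was "looking".
saliency = scan.grad.abs().squeeze()
hotspots = (saliency > saliency.quantile(0.99)).nonzero()
print(hotspots[:5])  # coordinates of the most influential regions

Techniques of this kind do not make the model’s internal reasoning fully legible, but they give the medical professional a verifiable pointer to the image regions behind a recommendation, which is the kind of transparency the participants identified as essential to trust.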

With the medical sector being so key to society, one cannot but expect the legal and regulatory framework to be rather intense. It brings together privacy rights under the GDPR, the European Convention on Human Rights and the EU Charter of Fundamental Rights, which require that patients be informed very clearly as to what use is to be made of their data and be given the right to freely and expressly consent to their health data being used in such a manner. Fully automated decisions based on a patient’s health data are also prohibited, save with the patient’s free, informed and express consent. Likewise, patients’ charters of rights insist that patients be fully informed of the management solutions related to their care and cure, while the European Commission’s draft regulation on AI classifies the use of AI in the medical sphere, in particular where diagnosis and cure are concerned, as high risk. This will necessitate high levels of certification by independent auditors regulated by national regulatory authorities, in accordance with requirements that will be harmonised across Europe once the Regulation is adopted.

The interplay between product liability and medical professional responsibility is particularly interesting, as alluded to by Dr. Mallia, and will most definitely be tested before the courts in the near future.

Despite this intense legal framework, which some may criticise for slowing down the pace of development, being mindful and respectful of privacy rights when developing technological solutions leads to greater trust and wider uptake of those solutions. Dr. Gatt has experienced this first hand in his discussions with the NHS. He and his team at EBO.ai, a company providing AI solutions for patient management to the NHS, went through the various legal and regulatory steps over a two-year journey. But beyond passing the legal and regulatory tests, what inspired trust and ultimately acceptance of their product was the ability to demonstrate that ethical considerations flowed through every stage of development, with the consequence that the technological solution meets human-centric goals, including inclusivity, augmentation (and not replacement) of human judgment, privacy and patient empowerment.