Ganado Advocates’ IP/TMT partner, Paul Micallef Grimaud, met with Prof. Alexiei Dingli to discuss the emerging technology of Artificial Intelligence (AI).

Listen to the full interview on Spotify:
https://open.spotify.com/episode/4iCuSU4ScqzWoAwuzbcGXV


What is AI?

Although the definition found in the European Commission’s draft AI Regulation gives the impression of being wide, Prof. Dingli argues that it is actually restrictive, given that the two terms “Artificial” and “Intelligence”, properly understood, are not limited to software. Ultimately, AI can be defined as the field of study that seeks to get machines to simulate intelligent actions performed by humans.

The Different Levels of Intelligence

There are different degrees of “intelligence”, the most basic being automation: the application of a fixed set of commands and rules. Learning, on the other hand, is far more complex and exciting; in the field of AI it involves machines that take decisions autonomously and that can communicate not just with other programs but also with humans.
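The contrast can be made concrete with a short sketch. The code below is ours, not Prof. Dingli’s, and the temperature data is invented purely for illustration: the first function applies a hand-coded rule, while the second derives its own rule from examples.

```python
# Automation: a fixed, human-written rule.
def rule_based_cooling(temperature_c: float) -> bool:
    return temperature_c > 25.0  # hand-coded threshold

# Learning: derive the threshold from past examples instead of hard-coding it.
cool_temps = [21.0, 22.0, 23.5]   # occasions when the cooling stayed off
warm_temps = [26.0, 28.0, 30.0]   # occasions when the cooling was switched on

# The "learned" rule: split the two groups at the midpoint of their averages.
learned_threshold = (sum(cool_temps) / len(cool_temps)
                     + sum(warm_temps) / len(warm_temps)) / 2

def learned_cooling(temperature_c: float) -> bool:
    return temperature_c > learned_threshold

print(round(learned_threshold, 2))  # 25.08, derived from data, not programmed
print(learned_cooling(26.5))        # True
```

Feed the sketch different examples and the learned rule changes with them; the hand-coded rule does not. That, in miniature, is the difference between automation and learning.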

Other subfields of AI include natural language processing (where a computer processes and understands human language – take Siri, Alexa or Google Assistant as examples) and computer vision (where the AI is capable of understanding the content of a picture or video).

Without necessarily being aware of it, we are fully immersed in AI, from systems that set the temperature of air conditioners and applications built into home appliances and cars, to email filtering and social media.

But there is no disputing the fact that Machine Learning is the “superstar of AI” due to its widespread use and adaptability across all the other subfields. In this discipline, a machine is fed data from which it is trained to learn, in turn developing its own intelligence.
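In practice, this training loop is only a few lines long. The following minimal sketch uses the scikit-learn library and its built-in iris dataset of flower measurements – our choice of tooling for illustration, not one mentioned in the interview. The model’s behaviour comes entirely from the data it is fitted on.

```python
# Machine learning in miniature: the model's "intelligence" is derived
# entirely from the data it is trained on, not from hand-written rules.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # flower measurements with known species
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # learn from data
print(f"accuracy on unseen examples: {model.score(X_test, y_test):.2f}")
```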

Addressing Distrust in AI

It is a fact that people hesitate to entrust risky activities entirely to machines. That said, we very often do so without knowing. Aeroplanes are a prime example: 90% of flights are flown by an AI agent and most landings are fully automated. Yet it is the figure of the pilot sitting in the cockpit that elicits trust, not the machine.

Even though a machine may, within its programmed limitations, outperform a human being, this distrust is accentuated further when it is difficult, if not impossible, to understand how the machine reached its conclusions. This is known as the “black box” phenomenon.

Technology is not flawless, and a number of instances have demonstrated the need to understand the machine’s “thought process” in arriving from a defined input at a particular output. Results that appear accurate may in fact be based on an identifier that is not pertinent to the answer itself, or may be accurate only on the AI’s own terms and not for the problem being addressed in practice.

Take, as an example, the machine that correctly distinguished between pictures of huskies and wolves but, to the programmers’ dismay, based its results solely on the surroundings (wild versus domestic) shown in each picture. Or the medical program trained to decide whether patients with pneumonia should be sent home, which, based on the data fed to it, decided that all patients with both pneumonia and asthma should be discharged from the wards, failing to understand that such patients should not be sent home but rather directed to other wards for more intensive care.
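The husky/wolf failure can be reproduced numerically. In the hedged sketch below (all data synthetic and invented for illustration), a spurious “background” feature tracks the label almost perfectly in the training set, just as wolves happened to be photographed in the wild; the model scores impressively on its own data, then collapses once the coincidence disappears.

```python
# A toy reconstruction of the "spurious correlation" failure mode.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: a genuinely relevant trait, but only weakly predictive.
# Feature 1: the "background" (wild vs. domestic), which matches the
# label 95% of the time in the training set.
y_train = rng.integers(0, 2, n)
relevant = y_train + rng.normal(0, 2.0, n)
background = np.where(rng.random(n) < 0.95, y_train, 1 - y_train)
X_train = np.column_stack([relevant, background])

model = LogisticRegression().fit(X_train, y_train)

# Deployment: the background no longer correlates with the label.
y_test = rng.integers(0, 2, n)
relevant_t = y_test + rng.normal(0, 2.0, n)
background_t = rng.integers(0, 2, n)  # now uninformative
X_test = np.column_stack([relevant_t, background_t])

print("training accuracy:", model.score(X_train, y_train))  # looks excellent
print("real-world accuracy:", model.score(X_test, y_test))  # little better than chance
print("learned weights:", model.coef_)  # the background feature dominates
```

Inspecting what the model actually learned, as in the last line, is one simple way of catching such shortcuts before deployment.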

The Issue of Bias

These examples show how pertinent the adage “Garbage In, Garbage Out” is to AI and machine learning. This raises concerns of sexual and racial bias in the algorithms used by the machine, based on the biased data fed to it, and consequently of unfair and unjust conclusions being reached. It is extremely hard, if not impossible, to eliminate bias because, in Prof. Dingli’s words, “data ultimately reflects our world and our world is biased”. This is demonstrated by a Microsoft chatbot experiment in which the chatbot was left to train on data that it freely captured over the internet and, within 15 hours, was found to have become, amongst other things, a “racist and Nazi sympathiser”.
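The mechanism is easy to demonstrate. In this sketch (again with entirely synthetic data of our own invention), past human decisions systematically favoured one group; a model trained on those decisions learns to reproduce the unfairness, even though no one programmed it to discriminate.

```python
# "Garbage in, garbage out": a model trained on biased historical
# decisions reproduces the bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

group = rng.integers(0, 2, n)   # a protected attribute (0 or 1)
skill = rng.normal(0, 1, n)     # the genuinely relevant factor

# Biased historical decisions: skill matters, but group 1 was also
# systematically favoured by past human decision-makers.
past_decision = ((skill + 1.5 * group + rng.normal(0, 0.5, n)) > 1.0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, past_decision)

# The model faithfully learns the unfair pattern in its training data.
pred = model.predict(X)
for g in (0, 1):
    print(f"positive-outcome rate for group {g}: {pred[group == g].mean():.0%}")
```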

The responsibility for taking the necessary precautions therefore lies with the human programmers, who are ethically bound to ensure that the data being fed to the technology is balanced and of established quality. This ties in with what the European Commission is advocating – the need for “Trustworthy AI”. With this aim in mind, the Commission’s draft AI Regulation classifies AI according to the risk its application poses to society, creating a form of traffic-light system: some applications are outright detrimental or too risky, and hence prohibited; others are risky and need to be appropriately controlled; while the rest can be applied without much concern or need for regulatory intervention.

The second band, that of high-risk applications, covers those that may have the highest positive impact on society if applied properly. They include applications that deal with one’s health, wellbeing, finances, jobs and education. Here the Commission is imposing an obligation for such applications to obtain certification from approved and regulated certifiers, on the basis of criteria aimed at eliminating, or balancing out, the risks attached to the particular use of the AI.
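The traffic-light structure can be summarised in code. The tier names and example use cases below are our own paraphrase of the Commission’s proposal, for illustration only, and not the legal text itself.

```python
# A simplified, illustrative sketch of the draft AI Regulation's
# "traffic light" approach; not the actual legal classification.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "red: detrimental or too risky - banned outright"
    HIGH_RISK = "amber: permitted, but subject to certification and controls"
    MINIMAL_RISK = "green: no significant regulatory intervention needed"

# Hypothetical example mappings, paraphrasing commonly cited use cases.
examples = {
    "social scoring of citizens": RiskTier.PROHIBITED,
    "AI-assisted recruitment or credit scoring": RiskTier.HIGH_RISK,
    "email spam filtering": RiskTier.MINIMAL_RISK,
}

for use_case, tier in examples.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```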

AI and Privacy

Finally, one cannot forget that the use of data in AI is also subject to the protections granted by the GDPR, the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union. Ultimately, privacy is a fundamental human right and, save for the exceptions contemplated in the law, no personal data should be used unless the data subject has freely and expressly consented to his/her data being used, after being fully informed of how this data will be used, by whom and to what end. Moreover, and pertinently, no decision that could impact, amongst other things, one’s health, well-being, livelihood or career progression should be taken based solely on the automated processing of one’s personal data, if not with the data subject’s free, informed and express consent.

The Future

Looking to the future, Prof. Dingli believes that ubiquitous computing will become the norm and that the traditional computer and mobile phone as we know them today will be replaced by augmented reality and applications built into spectacles or even contact lenses. One thing is certain: there is no stopping how technology, and in particular AI, will continue to shape the world we live in and our ways of life.