AI and employment law

 

Technological developments, and artificial intelligence (AI) in particular, can be formidable tools for the strategic development of organisations and the well-being of individuals.

No one can deny this when, in developed societies, we have at our disposal, on a daily basis, increasingly reliable tools for making predictions, detecting and preventing all kinds of risks, and obtaining personalised products and services.

Over the last decade, technological progress has gone hand in hand with profound changes in forms of employment and work organisation.

The clear dividing line that once separated the employed from the self-employed is blurring in the shadow of the emergence of new working 'families', such as the 'click workers' and micro-entrepreneurs of the gig economy, the 'toilers' of the digital age.

These 'new' worker statuses often put the courts to the test when they are called upon to engage in the traditional (and increasingly perilous) exercise of characterising the legal status of the employment relationship.

Many employers - whether more or less aware of the fact - have long used algorithmic processing in the organisation of their business: to optimise their employees' productivity, to predict and prevent the loss of talent, or even to extend surveillance beyond the company walls.

This inevitably has an impact on individual and collective working relationships, which raises profound legal issues. At the very least, every employer should be aware of the challenges posed by the increasing digitalisation of labour relations.

All practitioners have observed the hardening of employment litigation - both individual and collective - in recent years, particularly since the now famous General Data Protection Regulation (GDPR) came into force in 2018. Employees, increasingly aware of their fundamental rights and freedoms, no longer hesitate to invoke these legal provisions in an attempt to "resist" their subordination to the employer.

While it is true that there is no AI law as such (yet), employment law provides a regulatory framework that cannot be ignored without exposing oneself to a plethora of regulatory, legal, financial and reputational risks.

There is no doubt that AI litigation will join the fray, and that the increasing interference of algorithms in labour relations will not be without its problems.

In order to grasp the issues that may arise, it is imperative to understand what is meant by the term 'artificial intelligence', more specifically in the context of employment relations.

This article does not pretend to define what is (or should be) called AI from a technical point of view, nor to classify the wide variety of technologies resulting from the modelling of algorithms.

Rather, this modest reflection focuses on how positive law accommodates the increasing 'algorithmisation' of employment relationships, and on the challenges this poses for employment law practitioners.

 

I.             Understanding the concept of artificial intelligence

Traditionally, any lawyer confronted with a given issue instinctively anchors his analysis in the definitions, notions and concepts developed by legislation, case law, doctrine and scientific literature.

The main difficulty with AI is that the very concept of "artificial intelligence" is controversial.

In our view, there is no single definition of AI and, given the state of positive law, qualifying an AI can prove perilous for a labour lawyer, who is generally a layman in the field.

The European Parliament defines AI as "the ability of a machine to reproduce human-related behaviours, such as reasoning, planning and creativity."

The European Commission, in its Proposal for an EU Regulation, defines "artificial intelligence system" (AI system) in more detail as "software that is developed by means of one or more [...] techniques and approaches [...] and that can, for a given set of human-defined objectives, generate results such as content, predictions, recommendations or decisions influencing the environments with which it interacts".

Annex I of the Proposed Regulation lists the techniques and approaches concerned:

   a.       Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods, including deep learning.

   b.       Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deduction engines, (symbolic) reasoning and expert systems.

    c.       Statistical approaches, Bayesian estimation, search and optimisation methods.

The French Data Protection Authority (CNIL) expressly refers to these two definitions, adding the following clarifications: "Artificial intelligence is a logical and automated process generally based on an algorithm and capable of carrying out well-defined tasks", and "Artificial intelligence is not a technology in the strict sense of the term, but rather a scientific field in which tools can be classified if they meet certain criteria".

Even more interesting is the deconstruction work being carried out by Luc Julia, co-creator of the Siri software, in his book "L'intelligence artificielle n'existe pas", published in 2019. In it, Luc Julia denounces the myth of computerised intelligence capable of competing with, if not surpassing, human intelligence.

He highlights the absolute dependence of AI on data, asserting that "If mistakes happen, it's because of errors in the algorithms or in the data".  

Without dwelling on the complexity of the expert debate over the existence or qualification of AI, we can note that AI is broadly defined by its ability to function, adapt and improve automatically, without close human supervision.

At the same time, AI cannot exist without data(bases) to exploit, which is a decisive element in the analysis of the correlation between AI and labour law.

Understanding what is meant by AI in the context of the employment relationship is necessary because, although AI is not a technology as such, its nature and the potential it offers mean that its use in companies will engage several very specific bodies of rules.

 

 

II.             The "algorithmisation" of labour relations

The growing use of AI technologies in the enterprise can be explained quite logically by the democratisation and availability of this type of product on the market.

As a result, it can be used throughout the entire employment relationship.

A.       From recruitment to the formation of the employment contract

1.- Pre-recruitment phases

These days, (probably) no large organisation relies on manual collection, analysis or sorting of applications. All sorts of sophisticated tools capable of identifying the best candidates from hundreds of thousands of profiles are now available to recruiters.

Even more fascinating are the devices capable of evaluating a candidate's elocution, fluency, facial expressions and coherence of speech from a simple video.

The promise of automated, objective and discrimination-free processing is one of the main (marketing) arguments generally put forward by companies marketing this type of product.

Unfortunately, AI does not always eliminate the risk of discriminatory treatment and may, on the contrary, entrench it.

One of the famous "GAFAMs" had bitter experience of this when it realised that the recruitment AI it had developed discriminated against women.

To put it simply, AI works on the basis of computer models that it creates, generates and even improves, without human intervention, but always on the basis of pre-existing data.

In this case, the training data used by the AI was mainly extracted from the CV pool of the company's employees. This pool was marked by the predominance of male employees in software development jobs over the previous ten years.

Using the data available to it, the machine generated an algorithmic model that was tainted by a gender bias, resulting in a higher proportion of applications from women being automatically rejected.
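
The mechanism described above can be illustrated with a deliberately simplified sketch (all data and names here are hypothetical, invented for illustration): a "model" that merely reproduces the hiring rates found in its training records will automatically penalise any feature that correlated with rejection in the past, even if that feature is only a proxy for gender.

```python
# Hypothetical, deliberately simplified illustration of training-data bias:
# a naive model that reproduces historical hiring rates inherits whatever
# skew those records contain.

# Historical records as (proxy_feature, hired) pairs. The proxy feature
# (e.g. a CV keyword statistically associated with female candidates) was
# rare among past hires, who were predominantly male.
history = [(0, 1)] * 80 + [(0, 0)] * 20 + [(1, 1)] * 5 + [(1, 0)] * 15

def learned_score(records, feature_value):
    """Empirical hiring rate the naive 'model' learns for a feature value."""
    outcomes = [hired for feature, hired in records if feature == feature_value]
    return sum(outcomes) / len(outcomes)

# Otherwise identical candidates are scored very differently, purely
# because of the skew in the training data.
print(learned_score(history, 0))  # 0.8  -> candidates without the proxy
print(learned_score(history, 1))  # 0.25 -> candidates with the proxy
```

No human ever instructed this "model" to discriminate; the disparity emerges solely from the composition of the historical data, which is precisely why the quality of training data matters in the liability analysis that follows.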

In this context, the obvious question arises of the employer-recruiter's liability towards a candidate who may have suffered discrimination as a result of the biased algorithmic processing of their application.

Should a judge called upon to analyse discrimination in recruitment systematically examine the quality of the training data used as a basis for the model generated by the AI?

It is true that legislative initiatives have been taken within the EU to create a liability regime for AI.

Pending clarification from the European legislator, we believe it will be difficult for the potential employer to hide behind a lack of technical expertise to avoid liability.

Although the tool will most likely have been developed and marketed by a third party, the employer-recruiter could be held responsible (particularly vis-à-vis the individuals concerned) for the selection criteria and training data used by the AI to build its decision-making or predictive model.

Unsuccessful candidates can take advantage of the provisions of the Luxembourg Labour Code and Criminal Code relating to prohibited discrimination.

In civil terms, it is 'sufficient' for a person who believes they have been treated unequally in terms of access to employment to establish facts from which discrimination may be presumed; the burden of proving the contrary then shifts to the alleged perpetrator.

In other words, the recruiter will have to prove that the decision to reject the application is not tainted by any discriminatory bias.

Articles 13, 15 and 22 of the GDPR could be used as powerful weapons in AI discrimination litigation.

The potential employer must not lose sight of the fact that, as the data controller (as will very often be the case), it will have to inform the candidate that he or she is the subject of automated decision-making and/or profiling.

In this context, the data subject will be able to assert his or her right to human intervention and to challenge the decision generated by the tool. It is worth remembering that a decision significantly affecting an individual and based exclusively on automated processing is theoretically only possible in very limited circumstances!

Furthermore, if the right of access is exercised, the data controller - with the help of the technology provider if necessary - could be obliged to communicate to the applicant the algorithmic processing data relating to his or her application.

In this way, the use of recruitment AI could give the applicant a much better opportunity to obtain information relevant to supporting their discrimination claim.

While it will be more difficult to expose an individual's (more or less conscious) biases in the context of 'traditional' recruitment, the data stored by the AI will most certainly be coveted by plaintiffs preparing for trial.

2.- The surreptitious intrusion of AI into contractual relationships

Future and existing employees are not left out.

Thanks to "general public" generative AI offers, in just a few clicks, CVs and cover letters perfectly calibrated to the job title are available to candidates.

But what about the 'authenticity' of their 'motivation', and the credit to be given to the skills highlighted (potentially with emphasis) by the writing techniques developed by AI?

No doubt an experienced recruiter will be able to spot inconsistencies quickly, but the fact remains that the use of AI can increase the chances of bypassing certain screening phases in the recruitment process.

As employment contracts are intuitu personae, i.e. concluded on the basis of the intrinsic qualities of the other party, the stakes may be higher than they appear.

The misfortune of a New York lawyer who allowed himself to be seduced by ChatGPT, a generative AI conversational tool, gives us food for thought as to how employers should protect themselves against such a risk in the context of the employment relationship.

This lawyer, who had a 30-year career and an honourable reputation, naively (?) referred to case law and legal references generated by ChatGPT in a lawsuit between his client and an airline.

It later transpired that the cases with which he had illustrated his pleadings had never existed. In his defence, the lawyer explained that he had believed ChatGPT to be a 'simple' search engine and that he was unaware of the tool's ability to generate ('create') content.

Obviously, the consequences were extremely damaging for him, for his client... and for his partner, their names and negligence having been revealed to the world.

If we were to transpose this case into the context of the employment relationship, there are several points to consider.

Firstly, the employer alone bears the normal risks of the business. Indeed (apart from a few exceptions), he is in principle liable to third parties for faults and negligence committed by his employees in the performance of their duties.

The company director could quickly find himself in a multitude of inextricable situations.

Some employers probably tolerate the tacit use of AI tools by employees as long as such a practice promotes productivity. The situation will be different if, as in the case of this lawyer, the employee infringes the rights of third parties.

Indeed, the risk of the employee committing an infringement of intellectual property by appropriating the "authorship" of a text generated by a conversational robot cannot be ruled out.

The accidental disclosure of highly sensitive strategic information for a company and its customers is a real risk, as one of the world's electronics giants demonstrated in the first half of this year.

No organisation, and no user profile, is immune to the inappropriate use of AI.

 

What are employers to do if they discover that the excellent quality of their employees' work is in fact due to compulsive use of AI systems?

This is far from a theoretical reflection; some employees confess to deliberately using ChatGPT without their employers' knowledge.

Will mistaken belief in an employee's essential and determining qualities be an admissible defence if it turns out that, without the help of AI, the employee is incapable of maintaining the level of performance to which his employer has become accustomed?

The question is of some interest insofar as, if the judge finds a defect in consent, the contract will be annulled, automatically releasing the employer from its contractual obligations.

If, on the other hand, the employer has to dismiss the employee, it will be up to him to prove the existence of misconduct (or professional inadequacy) on the part of the employee, and also to justify the objective reasons why the employee's actions have made it impossible to continue the employment relationship.

Notwithstanding the interest of this issue for litigants, it is in the best interests of the parties to the employment contract to define, in advance, clear rules for the use - and, where applicable, the prohibition - of AI systems in the performance of work.

IT policies and charters, often (wrongly!) considered obsolete or useless, have a major role to play in preventing and managing these risks.

The setbacks described above should convince even the most reluctant to invest in prevention.

 

B.          AI to strengthen management power

Monitoring employee performance is an area in which the use of technology has become essential. The massive use of teleworking during the Coronavirus pandemic went hand in hand with an explosion in the purchase of remote monitoring solutions.

1.- Improved control capability

Digital interference in the employment relationship strengthens the employer's power of direction, giving him a capacity for virtual omnipresence (and omniscience?).

Controlling an employee's performance and productivity is a natural prerogative of the employer, who is the only party to the contract to bear the risk of the activity.

Nevertheless, inappropriate use of these monitoring tools will expose the employer to legal liability, as the CNIL (the French Data Protection Authority) was prompted to point out following numerous reports of abuse during the pandemic.

Remote activation of webcams, remote access to the employee's screen, installation of "keyloggers" (keystroke recorders), voice recognition: the list of available techniques goes on.

But beware of the incompatibility of these practices with the principles forged by European regulations and case law.

The courts have long since endorsed the dogma of protecting the fundamental rights and freedoms of the individual in the professional sphere (including respect for privacy and the protection of personal data).

2.- Supervising the use of AI in companies

In terms of labour law, particular attention must be paid to how the digitalisation of individual working relationships fits in with employers' obligations to protect workers' health and safety.

The vocabulary[1] of psycho-social risks has been enriched by notions such as "techno-stress[2]" or "technological stress", "over-connection" or "digital divide".

In line with this trend, Luxembourg's labour legislation has been considerably strengthened in recent years, with the criminalisation of moral harassment in the workplace (including the digital workplace) and the introduction of a right to disconnect.

The misappropriation or abusive (and uncontrolled) use of technologies by employees - such as a manager demanding continuous activation of the webcam when employees are teleworking - is likely to amount to disproportionate surveillance.

Controlling this risk will depend on the employer's ability to supervise the use of technology by its employees and to provide appropriate training and awareness-raising initiatives. It is worth pointing out that the Luxembourg Labour Code requires all employees to take care not only of their own health, but also that of others[3], and that failure to do so may result in disciplinary action.

As we can see, the development of AI has a definite influence on individual employment relationships. While technology can optimise processes and strengthen the powers of the company director, we must not lose sight of the protective role of labour law with regard to subordinate employees.

While the employer may be the head of the company, he or she will nevertheless have to contend with the "counter-power" represented by staff representation bodies.

C.          AI at the heart of social dialogue and collective bargaining

As a general rule, the employee delegation has a right of oversight over employer decisions concerning the improvement of working conditions, risk-prevention activities and major changes in work organisation.

In large organisations (i.e. with at least 150 employees), the staff delegation even has the right to participate in decision-making, for example in the implementation of certain monitoring systems[4] or the definition of general employee appraisal criteria[5].

These prerogatives should not be underestimated.

In fact, while the head of the company has unquestionable managerial authority in certain matters, he is not the sole captain of his ship.

Thus, if the decision to use AI falls within the scope of article L. 414-9 of the Labour Code, without the agreement of the delegation, the employer will (theoretically) have to abandon his project.

If he overrides the delegation's refusal, he could in particular commit an offence of obstruction and compromise the validity of evidence collected via the tool if he intends to use it in a disciplinary context. Here, small and medium-sized companies have the advantage of not being hindered by this sharing of decision-making power.

In the field, employers obliged to debate the advisability of introducing certain technologies into the company deplore the lack of technical knowledge on the part of employee representatives. The latter, for their part, often object on principle, unable to take an informed position on the merits.


Employee representation law does, however, offer a number of mechanisms for balancing power between social partners.

In fact, the staff delegation can seek the help of an advisor[6] or, when it considers the matter to be of decisive importance for the company or its employees, it can call in an external expert.

While expertise has traditionally focused on legal, accounting and financial fields, it is not unreasonable to anticipate the gradual involvement of technology experts, given the increasing complexity of the technologies and AI systems available on the market.

In any case, it would be naïve to point to the lay status of some delegates as a reason to underestimate the counterweight that employee representation can represent in the technological transformation of workspaces.

European and international trade union confederations have been at work for several years now, campaigning for the primacy of collective bargaining in the regulation of AI within companies.

Emphasis is placed on the need to train employee representatives and reduce the knowledge gap that could work against them in their dealings with company management.

The Global Union Federation of Public Service Workers, for example, has set up a digital platform, the "Digital Bargaining Hub", a veritable digital arsenal for collective bargaining.

The fears - whether imagined or well-founded - that AI arouses in terms of the transformation of work and the destruction of jobs are likely to lead to a "professionalisation" of the employer's interlocutors in social dialogue and collective bargaining.

However, these predictions need to be qualified.

A recent study published on the STATEC[7] website establishes a correlation between the digitalisation of work and falling union membership in Luxembourg.

According to the researchers, the digitalisation of tasks and the rejuvenation of the workforce are among the causes of the decline in union membership.

Armed with this information, it will be interesting to observe whether this trend will have any effect on the attitude that national unions adopt at the bargaining table.

 

In short, there is no reason to see AI and labour law as two antagonistic concepts. On the contrary, these reflections highlight the fact that, as it stands, Luxembourg labour law coexists without great difficulty with the technological changes affecting relationships at work.

The Luxembourg legislator's emphasis on social dialogue and collective bargaining (cf. the legal regime governing teleworking and legislation on the right to disconnect) is undoubtedly one of the reasons for the resilience of the legislative framework in the face of changes in the business world.

This does not mean, however, that we should ignore the many legal issues - some of them unprecedented - with which the courts will have to grapple.


[1] European Parliament resolution of July 5, 2022 on mental health in the digital world of work, European Parliament Committee on Employment and Social Affairs (published June 21, 2022)

[2] Refers to stress related to the use of technology in the workplace

[3] Article L. 313-1 of the Luxembourg Labour Code

[4] Article L. 261-1 (3) of the Labor Code

[5] Article L. 414-9 point 5 of the Labor Code

[6] Article L. 412-2 (1) and (2) of the Labor Code (only in companies with at least 51 employees)

[7] Les syndicats en déclin dans un monde du travail en mutation, STATEC, Regards n°01, 03/22