Employers will be well aware of the risks of using AI in the workplace, but the recent High Court judgment in Ayinde v LB Haringey provides a timely reminder of the perils of using generative AI in the context of legal proceedings.

Alleged Use of AI in Legal Proceedings

The background facts are not relevant for the purpose of this article, except that the Judge considered an application for wasted costs made by the London Borough of Haringey against Mr Ayinde’s barrister (i.e. court advocate) and solicitors. One of the grounds for the wasted costs application was that Mr Ayinde’s barrister had included five fake cases in her pleadings, which the London Borough of Haringey submitted may have been produced using generative AI (although this was denied by Mr Ayinde’s barrister).

Although the Judge was not prepared to make any findings on whether the cases had been generated using AI, he granted a wasted costs order on the basis that the behaviour of Mr Ayinde’s barrister and solicitors had been improper, unreasonable and negligent. In particular, the Judge concluded that if the barrister had used generative AI to prepare her pleadings without checking them, then she would have acted negligently, and that it was unreasonable for her to describe the inclusion of the fake cases as “minor citation errors” in her submissions to the court. In addition to granting a wasted costs order, the Judge referred Mr Ayinde’s barrister and solicitors to their respective regulators, the Bar Standards Board and the Solicitors Regulation Authority.

Further Warning from the High Court

Following the judgment, the High Court of its own motion used its Hamid jurisdiction (i.e. the High Court’s inherent power to regulate its own procedures and to enforce its duties) to refer the matter to the High Court King’s Bench Division, given that the facts of the case raised concerns about the competence and conduct of Mr Ayinde’s barrister and solicitors. During the hearing, the President of the King’s Bench Division considered what action, if any, should be taken in relation to Mr Ayinde’s legal representatives, including whether criminal contempt of court proceedings should be initiated, as placing false material before the court with the intention that the court treats it as genuine may, depending on the person’s state of knowledge, amount to a contempt. Despite finding that the threshold for initiating contempt of court proceedings had been met, the President of the King’s Bench Division decided not to initiate contempt proceedings against Mr Ayinde’s barrister, given that there were a number of mitigating factors which could not be determined in the course of contempt proceedings. However, the High Court made clear that the decision not to initiate contempt proceedings against the barrister should not be treated as a precedent, and that lawyers who fail to comply with their professional obligations in this respect risk severe sanction.

Key Takeaways

While there is no doubt that AI will continue to be used in the conduct of litigation, this case serves as a good reminder that AI tools can suffer from ‘hallucinations’ (i.e. where an AI tool produces incorrect or misleading information) and that all information obtained through AI should be carefully checked to ensure that it is genuine, especially if it is used in the context of legal proceedings. The case also illustrates a growing issue being addressed by courts and regulators worldwide, one that will no doubt remain at the forefront as the use of AI in the legal profession continues to grow.