For the past couple of years, AI has been promoted as the solution to almost everything. Its use in legal disputes offers genuine opportunities, but problems have emerged, prompting the publication of various guidelines and rules. Concern is now growing in the UK after lawyers and trade mark attorneys were embarrassed by AI tools that produced inaccurate output.

Not all AI is generative AI (genAI), the kind you prompt for something and it generates an output, and it is genAI's output that has caused most concern in legal proceedings. The first reported case of lawyers relying on genAI occurred in the US in May 2023, when two New York lawyers used ChatGPT for legal research and it produced results that included made-up cases. Those results were submitted in federal court filings without being reviewed or validated by the attorneys, and Judge Castel demanded that the legal team explain itself. Despite the widespread attention the case garnered, American attorneys continue to submit ChatGPT output without review or validation. There are similar examples from Canada: in the April 2025 case of Hussein v. Canada, the lawyer apparently relied on Visto.ai, a genAI tool tailored to Canadian immigration cases, yet still cited fake cases in the submissions and cited real cases for the wrong points. Canada requires disclosure of the use of AI, but that did not prevent these mistakes.

The judge commented:

“[39] I do not accept that this is permissible. The use of generative artificial intelligence … must be declared and as a matter of both practice, good sense and professionalism, its output must be verified by a human…”

Use of AI in English courts

The English courts do not ban the use of AI, but judges and lawyers alike have been told that they are responsible for material produced in their name: in England & Wales, AI can be used, but the human user is accountable for its accuracy and for any errors. In November 2023 the Solicitors Regulation Authority issued guidance on AI use, and the Bar Council published its own guidance in January 2024. More recently, in 2025, the Chartered Institute of Arbitrators also issued guidance.

England has been looking to technology, and potentially AI, to help with cases for some time. In March 2024, Lord Justice Birss explained that algorithm-based digital decision making was already working behind the scenes in the justice system, solving a problem at the online money claims service, where a formula is applied when defendants admit a debt but ask for time to pay. Looking to the future, Birss LJ said: “AI used properly has the potential to enhance the work of lawyers and judges enormously.” In October 2024, the Lord Chancellor and Secretary of State for Justice, Shabana Mahmood MP, and the Lady Chief Justice, The Right Honourable the Baroness Carr of Walton-on-the-Hill, also echoed the potential of technology for the future of the courts and justice system.

However, alongside accuracy, there is concern about the ethics of AI use. On ethical AI and international standards, the UK promotes the Ethical AI Initiative and the international standard ISO 42001, which covers AI management systems; this may be adopted as a standard in English procedure at some point. In April 2025 the judiciary updated its guidance to judicial office holders on the use of AI. Yet all this guidance seems to go unheeded: lawyers evidently need a clearer understanding of the rules, and closer policing of their compliance.

Use of AI in American and Canadian courts

As discussed above, the first case in which a lawyer was caught submitting inaccurate ChatGPT-generated content was Mata v. Avianca, Inc., No. 1:2022cv01461 (S.D.N.Y. 2023), involving attorneys Steven Schwartz and Peter LoDuca and their firm Levidow, Levidow & Oberman. The judge sanctioned both attorneys and their firm, levying a $5,000 fine for misleading the court. He found that the lawyers acted in bad faith and made “acts of conscious avoidance and false and misleading statements to the court” in order to obfuscate their conduct, and that Schwartz did not understand ChatGPT’s limitations, did not verify the AI-generated results himself, and instead relied on ChatGPT’s own self-verification.

In any American court, attorneys submitting legal citations are obliged to verify that those citations are accurate. The lawyers signing a legal filing are responsible for this verification, whether the underlying work was done by junior attorneys or by genAI systems.

Federal Rule of Civil Procedure 11(b)(2) requires that when an attorney or unrepresented party submits legal documents to the court, they certify that the legal claims or arguments are warranted by existing law or by a reasonable, nonfrivolous argument for changing the law.

Some courts have created bespoke rules dealing with the use of genAI by litigants. For instance, in 2023, U.S. District Judge Brantley Starr of the Northern District of Texas required attorneys to file a certificate attesting either that no portion of any filed document was AI-generated, or that a human being had validated any AI-generated text.

Law360 maintains a tracker of standing orders on AI issued by U.S. district and magistrate judges; about 2% of judges have such orders. Some judges ban the use of AI outright, while others require disclosure of its use and attestations of accuracy. Other courts have simply reminded counsel that they are responsible for ensuring the accuracy of any information submitted to the court.

In Lacey v. State Farm Gen. Ins. Co., No. cv-24-05205 FMO (MAAx) (C.D. Cal. May 6, 2025), attorneys from K&L Gates and Ellis George were fined $31,100 for submitting briefs with non-existent or incorrect citations. Similarly, in P.R. Soccer League NFP Corp. v. Federación Puertorriqueña de Futbol, No. 3:23-cv-01203-RAM-MDM (D.P.R. Apr. 10, 2025), more than $50,000 in attorney’s fees was awarded to Paul Weiss after opposing counsel filed motions with made-up content.

Most recently, lawyers for MyPillow CEO Mike Lindell were fined after submitting a legal brief filled with AI-generated errors. U.S. District Judge Nina Wang of the District of Colorado found that attorneys from McSweeney Cynkar & Kachouroff “were not reasonable in certifying that the claims, defenses and other legal contentions… were warranted by existing law.” The court fined Kachouroff and his co-counsel $3,000 each.

If you have questions or concerns about the use of AI in legal research, please contact James Tumbridge, Robert Peake and Ryan Abbott.