GLOBAL-WIDE: An Introduction to Litigation Services
From Misuse to Mastery: Navigating AI’s Role in Legal Practice
The integration of AI into legal practice presents significant opportunities if one can successfully navigate the risks. AI-powered tools such as large language models (LLMs) promise to revolutionise legal research, document review and case strategy. However, as recent case law demonstrates, improper use of these technologies can lead to severe consequences, including sanctions, reputational damage and ethical violations. Lawyers must therefore transition from casual users of AI to responsible stewards, ensuring that AI enhances rather than undermines the integrity of their legal practice.
This article explores the evolving role of AI in litigation, highlighting the risks associated with its misuse and the potential for mastery through ethical and professional diligence. By analysing recent case law and judicial perspectives, it provides a road map for legal practitioners to harness AI responsibly and effectively.
Consequences of misusing AI
Recent cases illustrate the dangers of uncritical reliance on AI-generated legal research. Courts have stressed that while AI can be a useful tool, attorneys remain responsible for the accuracy and reliability of their work product.
In Mata v. Avianca, Inc., attorneys submitted a legal brief citing six fictitious cases generated by ChatGPT. The fabricated cases included non-existent airlines and erroneous legal analyses. When confronted, the lawyers admitted they had no prior experience with AI tools and were unaware that such tools could generate false information, known in the industry as “hallucinations.”
The court found that the attorneys acted in bad faith and made false and misleading statements to the court. The judge imposed USD5,000 in sanctions and ordered the lawyers to notify the real judges who had been falsely identified as authors of the fabricated opinions, emphasising that while using AI is not inherently improper, ethics rules “impose a gatekeeping role on attorneys to ensure the accuracy of their filings.” This ruling underscores that the ethical obligation of competence compels attorneys to verify AI-generated information before submitting it to the court.
In Gauthier v. Goodyear, a wrongful termination case, the plaintiff’s attorney submitted court filings containing fictitious citations. The court imposed a USD2,000 fine and required the attorney to attend a course on generative AI in the legal field. The judge criticised the attorney for failing to verify the AI-generated research and for failing to correct the errors after opposing counsel highlighted them. The attorney was also ordered to provide a copy of the court’s decision to the client. This case reinforces an attorney’s duty to provide competent representation by verifying the authenticity of AI-generated content and acting promptly when errors are discovered.
Similarly, in Park v. Kim the Second Circuit addressed the citation of a non-existent case in an appellate brief. The court found that citing a fictitious case constituted conduct that “falls well below the basic obligations of counsel.” As a result, the attorney was referred to the court’s Grievance Panel for further investigation and was ordered to furnish the decision to the client. Park is a cautionary tale of the reputational and professional risks lawyers face when they fail to exercise due diligence with AI-assisted research.
AI as a tool for legal mastery
Despite these missteps, when employed within a structured and ethically sound framework, AI offers powerful opportunities such as enhanced efficiency, sharpened legal reasoning and new interpretative methodologies. In Snell v. United Specialty Insurance Company, Eleventh Circuit Judge Newsom proposed that LLMs could help determine the “ordinary meaning” of legal terms, suggesting that LLMs may supplement traditional interpretive tools and add several benefits to the “textualist toolkit … to inform ordinary-meaning analyses of legal instruments.” Those benefits include LLMs’ accessibility to the public, lawyers and judges; their training on ordinary language; and their ability to account for context and produce relatively transparent output. Newsom’s concurrence in Snell signals a judicial openness to exploring how AI could enhance legal analysis, so long as its use is appropriately contextualised and transparent.
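To make this concrete, the short sketch below shows one way a practitioner might pose an ordinary-meaning question to an LLM, in the spirit of the Snell concurrence (where the disputed term was “landscaping”). It is a minimal illustration, assuming the OpenAI Python SDK and an illustrative model name; the prompt wording is hypothetical, and any output would still require the verification and contextualisation discussed throughout this article.

```python
# Minimal sketch of an ordinary-meaning query, assuming the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

term = "landscaping"  # the disputed term in Snell concerned landscaping work
question = (
    f"In everyday American English, what does the word '{term}' ordinarily "
    "mean? Note common usages and the limits of the term."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice, not a recommendation
    messages=[{"role": "user", "content": question}],
)

# The answer is a starting point for analysis, not an authority to cite.
print(response.choices[0].message.content)
```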
Ethical and professional obligations when using AI
Attorneys using AI tools must do so with careful attention to their ethical and professional responsibilities. Four key areas provide a road map for responsible AI implementation.
Verification and accuracy
Just as supervisory attorneys are accountable for their junior associates’ work, they are also responsible for any content produced by AI and therefore must rigorously verify any AI-generated output before including it in court filings. Best practices include using AI as a research assistant, not a research substitute, and cross-checking citations in trusted databases such as Westlaw to confirm that cases exist and are accurately summarised.
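For practitioners who automate part of this cross-check, the underlying pattern is simple: extract citation-like strings from a draft and flag any that have not been confirmed against a trusted source. The sketch below is illustrative only; the regular expression is deliberately simplified, and the verified_citations set and flag_unverified_citations helper are hypothetical stand-ins for a lookup against a real citator such as Westlaw’s KeyCite.

```python
import re

# Hypothetical set of citations already confirmed in a trusted database.
# In practice this would be a live citator lookup, not a hard-coded set.
verified_citations = {
    "598 F. Supp. 3d 123",
    "45 F.4th 678",
}

# Rough pattern for federal reporter citations ("123 F.3d 456",
# "12 F. Supp. 2d 34"). Real citation formats are far more varied;
# this is a simplified illustration.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+F\.(?:\s?Supp\.)?(?:\s?\dd|\s?\d?th)?\s+\d{1,4}\b"
)

def flag_unverified_citations(draft_text: str) -> list[str]:
    """Return citation-like strings in the draft that were not confirmed
    against the trusted source, so a human can check them before filing."""
    found = CITATION_PATTERN.findall(draft_text)
    return [c for c in found if c not in verified_citations]

draft = "As held in Smith v. Jones, 598 F. Supp. 3d 123, and Doe v. Roe, 999 F.4th 111, ..."
for citation in flag_unverified_citations(draft):
    print(f"VERIFY BEFORE FILING: {citation}")
```

The point of such a tool is to narrow, not replace, human review: every flagged citation, and ultimately every cited case, still requires an attorney to confirm that it exists and says what the brief claims.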
Competence and education
The Model Rules of Professional Conduct require lawyers to be competent in the tools they use. Competence doesn’t mean technical expertise, but it does require a reasonable understanding of an AI tool’s capabilities, limitations and risks. As emphasised by the ABA in Formal Opinion 512, attorneys should understand how AI tools generate output, stay informed about best practices and new developments, and seek training before relying on AI in their practice.
Transparency and disclosure
Transparency is critical when AI plays a significant role in case strategy. Attorneys have an ethical obligation to inform clients about the means used to achieve legal objectives and must disclose the use of AI when it may materially affect a matter’s outcome. Responsible AI use also means disclosing AI’s role in legal filings where it influences substance, attributing AI-generated material appropriately and avoiding any implication that AI-generated arguments are the attorney’s own legal analysis.
Privilege and confidentiality
Using AI tools could have grave implications for privilege and confidentiality. Attorneys must therefore safeguard client information by avoiding entry of confidential material into unsecured or public AI tools, instead using enterprise-grade tools with robust privacy protections and reviewing vendor policies to ensure compliance with ethical duties.
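As a minimal sketch of that principle, the example below scrubs obvious identifiers from a prompt before it leaves the firm. The redact_client_details helper and its patterns are hypothetical and deliberately simple; production-grade redaction would also need to cover names, account numbers, addresses and matter-specific identifiers, under policies reviewed with the vendor.

```python
import re

# Illustrative patterns only; real client data carries far more
# identifiers than emails and phone numbers.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
US_PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def redact_client_details(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text
    is sent to any external AI tool."""
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    text = US_PHONE.sub("[PHONE REDACTED]", text)
    return text

prompt = "Summarise the deposition of the client (jane.doe@example.com, 555-867-5309)."
print(redact_client_details(prompt))
# -> Summarise the deposition of the client ([EMAIL REDACTED], [PHONE REDACTED]).
```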
Charting the path forward
AI’s influence on litigation will only grow, and the legal community must shape that growth in a way that preserves the profession’s ethical foundation. To shift from the risks of misuse to the rewards of mastery, law firms, courts and regulators must collaborate in developing: (1) best practice guidelines for AI use; (2) judicial education programs on AI capabilities and limits; and (3) professional standards for responsible AI adoption.
Conclusion
AI is neither inherently dangerous nor inherently transformative—it is a tool. Used without care, it can jeopardise cases, clients and careers. Used thoughtfully, it can enhance efficiency and improve legal reasoning. The future of AI in litigation depends on whether lawyers can rise to meet their professional obligations while embracing innovation. By applying the principles of verification, competence, transparency and confidentiality, attorneys can navigate the evolving landscape of AI with both caution and confidence, ultimately harnessing AI’s power to enhance—rather than erode—the integrity of their legal practice.