Arbitrators and AI Chatbots: Are They Compatible?

RPC partner Yi-Shun Teoh and associate Yuki Chiu discuss the compatibility of arbitration with the evolving use of AI chatbots.

Published on 15 August 2023

Following its release in November 2022, ChatGPT reportedly reached 1 million users in five days and an estimated 100 million by January 2023, making it the fastest-growing application in history. The technology is also developing quickly – the recently released GPT-4 scored in the 90th percentile in the US Uniform Bar Exam, whereas the previous version placed in the 10th percentile.

If an artificial intelligence (AI) chatbot can outperform 90% of bar exam candidates, digest voluminous information in a fraction of the time and produce answers that are convincing and grammatically correct, could it be good enough to assist the humans who arbitrate human affairs? For some lawyers, the answer is a tentative “yes”. In early 2023, a Colombian judge cited ChatGPT in a ruling on the medical funding of an autistic boy. The judge included extracts of his conversations with the chatbot in his judgment, but stressed that he had not relied on the technology to make his decision. He opined that ChatGPT could generate efficiencies, effectively performing the work of a tribunal secretary, so long as judges continued to exercise their independent judgement.

This leads arbitration users to consider the extent to which AI chatbots can responsibly be used to assist the decision-making of arbitrators.

Impartiality, Independence and Freedom from Bias

Impartiality, independence and freedom from bias are the fundamental principles required of arbitrators under the leading institutional rules (see Article 11 of the ICC Rules and HKIAC Rules).

Depending on how AI chatbots are used, they may hamper the arbitrator's ability to uphold these principles, since:

  • AI chatbots rely on text and data mining and machine learning, and may develop biases over time as a result of reviewing and processing biased data; and
  • AI chatbots may pull information from sources connected to a stakeholder in the arbitration and the arbitrator may unwittingly use such information.

The counter-argument is that human intelligence likewise operates in a black box and is just as prone to inherent biases arising from education and experience. In this regard, using an AI chatbot as a sounding board may be no worse than chatting with a colleague or having an internal monologue. That said, using an AI chatbot to gather information about parties or to summarise case facts may involve greater risk – arbitrators would have virtually no control over how that information is filtered and presented to them.

Arbitrators are required by the UNCITRAL Model Law (Article 12) and leading institutional rules and guidelines (see IBA Guidelines on Conflicts of Interest in International Arbitration) to disclose circumstances that are likely to give rise to justifiable doubts as to their impartiality or independence. Arbitrators should of course avoid using AI chatbots in a way that would jeopardise these principles, but even if there is no obvious risk it would be prudent for arbitrators to disclose any intended use to parties and seek informed consent.

Expectation of Competence

Parties have a reasonable expectation that the arbitrators they appoint will be competent in terms of experience, technical expertise and skill. There is an inherent risk that delegation to AI chatbots could undermine this competence. While chatbots can perform some simple legal analysis well, they remain prone to inaccuracy. AI chatbots have been known to produce “hallucinations”, inventing responses and the sources that supposedly support them. A lawyer in the US and a litigant in person in the UK each faced criticism recently after citing case law in court that had been fabricated by ChatGPT. There is also a risk that AI chatbots will produce outdated answers: ChatGPT's training data extends only to 2021. Nor are chatbots transparent about their data sources, which makes verifying their results challenging. Such issues could damage the perception of an arbitrator's competence if they were to rely too heavily on AI chatbots in reaching decisions.

Confidentiality

Parties and arbitrators are generally prohibited from disclosing material related to the proceedings unless the parties agree otherwise. Confidentiality is a key factor distinguishing arbitration from litigation, and parties often opt for arbitration precisely to keep their disputes private. However, AI chatbots retain and process user inputs for machine-learning purposes regardless of whether that data is confidential, which can lead to wrongful disclosures. Samsung's commercially sensitive information was unintentionally leaked on three occasions after its software engineers used ChatGPT to fix source code. Arbitrators would need to be particularly careful to input only information that is general in nature, and to avoid disclosing parties' identities or details specific to the dispute.

Cost-efficiency

Perhaps the greatest potential benefit of AI chatbots for arbitrators is increased cost-efficiency. UK start-up DoNotPay has used AI since 2015 to help customers bring small claims, such as disputing parking tickets, that might otherwise be prohibitively expensive to contest. With proper oversight, chatbots could feasibly be used to assist arbitrators with simple tasks – such as indexing documents or sorting them chronologically – generating similar cost-efficiencies.

That said, arbitrators will have to balance any efficiency gains against the risks outlined above. Increased efficiency was the justification cited by judges in India and Pakistan who recently used ChatGPT to assist their judgments in bail hearings for criminal rape and murder trials. While the judges stressed that they did not rely on the AI chatbot, asking it only simple legal questions, the seriousness of the alleged crimes raises the question of whether any efficiency gain is worth the risk of imprecision, bias or breach of confidentiality (or even the perception of these) where personal liberty is at stake.

Conclusion

Arbitrators are encouraged to consider carefully the risks of using current AI chatbots to assist their decision-making before doing so. Even where arbitrators deploy AI safely – reviewing its output carefully and exercising independent judgement – parties may still harbour concerns about the accuracy, independence and confidentiality of the resulting decision. Such concerns could invite challenges that undermine the finality of awards, a key factor drawing parties to arbitration in the first place.

