Switzerland: An Arbitration Counsel Overview
The Evolving Regulation of AI in Arbitration: A Swiss Perspective
Artificial intelligence (AI) is no longer a future prospect in arbitration. Counsel, parties and arbitrators increasingly use AI-enabled tools when analysing evidence (eg, document review, data extraction, translation, management of evidentiary records), for research and drafting, or for the automated generation of hearing transcripts. Used appropriately, these tools can help tribunals and counsel work more efficiently.
Given the advantages that AI offers, and the speed with which large language models have become available, there is little doubt that attempts to exclude AI from arbitration altogether would be futile. At the same time, the growing reliance on AI in arbitration gives rise to real risks. Generative AI may suggest hallucinated authorities, distort summaries by omitting relevant facts or documents, or produce plausible but inaccurate analysis. Other risks are more specific to arbitration: AI may be used to manipulate evidence (eg, deepfakes, synthetic documents or altered metadata), or arbitrators may delegate aspects of their decision-making mandate in unacceptable ways. Additionally, both counsel and arbitrators may breach professional secrecy or other confidentiality obligations by sharing information with AI tools.
These risks are beginning to attract the attention of both legislators and arbitral institutions. In the EU, the Artificial Intelligence Act seeks to regulate AI comprehensively across sectors, including, to some extent, its use by courts and arbitral tribunals. Meanwhile, arbitral institutions and professional bodies have begun to consider more systematically how AI may be used in arbitration without undermining due process, confidentiality, the personal mandate of the arbitrator or the enforceability of awards.
These developments are of particular importance to Switzerland, given its continued prominence as a seat of international arbitration. No specific legislative framework comparable to the EU’s AI Act is currently planned in Switzerland. The limits to the admissible use of AI in Swiss-seated proceedings must therefore be derived from the existing legal framework, in particular Chapter 12 of the Swiss Private International Law Act (PILA), read together with the case law of the Swiss Federal Supreme Court and statutory rules on professional secrecy, data protection and cybersecurity.
Under Swiss arbitration law, the use of AI by arbitrators is not inadmissible as such. It is accepted that arbitrators may rely on software tools, including AI, to assist with organisational tasks, the analysis of evidence, legal research or the preparation of drafts. The decisive limit lies elsewhere: an arbitrator’s mandate is inherently personal. If an arbitrator were effectively to outsource the adjudicative function to an AI system, this could expose the award to challenge under Chapter 12 PILA. In this respect, the position is analogous to the case law concerning tribunal secretaries: they may support the work of the arbitrators, but they cannot replace the arbitrators’ own assessment of the facts and the law, nor reduce their responsibility for the outcome.
Existing rules and principles also limit the use of AI by counsel and parties. Here, the central procedural constraint is the right to be heard. The mere fact that AI has been used in the preparation of submissions or evidence does not in itself render an award vulnerable under Swiss law. Nor does Swiss law permit a general review of whether an arbitral tribunal assessed the facts correctly simply because AI may have influenced the process. The position changes, however, where AI is used to produce fabricated evidence, alter documents or submit fictitious authorities.
Such conduct may impair the opposing party’s ability to respond effectively to the case against it. In that situation, procedural fairness is at stake and, if the outcome of the arbitration is affected by a serious misuse of AI, the award may be subject to annulment under Article 190 PILA. Where AI-assisted manipulation of evidence or submissions comes to light only after the arbitration has concluded, Swiss law may also permit a request for revision.
Separate issues arise in relation to professional secrecy, data protection and confidentiality. These do not primarily concern arbitration law as such but relate to the professional and statutory duties governing the conduct of counsel and arbitrators. Lawyers admitted in Switzerland are bound by professional secrecy, and a breach may constitute a criminal offence. Personal data contained in arbitration files is also subject to the Federal Act on Data Protection, and often also to foreign data protection laws, such as the General Data Protection Regulation (GDPR). Additional confidentiality duties may arise from the agreement underlying the dispute, from applicable institutional rules or from the terms under which arbitrators are appointed. Processing case-related information with the help of AI tools may therefore raise issues for both counsel and arbitrators, unless adequate precautions are taken to ensure compliance with the relevant rules and regulations.
While Swiss law provides a robust framework for assessing the admissible use of AI in arbitration, it does not answer every practical question that counsel, arbitrators or parties may face in day-to-day proceedings. It is therefore unsurprising that arbitral institutions and professional bodies are increasingly looking to develop practical guidance for the international arbitration community. Institutions such as the International Chamber of Commerce administer cases across jurisdictions and legal traditions. They have a strong interest in preserving the quality and legitimacy of arbitration as an internationally accepted dispute resolution mechanism, and in assisting arbitrators and counsel in the responsible use of these new tools. This objective – ensuring that arbitration remains modern and efficient while safeguarding due process guarantees and the enforceability of awards – is reflected in the ongoing work of the ICC Task Force on Artificial Intelligence in Dispute Resolution and in the initiatives of other institutions.
What is emerging from the work of arbitral institutions is a constructive and increasingly practical discussion about how best to support a responsible and human-centric use of AI in arbitration. Institutions are seeking to identify where the limits of permissible use should lie, how arbitrators may be assisted by AI without compromising the personal nature of their mandate and to what extent transparency around the use of AI may be required or advisable. At the same time, there is a clear awareness that technology is evolving rapidly and that overly prescriptive rules may do more harm than good. Rigid disclosure requirements, in particular, could quickly become outdated and invite tactical objections or frivolous challenges by unsuccessful parties. The more promising direction is therefore a principles-based and case-sensitive approach, combining practical guidance with sufficient flexibility to accommodate technological change and the diversity of arbitral proceedings.
The Swiss arbitration community is actively engaged in these discussions and committed to an approach that reconciles innovation and efficiency with human judgment, procedural fairness and reliability as key pillars of international arbitration.