AI applications in financial services offer benefits ranging from enhanced risk assessment to fraud detection and customer service. However, the use of AI in a regulated industry brings a different level of challenge for businesses.

Risks include:

  • Imperfect accuracy in AI inputs or outputs could lead to inaccurate risk assessments and financial loss.
  • AI use may contravene regulations, creating regulatory risk and exposure to sanctions.

Financial services are generally regulated at the national level; although there is some international harmonisation, none yet covers the use of AI. The UK presently has no dedicated AI regulator, but many regulators, such as the Financial Conduct Authority, are looking at AI. The UK Government has recently announced a consultation on its approach to developing cross-sectoral AI regulation (see below). Attitudes to regulation also vary globally: recent research suggests that the UK and Hong Kong favour a principles-based approach without new statutes, the EU and China favour prescriptive rules, whereas places like Australia and Japan have favoured voluntary codes of conduct (though Japan has recently adopted an innovation-focussed law on AI governance).

Whilst governments have met globally to discuss AI, the technology cuts across multiple sectors and evolves rapidly, making comprehensive global regulation impractical. What has emerged instead is a desire for certain common standards, as outlined here.

The Council of Europe's treaty on AI, opened for signature in September 2024, puts the promotion of human dignity and safe AI development at its core. With no binding global authority, different regulatory approaches are emerging. The OECD and G20 speak mainly of voluntary frameworks, but national interests take precedence over global regulatory coherence. Whilst the EU's AI Act sets out a prescriptive, rules-based approach, the US favours a more innovation-first model with lighter oversight. For financial services, where investor protection is a particular concern, these differing regimes present challenges for business. The divergence means regulatory regimes ranging from detailed rulebooks to flexible, pro-innovation guidance.

What is the UK’s approach to AI?

The UK Government has recently launched a ‘call for evidence’ in relation to an initiative called the UK AI Growth Lab. This is the Government’s proposed project to establish the UK as ‘the best place to develop and launch innovative AI applications’. The proposal is for a form of regulatory incubator in which AI innovators will ‘sandbox’ their products in the UK regulatory environment to ‘generate real-world evidence of their impact’. Regulators would be active participants in the AI Growth Lab, and there would be opportunities for supervised pilot schemes operating within ‘modified’ regulatory environments.

The call for evidence notes that AI is already impacting the financial services sector, suggesting that current regulation may be hindering advancement and growth. The example given is of complex and opaque AI models that do not meet the required level of ‘explainability’ for the advice given by Independent Financial Advisers. Recent research, it is said, shows that ‘Claude Opus exceeds human-expert personal financial advisers, and is significantly cheaper.’

Whilst pro-innovation AI regulation for the UK is a laudable aim – and the Government cites the Organisation for Economic Co-operation and Development’s (OECD) 2023 report on regulatory sandboxes in AI, so is alive to the international context – any new national rules will mean another set of compliance hurdles for businesses. What businesses require is regulatory clarity and a consistent understanding of expectations across borders, without duplicative or conflicting regulations; hence the need for cross-border discussion. Some also view the increasing tendency toward data localisation policies as a problem: such policies are typically introduced to protect personal data privacy and security, but they can limit the scalability and effectiveness of AI systems that depend on large data sets.

Global approaches differ but the OECD leads the way

The OECD AI Principles were the first intergovernmental framework for AI governance and created an international standard. Originally adopted in May 2019 and updated in May 2024, they have been signed up to by all OECD member countries, along with 10 additional countries and the EU.

The OECD AI Principles seek to promote the use of AI that respects human rights and democratic values. The aims are:

  1. Inclusive growth, sustainable development and well-being
  2. Human rights and democratic values, including fairness and privacy
  3. Transparency and explainability
  4. Robustness, security and safety
  5. Accountability.

The OECD AI Principles provide a framework for jurisdictions to build upon. The OECD also has five recommendations:

  1. Invest in AI Research and Development
  2. Foster a Digital Ecosystem for AI
  3. Ensure a Policy Environment that Supports AI
  4. Build Human Capacity and Prepare for Labour Market Transformation
  5. International Cooperation for Trustworthy AI.

G20 AI Principles

The G20 AI Principles, adopted in 2019, give political affirmation to the OECD AI Principles and emphasise a ‘human-centric, trustworthy, and inclusive approach to AI governance’.

G7 Code of Conduct on Advanced AI Development

Under the Hiroshima AI Process of October 2023, the G7 unveiled its AI Principles alongside a Code of Conduct on Advanced AI Development. The Code’s primary goal is to promote the safe, secure, and trustworthy development, deployment, and use of advanced AI systems, including generative AI.

OECD Alignment with G20 and G7 AI Principles

As noted above, the G20 AI Principles, endorsed in 2019, are largely based on the OECD AI Principles. They emphasise human-centred AI and promote international cooperation to ensure AI technologies are trustworthy and beneficial. The shared principles are:

  1. Inclusive growth, sustainable development and well-being
  2. Human-centred values and fairness
  3. Transparency and explainability
  4. Robustness, security and safety
  5. Accountability.

The OECD’s work has led countries to apply values-based principles and recommendations that are generally voluntary in nature. Jurisdictions can interpret and apply them as they see fit. While OECD and G20 member countries reflect the principles in their own guidelines, broad, non-binding requirements have their limitations, and countries often take different approaches in their domestic AI regulations.

The G7 Code of Conduct shares core values with the OECD AI Principles: both aim to promote trustworthy and responsible AI, a risk-based approach, privacy, and interoperable policy standards. However, the G7 Code of Conduct is more targeted towards advanced AI and generative AI models. With 11 principles focused on advanced AI systems, it provides practical steps for organisations implementing governance and risk policies, in contrast with the OECD’s broad principles-based approach.

The updated OECD Principles now cover general-purpose and generative AI, while remaining applicable to all AI systems. The G7 Code of Conduct is more explicit about advanced AI systems, with some actions focused on specific model types (generative and foundation models). There is, however, not yet global agreement on regulation.

If you have questions or concerns about the regulation of AI, please contact James Tumbridge and Robert Peake.