The Bank of England, in conjunction with the Financial Conduct Authority (FCA) and the Prudential Regulation Authority (PRA), has published a discussion paper on the use of artificial intelligence (AI) and machine learning (ML) in financial services. Brandon Wong, a solicitor with Burges Salmon LLP, offers this primer on the topic.
The discussion paper looks at the potential benefits and risks associated with AI in the sector, as well as the regulators’ view of the current regulatory framework governing the use of AI by financial services firms. The paper aims to further the FCA’s and PRA’s (together, the Regulators) understanding and to deepen dialogue on how AI may affect their regulatory objectives, ultimately to ensure safe AI adoption. The paper forms part of a wider programme of AI-related work, including the AI Public-Private Forum and the government’s policy paper on ‘Establishing a pro-innovation approach to regulating AI’.
What is AI?
The paper does not provide a definition of AI; instead, it discusses the benefits and risks associated with different approaches to defining AI. It recognises that ‘algorithm’ and ‘model’ already have specific meanings within financial services regulation and that, while both may be components of an AI system, neither is necessarily in itself AI.
The two general approaches highlighted are:
- providing a more precise legal definition of what AI is; or
- viewing AI as part of a wider spectrum of analytical techniques with a range of elements and characteristics. Such approaches may include a classification scheme to encompass different methodologies or a mapping of the characteristics of AI.
Each approach aims to clarify what constitutes AI within the context of a specific regulatory regime and, therefore, what restrictions and requirements may apply to the use of the technology.
Benefits of the Regulators adopting a more precise AI definition include creating a common language for participants, harmonising regulatory responses to AI and clarifying whether specific use cases fall in scope (i.e. giving certainty about the regulatory perimeter). However, challenges are also recognised, including whether a definition can remain robust against rapid technological development, the risk of a definition being too broad or missing use cases, and the potential for firms to misclassify systems in order to reduce regulatory oversight.
Given the risks, the Regulators consider that an alternative approach could be more suitable for the regulation of AI in UK financial services.
Where is AI being used in financial services?
Alongside the paper, the Regulators published a joint report on machine learning in UK financial services, which examines firms’ ML implementation strategies. It found that the number of ML applications used in UK financial services continues to increase.
Based on surveys of financial services firms, the report found that respondents from the banking and insurance sectors have the highest number of ML applications. Other types of firms surveyed with ML applications included financial market infrastructure (FMI) and payments firms, non-bank lenders, and investment and capital markets firms.
In terms of the range of ML use cases, the report found that firms are developing or using ML across most business areas:
- ‘Customer engagement’ and ‘risk management’ continue to be the areas with the most applications;
- The ‘miscellaneous’ category, which includes business areas such as human resources and legal departments, had the third highest proportion of ML applications; and
- The business areas with the fewest ML applications are ‘investment banking’ and ‘treasury’.
Although this is the current state of play, as the technology and firms’ use of it develop, more use cases and wider implementation across the sector are likely.
Benefits and risks
AI may bring important benefits to consumers, financial services firms, financial markets, and the wider economy, making financial services and markets more cost-effective, efficient, accessible, and tailored to consumer needs.
However, AI can pose novel challenges, as well as create new risks or amplify existing ones. To support the safe and responsible adoption of AI technologies in UK financial services, the Regulators suggest that they may need to intervene to mitigate the potential risks and harms related to AI applications, although the importance of a proportionate approach is recognised.
In respect of risks, the paper notes that the primary drivers of AI risk in financial services relate to three key stages of the AI lifecycle:
- Data – the input. Given that AI relies significantly on large volumes of data in its development (training and testing) and implementation, data-related risks can be amplified and have significant implications for AI systems;
- Models – the processing. These could include inappropriate model choices, errors in the model design or construction, lack of explainability, unexpected behaviour, unintended consequences, degradation in model performance, and model or concept drift; and
- Governance – the oversight. Drivers of risk here include the absence of clearly defined roles and responsibilities for AI, insufficient skillsets, governance functions that do not include the relevant business areas or consider the relevant risks (such as ethical risks), a lack of challenge at the board and executive level, and general lack of accountability.
Depending on how AI is used in financial services, issues at each of the three stages can result in a range of outcomes and risks that are relevant to financial services regulation.
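The drivers above are framed at the level of principle, but some, such as data drift and model or concept drift, map directly onto quantitative monitoring controls that many firms already run. By way of illustration only, the Python sketch below computes the Population Stability Index (PSI), a metric commonly used in credit risk modelling to flag when a live population has drifted away from the data a model was trained on. The sample data, bin count and review threshold here are hypothetical assumptions for the example, not anything prescribed by the paper.

```python
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """Population Stability Index (PSI) between a reference (training)
    sample and a live sample of the same feature or model score.

    Common rule of thumb: PSI < 0.1 suggests little shift, 0.1-0.25 a
    moderate shift, and > 0.25 a significant shift warranting review.
    """
    # Derive bin edges from the reference distribution so each bin holds
    # roughly the same share of the training data.
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Floor the proportions to avoid log(0) for empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Hypothetical example: credit-model scores at training time vs in production.
rng = np.random.default_rng(0)
training_scores = rng.normal(600, 50, 10_000)  # reference sample
live_scores = rng.normal(620, 60, 5_000)       # live sample that has drifted

psi = population_stability_index(training_scores, live_scores)
print(f"PSI = {psi:.3f}")  # above ~0.25 would typically trigger a model review
```

In practice, a check of this kind would sit within a firm’s ongoing model monitoring and feed into the governance reporting contemplated by the third risk driver above.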
In terms of where these risks may materialise, the Bank of England identified the following areas of particular relevance:
- Consumer protection – particularly in respect of bias and (unlawful) discriminatory decisions;
- Competition – with issues around barriers to market due to costs of entry;
- Safety and soundness – amplifying prudential risks (including credit, liquidity, market, operational and reputational risks);
- Insurance policyholder protection – inappropriate pricing and marketing, concept drift and lack of explainability, inaccurate predictions and reserve levels; and
- Financial stability and market integrity – including concerns around uniformity across models, herding and flash bubbles.
Despite these risks, AI can bring a number of benefits in respect of each of the areas identified above, and firms should consider how they can mitigate the risks identified when developing and deploying their AI systems.
Existing regulatory framework
One of the challenges to the adoption of AI in UK financial services is the lack of clarity surrounding the current rules, regulations and principles: in particular, how these apply to AI and what that means for firms at a practical level. The Bank of England has sought to address this challenge in the paper by discussing the parts of the current regulatory framework that it considers most relevant to the regulation of AI.
Given the technology-agnostic nature of the UK financial services regulations, there is no one source of AI regulation. Instead, the paper recognises a broad range of legislation and regulatory frameworks that could apply in the context of AI, including (non-exhaustively):
- The FCA’s new Consumer Duty;
- The FCA’s guidance on vulnerable customers;
- Rules in the FCA’s PROD sourcebook;
- The Equality Act 2010;
- UK data protection law – including the proposed Data Protection and Digital Information Bill;
- Competition law; and
- Anti-money laundering legislation.
The paper also highlights a number of more specific sets of rules and guidance that may apply depending on the firm in question. While there may be no great surprises in the list of potentially applicable regulations, a key takeaway is the breadth of distinct regulatory frameworks that can apply. Firms intending to develop and use AI as part of their business models will therefore want to undertake a full analysis of their regulatory obligations so that they are not caught out.
As well as the discussion points above, the paper (DP5/22 – Artificial Intelligence and Machine Learning) raises a number of questions on which it seeks views from a broad range of market participants and stakeholders. Comments on the discussion paper can be made until 10 February 2023.