
UK Regulation of AI and Machine Learning in the Financial Services Sector

  1. This paper looks at how AI and machine learning are currently regulated in the financial services sector. It also examines the recent market engagement process which the UK regulatory authorities have undertaken regarding AI and machine learning, and anticipates possible future developments in this space. It will be of interest primarily to regulated firms, but also to AI and machine learning developers, advisers and other stakeholders.

Background

  2. On 26 October 2023, the UK regulatory authorities – the Bank of England, the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) – published feedback statement FS/23[1], summarising the responses to their joint discussion paper (DP5/22) on Artificial Intelligence (AI) and machine learning, which was issued in October 2022. The aim of the market engagement process was to further the regulators’ understanding of how AI might affect their respective objectives. The discussion paper set out the Bank’s and the FCA’s views on the issues surrounding the use of AI and machine learning in UK financial services.
  3. International governments and regulators have taken very different approaches to the regulation of AI and machine learning. These range from (a) prescriptive regulations and laws, as in the EU, the PRC and Canada; to (b) cross-sectoral principles, such as the UK approach and the OECD principles; and (c) sector-specific principles for financial services firms, as in the Netherlands, Hong Kong and Singapore.
  4. The discussion paper and feedback statement form part of the UK regulatory authorities’ wider programme of work on AI and machine learning, which includes the UK government’s policy paper on the regulation of AI. In that policy paper, the UK government stated that its preferred approach was to set out the core characteristics of AI to inform the scope of the AI regulatory framework, but to allow regulators to set out and evolve more detailed definitions of AI according to their specific domains or sectors. The government’s view was that the UK should regulate the use of AI rather than the technology itself, and that regulators’ responses to the risks should be proportionate, considering guidance or voluntary measures in the first instance.

How is AI used in Financial Services?

  • The financial services sector was an early adopter of AI, and the technology is now widespread within the industry. It has been used extensively in credit decisions, savings and investment advice, anti-money laundering (AML) and fraud detection, customer onboarding, money management and personalised finance offers, information extraction and document scanning, and marketing. In most cases, firms are using AI to augment or upgrade existing rules-based models, and there generally remains an element of human input rather than full-scale automation of decision-making. However, the industry is evolving rapidly, and firms are deploying AI in more material business areas and use cases: from AML functions and credit and regulatory capital modelling in banking, to claims management, product pricing and capital reserve modelling in insurance, and order routing, execution and the generation of trading signals in investment management. For more detail, see the October 2022 report published by the UK regulatory authorities on machine learning in financial services.

Risks in AI & Machine Learning

  5. In DP5/22, the UK regulators noted:

a) The primary drivers of AI risk in financial services relate to three key stages of the AI lifecycle: (i) data; (ii) models; and (iii) governance. In relation to each:

  1. Data: AI systems can ingest and analyse both traditional data sources and unstructured / alternative data from new sources (such as image and text data). This makes data quality vital, but it also means that AI may pick up bias within datasets and may not perform as intended;
  2. Models: traditional models tend to be rules-based, whereas AI models learn the rules and update their parameters iteratively. This makes them more difficult to monitor, they require more data to train, and they can be more opaque (see the sketch at the end of this paragraph); and
  3. Governance: autonomous decision-making poses governance challenges. How can we ensure effective oversight and accountability?

b) The paper acknowledged the benefits of AI for consumers, competition, firms and financial markets. However, it also highlighted potential risks for these various stakeholders:

  1. For consumers, AI could lead to harmful targeting of consumers’ behavioural biases or characteristics of vulnerability, discriminatory decisions, financial exclusion, and reduced trust.
  2. For competition, AI could be damaging if it aided harmful strategic behaviour such as collusion, created or exacerbated market features that hinder competition (such as barriers to entry), or was used to leverage a dominant position.
  3. For firms, there were potential risks for safety and soundness, in relation to data, model risk management and governance.
  4. For financial markets, there were potential risks to system resilience and efficiency, such as models becoming correlated in subtle ways, adding to risks of herding or procyclical behaviour in stressed markets.
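To make the rules-based/learned-model contrast in (a)(2) above concrete, the following is a minimal, hypothetical sketch in Python. It is not drawn from DP5/22; it assumes scikit-learn is available, and all figures (in GBP thousands) are invented.

```python
# Hypothetical sketch (not from DP5/22): a fixed, rules-based credit check
# versus a model that learns its decision rule from data.
# Assumes scikit-learn is available; figures are invented, in GBP thousands.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rules_based_decision(income: float, debt: float) -> bool:
    """Transparent, fixed rule: auditable by reading the code."""
    return income > 30 and debt / income < 0.4

# Toy training data: [income, debt] -> repaid (1) / defaulted (0).
X = np.array([[45, 5], [28, 20], [60, 10], [20, 15]])
y = np.array([1, 0, 1, 0])

# The learned model infers its own decision boundary from the data:
# the 'rule' now lives in fitted parameters rather than readable code,
# which is the source of the monitoring and opacity concerns noted above.
model = LogisticRegression().fit(X, y)

print(rules_based_decision(35, 8))    # True under the fixed rule
print(model.predict([[35, 8]]))       # the learned decision
print(model.coef_, model.intercept_)  # the opaque, fitted 'rule'
```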

Key Existing Financial Services Regulation which governs AI

  6. PRA- and FCA-authorised firms are already subject to a wide range of legal requirements and guidance relevant to mitigating the risks associated with AI. These are considered further below, in relation to the themes set out in paragraph 5(b) above.

Consumers

  7. The FCA’s rules on consumer protection are anchored in its Principles for Businesses, supported by other rules. Principle 6 requires a firm to pay due regard to the interests of its customers and treat them fairly, and Principle 7 obliges firms to pay due regard to the information needs of their clients and communicate information to them in a way which is clear, fair and not misleading. These have been supplemented by the new Consumer Duty (Principle 12), which came into force in July 2023 and sets higher and clearer standards of consumer protection across financial services, requiring firms to put their customers’ needs first. The Consumer Duty requires firms to deliver good outcomes for retail customers, including those who are not clients of the firm. Firms must act in good faith, avoid foreseeable harm, and enable retail customers to pursue their financial objectives. Products and services must be designed to meet retail customer needs and offer fair value. Communications must be clear and not misleading, and firms must provide ongoing support to their customers. Consumer protection requirements are also contained in the Consumer Protection from Unfair Trading Regulations 2008.

Risks of Bias, Vulnerability, Discrimination & Exclusion

  8. Firms should be alert to the risk of AI being used to target consumers’ behavioural biases in harmful ways. Firms are permitted to have pricing models which price differently per group – for example, to account for different risks – but the price must be reasonable and justifiable, provide fair value, and comply with the Equality Act 2010 in respect of protected characteristics. Certain AI-driven pricing strategies could breach the requirements noted above if they result in poor outcomes for retail customers (a hypothetical monitoring check is sketched after this list).
  9. Firms should also take account of the FCA’s vulnerable customer guidance. Firms should understand the characteristics of vulnerability of their target market and main customer base and ensure that their products and services meet their needs, including in respect of AI-driven strategies.
  10. Discriminatory decision-making by AI systems could breach the Equality Act 2010. The Equality and Human Rights Commission (EHRC) has primary responsibility for upholding the Equality Act, but the UK regulatory authorities must also have regard to eliminating discrimination under it. Firms designing products or services will need to define a target market and ensure the product or service meets the needs, characteristics and objectives of that market without discrimination. Firms should also guard against the exclusion of customers, a risk highlighted in the FCA’s 2022–2025 Strategy.
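As an illustration of how a firm might monitor AI-driven pricing outcomes for the risks described above, the following is a minimal, hypothetical sketch. The data, column names and the 1.2 review threshold are invented, and a real fairness assessment would be considerably more involved, including whether any disparity is risk-justified.

```python
# Hypothetical sketch of an outcome-fairness check on an AI pricing model.
# Illustrative only: the data, columns and threshold are assumptions,
# not regulatory requirements. Assumes pandas is available.
import pandas as pd

# Toy decision log: quoted premium against a group marker used for monitoring.
decisions = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "A"],
    "premium": [100.0, 110.0, 150.0, 145.0, 155.0, 105.0],
})

# Compare average pricing outcomes across groups.
by_group = decisions.groupby("group")["premium"].mean()
disparity = by_group.max() / by_group.min()

# A firm might set an internal review trigger (the 1.2 ratio is invented).
if disparity > 1.2:
    print(f"Price disparity ratio {disparity:.2f} exceeds review threshold; "
          "investigate whether the difference is risk-justified.")
```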

Competition

  11. Firms should be cognizant of any AI-related behaviour that might give rise to a risk of collusion. In DP5/22, the FCA highlighted in particular the risk of AI detecting price changes from rivals and enabling a rapid or automatic response: this could facilitate collusive strategies between sellers and punish deviation from a collusive strategy. Further, where AI is particularly relevant to a business’s practice, the costs of entry (including staff and skills, as well as the data and technology itself) may be raised to a level that limits market entry, with potentially harmful effects on competition. In DP5/22, the FCA and the PRA noted their existing powers under the Competition Act 1998 and the Enterprise Act 2002, and drew particular attention to their powers to carry out market studies.
  12. It is also worth noting that financial services firms will be under other existing obligations when using AI, aside from those specific to financial services. For example, where firms use AI to process personal data, they will have obligations under UK data protection law, including the UK GDPR and the Data Protection Act 2018. For the ICO’s guidance on AI, see Explaining decisions made with AI and the Guidance on AI and Data Protection. Further, when using AI systems in their decision-making processes, firms will need to ensure that this does not result in unlawful discrimination based on protected characteristics, in line with the Equality Act 2010.

Data Risks

  13. As noted above, data quality is vital for financial services firms: poor-quality data can compromise any process which relies on it. A number of elements of the current regulatory framework address this risk. For example, the Basel Committee on Banking Supervision’s Principles for effective risk data aggregation and risk reporting (BCBS 239) contains principles aimed at strengthening prudential risk data aggregation, such as ensuring the accuracy, integrity, completeness, timeliness and adaptability of data (a simple illustration of such checks follows this list). Systemically important banks are expected to adhere to these principles.
  14. Data architecture and infrastructure refers to the systems, standards and policies by which data are stored, arranged and integrated; data resilience refers to the ability of data to be preserved after a failure or disruption. Regulation in this area is more general, requiring firms to have strong data architecture and risk management infrastructure. Examples include the Risk Control Part of the PRA Rulebook, Fundamental Rules 5 and 6, and the BCBS’s guidelines on ‘Corporate governance principles for banks’.
  15. Firms are also expected to have appropriate data governance. Requirements include BCBS 239 and the Markets in Financial Instruments Regulation, which imposes obligations relating to trade data with the aim of improving protections for investors.
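As a loose illustration of the BCBS 239 dimensions mentioned in paragraph 13, the sketch below runs simple completeness, accuracy and timeliness checks over a single risk-data record. The field names and tolerances are assumptions, not anything prescribed by BCBS 239.

```python
# Hypothetical sketch of automated data-quality checks along the
# BCBS 239 dimensions noted above (completeness, accuracy, timeliness).
# Field names and tolerances are illustrative assumptions.
from datetime import datetime, timedelta, timezone

def check_record(record: dict, max_age: timedelta = timedelta(days=1)) -> list[str]:
    """Return a list of data-quality issues for one risk-data record."""
    issues = []
    # Completeness: required fields must be present and populated.
    for field in ("counterparty_id", "exposure", "as_of"):
        if record.get(field) in (None, ""):
            issues.append(f"missing field: {field}")
    # Accuracy/integrity: values must sit in a plausible range.
    exposure = record.get("exposure")
    if isinstance(exposure, (int, float)) and exposure < 0:
        issues.append("negative exposure")
    # Timeliness: data must be recent enough to aggregate and report on.
    as_of = record.get("as_of")
    if isinstance(as_of, datetime) and datetime.now(timezone.utc) - as_of > max_age:
        issues.append("stale record")
    return issues

print(check_record({"counterparty_id": "C1", "exposure": -5.0,
                    "as_of": datetime.now(timezone.utc)}))
```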

Model Risk Management

  16. Model risk management (MRM) is becoming increasingly important as a primary framework for some firms to manage and mitigate potential AI-related risks. On 17 May 2023, the PRA published its feedback to a consultation on model risk management principles for banks, together with its final policy, which takes effect on 17 May 2024.
  17. Validation and independent review of an AI model are important to ensure an objective view of the model, including the way in which it has been developed and whether it is suitable for its intended purpose. The IOSCO Board, an international standard-setting body for securities regulation, has also developed guidance for regulators on supervising the use of AI and machine learning by market intermediaries and asset managers, specifically on the development, testing and ongoing monitoring of AI techniques (a simple monitoring sketch follows below).
  18. Effective governance provides support and structure to MRM activity through policies that define relevant risk management activities, and procedures that implement those policies. It also covers the allocation of resources and mechanisms for testing whether policies and procedures are being carried out as required.
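The following is a minimal sketch of the kind of ongoing monitoring referred to in paragraph 17: comparing a model's live accuracy against its validation baseline and escalating when drift exceeds a tolerance. The metric, threshold and escalation step are illustrative assumptions, not regulatory requirements.

```python
# Hypothetical sketch of ongoing model monitoring for MRM purposes:
# track whether live accuracy drifts from the validation baseline.
# The metric, tolerance and escalation step are illustrative assumptions.
def monitor_performance(baseline_accuracy: float,
                        live_outcomes: list[tuple[int, int]],
                        tolerance: float = 0.05) -> bool:
    """Return True if live accuracy stays within tolerance of baseline.

    live_outcomes: (predicted, actual) pairs from production decisions.
    """
    correct = sum(1 for pred, actual in live_outcomes if pred == actual)
    live_accuracy = correct / len(live_outcomes)
    drift = baseline_accuracy - live_accuracy
    if drift > tolerance:
        # In practice this would feed an MRM escalation workflow.
        print(f"Alert: accuracy fell from {baseline_accuracy:.2f} "
              f"to {live_accuracy:.2f}; trigger independent review.")
        return False
    return True

monitor_performance(0.90, [(1, 1), (0, 1), (1, 0), (0, 0), (1, 1)])
```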

Governance

  19. Good governance is essential to support safe and responsible AI adoption by firms. It underpins proper procedures, clear accountability and effective risk management across the AI lifecycle by putting in place a set of rules, controls and policies for a firm’s use of AI. Areas covered by governance-related regulatory requirements include risk control, compliance, internal audit, financial crime, outsourcing and record keeping. These are mostly general requirements, but they are relevant to a firm’s use of AI.
  20. Within the Senior Managers and Certification Regime (SM&CR) there is at present no dedicated senior management function (SMF) for AI. Currently, technology systems are the responsibility of the SMF24 (Chief Operations function). Separately, the SMF4 (Chief Risk function) has responsibility for the overall management of a firm’s risk controls, including the setting and managing of its risk exposures. These functions apply to PRA-authorised SM&CR banking and insurance firms and FCA-authorised enhanced scope SM&CR firms, but not to core or limited scope SM&CR firms.
  21. In DP5/22, the UK regulators acknowledged that there is a question as to whether there should be a dedicated SMF and/or a Prescribed Responsibility for AI under the SM&CR. Arguably, AI use may not yet have reached a level of materiality or pervasiveness to justify these changes, but this is an area to watch.
  22. The concept of ‘reasonable steps’ is a core element of the SM&CR. SMFs can be subject to enforcement action under S66A(5) and/or S66B(5) of FSMA if an area of the firm for which the SMF has responsibility breaches regulatory requirements and the FCA and/or PRA can demonstrate that the senior manager failed to take such steps as a person in their position could reasonably be expected to take to prevent or stop those breaches. The UK regulatory authorities have acknowledged that further work is needed to develop guidance on what constitutes reasonable steps in the context of AI. DP5/22 also acknowledges the importance of human involvement in the decision-making loop.

Operational Resilience, Outsourcing & Third Party Risk Management

  23. Operational resilience has been a priority of the UK regulatory authorities since 2018. It refers to the ability to prevent and manage operational disruptions. Firms are required to identify their important business services and to set impact tolerances for those services. Operational resilience applies to the use of AI by firms when it supports an important business service. This means firms and financial market infrastructures (FMIs) should set an impact tolerance for disruption for each important business service that involves AI, and ensure they are able to remain within that tolerance in the event of a severe but plausible disruption (a simple illustration follows at the end of this section).
  24. The regulators have stated that many of the principles, expectations and requirements for operational resilience may provide a useful basis for the management of certain risks posed by AI and support its safe and responsible adoption – for example, developing and implementing effective business continuity and contingency plans for AI systems that support an important business service. The regulators have also noted that firms are expected and/or required to meet applicable operational resilience requirements and expectations irrespective of whether the AI is developed in-house or by third parties. The supervisory authorities’ requirements for outsourcing and third party risk management therefore also apply to third party AI models used by firms (see SYSC 8.1, 13.7 and 13.9 of the FCA Handbook; for CRR firms, the Outsourcing Part of the PRA Rulebook; for Solvency II firms, Chapter 7 of the Conditions Governing Business Part of the PRA Rulebook; and PRA SS2/21).
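By way of illustration of paragraph 23, the sketch below records impact tolerances for important business services that rely on AI and tests a disruption scenario against them. The service names, tolerances and scenario are invented for illustration.

```python
# Hypothetical sketch of recording impact tolerances for important
# business services that rely on AI, as described above. Service names,
# tolerances and the disruption scenario are invented for illustration.
from dataclasses import dataclass

@dataclass
class ImportantBusinessService:
    name: str
    uses_ai: bool
    max_outage_hours: float  # impact tolerance for disruption

SERVICES = [
    ImportantBusinessService("retail payments", uses_ai=True, max_outage_hours=2.0),
    ImportantBusinessService("fraud screening", uses_ai=True, max_outage_hours=4.0),
]

def within_tolerance(service: ImportantBusinessService, outage_hours: float) -> bool:
    """Test a 'severe but plausible' disruption scenario against tolerance."""
    return outage_hours <= service.max_outage_hours

for svc in SERVICES:
    print(svc.name, "OK" if within_tolerance(svc, outage_hours=3.0) else "BREACH")
```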

Feedback Statement FS/23

  25. As noted above, Feedback Statement FS/23, published on 26 October 2023, summarises the responses to the October 2022 discussion paper on AI and machine learning (DP5/22) and identifies the main themes emerging from the feedback. The FCA has not published policy proposals at this stage; nevertheless, the statement gives some hints as to the regulatory direction of travel and flags how firms are prioritising AI risks and approaching the creation of AI governance frameworks.
  26. Some key points to note from the statement are as follows:
  • Most respondents thought that a regulatory definition of AI would not be useful;
  • A majority of respondents cited consumer protection as an area for the regulatory authorities to prioritise;
  • Respondents highlighted that the speed and scale of AI could increase the potential for (new forms of) systemic risks, such as interconnectivity between AI systems and the potential for AI-induced firm failures;
  • Respondents suggested that third-party providers of AI solutions should provide evidence supporting the responsible development, independent validation, and ongoing governance of their AI products, providing firms with sufficient information to make their own risk assessment;
  • There is an increasing risk of AI tools being used by bad actors for fraud and money laundering – for example, generative AI being exploited to create deepfakes as a way to commit fraud. The use of generative AI may increase rapidly in financial services, yet the risks of this technology are not yet fully understood, especially those relating to bias, accuracy, reliability and explainability. Because of ‘hallucinations’ in generative AI outputs, respondents also suggested that there may be risks to firms and consumers relying on or trusting generative AI as a source of financial advice or information; and
  • Many respondents were opposed to creating a new prescribed responsibility for AI to be allocated to a senior management function. Most respondents thought that further guidance on how to interpret the ‘reasonable steps’ element of the SM&CR in an AI context would be helpful, although only if it were practical and actionable.

Next Steps

  27. The regulatory authorities are clearly cognizant of international developments in the AI space. This is particularly relevant to financial services firms, many of which operate in multiple jurisdictions; a coordinated international response will be necessary to ensure that AI risks are addressed appropriately and proportionately. In FS/23, the regulatory authorities suggested that further policy proposals may be forthcoming in the AI sphere, although there is not yet a clear direction of travel. We will continue to monitor developments in this space. If you have any questions, please do not hesitate to contact us.

[1] Also published as PRA FS2/23 by the PRA
