As improvements in the capabilities of AI continue apace, many industries are learning how to leverage this new technology effectively. The financial services industry is no exception, with many providers integrating AI into their services in order to improve customer outcomes and reduce costs. However, unlike firms in many other industries, financial services providers are heavily regulated (in the UK and elsewhere), and so both regulated firms and their regulators are grappling with the implications of a world in which the use of AI becomes commonplace.
In addition to the generic use of AI by financial services providers (for example, in customer service interactions), there are many potential use cases for AI which are specific to the financial services industry:
While the opportunities presented by AI for the financial services industry are significant, its deployment is not without its challenges and risks. Issues such as data bias, model explainability, cyber risks, and operational resilience have led UK regulators (such as the FCA and PRA) to closely examine AI’s role in financial services.
The UK’s financial regulators (the FCA and the PRA) have adopted a principles-based approach to regulating AI in the financial services sector. Whilst the regulators are alive to the potential for AI to benefit financial services firms and their customers (and acknowledge that they are exploring how they themselves can use the technology to regulate more effectively), they are also conscious of the specific risks that the use of AI may pose to consumers. Their regulatory stance reflects a balance between fostering innovation and ensuring consumer protection, financial stability, and market integrity.
As with the development of other technologies (such as blockchain technology), the UK regulators have said that their approach to the regulation of AI will be “technology neutral”. This means that the FCA and PRA are unlikely to adopt AI-specific rules; instead, they are more likely to address the underlying risks posed by the use of AI through their existing rules and principles. The FCA has, however, said that it will closely monitor how firms integrate AI into their risk management frameworks.
Although the FCA has no specific rules concerning the use of AI, a number of the FCA’s existing rules will be relevant to the deployment of AI by UK regulated firms:
In the last year, the FCA’s Consumer Duty has come into force, requiring firms to act to deliver good outcomes for retail customers. To the extent that firms are using AI in the provision of services to consumers, they will need to consider how this may affect their compliance with the Consumer Duty. Key questions which a firm may wish to ask itself are:
By way of example, if a consumer credit firm is using AI to make creditworthiness assessments (see the example above), it may wish to monitor whether the AI-facilitated assessments are accurate. For example, are significant numbers of borrowers being granted credit on the AI’s recommendation only to go on to default on their loans?
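To make this concrete, the sketch below shows one way a firm might monitor outcomes of this kind. It is illustrative only: the data fields, the comparison against a fixed tolerance and the 5% figure are assumptions made for the example, not a methodology prescribed by the FCA.

```python
from dataclasses import dataclass

@dataclass
class LoanOutcome:
    approved_by_ai: bool   # credit granted on the model's recommendation
    defaulted: bool        # borrower subsequently defaulted

def ai_default_rate(outcomes: list[LoanOutcome]) -> float:
    """Default rate among borrowers approved on the AI's recommendation."""
    ai_approved = [o for o in outcomes if o.approved_by_ai]
    if not ai_approved:
        return 0.0
    return sum(o.defaulted for o in ai_approved) / len(ai_approved)

def breaches_tolerance(outcomes: list[LoanOutcome], tolerance: float = 0.05) -> bool:
    """Flag for review if the AI-approved default rate exceeds the firm's tolerance.

    The 5% tolerance is purely illustrative; in practice a firm would set its
    own risk appetite and compare against a suitable baseline (for instance,
    the default rate for decisions made without the model).
    """
    return ai_default_rate(outcomes) > tolerance
```

Monitoring of this sort would sit alongside, not replace, the firm’s wider outcome-testing under the Consumer Duty.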
The FCA has stated that it is important that AI models are explainable and free from bias. Many AI systems are “black boxes” in the sense that, although they are often highly accurate in predicting outcomes, they cannot explain why they have arrived at a particular conclusion.
This lack of explainability concerns regulators because it means that important decisions affecting a regulated firm could be made without anyone being able to explain why they were reached.
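One partial response to the black-box problem is to produce, alongside the model itself, some account of which inputs the model relies upon most heavily. The sketch below illustrates this using permutation importance from scikit-learn on entirely synthetic data; the feature names, the model choice and the technique are assumptions made for the purposes of the example, not anything required by the FCA or PRA.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Placeholder data: each row is a credit application, each column a feature
# (e.g. income, existing debt, payment history). Entirely synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance asks: how much does shuffling each feature degrade
# the model's accuracy? Larger drops indicate features the model relies on,
# giving the firm at least a high-level account of what drives its decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "existing_debt", "payment_history"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Measures of this kind do not make a complex model fully transparent, but they give a firm something concrete to point to when asked why its model reaches the conclusions it does.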