The regulation of AI in the UK is sometimes regarded as lagging behind the EU, with a complex and fragmented approach. But rather than trailing Brussels, the UK has chosen a sectoral approach to AI regulation, with oversight currently spread across multiple sector regulators rather than a single AI authority.
This article cuts through the complexity that results from this sectoral approach, helping businesses understand the role of the UK sector regulators and the actions that each key regulator has taken to oversee AI in its sector.
This article updates our earlier article published in May 2024 (“2024 Article”).
In March 2023, the UK’s Conservative government proposed in its AI White Paper that the UK’s existing regulators would regulate AI within their sector remits, rather than establishing a new regulator for AI.
Almost two years later, the AI Opportunities Action Plan was published by the new Labour government, and was swiftly endorsed by Sir Keir Starmer.
While Labour did not reverse the role given to the regulators, there was a notable shift in emphasis. The Sunak-led government’s White Paper envisaged regulators using their enforcement powers to regulate AI; the new Action Plan encourages regulators to promote AI innovation within their sectors.
On 17 November 2025, Baroness Lloyd, a minister for the Department for Science, Innovation & Technology, endorsed the approach and stated in Parliament:
“I remind the House that AI is already regulated in the UK and we regulate on a context-specific approach. Our regulators can take account of the developments in AI, which are indeed rapid, and ensure that they are tailored. In addition, as noble Lords know, we have got various regulators undertaking regulatory sandboxes and the new proposal for the AI growth lab, which will look across all sectors and allow regulators to collaborate on this quite rapidly changing technological development.”
This statement underlines the role that the government expects regulators to play in UK AI regulation.
Challenges remain, however, for regulators overseeing AI within their sectors, as foundation models cut across all sectors. This is less of an issue if the regulators’ role is primarily promotional rather than enforcement-focused.
In this article, we examine how the key regulators are responding to AI in practice. Some regulators have, inevitably, been more active than others: the CMA has been particularly active, whilst the ICO, FCA and OFCOM have issued guidance documents and discussed best practice but have steered away from enforcement. Other regulators have fewer resources and less technical expertise to address developments in AI in a meaningful way.
Agentic AI is an emerging form of AI that is gathering momentum, and it poses a host of new risks. The Digital Regulation Cooperation Forum (“DRCF”), a collaboration between four key UK regulators (the CMA, OFCOM, ICO and FCA), has been looking into emerging AI applications such as agentic AI, and the ICO has separately published its own report on the technology.
However, as this article shows, other regulators have been slower to “take account of the developments in AI” in the way envisaged by Baroness Lloyd.
As reported in our earlier 2024 Article, the CMA endorsed the UK government’s proposed principles-based approach to AI regulation in its April 2024 update paper. The CMA’s update paper identified an “interconnected web” of over 90 partnerships and strategic investments involving Google, Apple, Microsoft, Meta, Amazon and Nvidia across the AI value chain.
Since then, in July 2024, the CMA joined the European Commission, the US Department of Justice and the US Federal Trade Commission in issuing a joint statement on competition in generative AI foundation models, emphasising their shared concerns about concentrated control of key inputs, the entrenchment of market power in AI-related markets and the potential for partnerships between major firms to undermine competition.
Further, the CMA has continued to collaborate with other UK regulators (including the ICO, FCA and OFCOM) through the DRCF under the leadership of Sarah Cardell, CEO of the CMA, who became Chair of the DRCF this year. The 2025/26 work plan indicates the DRCF will focus on developing the regulators' understanding of how their respective regulatory regimes might apply to AI and on identifying and resolving any points of conflict. As part of this work, in October 2025, the DRCF issued a call for views on agentic AI to understand the practical challenges and regulatory uncertainties businesses face when deploying these systems.
Over the past year, the CMA has intensified its scrutiny of AI-driven transactions, most notably through its merger control powers.
Since December 2023, the CMA has initiated five merger control investigations into AI partnerships: Microsoft / Mistral AI, Alphabet / Anthropic, Amazon / Anthropic, Microsoft / OpenAI and Microsoft / Inflection AI.
In four out of five of these cases, the CMA concluded it lacked jurisdiction to assess the transactions in depth.
In both Microsoft / Mistral AI and Alphabet / Anthropic, the acquirers were found not to have acquired control or “material influence” over the AI targets: neither was given voting or exclusivity rights conferring dependency or locking the developers in, and the AI firms in both cases remained free to work with other cloud providers.
In Amazon / Anthropic, the CMA found it did not have jurisdiction under the target turnover or share-of-supply tests, meaning it was unable to pursue a full investigation. Notably, had the new “hybrid test” under the Digital Markets, Competition and Consumers Act 2024 (“DMCCA”) been in force at the time, it is possible this deal would have been caught given Amazon's large UK turnover and Anthropic’s UK nexus.
In Microsoft / OpenAI, the CMA opened an investigation following the dismissal and reappointment of Sam Altman as CEO in 2023, examining whether Microsoft had increased the level of control it first acquired through its initial investment in OpenAI in 2019. After a 15-month investigation, the CMA concluded that Microsoft had not acquired de facto control, as it was unable to control OpenAI's commercial policy; it could only exert a high level of material influence.
Interestingly, in Microsoft / Inflection AI, the CMA did find jurisdiction as the transfer of employees (an “acqui-hire”), know-how and IP rights was sufficient to create a relevant merger situation. The CMA unconditionally cleared the transaction at Phase 1, finding no realistic prospect of a substantial lessening of competition as Inflection faced significant competition in the development and supply of consumer chatbots and foundation models.
The CMA is clearly interested in regulating AI. With its growing expertise, including an 80-person Data, Technology and Analytics unit, and new powers under the DMCCA, the CMA is likely to intensify its oversight of partnerships between multinational firms and early-stage AI companies with a UK presence.
In June 2025, the ICO announced its AI and biometrics strategy, “Preventing Harm, Promoting Trust”, which aims to ensure that organisations can develop and deploy AI and biometric technologies with confidence whilst safeguarding individuals from harm. The ICO's targeted action plan for 2025/2026 includes, among other initiatives:
The ICO has also published several tech futures reports on cutting-edge technologies over the past few years, most recently a report on agentic AI analysing both the opportunities these systems bring and the data protection risks they raise. Those risks include unclear controller/processor responsibilities in supply chains, increased automated decision-making, overly broad processing purposes, unnecessary personal data processing, and other related concerns. The ICO emphasised that system design and architecture provide good opportunities for privacy by design and privacy-friendly innovation in agentic AI, and it recommended that organisations leverage these opportunities for responsible deployment.
These initiatives represent a continuation of the ICO’s multi-faceted approach to regulating AI, which combines guidance, regulatory sandboxes, voluntary audits and, where necessary, enforcement action.
In its latest series of regulatory sandboxes, the ICO is focusing on emerging technologies including neurotechnologies, next-generation search engines with embedded AI capabilities, quantum computing, synthetic media (such as deepfakes) and its identification and detection, consumer health-tech (including wearable devices, digital diagnostics, therapeutics and healthcare infrastructure), and immersive technology and virtual worlds. It is also examining personalised AI: the customisation of large language models based on individual users’ search patterns, personal preferences and characteristics to create more tailored user experiences and better-targeted outputs.
The ICO has also been active in conducting investigations, particularly at the intersection of AI and the protection of children online (Snap’s “My AI” chatbot) and the use of biometric data (Clearview AI), the latter of which resulted in a successful appeal by the ICO before the Upper Tribunal in October 2025. Clearview AI was subsequently granted permission, on 19 December 2025, to appeal the Upper Tribunal’s decision to the Court of Appeal.
With respect to voluntary audits, the ICO released a report on developers and providers of AI-powered recruitment tools, which found that whilst many providers monitored their systems for accuracy and bias, some lacked adequate testing. Certain tools enabled discrimination by filtering candidates by protected characteristics, or inferred gender and ethnicity without a lawful basis or candidates’ knowledge. Some tools collected excessive personal information, scraping millions of profiles from job sites and social media without user awareness. In addition, several AI providers incorrectly defined themselves as processors rather than controllers, thereby avoiding compliance responsibilities through vague contractual arrangements. During the audit, the ICO made approximately 300 recommendations to improve compliance, all of which were accepted by the participants.
The FCA’s approach to overseeing AI involves a consistent emphasis on testing and collaboration over introducing new AI-specific rules and active enforcement. In September 2025, the regulator reconfirmed its position as a “technology-agnostic, principles-based and outcomes-focused regulator”, making clear that financial services firms will not face bespoke new rules to govern their use of AI. The FCA's chief executive Nikhil Rathi has emphasised that the regulator will "not come after you every time something goes wrong" with AI innovation, stating that the FCA will only be concerned about "egregious failures that are not dealt with". This signals a fundamentally different regulatory relationship that accepts "there will be bumps in the road" with innovation.
The FCA emphasises the application of existing rules to firms' AI use, particularly highlighting the importance of effective internal governance and clear accountability under the Senior Managers & Certification Regime, especially where AI is outsourced to or provided by third parties.
The FCA has expanded its sandbox initiatives for AI beyond its existing Innovation Hub. In June 2025, the regulator launched a “supercharged sandbox”, offering early-stage firms access to data, computing power, regulatory support and technical expertise, including NVIDIA accelerated computing and AI Enterprise Software. Most notably, following an engagement paper published in April 2025, the FCA launched an “AI live testing” scheme in September 2025 that allows firms to test AI models in real-world conditions with access to high-quality synthetic data. The service enables firms to collaborate with the FCA whilst checking that their new AI tools are ready to be deployed safely and responsibly. Applications for the first wave of AI live testing closed in September 2025, with the first cohorts now underway. Insights from these initiatives are feeding directly into policy work with the Bank of England and ICO.
The FCA has also developed its AI Spotlight programme, which in November 2025 became the foundation for a cross-border partnership with the Monetary Authority of Singapore. This UK-Singapore AI-in-Finance Partnership enables joint testing of AI solutions and the sharing of insights between the two jurisdictions, helping firms scale innovations across both markets whilst ensuring AI is used safely and responsibly.
Following a roundtable held with industry leaders in May 2025 and a two-day AI Sprint in January 2025 (which involved 115 participants from across the industry, academia, regulators, technology providers and consumer representatives), the FCA announced that it will develop a statutory Code of Practice for firms developing or deploying AI and automated decision-making systems. The code, which is being developed jointly with the ICO, is aimed at setting clearer expectations and reducing regulatory uncertainty.
The FCA has not taken enforcement action relating to AI to date.
Since publishing its strategic approach to AI in June 2025, OFCOM has continued to monitor the use of AI and its potential impact on conduct and compliance across its regulated sectors, covering telecoms, spectrum, post and online safety.
Online safety is a key priority for the UK government, and OFCOM remains vigilant about the potential for AI to cause harm online. OFCOM has made it clear that existing regulatory frameworks apply to AI-enabled services. For example, it recently published guidance explaining which aspects of AI chatbots are covered by the UK’s Online Safety Act 2023 and which are not. Non-compliance with security, resilience or online safety duties, whether or not AI is involved, will be addressed through standard enforcement procedures. In November 2025, OFCOM issued its second fine under the Online Safety Act to Itai Tech Ltd, an operator of an AI-powered "nudification" site, for failing to implement mandatory age verification measures. OFCOM has also opened a formal investigation concerning the use of the Grok AI chatbot on X.
OFCOM has also issued guidance to online service providers on the impact of AI. In November 2025, it published “The Era of Answer Engines”, a discussion paper exploring how GenAI search works, the key industry players, consumer usage patterns and potential safeguards. OFCOM also recently commissioned a research study comparing traditional online search with GenAI search experiences; the resulting report provides valuable insights for online service providers and search platforms. Looking ahead, OFCOM has issued calls for evidence on age assurance effectiveness and children's use of app stores, with reports due in 2026-2027.
Whilst OFCOM has not launched a standalone AI regulatory sandbox, it continues to support AI innovation through various initiatives outlined in its strategic approach – see our article here. The Online Safety Technology Lab continues to provide practical research into safety technologies, enabling platforms to trial AI-driven content moderation tools whilst ensuring compliance with online safety duties. OFCOM also conducts research into the adoption and use of GenAI to assess the impact on its regulated sectors.
OFCOM is collaborating with other regulators to deepen their understanding of AI's risks and opportunities. The regulator is cooperating with the CMA, ICO and FCA through the DRCF to understand emerging AI applications, such as agentic AI. Knowledge sharing will be vital for future safeguarding, as the technology continues to evolve at a rapid pace.
Since publishing its initial strategic approach to AI in 2024, OFGEM has taken several steps to advance its framework for safe and ethical AI use in the energy sector. Whilst OFGEM maintains that the existing regulatory framework is appropriate to govern the use of AI, it issued additional guidance in May 2025 to complement and support that framework. This additional guidance centred on ethical AI deployment, focusing on governance, transparency and sustainability principles for licensees and other stakeholders, whilst noting the opportunities to harness AI to deliver clean, secure and affordable energy solutions. This fulfilled OFGEM’s commitment to publish regulatory guidance on AI as part of its Forward Work Programme for 2025–26. The additional guidance covers governance measures and policies to ensure effective oversight of AI, an approach to help stakeholders identify and manage risks associated with AI, and the competencies required for the ethical adoption of AI.
OFGEM seeks to harness AI’s potential to drive efficiencies, enhance consumer services and support the net zero energy transition, whilst remaining alert to the risks AI could introduce without appropriate oversight and safeguards. As such, in July 2025 OFGEM opened a consultation to assess the case for a technical sandbox focused on the design, development and evaluation of AI systems.
Such a technical sandbox would create a safe and controlled digital space in which to test AI use cases. It would complement OFGEM’s existing AI Regulatory Labs, which are regular discussion-based sessions where energy sector stakeholders can test hypothetical or real AI uses against existing regulations. The latest AI Regulatory Lab, in October 2025, invited industry input on how controlled environments could support innovation while managing risk; the next is due to be held on 25 February 2026. Both the development of a technical sandbox and the AI Regulatory Labs aim to complement the outcome-based approach and risk framework introduced in 2024.
To date, OFGEM has not taken enforcement action specifically related to AI, instead favouring a proactive approach to manage risk. OFGEM continues to monitor emerging risks such as algorithmic collusion and liability concerns, which were highlighted in its earlier strategy and subsequent stakeholder discussions.
The UK regulations governing medical devices placed on the UK market, including AI as a medical device (“AIaMD”), are under review as part of the wider programme of reform to medical device regulation.
While there has been extensive discussion regarding AIaMD, no binding updates have yet been made to existing regulations to address the complexities of AI in medical devices. AI-specific guidance and commentary have, however, been published on several related topics.
Currently, the applicable regulatory framework and guidance for AIaMD are the same as for Software as a Medical Device (“SaMD”). Accordingly, the overarching framework in the UK is the UK Medical Devices Regulations 2002 (as amended) (“UK MDR”). The primary guidance applicable to SaMD, as published by the MHRA, also applies to AIaMD.
According to the MHRA’s policy paper of 30 April 2024, where AI is used for a “medical purpose”, it is very likely to fall within the definition of a general medical device. This means that it must meet the requirements of the UK MDR before being placed on the UK market.
In short, the MHRA proposes to deliver regulatory reform in this area through light-touch regulatory amendment and updated guidance, more streamlined processes for SaMD/AIaMD, and the potential up-classification of such devices (from the current Class I). This area remains in flux and should be monitored closely.
In its policy paper, the MHRA addresses a number of the AI White Paper’s key principles from an AI perspective, including transparency and explainability of AI, fairness, and accountability and governance.
The MHRA has also (re-)launched a regulatory sandbox for AIaMD, known as the AI Airlock. Using real-world products, the AI Airlock brings together expertise from within the MHRA and partners such as UK Approved Bodies, the NHS, and other regulators. The outputs of this initiative will inform future MHRA guidance and policy, while exploring limitations of current approaches to demonstrating regulatory compliance for AIaMD. The second phase of the AI Airlock is currently ongoing.
The MHRA has also recently (on 18 December 2025) launched a call for evidence to inform the recommendations of the National Commission into the Regulation of AI in Healthcare. This call for evidence closes on 2 February 2026.
Looking forward, the MHRA is developing supplementary guidance to ensure AIaMD placed on the UK market is supported by robust assurance regarding safety and effectiveness and outlining technical methods to test AIaMD. Key topics for manufacturers include:
Whilst the ASA has not taken any steps to regulate AI directly, it has released numerous guidance notes explaining how AI interacts with the CAP Code.
For example, in November 2024, the ASA released a report entitled “AI as a Marketing Term – A Quantitative Review of Usage in UK Advertising”. Whilst this report focussed mainly on the ASA’s research into how advertisers used AI as a marketing term, it also provided some useful tips for advertisers on how to ensure compliance with the CAP Code, as follows:
More recently, in May 2025, the ASA released some additional guidance covering when advertisers should disclose the use of AI in advertising.
The main takeaway from this more recent advice is that advertisers should ask themselves whether the audience is likely to be misled if the use of AI is not disclosed and, if so, whether the disclosure would clarify the ad’s message or contradict it.
The ASA points to how advertisers have been using post-production to manipulate sounds and images for many years, without a regulatory requirement to always disclose those techniques and without necessarily producing misleading content. Where AI is used in a similar way, this guidance suggests there is relatively little difference in terms of consumer perception.
Whilst the Gambling Commission (“the Commission”) has not introduced any AI-specific regulations in its Licence Conditions, it has released guidance relating to AI, some of which has a regulatory impact. For example, under Licence Condition 12.1.1(3), operators are required to keep up to date with emerging risks information published by the Commission and to ensure that their policies, procedures and controls take into account learning or guidelines published by the Commission.
In April 2025, the Commission posted an emerging risks publication calling out the use of AI to bypass customer due diligence. In this publication, the Commission explained that operators must consider all information they hold on a customer and, where documents are received from a customer, must ensure that those documents are appropriately scrutinised. Operators must also ensure that staff are appropriately trained to assess customer documentation, including how to identify false and AI-generated documents.
More recently, in October 2025, the Commission published a bulletin setting out common trends in compliance and enforcement activity. In this bulletin, the Commission explained that it had seen an increase in the use of AI for anti-money laundering (“AML”) purposes and that operators should take certain actions to ensure this use of AI is compliant. These actions are as follows:
Where gambling operators use AI in regulated activities such as marketing and/or consumer protection, we expect that they would be held fully accountable for errors made by that AI.
Since our March 2025 article on artificial intelligence in civil aviation (available here), there has been limited progress in the CAA's regulation of AI in the aviation sector. In October 2025, the Regulatory Innovation Office announced that the CAA is working with DSIT on proposals for a new sandbox for the next financial year, building on the previously delivered £2 million Rendezvous and Proximity Operations sandbox for space operations. The CAA is using AI for analysis of air accident investigation reports and is seeking to ease the process of approving new drone applications by trialling AI technologies within the Future of Flight programme. The CAA’s consultation on statutory charges for 2026/27, released in November 2025, includes a £0.5m item for implementation of “AI-led safety reporting tools improving operational efficiency and enabling data-driven insights to support regulatory decisions” and to support “scalable and ethical AI adoption across aviation”. While the CAA has focused on enabling AI innovation through sandboxes and guidance initiatives, we were not able to identify instances of the CAA taking enforcement action against regulated entities in relation to the misuse of AI.
The CAA has committed to publishing key performance metrics from the beginning of 2026, including average licensing times, the volume and complexity of applications, and growth in new use cases, to provide businesses with greater clarity for planning purposes.
Beyond these developments, there have been no significant new regulatory frameworks or guidance documents specifically addressing AI in aviation since our March article. This contrasts with the European position: the European Union Aviation Safety Agency appears to be more advanced and has authorised the use of AI elements that meet requirements which, in the UK, are still in draft or subject to consultation.
The EHRC continues to take AI issues into account in its more general policy considerations and to provide guidance on the equality implications of the use of AI, particularly for public sector bodies. For example, Pillar Two of the EHRC Strategic Plan 2025–28 (issued in March 2025) refers to an “agile response to equality and human rights risks and opportunities” and states that assessments should be made to ensure that AI does not lead to discrimination on the basis of race, and that decision making is based on evidence.
In September 2025, the EHRC issued guidance to help public sector bodies embed equality considerations in their policies, including decisions to commission and/or use AI technologies. This included:
At the same time, the EHRC issued AI-specific guidance for public bodies in Scotland, focusing on how AI-based technology can be used to help meet equality law obligations and how to assess its equality impact. The guide outlines six discussion points to assist in that assessment and highlights devolved functions and the need for equality considerations in AI use within Scotland's context. This AI-specific guidance does not appear to have been issued for use outside Scotland, though the general principles apply across the UK under the Equality Act 2010.
We will continue to track developments in how the UK’s key regulators oversee AI and will share practical insights as things progress.