Yesterday, on 21 October 2025, the UK government issued a press release on piloting AI. There are two key points to note.
Regulatory sandboxes for AI: The government is planning to introduce regulatory sandboxes to encourage the adoption of AI in the UK. Within a safe, controlled testing environment, individual regulations would be switched off or modified for a limited period so that businesses and developers can test new AI products in real-world conditions. If any unacceptable risks emerge, testing of the AI tool would be halted. The intention is that the sandboxes will accelerate AI-related efficiencies in sectors such as healthcare, transport and advanced manufacturing.
New AI Growth Lab: The government has launched a consultation on creating a new AI Growth Lab to operate the sandboxes. This body would facilitate the testing and piloting of AI tools that are “hindered by regulation” and would provide strict supervision of the sandboxes. The government has issued a public call for organisations to submit their views on the proposed AI Growth Lab. It may take the form of a single government-operated Lab, or lead regulators may operate each sandbox on a sector-by-sector basis.
The Labour government’s proposal to use regulatory sandboxes for AI is not new. The Conservative government put forward the same idea in its AI White Paper back in March 2023.
Speaking at the Bloomberg Summit yesterday, Kanishka Narayan, the UK’s AI Minister, explained that organisations seeking to deploy AI are concerned about liability and risk management regimes. He said there is currently a “lack of clarity” in regulation, as AI is a nascent technology being deployed in a regulatory context that is not as nascent. He explained that yesterday’s sandboxing announcement seeks to address this.
Bird & Bird analysis
Inverted application of sandboxes: Typically, regulatory sandboxes are used in highly regulated industries to relax the rules when piloting new technology. Here, by contrast, it is the absence of AI-specific rules in the UK that is creating regulatory uncertainty and, consequently, a need for sandboxing. One explanation could be that the government intends to use the sandboxes to identify barriers to the deployment of AI and then draw on that knowledge to inform its approach to introducing AI-specific regulation.
Moving away from the AI Security Institute: The introduction of an AI Growth Lab signals a shift away from the AI Security (formerly Safety) Institute (AISI). As set out in the AI Opportunities Action Plan, the Labour government is focused on promoting the deployment of AI to achieve economic growth. The AI “Growth” Lab appears to support that objective, in contrast to the safety and security focus of AISI.
No information yet on which laws will be disapplied: Whilst regulatory sandboxes will likely be of interest to businesses, the government’s announcement is light on detail regarding which specific laws would be disapplied within a sandbox. The press release expressly rules out any restrictions on consumer protection and safety provisions, fundamental rights, workers’ protections and intellectual property rights. Perhaps certain sector-specific rules, for example in healthcare, will be disapplied, and potentially the UK GDPR.
Businesses will likely welcome the government’s intention to create regulatory sandboxes. Sandboxes can help speed up the process of: (i) identifying regulatory barriers to AI innovation and adoption; and (ii) bringing about regulatory change through updated codes of practice and statutory amendments. However, with the plans at such an early stage and many details still to be decided, it is too soon to know how effective they will be in practice. We are tracking developments closely.