Last week, the UK Jurisdiction Taskforce (UKJT) published a consultation on its Legal Statement on Liability for AI Harms under the private law of England & Wales (the “Legal Statement”). In this short alert we explain the objectives that underpin the Legal Statement and what comes next.
Scope of the Consultation
The UKJT is part of a UK Ministry of Justice initiative focused on clarifying key questions regarding the law relating to digital transformation. It has previously examined the legal status of, and legal principles applicable to, cryptoassets, and it is now considering the legal issues arising out of AI. It sees its role as being to “explain as quickly and accurately as possible” how the common law is likely to deal with the issues created by new technology, providing as much legal certainty as possible for businesses in this area. It is not, however, the Law Commission, and so the consultation does not propose reform of the law.
Increasing awareness of potential for AI to harm
As we learn more about AI, there is increasing awareness that it has the potential to cause harm, and in some instances is already causing it, but it is not always clear who might be responsible for that harm. The resulting legal uncertainty over who bears responsibility for harm caused by AI creates a risk that the take-up of beneficial AI tools will be deterred in certain sectors, including among risk-averse professionals, that research and development will be curtailed, and that connected industries such as insurance will suffer negative consequences, among other possible implications.
Key legal issues considered
The UKJT acknowledges that the consultation and Legal Statement are limited in scope: they focus only on specific aspects of private law liability and do not consider matters such as criminal law, IP, contract and public law liability. As there is no agreed definition of AI, the UKJT has adopted a technology-agnostic definition within the Legal Statement which focuses on a key characteristic of such systems: autonomy.
The UKJT Legal Statement considers that liability for AI harms will fall within existing, well-established legal principles. The legal analysis starts from the premise that, under English law, AI does not have legal personality and so cannot itself be held responsible for physical or economic harm. Instead, liability must rest with legal persons under existing legal principles, in this case the tort of negligence. The Legal Statement is therefore rooted in existing non-contractual duties which protect persons from harm, and considers how a duty of care might arise and how the courts will approach causation where the technology in question is autonomous.
The Legal Statement also considers how the common law and existing legislation apply to AI harms, for example in the context of product liability under the Consumer Protection Act 1987, where liability for defective products is currently ‘no fault’. It also addresses how professional negligence law applies where AI is integrated into the provision of professional services. Finally, it considers, under common law principles, the connection between AI-related harms and negligent misstatement or defamation arising from false statements, in particular those made by chatbots.
The question the Legal Statement seeks to address
The question the Legal Statement seeks to address is “in what circumstances, and on what legal bases, English common law will impose liability for loss that results from the use of AI”. As mentioned, its primary focus is an analysis of the law of negligence and how it applies to physical and economic harms caused by AI. However, the analysis also considers:
The UKJT seeks specific input in relation to the following questions:
Next Steps
The consultation remains open until 13 February 2026, and we are planning to submit a response. Please feel free to contact any of the authors if you would like to discuss this area in more detail.