AI Liability in light of the new 2024 PLD: expanded liability, challenging defences, and new evidentiary burdens

Contacts

Charles-Henri Caron

Partner
France

I'm a litigation lawyer helping pharmaceutical companies, biotech and medical device manufacturers navigate their most complex legal challenges. I specialise in mass tort cases, product liability disputes and clinical trials litigation, working with French and international clients to protect their interests when the stakes are highest.

Lînah Bonneville

Associate
France

I am a member of Bird & Bird's litigation team in Paris.

Key points

  • The European Union has withdrawn the proposed AI Liability Directive, leaving a dual framework: the AI Act (compliance legislation) and the 2024 Product Liability Directive (a strict liability regime for claims brought by injured persons).
  • The 2024 Product Liability Directive expands manufacturers and other economic operators’ liability for AI-enabled products through broader product definitions, AI-specific defect criteria, and shifted evidentiary burdens.
  • New presumptions of defectiveness are triggered by non-compliance with AI Act requirements or other EU sectorial legislation, technical complexity or failure to comply with an order to disclose evidence.
  • High-risk industry sectors notably include automotive, life sciences, home appliances, consumer technology, and industrial manufacturing — each facing significant liability exposure.
  • Companies must act now to implement robust documentation systems, reassess contractual frameworks, enhance insurance coverage, and prioritize transparency in AI system design.

******************

The European Union’s strategic approach to AI regulation has undergone a substantial shift—from an initial focus on civil liability to a comprehensive risk-based compliance framework. This evolution culminated in the European Commission’s withdrawal, in 2025, of the proposed AI Liability Directive (the “AILD”).

Two key instruments now remain:

- the Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence (the “AI Act” – for the record, Bird & Bird’s guide on the AI Act can be found here), and

- the Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (the “2024 PLD”), which will apply to products placed on the market or put into service after 9 December 2026 [1].

A dual regulatory framework – The AI Act’s contribution

Anticipating the risk of divergence of national rules on AI, the AI Act aims to ensure high and consistent protection throughout the EU, avoiding market fragmentation and reducing legal uncertainty for operators.

The AI Act notably provides that:

  • high-risk AI systems may be placed on the market, put into service, or used only if providers implement several measures (continuous risk-management system, high-quality data, technical documentation, etc.);
  • systems must be transparent, allow human oversight, and achieve declared levels of accuracy, robustness, and cybersecurity;
  • authorized high-risk AI systems must be CE-marked; providers are responsible for conformity assessment, CE marking, EU registration, and corrective actions.

The AI Act does not harmonize liability. Liability for defective products instead falls under the 2024 PLD. However, the two frameworks are closely interconnected: non-compliance with AI Act safety obligations will be considered when assessing potential defectiveness under the 2024 PLD and can trigger a presumption of defect for failure to meet mandatory safety requirements.

In other words, the legislation establishes a dual framework for developers of AI systems and manufacturers of AI-enabled products: the AI Act introduces a risk-based compliance and certification regime without harmonizing civil liability, while the 2024 PLD imposes strict liability for defective products, leaving fault-based claims to national law.

This framework is further complemented by sector-specific regulations as AI raises specific challenges across different sectors.

In the Automotive industry, AI-related risks could include, for instance:

  • collision caused by an autonomous vehicle misinterpreting sensor data;
  • over-the-air software update degrading emergency braking performance.

In the Life Sciences sector, risks arising from AI-driven medical devices and diagnostics could include:

  • diagnostic AI recommending inappropriate treatment due to training data skewed toward specific populations;
  • AI-powered medication dosing system miscalculating paediatric doses;
  • predictive health monitoring device failing to alert for life-threatening conditions.

The Consumer sector also bears risks for the various products placed on the EU market. One could, for instance, think of:

  • smart oven overheating and causing fire due to learning algorithm misinterpreting usage patterns;
  • voice-activated devices misinterpreting command and causing injury (e.g., water heater temperature);
  • connected baby monitors failing to alert parents due to algorithmic false negative.

In such cases, the AI Act and 2024 PLD must be applied alongside cross-sectorial regulations such as the General Product Safety Regulation (Regulation (EU) 2023/988 or the “GPSR”), or sector-specific regulations such as the General Safety Regulation (EU) 2019/2144 in the automotive sector.

In the Life Sciences sector, the Medical Devices Regulation (Regulation (EU) 2017/745 or the “MDR”) or the In Vitro Diagnostic Medical Devices Regulation (Regulation (EU) 2017/746 or the “IVDR”) will be the relevant pieces of legislation for manufacturers of AI-driven medical devices. Indeed, the recently proposed MDR amendments of 16 December 2025 seek to avoid overlapping obligations between the AI Act and other applicable regulations. They clarify that AI-enabled medical devices should remain primarily regulated under the MDR or IVDR, while the AI Act should apply only in a limited and targeted way.

The 2024 PLD – Adapting existing legislation to AI

In the context of the rapid emergence of AI and to avoid an excessive regulatory burden, the EU has chosen to adapt its existing legislative framework rather than create entirely new regimes for every aspect of AI.

This approach is evident in the revision of the 1985 PLD, resulting in the 2024 PLD, which now explicitly addresses AI-related risks and integrates provisions tailored to AI-enabled products and software. While this ensures continuity, it also raises new challenges for all stakeholders, who must interpret and apply traditional liability concepts in scenarios involving autonomous and learning systems.

  • Broader product scope, more liable parties

The 2024 PLD confirms that AI systems, software, and goods equipped with AI (including updates) are "products" within the meaning of the Directive, meaning that a person can seek compensation when a defective AI product causes death, bodily injury, property damage, or data loss (Article 4).

Liable economic operators include, as stated in Article 8: the manufacturer of a defective product; the manufacturer of a defective component where that component was integrated within the manufacturer's control and caused the product to be defective; and, for manufacturers established outside the Union, the importer, the authorized representative, and, where there is no importer or authorized representative established in the Union, the fulfilment service provider.

Any person that substantially modifies a product outside the manufacturer's control and thereafter makes it available on the market or puts it into service is a manufacturer of that product.

Distributors are liable where an economic operator established in the EU cannot be identified and the distributor fails to identify such an operator or its own supplier within one month of receiving a request from the injured person. The same applies to online platforms that allow consumers to conclude distance contracts with traders, provided the conditions set out in the Digital Services Act are fulfilled.

  • Defect standard: AI-specific criteria that expand exposure

A product is considered defective where it does not provide the safety that a person is entitled to expect or that is required under Union or national law (Article 7).

The 2024 PLD significantly expands the criteria for assessing defectiveness, reflecting the complexity of AI-enabled products. Under Article 7, they now include dynamic factors such as the product’s ability to continue learning or acquire new features after being placed on the market.

This evolution places a higher burden on manufacturers, who must anticipate reasonably foreseeable changes in performance and functionality over time, as well as the interaction with other interconnected products. The assessment also considers compliance with relevant safety requirements, including cybersecurity obligations, and the timing of when the product left the manufacturer’s control (if at all).

Furthermore, any recall or safety intervention, the specific needs of intended user groups, and the failure of products designed to prevent damage may also be regarded as relevant circumstances.

  • Evidence, disclosure, and presumptions

The question of who bears the burden of proof is of the essence given the complex and opaque functioning of AI.

The 2024 PLD introduces significant changes to the evidentiary framework, aiming to ease the position of claimants. Under Articles 9–11, national courts may order defendants to disclose relevant evidence once the claimant has shown that the claim is plausible. Such disclosure must, in theory, remain proportionate and respect trade secrets, but courts can also require that technical information be presented in a clear and accessible way.

However, AI technologies often operate as “black boxes,” meaning that their internal decision-making processes are complex and not easily understood, especially as the systems evolve through continuous learning and algorithmic updates. This may create a tension in future product liability cases: courts need sufficient transparency to assess compliance and liability, yet full disclosure risks exposing sensitive details such as algorithms, training data, and system architecture, while proceedings aimed at enforcing the protection of trade secrets are likely to add a layer of complexity and uncertainty. Despite these safeguards, translating highly technical material into documents comprehensible to non-specialist judges is time-consuming, expensive, and carries the risk of exposing trade secrets or other protected information.

Moreover, the 2024 PLD introduces rebuttable presumptions of defectiveness and causation. Notably, a product is presumed defective if the defendant fails to comply with a disclosure order, if it breaches mandatory safety requirements designed to prevent the alleged damage, or if the alleged damage results from an obvious malfunction during reasonably foreseeable use.

For AI providers and developers, this means that non-compliance with obligations under the AI Act (or other relevant sector-specific legislation) could trigger such presumptions. Courts must also presume defect and/or causation when claimants face excessive difficulties proving these elements due to technical complexity, provided they show it is likely that the product was defective or caused the alleged damage. This presumption originates from difficulties reported by patient associations in proving causation in complex healthcare cases. However, the PLD’s broadly worded provision can cover much wider scenarios, and cases involving AI systems could qualify as technically complex, triggering this presumption.

These rules effectively shift the burden of proof, requiring defendants to demonstrate that the product was not defective or that any defect did not cause the alleged damage. While traditional defences may remain, such as development risk (this will depend on each Member State) or compliance with legal requirements, the 2024 PLD may narrow exemptions for software-related issues.

Other potential legal grounds for AI liability under French law

While the developments above were framed in the context of product liability, one should bear in mind that contractual liability or tort-based claims could also be an option for claimants, depending on national laws.

Under French law, tort liability (Articles 1240–1242 of the French Civil Code) requires proving that a fault, negligence or lack of caution caused damage. This legal ground – generally considered more demanding for claimants, as proving wrongdoing is harder than demonstrating a defect under the PLD – also presents challenges for AI due to the “black box” nature of AI systems. Courts may therefore rely on the “serious, precise, and consistent presumptions” of the French Civil Code (let’s not forget that the 2024 PLD presumptions originate from French case law derived from this French provision…), reasonable probability, or evidence of (non-)compliance with the AI Act to assess fault or causation.

Moreover, under French law, in addition to the 2024 PLD or instead of the 2024 PLD (depending on the parties concerned and the damages at stake), professionals who have control over an AI system could be held responsible not only for damage caused by their own actions but also for damage caused by the AI under their supervision.

And France is just one of the 27 EU Member States, so businesses may have to face different types of proceedings depending on the legal grounds available under the national laws of the different Member States.

For manufacturers, this new framework will be particularly demanding. They will need to implement rigorous documentation and record-keeping systems, while prioritizing transparency and explainability over technical efficiency, as required by the AI Act. Contracts and insurance coverage will need to evolve to address these new potential liabilities.

Our recommendations for businesses

  • Run a risk assessment covering data protection/privacy, cybersecurity threats and vulnerabilities, bias/fairness, model risk, and sector rules.
  • Classify systems under the EU AI Act (and document the rationale) and flag early any high-risk or prohibited use-cases. Develop sector-specific compliance programs. Regularly review compliance as regulations and guidance evolve.
  • Prioritize transparency and explainability in system design to the extent possible. Ensure explanations are accessible to intended users and suitable for audit and regulatory review. Establish and maintain comprehensive and standardized AI governance and documentation systems.
  • Review existing policies for AI-related exclusions or gaps (product liability, cyber, professional indemnity). Align coverage limits and triggers with identified AI risks.
  • Enhance post-market surveillance and continuous monitoring. Establish alerting thresholds, incident response procedures, and rollback mechanisms. Periodically reassess risk classification.

___________________

[1] Directive 85/374/EEC (the “1985 PLD”) will be repealed with effect from that date but will continue to apply to products placed on the market or put into service before then.


 
