AI liability under the Defective Products Directive will fundamentally change the legal framework for software and artificial intelligence in the European Union from 9 December 2026. Directive (EU) 2024/2853 introduces a clear and disruptive principle: software and AI systems are products for the purposes of strict liability.
This means that companies developing, integrating or deploying AI systems will face product liability exposure similar to that traditionally applied to manufacturers of physical goods. In this article, I explain what changes, why AI liability under the Defective Products Directive matters for digital businesses, and which open questions still need clarification.
AI Liability under the Defective Products Directive: What Changes?
The previous regime, based on Directive 85/374/EEC, was conceived in an analogue era. A product was essentially a tangible movable item. Software was covered only if embedded in a physical carrier. Stand-alone software, SaaS services and AI models were largely outside the strict liability perimeter.
Directive (EU) 2024/2853 modernises this framework. From 9 December 2026, the definition of product expressly includes:
- Embedded software (firmware, IoT systems, automotive software, medical devices).
- Stand-alone software.
- Cloud-based and SaaS functionalities where they are essential to the functioning of a product.
- AI systems, including machine learning models and large language models.
- Digital manufacturing files.
This expansion is not merely semantic. It directly triggers strict product liability for digital technologies. It is a structural shift in risk allocation for the digital economy.
Strict Liability for AI Systems and Software
Under the new directive, liability remains strict. Claimants do not need to prove negligence. They must demonstrate:
- A defect in the product.
- Damage.
- A causal link between defect and damage.
In the digital environment, however, the notion of “defect” becomes more complex.
Potential defects may include:
- Coding errors and software bugs.
- Failure to provide security updates.
- Cybersecurity vulnerabilities.
- Non-compliance with the AI Act or cybersecurity legislation.
- Substantial modifications after commercialisation.
- Malfunctions linked to the evolving nature of self-learning systems.
A particularly relevant innovation is the introduction of presumptions of defect in case of regulatory non-compliance. If an AI system breaches mandatory cybersecurity or AI requirements, this may facilitate the claimant’s burden of proof. In practical terms, AI governance and regulatory compliance become directly linked to product liability exposure.
Compensation for Digital Damage
Another key development under the AI liability framework is the recognition of compensation for destruction or corruption of digital data, without minimum thresholds. This is a significant change. Traditionally, product liability focused on personal injury or property damage. Now, purely digital harm can trigger compensation. For AI-driven ecosystems, this is critical. A defective AI system that corrupts data, disrupts cloud infrastructures or compromises digital assets may lead to civil claims even without physical damage. Financial institutions, healthcare providers, industrial platforms and SaaS providers should carefully assess this expanded scope of liability.
Disclosure Obligations and Litigation Risks
The directive strengthens disclosure obligations in civil proceedings. Courts may order the disclosure of technical evidence necessary to substantiate claims. In the context of AI systems, this raises delicate issues:
- Protection of trade secrets.
- Access to training data and model documentation.
- Transparency of algorithmic decision-making.
- Handling of probabilistic or opaque outputs.
The intersection between product liability, trade secret protection and AI transparency will likely generate complex litigation strategies. Businesses seeking guidance on AI liability under the Defective Products Directive should therefore consider not only compliance, but also evidentiary readiness in case of disputes.
The Open-Source Exception
The directive excludes open-source software developed and distributed without commercial purpose. However, this exception is narrow. Where open-source components are integrated into commercial products, liability may arise at the level of the economic operator placing the product on the market. Companies relying on open-source AI frameworks cannot assume automatic immunity.
A Key Question: Do AI Updates Create a “New Product”?
Perhaps the most challenging issue concerns the dynamic nature of AI systems.
AI systems can:
- Continuously learn.
- Be fine-tuned.
- Receive security patches.
- Undergo major version upgrades.
- Modify performance over time.
This raises a fundamental question under the AI liability regime:
- Should the evolving nature of an AI system be considered when assessing defectiveness?
- Can significant updates or fine-tuning amount to a “substantial modification” that transforms an existing AI model into a new product under the Defective Products Directive?

If major updates are treated as new products, the liability timeline may effectively restart. This would have profound implications for:
- Insurance coverage.
- Contractual allocation of risk.
- Product lifecycle management.
- AI governance documentation.
For companies operating continuous deployment models, this is not a theoretical issue. It directly affects business strategy.
Why AI Liability under the Defective Products Directive Is a Governance Issue
AI liability under the Defective Products Directive is not merely a litigation risk. It is a governance challenge.
Companies should start preparing by:
- Implementing structured AI lifecycle documentation.
- Ensuring traceability of updates and model versions.
- Integrating cybersecurity-by-design.
- Aligning AI Act compliance with product liability risk assessment.
- Reviewing contractual clauses across the supply chain.
The digital economy can no longer rely on the intangible nature of software to mitigate exposure. AI systems are now legally qualified as products. And products entail strict liability. As courts begin interpreting Directive (EU) 2024/2853, we will gain clarity on how evolving AI systems are assessed in practice.
Until then, one thing is certain: AI liability under the Defective Products Directive is set to redefine risk management for software and AI in Europe.
On a similar issue, you can read the article “Have board directors any liability for a cyberattack against their company?”. Also, don’t miss our AI Law Journal available HERE.

