Navigating the nexus of Policy, Digital Technologies, and Futures (S1/E12)

S1/E12: AI: You’d better do no harm

Hello SWForum.eu readers! I’m glad you’re following this crash introductory course on how to navigate the European Union’s (EU) digital policies and legislation. In this episode I’ll make a brief detour into the perhaps unexpected area of product liability, because recent initiatives by the European Commission (EC) complement the EU’s drive to regulate the development of software and the deployment of systems that use Artificial Intelligence (AI).

The EU Product Liability Directive

The EU Product Liability Directive (PLD) provides a liability regime that allows the victims of material damage to claim compensation from the product manufacturer, for example when a defective microwave oven accidentally destroys your kitchen. Notably, the previous version of the PLD was one of the first EU laws to harmonise the European single market. It dates back to 1985. That’s probably way before you, kind reader, were even born. Generation Z was not yet on anyone’s roadmap. Apple’s Macintosh personal computers were just getting started. The Berlin Wall stood tall.

The EC rightly thought that it was high time to review the PLD and integrate into its revision all things digital, including services, meaning anything ‘having software inside’. As for its scope, the updated Product Liability Directive covers all tangible and intangible unsafe products, including embedded or standalone software and the digital services necessary for a product’s functioning.

It’s important to know that, as with, say, cars, ‘digital’ liability would continue after the product is placed on the market, covering, inter alia, software updates, failure to address cybersecurity risks, and machine learning. In other words, developers would remain responsible for AI systems that learn independently and for the deployment of updates, or the lack thereof.

The manufacturers of connected devices and related services will need to meet essential requirements to address hacking vulnerabilities throughout the products’ lifecycles, for instance via security updates. However, such requirements are not set in the PLD, but rather in the recently proposed Cyber Resilience Act.

Now, remember that the Digital Decade Policy Programme 2030 includes ‘Putting people and their rights at the centre of the digital transformation’ and ‘Increasing safety and security of individuals’. In other words, the concept of liability will henceforth also include harm to fundamental rights, as protected in the EU.

And yes, it’s a mes(s)h. Imagine a poor manufacturer of Virtual Reality glasses. (Well, perhaps “poor” isn’t the right word, as these include Meta and Apple…) They’ll have to comply with half a dozen or more different laws if they want to sell their products in the EU. In this series I’ve already explored the GDPR, the DMA, the DSA, the Data Act, Artificial Intelligence, and today the PLD. Still unexplored, but also of paramount importance, are the Cyber Resilience Act and other legislation belonging to this soup of letters, as described in Episode 1.

The implications of such a wide regulatory net on competition are huge. If I were a C-level executive at one of the Big Tech companies (usually from the USA), I’d be celebrating day and night. After all, such a compliance effort raises the entry barrier for smaller competitors to heights that are probably insurmountable. No SME can afford legal departments that big.

Interestingly, significant exemptions are made for open-source software developed or supplied outside a commercial activity.

The AI Liability Directive

OK, I know you thought that was all. But not quite. The updated product liability rules described above were published together with another directive that may, under certain conditions, be used to reverse the burden of proof for damage or harm caused by AI applications, such as autonomous drones or cars.

If you’re not a lawyer you may feel a little confused. As I wrote before, I fully sympathise. I’ll try to be clear.

To start with, there are two main types of liability: liability for defective products and liability for faulty behaviour. Both rest on the classic rules of liability, which require a triple proof: 1) there is a fault or defect, 2) there is damage and/or harm, and 3) there is a causal link between the fault/defect and the damage/harm.

While the revised Product Liability Directive, described above, provides a legal basis for claims, the AI Liability Directive tries to harmonise at the EU level certain aspects of legal proceedings initiated under national fault-based liability regimes.

The two directives therefore follow slightly different principles. On the one hand, the PLD is based on the strict liability described above, with the complainant needing to establish the triple proof. On the other hand, the AI Liability Directive alleviates the victims’ burden of proof by introducing a presumption of a causal link in the case of fault, which can be roughly described as follows.

Victims must prove 1) that someone was at fault, meaning that someone did not comply with a duty of care laid down in Union or national law directly intended to protect against the damage that occurred; 2) that a causal link can reasonably be established between that fault and the output produced by the AI system, meaning that the fault influenced the output that caused the damage; and 3) that the output produced by the AI system gave rise to the damage. If all three criteria are met, the court can presume that the non-compliance caused the damage. In that case, it is up to the liable person to rebut the presumption, for example by proving that a different cause provoked the damage.
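For those of you who read code more fluently than legalese, here is a minimal, purely illustrative sketch in Python of the rebuttable presumption described above. The names and structure are mine and entirely hypothetical; this is a rough model of the reasoning, not the Directive’s text, and certainly not legal advice.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    fault_proven: bool             # 1) breach of a duty of care under Union or national law
    fault_influenced_output: bool  # 2) the fault plausibly influenced the AI system's output
    output_caused_damage: bool     # 3) that output gave rise to the damage

def causal_link_presumed(claim: Claim) -> bool:
    """The court may presume the causal link only if all three criteria are met."""
    return (claim.fault_proven
            and claim.fault_influenced_output
            and claim.output_caused_damage)

def defendant_liable(claim: Claim, rebuttal_succeeds: bool) -> bool:
    """The presumption is rebuttable: the defendant may show, e.g., a different cause."""
    return causal_link_presumed(claim) and not rebuttal_succeeds

# Example: all three criteria proven and no successful rebuttal -> liability presumed.
claim = Claim(fault_proven=True, fault_influenced_output=True, output_caused_damage=True)
print(defendant_liable(claim, rebuttal_succeeds=False))  # True
```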

According to AI system providers, the risk here is that the proposal greatly increases the probability of a de facto reversal of the burden of proof. This is because courts may decide to presume the causal link between the defendant’s fault and the damage/harm in cases where the claimant faces “excessive difficulties” in providing proof due to technical complexity, which, we can agree, is inherent to most, if not all, AI systems. Given the statistical nature of Machine Learning systems, there is concern that this would allow courts to presume the causal link too easily, while it would be close to impossible for providers to rebut such a presumption. (Here you should be seeing the emoji of “The Scream” by Edvard Munch, but this interface doesn’t have it.)

Again, only time – and judges – will tell.

 

This is it for AI-related initiatives. I wish you luck if you want to launch a new tech start-up in the area. If my knowledge can be of help, you know where to find me.

Keep an eye on this space! The next episode will be my farewell, as, sadly, SWForum.eu is coming to an end as an H2020 project.

 

 

[This blog series is inspired by research work that is or was partially supported by the European research projects CyberSec4Europe (H2020 GA 830929), LeADS (H2020 GA 956562), and DUCA (Horizon Europe GA 101086308), and the CNRS International Research Network EU-CHECK.]

 

Afonso Ferreira

CNRS - France

Digital Skippers Europe (DS-Europe)