Draft EU directive on liability rules for artificial intelligence

3.08.2023

The main objective of the Directive is to make it easier to claim compensation for damage caused by AI. It will apply to persons who have suffered damage caused by an output produced by an AI system, or by such a system's failure to produce an output where one should have been produced. The provisions will apply only to non-contractual liability.

As indicated in the explanatory memorandum to the proposal, in most Member States the standard liability regime for torts is fault-based. This means that, as a general rule, the injured party in a non-contractual damage situation has to prove:

  • an unlawful act or omission of the person causing the damage;
  • fault;
  • the amount of the damage;
  • the existence of a causal link.

In the event of damage caused by AI, an additional difficulty will be to identify who should be held liable. Furthermore, in AI cases, the courts of individual Member States will interpret national civil liability laws, which may make it difficult to ensure uniform standards of protection in the EU. It is therefore necessary to harmonise the rules at the EU level.

Solutions introduced by the Directive

By its scope, the Directive covers liability claims arising from the fault of any person (the liable person may be either the provider or the user of AI technology; both terms are defined in Article 3 of the so-called AI Act). The enforcement of liability against these persons is to be facilitated by:

1. alleviating the burden of proof;

  • Pursuant to Article 4(1) of the Directive, a rebuttable presumption is introduced of a causal link between the defendant's failure to fulfil its obligations and the output produced by the AI system, or the AI system's failure to produce an output, that gave rise to the damage.
  • However, for such a presumption to arise: (1) the claimant must demonstrate the fault of the defendant, or of a person for whose conduct the defendant is responsible; (2) it must be considered reasonably likely, based on the circumstances of the case, that the fault influenced the output produced by the AI system or the system's failure to produce an output; and (3) the claimant must demonstrate that the output produced by the AI system (or the output it should have produced but did not) gave rise to the damage.
  • Furthermore, according to the Directive, proving fault consists in showing non-compliance with an existing duty of care laid down in EU or national law that is directly intended to protect against the damage that occurred.
  • In summary, this means that if the injured person is able to establish fault on the part of the person who failed to comply with a duty of care, the court may presume that this failure caused the damage. The only condition is that such a causal link be reasonably plausible. The presumption is rebuttable, meaning that the defendant can overturn it with counter-evidence.

2. enabling persons claiming damages to obtain information on high-risk artificial intelligence systems that is to be recorded/documented in accordance with the AI Act.

  • This means that the court can order the disclosure of relevant evidence relating to specific high-risk AI systems that are suspected to have caused damage.
  • Importantly, the court may order disclosure only to a limited extent, i.e. only to the extent necessary to substantiate the claim.
  • Where a defendant fails to comply with an order to disclose evidence, the court will presume that the defendant has not complied with the duty of care. This means that the court presumes the defendant’s fault.

By way of illustration, the new provisions may prove useful in situations such as:

  • discrimination in a recruitment process carried out using AI (whether for a job or for school/university admissions);
  • damage caused by an AI-driven autonomous vehicle that misinterpreted the traffic situation;
  • property damage caused by an AI-controlled drone dropping a parcel in transit, for example onto a car.

Summary

The Directive aims to increase legal certainty and ensure consistency with other EU legislation, primarily the AI Act. It is intended to simplify proceedings and thus make it easier for victims to seek protection in the event of damage caused by AI. The new rules will apply to damage that arises after the Directive has been implemented into national legal orders.

Importantly, the Directive will not include criminal provisions.

Under the draft, the deadline for adopting the necessary transposition measures is to be two years from the date of entry into force of the Directive.

It is also worth noting that Member States may adopt or maintain national rules that are more favourable to claimants for substantiating a non-contractual civil law claim for damage caused by an artificial intelligence system, provided that such rules are compatible with EU law.

Full text of the draft directive at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52022PC0496
