Adopting rules of non-contractual civil liability for artificial intelligence systems

5-minute read

The European Commission has proposed rules on non-contractual civil liability for the use of artificial intelligence systems, in the form of the so-called AI Liability Directive (the Proposal).

The proposed act aims to increase trust, strengthen fundamental rights and ensure transparency in the use of artificial intelligence systems on the EU market. The Proposal builds on the strategy set out in the White Paper on Artificial Intelligence and the accompanying Report on AI Liability. The European Parliament initiated the process by adopting a resolution that called on the European Commission to prepare rules of civil liability for the use of artificial intelligence systems.

The current EU rules on civil liability do not take into account the specificity of products and services based on artificial intelligence technology. Moreover, current legal acts oblige injured persons to prove the harmful or wrongful action of a defective product or service. Given the specificity of their operation, their complexity and autonomy, and the so-called black box effect in their functioning, artificial intelligence systems can make it difficult for injured persons to assert their rights.

At the stage of asserting those rights, the injured person may face disproportionately high costs merely in identifying the person responsible for the damage, which prolongs the process of discharging the burden of proof and complicates the claim for compensation. National courts may also take different approaches to interpreting the damage that has occurred. Many national AI strategies have addressed the issue of civil liability for artificial intelligence in some way, e.g. in Poland, the Czech Republic, Malta, Italy and Portugal. However, because artificial intelligence systems operate beyond the physical borders of individual Member States, civil liability requires regulation at the EU level.

The directive proposal is part of a package of measures supporting the development of AI in the European Union and is complemented by, among others, the harmonized rules of the so-called AI Act, a revision of sectoral and horizontal product safety rules, and liability rules related to artificial intelligence systems in the EU.

The directive proposal regulates the following issues:

1. Subject matter and scope of application

The draft act applies to non-contractual liability related to the operation of artificial intelligence. It applies mainly to claims for compensation for damage caused by the use of high-risk AI systems. In particular, it enables injured persons to obtain information, records and documentation regarding the data of artificial intelligence systems that will in the future be regulated by the so-called AI Act. The act will not have retroactive effect and will serve only as a legal recourse for claiming damages and compensation after the entry into force of the AI Act.

2. Introduced terms and definitions

The definitions of artificial intelligence systems and users refer to those set out in the so-called AI Act. According to the definition, an affected party may be a natural or legal person affected by an artificial intelligence system. In addition, the definition of the affected party extends to entities covered, for example, by a cooperation or entrustment agreement, or entities that may act on behalf of the affected party.

3. Presumption of causation

Art. 4 of the draft act establishes a targeted, rebuttable presumption of causation. This means that if the aggrieved party brings a claim for compensation for damage caused by artificial intelligence, the court is to presume a causal link between the fault of the defendant using the AI system and the action, or lack of action, of that system which caused the damage. It is therefore the party using artificial intelligence that will have to prove it should not be held liable, not the injured person.

4. Disclosure of Evidence

Evidence of harmful use of high-risk AI systems is to be disclosed at the request of the court before which the claim for damages has been brought. The disclosure obligation will apply to both system providers and users (those deploying AI systems).

5. Adoption, monitoring and evaluation

The draft proposes a mechanism for monitoring and measuring the efficiency of compensation claim procedures, including judicial proceedings and out-of-court settlement proceedings, based on the exchange of information between Member States. This exchange is to include the collection of data and the types of evidence presented in complaints.

6. Implementation at the level of national legislation

Member States should implement rules at the national level to protect persons affected by the use of artificial intelligence systems.

7. The proposed directive is a step in the right direction but needs further development

The Directive makes it possible to close the existing legal gaps in asserting rights on the grounds of civil liability, enables collective redress and compensation for natural persons, and increases public trust in law enforcement institutions and the judiciary. It should be noted, however, that it is not a standalone legal act but only a supplement to the proposed provisions on liability for artificial intelligence systems used on the EU market, which build on the principles of the Product Liability Directive for defective products placed on the EU internal market.

The proposed act on non-contractual civil liability does not cover all aspects of the use of artificial intelligence systems. It focuses mainly on the mechanism by which individuals can assert their rights in the case of high-risk AI systems. How persons injured by general-purpose AI systems are to claim compensation remains an open question. The responsibility of an organization for the AI systems it deploys is also not entirely clear. Artificial intelligence systems can be not only a product but also an element supporting process management in data processing and automated decision-making, particularly in the public sector. The proposed Directive is a step in the right direction, but it definitely needs further work to better ensure the right to compensation for individuals affected by the operation of artificial intelligence.

Iwona Karkliniewska, AI Researcher at Moje Państwo Foundation