EU AI Act: ITRE's opinion for IMCO

Reading time: 8 minutes

A summary of the opinion submitted by the ITRE Committee (Committee on Industry, Research and Energy) to the joint IMCO-LIBE Committee on the proposal for a regulation laying down harmonised rules on artificial intelligence.

In the course of the work on the AI Act, the ITRE Committee presented its opinion on the proposed amendments to the joint IMCO-LIBE Committee in June this year.

The opinion draws attention to the need for a horizontal approach to the implementation of AI systems: deployed systems for intelligent management and automated decision-making should be based on a prior risk analysis. This approach should contribute to transparent, reliable and understandable rules for implementing and developing technological solutions that use artificial intelligence. At the same time, those solutions should be consistent with European values and trustworthy for society.

The document highlights the strengthening of regulatory sandboxes and the need for a precise definition of AI systems, and sets standards for accuracy, robustness and cybersecurity, as well as for data and data governance. Importantly, it emphasizes that the legal definition of AI systems should protect not only the rights of entrepreneurs but also those of citizens and civil society, including fundamental human rights. The opinion also points to the need for codes of conduct and standardization activities.

In its opinion, the Committee proposed a number of important amendments to the draft regulation to strengthen the protection of the fundamental rights of individuals:

1. The catalog of definitions has been extended: proposed new definitions include the concept of an artificial intelligence system, the notion of a "serious incident", a regulatory sandbox, and competences in the field of artificial intelligence.

2. The definition of an artificial intelligence system identifies the key features of systems that can make predictions, formulate recommendations and take decisions in virtual and physical environments (Article 3(1), point 1). The definition thus covers the ability of machine-based systems, with varying levels of autonomy, to perceive the real and virtual environments with which they interact, to build models on the basis of automated analysis, and to use those models to determine how to act or how to use the information available to them (Preamble, Recital 6). The autonomy of the system is further clarified in Article 3(1), point 1a (new).

3. The term "serious incident" covers not only an incident but also a malfunction of an artificial intelligence system that leads, or may lead, to a breach of obligations under Union law intended to protect fundamental rights (Article 3(1), point 44(1)(ba) (new)), or that endangers the safety of individuals or violates their fundamental rights (Article 3(1), point 44(1)(a)).

4. A provision was proposed for research carried out on persons or with the use of personal data, with personal data understood as defined in the GDPR (Article 4, point 1). Research conducted with a view to implementing or placing artificial intelligence systems on the market should comply with ethical standards and remain within the scope of the initially defined research purpose (Preamble, Recital 12). At the research stage, these activities may not violate the fundamental rights of the individuals taking part in the work (Article 2, paragraph 5a (new)).

5. Systems should be designed accurately and with the greatest care, with possible errors verified and eliminated at every stage (Preamble, Recital 44). Already at the validation and testing stage, an artificial intelligence system should include the following elements: a definition of the purpose and of the environment in which the system will be used (Article 10(2), point (ga) (new)); transparency about the original purpose for which the data in the system were collected (Article 10(2), point (aa) (new)); and consideration of the data collection process (Article 10(2), point (b)), subject to the provision on the technical capabilities of high-risk AI systems (Article 10(1)). High-risk AI systems should be designed in line with the principle of security by default (Article 15(1)). At the same time, attention should be paid to possible bias in output data sets, the so-called "feedback loops": outputs that may later be reused as input data can negatively affect health, safety and fundamental rights, or lead to discrimination against persons (Article 10(2), point (f)), especially women, children and people with disabilities. A simple illustration of this mechanism follows below.
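To make the feedback-loop risk concrete, here is a minimal, purely hypothetical Python sketch (not part of the ITRE opinion or the AI Act text): a scoring system whose own slightly biased outputs are recycled as its future training data, so a small initial distortion compounds with every retraining cycle.

```python
# Hypothetical illustration of a bias "feedback loop": a system's
# outputs are reused as its future training data, so an initial
# distortion compounds over successive retraining cycles.

def retrain(predicted_rate: float, bias: float) -> float:
    """One retraining cycle: the model learns from its own past
    decisions, each cycle under-weighting one group by `bias`."""
    return predicted_rate * bias

TRUE_RATE = 0.50   # actual qualification rate in the affected group
rate = TRUE_RATE   # what the system predicts before any feedback
BIAS = 0.90        # assumed 10% under-weighting per retraining cycle

for cycle in range(1, 9):
    rate = retrain(rate, BIAS)
    print(f"cycle {cycle}: predicted rate = {rate:.3f} (true rate = {TRUE_RATE})")

# After 8 cycles the predicted rate has fallen to about 0.215 even
# though the true rate never changed: the outputs of one cycle, used
# as inputs of the next, steadily amplify the original distortion.
```

This is the mechanism the provision on possible biases in output data sets (Article 10(2), point (f)) asks providers to examine: when outputs feed back into inputs, even a small sampling or labelling bias can grow into systematic discrimination.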

6. The standardization process for artificial intelligence systems is highlighted, and requirements for service providers and technological solutions are established (Preamble, Recital 61). Providers' work on the technical adaptation of AI systems should also take social aspects into account, including the potential risk that the algorithms used pose to the fundamental rights of individuals and to the democratic rights of society, with particular regard to the impact of these technologies on children. The implementation of AI innovations and technical solutions should be ethically justified (Preamble, Recital 71).

7. An important aspect is the provision ensuring that users and the end recipients to whom these systems are applied have understandable information about artificial intelligence systems, enabling them to make conscious and independent decisions about the use of AI in relation to them (Preamble, Recital 46). At the same time, high-risk artificial intelligence systems should limit the potential risk of negative impact on individuals and society, remain consistent with the primary purpose for which they are implemented, and provide information to users in a clear, transparent, easily understandable and legible manner, with an appropriate level of accuracy and robustness (Preamble, Recital 49). Artificial intelligence systems classified differently under the AI Act should likewise be ethical and socially responsible (Preamble, Recital 81).

The Foundation considers it appropriate to introduce an obligation for the public sector to inform the person concerned whenever an AI system was used to make a decision about them (not only in the case of high-risk systems).

The use of AI systems in the public sector has a direct impact on the situation of the individuals to whom these systems are applied. Citizens should be aware that a decision made about them by a public institution is a consequence of the operation of a specific AI system. For individuals to be able to protect themselves from the negative effects of AI systems, they must first of all know that such a system was used. From the perspective of the person whose case is affected by an AI system, it does not matter whether the specific effects result from a high-risk system or not. The proposed obligation for the public sector to inform about the use of an AI system in decision-making about a specific person should therefore apply to every AI system, regardless of its risk classification.

The obligation would not apply to AI systems used by the public sector for purely ancillary administrative activities that do not affect its actual decision-making.

At the same time, a corresponding right of individuals to receive information on the public sector's use of AI in making decisions concerning them should be introduced.

8. Another important point in the ITRE Committee's opinion is the issue of regulatory sandboxes. Regulatory sandboxes (defined in the proposed Article 3(1), point 44a (new)), intended to support SMEs and start-ups, should comply with the Charter of Fundamental Rights of the European Union and with the General Data Protection Regulation (GDPR) (Preamble, Recital 72). Rules supporting artificial intelligence innovators are to introduce appropriate safeguards and inspire public trust, with appropriate criteria and conditions established by national legislation in the EU Member States (Preamble, Recital 72a). At the same time, threats to health, safety and fundamental rights should be continuously monitored and identified (Article 53(1)).

9. Provisions were proposed on the acquisition of AI competences and training (Article 3(1), point 44b (new)) and on enhancing public communication and awareness-raising activities on AI, with the support of national and EU data protection authorities, ENISA and digital innovation hubs (Preamble, Recital 73). In our opinion, educating public sector representatives in the field of artificial intelligence will be of particular importance. Such education could positively influence public confidence in AI systems and the quality of their implementation, maintenance and supervision in the public sector. Citizens should be able to understand how artificial intelligence systems work and how they influence their situation. Officials, in turn, should be able to explain how a specific decision concerning a citizen was made.

10. Another important element of the proposed provisions is the establishment of an Artificial Intelligence Advisory Committee (Article 57(3a) (new)) as a subgroup of the Board, which should include not only representatives of industry and the scientific and research communities, but also civil society, social partners and experts in the field of fundamental rights (Preamble, Recital 76a (new)). The Foundation has repeatedly emphasized that sound conclusions from discussions around artificial intelligence, in the public sector and beyond, can only arise from discussions involving many stakeholders from different backgrounds.

The ITRE Committee's opinion on the harmonization of the AI Act was adopted by the joint IMCO-LIBE Committee, and the proposed provisions were included in the consolidated set of amendments to the legal act. The consolidated proposal resulting from the work of the EP committees will be submitted to the plenary session of the European Parliament this autumn.

We will present an analysis of the consolidated changes to the AI Act in our next article.