Summary of the opinion of the CULT Committee (Committee on Culture and Education) for the IMCO-LIBE Committee on the harmonisation of the provisions of the artificial intelligence regulation
During its work on the harmonisation of the provisions on artificial intelligence systems in the regulation known as the AI Act, the CULT Committee submitted an opinion with proposed amendments. In its proposal, the Committee indicated the need to extend the catalogue of high-risk areas of artificial intelligence application listed in Annex III to the AI Act so as to include education. The Rapporteur also proposed extending the prohibition of social behaviour scoring to cover its use by both private and public entities. It was further pointed out that a provision prohibiting the use of remote biometric identification technology in public spaces was included (in line with the European Parliament resolution of 6 October 2021 on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters).
In its proposed amendments to the AI Act, the CULT Committee draws attention to the following issues:
- Artificial intelligence systems should be based on ethical principles (Preamble, Recitals 1, 3 and 13), follow a human-centric approach (Preamble, Recital 5), and serve the objectives of the public interest, democracy, the rule of law and the environment, while minimising social discrimination and reducing the negative impact of using AI/ADM algorithms (Preamble, Recital 1). At the same time, artificial intelligence systems should be secure and reliable (Preamble, Recital 2) and protect personal data and the right to privacy (Preamble, Recital 4). The functioning of AI systems should be based on fundamental rights (Preamble, Recital 5) and take into account the digital rights and principles set out in the European Declaration on Digital Rights and Principles for the Digital Decade, adopted on 26 January 2022. Technologies which interact physically with persons, in particular children, the elderly or persons from vulnerable groups, should communicate such interaction in a sufficiently clear and explicit manner (Article 52, paragraph 3, subparagraph 1) and contain appropriate wording and legal disclaimers (Preamble, Recital 70; Article 52, paragraph 3 (new)).
- A requirement was proposed for a semi-annual assessment of the high-risk AI systems listed in Annex III to the AI Act, involving experts and representatives from different fields, including ethics experts and mental health professionals (Preamble, Recital 86a (new)), as well as a requirement for transparency of the systems concerned throughout their entire life cycle (Article 1, paragraph 1, point c).
- It was proposed to include a provision specifying the notion of trustworthiness of artificial intelligence (Article 4a), which would apply to all artificial intelligence systems placed on the market and used in accordance with the Charter of Fundamental Rights of the European Union, and which sets out the following principles:
- The principle of lawfulness, fairness and transparency;
- The principle of human agency and oversight;
- The principle of safety, accuracy, reliability and robustness;
- The principle of privacy;
- The principle of data governance;
- The principle of non-discrimination and diversity;
- The principle of traceability, auditability, explainability and accountability;
- The principle of social responsibility.
- Aspects of the functioning of biometric identification systems other than remote ones, used in public spaces, including both real and virtual spaces (Preamble, Recital 9), in the workplace and in educational institutions, were addressed. The area of education and vocational training, included in the catalogue of high-risk AI applications due to the use of AI technology in the educational process, should be extended to cover AI systems monitoring students' inappropriate behaviour (Preamble, Recital 34a (new); Annex III, paragraph 1, point 3, point b b (new)). It was also specified that artificial intelligence systems designed to monitor students' behaviour and emotions during tests should be considered high-risk systems, as they may violate the right to privacy and to data protection (Preamble, Recital 35).
- As regards the processing and management of the data used in high-risk AI systems, training and validation data sets (Article 10) should be analysed not only for possible bias, but also for deviations that may affect safety or health or lead to discrimination against persons (Article 10(2), point (f)). High-risk systems should be designed so that their operation is understandable and easy to follow, and so that users can interpret the results and use that information to make informed decisions (Article 13, paragraph 1).
- It was recommended to introduce and expand the concept of competences in the field of artificial intelligence, which should cover the skills, knowledge and understanding required to use artificial intelligence systems by operators, users and citizens alike (Preamble, Recital 14b; Article 3, paragraph 1, point 44d (new)). To increase these competences, it is necessary to develop education and training in this area and to strive to improve the digital skills of employees (Preamble, Recital 5a). The definition of 'education and training institutions' covers all educational establishments and institutions providing training services, irrespective of the age of the persons taught (Article 3, paragraph 1, point 44a (new)). Building competences in the field of AI will contribute to meeting the trustworthiness requirements for such systems (Article 4a).
- The Foundation considers particularly valuable the above CULT Committee recommendation on introducing and expanding the concept of competences in the field of artificial intelligence. In the context of the public sector, the Foundation proposes providing officials with education in the management of AI/ADM systems.
We believe that every public entity using AI/ADM systems in a way that affects the situation of citizens should have at least one person who is regularly trained in AI/ADM. Such a person would act as a point of contact whenever other officials or citizens have doubts about the operation of these systems. This would allow for more effective oversight of the implementation, maintenance and auditing of AI/ADM systems.
The opinion submitted on the AI Act was adopted by the IMCO-LIBE Committee, and the proposed amendments were included in the consolidated legal act being developed. The proposed changes to the AI Act will be put to a vote in the European Parliament this autumn.
We will present an analysis of the consolidated changes to the AI Act in our next study.
Iwona Karkliniewska, AI Researcher at Moje Państwo Foundation