Let’s take a closer look at some of the general principles covered by the regulation. We will explore them further in the next couple of weeks.
What is good about the proposal is that it takes a human-centric approach and that the EC recognizes the risks to human rights which AI can bring. At the same time, the regulation seems to limit the possibility for public authorities to intervene and exercise tighter control over the implementation of AI. At the very least, we had an appetite for more. On the other hand, we appreciate that public authorities are given a bigger role in monitoring the conformity assessment than they had in the leaked document.
There is also a ban on certain technologies, including the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement (with some exceptions, such as searching for a missing child).
The Commission has chosen to introduce the term “high-risk system” and explains its meaning already in the preamble. For example: “Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts”. More examples are given in Annex II of the regulation, with the note that the Commission can expand the list later. We would rather see all AI systems implemented in the public sector which can have an impact on the rights and obligations of citizens classified as high-risk.
Also, according to the authors of the proposal (in the Explanatory Memorandum), “AI can also bring about new risks or negative consequences for individuals or the society”. This is a good response to the many comments pointing out that the risks arising from the use of some AI systems are significant not only for individuals but also for societies.
When a system is classified as high-risk, the provider would be obliged to create a risk management system and would be responsible for the conformity assessment (i.e. verifying whether the system meets the requirements of the regulation). This is all nice, but in the end it sounds a bit too self-regulatory, although, as noted above, we welcome the introduction of more concrete obligations for public authorities to monitor and approve this process.
We definitely welcome the statement in the preamble that “a certain degree of transparency of high-risk AI systems should be required. Users should be able to understand and control how the AI system outputs are produced. High-risk AI systems should thus be accompanied by relevant documentation and instructions of use and include concise, clear and, to the extent possible, non-technical information.” The term “a certain degree” is still too general, but it goes in a good direction, especially as the Commission considers that some other types of AI systems should also be subject to greater transparency.
Information on high-risk systems should be submitted to a dedicated database, which will be developed by the Commission and will serve as a public registry of selected AI-based solutions. As this takes up one of our recommendations already published in 2019, we will prepare more information on the database and our assessment of this concept in the first episode of our AI Series, In-depth Analysis of the Selected Provisions of AI Regulation Proposal, which will be released next week.