In this series of texts, we will briefly focus on selected provisions of the proposal that, due to their importance, deserve a more in-depth analysis. In the coming weeks we will publish comments on selected provisions, trying to shed more light on specific solutions. The grand finale will be sharing our overall opinion with the European Commission by 22 June, the last day of the public consultation.
Let’s start this series with the good things (which could still be better). We welcome the idea of creating an EU database of high-risk AI systems (art. 60). We supported it during the AI White Paper consultations last year, but we had already recommended it in 2019 in our first "alGOVrithms - State of Play" report (attached at the end of this publication). Back then, we proposed setting up a coordination body within the government which would “be responsible for coordinating ADM implementation, including coordination of the process of ADM creation and knowledge of existing tools and their performance.”

The Commission proposes that the database shall contain data on the high-risk AI systems mentioned in art. 6(2) of the proposal (see ANNEX III of the proposal for more details). That means only some AI systems will be registered in the database. Back in 2019, we recommended including in such a database all AI systems implemented in the public sector, not just the “high-risk” ones. We still believe that every AI system which may have an impact on the rights and obligations of citizens should be considered high-risk, not only those described in ANNEX III of the proposal.
While we welcome the extensive list of high-risk systems presented in the ANNEX, we do not feel that it covers all systems that may infringe fundamental rights, including, for example, systems that allocate judges or public officials to specific court cases.
The database will be established and maintained by the EC, but the data will be submitted by the providers rather than by competent authorities. We see this as a risk, also in the context of allowing providers to conduct conformity self-assessments for most high-risk systems.
The fact that the database will be open to the public is definitely a good point. In the end, however, not that much information will actually be public.
The detailed information which has to be provided to the database is described in ANNEX VIII and includes the following:
1. Name, address and contact details of the provider;
2. Where submission of information is carried out by another person on behalf of the provider, the name, address and contact details of that person;
3. Name, address and contact details of the authorised representative, where applicable;
4. AI system trade name and any additional unambiguous reference allowing identification and traceability of the AI system;
5. Description of the intended purpose of the AI system;
6. Status of the AI system (on the market, or in service; no longer placed on the market/in service, recalled);
7. Type, number and expiry date of the certificate issued by the notified body and the name or identification number of that notified body, when applicable;
8. A scanned copy of the certificate referred to in point 7, when applicable;
9. Member States in which the AI system is or has been placed on the market, put into service or made available in the Union;
10. A copy of the EU declaration of conformity referred to in Article 48 [The EU declaration of conformity states that the high-risk AI system in question meets the regulation requirements];
11. Electronic instructions for use; this information shall not be provided for high-risk AI systems in the areas of law enforcement and migration, asylum and border control management referred to in Annex III, points 1, 6 and 7.
12. URL for additional information (optional).
According to the Commission, the database “will also enable competent authorities, users, and other interested people to verify if the high-risk AI system complies with the requirements laid down in the proposal and to exercise enhanced oversight over those AI systems posing high risks to fundamental rights.”
The European Commission also treats the database as a tool to monitor and evaluate the proposal itself, but looking at the information required to be provided, it is not clear how: the database will not show examples of detected errors, or accounts of cases where the use of a particular solution infringed fundamental rights. There is still room for improvement…
Next week we will take a closer look at high-risk systems and assess how good or bad the concrete provisions are in this regard.