AI Regulation - conformity assessments

A 3-minute read

From the perspective of our concerns about a human (rights)-centered approach, the European Commission’s proposal on AI regulation in fact promotes self-regulation by AI providers.

The main obligations to secure the compliance of AI systems with human rights standards are imposed on providers. Public authorities that implement such systems will, in the majority of cases, rely on providers’ self-assessment. What could possibly go wrong?

The proposal introduces the term “conformity assessment”, which is the process of verifying whether the requirements set out in the Regulation relating to an AI system have been fulfilled. These requirements concern the establishment of a quality management system, data governance, technical documentation, record keeping, transparency and provision of information to users, human oversight, accuracy, robustness, and cybersecurity.

The obligation to prepare the conformity assessment will only apply to high-risk AI systems (see our previous opinion). The general rule is that all such systems should undergo this process before being placed on the market, with one exception: “Under exceptional reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.” We can easily imagine that during events such as the recent COVID-19 pandemic the whole concept of trustworthy AI would not exist in practice, which may be understandable. But we see no reason to waive the obligation of conformity assessment in the name of the protection of industrial and commercial property.

As we wrote above, however, although the assessment is not voluntary, it is conducted by the provider of the system itself. The only exception is remote biometric identification systems, which would be subject to third-party conformity assessment. Notified bodies (set up to control the process in accordance with Article 43 of the Regulation) shall verify the conformity of high-risk AI systems in accordance with the conformity assessment procedures. The good thing is that they will have the power to confront the provider when the conformity assessment is dubious and to call for the system’s exclusion from the market. Still, they would rely on what the provider has prepared.

The conformity assessment would have to be repeated whenever the way an AI system is created or operates changes. This solution should guarantee that any changes to the system also remain in line with the safeguards protecting fundamental rights.

Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system which they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system in question and, where applicable, the authorised representative and importers accordingly.

To summarize, the idea of the conformity process as a quasi-algorithmic impact assessment is in line with our previous recommendations. What is definitely missing is the active involvement of the public sector entities that will use AI in automated decision-making processes. Even if the conformity assessment is done perfectly and the system operates in the private sector in a non-harmful way, when implemented in government-citizen relations it can still cause individual and societal harm. Sadly, the proposed Regulation does not respond to this challenge.