Artificial Intelligence and Fundamental Rights

A 3-minute read

On September 20, 2021, a scientific seminar "Artificial Intelligence and Fundamental Rights" took place, organized by the Office for Personal Data Protection. The invited guests discussed important aspects of the use of artificial intelligence systems and their impact on fundamental human rights, in particular the protection of personal data. Maciej Gawroński, a member of the Scientific Council of Good Data Protection Standard sp. z o.o., gave the speech that delved most deeply into the protection of fundamental rights: "Human-centric Artificial Intelligence as a threat to fundamental rights and where you can see that in the draft regulation". Below, we present key points of the speech, which discussed several human rights enshrined in the EU's Charter of Fundamental Rights.

From the perspective of AI prediction, the key problem is respect for human dignity and privacy. The speech indicated that this is difficult to regulate, if possible at all. The right to personal liberty and security was discussed in light of the ban on real-time biometric identification in public spaces (in private areas it remains permitted). As it stands, the provision creates a great risk of abuse by public authorities, for example through the building of databases of persons suspected of committing serious crimes. The Cambridge Analytica case and the 2016 US presidential election underscore the problem of the right to freedom and how poorly it is protected when vulnerable social groups are exposed to subliminal advertising. The speaker emphasized the physical or mental harm likely to follow.

Regarding personal data protection in the proposed EU AI regulation, the speech pointed to a peculiarity in the prohibition of social scoring systems: their use is banned only when it results in unfavorable treatment of a person and the personal data are processed for purposes other than those for which they were collected. The speaker considered this "an interesting" approach to the subject, as it leaves open the possibility of using social scoring in all other cases. Moreover, enabling AI to be trained on real data in regulatory sandboxes contradicts one of the basic principles of personal data processing, the principle of data minimisation.

In the field of AI automated decision-making, the problem of discrimination was pointed out. The speaker also criticized the idea of mitigating it by labeling certain data categories as "special", which would lead to major regulatory confusion.

Attention was also drawn to the problem of ensuring proper and fair working conditions where systems monitor employees' work. Such systems may enable abuse by employers who apply employee-rights provisions only to the letter.

Two problems related to the judiciary were presented: personnel lacking the skills and preparation to analyze AI output, and the violation of the presumption of innocence and the right to defense when programs such as COMPAS are used in parole decisions.

At the end, the speech raised the problems of exercising the right to be heard and of obtaining a justification for public administration decisions. A source for a broader analysis was also pointed out: the report of the European Union Agency for Fundamental Rights of 14 December 2020.

The summary of the presentation focused primarily on the questionable guarantees of AI safety, given the absence of a certification process and the lack of direct reference to the principles set out in the General Data Protection Regulation (GDPR). The use of qualified prohibitions throughout the regulation leaves considerable room for interpretation, which translates into less transparency in how the provisions are applied. It was emphasized that work on the final shape of the AI Regulation and on the interpretation of its content must take place simultaneously in each of the member states in order to prevent circumvention of the rules contained in this act.

Clearly, many issues remain to be discussed and thoroughly analyzed. Hence there is a great need for open discussions like this one, organized not only by public supervisory authorities.