AI Act should defend the fundamental rights of refugees and migrants


In December 2022, an international initiative was launched to protect human rights against harms from artificial intelligence systems. Our foundation, together with more than 200 international civil society and academic organizations, including EDRi and Access Now, signed a Joint Statement calling for the protection of the rights of persons affected by the use of artificial intelligence. The statement points to the need to amend the AI Act by expanding the prohibition of unacceptable uses of artificial intelligence systems and algorithms in the migration context.

Since January 30, 2023, the European Parliament has been discussing a compromise text on the catalogue of prohibited practices involving the use of artificial intelligence systems against persons (Article 5 of the AI Act). Despite previous recommendations and appeals by civil society organizations concerning unlawful uses of AI against migrants and people on the move across EU borders, these prohibitions have still not been included in the legal act.

To strengthen the international voice of civil society, our foundation has sent a letter to the MEPs involved in negotiating the aforementioned AI Act compromise.

The letter sets out the following demands to be taken into account when extending the catalogue of prohibited practices involving artificial intelligence systems:
  • Automated profiling and risk assessment systems. These predictive systems assess whether people present a ‘risk’ of unlawful activity or security threats. Such systems are inherently discriminatory, pre-judging people on the basis of factors outside their control or on discriminatory inferences drawn from their personal characteristics. These practices therefore violate the right to equality and non-discrimination, the presumption of innocence and human dignity.
  • Predictive analytics systems used to interdict, curtail and prevent migration. These systems generate predictions about where there is a risk of “irregular migration” and can be used to facilitate preventive responses that forbid or halt movement. They risk being used for punitive and abusive border control policies, such as pushbacks, that prevent people from seeking asylum, expose them to a risk of refoulement, violate their right to freedom of movement and threaten the right to life, liberty and security of the person.

In addition, the letter draws attention to the need to prohibit practices targeting migrants and refugees that rely on:
  • All emotion recognition and biometric categorization systems;
  • Remote biometric identification in both real-time and retrospective (post) scenarios, including biometric identification at borders and in and around detention facilities.

Iwona Karkliniewska, AI Researcher at Moje Państwo Foundation