Work on the AI Act in the European Parliament and the European Commission is producing tangible results toward a more ethical and transparent use of AI. There is still a lot of work to do, but the direction of change is positive. We have written about this in a previous text.
Technology in the public sector should operate in a transparent manner. Citizens need to know how public systems operate and how those systems affect their situation. Public administration in the CEE region has been actively using AI algorithms for almost ten years. Such systems appear in decision-making processes, in the day-to-day functioning of administrative branches, and in the collection and processing of citizens' data in areas such as security, social services and medicine.
Artificial intelligence systems in the public sector of CEE are implemented at both national and local levels. For example, in Poland, AI systems form part of Smart City strategies at the regional level, and they also appear in competitions organized under the GovTech program.
In support and information-access systems, a chatbot or assistant can reduce the time needed to receive a service or obtain information. Technology can make the public sector more efficient. Previous examples show, however, that algorithms used in the public sector are often controversial (e.g. SyRI – Systeem Risico Indicatie in the Netherlands) and are sometimes withdrawn from service entirely (e.g. the unemployment profiling system used by the Polish Ministry of Family and Social Policy together with the District Labour Offices).
Estonia is one of the leading countries in the use of AI algorithms in public administration. In 2020, the Estonian government declared the implementation of 50 AI systems in public administration. Its KrattAI system is designed to integrate the available e-services, which is convenient for citizens but can also create risks due to the significant concentration of information about citizens in the hands of public authorities. Much depends on how the implementation and control of the service (throughout its lifecycle) are designed and executed, and on whether this process adequately accounts for the risks to citizens' rights (e.g. privacy violations, or the unjustified aggregation of data from different sources and the derivation of new information on that basis).
The drive to enhance social security and the protection of human rights has led to a discussion on a legal framework for the functioning of AI systems in compliance with fundamental and civil rights (Article 34 of the EU Charter of Fundamental Rights; Articles 12, 13 and 14 of the European Social Charter).
Many activities performed by AI algorithms and systems still lack transparency about the purpose and appropriateness of the use and further processing of personal data without informed and clear consent from the individual data subject (Article 4 of the GDPR). It is appropriate to require transparency from AI systems based on machine learning algorithms and natural language processing (NLP). A good example of the large-scale use of sensitive data is the traffic monitoring and testing system in Poznań.
In 2021, the EC issued guidelines on the use of biometric data (facial recognition) in public spaces. The Polish version of the guidelines is available on the website of UODO (the Personal Data Protection Office). It is not currently known to what extent data processing by monitoring systems in the CEE region complies with these guidelines and framework.
In the report of the European Parliament's AIDA committee (the special committee on artificial intelligence), the position of the IMCO and LIBE committees clearly defines the areas where sensitive data can be used (security, health) and a catalogue of prohibited biometric data. According to these guidelines, persons should also be informed about the use of their data, and the collected data should be limited in processing or processed with a high degree of anonymisation.
In the context of AI in the public sector, it would be beneficial to:
- define areas where AI algorithms can be applied;
- publish information on AI algorithms in use or planned, in a single, easily accessible database;
- define artificial intelligence precisely (in many cases public services are interchangeably labelled AI or Internet of Things (IoT));
- require transparent use of AI systems at each stage of implementation (on the one hand, the availability of information on the operation of the system, on the other – proactively informing citizens);
- create a catalogue of illegal AI practices that may have a negative impact on citizens' fundamental rights.