The AI Act follows a risk-based approach when targeting effective regulatory interventions and setting appropriate rules for AI systems. This tailored approach is based on the varying intensity and scope of the risks generated by specific AI systems. Accordingly, we can distinguish between three different risk levels: unacceptable risk, high risk and moderate risk.
First, the AI Act specifically outlines several types of AI systems that pose an unacceptable level of risk. These are AI systems involving practices whose risk is deemed unacceptable because they contradict fundamental human rights and the basic principles of democracy. Due to that harmful impact, such AI systems cannot be placed on the market or put into service. The AI Act lists those prohibited practices in Article 5, which has already entered into force as of 2 February 2025. Non-compliance with the prohibitions outlined in Article 5 can result in the harshest administrative fines established under the AI Act – up to EUR 35 million or 7% of the company’s total worldwide annual turnover for the preceding financial year, whichever is higher.
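To illustrate how the "whichever is higher" ceiling works in practice, the short sketch below computes it for a hypothetical turnover figure (the function name and the turnover amount are our own illustrative assumptions, not values taken from the Act):

```python
# A minimal sketch of the Article 5 fine ceiling: EUR 35 million or 7% of
# total worldwide annual turnover, whichever is higher.
def max_article_5_fine(worldwide_annual_turnover_eur: float) -> float:
    """Return the upper limit of the administrative fine for violations of
    the Article 5 prohibitions."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical example: a company with EUR 600 million worldwide annual turnover.
# 7% of 600 million is EUR 42 million, which exceeds EUR 35 million,
# so the applicable ceiling is EUR 42 million.
print(max_article_5_fine(600_000_000))  # 42000000.0
```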
The first prohibited practice involves the deployment or use of an AI system that employs manipulative techniques with the aim of distorting a person’s behavior, causing that person to take a decision that results in significant harm to them or to others.
The second prohibited practice targets the deployment or use of an AI system that exploits the vulnerabilities of a person (due to age, disability, social status, etc.) with the aim of distorting the behavior of that person in a manner that causes significant harm.
The third prohibited use case relates to the deployment or use of an AI system for biometric categorization, i.e. one that categorizes individuals based on their biometric data in order to deduce their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
The fourth prohibition deals with the deployment or use of an AI system for the evaluation of natural persons based on their social behavior or personal characteristics, where the resulting social score leads to unfavorable treatment of those persons in an unrelated social context, or to unfavorable treatment that is unjustified or disproportionate in view of their behavior.
The fifth prohibited use case deals with the use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement. However, the AI Act provides for three exceptions where law enforcement may use such remote identification systems: the targeted search for victims of abduction or trafficking and for missing persons, the prevention of a specific and imminent threat to life or of a terrorist attack, and the localization or identification of suspects of certain serious crimes.
The sixth prohibition relates to the deployment or use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a person committing a crime, on the sole basis of profiling or of an assessment of personality traits. It must be noted that this prohibition does not affect the legitimate use of AI tools that assess the risk of financial fraud in certain transactions.
The seventh prohibited use case deals with the deployment or use of an AI system that creates or expands facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
The eighth prohibition relates to the deployment or use of AI systems that infer the emotions of natural persons in the workplace or in educational institutions. The use of such AI systems is allowed only for medical or safety reasons.
Second, the AI Act introduces the category of high-risk AI systems in Article 6. Such high-risk AI systems include AI systems intended to be used as a safety component of a product, or which are themselves a product, that are required to undergo a third-party conformity assessment under EU safety legislation before being placed on the market or put into service. The relevant EU safety legislation includes acts dealing with toy safety, radio equipment, cableway installations, personal protective equipment, medical devices, civil aviation security, marine equipment, rail system interoperability, etc. Additionally, Annex III to the AI Act contains a concrete list of AI systems that are also considered high-risk. These include AI systems used in the areas of biometrics, critical infrastructure, access to education, recruitment of personnel, access to public services and benefits, law enforcement, etc.
When an AI system is classified as high-risk, this brings a set of specific obligations both for its providers and for its deployers.
For providers, these include requirements for establishing a risk management system, quality criteria for training data sets, preparation of technical documentation for the AI system, record keeping (logging) of events, human oversight, appropriate level of accuracy, robustness and cybersecurity, etc.
For deployers, the obligations include, among others, to:
- apply appropriate technical and organizational measures to make sure that they operate the system in accordance with the specific instructions for use issued by the provider;
- assign human oversight of the AI system to persons who have the necessary training, competence and authority;
- monitor the operation of the high-risk AI system and inform its provider and the market surveillance authority in given scenarios;
- keep the logs that are automatically generated by that system for a period of at least six months.
Third, there are AI systems whose risk level is below that of high-risk AI systems and for which the AI Act envisions compliance with certain transparency obligations. We could take the liberty of labelling those as AI systems that present moderate risk, despite the fact that no such explicit classification is made in the regulation. Examples include AI systems that are intended to interact directly with natural persons, AI systems that generate synthetic content and AI systems that create deep fakes. Such systems pose risks of deception or impersonation. Therefore, certain information must be made available by their providers and deployers to the end users.
What exactly will those information obligations entail?
Providers of AI systems that are intended to interact directly with natural persons will need to ensure that those systems are designed in a way that informs the natural persons that they are actually interacting with an AI. The rule will not apply to AI systems that are authorized by law to detect, prevent, investigate or prosecute criminal offences.
Providers of AI systems, including general-purpose AI systems, that create synthetic content (text, images, audio or video) will be obliged to ensure that the AI outputs are marked in a machine-readable format and are detectable as having been generated or manipulated by AI. This inevitably means that those providers will have to implement technical solutions in order to comply with the obligation. The obligation will not apply to AI systems that perform an assistive function for standard editing or that do not substantially alter the input data, nor to AI systems authorized by law to detect, prevent, investigate or prosecute criminal offences.
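The Act does not prescribe a particular marking technology; in practice providers may rely on watermarking, metadata or content provenance standards such as C2PA. Purely to illustrate the idea of a machine-readable marker, the sketch below attaches a small provenance record to a generated file using only the Python standard library (the field names and sidecar-file layout are our own assumptions, not a regulatory or industry standard):

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_ai_provenance_manifest(output_path: str, generator_name: str) -> Path:
    """Write a machine-readable sidecar file declaring that the given output
    was generated or manipulated by an AI system."""
    content = Path(output_path).read_bytes()
    manifest = {
        "ai_generated": True,                                   # illustrative field names,
        "generator": generator_name,                             # not a regulatory standard
        "sha256": hashlib.sha256(content).hexdigest(),           # ties the record to the file
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    manifest_path = Path(output_path).with_suffix(".provenance.json")
    manifest_path.write_text(json.dumps(manifest, indent=2))
    return manifest_path

# Usage (hypothetical file and model name):
# write_ai_provenance_manifest("generated_image.png", "example-model-v1")
```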
Deployers of emotion recognition systems or biometric categorization systems will have to inform all exposed natural persons about the operation of such systems. Furthermore, they are obliged to process the relevant personal data in compliance with the applicable EU regulations and directives concerning the processing of personal data. The information obligation will not apply to AI systems permitted by law to detect, prevent or investigate criminal offences.
Deployers of AI systems that generate deep fake content will have to disclose that the content has been artificially generated or manipulated. Additionally, deployers of AI systems that generate or manipulate text (for example, news or magazine articles) aimed at informing the public on matters of public interest will be obliged to disclose that the text has been artificially generated. The information obligation will not apply where the AI-generated text has undergone a process of human review or editorial control before publication, or where the use is authorized by law to detect, prevent, investigate or prosecute criminal offences.