Author: Dori
Publication date: 29.07.2024
On July 12, 2024, the AI Act was published in the Official Journal of the European Union. The legislation will enter into force on August 1, 2024, and will apply gradually, with most of its provisions becoming applicable in all Member States after 24 months.
The AI Act will be directly applicable in all EU Member States, without the need for national implementing legislation. The new regulation is a landmark for the European Union: it is the world's first comprehensive AI legislation and aims to boost innovation while guaranteeing citizens' digital security and fundamental rights.
Subjects of the AI Act
The Artificial Intelligence Regulation governs AI systems: machine-based systems that operate with a degree of autonomy and the ability to adapt, i.e., to learn from input data and derive conclusions, make predictions, generate content, and make recommendations or decisions. Examples include credit scoring systems, biometric facial recognition, CV pre-screening tools, as well as ChatGPT and other large language models.
AI categories according to the AI Act
Prohibited AI systems may not be used, developed, or distributed.
High-risk AI systems are subject to special, stricter rules.
General-purpose AI systems are subject to specific provisions.
Limited-risk AI systems are subject to certain transparency requirements.
AI systems posing minimal or no risk face no legal requirements.
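The tiers above can be summarized in a small sketch. This is purely illustrative: the tier names and one-line obligation summaries are paraphrases of the list above, not terms defined by the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the AI Act (illustrative labels, not legal terms)."""
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    GENERAL_PURPOSE = "general-purpose"
    LIMITED = "limited-risk"
    MINIMAL = "minimal or no risk"

# Hypothetical one-line summary of each tier's headline obligation,
# paraphrasing the list above.
OBLIGATIONS = {
    RiskTier.PROHIBITED: "use, development, and distribution forbidden",
    RiskTier.HIGH: "special, stricter rules apply",
    RiskTier.GENERAL_PURPOSE: "specific provisions apply",
    RiskTier.LIMITED: "certain transparency requirements apply",
    RiskTier.MINIMAL: "no legal requirements",
}

for tier, obligation in OBLIGATIONS.items():
    print(f"{tier.value}: {obligation}")
```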
Obligated subjects of the AI Act
Most of the obligations fall on providers of AI systems, who develop an AI system and place it on the market or put it into service under their own name. Distributors and importers, who make an AI system developed by another provider available in the EU, face fewer obligations. Product manufacturers who place products incorporating an AI system on the market under their own name, however, assume the same obligations as providers. The regulation does not stop there; it also sets out obligations for entities using AI systems.
Providers, distributors, importers, and product manufacturers that are not established in the EU will not be able to avoid compliance: if they market their AI system in the EU, or if their AI system affects EU residents, the new AI Act will apply to them, and they will have to ensure compliance.
Prohibited AI systems
Using subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making, causing significant harm.
Exploiting vulnerabilities related to age, disability, or socioeconomic circumstances to distort behavior, causing significant harm.
Biometric categorization systems inferring sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except for the labeling or filtering of lawfully acquired biometric datasets or when law enforcement categorizes biometric data.
Social scoring, i.e., evaluating or classifying individuals or groups based on social behavior or personal characteristics, leading to detrimental or unfavorable treatment of those individuals or groups.
Assessing the risk of an individual committing a criminal offense based solely on profiling or personality traits, unless used to supplement human assessments based on objective, verifiable facts directly linked to criminal conduct.
Creating facial recognition databases through untargeted scraping of facial pictures from the internet or CCTV footage.
Inferring emotions in the workplace or educational institutions, except for medical or safety reasons.
"Real-time" remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes, except when searching for missing persons or victims of abduction, human trafficking, or sexual exploitation; preventing a substantial and imminent threat to life or a foreseeable terrorist attack; or identifying suspects in serious crimes.
The use of these prohibited AI solutions is strictly forbidden. Within 6 months of the Act's entry into force, organizations must assess whether they are developing, distributing, or using such a prohibited AI solution and, if so, cease doing so within that period.
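The deadlines mentioned so far can be computed from the entry-into-force date. A minimal sketch, assuming entry into force on August 1, 2024 (per the article) and the 6-month and 24-month periods stated above; `add_months` is a small helper written for this example:

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # entry into force, per the article

def add_months(d: date, months: int) -> date:
    # Simple month arithmetic; safe here because the day is the 1st.
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)

# Prohibited-AI assessment must be completed within 6 months.
prohibition_deadline = add_months(ENTRY_INTO_FORCE, 6)   # 2025-02-01
# Most provisions become applicable after 24 months.
full_application = add_months(ENTRY_INTO_FORCE, 24)      # 2026-08-01

print(prohibition_deadline, full_application)
```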
High-risk AI systems
AI systems are classified as high-risk if they pose a serious threat to people's health, safety, or fundamental rights, in particular by significantly affecting decision-making.
Annex III of the AI Act contains a list of AI systems that should always be considered high-risk.
Immediate actions by providers
As a provider, there are two immediate steps: classify the system's risk level and establish data governance. Data governance includes reducing the potential for bias, avoiding infringement of relevant intellectual property rights, and mitigating the risk of operational disruption during deployment. This should be followed by a focus on transparency and liability. That means documenting "how an AI system arrived at its decision and what data it used to teach the system." For high-risk systems, it is also a requirement that the operations performed are transparent enough "to enable the users to interpret the output of the system and use it appropriately."
Providers will also be subject to a three-tier liability regime, comprising:
Product Liability Directive
national liability rules
AI Liability Directive
In the context of these general schemes, intellectual property and trade secrets deserve particular attention, but accessibility and social principles (accountability, fairness, reliability, data protection, and inclusiveness) should not be neglected.
Sanctions for AI Act violations
Violations of the EU AI Act can lead to heavy fines for providers, importers, distributors, and users. For the use of prohibited AI, the maximum fine is €35 million or 7% of annual global turnover, whichever is higher; for other breaches of the AI Act, €15 million or 3% of annual global turnover; and for supplying misleading information, €7.5 million or 1% of annual global turnover.
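The fine caps above combine a fixed amount with a turnover-based percentage. A minimal sketch of that arithmetic, taking the higher of the two amounts; the category keys and function name are invented for this illustration:

```python
# Fine caps described above: (fixed amount in EUR, share of annual global turnover).
FINE_CAPS = {
    "prohibited_ai": (35_000_000, 0.07),
    "other_breach": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the maximum fine cap: the higher of the fixed amount
    and the turnover-based percentage."""
    fixed, share = FINE_CAPS[violation]
    return max(fixed, share * annual_turnover_eur)

# A company with €1 billion turnover: 7% (= €70M) exceeds the €35M floor.
print(max_fine("prohibited_ai", 1_000_000_000))  # 70000000.0
```

For smaller companies, the fixed amount dominates: 3% of €100 million is €3 million, so the cap for an "other breach" would still be €15 million.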