Last week, members of the European Parliament reached a political deal with the Council on the AI Act, the regulation intended to ensure that AI in Europe is safe, respects fundamental rights and democracy, and lets businesses thrive and expand. The agreement indicates the likely shape of the final version of the Act.
Parliament and Council negotiators have reached a provisional agreement on the Artificial Intelligence Act. The regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact.
A key aspect of the Act is the establishment of safeguards for general-purpose AI, ensuring these technologies are developed and used in a manner that respects ethical standards and societal values. This includes measures to ensure transparency, accountability, and the protection of fundamental rights.
Biometric identification
Another element of the Act is the strict limitation on the use of biometric identification systems by law enforcement agencies. This move addresses growing concerns about privacy and civil liberties, particularly in the context of surveillance and public monitoring. By setting clear boundaries, the Act aims to balance the benefits of these technologies with the need to protect individual freedoms.
Furthermore, the Act introduces outright bans on certain AI applications deemed harmful or unethical. This includes the prohibition of social scoring systems, which could lead to discrimination and societal divisions, and AI systems designed to manipulate or exploit user vulnerabilities. These bans reflect a commitment to prevent the misuse of AI in ways that could harm individuals or undermine democratic values.
Consumers get the right to lodge complaints and demand meaningful explanations regarding decisions made by AI systems. This provision ensures greater transparency and accountability, allowing individuals to challenge and seek redress for decisions that might adversely affect them.
Finally, the Act establishes an enforcement mechanism with substantial fines for non-compliance. Entities found violating the Act’s provisions could face penalties ranging from 35 million euros or 7% of global turnover down to 7.5 million euros or 1.5% of turnover, depending on the infringement and the size of the company.
Banned applications
Recognizing the threats that certain applications of artificial intelligence pose to citizens’ rights and the principles of democracy, the AI Act establishes a series of prohibitions aimed at safeguarding individual freedoms and societal values.
A significant aspect of this legislation is the prohibition of biometric categorization systems that process sensitive characteristics such as political, religious, or philosophical beliefs, sexual orientation, and race. This measure aims to prevent discrimination and protect the privacy and dignity of individuals against intrusive and potentially biased AI technologies.
Another area of concern addressed by the Act is the untargeted scraping of facial images from the internet or CCTV footage for creating facial recognition databases. This prohibition is a direct response to privacy concerns and the potential for mass surveillance, ensuring that facial recognition technologies are not misused in a manner that infringes on personal freedoms and privacy.
Workplace and education
In the workplace and educational institutions, the Act also bans the use of emotion recognition systems. These technologies, which claim to assess a person’s emotional state, raise serious concerns regarding privacy, consent, and the accuracy of such assessments. By prohibiting their use in these settings, the Act seeks to protect individuals from unproven and potentially invasive technologies.
Social scoring systems, which evaluate individuals based on their social behavior or personal characteristics, are also explicitly banned. Such systems pose a significant threat to fairness and equality, potentially leading to discrimination and societal divisions based on arbitrary or biased criteria.
Furthermore, the Act prohibits AI systems designed to manipulate human behavior, thus circumventing an individual’s free will. This measure addresses the growing concern about AI technologies that could unduly influence people’s decisions, behaviors, or beliefs, thereby undermining their autonomy.
Lastly, the Act takes a stand against the use of AI to exploit the vulnerabilities of specific groups of people, for example vulnerabilities due to age, disability, or social or economic situation. This provision aims to protect the most vulnerable members of society from being targeted or taken advantage of by AI systems that could exacerbate their vulnerabilities or lead to exploitation.
Law enforcement exemptions
Negotiators agreed on a series of safeguards and narrow exceptions for the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and limited to strictly defined lists of crimes. “Post-remote” RBI would be used strictly in the targeted search of a person convicted or suspected of having committed a serious crime.
The AI Act introduces nuanced rules for the use of “real-time” RBI, reflecting a balanced approach to leveraging technology for public safety while safeguarding individual rights. The use of real-time RBI, a tool capable of identifying or verifying individuals through biometric data, is subject to stringent conditions, ensuring its deployment is both necessary and proportionate to the objectives pursued.
Under these regulations, the application of real-time RBI is tightly restricted in terms of both time and location, with its use being permissible only for specific, critical purposes. One of the key allowable uses of this technology is in targeted searches for victims of serious crimes, such as abduction, human trafficking, and sexual exploitation. In these cases, the swift and accurate identification of victims can be crucial in rescue operations and in preventing further harm.
Another permitted use of real-time RBI is in the prevention of specific and imminent terrorist threats. In scenarios where there is a clear and present danger posed by terrorist activities, the technology can be deployed as a tool to swiftly identify and neutralize threats, thereby protecting public safety and national security.
Additionally, real-time RBI can be used for the localisation or identification of individuals suspected of having committed serious offenses. These offenses are explicitly listed in the regulation and include grave crimes like terrorism, human trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organization, and environmental crime. In these contexts, the technology serves as a critical aid in law enforcement efforts, helping to apprehend suspects and prevent further criminal activities.
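To make the structure of these exceptions easier to see, here is a minimal, purely illustrative “policy-as-code” sketch in Python. The purpose names and the `is_deployment_permitted` helper are invented for this illustration and appear nowhere in the Act; the point is only that the conditions (a permitted purpose, prior judicial authorisation, and strict time and place limits) are cumulative.

```python
# Purely illustrative "policy-as-code" sketch of the real-time RBI
# exceptions described above. The purpose names and this helper are
# hypothetical; they are not defined anywhere in the Act, and legal
# review cannot actually be reduced to an allowlist check.
from dataclasses import dataclass

PERMITTED_RBI_PURPOSES = {
    "targeted_search_for_victims",             # abduction, trafficking, sexual exploitation
    "prevention_of_imminent_terrorist_threat",
    "localisation_of_serious_crime_suspect",   # offenses explicitly listed in the regulation
}

@dataclass
class RBIDeployment:
    purpose: str
    has_prior_judicial_authorisation: bool
    limited_in_time_and_location: bool

def is_deployment_permitted(d: RBIDeployment) -> bool:
    """All conditions are cumulative: a permitted purpose, prior
    judicial authorisation, and strict time and location limits."""
    return (
        d.purpose in PERMITTED_RBI_PURPOSES
        and d.has_prior_judicial_authorisation
        and d.limited_in_time_and_location
    )
```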
Obligations for high-risk systems
For AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law), clear obligations were agreed. MEPs successfully managed to include a mandatory fundamental rights impact assessment, among other requirements, applicable also to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behaviour are also classified as high-risk. Citizens will have a right to lodge complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.
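To visualize the tiered approach that runs through the whole regulation, here is a minimal Python sketch. The tier names and the example mapping are editorial simplifications based on the categories discussed in this article, not definitions or classifications taken from the Act itself.

```python
# Editorial sketch of the Act's risk-based tiering; tier names and
# example assignments are simplifications, not legal definitions.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "banned outright"
    HIGH_RISK = "strict obligations, e.g. fundamental rights impact assessment"
    TRANSPARENCY = "disclosure duties, e.g. GPAI documentation"
    MINIMAL = "no specific obligations under the Act"

# Hypothetical examples drawn from the categories discussed above:
EXAMPLE_SYSTEMS = {
    "social scoring system": RiskTier.PROHIBITED,
    "emotion recognition in the workplace": RiskTier.PROHIBITED,
    "AI influencing elections or voter behaviour": RiskTier.HIGH_RISK,
    "AI used in insurance or banking decisions": RiskTier.HIGH_RISK,
    "general-purpose AI model": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} ({tier.value})")
```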
To account for the wide range of tasks AI systems can accomplish and the quick expansion of their capabilities, it was agreed that general-purpose AI (GPAI) systems, and the GPAI models they are based on, will have to adhere to transparency requirements as initially proposed by Parliament. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.
For high-impact GPAI models with systemic risk, Parliament negotiators managed to secure more stringent obligations. If these models meet certain criteria they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency. MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.
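The two layers of GPAI obligations lend themselves to a checklist. The sketch below compiles them into a single data structure; the field names are editorial shorthand for the duties listed above, not terms from the Act.

```python
# Editorial checklist of the GPAI duties listed above; field names
# are shorthand, not terms defined in the regulation.
from dataclasses import dataclass

@dataclass
class GPAICompliance:
    # Baseline transparency duties for all GPAI models:
    technical_documentation: bool = False
    eu_copyright_compliance: bool = False
    training_content_summary: bool = False
    # Additional duties for high-impact models with systemic risk:
    model_evaluations: bool = False
    systemic_risk_mitigation: bool = False
    adversarial_testing: bool = False
    serious_incident_reporting: bool = False
    cybersecurity_measures: bool = False
    energy_efficiency_reporting: bool = False

BASELINE = ("technical_documentation", "eu_copyright_compliance",
            "training_content_summary")
SYSTEMIC = ("model_evaluations", "systemic_risk_mitigation",
            "adversarial_testing", "serious_incident_reporting",
            "cybersecurity_measures", "energy_efficiency_reporting")

def outstanding(c: GPAICompliance, systemic_risk: bool) -> list[str]:
    """Return the duties not yet satisfied for a given model."""
    duties = BASELINE + (SYSTEMIC if systemic_risk else ())
    return [name for name in duties if not getattr(c, name)]
```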
Measures to support innovation and SMEs
MEPs wanted to ensure that businesses, especially SMEs, can develop AI solutions without undue pressure from industry giants controlling the value chain. To this end, the agreement promotes so-called regulatory sandboxes and real-world testing, established by national authorities to develop and train innovative AI before placing it on the market.
Non-compliance with the rules can lead to fines ranging from 35 million euros or 7% of global turnover to 7.5 million euros or 1.5% of turnover, depending on the infringement and the size of the company.
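As a back-of-the-envelope illustration of how such turnover-linked caps scale, the sketch below computes both bounds for a hypothetical company. Note one assumption: the “whichever is higher” reading follows the usual convention in EU regulations and is not spelled out in this article.

```python
# Back-of-the-envelope sketch of the turnover-linked fine caps.
# Assumption: the applicable cap is the higher of the fixed amount
# and the turnover share, the usual convention in EU regulations.

TIERS_EUR = {
    # infringement tier: (fixed cap in euros, share of global turnover)
    "most_serious": (35_000_000, 0.07),
    "least_serious": (7_500_000, 0.015),
}

def fine_cap(tier: str, global_turnover_eur: float) -> float:
    fixed, share = TIERS_EUR[tier]
    return max(fixed, share * global_turnover_eur)

# A hypothetical company with 2 billion euros of global turnover:
# max(35M, 7% of 2B) = 140 million euros for the most serious tier.
print(fine_cap("most_serious", 2_000_000_000))   # 140000000.0
print(fine_cap("least_serious", 2_000_000_000))  # 30000000.0
```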
Next steps
The agreed text will now have to be formally adopted by both Parliament and Council to become EU law. Parliament’s Internal Market and Civil Liberties committees will vote on the agreement in a forthcoming meeting.