AI Act Proposal – a catch-all regulation?
Artificial Intelligence (AI) is advancing so quickly that experts around the world are warning against the risks of unregulated deployment. Meanwhile, the European Commission (Commission) is trying to catch up by accelerating its plans to regulate AI. After the Commission published its first draft of the AI Act Proposal in April 2021, the Council adopted its common position (General Approach) on 6 December 2022. Since then, the Proposal has made great strides: the committee work in the European Parliament was completed on 11 May 2023, and the Parliament adopted its position just a month later with a clear majority: 499 votes in favor, 28 against and 93 abstentions. Next up are trilogue negotiations between the European Parliament, the Council and the Commission. If this pace is maintained, the AI Act could be passed before the end of the year, making it the world’s first comprehensive AI law.
In this briefing, we take a closer look at the key provisions and recent changes to the AI Act Proposal.
New legal definition of AI systems
One of the key challenges for EU regulators has been the rapid pace of technological development of AI, which generally makes it difficult to find a consistent, technology-neutral, and future-proof definition.
While a previous definition referred to extensive lists of underlying techniques and approaches, the latest Proposal opted for a concise and broad definition: “AI systems” are “machine-based systems that are designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments”.
As “Generative AI” models such as ChatGPT or Midjourney were not specifically considered in the first Proposal but have since become impossible to ignore, the latest Proposal includes a specific definition and corresponding new obligations (see below). Generative AI is defined as foundation models used in AI systems that are “specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video” (Article 28b AI Act Proposal).
Risk-based approach
The AI Act Proposal follows a risk-based approach. It imposes prohibitions and obligations on providers and deployers of AI systems, depending on the level of risk that AI may pose.
Prohibited AI systems
The Proposal outright prohibits AI systems that pose an unacceptable level of risk to human safety, in particular systems deploying manipulative techniques, exploiting human vulnerabilities, or used for social scoring (i.e. classifying people based on their social behaviour, socio-economic status, or personal characteristics).
The latest Proposal further expands these bans to cover intrusive and discriminatory uses of AI, such as biometric surveillance, emotion recognition, and predictive policing.
High-risk AI systems
The Proposal classifies as “high-risk” those AI systems that could have a negative impact on safety or fundamental rights. High-risk AI systems are not banned entirely but are subject to extensive obligations. In general, these include (i) AI systems used in products covered by the EU’s product safety legislation, and (ii) AI systems grouped in eight specific categories that must be registered in a new EU database, such as biometric identification and categorization of natural persons, management and operation of critical infrastructure, and education and vocational training.
The latest Proposal even includes a kind of unfair competition provision, whereby certain unfair contractual terms concerning the supply of tools, services, components or processes that are used or integrated in a high-risk AI system shall not be binding (Article 28a AI Act Proposal). This applies where a party abuses its stronger bargaining position. In addition, deployers of high-risk AI solutions will have to conduct a fundamental rights impact assessment, taking into account aspects such as potential negative impacts on marginalized groups and the environment (Article 29a AI Act Proposal).
Low-risk AI systems
Finally, low-risk AI systems, such as Generative AI, must at least meet certain transparency requirements, e.g. (i) disclose that content was generated by AI, (ii) design the model to prevent it from generating illegal content, and (iii) publish summaries of copyrighted data used for training. The latter may well be a response to the wave of lawsuits covered in our previous briefing.
In addition to the risk classification, all AI systems must comply with general principles, including that AI systems should be subject to human control and oversight, be technically robust and secure, and be developed and used in accordance with existing privacy and data protection laws. They must also meet a number of new, detailed technical requirements. In particular, Generative AI must be designed to be as energy-efficient as possible, taking into account, for instance, waste generation.
National impact
After final adoption, the AI Act will be directly applicable in all Member States, leaving little room to manoeuvre at national level. Still, a lot of work lies ahead, as all national legislation will have to be brought in line with the new rules.
And more may come: the establishment of so-called “regulatory sandboxes” (Article 53 AI Act Proposal) – meaning controlled environments established by public authorities to test new AI systems for a limited period before they are brought to the market – is still left to the discretion of the Member States. However, they shall “as a next step be made mandatory with established criteria” (Recital 71 AI Act Proposal).
A catch-all regulation?
The intention behind the AI Act is not to stifle innovation, but to promote the adoption of human-centered and trustworthy AI and to protect health, safety, fundamental rights and democracy from its harmful effects.
However, striking a balance between protecting users and promoting business opportunities for companies is no walk in the park for EU regulators, who seem to want to capture every conceivable risk posed by any AI system – existing or future.
There is some concern that the AI Act will become an overly broad and complex piece of regulation that will be difficult for companies to apply. Many of its provisions are already extremely detailed and technical; broad definitions combined with a set of detailed rules that do not take into account the particularities of each case could potentially lead to overburdened authorities and legal uncertainty.
While the AI Act Proposal barely touches on competition concerns with its provisions on unfair contractual terms for high-risk AI systems, no further plans are in sight for regulation of the general competition aspects of AI. This is surprising in light of the many other areas where specific AI legislation is on the way – notably the proposed AI Liability Directive and the proposed revision of the Product Liability Directive – as well as the Commission’s general aim to regulate EU digital markets.
BLOMSTEIN will continue to monitor and inform about the development of the AI Act and all competition-related issues concerning AI. If you have any questions, please contact Max Klasse, Anna Huttenlauch, Jasmin Sujung Mayerl and BLOMSTEIN’s entire competition law team for advice.