AI Act Proposal – a catch-all regulation?

Artificial Intelligence (AI) is advancing so quickly that experts around the world are warning against the risks of unregulated deployment. Meanwhile, the European Commission (Commission) is trying to catch up by accelerating its plans to regulate AI. After the Commission published its first draft of an AI Act Proposal in April 2021, the Council adopted its common position (General Approach) on 6 December 2022. Since then, the Proposal has made great strides: committee work in the European Parliament was completed on 11 May 2023, and the Parliament adopted the Proposal just a month later with a clear majority of 499 votes in favor, 28 against and 93 abstentions. Next up are the trilogue negotiations between the European Parliament, the Commission and the Council. If this pace is maintained, the AI Act could be passed before the end of the year, making it the world’s first comprehensive AI law.

In this briefing, we take a closer look at the key provisions and recent changes to the AI Act Proposal.

New legal definition of AI systems

One of the key challenges for EU regulators has been the rapid pace of technological development of AI, which generally makes it difficult to find a consistent, technology-neutral, and future-proof definition.

While a previous definition referred to extensive lists of underlying techniques and approaches, the latest Proposal opted for a concise and broad definition: “AI systems” are “machine-based systems that are designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments”.

As “Generative AI” models such as ChatGPT or Midjourney were not specifically considered in the first Proposal but have since become impossible to ignore, the latest Proposal includes a specific definition and corresponding new obligations (see below). Generative AI models are defined as foundation models used in AI systems that are “specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video” (Article 28b AI Act Proposal).

Risk-based approach

The AI Act Proposal follows a risk-based approach. It imposes prohibitions and obligations on providers and deployers of AI systems, depending on the level of risk that AI may pose.

Prohibited AI systems

The Proposal outright prohibits AI systems that pose an unacceptable level of risk to human safety, in particular systems that deploy manipulative techniques, exploit human vulnerabilities, or are used for social scoring (i.e. classifying people based on their social behaviour, socio-economic status, or personal characteristics).

The latest Proposal further expands the bans to cover intrusive and discriminatory uses of AI, such as biometric surveillance, emotion recognition, and predictive policing.

High-risk AI systems

The Proposal classifies as “high-risk” those AI systems that could have a negative impact on safety or fundamental rights. High-risk AI systems are not banned outright but are subject to extensive obligations. In general, these include (i) AI systems used in products covered by the Product Safety Directive (PSD), and (ii) AI systems falling into eight specific categories that must be registered in a new EU database, such as biometric identification and categorization of natural persons, management and operation of critical infrastructure, and education and training.

The latest Proposal even includes a kind of unfair-competition provision whereby certain unfair contractual terms concerning the supply of tools, services, components or processes that are used in or integrated into a high-risk AI system shall not be binding (Article 28a AI Act Proposal). This is intended to apply where a stronger bargaining position is abused. In addition, users of high-risk AI solutions will have to conduct a fundamental rights impact assessment, taking into account aspects such as potential negative impacts on marginalized groups and the environment (Article 29a AI Act Proposal).

Low-risk AI systems

Finally, low-risk AI systems, such as Generative AI, must at least meet certain transparency requirements, e.g. (i) disclose that their content was generated by AI, (ii) be designed to prevent the model from generating illegal content, and (iii) publish summaries of copyrighted data used for training. The latter requirement may well be a response to the wave of lawsuits that was the subject of our previous briefing.

In addition to the risk classification, all AI systems must comply with general principles, including that AI systems should be subject to human control and oversight, be technically robust and secure, and be developed and used in accordance with existing privacy and data protection laws. They must also meet a number of new, detailed technical requirements. In particular, Generative AI must be designed to be as energy efficient as possible, taking into account, for instance, waste generation.

National impact

After final adoption, the AI Act will be directly applicable in all Member States, leaving little room to manoeuvre at national level. Still, a lot of work lies ahead, as all national legislation will have to be brought in line with the new rules.

And more may come: the establishment of so-called “regulatory sandboxes” (Article 53 AI Act Proposal) – controlled environments set up by public authorities in which new AI systems can be tested for a limited period before being placed on the market – is still left to the discretion of the Member States. However, they shall “as a next step be made mandatory with established criteria” (Recital 71 AI Act Proposal).

A catch-all regulation?

The intention behind the AI Act is not to stifle innovation, but to promote the adoption of human-centered and trustworthy AI and to protect health, safety, fundamental rights and democracy from its harmful effects.

However, striking a balance between protecting users and promoting business opportunities for companies is no walk in the park for EU regulators, who seem intent on catching every conceivable risk in every AI system – existing and future alike.

There is some concern that the AI Act will become an overly broad and complex piece of regulation that will be difficult for companies to apply. Many of its provisions are already extremely detailed and technical; broad definitions combined with a set of detailed rules that do not take into account the particularities of each case could potentially lead to overburdened authorities and legal uncertainty.

While the AI Act Proposal barely touches on competition concerns with its provisions on unfair contractual terms for high-risk AI systems, no further plans are in sight for regulation on the general competition aspects of AI. This is surprising in light of the many other areas where specific AI legislation is on the way – notably the proposed AI Liability Directive and the proposed Product Liability Directive – as well as the Commission’s general aim to regulate EU digital markets.

BLOMSTEIN will continue to monitor and report on the development of the AI Act and all competition-related issues concerning AI. If you have any questions, please contact Max Klasse, Anna Huttenlauch, Jasmin Sujung Mayerl or BLOMSTEIN’s entire competition law team for advice.