AI Act Proposal – a catch-all regulation?

Artificial Intelligence (AI) is advancing so quickly that experts around the world are warning against the risks of unregulated deployment. Meanwhile, the European Commission (Commission) is trying to catch up by accelerating its plans to regulate AI. After the Commission published its first draft of the AI Act Proposal in April 2021, the Council adopted its common position (General Approach) on 6 December 2022. Since then, the Proposal has made great strides: the committee work in the European Parliament was completed on 11 May 2023, and the Proposal was adopted only a month later with a clear majority: 499 votes in favor, 28 against and 93 abstentions. Next up are the trilogue negotiations between the European Parliament, the Commission and the Council. If this pace is maintained, the AI Act could be passed before the end of the year, which would make it the world’s first comprehensive AI law.

In this briefing, we take a closer look at the key provisions and recent changes to the AI Act Proposal.

New legal definition of AI systems

One of the key challenges for EU regulators has been the rapid pace of technological development of AI, which generally makes it difficult to find a consistent, technology-neutral, and future-proof definition.

While a previous definition referred to extensive lists of underlying techniques and approaches, the latest Proposal opted for a concise and broad definition: “AI systems” are “machine-based systems that are designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments”.

As “Generative AI” models such as ChatGPT or Midjourney were not specifically addressed in the first Proposal but have become impossible to ignore, the latest Proposal includes a specific definition and corresponding new obligations (see below). Generative AI is defined as foundation models used in AI systems that are “specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video” (Article 28b AI Act Proposal).

Risk-based approach

The AI Act Proposal follows a risk-based approach. It imposes prohibitions and obligations on providers and deployers of AI systems, depending on the level of risk that AI may pose.

Prohibited AI systems

The Proposal outright prohibits AI systems that pose an unacceptable level of risk to human safety, in particular systems that deploy manipulative techniques, exploit human vulnerabilities, or are used for social scoring (i.e. classifying people based on their social behaviour, socio-economic status, or personal characteristics).

The latest Proposal further expands these bans to include prohibitions on intrusive and discriminatory uses of AI, such as biometric surveillance, emotion recognition, and predictive policing.

High-risk AI systems

The Proposal classifies as “high-risk” those AI systems that could have a negative impact on safety or fundamental rights. High-risk AI systems are not banned entirely but are subject to extensive obligations. In general, these include (i) AI systems used in products covered by the Product Safety Directive (PSD), and (ii) AI systems falling into eight specific categories that must be registered in a new EU database, such as biometric identification and categorisation of natural persons, management and operation of critical infrastructure, and education and training.

The latest Proposal even includes a kind of unfair competition provision: certain unfair contractual terms concerning the supply of tools, services, components or processes that are used or integrated in a high-risk AI system shall not be binding (Article 28a AI Act Proposal). This applies where a party abuses its stronger bargaining position. In addition, users of high-risk AI solutions will have to conduct a fundamental rights impact assessment, taking into account aspects such as potential negative impacts on marginalized groups and the environment (Article 29a AI Act Proposal).

Low-risk AI systems

Finally, low-risk AI systems, such as Generative AI, must at least meet certain transparency requirements, e.g. (i) disclose that their content was generated by AI, (ii) be designed to prevent the model from generating illegal content, and (iii) publish summaries of copyrighted data used for training. The latter may well be a response to the wave of lawsuits covered in our previous briefing.

In addition to the risk classification, all AI systems must comply with general principles, including that AI systems should be subject to human control and oversight, be technically robust and secure, and be developed and used in accordance with existing privacy and data protection laws. They must also meet a number of new, detailed technical requirements. In particular, Generative AI must be designed to be as energy-efficient as possible, taking into account, for instance, waste generation.

National impact

After final adoption, the AI Act will be directly applicable in all Member States, leaving little room to manoeuvre at national level. A lot of work lies ahead, as all national legislation will have to be brought into line with the new rules.

And more may come: the establishment of so-called “regulatory sandboxes” (Article 53 AI Act Proposal) – controlled environments set up by public authorities to test new AI systems for a limited period before they are brought to market – is still left to the discretion of the Member States. However, they shall “as a next step be made mandatory with established criteria” (Recital 71 AI Act Proposal).

A catch-all regulation?

The intention behind the AI Act is not to stifle innovation, but to promote the adoption of human-centered and trustworthy AI and to protect health, safety, fundamental rights and democracy from its harmful effects.

However, striking a balance between protecting users and promoting business opportunities for companies is no walk in the park for EU regulators, who seem intent on catching every conceivable risk in every AI system – existing and future alike.

There is some concern that the AI Act will become an overly broad and complex piece of regulation that will be difficult for companies to apply. Many of its provisions are already extremely detailed and technical; broad definitions combined with a set of detailed rules that do not take into account the particularities of each case could potentially lead to overburdened authorities and legal uncertainty.

While the AI Act Proposal barely touches on competition concerns with its provisions on unfair contractual terms for high-risk AI systems, no further plans for regulating the general competition aspects of AI are in sight. This is surprising in light of the many other areas where specific AI legislation is on the way – notably the proposed AI Liability Directive and the proposed Product Liability Directive – as well as the Commission’s general aim to regulate EU digital markets.

BLOMSTEIN will continue to monitor and inform about the development of the AI Act and all competition-related issues concerning AI. If you have any questions, please contact Max Klasse, Anna Huttenlauch, Jasmin Sujung Mayerl and BLOMSTEIN’s entire competition law team for advice.