AI colliding with competition?
Generative Artificial Intelligence (AI), which can be prompted to create wholly novel content, is a hot topic everywhere, from news and social media to policymakers and businesses. Image-generating AI models that produce (digital) paintings in the style of van Gogh or other famous artists in seconds, and text-generating AI models like ChatGPT that pass university exams with ease, are fascinating the internet community. In fact, many generative AI-based applications have been made available to the public in recent months, from machine translation to image and music generation. Yet this is just the beginning of a ground-breaking technology that will sooner rather than later be deployed across a wide spectrum of activities and become critical infrastructure for many businesses.
There is a sense of urgency among legislators to address the risks related to certain uses of such technologies. The AI Act proposal introduced by the European Commission on 21 April 2021 is expected to be voted on by the European Parliament before the end of spring. The Act aims to set a legal framework for secure, trustworthy and ethical AI, including mandatory requirements for high-risk AI models, classified as such based on their intended purpose, in particular where they pose risks to the health, safety and fundamental rights of persons. Also under discussion are the AI Liability Directive (AILD) and the revised Product Liability Directive (PLD) proposals introduced by the Commission on 28 September 2022. Both are meant to establish broader protection against harm caused by AI-based products or services.
While the legislative initiatives so far have addressed security concerns in the development of AI models and liability in case of damages, it is becoming a pressing question whether AI models also pose risks to competition, and whether current and planned regulation at EU and German level is well placed to deal with them.
Getty Images lawsuit – the first of a wave?
Particularly in the creative industry, tempers are heating up over Getty Images’ recent lawsuit against Stability AI, developer of the image-generating AI model Stable Diffusion, filed before the High Court of Justice in London on 3 February 2023. Getty Images takes the view that Stability AI unlawfully copied and processed copyrighted images from its stock image database and used them to train the Stable Diffusion model (see press release).
The background is that generative AI models such as Stable Diffusion are generally trained on extensive datasets. These datasets are usually created with data found by crawling, i.e. the automated search for suitable data on the internet, and scraping, i.e. extracting and storing this data, as illustrated in the sketch below.
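To make the mechanics concrete, the following is a minimal, hypothetical Python sketch of what crawling and scraping look like in practice. The URL and page structure are illustrative assumptions only; this is not a description of how any particular model’s training data was actually collected.

```python
# Minimal, illustrative sketch of crawling and scraping.
# Uses the third-party "requests" and "beautifulsoup4" packages.
import requests
from bs4 import BeautifulSoup

SEED_URL = "https://example.com/gallery"  # hypothetical starting page


def crawl_and_scrape(url: str) -> list[str]:
    """Fetch a page (crawling) and extract image URLs from it (scraping)."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # "Scraping": pull out the data of interest, here the image sources.
    return [img["src"] for img in soup.find_all("img", src=True)]


if __name__ == "__main__":
    for image_url in crawl_and_scrape(SEED_URL):
        print(image_url)  # a real pipeline would download and store these
```

At training-data scale, loops like this run across millions of pages, which is precisely why the provenance of the collected material becomes legally significant.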
Getty Images has apparently asked the Court to order Stability AI to abstain from using its images and is seeking damages, including Stability AI’s profits from the alleged infringement. While the pending lawsuit has a competition component, it focuses on copyright and trademark law infringements. However, it should be considered whether competition law provides an additional angle to the questions at stake.
Digital Competition Law – will AI be targeted?
The Digital Markets Act (DMA) aims to create a level playing field in the digital sector by addressing core platform services (CPS) with a gatekeeper function. As early as six months after their designation by the Commission, gatekeepers will have to comply with the DMA’s self-executing obligations and prohibitions, some of which will be further specified by the Commission on a case-by-case basis (see our recent briefing on the DMA). In parallel to the DMA, the Federal Cartel Office (FCO) intends to make extensive use of Sec. 19a of the Act against Restraints of Competition (ARC), a tool targeting anticompetitive behaviour related to gatekeeper positions in the digital sector (see our recent briefing on Sec. 19a ARC).
No company has yet been designated as a gatekeeper by the Commission (the DMA will only apply as of May 2023, and designations are not expected before early September 2023). In Germany, however, the FCO has already moved ahead and designated Alphabet (Google), Meta and Amazon as companies of paramount significance for competition across markets. Investigations into Microsoft’s market position were initiated on 28 March 2023 (see press release).
Will generative AI model providers qualify as gatekeepers?
At first glance, independent research labs like Midjourney may not be the first to come to mind as “gatekeepers”. But even though companies developing generative AI foundation models often come from humble beginnings and start as non-profits, their rapid growth and increasing influence across markets are undeniable. For example, in October 2022, Stability AI raised USD 101 million in funding and was valued at USD 1 billion in a seed round. In addition, and more importantly, established Big Tech players such as Alphabet (Google), Meta or Microsoft have already entered the “AI market” in full force by developing their own models or investing in existing developers of AI models.
In fact, three key inputs are crucial for the development of competitive generative AI foundation models: computational resources (computing capacity or processing power), good-quality data (volume and variety) and algorithmic innovation. The literature also points to a fourth key input for AI model tuning: human feedback, which scales with the user base. These variables make it clear that the development of such models presents extreme scale economies (near-zero marginal costs and high investment costs), data-driven advantages and strong network effects. The more access an AI model has to data, computing power and users, the more it will learn and the better the responses it can generate, attracting even more users and their data in a feedback loop (a stylised illustration follows below). The market is therefore highly prone to “tipping”. Big Tech players have a head start due to their existing computing capacity, access to large datasets and user base.
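The compounding nature of that feedback loop can be made visible with a deliberately stylised toy model. The sketch below is an illustrative assumption, not an economic analysis: two hypothetical competing models attract new users in proportion to the quality of their service, and quality is assumed to grow superlinearly with accumulated user data.

```python
# Toy simulation of the data feedback loop described above.
# All numbers are arbitrary illustrative assumptions; no real market data.
def simulate(users_a: float, users_b: float, rounds: int = 20) -> tuple[float, float]:
    data_a = data_b = 0.0
    for _ in range(rounds):
        data_a += users_a  # more users contribute more training data
        data_b += users_b
        # Increasing returns: quality share grows superlinearly with data.
        share_a = data_a ** 2 / (data_a ** 2 + data_b ** 2)
        # New users split in proportion to relative quality.
        users_a += 10 * share_a
        users_b += 10 * (1 - share_a)
    return users_a, users_b


# A small initial head start (55 vs 45 users) compounds round after round:
print(simulate(55.0, 45.0))  # the leader's advantage keeps widening
```

Under these assumptions, a modest initial advantage in users and data snowballs over time, which is the dynamic competition lawyers describe as a market “tipping”.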
In addition, as underscored in the DMA’s recitals, high market capitalisation and a high ratio of equity value over profit can be indicators of the leveraging potential of certain businesses and of their capacity to tip the market in their favour.
Consequently, even if it is unclear who will win the current race to develop the best generative AI foundation models, those that succeed may well end up as providers of critical infrastructure for users in the AI ecosystem, particularly for developers of AI-driven applications. To illustrate, several companies have publicly announced that they are integrating their virtual assistants or search engines with generative AI models such as ChatGPT and Bard AI.
But what AI-related conduct could be targeted?
The full commercial potential of generative AI models remains wide open given the very early stage of the technology and its speed of change. Any question about potential unfair competitive practices is therefore also rather speculative at this stage. However, there is no doubt that AI applications will rapidly be integrated with and deployed into a variety of industry sectors. Companies providing gateways to the generative AI ecosystem will likely hold strong market positions and/or strengthen their market positions upstream and/or downstream in the AI value chain (e.g. search engines and virtual assistants).
The European Commission has stressed its objective of protecting the contestability of the digital sector and preventing core platform services from entrenching their positions. Enforcers are therefore ready to impose a set of obligations to prevent leveraging as well as to facilitate switching and multi-homing. Under the DMA, potential gatekeepers of ecosystems relevant for AI-driven business models may face obligations to (i) guarantee interoperability, (ii) ensure access to data provided or generated by business users, (iii) abstain from self-preferencing (see our recent briefing on self-preferencing), (iv) abstain from using data provided by business users to improve or develop their own AI models and AI-driven services, and (v) abstain from cross-service use of personal end-user data.
Apart from the above-mentioned digital competition law provisions and “classic” dominance abuse provisions, the (subsidiary) German Act against Unfair Competition (UWG), which generally prohibits the deliberate obstruction of competitors (Sec. 4 no. 4 UWG), may also come into play. For instance, the German Federal Court of Justice ruled in 2014 that scraping can qualify as deliberate obstruction where technical protection safeguards against automated queries are circumvented; conversely, merely violating a website’s terms of use is not sufficient (see the illustrative sketch below).
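As a purely technical illustration of what a safeguard against automated queries can look like, the following minimal Python sketch checks a site’s robots.txt before fetching a page. The URL and user-agent string are hypothetical, and whether ignoring such a file would meet the Federal Court of Justice’s threshold of circumventing a “technical protection safeguard” is a separate legal question this sketch does not answer.

```python
# Hedged sketch: honouring robots.txt, one common technical signal
# against automated queries. Illustrative only; URL and user agent
# are hypothetical. Uses only the Python standard library.
from urllib.robotparser import RobotFileParser

USER_AGENT = "example-research-bot"          # hypothetical crawler identity
TARGET_URL = "https://example.com/images/1"  # hypothetical resource

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetches and parses the site's robots.txt

if parser.can_fetch(USER_AGENT, TARGET_URL):
    print("robots.txt permits fetching this URL with this user agent")
else:
    print("robots.txt disallows automated access; a compliant crawler stops here")
```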
Outlook
Given the rapid development and growing utility of AI models in a range of business models, it remains to be seen whether the existing legal provisions at EU and national level are up to the challenge. It is not yet clear to what extent the DMA or Sec. 19a ARC will apply, or whether they would require an update to specifically address competition concerns evolving in the AI ecosystem. However, this is without prejudice to the applicability of Art. 101 TFEU / Sec. 1 ARC or Art. 102 TFEU / Sec. 19 ARC.
While regulatory plans at EU level are still focused on safety and liability aspects and aim at building trust in AI models, it is high time to also take a critical look at AI model providers from a competition law perspective. Since legislative processes are by nature lengthy, it remains uncertain whether regulators will be quick enough to create a legal framework that covers all the challenges posed by AI models.
BLOMSTEIN will continue to monitor digital competition trends in Germany and across Europe and keep you informed. If you have any questions on European or German competition law as it relates to AI, please contact Anna Huttenlauch, Jasmin Sujung Mayerl, Carolina Vidal or BLOMSTEIN’s entire competition law team for advice.