Prevent Big Tech’s takeover of genAI
Civil society tells EU Commission to act fast
This week, a group of civil society organisations, including SOMO, asked the European Commission to intervene swiftly to prevent Big Tech from dominating the generative artificial intelligence (genAI) market. Strong competition policy and enforcement are key to ensuring that genAI works for people and democracy.
GenAI refers to machine-learning models trained to create new content (text, images or other media) based on user input. Its rollout has been much faster than that of previous technologies: OpenAI’s ChatGPT reached one million users in just five days.
The potential of AI technologies may be the biggest challenge – or opportunity – for Big Tech companies since they disrupted traditional industries twenty years ago. Back then, Big Tech overthrew established business models thanks to the characteristics of digital markets. Regulators acted too late as Big Tech took over growing sections of the digital economy, leaving it with monopolistic giants that control supply chains and markets.
Today, the cost of late or ineffective enforcement poses even bigger threats to the public because the genAI value chain is already highly concentrated. Companies like Microsoft, Google, Nvidia and Amazon enjoy significant influence over chips and cloud computing, which are essential for the development of genAI. Thanks to their vertically integrated business models, they can entrench this dominance across other layers of the genAI value chain, such as foundation models and applications.
The current struggle to rein in Big Tech shows that preventing the further entrenchment of Big Tech across the genAI value chain is crucial. Repeating historic regulatory mistakes would not only squander the opportunity for more competitive markets but also extend Big Tech’s grasp over AI.
Bottlenecks and barriers to genAI
The CSOs that signed the joint submission identify three main barriers to entry in the AI stack, each essential to the development of genAI: data, computation and talent.
At the data level, training large models relies on open datasets as well as specialised datasets – on finance or health, for example – that are crucial for developing specialised models. Specialised datasets are usually not available for the public to access and use. Here, Big Tech stands out, as its data collection spans entire ecosystems of sensitive data.
Moreover, access to better and more data results in the development of better products, attracting users and generating more data in turn. This self-reinforcing data dynamic solidifies Big Tech’s position since they already have huge datasets and user bases. Companies lacking this data advantage face severe entry barriers.
At the computation level, Big Tech companies control the means – the server farms, data centres and microchips – required to train foundation models. This has led to a dynamic where these companies provide computing capacity to prospective AI labs in exchange for equity or exclusive licensing of the resulting models. Microsoft’s ‘partnerships’ with OpenAI and Mistral, and Amazon’s and Google’s investments in Anthropic, are illustrative examples.
Finally, the talent needed to train, develop and build generative AI models is scarce. This scarcity leads to high fixed hiring costs, which further solidifies the position of companies with substantial cash reserves.
Harm from market concentration
High levels of dependency and concentration in genAI are likely to create public harms. The genAI value chain is highly dependent on upstream layers like cloud computing and chips. This dependency may create competition problems, hurting small businesses and users. To prevent this, competition authorities must act fast and effectively, blocking Big Tech companies from favouring their own services (or those of their partners) over competitors’.
Nevertheless, competition enforcement should not limit itself to exclusionary conduct but should also cover exploitative conduct, to prevent Big Tech firms from imposing unfair trading conditions. Lastly, we point out the significance of interoperability in digital markets. Incumbents have incentives to close their ecosystems to downstream competitors to protect their moats. As a result, the cost of switching to alternatives increases, and people and businesses are locked in. This practice excludes competitors and neutralises potential competitive forces.
Open versus closed systems
Open-source AI is important for increasing competitiveness, transparency and diversity in the development and distribution of AI. Large companies, even those claiming openness, can pose threats to open-source development. For instance, while Mistral provides its smaller models on an open-source basis, its most capable model, Mistral Large, is only available through an API on the Microsoft Azure platform or Mistral’s own ‘La Plateforme’.
Regulators should focus on the characteristics of these ecosystems, especially whether pro-user policies turn into exploitative ones once users are locked in. Promoting inclusive business models is essential to creating a competitive AI ecosystem.
Suggestions
To curb the growing concentration in the genAI market, we call for swift regulatory intervention. Specifically, we urge the European Commission to leverage its competition powers: the EU Merger Regulation, Articles 101 and 102 TFEU, and the Digital Markets Act. Additionally, we propose enhancing existing tools and creating new ones to address evolving AI challenges effectively:
- A new market investigation tool: a broader market investigation tool is needed, modelled on similar regimes elsewhere – including in the UK and Germany – as well as the “New Competition Tool” previously considered by the Commission.
- Upgrading the Digital Markets Act: It is concerning that dominant cloud providers such as Microsoft Azure and Amazon Web Services have not been designated as core platform services under the DMA. Further, the inclusion of AI foundation models in the DMA’s list of core platform services should be considered.
- Investigating anti-competitive partnerships: The current legislation, the EU Merger Regulation, falls short because its decisive-influence test does not capture minority shareholdings. The European Commission should be able to scrutinise anticompetitive so-called partnerships and investments, just as its counterparts in the US, the UK and Member States such as Austria and Germany can.
- A legal presumption against acquisitions by dominant firms: We also encourage shifting the burden of proof in merger reviews of dominant firms’ mergers and acquisitions (M&A). Calculating the impact of a merger, especially in the digital era where network effects prevail and users are locked in, requires extensive resources. Big Tech firms are better positioned to conduct such analyses, thanks to their extensive financial resources and knowledge of their own internal operations.
We will continue to advance the public interest, promoting the democratisation of technology and preventing the consolidation of power in the hands of big companies.