Abstract
Platform governance scholarship commonly derives the role of the state from its actions as a regulator of platforms: a rule-setter that sets limits and restricts their activities. This article argues that three additional state roles enable and constrain the agency of states to regulate platforms: facilitator, buyer, and producer. Using the EU’s Artificial Intelligence Act as a case study, the article asks: How do different state roles in platform governance shape AI’s regulatory geographies? It answers this research question by outlining two policy dilemmas between those four state roles. First, the EU’s ambition to act as a facilitator of digital markets constrains its scope of interventions as a regulator of platforms. Second, the EU’s deficits in acting as a producer of AI infrastructure exacerbate its dependency as a buyer of Big Tech offerings, especially cloud computing services. The article contends that dilemmas between state roles are not anomalies but defining features of state-platform relations. As generative AI systems gain sophistication, an understanding of how state roles relate to each other helps to navigate their complex governance regimes.
Introduction
Platform governance scholarship commonly derives the role of the state from its actions as a regulator of platforms: a rule-setter that sets limits and restricts their activities. For example, this includes investigations of ‘competition and market regulation, copyright and privacy regulation, and hate speech and misinformation directives’ (van Dijck, 2021: 13). A substantial body of research discusses the role of regulatory frameworks, such as the EU’s General Data Protection Regulation (GDPR), in reconfiguring state-platform relations (Flew et al., 2019; Gorwa, 2019). This article scrutinizes an exclusive focus on the state-as-regulator role in platform governance. Willingly or unwillingly, a state is never just a regulator when governing platforms; it also serves as a facilitator of markets, a buyer of resources, and a producer of capabilities. Those three underexplored state roles both enable and constrain the agency of states to regulate platforms because they provide the context for regulatory action to take place. The article argues that platform governance is not simply a matter of regulation; it is a matter of balancing mutually opposed state roles – a clash between political forces, economic interests, and strategic ambitions.
The governance of generative AI systems – which are defined by their ability to generate content such as text, video, images, or audio – exemplifies the need for such an alternative theory of platform governance. Given that most high-profile generative AI systems ‘are sold as-a-service through new, platform-based business models (…), global AI governance seems likely to become highly enmeshed with platform governance’ (Veale et al., 2023: 5–6). Governments around the world currently face the task of setting guardrails and safeguards for generative AI systems like ChatGPT. Italy, for example, temporarily banned ChatGPT on the grounds of user data privacy violations (Satariano, 2023). Canada’s Privacy Commissioner launched an investigation into OpenAI on similar grounds (Fraser, 2023). In contrast, other governments prioritize facilitative action to seize the perceived economic benefits of generative AI. The United Kingdom, for instance, set up a designated ‘Foundation Model Taskforce’, funded with a total of £900 million in taxpayer money. The UK government predicts that ‘such systems could triple national productivity growth rates’ (Department for Science, Innovation and Technology, 2023). Those examples signify a swiftly evolving array of policy strategies prompted by the increasing complexity of generative AI systems.
However, in the absence of a conceptual compass to interpret those volatile, short-term developments as empirical manifestations of distinguishable state roles in platform governance, AI’s regulatory geographies resemble an inconsistent spiderweb. I build on Peck and Phillips (2020: 89), who use the term ‘regulatory geographies of platform capitalism’ to study the ‘interactions between states, institutional and legal orders, and the unevenly developing modalities of the platform model’. When using the term ‘platforms’, I refer to industry-dominating providers of computational resources, particularly cloud infrastructure (Narayan, 2022). As political-economic and economic-geographical perspectives on the AI industry show, a handful of Big Tech platform companies – namely, Amazon, Google, and Microsoft – own the infrastructure that underpins real-world AI systems (Ferrari, 2023; Luitse and Denkena, 2021). Nonetheless, the impact that Big Tech’s infrastructural power (Birch and Bronson, 2022; Klinge et al., 2022) has on different state roles in platform governance, thereby shaping AI’s regulatory geographies, remains poorly understood. Analysing the EU’s Artificial Intelligence (AI) Act as a case study, this article asks: How do different state roles in platform governance shape AI’s regulatory geographies?
This research question implies that the article does not aim to enumerate those four state roles in a static sense but rather focuses on how the tensions between them shape regulatory action. The article answers this question by highlighting two policy dilemmas between four roles that the EU fulfils in platform governance: regulator, facilitator, buyer, and producer. First, the EU’s ambition to act as a facilitator of digital markets constrains its role as a regulator of platforms. Public investments into AI to seize its perceived effects on digital markets undermine simultaneous approaches to regulate Big Tech companies because investments subsidize these firms as dominant infrastructure providers. Second, the EU’s deficiencies in acting as a producer of AI exacerbate its dependency as a buyer of Big Tech offerings. As long as there are no viable state-funded alternatives to Big Tech’s proprietary AI infrastructure, states will continue to be reliant on the services provided by these firms. But rather than understanding those dilemmas as ephemeral anomalies that only apply to the case study of the EU’s AI Act, the article argues that those two tensions have a paradigmatic quality beyond the specificities of this particular case. For example, regardless of how sophisticated generative AI systems might become in future years, it is crucial to distinguish analytically between facilitative policy approaches (e.g. the UK’s AI investments as part of the Foundation Model Taskforce) and regulatory policy approaches (e.g. Italy’s temporary ChatGPT ban).
To structure this argument, the article first contextualizes its approach within research on state roles in economic geography, specifically global production networks scholarship. A particular emphasis is on the discontinuities of Big Tech platforms as opposed to other multinational companies. After substantiating the distinction between regulator, facilitator, producer, and buyer roles, the article explains its case study methodology. At the heart of the article is a juxtaposition of two intertwined policy dilemmas: regulator vs. facilitator and producer vs. buyer. Ultimately, the article concludes by reflecting on the implications of analysing the tensions between state roles in platform governance beyond the case of the EU’s AI Act. The argument that states simultaneously fulfil four roles vis-à-vis Big Tech platforms is pivotal not only for identifying points of regulatory leverage of states but also for problematizing their structural limitations.
State roles in global production networks
In recent years, a considerable amount of literature has been published on the role of the state in global production networks. This includes work on how to conceptualize the state and its roles (Horner, 2017; Mayer and Phillips, 2017; Smith, 2015) and a range of empirical case studies (Hughes et al., 2019; Lim, 2018). A common denominator of this work is a relational understanding of state governance, which holds that the repertoire of state actors is enabled and constrained by both (a) issues of political contestation and (b) the economic conditions of global production networks in which they are operating (Werner, 2020). A global production network denotes the ‘nexus of interconnected functions and operations through which goods and services are produced, distributed, and consumed’ (Hess, 2018: 2). Although some of the state-oriented scholarship in economic geography uses alternative terminology (e.g. global value chains or global supply chains), I deploy the notion of global production networks as an overarching term to describe the production, distribution, and consumption of goods and services.
In providing a helpful typology to schematize the different functions of state actors in global production networks, Horner (2017) differentiates between four state roles: facilitator, regulator, producer, and buyer. The facilitator role refers to efforts to help or assist firms in light of the challenges of the global economy, such as in the form of tax incentives. The regulator role denotes measures that control and curtail the activities of firms. The producer role relates to state-owned or state-controlled firms, while the buyer role encompasses practices of public procurement. These roles, however, are not mutually exclusive; they can be intertwined. As Horner (2017: 6) puts it:

States may adopt these roles in various combinations as they seek to take control of, or influence, production networks, based on considerations that may go beyond capturing greater economic value. These state roles are shaped by interactions with domestic and foreign firms, business associations, civil society, and even other states and supranational institutions. […] The roles can be interrelated as, for example, many states have played an enhanced enabling role in economic globalization by reducing the earlier regulatory capability.
According to Horner (2017: 4), acknowledging the existence of those four state roles, which are best understood as ideal types, can help to ‘demonstrate the limitations, as well as the possibilities, of state and policy agency in economic development’. But despite the mushrooming of this state-oriented research, Werner (2020: 2) notes that, compared to the analysis of corporate power, ‘much less attention has been paid to how politics and the state determine these dynamics and, in turn are reshaped by them’. Consequently, this section provides an overview of how the economic geography literature deals with the relations between different high-level state roles, laying the groundwork for the analysis of their tensions in platform governance.
Facilitating markets, regulating firms
States, rather than being passive actors, are ‘intentional architects’ (Mayer and Phillips, 2017: 135) in establishing the environments within which global production networks operate. Horner (2017: 7) relates the facilitative role to state-led activities that ‘promote, attract, and retain private investment, particularly that which may be footloose and has a considerable degree of locational choice as well as support local actors in order to participate in these chains and networks’. As common examples, he mentions trade agreements, intellectual property rights, export processing zones, incentives like tax breaks, and subsidies in key sectors such as pharmaceutical and agricultural industries. According to Mayer and Phillips (2017: 141), an impetus for such facilitative policies is a ‘growing tolerance for market concentration and the diminution of competition policy at national and international levels’. In other words, the argument is that states are more inclined to tolerate higher degrees of industry consolidation as long as some of the economic value captured by lead firms remains in their territory and they can benefit from it.
In terms of the digital economy, a clear example of the facilitator role can be found in the low corporate tax rate in Ireland, which has attracted more than 800 US companies employing a total of 180,000 people (Lyons, 2021), including the European subsidiaries of some of the biggest technology firms. Ireland’s tax incentives include generous tax credits to encourage investments in research and development, which is particularly attractive for technology companies. Although it remains contested whether boosting employment and the Irish economy or generating corporate tax revenue matters more in this context, this example illustrates Horner’s spotlight on footloose production networks that can, in theory, be grounded in any location. Historically, such facilitative policies to assist firms have also been associated with the political aftermath of the global financial crisis post-2008, with the state’s role as a global production network facilitator shifting ‘from the academic margins to the policy centre’ (Werner, 2020: 3).
Beyond incentivizing foreign companies, state facilitation can also refer to the strengthening of domestic lead firms in light of the challenges of the global economy. In their analysis of how the technology giant Tencent – the world’s largest video game vendor – is shaped by China’s regulatory context, Coe and Yang (2022: 317) argue that the company benefits from a combination of facilitative censorship policies and relatively loose competition regulation. Foreign game developers, for instance, are ‘restricted from entering China’s market directly and forced to cooperate with local publishers to distribute their games in China’ (Coe and Yang, 2022: 317). Consequently, Tencent consolidates its market power not only as a producer of games but also as a distributor of games that were produced elsewhere. This interplay between facilitative and regulatory policies leads us to probe how states may act as regulators in transnational production arrangements.
The regulatory role encompasses a state’s activities of ‘limiting and restricting economic activity within its boundaries’ in order to ‘protect various societal interests’ (Horner, 2017: 7). By taking into consideration the action of states as rule-setters, this role goes beyond perceiving states exclusively as facilitators that shape the geographies of production through financial and fiscal incentives. Examples of regulations include international trade policies, labour regulations, and environmental standards. Yet importantly, the rigour of such regulatory regimes and states’ enforcement capabilities are unevenly distributed across the world, especially when it comes to ensuring workers’ rights and safety. As Mayer and Phillips (2017: 143) rightly put it, ‘a great deal of production in contemporary value chains, particularly low-wage, labour-intensive work in agriculture, garments and other sectors takes place beyond the reach of regulatory coverage’.
Although matters of regulatory evasion are also key concerns of the burgeoning body of interdisciplinary work on platform governance (Gorwa, 2019; van Dijck et al., 2018), this research remains detached from economic-geographical scholarship on state roles. This paper aims to bridge that disconnect, given that Horner’s (2017) typology of state roles helps to analyse the state-platform nexus and its impact on digital sovereignty and industrial policy at national and supranational levels (Cobby, 2021). Noting the role of the European Commission as a key entity in platform regulation, van Dijck et al. (2018: 157) state that ‘regulation at the supranational level has proven to be most effective with regard to antitrust and privacy protection’. At the same time, they point to two significant hurdles that complicate the efficacy of platform regulation. On the one hand, states lack a nuanced vocabulary to accommodate the ways in which Big Tech companies may consolidate their cross-sectoral dominance, such as through algorithmic personalization and vertical integration (van Dijck et al., 2018: 158). On the other hand, many enforcement agencies are ill-equipped to ensure that those companies comply with regulatory fixes. As regulatory objects, complex platform architectures and proprietary algorithms pose veritable technological challenges for regulators.
Those hurdles are especially noteworthy since a key concern in the literature relates to ‘how the regulator role can be adapted to shape the distribution of rents or gains’ (Horner and Alford, 2019: 11) within geographies of production. Put differently, what is of interest here is how regulatory frameworks influence distributional outcomes and inequalities brought into being by the operations of Big Tech platforms. The analysis of the EU’s proposed AI Act as a case study fits neatly into this context, as it directly responds to Horner and Alford’s (2019) call for more research on the ramifications of digital policy frameworks (Kornelakis and Hublart, 2022). Before proceeding, however, the relationship between producer and buyer roles needs to be addressed.
Procuring services, producing infrastructure
While the producer role considers the establishment of state-owned enterprises with which states ‘take control of productive capacity in key strategic sectors (e.g., security and national resources)’ (Horner, 2017: 8), the buyer role refers to instances of public procurement from domestic or foreign corporations. Horner and Alford (2019: 12) cite estimates that state-owned businesses make up between 5 and 10 per cent of overall economic activity in the OECD region, with higher proportions in emerging economies. The average proportion of public procurement is between 11 and 14 per cent of GDP, according to a 2017 study across 89 countries. Despite their relevance, there is a dearth of empirical case studies on how producer and buyer roles manifest in practice. Werner (2020: 4) provides a plausible reason for this gap, arguing that most analyses ‘have tended to focus on consumer goods as opposed to strategic sectors such as energy, infrastructure and defence’ (sectors in which state ownership and public procurement practices are more common). Undoubtedly, platforms and AI are cross-sectoral issues that are of geopolitical relevance to governments around the world, illustrated by the mushrooming of national and supranational AI strategies (Cihon et al., 2020; Veale et al., 2023). Therefore, it is worth probing how economic geographers have empirically studied the state’s buyer and producer roles in other sectors.
In terms of the state-as-buyer role, Hughes et al. (2019) investigate ‘ethical’ purchasing practices in the UK public sector. They note that legal modifications to UK public procurement laws – which were adjusted in 2014 to account for sustainability and social factors more clearly as part of the contract bidding process – have accelerated the adoption of ethical codes for supply chains in procurement strategies. Nonetheless, they find that practices of ethical sourcing are even less advanced in the UK public sector than they are in consumer goods sectors, given the ‘low profile and hidden nature of so many of the materials used in public services, which provides less impetus for the public to trace their origins and biographies’ (Hughes et al., 2019: 12). In other words, compared to labour rights violations of well-known and high-profile companies such as fashion brands, consumer pressure is less of an issue in public sector procurement because most of these contracts and economic interactions remain hidden from the public eye. This public sector opacity is also highly relevant in the context of the public procurement of AI systems.
With respect to the state-as-producer role, Lim (2018) presents one of the few production network studies on how state-owned enterprises influence the geographies of production. Empirically, he analyses the acquisition of a Canadian energy company by a Chinese state-owned offshore oil and gas company. According to Lim, China’s rationale for acquiring a Western energy company was not simply the creation of economic value but rather the consolidation of geopolitical power by gaining technological know-how and ensuring domestic energy supply. Lim’s work in relating empirical transformations in production arrangements to wider geopolitical considerations underscores the unique analytical strength of the global production network framework for the purposes of this article: its wide-ranging conceptual scope. At the same time, it becomes clear that producer and buyer roles can be closely interrelated, given that ‘the state-as-buyer role is more often used to shore up domestic producers’ (Werner, 2020: 5), including state-owned or subsidized enterprises. In short, there may be less necessity for public procurement from extraterritorial suppliers if domestic producers can provide a sufficient supply for a strategically relevant infrastructural service or product – a key part of policy discourses about Europe’s ability to act independently in the digital world, branded as ‘European digital sovereignty’ (von der Leyen, 2020).
To sum up the scholarship on state roles in economic geography, the relevant literature usefully underlines that states do not affect the geographies of production in one particular deterministic or unidirectional way. Instead, there is a multiplicity of state roles whose empirical manifestations are highly context-dependent and may vary across sectoral and national contexts. However, although scholars have begun to stress how the relations and tensions between these four roles produce uneven distributional outcomes (Alford and Phillips, 2018), there is a pressing need for more research, especially on burgeoning digital policy frameworks. Building on this groundwork, the next section contextualizes and explains the article’s case study analysis.
Case study: The EU’s AI Act and state roles in platform governance
This section introduces the EU’s Artificial Intelligence Act as a case study to examine how different state roles in platform governance shape AI’s regulatory geographies. It is pivotal to begin by tracing its history. In 2014, the then-newly elected President of the European Commission, Jean-Claude Juncker, presented his agenda to members of the European Parliament. In this speech, Juncker (2014) stressed his objective to implement ‘ambitious legislative steps towards a connected Digital Single Market’ as one of his strategic priorities during the first six months of his mandate:

I believe that we must make much better use of the great opportunities offered by digital technologies, which know no borders. To do so, we will need to have the courage to break down national silos in telecoms regulation, in copyright and data protection legislation, in the management of radio waves and in the application of competition law. […] We can create a fair level playing field where all companies offering their goods or services in the European Union are subject to the same data protection and consumer rules, regardless of where their server is based.
Geographically speaking, Juncker’s last sentence is crucial: the fact that a server or a data centre might not be located within the territorial borders of the EU does not immunize a company from regulatory action by the EU. Nearly a year later, Juncker’s ambitious vision was moulded into a comprehensive strategy document, which defined the Digital Single Market as one in which ‘individuals and businesses can seamlessly access and exercise online activities under conditions of fair competition, and a high level of consumer and personal data protection, irrespective of their nationality or place of residence’ (European Commission, 2015: 3, emphasis added). In the course of Juncker’s presidency between 2014 and 2019, the EU approved a total of 28 legal acts concerning the facilitation of the Digital Single Market (Cini and Czulno, 2022). The EU’s Digital Single Market Strategy is an overarching framework for recent digital policy initiatives, encompassing the Digital Services Act, the Digital Markets Act, and the AI Act.
Unlike other regulatory frameworks, such as the Digital Markets Act, the EU’s Artificial Intelligence Act is not restricted to particular companies. Rather, it applies to all sectors, with the exception of AI systems that were ‘exclusively developed or used for military purposes’ (European Commission, 2021a: 20). At the heart of the proposal for a regulatory framework is a risk-based approach that categorizes AI systems into four granular levels of product safety-related risk: unacceptable risk, high risk, limited risk, and minimal risk. While there are no additional legal obligations for providers of AI systems that are classified as posing minimal risk, the AI Act proposes to impose specific transparency obligations on AI systems that fall in the category of limited risk. Regarding high-risk AI systems, the European Commission (2021b) suggests the introduction of mandatory requirements such as human oversight and ensuring the quality of training datasets. Systems that are seen as posing unacceptable risk, such as facial recognition software, would be banned entirely. As the European Commission (2021b: 1) explains:

The uptake of AI systems has a strong potential to bring societal benefits, economic growth and enhance EU innovation and global competitiveness. However, [...] certain AI systems may create new risks related to user safety and fundamental rights. This leads to legal uncertainty for companies and potentially slower uptake of AI technologies by businesses and citizens, due to the lack of trust. Disparate regulatory responses by national authorities would risk fragmenting the internal market.
There were no particular provisions for generative AI systems in the European Commission’s initial proposal in April 2021. However, the release of ChatGPT played a role in prompting a change. Although negotiations are still ongoing as of the time this article is being finalized (Bertuzzi, 2023), the most recent draft of the European Parliament (2023) includes specific requirements for generative AI providers such as OpenAI. These rules encompass a range of obligations, such as the disclosure of AI-generated content, preventing systems from generating illegal material, and the publication of summaries that detail copyrighted data used for training. Because of the designation of generative AI systems as specific regulatory objects, Sam Altman, the CEO of OpenAI – the company behind ChatGPT – discussed the possibility of pulling out of the EU’s Digital Single Market if the firm cannot comply with the AI Act (Perrigo, 2023).
To analyse such empirical developments as expressions of tensions between distinguishable state roles in platform governance, the article uses a mix of document analysis (e.g. AI policy documents, whitepapers, and media articles) and event observations (e.g. policy workshops and panel discussions). The EU’s AI Act serves as the main empirical entry point because of its quality as a paradigmatic case study. According to Pavlich (2010: 645), such a paradigmatic case ‘involves placing an exemplar alongside a phenomenon; by virtue of so placing, it shows or reveals key elements of that phenomenon’. Another term for paradigmatic cases is instrumental cases, which can be contrasted with intrinsic cases. According to Stake (2005: 445), the intrinsic case study is ‘not undertaken primarily because the case represents other cases […] but instead because, in all its particularity and ordinariness, this case itself is of interest’. Conversely, the instrumental case ‘plays a supportive role, and it facilitates our understanding of something else’ (Stake, 2005: 445). The analysis of the EU’s AI Act as part of its Digital Single Market Strategy helps to facilitate an understanding of state roles in platform governance and AI’s regulatory geographies. Given that the AI Act is widely perceived as a blueprint for AI regulation in other countries – a dynamic also labelled the ‘Brussels effect’ – it entails a paradigmatic (or instrumental) case quality.
Therefore, the main aim of this article is to offer a conceptual contribution about the tensions between state roles in platform governance, rather than an in-depth analysis of the AI Act as a proposed regulatory framework. The following two sections discuss two interconnected dilemmas between state roles: regulator vs. facilitator and producer vs. buyer.
Facilitating digital markets vs. regulating Big Tech platforms
The first policy dilemma is that the EU’s goal to act as a facilitator of digital markets limits its scope of interventions as a regulator of Big Tech platforms. Increased public investments into AI, intended to harness AI’s perceived positive impacts on the EU’s Digital Single Market, undermine efforts to rein in Big Tech platforms because state investments subsidize these firms as dominant infrastructure providers on top of which AI systems are produced.
A quote by Brando Benifei, the European Parliament’s lead rapporteur on the Artificial Intelligence Act, encapsulates this dilemma. As lead rapporteur, Benifei’s task is to analyse the legislative proposal and draft a report on it, confer with experts in the relevant area and individuals who may be impacted, and ‘propose the political line to be followed’ (European Parliament, 2006). When Benifei was asked about what can be done to support companies in the EU in terms of the adoption of AI in light of global competition, he provided the following response (Olivi, 2022):

We need to put enough investment in, because it cannot be only about regulation. We need to have public and private investment, and we need the regulation to also be helpful to support investment. It is also important that we look at the issue of our digital sovereignty and how we develop our model for AI and the digital space, because that’s a part of how we can be a strong Europe in a difficult world.
Benifei’s remarks open up the study of how facilitative state policies as part of the EU’s Digital Single Market Strategy (e.g. fostering public and private investment) stand in opposition to regulatory state policies (e.g. setting up mandatory requirements for providers of generative AI systems). In this context, it is pivotal to highlight that recent policy developments regarding AI governance in the EU did not emerge out of a political vacuum. Instead, emerging frameworks such as the Digital Services Act, the Digital Markets Act, and the Artificial Intelligence Act are embedded in a decades-long ambition of facilitating a Digital Single Market for digital services and products. The Digital Single Market is often positioned as a way to establish ‘European digital sovereignty’ because it includes ‘protective mechanisms and offensive tools to foster digital innovation (including in cooperation with non-EU companies)’ (EPRS, 2020: 1). According to von der Leyen (2020), President of the European Commission, AI is a key strategic frontier in the overarching political quest to achieve European digital sovereignty. Notwithstanding, von der Leyen also made clear that the European Commission’s ‘aim is not more regulation, but practical safeguards, accountability and the possibility of human intervention in case of danger or disputes’.
At first glance, this juxtaposition appears contradictory. How can there be new protective mechanisms without more regulation? The Digital Markets Act, for example, introduces rules for ‘gatekeepers’ in digital markets, aimed at preventing them ‘from imposing unfair conditions on businesses and end users and at ensuring the openness of important digital services’ (European Commission, 2022). Whether or not a platform falls under the scope of the Digital Markets Act depends on its turnover in the European Economic Area (equal to or above €7.5 billion), market capitalization (at least €75 billion), as well as the number of users (more than 45 million monthly active users in the EU). Given those criteria, it is no surprise that American commentators not only exhibited scepticism about those regulatory changes but also saw them as European protectionism. Charlene Barshefsky (2020), a former United States Trade Representative, laments that the European Commission plans to ‘handicap foreign companies and use regulation and funding to promote friendly, compliant local competitors’. Ostensibly, the EU does so by setting up ‘discriminatory digital services taxes […] rigged competition laws […] unjustified barriers for foreign AI applications […] massive subsidies for a “European Cloud”’ – measures that should be ended in favour of ‘greater regulatory co-operation with the US’ to push back against Chinese protectionism.
Democratic discourse, according to the critical political theorist Chantal Mouffe (2000), by definition requires antagonisms: a ‘we’ needs a ‘them’ to justify its existence. For Mouffe (2000: 69), this means ‘establishing a frontier, defining an “enemy”’. Those antagonistic dynamics are at play when different actors leverage the idea of digital sovereignty as an ideological umbrella to achieve their interests. Without the dominance of a handful of American and Chinese lead firms in digital markets, there would not be a need for a regulatory instrument relying on a ‘gatekeeper’ categorization in the first place. For analysing AI’s regulatory geographies, it is key to consider the fact that a handful of technology giants control the infrastructure that underpins AI systems (Ferrari, 2023; Luitse and Denkena, 2021; Narayan, 2022). Because the initial training and day-to-day operations of generative AI systems are computationally intensive processes, they require processing power that is highly concentrated in the hands of Big Tech companies. Although data about industry structures vary depending on measurement tools, market research data provides an entry point. The research firm Gartner (2021), for example, analysed the market shares of ‘infrastructure-as-a-service’ providers, which offer standardized, highly automated computing resources, storage, and networking to customers. The data shows that three American firms (Amazon, Microsoft, and Google) and two Chinese firms (Alibaba and Huawei) dominate the market. Amazon holds the largest share (40.8%, more than two-fifths of the worldwide market), followed by Microsoft (19.7%), Alibaba (9.5%), and Google (6.1%).
As long as those firms own the computational resources without which it would not be possible to develop and deploy state-of-the-art (generative) AI systems in the first place, the EU’s public and private investments will subsidize those firms and thereby further compound their infrastructural power. In its Coordinated Plan on Artificial Intelligence, the European Commission (2018: 2) lays out its goal to invest at least EUR 1 billion into AI per year between 2021 and 2027 and ‘gradually increase public and private investment in AI to a total of EUR 20 billion per year over the course of this decade’. Public and private AI investments appear to be approaching that objective. According to the European Commission’s Joint Research Centre (2021: 3), in 2019, the EU invested ‘between EUR 7.9 billion and EUR 9 billion in AI’, which corresponds to ‘40–45% of the annual investment target of EUR 20 billion to be reached by 2030’. According to those estimates, approximately half of those investments related to the compensation and training of staff (52.59%), while more than a quarter (30.51%) was used for investments in AI-related software, hardware, equipment, and data. Expenditures on AI-related research and development made up 10.11%. The public sector (e.g. universities, schools, and hospitals) accounted for 41% of the EU’s AI investments, predominantly in the form of outlays in education and the adoption of AI technologies.
However, a key methodological limitation is that there is a lack of reliable data to trace the flows of investments from governments and public institutions (e.g. universities and schools) to providers of cloud computing infrastructure. As the authors of one of the few studies of how public money shapes the future directions of AI write, ‘opacity is characteristic throughout the funding ecosystem, from the design of programmes, to the allocation of funds, to the evaluation of outcomes’ (European AI & Society Fund, 2023). In other words, the study of the EU’s public investments in AI lacks accessible data on funding allocation and comprehensive reporting, making it challenging to trace them. Although the total amount of yearly investments is known, public data about proportional distributions remain at a fairly general level. For example, the EU’s Joint Research Centre (2021: 5) uses a broad definition of AI investments as ‘expenditures on labour and skills as well as tangible and intangible capital assets incurred by public and private organizations to develop and implement AI’. This lack of granular detail makes it hard to pinpoint the subsidization of Big Tech companies by public money quantitatively, complicating any attempt to measure the EU’s ‘regulatory role in relation to distributional outcomes’ (Horner and Alford, 2019: 11) within digital markets mediated by platforms.
Given its prominent role in the overall picture of EU AI investments, the higher education sector serves as an entry point to substantiate the dilemma between the EU’s regulator and facilitator roles in platform governance. As Williamson (2021: 63) writes, ‘digital platforms create new dependencies for public universities on the private for-profit infrastructures that constitute platform capitalism […] by fusing an ever-increasing array of higher education functions and tasks to proprietary software systems, code, and algorithms’. This array includes freely available software libraries for the development of machine learning systems that often serve as enticing ‘on-ramps’ leading users into the proprietary infrastructures of major Big Tech platforms (Dyer-Witheford et al., 2019). For instance, Google’s TensorFlow is widely used by universities to train students in machine learning skills, acting as a tool for the company to set standards for research and development (Luchs et al., 2023). Beyond this exertion of soft power by shaping cultural norms, TensorFlow also helps to expand Google’s infrastructural power, as it is inextricably integrated with the company’s paid cloud offerings, such as data storage and AI computing capabilities. More sector-specific research is needed on how public investments in the higher education sector indirectly subsidize Big Tech platforms and what impact those dependencies have on the capacity of states to regulate platforms.
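To illustrate this integration, consider a minimal sketch of a routine TensorFlow training workflow; the bucket and TPU names below are hypothetical placeholders, and the snippet is illustrative rather than drawn from any specific curriculum:

    # Illustrative sketch (hypothetical resource names): how the freely
    # available TensorFlow library channels users toward Google's paid
    # cloud offerings.
    import tensorflow as tf

    # Training data is read directly from Google Cloud Storage, a paid
    # service; the 'gs://' scheme is supported natively by TensorFlow.
    dataset = tf.data.TFRecordDataset("gs://hypothetical-bucket/train.tfrecord")

    # Distributing training across Cloud TPUs, Google's proprietary
    # accelerators, likewise presupposes renting Google's infrastructure.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="hypothetical-tpu")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    with strategy.scope():
        # Any model defined here is trained on rented Google hardware.
        model = tf.keras.Sequential([tf.keras.layers.Dense(1)])

The library itself costs nothing; the data storage and compute it defaults to are metered Google services – precisely the ‘on-ramp’ dynamic described above.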
This section has argued that the facilitation of digital markets and the regulation of Big Tech platforms are mutually conditioning state roles: they enable and constrain each other. This argument motivates more detailed investigations of the subsidization of infrastructure providers by public institutions across sectors and of how it constrains the ability of states to regulate those infrastructure providers. As the higher education sector illustrates, the tensions between facilitator and regulator roles in platform governance are not only of a monetary or financial nature; they also include matters of infrastructural power, given that Big Tech platforms offer some of their services on an unpaid basis. As the next section demonstrates, the EU’s buyer and producer roles further complicate the study of AI’s regulatory geographies. In examining them, the article aims to provide more specificity about the quintessential infrastructural dependencies of the public sector on the cloud computing services of Big Tech companies in the context of AI development.
Procuring Big Tech services vs. producing AI infrastructure
The second policy dilemma is that the EU’s deficiencies in acting as a producer of AI exacerbate its dependency as a buyer of Big Tech offerings. A fruitful starting point for this second dilemma is to return to Charlene Barshefsky, the former United States Trade Representative. As Barshefsky (2020) stresses, a key feature of the EU’s alleged shift towards digital protectionism is its objective to implement ‘massive subsidies for a “European Cloud”’. More specifically, the European Commission ‘proposes to invest €4bn-€6bn in cloud infrastructure to store and process data in Europe and to support European cloud providers’ – a proposal that was allegedly developed by ‘leading French and German digital companies’. Either intentionally or accidentally, Barshefsky’s quote presents two separate initiatives as a single approach. It is necessary to shine a spotlight on what these initiatives are and how they are related to each other.
First, what Barshefsky refers to as the subsidization of a ‘European Cloud’ can be identified as a declaration to create the ‘European Alliance on Industrial Data and Cloud’ that was signed by 27 member states in October 2020. Echoing the findings of Gartner (2021), the member states identify a strong tendency toward concentration in the cloud computing industry, stressing that the ‘public cloud infrastructure market is converging globally around four large non-European players’ (European Commission, 2020a: 1). While the declaration does not specify who those players are, it includes a reference to market research data that identifies Amazon, Google, and Microsoft as well as Alibaba as the dominant providers of cloud infrastructure. To push back against the market dominance of American and Chinese firms, the signatories agreed to foster ‘the emergence of a resilient and competitive European supply for the public and private sector needs of highly trusted, secure, interoperable, and energy-efficient cloud infrastructure and services’ (European Commission, 2020a: 2). Big Tech companies were even seen as prototypes of how this objective could be achieved. As Peter Altmaier, Germany’s Minister for Economic Affairs and Energy at that time and one of the signatories of the declaration, told Politico journalists: ‘In order to achieve digital sovereignty, we need to start approaching data processing the way major American and Chinese companies – the hyper-scalers – approach it’ (Heikkilä and Delcker, 2020).
Recapitulating Werner’s (2020: 4) point that the empirical manifestations of the state-as-producer and state-as-buyer roles are most prevalent ‘in strategic sectors such as energy, infrastructure, and defence’, this aim to mirror American and Chinese firms is an intriguing pattern. However, it would be too simplistic to make a case that the EU’s objective is to simply ‘shore up domestic producers’ (Werner, 2020: 5) by developing public procurement strategies for cloud infrastructure providers. Instead, it is paramount to consider the idiosyncrasies of the EU as a layered and complex supranational state actor that represents the interests of member states – and that particular member states are more powerful in enforcing their economic interests than others. For example, the European Commission itself could not establish EU state ownership over computational resources because the EU does not own companies in the first place. However, some EU member states own shares in companies that directly compete with Big Tech platforms. After the EU member states declared their willingness to invest in homegrown cloud providers, 27 European CEOs signed a report addressed to Thierry Breton, European Commissioner for the Internal Market. Breton is one of the leading political architects behind the Digital Markets Act and the AI Act. In the report, an alliance of companies that includes industry and software behemoths such as Siemens, Airbus, and SAP underlines the importance of investments – not least because an increase in state subsidies would be beneficial for expanding their own computational infrastructure.
This alliance of companies acts as an impetus to stress the second initiative that Barshefsky’s (2020) sweeping critique of contemporary European digital policy touches upon: GAIA-X. In their report to Commissioner Breton, the European CEOs mention the term ‘GAIA-X’ more than 20 times. They introduce GAIA-X as a project ‘that has already taken steps to align on common frameworks for federated cloud services’ (European Commission, 2020b: 9) and point to the importance of building synergies with it. What is GAIA-X? Initiated by Germany’s and France’s Ministries of Economic Affairs, the initiative’s self-proclaimed aim is decidedly not to develop a ‘new cloud physical infrastructure […] [but] rather a software federation system that can connect several cloud service providers and data owners together’ (GAIA-X, 2022). Initially, GAIA-X was met with enthusiasm within policy and business circles as a gateway to reducing infrastructural dependence on American and Chinese providers. Angela Merkel, Germany’s then-Chancellor, expressed her support for initiatives that support European digital sovereignty: ‘So many companies have just outsourced all their data to US companies […] and the value-added products that come out of that, with the help of artificial intelligence, will create dependencies that I’m not sure are a good thing’ (Chazan, 2019). Along similar lines, as a board member of the partially government-owned telecommunications company Deutsche Telekom put it: ‘it’s irrelevant whether we are currently already technically able to build up such an infrastructure – there’s no question that we must enable ourselves to do it’ (Delcker, 2019). With that ambitious objective in mind, why is it that GAIA-X’s vision and mission statement explicitly rules out that the initiative could build its own physical cloud computing infrastructure?
Although there is a variety of ways to answer this question, such as bureaucratic complexity and infighting, I focus on the role of corporate lobbying. Soon after the initiative was first presented to the public, both Amazon and Microsoft openly challenged the plans. Intriguingly, a Microsoft spokesperson told The Wall Street Journal that although digital sovereignty is a legitimate goal, ‘in the cloud age, however, we think it is wrong to define sovereignty solely along territorial borders’ given that sovereignty ‘needs the most powerful cloud solution’ (Stupp, 2019). But in spite of those reservations, or rather because of them, all major technology giants joined the initiative, with Alibaba and Huawei ending up sponsoring the project’s 2021 summit. As a French GAIA-X member emphasized after the summit, all board member companies ‘are either a client or a major partner of U.S. cloud giants […] [and] they have a direct interest to work with these players’ (Goujard and Cerulus, 2021). Given Big Tech’s deeply entrenched infrastructural power, it is not a surprise that the leverage of public–private initiatives such as GAIA-X remains structurally limited. Compared to lead firms in commodity-centric production networks, the scalability of platform-driven Big Tech companies enables them not only to swallow up smaller competitors but also to vertically integrate their products and services (Klinge et al., 2022).
This latter point is in line with economic-geographical scholarship on state roles in transnational production arrangements. As Horner and Alford (2019: 11) argue, many empirical studies have shown that ‘the capacity of public–private governance to achieve equitable distributional gains is fundamentally constrained by the sourcing practices of lead firms and the foundational logic’ of production geographies. Undoubtedly, the ‘foundational logic’ of AI’s global political economy is so skewed toward a concentration of power and control in the hands of a few firms that it is unlikely that public–private modes of governance can reconfigure those patterns of industry concentration. If we consider the creation of infrastructural dependencies in the domain of AI as a decades-long process, this also means that once infrastructural dependencies are fixed in place, it is unlikely that state actors can counterbalance or change them retrospectively.
As such, the EU’s shortcomings as an AI producer amplify its reliance as a buyer of Big Tech’s services. The meaning of ‘producer’ is crucial here, as this term illustrates how state-platform relations diverge from other cross-border economic interactions in global production networks. What is being ‘produced’ by EU companies in the context of AI systems is commonly a specific product that is developed on top of Big Tech’s proprietary infrastructure. Without this provision of computational resources, state-of-the-art AI systems could not be deployed at an industrial scale. Precisely herein lies the strategy of Big Tech platforms: as they decentralize the production of AI products and centralize the provision of AI infrastructure (Ferrari, 2023), they blur the distinction between producer and buyer roles. For the EU to become a producer in the economic-geographical understanding of the word, it would have to own and control the means of AI production: that is, the infrastructure that sits beneath domain-specific products and services. Instead, as long as those means of AI production remain in the hands of extraterritorial firms, political efforts to foster Europe’s digital sovereignty will go hand in hand with increased public AI procurement from Big Tech. The ways in which this procurement takes place require empirical scrutiny. As the EU’s High-Level Expert Group on AI emphasizes (European Commission, 2019: 18):

Today, the most advanced governments are increasingly providing application programming interfaces (APIs) to trusted intermediaries as a way to open up their infrastructure to private sector services and entrepreneurs […]. Through its role as a procurer, the public sector can also make use of public procurement strategies to not only incentivise the development and responsible innovation of AI systems for the public good but also to promote responsible innovation.
This quote echoes Helmond’s (2015) focus on the centrality of APIs for ‘making web data platform ready’. Empirically tracing the different ways in which governments procure AI services may help uncover how different manifestations of ‘API governance’ (van der Vlist et al., 2022), such as those pertaining to the procurement of military AI tools or advanced surveillance systems, shape the formation of AI’s regulatory geographies within and beyond the EU.
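To give a stylized sense of what such API-mediated procurement looks like in practice, the following sketch is purely illustrative; the endpoint, model name, and credential are hypothetical placeholders rather than any specific provider’s interface:

    # Hypothetical sketch of public-sector AI procurement mediated through
    # a provider's API; all names and endpoints are illustrative placeholders.
    import requests

    API_ENDPOINT = "https://api.example-ai-provider.com/v1/completions"  # hypothetical
    API_KEY = "agency-procurement-key"  # hypothetical credential issued under contract

    response = requests.post(
        API_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "hypothetical-foundation-model",
            "prompt": "Summarize this citizen complaint: ...",
        },
        timeout=30,
    )

    # The procuring agency consumes the generated output, but neither the
    # model weights nor the infrastructure executing the request are under
    # its control.
    print(response.json())

The asymmetry is built into the interface itself: the public buyer interacts with a remote endpoint, while ownership of the underlying model and compute remains with the extraterritorial provider.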
Conclusion: AI’s regulatory futures
States have been the primary actors of global affairs for nearly 400 years. That is starting to change [...]. It is time to start thinking of the biggest technology companies as similar to states. These companies exercise a form of sovereignty over a rapidly expanding realm that extends beyond the reach of regulators: digital space. They bring resources to geopolitical competition but face constraints on their power to act.
Ian Bremmer’s (2021: 113) analysis of the geopolitics of technology in Foreign Affairs is representative of an argument that has gained immense traction within academic and policy circles: the notion that a handful of American and Chinese digital giants have seized control over spheres of society, economics, and national security that used to be the sole purview of the state. Those companies, Bremmer writes, are now more than just corporate behemoths; they have become state-like entities whose territories are confined neither by national borders nor by legal jurisdictions. And unlike what Bremmer calls ‘physical space’, the ‘digital space’ that they are dominating supposedly sits outside the sphere of influence of regulators. Leveraging a rhetorical instrument of Cartesian dualism, Bremmer divides the geopolitical field of action into two arenas of sovereignty: a well-known physical space versus a far-flung but omnipresent digital space.
However, contrary to this dualistic portrayal of AI’s regulatory geographies, this article does not find any evidence for a separate digital space ‘that extends beyond the reach of regulators’ (Bremmer, 2021: 113). Instead of evading the regulatory reach of policy-makers, Big Tech platforms transform how this sphere of influence presents itself. Far from rendering regulatory action redundant, state-platform relations in an age of AI reconfigure the policy repertoire of state actors, the expressions of their roles in platform governance, and the tensions between those roles. Regulation limits and constrains the operations of Big Tech platforms. The facilitation of digital markets provides the environment for platforms to mediate economic relations and interactions. The production of state-funded alternatives relates to the capabilities of states to own and control computational infrastructure, such as supercomputers for AI systems. The procurement of Big Tech’s offerings by states undermines this ambition, as it entrenches the dependency of the public sector on their infrastructural services. With the growing adoption of generative AI systems in the public and private sectors, the escalating demand for computational power is likely to further accentuate these tensions.
The two policy dilemmas explicated in this article – facilitator vs. regulator and producer vs. buyer – shine different spotlights on how state roles in platform governance shape AI’s regulatory geographies. Those intertwined dilemmas are best understood as complementary perspectives that offer a more holistic picture of state-platform relations. Only in combination, not in isolation, do they exemplify the utility of a new conceptual compass for studying the intermeshing nature of platform and AI governance: an expansion of the term from an exclusive focus on regulation toward a framework that includes facilitator, producer, and buyer roles. The two dilemmas address the same case study analysis but problematize those state roles from different points of view. Analysing the EU’s AI Act only in terms of tensions between facilitator and regulator roles would miss the key interplay between producer and buyer roles. Conversely, exclusively focusing on producer and buyer roles would miss the peculiar dynamics between the facilitative and regulatory aspirations of states. Ultimately, three major implications for future research on AI’s regulatory geographies and state roles in platform governance follow from this article.
First, there is a need for a comparative research agenda on how tensions between different state roles may play out between and across contexts, both at a discursive level and at a material level. Materially, the metaphor of digital sovereignty presents itself as a proxy for geopolitical interests that go beyond issues of regulating firms, pertaining to a global rivalry between American and Chinese models of infrastructural power. AI’s regulatory geographies intersect with those geopolitical considerations as much as they intersect with national and supranational policy agendas. Dilemmas between state roles in platform governance also affect the relations between states. There is abundant room for studying how inter-state relations shape the operations of Big Tech platforms, thereby establishing regulatory geographies. Discursively, as Coleman (2009: 255) aptly puts it, ‘sovereignty is literally about the setting of limits to social practice […] via epistemological appeals about how best to know and act in the world’. Rather than identifying a singular definition of the term, we need to ‘understand how, for what ends, and in what contexts the term is used and expounded upon, as well as attend to the work done by it’ (Coleman, 2009: 255). As generative AI systems gain sophistication in acting as instruments of reality construction (Ferrari and McKelvey, 2022), struggles over digital sovereignty will take different shapes in the future.
Second, it is paramount to account for the limits of investigating state-platform relations through the prism of interdependent state roles as ideal types. Different state roles should not be seen as a priori categories. In practice, policy dilemmas cannot simply be understood according to pre-existing state roles. For example, in the case of the EU’s Artificial Intelligence Act, the complexity of the EU as a supranational state actor hampers any simplified portrayal of platform governance. This complexity derives not only from inter-institutional tensions between entities such as the European Commission and the European Parliament but also from asymmetrical power relations and conflicts between EU member states. Given the empirical fact that Germany and France are the two member states with the highest shares of investments in AI (Joint Research Centre, 2021: 11), it is not surprising that they are key driving forces in attempting to establish a form of European digital sovereignty. As Chacko and Jayasuriya (2018: 99) write, ‘because they favour certain class fractions over others and seek to effect change in other states’ institutional structures, regulatory geographies generate contestation both within and between states’. Comparative studies into how different countries both within and beyond the EU approach state-platform relations would be a productive way to engage with some of those limitations.
Third, acknowledging how different state roles in platform governance relate to each other provides an impetus to reimagine the future of AI’s regulatory geographies, contributing to cross-disciplinary conversations about digital policy. As Christophers (2020: 474) puts it, ‘if rentierism is defined largely by exclusive control of assets, then non-rentierist activity is […] either non-asset-intensive or in which assets are fundamental but not exclusively controlled’. The key question, then, is whether it is possible to organize the computational resources that underpin contemporary forms of AI in such a way that they are not exclusively controlled. State ownership would not represent a magic bullet to solve this problem because it can also turn into a form of exclusive control. As the case of GAIA-X illustrates, even if the strategic ambitions of states are no less than to ‘assert ourselves in the world’ (Goujard and Cerulus, 2021), materializing those aspirations is a different matter entirely. The pressing question of how to govern AI infrastructure as a democratic, open, and public utility currently remains an unresolved political challenge.
No suitable precedent exists that could be examined to precisely anticipate how AI’s regulatory futures will unfold. Other forms of regulating multinational economic activities, such as the commodification of scarce natural resources, pose very different regulatory challenges. But for the first time in history, governments around the world face the task of reining in globally operating Big Tech platforms. Tracing AI’s regulatory geographies will require a spotlight on the tensions between mutually conditioning state roles as defining characteristics of state-platform relations.
Acknowledgements
An earlier version of this article was presented at the 2023 conference of the International Communication Association (ICA) in a session on ‘Platforms, the State, and Public Service Media’. Thanks to Christine Larson for chairing the session and to its participants for their helpful questions. The author would also like to thank Mark Graham, Derek McCormack, Jonathan Gray, Jean-Christophe Plantin, and the two anonymous reviewers for their generous feedback.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the UK’s Economic and Social Research Council (ESRC) under grant ES/P000649/1, studentship number 2094254, as part of the Grand Union Doctoral Training Partnership (DTP) and the University of Oxford (Scatcherd European Scholarship), and the Dutch Research Council (NWO), as part of the Spinoza Prize awarded to Prof. dr. José van Dijck.
