Abstract
The emergence of the modern state was closely intertwined with the advent of statistics and demographic data. Today, we are witnessing the ascent of artificial intelligence as a new technology of governance. This article seeks to lay the groundwork for a research agenda at the intersection of the state and artificial intelligence, unpacking the notion of “AI” and examining the consequences of the state transitioning from statistics to artificial intelligence as the means of “seeing” its subjects. The first part of the article argues that artificial intelligence represents a fundamental epistemic shift: from the pre-defined variables, populations, and explicit rules of statistics to the patterns, associations, and continuous data flows of AI. The second part draws on the literature on sociotechnical transitions to provide a framework for studying this shift empirically, conceptualizing AI as a radical innovation that first takes hold in protected niches and spreads in moments of perceived crisis.
Introduction
In several countries, artificial intelligence (AI) is now deployed to decide who will and will not receive benefits, seeking to identify the “optimal” distribution of public support. In the United States, police have begun using a proprietary facial recognition system with a database of over 30 billion faces to identify individuals in photographs in mere seconds (Hill, 2023). In 2024, the UK government launched a chatbot based on OpenAI's GPT-4o, enabling citizens to talk to “the state”—personified as an AI agent (GOV.UK, 2024). Following this roll-out, the UK government announced its “AI Opportunities Action Plan” in early 2025: a wide-ranging scheme to implement AI across the public sector, with the aim of making “government more efficient” and becoming an “AI superpower” (GOV.UK, 2025). To many observers, such examples offer evidence of an ongoing transition in public governance—in which something described as “Artificial Intelligence” is emerging as a new governance technology. From welfare (Dencik, 2022; Jørgensen, 2023), to taxes (Reutter, 2022), to cities (Cugurullo et al., 2023), to borders (Amoore, 2021), AI is being leveraged to predict, profile, preempt, and even make decisions within the public sector (Boullier, 2023). Though geographically uneven, the rise of AI in governance is a worldwide trend, with the “Global South” often serving as a testing ground for its most advanced applications. Many scholars argue that we are standing on the precipice of a fundamental transition towards a governance paradigm characterized by the widespread use of AI techniques (Aradau and Blanke, 2022; Yeung, 2023), in which “systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish” (Alston, 2019: 4).
Such a shift in the technology of governance may have profound consequences. Scholarship has argued that new analytical methods and their associated forms of data have fundamental social and political consequences, reconfiguring the relationships between states and citizens—as epistemology, technology, and governance must be understood as inextricably interlinked (Bigo et al., 2019; Törnberg and Uitermark, 2025). When states and societies understand themselves and their problems through new technologies, the implications go far beyond the merely technical: they represent a set of fundamental epistemic and political transformations of politics and governance (Amoore, 2023). As AI arises as a means of seeing and governing populations, it hence “does not merely change the political technologies of governance, but is itself a reordering of politics, of what the political can be” (Amoore, 2023: 20). Just as AI techniques were associated with a fundamental shift within capitalism (Langley and Leyshon, 2017; Zuboff, 2019), so AI may be in the process of delivering a similarly profound transformation of public institutions and the state. The rise of AI governance may hence bring what Foucault (1966) referred to as a new “episteme”: a configuration of knowledge that defines the ideas and assumptions that govern the way we conceptualize and inquire about the world, intrinsically linked to prevailing power structures.
At the same time as AI is emerging as a new technology shaping public governance, the state has become an increasingly active force in shaping AI. Scholarship has hailed the beginning of the end of the neoliberal order (Gerstle, 2022), as growing geopolitical strife between China and the US has brought the return of a more active and interventionist state (Dew and Lewis, 2024; Zhang, 2024), to an important extent centered around AI ambitions and the chip technologies that enable them (Cheng and Zeng, 2023; Miller, 2022). The EU in particular is also seeking to protect its sovereignty through regulation, imposing a tranche of laws to limit foreign technological influence—including the GDPR, Digital Services Act, Digital Markets Act, and the recent AI Act (Bassens and Hendriske, 2022). While the contours of this emerging political order remain unclear, it appears to be forming at the intersection of two shifting and contested notions—“the state” and “AI”—suggesting the need for sustained scholarly attention.
However, while there is a growing sense of “AI” coming to the fore as a technology of governance, the notion itself remains ambiguous, ill-defined, and often poorly understood. This ambiguity stems partly from the term's shifting historical meanings, and partly from the unprecedented influx of capital incentivizing its indiscriminate application to everything from smart toasters to medical diagnostic systems. Even less understood are the epistemic differences between AI and previous data-processing technologies, as well as their potential social and political implications.
This paper seeks to unpack the notion of “AI” and examine the epistemic consequences of the shift from statistics to AI as governance technology, thereby contributing to laying the conceptual groundwork for a research agenda examining the growing role of AI in state governance. We focus here primarily on AI as a means of analyzing data—in particular the type of data from which it would be nearly impossible to draw insights using conventional statistical methods, such as text, images, and other “unstructured” data (Boyd and Crawford, 2012). The article engages with two key questions essential to understanding the ongoing transition in governance: one aims to establish expectations for how AI may transform governance, and the other offers directions for empirically studying these transformations. First, how do statistics and AI differ in epistemic terms, and what does this imply for how AI may reshape governance? Second, how can the transition to AI-based governance be conceptualized and studied empirically?
We begin by unpacking the meaning of “AI,” and the ways that it is becoming intertwined with public governance, before turning to the question of its epistemic consequences.
Unpacking “AI”
As Artificial Intelligence is named for an aspiration rather than a specific method, it has historically been used to refer to a diverse range of techniques. Early AI research in the 1950s–1960s was rooted in symbolic reasoning and logic, seeking to replicate human problem-solving through rule-based systems. In the 1970s–1980s, AI became associated with “expert systems” that encoded domain-specific knowledge into rules and used inference engines to derive conclusions. In the 1990s–2000s, however, AI became increasingly associated with various forms of machine learning models, such as decision trees, random forests, and various forms of nature-inspired stochastic optimization such as genetic algorithms and ant colony optimization (Wahde, 2008). While simple forms of artificial neural networks (ANNs) were used during this period, they were considered inferior for most tasks due to their relatively low performance and their black-box nature.
The contemporary understanding of AI began to take shape in the 2010s, as advances in parallel processing—originally developed for rendering video game graphics—and the vast influx of digital data generated through datafication and platformization (van Dijck, 2014) enabled a rapid expansion in the capabilities of ANNs. This led to rapid advancements across various subfields of AI, including machine learning, natural language processing, and computer vision. From 2020 onward, this understanding of AI was further cemented by the rise of “generative AI”—consisting of using a trained ANN to generate new content, such as text or images, rather than merely to classify or predict.
While the rise to prominence of ANNs is hence relatively recent, the technology is far from novel. ANNs were first developed in the mid-20th century by the cyberneticists Warren McCulloch and Walter Pitts (1943), inspired by neural networks in the biological brain (Yegnanarayana, 2009). Many of the early developments associated with ANNs took place in the public sector, within both academia and national security agencies such as the NSA, driven by the analytical needs arising from their mass data collection and surveillance efforts (Wiggins and Jones, 2023).
However, around the 2010s, the private sector gradually took the baton in shaping the evolution of AI, as private forms of mass data collection and surveillance far outpaced those of security agencies. As corporations began to see the value of the quickly accumulating troves of data resulting from the digital mediation of large parts of human life, ANNs were among the set of techniques that enabled businesses to draw insights from petabytes of “unstructured” data (Bigo et al., 2019). While partially overlapping with statistics, techniques such as ANNs enable the analysis of a much broader range of data types: whereas statistics requires data structured as columns of variables and rows of individuals representative of a population, AI-based techniques can be applied to images, videos, text, sensor data, and the complex linked data structures associated with much of digital data (Boyd and Crawford, 2012).
ANNs were hence among the key technologies that enabled “surveillance capitalism” (Foster and McChesney, 2014), by lending credibility to the notion that digital data can offer unprecedented insights into human behavior (Söderström and Datta, 2024; Törnberg and Uitermark, 2025). Corporations like Google, Meta, and Microsoft began establishing dedicated research labs and pumping vast investments into AI innovation. The last decade of evolution of AI has hence been shaped by the interests and pursuits of the private sector, within the context of surveillance capitalism. The recent rise of generative AI has accelerated this development, and focused investments specifically on ANN-based forms of AI.
In recent years, there are, however, signs of a growing flow of AI from the private to the public sector, in what appears to be a new wave of public sector digitalization. In Europe, the NGO AlgorithmWatch (2019) has identified a rapidly growing use of algorithmic decision-making across the public sector, much of it involving AI techniques (Dencik, 2022): allocating care for hospital patients, identifying supposed crime hotspots in cities, sorting and categorizing the unemployed population, identifying child neglect, detecting benefit fraud in the Netherlands (with infamous inaccuracy), using chatbots to help refugees seek asylum, and deploying AI-based knowledge management systems (Wirtz et al., 2019).
This new wave of digital technologies within the public sector follows previous waves studied by the literature under labels such as “e-government” (Chadwick and May, 2003), “digital era governance” (Dunleavy et al., 2006), “smart governance” (Pereira et al., 2018), and “govtech” (Bharosa, 2022; Dener et al., 2021)—waves that have ranged from the basic use of email and computer technology within public administration, to the digitization of administrative processes, to the employment of digital tools tailored for governmental purposes.
Compared to these previous waves, however, there are reasons to believe that the growing use of AI represents a more fundamental shift in the logic of governance (Jørgensen, 2023). While previous waves of digitalization were primarily concerned with internet-enabled communication, the new paradigm focuses on the automation of tasks, actions, and decision-making (Yeung, 2023). As a result, some scholars are already speaking of an emerging new state paradigm. Alston (2019) uses the term “digital welfare state,” describing how systems of social protection and assistance are progressively influenced by digital data and technologies for tasks such as automation, prediction, identification, surveillance, targeting, and punishment. Fourcade and Gordon (2020) refer to the “dataist state” to describe a new form of state governance characterized by the extensive use of data and algorithmic processes to manage society. Eubanks (2018) refers to a new “regime” of data analytics within the welfare sector in the US, as datafication and algorithms are used to automatically assess needs and determine eligibility across areas ranging from housing to healthcare. Yeung (2023) speaks of a “new public analytics,” in which decision-making is partly or fully automated through the processing of vast quantities of data (Jørgensen, 2023), thus implying the reform of knowledge production through algorithmic forms of ordering, operating through the logics of categorization, classification, scoring, and selection (Dencik et al., 2019; Yeung, 2023).
While the technologies discussed in these literatures encompass a broader range, including conventional digital systems and data processing methods, AI arguably occupies a central and increasingly influential role, driving much of the current hype and momentum in digital innovation. We hence now turn to the question of how AI may shape governance through its implementation, by examining the epistemic differences between statistics and AI as means of data analysis.
Seeing like an ANN
Data and their associated methods of analysis have always been a core dimension shaping governance. As Hacking (2015) puts it, the modern state emerged in an “avalanche of printed numbers” in the 1820–1840 period, as the rise of statistics as a scientific discipline enabled a new view into the lives of citizens, using “statistical study of populations […] to amass gigantic quantities of data” (p. 280). “Statistics” is, indeed, etymologically the science of the state: the term derives from the German Statistik, coined to describe the systematic collection and analysis of data about the state and its population.
The transition from statistics to AI as the state's chief means of governance hence represents a potentially radical transformation. As states draw on automated systems as a mode of governing relations with their citizens, these systems become a site for the practice of statecraft, its shaping of governance, and its politics (Maguire and Ross Winthereik, 2021). The design of AI systems has become a form of “epistemic politics”: a mode of assembling and ordering knowledge of society that fundamentally transforms how state and society come to understand themselves (Amoore, 2023).
This suggests the need to examine the epistemology of AI. Building on the literatures on the relationship between statistics and the state, we will here draw on the comparison between statistics and AI—treating the two as ideal types, while acknowledging that these technologies are far from monolithic or deterministic in their implications. While, as noted, ANNs constitute the technological foundation of large language models and generative AI, and some of the characteristics described above carry over to these techniques, we will focus here chiefly on ANNs as a means of data processing as this allows for a more direct comparison with previous technologies of governance.
While AI and statistics represent different ways of seeing, the boundary between the two is in many ways porous. Scholars have described AI and statistics as “two cultures”: while statistics seeks to map the relationship between variables as a means of explanation, AI focuses singularly on prediction (Breiman, 2001; Grimmer et al., 2021). Rather than marking a sudden epochal shift, this cultural difference has over time enabled a gradual technical divergence, as the focus on prediction has led to the adoption of substantially more complex models and techniques. The two furthermore differ, as noted, in the types of data that they are capable of analyzing. Whereas statistics requires data structured as columns of variables and rows of individuals representative of a population, AI-based techniques can be applied to structured, semi-structured, and unstructured data—encompassing everything from images to text, from videos to sensor data.
ANNs are based on large-scale complex networks of interacting nodes—“neurons”—that carry out distributed computation. Each connection between the nodes has an associated weight, which is adjusted during the learning process. When data is fed into a neural network, it is processed through multiple layers of these interconnected neurons, transforming the input data at each layer and ultimately producing an output. These layers comprise an input layer (representing the initial data), one or more hidden layers (where the complex processing occurs), and an output layer (representing the final result or prediction).
When data is input into the network, it undergoes a series of transformations determined by the weighted connections and the activation functions of the neurons—implemented in practice as repeated matrix multiplications. During training, the network “learns” by comparing its output with the “correct” answer, adjusting its weights based on the error of its predictions using a method called backpropagation. Over successive iterations, the network minimizes the error between its predictions and the actual outcomes, refining its weights to improve accuracy. The result is an array of weights that define a highly complex function mapping input data to output data, enabling ANNs to detect and model intricate, non-linear relationships in data without explicit instructions from the developer or analyst.
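This logic can be made concrete with a minimal sketch in Python (using only numpy); the two-layer network and the toy XOR task are illustrative assumptions, not a depiction of any deployed system:

```python
import numpy as np

# Toy task: learn XOR, a simple non-linear input-output mapping.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))  # weights: input layer -> hidden layer
W2 = rng.normal(size=(8, 1))  # weights: hidden layer -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: the input is transformed layer by layer
    # through weighted connections (matrix multiplications).
    hidden = sigmoid(X @ W1)
    pred = sigmoid(hidden @ W2)

    # Backpropagation: compare predictions with the "correct" answers
    # and adjust each weight in proportion to its contribution to the error.
    grad_out = (pred - y) * pred * (1 - pred)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hid

print(pred.round(2))  # should converge towards [0, 1, 1, 0]
```

Note that the resulting “knowledge” resides entirely in the weight arrays W1 and W2: numbers with no human-readable meaning.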
Through this training process, the ANN automatically identifies relevant “features” of the data. The first layer may recognize simple patterns, like edges or colors in an image, or basic word relationships in text. These simple patterns are passed to the next layers, where they are combined to form more complex patterns, such as shapes, textures, or sentence meanings. As the data moves through multiple layers, the network builds increasingly detailed and abstract representations, eventually identifying high-level features (like a face or a sentiment) needed to make predictions or decisions.
Drawing on these epistemic differences between statistics and AI, we will here hypothesize three associated shifts in governance.
From variables to patterns
Statistics operates under a foundational framework anchored in deductive hypothesis testing, probabilistic models, and the analysis of variance in variables. The statistical state must impose pre-defined categories and attributes on its subjects, or a system of conventions to make them measurable (Desrosières, 1998). By systematically collecting, categorizing, and analyzing information about its citizens, the state can produce a coherent, albeit abstracted, representation of the vast array of individuals it governs. Central to this endeavor, variables serve as defined categories or metrics—like age, gender, income, ethnicity, or occupation—that allow for segmentation and differentiation within the population.
A statistical study begins with a hypothesis about the nature of a phenomenon and then tests that hypothesis by examining the variance of measured variables. This approach emphasizes understanding the sources of variability, estimating parameters with uncertainty, and drawing inferences about broader populations from sampled data. The world, through the lens of statistics, is perceived as a structured space where relationships between entities can be formulated and tested using a sample of individuals that are taken to be representative of a larger population. Through the lens of statistics, humanity began to be viewed through what Adolphe Quetelet termed the “homme moyen,” encapsulating the typical characteristics of a population and providing a benchmark against which variations and deviations could be measured (Rouvroy, 2024).
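As a concrete illustration of this way of seeing, consider a minimal sketch in Python (using scipy; the income figures and regions are invented for illustration): pre-defined variables are measured on samples taken as representative, and hypotheses are tested against sampling variance.

```python
import numpy as np
from scipy import stats

# Invented survey data: a pre-defined variable ("income"), measured
# for sampled individuals taken to represent two regional populations.
rng = np.random.default_rng(1)
income_region_a = rng.normal(loc=32_000, scale=5_000, size=200)
income_region_b = rng.normal(loc=30_500, scale=5_000, size=200)

# The statistical question: is the observed difference in means
# attributable to sampling variance, or does it hold in the population?
t_stat, p_value = stats.ttest_ind(income_region_a, income_region_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```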
Statistical concepts and views were important in shaping the modern state and its governance (Desrosières, 1998; Foucault, 2008). The modern state was defined in relation to the statistical idea of a “population” as an entity in itself, with properties, attributes, and behavior. By collecting, aggregating, and analyzing data about births, deaths, health, education, employment, and other demographic factors, the state could identify trends, make predictions, and implement policies to manage the “social body.” The “population” becomes both an object and a subject of governance—its behavior can be influenced by state policies, and its statistical representations can in turn shape those policies.
In contrast, AI seeks not primarily to test preconceived hypotheses, but to inductively “learn” patterns based on historic datasets, prioritizing predictive accuracy over interpretability (Amoore, 2023; Yeung, 2023). While statistical models often require assumptions about data distributions or the functional form of relationships, AI can flexibly adapt to complex, non-linear patterns in large datasets without an explicit predefined model—moving beyond the notion of variables, to clusters, patterns, and features. While statistics requires episodic data, fixed and well-structured, AI can analyze constant streams of unstructured data (Isin and Ruppert, 2020b; Törnberg and Uitermark, 2025). Its subject matter is not a representative population with predefined attributes but features, patterns, and clusters within flows of data.
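The contrast with this inductive mode can be sketched as follows (Python/scikit-learn; the feature matrix is an invented stand-in for behavioral trace data): no hypothesis or predefined categories are supplied, and groupings instead emerge from the data itself.

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented stand-in for trace data: rows are individuals, columns are
# automatically extracted features with no predefined social meaning.
rng = np.random.default_rng(2)
traces = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(100, 16)),
    rng.normal(loc=3.0, scale=1.0, size=(100, 16)),
])

# No hypothesis is tested: the algorithm induces clusters from the data,
# which the institution must then interpret and act upon.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(traces)
print(np.bincount(clusters))  # e.g. [100 100]: two emergent groupings
```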
For the state, the move from statistics to AI thus represents the emergence of what Fourcade and Gordon (2020) call “inductive statecraft,” in which the state acts based on inductively identified patterns. The effect is a governmentality that “eschews fixed, long-term plans in favor of a constant state of real-time experimentation and reactivity to indicators” (Fourcade and Gordon, 2020: 87). This is illustrated by a city health inspection pilot program that carries out restaurant inspections based on identified patterns in residents’ Google searches about symptoms related to food-borne illness, contrasted with the more conventional approach of proactively mandating food safety procedures. As Rouvroy (2024) argues, AI furthermore enables anticipating and managing future risks, representing a move from responding to problems, to predicting and preempting them.
AI furthermore represents a shift in the representation of those governed. If statistics produced the “population,” the patterns and clusters of AI are producing new objects and subjects of governance. The data that fuel the digital state are collected from streams, apps, and sensors that often exceed state borders, thus redefining the relationship between the state and its territory. It is no longer necessary for the state to “flatten” society to make it legible (Fourcade and Gordon, 2020; Scott, 1998), as categories can emerge organically from the available data (Törnberg and Uitermark, 2025). The statistical state operates on a form of governance that can be made subject to critique and debate: race, for instance, can be operationalized—however crudely—as a variable. Through the AI lens, race instead appears as a cluster of outcomes: a family resemblance category that does not stem from theory but emerges from combinations of learned features. As such features do not neatly link to existing concepts or identities, they cannot be erased from the decision, and neither can they be made the subject of politics and attempts to change the categories through which the state sees its subjects.
Instead of overt discrimination, AI may identify complex combinations of features that function as proxies for race or gender (Eubanks, 2018). The results are discriminatory consequences that do not explicitly draw on race or gender as categories, thereby evading not only existing anti-discrimination legislation, but even the possibility of a debate over the political choices and assumptions that go into given decisions. As AI does not draw on explicit categories, it does not, as Rouvroy and Berns (2013) put it, “allow the emergence of an active, consistent, reflexive statistical subject capable of legitimizing or resisting it” (authors’ translation). By stripping theory from governance, AI uproots the very foundation of politics.
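This proxy dynamic can be illustrated with a deliberately simplified sketch (Python/scikit-learn; the data, the “postcode” proxy, and the biased approval history are all invented): the protected attribute is withheld from the model, yet its decisions reconstruct it through correlated features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 2000

# Invented data: a protected attribute, never shown to the model,
# correlating with an observable proxy (here labeled "postcode").
protected = rng.integers(0, 2, size=n)
postcode = (protected * 0.8 + rng.random(n) * 0.4).round()
income = rng.normal(30 - 4 * protected, 5)  # in thousands; structural inequality

# Historical decisions were biased against the protected group.
past_approval = (rng.random(n) < 0.7 - 0.4 * protected).astype(int)

# The model is trained without access to the protected attribute.
X = np.column_stack([postcode, income])
model = LogisticRegression().fit(X, past_approval)
pred = model.predict(X)

print(pred[protected == 0].mean(), pred[protected == 1].mean())
# Approval rates diverge sharply: the proxies reconstruct the category.
```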
From rules to associations
The computational paradigm that preceded AI was characterized by a fundamentally different form of rationality. Conventional algorithms are based on collections of “if-then”-style rules: if a certain condition holds, then a consequent action is executed. This form of logic fits neatly into the paradigm of rational bureaucracies and rule-based governance, which emphasized principles such as efficiency, predictability, and standardization. The ethos of rational bureaucracies was to streamline operations, minimize human variability, and ensure consistent outcomes, thus requiring an architecture of standardized procedures and protocols. The rationality and neutrality of computers offered a perfect analog. As Amoore (2023) argues, the postwar social and international orders were founded on such definitive and conclusive algorithmic procedures (MacBride, 1967). In this light, the advent of computers and their inherent logic can be seen not just as a technological revolution, but also as a reflection and reinforcement of the broader societal shift towards rationalization and methodical structure in the 20th century.
However, AI represents a shift away from this rule-based paradigm. While conventional algorithms enable the construction of highly sophisticated and precise systems, they can fail in the face of tasks that may seem relatively simple. Take, for instance, the task of recognizing a picture of a cat. A digital picture consists of an array of millions of pixel color values, and manually describing the rules to capture when the combination of those pixels describes a cat is virtually impossible. As Dreyfus and Dreyfus (2005: 788) put it, “no amount of rules and facts can capture the knowledge an expert has when he or she has stored experience of the actual outcomes of tens of thousands of situations.” For decades, this tacit capacity was seen as fundamental to what separated machines from humans.
AI, in contrast, does not require explicit rules, but “learns” associations from large numbers of examples. The system is simply fed vast amounts of pictures with and without cats, and finds an optimal configuration of weights that map the input data to the target output. It identifies a function that optimally separates the cats from the non-cats, encoded as millions or even billions of “weights” or “parameters” that connect imagined neurons in a large network.
AI thus operates not on the logic of if-then, but on the logic of association and optimization. It operationalizes not strict definitions, but rather Wittgenstein's (1968) notion of “family resemblance,” capturing categories that have a series of overlapping similarities, without necessarily sharing a single common trait. While pictures of cats often feature fur, little paws, or pointy teeth, there is no single feature that all cat pictures necessarily have in common. Neural networks identify such “features” from sets of pixels—fur, paws, teeth—and draw on these to conclude whether the photo is likely to contain a cat or not. Just as it is impossible for a human to fully explain how they know that a photo contains a cat, it is nearly impossible to know how an ANN arrives at its conclusion: its “knowledge” consists not of explicit rules but of millions of weights.
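A toy sketch of this family-resemblance logic (Python/scikit-learn; the “fur,” “paws,” and “teeth” features are invented stand-ins for the features a network would learn itself): no single trait is shared by all positive examples, yet a learned function separates the classes—and its “explanation” is only an opaque weight matrix.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

# Invented stand-in features ("fur", "paws", "teeth"): each cat example
# exhibits only two of the three, so no trait is common to all cats.
def make_cats(n):
    X = np.zeros((n, 3))
    for row in X:
        row[rng.choice(3, size=2, replace=False)] = 1.0
    return X

cats = make_cats(200)
non_cats = (rng.random((200, 3)) < 0.2).astype(float)  # traits rare elsewhere
X = np.vstack([cats, non_cats])
y = np.array([1] * 200 + [0] * 200)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(clf.score(X, y))         # high accuracy despite no defining trait
print(clf.coefs_[0].round(2))  # the "explanation": an opaque weight matrix
```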
For institutions, the AI paradigm thus implies a novel way of carrying out functions, operationalizing them not as a set of rules, but as a question of classification and optimization: learning from historical examples how to sort new cases into categories.
The “classification” logic of AI has become increasingly common in governance, as seen in how AI systems are often used to rank citizens in terms of “risk.” In Spain, for instance, the public employment service uses an AI-based system to calculate unemployment benefits and to allocate interviews, job offers, and training courses (AlgorithmWatch, 2019). This naturally risks perpetuating existing biases encoded in the historical data.
The black-box nature of AI means that, when questioned, neither the institution nor the system's developers can offer a meaningful account of why a particular decision was reached: the “reasons” exist only as millions of weights in a network.
Even if some level of explainability or transparency is achieved, the shift to AI represents a fundamental shift in governance logic. When states come to see their tasks through the lens of AI, it represents a move from pre-set rules to association and optimization. AI may thus recast the work of governance as a series of optimization problems, seeking to distribute benefits and punishments in such a way as to maximize a given objective function.
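As a purely illustrative toy (Python/scipy; the “payoff” scores, budget, and caps are invented), this optimization framing of governance might be sketched as follows: distributing a fixed budget of support so as to maximize a model's predicted objective.

```python
import numpy as np
from scipy.optimize import linprog

# Invented model output: predicted "payoff" per unit of support
# for five recipients (e.g., estimated benefit effectiveness).
payoff = np.array([0.9, 0.4, 0.7, 0.2, 0.6])

# Maximize total predicted payoff under a budget of 100 units,
# capped at 40 units per recipient (linprog minimizes, so negate).
result = linprog(
    c=-payoff,
    A_ub=np.ones((1, 5)), b_ub=[100],  # total allocation <= budget
    bounds=[(0, 40)] * 5,              # per-recipient cap
)
print(result.x)  # the "optimal" distribution: [40, 0, 40, 0, 20]
```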
From surveys to sensors
The rise of AI means a shift in the types of data that can be used for analysis. As noted, statistics requires data that is specifically structured for statistical analysis—typically obtained through structured surveys distributed to a random sample, yielding datasets with rows of numerical or categorical answers. Demographic and survey data collection tends to be structured, deliberate, and episodic, categorizing and measuring individuals based on predefined attributes, such as age, gender, occupation, or ethnicity.
AI, in contrast, has lax requirements when it comes to data structure and is capable of analyzing text, images, videos, and a broad range of trace and sensor data. Much of the data used to feed AI stems from “datafication,” in which everyday activities are remade so as to produce valuable data (van Dijck, 2014). Digital data are produced through the ubiquitous presence of sensors, digital devices, and platforms in our lives, which are designed so as to extract data, often in real-time and at scale. Characteristic of datafication is also that platforms pursue data accumulation without a clear predefined use of the data: seen for instance in Google's digitization of large swaths of the world's books, which a decade later came to provide key fuel for its AI ambitions.
While demographic and survey data collection required active participation and is often constrained by specific questions or categories, datafication is continuous, passive, and vast. From our online searches, social media interactions, and e-commerce behavior to the sensors in smartphones that record our physical movements, vast amounts of data are constantly generated about us, often without us even being aware.
As AI spreads as a governance technology within the public sector, we may thus expect a shift in the type of data that are used to gain insights into the population. We may also expect pressures to design processes in such a way as to produce analyzable data. While the state has access to vast troves of data, pertaining to healthcare, education, criminal justice, tax and financial behavior, transportation and mobility, and so on, these data are often not stored in a way that allows for data analysis. The new value of data may hence be expressed in a wave of digitalization of state documents, with the explicit or implicit aim of feeding future AI systems.
Borders offer an example of an arena in which we are already seeing a growing use of sensors and automatic data collection. Within the literature on the digitalization of migration governance, a particular focus has been put on the entanglement between algorithmic sorting technologies and biometric data (Baykurt and Lyamuya, 2023). By streamlining refugee registrations, biometrics is seen as alleviating the administrative burden on aid workers. The United Nations High Commissioner for Refugees (UNHCR) has for instance developed a policy on “digital identity” for all displaced individuals, which involves the use of digitized biometric data that can be easily accessed and traced across borders. This digital identity serves as a means of gaining access to employment opportunities, remittances, and banking services, while preventing migrants from acquiring repeated assistance (UNHCR, 2018).
In sum, the above three shifts point to ways in which governance may be transformed as AI displaces statistics as the state's dominant way of seeing, with potential for both benefits and risks.
Statistics and AI-based governance are not mutually exclusive, but represent two divergent forms of governance that coexist and intermingle. We now turn to our second question, seeking to provide a framework for conceptualizing the transition to an AI-based paradigm of governance.
How technological change happens
To understand how the epistemologies and logics of AI shape institutions and turn into epistemic politics, we must have a conceptual framework for how such technological changes actually occur. The epistemic impact of technologies does not take place in a vacuum; it is shaped by social and institutional contexts.
To provide such a framework, we suggest understanding AI as a “radical” new technology, drawing on the literature's separation between incremental and radical innovation: whereas incremental innovations improve existing technologies within established frameworks and practices, radical innovations break with existing sociotechnical systems, demanding new competencies, institutions, and ways of doing things.
As scholars of sociotechnical transitions argue, technologies are embedded in “regimes” that make change difficult (Rip and Kemp, 1998). Regimes are interconnected social, technical, and institutional systems, practices, know-how, and norms that surround the development and use of a particular technology. Radical new technologies represent a threat to the status and power of incumbent actors, who might hence resist changes that threaten their vested interests.
Within the type of large institutions that dominate the public sector, technological innovation is primarily incremental, and the sociotechnical systems are hence often highly stable, changing slowly and along path-dependent trajectories.
Due to the social and institutional embedding of technologies, technological change tends to take place through negotiations, contestations, and power struggles among local actors—state officials, street-level bureaucrats, and administrators—strategically drawing on these artifacts to support the pursuit of their own interests, values, and agendas (Söderström and Datta, 2024). These negotiations take place in an environment shaped by the engagement of citizens, users, and social movements, engaging in the politicization of data systems as arenas of social struggle.
Reutter's (2022) study of AI within Norwegian tax governance offers an illustration of how these stabilizing forces act in relation to AI. “Policy, organizational structures, legal frameworks, subject matter experts, and existing data infrastructures” (Reutter, 2022: 8) make it challenging to bring AI into these institutions, as these new technologies are incompatible with existing ways of doing things. While public institutions may be rich in data, these data are often in the wrong format, and the new technologies may be poorly compatible with existing regulatory frameworks.
This inertia does not, however, mean that radical technological change never occurs. It instead implies a process in which gradual incremental change can lay the foundation for a tipping-point dynamic, in which radical innovation occurs suddenly and unexpectedly. Such radical technological shifts tend to take place as transitions between different regimes, in which entangled social, political, and technological aspects rapidly co-evolve until they stabilize in a new stable configuration (Freeman and Perez, 1988). Focusing on the case of the state's use of AI as a radical innovation, we may thus speak of a sociotechnical transition to a new “regime” (Eubanks, 2018) or “paradigm” (Yeung, 2023) of state governance.
Niches—Spaces of exception and experimentation
Due to the inertia of established technologies, radical new innovations tend to emerge in what Geels (2010) refers to as “niches”: spaces of exception, where less weight is given to risks and negative consequences. These spaces enable experimentation protected from market selection pressures, allowing new innovations to be tested and to build legitimacy and support.
In relation to AI-based governance, such niches are often found in arenas where the stakes are seen as high enough to justify the risks, or where the risks are carried by marginalized groups whose interests command little concern. Technology firms have, for instance, been shown to strategically use the “Global South” as a testing ground for controversial new forms of surveillance and control—in a “twenty-first-century variant of the ‘boomerang effect of colonial practice’” (Amoore, 2023: 8). After having built sufficient legitimacy and momentum for these technologies, they can then be implemented in the Global North—echoing historical examples of colonial experimentation (see e.g. Rabinow, 1997).
In the North, datafication and AI have been widely employed within particular fields, such as security, policing, the military, and anti-terrorism (Bellanova and De Goede, 2022). Intelligence agencies have long represented key niches for technological innovation in AI-enabled mass surveillance, motivated by anti-terrorism efforts (Wiggins and Jones, 2023). Predictive policing represents another important niche, described as a form of “uberization” or “platformization” of policing, as officers are guided by AI-enabled apps (Egbert, 2019; Sandhu and Fussey, 2021). These AI systems automatically identify spatial and temporal patterns in historical crime data and render real-time predictions of criminal hotspots (Kaufmann et al., 2019). This naturally risks perpetuating existing racialized patterns of overpolicing, by creating feedback loops in which the prediction is validated by increased recorded crime.
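This feedback loop can be made explicit with a deliberately simplified simulation (Python; all numbers are invented): two areas with identical underlying crime rates, where patrols follow the prediction and recorded crime follows the patrols.

```python
import numpy as np

# Two areas with identical true crime rates; area 0 begins with slightly
# more *recorded* crime (e.g., a legacy of earlier over-policing).
true_rate = np.array([10.0, 10.0])
recorded_history = np.array([12.0, 10.0])

for month in range(24):
    hotspot = int(np.argmax(recorded_history))     # "prediction": patrol the hotspot
    detected = true_rate * 0.5                     # baseline: half of crime is reported
    detected[hotspot] += true_rate[hotspot] * 0.4  # patrols record additional crime
    recorded_history += detected                   # records feed the next prediction

print(recorded_history)  # e.g. [228. 130.]: the gap grows, "validating" the model
```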
Another type of niche is what Pereira and Raetzsch (2022) refer to as “banal surveillance”: areas of state activity where data extraction and AI can be implemented into the everyday backend infrastructure in ways that are too mundane or opaque to attract much attention or controversy. Pereira and Raetzsch use the example of a Danish system that analyzes license plates to track and fine vehicles for violations of local emission standards. Duez and Bellanova (2012) examine the EU-U.S. Passenger Name Records programs as examples of algorithmic regulation through backend infrastructures.
Another important niche for the emergence of AI governance has been mobility, migration, and border security (Chouliaraki and Georgiou, 2022; Collins, 2023; Leurs and Prabhakar, 2018). Affluent nations have in recent years turned their borders into “technological testing grounds” (Molnar, 2021), seeking private-sector collaboration in deploying digital technologies for border control (Beduschi, 2021; Madianou, 2019). In Europe, nearly 2000 official entry ports and 60,000 km of land and sea borders are managed through digital surveillance technologies (Leurs and Prabhakar, 2018). Drones and satellites trace migrants’ phone signals in the Mediterranean as part of the European Border Surveillance System (Eurosur). Upon arrival, migrants’ fingerprints are scanned to enable algorithmic tracking of their future movements and actions through the European Dactyloscopy (EURODAC) biometric database (Leurs and Prabhakar, 2018). Many countries use AI-based dialect recognition tools in the processing of asylum claims (EUAA, 2022). These experiments have been presented as practical solutions to help recipient governments address the intricate and long-standing issues associated with migration management (Madianou, 2019).
A growing trend within migration management is to harness AI to streamline and automate tasks, to identify data patterns that would be otherwise undetectable, and to facilitate and automate decision-making processes (Beduschi, 2021). AI is being considered as a potential tool in a broad set of contexts, such as decision-making in asylum cases in countries like Canada and Germany, and within the wider European Union through the Schengen Information System (Collins, 2023). In Bangladesh, Nepal, and Malaysia, migration systems are partially automated. In Sweden, “migration crises” are predicted using machine learning-based forecasting (for further examples, see Beduschi, 2021).
A substantial literature on “digital migration” (Collins, 2023) and the “digital border” (Chouliaraki and Georgiou, 2022) has examined and theorized how combinations of digital technologies and biometric data are employed to make the “movement of people […] more orderly, predictable and productive, and thus more manageable” (Ghosh, 2007: 107). A subfield of “digital migration studies” (Leurs and Smets, 2018; Sandberg et al., 2022) is now emerging, examining the technological developments of recent decades (Amoore, 2021). Scholars argue that the intersection of borders, security, technology, and data is shaping new forms of governance and control (Aradau and Blanke, 2017), as prediction and computation have been increasingly adopted as part of security—thus placing them as central within the governmental apparatuses of discipline and biopower (Aradau and Tazzioli, 2020; Perret and Aradau, 2023).
Within such niches, new innovations can gain momentum and stabilize as technologies through building social networks, expertise, and legitimacy—until they become capable of challenging the dominant regime. The technologies often see a gradual growth and a broadening of scope known as “function creep” (Dahl and Sætnan, 2009). Pereira and Raetzsch (2022) describe how the Danish vehicle tracking system gradually attained wider and wider uses. Another example can be found in India's universal biometric identity system, “Aadhaar,” which was created to support welfare benefit distribution, but has seen gradually expanding mandates and cooptation by private interests, to uses that are increasingly problematic in terms of civil liberties (Khera, 2019; Rao and Nair, 2019).
Crises as windows of opportunity
Once the technologies in niches become sufficiently stable, building up a network of supporters and institutional legitimacy, they can come to truly challenge the dominant regime—often triggered by a crisis (Geels and Schot, 2007). Crises legitimize change, motivate a state of exception, and allow changing or bypassing existing rules—thus providing a “window of opportunity” for change. A period of crisis may motivate bypassing, making an exception to, or changing the “policy, organizational structures, legal frameworks, subject matter experts, and existing data infrastructures” (Reutter, 2022: 918) that prevent new technologies from fully taking hold.
In relation to mobility, the purported European “migration crisis” of 2015–2016 offers an example, as it enabled the broadening use of datafication practices and AI technologies (Chouliaraki and Georgiou, 2022). As Taylor and Meissner (2020) note, the event exemplified technological “solutionism” (Morozov, 2013), where migration was treated as a technical problem that could be solved through digital innovation. The COVID-19 pandemic has been pointed to as a second example of crisis, legitimizing previously controversial data collection and technological solutions (Söderström, 2021).
Once a transition has begun, the outcome is highly contingent and unpredictable, guided by internal tensions and logics in a path-dependent process. The resulting transitions are the product of a dynamic and ongoing process of negotiation and contestation among actors at different levels, each with their own interests, values, and agendas—producing a shift to a new sociotechnical regime. As Geels and Verhees (2011) argue, cultural legitimacy is a prerequisite for successful technological innovation journeys, and technological change is thus subject to contestation and cultural processes.
The suggestion is hence that AI's implementation within the public sector will be constrained and shaped by counteracting forces, playing out as drawn-out negotiations and conflicts between opposing skills and types of knowledge—with incremental and partial implementation of the new technologies. More radical uses of AI will primarily be found in specific niches in which concern for risks is lower or risks are considered acceptable, such as in relation to migration or security. Having been established within these niches, AI may then spread to broader applications in a rapid transition, often enabled by moments of (perceived) crisis.
Conclusion
In a period of rising geopolitical tensions, we appear to be in the midst of a renegotiation of the role of the state, in which AI has emerged as the epicenter of global political and economic tensions. AI is increasingly seen through a national security lens, motivating growing state involvement in shaping its technological evolution (Dew and Lewis, 2024; Zhang, 2024). At the same time, states are increasingly leveraging AI to expand their governance capacities, enabling the processing and analysis of data in ways that both build on and go beyond the scope of conventional statistics. As this article has argued, the incipient shift from statistics to AI in state governance may have profound consequences. Just as Foucauldian scholarship has demonstrated that the modern state was shaped by the technology of statistics, so too may AI shape a new paradigm of governance and biopolitics.
To lay the groundwork for research at the intersection between the state and AI, this article has sought to (1) examine how statistics and AI differ in epistemic terms, and what this implies for how AI may reshape governance, and (2) provide a theoretical framework to guide empirical study of the rise of AI in governance.
For the former, the article has argued that while the notion of “AI” remains ambiguous in its current use and subject to substantial hype, there is nevertheless substantial epistemic novelty to AI—with potentially fundamental implications for the nature of the state, and the meaning of concepts such as citizenship, democracy, and governance (Isin and Ruppert, 2020a). By examining the epistemic differences between statistics and AI, the article has identified several such potential shifts in governance, and suggested broad directions for future research.
First, statistics and its associated data operate on a defined “population” with particular attributes to be measured and correlated, thus defining the boundaries of a state based on a geographical area. AI and its associated forms of data, however, do not presuppose or delimit specific spaces or populations, but process flows of data that are often challenging to geographically delineate. AI is hence poised to challenge the relationship between states and their populations in complex ways, raising questions about who counts among a state's citizens and what constitutes its territory.
Second, AI represents a shift from governance through explicit rules to governance through association and optimization. As decisions over benefits and punishments are recast as optimization problems over learned patterns, the political choices and assumptions embedded in them become harder to articulate, contest, and hold to democratic account.
Third, by representing a move from pre-defined variables to inferred “learned features,” AI suggests a shifting relationship to categories such as race and gender. The citizen of the AI state is defined not by fixed demographic variables, but by dynamic, data-driven profiles that can shift and evolve over time. The outcome is the possibility for the “state to address a cluster—to demand proof, to deny entry, to refuse a claim—even where this group simply never existed” (Amoore, 2021: 35). Concepts like “race” or “gender” are erased, and replaced by a collection of emergent features of a neural network, raising profound questions about the nature of identity and group membership in the AI state.
While these directions suggest that AI will have a transformative impact on state governance, the article has also argued that the actual consequences must be understood as a product of how technologies are politically, institutionally, and socially embedded. While statistics was central to shaping the modern state, its implications were far from monolithic—and the same should be expected for AI. To offer a framework to guide empirical study of how AI may be brought into state institutions, the paper has argued for conceptualizing AI as a “radical innovation,” drawing on the large literature on sociotechnical transitions. As such innovations represent a challenge to incumbent actors and established institutions, their implementation tends to face substantial inertia and become the subject of drawn-out conflicts and negotiations. This does not mean that all change is necessarily incremental, but rather that innovations are first implemented in smaller niches where they may establish support and build legitimacy, from which rapid transitions may then occur—often enabled by moments of perceived crisis.
The sociotechnical perspective offers suggestions for how to conceptualize and approach the empirical study of the state's growing use of AI. It implies paying attention to the situated micro-level of how new technologies—and the stories that surround them—enter into institutions and become part of existing struggles over power and resources, with specific attention to the spaces of exception within which more radical implementations are made. It suggests that AI adoption in governance is not linear or predetermined but involves contestation and negotiation among diverse actors, and is shaped by interactions between technological possibilities and societal structures. It suggests that AI operates at the intersection of local contexts and global interconnected processes, highlighting the importance of studying its adoption through a comparative and global lens. It suggests that the impact of AI is not merely technological, but defined by its interaction with human actors, organizational processes, norms, and infrastructure. It highlights path dependency, emphasizing how racism, colonialism, and other historical events and relationships influence the structure and dynamics of contemporary AI systems.
Finally, while the integration of AI into state governance raises legitimate concerns, it is equally important to acknowledge its potential for positive outcomes. The state is predominantly composed of dedicated public servants whose intentions are rarely malicious. After decades of neoliberal retrenchment, an expansion of the state's capacity and ambition is both necessary and desirable (Klein and Thompson, 2025). Rather than defaulting to criticism, scholarship should also aim to develop a constructive vision for AI governance, exploring how these technologies can be harnessed to empower a state that serves the public more effectively and equitably.
Acknowledgements
We are grateful to the anonymous reviewers for their invaluable comments and suggestions.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by the NCCR – on the move funded by the Swiss National Science Foundation grant 51NF40-205605.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
