Abstract
The success and widespread deployment of artificial intelligence (AI) have raised awareness of the technology’s economic, social, and political consequences. Each new step in the development and application of AI is accompanied by speculations about a supposedly imminent but largely fictional artificial general intelligence (AGI) with (super-)human capacities, as seen in the unfolding discourse about the capabilities and impact of large language models (LLMs) in the wake of ChatGPT. These far-reaching expectations lead to a discussion on the societal and political impact of AI that is largely dominated by unfocused fears and enthusiasms. In contrast, this article provides a framework for a more focused and productive analysis and discussion of AI’s likely impact on one specific social field: democracy. First, it is necessary to be clear about the workings of AI. This means differentiating between what is at present a largely imaginary AGI and narrow artificial intelligence focused on solving specific tasks. This distinction allows for a critical discussion of how AI affects different aspects of democracy, including its effects on the conditions of self-rule and people’s opportunities to exercise it, equality, the institution of elections, and competition between democratic and autocratic systems of government. This article shows that the consequences of today’s AI are more specific for democracy than broad speculation about AGI capabilities implies. Focusing on these specific aspects will account for actual threats and opportunities and thus allow for better monitoring of AI’s impact on democracy in an interdisciplinary effort by computer and social scientists.
Artificial Intelligence and Democracy
The success and widespread deployment of artificial intelligence (AI) have raised awareness of the technology’s economic, social, and political consequences. The most recent step in AI development—the application of large language models (LLMs) and other transformer models to the generation of text, image, video, or audio content—has come to dominate the public imaginary of AI and accelerated this discussion. But to assess AI’s societal impact meaningfully, we need to look closely at the workings of the underlying technology and identify the areas of contact within fields of interest. This article proposes a corresponding conceptual framework for the study of the current and prospective impact of AI on democracy.
AI has become a pervasive presence in society. Recent technological advances have allowed for the broad deployment of AI-based systems in many different areas of social, economic, and political life. In the process, AI has had, or is expected to have, a strong effect on each area it touches. We see examples in discussions about the algorithmic shaping of digital communication environments and the associated deterioration of political discourse (Kaye, 2018); the flooding of the public arena with false or misleading information enabled through generative AI (Krebs et al., 2022); algorithmically stimulating political conflict (Settle, 2018); AI’s impact on international human rights law (Gellers & Gunkel, 2023); the future of work and AI’s role in the replacement of jobs and related automation-driven unemployment (Acemoglu & Johnson, 2023; Brynjolfsson & McAfee, 2016; Frey, 2019); and AI’s impact on shifting the competitive balance between autocracies and democracies (Filgueiras, 2022; Lee, 2018). With these developments, AI has also begun to touch the very idea and practice of democracy.
The current adoption of AI resembles prior waves of technological change and their political impact. Technologies provide supporting structures for the coordination of social, economic, and political life. Through their design, underlying mechanisms, and inputs and outputs, different technologies influence the societal fields and processes mediated through them or on which they rely (Winner, 1980). Technology and technological shifts therefore have effects on politics and political competition by asymmetrically favoring actors, factions, or groups depending on their alignment or misalignment with the affordances emerging from the technology of the day (Bimber, 2003; Castells, 2009/2013; Jungherr et al., 2019; Müller, 2021). In the past, we saw this with the impact of soil maps, geometry, and writing (Stasavage, 2020), the printing press (Eisenstein, 1979; Kaufmann, 2019), newspapers (Schudson, 1978), broadcast media (Neuman, 1991; Prior, 2007), and, more recently, digital media (Jungherr et al., 2020; Williams & Carpini, 2011). The growing presence of AI-based systems in society demands an interrogation, along similar lines, of how AI affects the idea and practice of democracy (Risse, 2023).
AI is often discussed as a threat to society if not human life itself (Bostrom, 2014)—a discussion predicated on a largely imaginary artificial general intelligence (AGI) able to autonomously perceive, reason, decide, and act in varying contexts with human or superhuman capabilities. This notion, derived mostly from speculative fiction, has little correspondence with AI-based systems currently deployed or lab research on the development of AI (Agrawal et al., 2018/2022; Larson, 2021; M. Mitchell, 2019; Smith, 2019). In fact, actually existing AI is predominantly narrow AI trained on domain-specific data to perform domain-specific tasks (M. Mitchell, 2019, p. 45f.). Accordingly, in examining the impact of AI on democracy, it is important not to get sidetracked by imaginary AGI and instead focus on specific instances of narrow AI, the conditions for its successful deployment, its uses in specific areas of interest, and their effects.
This article provides answers to two questions:
What type of AI do we encounter when examining AI’s impact on democracy?
How and in what respects does this touch on the idea or practice of democracy?
The article provides a novel conceptual framework for assessing and monitoring how AI—even in its current narrow manifestation—affects the idea and practice of democracy. It combines a discussion of AI’s constitutive technical features with its expected impact on democracy, informed by political theory and empirical findings from various fields. The broadness of this account will of course lead to a lack of nuance in specific areas. While this might feel unfortunate to specialists, lack of nuance can be a feature of social theory, allowing for the development of abstract frameworks, affording in turn the theory-informed charting of new subjects and the establishment of connections between fields (Healy, 2017). While social scientists and computer scientists alike may feel shortchanged, the combination of both their perspectives promises to inform both groups. For technologists, the article grounds their discussion of AI’s impact on democracy in political theory. For social scientists, it connects their discussion with the actual workings of available AI technology. Some of the examples presented here will remain somewhat speculative. Again, this might be seen as a detriment by some. But if we accept that current technological developments—like AI—will come to shape and impact democratic practice, ideas, and potentially even structures, we must allow ourselves to speculate and work with thought experiments. Simply relying on data and examples documenting the present and the past will render us blind to the evolving nature of democracy and leave us silent on questions of design choices at the interface between technology and democracy (Papacharissi, 2021).
Artificial Intelligence
AI has a rich history of approaches, applications, and associated imaginaries (McCorduck, 2004; Nilsson, 2010; Russell & Norvig, 1995/2021). To develop specific expectations regarding AI’s impact on democracy, we must be precise about the term and how the underlying technology works. There is a wide variety of competing definitions, but for the purposes of this article, AI can be defined as “the study and construction of agents that do the right thing” (Russell & Norvig, 1995/2021, p. 22). More specifically, AI has been described as “[t]hat activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment” (Nilsson, 2010, p. xiii).
This includes approaches that allow machines to pursue tasks, sometimes on par with and sometimes surpassing the ability of humans. The idea of a powerful machine intelligence has inspired far-reaching expectations and fears regarding the potential and threats associated with AI, ranging from economic growth (Brynjolfsson & McAfee, 2016) and post-human transcendence (Hanson, 2016) to a downright menace to human existence (Bostrom, 2014). Expectations of AI’s supposed general ability to address tasks or make decisions across multiple domains (AGI) are extrapolated from AI’s very real success in the data-driven completion of specific tasks in specific domains (narrow AI). This, though, is a category error.
AI’s current successes do not belong to a still largely fictional AGI but rely instead on advances in narrow AI. The technology has succeeded in completing specific tasks in various domains. These successes depend strongly on advances in the identification of patterns that link available data points to outcomes of interest. On a very basic level, this can be the application of simple regression or machine-learning models that can be surprisingly successful in many tasks and are often characterized as a form of AI. But, of course, they are far from the understanding of AI introduced above. Truer to that understanding of AI are more advanced approaches such as deep learning (Goodfellow et al., 2016) and reinforcement learning (Sutton & Barto, 2018). These approaches have been successful in many different contexts, including computer vision, machine translation, medical diagnosis, robotics, and voice recognition (LeCun et al., 2015). Examples of their promise include predictions of possible but as yet unknown biological or chemical compounds (Chow et al., 2018; King et al., 2009; Schneider, 2018) or strategic action in game play (Bakhtin et al., 2022; Silver et al., 2016, 2017, 2018). Recently, transformer models, a subclass of deep learning, have proven to be highly promising (Parmar et al., 2018; Vaswani et al., 2017) in the autonomous generation of text, image, and video content (Brown et al., 2020; Ramesh et al., 2022). As we will see below, these advances are starting to feature in politics and might come to impact democracies.
But these approaches do not work universally. In fact, they depend on a set of preconditions, which might limit their uses in politics. Some are obvious. For example, to be successful, AI needs to be able to access some digital representation of its environment, either through sensors mapping the world or through the input of existing data. Where these representations are difficult to come by or data are scarce, as in many areas of politics, AI will not be successful. Other preconditions are not so obvious. For example, for AI to produce helpful results, the underlying connections between inputs and outputs must be stable over time. This points to two problems: unobserved temporal shifts between variables (Lazer et al., 2014) and the dangers of relying on purely correlative evidence without the support of causal models (Pearl, 2019; Schölkopf et al., 2021).
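The stability precondition can be illustrated with a minimal sketch. All numbers here are invented for illustration: a model fit to past observations only predicts well as long as the link between input and outcome holds; an unobserved temporal shift leaves the model confidently wrong.

```python
# Minimal sketch with invented data: a model learned from the past fails
# silently once the underlying input-output relationship shifts over time.

def fit_slope(xs, ys):
    """Least-squares slope through the origin: y ~ slope * x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Training period: the outcome is roughly twice the input.
past_x = [1.0, 2.0, 3.0, 4.0]
past_y = [2.1, 3.9, 6.2, 7.8]
slope = fit_slope(past_x, past_y)  # close to 2.0

# Later, an unobserved temporal shift changes the relationship.
new_x, new_y = 4.0, 2.0           # the world now produces y ~ 0.5 * x
prediction = slope * new_x         # close to 8, far from the observed 2
error = abs(prediction - new_y)
print(round(slope, 2), round(error, 2))
```

The model reports nothing unusual; only comparison with fresh outcomes reveals the drift, which is exactly what the correlative evidence alone cannot provide.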
More important still, especially with respect to democracy, is that normatively speaking the past must provide a useful template for the future. Change is a crucial feature of societies, especially the extension of rights and the participation of previously excluded groups. Over time, many societies strive to decrease discrimination and increase equality. In fact, many policies are consciously designed to break with past patterns of discrimination. AI-based predictions and classifications based on past patterns risk replicating systemic inequalities and even structural discrimination (Bolukbasi et al., 2016; Christian, 2020; S. Mitchell et al., 2021).
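A toy example makes the replication mechanism concrete. The data here are entirely synthetic and the "model" deliberately naive: a system that learns from biased past decisions reproduces that bias even when the protected attribute itself is absent, because a proxy variable (here a hypothetical zip code) carries the same signal.

```python
# Minimal sketch with synthetic data: a naive model trained on biased past
# hiring decisions reproduces the bias via a proxy variable ("zip").
history = [
    {"zip": "A", "score": 7, "hired": True},
    {"zip": "A", "score": 5, "hired": True},
    {"zip": "A", "score": 6, "hired": True},
    {"zip": "B", "score": 7, "hired": False},
    {"zip": "B", "score": 6, "hired": False},
    {"zip": "B", "score": 8, "hired": True},
]

def hire_rate(records, zip_code):
    """Share of past applicants from a zip code who were hired."""
    group = [r for r in records if r["zip"] == zip_code]
    return sum(r["hired"] for r in group) / len(group)

# "Predict the majority outcome per zip" learned from the biased history:
model = {z: hire_rate(history, z) >= 0.5 for z in ("A", "B")}

# Two equally qualified new applicants receive different predictions.
applicant_a = {"zip": "A", "score": 7}
applicant_b = {"zip": "B", "score": 7}
print(model[applicant_a["zip"]], model[applicant_b["zip"]])  # True False
```

Real systems are far more complex, but the structure of the problem is the same: as long as the training data encode past discrimination, accuracy with respect to that data means fidelity to the discrimination.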
Problems that share these characteristics can be found in many areas, such as the digital economy, commerce, digitally mediated social interactions, robotics, and sports. AI has proven highly successful in these areas. At the same time, few problems in politics and democracy more broadly share these characteristics. This limits the application of AI in society and, accordingly, its impact on democracy.
The Impact of Artificial Intelligence on Democracy
AI’s recent successes and its broad deployment in many areas of social, economic, and political life have begun to raise questions regarding whether and how AI impacts democracy. The idea and practice of democracy are highly contested concepts with competing accounts of great nuance. The associated discussions within political theory are highly productive and successful in identifying different normative, procedural, or structural features and consequences within our understanding of democracy (Dahl, 1998; Guttman, 2007; Landemore, 2012; Przeworski, 2018; Tilly, 2007). Still, for the purposes of this article, it is necessary to reduce this rich field to a few important—if sometimes contested—features of democracy in which we can expect AI impact. What the argument loses in nuance is compensated for by the provision of a broadly applicable conceptual framework.
This article presents four areas of impact at different analytical levels: at the individual level, AI impacts the conditions of self-rule and people’s opportunities to exercise it; at the group level, AI impacts equality of rights among different groups of people in society; at the institutional level, AI impacts the perception of elections as a fair and open mechanism for channeling and managing political conflict; and at the systems level, AI impacts competition between democratic and autocratic systems of government.
This article does not follow one specific theory of democracy, instead approaching democracy as a multifaceted phenomenon (Asenbaum, 2022). This offers a broader view of the areas where AI might conceivably affect society in ways relevant to the performance and quality of democracy.
Artificial Intelligence and Self-rule
One tenet of democracy is that governments should be chosen by those they will serve. Such self-rule is a normative idea about legitimizing the temporal power of rulers over the ruled and a practical idea that distributed decision-making is superior to other more centralized forms of decision-making or rule by experts (Dahl, 1998; Landemore, 2012; Landemore & Elster, 2012; Schwartzberg, 2015). AI impacts both the ability of people to achieve self-rule and the perceived superiority of distributed decision-making over expert rule in complex social systems, highlighting potential limits to self-rule in several ways.
Shaping Information Environments
The legitimacy of self-rule is closely connected with the idea of people being able to make informed decisions for themselves and their communities. This depends at least in part on the information environment in which they are embedded (Jungherr & Schroeder, 2022). AI affects these informational foundations of self-rule directly. This includes how people are exposed to and can access political information, how they can voice their views and concerns, and how these informational foundations potentially increase opportunities for manipulation (Jungherr & Schroeder, 2023).
Algorithmic shaping of digital information environments based on people’s inferred information preferences or predicted behavioral responses (Narayanan, 2023) has raised particularly strong concerns (Kaye, 2018). Key among these is that people will be exposed only to information with which they are likely to agree, thus losing sight of the other political side. Empirical findings suggest that these fears may be overblown (Flaxman et al., 2016; Kitchens et al., 2020; Scharkow et al., 2020). In fact, in digital communication environments, people may encounter more political information from the other side, and more information they disagree with, than in other information environments. This can be a problem, especially for political partisans, because it increases the salience of political conflict (Settle, 2018). But the degree to which this mechanism is driven by AI or might even be lessened through specific algorithm design remains unknown for now.
Going further, several authors have diagnosed various ill effects of digital communication environments on information quality and political discourse, some AI-driven and others independent of AI (Bennett & Livingston, 2021). While clearly important, these diagnoses risk overestimating the quality of prior information environments and the role of information for people in their exercise of self-rule. In fact, critiques of the quality of media in democracies abounded well before digital media became prevalent (Keane, 2013).
In addition, most people do not follow the news closely, do not hold strong political attitudes, and do not perform well when tested on their political knowledge (Converse, 1964; Lupia & McCubbins, 1998; Prior, 2007; Zaller, 1992). They seem to rely on informational shortcuts or on social structures to exercise self-rule (Achen & Bartels, 2016; Kuklinski & Quirk, 2000; Lodge & Taber, 2013; Popkin, 1991). Hence, these mechanisms can also be expected to mediate the impact of AI-driven shaping of information environments. To assess AI’s impact fully, research must consider not only information environments but also whether and how AI affects the structural and social factors that mediate the impact of political information on self-rule.
It does not appear that AI-driven shaping of digital information environments inevitably leads to a deterioration of access to information necessary for people to exercise their right to self-rule. Nevertheless, there is much opacity in the way digital communication environments are shaped. The greater the role of these environments in democracies, the greater the need for assessability of the role of AI in their shaping (Jungherr & Schroeder, 2023). We also need regular external audits of the effects of AI on the information visible on online platforms, especially the nature and kind of information that is algorithmically promoted or muted.
Economics of News
AI might also come to indirectly impact the creation and provision of relevant political information by changing the economic conditions of news production. For one, recent successes in the development of transformer models suggest that AI might soon be used by media providers to automatically generate text, image, or video content. This might lead to an acceleration of existing trends toward automated content generation in news organizations (Diakopoulos, 2019). This puts pressure on journalists, who might see routine tasks shift toward AI-enabled systems, but also on news organizations, which might face a new set of ultra-low-cost competitors specializing in automatically generated news content. This potentially increases pressure on journalists’ salaries as well as the audiences and profits of news companies, intensifying existing pressures on news as a business (Nielsen, 2020).
In addition, AI reconfigures the way news and political information are accessed by the public. Search engines like Bing and Google are experimenting with LLMs to provide users with automatically generated content in reaction to search queries instead of links to content provided by news and information providers. This limits monetization opportunities for small- or middle-sized media organizations without strong brand identity and loyalty, which in the past could generate traffic based on query-based referrals from search engines or social networking sites. These new limitations on monetization opportunities might lead to a decline in the coverage of politics, or even a reduction in the number of news organizations. This in turn would limit the total amount and diversity of information available to people to develop informed decisions. This will hit hardest political outsiders and challengers, who rely on smaller information providers for coverage. This decline in monetization opportunities of news will thus likely lead to a strengthening of existing institutions, media brands, and associated power relations (Jungherr & Schroeder, 2023).
In addition, public perceptions of digital communication environments being dominated by AI-generated content—some of it correct, some of it actively misleading, some of it accidentally misleading—might contribute, among parts of the population, to an increased valuation of select news organizations whose process of news production and quality assurance they have come to trust. These news brands might thus find themselves strengthened through an increase of AI-generated content in open communication environments or in the coverage by cost-cutting competitors. Of course, this expectation only holds if these news brands are seen as providing added value over AI-generated content.
It is also important to remember that this AI-driven turn to specific news brands is likely to hold only for audience members who demand accurate information and are interested in news and politics. These will likely be socio-economically well-resourced and politically engaged people (Prior, 2018; Schlozman et al., 2018). Others might feel fine with free or automatically generated content. This is likely to reinforce an informational divide between politically interested and disinterested audiences that has already grown following the switch from a low-choice mass media environment to high-choice digital communication environments (Prior, 2017). In countries without strong public broadcasters, like the US, this divide will also run along economic lines, allowing those able to pay for news to access high-quality, curated, and quality-checked information, while leaving those not able (or willing) to pay to the noisy, (partially) automated, and contested free digital information environment. Over time, this might mean that socio-economic divides determine (or are seen to determine) people’s ability to come to informed political decisions.
Speech
AI impacts not only access to information but also the expression of opinions, interests, and concerns in digital communication environments. With digital communication environments increasingly becoming arenas for the expression of voice, the surfacing of concerns, and the construction of political identities, this is an important element in AI’s shaping of the conditions for self-rule.
The perceived ability of AI to classify content has put it at the forefront of the fight against harmful digital speech and misinformation. AI is used broadly by tech companies to classify user content in order to block it from publication or flag it for moderation (Douek, 2021; Kaye, 2018). Details of the applied procedures, their successes, and error rates are opaque to outsiders, making it difficult to assess the breadth of AI’s uses and its effects on speech. This is problematic: harmful speech and misinformation are both difficult categories for classification. Neither category is objective or stable, and both require interpretation as meaning shifts across contexts and time. This makes them difficult to identify with automated data-driven AI and risks suppression of legitimate political speech.
In addition, the technical workings of AI also impact the type of speech becoming visible in AI-shaped spaces. By learning typical patterns within a given set of cases, AI will lean toward averages. For AI-enabled shaping and summarizing of speech or political positions, this will favor common positions, concerns, and expressions. In unadjusted AI-shaped communication environments, outsider and minority positions, concerns, and expressions will end up submerged and invisible. AI would thus negatively impact the ability of a society to make itself visible to itself, lower democracies’ information processing capacities, and strengthen the political status quo (Jungherr & Schroeder, 2023).
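The tendency toward averages can be reduced to a very simple mechanism. In this sketch (with invented positions and counts), a summary that keeps only the most frequent positions drops minority concerns entirely, regardless of their substance.

```python
# Minimal sketch with invented positions: frequency-based summarization
# keeps majority positions and makes minority concerns invisible.
from collections import Counter

posts = (
    ["lower taxes"] * 40
    + ["better roads"] * 35
    + ["minority language rights"] * 3
    + ["accessible polling stations"] * 2
)

def summarize(items, top_n=2):
    """Keep only the top_n most frequent positions."""
    return [position for position, _ in Counter(items).most_common(top_n)]

summary = summarize(posts)
print(summary)  # minority concerns are absent from the summary
```

Deployed summarization systems are of course not simple frequency counters, but any system optimized to reproduce the typical pattern in its input shares this structural pull toward the majority, which is why unadjusted systems submerge outsider voices.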
Still, there are few alternatives to AI-based moderation given the sheer volume of content being published in digital communication environments (Douek, 2021), which makes it important to gain a better understanding of AI-based moderation’s workings and effects. Accordingly, AI-based moderation needs assessability provided by platforms and external audits to ensure its proper workings.
AI-based moderation, however, is not only a risk. Scholars and commentators have long pointed to the limits of large-scale political deliberation imposed through inefficiencies in information distribution, surfacing of preferences, and coordination of people. AI may improve on some of these inefficiencies by predicting individual preferences, classifying information, and shaping information flows (Landemore, 2022). This in turn might open opportunities for new deliberative and participatory formats in democracies, thereby strengthening and vitalizing democracy.
It is important to remain aware of both the risks and the opportunities AI provides for moderating speech and surfacing concerns in digital communication environments. AI can contribute to creative solutions to some of the technical challenges underlying successful self-rule. But if it is to do so, we need to know more about its actual uses, effects, and risks. This demands greater transparency from digital platforms and continued vigilance and attention from civil society.
Manipulation
AI could also negatively impact individual informational autonomy by predicting the reactions of people to communicative interventions. This could allow professional communicators to reach people in exactly the right way to shift opinions and behavior. Sanders and Schneier (2021) present a thought experiment that illustrates how lobbyists might use AI to predict the likelihood of success of bills they introduce to legislators. While still far from realization, their example shows interested parties employing AI to increase the resources available to them and potentially targeting interventions aimed at influencing people to behave in ways beneficial to those same parties. AI can also be used to generate messages aimed at persuading people, with early working papers indicating that interventions designed by LLMs have persuasive appeal (Bai et al., 2023). Similarly, LLMs are currently used by academics and campaign professionals to simulate the reactions and attitudes of prototypical voters for message testing and research, although the precision and validity of these approaches are contested (Bisbee et al., 2023; Horton, 2023; Kim & Lee, 2023).
Fears also exist regarding people encountering targeted communicative interventions in digital communication environments. By predicting how people might react to an advertisement, digital consultancies could use AI to tailor interventions to influence people. The British consultancy firm Cambridge Analytica, which claimed to be able to predict which piece of information displayed on Facebook was necessary to get people to behave in ways beneficial to its electoral clients, provided a first taste of this problem. While the company’s claims have been debunked (Jungherr et al., 2020, pp. 124–130), the episode speaks to the perception of AI’s power to manipulate people at will, as well as the willingness of journalists and the public to accept widely exaggerated claims about the power of digitally enabled manipulation irrespective of contradicting evidence.
Recent advances in transformer models have opened new avenues for potential manipulation through the automated production of text or images (Brown et al., 2020; Ramesh et al., 2022). There are legitimate uses of these models, as well as nefarious ones. For instance, they facilitate the automated generation of content based on raw information or event data, as found in sports coverage or the stock market (Diakopoulos, 2019). This is largely unproblematic since AI translates information from one form of representation—such as numerical or event data—into another—such as a narrative news article.
More problematic are cases in which AI does not simply translate one representation of information into another but generates content based on prompts and past patterns. Examples include text or image responses to textual prompts in the form of questions or instructions. AI has no commitment to the truth of an argument or observation; it is only imitating their likeness as found in past data. Today’s AI is committed only to the representation of the world, an object, or an argument available to it, not to the world, object, or argument as such (Smith, 2019). Thus, AI output taken at face value cannot be trusted because it is not necessarily true, only plausible.
More problematic still is the chance that future AI could be used to produce fake information at scale. This could take the form of targeted fakes aimed at misleading people, or flooding information environments with masses of unreliable or misleading AI-generated content. This would dilute information environments, making it more difficult for people to access crucial information and/or making information appear untrustworthy.
At the same time, somewhat counterintuitively, a mass-seeding of automated misinformation might also contribute to the strengthening of professional news and information curation discussed above. When the prevalence of unreliable or misleading information in digital communication environments becomes evident, the premium for reliable information rises. Accordingly, professional, reliable, and impartial news sources might see a reversal of fortune compared to their economic and ideational challenges of the last 20 years. This way, automated misinformation at scale might turn out to strengthen intermediary institutions that provide information in democracies.
It is important to note that these uses of AI are still projected and may not come to pass given the limits of the underlying technology, the development of efficient countermeasures, and/or the persistence of mediating structures that limit the effects of information overall. But considering recent technological advances, these uses have come to feature strongly in the public imagination and demand critical reflection by social and computer scientists.
Expert Rule
Support for self-rule is also closely connected with the assessment that expert rule has limits in complex social systems. Expertise is important but has limited predictive power in complex societies, and the decentralized decision-making and preference surfacing of self-rule, while imperfect, are seen as superior for settling on collectively binding decisions (Dahl, 1998; Lindblom, 2001). The growing availability of data in ever more domains, coupled with new analytical opportunities offered by AI, has raised hopes for new predictive capabilities in complex societies (Kitchin, 2014). AI not only highlights the weaknesses of people making political decisions but also increases the power of experts.
AI brings new opportunities in the modeling and prediction of societal, economic, ecological, and geopolitical trends, promising to provide experts with predictions of people’s behavior in reaction to regulatory or governance interventions. While the actual quality of these approaches is still open to question, they have strong rhetorical and legitimizing power. They increase the power of experts, who—sometimes actually and sometimes rhetorically—rely on AI-supported models to ground their advice on how societies should act considering major societal challenges. This apparent increase in the power of experts to guide societies in responding to challenges can reduce the option space available for democratic decision-making, shifting the question from whether people can to whether they should decide for themselves. In this, AI could induce a transition from self-rule to expert rule and thereby weaken democracy.
Power of Technology Companies
AI also increases the power of firms over the public and even over states. While the theoretical breakthroughs in the current wave of AI began at universities, it is firms that lead in their practical application, further development, and broad rollout (Ahmed et al., 2023; Metz, 2021). Over time, the power to innovate and critically interrogate AI may shift from public to commercial actors, weakening AI oversight and regulation by democratically legitimated institutions. These challenges can be clearly seen in the attempts by both the US and the EU to get to grips with regulating AI development and use (Criddle & Murphy, 2023; Espinoza & Johnston, 2023).
There is also the issue of economic and political power. AI has allowed companies such as Google and Amazon to dominate multiple economic sectors (Bessen, 2022; Brynjolfsson et al., 2023). Governments have also begun to rely on AI-based service providers to support executive functions such as policing and security. The result is a growing government dependence on AI companies and an opaque transfer of knowledge from governments to these service providers. Add to this power over AI-enabled information flows and governance over political speech (Jungherr & Schroeder, 2023), and AI companies hold central positions in democracies, potentially weakening people’s capacity for self-rule. This shows the importance of effective government and civil society oversight of companies that provide AI and those that employ it, to ensure that the foundations of meaningful self-rule hold as societies come to rely more on AI-supported systems.
Artificial Intelligence and Equality
Democracy depends on people having equal rights to participation and representation (Dahl, 1998). While this ideal is imperfectly realized and strongly contested in practice (Phillips, 2021; Young, 2002), democracies are in an ongoing struggle to extend rights to formerly excluded groups. AI’s reliance on data documenting the past risks subverting this process and instead continuing past discrimination into the future, thereby weakening democracy.
By predicting how people will behave under various circumstances based on observations from the past, AI differentiates among people based on criteria represented in data points. This risks reinforcing existing biases in society and even porting socially, legally, and politically discontinued discriminatory patterns into the present and future (Eubanks, 2018; Mayson, 2019; Mehrabi et al., 2022; S. Mitchell et al., 2021; Obermeyer et al., 2019). This makes continuous observation and auditing of AI implementation crucial.
People’s visibility to AI depends on their past representation in data. AI has trouble recognizing those who belong to groups underrepresented in the data used to train it. For example, minorities not traditionally represented in data sets will remain invisible to computer vision (Buolamwini & Gebru, 2018), and historically underrepresented groups will not be associated with specific jobs and thereby risk discrimination in AI-assisted hiring procedures (Caliskan et al., 2017). This general pattern is highly relevant to democracy: for example, the systematic invisibility of specific groups means they would be diminished in any AI-based representation of the body politic and in predictions about its behavior, interests, attitudes, and grievances. Accordingly, already disenfranchised people could face further disenfranchisement and discrimination in the rollout of government services and the development of policy agendas based on digitally mediated preferences and voice, or face heightened persecution from the state security apparatus.
AI also makes some people more visible. Historically marginalized groups will be overrepresented in crime records, negatively impacting group members in AI-based approaches to policing or sentencing (Chouldechova, 2017; Christian, 2020; Ferguson, 2017). In countries like the US, where voting rights are withheld from felons to varying degrees depending on state jurisdiction, systematic biases in AI-supported policing and sentencing might over time come to systematically bias the electorate against historically disenfranchised groups (Aviram et al., 2017). AI-based approaches can also have a profound effect on electoral redistricting (Cho & Cain, 2020). AI could thus reinforce structural inequality and discrimination by continuing patterns found in historical data even as a society tries to enact more equal, less discriminatory practices.
Extrapolating from this, we can expect subsequent AI-based representations of public opinion, the body politic, and AI-assisted redistricting to be biased against groups marginalized in the past. Different degrees of visibility to AI could increase the democratic influence of some groups and decrease that of others. For instance, AI might contribute to an increase of resources for the already privileged by making their voices, interests, attitudes, concerns, and grievances more visible and accessible to decision-makers. AI might use the preferences of visible groups in predictions about political trends and policy impact while ignoring those of less visible groups.
AI can also have adverse effects on the labor market. While in principle firms could invest in automation to allow workers to pursue new tasks and thereby increase the value of their labor, it appears that firms do so mostly to lower their own labor costs by substituting AI for tasks performed by human labor (Acemoglu & Restrepo, 2019). This substitution of capital for labor lowers workers’ income and individual bargaining power, which in turn threatens to increase economic inequality and weaken workers’ collective bargaining power. Consequently, this could also lower workers’ political influence and representation (Acemoglu, 2021; Gallego & Kurer, 2022).
What type of labor is affected by AI-based technological progress, however, is uncertain. Automation traditionally substitutes for routine human tasks and thus affects mostly low-skilled workers (Acemoglu & Restrepo, 2022b; Frey, 2019). But subsequent waves of AI innovation have shown that routine tasks underlie many professions, including white-collar and knowledge work long perceived as immune to automation. The impact of AI in changing the political fortunes of workers might thus concern larger groups in the economy than traditional forms of automation. This can already be seen in the current discussion about the impact of LLMs and generative AI on the creative and software industries, which until now seemed exempt from the dangers of automation-driven job replacement. These emerging fault lines were visible in the Hollywood writers’ strike of 2023, in which screenwriters demanded contractual protection against studio uses of AI for writing tasks (Wilkinson, 2023).
At the same time, AI can help aging societies complete substitutable work tasks and concentrate the shrinking labor force on currently nonsubstitutable tasks, thereby maintaining productivity levels in the face of growing demographic pressures in several developed economies (Acemoglu & Restrepo, 2022a). But realizing AI’s economic potential for societies means ensuring that the resulting gains are broadly shared and do not benefit only a narrow elite. With prosperity gains from digital technology in particular, this link between productivity and shared prosperity seems to be broken. This raises concerns that elites will capture AI-enabled gains while most people face only automation-driven economic risks (Acemoglu & Johnson, 2023). This would increase inequality in society and weaken democracy. This potentially dangerous development puts the specifics of AI’s implementation and its public and regulatory oversight into focus.
AI clearly touches on equality within democracies. Inequalities might arise in the allocation of options and state services using AI-based systems, people’s visibility and representation within AI-based systems, and the provision or withdrawal of economic opportunities for people whose job tasks can be replaced with AI. These are, therefore, important areas for further interrogation and, if necessary, regulatory intervention.
Artificial Intelligence and Elections
Democracies rely on elections, which channel and manage political conflict by providing factions the opportunity to gain power within an institutional framework. This works only if each faction sees a genuine opportunity to win power (Przeworski, 2018), making democracy a system of “organized uncertainty” (Przeworski, 1991, p. 13). AI applications threaten to offset this perceived uncertainty of who will lose and who will win elections. However, the uses of AI in this field are limited.
Data-driven approaches are limited in the prediction of individual voters’ behavior. While the voting behavior of committed partisans can be predicted with some probability (Hersh, 2015; Nickerson & Rogers, 2014)—at least in two-party systems—predicting the behavior of people who are only weakly involved with politics is much harder. People do not always vote, and when they do the context can vary greatly. Their vote choices are for the most part not available to modelers, making the automated prediction of voting behavior a problem for which AI is not well suited. The uncertainty of election victories will thus remain for the foreseeable future. But campaigns can develop other relevant data-driven models of elections, such as someone’s probability of voting or donating money (Hersh, 2015; Issenberg, 2012; Nickerson & Rogers, 2014), which could give campaigns a competitive advantage. Any such advantage is likely fleeting, though, given the broad availability of AI-based tools and campaign organizations learning from others’ successes and failures (Kreiss, 2016).
Firms and governments might also seek to use AI to predict election outcomes or the electorate’s mood swings and possibly intervene. These efforts are limited by the same challenges raised above, but the public impression of this capability might be enough to undermine and delegitimize elections and give election losers a pretext to challenge results rather than conceding.
Cambridge Analytica’s supposed role in the United Kingdom’s Brexit vote and the 2016 US presidential election previewed some of the challenges. While there is little indication that data-based psychological targeting was widely used or had sizable effects, these episodes still loom large in the public imagination as an example of AI’s perceived power in election manipulation (Jungherr et al., 2020, pp. 124–130). We can expect widespread AI use in economic, political, and social life to shift people’s expectations of its uses and abuses in electioneering, irrespective of its actual uses or inherent limitations.
Overall, AI’s impact on elections seems limited, given the relative scarcity of the predicted activity—voting. While indirect effects are possible through potential opportunities for competitive differentiation, it is doubtful that this can translate into a consistent, systemic shift of power, given the broad availability of AI tools. More likely is the indirect impact mentioned above: that by transposing expectations regarding AI’s supposed powers from industry and science to politics, the public may come to believe that AI is actually able to offset the “organized uncertainty” of democratic elections. This alone could weaken public trust in elections and acceptance of election results. It is thus important to keep organized uncertainty alive in the face of AI, not weaken it through irresponsible and fantastical speculation.
Artificial Intelligence and the Autocratic Competition
AI also affects the relationship between democracy and other systems of governance, such as autocracy, which some have argued has an advantage in the development and deployment of AI. Firms and governments that in democracies face limits to AI deployment or pervasive data collection about people’s behavior have more leeway in autocracies. A close connection between the state and firms developing and deploying AI in autocracies creates an environment of permissive privacy regulation that provides developers and modelers with vast troves of data, allowing them to refine AI-enabled models of human behavior. Add centrally allocated resources and training of large numbers of AI-savvy engineers and managers, and some expect the result to be a considerable competitive advantage in developing, deploying, and profiting from AI-supported systems (Filgueiras, 2022; Lee, 2018). This may allow for asymmetric developmental progress in AI, state capacity, economic benefits, and potentially even military prowess favoring autocracies over democracies.
Leaving aside normative considerations, democracies have been seen, on a purely functional level, to be superior to autocracies due to their superior performance as information aggregators and processors (Kuran, 1995; Lindblom, 1965; Ober, 2008; Wintrobe, 1998). Free expression, a free press, and electorally channeled competition between factions provide democracies with structural mechanisms that surface information about society, the actions of bureaucracies, and the impact of policies. In contrast, autocracies restrict information flows by controlling speech, the media, and political competition, leaving governments in the dark regarding local situations, the preferences of the public, the behavior or corruption in their bureaucracies, and ultimately, the consequences of the policies they pursue.
Prospectively, AI might allow autocracies to overcome this disadvantage. The clearest example at present is China (Zeng, 2022), which uses large-scale data collection and AI to support social planning and control (Ding et al., 2020; Pan, 2020)—such as through its Social Credit System (Creemers, 2018; Liang et al., 2018; Síthigh & Siems, 2019). Capitalizing on AI’s potential could also help autocracies increase their state capacities through, for example, AI-assisted governance and planning. This in turn could increase the quality of state-provided public services. It also might provide people living in autocracies with greater cultural, economic, or health-related opportunities (Diamandis & Kotler, 2020; Lee & Quifan, 2021).
Some might see these benefits as a worthy trade-off against certain individual freedoms, strengthening public support for autocracies and state control. Differential opportunities in realizing the potentials of AI might thus reinforce tendencies already evident in countries facing economic, cultural, or security crises (Matovski, 2021). Particularly at a time when democracies increasingly find themselves internally challenged over the opportunities they provide people, these potentials of AI that asymmetrically favor autocracies represent an obvious challenge to democracies—if realized.
Going further, AI is a technology increasingly discussed in military and security circles (Buchanan & Imbrie, 2022; Goldfarb & Lindsay, 2022). While its normative role and functional potential in these areas are heavily contested, the growing concerns in these circles point to the broad perception that AI could facilitate democracies falling behind autocracies.
Over time, differential trajectories in the development and deployment of AI in democracies and autocracies may emerge. If the assumption holds that autocracies share a greater affinity with AI and can profit more from it than democracies, AI could lead to a power shift between systems and thus weaken democracy.
Artificial Intelligence and Democracy: The Road Ahead
While many AI applications still lie in the future, we are already beginning to see AI’s impact on democracy. True, many of AI’s future uses and effects remain uncertain. However, it is important that social science engages early with AI and helps observe, evaluate, and guide its implementation. This includes AI’s uses in politics and government as well as its regulation and governance. This article provides a conceptual framework that allows the charting of important contact areas between AI and democracy, and connects AI’s uses and effects to relevant normative debates in political theory that can provide guidance in the assessment of AI’s uses and impacts. Going forward, the broad conceptual contact areas of self-rule, equality, elections, and competition between systems can serve as topical clusters for future work. Future work will provide a more fine-grained account and advance theories that explain use and effect patterns.
Social scientists need to consider AI in their analysis of the features, dangers, and potentials of contemporary democracy. In doing so, they need to reflect the inner workings and domain-specific effects of the underlying technology. At the same time, computer scientists and engineers need to consider the consequences for democracy in AI development and deployment. This means not only analyzing the technology itself but also considering its embeddedness in economic, political, and social structures that mediate its effects, for better or worse. This makes the analysis of AI’s impact on democracy an important area of future interdisciplinary work.
The quality of the analysis of AI’s effects on democracy depends on specificity regarding the type of AI, how it functions, the conditions for its successful deployment, and the aspect(s) of democracy it touches. Narratives about an unspecified, super-powered AGI and its supposed impact on society may make for stimulating reading but offer little for the analysis of actual effects on society or democracy. In fact, interested parties can use the discussion of AGI and supposed extinction-level event dangers as a smokescreen, distracting public and regulatory attention from more mundane but crucial questions of AI governance, regulation, and the societal distribution of AI-driven gains and risks.
Although AI is often discussed as a danger or threat, it may also provide opportunities to offset some of the contemporary challenges to democracy. Thinking openly about the application of AI in democracy could provide some relief from these challenges. Conscious design choices and transparent audits can help ameliorate dysfunctions and uncover biases.
In general, AI’s impact depends on implementation and oversight by the public and regulators. For this, companies, regulators, and society need to be explicit and transparent about what economic, political, or societal goals they want to achieve using AI and how its specific workings can propel or inhibit this pursuit. By nature, this discussion combines normative, mechanistic, and technological arguments and considerations. It is important not to be sidetracked by grandiose, but ultimately imaginary, visions of an AGI, but instead focus on specific instances of narrow AI, their inner workings, uses in specific areas of interest, and effects. This includes the discussion of both potentially positive as well as negative effects.
AI is unlikely to impact many aspects of democracy directly. Nevertheless, public discourse is likely to continue to focus on threats, manipulation, and expected power shifts. This discourse and these expectations have the potential to strongly shape public attitudes toward AI and its impact on democracy, irrespective of their factual basis. Perceived effects can matter more than actual effects. Researchers have a responsibility not to fan the flames of discourse with speculation, but instead to remain focused on AI’s actual workings and effects.
The broad application of AI in society and politics is only beginning; future developments are, of course, unknown, and so the specific shape AI technology will take, AI’s application in society, and its subsequent effects on democracy must be continuously monitored. This article identifies contact areas between AI and democracy that allow for tracking and monitoring this process. More specifically, there are a series of opportunities for future work.
On a very basic level, there is a need for more systematic work on how AI is employed in politics and by governments (Engstrom & Haim, 2023). There are a few early studies—in, for example, the area of predictive policing and the legal system (Brayne, 2021; Ferguson, 2017)—but they are comparatively few and predominantly focused on the United States. There are opportunities for studies of other countries, and for comparative work. Also, much of what we think we know about AI in society is based on journalistic accounts, but while those can be deeply instructive, there is a consistent need for scientific accounts of these developments in different sectors and societies.
There is also a steady stream of studies examining the biases of specific AI systems in particular contexts, such as audits of AI-driven analysis of texts and images and the implementation of AI in societal processes, such as policing, judicial sentencing, credit decisions, or medicine (Bolukbasi et al., 2016; Buolamwini & Gebru, 2018; Caliskan et al., 2017; Mayson, 2019; Mehrabi et al., 2022; S. Mitchell et al., 2021; Obermeyer et al., 2019). However, AI can also be used to study historical biases in society by examining large-scale data sets of historical texts and images to identify the distribution and shifts in the representation of specific societal groups—for example, along gender or racial lines (Jürgens et al., 2022)—or, more generally, identify for further analysis promising cases in which empirical data contradicts model-based expectations (Munk et al., 2022; Rettberg, 2022).
Building on this, systematic work on the regulation of AI in different countries and contexts is also needed (Veale et al., 2023). While these are early days for official regulation of AI, there are already interesting activities—notably, government support for AI-focused research and development. The European Union and the US and Chinese governments have all identified AI as an area of strategic geopolitical and economic competition. Examining competing policy initiatives aimed at driving innovation is a promising early step in examining governments’ relationships with AI and related policies and regulations. Earlier comparative studies of the regulation of privacy and platform business models may offer templates for work focused on AI (Bradford, 2023; Cohen, 2019; Farrell & Newman, 2019; Thelen, 2018).
Finally, the development of AI as a scientific and commercial field is a promising subject of research itself. The highly interdisciplinary nature and shifts in the vanguard of AI development between scientific disciplines, academy and business, and geographic areas make this a very interesting case of the contemporary nexus of science, commerce, and society (Lee, 2018; McCorduck, 2004; Metz, 2021). Going further, AI is increasingly used within the social sciences as a method of discovery. The associated shifts in scientific workflow, questions asked, and approaches to theorizing are a fertile area for future reflection (Grimmer et al., 2021; Mökander & Schroeder, 2022).
There are many promising avenues for future scientific work on the impact of AI on democracy. Here, it is important to combine insights from different fields. Purely technological accounts risk overestimating AI’s impact on social systems, given their boundedness and the role of social structures. Accounts coming purely from the social sciences risk misrepresenting the actual workings of existing AI and thereby misattributing its consequences. The conceptual framework presented here offers one approach to combine these interdisciplinary perspectives productively.
The impact of AI on democracy is already progressing. Its systematic, interdisciplinary examination and discussion need to proceed as well.
Acknowledgements
The author thanks Scott Cooper, Valeska Gerstung-Jungherr, Pascal Jürgens, Oliver Posegga, Adrian Rauchfleisch, Ralph Schroeder, Alexander Wuttke, and various anonymous reviewers for their valuable feedback.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The research underlying this paper has been generously supported by the VolkswagenStiftung.
