Abstract
As artificial intelligence evolves from predefined narrow applications to more capable general-purpose models, there is growing interest in how this technology affects international security. While most research focuses on the military realm, this reflective essay explores the broader implications of artificial intelligence for international security. It presents three main arguments: (1) the debate over whether we are experiencing an artificial intelligence revolution or just hype distracts from the more subtle but profound transformation already underway; (2) despite the current emphasis on artificial intelligence’s role in spreading disinformation, decision-makers will ultimately gain better and more relevant information about the intentions and actions of their adversaries; and (3) this fundamentally alters the dynamics of interactions within the international system, which are shaped by uncertainties. The essay concludes by posing open questions and proposing a research agenda for international relations scholars.
Introduction
As I write these lines, artificial intelligence (AI) writes and dominates the headlines. AI’s rapid advancements and entrance into public consciousness have drawn the field of international relations to examine its impact on international security. The bulk of research focuses on the application of AI within the military sphere, particularly concerning the implications of lethal autonomous weapon systems. Scholars examine how AI’s integration into warfare could potentially reshape international security, stability, and the global order. Yet despite concerns surrounding the ethical implications of delegating life-and-death decisions to machines (Horowitz, 2016; Rosendorf et al., 2024), and the anticipated consequences stemming from the unparalleled speed, precision, and endurance of such systems (Horowitz, 2019; Payne, 2018; Tegmark, 2018), it is crucial to recognise that the impact of AI on international security and stability extends far beyond the conventional military domain.
In this reflective essay, I provide a broader perspective on how AI affects international security. I use an expansive definition of the term ‘artificial intelligence’ or AI to describe the capability of algorithms to perform cognitive tasks traditionally thought to require human intelligence (see McCarthy, 2007; Mitchell, 2019; Sheikh et al., 2023). AI is catalysing a subtle yet consequential shift that fundamentally transforms our information landscape. This transformation is important because interactions among actors in the international system are shaped by uncertainties and incomplete information. As AI begins to address some of the informational challenges at the heart of international relations, this raises a critical question: how might AI alter the nature of interactions among international actors?
This essay proceeds as follows. First, I outline the AI inflection point, which is characterised by the evolution from narrow AI to more general-purpose systems. Then, I explore how the nature of conflicts is already changing, albeit at a more incremental pace than often proclaimed, arguing that the debate over whether we are experiencing an AI revolution distracts from the more subtle but profound transformation already underway. Next, I examine how AI affects international security and stability beyond the military domain. Despite the current emphasis on disinformation, I contend that decision-makers will ultimately gain better and more relevant information about the intentions and actions of their adversaries than they have access to today. This transformation in the information landscape fundamentally alters the dynamics of interactions within the international system, which are shaped by uncertainties. I conclude by reflecting on the role our discipline can play in effectively addressing the challenges posed by this nascent technology.
The AI inflection point
The public release of ChatGPT is widely regarded as the moment when AI entered the mainstream. While researchers had been working on the underlying technology for years, the introduction of a new generation of large language models, starting with this release, fundamentally changed how humans engage with AI technology (Klein, 2023). Despite evident limitations, the user-friendly nature of chatbots and the incorporation of generative AI models into software we routinely use has made AI a part of our daily lives.
Yet the ubiquity of AI-powered applications is not the sole reason why scholars of international relations should take AI seriously. The primary reason is that AI technology has progressed to a stage where it can facilitate increasingly complex tasks that far exceed the narrow, predefined objectives researchers originally set out to tackle. From an industry perspective, a key driver behind the current acceleration of AI research is the convergence of research and product development. The schism between basic research and product development that was prevalent in leading tech companies just a few years ago has now largely disappeared. 1 This convergence means not only that the most effective AI for products often relies on general AI techniques and systems, but also that commercial investments benefit AI research more widely, further accelerating progress.
While we are far from achieving superintelligence – the notion that one day, self-improving AI systems surpass (and some argue, subvert or displace) humanity (Bostrom, 2014; Tegmark, 2018) – today’s leading AI systems are increasingly ‘general purpose’. This means that the same underlying technology can be adapted for a wide range of use cases. And we are rapidly approaching an era where AI models not only assist in a wide variety of tasks but also have agency to interact with the physical world (Suleyman and Bhaskar, 2023).
The media has naturally taken notice, particularly after the release of ChatGPT. Mention of an ‘AI revolution’ has surged in worldwide media 2 – but so has the term ‘AI hype’. 3 With the popular discourse often bordering on hyperbole, there is inevitably pushback, with some noting that talk of an AI revolution is ridiculous or premature. Unfortunately, the discussion on whether we are about to witness an AI revolution is not only unproductive – it could overshadow the changes that are already unfolding.
The challenge of incremental change
Nowhere is this tension more obvious than in the military domain. Talk of an imminent AI revolution in military affairs, alongside visions of robot armies dominating the battlefield, understandably prompts scepticism and eye-rolling from many observers. At the same time, armed conflicts are already changing, albeit in more incremental ways. One recent example is the use of AI systems by the Israeli army to identify and locate suspected members of the militant group Hamas. An investigative report suggests that the algorithm-enabled identification and targeting process increased the scope and speed of bombing campaigns and contributed to the exceptionally high number of civilian casualties in Gaza. However, it appears that the devastating impact was not caused by the algorithm directly, but rather in combination with only perfunctory human verification and highly permissive policies regarding accepted ‘collateral damage’ (Abraham, 2024). This case also points to how the already complex ethical considerations about killing in war may be further complicated by the incorporation of AI, which risks turning inherently moral decision-making processes into purely procedural ones (Brunstetter, 2023; Emery, 2022).
While in the Israeli case, humans were, at least nominally, in the loop to verify the identity of suspected targets, Ukrainian developers in 2023 acknowledged the use of drones that carry out autonomous strikes on Russian military objects without human intervention (Hambling, 2023). This is the first official confirmation of such use. 4 Despite the vivid warnings of delegating the decision to kill to machines, the Ukrainian use of autonomous drones to attack military objects has caused surprisingly little public outcry. Some Western observers may have been reluctant to raise concerns, fearing they would be seen as denying Ukraine a tactical advantage. This line of thinking, while understandable, undermines the very foundation of ethical guidelines: the principle that, to ensure their long-term effectiveness, these guidelines must be applied equally to all sides. 5
Another, potentially more important, reason for the lack of public outcry is the incremental nature of the change. When I first watched the recording of the Ukrainian drone identifying ground targets, 6 my immediate reaction was not one of horror, but surprise at how rudimentary the application seemed, contrasting with my subconscious vision of military AI shaped by its flashy depictions in popular culture. And despite experts emphasising the significant role of drones in the Ukraine conflict, it is primarily the widespread availability and affordability of commercial and self-made drones, rather than their (overall quite limited) autonomous capabilities, that have been pivotal (Pettyjohn, 2024; Thompson, 2024). AI has made significant contributions to battlefield dynamics in the Ukraine conflict, most notably in the area of geospatial intelligence (Fontes and Kamminga, 2023; Wiedemar, 2023). Overall, however, battlefield dynamics in Ukraine have reaffirmed the continued importance of conventional military capabilities (Wilde, 2024).
Nonetheless, we should take the changes propelled by AI seriously – not despite, but because they are so incremental at this stage. It is impossible to miss a revolution, but it is much harder to identify a gradual transformation, especially when you are in the midst of one. The debate over whether we are experiencing a revolution or merely succumbing to hype further obscures this. Critics who downplay the current advancements in AI as mere hype may inadvertently imply that there is no urgent need to grasp the very real changes that are already underway. And importantly, the trajectory is still unclear. As we all remember from the COVID-19 pandemic, the curves of exponential growth and linear progression can look very similar in the early stages. Regardless of the growth curve, based on experiences with previous general-purpose technologies, we can expect the fundamental implications to only become apparent years down the road (Brynjolfsson et al., 2021; Ding and Dafoe, 2021).
Beyond the military realm
The implications of AI for international security are most readily apparent in the military realm. As a result, it comes as no surprise that this domain has become the primary focus for international relations (IR) scholars engaged in this avenue of research. IR scholars have thoroughly examined AI’s application for military objectives, revealing wide-ranging implications for areas such as tactical and strategic decision-making, the offence-defence balance, and the moral complexities of algorithmic approaches to military ethics (e.g. Altmann and Sauer, 2017; Emery, 2022; Gill, 2019; Horowitz, 2018, 2019, 2020a; Horowitz et al., 2018; Horowitz and Lin-Greenberg, 2022; Hunter and Bowen, 2024; Johnson, 2022, 2023; Payne, 2018; Rosendorf et al., 2024; Thorne, 2020).
These studies also demonstrate that ethical considerations carry very practical implications. They can influence the balance of power among various actors, as different types of actors face different constraints and incentives that compel them to weigh these considerations (Horowitz, 2019).
Yet as we study the impact of this nascent technology on international security, it is important to consider how AI shapes international security and stability beyond the military realm. There are efforts to explore these implications, but they tend to focus more on immediate opportunities and concerns. These efforts include research on the role of AI in spreading disinformation and misinformation in armed conflicts (Esberg and Mikulaschek, 2021; Spitale et al., 2023; Walter, 2022), on how AI can be used to mitigate humanitarian crises (e.g. Dietrich et al., 2024; Merkle et al., 2023; Mueller et al., 2021), detecting or predicting conflicts (e.g. Duursma and Karlsrud, 2019; Sticher et al., 2023; Yankoski et al., 2021), or assessing how AI may facilitate the negotiation of a settlement (e.g. UNDPPA and Centre for Humanitarian Dialogue, 2019; Hirblinger, 2020; Höne, 2019).
These studies are essential for identifying approaches to harness AI for good and to prevent harm. However, in contrast to research focused on the military use of AI, these studies rarely contemplate the broader and more long-term implications for international security. Considering AI’s profound impact on our information landscape, it is essential to understand the wider effects of AI on the international system. The lack of attention to this issue is surprising considering the nature of the international system, where uncertainties and imperfect information play a central role in shaping decisions about war and peace. Fearon’s (1995) seminal paper on rationalist explanations for war shows how actors have an incentive to misrepresent their true capabilities and resolve, which can drive diverging expectations about the outcome of war and lead to inefficient bargaining decisions. More generally, uncertainties about intentions and effects are an essential feature of how diplomacy works. A fundamental feature of deterrence and compellence, as we understand these concepts, is a degree of uncertainty about how statements turn into action.
How does AI change this essential feature of the international system? Without a long-term focus, one might interpret insights from the growing body of research on AI and disinformation in wars to suggest that uncertainties increase. However, this overlooks the bigger picture. AI will continue to improve the ability of decision-makers to collect, process, and analyse vast amounts of data. And while some data sources, such as information on social media or in government-controlled media, are easily manipulated by adversaries, others, including a wide range of sensors spanning from weapons systems to satellites to cameras mounted on drones, are not. This suggests that those with access to a wide range of data and the latest AI models may have increasingly accurate information about the possible outcomes of their decisions and actions, and may be able to better assess the intentions and actions of their adversaries. If we know that they know that we know whether a threat is credible, a bluff, or only directed at internal audiences, such knowledge will undoubtedly shape decisions and actions on all sides – even if an unequal distribution of capabilities implies that some actors will benefit more than others.
My argument is not that actors will have perfect information. Regardless of AI progress, perfect information will remain unattainable, not least because some information resides in the minds of individuals. 7 But overall, decision-makers will have better and more relevant information, potentially leading to some convergence in their expectations. Will this reduce uncertainties about the implications of words and actions? Could it mitigate some commitment problems, by enabling actors to make more credible threats and promises? And how will these changes affect different actors in varying ways? For while decision-makers may benefit from enhanced information, wider constituents lacking access to privileged sources might face increased uncertainties (see Sticher, 2024). This discrepancy prompts further questions about how the role of internal audiences and constituent constraints will be affected. Will we ultimately see more efficient bargaining decisions, or could the changes propelled by AI instead introduce new inefficiencies, which may be easier to predict but harder to avoid? And how may this vary among different types of actors, who are constrained by different audiences and have varying ability to shape the overarching narrative?
Outlook
These are just some of the questions we might explore as we contemplate the broader, more long-term implications of AI for international security. To be clear, studying the implications in the military realm is essential, and ongoing research on more immediate effects of AI remains crucial to navigate the opportunities and risks of this nascent technology. However, as a discipline, we should not shy away from asking larger questions. To do so, we need to adopt an expansive mindset, and view AI as an enabling technology (Horowitz, 2018; Ng, 2017) that drives changes in the international system, rather than merely conceptualising it as a tool.
Without a doubt, efforts to study the long-term implications are fraught with methodological and empirical challenges, as extrapolating from the current, limited experiences to the future risks missing the larger changes that are underway (Horowitz, 2020b: 386–387). But such challenges are no excuse for neglecting the broader implications, especially for a discipline fundamentally concerned with studying and understanding changes to the international system. In engaging with this topic, we are set to see a wide range of possibilities emerge. While my argument adopts a rationalist perspective on war, positing that uncertainties about outcomes, intentions, and actions can lead to bargaining inefficiencies that ultimately contribute to war, other scholars may apply different lenses, such as critical IR theory (see, for example, Emery, 2022; Feenberg, 2017; Kellison, 2019; McCarthy, 2018; Rengger and Thirkell-White, 2007). Such a perspective may explore questions including how AI may mitigate or reinforce dominant narratives and structural (post-colonial) inequalities within the international system. Aside from ontological and epistemological differences, diverse perspectives will arise because the impacts of AI are not inherent to the technology itself but are instead shaped by how we choose to use and govern it (see Dunn Cavelty and Hagmann, 2021; Suleyman and Bhaskar, 2023).
Our discipline can also contribute to AI governance by building on such studies and drawing on experiences from other sectors and technologies. The vexing question of how to approach the governance of a technology with such fundamental implications calls for collaborations between those who understand how AI operates, and those who understand how and when international collaboration works (see, for example, Bremmer and Suleyman, 2023). Valuable lessons can be learned from established practices in negotiating international agreements on dual-use goods and technologies. In addition, unexpected sources, like the study of ceasefires, offer valuable perspectives on creating cooperative frameworks that address short-term incentives to break promises, ensure compliance, and manage potential spoilers.
There is a widely held belief that frogs will jump out of boiling water, but will boil to death if the water’s temperature is increased gradually. While scientifically incorrect (Obradovich and Moore, 2019), this myth serves as an apt metaphor for the transformation catalysed by AI. It underscores the importance of understanding and governing AI during this critical period when its effects on international security are profound yet subtle, rather than revolutionary.
Footnotes
Acknowledgements
The author is grateful to Elias Blum, Fabrizio Gilardi, and two anonymous reviewers for comments on earlier versions of this essay.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This essay draws on research conducted as part of the research project ‘Algorithmic Bargaining: How Artificial Intelligence Affects War Onset and Termination’, funded by the Swiss National Science Foundation (grant number 217793).
