Abstract
How does existentially dangerous technology get adopted and then locked in? The case of the atomic bomb offers a cautionary tale. In the long run, reliance on nuclear weapons is a recipe for catastrophe. Yet their perceived ability to reduce the frequency of war in the short term inhibits efforts to reform the international status quo. Drawing on the pioneering work of David Collingridge and Nathan Sears, this paper argues that nuclear deterrence became locked in for several reasons: initial disagreement about the threat it posed, the threat’s declining salience as time wore on and serial procrastination in addressing it. Unfortunately, the same is likely with any technology that involves low-frequency, high-impact risks, including solar geoengineering and possibly artificial intelligence. At worst, it can convert catastrophic risks to existential ones, while rendering them politically intractable.
Introduction
Over the past two decades, interest has grown in mitigating global heating by spraying sulphates into the upper atmosphere. Stratospheric aerosol injection (SAI) would cost the states undertaking it less, directly and in the short term, than deep cuts in greenhouse gas emissions would, and it could be deployed rapidly. A single major state might do it (Baum et al., 2013: 172; Young, 2023: 275). Yet SAI would also entail major risks. Rather than serving as a bridge to clean energy, it might discourage that transition. Once greenhouse gas concentrations have mounted, it might become dangerous to stop the injections. The world could get ‘locked in’ to solar geoengineering (McKinnon, 2019; Preston, 2013; cf. Cairns, 2014: 657). In the best-case scenarios, with deep cuts in emissions, sulphate injections might be needed for only a few decades, but in the more realistic case of gradual reductions they would be required for at least a century, perhaps much longer. The risk that solar geoengineering would break down over an extended period is considerable (Baur et al., 2023: 375–376; Boucher et al., 2009; Neuber and Ott, 2020: 15). If the programme slowly eroded, temperatures might creep up, but if it collapsed, they could skyrocket – an outcome known as termination shock. Temperatures might shoot up by multiple degrees in a single decade (Baum et al., 2013: 172–173; Malm, 2022; cf. Parker and Irvine, 2018).
How likely is this scenario? If the history of nuclear weapons is a guide, the prospects are troubling. Long before the Industrial Revolution produced dangerous greenhouse gas concentrations, it was producing increasingly destructive industrialised warfare. Already after the First World War, this destructiveness prompted calls for new international practices and institutions to match the new technological realities. The development of nuclear weapons gave impetus to these demands, leading to calls for the international control of atomic energy (Bartel, 2015: 281; Deudney, 1995: 210–212; Uchaev and Kharkevich, 2023: 50–51; Wittner, 1993: 334–335). In the late 1940s, the possibility of nuclear disarmament was still regarded by some Western politicians and members of the public as a live option. But over time it has come to seem more and more distant (Hymans, 2024a; Pelopidas, 2021). Ritualistic calls for disarmament, decade after decade, increasingly resemble those for ‘world peace’: an alluring but unrealisable dream (cf. Rosendorf et al., 2021: 194). ‘Humanity’, notes Nathan Sears (2020), ‘appears to be “locked in” to nuclear anarchy’ (p. 602).
This failure to persuade states and publics of the need for radical action to address the nuclear threat has recently been described by Sears (2023) and by Evgenii Uchaev and Artem Kvartal’nov (2024) as an instance of macrosecuritisation failure (cf. Dalaqua, 2013). Securitisation, as theorised by the Copenhagen School, involves persuading an audience that extraordinary measures departing from ‘the normal political rules of the game’ are needed to address an existential threat to a valued object (Buzan et al., 1998: quoted passage at 24). Usually the valued object is the state, but macrosecuritisation, as Buzan and Wæver (2009) define it, invokes a threat to a larger entity, such as the ‘free world’. When macrosecuritisation involves ‘physical threat universalisms’, the threats are to ‘humankind on a planetary scale’ (quoted passage at 261). On their definition, securitisation occurs when the target audience accepts the legitimacy of emergency measures (Buzan et al., 1998: 25). For Sears (2023), in contrast – and on the criterion adopted in this paper – securitisation has not fully succeeded unless the measures are actually adopted (pp. 67–68).
Securitisation failure can be divided into two stages: the failure to convince an audience to take extraordinary measures to forestall the threat, and the failure to persuade them to take such action once it emerges. Sears, Uchaev and Kvartal’nov seek to explain the initial failure to macrosecuritise nuclear weapons in the 1940s, whereas William Walker (2020), Benoît Pelopidas (2021) and Jacques Hymans (2024a, 2024b) focus on the subsequent marginalisation of nuclear disarmament from policy and political discourse. Sears (2023) attributes the failure to achieve international control of atomic energy partly to the shifting balance of power in the 1940s, which led the superpowers to prioritise relative gains of national influence over absolute gains in international security. Uchaev and Kvartal’nov maintain that both governments had an interest in rejecting macrosecuritisation narratives that could challenge state sovereignty (Uchaev and Kvartal’nov, 2024; see also Heller, 1980: 30–31; Uchaev and Kharkevich, 2023; Uchaev and Kvartalnov, 2023). Some explanations of nuclear weapons’ subsequent lock-in are likewise ideational, arguing that nuclear status can become institutionalised as official dogma and national identity (Walker, 2020), or that the bomb has acquired a sacred status (Hymans, 2024a; Hymans, 2024b). Other explanations cite the role of vested interests, information asymmetries and the political and intellectual dominance of actors with a stake in the nuclear status quo (Craig and Ruzicka, 2013; Egeland, 2020; Egeland and Pelopidas, 2025; Walker, 2020).
This paper presents a new analysis of both the failure to achieve international control of nuclear weapons and their subsequent entrenchment. It shows how David Collingridge’s theory of the ‘hedging circle’ offers important insights into both stages of macrosecuritisation failure, suggesting that the phenomenon could repeat itself with solar geoengineering and artificial intelligence (AI). Much of the literature on technological lock-in has been inspired by Collingridge’s (1981[1980]) The Social Control of Technology (cf. Cairns, 2014: 651). Less often remembered is that one of his central examples was the nuclear arms race. The following section outlines Collingridge’s analysis and connects it to securitisation theory. In the third section I demonstrate how Collingridge’s theory helps explain the failure to macrosecuritise nuclear weapons in the 1940s. The fourth section describes how a sense of diminishing urgency led first to procrastination and then to the outright abandonment of efforts to transcend the nuclear predicament. In the fifth section, I argue that geoengineering could render the problem of global heating more like that of nuclear deterrence. Once in place, by mitigating many of the short-term effects of greenhouse gas emissions, it could mask the risk and incentivise procrastination. In conclusion, I warn that this phenomenon might encourage us to tolerate one existential risk after another, each of which we might fail to tackle right up to the point that it materialises.
Securitisation and macrosecuritisation failure
Since the middle of the 20th century, the world has been accumulating existential risks – commonly understood by philosophers as those that ‘threate[n] the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development’ (Bostrom, 2013: 15; cf. Sears, 2023: 36–40). If we go on running such risks and adding new ones, disaster is all but certain in the long run (Deudney, 2020: 128–129, 140–141, 309; Ord, 2020: 31; Sears, 2020; Taleb et al., 2014: 2). Yet while states have taken drastic measures in the name of fighting terrorism, they have rejected comparably radical action against the potentially far more devastating threats of thermonuclear war, global heating and biological weapons. ‘The international body responsible for the continued prohibition of bioweapons’, noted Toby Ord in 2020, ‘. . . has an annual budget of just $1.4 million – less than the average McDonald’s restaurant’ (McDonald, 2024; Ord, 2020: 57; Sunstein, 2007; Uchaev and Kvartal’nov, 2024). States are often treated as rational actors, yet the failure to macrosecuritise existential threats seems a failure to maximise their own long-term expected utility (Sears, 2023: 3–4). 1
Securitisation is often considered a bad thing, due to its tendency to justify drastic and authoritarian measures and to divide the world into friends and enemies. Sears, to be sure, defines macrosecuritisation in such a way that it prescribes multilateralism. When an issue is macrosecuritised, he writes, ‘no one nation has the final say, no single state can act alone, and no individual society can decide for the whole world that an issue poses an existential threat to humanity and take emergency action for human survival’ (Sears, 2023: 6). This assumption probably reflects Sears’s belief that collective action is essential to address these risks (Sears, 2021). Yet it is not part of Buzan and Wæver’s (2009) original conception, on which ‘macrosecuritisations are defined by the same rules that apply to other securitisations: identification of an existential threat to a valued referent object and the call for exceptional measures’ (p. 257). Nothing in these criteria precludes a single state demanding – and adopting – emergency measures ‘for the sake of humanity’ (Hobson and Corry, 2023: 635–636). By building multilateralism into his definition, Sears obscures macrosecuritisation’s potential to lead to international conflict.
Nevertheless, when securitisation comes in the form of physical threat universalism, it is less prone to ‘us and them’ distinctions (cf. McDonald, 2008: 578–580). ‘[T]he grammar of a securitising speech act’, Olaf Corry (2012) observes, ‘is defined by the Copenhagen School only by the existence of an existential threat, not an existentially threatening subject . . . Securitisation is about threats rather than enemies per se’ (p. 246, emphasis in original). This captures the essence of physical threat universalism, for which the sources of danger are natural phenomena or forms of technology. Mikhail Gorbachev’s ‘new thinking’, for example, sought not only to change the referent object of macrosecuritisation – from the ‘socialist camp’ to humanity as a whole – but also to convince both superpowers that the real existential threats they faced, such as nuclear war, were shared and global (Deudney, 2024: 35–36; Larson and Shevchenko, 2003). If, moreover, the threat is to the long-term future of life on earth, and existing arrangements are failing, extraordinary measures seem justified (McDonald, forthcoming; Sears, 2021: 4). While such arguments can be abused (Hobson and Corry, 2023: 635–636), failing to address an existential threat to the planet could be incomparably worse. ‘Given that the survival of humanity may be at stake’, Sears observes, ‘one might reasonably expect [physical threat universalist] macrosecuritization to be the norm. Yet the empirical record . . . reveals a pattern of recurrent failure’ (Sears, 2023: 70). 2 Why?
It might seem that in the case of nuclear weapons, there is no puzzle to explain. States have acquired and retained them, a common argument goes, because they recognise their value as deterrents (Lebovic, 2024: 14). Yet unless major war or the weapons themselves are eliminated, they will someday be used and, on our present trajectory, eventually in large numbers (Craig, 2003: xvii; Rendall, 2007: 533–534; Rendall, 2022: 783–784). 3 Moreover, even if nuclear deterrence promoted state security, the question would remain why states have not macrosecuritised biological weapons, global heating and AI, all of which pose nontrivial risks of existential catastrophe (McDonald, 2024; Ord, 2020; Sears, 2023; Uchaev and Kvartal’nov, 2024). Collingridge’s theories of the ‘dilemma of control’ and the ‘hedging circle’ offer a clue.
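The claim that deterrence maintained indefinitely must eventually fail follows from simple compounding. The following sketch is purely illustrative: the annual failure probabilities are assumptions for the sake of the arithmetic, not estimates drawn from this paper or the literature it cites.

```python
def cumulative_failure(annual_p: float, years: int) -> float:
    """Probability of at least one deterrence failure within `years`,
    assuming a constant, independent annual probability `annual_p`
    (a deliberately crude model, used only to show compounding)."""
    return 1 - (1 - annual_p) ** years

# Even a seemingly low annual risk compounds relentlessly over time:
for p in (0.005, 0.01):
    for horizon in (50, 100, 500):
        print(f"p={p:.3f}, {horizon:>3} years: "
              f"{cumulative_failure(p, horizon):.1%}")
```

On these toy numbers, a 1% annual risk implies a better-than-even chance of failure within a century and near-certainty within five, which is the sense in which deterrence ‘will someday be used’ unless the underlying risk is removed.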
The Collingridge dilemma and the hedging circle
Collingridge argued that policymakers confronted with a new form of technology could face a dilemma. In the early stages of technological development, its social consequences could not be foreseen, ‘at least not with sufficient confidence to justify the imposition of disruptive controls’. Once the technology was adopted, other social arrangements formed around it, hampering subsequent efforts at management (Collingridge, 1981[1980]: 16–18). This dilemma is characteristic of threats to the natural environment. ‘One of the difficulties facing those attempting to securitize environmental issues’, Buzan et al. (1998) point out, ‘is that the threats are both new (or newly discovered) and controversial regarding their existential urgency’ (pp. 28–29).
Policymakers could, moreover, become trapped in what Collingridge called a ‘hedging circle’. Governments often faced a choice between trying to solve a problem on their own or with the cooperation of other parties. The safe option seemed the one that they could take unilaterally (Collingridge, 1981[1980]: chapter 5). But this could set off an action–reaction cycle that made it increasingly difficult to depart from the original choice. Collingridge gave the example of choosing between suppressing forest fires and allowing periodic burns. Once policymakers opted for suppression, the decision became harder and harder to reverse as the supply of combustible material mounted. Society was apt to build near the forest, further raising the stakes. The focus on preventing fires discouraged the discovery of means to manage them. Yet fires could not be suppressed indefinitely, and the eventual inferno was worse than if the forest had been allowed to burn all along (Collingridge, 1986: 329–330).
Something similar, Collingridge maintained, had happened with nuclear weapons. Faced in the late 1940s with the decision of whether to pursue the international control of atomic energy or to build their own arsenals, US and Soviet leaders had chosen the latter. Once they started down that path, it became increasingly risky to leave it. Diplomacy, Collingridge wrote:

requires trust, but the existence of nuclear arsenals imposes such a cost on misplaced trust that trusting the other side is simply too dangerous. War, therefore, continues to be avoided by the one option which requires no trust, deterrence. Decisions to develop nuclear weapons in the early years after the War destroyed any hope of ever being able to avoid deterrence in the future and any chance of finding other ways of keeping the peace.
Ultimately, Collingridge concluded, these efforts were futile:

Deterrence cannot be maintained forever. A reliance on diplomatic methods of avoiding war is just as certain to fail at some time or other, but the cost of its failure is far less than that of the failure of deterrence. (Collingridge, 1981[1980]: 89–91, emphasis in original)
Front-loaded and back-loaded goods
This last remark suggests an important lock-in mechanism that Collingridge himself overlooked. Like forest fire suppression, nuclear deterrence appears to reduce the frequency of a problem – major war – at the price of far greater damage when it finally occurs (Thayer, 1995; cf. Bell and Miller, 2015). This creates an incentive to defer addressing the problem – or never to tackle it at all. Some goods are what Stephen Gardiner calls front-loaded: ‘benefits accrue to the group that produces them, but their costs are substantially deferred and fall on later groups’. Fossil fuels are a paradigmatic example: their users capture many of their benefits, while externalising much of the cost to future generations (Gardiner, 2011: quoted passage at 151). The same is true of nuclear weapons. It is true that deterrence could fail at any time (Sears, 2023: 77; Uchaev and Kvartal’nov, 2024: 145). But while it is bound to fail someday, it will probably not fail soon, or so its defenders usually assume. This makes nuclear deterrence an expectedly front-loaded good: nuclear weapons’ present possessors enjoy the lion’s share of benefits, while most costs probably lie in the distant future, to be borne by generations after nuclear war (Rendall, 2022: 785).
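Gardiner’s notion of an expectedly front-loaded good can be made concrete with a toy present-value calculation. The numbers below (a 1% annual failure probability and a 3% annual discount rate) are illustrative assumptions, not estimates from this paper: the point is only that even a catastrophe that is certain in the long run carries little present weight once its expected arrival is distant and future costs are discounted.

```python
def discounted_expected_cost(annual_p: float, discount: float) -> float:
    """Present value of a catastrophe of cost 1.0 whose arrival time is
    geometric with parameter `annual_p`, discounted at rate `discount`
    per year (a toy model, not an empirical estimate)."""
    pv, survive = 0.0, 1.0
    for t in range(1, 10_000):
        # probability the catastrophe arrives exactly in year t,
        # weighted by the discount factor for that year
        pv += survive * annual_p / (1 + discount) ** t
        survive *= 1 - annual_p
    return pv

# Undiscounted, the catastrophe is (nearly) certain, so its expected
# cost approaches 1.0; with discounting, its present weight collapses:
print(discounted_expected_cost(0.01, 0.00))  # ~1.0
print(discounted_expected_cost(0.01, 0.03))  # ~0.25
```

On these assumptions, discounting shrinks the certain long-run catastrophe to a quarter of its face value, while the benefits of deterrence accrue undiscounted in the present: the structure of an expectedly front-loaded good.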
If the annual probability of catastrophic threats is perceived to be low enough, states are apt to resist securitising them. Not only are the costs of the problem likely to fall on future generations; it can also be tempting to put off addressing it even when doing so is in present people’s interest (Andreou, 2007). Environmental problems ‘often point to an unspecified, relatively remote future’, Buzan, Wæver and de Wilde note, ‘and therefore involve no panic politics. It is assumed that it hardly matters whether we act now or next year; therefore, “urgency” becomes reappropriated as a part of “normal politics”’. Securitisation, however, involves presenting an issue as an urgent existential threat to be given priority ‘because if the problem is not handled now it will be too late, and we will not exist to remedy our failure’ (Buzan et al., 1998: 26, 83). The result is that these problems are not securitised, even when extraordinary measures are required to solve them.
We see this pattern in the case of forest fire suppression. A recurrent theme in the forest fire literature is that for policymakers and the public the immediate risk of wildfires overshadows the prospect of a holocaust in the indefinite future. Robyn Wilson and her colleagues cite one manager’s statement as typical: ‘My immediate priority is to protect life and property now. Measures can be taken later to prevent the long term risk’ (Busenberg, 2004: 150; Collins et al., 2013: 7; Wilson et al., 2011: 813–815, quoted passage at 813). Over time forest fire suppression can become institutionalised as common sense despite the fact that it only defers the conflagration (Calkin et al., 2015: 4). A similar dynamic arose in the failure to macrosecuritise nuclear weapons.
The Baruch Plan and macrosecuritisation failure
The first stage of macrosecuritisation failure came with the discovery of the bomb. To American scientists it was clear that the United States would lose its atomic monopoly within a few years. No defences were on the horizon. Unless the bomb was abolished – or even war itself – devastating conflicts loomed in the future (Sears, 2023: 103–105; Wooley, 1988: 8–9). By mid-1946, American officials were familiar with the idea that the bomb presented an existential threat requiring significant curbs on state sovereignty. At least some were willing in principle to put it under international control. A State Department team led by Dean Acheson and David Lilienthal developed a good-faith proposal for doing so (Craig and Radchenko, 2008: 120–122; Kearn, 2010: 44–52; Sears, 2023: 106–109). However, President Harry S Truman’s decision to put the initiative in the hands of the American financier Bernard Baruch resulted in a plan requiring disarmament to be monitored by onsite inspections and enforced by the UN Security Council – in which Washington and its allies enjoyed a comfortable majority – without any right of veto. Only once all these provisions were in place would the United States start to dismantle its own arsenal.
Whether the Baruch Plan was a serious offer is controversial. A number of historians maintain that Baruch himself initially hoped his plan might succeed (Baratta, 1985: 601–602; Bartel, 2015: 289–293; Bernstein, 1974: 1037; Bundy, 1990[1988]: 166; Gerber, 1982: 82–85; Kearn, 2010: 61; Wittner, 1993: 252). In contrast, Campbell Craig, Sergey Radchenko and David Tal argue that the plan was designed from the start for rejection (Craig and Radchenko, 2008; Tal, 2008; see also Herken, 1988[1981]: 152–153). If so, Truman and Baruch got their wish. Moscow not only insisted on preserving the Security Council veto, but proposed that nuclear weapons be dismantled before monitoring started and that states should monitor their own compliance with the agreement (Baratta, 1985: 608). This was not a serious proposal. Washington might have compromised on the issue of the veto – Acheson thought it was a red herring (Bundy, 1990[1988]: 165) – but it could hardly be expected to take Soviet compliance on faith. Baruch soon gave up any hope of agreement and focused on ensuring that Moscow took the rap for the negotiations’ failure (Bernstein, 1974: 1042–1043; Craig and Radchenko, 2008: 124; Herken, 1988[1981]: 177–178, 189).
In reality, whatever Washington might have offered, Soviet leader Josef Stalin was probably determined to get the bomb. In private with the Yugoslav communist Milovan Djilas, he spoke of it with enthusiasm (Timerbaev, 1999: 51). Stalin did not accept that the bomb posed a dramatic new threat to humanity or the need for extraordinary measures to address it (Craig and Radchenko, 2008: 146–148; Holloway, 1994: 166–171; Sears, 2023: 149–155). Soviet analyses played down the extent of the damage in Hiroshima and Nagasaki (Craig and Radchenko, 2008: 95, 109–110; Zubok, 1999: 54), and at the UN, ‘the Soviet delegation proposed replacing the existential formulation “bring about the destruction of civilization” with the more neutral “lead to mass extermination of civilian population and the destruction of peaceful towns”’ (Uchaev and Kvartal’nov, 2024: 152). 4 Marxist–Leninist dogma maintained that socialism would be attained through the medium of great power wars. Stalin may have thought the Soviet Union could survive any technological development (Craig, 2017). ‘The Soviet leaders’, remarked James Byrnes, Truman’s secretary of state from mid-1945 until early 1947, ‘do not yet appreciate that civilization and not state sovereignty is at stake’ (quoted in Gerber, 1982: 91; see also Baratta, 1985: 606; Tal, 2008: 17; Uchaev and Kvartal’nov, 2024: 151–152).
After the Cold War, research in Russian archives showed that Moscow designed its disarmament proposals to fail (Craig and Radchenko, 2008; Zubok, 1999: 51–52). Some Soviet officials may genuinely have been trying to find a compromise (Batiuk, 1995; Timerbaev, 1999: 52–54). Starting in 1947, Moscow began to show more flexibility in the UN disarmament talks. It agreed to the idea of an international inspectorate and later conceded that a ban on nuclear weapons and monitoring could go into effect simultaneously. By late 1950, it appeared to accept that inspections could be authorised by majority vote in the UN Atomic Energy Commission, ‘with no unanimity and no state having the right to impose the generally hated veto’ (Goldschmidt, 1986; India Quarterly, 1949: 77; United States Department of State, 1960: 1:253). 5 Tal (2008) argues that Moscow may genuinely have been seeking a disarmament deal (pp. 40–43). That seems unlikely, given Stalin’s strong commitment to the USSR’s bomb programme. Andrei Gromyko, Moscow’s UN ambassador in 1946, later concluded that Stalin had never intended to renounce nuclear weapons, nor had he expected Truman to do so (Holloway, 2020). All the same, it is striking how little interest Washington showed in pursuing the Soviet overtures (Tal, 2008: 31–40).
The failure to achieve international control of atomic energy after the Second World War illustrates the dilemma of control and the hedging circle. For Stalin, the risk the atomic bomb posed was not clear enough to delay the acquisition of a nuclear deterrent. Truman would not ‘throw away our gun until we are sure the rest of the world can’t arm against us’ (quoted in Bernstein, 1974: 1042). Reports of Soviet atomic espionage gave Washington good reason to doubt that it could reach a deal or sell it to the American public if it did (Craig and Radchenko, 2008).
Short-term thinking
Sears (2023) maintains that one cause of macrosecuritisation failure in the 1940s was that the shifting balance of power predisposed both superpowers to focus on relative gains (pp. 88–89, 238–239). It is true that policymakers had to weigh the long-term risks of a nuclear arms race against the immediate risk of the adversary’s defecting from the agreement (Baratta, 1985: 611; Gerber, 1982: 77–78). Yet it is hard to avoid the impression that they were not thinking very far ahead at all (Zaidi and Dafoe, 2021: 40, 43). David Holloway (1994) infers from Stalin’s statements and policies that ‘he anticipated a new world war after an interval similar to that between the two world wars . . . [but not] in the short term’ (p. 151; see also Craig and Radchenko, 2008: 96; Zubok, 1999). At that time, it was reasonable to believe that the bomb was not yet a game-changer; Western analysts reached similar conclusions (Zubok, 1999: 54). Still, Stalin should have been able to foresee that hundreds or thousands of atomic bombs would someday pose an existential threat. Did he assume that worldwide revolution would solve the problem first, or did he just not look that far ahead? During the 1946 negotiations, Gromyko complained that the United States was preoccupied with the future, whereas for the USSR the real problem was the nuclear present (Sears, 2023: 137).
Some American officials were more far-sighted. Secretary of War Henry Stimson advised Truman that:

[w]hether Russia gets control of the necessary secrets of production in a minimum of say four years or a maximum of twenty years is not nearly as important to the world and civilization as to make sure that when they do get it they are willing and cooperative partners among the peace loving nations of the world. (quoted in Herken, 1988[1981]: 99)
McGeorge Bundy (1990[1988]) observes that the State Department’s advisory board under David Lilienthal made little effort to convey to the president how soon the Soviet Union would obtain atomic weapons: ‘probably the board thought the question unimportant, considering that the realities that made international control imperative were independent of the number of years of grace that might precede a Soviet bomb’ (p. 175). For top US officials, however, it made a considerable difference. The belief that a Soviet bomb was as much as 20 years in the future clearly reduced Washington’s motivation to reach a deal. As negotiations dribbled to an end, Baruch wrote that ‘[W]e’ve got it and they haven’t and won’t have for a long time to come. I don’t know how long, but it will be some time’ (Bundy, 1990[1988]: 173–175, 195; Herken, 1988[1981]: quoted passage at 190; Wittner, 1993: 254).
Short-termism also influenced US policy in a more indirect way. Any feasible deal would probably have required Washington to begin immediate dismantlement of its nuclear arsenal in exchange for Soviet acceptance of intrusive onsite inspections (cf. Baratta, 1985: 611). But US policymakers believed that postwar demobilisation had left the United States dependent on the bomb to deter the Soviets (Bernstein, 1974: 1035–1036, 1041–1042; Gerber, 1982: 77; Herken, 1988[1981]: 178–179). In theory, the United States could have remobilised and relied on conventional deterrence. From a standpoint that gave equal weight to the interests of all people, present and future – or even those of all Americans – this would have been the best thing to do, rather than condemn future generations to the ongoing risk of nuclear holocaust (MacAskill, 2022: 10; McMahan, 1986). Yet Bernstein (1974) is surely right to judge that rearmament would have been resisted by both the US Congress and the American people (p. 1042).
The US and Soviet Union’s near-term focus seems to have reflected less a rational and impartial balancing of short-term and long-run risks than the disinclination to take the distant future into consideration, a myopia familiar from other policymaking contexts (Hansson and Johanneson, 1997: 171–173; Jacobs, 2016: 439–440; Nelson, 2017-18). The effect is to underweight low-frequency catastrophic risks, which may be unlikely to materialise in the near-term but could devastate the earth for centuries or millennia to come. ‘We knew that this revolutionary scientific creation [the atomic bomb] could destroy civilization unless put under control and placed at the service of mankind’, Truman later recalled. ‘The destruction at Hiroshima and Nagasaki was lesson enough to me. The world could not afford to risk war with atomic weapons’ (quoted in Tal, 2008: 3). But to take that risk was precisely what Truman chose to do, and to bequeath it to future generations.
Diminishing urgency and technological lock-in
Much of the impetus in the 1940s for both international control of atomic weapons and world government arose from the belief that nuclear war was otherwise imminent (Boyer, 1985/1994: 30–31; Deudney, 2019: 377; Wittner, 1993: 62–63, 69–70; Wooley, 1988: 28). In October 1945, a distinguished group including Albert Einstein and J. William Fulbright described world government as an ‘immediate, urgent necessity, unless civilization is determined on suicide’. ‘Time is short’, the Federation of Atomic Scientists wrote the following year in One World or None, ‘and survival is at stake’ (quoted in Wittner, 1993: 63, 66; see also Boyer, 1985/1994: 71). Harry Truman told Congress that the world ‘[could] no longer rely on the slow progress of time’ to achieve control of the bomb; instead, what was needed was agreement ‘at the earliest possible date’ (quoted in Bundy, 1990[1988]: 142). ‘Unless the United Nations Commission can arrest the drift of events’, warned Business Week, ‘we are moving toward a horrible war’ (quoted in Boyer, 1985/1994: 56).
But as time passed, the sense of urgency faded. ‘[E]very day that the Russians have the atomic bomb and every day it has not been used’, remarked a member of the US House of Representatives’ foreign affairs committee in 1949, ‘is a day that makes people feel that the reasons given [for world federation] are that much less valid’ (quoted in Wooley, 1988: 62; see also Wittner, 1993: 325). Even as subtle a thinker as Reinhold Niebuhr concluded from the failure of the Korean War to escalate that the risk of nuclear war had been overblown (Bartel, 2015: 298). The US government and its tame intellectuals actively tamped down fear of atomic weapons, with considerable success. At the same time, the deadlock in US–Soviet negotiations led to growing resignation. Instead of the bomb, the main threat was increasingly seen to be a hostile Soviet Union (Boyer, 1985/1994: chapters 9, 24–27; Davis, 2022: 40, 199; Wittner, 1993: 324–325).
Over time what had been viewed as an urgent existential threat was normalised. Pelopidas (2021) detects at the beginning of the 1960s ‘a decrease of the sense of urgency for nuclear disarmament and an acceptance of postponing action towards it and transferring the associated responsibility to those who will come after’ (p. 497). In the mid-1980s the Soviet Union under Mikhail Gorbachev finally showed openness to taking radical steps towards the abolition of nuclear weapons. But now Western governments – with the exception of US president Ronald Reagan himself – were no longer interested. When Reagan and Gorbachev began a serious discussion of nuclear abolition at the Reykjavik summit meeting, the Western establishment reacted with shock and disdain. The Soviet military also seems to have been less than pleased, though Gorbachev did not face open criticism from his Politburo colleagues. While in the following years the two sides went on to make dramatic progress on arms control, discussion of actual nuclear disarmament soon dried up (Blanton and Savranskaya, 2011: 49; Egeland, 2020: 8–9; Evangelista, 2023: 210–212; Grinevskii, 2004: 510; Schell, 2000: 34–37; Sigal, 2000: 144–145).
Ironically, a key stumbling block in the Reykjavik negotiations was a macrosecuritisation of Reagan’s own, which called on the United States to build a ballistic missile defence to render nuclear weapons ‘impotent and obsolete’. Reagan’s proposal invoked an existential threat to the planet (‘isn’t it worth every investment necessary to free the world from the threat of nuclear war?’) and proposed an extraordinary solution (Reagan, 1983). At Reykjavik, Reagan offered to share the defence with the Soviet Union, and there is good reason to think he meant it. Gorbachev – well aware of the pro-nuclear establishment consensus in the United States – did not take the offer seriously (Lettow, 2005: 219–226). Yet as in the late 1940s, if both sides had been wholeheartedly committed to eliminating nuclear weapons, agreement might have been feasible. 6
Today, American policymakers ‘generally recognize that the eventual abolition of the U.S. nuclear weapons arsenal is a fundamentally important, if not urgent, policy objective’ (Ripberger et al., 2011: 892, emphasis added). The threat, of course, has not gone away; if we try to keep nuclear weapons forever, we will use them. But ‘[f]or policymaking elites, as well as mainstream analysts in nuclear weapons states’, Pelopidas (2021) observes: Future horizons do not extend that far; they are limited to the current term in office or the current generation . . . Their strategy therefore consists in postponing the moment of nuclear detonation or radical nuclear change beyond such a horizon. (p. 490)
The same loss of long-term perspective has occurred in mainstream International Relations (IR) scholarship. Postwar realist critics of nuclear deterrence ‘stress(ed) that the focus on short-term order and stability amounted to strategic, moral and political failure, producing a false sense of security and a host of negative side effects, as well as precluding sustainable long-term solutions’ (van Munster and Sylvest, 2014: 537). In the long term, the threat of nuclear war required international integration or even world government. While these realists did not believe either solution was just around the corner, prominent writers like Hans Morgenthau, John Herz and Reinhold Niebuhr made serious efforts to understand how they could eventually be brought about (Bartel, 2015; Craig, 2003; Herz, 1962[1959]: chapter 12; Scheuerman, 2009: 122–134; Scheuerman, 2011). In contrast, their neorealist successors have refused to acknowledge that nuclear deterrence is a recipe for long-run disaster, and have focused on managing it (Craig, 2003; Pelopidas, 2016: 330–333; Pelopidas, 2021; Scheuerman, 2011: 93–97; Uchaev and Kharkevich, 2023: 51–53). ‘Realists claim to base their philosophy on an understanding of the tough and gloomy path of human history, and the recognition of the dark side of human nature’, wrote Ken Booth and Nicholas Wheeler in 1992, ‘yet they, above all, believe that nothing can go catastrophically wrong with an order founded on nuclear deterrence. This belief constitutes a realist fiction’ (Booth and Wheeler, 1992: 29). At the same time, IR’s once urgent motive of averting great power war has increasingly been displaced by an ‘ideal of disinterested scholarship’ (Pelopidas, 2016: 327–328; Reus-Smit, 2012). All this could occur only in a field which has lost its sense of urgency. It has become increasingly easy to assume that nuclear deterrence won’t fail soon, and that we can take our time in solving the problem (Pelopidas, 2016: 330). 
Neorealists have given up trying to solve it at all.
Procrastination and shadow solutions
A cynic might conclude that leaders and publics regard nuclear deterrence as a good self-interested gamble, even if it is bound to lead to disaster in the long run. Yet surveys both during the Cold War and as late as 2018 have shown many Americans believing that nuclear war was likely to kill them. Polls of Russians in 1996, 2006 and 2019 found 37 percent, 30 percent and 41 percent, respectively, reporting being very worried or terrified about nuclear war (Nestik and Zadorin, 2020: 17; Rendall, 2022: 783, 785–786; Smetana et al., n.d.). Nevertheless, in neither Russia nor the United States has the nuclear threat been a decisive political issue for many decades, if it ever was. Why?
Great power war is rare. But if nuclear weapons have the stabilising effects their proponents attribute to them, it should have become still less frequent. Low-frequency, high-impact risks usually take a long time to materialise. People tend to underestimate risks of which they have no experience, and hardly any of us have experienced nuclear war. Politicians win scant credit at the polls for forestalling catastrophes that never occur. While people may consider nuclear war a major threat, it is nevertheless not a very salient one (Posner, 2004: 136, 264; Schatz and Fiske, 1992: 6; Wiener, 2016). Moreover, surveys show that many people feel helpless to prevent it (Pelopidas, 2022: 191; Schatz and Fiske, 1992: 19–20). Problems of this kind can be so large and complex as to seem intractable. ‘When I have raised the topic of existential risk with senior politicians and civil servants’, reports Toby Ord, ‘I have encountered a common reaction: genuine deep concern paired with a feeling that addressing the greatest risks facing humanity was “above my pay grade”’ (Ord, 2020: 60). Yet successful securitisation also requires the demonstration of ‘a possible way out’ (Buzan et al., 1998: 33). Some analysts have argued that efforts to securitise global heating have actually discouraged emissions reductions by making catastrophe appear inevitable (Oels and von Lücke, 2015; Warner and Boas, 2017). Surveys show that Moscow students who believe nuclear war to be inevitable are less inclined to support efforts to avert it, and more prepared to entertain the option of preventive nuclear strikes (Zhuravlev et al., 2016: 58–61, 84).
What we have then are problems that are huge, hard to solve and possible to ignore without their immediately spinning out of control. The natural temptation is to defer them and work on something else (Andreou, 2007; Kunreuther et al., 2013: 402; Rendall, 2022: 786–787; Young, 2023: 271). When Americans were asked in the early 1980s why they had not chosen nuclear war as the country’s biggest problem, two of the most common answers were that it was ‘something to worry about for the distant future’ and something ‘the respondent [could] do nothing about’ (Schuman et al., 1986: 526–527). A prominent theory of procrastination holds that we discount future costs because they are less salient than present ones, and at the next decision point we do it again (Akerlof, 1991: 1). Faced with the risk of disasters, Robert Meyer and Howard Kunreuther observe, agents may ‘succumb to a continuous cycle of good-faith postponements . . . the high upfront costs of protective investments will always seem as [sic] more palatable when viewed as something to be done tomorrow’ (Meyer and Kunreuther, 2017: 17; see also Kunreuther et al., 2013: 404). Since people discount their own future wellbeing, this may help to explain why voters have been unwilling to prioritise nuclear war avoidance, despite many believing that they themselves are likely to be affected.
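The cited theory of procrastination can be sketched formally with a quasi-hyperbolic (beta-delta) discounting model of the kind common in behavioural economics. All numbers below are hypothetical illustrations, not estimates from any of the works cited: the point is only that when every future cost is shrunk by an extra present-bias factor, the comparison at each decision point is identical, so postponement repeats indefinitely.

```python
# A minimal sketch of present-bias procrastination under quasi-hyperbolic
# (beta-delta) discounting. All parameter values are hypothetical.

BETA = 0.5     # extra discount applied to everything that is not "now"
DELTA = 0.99   # standard per-period discount factor
COST_NOW = 10.0        # salient upfront cost of acting today
COST_TOMORROW = 10.0   # the identical cost, if paid one period later

def prefers_to_postpone() -> bool:
    """Compare acting now with acting next period. The deferred cost is
    discounted by BETA * DELTA, so it always looks smaller than acting now."""
    return BETA * DELTA * COST_TOMORROW < COST_NOW

# Because the comparison is the same at every decision point, the agent
# postpones today, postpones again tomorrow, and never acts at all.
postponed_periods = sum(1 for _ in range(100) if prefers_to_postpone())
print(postponed_periods)  # 100: postponed at every one of 100 decision points
```

The mechanism matches Akerlof's description: each individual deferral looks reasonable, and only the sequence as a whole is irrational.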
When a policy is dangerous in the long term but convenient in the short run, politics is prone to what Gardiner calls ‘shadow solutions’. The complexity of addressing global heating ‘provides each generation with the cover under which it can seem to be taking the issue seriously – by negotiating weak and largely substanceless global accords, for example, and then heralding them as great achievements without having to admit this even to itself’ (Gardiner, 2011: quoted passage at 48). The same lack of ambition and imagination has characterised the great powers’ approach to nuclear weapons. Both policymakers and scholars have come to accept that nuclear deterrence should be maintained indefinitely, with few if any serious attempts to think through an exit strategy (Craig and Ruzicka, 2013; Hymans, 2024b: 30; Pelopidas, 2021; Pelopidas and Verschuren, 2023; cf. Pauly, 2024: 19–20). ‘The commitment to eventual disarmament (always eventual) has been a true ambition of many of those engaged in nuclear decision-making in my experience’, observes William Walker: This said, they have seldom displayed the courage or found means to act in ways that would seriously advance the cause, at least prior to retirement when their influence over policy has waned. Instead, expressions of commitment to eventual disarmament . . . and participation in projects furthering its ends . . . deflect criticism and help to salve the consciences of decision-makers when their acquiescence to armament seems unavoidable politically and in the progress of their careers.
Serious steps towards nuclear disarmament are perpetually postponed till tomorrow, and tomorrow, of course, never comes (Walker, 2020: 23–24, 39–40).
Lessons for geoengineering
This problem could be replicated with solar geoengineering. In the past two decades, concern has mounted that global heating could devastate conditions for humans and other life on earth for centuries, even forever. An immense amount of welfare is at stake in expectation, even if global catastrophe is not the most likely outcome (Baum, 2024; Lenton et al., 2019; MacAskill, 2022: 137–138; Rendall, 2019; Weitzman, 2009). The past three decades have seen repeated efforts to securitise global heating as a physical threat to all humanity. Some states have embraced this framing. Yet even those that do have seldom acknowledged the need for radical measures – let alone adopted them – and greenhouse gas concentrations have continued to rise (Methmann and Rothe, 2012; Oels and von Lücke, 2015; Uchaev and Kvartal’nov, 2024). In response, some have advocated exploring geoengineering as a backup option should the world seem headed for catastrophe (Crutzen, 2006; Summers and Zeckhauser, 2008: 131–132; Weitzman, 2009: 17). Unlike a worldwide phase-out of fossil fuels, sulphate aerosol injection could be undertaken unilaterally. A government that did so would very likely macrosecuritise it as an emergency action to protect the world from an existential threat, but in this case, it would need to persuade only a domestic audience (Methmann and Rothe, 2013: 107; Young, 2023).
Because SAI would probably be deployed without worldwide consensus, it could lead to international conflict (Corry et al., 2024; Malm, 2023: 45–46; Nightingale and Cairns, 2014; cf. Reynolds, 2015: 181). Another worry is that by suppressing the effects of emissions, it could discourage efforts to reduce them (McLaren, 2016; Malm, 2022; cf. Halstead, 2018: 71–75; Reynolds, 2015). Christian Baatz describes the calculus: each generation has strong incentives to not mitigate because the benefits from GHG emissions mostly accrue to them while the benefits from emissions reductions would mostly accrue to future generations. Since each generation faces the same incentive structure, each generation might further contribute to the problem rather than solving it. . . . [T]he prospect of SRM [solar radiation management] contributes to this unfortunate incentive structure. (Baatz, 2016: 43; see also Gardiner, 2011: 161–163; Malm, 2022: 39)
Some costs solar geoengineering would impose on future generations would consist in damages such as ocean acidification, as well as the side effects of climate engineering itself (Preston, 2013: 29, 32). But as with nuclear deterrence, the main form of burden-shifting would be the conversion of ongoing and observable damages into catastrophic risk (cf. Stärk, 2025). If emissions declined slowly, SAI might have to be maintained for centuries. If they continued to rise, it could be locked in indefinitely. The risk of something going wrong over a long period would be substantial (Baur et al., 2023: 375–376; Neuber and Ott, 2020; Preston, 2013: 32). It might be an exogenous catastrophe such as a pandemic, misaligned artificial superintelligence – or thermonuclear war. 7
States or non-state actors could also undermine geoengineering programmes or even sabotage them. Andreas Malm warns that: all that would be needed in the year 2130 would be for one actor, powerful enough to switch off injection, to perceive some number of side effects as more pressing than the prospect of global heating, the memory of which would by then have been suppressed for a century. (Malm, 2022: 29; see also Futerman and Beard, 2023: 17; Nightingale and Cairns, 2014) 8
A nightmare possibility is that an omnicidal actor could exploit geoengineering technology to bring about catastrophic and irreversible global heating leading to human extinction (Vermeer et al., 2025: 30).
From the standpoint of the world’s timeless population – ‘all the people who exist at some time in history’ (Broome, 2005: 404) – trying to maintain SAI indefinitely, as with nuclear deterrence, would be a bad gamble. Yet the probability that solar geoengineering would collapse in any given year could be low. Because so much of the expected cost would be externalised to the distant future, each successive generation might have a self-interested incentive to take the risk. By mitigating many short-term effects, solar geoengineering could render societies increasingly complacent. As with nuclear deterrence, we already see signs of a ‘rationalist-optimist’ framing prone to assume geoengineering will be smoothly managed for the common good (Malm, 2022; Malm, 2023; see also Baatz, 2016: 34–35; Corry et al., 2024; McKinnon, 2020; cf. Futerman and Beard, 2023: 15). Above all, it would be tempting to procrastinate in addressing the underlying problem. The result could be perennial postponement of emissions cuts, followed by eventual catastrophe.
Two factors make a disastrous outcome less likely than in the case of nuclear deterrence. First, states would have months to get an SAI crisis under control. If injections by one party collapsed, catastrophe need not result so long as the same or other actors could restart the programme (Parker and Irvine, 2018; Wagner, 2021: 61; cf. Rabitz, 2019: 518–519). This makes SAI significantly different from nuclear deterrence, whose breakdown could have immediately disastrous effects that third parties could not mitigate (Elster, 1979: 387). Second, the cost of clean energy has plunged, with new onshore wind and solar installations already cheaper in most places than fossil fuels (International Energy Agency, 2023: 9–10). In the long term, states are likely to abandon the latter of their own accord (Stärk, 2025). Given these differences, and the risk of catastrophe if emissions do not fall in time, solar geoengineering research may on balance be justified (Wagner, 2021). 9
Unfortunately, no substitute for nuclear weapons appears to be on the horizon. This is no accident. Global heating is an unfortunate by-product of fossil fuel use; hardly anyone desires it for its own sake. The risk posed by nuclear weapons, on the contrary, is one of their raisons d’être. It underpins deterrence. So long as states have conflicts of interest that render war a plausible option, some will want nuclear weapons (Schelling, 1966; Stärk, 2025). That said, they need not pose an existential risk to humanity to serve as deterrents. Even if nuclear weapons are locked in for the foreseeable future, we should seek ways to make them less dangerous (Baum, 2015; Goldfischer, 1998).
Conclusion
‘Whether or not the new factor of nuclear weapons should impel states toward a contract of world government’, Hedley Bull observed nearly 50 years ago: . . . it has not in fact had that effect. On the contrary, the increased vulnerability of states and peoples is widely taken to provide a new guarantee of peace, making the international anarchy not less but more tolerable than it was before. (Bull, 1981: 735; see also Deudney, 1993: 29–30)
The problem is that sooner or later this guarantee will expire. A similar form of lock-in could occur with solar geoengineering, though it would probably be both less dangerous and less enduring than in the nuclear case.
Unfortunately, we may observe the same phenomenon repeatedly. With ongoing technological development, global catastrophic risks are proliferating. More and more actors will be able to cause devastating harm (Bostrom, 2019; Persson and Savulescu, 2014[2012]; Sears, 2020; Torres, 2019: 358–361). Superintelligent AI, for example, could drastically amplify some human beings’ ability to harm others. Much of the concern about the existential risk superintelligence poses, however, has to do with the possibility that it could escape human control (Bostrom, 2016[2014]; Carlsmith, 2025; Dung, 2025). A 2023 survey of nearly 3000 experts found that most judged superintelligent AI, if created, ‘to pose at least a 5% chance of causing human extinction or similarly permanent and severe disempowerment of the human species’, with many putting the probability higher (Grace et al., 2024: 19). In contrast with nuclear war and geoengineering breakdown, it is unclear whether superintelligent AI would pose a low-frequency threat – some argue that catastrophe would be probable or even certain (PauseAI, n.d.; Yudkowsky and Soares, 2025), whereas others deny that it poses a risk at all. Given substantial expert disagreement, the rational response would be to put the burden of proof on the sceptics and take precautionary measures (Baum, 2018; Dafoe and Russell, 2016; Yampolskiy, 2022: 240–242). The actual result has been macrosecuritisation failure (Sears, 2023: chapter 6).
Could superintelligence risk get locked in? Risks like nuclear war or geoengineering breakdown are ‘state risks’: the total probability of disaster depends on how long we remain in the state of vulnerability. In contrast, superintelligent AI is sometimes seen as involving mainly a transition risk – if we successfully align its goals with human interests, we should then be out of the woods (Ord, 2020: 209). Yet a kind of state risk is already emerging. We cannot know whether frontier AI poses any objective risk of catastrophe, but in the absence of known frequencies, it is reasonable to assign some epistemic probability based on theory. Such probabilities, as Nick Bostrom puts it, can be ‘construed as (something like) the credence that an ideally reasonable observer should assign to the risk’s materialising based on currently available evidence’ (Bostrom, 2013: 16; Stärk, 2025). Here the risk depends both on superintelligence being developed and on catastrophe resulting if it does (Ord, 2020: 168–169). Over the next few decades, this risk appears disturbingly high (Carlsmith, 2025: 383; Grace et al., 2024). But in any given year its probability may seem low enough to render it acceptable to actors who sharply discount the future. As with nuclear deterrence, the benefits of risk-taking would be front-loaded, and the costs, in expectation, back-loaded – creating an ongoing incentive for development (Yudkowsky and Soares, 2025: 204–206). 10 Suppose that states make the transition to superintelligence without immediate disaster. If development proceeds at similar rates, a balance of power system could form. Human governments might try to use superintelligent AI for deterrence. It is far from clear that such a balance of power would be stable (Bostrom, 2017: 141–142; Friederich, 2024: 318; Tinnirello, 2019). Moreover, if it did prove stable, the longer it endured, the greater the chance that sometime somewhere superintelligence would escape human control (Ord, 2020: 402, n. 61).
In a balance of power system, both deterrence failure and AI misalignment would be state risks and thus cumulative over time. Yet as with the risk of nuclear war, they would be easy to ignore, and it would be tempting to postpone addressing them.
If this should prove a pattern, it has a troubling implication. The probability that any given form of technology will produce an existential catastrophe might be low. Perhaps we overestimate the potential of nuclear war or extreme global heating to lead to the collapse of civilisation. Perhaps superintelligent AI will prove benign or never emerge at all. Yet, it seems unlikely that we will never invent existentially dangerous technology. We might invent it not only once, but repeatedly. If we go on adding enough risks over enough time, the total probability of one materialising will be high (Taleb et al., 2014: 2; Thorstad, 2024).
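The arithmetic behind this claim is straightforward. With purely illustrative numbers (not estimates drawn from the literature), and assuming each risk is independent across years and across technologies, annual probabilities that look negligible compound into a large cumulative probability:

```python
# Illustrative only: hypothetical annual probabilities, assuming
# independence across years and across risks.

def prob_any_disaster(annual_risk: float, years: int, n_risks: int = 1) -> float:
    """Probability that at least one of n_risks independent state risks,
    each with the given annual probability, materialises within `years`."""
    p_survive_one_year = (1 - annual_risk) ** n_risks
    return 1 - p_survive_one_year ** years

# A single risk of 0.1% per year looks tolerable over a decade...
print(round(prob_any_disaster(0.001, 10), 3))      # prints 0.01
# ...but five such risks sustained over two centuries do not.
print(round(prob_any_disaster(0.001, 200, 5), 2))  # prints 0.63
```

The numbers are arbitrary; the structural point is that state risks accumulate multiplicatively, so the total probability of catastrophe is driven by how many risks we add and how long we tolerate them, not by the (low) annual figure that frames each year's decision.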
Nuclear weapons could be the first of many existential state risks that we introduce to the world. Each might be unlikely to materialise in a given year; each might be easy and convenient to ignore. Over time, we would find ourselves surrounded by quietly ticking bombs. Humanity’s prospects would not be good in this scenario.
Acknowledgements
This paper had its origin in two outstanding workshops sponsored by the Centre for Global Security Challenges at the University of Leeds. For comments on earlier drafts, I am grateful to Justin Canfil, Olaf Corry, Jenny Oberholtzer, Benoît Pelopidas, Yevgeny Uchaev, two anonymous reviewers and audiences in Cambridge, Leeds, Hamburg, Pittsburgh and Philadelphia. For financing and a supportive environment during part of the writing of this paper, I thank the Institute for Peace Research and Security Policy at the University of Hamburg.
Data availability
All data used in this paper are drawn from publicly available sources.
Funding
The author disclosed receipt of the following financial support for the research, authorship and/or publication of this article: The author wrote part of this paper during a visiting fellowship at the Institute for Peace Research and Security Policy, University of Hamburg.
Ethical considerations
This study did not involve any animal or human participants, and no ethical approval was required.
Consent to participate
Not applicable.
Consent for publication
Not applicable.
Author biography
Matthew Rendall is a Lecturer in Politics and International Relations at the University of Nottingham. He has published widely on International Relations theory, diplomatic history and moral philosophy, including papers in Ethics, The Journal of Philosophy, The Review of International Studies and Security Studies. He is currently writing a book on existential risk.
