Abstract
This article is about our relationship to two lethal technologies that blow the mind: nuclear weapons and artificial intelligence – and their mutual imbrication. Our diagnosis is that we are missing an ethical critique adequate to the (emergent/resurgent) technologies of mass destruction. In particular, we are dissatisfied with the persistence and dominance of algorithmic reasoning and/in contemporary just war theory, manifest in deterrence theory. We argue that such approaches fail to take into account the intrinsically human capacity to exercise moral judgement and the quintessential human-ness of ethical relations. In response to the reduction of ethics to algorithmic thinking, we summon a distinctly human quality – the imagination – and argue that it is pivotal to exploding the conceptual categories that hamper ‘ethical’ theorising. Imagining the Apocalypse is not the answer; imagining otherwise is. With this in mind, we foreground the imagination as vital to ethical reasoning and political critique.
The rest of humanity may not forgive us but then the rest of humanity, depending on who fashions its views, may not know what a tired, dejected, heartbroken people we are.
Prelude: On human disappearance
Shortly before his death in 2007, Jean Baudrillard opens one of his final pieces of published writing with the words ‘Let us speak, then, of the world from which humans have disappeared’ (2009: 9). In this short text, Baudrillard ponders the modes by which the modern human is taking a leave of absence – not so much physically or as a natural phenomenon, ‘not exhaustion, extinction or extermination’, as he says, but rather as a mode of altered existence. It is a disappearance of the human as philosophical subject, of sorts, on account of an excess of reality and limitless technology, particularly pronounced in the post-World War II cybernetic and nuclear age. This is the condition in which human beings give ‘way to an artificial world […] the highest stage of materialism […] That world is perfectly objective, since there is no one left to see it.’ (Baudrillard, 2009: 15). And because as humans we are not objects, it is a world in which the human subject, fallible, failing and flawed, must, in a never-ending effort, be overcome through limitless, object-making technology.
It is, as Martina Heßler notes, a technologically-facilitated striving for end-less gain, improvement and increase – a constant effort to fix what seems broken. In this environment, the human becomes ‘a technologized Sisyphus who must perpetually eliminate and contain newly appearing defects and deficiencies of both humans and machines’ (2025: 16; emphasis in original). A mode of human life in which morals and meaning may disappear altogether. In the age of escalating nuclear threats and accelerated violence with artificial intelligence (AI)-enabled technology, this condition seems particularly pronounced. The phenomenon of human disappearance, the desire for perfect technological objectivity and its implications for our (in)ability to ethically reckon with violence in warfare, is, in short, our concern in this paper.
Our reasoning proceeds as follows: the first part of the essay considers the ways in which nuclear weapons and AI-enabled military technologies mirror and reflect one another as technologies of mass violence, exploring their shared logics and parallel emergence. We find them to be twin perils, each raising the spectre of annihilation through the evisceration of the human. This first half of the paper is followed by an Interlude, dedicated to the pathology of ‘spasm war’. The second half develops an ethical critique of the cyber-nuclear age. We revisit the game-theoretical mode of reasoning at the heart of strategic nuclear deterrence as it developed during the Cold War and examine its continuation into contemporary discussions of military AI. In contrast to the algorithmic ‘ethics’ that dominate discussions of both nuclear weapons and AI, we reassert the human subject and human experience as central to reckoning with technologies of mass violence. We seek to disrupt the complacency and complicity inherent in algorithmic ethics, foregrounding our shared existential predicament via Günther Anders and Albert Camus and the potential it offers for a proper sense of responsibility.
Spectrality of annihilation: The twin perils of nuclear weapons and AI
The materialisation of the nuclear bomb in the 1940s and the more recent advances of AI are increasingly twinned in contemporary imaginaries as technologies of such magnificently awe-inspiring – awe-ful, awe-some – capabilities that we must contend with their powers in relation to human death in its most abstract form. As the near-unimaginable human toll that resulted from the nuclear violence in Hiroshima and Nagasaki fades from collective memory into a distant abstraction of human devastation, AI is raised as the new speculative spectre, imagined to hold the capacity for cost to human life at scale – a hypothetical existential risk that former Google chief executive officer (CEO), Eric Schmidt, defines as ‘many, many, many, many people harmed or killed’ (Schmidt quoted in Kharpal, 2023). In contemporary discourses, nuclear and AI technologies are often twinned as similar or alike in their destructive capacity. A 2023 letter signed by most luminaries of the AI-producing industries declared their own products to be existentially perilous and that ‘mitigating the risk of extinction from AI should be a global priority alongside other society-scale risks such as pandemics and nuclear war’ (Roose, 2023).
The abstraction of real-world past violence made possible with the most absurd of all weapons – the nuclear bomb – now serves as a framing device to speculatively conjure the always latent possibility for future large-scale violence, mediated through the objectifying characteristics of the respective technology that produces the violence.
The nuclear bomb and AI (for military purposes and otherwise) have their roots in the same era and within the same scientific-rational set of foundations particular to World War II. Both technologies rely on the same large-scale industrial wartime mobilisation of technological and labour power (Bousquet, 2009: 33). Just as the world had been introduced to the possibility of large-scale annihilation, the computer, as Bousquet writes, ‘another product of the war effort, became the new dominant technology and abstract model through which the world came to be understood chiefly in terms of information-processing’ (33). Both are notable for their logic of abstraction as it relates to human life.
The coupling of nuclear imaginaries and AI narratives is, at the time of writing in early 2025, well established in media discourses. As if by coincidence, in 2023, the cinematic rehabilitation of Robert Oppenheimer by blockbuster director Christopher Nolan cemented the primacy of crisis and conflict for a contemporary age in which a nuclear power (Russia) had just launched a large-scale war of territorial aggression against another state (Ukraine). This ushered in a well-known, but perhaps misplaced, hand-wringing about necessary violence by a handful of tragic protagonists who hold the power to do what one must do in war, including inflicting unspeakable levels of violence in one fell swoop with whatever technology is available (Pelopidas and Renic, 2024: 214). Only a short time later, in October 2023, one of the world's most technologically advanced states (Israel) leveraged the latest in military AI technologies for a large-scale counter-terrorism military operation in a confined area of land (Gaza) which has killed around 50,000 individuals and wounded over 100,000, likely many more (AJLabs, 2025). The new moment of incredibly powerful AI, we are told in several news features, is the new ‘Oppenheimer Moment’ (Robins-Early, 2024).
Once the ‘Oppenheimer Moment’ began to fade, in step with technologically conditioned public attention spans, another historical reference was raised that further bound the nuclear era to the current AI summer. What AI needs, it was suggested by a US government commission and repeated by many a pundit, is a ‘Manhattan Project-style AI initiative’ (Naughton, 2024). We recall, of course, that the Manhattan Project had all of America's industry enrolled in the pursuit of the most destructive weapon known to humans. A large-scale military-civilian effort, demanding billions in financial resources and millions in human resources, effectively turning all of America into ‘one huge factory’, as Niels Bohr noted at the time (Bohr quoted in Atomic Heritage Foundation, n.d.). A factory for spectral violence. Raising the image of the Manhattan Project in the context of AI is not done to bring forward a cautionary tale of irresponsible acts and their consequences, but rather a project plan, an unambiguous good which ought to be repeated for success. This imaginary is a re-imagining, often bolstered by media accounts, which acts as a lens through which both the inflection of the past and the possibility for a future are contained.
These curated recollections of the past matter especially when they deal in tropes of crisis. They inform our attention in the present through a pastiche of the past. As Amin Samman, drawing on Paul Valéry, asks, ‘[w]hat, exactly, is the function of the past in times of apparent crisis? If “history feeds on history” how does this process play out?’ (Samman 2019: 67), and what effects might it have? The twinning of nuclear violence and AI as an inevitable future in all walks of life, including military life, produces the demand to do whatever is necessary to gain the upper hand, to ‘be the ruler of the world’ by mastering AI. Top AI firms have recently pledged to invest US$500b in Project Stargate, a project to build out AI infrastructure in the US (Da Silva et al., 2025). In keeping with this, the military AI weapons company Anduril Industries (valued at US$14b) is forging ahead with its plans to invest US$1b in a huge manufacturing facility to produce tens of thousands of autonomous weapons per year (Metz and Lipton, 2025) for an initiative they have named ‘Rebooting the Arsenal of Democracy’. The initiative, and its promotional material, conjures up the industrial efforts raised for World War II and thereafter by America and its allies as an unmistakable force for good, because it was ‘the pivotal factor in preventing World War III’ so far (Metz and Lipton, 2025). This Pax Americana of deterrence through technological objects is now in peril, the military AI company laments, and requires a large-scale industrial AI effort to enable what another military AI company, Palantir (market cap US$160b), calls ‘the primacy of winning’ (Sankar, 2024). What is to be won remains unclear.
In conjuring up the valour of the Manhattan Project, an imaginary of history emerges as the mandate for the present: remain focussed on crisis and annihilation. The justification for this present-future is routed through the logic of technological and financial objects that serve the objective to overcome this latent crisis of possible annihilation, and win. The latent crisis, in both instances, is human-made and betrays an alienation Günther Anders identifies as a Promethean disparity (promethisches Gefälle) – a mismatch between that which humans are able to produce (Herstellen), and that which humans are able to imagine as the consequences of this production (Vorstellen). As Anders states: we are perfectly capable of making the hydrogen bomb, but we are unable to adequately picture the consequences of that which we are making. And similarly, our capacity to feel lags behind our capacity to do: while we are capable of bombing hundreds of thousands of people, we are incapable of weeping for them or regretting our actions. (Anders, 2010: 17; authors’ translation and emphasis).
‘Pending [human] approval’
Although the notion of AI as so vastly destructive a force as to invoke the same fears of existential risk as nuclear annihilation has become a common trope in certain technology circles, the two technologies manifest their spectrality in opposite directions. Nuclear violence derives its potency from a mid-century understanding of violence as a highly visible, overwhelming spectacle, measurable in megatons and megadeath or megacorpse (the unit by which mass violence – 1 million dead – came to be measured in the 1950s) (Anders, 1994: 14). Nuclear violence looms large as a latent totalitarian threat. An almost iconoclastic largesse that perverts our ability to picture what it is or does (14). AI, as a mode and facilitator of violence, in contrast, is invisible – a collection of zeros and ones, routed through servers and interfaces, measurable in bits and bytes. It works within the data substrate of highly abstracted human life. Only once enacted by humans-in/on-the-loop does the scale of violence possible with AI-enabled weapon systems materialise. The human here is embedded within a technologically functional system of violence, a workflow process for the application of force at high speed and on a large scale. Consider, for example, the Israeli AI-system Lavender which identified 37,000 ‘targets’ for operators to approve. Human operators had mere seconds to weigh up whether the AI decision was legitimate or an error (Abraham, 2024; Schwarz, 2024a). Humans, and their cognitive capabilities, are an obstacle here and the tendency is towards a mode of automated decision-making, as per the tab ‘pending approval’ on Palantir's graphics of targeting selection and data processing. 1 The human is indeed on the loop but may do little else than to respond to the tantalising invitation to execute, decision pending. 
This opens the doors to the potential for massive violence, wherein ‘mistakes’ are treated ‘statistically’ (Schwarz, 2025), and which Heßler's ‘Sisyphus in the engine room’ must work to fix, perpetually (Heßler, 2025).
The management of nuclear violence has consolidated around an object, the (non-)activation of which needs to be calibrated with careful processes to manage human tempers and temptations. AI, by contrast, may serve as a technological process, attached to objects of violence, that enables the potential for mass atrocity, ostensibly removing humans and their tempers and temptations. More recently, imaginaries of nuclear weapons and artificial intelligence also twin in the literal sense of the technologies merging into one. For such future visions, the AI process informs or facilitates the nuclear decision-making. The integration of AI into nuclear weapons systems falls into three main categories: threat detection and early warning systems; nuclear command, control and communication (NC3) networks; and in the shape of autonomous and semi-autonomous delivery systems. Familiar problems such as automation bias (from the AI sphere) and risk of escalation (from the nuclear deterrence sphere) mutually amplify risks (Boulanin, 2019; Johnson, 2020, 2023; Zala, 2024).
Some posit that AI can usefully complement human decision-making in a moment of nuclear crisis. Rather than serving to exacerbate decision-making pathologies, it is even argued that AI may assist decision-makers in exhibiting ‘greater empathy and responsiveness towards the fear of adversaries’ (Holmes and Wheeler, 2024: 165). Implicit here is the assumption that unenhanced human empathy is not enough. If a Promethean discrepancy hampers our ability to understand the scale and significance of our objects’ impact, perhaps we must then become technologised even in our ability to ‘feel’, ever striving, and failing, to improve ourselves and our machines. Humans are, after all, too unpredictable in their emotionality and suffer from ‘cognitive errors’, a lament often raised in the context of advocacy for lethal autonomous weapons (see e.g. Trabucco and Heller, 2022).
The radical contingency magnified by the union of nuclear and algorithmic violence is breathtaking; yet, as tends to be the case with regard to lethal technologies, sanitised strategic calculi dominate the discourse (Boulanin, 2019; Boulanin et al., 2020). What unites them is that the referent object is the weapon or technology itself, and rarely the sentient human being. And where the human is included in the consideration, it is either as an unfeasibly hyper-rational agent, as a calculable abstraction, or, at a stretch, as a dangerous bundle of desires. There might have been a moment in time when we would have been able to imagine cyber-nuclear violence otherwise.
Interlude: ‘Spasm war’: A cybernetic pathology
The term ‘spasm war’ is commonly attributed to Herman Kahn and his third book on nuclear strategy, On Escalation: Metaphors and Scenarios (Kahn, 1965). The concept denotes the final rung in a 44-step nuclear escalation ladder in which Kahn suggests that nuclear war might not immediately devolve into nuclear Armageddon but that this is an incremental process in which certain levers of control remain available until its culmination in the final iteration: spasm war. If this occurred, Kahn predicted, all parties would ratchet up the escalation ladder, culminating in a delirious, uninhibited nuclear war, void of any overarching strategy, consisting only of arbitrary uncoordinated strikes. It is crucial, as Freedman points out, that this final iteration is not a matter so much of ‘blind and overwhelming fury as the lack of control and thought’ in the final stages of possible nuclear annihilation (2003: 204).
The term itself – spasm war – reportedly emerged from a briefing session Kahn took part in as part of the wider Hudson Institute project, commissioned by the Martin Company (an aerospace business). The commission was for a study which had as its very first (of nine) objectives: to ‘stimulate and stretch the imagination’ (Kahn, 2017: xxiv). Kahn was a mathematician, physicist and systems analyst. It is then perhaps not surprising that the imagination was stretched in a particular direction. To have control meant to exercise calculative reason at all stages of the escalation ladder. To lose control meant giving in to un-reason, to drives and urges, to uncontrollable, convulsive acts that are underwritten by uncontrollable emotions. With this approach, Kahn ‘neatly sidestepped the moral and social costs of fighting with genocidal weapons [following] a pragmatic murmur, “always abstracting from the humanitarian aspects” ’ (Ghamari-Tabrizi, 2009: 4). This murmur would become a mode of ethical theorising about the justification of violence that informs scholarship and rationales for lethal force alive and well today. We will return to this matter of ethics below.
The crucial point is that in this imaginary, humanity, in its uncontrollable messiness, is too unbearable to be conceived of in any other way than in abstracted or spasmic form. The spasm reflects the shadow side of the insistence on rational mastery, which we see so strongly revived today – a mirror image of spasmodic, uncontrollable annihilation. Part fear, part fantasy couched in a sexual vernacular of libidinal surrender in which primal drives trump all reason. Carol Cohn noted the sexual overtones of military jargon more broadly, recounting how a US military advisor to the National Security Council unapologetically fantasised about ‘releasing 70–80% of US megatonnage in one orgasmic whump’ (1987: 693). In the specific context of 1960s nuclear strategising, this sentiment seems to have been borne out by phrases like ‘orgasmic spasms of destruction’ and ‘wargasm’ (Freedman, 2003: 204). The spasm as the abstract imaginary of a foundational yet abstract human drive. An irrational flaw, a shameful aberration, in an otherwise rational system of control; a system afflicted with the always latent possibility of the worst of all failures.
Kahn's approach to raising the spectre of nuclear war, and possible survival, was influential at the time and remains so. The advent of cybernetics had ushered in the promise that the newly created nuclear instability could be controlled, managed and contained with enough computational processes and power. As a field of inquiry, cybernetics offered a powerful idea: that a techno-logical principle (information feedback) could be used to ‘explain how all living things – from the level of the cell to that of society – behaved as they interacted with their environment’ (Kline, 2015: 2). This pursuit was, perhaps, in itself a post-war desire for rational control in the face of the breathtaking death toll of World War II, a form of what Sigmund Freud called ‘isolation’ – an ego defence. Freud notes that an individual may at times mentally isolate ‘an event, idea or act by cauterizing it emotionally and by preventing it from becoming a significant experience’ (Nandy, 1997: 8). As Ashis Nandy explains, later iterations of this theory adapted it for a scientific-technologically advanced mid-20th century era, defining isolation as a defence mechanism which isolates ‘an idea from the emotional load of feelings that originally connected with it. [This manifests in] the continued elimination of affective associations in the interest of objectivity’ (Fenichel quoted in Nandy, 1997: 8). It is a mechanism, Nandy notes, of ‘hypocrisy and self-deceit. Individuals and societies can isolate their violent acts, even mass murders, from the emotions they should formally arouse’ (9). An echo of Kahn's pragmatic murmur: always abstracting from the humanitarian aspects. The systems-oriented processes of computational technology are a perfect match for this psychopathology. The computer as ‘Electric Brain’ which relieves humans of the burden of decision-making, an external, objective conscience, so to speak (Anders, 2010: 60).
Deterrence theorising contains its own psychopathology in positing a fictional-rational stability, or equilibrium: calculable, manageable and foreseeable. Viewed from an analytical distance, the concept of deterrence rests on a rational logic: to overmatch my potential enemy so severely that they would not dare come at me is a way to maintain the status quo. A similar ethos is enshrined in the motto ‘Peace through strength’. This is also the sentiment expressed by today's military technology companies like Anduril and others. Wars typically only happen, Anduril CEO Palmer Luckey declares in a 2024 interview, when states and belligerents misunderstand their own capabilities: ‘when both sides understand who is going to win, it is very rare for things to proceed to violence’ (Luckey, 2024a). This betrays a rather ahistorical understanding of warfare but perfectly captures the rationale. Perhaps to Luckey's credit, he acknowledges that ‘where this falls apart is when you have enemies that have irrational aims [because] it is very hard to engage in game theory with people who pursue the non-game-theory optimal strategy’ (Luckey, 2024b). This does not deter him (no pun intended) from holding fast to the peace through strength mantra.
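Luckey's game-theoretic wager can be rendered in miniature. The sketch below is our own illustration, with hypothetical ordinal payoffs of our own devising: under the theory's assumptions, mutual armament emerges as the only stable outcome even though mutual disarmament would leave both players better off, and the entire result hangs on every player maximising precisely these assumed payoffs.

```python
from itertools import product

# A toy two-player armament game with hypothetical ordinal payoffs.
# Strategies: 'arm' (build capability) or 'disarm'.
# (row, col) payoffs: mutual disarmament is best collectively,
# but unilateral disarmament is worst for whoever disarms.
payoffs = {
    ("arm", "arm"): (1, 1),        # costly standoff
    ("arm", "disarm"): (3, 0),     # dominance over the disarmed
    ("disarm", "arm"): (0, 3),
    ("disarm", "disarm"): (2, 2),  # peace dividend
}

def nash_equilibria(payoffs, strategies=("arm", "disarm")):
    """Strategy pairs from which neither player gains by deviating alone."""
    equilibria = []
    for a, b in product(strategies, repeat=2):
        a_best = all(payoffs[(a, b)][0] >= payoffs[(x, b)][0] for x in strategies)
        b_best = all(payoffs[(a, b)][1] >= payoffs[(a, y)][1] for y in strategies)
        if a_best and b_best:
            equilibria.append((a, b))
    return equilibria

print(nash_equilibria(payoffs))  # [('arm', 'arm')]
```

The ‘non-game-theory optimal strategy’ that Luckey concedes – an adversary who does not maximise these payoffs – is not accommodated by the model; it simply falls outside it.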
Of course, when we scratch the surface of the analytical distance on which deterrence theory relies, we understand quickly that it betrays a condition which rests on the pure threat of inflicting unspeakable, inhuman pain and suffering. Sharon Weiner highlights that this logic reveals a most brutal wager, one that has not been revisited since the 1940s: ‘is it necessary to base our national security on a threat to commit suicide, or on a threat to commit genocide’, she asks (Weiner, 2023b). 2 It indicates a state of affairs in which a threat of unspeakable violence is worth more than a promise, or cooperation and mutual benefit.
Here, then, the reference to ‘spasms’ may well be understood as a wider, generalised pathology which has at its core the latent and unwelcome possibility of painful spasms. Perhaps this is a ubiquitous condition wherein the potentiality of destruction, annihilation and omnicide is always already present. The inherent insanity and profound ambivalence of having the non-use of the world's most destructive weapon as a haloed source of stability, even peace, is intrinsic to this very disorder. Deterrence is a theory that imagines the world as a constant threat-scape. This is precisely where the latest visions for a military AI-infused future come into play: as a computational process to perpetually identify possible threats which can be pre-empted in some way or another.
Algorithmic ethics and the excavation of the human
Cybernetics itself is a concept with many facets and faces, which is in no small part due to the multi-disciplinary context from which it arose. However, the exploration of these ideas also coincided with the development of computational technology, and it was not long before broader philosophical ideas about information, communication and feedback loops were routed through the computational logics of information processors. This is how we largely understand cybernetics today: as a predecessor of AI. At the time, the combination of cybernetic theories with computational processes held significant sway for both hot and cold war doctrines from the 1950s onward.
In 1960s US military doctrine – particularly under the helm of McNamara, a Ford Motor Company executive turned Secretary of Defense who adopted Kahn's escalation ladder culminating in spasm war – modes of warfare were forged along the lines of highly quantitative, computational processes (Bousquet, 2009: 149; Turse, 2013: 38) – James Gibson (1986) famously termed this ‘technowar’. For his strategising about winning the Vietnam war, McNamara ‘relied on numbers to convey reality and, like a machine, processed whatever information he was given with exceptional speed’ (Turse, 2013: 38). This, he thought, would allow him to optimise decision-making at speed. As Turse writes, ‘McNamara and his national security technocrats were sure that, given enough data, warfare could be made completely rational, comprehensible, and controllable’ (2013: 38).
With the computational ethos as logical basis, ‘the human’ became ever-more abstracted, disappearing into a formula of hallowed object-ness; a calculable variable in wargaming. This wilful disappearance of the actual, living human is also notably present in post-war theorisations on the ethics of violence. This is what we turn our attention to next.
Game theory and the maximisation of moral utility
The ethical discourses on nuclear annihilation and military AI-enabled weapons radiate in opposite directions. Where the ethics of nuclear violence is most often cast in terms of deterrence, or non-action, the ethics of AI-enabled, autonomous systems is about more discerning delivery of violence. A somehow ‘better’ violence. Of course, there are complications to this bifurcation. The non-use of nuclear weapons (or nuclear taboo) is always also a ‘use’ of sorts: the threat of annihilation looms. As Elaine Scarry writes, it is to threaten the sudden collapse of ‘the floor of the world’, holding another population hostage to the ever-looming threat of annihilation and genocide (2014: 1). And the ‘ethics’ of discerning, better targeting is, as we know, underwritten by the massive pretence of non-failure, whereby the promise of the system of precision offers an alibi for precisely the implicit potential of the opposite: indiscriminate killing.
What underpins both sets of ethical discourses is a reliance on abstracting the human from the processes and effects of violence and a foregrounding of the weapon as an analytical anchor (yet not as object of analysis) for probabilistic reason. This is particularly notable in analytic moral just war reasoning in which technology is often leveraged as an amplifier for certain moral dilemmas, but without acknowledging that the destructive technological capacities have significance for the moral choices we make. 3 AI-enabled weapon systems, whether used in conjunction with nuclear command structures or not, are the purest realisation of Kahn's nuclear escalation logic: to place the human agent squarely into a matrix of variables and limited, purely rational choices. In short, it is a type of moral reasoning about the ethics of war that is underwritten by an economic logic of expected value or expected utility.
Consider, for example, the musings of Alex Karp, CEO of Palantir, in a New York Times op-ed, published in 2023, on the rational utility of threatening violence. In the piece, titled ‘Our Oppenheimer Moment’, he argues for a moral mandate to develop AI-enabled weapons and autonomous weapon systems as a workable deterrent. We should not, he declares, ‘shy away’ from developing weapons for mass violence, because the ability to develop such weapons, paired with ‘a credible threat to use such force’ serves as a basis for better diplomacy, echoing Luckey's understanding of peace through strength (Karp, 2023). Karp draws on nuclear wargame theorist Thomas Schelling's doctrine of coercive diplomacy to make his point: ‘to be coercive, violence has to be anticipated […] The power to hurt is bargaining power. To exploit it is diplomacy – vicious diplomacy, but diplomacy’ (Schelling quoted in Karp, 2023).
What Schelling frames as the ‘art of coercion’, rests, it must be said, on a misunderstanding of the dynamics of violence and a very reductionist idea of the human. War scholar Antulio J Echevarria notes Schelling's shortcomings succinctly: Schelling's error was not so much that he developed theories to predict rather than to explain, though he is guilty of that to a degree, but that he oversimplified war by attempting to reduce it to a rational sequence of decisions, a decision-logic. (2021)
Warfare has never adhered neatly to such rational strictures, and likely never will, which makes it all the more puzzling that a mode of economic thinking, framed as moral reasoning, still informs some of the more strikingly algorithmic variants of justifying violence in warfare today. In those types of moral wagers, the human moral agent is present as a calculating agent, able to effect a desired outcome in an environment of limited variables and time. The concept of violence is typically abstracted entirely from the context-relevant moral impact. As though the moral salience of a charred and dismembered body of a loved one, multiplied by a hundred, is confined to one single act of force and ends at the point of infliction. No lingering traumas, no psychological injuries, no second- and third-order harms, ever factor into these modes of calculative moral reasoning. Only units of dead bodies. Megacorpse.
Such types of moral reasoning have their foundations in the same cyber-nuclear era as game theory and computational approaches to military doctrine. This is unsurprising; moral reasoning does not happen in a vacuum – ideas and modes of thought often circulate way beyond their original field. As Sonja Amadae (2016) has shown, nuclear violence and algorithmic reasoning share a point of origin – specifically encapsulated in the figure of John von Neumann, who wrote what would later come to serve as a seminal text in game theory as we know it today. And the more one probes into the twinned history of thought of these two military weapons technologies, the more it is impossible not to see the justification of their use as intrinsically linked by a shared logic, specifically in how the permissibility of harming the innocent is justified.
In the early 1970s, a mode of moral reasoning about violence emerged that sought to apply the types of formal logics prevalent in analytic philosophy to the pressing question of ethics of harm. These approaches framed moral puzzles around the requirements of economic reasoning, often with the help of hypothetical thought experiments which were deliberately detached in their suppositions from real-world human contexts so as to afford parsimony and clarity. Moral reasoning came to be about practical applications and decision-making under conditions of imperfect knowledge and constraint. A classic reference point for this is the work of the Oxford philosopher Derek Parfit. Parfit's own work on practical ethics was inspired by ‘the game theoretic buzz’ that his young economist colleagues in Oxford had created in the 1970s, and he ‘integrated it into [his] philosophy’ (Edmonds, 2023: 172).
And although Parfit draws on the prisoner's dilemma and game theory directly only to explain the rational limitations of self-interest, his variant of analytic philosophy closely mirrors the structures and priorities of the economic decision theory and expected utility reasoning 4 in vogue at the time. His aim was to ‘generate a general theory about how we should act that would cover all cases’ (Edmonds, 2023: 174). This search for objective truths became a quest to extract principles from a set of variables and parameters, often derived from radically abstracted hypothetical case examples involving a supposition, an action and an effect. Frequently, the abstraction required to arrive at a principle is so stark that no semblance of lived humanity is left in the scenarios. A classic example of this is the trolley problem, which posits a scenario in which an out-of-control trolley barrels down a track on which, inexplicably, five people are tied down. The trolley can be diverted by switching a nearby lever, sending it onto another track on which, inexplicably, one person is tied down. The dilemma is whether it is permissible to kill the one in order to save the five. There are countless variants of this dilemma with various technological variables and artefacts as the decision-enabler (a footbridge, a trap door, a loop, a lazy Susan and so on), each iteration becoming more abstract and detached from human contexts, as though the messy plurality of human experience got in the way of smooth models of abstract economic reasoning (Schwarz, 2024b).
Francis Kamm, a fellow traveller of Parfit's in this analytic pursuit, reportedly said of Parfit: ‘It seems to me that [Parfit's] highly impartialist view of ethics might seem to some as a way to get rid of people’ (Edmonds, 2023: 189). Like the escalation strategists and wargamers of the time, Parfit was ‘a philosopher who wanted to impose reason and order on morality, to iron out wrinkles’ (Edmonds, 2023: 194). This is so much more easily done with objects than with humans and their plural experiences, needs and uncertain dynamics. Parfit's views have been, and remain, contested, but they have nonetheless inspired a host of young, technologically minded philosophers, whose views have been taken up, in a Parfitian manner, in particular in the discussion on AI and existential risk, under the mantle of effective altruism and longtermism. 5 Parfit's practical ethics has also inspired revisionist just war theories pertinent to new technologies of warfare, which, at times, have fully embraced probabilistic reasoning to ascertain ‘who should die’ (see e.g. Yitzhak Benbaji in Strawser et al., 2017: 13–58). Thinking about humans and violence in purely abstract form leaves an absurd blind spot about all those facts of human life that matter. It ignores, as Kwame Anthony Appiah points out, the important ‘intimate relation between social practice and values’ (2010: 198). And with this, it very likely loses touch with what it means to inflict violence in the real world.
Although we have many disagreements with Michael Walzer's discussion of just war, we do think he was right when he noted that nuclear weapons ‘explode the theory of just war’, as they are ‘simply not encompassable within the familiar moral world’ (Walzer, 2006: 282). And deterrence theory fails to acknowledge that invoking ‘the spectre of destruction’ and death, although not the same as killing itself, is nonetheless intricately entwined with the act of killing – otherwise it would not work. It is, as Walzer notes, ‘in the nature of that closeness that the moral problem lies’ (2006: 270).
We can see how the fetishisation of strategic ‘rationality’ has constrained thinking about the purpose of nuclear weapons into a sanitised consequentialist calculation, under which deterrence is presented as ‘necessary’ and ‘unavoidable’. But rather than doubling down on the limitations that the just war tradition had hitherto offered, or acknowledging that the use of the nuclear bomb against civilians constituted an utmost moral failure, and that threatening to do so again perpetuates this moral failure, the question ‘how can a nation live with its conscience’ (Bennett quoted in Walzer, 2006: 270) was answered with a radical pivot: to imagine the world and all humans in it as objects that can be rationally and objectively calculated. Today, in an era of sharply accelerating global instability, and against a backdrop of rising hostilities between nuclear-armed nations, most of which have set their eyes on building out their AI-enabled and autonomous weapon systems, we are stuck with a hopelessly limited way to reason ethically about the worst of all possibilities: the failure of deterrence, the death of thousands, or millions – megadeath – in pursuit of … winning? What does winning mean when the human has disappeared?
The abstraction is not accidental, but precisely the point. Abstraction decouples empathy from action. It also serves to render the ‘rational’ so fantastically incomprehensible that surely only mathematically savvy experts, technologists and analytic philosopher kings can be trusted with actioning the calculation. And this matters for the possibility of restraint. The view of deterrence as necessity rests on a particular nuclear ontology that forecloses other alternatives (Bourne, 2016; Ritchie, 2022). Failing to contend with different nuclear ontologies has led to an often-unacknowledged agonism between those who accept the existence of nuclear weapons as an instrument in strategic thinking, and those who do not (Considine, 2017, 2022). Yet more importantly, it is bound up with a preoccupation with nuclear weapons over human beings and human life. But making deterrence out to be a rational necessity – or, worse, a moral mandate – is, as Günther Anders might argue, a sham argument. We have been left with conventional just war theory joined at the hip with deterrence thinking, leaving the human experience out of the equation entirely.
Moral relations do not happen ex nihilo. As humans, we understand and are able to judge the specificities of human relations and relationality in a range of social contexts in a way that technological artefacts are simply not able to. Without an understanding of human experience, there is no meaning. Or to put it another way, our morality, and thus our moral decision-making, is anchored in our history of human social practices, relations and values. It is this condition that makes us not just actors, but moral actors, always acting in relation to others. By routing this fundamental condition through an abstraction engine and computational capture, the meaning on which moral action rests becomes hollowed out.
How might one have imagined otherwise? The existentialist thinkers of the immediate post-World War II era grappled with mass killing, genocide and the way in which human beings had brought about the means of their own destruction via scientific progress (Van Munster, 2016, 2023; Van Munster and Sylvest, 2019, 2021). The existentialist concern for lived experience constitutes an altogether different starting point for ethical reasoning from that outlined above. The central wager here is that if we want moral or ethical reasoning to connect in a meaningful way with the actual world of finite lives, ethics must be rooted in, rather than alienated from, the subjective experience of war. As such, we must grasp for a ‘heartfelt truth’, as Cian O’Driscoll (2023) calls it, a truth that cannot be recovered from bloodless Cartesian assumptions but only from human experience itself. In order to give meaning to our moral reasoning about ethics, human experience must be foregrounded in all its facets.
Anders repeatedly sought to raise the bar for a re-imagination of how to think morally about violence and war in a world where the physical annihilation of humans had become possible. In his open letter to Klaus Eichmann, Adolf Eichmann's son, he meticulously explained how ‘the Monstrous’ may become possible. The monstrous is made possible through widespread complicity. ‘The monstrous’, Anders explains, has three crucial aspects: (a) it describes the institutional, dispassionate and conveyor-belt-like extermination of human beings; (b) in order to facilitate this, it requires complacent and compliant executors of these acts – not merely one Eichmann, but many; and (c) it requires a consent to remain ignorant, a consent to not-want-to-know – ‘a million passive Eichmanns’ (1988: 19–20, authors’ translation). Only once we fully reckon with the shocking, shameful and striking plurality of human experiences of mass violence can we adequately reckon ethically with the monstrosity of the cyber-nuclear nexus, in which we become increasingly enrolled in passive complicity. For Anders (1988: 24) it is clear that the more technologised we become, the more we objectify ourselves, the more the monstrous grows, casting a shadow that darkens our world.
Sisyphus in the cyber-nuclear world
A key impetus to be drawn from the existentialist philosophers concerns the question of choice and, relatedly, of responsibility. The ‘choice’ that is recovered here has nothing to do with Cartesian rationality, ‘action’ or ‘free will’, but centres on an awareness of the conditions of human existence itself. In the Myth of Sisyphus, Camus (2018) presents a character punished in the underworld for his crimes. As punishment, Sisyphus is tasked with pushing a boulder to the top of a hill, only to find that every time he reaches the top, the boulder rolls down and he is forced to recommence – an image that has come to incarnate ‘the Absurd’. The world has no intrinsic meaning, nor do our lives as such have meaning, yet we go on – this starting point was common to the existentialists.
‘There is but one truly serious philosophical problem, and that is suicide’ opens The Myth of Sisyphus (Camus, 2018: 3). Camus rejects suicide on the grounds that killing oneself in response to the innate meaninglessness of life does nothing to confront the Absurd. Instead, to do so would only amplify absurdity itself. He goes on to consider the notion of ‘philosophical suicide’, which he understands as a leap of faith that in the end amounts to capitulation, surrendering to a belief that one does not actually hold. We might imagine pretending that God, in fact, exists or submitting to one or another set of ethical precepts, by way of finding a more peaceful – albeit restricted – life, wherein we would inevitably suffer a loss of freedom. For Camus, the answer to the conundrum of seeking meaning in a world in which meaning is wholly absent can only be the third option: accepting the condition of the Absurd itself. Embracing the world and life itself as is. In all its mundanity, in its incongruity, in its very human-ness. It is in the revelation, discovery and acceptance of the world precisely as absurd that we can find contentment; this is not the abandonment of hope but the realisation that we are free to make what we will of our lives and take responsibility for it.
There is another line of argument one might draw from Camus’ account of Sisyphus’ story that emanates from the notion of control. Understanding and embracing the Absurd entails an understanding and embracing of the uncertain – that which is impossible to control. This direct challenge to the notion of controllability has particular relevance to nuclear weapons. Camus’ critique of control also paves the way for a relational ethics, an ethics built not on foundational claims but on recognition of the relations that exist between (all) human beings, simply by virtue of being human. Like Anders, Marc Crépon critiques the passive or indirect complicity of societies, individuals and institutions in systemic violence and war – which he dubs le consentement meurtrier, ‘murderous consent’ (2019). Crépon elucidates how violence is tolerated and enabled in modern democracies, despite their professed democratic values. In this context, it is unsurprising that nuclear-armed states normalise the possibility of annihilation, framing it as a rational security strategy rather than the ethical catastrophe that it is. This ethical catastrophe begins with the failure to recognise that we, as human beings, are inextricably bound to one another and that this must be the point to which all ethical thinking returns (Holmqvist, 2013, 2024).
Drawing on Camus (alongside Anders, Butler and Freud), Crépon holds that reckoning with individuals’ and (democratic) societies’ tacit consent, blindness to or complicity in the threat of nuclear violence is pivotal to developing an ethical critique. In his bid to offer such a critique, Crépon takes from his reading of Camus the creative potential in a world void of intrinsic meaning (as meaning is not given, it has to be created). Meaning in this sense can be created, but only through relationships or connections with others, for which Crépon summons several of Camus’ terms: ‘solidarity’, ‘mutual complicity’ or ‘fidelity to the human condition’ (Crépon, 2019: 38). Fifteen years after the publication of The Myth of Sisyphus, Camus appeared concerned by the darker, defeatist interpretations of his work: ‘although The Myth of Sisyphus poses mortal problems, it sums itself up as a lucid invitation to live and create, in the midst of the desert’ (2018: preface). Creation, creativity and thereby hope for Camus reside precisely in the acceptance of the unattainability of control – not the other way around, as deterrence thinkers would have it.
Despite the existentialists’ animated engagement with the bomb, their message went largely unheeded (Bousquet, 2024; Van Munster, 2016, 2023). Instead, the trend towards the encroaching technification of the human has continued apace, by which the human-qua-human recedes into the logic of technological rationality and artificiality. Perhaps this disappearance is precisely the philosophical suicide Camus indicates. We might understand this with another of Anders’ reflections on the atomic bomb. In his text Endzeit und Zeitenende, he muses: Nothing would be more shortsighted than to consider the possibility of our extinction as an accidental by-product of some specific technological devices, for example, atomic weapons. Rather, the potential for our liquidation is the very principle which we provide all our devices with. What we aim to do is to produce products that do not need our presence or assistance, and could function without us without complaint – that means devices through which we make ourselves superfluous, through which we liquidate ourselves. (Anders, 2003: 198; authors’ translation)
Coda: The end of imagination?
‘The end of imagination’ (Roy, 1998). Such was the title of the response penned by Arundhati Roy, writer and public intellectual, to the Indian government's 1998 nuclear weapons tests in the Rajasthan Desert. Although this was not the country's first demonstration of nuclear weapon capability, it was a watershed moment affirming India's standing as a major nuclear power capable of building fission and thermonuclear weapons with yields up to 200 kilotons. For Roy, the show of force was not simply a scandalous event (it led to the United Nations (UN) Security Council imposing sanctions on India); it was of greater political, ethical and aesthetic significance. Nuclear weapons, in Roy's understanding, constitute an affront to our very ability to think. ‘Our Comprehension of Horror Department is hopelessly obsolete’, she laments (Roy, 1998: 11). Jacques Derrida similarly wrote about total annihilation and apocalyptic destruction, calling the ‘remainderless destruction’ of nuclear war a fantasy, a phantasma (Derrida et al., 1984). The end of imagination in this sense entails a closing down of the future, whereby a future without nuclear weapons is rendered unimaginable. In Benoît Pelopidas’ words, a ‘nuclear eternity’ is created. 6
What Roy deplores the most is the way in which an end of imagination shuts down the possibility that things could be otherwise. This, as we shall see, is pivotal to why imagination matters, and why it is so central to the possibility of challenging the stagnancy of deterrence thinking and the polluted world of ever-modernised weapons of mass destruction, merging the everyday and ubiquitous presence of artificial intelligence with the capacity to blow the world to pieces. ‘My world has died’, writes Roy; a world that was flawed and unviable but ‘worthy of love’ for the specific reason that it ‘offered humanity a choice’ (1999: 15). India's nuclear tests, the manner in which they were conducted and the euphoria with which they were received signalled to Roy the termination of other possibilities – the foreclosing of any real, functioning possibility or option. As such, Roy's argument might on the face of it seem to echo the familiar argument that as the weapon was first invented, its ‘uninvention’ became impossible (Bourne, 2016). That is, even if all nuclear weapons were to vanish off the face of the earth, the scientific knowledge of how they could, at any given moment, be recreated would mean that they would never, in any meaningful sense, be gone (Bousquet, 2024: 15).
The end of imagination, in Roy's rendition, however, is not a statement about whether or not nuclear weapons will last into eternity, or whether they can be uninvented; it is a statement about how, when we cease to be able to imagine otherwise, we deprive ourselves of the possibility of political choice. Roy's framing of imagination as a prerequisite for choice is a key existentialist theme, most obviously echoing the early writings of Jean-Paul Sartre, and in particular The Imaginary (1940/2004).
The Imaginary was foundational to the development of Sartre's oeuvre, and highly influential on the existentialist movement more broadly. Sartre's study of imagination had its origins in psychology, but a psychology overlain with phenomenology. Imagination, to Sartre, is free from sensory constraints and intrinsically linked to human creativity and freedom; it is a phenomenological and existential prerequisite for ‘choice’ to even exist. ‘Imagination’, he wrote, ‘is the whole of consciousness as it realizes its freedom’ (Sartre, 1940/2004: 186). Human perception and imagination can be seen as constrained or paralysed by the magnitude of destruction, in the difficulty of imagining annihilation. Conversely, could a recourse to human imagination hold promise for thinking differently about (the risk of) annihilation, and therefore also for fighting against it? 7 Art and literature can be mined to release the creative capacity of the unconscious mind – not just to imagine but to continuously expand the horizons of our imagination (Van Munster, 2023). To do this, we cannot confine, let alone reduce, our creative endeavours to the strictures of the currently dominant digital technological pursuits. In the age of AI, creativity risks becoming entirely absorbed by the recombinant logics of data processing, our perception routed through digital screens and interfaces, our environment and others always mediated through systems.
Theologian Judith Wolfe encourages us to take seriously the role of human imagination in ordinary perception and orientation, in encounters with art. […] Philosophers and psychologists have long argued that perception is irreducibly imaginative, in the sense that to perceive intelligibly is, in part, to integrate sensory data into forms or wholes that are not simply given. The ability to do this […] is central to how humans apprehend and orient themselves in the world. (2024)
If we recognise that perception and imagination are so closely related (as phenomenologists such as Maurice Merleau-Ponty would argue), is there then promise in developing a different understanding of the things themselves? 8
Disrupting complacency was key to the existentialists’ endeavour, many of them arguing for an ‘awakening’ whereby man would come to terms with the absence of innate meaning in life, and respond thereto with a heightened sense of awareness, responsibility and decisiveness (Bousquet, 2024). Despite their notoriety during the 1950s and 1960s, the existentialist call for a broader political awakening went largely unheard; rather, impasse and political apathy are only further entrenched today, as we find ourselves in an era of nuclear resurgence, putting an exclamation mark on Anders’ observation that we seem to be unable to imagine the upshots of our ability to produce.
Anders tells a cautionary tale about the dangers of letting our capacity to produce outstrip our capacity to imagine, and the risk that we are left unable to take responsibility for what we have done/produced (be it nuclear weapons or AI). But he thought that perhaps not all is lost, yet: ‘the crucial moral task today is the development of moral imagination’ (Anders, 2010: 271). Fundamentally, imagination is required for us to take responsibility: ‘we have to try to widen our horizon of responsibility until it equals that horizon within which we can destroy everybody and be destroyed by everybody’ (Anders, 1962: 495; authors’ emphasis). Can we imagine the human future otherwise? Surely we can, if we exercise our capacity to think otherwise, and extricate ourselves from the psychopathological, technologically fortified illusion of control to which we still cling. Becoming more and more enmeshed in an AI-determined universe will do the opposite.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship and/or publication of this article: Elke Schwarz has co-authored this article with support from a Leverhulme Trust Research Fellowship.
