Abstract
In an international confrontation, nuclear weapons can provide leverage even if they are not used. The leverage comes from threats of use or from an increased risk of use. When the confrontation is successfully resolved without nuclear use, exploiting this latent, potential use of nuclear weapons appears to be cost free. But in fact, some fraction of the consequences of a nuclear war should be included in the cost–benefit calculation each time the tactic is employed.
The crisis in Ukraine has been described as a possible rekindling of the Cold War. Let’s hope not. One of the fundamental characteristics of the Cold War was keen awareness that every conflict between the Soviet Union and the United States was, ultimately, a latent nuclear confrontation. But sober calculation reveals that, whether in the current Ukrainian crisis or some future face-off, nuclear saber rattling could come with a surprisingly high price tag, and nuclear weapons are best left out of the equation.
The winnowing of the US nuclear arsenal, through displacement by advanced conventional alternatives and through negotiations with Russia, combined with looming decisions about how to replace the fantastically expensive legacy weapons of the Cold War, has led to a re-examination of the fundamental purposes and utility of nuclear weapons. Any discussion of nuclear weapons begins with deterrence, but deterrence alone can be accomplished with quite modest nuclear forces. Nuclear advocates postulate two other advantages of nuclear weapons, which they fear may be lost with continuing reductions in numbers: first, some believe that nuclear weapons offer real and ongoing military utility; and second, many believe that nuclear weapons have been responsible for the absence of big-power wars since 1945.
In his recent book, The Second Nuclear Age, Yale University political scientist and business professor Paul Bracken lists eight lessons from the “first nuclear age” (otherwise known as the Cold War). The very first one is “You Don’t Have to Fire a Nuclear Weapon to Use It,” a lesson that supposedly carries forward to today (Bracken, 2012). Georgetown University international relations expert Matthew Kroenig claims, based on a statistical analysis of Cold War crises, that there is a clear connection between a nation’s nuclear might and its advantage in international contests (Kroenig, 2013). And this use has been accomplished, at least since August 1945, without actually exploding any bombs; just having them is enough to sway the outcome of a dispute.
How does this virtual effect work, and what are the implications? It cannot be that nuclear weapons bestow some magical power by their mere existence. For nuclear weapons, sitting unused, to have real influence on national leaders’ decision making there must be some—perhaps very small, but still finite—probability that they will be used. The simplest model is that, if both sides in a confrontation can foresee each move and countermove and believe that this chain ends with nuclear use, then the weaker nuclear power, understanding this, will back down.
This simple picture does not account for why stronger nuclear powers cannot consistently and routinely impose their will or, indeed, why many times—in the Falklands, Sinai, Afghanistan, Vietnam, and elsewhere—non-nuclear powers have challenged nuclear-armed nations. A common explanation has been that nuclear weapons are too big a club, that their assured use is implausible for anything less than threats to a nation’s very existence. That is, nuclear weapons are simply so powerful a tool that, ironically, they become, in practice, unusable.
In response to the paralysis caused by this awesome but all-or-nothing nature of nuclear weapons, a state could dial back the power of its nuclear weapons to a more usable level by threatening, not the certain use of nuclear weapons, but an increase in the risk of nuclear use. Fine-tuning of the risk could potentially make nuclear weapons relevant to a wide range of common, even minor, confrontations.
This is not, of course, a new idea; a half-century ago, in his seminal book The Strategy of Conflict, Thomas Schelling (1960) described “the threat that leaves something to chance.” Very simply, even if nuclear response is not guaranteed but carries only some finite probability, a challenger unwilling to roll the dice and accept that risk will be deterred.
This calculated, bluffing approach to potential nuclear attack seems like a great deal, even cost free, when the bluff works. Like winning the war without firing a shot. Those promoting this tactic of risk typically overlook, however, the cost imposed on the side doing the bluffing, even though the mathematics of game theory is very clear on such costs. I call this failure to consider the hidden cost the Insurer’s Fallacy. It’s an approach that relies for success—indeed, bases the future of civilization—on gambler’s luck.
The costs of threatening war
Imagine that I go to, for example, California, where there is a major earthquake every, say, 20 years, and I set up an insurance company that sells earthquake insurance. For the first several years, people buy insurance, send me premiums, and there is no earthquake. It seems like free money, and I spend it all as fast as it comes in. This practice is obviously not sustainable and will be a financial disaster when the earthquake finally happens. What an actual insurance company does is calculate costs based on “expectation values,” or averages. That is, when figuring annual profit and loss, the company would compare its income (premiums) to the “costs” imposed that year by a five percent expectation of an earthquake, even though there is no such thing as five percent of an earthquake. This analogy suggests that, when exploiting the risk of nuclear war, the cost–benefit calculation should include as a cost some slice of a future nuclear war, to properly account for the case when the bluff fails and triggers an actual nuclear war.
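The insurer’s expectation-value bookkeeping can be sketched in a few lines of Python. The dollar figures here are hypothetical, chosen only to illustrate the accounting; the five percent figure follows from the one-quake-per-20-years assumption above:

```python
# Expectation-value accounting for the hypothetical earthquake insurer.
# All dollar amounts are illustrative assumptions, not real data.
annual_quake_probability = 0.05      # one major quake per ~20 years
payout_if_quake = 1_000_000_000      # hypothetical total claims, in dollars
annual_premium_income = 80_000_000   # hypothetical premiums collected

# The expected annual cost is the payout weighted by its probability,
# booked every year -- even in years with no earthquake.
expected_annual_cost = annual_quake_probability * payout_if_quake

expected_annual_profit = annual_premium_income - expected_annual_cost
print(expected_annual_profit)  # 30000000.0: solvent on average
```

Booking the weighted cost every year, rather than only in quake years, is exactly the discipline the article argues is missing from nuclear risk-taking.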
Insurance company executives are not hailed as financial geniuses and awarded huge bonuses because an earthquake fails to occur during a particular fiscal year. But national leaders who exploit nuclear risk reap such praise when they happen to be lucky at nuclear bluffing, as they so far have been. Bracken, for example, celebrates President Truman’s innovation and skill during the Berlin Crisis of 1948, the first time nuclear risk manipulation seems to have won a confrontation. As he writes, “Truman wasn’t threatening to bomb the Soviet Union. He was threatening to go to higher levels of risk” (Bracken, 2012: 54). Perhaps this was a hollow bluff on Truman’s part, but Bracken believes it was a willful surrender of some control over events to allow an increase in the probability that something might go terribly wrong. Elsewhere, when discussing the basing of US nuclear bombs under control of allies, Bracken (2012: 63) writes, “If the Soviet Union attacked [Europe], or if it rocked the boat too hard, well, all bets were off about what might happen next.” This tactic seems brilliant in retrospect—because it worked—but would seem like a very bad idea indeed if it had not. And luck—not just wiliness, statesmanship, calculation, and sound judgment—is explicitly a part of the tactic, something that nuclear theorists are happy to discuss in the abstractions of game theory but often neglect when considering the cost of the tactic in specific cases.
The nuclear dice have been thrown a few times since 1945 with no nuclear war, so how lucky has the world been? The irony is that we can never really know, even if our luck eventually runs out: Because we are explicitly exploiting random chance, we would need many decades in which a nuclear war did not occur, and a few in which one did, before we could start calculating the odds. But a little math gives us some insight into where the upper boundaries of the probabilities might lie. In the same year, 1960, that The Strategy of Conflict appeared, a peculiar little book by British mathematician Lewis Richardson, Statistics of Deadly Quarrels, was published posthumously. Richardson compiled and analyzed centuries of incidents of deadly “quarrels” around the world, ranging from murders to major wars. He found that, over century-long periods, the outbreak of war was statistically random, consistent with a certain constant probability per year that a war would occur, an idea that most people resist but history supports. The converse is that avoiding war involves some degree of luck.
With just one historical “test case” and no “control group,” we can never know what the actual probabilities of nuclear war were in the 20th century. We could postulate that war was extremely likely but the world just happened to have an extremely unlikely amount of good luck. But that is to postulate a rare event. For a more plausible and likely boundary case, we could postulate that the world has had just average luck; that is, nuclear war and peace have been equally likely since the beginning of the nuclear age. This assumed equal probability, plus a little mathematics, implies that over the 69 years since 1945 there could have been a one percent probability of nuclear war per year. To look at this from another angle: A one percent annual probability of war, compounded over 69 years, leads to an overall 50/50 chance of nuclear war or peace. Of course, perhaps the probability per year was much lower, so that, for example, there was one chance in a million of a nuclear war over the past 69 years.
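The compounding claim is easy to verify; a minimal sketch, using the article’s illustrative one percent figure rather than any actual estimate of nuclear risk:

```python
# If nuclear war has an independent 1 percent probability each year,
# the chance of avoiding it for all 69 years since 1945 is (0.99)^69.
p_war_per_year = 0.01
years = 69

p_peace_throughout = (1 - p_war_per_year) ** years
print(f"{p_peace_throughout:.2f}")  # about 0.50: no better than a coin flip
```

Even a seemingly small annual probability, compounded over decades, erodes to even odds, which is the arithmetic behind the “average luck” boundary case.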
The historical record does not really allow a confident choice between these alternatives. The “long peace” of the Cold War might be, as many believe, credited to nuclear weapons (Gaddis, 1987). But history is also consistent with a one percent annual likelihood of nuclear war, a level of risk most people would consider intolerable, indeed insane, combined with no better luck than flipping a coin and getting heads instead of tails. Both the nuclear explanation for the long peace and any quantitative estimate of nuclear risk are extremely wobbly propositions (Wilson, 2013). We should proceed with caution and humility.
The roll of nuclear dice
Whenever a nuclear power “uses” nuclear weapons by increasing risk in a confrontation—and then wins the standoff—the tactic appears to be a benefit without cost. Bluff and bluster are free compared with exploding nuclear bombs. But the risk is real and, as with the company selling earthquake insurance, there is a cost that must be accounted for even when the feared event does not take place. A nuclear power should enter into the cost–benefit ledger some percentage of the cost of a nuclear war every time it exploits nuclear risk, even in those cases where it turns out to be lucky. If this cost were made explicit, even a risk of one in a hundred per event would strike many as a bad deal.
Advocates of a tough nuclear policy will respond that, whatever the costs of such challenges, the choice is not entirely up to the United States or any other particular nuclear power. In the second nuclear age, with an expanding cast of nuclear players, nuclear challenges can be forced on even the best-intentioned nuclear power. This may be true, but that does not mean that sober nuclear states should make the situation worse by themselves playing cavalierly with nuclear games of chance. Indeed, the established nuclear powers cannot tout the power of nuclear leverage without India and Pakistan and other potential future nuclear states learning all the most dangerous lessons.
An extreme improbability of nuclear war could be an illusion, and the world does not really want to do the experiments needed to find out for sure the actual probability. But in the end, playing games of chance with nuclear weapons cannot pay off. The first step is making explicit the ignored cost of latent nuclear leverage. Every nuclear nation’s priority should be to forgo “using” nuclear weapons through brinkmanship and to steer nuclear arsenals, doctrine, and policy toward reducing both the probability of nuclear use and the utility of nuclear threats. No nuclear power should be looking for opportunities to more effectively exploit nuclear weapons by rolling dice. Snake eyes can—and do—come up.
Footnotes
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Author biography
An adjunct professor at George Washington University’s Elliott School of International Affairs, USA,
