Abstract
The threat of global climate change has pushed governments around the world to consider alternative energy sources, including nuclear energy. As interest in nuclear power increases, serious discussions of safety must resume before moving forward. There is no better time than now, the twenty-fifth anniversary of Chernobyl, to revisit the causes of the worst civilian nuclear disaster in history in an attempt to prevent future accidents. The author reviews initial reports that accused operators of violating instructions, but also looks at evidence that surfaced in the early 1990s, which suggested that the reactor design, combined with a lack of information sharing, was to blame for the explosion. While it is imperative to address human error, design flaws, and communication failures, the author argues that the organizational characteristics of the Soviet nuclear power sector deserve more attention from scholars and policy makers. Chernobyl was not a disaster waiting to happen, the author writes, but occurred despite ongoing efforts to improve technical design, operator training, and inter-organizational communication. Chernobyl, in this interpretation, illuminates that what the nuclear industry considers “safe enough” is always linked to specific historical periods, cultural settings, and institutional arrangements, and that even the best efforts to ensure nuclear safety may not be good enough.
After more than two decades of detailed study, scientists of all stripes have begun to better understand the health and environmental impacts of Chernobyl. Despite these findings, there remains much to be investigated, clarified, and even publicized. 1
Equally significant as research on the disaster’s health and environmental impacts, however, is continued inquiry into its underlying causes. 2
The organizational aspect of the Chernobyl disaster particularly deserves more attention from scholars and policy makers. 3 Human error, problematic design features, and communication failures—all of which contributed to Chernobyl becoming the worst accident in the civilian nuclear industry—are likely to persist despite the industry’s attempts to design safer and more proliferation-resistant reactors, as well as its efforts to increase system automation to minimize human error. But the institutions that operate, regulate, and evaluate a nuclear industry are just as critical. International experience shows that there are different ways to run a nuclear industry, but we are still mostly in the dark when it comes to understanding what works, what doesn’t, and why. 4
Background
It is important to contextualize the Soviet Union’s nuclear industry to understand the causes that led to Chernobyl. On April 26, 1986, when reactor number four at the Chernobyl nuclear power plant exploded, the logic of Cold War competition prevailed: Nuclear affairs equaled state secrets. Setbacks or accidents related to nuclear issues were routinely classified, even when they involved public health; only successful accomplishments were announced. True, just one year earlier, new Soviet leader Mikhail Gorbachev had come to power, promising to restructure the country’s ramshackle economy and establish a new culture of openness and transparency. The Russian words for these reforms, perestroika (restructuring) and glasnost (openness), would soon become familiar around the world, but in April 1986 they had barely begun to take hold. 5
Fast forward to August of the same year: The Soviet Union dispatched a delegation headed by academician Valerii Legasov to a post-accident review meeting at the International Atomic Energy Agency (IAEA) in Vienna. 6 The delegates’ statements revealed more details about the Soviet nuclear industry than had ever been disclosed before, a candor that duly impressed the international community. At the same time, the delegates delivered a clear verdict: The plant managers had failed to enforce proper discipline among their staff, and the workers on duty in the control room had violated operating instructions. Based on this testimony, the International Nuclear Safety Advisory Group (INSAG), a panel assisting the director general of the IAEA, concluded that the reactor operators were to blame for the accident. 7 The human-error explanation was reinforced a year later, when six individuals, including the former plant director, were sentenced to long prison terms. 8
At the same time, however, Soviet nuclear specialists knew (and foreign reactor experts suspected) that there was more to the accident than an “unauthorized experiment” and “operator error.” With Gorbachev’s glasnost gaining momentum, a fuller account gradually surfaced: In 1991, a new commission headed by Nikolai Shteinberg 9 and a working group of Soviet experts 10 re-examined the accident and concluded that flaws in the reactor’s design, not operator error alone, had led to the explosion.
Designing reactors
The RBMK (Russian acronym for “high power, channel-type reactor”) was the brainchild of Savelii Feinberg, a physicist at the country’s leading nuclear research center, the Kurchatov Institute (Goncharov, 2001; Karpan, 2005; Sidorenko, 2003). Nikolai Dollezhal, a decorated engineer who had devised the plutonium-producing reactor for the first Soviet atomic bomb, developed the technical design. The RBMK was based on a long tradition of light-water-cooled, graphite-moderated reactors. Engineers had started modifying the basic military model as early as the 1950s. For example, the Siberian Nuclear Power Plant, a secret facility launched near Tomsk in 1958, was a classic dual-use design: First and foremost, the plant generated weapons-grade plutonium; electricity was merely a by-product. But a few graphite-moderated reactors produced electricity only: In 1954, the Obninsk Nuclear Power Plant started operation near Moscow, becoming the world’s first nuclear power plant, and in 1964, the first industrial-scale graphite-moderated reactor went online at Beloiarsk Nuclear Power Station in the Urals. These reactors were quite small (a symbolic 5 megawatts at Obninsk, and 100 megawatts at Beloiarsk), and their contribution to the country’s massive electricity generation plans remained negligible. But when Dollezhal’s engineers proposed a 1,000-megawatt giant, economic planners listened. After a brief review period, they selected the RBMK design for standardization and mass production, alongside the VVER, a pressurized light-water design that was used in submarines, icebreakers, and at a power plant in southern Russia (Novo-Voronezh). 11
Among the advantages of the RBMK was its relatively simple assembly—in contrast to the VVER, which had an expensive pressure vessel and involved sophisticated factory manufacturing. Furthermore, the operating experience acquired earlier at graphite-moderated reactors (the RBMK’s military predecessors) guaranteed the availability of an initial cohort of skilled operators and a well-established supply industry. The first RBMK at Sosnovy Bor, near Leningrad, went critical in September 1973—and, up until the Chernobyl disaster, 14 RBMKs followed suit, operating near Kursk, Smolensk, Chernobyl, and in Lithuania. Built after the first Soviet nuclear power plant safety regulations took effect, reactor number four at Chernobyl reached criticality in 1983; it represented an advanced version of the original RBMK design and featured several important safety improvements. 12
One of the most prominent Soviet nuclear scientists, Anatolii Aleksandrov, endorsed the RBMK from the start. Aleksandrov had been a close collaborator of Igor Kurchatov: the “father” of the Soviet atomic bomb, a prominent public scientist, and director of the country’s leading nuclear research institution. Aleksandrov succeeded Kurchatov as director of the Institute of Atomic Energy in 1960, and in 1975 he was elected president of the Soviet Academy of Sciences. He also chaired the Interdepartmental Technical Council (MVTS), the single most powerful decision-making body for Soviet nuclear energy policy. 13 Aleksandrov was later criticized for simultaneously holding too many leadership positions at the time of the Chernobyl disaster. 14 More importantly, he was accused of ignoring complaints about operational problems with RBMKs while he headed the MVTS. 15
After Chernobyl, critics of the RBMK-type reactor specifically condemned the design of its control rods, which was intended to improve the reactor’s economic performance. Evidence showed that this particular design made it much more difficult to handle the reactor at low power levels—that is, at critical moments right after start-up and especially before shut-down. 16 When the control rods were extracted too far, their re-insertion into the core could, under certain conditions, cause an initial surge in power before shutting down the reactor. 17 In April 1986, this design feature (in conjunction with atypical core conditions and operating errors) may have turned a normal “scram”—a rapid shut-down of the reactor—into a catastrophe (Diatlov, 2003; Karpan, 2005; Sidorenko, 2003).
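To make this counterintuitive behavior concrete, the toy model below sketches how a rod whose graphite tip enters the core ahead of its absorber section can briefly add reactivity before removing it. This is a minimal illustrative sketch, not a validated reactor simulation: the geometry, the reactivity worths, the insertion speed, and the prompt-jump power approximation are all simplifying assumptions, not actual RBMK parameters.

```python
# Toy illustration of the "surge before shutdown" effect described above.
# ASSUMPTIONS: all numbers are illustrative, not measured RBMK data; power
# follows the prompt-jump approximation P/P0 = beta / (beta - rho), which
# is only meaningful for reactivity rho well below beta.

CORE_HEIGHT = 7.0         # m, roughly the height of an RBMK core
TIP_LENGTH = 1.0          # m, assumed length of the graphite displacer tip
ROD_SPEED = 0.4           # m/s, assumed rod insertion speed
GRAPHITE_WORTH = +0.0005  # assumed reactivity added once the tip is in
ABSORBER_WORTH = -0.0050  # assumed worth of the fully inserted absorber
BETA = 0.005              # effective delayed-neutron fraction (typical order)

def reactivity(depth: float) -> float:
    """Net reactivity from one rod inserted to `depth` meters: the graphite
    tip contributes first (and positively); the absorber follows behind it."""
    tip = GRAPHITE_WORTH * min(depth, TIP_LENGTH) / TIP_LENGTH
    absorber = ABSORBER_WORTH * max(depth - TIP_LENGTH, 0.0) / (CORE_HEIGHT - TIP_LENGTH)
    return tip + absorber

if __name__ == "__main__":
    for tenths in range(0, 180, 25):
        t = tenths / 10.0                        # seconds since scram signal
        depth = min(ROD_SPEED * t, CORE_HEIGHT)  # current rod position
        rho = reactivity(depth)
        power = BETA / (BETA - rho)              # prompt-jump approximation
        print(f"t={t:5.1f}s  depth={depth:4.1f}m  rho={rho:+.4f}  P/P0={power:4.2f}")
```

Even in this crude form, the output reproduces the qualitative pattern: relative power rises above 1.0 while only the graphite tips are in the core, then falls as the absorbers follow, mirroring the “brake pedal that accelerates the car” analogy quoted in note 17.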
Organizing knowledge
None of these problems was completely new, however, nor were they unknown to Soviet nuclear scientists. The physicists at the Kurchatov Institute, who provided scientific support for the RBMK, and the engineers at the Research and Development Institute of Power Engineering (NIKIET), who designed it, understood the nature of graphite-moderated reactors well, and at least some of them also knew about the potential danger of improperly operating the RBMK’s control rods (Asmolov et al., 2004; Sidorenko, 2002). A brief flash in reactivity had already been observed in the first RBMK units at Leningrad, where it triggered an incident in 1975 in which one channel was destroyed and about 30 others (out of 211) were damaged. 18 So why did the designers not act upon this knowledge and modify the design, or at least alert the operators? During the internal post-Chernobyl investigation, Soviet reactor designers acknowledged that they had considered it virtually impossible for an unstable core to ever combine with operating violations in a way that would destroy the reactor—that is, even in the highly unlikely event that the technology failed, there were still instructions to follow that would rectify any hazardous situation. Unimpressed, the Central Committee of the Communist Party, the country’s de facto highest political authority, fired several leading scientists, engineers, and managers. 19 But the assumption that such an accident would simply not occur was to some extent reasonable: It was grounded in the sense of duty, discipline, and secrecy that the reactor designers took for granted.
Reactor designers worked for the secret Ministry of Medium Machine Building (Minsredmash), the agency in charge of the Soviet nuclear weapons complex, while the nuclear power plants themselves answered to the civilian Ministry of Energy and Electrification.
When
Socio-technical systems
The growing number of specialized organizations in the Soviet nuclear industry was a sign of its professionalization, but this process sometimes differed from that in the West. A nuclear power plant combines human agency and technical operations to form a complex, entangled socio-technical system. Actual operating experience and specific political, economic, and cultural contexts determine how the parts of this system are weighted and what kinds of interactions are allowed. In the United States, for example, recurring incidents of human error in nuclear plants led to increased automation. 21 In the Soviet Union, by contrast, encounters with highly unreliable instrumentation at the first nuclear installations led to far-reaching improvements in the training of nuclear experts and a deliberate increase in the reliance on human expertise, experience, and intuition in the control room. When nuclear plants multiplied and the practice of diligent on-the-job training became difficult to maintain, new organizational mechanisms were created to ensure that specialists from other plants, senior colleagues, and experts from research, design, and regulatory institutes assisted reactor operators, especially during difficult transitory periods (such as start-up and scheduled shut-downs). In addition, operating procedures were codified in substantial instruction manuals, and operators had to pass rigorous re-certification exams, the results of which directly affected their salaries. In other words, Soviet nuclear plant employees were given responsible positions, but they also received incentives, support, and monitoring.
When problems with the control rods first came to light, the RBMK’s designers reacted by modifying operating instructions for control room staff—that is, they revised regulations and insisted that instructions be followed to the letter. They never explained to the operators the physical reasons behind the revised rules, nor did they give them access to information about earlier incidents. 22
From today’s standpoint it seems obvious that if human operators were key to the safe operation of the RBMK, they needed to have access to all information, including that pertaining to problems and accidents. Alas, the Cold War made tight information control seem not only necessary but also reasonable—especially where nuclear materials were involved. Fear of espionage and sabotage influenced the decisions about how much knowledge, and what kind of knowledge, reactor designers were willing to share with reactor operators. Profound differences in organizational culture, like those between military and civilian institutions, made the sharing of such knowledge all the more difficult. 24
Communicating organizations
In a short foreword to INSAG-7 (1992), Hans Blix, then the IAEA’s director general, bemoaned flaws in the Soviet nuclear industry’s communication, training, and organization: “the lack of feedback of operating experience and the inadequacy of communication between designers, engineers, manufacturers, constructors, operators and regulators …, coupled with a lack of clear lines of responsibility, were critical factors in the events leading up to the Chernobyl accident. … the lessons from the Three Mile Island accident had not been acted upon in the USSR: in particular, the importance of systematic evaluation of operating experience; the need to strengthen the on-site technical and management capability, including improved operator training; and the importance of the man-machine interface.” 25
Blix’s account represents a tendency to blame organizational problems, including those of operator training, industrial management, and inter-agency cooperation, on the character of the Soviet system. During and after the disintegration of the Soviet Union, Western observers readily attributed such failures to the pathologies of socialism itself: central planning, rigid hierarchies, and a culture of secrecy. On this reading, a well-run Western nuclear industry had little to fear from Chernobyl’s lessons.
In the light of a closer historical analysis of the Soviet nuclear industry, however, a much more disconcerting interpretation becomes possible: Soviet nuclear specialists had in fact set up a tremendously sophisticated system, and they worked tirelessly to improve technology, training, and management. They did evaluate feedback, they did react to accidents (including Three Mile Island), and they did strive to refine the inter-organizational system of communication and cooperation. Yet they were not always successful: They wrote safety regulations and updated reactor designs, but the implementation of such changes often took longer than was warranted. They trained highly qualified nuclear experts for all levels and branches of their nuclear industry, but sometimes cooperation among these experts was less than ideal. Finally, they did pay great attention to the transfer of sensitive knowledge among the agencies involved with nuclear energy: Some of their methods of addressing these challenges were successful, others less so. In sum, the Soviet nuclear complex, far from being an inert apparatus, was a dynamic system run by intelligent people who were committed to science and their homeland. And yet, their system failed to prevent Chernobyl.
Conclusion
How reactors are designed and what we consider their safe operation are always embedded in and shaped by specific historical, cultural, and institutional contexts. Which design to adopt, how to train nuclear specialists, whether to increase automation, and whom to put in charge of managing nuclear power plants—these decisions are all based on experience. As a nuclear industry grows, the complex interplay of national preferences and international conventions results in specific technical choices, operating conventions, and safety norms. Soviet reactor designers developed the RBMK because it made sense at the time and in the context of the Soviet system. They had a group of qualified operators, a capable supply industry, and experience operating graphite-moderated reactors. By 1986, the Soviet industry had successfully launched 15 RBMKs, with continuous improvements to their safety systems. Reactor number four at Chernobyl was neither old nor fundamentally flawed; for all intents and purposes, it was “safe enough.” The operators at Chernobyl were trained to control a reactor that regularly required manual intervention. They realized that even the highest level of automation could not relieve them of making the occasional decision under uncertainty (Perin, 1998, 2005). They understood the rules, and they knew that sometimes they might have to bend these rules. 26
Read this way, the Chernobyl disaster was indeed Soviet through and through. The choice of reactor design, the training of operators, and the organization of the nuclear industry—together with the character of the planned economy and the political system—enabled this catastrophe. By the same token, however, Chernobyl is only one example of how technical choices, political and economic pressures, and organizational idiosyncrasies may combine to produce disaster. It is a chilling reminder that our notions of rationality, safety, and reliability may not automatically protect us from unanticipated failure, especially when dealing with high-risk technologies and complex socio-technical systems.
Footnotes
Acknowledgements
This essay is based on research supported by the US National Science Foundation Grant No. SES-0240807.
1
2
Updates on the underlying causes for the disaster are typically published in Russian, Ukrainian, or Belorussian and are available only to insiders, as the number of hard-copy editions is very limited, or in obscure places on the Internet (e.g., Diatlov, 2003; Dmitriev [n.d.]; Dollezhal’, 2002; Fain, 1998; Karpan, 2005; Polivanov [n.d.]; Sidorenko, 2002, 2003). A series of online forums archive an ever-growing collection of original documents, personal photographs, and press reports relating to Chernobyl (one prominent example is Pripyat.com, which also features an English version, http://pripyat.com/en/; the official Chernobyl nuclear power plant site has a truncated English version).
3
Organization theory offers vastly different strategies to prevent accidents: Normal Accident Theory advocates reducing complexity, loosening tightly coupled systems, and abandoning technologies where this proves impossible (e.g., Perrow, 1999); theorists of High Reliability Organizations, by contrast, recommend learning from organizations that rarely fail (e.g., LaPorte et al., 1989). Scholars in Science and Technology Studies (STS) advocate transparency, public engagement, flexible structures of governance, and even organized distrust, but they also caution that predictions rarely do justice to the tremendously complex interactions between science, technology, and social order (e.g., Wynne, 1996; Jasanoff, 1994; Ezrahi, 1990).
4
For example, Nelkin and Pollak, 1981; Parr, 2006; Jasper, 1990. Diane Vaughan’s work on organizations has been most influential in this area (e.g., Vaughan, 1996, 1999; see also Hutter & Power, 2005).
5
6
Legasov was the first deputy director of the Kurchatov Institute of Atomic Energy, and his suicide on the second anniversary of the accident provoked enduring speculation about his earlier testimony. The report to the IAEA was published in the industry’s flagship journal Atomnaia Energiia (Informatsiia ob avarii na Chernobyl’skoi AES i ee posledstviiakh, podgotovlennaia dlia MAGATE [Information on the accident at the Chernobyl nuclear power plant and its consequences, prepared for the IAEA], 1986).
7
This report, INSAG-1, was published in September, immediately after the meeting, and was based on working documents prepared by the USSR State Committee on the Utilization of Atomic Energy for the IAEA Meeting of Experts, Vienna, 25–29 August 1986, and on additional material presented by the Soviet experts during the meeting.
8
The semi-public proceedings ended with 10-year prison sentences for the former director, Viktor Briukhanov; the station’s former chief engineer, Nikolai Fomin, whose suicide attempt in pre-trial confinement had delayed the start of the trial by several months; and the former deputy chief engineer, Anatoly Diatlov, who had been on duty in the control room during the explosion and who was transferred to prison directly from the hospital. The other three defendants received lesser sentences (Gorbachev, 2004; Illesh and Pral’nikov, 1988; Karpan, 2005; Vozniak and Troitskii, 1993).
9
Shteinberg was appointed chief engineer following the accident. He was involved in the disaster mitigation until he was transferred to
10
At the time, VNIIAES was the country’s leading research and technology support organization for the operation of nuclear power plants. Among Abagian’s co-authors were Evgenii Adamov from NIKIET, the RBMK’s design institute, Leonid Bol’shov from IBRAE, the Nuclear Safety Institute, and Evgenii Velikhov from the Kurchatov Institute (Report by a working group of USSR experts, 1991).
11
On the development of the RBMK see, e.g., Sidorenko, 1997; Karpan, 2005; Dollezhal’, 2002. RBMKs did not generate weapons-grade plutonium, although their online refueling system would have allowed a switch to plutonium production.
12
Sidorenko, 1997; Gosatomnadzor, 1974. Improvements included upgrades to the emergency core cooling system and the accident localization and emergency power supply systems, and modifications in the facility layout (Sidorenko, 1997).
13
The MVTS was established in September 1971 under the Ministry of Medium Machine Building (Sidorenko, 2001). It represented several ministries, the State Planning Commission (Gosplan), the Academy of Sciences, research institutes, design bureaus, and various industrial enterprises.
14
Aleksandrov resigned as president of the Academy of Sciences in October 1986.
15
On August 16, 1986, in a resolution of no confidence, the Soviet government abolished the MVTS (Sidorenko, 2001).
16
INSAG-7 states: “control of the RBMK-1000 at startup … was different from and much simpler than control of the power density distribution of the non-uniformly poisoned reactor at low power [the situation on April 26, 1986]. … The operators had little or no experience of control under these circumstances” (INSAG-7, 1992: 5). See also Iadrikhinskii, 1989; Polivanov [n.d.]; Semenov, 1995.
17
One of my interviewees compared these control rods to an automobile brake pedal that accelerates the car before stopping it. For illustrations, see http://accidont.ru/rodes.html, and http://www.if.uidaho.edu/~gunner/Nuclear/LectureNotes/RBMK_Reactors_and_Chernobyl_STD.pdf (p. 3, taken from United States Nuclear Regulatory Commission, 1987: 2–32).
18
The event was immediately classified, but after Chernobyl, details came to light about this and other harbingers of disaster. See, e.g., Dmitriev [n.d.]; Borets [n.d.]; Sidorenko, 2001.
19
V Politbiuro TsK KPSS [In the Politburo of the Central Committee of the CPSU].
20
22
The idea of a database of incidents and accidents that reactor operators would have access to was shot down by NIKIET (Sidorenko, 2001).
23
Another strategy was to transfer trusted individuals to key positions in the nuclear industry, when
24
Suffice it to mention the US Navy, whose “safety culture” is often invoked as a model for the US nuclear industry—disregarding the two institutions’ very different organizational traditions.
25
INSAG-7, 1992: foreword.
26
Vaughan identifies this process as “the normalization of deviance” (Vaughan, 1996); but it is often tricky (for both actors and analysts) to identify unambiguously what behavior is “normal” and what is “deviant.”