Abstract
People perceive different types of radiation risks in very different ways. Surveys of the general public in the United States and elsewhere have consistently shown that people perceive nuclear power and nuclear waste as having high risk, but perceive other sources of radioactivity—such as medical x-rays and naturally occurring radon gas—as posing much lower risk. The majority of radiation experts see things quite differently, rating nuclear power and nuclear waste as less risky than the general public does, and perceiving medical x-rays and radon as more risky than generally believed. This perception gap demonstrates that acceptance of risk is conditioned by a number of factors, such as trust in the managers of the technology and appreciation for the direct personal benefits of the technology. Risk-communication strategies that help people place the risks of nuclear power and nuclear waste in perspective by comparing them with other risks can help reduce fears of radiation. Education about radiation can also affect risk perceptions and attitudes. Although differences between the perceptions of laypersons and those of experts cannot be attributed in any simple way to degree of knowledge, it is clear that better information about radiation and its consequences is needed. There is a particularly urgent need to develop plans and materials for communicating with the public in the event of a radiological disaster. The fear, anger, and distrust following the accident at Fukushima show that communication is still a major problem.
Consider the risk of dying as a consequence of nuclear power. That’s what members of the League of Women Voters in Eugene, Oregon—and their spouses—were asked to do as part of a landmark study more than 30 years ago (Fischhoff et al., 1978). They rated not only the risk of nuclear power but also that of 29 other technologies or activities, and they judged nuclear power to be riskier than anything else on the list, including handguns and mountain climbing. And yet they rated x-rays, an ostensibly similar technology that also relies on radioactivity, as being much less catastrophic and dreaded than nuclear power. Why?
Researchers have struggled to answer that question ever since. Other early studies revealed that risk-assessment experts judge nuclear power to be much lower in risk than laypersons believe it to be, and experts rate x-rays as riskier than laypersons do. In one study (Slovic et al., 1979a), three groups of laypersons perceived nuclear power as having very high risk—ranking it 1st, 1st, and 8th out of 30 hazards—whereas a group of risk-assessment experts gave a mean risk rating that put nuclear power 20th in the hierarchy. Conversely, the three groups of laypersons found medical x-rays to be low in risk—ranking them 22nd, 17th, and 24th—whereas the experts placed them 7th. More recent studies found essentially the same results (Hardeman et al., 2004; MacGregor et al., 2002b). This work shows that the public’s perceptions of radiation sources differ markedly from those of experts.
Representative surveys of the general public in the United States, Sweden, Canada, Norway, Belgium, and Hungary have consistently shown that people view nuclear power and nuclear waste as extremely high in risk and low in benefit to society, whereas medical x-rays are seen as very beneficial and low in risk (Slovic, 2000). For example, national surveys of more than 1,500 Canadians conducted in 1992 and 2004 found that about one-third of the participants perceived nuclear power as posing a high health risk (Krewski et al., 2006). By comparison, only 12 percent of the participants perceived medical x-rays as having high risk in 1992—a figure that dropped to 6 percent by 2004. Surveys in the United States sponsored by the National Security and Nuclear Policies Project did, however, find that perceived benefits of nuclear energy modestly exceeded perceived risks from 2002 through 2010—prior to the Fukushima accident (Jenkins-Smith, 2011).
Perceptions of risk associated with nuclear waste tend to be even more negative than perceptions of nuclear power (Slovic et al., 1991a). When asked to state whatever images or associations came to mind when they heard the words “underground nuclear waste storage facility,” a representative sample of American respondents could hardly think of anything that was not frightening or problematic. And, as with nuclear power, these feelings aren’t unique to a specific culture. In a 2002 survey of a representative sample of the Belgian population, about two-thirds of the participants considered the risk of nuclear waste to be high or very high, while less than 20 percent considered medical radiography to be highly risky (Hardeman et al., 2004). When the same questions were put to a group of radiation protection professionals at a conference, they perceived nuclear waste as being much less dangerous than the general public did. The disposal of nuclear waste is a technology that most experts believe can be managed with low risk, although they may disagree on the best way to do so. The discrepancy between this view and the images in people’s minds is striking.
A bad image
Before the Three Mile Island accident, people expected nuclear power accidents to lead to disasters of immense proportions (Slovic et al., 1979b). Scenarios of reactor accidents resembled scenarios of the aftermath of nuclear war in people’s minds. After the Three Mile Island event, studies found even more extreme images of disaster.
Negative imagery spiked in much the same way after Fukushima. When Americans were asked to state “the first phrase or image that comes to your mind when you think of nuclear power,” the words “disaster” and “bad” were far more frequent in 2011, after Fukushima, than in an earlier survey in 2005 (Leiserowitz et al., 2011). Support for building more nuclear plants also declined around the world after Fukushima. For example, in Germany the government pledged to phase out nuclear power.
Powerful negative imagery associated with radiation and nuclear energy is not new. Physicist and historian Spencer R. Weart (1988) has observed that modern thinking about radioactivity employs beliefs and symbols that have been associated for centuries with the concept of transmutation—the passage through destruction to rebirth. In the early decades of the twentieth century, transmutation images became centered on radioactivity, which was associated with “uncanny rays that brought hideous death or miraculous new life; with mad scientists and their ambiguous monsters; with cosmic secrets of life and death; … and with weapons great enough to destroy the world” (Weart, 1988: 421).
But this concept of transmutation has a duality that is hardly evident in the imagery associated with nuclear power and nuclear waste today. Why has the evil of destruction overwhelmed the promise of rebirth in the minds of so many people? The answer undoubtedly involves the bombing of Hiroshima and Nagasaki, which linked horrifying images to the power of the atom. The sprouting of nuclear power in the aftermath of the atomic bombing led one scholar to observe: “Nuclear energy was conceived in secrecy, born in war, and first revealed to the world in horror. No matter how much proponents try to separate the peaceful from the weapons atom, the connection is firmly embedded in the minds of the public” (Smith, 1988: 62).
In Japan, it took a post-war publicity campaign by the US government, in collaboration with Japanese officials and businessmen, to sell the Japanese public on the peaceful use of atomic energy. The accident at the Fukushima Daiichi Nuclear Power Station has again shattered that dream of rebirth.
In the early 1990s, sociologist Kai Erikson provided insights into the special quality of nuclear fear by drawing attention to the broad, emerging theme of toxicity—both radioactive and chemical—that characterized a “whole new species of trouble” associated with modern technological disasters (1990, 1991). Erikson described the exceptionally dreaded quality of technological accidents that expose people to radiation and chemicals in ways that “contaminate rather than merely damage; … pollute, befoul, and taint rather than just create wreckage; … penetrate human tissue indirectly rather than wound the surface by assaults of a more straightforward kind” (Erikson, 1990: 120). Unlike natural disasters, these accidents are unbounded. Unlike conventional disaster plots, they have no end. “Invisible contaminants remain a part of the surroundings—absorbed into the grain of the landscape [and] the tissues of the body” (Erikson, 1990: 121).
In light of these findings, it is not surprising that radiation controls in industry are associated with some of the highest costs per year of life saved. For example, one analysis of more than 500 life-saving interventions showed that some radiation emission standards for nuclear power plants cost $100 million or more (in 1995 dollars) for every year of life saved (Tengs et al., 1995). By comparison, a law mandating seat-belt use was a bargain at only $69 for every year of life saved.
Interestingly, the deep fears and anxieties associated with radiation do not seem to extend to naturally occurring radiation. This was evident in a survey of residents in the Reading Prong area of New Jersey, a region characterized by very high radon levels in many homes, who were basically apathetic about the risk (Sandman et al., 1987). Few had bothered to monitor their homes for radon. Most believed that, although radon might be a problem for their neighbors, their own homes did not pose any threat.
It is instructive to compare perceptions of risk and benefit for various radiation technologies with those of chemical technologies. Concerns about chemical risks rose dramatically during the latter half of the past century. Chemicals, in general, and agricultural and industrial chemicals, in particular, are seen as very high risk and very low benefit. However, just as medical uses of radiation are perceived much more favorably than nuclear power, prescription drugs—a very potent and toxic category of chemicals to which we are often exposed at high doses—are perceived more favorably than other chemicals (Krewski et al., 2006; Slovic et al., 1991b).
The logic of acceptable risk
What does the risk-perception research described above tell us about the acceptance of risk from radiation and nuclear energy facilities? Although many technical experts have labeled public reactions as irrational or phobic, such accusations are clearly unjustified. There is a logic to public perceptions and behaviors that has become apparent through research. For example, the acceptance afforded x-rays and prescription drugs suggests that acceptance of risk is conditioned by perceptions of direct benefits and by trust in the managers of the technologies—in this case, the medical and pharmaceutical professions. The managers of nuclear power and non-medical chemical technologies are clearly less trusted, and the benefits of these technologies are not highly appreciated; hence, their risks are less acceptable.
The lack of concern about radon seeping from the ground beneath homes appears to result from the fact that it is of natural origin and occurs in a comfortable, familiar setting, with no one to blame. Moreover, it can never be totally eliminated. Opposition to the burial of radioactive soil, on the other hand, likely derives from the fact that this hazard is imported and technological in origin; industry and the state are blameworthy; the risk is involuntary; it has a visible focus (the barrels or the landfill); and it can be totally eliminated by preventing the deposition in the landfill (Sandman et al., 1987).
At present, public acceptance of risk from nuclear facilities is best characterized as fragile. The National Security and Nuclear Policies surveys (Jenkins-Smith, 2011) show that, over time, in the face of growing concern over the use of fossil fuels and the absence of major accidents, nuclear energy can achieve modest acceptance. But in the aftermath of Fukushima, rebuilding trust and confidence will likely be slow and difficult (Ramana, 2011).
The impacts of perceptions: Stigma
Whether or not one agrees with public risk perceptions, they form a reality that cannot be ignored in risk management. The stigma associated with radiation contamination, which can have substantial socioeconomic impacts, is a prime example of this (Flynn et al., 2001; Pidgeon et al., 2003).
The word “stigma” was used by the ancient Greeks to refer to bodily marks or brands that were designed to expose infamy or disgrace—to show, for example, that the bearer was a slave or criminal. In the modern world, stigma is associated with products, places, and technologies perceived to pose abnormal risks.
A dramatic example of stigmatization involving radiation occurred in September 1987 in Goiania, Brazil, where two men searching for scrap metal sawed open a capsule containing 28 grams of cesium chloride while dismantling a cancer-therapy device in an abandoned clinic. Children and workers nearby were attracted to the glowing material and began playing with it. Before the danger was realized, several hundred people became contaminated and four people eventually died from acute radiation poisoning. Publicity about the incident led to stigmatization of the region and its residents (Petterson, 1988), resulting in substantial economic impacts. For example, the prices of products manufactured in Goiania dropped by 40 percent after the first news reports and remained depressed for a period of 30 to 45 days, despite the fact that no items were ever found to have been contaminated.
More recently, analysts have attempted to model and quantify the substantial economic costs that might result from detonation of a radiological dispersal device in the heart of a major city such as Los Angeles (Giesecke et al., 2011). Even after cleanup, perception-induced stigmatization of the location and its products and services is expected to lead to economic shocks to the regional economy many times greater than the direct costs associated with the physical destructiveness of the event.
The Fukushima prefecture of Japan is already experiencing stigmatization as a result of the accident at its nuclear plant (Mackinnon, 2011). Consumers avoid food and other products from Fukushima, even those that show no signs of contamination, and few tourists now visit the region. Some schoolchildren from the area have reportedly been bullied by classmates, and Japanese atomic bomb survivors worry that the former residents of Fukushima will suffer the same stigma that they have long faced.
Putting risks in perspective
Given the importance of risk perceptions and the extraordinary divergence between the perceptions of experts and laypersons, it is not surprising that there has been a burgeoning interest in risk communication. Much has been written about the need to inform and to educate people about risk and the difficulties of doing so (National Research Council, 1989; Ropeik, 2010).
One useful principle that has emerged is that comparisons are more meaningful than absolute numbers or probabilities, especially when those absolute values are quite small. Some researchers have argued that, to understand whether people respond adequately to radiation risks, the risks should be compared with “some of the other risks of life” (Sowby, 1965). Others have ranked hazards in terms of their reduction in life expectancy, demonstrating that hazards involving radiation from nuclear power generate far more concern than hazards that cause much greater premature death, such as coal production and use (Cohen and Lee, 1979).
Although risk comparisons may bolster intuition, they do not educate as effectively as their proponents have assumed. A statement such as “the annual risk from living near a nuclear power plant is equivalent to the risk of riding an extra three miles in an automobile” fails to consider how these two technologies differ on the many qualities that people believe to be important. As a result, such statements are likely to produce anger rather than enlightenment (Huyskens, 1994).
A better approach may be to compare radiation exposures with other radiation exposures. For example, when radioactive elements from Chernobyl reached the United States, the Inter-Agency Task Force chaired by Environmental Protection Agency Administrator Lee Thomas used comparisons—reported by media outlets—to point out that the exposures were small, relative to natural background radiation, and comparable to the exposure from a chest x-ray (Associated Press, 1986). In hindsight, Thomas might have added one more comparison: the dose received in a traditional diagnostic thyroid scan made with radioactive isotopes, which would have been the best apples-to-apples comparison. Although much less familiar to the public than a chest x-ray, a thyroid scan would have been recognized as a medical example.
Radiation risks can be presented in a number of useful and defensible ways. The more closely one can match the exposure of concern to medical or background exposures, the better the chance of avoiding an apples-to-oranges comparison that can befuddle and anger people.
Particles of information
Just as radiation scientists can conduct experiments to examine the effects of exposing a living cell to radioactive particles, communication scientists can conduct experiments to study the effects of exposing the human mind to “particles of information.” In one study, college students were given a pie chart depicting a person’s degree of exposure to radiation from eight sources (MacGregor et al., 2002a). The students found radon to be a larger source of exposure than they had expected, and industrial sources and nuclear medicine to be smaller exposures than expected. The students had different ideas about the meaning of “natural background radiation,” but after being tutored about diverse sources of radiation exposure and their relative contributions to a personal radiation “budget,” the students perceived less risk of radiation-induced harm in the form of cancer and birth defects. However, they still believed that human-caused exposures were much more likely to cause harm than natural background exposures.
This experiment was part of a larger pilot study that tutored participants in the basics of radiation science to test whether greater knowledge might lessen the gap between expert and lay perceptions. The study produced mixed results (MacGregor, 2002), but pre- and post-tutorial testing showed that laypersons could significantly increase their knowledge of radiation science. Increased knowledge led to increased concerns about radiation exposure from x-rays and other medical applications, air travel, cosmic radiation, natural background radiation, and radon. Risk perception decreased for hospital waste, nuclear waste, and nuclear power plants. Attitudes toward the adequacy of radiation risk-management policies were slightly more favorable after exposure to the tutorial. More studies of this nature should be conducted to determine the effects of education on radiation risk perceptions and attitudes.
Conclusion
Perhaps the most important generalization from research on radiation risks is that public perception and acceptance are determined by the context in which radiation is used—and the very different reactions to different uses provide insight into the nature of perception and the determinants of acceptable risk.
A second generalization is that, in every context except the use of nuclear weapons, public perception of radiation risk appears to differ from the majority of expert assessments. In some cases, members of the public see far greater risks associated with radiation technology than do technical experts; in others, the public is much less concerned than experts believe they should be. Although these differences cannot be attributed in any simple way to degree of knowledge, it is clear that better information and education about radiation and its consequences is needed.
There is a particularly urgent need to develop public communications plans and materials in the event of a radiological disaster. The Chernobyl accident revealed huge problems in risk communication in Europe (Drottz and Sjöberg, 1990; Gadomska, 1994; Otway et al., 1988; Wynne, 1989). Officials peppered their information messages with different terms (roentgens, curies, becquerels, rads, rems, sieverts, grays), which were explained poorly or not at all. Public anxiety was high and not always related to the actual threat. Public officials were at odds with one another and inconsistent in their evaluations of risks from consuming various kinds of food or milk. Comparisons with exposure to natural radiation from familiar activities were not well-received, because the media and the public did not trust the sources of such information. Other comparisons (for example, with background cancer rates) fared even worse. Many of the statements made by officials to calm the public confused and angered them instead.
The accident at Fukushima showed that communication is still a major problem and that public fear, anger, and distrust still exist. To be sure, communication has improved since the Chernobyl accident in 1986, but the response after Fukushima in 2011 indicates there is still a long way to go. The good news is that enough is known about radiation and risk communication to enable experts to design effective messages. The challenge is that communication strategies must be treated as a priority—in terms of both time and money—to be effective. Messages should be created and tested in advance, before the next radiological emergency occurs.
Funding
No specific grant from any funding agency in the public, commercial, or not-for-profit sectors supported the preparation of this article.
