Abstract
Because of their specific knowledge, scientists are well positioned to identify environmental threats to humankind, sound the alarm, and propose and comment, at least on a general level, on potential responses. However, many policy makers and scientists believe that scientists should have no more to say about public issues than anyone else and that science can only tell us how the world is, not how it ought to be. Although there are deep differences between science and policy, the line between policy-relevant and policy-prescriptive science is under continual negotiation, and there is no uniquely “objective” way of characterizing facts. In the authors’ view, scientists should generally refrain from making recommendations in areas far from their expertise and from making categorical policy declarations, but when their expertise is relevant, scientists should not be excluded from the policy process, whether by external forces or by self-censorship. Scientists’ input and influence have played a key role in the past and remain essential in shaping responses to important policy questions such as what to do about anthropogenic climate change.
Consider the case of the late F. Sherwood Rowland, who, along with fellow chemist Mario J. Molina, first recognized the threat that chlorofluorocarbons (CFCs) posed to stratospheric ozone and potentially to life on Earth. (Rowland and Molina later won the 1995 Nobel Prize in Chemistry for their work, along with Paul J. Crutzen.) Some of Rowland’s colleagues criticized him for publicly stating that CFCs should be phased out. Although he did not advocate a specific policy instrument, these colleagues felt that by calling for a policy goal Rowland was illicitly stepping out of the world of science into the world of policy, in which he did not belong.
Imagine, however, that Rowland and Molina had published their research in a peer-reviewed journal and that, like most scientific work, it had then been largely ignored. Decades later, dermatologists and oncologists would have begun to notice an unexplained increase in rates of skin cancer. Epidemiologists would have analyzed the available data and concluded that there was in fact an epidemic of skin cancers around the globe, especially severe in Australia, southern Chile, and among white Africans. Meanwhile, plant pathologists would have noticed increased ultraviolet damage in agricultural crops; veterinarians would have noted increased rates of cataracts in farm animals. Scientists would then have searched for an explanation for this strange association of human, animal, and plant pathology. Eventually, someone would have come across Rowland’s work, connected the dots, and understood what was happening. Programs would then have been quickly put in place to measure stratospheric ozone, demonstrating that the ozone layer had been massively depleted. But by that point it would have been too late to do anything about it.
This thought experiment, while counterfactual, is not fantastic. It is essentially what happened with asbestos and tobacco, and it could easily have been the case with chlorofluorocarbons as well. Rowland and his colleagues, including those who criticized him, were in a special position that equipped them to be “sentinels” for the rest of society (Oreskes, 2013). There was no one else with the specific knowledge to understand the threat and to see the potential harms so vividly. By virtue of their epistemic proximity to the problem, these scientists could see it, explain it, communicate its urgency, and propose and comment on responses to it. Because of their familiarity with the problem and what was at stake, they were specially situated to sound the alarm and had a unique role to play in the discussion that followed. Environmental scientists today are in a similar position.
One planet, two worlds
In recent years, scientists have increasingly become sentinels, alerting the world to problems—such as stratospheric ozone depletion, anthropogenic climate change, and biodiversity loss—that threaten humans as well as other forms of life with which we share the planet. These threats are rich with social, political, economic, and moral dimensions. Yet they were first identified by natural scientists, and understanding and responding to them requires robust scientific understanding of their causes, character, and extent. Scientists, it would seem, have an obvious role in discussing both the problems and their solutions. It is thus not surprising that people such as James Hansen, E. O. Wilson, and Sherwood Rowland became minor celebrities by speaking out on these and other environmental issues.
Scientists who do speak out on such questions often come in for serious criticism. Some of this has to do with professional jealousies or fundamental disagreements about politics. But it is also because there is a widespread belief that scientists should have no more to say about public issues than anyone else. Scientists can tell us what may happen under various scenarios, but it is the job of someone else—policy makers, government officials, informed citizens, or whomever—to determine how we ought to respond. Hansen, Wilson, and Rowland have no more to say about this than anyone else. Science can tell us how the world is, not how it ought to be.
There is a lot to be said for this view. There are deep differences between the education and experience of scientists and policy makers. Scientists have sometimes held forth about policy matters in naïve and counterproductive ways, and the cultural differences between scientists and policy makers are often painfully on display on such occasions. Science is centered on understanding the diversity of nature; policy is focused on the singleness of action. Science points to trends, identifies thresholds, and establishes links between variables leading to probabilistic beliefs based on the accumulation of evidence. The political systems within which policy making is embedded act on the basis of up-or-down votes, always with an eye to political and legal liability. Seen in this way, science and policy seem to inhabit two different worlds.
Many scientists have internalized this “two worlds” view and bowed out of the public arena altogether, fearing to trespass into territory that seems to belong to others, and sensitive to the abuse that some scientists have taken for their involvement in public affairs. In some cases, scientists even hesitate to explain the implications of their work in terms that ordinary people can understand, seeing any such account as a necessarily “subjective” departure from the “objective” language of science. Other scientists, angered and frustrated by the thought that they should stand by while a misinformed and apathetic public marches off a cliff, lurch in the direction of imperial science. They make categorical declarations about reducing greenhouse gas emissions, protecting biodiversity, and various other matters. They claim that their policy declarations are dictated by science, much as others claim that their preferred policies are dictated by God.
Science is not value-free
The “two worlds” view is centered on a thought articulated by the 18th-century Scottish philosopher David Hume. Hume pointed out that an “ought” cannot be deductively derived from a set of premises that include only “is”s (Hume, 1740). True enough, but, contrary to how it has often been understood, this observation does not go very far toward establishing the “two worlds” view.
Facts and values are entangled with each other in all sorts of ways. In some cases, values are so widely shared that simply discovering some facts seems to determine what we ought to do. The discovery of a vaccine for the Ebola virus, for example, would lead quite directly to the conclusion that we ought to vaccinate people at risk, since almost all of us are committed to the value that it is good to reduce the incidence of this horrific disease. Similarly, most people—even those who do not believe that humans play a significant role in global warming—value a stable climate on our planet. Almost everyone would value a magic technology that would guarantee climate stability, including people who do not think such a technology is currently needed.
Moreover, there is no neutral, uniquely “objective” way of characterizing facts. While the decision to study some particular phenomenon or effect suggests that it is more important than phenomena or effects that we do not choose to study, the cautious language of science (for example, “it would appear that” or “it is likely that”) is often heard as downplaying the significance of an effect once it has been discovered. There is also a substantive bias in science against making some mistakes rather than others. In particular, there is an almost universal horror of false positives. Thus, standard statistical methodologies permit an investigator to miss real effects in order to avoid claiming an effect that does not actually exist. This is reminiscent of the bias in the criminal justice system in favor of allowing the guilty to walk rather than punishing an innocent person. But hypotheses are not people, and acting on this bias means that we may, for example, err on the side of missing a contaminated water source rather than risk claiming that a source is contaminated when it is not. This leads to the sort of cases familiar in the environmental justice literature in which a community believes that it is experiencing a cancer cluster due to a toxic insult, but science fails to show the effect. In reality, this may have as much to do with our statistical practices (for example, requiring high confidence levels for validity) as it does with whether the effect is actually present. Yet the failure to scientifically demonstrate an effect is often interpreted as showing that the effect does not exist.
Science is not value-free, and we should not pretend that it is. This does not mean that scientists should say whatever they want, whenever they want; nor does it mean that value-free science is never a goal worth striving for—in many cases it is. What it does mean is that, when it comes to having value commitments, science is closer to policy than the “two worlds” view allows.
Assessing assessments
In recent years, scientists have increasingly been called upon to work with their colleagues on assessments that produce “policy-relevant but not policy-prescriptive” science, as the mission of the Intergovernmental Panel on Climate Change is characterized on its website and in its statement of principles and procedures (IPCC, 2010). In an ongoing research study of such scientific assessments, we have found that participating scientists typically believe in the existence and importance of a clear and distinct boundary between science and policy (Oppenheimer et al., forthcoming). In formal interviews and informal conversations, many scientists involved in assessments stress the importance of preventing the “infiltration” of political considerations into their technical reports. Yet, at the same time, there are considerable differences of opinion among scientists as to where the posited boundary lies. Some issues that may look to an outsider like policy matters may be considered by scientists to be amenable to technical analysis. For example, the most recent Intergovernmental Panel on Climate Change report analyzes some of the ethical values implicated in various responses to climate change (IPCC, 2014). Conversely, matters that some scientists wish to avoid as “political” might seem to a layperson to be highly technical (for example, quantitative estimates about sea level rise or damages from extreme events when there are high levels of uncertainty).
In our view, scientists should generally refrain from making recommendations in areas far from their expertise, but they should not refrain from commenting on areas within their proximate expertise. In these domains, scientists, by virtue of their knowledge and familiarity with an issue, are among those qualified to judge—and sometimes the most qualified to judge—what actions may be called for.
However, there is often tension between maintaining scientific authority and mobilizing it to address complex and divisive societal problems. What to a great extent gives science its authority to sit in judgment on the epistemological practices of other cultures and institutions is its self-presentation as a value-free, institutional “truth detector.” Since science cannot fully live up to this ideal, the power of scientists to address problems such as climate change is limited and contested.
This helps to explain what in any case should be obvious: Policy problems cannot be solved without leadership from the policy-making community and permission from citizens. Too often policy makers want to use science for cover. If “science made me do it,” then I as a policy maker don’t have to take responsibility for a highly contested, value-laden decision. But no environmental problem—polluted air, polluted water, ozone depletion—has ever been successfully addressed without leadership from policy makers. It is also incumbent on citizens not to punish policy makers when they take risks to address such difficult problems, even if they have doubts or concerns about the particular policies that are implemented.
Despite the tensions that arise at the nexus of science and policy, scientists have been remarkably influential—and in some cases essential—in shaping responses to important policy questions. Recognition of the importance of scientists to policy making goes back in the United States at least to the creation of the National Academy of Sciences in 1863, and traces an interesting and variegated history in the century and a half that followed. While scientists need to recognize the limits of what they can contribute and the extent to which their fellow citizens will and should defer to them, they must be involved in the policy process now more than ever. Attempts to exclude them—whether by external forces or by self-censorship—deprive society of an important resource.
Acknowledgements
The authors thank Jessica O’Reilly, Keynyn Brysse, Milena Wazeck, and Matthew Shindell for contributing to the research discussed in this paper.
Funding
This paper draws on research done as part of the project “Assessing Assessments,” funded by the US National Science Foundation and to be presented in greater detail in a forthcoming book by the same title.
