Abstract
We explore the common attributes of political conflicts in which scientific findings have a central role, using the COVID-19 pandemic as a case study, but also drawing on long-standing conflicts over climate change and vaccinations. We analyze situations in which the systematic spread of disinformation or conspiracy theories undermines public trust in the work of scientists and prevents policy from being informed by the best available evidence. We also examine instances in which public opposition to scientifically grounded policy arises from legitimate value judgments and lived experience. We argue for the public benefit of quick identification of politically motivated science denial, and inoculation of the public against its ill effects.
When scientists discover a planet in our Milky Way that is made of diamonds (Bailes et al. 2011), public fascination and admiration are virtually assured. Who would not revel in the idea that we might spot a particularly bright sparkle in the night sky? However, when scientists discover that burning fossil fuels causes climate change, or that a lethal airborne virus is best controlled through mask wearing and social distancing, then the public and political response is much less favorable, with scientists being verbally assaulted or having their reputations impugned (Lewandowsky, Mann et al. 2016; Mann 2012).
Scientists cannot escape those politically motivated conflicts. Daniel Kahneman has recommended that scientists should scrupulously avoid the political and that if science involves a matter “that anybody in Congress is going to be offended by, then it’s political” (cited in Basken 2016). Adherence to Kahneman’s recommendation would render entire scientific fields, such as evolutionary biology and climate science, off limits. Moreover, even if scientists abstain from providing policy advice, they can become targets of conspiracy theorists who frequently “blame the messenger” for inconvenient information, as has been apparent during the COVID-19 pandemic. For example, Dr. Anthony Fauci, the director of the National Institute of Allergy and Infectious Diseases, has been subject to extensive online abuse and hate speech. 1 If political conflict cannot be avoided, scientists must manage such conflicts, and the public must understand that such conflicts can be inevitable. Fortunately, both surveys (Pew Research Center 2009) and experimental studies (Kotcher et al. 2017) have shown that scientists can, in some circumstances, advocate policies without necessarily losing credibility or the public’s trust.
In this article, we explore the common attributes of political conflicts in which scientific findings take center stage, using the COVID-19 pandemic as a case study, but also drawing on knowledge of long-standing conflicts surrounding climate change and vaccinations. A core attribute of all those conflicts is disinformation, mainly crafted by politically motivated actors, that distorts public perception of scientific evidence. Another core attribute of such conflicts in democratic societies is the public’s legitimate need to be involved in the surrounding policy debates and for dissenting voices to be heard. The fundamental question to be resolved, therefore, is how to differentiate between legitimate democratic critique of scientifically informed policies on one hand and motivated science denial on the other.
The COVID-19 “Infodemic”
The COVID-19 pandemic that upended the world in early 2020 also triggered an “infodemic” (Zarocostas 2020), that is, an overabundance of low-quality information including misinformation (i.e., information that turns out to be false), disinformation (i.e., false information that is intentionally spread to mislead people), and conspiracy theories (Enders et al. 2020; Roozenbeek et al. 2020). This infodemic, while predominantly spread online, has had adverse real-world consequences—including for innocent bystanders. For example, in the UK, the baseless claim that 5G broadband installations were responsible for the virus-borne disease led to vandalism against numerous telecommunications installations (Jolley and Paterson 2020). Research has also shown that exposure to misinformation can reduce people’s intention to get vaccinated (Loomba et al. 2021).
We cannot overlook the involvement of politicians and politically motivated actors in the dissemination of disinformation. For example, in a text analysis of thirty-eight million online documents, Evanega et al. (2020) identified then-president Trump as a major vector of misinformation on COVID-19. Trump’s dissemination of misinformation has been linked to reduced compliance with pandemic control measures, which eventually translated into higher COVID-19 infection and fatality growth rates in U.S. counties that predominantly voted for Trump in 2016 than in those that voted for Clinton (Gollwitzer et al. 2020). Similarly, far-right parties in Europe vocally opposed public health measures such as “lockdowns” and mandatory mask wearing (Wondreys and Mudde 2020), based on “anti-elite” arguments buttressed by misleading statistics. A growing body of evidence exists that vaccine hesitancy is mainly associated with the political Right rather than the Left (for a summary, see Lewandowsky and Oberauer 2021), although political extremism may be another factor that transcends the conventional Left-Right dimension. For example, anecdotal reports from the UK indicate that far-right conspiracy theories relating to COVID-19 also find traction among the radical Left (Monbiot 2021). Similarly, studies in France have found that opposition to COVID-19 vaccines is greatest among people who are aligned with extreme parties on both the Right and Left (J. K. Ward et al. 2020). Support for the generality of this pattern is provided by prepandemic research that suggests that extreme ideology, irrespective of orientation, is a predictor of conspiratorial thinking (van Prooijen, Krouwel, and Pollet 2015).
Institutional actors
Evidence exists that COVID-19 misinformation has been amplified by right-leaning media. Those media outlets have given prominence to misinformation and conspiracy theories from the early stages of the COVID-19 pandemic (Cinelli et al. 2020; Motta, Stecula, and Farhart 2020). Two recent instrumental-variable analyses have focused on the effects of Fox News (Simonov et al. 2020) and, in particular, the channel’s show by Sean Hannity, who until recently consistently downplayed the risks from the pandemic (Bursztyn et al. 2020). The studies showed that a 10 percent increase in viewing Fox News reduced the propensity to stay at home by 1.3 percent, with obvious downstream consequences for public health (cf. Hegland et al., this volume; Simonov et al. 2020). Furthermore, a one standard deviation increase in the relative viewership of the downplaying show (Sean Hannity) relative to another Fox show that did not downplay the pandemic (Tucker Carlson) was associated with a temporary increase of cases and deaths by roughly a third (Bursztyn et al. 2020).
COVID-19-related disinformation is also spread by other highly organized actors. For example, a recent analysis has suggested that the “antivax” online industry accumulates annual revenues of $35 million and that their audience of sixty-two million followers may be worth upward of a billion dollars a year for the big social media platforms (Center for Countering Digital Hate 2021). Some COVID-19 disinformers also have organizational links to similar political operatives who deny the existence or the human causes of climate change. To illustrate, one such connection involves the American Institute for Economic Research (AIER), a libertarian free-market think tank that has a history of bogus argumentation about climate change (e.g., by denying the scientific consensus) and that has recently engaged in similarly misleading argumentation about COVID-19 (B. Ward 2020). A central component of the AIER’s antiscientific activities relating to COVID-19 is their sponsorship of the “Great Barrington Declaration,” whose signatories (many of whom have no relevant scientific credentials) advocate a “herd immunity” strategy by letting the pandemic spread through the population—by avoiding lockdowns—while (ostensibly) seeking to protect those who are most vulnerable. This position has been vociferously opposed by the majority of experts (McKee and Stuckler 2020) and has been labeled “simply unethical” by the World Health Organization (Associated Press 2020).
Attempts to undermine a scientific consensus through dubious declarations or petitions are a long-standing strategy used by vested interests; the strategy was pioneered by the tobacco industry in the 1950s (Cook, Lewandowsky, and Ecker 2017) before being adopted by the fossil fuel industry (Cook et al. 2018) and now by COVID-19 disinformers. Research has found such attempts to undermine a scientific consensus to be the most persuasive disinformation strategy in a comparison of six different climate-denial messages (van der Linden et al. 2017). This is perhaps unsurprising because any appearance of scientific dissent invites the public to exploit this “false balance” to choose a more congenial (but scientifically unsupported) position (Koehler 2016).
COVID-19 and Democracy
COVID-19 has led governments to enforce policies that impaired economic activity and that limited individual liberties to an extent that has not been seen in western democracies in the past century, that is, since the outbreak of the 1918 Spanish flu. Although some of these harsh measures were likely necessary—and demonstrably successful (Flaxman et al. 2020; Haug et al. 2020)—concerns that these policies were used by some governments to trigger a slow “authoritarianization” and that they are harbingers of an “authoritarian pandemic” (Thomson and Ip 2020) should not be dismissed. Any infringement of civil liberties must be thoroughly examined before it can be justified as an unfortunate exception in the interest of public health. Scrutiny of restrictive measures is particularly difficult because, while democratic norms and practices are well established during normal times, very little guidance and few conceptualizations exist of what democratic standards are acceptable during times of crisis. Codifications, such as the U.S. Constitution, can provide only broad guardrails but cannot substitute for conventions, practices, and norms of conduct. The recent refusal by Donald Trump to concede that he lost the 2020 election, a clear departure from democratic norms, starkly highlighted the importance of convention and practice in a functioning democracy (Cuéllar 2018).
A recent empirical analysis that related pandemic severity (measured in terms of deaths from COVID-19) to infringement of democratic rights across 144 countries found no association between the two variables, which argues against a simplistic “lives-versus-democracy” trade-off (Edgell et al. 2021). Social restrictions have also negatively impacted mental health at scale (Every-Palmer et al. 2020; Serafini et al. 2020) and have disproportionately impacted women, single parents, young people, minority groups, refugees and migrants, and poor people who cannot afford to buy basic personal protective equipment (PPE) (Greenaway et al. 2020; van Barneveld et al. 2020).
Frustration with, and opposition to, social restrictions are therefore potentially legitimate grievances that deserve to be heard in democratic public discourse. Pandemics deprive people of their feelings of control and security, factors that are known to enhance the attractiveness of conspiracy theories (Lewandowsky and Cook 2020). Some people may therefore be driven toward conspiratorial rhetoric out of psychological or rhetorical needs rather than out of an intrinsic disposition for such theories (Lewandowsky 2021). Although the epistemic status of argumentation is independent of the proponent’s circumstances, those circumstances or grievances may be relevant to determining the appropriate response. The need to recognize and empathize with these grievances is amplified by the fact that the pandemic has had the most severe impact on low-wage and low-skill employees. These employees were hit in multiple ways, from wage insecurity for hourly workers to dense living conditions and the inability to escape crowded and unsafe workplaces (Kramer and Kramer 2020).
But where do we draw the boundary between politically motivated disinformation on one hand and legitimate expressions of grievances or criticism of government policy on the other? And assuming that we can identify that boundary, what are the appropriate responses by scientists and communicators?
Denial versus Legitimate Critique
In the remainder of this article, we present a sketch of this boundary viewed through two different lenses.
Scientific argumentation versus denial
The first lens relies on the notion of science denial, which arises when people reject well-established scientific propositions that are no longer debated by the relevant scientific community. The term “science denial” is well defined and established in the literature (Diethelm and McKee 2009; Hansson 2017; Lewandowsky et al. 2015; McKee and Diethelm 2010) and is frequently used in connection with the link between AIDS and HIV, climate change, evolution, and other clearly established scientific facts. In all those cases, absent new evidence, dissent from the scientifically accepted position cannot be supported by legitimate evidence and theorizing but must necessarily—that is, in virtually all instances—involve misleading or flawed argumentation.
There simply is no legitimate and coherent scientific position on climate change that does not invoke carbon emissions as a causal variable (Benestad et al. 2016). There is no legitimate scientific position that attributes AIDS to a cause other than HIV (Kalichman 2015). The definition of denial, and the misleading techniques that it employs for its (usually political) ends, transcend domains. They are summarized by the acronym “FLICC” (Cook 2020; Diethelm and McKee 2009; Schmid and Betsch 2019):
—Fake experts: Presenting unqualified or discredited spokespeople as experts. The technique was pioneered by the tobacco industry in the twentieth century, which presented people in lab coats who assured the public that smoking was harmless (Cook, Lewandowsky, and Ecker 2017; Oreskes and Conway 2010).
—Logical fallacies: Patterns of reasoning that are invalid because of their logical structure. These range from “straw man” arguments that misrepresent opponents’ positions to false dichotomies that demand a choice between two options when more options may be available or both might be viable (on fallacies more generally, see Hahn 2020).
—Impossible expectations: Demanding certainty beyond what is scientifically feasible, for example, demanding “proof” of the existence of global warming. Science, of course, rests on evidence, not proof, and no scientific result could ever meet absolute standards of proof.
—Cherry-picking: Selectively citing evidence to advance one’s point, for example, by highlighting outlying “convenient” observations (Hansson 2017; Lewandowsky, Ballard, et al. 2016) while ignoring an abundance of evidence to the contrary.
—Conspiracy theories: Attributing inconvenient evidence to a malevolent conspiracy, with the theory continually expanded to defend against challenging evidence. Conspiracy theories are an almost inevitable component of denial because they serve to explain away overwhelming scientific evidence. Examples include the accusation that the pharmaceutical industry is trying to kill people through vaccinations, or that left-wing politicians pursue a “Great Reset” agenda to change Western societies (Schmid and Betsch 2019).
Claims that rely on one or more of these techniques contribute little to a debate and should not play a direct role in determining policy. The rules of scientific evidence formation and argumentation are inescapable and cannot be discarded or side-stepped for political expediency. A cherry-picked argument against climate change or COVID-19 vaccinations is unscientific and should not prevent climate mitigation or a vaccine rollout. (The converse, however, does not follow: argumentation that survives the FLICC criteria is not guaranteed to lead to accurate conclusions. For example, logically correct arguments can lead us astray when a premise is incorrect.)
At first glance, this insistence on quality of argumentation may seem to curtail the public’s involvement in any scientifically informed debate. After all, members of the public are often nonexperts on topics and issues whose outcomes affect their lives. Consequently, people form attitudes on these topics mainly by relying on narratives, stories, feelings, or pictures rather than data and scientific analysis. People can therefore easily fall prey to the misleading techniques outlined above. The public at large is also typically unskilled in argumentation and in the evaluation and weighting of scientific evidence, which creates a further apparent barrier to involvement. We suggest that these barriers are not insurmountable.
First, scientific issues can be legitimately communicated by stories or pictures (Lewandowsky and Whitmarsh 2018). For example, whereas a picture of snowfall in New York presents no legitimate evidence against global warming (because a cherry-picked normal weather event falls within the envelope of events expected with climate change), a photo of retreating glaciers is a legitimate tool to communicate climate change (because glaciers integrate snowfall over centuries and hence their retreat captures climate change rather than weather). Insistence on quality of argumentation thus does not prevent the use of engaging and accessible communication.
Second, the insistence on sound argumentation does not prevent people’s lived experience from being relevant to debate. On the contrary, as we show next, people’s lived and reported experience and their value judgments are crucial to developing scientifically informed public policy. People’s lived experience, and the narratives emerging from it, provide us with a form of data that can constrain the design of policies.
Different lived experience
Beliefs are context dependent. Denying facts that are commonly accepted by the scientific community therefore does not necessarily reflect “irrationality” or bad faith. The rationality of a belief is established relative to an evidence base, so actors may rationally disagree when they have different evidential histories. For example, if people differ in how much trust they put in various information sources (e.g., scientists vs. their neighbor on social media), polarization may ensue even on the basis of identical information that is processed completely rationally (Cook and Lewandowsky 2016; Druckman and McGrath 2019; Jern, Chang, and Kemp 2014).
Ethnic minorities, for example, have historically been discriminated against in the health care system. Western countries, especially those with colonial histories, have also damaged people’s trust in medical treatments through their previous mistreatment of indigenous populations (Lowes and Montero 2021) and misuse of vaccination centers, for example, by the CIA in its hunt for Osama bin Laden (Reardon 2011). It is unsurprising that people would question scientific evidence communicated by the same institutions that caused them harm or deceived them in the past (Jamison, Quinn, and Freimuth 2019).
Opposing vaccinations because authorities have misused them in the past does not make that opposition scientifically sustainable. However, appreciating why evidence is mistrusted in these communities is essential to interpret beliefs and to design culturally sensitive interventions. Studies have shown that locally designed, multicomponent interventions that are sensitive to lived experience can increase vaccine uptake considerably (Crocker-Buque, Edelstein, and Mounier-Jack 2017), reflecting the need to regain trust rather than dismiss beliefs grounded in lived experience as simple denialism.
Similarly, denying the severity or even the existence of COVID-19, or denying the effectiveness of social distancing, may represent an adaptive strategy irrespective of the poor quality of argumentation (although it is less clear that denial along with other forms of “not wanting to know” could be construed as “rational”; see Krueger et al. 2021). For example, denial can enhance people’s self-efficacy by maintaining optimism about the current situation (Bénabou and Tirole 2016). Avoiding or selectively relying on information about the pandemic can be a particularly effective strategy to maintain such self-efficacy (Golman, Hagmann, and Loewenstein 2017; Hertwig and Engel 2016). In addition, research into the psychology of conspiracy beliefs has identified beneficial consequences of such beliefs for the individual, such as creating a sense of belonging through group identification. However, these potentially beneficial effects should not blind us to the fact that conspiracy beliefs are ultimately based in distorted reasoning and that their positive results for an individual can lead to gravely adverse social outcomes, including racism (Golec de Zavala and Cichocka 2011; van Prooijen 2018).
Such consequences have been observed with respect to the COVID-19 pandemic as well (Coates 2020; Pei and Mehta 2020). Denial can also be a protective mechanism to manage fear (Schimmenti, Billieux, and Starcevic 2020). Someone who lives alone, with a preexisting health condition and a limited budget, does not have many options. She would either have to isolate herself completely to avoid contracting COVID-19 or neglect the danger and go out to serve her everyday needs, in which case denial may be necessary to manage fear. Similar considerations apply to the millions of people who are forced to go to work every day in crowded buses or trains and operate in workplaces with little provision for their safety and health. Combating denial of or noncompliance with social distancing measures under those circumstances requires supportive policies rather than persuasive or coercive measures. It is only through supportive policies that life conditions that force people into a denialist or conspiracist mindset will lose their power.
In summary, flawed argumentation does not become scientifically more valid because of a person’s cultural background or lived experience. However, understanding a lived experience can serve at least two purposes. First, it can provide pointers as to why a particular person engages in (or falls for) FLICC-based flawed argumentation. Second, even if flawed, it can reveal shortcomings in the scientific process or evidence base. For example, given that non-Hispanic whites of European ancestry compose more than 90 percent of participants in clinical trials in the United States, compared to their actual share in the population of 61 percent (Ma et al. 2021; Mak et al. 2007), much medical knowledge may not apply to the full diversity of the population. To illustrate, side effects of 5-Fluorouracil, a common cancer drug, occur at higher rates in underrepresented populations. Because the clinical trials involved predominantly white participants, these side effects in racial/ethnic minority groups were initially overlooked (Yates et al. 2020). Arguments based on a person’s identification as a cultural or ethnic minority may therefore not reflect “irrationality” or bad faith. On the contrary, analysis of those arguments can provide valuable pointers to underlying issues—such as lacking representation in medical research—that can be addressed by suitable policies or remedial research.
Recommendations
When science has an impact on policy and on people’s daily lives, two fundamental rights of the public collide: the right to be heard and the right not to be misled. 2 We propose that this tension can be resolved, and legitimate democratic debate be facilitated, in at least two ways.
First, misleading and inappropriate argumentation must be identified (e.g., Cook, Ellerton, and Kinkead 2018; Lewandowsky, Cook, and Lloyd 2018; Lewandowsky, Lloyd, and Brophy 2017). Rapidly evolving crises can overwhelm the scientific process, which then cannot provide firm answers at the speed at which they are demanded by the public and policy-makers. However, lack of scientific knowledge or scientific uncertainty does not legitimate misleading and inappropriate argumentation. Cherry-picking, for example, remains inappropriate irrespective of the strength of available scientific evidence.
Second, when misleading arguments have been identified, they can be used to “inoculate” the public against their ill effects. The theory of inoculation posits that people can be protected against misleading information when they are (1) warned that they may be misled and (2) exposed to a preemptive rebuttal of the misleading argumentation (Lewandowsky and van der Linden 2021). Inoculation has been shown to be effective in many situations, including COVID-19 misinformation (Basol et al. 2021) and vaccine-related conspiracy theories (Jolley and Douglas 2017). The two main inoculation approaches are (1) fact-based, where misinformation is demonstrated to be false through factual explanations; and (2) logic-based, which involves identifying misleading techniques rather than content. The logic-based approach can convey immunity across topics (Cook, Lewandowsky, and Ecker 2017) and is applicable without requiring detailed knowledge of the entire inventory of misleading talking points. Third, the functional role of inappropriate argumentation must be interrogated.
Do people believe and voice those arguments to express a relevant aspect of their circumstances? If people voice conspiratorial rhetoric, do they express a deep-seated belief, or does the rhetoric serve other functions (Lewandowsky 2021), such as redressing a perceived loss of control? What policies might mitigate those circumstances, so that people no longer have to rely on counterproductive arguments? Policies that take into account the reasons underlying misleading arguments can be more effective than policies that are agnostic about those reasons. For example, in the context of overcoming vaccine hesitancy, research has shown that approaches such as “motivational interviewing,” which are based on listening, empathy, and an understanding of circumstances rather than on “winning” an argument, are particularly successful (Gagneur 2020).
It is only when misleading arguments can be identified and rejected, or can be interrogated for their underlying causes, that the holy grail of “deliberative decision making that is inclusive, transparent and accountable” (Norheim et al. 2021, 10) can be achieved.
Notes
Stephan Lewandowsky is a cognitive scientist at the University of Bristol. His research focuses on people’s responses to misinformation and the potential tension between online technology and democracy.
Konstantinos Armaos is a PhD candidate at the University of Lausanne. His dissertation is on the cognitive and motivational aspects of identity processes, with a special focus on motivated beliefs related to climate change, COVID-19, and intergroup bias.
Hendrik Bruns is a policy analyst at the European Commission Joint Research Centre (JRC). He tries to understand individual behavior and attitudes relating to climate, energy, and the environment and to apply behavioral interventions to motivate proenvironmental behavior and to reduce the spread of misinformation.
Philipp Schmid is a psychologist and postdoctoral researcher at the University of Erfurt, Germany. He studies the psychology of science denialism and health misinformation and aims to support people’s informed decision-making in health, for example, decisions related to vaccinations. He applies a persuasion psychology perspective to understand the impact of misinformation in health communication and to develop and evaluate promising interventions.
Dawn Liu Holford is a senior research associate at the University of Bristol and was previously a SeNSS/ESRC postdoctoral fellow. Her research looks at sociocognitive aspects of communication and information processing.
Ulrike Hahn holds a chair in computational modeling in the Department of Psychological Sciences, Birkbeck College, University of London, where she directs the Centre for Cognition Computation and Modelling.
Ahmed Al-Rawi is an assistant professor of news, social media, and public communication at the School of Communication at Simon Fraser University, Canada. He is the director of the Disinformation Project, and his research expertise is related to social media, news, and global communication.
Sunita Sah is director of Cornell University’s Academic Leadership Institute and an associate professor of management and organizations at Cornell University’s College of Business. A physician turned organizational psychologist, she has published widely on decision-making, ethical action, conflicts of interest, trust, and how we respond to outside influence.
John Cook is a postdoctoral research fellow with the Climate Change Communication Research Hub at Monash University. His research focus is on using critical thinking to build resilience against misinformation. He recently released the Cranky Uncle game, combining critical thinking, cartoons, and gamification to build players’ resilience against misinformation.
