Abstract
The spread of COVID-19 misinformation highlights the need to correct misperceptions about health and science. Research on climate change suggests that informing people about a scientific consensus can reduce misinformation endorsement, but these studies often fail to isolate the effects of consensus messaging and may not translate to other issues. We therefore conduct a survey experiment comparing standard corrections with those citing a scientific consensus for three issues: COVID-19 threat, climate change threat, and vaccine efficacy. We find that consensus corrections are never more effective than standard corrections at countering misperceptions and, with only one exception, fail to reduce them. We also find that consensus corrections endorsed by co-partisans do not reduce misperceptions relative to standard corrections, while those endorsed by opposition partisans are viewed as less credible and can even provoke a backfire effect. These results indicate that corrections citing a scientific consensus, including corrective messages from partisan sources, are less effective than previous research suggests when compared with appropriate baseline messages.
False and unsupported claims have spread widely since the start of the COVID-19 pandemic, fostering misperceptions about the origins of the novel coronavirus (SARS-CoV-2), the health risks it presents, and how to most effectively prevent and treat the illness it causes (Pennycook et al. 2020). This misinformation continues to circulate despite extensive efforts by media organizations, social media companies, and health organizations to debunk it (Rogers 2020; World Health Organization 2020). The prevalence of these false claims threatens efforts to mitigate the deadly impact of the virus and potentially jeopardizes social welfare, mirroring the pattern observed with other health and science issues like climate change and vaccinations (e.g., Frankovic 2019; Reinhart 2020).
One popular approach to debunking misinformation about scientific and health issues is presenting evidence of a scientific consensus that contradicts a false claim or belief (Koehler 2016; van der Linden et al. 2017). However, studies testing consensus messaging have several limitations. First, they often fail to isolate the effect of a consensus message and instead bundle information about a scientific consensus with a correction message saying the claim is false. In addition, these studies largely focus on climate change, a highly polarized issue in which directionally motivated reasoning—a phenomenon in which individuals process information with a bias toward a preferred conclusion (Kunda 1990)—is seemingly common and the immediate stakes for most Americans are relatively low (e.g., van der Linden et al. 2014, 2018).
Moreover, findings vary on which sources are most effective for corrections. The corrections tested in many science and health misinformation studies typically attribute their information to scientists and experts, who may be highly credible to some audiences (e.g., van der Linden et al. 2014, 2018). However, other groups may distrust those sources and/or engage in directionally motivated reasoning when presented with counter-attitudinal information on controversial issues such as climate change threat (Bolsen and Druckman 2018). As a result, scholars have examined whether co-partisan sources are more effective at correcting misinformation (Benegal and Scruggs 2018; Berinsky 2017; Bolsen et al. 2019). However, messages from partisan sources are likely to be encountered by people who do not share the party affiliation of the speaker. These sources may be less effective at persuading people from the other party or even provoke negative reactions from them (e.g., Hart and Nisbet 2012; Swire et al. 2017). Moreover, the effectiveness of messages from scientists and partisan sources may vary across issues depending on the level of partisan polarization (which may strengthen directionally motivated reasoning) and perceived threat (which may strengthen accuracy motivations, e.g., Case et al. 2021).
We therefore compare the effects of different sources and types of corrections on belief in misinformation, testing the relative effectiveness of a scientific consensus correction endorsed by scientists, co-partisans, or opposition partisans versus a standard, non-consensus correction. We further consider whether the effectiveness of different sources of consensus messaging varies across three issues where people may encounter false claims that a threat or risk is exaggerated—the threats posed by COVID-19, climate change, and communicable disease among people who forego vaccination. These issues differ in partisan polarization levels and in the perceived threat or risk they pose to the public. We hypothesized that co-partisan endorsements would be most effective for polarized issues, such as climate change threat, that we expected to trigger stronger directionally motivated reasoning. Conversely, we expected scientific sources to be most effective for issues such as COVID-19 for which individuals perceive high levels of personal threat, which should instead increase accuracy-motivated reasoning (Kunda 1990).
Results from an experiment conducted on a sample of more than 5000 participants indicate that scientific consensus corrections are generally not more effective than non-consensus corrections when compared with participants who did not receive a correction. Consensus messages from scientists only reduce misperceptions from that baseline on the issue of climate change threat and are never measurably more effective than standard correction messages. Likewise, consensus corrections from co-partisans do not reduce misperceptions versus baseline and are not more effective than standard corrections that cite a scientific consensus on any issue. Conversely, corrections that cite a scientific consensus from opposition partisans were seen as less credible and even backfired among Democrats on climate change.
Overall, our results suggest that the effectiveness of corrections using consensus messaging may be overstated, including those from partisan sources that cite a scientific consensus. These findings underscore the importance of identifying the appropriate baseline for comparison when assessing the corrective effects of political messages.
Theoretical expectations
We specifically test the following hypotheses and research questions, which were preregistered prior to data collection. 1
First, though prior studies have shown mixed results for the issues we consider (e.g., Kreps and Kriner 2020; Nyhan et al. 2014; van der Linden et al. 2018), meta-analyses show that exposure to corrective information typically reduces belief in misinformation (Chan et al. 2017; Walter et al. 2019). We therefore expect participants who receive a correction to rate false or unsupported claims about climate change threat (H1a), COVID-19 threat (H1b), and vaccine efficacy (H1c) as less accurate than those who receive misinformation but no correction.
Two forms of corrections should be especially effective: attributing consensus messages to scientists, which has been found to increase belief accuracy across partisan lines (e.g., van der Linden et al. 2018), and attributing consensus messages to co-partisan elites, which may be especially effective for otherwise skeptical partisan groups (e.g., Benegal and Scruggs 2018). We therefore hypothesized that participants who are exposed to a message in which either co-partisans or scientists endorse a correction citing a scientific consensus would rate the claim being corrected as less accurate (H2a) and rate the source of the correction as more credible (H2b) than those who received an otherwise equivalent correction that does not cite a consensus.
However, we expect that the effects of these messages will differ by issue. Directionally motivated reasoning should be more common on issues with high levels of partisan polarization among elites (e.g., Druckman et al. 2013), while accuracy-motivated reasoning should be more common on issues with high levels of perceived threat (Kunda 1990; Vraga and Bode 2017). We therefore expected that co-partisan corrections citing a scientific consensus would decrease misinformation belief and increase the perceived credibility of the corrective source relative to scientist-endorsed consensus corrections more for climate change threat, a more polarized issue, than for vaccine efficacy, a less polarized issue (H3a). We conversely expected scientist endorsement of a consensus to be more effective than co-partisan endorsement for COVID-19 threat, an issue of high personal threat (especially given the pandemic taking place when the study was conducted), than for climate change threat, an issue of relatively lower personal threat (H3b). Our preregistered theory and analysis plan treats threat as an issue-level variable that is expected to increase accuracy-motivated reasoning. However, other studies conceptualize threat and related anxiety as individual-level variables and/or expect them to induce directionally motivated reasoning instead (e.g., Groenendyk 2016; Haas and Cunningham 2014; Weeks 2015). We therefore also summarize exploratory results that treat perceived threat as an individual-level moderator in footnote 5 (details are reported in Online Appendix B) and propose further research into the effect of threat and anxiety on accuracy-motivated reasoning in the conclusion.
In addition, we examine the following preregistered research questions for which we have weaker expectations: whether consensus corrections from co-partisans or scientists would be more effective for misinformation belief about COVID-19 threat (RQ1); whether consensus corrections from scientists (versus co-partisans) would be more effective for COVID-19, an issue of high threat, than vaccine efficacy, an issue of relatively lower threat (RQ2); how an opposition partisan endorsement of a consensus correction (versus scientists) would affect misinformation belief for the more polarized issues of climate change threat and COVID-19 threat (RQ3); and whether a co-partisan endorsement of a consensus correction would have differing effects by party for the more polarized issues of climate change threat and COVID-19 threat (RQ4). 2
Methods
Study design
Our experimental design builds on Benegal and Scruggs (2018), which tested four treatments for climate change misinformation: no correction, a scientific consensus correction, a Republican endorsement of a scientific consensus correction, and a Democratic endorsement of a scientific consensus correction. However, since Benegal and Scruggs do not test a correction message that excludes a consensus component, it is unclear if the corrective effects they observe are due to the consensus treatments or simply exposure to corrective information. We therefore add a condition in which respondents are exposed to a correction without any reference to consensus, which we refer to as a standard correction.
As Figure 1 illustrates, respondents were randomized into one of six conditions in a between-subjects design: a pure control group that saw no misinformation or correction; a misinformation-only condition in which respondents read an article containing misinformation about each issue; a standard correction condition in which respondents read each misinformation article followed immediately by a corresponding factual correction; or three consensus correction conditions in which respondents read each misinformation article followed immediately by a corresponding factual correction citing a scientific consensus that supported the corrective information. This endorsement was attributed to either a group of scientists, a group of Democratic elected officials, or a group of Republican elected officials. Respondents encountered each issue in random order and remained in the same condition for all three issues.

Figure 1. Survey flow diagram.
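The assignment scheme just described — one of six conditions held fixed across all three issues, with issue order randomized per respondent — can be sketched as follows. The condition labels, function names, and data structures below are our own illustrative choices, not the authors' survey code:

```python
# Illustrative sketch of the between-subjects assignment scheme;
# labels and names are assumptions, not the authors' materials.
import random

CONDITIONS = [
    "control",       # no misinformation, no correction
    "misinfo_only",  # misinformation articles only
    "standard",      # misinformation + non-consensus correction
    "scientist",     # misinformation + consensus correction (scientists)
    "democrat",      # misinformation + consensus correction (Democratic officials)
    "republican",    # misinformation + consensus correction (Republican officials)
]
ISSUES = ["climate", "covid", "vaccines"]

def assign(rng: random.Random) -> dict:
    """Assign one respondent: a single condition held fixed for all
    three issues, with the issue order randomized independently."""
    order = ISSUES[:]
    rng.shuffle(order)
    return {"condition": rng.choice(CONDITIONS), "issue_order": order}

rng = random.Random(0)
respondent = assign(rng)
```

Because the order-effects analysis reported below restricts attention to the first issue seen, the first element of `issue_order` is the one that matters for the main experimental estimates.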
Our preregistered design is intended to test the effects of corrective information among people exposed to misinformation. We therefore compare misperception beliefs between respondents who were randomly assigned to the correction conditions and those who only saw misinformation. This approach allows us to estimate the effectiveness of corrections conditional on exposure to issue-specific misinformation. Since many Americans have been exposed to misinformation about these highly salient topics, we believe this baseline is most informative. However, as we show in Table B15 in Online Appendix B, our results are similar when we instead use the control group as a baseline.
Each misinformation article claimed that the threat (climate change or COVID-19) or effectiveness (vaccines) of the topic in question was being exaggerated, arguing instead that COVID-19 is no more harmful than the seasonal flu, that climate change is no more harmful than natural temperature cycles, and that infectious diseases are no more harmful to unvaccinated people than to vaccinated people. To the extent possible, we matched the language of the misinformation and correction articles to maximize parallelism and minimize issue-specific confounds. (The full survey instrument, including exact wording for all measures and stimuli, is provided in Online Appendix A.)
Before viewing the articles and outcome measures, participants answered demographic and attitudinal questions. Next, participants viewed the treatment stimuli and outcome questions. Misinformation, corrective information, and outcome questions were grouped by issue. Each misinformation and correction article was immediately followed by an attention check, though we included participants in our analysis regardless of whether they passed. Passage rates for the misinformation attention check were 66% for climate change, 62% for COVID-19, and 63% for vaccination; for the correction attention check, they were 78% for climate change, 78% for COVID-19, and 76% for vaccination. Within each issue block, participants rated each of the following on a four-point scale: the accuracy of the false claim in the misinformation article, the credibility of the correction article (if viewed), and their willingness to sign a petition supporting a policy aligned with the corrective information (see Table B8 in Online Appendix B for results for this measure). Issue blocks were presented in random order, and filler articles were shown between issues to minimize order effects. All respondents were debriefed at the conclusion of the study.
Sample characteristics
Participants were recruited May 9–10, 2020, through the Lucid Marketplace online panel using quotas to mirror the age, race, gender, and region distribution for the U.S. adult population. 3 Lucid provides a larger, more diverse, and less professionalized respondent pool than Amazon Mechanical Turk and has been found to match population benchmarks and experimental findings from other survey panels (Coppock and McClellan 2019). Our total sample included 6645 respondents. Table B1 in Online Appendix B demonstrates sample balance across conditions. The study was approved by the Committee for the Protection of Human Subjects at Dartmouth College.
Results
Our analysis plan was preregistered with EGAP prior to fielding the study (https://tinyurl.com/ybeg235d); all deviations are labeled. Statistical analyses were conducted using OLS regression with robust standard errors. All experimental analyses below include indicators for co-partisan and opposition partisan corrections, constructed from respondent partisan self-identification (including leaners) and our Democratic and Republican correction conditions. These results thus exclude pure independents, for whom the co-partisan/opposition correction variables are not defined.
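As a minimal illustration of this estimation strategy, the sketch below fits an OLS model with heteroskedasticity-robust standard errors on simulated data and computes a linear combination of treatment effects. All variable names, covariates, effect sizes, and the HC2 variance estimator are our own assumptions for illustration, not the authors' code or data:

```python
# Minimal sketch of the estimation strategy on simulated data.
# Variable names, covariates, effect sizes, and the HC2 variance
# estimator are illustrative assumptions, not the authors' materials.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5000
conditions = ["misinfo_only", "standard", "scientist", "copartisan", "opposition"]

df = pd.DataFrame({
    "condition": rng.choice(conditions, size=n),
    "trust_science": rng.normal(0.0, 1.0, size=n),  # illustrative covariate
    "college": rng.integers(0, 2, size=n),          # illustrative covariate
})

# Simulated treatment effects on a 1-4 misinformation-belief scale.
true_effects = {"misinfo_only": 0.0, "standard": -0.10, "scientist": -0.17,
                "copartisan": -0.14, "opposition": 0.15}
df["belief"] = (2.5 + df["condition"].map(true_effects)
                - 0.2 * df["trust_science"] + rng.normal(0.0, 0.8, size=n))

# OLS with heteroskedasticity-robust (HC2) standard errors; the
# misinformation-only condition is the omitted baseline category.
model = smf.ols(
    "belief ~ C(condition, Treatment('misinfo_only')) + trust_science + college",
    data=df,
).fit(cov_type="HC2")

# Linear combination of treatment effects (scientist consensus vs.
# standard correction), computed as a contrast between coefficients
# rather than a difference of raw condition means.
names = list(model.params.index)
contrast = np.zeros(len(names))
contrast[[i for i, nm in enumerate(names) if "T.scientist" in nm][0]] = 1.0
contrast[[i for i, nm in enumerate(names) if "T.standard" in nm][0]] = -1.0
print(model.t_test(contrast))
```

The contrast at the end mirrors the note under Table 1: comparisons between correction conditions are differences between covariate-adjusted treatment coefficients, not simple differences of condition means.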
Following our preregistered analysis plan, we found that the effects of our experimental manipulations varied by the order in which the issues were shown to respondents (see Section B.1 in Online Appendix B for results and further discussion). Our study thus focuses exclusively on experimental results for the first issue seen by respondents, which was randomized.
We also assessed our expectations of perceived personal threat and partisan polarization by issue. For threat, we expected COVID-19 to be seen as posing the most immediate risk to respondents’ health and safety followed by the risk from not getting vaccinated (i.e., from other communicable diseases) and then climate change. However, the risks posed by COVID-19 and forgoing vaccination were not viewed as measurably different by our respondents, though both were seen as presenting greater immediate risks than climate change (see Table B2 in the Online Appendix). Finally, pre-treatment measures show the greatest partisan differences over the immediate health and safety risk of climate change followed by COVID-19 and then vaccination (Table B3).
We now consider the effects of the experimental treatments on belief in misinformation about each issue. Our research confirms previous findings that corrective information can reduce belief in false information about climate change threat. However, we did not find evidence that corrective information about COVID-19 threat and vaccine efficacy had similar effects. Results are presented in Table 1 and summarized graphically in Figure 2. 4
Table 1. Effects of information exposure on misinformation belief.
*p < 0.05, **p < 0.01, ***p < 0.005 (two-sided). OLS models with robust standard errors. Estimated among self-identified Democrats and Republicans (including leaners). Controls included for political interest and knowledge, trust in science, Trump approval, nonwhite race/ethnicity, college education, gender, party, and age group. We included these controls because we expected them to be prognostic, which should increase the precision of our treatment effect estimates without materially increasing bias (see Bloom et al. 2005; Broockman et al. 2017; Higgins et al. 2016). Linear combination effects versus the misinformation-only condition in the bottom half of the table are calculated as differences between treatment effect estimates, not differences of means between conditions.

Figure 2. Misinformation belief by issue and condition.
Consistent with previous research, our first finding indicates that corrective information about climate change threat reduces misinformation belief relative to the misinformation-only condition (−0.16, p < 0.05). Effects were similar for scientist corrections (−0.17, p < 0.05) and co-partisan corrections (−0.14), though we could not reject the null hypothesis of no effect at the p < 0.05 level for the latter. We thus generally find support for H1a, which predicted that corrective information would reduce misinformation belief on climate change threat.
By contrast, we find no measurable effects of any correction condition relative to the misinformation-only condition for the issues of COVID-19 threat and vaccine efficacy and thus do not find support for H1b and H1c, respectively. We note that COVID-19 threat misinformation had no measurable effect on misinformation belief, which was not reduced further by exposure to corrective information (similar to the findings in Carey et al. 2020). For vaccines, exposure to misinformation actually reduced misinformation belief (−0.16, p < 0.05), suggesting it was unpersuasive. These beliefs were not reduced further by exposure to corrective information, however.
We found little evidence to support our expectations that corrections in which scientists or co-partisans endorsed a scientific consensus would be more effective than standard corrections. Misinformation belief did not differ measurably between correction conditions for any of the three issues, including COVID-19 threat (H2a; RQ1). Similarly, we find generally null results for the perceived credibility of scientist or co-partisan consensus corrections versus a standard correction (H2b; see Table B6), though respondents rated the scientist correction as more credible than the standard correction for climate change threat (0.14, p < 0.05). Finally, as reported in Table B14, we do not find the expected differences in co-partisan versus scientist correction effects across pairs of issues that differ most in expected polarization (climate change threat versus vaccine efficacy; H3a) and most in perceived personal threat (COVID-19 versus climate change; H3b). 5 Results are similar for perceived credibility (Table B7).
To better understand these results, we conducted exploratory descriptive analyses of attention check passage rates and response timing across conditions. 6 These results suggest that the increased length and complexity of the consensus corrections may have made them more difficult to comprehend. First, as Table B10 shows, respondents in the conditions in which scientists, co-partisans, or opposition partisans cited a scientific consensus were generally less likely to correctly answer the attention check questions compared with those in the standard (non-consensus) correction groups. 7 We also find that individuals reading consensus corrections spent longer on the correction page than did individuals receiving the standard correction (see Table B11).
The consensus correction from opposition partisans was never more effective and was sometimes less effective than other correction messages on issues of partisan controversy. We find in an exploratory analysis that an opposition partisan correction is measurably less effective than a correction from scientists for climate change threat (−0.19, p < 0.01) and actually increases misinformation belief compared with baseline (0.15, p < 0.05). Comparable tests are null for COVID-19 threat, another (somewhat) polarized issue. In addition, respondents receiving a consensus correction from opposition partisans rate the correcting source as significantly less credible than the standard correction for both climate change threat (−0.35, p < 0.005) and COVID-19 threat (−0.44, p < 0.005).
However, these effects varied substantially by party. As we show in Table B12, the opposition correction resulted in significantly greater misinformation belief on climate change threat among Democrats than did the scientist correction (0.34, p < 0.005), a difference that was measurably different than what was observed among Republicans (p < 0.05). Moreover, Democrats’ belief in climate change misinformation increased versus baseline if they received the opposition partisan correction (0.21, p < 0.05). By contrast, such a backfire effect was not observed among Republicans.
This backfire effect is likely the result of an opposition party source cue rather than a disconfirmation bias; the latter interpretation is inconsistent with increased Democratic belief in a climate change misperception generally unsupported by Democrats. Such a backfire may arise if respondents ignore or reject the content of the correction due to the presence of an opposition party cue, which provokes a negative response. We find in exploratory analysis that Democrats receiving the opposition correction answered the attention check correctly 16 percentage points less often than did Democrats receiving the scientist correction of climate change misinformation (see Table B13 in the Online Appendix).
Finally, we found per RQ4 that the co-partisan correction was measurably less effective (p < 0.05) among Republicans than Democrats for climate change threat (perhaps because the source was less plausible). These partisan differences in co-partisan and opposition correction effects for climate change threat are illustrated in Figure 3. No such effect was found for COVID-19 threat.

Figure 3. Climate change misinformation belief by party and condition.
Conclusion
Consistent with prior research, we find that corrections citing a scientific consensus can effectively correct misperceptions about climate change threat. However, this effect does not extend to the other highly salient health and scientific issues of COVID-19 threat and vaccine efficacy. We generally found little evidence that consensus messages from either scientists or co-partisans were more effective than standard corrections, contradicting prior research (Benegal and Scruggs 2018; van der Linden et al. 2018). The effects of these messages did not vary with issue-level differences in partisan polarization or perceived threat.
We also find that, among Democrats, opposition party endorsements of the scientific consensus lead to significantly greater belief in misinformation (on climate change threat) and lower credibility ratings of the corrective source (on all issues) relative to the scientist endorsement. We specifically observe a backfire effect on climate change threat, our study’s most polarized issue and thus the one for which Democrats are presumably most likely to believe Republicans have differing interests (Lupia and McCubbins 1998). In contrast, we may not have observed a backfire effect for COVID-19 threat and vaccine efficacy because Democratic and Republican elites are (somewhat) less polarized on these issues. Nonetheless, given the apparent ineffectiveness of co-partisan corrections that cite a scientific consensus and the risk of backfire against consensus corrections from opposition partisans, fact-checkers might consider reducing their usage of partisan-sourced consensus corrections.
One potential mechanism for why consensus corrections appear no more effective than standard corrections is that consensus corrections are lengthier and more complex, which may have reduced respondents’ understanding of the corrective information. Consensus corrections require both a source of consensus and the content of a correction, whereas standard corrections only require the latter. This increased length and complexity could have dissuaded respondents from reading the entire message or made the message less comprehensible or persuasive (Lowrey 1992). The greater clarity and brevity of a correction without consensus may therefore be more effective in real-world contexts where audiences often lack interest, time, or both.
This study has several limitations. First, the corrections did not cite any outside sources to maintain the plausibility of the consensus corrections and avoid confounding source effects across issues. This design choice may have reduced the effectiveness of the corrections we tested. Second, exposure to our misinformation articles decreased belief in a vaccine misperception and had no measurable effect on the two other misperceptions we tested. However, observed levels of misperceptions suggest these null results may reflect pre-treatment effects resulting from prior exposure to the misinformation (Druckman and Leeper 2012). The limited effects of the corrective information that we observe thus remain noteworthy. Nonetheless, future studies should consider how to increase the validity of these stimuli, which may have been seen as implausible or not convincing enough to change respondent beliefs. Third, our study did not include accuracy incentives or other mechanisms to deter expressive responding; as with any survey, we cannot rule out the possibility that some respondents answered insincerely. Fourth, future research should determine how to avoid order effects in correction studies with multiple issues. Fifth, though we focused on endorsement of a scientific consensus, other types of consensus (such as consensus among political or religious leaders) might yield different results. Finally, we focused on two issues (climate change and COVID-19) that Democrats are more likely than Republicans to perceive as threatening. Future research should test an issue of higher threat to Republicans such as immigration.
Our results nonetheless provide important evidence suggesting that consensus messages, including partisan messages that cite a scientific consensus, are not necessarily more effective than the baseline set by standard approaches to correcting misinformation. Future studies should ensure that they use proper baselines for comparison (e.g., comparing effects against standard approaches to correcting science- or health-related misperceptions) when assessing the effects of corrective messaging.
Supplemental Material
Supplemental material, sj-pdf-1-rap-10.1177_20531680211014980 for The limited effects of partisan and consensus messaging in correcting science misperceptions by Vignesh Chockalingam, Victor Wu, Nicolas Berlinski, Zoe Chandra, Amy Hu, Erik Jones, Justin Kramer, Xiaoqiu Steven Li, Thomas Monfre, Yong Sheng Ng, Madeleine Sach, Maria Smith-Lopez, Sarah Solomon, Andrew Sosanya and Brendan Nyhan in Research & Politics
Acknowledgements
We thank Sumitra Badrinathan and Simon Chauchard for helpful comments.
Correction (June 2025):
Declaration of Conflicting Interests
The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Dartmouth Center for the Advancement of Learning.
Supplementary materials
The supplementary files are available at http://journals.sagepub.com/doi/suppl/10.1177/20531680211014980.
Notes
Carnegie Corporation of New York Grant
This publication was made possible (in part) by a grant from the Carnegie Corporation of New York. The statements made and views expressed are solely the responsibility of the author.