Abstract
Some people hold beliefs that are opposed to overwhelming scientific evidence. Such misperceptions can be harmful to both personal and societal well-being. Communicating scientific consensus has been found to be effective in eliciting scientifically accurate beliefs, but it is unclear whether it is also effective in correcting false beliefs. Here, we show that a strategy that boosts people’s understanding of and ability to identify scientific consensus can help to correct misperceptions. In three experiments with more than 1,500 U.S. adults who held false beliefs, participants first learned the value of scientific consensus and how to identify it. Subsequently, they read a news article with information about a scientific consensus opposing their beliefs. We found strong evidence that in the domain of genetically engineered food, this two-step communication strategy was more successful in correcting misperceptions than merely communicating scientific consensus. The data suggest that the current approach may not work for misperceptions about climate change.
Some people hold beliefs that are opposed to overwhelming scientific evidence. These misperceptions, defined as factual beliefs that are false or that contradict the best available evidence in the public domain (Flynn et al., 2017), can be harmful to one’s health and even hamper society’s ability to address major challenges. One of the biggest challenges of our time is climate change, for which public policy and action depend on the accurate belief that climate change is caused by human action (Krosnick et al., 2006; van der Linden, Leiserowitz, et al., 2015). Similarly, accurate beliefs about vaccination influence the decision to get vaccinated (Joslyn & Sylvester, 2019), which is our most promising approach to eradicating diseases such as polio, diphtheria, and measles. In the domain of food technology, substantial opposition to genetic engineering of food (Scott et al., 2016, 2018) means that we stand to lose support for one of the most promising technologies to reduce undernourishment, from which an estimated 690 million people suffer globally (United Nations, n.d.). Our goal in the current research was to test a communication strategy aimed at correcting misperceptions about important societal issues.
Communicating scientific information can be problematic because people typically take their own goals and needs, knowledge and skills, and values and beliefs into account when evaluating new information (National Academy of Sciences, 2017). This makes simply providing information insufficient for effective science communication. Research has found that instead of communicating complex knowledge, communicating scientific consensus (i.e., a high degree of agreement among scientists) is effective in eliciting accurate beliefs (Cook, 2016). The gateway to these personal factual beliefs is the individual’s perception of the agreement among scientists—their perceived consensus (Lewandowsky et al., 2013). This approach is thought to be effective because communicating scientific consensus does not rely on elaborate processing of complex scientific information. Rather, it plays into heuristics such as trust in experts and the idea that consensus implies correctness (van der Linden et al., 2019).
The heuristic to trust in expert consensus not only is an ecologically rational strategy but also provides science communicators with a route to personal beliefs. This route through communication of scientific consensus is captured in the gateway belief model (van der Linden, Leiserowitz, et al., 2015).
However, an important unanswered question is whether communicating scientific consensus is also effective in correcting beliefs of people who hold a misperception. Misperceptions are notoriously hard to correct, especially in the case of politicized science issues such as climate change (Flynn et al., 2017). The question of how to correct misperceptions is important for two reasons. First, people who hold false beliefs may display behaviors that are detrimental to themselves or others. Second, people who hold misperceptions may not trust experts advocating positions incongruent with their preferences (Kahan et al., 2011) or may downplay the reliability of a consensus cue that is in contrast to their interests (Giner-Sorolla & Chaiken, 1997). Moreover, even when the scientific consensus is accepted by individuals with a conflicting worldview (van der Linden et al., 2018, 2019), this may not necessarily prompt them to update their personal beliefs to be in line with the consensus (Bolsen & Druckman, 2018; Dixon, 2016; Pasek, 2018). This means that communicating scientific consensus may not be persuasive among people who need persuasion most. At the same time, consensus communication is one of the most promising strategies to correct false beliefs. Thus, the challenge is to make consensus communication work among people who hold misperceptions in the face of overwhelming scientific evidence.
Statement of Relevance
False beliefs about important societal issues, such as climate change and food safety, can be harmful to both personal and societal well-being. The current research demonstrates that informing people of a scientific consensus opposing their false beliefs can help to correct those beliefs. Moreover, the current work extends existing research by demonstrating that empowering people to understand and identify scientific consensus can help to correct a false belief even further, as is demonstrated in the case of genetically engineered food. Although this approach may not work for misperceptions about climate change, these findings support a strategy of open communication about the process of reaching scientific consensus. There is much to be gained, considering that statements about scientific consensus, and about how consensus is reached, are rare in relevant news content. Communicating scientific consensus, paired with science-communication campaigns focused on boosting understanding and identification of scientific consensus, is a promising place to start.
To address this challenge, we developed and tested a strategy that teaches people the value of scientific consensus and how to identify it when evaluating the veracity of a claim. The current strategy can be considered a boosting approach to behavior change, which consists of a noncoercive intervention strategy that aims to increase people’s competence to make their own choices. This competence can be fostered in a number of ways, such as through changes in skills or knowledge, but to be classified as a boost, an intervention needs to be transparent and promote agency (Hertwig & Grüne-Yanoff, 2017; Lorenz-Spreen et al., 2020). An example of boosting is when individuals are inoculated against the persuasive effect of misleading information by warning them of impending exposure to such misleading information and explaining to them how the misleading technique works (e.g., the use of fake experts; Cook et al., 2017). Such inoculation may foster the skill to identify manipulative methods used to misinform and thus promote agency by making people more resistant to manipulation (Lorenz-Spreen et al., 2020). In contrast to misinformation-focused strategies, such as inoculation, the goal behind the present strategy is to strengthen the corrective effect of accurate information.
In the current work, we applied the boosting approach to empower individuals holding misperceptions to understand the value of and to identify scientific consensus. Thus, in contrast to providing more information, boosting consensus reasoning is intended to empower individuals to make the best use of already available information. When the boost is successful, consensus information not only plays into heuristics; it is also fully understood as a source of valuable information and is easily identified. This empowerment, combined with exposure to information about a scientific consensus embedded in a naturalistic environment, might also yield less resistance than more direct means of persuasion, because direct means of persuasion may be perceived as deceiving or as a threat to freedom (Fransen et al., 2015). Therefore, we expected a two-step communication strategy, consisting of (a) boosting consensus reasoning and (b) providing consensus information, to be more effective at helping to correct false beliefs than communicating only consensus itself.
This approach of boosting consensus reasoning was examined in three preregistered experiments. The general setup of all three experiments in the current research was similar (see Fig. 1 for an overview of all conditions employed across the experiments). The most substantial difference between experiments is the added control condition in Experiment 3 (the condition on the far right in Fig. 1), which allowed us to investigate whether boosting consensus reasoning strengthened an already persuasive consensus statement or whether the consensus statement alone was ineffective in correcting the misperception.

Overview of conditions across the experiments. In all conditions, participants’ beliefs were measured at the start and at the end of the experiment, allowing us to investigate changes in belief. In the boost+ condition (all experiments), participants read an infographic explaining the process through which a scientific consensus is reached and why a scientific consensus is a useful piece of information when deciding whether or not something is true. The infographic also set out three steps through which one can use information about a scientific consensus to evaluate a claim (see Fig. A1 in Appendix A). Subsequently, participants were provided with the opportunity to apply their new skill by reading a news article containing a short paragraph with a statement about the scientific consensus regarding the topic of misperception. The boost condition (Experiments 1 and 2) was similar to the boost+ condition, but a shorter version of the same infographic set out only the three steps. In the consensus-only condition (all experiments), participants’ consensus reasoning was not boosted. Instead, participants read an infographic telling them that we were interested in their strategy for evaluating claims. Subsequently, they read the news article described above containing the consensus statement. In the control condition (Experiment 3), participants read the same infographic as in the consensus-only condition. However, the news article they subsequently read did not contain a statement about the scientific consensus. The boost conditions are indicated in blue, and the conditions with a consensus statement are indicated in green.
The hypotheses, sampling procedure, main analyses, and exclusion criteria for all experiments were preregistered on OSF: https://osf.io/7aqjp/ (Experiment 1), https://osf.io/kd7hb/ (Experiment 2), and https://osf.io/4w9tq/ (Experiment 3). Material, data, and analysis scripts for all three experiments are also available on OSF (https://osf.io/hua8v/). All three experiments were part of a research project that was reviewed and approved by the Ethics Committee Social Science at Radboud University (Reference No. ECSW-2018-056).
Experiment 1: Climate Change
In Experiment 1, we addressed the misperception that climate change is not caused by human action. We were interested in climate change because this is one of the most challenging topics in science communication and because the belief that climate change is caused by humans is an important predictor of climate-change risk perception (Lee et al., 2015).
Method
Potential participants were screened prior to the experiment (see the Supplemental Material available online for the full sampling procedure). All participants in Experiment 1 were selected because they believed that climate change is not primarily caused by human action. Participants’ belief in human-caused climate change was measured at the start and end of the experiment.
Participants
Participants (U.S. nationals) were recruited using the online research platform Prolific (www.prolific.co), following a Bayesian sequential-sampling procedure with an optional stopping rule (Schönbrodt et al., 2017). During the sequential-sampling procedure, we checked the Bayes factor (BF) at predetermined intervals to evaluate the evidence in the data in favor of the alternative over the null hypothesis (BF10). We planned to continue data collection until there was moderate evidence (BF10 > 6 or BF10 < 1/6; Schönbrodt et al., 2017) in favor of or against our hypotheses or until a maximum sample size was reached.
Sample Sizes in Each Experiment
Note: Following our preregistered exclusion criteria, we excluded participants who did not hold a misperception at the start of the experiment, who failed an instructed-response item, or who reported a nationality other than the one we screened for (i.e., other than American). However, a small number of participants were excluded after data collection on the basis of criteria that were not preregistered (see the Supplemental Material for more details).
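The sequential-sampling procedure described above can be sketched in code. The following Python sketch is illustrative only (the published analyses were run in R and used a Bayesian ANCOVA, not a two-group comparison); it uses the BIC approximation to the Bayes factor, and the batch size and checkpoints are placeholders loosely modeled on the procedure reported here.

```python
import numpy as np

def bf10_two_groups(x, y):
    """Approximate BF10 for a two-group mean difference using the BIC
    approximation BF10 ~ exp((BIC_null - BIC_alt) / 2)."""
    data = np.concatenate([x, y])
    n = data.size
    rss0 = np.sum((data - data.mean()) ** 2)             # null: one grand mean
    rss1 = (np.sum((x - x.mean()) ** 2)
            + np.sum((y - y.mean()) ** 2))               # alt: two group means
    bic0 = n * np.log(rss0 / n) + 1 * np.log(n)
    bic1 = n * np.log(rss1 / n) + 2 * np.log(n)
    return float(np.exp((bic0 - bic1) / 2))

def sequential_sample(draw_batch, batch=74, n_min=222, n_max=450, bound=6.0):
    """Collect data in batches and stop once BF10 > bound (evidence for H1),
    BF10 < 1/bound (evidence for H0), or the maximum sample size is reached.
    draw_batch(k) returns k new observations per condition."""
    x, y = np.empty(0), np.empty(0)
    while True:
        nx, ny = draw_batch(batch // 2)
        x, y = np.concatenate([x, nx]), np.concatenate([y, ny])
        n = x.size + y.size
        if n < n_min:
            continue                                     # start checking at n_min
        bf = bf10_two_groups(x, y)
        if bf > bound or bf < 1 / bound or n >= n_max:
            return bf, n
```

With a large true group difference, the rule typically stops at the first checkpoint; with no difference, it tends to drift toward the BF10 < 1/6 bound or the maximum sample size.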
Materials and procedure
First, prior belief in human-caused climate change and prior perceived scientific consensus were measured. Subsequently, each participant was randomly assigned to one of three conditions: the boost+ condition, the boost condition, or the consensus-only condition.
In all conditions, we presented participants with an infographic entitled “How to figure out whether a claim is true.” In the boost+ condition (see Fig. A1 in Appendix A), the infographic (containing a total of 511 words) first explained how a scientific consensus develops and why it is a useful piece of information when evaluating claims. Specifically, it described the process as beginning with a question on which evidence is gathered, followed by the development of a consensus, which ultimately can be used to evaluate a claim. This first part of the infographic concluded with the statement that consensus among scientists reflects consensus in the evidence. The second part of the infographic taught participants “Three steps to evaluate a claim.” The three steps were (a) “look for a statement indicating consensus,” (b) “check the source making the consensus statement,” and (c) “evaluate the expertise of the consensus.” This second part of the infographic concluded with the statement that a claim that satisfies the conditions mentioned in the steps is very likely to be true.
The infographic in the boost condition (171 words) consisted only of the second part, setting out the three steps (see Fig. A2 in Appendix A). In the consensus-only condition, we told participants that we were interested in their personal strategy for evaluating claims (63 words; see Fig. A3 in Appendix A).
In a pilot study, the consensus-reasoning manipulation was tested with U.S. citizens recruited on Prolific.
After reading the infographic, participants in all groups evaluated three practice statements that were unrelated to the actual topic of misperception that we were interested in (i.e., “According to Andreas Spigletti, more than 4 out of 5 medical doctors agree that pneumonia is caused by being exposed to low temperatures,” “According to Lisa Williams from the National Academy of Sciences, 9 out of 10 psychologists agree that Greenland is about the same size as Africa,” “According to Dr. Kendall Smith from Leipzig University, 95% of physicists agree that electrons are smaller than atoms”). In both boost conditions, participants were asked whether, in view of the three steps they had just read about, the statement allowed them to judge whether the claim was true or not. Participants in the consensus-only condition evaluated the same three practice statements but were asked whether, on the basis of their strategy of claim evaluation, they could judge whether the claim was true or not. Participants in the boost conditions received feedback on whether their answer (“yes” or “no”) was correct, thereby reiterating the three steps of evaluating a claim using consensus reasoning, for example: “Incorrect. We cannot evaluate this claim, because Step 2 cannot be completed. We do not know if the source making the statement (Andreas Spigletti) is a scientist from a university or another scientific organization. Therefore, we cannot judge whether the claim is true.”
This feedback was intended to instruct participants about how to practice their newly acquired consensus-reasoning skill. Participants in the consensus-only condition received no feedback.
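As an illustration, the three-step check taught in the infographic can be expressed as a simple rule. The Python sketch below is hypothetical: the field names, the 90% agreement threshold, and the keyword list for scientific sources are our own illustrative choices, not part of the study materials, in which participants made these judgments themselves.

```python
# Keywords used to (crudely) recognize a scientific source; illustrative only.
SCIENTIFIC_SOURCES = {"university", "national academy", "research institute"}

def can_evaluate_claim(statement):
    """Return True only if all three steps of the infographic can be
    completed for a claim statement (a dict with hypothetical fields)."""
    # Step 1: look for a statement indicating consensus
    # (a high level of agreement among experts; threshold is illustrative).
    step1 = statement.get("percent_agreement", 0) >= 90
    # Step 2: check that the source making the consensus statement
    # is a scientist or scientific organization.
    step2 = any(s in statement.get("source", "").lower()
                for s in SCIENTIFIC_SOURCES)
    # Step 3: evaluate whether the experts forming the consensus
    # have expertise in the field the claim is about.
    step3 = statement.get("expert_field") == statement.get("claim_field")
    return step1 and step2 and step3
```

Under these illustrative rules, the pneumonia practice statement cannot be evaluated (its source cannot be verified as scientific), the Greenland statement fails because psychologists lack expertise on geography, and the electrons statement passes all three steps.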
After the practice rounds, participants were presented with a news article about a sharp rise in Arctic temperatures. The article (322 words) was adapted from a published news article and included the following statement on the scientific consensus regarding human-caused climate change: “In 2016 already, a study showed that there is a scientific consensus on human-caused global warming. Dr. John Cook from George Mason University: ‘The expert consensus is somewhere between 90% and 100% that agree humans are responsible for climate change, with most of the studies finding 97% consensus among publishing climate scientists.’”
We incorporated the scientific-consensus statement in a news article to address one prominent critique of consensus messaging in the domain of climate change. Critics argue that much of the research is conducted in an artificial context and that consensus messages lack the complexity of real-world information (Kahan, 2015; Pearce et al., 2015). By incorporating the consensus statement in a news article, participants could apply the consensus-reasoning skill in a more externally valid setting than if they had received a stand-alone consensus message.
After reading the news article, participants’ beliefs and perceived scientific consensus were measured again (posterior belief and posterior perceived consensus, respectively). We explained to participants that this second measure was not a test but that we were interested in their beliefs. The remaining variables were then measured.
Measures
Belief in human-caused climate change was measured by asking participants to what extent they believed the following statement to be true: “Climate change is caused primarily by human action.” Their response was recorded on a visual analogue scale ranging from −100 to 100.
Perceived scientific consensus was measured in a similar way, once at the start of the experiment and once after the consensus-reasoning manipulation and news article. We asked participants what they thought the percentage of climate scientists was who agreed that climate change is caused primarily by human action. Their response was measured on a scale ranging from 0% to 100%.
The manipulation check asked participants what steps they take to evaluate claims. Three text boxes were provided, and at least one of these needed to be filled in. Responses were coded to identify whether consensus (or something similar) was mentioned. The coding procedure was tested in the pilot study.
Other, exploratory measures, including secondary outcomes such as worry about climate change, support for policy aimed at tackling climate change, and the intention to reduce one’s own carbon footprint, can be found on OSF (see “Additional Measures” at https://osf.io/qx23b/).
Data analysis
We conducted two analyses of covariance (ANCOVAs) with posterior belief in human-caused climate change as the dependent variable, condition as the independent variable, and prior belief as a covariate. First, we compared the combined boost conditions with the consensus-only condition (Hypothesis 1). Second, we compared the two boost conditions with each other (Hypothesis 2). Standard assumptions for linear models were checked, and, where necessary, additional robust ANCOVAs were conducted. In all confirmatory analyses, and in accordance with the preregistered exclusion criterion, model outliers greater than 3 standard deviations were removed; this did not substantially alter the results. We computed BFs using a Bayesian ANCOVA.
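The confirmatory model can be sketched as follows. This illustrative Python sketch (the published analyses were run in R) fits the ANCOVA as a linear model and tests the condition effect by comparing the full model against a reduced model without the condition dummy; it also includes a version of the 3-standard-deviation residual-outlier rule.

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares of an ordinary-least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

def ancova_condition_effect(prior, posterior, condition):
    """F test for the condition effect on posterior belief with prior
    belief as covariate. condition: 0 = consensus only, 1 = boost."""
    prior = np.asarray(prior, dtype=float)
    posterior = np.asarray(posterior, dtype=float)
    cond = np.asarray(condition, dtype=float)
    X_full = np.column_stack([np.ones_like(prior), prior, cond])
    X_red = X_full[:, :2]                        # intercept + covariate only
    rss_f, rss_r = rss(X_full, posterior), rss(X_red, posterior)
    df1 = 1                                      # parameters added by condition
    df2 = len(posterior) - X_full.shape[1]
    F = ((rss_r - rss_f) / df1) / (rss_f / df2)
    # Compare F to the critical value of F(df1, df2); for large df2 the
    # 5% critical value is approximately 3.85-3.89.
    return F, (df1, df2)

def drop_model_outliers(X, y, sd=3.0):
    """Remove observations whose OLS residuals lie more than `sd`
    standard deviations from the mean residual."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    keep = np.abs(resid - resid.mean()) <= sd * resid.std()
    return X[keep], y[keep]
```

Including the prior belief as a covariate absorbs the large individual differences in starting beliefs, so the condition comparison is effectively made among participants with similar priors.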
Results
As expected, the manipulation check revealed that participants in the boost conditions mentioned consensus more often (18.24%) than participants in the consensus-only condition (2.70%).
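This comparison of proportions rests on a Pearson chi-square test on a 2 × 2 table (condition × whether consensus was mentioned). A minimal Python sketch; the cell counts below are hypothetical, chosen only to be consistent with the reported percentages, and the actual counts are in the data on OSF.

```python
import numpy as np

def chi2_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (rows: condition; columns: consensus mentioned yes/no)."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / table.sum()           # expected counts under independence
    return float(((table - expected) ** 2 / expected).sum())

# Hypothetical counts consistent with 18.24% vs. 2.70% mentioning consensus.
chi2 = chi2_2x2([[27, 121],    # boost conditions: mentioned vs. not
                 [2, 72]])     # consensus only:   mentioned vs. not
# With 1 degree of freedom, values above ~3.84 are significant at alpha = .05.
```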
We did not find support for our hypothesis that boosting consensus reasoning leads to higher belief in human-caused climate change. Specifically, an ANCOVA comparing the combined boost conditions with the consensus-only condition yielded no significant effect of condition on posterior belief.
One potential explanation for the ineffectiveness of the boosts in correcting misperceptions is that they did not affect the gateway belief (participants’ perception of the scientific consensus). We were able to investigate this explanation because perceived consensus was also measured at the start and end of the experiment (for means and standard deviations, see Table B1 in Appendix B). The omnibus ANCOVA, in which posterior perceived consensus was the dependent variable, condition was the independent variable, and prior perceived consensus was the covariate, yielded no significant effect.
An explanation for the boosts’ ineffectiveness at influencing either the misperception or the perceived consensus is that participants’ antiscience views hindered them from accepting the science-based boosting strategy for claim evaluation. This notion was supported by the fact that the boosts were less effective in eliciting consensus reasoning in the experiment than they were in the pilot study; most of the participants in the pilot study did believe in human-caused climate change. Moreover, we found a positive point-biserial correlation in the combined boost conditions between trust in climate scientists and the score on the manipulation check (whether consensus was mentioned).
Experiment 2: Genetically Engineered Food
Because of the null results of the first experiment and the finding that climate scientists were viewed as relatively untrustworthy, we decided to change the topic of the following experiment. In Experiment 2, we addressed the misperception that genetically engineered food is worse for health than food that is not genetically engineered.
Method
The experiment was highly similar to Experiment 1 except for the topic of misperception. A second difference was the addition of a follow-up measure of participants’ beliefs 2 weeks after participation in the experiment.
Participants
Again, participants were recruited following a sequential-sampling procedure. Following the results of the first experiment, we decided to focus the sequential-sampling procedure on Hypothesis 1 (comparing the combined boost conditions with the consensus-only condition). Therefore, we planned to continue data collection until there was moderate evidence (BF10 > 6 or BF10 < 1/6) in favor of or against Hypothesis 1 and slightly less substantial evidence (BF10 > 3 or BF10 < 1/3) in favor of or against Hypothesis 2 (comparing the longer boost with the short boost). We started checking the BFs at 50% of the maximum total sample size (225 participants) and decided to check them after every set of 75 new participants. Because the desired level of evidence was never obtained, we recruited participants until the maximum sample size was reached (see the Supplemental Material for more details).
As in Experiment 1, potential participants were screened prior to the experiment. At the beginning of the experiment, all participants believed that genetically engineered food is worse for health than food that is not genetically engineered.
Materials and procedure
The news article containing the consensus statement in Experiment 2 (313 words) discussed a new, fungus-resistant, genetically engineered banana and was adapted from a published news article. The consensus statement read: “In 2014 already, a survey showed that there is a scientific consensus on the safety of genetically engineered food. Dr. Cary Funk from the Pew Research Center: ‘92% of working Ph.D. biomedical scientists said it is as safe to eat genetically engineered [GE] foods as it is to eat non-GE foods.’”
Participants were invited to the follow-up study approximately 14 days after participation in the initial experiment. A total of 370 participants (448 invited; retention rate ~83%) completed the follow-up study, and 365 were retained after exclusions. On average, these participants completed the follow-up 14.03 days after the initial session.
Measures
Belief in the misperception that genetically engineered food is worse for health was measured by asking participants to what extent they believed the following statement to be true: “Genetically engineered (GE) food products are worse for health than non-GE food products.” Again, their response was recorded on a visual analogue scale ranging from −100 to 100.
Results
Again, the results of the manipulation check demonstrated that the boost conditions were effective in eliciting consensus reasoning in claim evaluation compared with the consensus-only condition.
The hypothesis that boosting consensus reasoning leads to lower belief that genetically engineered food is worse for health received tentative support in Experiment 2. First, an ANCOVA comparing the combined boost conditions with the consensus-only condition suggested a main effect of boosting on posterior beliefs in the expected direction, although the preregistered evidence threshold was not reached.

Prior and posterior beliefs in all conditions of Experiments 1 to 3. In Experiment 1, participants rated their belief that climate change is caused primarily by human activity (higher scores indicate more accurate beliefs). In Experiments 2 and 3, participants rated their belief that genetically engineered food is worse for health than food that is not genetically engineered (lower scores indicate more accurate beliefs). Small dots represent individual observations; larger dots and error bars represent mean scores and 95% confidence intervals (CIs), respectively. Lines connect mean scores for prior and posterior beliefs in each condition. Plots, including means and CIs, were created with complete samples before model outliers were removed.
Experiment 3: Genetically Engineered Food Replicated
The results of Experiment 2 suggested that there may be some potential in boosting both understanding and identification of scientific consensus, but we had not yet found convincing evidence for or against its effectiveness. One explanation for these tentative findings could be a lack of power. We addressed this issue in Experiment 3 by conducting a high-powered replication.
Method
We retained the boost+ and consensus-only conditions to test the effectiveness of boosting consensus reasoning over not boosting consensus reasoning in the presence of a consensus statement. The boost condition was dropped because the results from the previous experiment indicated that this condition, if anything, would be less effective than the boost+ condition. Instead, we added a control condition in which participants neither received a boost nor read a consensus statement. This allowed us to test whether the consensus statement was effective in correcting the misperception in the first place. The follow-up measure was dropped in order to reserve all our resources for testing the immediate effects with high power. We hypothesized that when participants were exposed to the consensus statement, the boost+ condition would lead to lower posterior belief in genetically engineered food being worse for health than the consensus-only condition. Second, in the two previous experiments, we had found quite substantial differences between prior and posterior beliefs in general (though this can partly be explained by regression to the mean). On the basis of this, we hypothesized that when participants did not receive a boost, exposing them to the consensus statement would lead to lower posterior belief in genetically engineered food being worse for health than not exposing them to the consensus statement.
Participants
The recruitment procedure for Experiment 3 was similar to that of Experiment 2, except that no sequential-sampling procedure was employed. Instead, we conducted an a priori power analysis based on the comparison between the boost+ and consensus-only conditions of Experiment 2 (see the Supplemental Material).
Materials and procedure
Participants in the control condition read the same news article as in the other conditions, but the paragraph about the consensus was replaced with a paragraph discussing a field trial that had been conducted with different versions of the genetically engineered banana.
Measures
In this experiment, the manipulation check was coded automatically using an R script (available on OSF).
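The idea behind such automated coding can be illustrated with a simple keyword matcher: flag free-text responses that mention consensus or agreement among scientists. This is a hedged Python sketch; the study’s actual coder was the R script on OSF, and the keyword pattern below is our own illustrative guess at what such a coder might match.

```python
import re

# Illustrative pattern for responses mentioning consensus "or something
# similar" (scientists agreeing, agreement among experts, etc.).
CONSENSUS_PATTERN = re.compile(
    r"\b(consensus|scientists?\s+agree|agreement\s+among|"
    r"majority\s+of\s+(scientists|experts))\b",
    re.IGNORECASE,
)

def mentions_consensus(responses):
    """Return True if any of a participant's text-box responses
    mentions consensus (or something similar)."""
    return any(CONSENSUS_PATTERN.search(r) for r in responses)
```

In practice, a keyword coder like this would be validated against hand coding (as the pilot study did for the manual procedure) before being used as the sole manipulation check.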
Results
The manipulation check indicated that the boost+ condition increased consensus reasoning in claim evaluation, compared with the consensus-only condition.
We found strong support in the data for both hypotheses. First, examining belief that genetically engineered food is worse for health, we found that the effect of the news article was greater in the boost+ condition than in the consensus-only condition, as expected: Posterior belief was lower (i.e., more accurate) in the boost+ condition than in the consensus-only condition.
We conducted additional, exploratory analyses to investigate effects of the boost+ condition and consensus information on perceived consensus. These analyses indicated significant differences between conditions in posterior perceived consensus.
Discussion
Holding on to misperceptions in the face of overwhelming scientific evidence can be harmful to an individual and to society, but correcting those misperceptions can be hard. In three experiments, we investigated a strategy aimed at correcting misperceptions: boosting consensus reasoning. We explained the value of scientific consensus and provided steps to identify scientific consensus when evaluating the veracity of a claim. In the case of climate change, we found moderate evidence against the effectiveness of an extensive boost (the boost+ condition) in correcting a misperception (BF10 = 0.18). In the case of genetically engineered food, by contrast, the two-step strategy helped to correct the misperception beyond communicating scientific consensus alone.
There could be multiple reasons for these differing results. First, trust in climate scientists in the United States is low (Pew Research Center, 2016), whereas biomedical scientists might be trusted more. This raises the possibility that boosting consensus reasoning is less effective in situations in which there is low trust in the relevant experts. Our data support this explanation, showing that climate scientists were trusted less than biomedical scientists as a source of information about their respective fields. Relatedly, this difference in trust might also be reflected in the sources of the consensus statements quoted in the news articles that we used as stimulus material, potentially yielding lower trust in the source of the climate consensus than the source of the genetically engineered food consensus. Another explanation could be that misperceptions related to climate change are simply more resistant to correction than misperceptions about genetically engineered food, for instance because they are more crystallized. Climate change is a highly politicized topic, suffering from decades of strategic use of misinformation (Cook, 2016), which may have resulted in those individuals who hold misperceptions becoming resistant to new information. Our data also support this second explanation: The overall decrease in false belief was smaller in the experiment about climate change than in the experiments about genetically engineered food.
A third explanation could be that the perceived scientific consensus was already higher at the start of the climate-change experiment than it was at the start of the genetically engineered food experiments. Being more aware of the scientific consensus about human-caused climate change, these participants might have become more resistant to the influence of consensus knowledge on their personal beliefs. This explanation is partly supported by our data, which show that participants in the climate-change experiment had a higher perceived consensus at the start of the experiment than did participants in the genetically engineered food experiments. They did, however, substantially increase their estimate during the experiment, in contrast to what would be expected if they were resistant to consensus messaging in general. Finally, it could be that we did not have enough statistical power in the climate-change experiment to detect a true difference between the boost+ and consensus-only conditions. The BF indicated only moderate evidence against the effectiveness of the boost+ condition in the climate-change experiment, which does not convincingly rule out an effect. If a true effect similar in size to the one we found in Experiment 3 about genetically engineered food existed in our sample of climate-change deniers, then the climate-change experiment was underpowered (achieved power ~55%). Of course, a combination of these four explanations may also be at play here.
Apart from enhancing the corrective effect of consensus communication, there are two main arguments for boosting consensus reasoning. First, boosting consensus reasoning not only might help individuals to recognize a true scientific consensus but also might help them to identify a false consensus. People are notoriously poor at distinguishing between true and false consensuses (Yousif et al., 2019). Boosting consensus reasoning empowers individuals to identify misinformation in the form of a false consensus by looking at the source of the consensus claim and the expertise of the individuals making up the consensus. A single scientist stating that genetically engineered food is bad for your health, for example, should not be persuasive to individuals who have received the boosting intervention. Second, there is an ethical advantage to boosting people’s understanding of consensus over only communicating the consensus, namely that its goal is to empower individuals (Hertwig & Grüne-Yanoff, 2017). Consensus communication is often criticized on the grounds that it invokes scientists’ authority as a means of persuasion (Pearce et al., 2015). Conversely, boosting is meant not to persuade but to empower individuals to be able to understand and make the best use of the available information regarding a scientific consensus, whether that consensus is in line with their preferred beliefs or not.
Many questions remain regarding boosting consensus reasoning. First, our boost+ manipulation was multifaceted: It discussed how scientific consensus develops and why it is useful in evaluating claims, it presented steps explaining how to recognize a true consensus, and it included a practice session with feedback regarding how to apply these steps. From the current work, it is unclear which part (or combination of parts) of the manipulation was responsible for the corrective effect. Future research could examine what specifically drove our boosting effect, as well as whether parts of it (such as explaining the value of scientific consensus) could also be used as a more direct means of persuasion. Second, regarding the generalizability of the findings, we investigated boosting consensus reasoning in the context of only two topics and found mixed results. Relatedly, we recruited participants from an online crowdsourcing platform for research. Prolific allowed us to sample from the population of interest (individuals holding misperceptions), but it is unclear whether these participants are representative of the general population of people who hold misperceptions. Therefore, much remains to be learned about the generalizability of the findings, both regarding different topics of misperception and the individuals holding the misperceptions. Third, as mentioned earlier, boosting could help individuals identify misinformation, such as a false consensus (Cook, 2016). Future research could investigate this possibility by employing a similar design to the current research but testing a message communicating a false consensus. Finally, the current research was quite straightforward in that participants had the opportunity to apply their boosted consensus-reasoning skill immediately. The results of the 2-week follow-up measure in Experiment 2 indicated no clear difference between beliefs in the boost+ and consensus-only conditions. 
We are hesitant to interpret this result because Experiment 2 appeared to be underpowered to detect a difference between the boost+ and consensus-only conditions, especially at follow-up. The question remains whether the boost in consensus reasoning is durable, allowing consensus reasoning to be activated later in time, or instead deteriorates.
Although the focus of the current work was on boosting consensus reasoning, it also demonstrated the corrective effect of consensus communication by itself in the case of genetically engineered food. Previous research has demonstrated that consensus messaging can change beliefs about genetically modified food in the general population (Kerr & Wilson, 2018). Here, we showed that consensus messaging can also correct misperceptions in this domain. Although previous research has yielded promising results about reducing belief in misperceptions in the general population (e.g., about a vaccine–autism link; see van der Linden, Clarke, & Maibach, 2015), we believe this is the first experimental work to test consensus messaging in a sample of misperception holders.
To conclude, the current work extends existing research by demonstrating that empowering individuals who hold misperceptions to use scientific consensus in deciding whether or not something is true can help to correct these false beliefs. Moreover, it provides evidence that communicating scientific consensus not only is an effective strategy to strengthen accurate beliefs, as seen in previous research, but also can be used to correct misperceptions. These findings support a strategy of open communication about the process of reaching a scientific consensus. There is much to be gained, considering that cues signaling the existence of consensus in relevant news content are very rare (Merkley, 2020). With a public deficient in knowledge about the scientific consensus on important societal topics, communicating the consensus itself is a promising place to start. And with scientists deficient in communication about the scientific process, consensus communication could be paired with boosting consensus reasoning.
Supplemental Material
sj-docx-2-pss-10.1177_09567976211007788, sj-docx-3-pss-10.1177_09567976211007788, sj-docx-4-pss-10.1177_09567976211007788, and sj-pdf-1-pss-10.1177_09567976211007788 – Supplemental material for Boosting Understanding and Identification of Scientific Consensus Can Help to Correct False Beliefs by Aart van Stekelenburg, Gabi Schaap, Harm Veling and Moniek Buijzen in Psychological Science
Appendix A: Consensus-Reasoning Manipulation
Appendix B: Prior and Posterior Belief and Perceived-Consensus Scores
Prior and Posterior Beliefs and Perceived-Consensus Means in Each Experiment
| Experiment and variable | Boost+: prior | Boost+: posterior | Boost: prior | Boost: posterior | Consensus only: prior | Consensus only: posterior | Control: prior | Control: posterior | Total: prior | Total: posterior |
|---|---|---|---|---|---|---|---|---|---|---|
| Experiment 1 | | | | | | | | | | |
| Belief in human-caused climate change | −57.93 (28.26) | −44.00 (44.96) | −65.80 (29.13) | −48.26 (51.22) | −60.04 (27.70) | −47.28 (45.87) | — | — | −61.33 (28.45) | −46.55 (47.31) |
| Perceived consensus about climate change | 60.08 (25.04) | 79.54 (24.16) | 55.82 (25.98) | 75.64 (26.76) | 56.04 (24.53) | 73.97 (26.24) | — | — | 57.27 (25.16) | 76.35 (25.75) |
| Experiment 2 | | | | | | | | | | |
| Belief that genetically engineered food is worse | 60.24 (28.67) | 9.44 (58.56) | 61.01 (29.25) | 20.17 (57.25) | 58.05 (28.35) | 20.51 (53.57) | — | — | 59.78 (28.72) | 16.71 (56.62) |
| Perceived consensus about genetically engineered food | 47.80 (24.73) | 78.89 (22.71) | 45.43 (23.93) | 72.97 (25.85) | 45.90 (23.46) | 71.05 (26.75) | — | — | 46.37 (24.02) | 74.30 (25.33) |
| Experiment 3 | | | | | | | | | | |
| Belief that genetically engineered food is worse | 57.65 (28.03) | 2.62 (58.59) | — | — | 56.32 (28.32) | 18.93 (51.80) | 56.49 (28.73) | 30.91 (43.66) | 56.81 (28.34) | 17.78 (52.78) |
| Perceived consensus about genetically engineered food | 45.36 (24.00) | 74.45 (25.82) | — | — | 45.98 (24.11) | 71.20 (25.67) | 42.65 (23.80) | 51.52 (25.50) | 44.64 (23.99) | 65.52 (27.58) |
Note: Standard deviations are given in parentheses. Scores were calculated before model outliers were removed.
Transparency
A. van Stekelenburg designed the experiments, conducted the experiments, and analyzed the data. G. Schaap, H. Veling, and M. Buijzen advised on the design of the experiments and analyses. All authors contributed to writing the manuscript and approved the final manuscript for submission.
References
