Abstract
The spread of disinformation is widely regarded as one of the most serious global risks. In one laboratory and three online experiments (Ntotal = 3,066), we measured trust in true, false, and disinformation statements related to the Russo–Ukrainian war (Experiments 1–3) and to politics, climate, and health (Experiment 4). We examined the longer-term effectiveness of a fact-based corrective message delivered either before (prebunking) or after (debunking) participants’ initial evaluation of disinformation statements. Across all four experiments, the debunking intervention consistently and substantially reduced trust in disinformation, with effects persisting for at least two weeks. The prebunking intervention produced similarly durable benefits only when it was immediately followed by evaluation of the just-corrected disinformation; when evaluation was delayed, prebunking had no reliable impact. We found no significant backfire effects on trust in disinformation in any ideological group. However, debunking induced a more conservative response pattern overall, reducing trust in true statements as well.
