Abstract
Across two studies, we test two of Facebook’s attempts to fight misinformation: labeling misinformation as disputed or false and including fact checks as related articles. We propose hypotheses based on a two-step model of motivated reasoning, which provides insight into how misinformation is corrected. For study 1 (n = 1,262) and study 2 (n = 1,586), we created a mock Facebook News Feed consisting of five different articles—four were actual news stories and the fifth was misinformation. Both studies tested (a) the effect of misinformation without correction, (b) Facebook’s changes to its platform, and (c) an alternative we theorized could be more effective. The findings, in line with the two-step model of motivated reasoning, provide evidence of symmetric party effects for the belief in misinformation. In both studies, we find partisan differences in responses to fact checking. We find modest evidence that our improvements to Facebook’s attempts at correcting misinformation reduce misperceptions across partisan divides.
