Abstract
Across the globe, social media have become dominant channels of communication and news for many citizens. They also provide online spaces where misleading information can exacerbate social cleavages and political differences in societies, which can then lead to deleterious democratic outcomes. Therefore, much work has sought to understand the ways in which the effects of misinformation can be attenuated. This virtual theme collection highlights eight studies that examined the conditions in which individuals would actively verify information as well as the effectiveness of certain countermeasures designed to help individuals discern information veracity.
The global information environment has become increasingly volatile due to content of questionable veracity shared and spread through social media that can potentially misinform citizens, polarize societies, and undermine democratic norms. Concepts that describe this content, such as “misinformation,” “disinformation,” and “fake news,” have thus become embedded as part of the media and political lexicon, and much academic research in the past decade has sought to understand the antecedents and consequences of misinformation at various levels of analysis (Freelon & Wells, 2020). This virtual theme issue highlights eight publications in Journalism & Mass Communication Quarterly that explored misinformation at the individual level from two perspectives. First, under what conditions would people verify the content they come across online and what are the psychological drivers behind such actions (van der Linden, 2022)? Second, what is the effectiveness of well-intended countermeasures that are supposed to help individuals discern the veracity of online information they encounter (Courchesne et al., 2021)?
The possibility that online information may be more prone to mislead because it lacks editorial oversight was raised more than two decades ago by Flanagin and Metzger (2000) in their study of how people perceive the credibility of information across different channels. They found that people generally considered online content to be as credible as content from traditional media, and that very few engaged in any kind of verification behaviors to check its veracity. One of their conclusions was especially prescient, as they noted the dangers of “gossip and rumors posted online becoming the basis for actual news stories” (p. 535), which, in hindsight, can have serious and even seismic political and social consequences.
Information Credibility and Audience Verification Behaviors
Based on the premise that verification is a “normative ideal” when individuals come across content of uncertain veracity, Edgerly et al. (2019) found in a survey experiment that respondents were less likely to verify news headlines that they were most uncertain about. Instead, they were more likely to verify headlines that were congruent with their political ideology (i.e., conservative vs. liberal). This suggested that people verify not so much to reduce uncertainty as to reaffirm their existing partisan beliefs. Mourão et al. (2022) extended these findings by exploring what factors drove partisans to verify information and found that conservatives were more likely to verify based on credible sources congruent with their political identity, while liberals relied on their degree of familiarity with the news headline rather than its ideological alignment. Both of these U.S.-based studies pointed to “directional motivated reasoning” as a fundamental cognitive driver of how individuals judge the veracity of political content and form subsequent intentions to engage with it (Flynn et al., 2017). The same cognitive process was also evident in Hong Kong in an experiment by Tsang (2020) that examined how opposing camps (pro-extradition vs. anti-extradition) perceived a WhatsApp post that portrayed the police in a negative light. As expected, pro-extradition participants, who tended to be more pro-government, viewed the post as more misleading than anti-extradition participants did, whereas the source of the WhatsApp post did not affect participants’ veracity judgments. This provided cross-national evidence for the role of partisan-based motivated reasoning in how people process misinformation.
Effectiveness of Countermeasures Against Misinformation
Another important strand of research examined the efficacy of countermeasures to attenuate belief in misinformation and to debunk it. One type of intervention is fact-checking, which involves the systematic assessment and verification of information by third-party fact-checkers such as FactCheck.org and PolitiFact (Walter et al., 2019). In survey experiments, these often come in the form of fact-check labels that accompany misleading news headlines or articles. York et al. (2019) provided some optimistic findings in their survey experiment showing that participants who were exposed to fact-checks, such as a rating of “Completely False” after a news story, held more accurate issue perceptions in line with the fact-check than those who read a news story without the fact-check. This in turn increased their epistemic political efficacy (i.e., confidence in one’s ability to discern the truth). Going beyond a typical experimental design, Mattes and Redlawsk (2020) used a more interactive methodology based on the Dynamic Process Tracing Environment (DPTE) to examine the conditions under which participants were interested in fact-checking a series of news headlines in a fictitious election campaign between two candidates. Interestingly, over 90% of Democrats and Republicans requested a fact-check of the news they came across, and there was again evidence for the role of partisan-based motivated reasoning behind these decisions. For example, fact-check requests were more common when the opposition candidate attacked the respondent’s preferred candidate. A study by Duncan (2020) offered more nuanced findings. Through a repeated-measures design, the findings showed that the perceived credibility of a political story declined following exposure to credibility warning cues. This occurred regardless of whether the news was consistent with participants’ political identities.
This was another optimistic finding, as it suggests that fact-checks could be effective even for misleading content that is congruent with one’s political worldview. In contrast to the previous studies that focused on political issues and topics, Sun and Lu (2023) focused on a health context (i.e., COVID-19) and examined the efficacy of direct rebuttals of misinformation by different sources rather than fact-check labels. The results showed that such rebuttals from credible sources (i.e., the Centers for Disease Control and Prevention) can work indirectly by reducing beliefs in misinformation, which in turn led to greater vaccination intention. Somewhat surprisingly, political ideology was not part of the study design, given the politicized nature of COVID-19 vaccination in the United States.
Toward a More Holistic and Global Research Agenda on Countermeasures
Overall, these eight studies provide a snapshot of the rich body of misinformation research published in JMCQ. From a normative perspective, fact-check and credibility labels, rebuttals from reputable sources, and user-initiated verification behaviors are desirable means of reducing individuals’ beliefs in misleading information. It is important to acknowledge that they represent just a subset of countermeasures against misinformation, which also include accuracy nudges (Pennycook & Rand, 2022) and news literacy-based interventions (Chan, 2022), among others. Moreover, as evidenced in the eight articles in this themed collection, much of the misinformation literature is still heavily based on U.S. samples and experimental research designs focusing on political issues. Theoretically, the extent to which partisan-based motivated reasoning plays a key role in non-U.S. contexts requires further study, since the conservative-liberal divide is less salient or applicable in other countries. Therefore, more studies from other parts of the world, as well as comparative studies, are needed to provide a generalized understanding of how people react to misinformation and its related interventions. Hopefully, such endeavors will be featured in future issues of JMCQ.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
