Abstract
Repeated exposure to misinformation not only reduces the accuracy of people’s beliefs, but it also decreases confidence in institutions such as the news media. Can fact-checking—journalism’s main weapon against misinformation—worsen or ameliorate distrust in journalists and the media? To answer this question, we conducted two pre-registered experiments in Chile (total N = 1,472) manipulating message and receiver factors known to regulate the persuasiveness of fact-checks: transparency elements, arousing images, and political alignment. The results of both studies show that, across message formats, fact-checks are similarly effective at reducing people’s misperceptions. However, these positive effects on belief accuracy come at a cost: Compared to control groups, users exposed to political fact-checks trust news less and perceive the media as more biased, especially after reading corrections debunking pro-attitudinal misinformation. We close with a discussion of the theoretical and practical implications of these findings.
Misinformation not only influences people’s beliefs but can also undermine public trust in institutions such as the news media (Ecker et al., 2022; Freeze et al., 2021; Lewandowsky & van der Linden, 2021; Ognyanova et al., 2020; Stubenvoll et al., 2021; Vaccari et al., 2022; Valenzuela et al., 2022). For this reason, work on the effectiveness of exposure to misinformation corrections on social media has intensified (Cotter et al., 2022; Hameleers & van der Meer, 2020; Porter & Wood, 2021; Vraga et al., 2020). Fact-checking, while not a cure-all to the problem of misinformation (Walter et al., 2020, 2021), has been found to improve the accuracy of people’s beliefs. However, we still do not know all the conditions regulating the impact of fact-checks on other attitudes and behaviors, or the factors shaping how people process and understand fact-checks (Dias & Sippitt, 2020; see also Li et al., 2022; Porter & Wood, 2021; Yeo & McKasy, 2021). To shed light on this matter, this article examines whether exposure to fact-checks on social media can influence people’s evaluations of news media and their content. To the degree that corrections increase credibility evaluations and news engagement (Cotter et al., 2022; Mena et al., 2020), it is possible that fact-checks may improve perceptions about news content and journalistic work. Conversely, fact-checks may discredit news media by making evident that much content in the public sphere is incorrect and by priming perceptions of news outlets as hostile, incompetent, or manipulative (Freeze et al., 2021; Li et al., 2022; Stubenvoll et al., 2021; Weeks et al., 2019; Wood & Porter, 2019). All of this could affect news trust as well. If people mistrust news, they may become reluctant to consume professional news media or even avoid it altogether, which in the long run leads to greater inequality in access to information (Valenzuela et al., 2019; see also Dias & Sippitt, 2020).
In addition, we want to explore the role played by two message factors—transparency elements and arousing visuals—and one receiver factor—political orientations—that may alter the downstream effects of fact-checking. Past research suggests that these factors moderate the effects of exposure to information and misinformation, as they favor message credibility, news engagement, knowledge gains, elaboration, and policy preferences, among other attitudes (e.g., Grabe et al., 2015; Tenenboim, 2022; Vaccari et al., 2022; Young et al., 2018). Therefore, they could also have a role in how people perceive and are influenced by media messages correcting or debunking false claims, and thus shape the impact of fact-checking on people’s beliefs and attitudes.
To examine this, we conducted two preregistered survey experiments in Chile during 2021. This high-income Latin American nation offers a particularly interesting context to explore misinformation corrections and their effects. A country with a free press, Chile’s media system is nonetheless highly concentrated and rather homogeneous, with most news outlets having a conservative leaning (Gronemeyer et al., 2021; Valenzuela et al., 2019). More importantly, Chile has been experiencing an information disorder for the past decade (Cárcamo-Ulloa et al., 2023; Valenzuela et al., 2022), and there is evidence of high levels of exposure, credibility, and propensity to share false information (Halpern et al., 2019). In surveys, people often report familiarity with 30% to 80% of debunked statements that circulated in the preceding 6 to 12 months (Ceron et al., 2021; Halpern et al., 2019; Valenzuela et al., 2019). Particularly prevalent is misinformation about public affairs, crime, science, and natural disasters (Valenzuela et al., 2019).
In the last decade, Chile has also shown increasing levels of social discontent, political disaffection, and distrust in institutions (Bachmann et al., 2021; Bargsted et al., 2022; Somma et al., 2021). For instance, less than one in five voters identifies with a political party, and less than 10% of the population reports trusting political parties or Congress (Bargsted et al., 2022; Somma et al., 2021). Trust in news media in general, and legacy media in particular, has declined over the last ten years, with news users seeing the press as part of the political and economic elites (Newman et al., 2020; Valenzuela et al., 2022). After reaching a record low in the wake of the October 2019 riots (Bachmann et al., 2021), by 2022 only a third of news users reported trusting news in general (Newman et al., 2022).
Skeptical audiences often turn to news from social media and online-only sources (Fletcher & Park, 2017; Labarca et al., 2022; Stubenvoll et al., 2021), and indeed Chile also shows high levels of social media use (an antecedent of misinformation exposure and sharing, see Stubenvoll et al., 2021): 70% of internet users say they rely on social media for news (Newman et al., 2022), and most people report trusting social media more than legacy news outlets. Interestingly, fact-checking initiatives have flourished in the last 4 years, with some operating exclusively or mainly on social media, and have become a staple of politics and election coverage (Bachmann et al., 2022).
All in all, this makes for a scenario ripe for examining the effectiveness of misinformation corrections on the public’s beliefs and their influence on different outcomes.
Misinformation and False Content
Misinformation—information that is false (e.g., containing errors) but with no harm intended—and disinformation—content that is deliberately false and meant to cause harm (e.g., videos manipulated to deceive people) (Wardle & Derakhshan, 2017)—are very complex phenomena with emotional, cognitive, and structural factors. There is evidence that people have a hard time distinguishing legitimate news from falsehoods (Egelhofer & Lecheler, 2019), and disinformation often turns into misinformation when individuals amplify content that they do not know is factually false (Wardle, 2019). Because of that, throughout this article, we use the term “misinformation” to refer to false and misleading information, regardless of the intention to harm (Ognyanova et al., 2020; Valenzuela et al., 2019).
In the current media environment, misinformation is rampant and closely related to social media use (Vaccari et al., 2022; Valenzuela et al., 2019). Not surprisingly, a great deal of research has examined the prevalence and impact of misinformation. Empirical evidence shows that adults regularly encounter false news and get confused about different facts (Halpern et al., 2019; Ognyanova et al., 2020; Tsfati et al., 2020; Vaccari et al., 2022; Valenzuela et al., 2019, 2022). Misinformation has been linked to post-truth contexts going hand in hand with declining social capital, major economic inequalities, polarization, vanishing trust, and fractionated media landscapes (Lewandowsky et al., 2017; Wardle & Derakhshan, 2017, see also Cotter et al., 2022). Studies have also consistently shown that exposure to misinformation can undermine trust in traditional media and institutions (Lewandowsky & van der Linden, 2021; Ognyanova et al., 2020; Vaccari et al., 2022; Valenzuela et al., 2022), which in turn has been linked to preference for alternative sources of information (Strömbäck et al., 2020).
Somewhat less studied are the effects of exposure to fact-checking. The literature shows that countering misinformation is much more difficult than spreading false news, especially since not all individuals are equally receptive to corrections and data verification, and people often continue to rely on information that has been shown to be false (Ecker et al., 2020, 2021, 2022; Freeze et al., 2021; Lewandowsky et al., 2017). However, there is evidence that fact-checking and similar efforts can correct misperceptions and reduce belief in misinformation (e.g., Li et al., 2022; Porter & Wood, 2021; Porter et al., 2022; Walter et al., 2020).
Nonetheless, misinformation correction efforts could have a counterproductive effect. Since they make evident that some messages circulating in the public sphere are not correct or have not been verified, they could foster skepticism and cynicism, cast doubts about what is true, and even hamper evaluations of the credibility of journalistic work (Freeze et al., 2021; Stubenvoll et al., 2021; Vraga et al., 2019). This may be related to what in psychology has been called the “tainted truth effect,” whereby individuals who are warned about the possibility of misinformation tend to discount true information to a greater extent than those who are not aware of this risk (Freeze et al., 2021; Szpitalak, 2017). For instance, general warnings shown to participants before they read a set of headlines reduced the credibility of both truthful and untruthful headlines (Clayton et al., 2020). In other words, making people aware of the occurrence of false information may lead them to be more skeptical and dismiss true or verified information as false to minimize the risk of being deceived (see also Freeze et al., 2021).
Arguably, this could influence the way people evaluate news content and their attitudes toward media. This is important considering that confidence in traditional media has been declining globally since the late 1980s (Strömbäck et al., 2020). While news trust is often seen as an attitude stemming from people’s experiences with the media environment, it is a complex construct involving expectations toward news media and their content (Kohring & Matthes, 2007). Importantly, research has found that media trust can change in response to different situations (Adam et al., 2023; Newman et al., 2022). There is also evidence that short-term effects on media perceptions may accumulate over time, and thus interventions affecting people’s evaluations of news accuracy, fairness, bias, and trustworthiness may have long-term consequences for news credibility (Pingree, 2011; Pingree et al., 2013; Turcotte et al., 2015). It is possible, then, that misinformation corrections may further complicate perceptions about media and their content. While some level of distrust based on close examination and consideration (i.e., media skepticism) might be healthy and a preferred alternative to blind faith in all media content, a pervasive disposition not to trust, without examination (i.e., media cynicism), poses a challenge (Strömbäck et al., 2020; see also Cappella & Jamieson, 1997; Kohring & Matthes, 2007). Since news helps citizens navigate the public world beyond personal experience and is therefore a key element of political participation in democracy, it is important to examine whether exposure to fact-checks leads individuals to believe that nothing is accurate and fosters widespread cynicism about the veracity of news in general.
Mistrust in traditional media is correlated with the preference for alternative sources of information (Strömbäck et al., 2020; see also Clayton et al., 2020), and individuals forgoing professional news media and choosing their own reality and alternative facts (Lewandowsky et al., 2017) could greatly impact public discourse and the way citizens are involved in public affairs (Dias & Sippitt, 2020).
Correction Effectiveness and Formats
Several factors influence the way people process media messages and contents, and our ability to both process actual news and recognize misinformation as such is influenced by individual characteristics, such as demographics, media literacy, and personality, and by structural constraints, such as access to certain outlets (Ecker et al., 2021, 2022; Vraga et al., 2022).
Corrections have been found to be more effective if they are deemed credible or come from a credible source (Ecker et al., 2021; Hameleers & van der Meer, 2020; Vraga et al., 2022), but, in contexts of low media trust and high skepticism—as is the case in Chile—what makes a message credible is not that clear. Transparency has been posited as a means to strengthen confidence in news content. Transparency can adopt many forms, but it typically has to do with some openness or disclosure from reporters and news outlets about how news content came to be (Curry & Stroud, 2021; Karlsson et al., 2014, 2017; Peifer & Meisinger, 2021). Transparency is supposed to foster increased message credibility, which in turn is expected to favor news engagement. The argument goes that more credible news content is more likely to be shared, as people are more willing to expend time, energy, and resources on news they find credible (Peifer & Meisinger, 2021; Tenenboim, 2022). Studies conducted in the United States suggest that transparency leads to modest increases in credibility evaluations and intentions to engage with news (Curry & Stroud, 2021). However, in their studies with Swedish citizens, Karlsson and colleagues (2014, 2017) found little connection between transparency and credibility, with disclosures and corrections influencing credibility evaluations only for those who already trusted media.
While disclosure of sources and transparency of verification methods are part of the code of principles of the International Fact-Checking Network (IFCN), an association promoting best practices for fact-checkers (International Fact-Checking Network [IFCN], 2016), these standards are not necessarily met by all fact-checking organizations. In Chile, for instance, only two organizations have been certified by the IFCN. Moreover, fact-checks and their adjudications are regularly posted and shared via social media in such a way that they have little if any space to explain the authorship, methods, or sources of a verification (even though these elements might be available on a linked website). Therefore, in this study, we explore the use of transparency elements in fact-checks correcting misinformation.
Another relevant message factor for media content is the use of arousing visuals. Research shows that emotionalizing content and highly arousing visuals can positively affect knowledge gains, perceptions of appeal and issue relevance, and content evaluations (Grabe et al., 2015, 2017; Mena et al., 2020; Mujica & Bachmann, 2018). For instance, the use of arousing images (e.g., close-ups or images with color changes) has been shown to elicit emotions and memories, fostering an affective link to the content (e.g., Vettehen et al., 2008), and may favor knowledge and credibility (de León & Trilling, 2021; Grabe et al., 2015; Mujica & Bachmann, 2018). Further, people’s responses to fact-checking, especially in the political realm, may be influenced by emotional arousal, particularly when it interacts with individuals’ beliefs and with in-group or out-group sources (e.g., Carnahan & Bergan, 2022; de León & Trilling, 2021; Shin & Thorson, 2017).
While corrective messages tend to favor just a presentation of facts (e.g., whether something is true or false, and the evidence supporting such a claim; see Lewandowsky et al., 2012; Zheng et al., 2021), some scholars have argued that emotions could help overcome challenges to recognize and question misinformation (Yeo & McKasy, 2021) by eliciting arousal in news users and enhancing recall, elaboration, and content sharing (de León & Trilling, 2021; Zheng et al., 2021). Thus, this study also tests the impact of emotionalizing visuals on the effectiveness of misinformation corrections.
Arguably, fact-checks deemed more credible should be more effective. People are more likely to process messages with attention and fewer counterarguments when they consider the arguments to be strong (e.g., Nabi, 2002), and higher argument quality perceptions have been shown to enhance source liking and processing depth for all kinds of information (e.g., Cyr et al., 2018). Further, information quality and source credibility have been linked to users’ sharing behavior (e.g., Tenenboim, 2022; Vaccari et al., 2022) as well as news and misinformation processing (e.g., Mena et al., 2020). Thus, they could be relevant to misinformation corrective efforts.
However, individuals are often motivated to process (mis)information that supports or protects their preexisting beliefs and identities, and thus the effectiveness of fact-checking might depend on people’s existing political preferences (Hameleers & van der Meer, 2020; Li et al., 2022; Shin & Thorson, 2017). When corrective messages challenge their attitudes, individuals tend to respond rather defensively so they can preserve their points of view and sometimes even double down on their inaccurate beliefs (Lewandowsky et al., 2012; Wood & Porter, 2019). After all, people confronted with contrary evidence often become “motivated skeptics” (Kunda, 1990), and pro-attitudinal fact-checks are more likely to be shared than counter-attitudinal corrections (Shin & Thorson, 2017). Yet, recent evidence suggests that fact-checks may discredit misinformation even if they are incongruent with prior beliefs, thus compensating for partisan biases in news interpretation (Hameleers & van der Meer, 2020; Wood & Porter, 2019).
Relatedly, exposure to uncongenial corrections may cast doubt about the news media and fact-checking, and increase people’s perceptions that these organizations are politically hostile and not to be trusted (Stubenvoll et al., 2021; Wood & Porter, 2019). Partisans have been shown to be poor judges of news content, rating neutral content as biased against their views—that is, they develop a hostile media perception (Vraga & Tully, 2015; see also Li et al., 2022; Weeks et al., 2019). Along these lines, a fact-check debunking one’s beliefs could be deemed biased and less credible, and negatively affect perceptions and attitudes toward fact-checking in particular and toward media at large. Indeed, there is evidence that content congruency with one’s opinions impacts perceptions of bias and credibility of social media messages, whether by fellow users or by news organizations (Gearhart et al., 2020; Weeks et al., 2019). Therefore, in this study, we also examine the effectiveness of pro- and counter-attitudinal fact-checks.
Hypotheses and Research Questions
Building on the cited literature, we posed several hypotheses and research questions that were pre-registered in the OSF registry prior to data collection (https://osf.io/yp67h/registrations).
Our baseline hypothesis is that participants exposed to fact-checks correcting misinformation will have more accurate beliefs compared to participants in the no-misinformation and non-corrected misinformation control groups (H1). We further hypothesized that fact-checks containing transparency elements and arousing visuals would be more effective in terms of belief accuracy and social media engagement intentions than fact-checks without these elements (H2), and that participants would evaluate transparent, visually arousing fact-checks higher in argument quality and perceived credibility (H3). We also anticipated that pro-attitudinal fact-checks would have a stronger effect on belief accuracy than counter-attitudinal ones (H4).
Moving on to the downstream effects of fact-checking, we address two research questions: Does exposure to fact-checks correcting misinformation influence media trust or trust in news coverage, including fact-checking organizations? (RQ1) And do pro- and counter-attitudinal fact-checks correcting misinformation differentially affect attitudes toward media? (RQ2)
Method
Data
We conducted two experiments embedded in online surveys fielded by Netquest in Chile from September 24 to October 7, 2021 (Study 1), and from December 2 to 13, 2021 (Study 2). Study 2 was designed after collecting data for Study 1, to replicate and extend the results of the first study. In both cases, to make the samples more representative of the population, respondents were selected to match the gender, age, and region of residence distributions of the latest CEP poll—a nationally representative, probabilistic survey. Because internet samples underrepresent people from lower socioeconomic status and overrepresent those from higher status, we oversampled low-SES respondents and undersampled high-SES respondents by establishing a quota of 25% for each group (i.e., the quota for middle-SES participants was set at 50%). After applying preregistered exclusions, sample sizes were 698 (Study 1) and 774 (Study 2). A benchmark analysis is available in Supplemental Appendix C, and an a priori power analysis in Supplemental Appendix D.
Design and Procedures
The two studies had a similar between-subjects experimental design. Following Ecker and colleagues (2021), all treatments were compared against two control conditions: a no-misinformation group and a misinformation-only (i.e., no fact-checking) group. Study 1 comprised four treatments in a 2 (high vs. low transparency fact-checks) × 2 (high vs. low arousal visuals in fact-checks) design. Study 2 expanded to eight treatments in a 2 (pro- vs. counter-attitudinal fact-checks) × 2 (high vs. low transparency fact-checks) × 2 (high vs. low arousal visuals in fact-checks) design. In this case, the misinformation-only control group was also split into exposure to pro- and counter-attitudinal messages.
In both studies, after accepting the IRB-approved consent form, participants were asked questions about their sociodemographic background, political ideology, and media use. Subsequently, all participants were randomly shown two Facebook posts containing the experimental manipulations (see Materials below). Respondents were instructed to “read each of them carefully, as you will be asked to evaluate these messages.” Respondents were then asked their intentions to engage with the message on social media as well as questions measuring the posts’ perceived argument quality and credibility. The posttest measured key outcomes, including belief accuracy and media trust. Last, subjects were debriefed about the nature of the experiment, which included information about the veracity of each claim to which they were exposed earlier. Median times to complete the studies were 18 and 23 min, respectively. The original and translated questionnaires, survey flows, stimuli, data, and code necessary to replicate the empirical analyses are available on the OSF project page: https://osf.io/yp67h/.
Materials
For Study 1, we pretested 32 different posts containing misinformation about 8 topics related to the COVID-19 pandemic (e.g., curfews, sanitary inspections, return to in-person classes, and PCR tests) on a convenience sample of undergraduate and graduate students (N = 112) to find two with sufficient variance in credibility ratings. In the end, we settled on two claims: (1) that the coronavirus is resistant to chlorine in swimming pools and (2) that the Ministry of Health spent $371 million Chilean pesos (≈ $473,000 U.S. dollars) on purchases of overpriced face masks. Thus, while both claims are about COVID-19, one is about a purely scientific aspect while the other is related to government affairs. Participants in the no-misinformation control group read two filler stories in line with what news users would routinely find in their timelines at the time: one about a new alcohol law and another about the last presidential election in Peru. Those in the uncorrected misinformation condition were exposed to the original, false claims that were referenced in the fact-checks (but not to the corrections). The remaining conditions all contained fact-checked messages, varying in transparency and emotional arousal. To manipulate transparency, articles contained in boldface type either all or none of the following elements: author’s name, author’s Twitter handle, and information sources used in the correction (Curry & Stroud, 2021). To manipulate arousal, we included or excluded affective pictures from the OASIS database (Kurdi et al., 2017), validated globally as arousing images. To increase external validity, all stimuli were based on real corrections published on Facebook by FastCheck.cl, Chile’s leading independent fact-checking organization, but were edited to have a similar format, length, and timeliness. The median reading time of each message was 1 minute.
After completing Study 1, and to ensure that findings were not dependent upon the topic of the stimuli, we designed Study 2 as a replication of Study 1 but with a new topic: elections. We expected this to tap into preexisting attitudes and identities to a greater extent than the COVID-19 issue, and also decided to explore the role of attitudinal strength/partisanship in moderating fact-checking effects. In Study 2, we used actual candidate statements from the 2021 Chilean presidential election that were debunked by fact-checking organizations, in line with our goal of maximizing external validity. Following the advice of Pennycook et al. (2022), we searched for misleading claims that were neither widely shared nor outdated by the time of the survey, and that were comparable in topic. In the end, we chose claims made by the main candidates, the leftist Gabriel Boric and the right-winger José Antonio Kast, in which they either defended a policy position using false, exaggerated statistics or attacked the opponent with blatant policy misrepresentations and straw-man arguments. To balance pro- and counter-attitudinal fact-checks, we picked two pro-Boric/anti-Kast (e.g., “Boric claims that Kast will increase pensions to the Armed Forces only”) and two pro-Kast/anti-Boric (e.g., “Boric’s pardon bill will set free people accused of throwing Molotov cocktails at police officers”) debunked statements. As in Study 1, respondents in the no-misinformation control group were shown filler stories (e.g., “More than 200 municipalities agree to create an association to sell gas at a lower price”), whereas those in the misinformation-only control condition read the original, false claims only.
Measures
For belief accuracy, respondents were asked how credible they found each claim presented in the Facebook posts they read. To protect against response bias, the claims were included within larger batteries measuring the credibility of other false as well as verified claims not contained in the stimuli. For social media engagement intentions, participants were asked whether they would like, share, or reply to each of the Facebook posts they read. Message credibility and argument quality were measured by borrowing items from studies on the elaboration likelihood model (Cyr et al., 2018). More specifically, we asked participants to indicate how credible they found each post on a 5-point scale (perceived credibility). For argument quality, we used a scale by Bhattacherjee and Sanford (2006). Media trust was operationalized in two ways. First, respondents were asked how much confidence they have in TV, radio, newspapers, online news, fact-checking, WhatsApp, Facebook, and other social media. Based on the results of a factor analysis (Supplemental Appendix F), we created separate scales of confidence in traditional media, social media, and fact-checking. Second, we used a short version of the trust in news media scale by Kohring and Matthes (2007). For hostile media perception (Study 2), we took the absolute difference between self-placement on the left-right scale and the placement of the news media on the same scale. Descriptive statistics and the exact question wording of variables are available in the Supplemental Appendix (sections G and H, respectively).
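The hostile media perception measure reduces to a simple distance computation on the left-right scale; a minimal sketch (function and variable names are ours, not from the study’s codebook):

```python
def hostile_media_perception(self_placement: float, media_placement: float) -> float:
    """Absolute difference between a respondent's left-right self-placement
    and where they place the news media on the same scale."""
    return abs(self_placement - media_placement)

# A respondent at 2 (left) who places the media at 8 (right)
# scores 6: the media are perceived as ideologically distant.
print(hostile_media_perception(2, 8))  # 6
```

Larger values indicate that respondents see the media as further from their own political position.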
Analyses
In line with our preregistrations, fact-checking effects were estimated using linear multiple regression models. To increase the precision of our estimates, we preregistered the inclusion of the following covariates: gender, age, socioeconomic status, geographic location, and whether the questionnaire was completed using a computer or a mobile device. To facilitate the interpretation of regression estimates, we calculated and plotted average marginal effects using Stata 16.1’s margins command. We used one-tailed p values for hypotheses and two-tailed p values for research questions.
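As an illustrative sketch of this estimation strategy (simulated data and a single covariate for brevity; the published analyses used Stata with the full preregistered covariate set), the treatment effect can be recovered from an OLS regression of the outcome on a treatment indicator plus covariates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated data: treatment = exposure to a fact-check (0/1), one covariate (age).
treat = rng.integers(0, 2, size=n)
age = rng.uniform(18, 80, size=n)
# True treatment effect on belief accuracy set to 0.4 for this simulation.
accuracy = 2.5 + 0.4 * treat + 0.01 * age + rng.normal(0, 1, size=n)

# OLS: accuracy ~ intercept + treat + age
X = np.column_stack([np.ones(n), treat, age])
beta, *_ = np.linalg.lstsq(X, accuracy, rcond=None)

# With a binary treatment entered linearly, the average marginal effect
# (what Stata's `margins, dydx()` reports) equals the treatment coefficient.
print(round(beta[1], 2))
```

This is only a conceptual sketch of how a covariate-adjusted treatment effect is estimated, not a reproduction of the study’s code or data.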
Results
H1 posits that participants exposed to fact-checks will have more accurate beliefs compared to participants in the control groups of no misinformation and uncorrected misinformation. As shown in Figure 1, we found significant fact-checking effects across studies. Compared to exposure to misinformation only, exposure to fact-checks led to an average increase in belief accuracy of 0.46 in Study 1 [90% confidence interval (CI): 0.28, 0.65], F(1, 690) = 16.77, p < .001, and 0.26 in Study 2 [0.10, 0.43], F(1, 766) = 7.16, p = .008. Importantly, for three of the four claims, fact-checks increased belief accuracy beyond that of the baseline condition of no misinformation (full results in Supplemental Appendix I). The results, thus, support H1.

Figure 1. Contrasts testing the effects of fact-checking on belief accuracy.
For H2, we conducted separate tests to evaluate the impact of transparency elements and arousing visuals, as individual factors, on both belief accuracy and social media engagement intentions. As shown in Figure 2, all fact-checks were similarly effective. Furthermore, intentions to like, share, or reply did not differ significantly across treatment groups. If anything, fact-checks were significantly less likely to produce Facebook interactions than the filler articles of the no-misinformation group. H3, in turn, hypothesized that participants would evaluate transparent, visually arousing fact-checks higher in argument quality and perceived credibility than fact-checks without these elements. Contrary to expectations, the effects were rather constant across treatments (see Supplemental Appendixes J-L for full results). Hence, neither H2 nor H3 was supported.

Figure 2. Contrasts testing the effects of specific fact-checking formats on belief accuracy.
In H4, we anticipated that pro-attitudinal fact-checks would have a stronger effect on belief accuracy than counter-attitudinal ones. We tested this expectation in Study 2, which measured candidate evaluation and vote choice prior to experimental treatment assignment. Indeed, fact-checks that aligned with participants’ political preferences led to an increase of 0.40 [0.19, 0.62] in belief accuracy on a five-point scale when compared to those exposed to pro-attitudinal misinformation, F(1, 711) = 9.56, p = .002. Counter-attitudinal fact-checks, in contrast, did not improve belief accuracy over counter-attitudinal misinformation, F(1, 711) = 0.18, p = .670. Nevertheless, when compared to the baseline condition of no misinformation, both pro-attitudinal fact-checks, F(1, 711) = 17.20, p < .001, and counter-attitudinal ones, F(1, 711) = 6.88, p = .009, improved belief accuracy significantly, with the former having a larger effect than the latter, F(1, 711) = 9.66, p = .002. Thus, H4 was partially supported (full results in Supplemental Appendix M).
RQ1 asked about the effects of fact-checking on media trust, including trust in news coverage and in fact-checking organizations. The results are displayed in Figure 3. As shown in the left panel, in both studies, we found that exposure to fact-checks did not alter media trust scores relative to the baseline condition of no misinformation. When compared to the misinformation-only group, however, Study 2 found that fact-checks decreased trust in news content, F(1, 766) = 4.41, p = .036, and in media institutions, F(1, 766) = 7.15, p = .008 (see Supplemental Appendix N).

Figure 3. Contrasts testing the effects of fact-checking on media trust variables.
Last, RQ2 inquired about the effects of pro- and counter-attitudinal fact-checks correcting misinformation on attitudes toward the media. We addressed this question in Study 2, conducted in the context of the 2021 presidential election. When compared to the baseline group of no misinformation, none of the experimental treatments had a significant influence on the media trust variables. Hostile media perceptions were a different matter, as all treatments lowered perceived hostility relative to the baseline (see Supplemental Appendix P). The contrast analyses yielded significant differences, too. When compared to counter-attitudinal misinformation, exposure to counter-attitudinal fact-checks decreased trust in media, −0.35 [−0.64, −0.06], F(1, 711) = 4.01, p = .046, and increased participants’ hostile media perceptions by 1.41 [0.28, 2.54], F(1, 557) = 4.24, p = .040. The effects of pro-attitudinal fact-checks were less consistent. While exposure to this type of correction reduced trust in news content relative to pro-attitudinal misinformation by 0.32 [0.07, 0.57], F(1, 711) = 4.54, p = .033, it had no discernible effect on hostile media perceptions, F(1, 557) = 1.21, p = .271.
Discussion
This article aimed to expand the literature on misinformation correction and news credibility in the context of information disorders, polarization, political disenfranchisement, and media distrust, and explored whether exposure to fact-checking has downstream effects beyond correcting misperceptions. Based on past research showing their role in moderating the effects of information and misinformation exposure, our experiments focused on the role of transparency elements and emotionalizing content in fact-checks and their impact on several outcomes regarding the evaluation and effectiveness of such corrections. Results indicate that fact-checking does work, as it significantly increases belief accuracy. These effects are comparable to, if not larger than, those reported in existing meta-analyses on fact-checking. Using Cohen’s d as a metric to compare the effect of exposure to fact-checks relative to exposure to uncorrected misinformation, Walter et al. (2021) found an average effect of fact-checking on health beliefs of d = 0.40 [95% CI = 0.25, 0.55]. In Study 1, we estimated an effect size of d = 0.51 [0.26, 0.76]. Likewise, Walter et al. (2020) estimated an average effect of corrections on political issues of d = 0.29 [0.23, 0.36], while in Study 2 we estimated d = 0.38 [0.10, 0.67]. That is, the estimated effects of our fact-checking treatments on belief accuracy are at, or above, what prior meta-analyses have found.
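For readers less familiar with the metric, Cohen’s d expresses the standardized mean difference between two groups. A sketch of the standard definition, where the subscripts fc and mis (our labels, for the fact-check and uncorrected-misinformation groups) index group means, standard deviations, and sample sizes:

```latex
d = \frac{\bar{x}_{\mathrm{fc}} - \bar{x}_{\mathrm{mis}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_{\mathrm{fc}}-1)\,s_{\mathrm{fc}}^{2} + (n_{\mathrm{mis}}-1)\,s_{\mathrm{mis}}^{2}}{n_{\mathrm{fc}}+n_{\mathrm{mis}}-2}}
```

Because d is scale-free, it allows the effect sizes estimated in Studies 1 and 2 to be compared directly against the meta-analytic averages reported by Walter et al. (2020, 2021).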
Such a finding is particularly relevant given the national context of the present study. Chile has seen important changes in the media landscape in recent years, as well as major transformations in the political sphere and mounting social discontent (Bachmann et al., 2021). Citizens have become quite distrustful of political and media elites, which is why confirmation that fact-checking works even under those conditions is important: these highly critical individuals—motivated skeptics, even—still benefit from improved belief accuracy after being exposed to misinformation corrections, and this in turn results in better-informed citizens.
Furthermore, the format of the fact-checks made little difference. It did not matter whether the corrections included transparency elements or arousing visuals, even though these factors have been shown to affect content processing in other contexts. Rather, in our experiments, subjects in all conditions saw similar increases in belief accuracy and reported the same levels of social media engagement intentions, perceived credibility, and argument quality. In other words, fact-checking on its own, regardless of format, does not influence users’ evaluations of the corrections, nor does it elicit greater engagement from news audiences. These results are in line with past research on fact-checking formats and styles finding no fundamental benefit of using a narrative format in misinformation corrections (Ecker et al., 2020; Huang & Wang, 2022), or showing that corrections based on humor and satire may be as effective as factual ones (Boukes & Hameleers, 2023; Yeo & McKasy, 2021). This suggests that what really matters is correcting misinformation, not the details of its presentation, despite multiple efforts to find optimal correctives (Walter et al., 2020; however, see Carnahan & Garrett, 2020; Sangalang et al., 2019; Zheng et al., 2021).
We also found that the effect of fact-checking is significantly stronger when the correction is pro-attitudinal, whereas counter-attitudinal fact-checks did not improve belief accuracy and increased participants’ hostile media perceptions. Yet this confirmation bias effect is relatively moderate and—more importantly—does not cancel out the positive impact of exposure to fact-checking on respondents’ belief accuracy. However, this could change in a context of increasing political polarization: as positions become more extreme and people more inflexible about their beliefs and identities—as seen in Chile and other countries—it is possible that the effectiveness of misinformation corrections would take a hit, with counter-attitudinal corrections triggering a more skeptical assessment of the information at hand.
Fact-checking effectively reduces misperceptions, even if it does not improve the news media’s standing in people’s eyes; indeed, the misinformation corrections in our experiments had a counterproductive downstream effect, at least in Study 2. While in Study 1 fact-checks failed to influence participants’ attitudes toward the news media or their content, in Study 2 exposure to fact-checking had a significant negative impact on media trust relative to the misinformation control group. Depending on the exact format, respondents who saw corrections reported between 6% and 8% less trust in news content and media institutions compared to those who only saw uncorrected misinformation. Importantly, this effect emerged after a single exposure, with individuals immediately reporting less trust in news as well as greater perceptions of media bias. This suggests that exposing individuals to misinformation corrections makes them more critical of news media, a seeming overcorrection in line with the tainted truth effect (Freeze et al., 2021; Szpitalak, 2017). Considering the Chilean context, where most people report distrusting the news, this could be a warning sign if such levels of (dis)trust reflect media cynicism rather than healthy skepticism and critical thinking. That is, making already critical citizens aware of the existence of misinformation could be detrimental to people’s evaluations of news media and the credibility of news work and, arguably, affect public discourse. The exact mechanisms behind this outcome remain unclear, and more research is needed to fully understand the effects of fact-checking exposure on users’ perceptions of the media, but the results reported here suggest that, for all its effectiveness, fact-checking may come at a price.
These studies are limited in that they relied on experiments with strong internal validity but limited ecological validity. For instance, they drew participants from an opt-in panel, used forced exposure to Facebook posts, and measured credibility evaluations immediately after exposure. As such, it is not clear whether the effects would be the same in a more naturalistic setting. Still, these results have practical and theoretical implications. First, fact-checking matters, even if the specific formats in which corrections are presented might not; second, corrections are believable and improve belief accuracy, although they could worsen media trust. In an age of endemic misinformation, the exact effects of exposure to misinformation correction should be examined more thoroughly and move beyond belief accuracy. The challenge ahead is to identify the exact mechanism behind fact-checking’s capacity to produce attitude change.
Supplemental Material
sj-docx-1-sms-10.1177_20563051231179694 – Supplemental material for Studying the Downstream Effects of Fact-Checking on Social Media: Experiments on Correction Formats, Belief Accuracy, and Media Trust
Supplemental material, sj-docx-1-sms-10.1177_20563051231179694 for Studying the Downstream Effects of Fact-Checking on Social Media: Experiments on Correction Formats, Belief Accuracy, and Media Trust by Ingrid Bachmann and Sebastián Valenzuela in Social Media + Society
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The authors acknowledge funding from the National Agency of Research and Development of Chile (ANID) and its Program for Scientific Information (Grant PLU-200009), as well as through the Millennium Nucleus on Digital Inequalities and Opportunities (NUDOS) [grant NCS2022_046]. The first author also received funding from ANID-FONDECYT grant 1231378. The second author also received funding from the Millennium Institute for Foundational Research on Data (IMFD) [grant ANID-ICN17_002] and ANID-FONDECYT grant 1231582.
