Abstract
Despite a wealth of research examining the effectiveness of correction of misinformation, not enough is known about how people experience such correction when it occurs on social media. Using a study of US adults in late March 2020, we measure how often people witness correction, correct others, or are corrected themselves, using the case of COVID-19 misinformation on social media. Descriptively, our results suggest that all three experiences related to correction on social media are relatively common and occur across partisan divides. Importantly, a majority of those who report seeing misinformation also report seeing it corrected, and a majority of those who report sharing misinformation report being corrected by others. Those with more education are more likely to engage in correction, and younger respondents are more likely to report all three experiences with correction. While experiences with correction are generally unrelated to misperceptions about COVID-19, those who correct others have higher COVID-19 misperceptions.
Observational Correction on Social Media During COVID-19
The World Health Organization (WHO) has called attention to the “infodemic” that exists alongside the COVID-19 pandemic, arguing that social media plays a dangerous role in amplifying the spread of misinformation (WHO, 2020). For COVID-19, misinformation has been documented on a number of topics, including official governmental and medical organization actions, how the virus spreads, ways to prevent or treat infection (including vaccines), and the virus’ origins (Brennen et al., 2020, 2021; Enders et al., 2020; H. K. Kim et al., 2020).
Concerns about the role of social media in amplifying misinformation are by no means new, with multiple studies raising alarms about the prevalence of misinformation across social media platforms for both emerging diseases and enduring health issues (e.g., Broniatowski et al., 2018; Chou et al., 2018; Del Vicario et al., 2016; Guidry et al., 2015). We focus here on responding to misinformation, as studies of correction on social media demonstrate consistently positive effects (Walter et al., 2020).
While these experimental studies of correction show its promise on social media (Porter & Wood, 2019; Vraga & Bode, 2020), less is known about how people experience it in practice, and indeed some researchers claim experience with correction is rare (Weeks & Gil de Zúñiga, 2021). There are several ways that people might experience misinformation and its correction on social media. They might be corrected themselves, if they share misinformation on social media (or if someone else thinks that they have). They might correct someone else, if they see them share misinformation and are motivated to share corrective information with that person. Or they might witness someone else being corrected (known as observational correction, e.g., Vraga & Bode, 2017). Therefore, the first goal of this article is to document how often people experience correction of misinformation on social media in the context of COVID-19 in three ways: witnessing someone else being corrected, engaging in correction of others, and being corrected themselves.
To reach the entire population, it is important that different kinds of people experience correction, so we also examine whether certain types of people—more or less educated, old versus young, and Republican versus Democrat—are more or less likely to experience correction in these three ways.
Finally, if being corrected or observing correction is effective on a broad scale, we would expect these experiences to be associated with lower beliefs in misinformation. The final goal of the article is to therefore explore whether witnessing correction, being corrected, or correcting others is related to beliefs in COVID-19 myths (misperceptions).
Viewing and Engaging in Correction
In this article, we focus on responsive correction—correction that takes place after exposure to misinformation. This includes seeing a fact-check (Ecker et al., 2020), a correction from a platform (Bode & Vraga, 2015), or another social media user refuting a claim (Margolin et al., 2018). Notably, it does not include work that focuses on corrective information that comes before misinformation. 1 This would include work on pre-bunking and inoculation, which tries to mitigate the impact of misinformation before it even occurs (Banas & Rains, 2010; Cook et al., 2017; van der Linden et al., 2017).
Although there are examples of correction failures (see Nyhan et al., 2014; Nyhan & Reifler, 2010), meta-analyses of this type of responsive correction have demonstrated that it is consistently, if modestly, effective (Walter & Murphy, 2018), including specifically in the domain of correction of health misinformation on social media (Walter et al., 2020). This is true for both relatively entrenched issues like the safety of flu vaccination, and for emerging issues like the spread of Zika in 2016, as well as across multiple social media platforms, including Facebook, Twitter, and Instagram (Bode & Vraga, 2015; Smith & Seitz, 2019; van der Meer & Jin, 2020; Vraga et al., 2020; Walter et al., 2020). Moreover, corrections are effective when they emerge from experts within the platform, from algorithmic action by the platform itself, and—critically—from everyday users of the platform (Bode & Vraga, 2018; Vraga & Bode, 2018).
People report seeing substantial amounts of misinformation on social media platforms (Pew Research Center, 2019, 2020a), offering ample opportunity for correction to occur. For example, 57% of those who get most news from social media say they have seen at least some misinformation about COVID-19 (Pew Research Center, 2020a). And when exposed to this misinformation, previous research has found that somewhere between 25% and 35% of users respond by engaging in some form of correction, at least some of the time (Chadwick & Vaccari, 2019; Tandoc et al., 2020).
These are useful data points, but they also reveal several limitations in existing data. First, most previous research has focused on corrections to news or political topics (Porter & Wood, 2019; Walter & Murphy, 2018), with some work on correction of health misinformation (Walter et al., 2020). In contrast, we document correction in the context of COVID-19, the disease caused by the novel coronavirus. This is a distinct context—that of an emerging health crisis—and one which has been handled differently by social media companies than previous misinformation topics (Facebook, 2020; Skopeliti & John, 2020), which might change how much misinformation is circulating on social media, and how users choose to respond to it. As such, we can compare exposure to and corrections of misinformation related to COVID-19 with previous research in different contexts (e.g., Chadwick & Vaccari, 2019; Tandoc et al., 2020).
Second, research has largely focused on who corrects others, but does not speak directly to the question of how often people witness such corrections, or are corrected themselves. Observational correction fundamentally includes not just the individual being corrected, but also the community seeing the interaction—and as such, may represent a larger percentage of the population than those willing to correct others themselves.
We therefore ask what percentage of US adults report (a) witnessing correction of misinformation, (b) engaging in such correction themselves, and (c) being corrected by others (RQ1) on social media in the context of COVID-19. To contextualize these findings, we also investigate how many people report (d) seeing misinformation and (e) sharing misinformation (since social media users cannot see correction or correct others without seeing misinformation, and they cannot be corrected without sharing misinformation).
Who Experiences Corrections?
In addition to providing an overview of the frequency of experiencing correction in these different ways, we also examine what makes such experiences more or less likely. If correction is happening, but only for a subset of the population, that represents a different reality than if correction happens roughly equally across a broad swath of American adults using social media. To further investigate this question, we consider which individual attributes predict experiences with correction on social media. Our expectations vary depending on the outcome in question.
First, previous research has found that older adults are more likely to share misinformation on social media (Guess et al., 2019). In general, there are widespread concerns about the media literacy and digital literacy of older adults (Hunsaker & Hargittai, 2018; Pew Research Center, 2017), which might contribute to their acceptance of misinformation. This might therefore lead them to be more likely to experience correction themselves (if they are sharing more misinformation), and less likely to engage in correction of others (if they hold misperceptions). However, recent research shows that younger adults are actually more likely to accept misinformation about COVID-19 (Baum et al., 2020). This research, though, focuses on the relationship between age and misperceptions; we do not know of existing data documenting what age range is most likely to witness correction, correct others, or be corrected. As a result, we simply explore whether age is related to (a) engaging in correction themselves, and (b) being corrected by others (RQ2).
Likewise, partisan differences in misperceptions around COVID-19 have emerged in the United States. In April 2020 (1 month after our study was conducted), Republicans were almost twice as likely as Democrats to believe that the virus was made in a lab (Pew Research Center, 2020b). Since that time, the pandemic has become increasingly politicized, leading to a partisan gap in individual public health behaviors like physical distancing (Gollwitzer et al., 2020). However, while partisan gaps in misperceptions have been documented, that does not necessarily lead to more or less experience with correction. For that reason, we simply ask whether partisanship is related to (a) engaging in correction themselves, and (b) being corrected by others (RQ3).
In addition, education is generally negatively associated with willingness to believe misinformation, and so should play a key role in experiences with correction as well. Those who are more educated likely have more training in critical thinking, media literacy, and digital search strategies—which we might expect to be related to correction behaviors (Hunsaker & Hargittai, 2018; Kahne & Bowyer, 2017; Mihailidis & Viotty, 2017). We therefore expect those with more education to be (a) more likely to correct others and (b) less likely to be corrected themselves (H1).
Sources of Information
There is also reason to believe that the sources people rely on for information about the COVID-19 pandemic might affect their experiences with correction. Broadly speaking, a long line of media effects research has demonstrated that information influences attitudes and behaviors (Druckman, 2005; Feldman et al., 2012). For example, previous research has linked news choice, especially for partisan news channels, with issue attitudes and beliefs, including misperceptions about the war in Iraq and the Affordable Care Act (Kull et al., 2003; Meirick, 2013).
Early evidence suggests this trend continues when considering COVID-19: right-leaning news outlets in the United States like Fox News were more likely to discuss COVID-19 misinformation, and their viewers were more likely to endorse such misinformation (Motta et al., 2020) and to engage in fewer COVID-19 prevention behaviors (Zhao et al., 2020). This research provides a theoretical groundwork for suggesting that information sources matter for online correction behaviors, but to our knowledge no research has specifically investigated these relationships, especially in the context of COVID-19. Therefore, we are interested in whether the information sources people rely on to learn about COVID-19 are related to how often people report witnessing correction, correcting others, or being corrected themselves. We consider four classifications of information sources.
First, we think that it is important to consider whether people are receiving information directly from health experts like scientists, the CDC (Centers for Disease Control and Prevention), the WHO, and doctors/experts online. As primary sources, these should be the most reliable sources of information about the pandemic, and research suggests that people who engage in debunking behavior on Twitter are more likely to cite specific and verifiable authorities like the CDC or WHO than those who are sharing misinformation (McGlynn et al., 2020). In general, we argue that people who report relying on more reliable information (in this case, health experts) should be (a) more likely to correct others and (b) less likely to be corrected themselves (H2).
Second, we consider people relying on COVID-19 information they receive from national and local news sources. Although news media is generally expected to be a reliable conduit of information, transmitting information from health experts to the public (De Coninck et al., 2020), mainstream news can amplify misinformation (Marwick & Lewis, 2020; Papakyriakopoulos et al., 2020), and may be a more frequent source of misinformation for most people than social media is (Tsfati et al., 2020). Because it is unclear whether reliance on mainstream news should increase or decrease exposure to misinformation, we ask what effect such reliance has on (a) correcting others, and (b) being corrected (RQ4).
An additional category of information is that encountered on social media. Social media has developed a reputation for spreading misinformation (Garrett, 2019; Vosoughi et al., 2018), but can also be a source of correction (Vraga & Bode, 2020). In addition, in an unprecedented move, social media companies have united to battle COVID-19 misinformation and promote reliable information on their platforms, which may change the typical dynamics of social media for this case specifically (Sonnemaker, 2020). These conflicting expectations lead us to propose a research question: What is the relationship between reliance on social media for information and (a) correcting others or (b) being corrected? (RQ5)
One final source of information is that originating from President Trump. Given his prominence as a voice in the pandemic (Liasson, 2020) and his tendency toward disseminating misinformation (Kessler et al., 2020), he is important to include as an additional information source. In light of his track record of sharing misinformation about COVID-19, we expect that people relying on him as an information source are less well informed than those who do not. However, as it is unclear how misperceptions relate to experience with correction, we simply ask how reliance on Trump as a source of information relates to (a) engaging in such correction themselves, and (b) being corrected by others (RQ6).
Witnessing Correction
Next, although we do not have specific expectations about how participant attributes or information sources should affect this, we ask how these elements predict witnessing correction on social media (RQ7). Individuals are less in control of whether they witness correction than of whether they choose to correct others, so we do not expect particular attributes to make people more or less likely to witness correction. But given the effectiveness of witnessing correction (“observational correction,” Vraga & Bode, 2020), any systematic differences may have important downstream effects worth knowing about.
Experiencing Correction and General Misperceptions
To date, studies have explored the effects of exposure to specific corrections, often in response to misinformation being shared (Bode & Vraga, 2015; Smith & Seitz, 2019; van der Meer & Jin, 2020; Vraga & Bode, 2017). While research suggests exposure to such corrections is broadly effective at reducing misperceptions (Porter & Wood, 2019; Walter & Murphy, 2018), existing studies are limited by their artificiality—studying simulated exposure to a specific correction and its immediate effects using an imitated social media experience. Therefore, we do not know whether these effects are maintained among people who report experiencing correction through witnessing correction, being corrected, or correcting others. Self-reports of experiences with misinformation and correction offer an alternative way of studying this phenomenon, albeit one with its own limitations.
We offer several expectations in terms of how experiences with correction regarding COVID-19 might affect one’s own misperceptions on the subject.
First, people who correct others on social media should have lower misperceptions than those who do not (H3). For people to engage in correction, they (ideally) need accurate knowledge. As such, this relationship also functions as a check as to whether those who believe they are “correcting” are actually better informed than those who are not. Second, people who witness correction should have lower misperceptions than those who do not (H4). Experimental research on observational correction finds that witnessing others being corrected on social media lowers misperceptions (Vraga & Bode, 2020). Although our measures are less directly connected than those in experimental settings (i.e., we don’t know on what topics people witnessed correction and are limited by relying on their reports of what they saw), in general we would expect people who report witnessing correction to have more accurate information than those who do not. Third, our expectations regarding people who have been corrected on social media are less clear. On one hand, they likely initially had higher misperceptions that led them to share misinformation. On the other hand, they were corrected—and if that correction is effective, as some research suggests it might be (Margolin et al., 2018), their misperceptions may be lower as a result. Therefore, we ask whether people who have experienced corrections on social media have higher or lower COVID-19 misperceptions than those who have not (RQ8).
Methods
Case: COVID-19
To study these questions, we use data from a survey conducted in March 2020, during the early onset of the COVID-19 pandemic in the United States. COVID-19 is a respiratory illness associated with a novel coronavirus first identified in late 2019 in China, which then rapidly spread around the world. COVID-19 has a relatively high fatality rate and is fairly contagious, a lethal combination. This is an interesting case because in the early months of the pandemic there was very little known definitively about where the virus came from, how it spread, how lethal it was, or how to combat it. This made for a unique information environment, where people were very eager for relevant information and may have been consuming more information than usual (Jurkowitz & Mitchell, 2020), but where reliable information was somewhat hard to come by.
Sample and Weighting
We test these questions using a national survey of American adults, conducted on 27 and 28 March 2020. We recruited the sample from Lucid Academic, an online panel provider that constructs samples designed to approximate the US population using a combination of screening questions and quota sampling on the basis of age, race, ethnicity, education, income, party affiliation, and region (Coppock & McClellan, 2019). 2 A total of 1,094 people participated; our sample was 51% male, 76% White (13% Black, 5% Asian, and 7% Hispanic), relatively educated (39% had a bachelor’s degree or higher), and somewhat Democratic (46% Democrats, 36% Republicans, and 19% Independents), with an average age of 45. These numbers are similar to the US population, with the largest discrepancy in terms of Hispanic ethnicity (18% of the US population is Hispanic, as compared to 7% of our sample).
In all analyses, we weight by age, race, gender, education, partisanship, and ideology, using raking weights (using the SPSS extension SPSSINC RAKE). Weights are targeted to population values from the census (race, gender, education) or other representative sources (Gallup data for partisanship and ideology, GSS data for age). Weights are not trimmed, but are largely under 3 (range: 0.27–6.75). With weights applied, our sample is reduced to N = 1,071 (this omits anyone who had missing data on any of the six variables used for weighting). We further limit analyses to people who report using at least one social media website regularly (N = 1,043) and use listwise deletion for missing data.
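The raking procedure described above can be sketched in a few lines. The following is an illustrative implementation of iterative proportional fitting on two invented margins, not the SPSSINC RAKE extension the study actually used; the toy sample, variable names, and targets are all hypothetical.

```python
# Illustrative raking (iterative proportional fitting). Hypothetical example;
# the study itself raked on six variables via the SPSS extension SPSSINC RAKE.
from collections import defaultdict

def rake(rows, margins, max_iter=100, tol=1e-6):
    """rows: list of dicts of categorical variables, one dict per respondent.
    margins: {variable: {level: target_population_share}}.
    Returns one weight per row so weighted shares match every margin."""
    w = [1.0] * len(rows)
    for _ in range(max_iter):
        max_change = 0.0
        for var, targets in margins.items():
            # Weighted total per level of this variable.
            totals = defaultdict(float)
            for row, wi in zip(rows, w):
                totals[row[var]] += wi
            grand = sum(totals.values())
            # Rescale each row so this variable's weighted shares hit target.
            for i, row in enumerate(rows):
                factor = targets[row[var]] * grand / totals[row[var]]
                max_change = max(max_change, abs(factor - 1.0))
                w[i] *= factor
        if max_change < tol:  # all margins matched; stop
            break
    return w

# Toy sample that overrepresents young Democrats relative to the targets.
rows = ([{"age": "young", "pid": "dem"}] * 5 +
        [{"age": "young", "pid": "rep"}] * 2 +
        [{"age": "old", "pid": "dem"}] * 2 +
        [{"age": "old", "pid": "rep"}] * 1)
margins = {"age": {"young": 0.5, "old": 0.5},
           "pid": {"dem": 0.5, "rep": 0.5}}
weights = rake(rows, margins)
```

Each pass rescales the weights so the weighted distribution of one variable matches its target; cycling over the variables until the adjustment factors approach 1 yields weights that satisfy all margins simultaneously.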
Measures
Correction on Social Media
Participants were asked whether (yes or no), in the past week, they had experienced any of the following on social media as related to COVID-19: (a) seeing someone else sharing misinformation, (b) sharing misinformation yourself, (c) being told you shared misinformation (“being corrected”), (d) seeing someone else being told they shared misinformation (“witnessing correction”), or (e) telling someone they shared misinformation (“correcting someone else”). Descriptive statistics are provided in the “Results” section.
Information Sources for COVID-19
Participants were asked how much COVID-19 information they were receiving from different sources (5-point scale from none at all to a great deal). An exploratory factor analysis with a promax rotation of six sources identified two factors: health experts (accounting for 38.1% of variance, α = .75, M = 3.02, SD = .96), comprising scientists, the CDC, the WHO, and doctors/experts online, and reliance on news sources (national news media and local news media, accounting for 11.1% of variance, r = .57, p < .001, M = 3.42, SD = 1.05; see Supplemental Appendix B for full details). We also include receiving information from President Trump (M = 3.09, SD = 1.37) and social media (M = 2.91, SD = 1.37) as single items.
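The reported reliability of the four-item health-expert index (α = .75) follows the standard Cronbach's alpha formula. A minimal sketch, written from the textbook formula rather than the authors' analysis code:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
def cronbach_alpha(items):
    """items: list of k equal-length score lists, one list per scale item."""
    k = len(items)

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(vals) for vals in zip(*items)]  # per-respondent sum score
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))
```

As a sanity check, two perfectly parallel items yield α = 1, and weakly related items pull α toward 0, which is why alpha is read as internal consistency.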
COVID-19 Myths and Facts
Participants were randomly assigned to view 10 (out of a possible 20) statements regarding COVID-19 and rated their perceptions of veracity (“to what extent do you think this statement is” with answer choices from definitely false to definitely true) on 5-point scales. For this study, we limit our analysis to only 12 of those myths—those taken directly from the WHO’s Myth Busters website (https://www.who.int/emergencies/diseases/novel-coronavirus-2019/advice-for-public/myth-busters) at the time the study was fielded. 3 We reversed some of these items so that participants saw both true (seven) and false (five) items and created a scale measuring COVID-19 misperceptions (where higher numbers reflect greater misperceptions, M = 2.15, SD = .63), averaging across the myths each person rated. The statement our sample had the most misperceptions about was “UV lamps should not be used to sterilize hands or other areas of skin to protect against COVID-19” (a true statement, M = 2.78) and the statement our sample was most informed about was “So far, there is no specific medicine validated through scientific studies recommended to prevent or treat COVID-19” (a true statement at the time of the study, M = 1.79). Please see Supplemental Appendix C for more details and for perceptions of each myth.
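The reverse-coding step above can be made concrete. This is a hypothetical sketch of the scoring logic, assuming (as is standard for a 5-point scale) that a true item's rating is reversed as 6 minus the score so that higher values always indicate greater misperception:

```python
# Hypothetical scoring sketch: ratings run 1 (definitely false) to
# 5 (definitely true); true statements are reverse-coded (6 - rating).
def misperception_score(ratings, is_myth):
    """ratings: veracity ratings for the items a respondent actually saw.
    is_myth: True where the statement is false (a myth), False where true.
    Returns the mean, with higher values meaning greater misperception."""
    scored = [r if myth else 6 - r for r, myth in zip(ratings, is_myth)]
    return sum(scored) / len(scored)

# A respondent who rates two myths "4" (probably true) and one true
# statement "5" (definitely true, reverse-coded to 1):
score = misperception_score([4, 4, 5], [True, True, False])  # → 3.0
```

The mean is taken over rated items only, so respondents who saw different subsets of the 12 myths remain comparable on the same 1-to-5 scale.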
Results
Experiencing Correction on Social Media
We first examine reported experiences with misinformation and its correction on social media (RQ1). Just over half of participants reported seeing misinformation about COVID-19 on social media in the past week (56.6%), while a third reported witnessing someone else being corrected (34.1%). Nearly a quarter of our respondents (22.3%) reported correcting someone else who shared misinformation on social media. In addition, when limiting our analyses to those who saw misinformation on social media (and who can thus see or do correction), 51.3% witnessed correction of misinformation and 35.1% corrected someone else.
Turning to sharing misinformation or being corrected, we observe that nearly equal percentages report having shared misinformation themselves in the past week (11.9%) and having been corrected (10.3%). Among the 11.9% who say they shared misinformation, 50.7% experienced correction, whereas only 4.9% of the 88.1% who said they did not share misinformation still reported being corrected (in line with Chadwick and Vaccari (2019), who find that 5% of UK participants report being “corrected” when they did not share problematic news). Because some people likely shared misinformation without realizing they did so (and therefore would not report it in our survey), and some people may not want to admit sharing misinformation, 11.9% is almost certainly lower than the actual percentage of people sharing misinformation and therefore likely overstates the percentage of people who share misinformation who are subsequently corrected (50.7%). Indeed, experiencing correction may serve as an important signal that one has shared misinformation—those who are not corrected may be systematically less likely to report having shared misinformation, because they have not been told it was wrong.
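As a back-of-envelope check, the conditional and overall percentages reported above are internally consistent. Using only numbers stated in the text, the overall rate of being corrected should equal the weighted average of the two conditional rates:

```python
# Consistency check on the reported shares (values taken from the text).
shared, corrected_given_shared = 0.119, 0.507       # shared misinformation
not_shared, corrected_given_not = 0.881, 0.049      # did not share

overall = shared * corrected_given_shared + not_shared * corrected_given_not
# overall ≈ 0.1035, consistent (given rounding) with the 10.3% who
# report having been corrected.
```

This kind of cross-check is a quick way to verify that subgroup percentages and their base rates reproduce the marginal figure.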
Predicting Who Experiences Social Media Correction
To gain a deeper understanding of who is experiencing correction on social media (to test H1 and answer RQ2, RQ3, and RQ7), we estimate a series of logistic regressions to predict (a) who has witnessed correction, (b) who was corrected themselves, and (c) who corrected someone else sharing misinformation, based on demographic factors and information sources (Table 1). For these analyses, samples are limited to the appropriate sub-sample of people based on experience with misinformation. So in predicting who witnesses correction and who corrects others, we limit the sample to those who say they have seen misinformation (N = 588), and in predicting who gets corrected, we limit the sample to those who say they have shared misinformation (N = 124). 4
Table 1. Predicting Exposure to Misinformation and Correction on Social Media.
Note. Odds ratios reported in table to facilitate comparison. Values less than 1.00 suggest a negative relationship (e.g., lower odds of DV occurring), while values over 1.00 reflect a positive relationship. SE = standard error.
p < .10. *p < .05. **p < .01. ***p < .001.
Participant Attributes
We find that older adults are less likely to report all experiences with correction: to have witnessed correction about COVID-19 on social media (RQ7, odds ratio = 0.99, p < .05), to have been corrected themselves (RQ2b, odds ratio = 0.96, p < .001), and to have corrected others (RQ2a, odds ratio = 0.97, p < .001). Those with more education were more likely to report witnessing correction (RQ7, odds ratio = 1.35, p < .001) as well as correcting misinformation themselves (supporting H1a, odds ratio = 1.18, p < .05), but not less likely to be corrected, failing to support H1b. Finally, partisan identification is not related to experience with any form of correction (RQ3).
Information Sources
Turning to the sources that people rely upon for information related to COVID-19, reliance on health experts is related to higher reports of correcting others (odds ratio = 1.45, p < .001), supporting H2a, but not related to being corrected, failing to support H2b. Those relying on the news media as a source of information are less likely to correct others (odds ratio = 0.81, p < .05), and less likely to witness corrections (odds ratio = 0.77, p < .05, RQ7) but no more likely to be corrected (RQ4).
Relying on social media (RQ5) or on President Trump (RQ6) as an information source was not related to any measured outcomes (Table 1).
Corrections and COVID-19 Myths
Finally, we consider whether experiences with COVID-19 correction on social media—correcting others (H3, only among those who say they saw misinformation), witnessing correction (H4, only among those who say they saw misinformation), and being corrected (RQ8, only among those who say they shared misinformation)—are related to misperceptions for COVID-19 myths debunked by the WHO. H3 is not supported: those who report having corrected others are higher in misperceptions (b = .10, p < .05). H4 is likewise unsupported, as witnessing correction was not associated with misperceptions. Turning to RQ8, those who reported having been corrected on social media in the past week held similar levels of misperceptions as those who report sharing misinformation but do not report being corrected, although this analysis is preliminary given the small sample size (N = 136) of those who reported having shared misinformation (Table 2).
Table 2. Predicting Misperceptions (WHO).
Note. Standardized betas reported. WHO = World Health Organization; SE = standard error.
p < .10. *p < .05. **p < .01. ***p < .001.
Discussion
This study set out to examine how often people were experiencing correction on social media in the context of COVID-19, which audiences were most likely to have specific experiences with correction, and the extent to which each of those experiences are related to misperceptions. Our results suggest that these different experiences with correction of COVID-19 misinformation on social media, including witnessing correction, being corrected, and correcting others, are all relatively common, for most types of people. However, with the exception of correcting others, they are generally not associated with misperceptions about COVID-19, at least not for our sample and for the misperceptions we measured. We discuss each of these findings in turn.
First, many people experience both misinformation regarding COVID-19 and its correction on social media. Although exposure to misinformation about COVID-19 on social media was common, so too was correction. Among those who reported seeing someone sharing misinformation in the past week, over half said they had also witnessed someone being corrected. Likewise, of the 12% who said they had shared misinformation, over half said they were corrected. These numbers are quite promising, given past work suggesting that seeing and experiencing correction on social media can reduce misperceptions (Bode & Vraga, 2018; Margolin et al., 2018; Vraga & Bode, 2017) and emerging work suggesting such corrections can help researchers (and potentially platforms) identify misinformation more efficiently (H. Kim & Walker, 2020). While these perceptions of misinformation may be inflated by experiencing correction (which alerts users they are sharing misinformation), they are roughly aligned with other studies about the prevalence of misinformation broadly (Chadwick & Vaccari, 2019; Pew Research Center, 2019) and in the context of COVID-19 (Pew Research Center, 2020a). Moreover, if seeing correction leads people to recognize misinformation as being false, this is also a positive outcome.
In general, we found fewer relationships between respondent characteristics and experience with correction than expected. Older adults are less likely to report correcting others, as well as less likely to witness correction. Interestingly, however, they were also less likely to report being corrected, which is somewhat in contrast to previous research indicating they are also more likely to share misinformation in the first place (Guess et al., 2019). In addition, those with greater education are both more likely to correct and more likely to witness someone else being corrected. The first is likely a direct effect—education may instill not only the skills to engage in correction, but also the confidence to do so and the norm that doing so is appropriate—and it is that combination of factors that predicts behaviors (Ajzen, 1985). We think the relationship between education and witnessing correction, by contrast, is likely a homophilous network effect (McPherson et al., 2001). Those with greater education are more likely to have similarly educated connections in their network, who are themselves more likely to engage in correction.
Despite our expectations related to partisanship and correction, we found that Republicans and Democrats were equally likely to correct, to witness correction, and to be corrected. It is possible this is due to the topic we consider. COVID-19 was not initially a politically divisive issue, and our data come from relatively early in the pandemic in the United States (March 2020). Later in the outbreak, political divides did emerge on issues such as whether to wear a mask, whether lockdowns were appropriate public safety measures, and how dangerous the virus was (Pew Research Center, 2020b). In general, susceptibility to correction depends on the politicization of an issue, with more politicized issues being more difficult to correct (Bolsen & Druckman, 2018), so we might expect different findings from a later data collection.
It is worth noting that although the null results for partisanship contradicted our expectations, they suggest instead that people across the partisan spectrum are experiencing correction on social media. Republicans and Democrats alike are witnessing correction and being corrected, which matters not only for reducing misperceptions (Vraga & Bode, 2020) but also for creating and reinforcing norms that correction is normal and acceptable on social media. The more people subscribe to those norms, the more we would expect them to engage in correction themselves (Ajzen, 1985; Cialdini et al., 2006), increasing the scale of correction and its effects.
In contrast, information sources moderately predict experience with correction. Those relying directly on health experts were more likely to report correcting others, which is promising: such people may be more likely to correct with accurate information and to refer others to an expert source, as is best practice for correction (Vraga & Bode, 2020). Those relying on the news media for information, by contrast, were less likely to correct others and less likely to witness corrections. The news media was the most common information source for our sample, and unfortunately the measure, which combined national news organizations and local news organizations, is rather blunt, potentially obscuring important differences among news outlets in their coverage of COVID-19.
However, the other information sources we considered—including social media and President Trump—were not related to engaging in correction, witnessing correction, or being corrected. Clearly, there remains much to understand about experiences with correction beyond just the informational aspect of it. How people think about correction, the norms they have about social media engagement, and the specific contexts and circumstances may have more to do with correcting others or being corrected than what information sources people employ.
Finally, we find no relationship between being corrected or witnessing corrections and myth acceptance, although we do find that people who correct others have somewhat higher levels of misperceptions about COVID-19. This latter finding is of particular importance, as it suggests that people who are engaging in correction are less well informed than those who are not, amplifying concerns that the “corrections” themselves could be spreading misinformation. This is consistent with previous research, which finds that misinformed individuals are more willing to reply to social media posts (Tully et al., 2020) and more willing to post about the topic (McKeever et al., 2016). Altogether, this research suggests an asymmetry in willingness to engage on an issue—those who are more informed are often the silent majority, whereas the misinformed are small in number but louder online. This reinforces the need to encourage the silent majority to engage online, combatting misinformation by refusing to cede the online space to the vocal minority (Buerger, 2020).
Of course, we do not know what misinformation any of the individuals in our sample claim to have corrected, witnessed being corrected, or had corrected on social media. If the misinformation and/or correction they experienced were not related to the myths tested, a relationship would be less likely to emerge. Future research should examine not just exposure to correction broadly (as we do here), but also what those corrections look like (e.g., in terms of topic and source) and how people reacted to the experience. This is a space where additional mixed methods or qualitative research would be especially valuable (à la Tandoc et al., 2020).
This raises several limitations of our study. First, it is cross-sectional, so the relationships we observe cannot be interpreted causally. Second, our sample, while weighted to approximate the demographics of the United States, is not representative, and this may affect both the descriptive statistics reported and the relationships uncovered. Third, we are limited to self-reports of people's experiences with misinformation and correction. This means we cannot know whether the misinformation and correction that people report experiencing were actually misinformation and correction: we do not know if the misinformation people report seeing was actually false, if the reports of correction were actually correct, or whether they took place at all. People may not be able to accurately report their social media experiences (Junco, 2013), and recognizing misinformation requires knowledge of what is correct or incorrect, which is challenging for most people (McGrew et al., 2018). In addition, whether a participant reports witnessing misinformation or correction depends on their definition of those concepts, which may differ from the one we put forth here (Nielsen & Graves, 2017). The items we used for measuring experience with correction were also limited in that respondents only reported whether they had or had not experienced each. Future research should further probe these concepts to measure the frequency of experiences. We also deliberately focus on these relationships within the context of COVID-19. We think this context is important to examine, but this does not necessarily mean any relationships we identify will generalize outside the context of a global pandemic. Finally, the variance explained in each of our models is quite limited, suggesting that there is much more to learn about what factors predict experiencing correction, and what predicts belief in COVID-19 misinformation.
Finally, due to the exploratory nature of the study, we pose a large number of research questions and hypotheses. This increases the risk that an identified relationship is spurious (the multiple comparisons problem), so future research should investigate the findings outlined here in more targeted ways, to ensure that these are consistent relationships and not an artifact of testing many things at once.
Still, within this specific context and given these limitations, we are left with several clear takeaways. People were experiencing both misinformation and its correction on social media during the early days of the COVID-19 pandemic. Experience with correction takes several forms, including direct experiences with correcting others or being corrected, as well as the indirect experience of witnessing correction, and each seems to be fairly widespread in the population. It is especially important that these experiences with correction cross partisan divides, given the challenge of addressing polarized issues. While preliminary evidence does not suggest major shifts in misperceptions based on seeing others corrected or being corrected oneself, that does not mean such experiences are unimportant or that future research using more tailored data would not find such an effect. We expect that experiencing corrections creates a virtuous circle, encouraging more people to engage in the process themselves. If this is the case, correction can truly become a community response to misinformation on social media.
Supplemental Material
Supplemental material, sj-docx-1-sms-10.1177_20563051211008829 for Correction Experiences on Social Media During COVID-19 by Leticia Bode and Emily K. Vraga in Social Media + Society
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received financial support for this article from the University of Minnesota.
