Abstract
Resilience to disinformation on social media relies on users’ ability to critically assess disinformation and even counter it. Active users, who can curate the information environment of others through their actions, can play a crucial role in stopping the dissemination of disinformation. In the decentralized social media environment, their activities, such as correcting or reporting, may prove more effective than institutional responses. Considering this, the study looks specifically at how active users engage with disinformation. Through 60 semi-structured interviews collected over 3 years, we explore how crises like COVID-19 and the Russia–Ukraine war shape Czech users’ motivations and strategies. Findings indicate that users are driven by a moral obligation to provide accurate information. Both people sharing and people correcting disinformation believe in their critical skills, and crises amplify their desire to help. However, those correcting often face frustration and demotivation due to hostile interactions and a lack of visible impact, while those sharing remain persistent. Strategies are shaped by perceptions of the individuals involved and by the type of disinformation. Completely false information is often ignored as not worth debunking, whereas partially false information prompts active correction due to the perceived ease of rebuttal. The study highlights the need for social media platforms to support users in corrective actions and to address algorithmic issues that may impede these efforts.
Introduction
In recent years, the academic debate has shifted from focusing on the negative impacts of disinformation dissemination, such as lowering trust in institutions and media, to exploring ways to build resilience against it (Boulianne et al., 2022; Humprecht et al., 2020). Previous research has identified various factors influencing resilience to disinformation dissemination, often emphasizing macro-level conditions like the role of political polarization (Allcott & Gentzkow, 2017), trust in news media (Nielsen & Graves, 2017), social media use (Morosoli et al., 2022), and fragmented media ecosystems (Shin & Thorson, 2017). As obtaining news from social media becomes increasingly important (Newman et al., 2023), we focus on social media users, who are exposed to rapid information dissemination and diverse user interactions. This research aims to understand how resilience to the spread of disinformation can be fostered.
In this context, resilience refers to a structural environment in which disinformation does not reach many citizens. At the same time, resilience is not only a consequence of not being exposed to disinformation but also of being able to critically assess or counter it (Humprecht et al., 2020). The use of social media has been repeatedly linked to the level of disinformation resilience (Humprecht et al., 2020; Morosoli et al., 2022). Considering that information on social media is “curated” by a variety of actors, not only conventional newsmakers but also individual media users and social contacts (Thorson & Wells, 2016), there is a need to look at how people engage with disinformation. This is specifically important if one of the options for engagement is correction, which has been shown to help slow down the dissemination of disinformation (Walter & Tukachinsky, 2020).
Due to the decentralized nature of social media, user interventions may be even more effective than institutional ones, such as those by news agencies or the platforms themselves (Colliander, 2019; Hatakka, 2019). The role of users becomes increasingly crucial given the current trend of social media companies abandoning independent fact-checkers, most recently Meta. If we consider active users as opinion leaders in this environment, they may be able to reach even users who would not otherwise be reached (Thorson & Wells, 2016). Despite this, research on individual perceptions and motivations for corrective actions is lacking. Our study addresses this gap, using interviews to understand users’ behaviors and motivations when encountering disinformation.
Based on interviews collected over 3 years, we focus on users’ motivations and strategies and how they change over time, including during crises such as COVID-19 and the Russia–Ukraine war. The study shows that these events have notably decreased the motivation to stay active and correct disinformation, with the initial motivation to help others overshadowed by frustration over the lack of visible results. Conversely, those spreading disinformation remain motivated to spread “the truth” for longer. Furthermore, reactions are heavily influenced by perceptions of opposing users, sometimes leading to trolling or complete demotivation to stay active.
This study contributes to research (Colliander, 2019; Hameleers, Humprecht, et al., 2023; Tandoc et al., 2020) on corrective action as a possible act of resilience by users. It shows that users’ perceptions of the online environment, as well as of other users, are essential factors in their decision-making. Addressing the frustration of users who actively correct disinformation is particularly important for keeping them motivated to remain engaged in an online space where disinformation is a growing problem.
Theory
Disinformation on social media and building resilience
Resilience to disinformation relies on the user’s ability to assess disinformation and even counter it critically (Humprecht et al., 2020). Building individual resilience can involve pre-bunking (e.g. media literacy) (Vraga et al., 2020) or debunking strategies (e.g. fact-checking) (Walter et al., 2019). In the context of debunking, social media users play a critical role—not only as recipients of corrections but also as creators. According to the two-step flow model, active users labeled as opinion leaders become intermediaries between the media and the public (Brosius & Weimann, 1996). Their corrective actions, such as writing posts, sharing, or commenting, can reach wider audiences, including users with little interest in the news (Thorson & Wells, 2016).
These “social corrections,” understood as corrections by peers, are considered effective in limiting misperceptions (Bode & Vraga, 2017; Badrinathan & Chauchard, 2024). They serve as a supplement to corrections by elite sources, for which evidence of effectiveness is mixed (Chan et al., 2017; Walter & Tukachinsky, 2020; Robertson et al., 2020). On X (Twitter), specifically, the presence of tweets that explicitly oppose and/or correct a post increases awareness that the news might be fake (Pang & Ng, 2017). Even activities such as reacting to comments can influence how other users perceive information, as posts with more reactions are perceived as more credible, while negative reactions reduce support for the content (Luo et al., 2022; Waddell, 2018).
Furthermore, social media platforms often fail to limit the amount of disinformation and rely heavily on users’ engagement (e.g. report buttons) (Gimpel et al., 2021). Ordinary users may better detect subtle, context-specific false information, including morally questionable but rarely illegal content (e.g. racist, homophobic) that evades government monitoring (Hatakka, 2019). Even though only a small number of users are motivated to correct false information (Tandoc et al., 2020), their actions may be more effective than those of social media companies (Colliander, 2019) due to their social connections. Peer corrections significantly reduce belief in misinformation, even when that belief is rooted in strong group identities (Colliander, 2019; Badrinathan & Chauchard, 2024).
However, users can also contribute to the dissemination of disinformation, engaging in what Thorsten Quandt (2018) calls dark participation, which is undesirable and potentially harmful. Since what is considered disinformation is primarily based on existing beliefs (Morosoli et al., 2022), perceived corrections may fail to deliver a factual truth. Research shows that the spread of disinformation would be minimal if users did not engage with it (Wintterlin et al., 2023). Therefore, it is important to consider how corrections themselves might unintentionally contribute to further dissemination, specifically given that people’s perceptions of what constitutes disinformation differ.
Considering these differences in the perception of disinformation, we need to further understand what drives users to participate. Factors such as political interest (Gagrčin, 2023), emotional connection to the topic (Galletta Horner et al., 2022), or the idea of good citizenship may play a role (Dalton, 2008). Qin et al. (2015) conclude that an individual’s willingness to spread or stop misinformation depends on the opinion climate and social influence. Socio-demographic factors such as gender (Vochocová et al., 2015) or age (Andersen et al., 2020) can also be influential.
Understanding these varied motivations is essential if corrections are to play a role in slowing down the dissemination of disinformation. This type of participation can empower users, giving them a sense of agency in addressing disinformation and reinforcing the feeling that they can “do something about it” (Wu, 2023). However, considering the spectrum of factors that can influence the decision to interact with disinformation, a deeper look at users’ perceptions of the situation is needed. Therefore, we ask:
RQ1. What de/motivates active users to interact with disinformation?
Interacting with disinformation—what are the options?
Users can correct disinformation by employing different strategies and approaches, leading to different results. For instance, corrections that cite credible sources have limited effectiveness, working in only a minority of cases (Badrinathan & Chauchard, 2024). Personal relationships (Colliander, 2019) and the strength of a user’s political beliefs also influence how corrections are received (Brandtzaeg & Følstad, 2017). Interestingly, corrections targeting political figures from an opposing party are often viewed as more credible and are more likely to be shared (Oeldorf-Hirsch et al., 2024). In addition, communication styles, such as satire, can influence how corrections are perceived (Boukes & Hameleers, 2023).
Users can further adapt their strategies based on the type of disinformation they encounter. Research shows that users distinguish between fake news, misinformation, and disinformation, influencing their trust levels (Gaozhao, 2021) and actions (Tandoc & Seet, 2022). The effectiveness of corrections also varies: completely false information, such as fabricated COVID-19 claims, is more easily disproved by fact-checking, while partially incorrect information is harder to address (Hameleers, Humprecht, et al., 2023).
Besides more expressive types of engagement with disinformation, such as commenting and posting, users can also opt for activities such as reporting, which prompts the information to be checked, flagged, or removed (Gimpel et al., 2021). Community reporting is a common tactic, and this “online civic intervention” is used by social media platforms to combat disinformation (Porten-Cheé et al., 2020). Reporting is considered a good starting point and gives users a sense of agency, even though users are sometimes aware that they may be wrong (Wu, 2023).
Users can thus implement different strategies based on the perceived effectiveness of each action and their perception of the disinformation. Their beliefs, the source of the information, or the language used may all shape their approach. Considering that actions like reporting and blocking are important for building resilience to disinformation, we look at how users who participate in these activities choose their strategies:
RQ2. How do active users choose their strategy for interacting with disinformation?
Role of crises
Social media users play an important role during crises, serving as sources of information for others. Chierichetti et al. (2011) demonstrated that even ordinary individuals, when acting together, can reach most of their community within just a few steps. This dynamic can affect both the spread of disinformation and the effectiveness of corrective efforts. During the COVID-19 pandemic, the need to stay informed and engaged was high at the beginning (Ohme et al., 2020; van Aelst et al., 2021), when citizens’ prosocial participation was also visible both offline (e.g. sewing respirators) and online (e.g. sharing information). But as the pandemic progressed, citizens’ participation (Ohme et al., 2022) and news consumption declined and returned to their original levels (Nielsen et al., 2020). This can be explained by the initial support for governments during the early days of COVID-19 in Europe (Bol et al., 2020) and the spreading skepticism about official information as the crisis progressed (Ohme et al., 2022).
Growing distrust toward the establishment’s handling of the crisis was cultivated by the fast-paced dissemination of misinformation (Nielsen et al., 2020). Every second European citizen worried that they could not distinguish between real and fake news on the internet (Newman et al., 2022). This was also the case in Czechia, at that time led by a populist government criticized for its handling of the pandemic (Novotná et al., 2023). The low level of institutional and media trust further heightened interest in the disinformation scene and alternative media, which had been quite active in recent years (Štětka et al., 2021).
This situation was only propelled by another crisis, the invasion of Ukraine by Russia in the spring of 2022. For Czechia, this was significant due to its geographical proximity to the conflict and its engagement in assisting Ukraine and incoming Ukrainian refugees. Research shows that this closeness may affect people’s assessment of information accuracy (Hameleers, Tulin, et al., 2023).
Throughout both crises, many counterfactual narratives were presented alongside authentic information, which meant citizens’ judgments and behaviors were potentially based on misperceptions. People’s positions toward perceived disinformation and any further engagement with it were often based on the source. Information was most likely perceived as false when it came from the enemy or the “other” (Hameleers, Tulin, et al., 2023).
The level of information consumption and trust visibly changes throughout times of crisis. This affects not only the dissemination of false information but also people’s participation. Considering these trajectories, we ask:
RQ3. How does de/motivation to interact with disinformation change during continuous crises?
Methodology
Data collection and sample
The semi-structured interviews were collected in three spring waves (March–April of 2021, 2022, and 2023), the first consisting of 13 interviews, the second of 22, and the third of 25. The participants were recruited and interviewed by trained master’s students (under the supervision of the authors) from the research seminar at the Department of Media Studies and Journalism, Masaryk University. The students followed a semi-structured interview guide, which ensured that all topics would be addressed. The guide covered topics such as social media routines, including content creation, sharing, and interactions, and de/motivation for active participation and engagement with disinformation. Further subtopics included participants’ perceived efficacy of their different forms of political participation on social media. The students focused on covering four main topics, using example questions. They were encouraged to ask for specific examples, particularly related to ongoing crises, while maintaining a certain flexibility to ensure a natural flow of conversation. The students were trained to use sensitive language and to avoid labeling anything as disinformation unless the participant did so. Participants’ perceptions thus defined disinformation, with contextual interpretation added during analysis.
The interviewers recruited participants from their own more distant social networks to ensure a varied sample. The inclusion criteria were activity on social media (primarily Facebook), other interactions with political content, and exposure to disinformation. Furthermore, we looked for variety in gender, age, and socio-economic background. Even though no financial compensation was offered, the response rate was high (7 refused); overall, 76 interviews were collected, and 60 were used for the analysis. Interviews were excluded when participants did not meet the inclusion criteria or for ethical reasons. All interviews (lasting 60–90 min) were recorded and transcribed.
The research adheres to the ethical standards set by Masaryk University, guaranteed by detailed informed consent. It describes the procedure for dealing with data and outputs, ensures anonymity and protection of data, and defines the rights of participants (e.g. withdrawing from the research, being informed about the results).
The final sample (N = 60) varied in gender (female = 26.7%), age (21–74; M = 34, median = 28), type of residence (city = 70%), level of education (university = 55%), and marital status (single = 45%). The sample also varied in participants’ political attitudes, which were most visible in questions regarding COVID-19 and the war in Ukraine. Across all three waves, 10 participants expressed disbelief in COVID-19 or dismissed the effectiveness of vaccination. Some of them also questioned the existence of the war in Ukraine or voiced their support for Russia based on information from disinformation sources. Throughout the interviews, there was no normative selection of what counts as false news; only participants’ understanding was considered. However, in the analysis, based on mentions of belief in disinformation, participants were divided into those who do not trust and those who trust disinformation shared on social media. Based on this differentiation, the analysis treats them as those correcting and those sharing disinformation.
Analysis
Interviews were coded by four coders in Atlas.ti (version 8). Data were analyzed using a thematic analysis process (Braun & Clarke, 2006; Brett & Wheeler, 2022). In the initial reading of the interviews, emerging topics were systematically organized, leading to the creation of codes during open coding. The codes were created both deductively (based on the guide) and inductively (based on additional information presented in the interviews). At first, the waves from 2021 and 2022 were coded, creating the main corpus of codes. These were categorized into four main groups: social media use and routines, perception of false information, de/motivation for engagement, and the perceived effectiveness of correction. A distinction was made between codes relating to participants’ activities and those concerning their perceptions and attitudes. Additional codes captured the context of participants’ actions or perceptions, such as COVID-19 or the war in Ukraine. In the third wave, the existing corpus was applied again, checking for new behaviors, opinions, or strategies. In addition, participants were recruited with more focus on underrepresented socio-demographic characteristics (e.g. age and gender) to ensure saturation.
The codes and subcodes were created primarily by two coders coding the first five interviews. Intercoder reliability was maintained through weekly training sessions, where the coders aligned on existing codes, discussed and evaluated new ones, and applied the agreed-on codes and subcodes to the interviews. The codebook was thus built through careful review and repeated cross-reading of coded interviews. After that, the relevance of each data segment was examined, and recurring patterns were identified through axial coding. Selective coding, supported by a mind map visualizing key topic relationships, was conducted in the final stage.
Results
RQ1: normative idea to do what is right
When encountering what they perceived as disinformation, participants were motivated to react mostly because they felt responsible for contributing to a better online information environment. The normative idea of “doing what is right” was repeatedly described as a motivator to engage both expressively (in comments and posts) and passively (reporting, blocking). Many participants reflected on their moral obligation to react and not just stand idly by while big lies are being spread without any opposition (Šimon, 22). When reflecting on involvement in a discussion, this need to provide the perceived correct information often overpowered even uncomfortable feelings toward hostility. Participants reflected that most users do not actively post or comment but read the information nonetheless. This perceived dynamic fed into their motivation; in several cases, their activity was therefore targeted toward the silent ones, who are seen as the majority.
The idea of the good citizen motivates both those who correct disinformation and those who share it. Both groups justified their actions on social media using the same narratives (e.g. helping others and doing what is right). They also equally perceived their critical abilities as superior to those of others, often intertwining this perception with a motivation to help because they are knowledgeable. Participants mainly described themselves as seeking various information and perspectives and being able to differentiate between factual and false information. Some (like Karel, 43) identified themselves as opinion leaders helping others: For people who follow me, I believe they will get a lot from my posts.
A certain confidence was also common among participants who shared disinformation during the two crises. They frequently relied on their self-perceived critical abilities and their experience, prioritizing personal experience over information from mainstream news. For example, Lucie (45) reflected that the news she sees on Seznam (a mainstream news platform) does not correspond to what she sees with her “own eyes.” Sometimes, they justified this by their age and used it against “less experienced” younger users; higher age was thus treated as an advantage in the argument. However, personal experience was not derived solely from the participants themselves but was often shared by family members, friends, or more distant acquaintances: So I have some friends in the hospital, nurses, and my other friends also have friends in the hospital, and they said it’s not as bad as it’s written everywhere. (. . .) (Jitka, 29)
For the active users, this perceived discrepancy motivates them to help others “see the truth.” Their view of “others” aligns with their self-image as critical thinkers. Both those sharing and correcting disinformation emphasize the importance of consulting multiple sources to “build a picture” of correct and false information. Some who share disinformation contrast their ability to analyze raw data with the perceived inaccuracies of mainstream media. They advocate for relying on “raw numbers,” often coming from official sources, which they consider reliable and without media interpretation: Look, the numbers speak volumes! The death toll on Covid is 25,000 in the year since it took place. That’s an absolutely paltry amount. You can’t take that to mean it’s a disaster. It’s just the press making it out to be a disaster. (Čeněk, 56)
This critical stance toward official interpretations further shows that people who share disinformation add their own experience and judgment, which might reshape the original meaning. While those who correct disinformation rely heavily on mainstream media, people who share disinformation emphasize lived experience. This is not to say that people correcting disinformation are not motivated by their own experience, but they do not use it to justify their arguments. Furthermore, if at some point they made a mistake and were corrected by others, they became more cautious about their actions online. They often refrain from engaging in expressive activities if they feel they lack knowledge. Pavel (27) notes that he avoids acting if he lacks a solid foundation or cannot verify the information. In contrast, people sharing disinformation often view negative feedback as validation, strengthening their motivation to act.
However, strong emotional reactions to shared disinformation can mobilize even more cautious users. Specifically, users correcting disinformation were willing to interact even in uncomfortable situations if their frustration or anger was strong enough. This was particularly common at the beginning of the pandemic and later at the time of Russia’s invasion, when emotions were heightened. Some users even actively sought out what they saw as “the other side” or sources of false information in order to interact with them.
RQ2: to comment or to report, that is the question
Correcting disinformation is seen as time- and energy-consuming, leading users to pick different strategies when interacting with disinformation. The growing frustration from continuous crises caused many users to move from correcting disinformation to other expressive activities requiring less effort, such as trolling. Furthermore, less visible actions like reporting and blocking became more frequent. These methods gained popularity as they were considered more efficient and less time-consuming than factual corrections. As the repertoire of interactions expanded, so did the complexity of users’ decision-making. Their motivation to interact became more deliberate, leading them to develop varied strategies based on who shares the information, what kind of information it is, and their understanding of how social media algorithms function.
Even though studies have disproved the notion that people who believe disinformation share similar characteristics and form a homogeneous group (Thorbjørnsrud & Figenschou, 2020), people who correct disinformation operate with this belief. Commonly described characteristics were age (older people), education level (generally lower), financial situation (feeling financially strained), and political ideology (ultra-right or ultra-left leaning). The unifying factor that correcting participants perceived in those sharing disinformation was the inability to evaluate information critically: I even clicked on their profiles a few times, and they are both young and old, and they look serious. For example, a guy in his thirties, in a picture in a tie, didn’t look completely brainwashed, and then he shared this stupid thing. (Robert, 24)
The perception that individuals who spread disinformation form a homogeneous group was reinforced by both crises captured in the interviews. These events are perceived as a unifying factor among disseminators of disinformation, with similar arguments appearing in discussions and posts. Most participants noted an overlap between individuals who denied the existence of COVID-19 or the effectiveness of vaccines and those who believed the war in Ukraine was not real or stood by Russia’s actions. As one participant, Pavel (30), argued: “(. . .) it’s a clear demonstration that those people are just disinformators, there’s no reason for those groups to be the same here.”
Users engaged in correcting disinformation distinguish between two groups: those who share false information because they actually believe it and those who lack the ability to critically assess the information they encounter online. More importantly, this distinction influences their reactions. For some, the perceived inability of others to critically assess information motivates them to provide accurate information. For others, this inability creates an image of someone who “just reads and believes anything.” Such individuals are seen as merely inoculated with some kind of disinformation campaign (Luboš, 25) and, therefore, unable to form their own opinions. Correctors often opted for trolling to relieve some of their frustration. Interestingly, individuals who genuinely believe disinformation may be considered better suited for debate because they are seen as well-versed in certain topics.
These distinctions become crucial when differentiating between prominent individuals disseminating disinformation (those with larger numbers of followers) and “ordinary” users. For prominent users, the motivation to engage stems from their perceived reach and the potential harm they could cause by spreading false information. This perceived influence may lead to a greater willingness to invest time in providing correct information (Marek, 28). Conversely, encountering prominent disinformators can also lead to actions such as blocking or reporting to mitigate their influence, which is seen as an effective reaction worth the effort.
To this point, some users feel that correcting the information is ineffective and instead opt for reporting or blocking content. They feel this approach more effectively prevents the content from being seen and, most importantly, avoids feeding the algorithm, which is seen as counterproductive: (. . .) I’m not going to engage in the discussion on that principle anymore, because I know that actually by engaging, I would be helping this person who’s writing this (. . .). (David, 25)
As a result of these types of interventions, participants have experienced having their comments deleted or encountering hostility in discussions. Based on these experiences, even though their motivation did not change, some opted for different strategies that could stop the message from being shared, aiming to “nip it in the bud in the beginning,” as Adam (22) explains.
Small or big lie? The influence of disinformation type
The decision of what activity to choose when encountering disinformation was often based on the differentiation between various levels of false information. Hameleers, Humprecht, et al. (2023) categorize information as de-contextualized or partially false and fabricated or completely false. Participants often pinpointed false information that they described as bizarre or absurd and not worth debunking. In this category, they mostly placed information connected to conspiracy theories, often involving secret government plans or “people behind the scenes.” However, these perceived conspiracies sometimes prompted another often-used strategy: trolling. This strategy aims to point out the absurdity of the shared information and to humiliate or make fun of the person disseminating it. Such reactions are justified by the perception that spreading such disinformation far exceeds the boundaries of acceptable behavior. Andrei (25) comments that if “it was something bizarre, I wasn’t commenting on the cover directly, but more like commenting on the people who published it.”
When the urgency to act is overshadowed by an awareness of how social media algorithms work, users often decide on less expressive activities such as reporting or blocking. They note that the algorithms tend to amplify the most conspiratorial, radical, and/or aggressive content and views. Considering that engaging expressively (by commenting, reacting, or sharing) would only increase the reach of the disinformation, they opt for blocking or reporting either the content itself or the user/source sharing it.
Furthermore, these types of false information are often read but not interacted with because their degree of falseness is considered excessive. In the context of the pandemic, for instance, Daniel (22) observed that if “someone was convinced that mRNA vaccines change genetic information, one could not talk them out of it at all.” Any reaction, therefore, would be useless. The time required to correct such information is considered disproportionate to its negligible impact. As Tomáš (35) notes, the information is often woven together, and untangling it requires much effort.
However, users correcting disinformation find partially false or de-contextualized information easier to debunk. Their motivation to react and correct it is higher because the facts that disprove the information are easier to find and are less complex than those needed for conspiracies. Participants also distinguish between individuals who hold radically opposite opinions based on disinformation (e.g. denying the existence of the war in Ukraine) and those who believe only partially false information. In the latter case, participants perceive some potential for rational discussion.
Furthermore, partially false information is sometimes perceived merely as misinformation, eliciting less intense reactions from those encountering it. Users were sometimes even more motivated to provide the correct information and debunk the false part to help others, because the sharer is seen as having made a mistake rather than purposefully trying to spread false information. This was based on their perception that most of the information is correct and only some parts are incorrect. They also reflected that this information was, in their perception, not connected to any conspiracies and instead represented mistakes that anyone could have made. However, this seemingly higher motivation to engage may not lead to better results for building resilience. As Hameleers, Humprecht, et al. (2023) note, partial disinformation may not raise as much suspicion among users as completely false information and can be impossible to falsify because it contains some verifiable facts. At the same time, this user initiative to correct partial disinformation may flag false information that goes unnoticed by social media or official fact-checking platforms.
RQ3: role of crises—frustration over time
The motivations and strategies introduced above changed over time in response to ongoing and new crises, which influenced people’s behavior and perceptions. There was a noticeable difference between the first wave of interviews, conducted in 2021, and the later waves. Specifically, the pandemic was a turning point for many users motivated to provide factual information on social media. Their original impulse to help, which stemmed from perceived societal risks (e.g. polarization) and health risks (e.g. the perception that believing disinformation could lead to health issues), was often met with hostility, ignorance, or another wave of false information. In later interviews, participants therefore expressed a build-up of frustration and demotivation to remain active.
The initial motivation to provide accurate information became overshadowed by negative experiences. Lukáš (27) expressed this sentiment: “Of course, as time goes on, I find it’s just very hard to convince anyone.” Those spreading disinformation were typically perceived as individuals unwilling to acknowledge they were wrong. While participants did not primarily interact to seek validation for being right, the absence of noticeable change or results led to significant disappointment. This disappointment was amplified when this type of one-sided interaction was repeated, when there was perceived incivility in the discussion, or when they deemed it too energy- and time-consuming. The frustration extended to the discourse around the war in Ukraine. Worn out by persistent disinformation and their experiences during the pandemic, many participants chose to withdraw from active engagement: There was covid in the media for two and a half years. Restrictions, measures, loosening, restrictions. Lockdown. And now suddenly Ukraine and only Ukraine is going. I think a lot of people are getting irritated. (Voloďa, 25)
In contrast, participants who shared false information about COVID-19 and/or the war in Ukraine did not exhibit the same frustration over time. They were neither stopped by negative feedback nor bothered by the lack of visible impact. Even though they sometimes disagreed with certain information (e.g. the need for vaccination) and openly expressed this disagreement, they did not require tangible evidence of changing someone’s mind. For them, merely putting the information out there was often sufficient.
Each new crisis could then serve as another topic for making their version of “the truth” visible. This dynamic was frequently observed by those attempting to correct the disinformation, who noticed the same arguments being recycled. Anti-system rhetoric and claims that the mainstream media lie are commonly emphasized. In such cases, participants like Bára (25) often expressed a sense of helplessness about debunking disinformation, noting that “the structure of the disinformation is built in a way, that they can’t be debunked.” At the same time, for some users like Petr (26), the repetitive use of specific words and arguments may also blur the line between a real person commenting and a potential propaganda bot: They actually, it seems to me use phrases that are quite similar, like typically “the media won’t tell you this,” “here’s the truth,” “these photos are from some really cool media outlet that we just found.” (Samuel, 21)
For some participants, the frustration stemming from interactions with people sharing disinformation led to a greater emphasis on the role of social media platforms themselves. These were pinpointed as the ones setting the algorithm and, therefore, responsible for highlighting false information and incivility. In this case, users felt little to no agency to change how these sites are set up through their own actions. The overwhelming impact of these platforms, in contrast with users’ own, further demotivated some from doing anything. To shift more responsibility onto social media companies rather than users, solutions were proposed such as using an “algorithm or direct jobs that people in that institution would try to combat that kind of misinformation,” as mentioned by David (20).
Discussion
Concentrating on active social media users, this study provides insight into their motivations and strategies when encountering disinformation online. It emphasizes the importance of social media usage in resilience to disinformation and the role of users in disseminating (Wintterlin et al., 2023) and correcting it (Colliander, 2019). It further highlights how the actions of active social media users shape trust or distrust toward false information in this environment.
Interestingly, we found that both people correcting and people sharing disinformation are motivated mainly by the normative idea of doing what is right, using similar narratives when talking about their duty to spread correct information. Both groups are also confident in their ability to recognize false information, which further fuels their motivation to help others who are seen as less able. Often describing themselves as opinion leaders, they know that their actions have an audience and can supply it with information that would not reach it otherwise. Even though both groups are aware of the influence of social correction and reflect on a felt duty to provide these corrections, only one stays motivated over time.
Both groups dismiss validation as their primary motivation, but only the users sharing disinformation stay motivated without it. While people correcting disinformation become frustrated and demotivated when met with hostility, ignorance, or the inability to change people’s minds, people sharing disinformation stay motivated to further spread “the truth.” The difference between these two types of actors highlights the alarming fact that disinformation spreads more easily than corrections, especially over time. In this sense, the role of opinion leaders on social media is being left to those users who are more prone to believing in disinformation, further curating the information environment of others (Thorson & Wells, 2016). COVID-19 was the catalyst for the split between the frustrated and the motivated users, particularly affecting their willingness to discuss Russia’s invasion of Ukraine.
Another critical factor in participating in corrective actions was the role of different reaction strategies. Instead of correcting, users often opted for reporting or blocking, which were seen as easier alternatives to commenting or posting. These actions were also supported by the perceived role of algorithms and the need to stop the spread of disinformation effectively. Even though expressive types of participation on social media may be met with hesitancy or demotivation over time, less expressive forms continue to serve as an effective mechanism for slowing the spread of disinformation (Wu, 2023).
Other reactions, such as trolling, were motivated more by the homogeneous image of people sharing disinformation. Even though research shows that people seeking and believing disinformation are diverse (Thorbjørnsrud & Figenschou, 2020), there is a strong tendency to see this group as unified, especially in crises. This leads to reactions addressed more toward the source of the disinformation and its believers than toward the information itself. Often, these reactions are uncivil and can lead to further polarization between the two perceived groups, possibly lowering the chance that corrective action will contribute to refuting misinformation.
Finally, the perceived type of disinformation plays a role in whether and how users are willing to react. According to active participants, correcting completely false information is perceived as useless, especially when it is seen as too conspiratorial and energy-consuming to disprove. People tend to correct information seen as partially false or out of context, partly because they view it more as misinformation than disinformation. Correcting only certain aspects requires less effort, encouraging their participation in providing accurate information. This is important, considering that official fact-checks are often unable to detect this partially false information (Hatakka, 2019) and can be less effective in influencing users’ trust (Hameleers, Humprecht, et al., 2023).
This article comes with some limitations. Qualitative research prevents generalization, and findings focus primarily on Facebook, the leading platform participants use. Future studies should explore perceptions of disinformation across different platforms, considering their varied uses and algorithms. Furthermore, the participants were active users, who are a minority on social media, and there is still a gap concerning the perceptions of people who do not actively participate in this type of action but passively observe it. In addition, the distinction between correcting and sharing disinformation was based on attitudes toward the COVID-19 pandemic and the war in Ukraine, offering insights into crisis periods when disinformation spreads heavily. At the same time, aside from instances where these crises were explicitly mentioned, the perception of what constitutes disinformation versus correction was based solely on the participants’ reflections. As a result, we cannot draw conclusions about the actual truthfulness of the corrections. Further studies could focus more deeply on how social media users judge what they consider disinformation. Finally, although some dynamics of news consumption during the pandemic were similar globally, Czechia’s low institutional and media trust means the findings may primarily reflect patterns in comparable countries.
Despite these limitations, our study provides key insights. It reveals that the motivation of active users to correct disinformation is met with multiple barriers, which change their strategies and their overall motivation to respond. Since correcting disinformation is an important part of slowing down its further dissemination and building resilience, it is essential to look further at how correction works outside official institutions and platforms. Furthermore, there is a need to keep users motivated to correct, considering that users sharing disinformation do not experience the same frustration over time and therefore become the ones curating the information environment of others. But the responsibility for tackling false information online should not rest only on users. Social media platforms should take proactive measures based on research into effective fact-checking methods that reflect users’ perceptions and behavior. One example could be controlling algorithms that promote false information and often hinder corrective action.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The work on this article was supported by the NPO “Systemic Risk Institute” (grant no. LX22NPO5101), funded by the European Union—Next Generation EU (Ministry of Education, Youth and Sports, NPO: EXCELES).
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Supplemental material
Supplemental material for this article is available online.