Abstract
Although previous studies have recognized the widespread presence of disinformation networks, we know little about the extent to which such networks affect the ability of disinformation spreaders to disseminate falsehoods. In this study, we conceptualize disinformation networks as a form of coordinated strategic communication and apply an innovative algorithm to quantify the networked influence of disinformation spreaders. We found that coordinated networks account for up to 62% of disinformation spreaders’ ability to engage the broader public and 23% of their ability to have their message shared more frequently. These findings suggest that any effective disinformation prevention effort needs to incorporate plans aimed at disrupting networks, rather than solely focusing on notable individuals. In addition, our further analysis reveals that the countries of origin and the type of disinformation spreaders significantly affect their ability to gain networked influence among their peers. Theoretical and practical implications are discussed.
Disinformation is “false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit” (High Level Expert Group [HLEG], 2018, p. 10). Disinformation is driven by economic, political, or ideological goals and often reflects sophisticated, coordinated communication strategies. Unlike misinformation, which refers to misleading information shared by people who may not intend to harm or profit, disinformation campaigns are characterized by their strategic nature. The word strategic implies that the communication is not random or unintentional. Instead, the communicator has premeditated plans that are carefully orchestrated and pursued. As noted by Habermas (1979), strategic actions are directly opposed to communicative action, which is based on the presupposition of “mutually recognized validity claims” (p. 209).
Studies have documented how disinformation campaigns can infiltrate multiple social media platforms using customized tactics such as bots, astroturfing, fictitious information blends, and content collages (Guo & Vargo, 2020; Keller et al., 2020; Y. M. Kim et al., 2018; Linvill & Warren, 2020; Rojecki & Meraz, 2016). At the core of disinformation are disinformation spreaders that include governments (Lu & Pan, 2021), special interest groups and fringe groups (Y. M. Kim et al., 2018; Krafft & Donovan, 2020), politicians and political groups (Vargo et al., 2018), and companies (Zerback et al., 2021). Disinformation spreaders often deploy sophisticated tactics to inflate social media influence while avoiding detection (Rojecki & Meraz, 2016). The deceptive nature of disinformation, the difficulty in verifying intent, and the often-hidden connections among actors make disinformation detection, verification, and prevention a difficult problem to address despite growing social concerns. Moreover, while the action of a single disinformation spreader, however popular, may only achieve a limited impact, coordinated sharing of disinformation among spreaders can exponentially amplify their influence and exacerbate harm (Keller et al., 2020).
The coordinated sharing of disinformation between two or more spreaders can form a disinformation network. A network is a structured collection of actors and their relationships (Diani & McAdam, 2003). In this study, we define a disinformation network as a network that revolves around the coordinated sharing of false information by its participants. While previous communication studies have documented the existence and functions of such networks (Giglietto et al., 2020; Keller et al., 2020), there remains a gap in our understanding regarding the significance of these networks to disinformation spreaders. Furthermore, it remains unclear whether all disinformation spreaders function similarly, or if certain types of spreaders wield exceptional influence.
In our endeavor to address these questions, this study contributes two significant insights to the disinformation literature. First, we conceptualize disinformation networks as coordinated strategic networks and employ an innovative algorithm to illustrate how much these networks shape the performance of disinformation spreaders. Drawing from the social movement literature (Diani & McAdam, 2003; Jung et al., 2014; Sommerfeldt & Yang, 2017; Zhou & Yang, 2021), which has documented the value of coordinated strategic networks for information spreaders, we argue that disinformation spreaders also leverage such networks to reap similar benefits. In addition, we draw from the networked influence literature (Friedkin, 1998, 2001; Friedkin et al., 2016; Jia et al., 2015; Leenders, 2002) and utilize an innovative algorithm (Williams et al., 2021) to quantify the influence exerted by disinformation spreaders on their peers. This algorithm employs machine-learning processes to model various conditions, ultimately providing a direct measure of a spreader’s impact on the entire system.
Second, we advance a typology of disinformation spreaders (click-baiters, issue exploiters, and political manipulators) and demonstrate that different types of spreaders exhibit varying levels of influence. This typology, along with its associated findings, offers a framework for future research, enabling the differentiation of various types of disinformation spreaders and facilitating close monitoring of their behaviors.
The findings of our study suggest that coordinated networks can account for up to 62% of disinformation spreaders’ ability to engage the public and 23% of their ability to amplify a message (in terms of garnering more shares). These results underscore the importance of incorporating strategies that target network disruption in disinformation prevention efforts, rather than solely focusing on individual spreaders. Furthermore, our analysis reveals that factors such as the spreaders’ countries of origin and types significantly influence their ability to gain social influence within their peer networks.
Disinformation Campaigns as Strategic Networks
Disinformation Campaigns
Disinformation can affect publics’ perceptions about important issues such as public policy, public health, and political campaigns and severely harm processes that are fundamental to democracy. Concerns regarding the impact of disinformation have stimulated a fast-growing body of academic research on this phenomenon (Guo & Vargo, 2020; Keller et al., 2020; Y. M. Kim et al., 2018; Linvill & Warren, 2020; Rojecki & Meraz, 2016). Based on the literature, we classify disinformation studies into two streams: co-producer studies and communication process studies.

[Figure: Examples of three types of disinformation spreaders.]
Specifically, click-baiters produce or spread disinformation for direct or indirect monetary gains. Click-baiting is a way of structuring headlines and online content to generate but not fulfill readers’ curiosity so readers are compelled to click to obtain more information (Lu & Pan, 2021). Click-baiting can be applied to any content and is a tried-and-true digital marketing tool for advertisers who make money by capturing clicks and views online.
Issue exploiters produce and distribute disinformation to advance their mission or issue position. While normal issue advocacy aims to affect political or public behaviors through the strategic deployment of information and arguments, issue exploiters strive to accomplish such goals with disinformation. Some issue exploiters reveal their identities, but others may pose as unrelated accounts while coordinating their sharing, posting, and commenting behaviors in an effort to inflate the visibility of their content. This is a widespread subtype of disinformation, known in the literature as “astroturfing” (Zerback et al., 2021).
Finally, political manipulators produce or spread disinformation to advance their political/ideological interests, appeal to their base/supporters, and build political brands. Political manipulators include some politicians, candidates, political pundits, and politically affiliated groups and media. Disinformation spreaders often traffic in controversial issues such as COVID vaccines or climate change. This phenomenon has been extensively examined within computational propaganda research (Bolsover & Howard, 2017). Woolley and Howard (2016) define computational propaganda as “the assemblage of social media platforms, autonomous agents, and big data tasked with the manipulation of public opinion” (p. 3). Computational propaganda thrives due to the anonymity of the internet, allowing state-produced propaganda to masquerade as organic content (Woolley & Howard, 2016). Moreover, the internet has introduced novel avenues for the efficient dissemination of propaganda, leveraging algorithmic manipulation and audience targeting facilitated by big data analytics. The societal ramifications of these evolving propaganda dynamics are only beginning to be comprehended, further complicated by the unprecedented fusion of social and technical possibilities enabled by AI.
Communication Process Studies
The other stream of disinformation studies focuses on the communication process that drives diffusion with various effects. Three primary types of communication process studies have emerged: detection studies, diffusion studies, and audience reaction studies.
Detection studies primarily investigate methods to identify disinformation. These studies often rely on content analysis, examining false content flagged by fact-checkers, or analyzing its distinctive features. In addition, some studies focus on identifying malicious actors through manual verification or automated detection methods. Nakov et al. (2021) discuss a range of technologies, including transformer-based tools and AI-driven approaches for evidence retrieval and verification. A recent avenue of research proposes detecting disinformation based on patterns of coordination among its spreaders (Giglietto et al., 2020). For instance, Keller et al. (2020) suggest an identification strategy based on coordination patterns, arguing that collective behaviors within a group of accounts serve as stronger indicators of a disinformation campaign than individual “bot-like” behaviors.
Diffusion studies examine the temporal and spatial dimensions of disinformation propagation. Li (2020), after reviewing numerous studies, concludes that disinformation tends to spread faster than truthful information and may reappear through various cascading mechanisms. Furthermore, understanding the political, social, and technological contexts, as well as the socio-technical spaces in which disinformation spreads, is crucial.
Audience reaction studies focus on how audiences respond to disinformation (Jones-Jang et al., 2020). They explore how disinformation shapes people’s perceptions and how false beliefs persist despite corrections (Thorson, 2016). Some studies investigate audience characteristics such as knowledge level, political engagement, exposure to diverse viewpoints, and susceptibility to social corrections, all of which influence reactions to disinformation (Amazeen & Bucy, 2019; Rossini et al., 2020).
In summary, previous studies have illuminated key characteristics of disinformation spreaders and campaigns. However, these examinations often isolate spreaders and their messages. Our study shifts focus to explore the network dynamics among spreaders, examining how these networks can serve as strategic tools to advance their objectives.
The Strategic Nature of Disinformation Campaigns
Previous disinformation studies have revealed that while disinformation may take many forms, coordinated efforts among actors (both human and nonhuman) are a key mechanism that can escalate the impact of disinformation. For instance, clusters of social bots that pose as humans can magnify the spread of disinformation through coordinated liking, sharing, and searches. Rojecki and Meraz (2016) found that partisan media tend to coordinate their efforts in spreading disinformation that smears candidates of the opponent party.
In this study, we recognize disinformation networks as a form of strategically coordinated networks (Sommerfeldt & Yang, 2017; Yang et al., 2021). Coordination here means “a form of organizational effort to attract public attention or direct mobilization logistics on the ground” (Piedrahita et al., 2018, p. 327). Strategically coordinated networks are those formed through coordination to serve a set of shared goals. These networks are well-researched in the social movement literature. Studies have found that activism organizations coordinate their strategies to attract public attention and exert leverage on policy makers (Diani & McAdam, 2003; Jung et al., 2014) and to maximize impact (Keller et al., 2020). Nevertheless, little is known about how to quantify the influence of coordinated actions and which types of actors are most likely to wield large influence. To fill the gap, we turn to research on networked social influence.
Quantifying Networked Influence
Networked Influence
Social influence, which shapes actors’ attitudes, opinions, and behaviors, is pivotal for processes such as socialization, identity formation, collective decision-making, and cooperation. When this influence operates through social networks, it becomes what we term networked influence (Aral et al., 2009). Social networks provide connected individuals with the opportunity to observe and adjust their own opinions and behaviors based on those of others. The transition from mere social influence to networked influence holds significant importance. As Gruzd and Wellman (2014) highlighted, while social influence delves into the socio-psychological mechanisms of individual information processing, networked influence focuses on how the composition of networks (e.g., tie strength, homophily, and clusters) influences the adoption and alteration of opinions, conversations, and behaviors (Peres et al., 2010; Zheng et al., 2012).
However, traditional research on networked social influence has been constrained by limitations in accuracy, breadth, and depth, largely due to its reliance on self-report data (Kilduff, 1992). These studies often involve small sample sizes and static time points, failing to capture the intricate details of communication processes. Consequently, analytical results have been confined to examining small, well-defined populations and offering only a limited number of snapshots of interaction patterns. Nonetheless, with the rapid expansion of social media, the landscape is evolving. In these systems, social relationships wield significant influence over user behavior, as individuals can explicitly or implicitly influence their peers’ actions through social networks (Amatulli et al., 2014). Moreover, social influence can cascade through these networks, yielding widespread effects.
Over the past decades, social modeling concerning social influence and spread dynamics in social networks has been explored across various domains, including the spread of epidemics, diffusion of technological innovations, and the impact of word-of-mouth in product promotion (Peres et al., 2010). Numerous models have emerged from these endeavors (see Zheng et al., 2012 for an extensive review). Macro-level models such as the SIR (Susceptible/Infective/Removed) model and the Bass model primarily focus on capturing spread behaviors at the population level. In contrast, micro-level models aim to elucidate individual behaviors within the context of social network topology. These micro-level models encompass well-known frameworks such as the preferential attachment model, threshold model, cascade model, and competitive model.
Among these models, various iterations of the cascade model concentrate on the phenomenon of cascading social influence. For instance, the model proposed by Goldenberg et al. (2001) posits that node m initially activates at step t – 1 and subsequently has the opportunity to activate its inactive neighbor n with a constant probability pn(m). Upon successful activation by m, n becomes active at step t. In scenarios where multiple neighbors of n are active at step t – 1, activation attempts are executed randomly. Cascade models realistically depict how social influence diffuses on social media platforms and serve to elucidate the pivotal role of influencers, such as disinformation spreaders. Subsequent research also examines the diverse roles played by different actor types within such cascading processes. Our study focuses on the interactions among influencers and investigates how they mutually influence one another.
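As an illustration only (not the authors’ implementation), the independent cascade mechanism described above can be sketched in a few lines of Python; the toy network, seed set, and activation probability below are hypothetical:

```python
import random

def independent_cascade(neighbors, seeds, p=0.1, rng=None):
    """One run of an independent cascade: each newly activated node m gets
    a single chance to activate each inactive neighbor n with constant
    probability p (the p_n(m) of Goldenberg et al., 2001)."""
    rng = rng or random.Random(42)  # fixed seed so the sketch is reproducible
    active = set(seeds)
    frontier = list(seeds)          # nodes activated at the previous step
    while frontier:
        next_frontier = []
        for m in frontier:
            for n in neighbors.get(m, []):
                if n not in active and rng.random() < p:
                    active.add(n)   # n becomes active at the next step
                    next_frontier.append(n)
        frontier = next_frontier
    return active

# Hypothetical toy network: one seed influencer connected to a small cluster.
net = {"A": ["B", "C", "D"], "B": ["E"], "C": ["E"], "D": [], "E": []}
reached = independent_cascade(net, seeds=["A"], p=0.9)
```

With a high activation probability, a single well-connected seed reaches most of the cluster, which is the intuition behind influencers catalyzing diffusion.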
The Uniqueness of the Current Approach
A key area of research in networked influence is to distinguish causal influence from other confounds that lead to behavioral clustering in network space and time (Aral & Walker, 2012). Establishing a causal relationship with network data is challenging due to issues such as correlated observations, time-varying factors, and unobserved confounding factors (Tucker, 2008). Some studies have adopted experiments with random assignment to establish causal effects (Tucker, 2008). However, experiments often lack generalizability and cannot practically be applied to study unsympathetic actors such as disinformation spreaders. To address these challenges, we draw on a novel approach that allows for a causal and falsifiable measure of influence between actors in a system.
As outlined in Williams et al. (2021), the “Social Value” model is the equivalent of an over-time experiment of actors in a network. In its simplest form, the model asks whether the behaviors of others are different when the person of interest is present or takes some action. Consider a series of parties at a house, and then whether a particular friend attends or not. If the parties where the friend attends are different than the ones where she does not, we have the fundamentals of an experiment. The causal variable is the presence or absence of the person. So, if there are enough cases where that person is present and cases where they are not (or takes some action or does not), then those two sets of observations can be compared and we can say that the behaviors of others were caused by that person. Although there are many possible variations and complications of this approach, the basic idea is essentially a repeated social experiment in which we can determine the effect of any person on everyone else around them.
In the current context, the outcome variable is disinformation spread. So, applying the Social Value inclusion/exclusion logic, the approach compares instances when an individual spreads disinformation versus when they do not, asking “is there a change in those around them between these two instances?” So, do others around them spread disinformation more or less when the focal person does or doesn’t?
At the core of the networked social influence estimation is a machine-learning model that estimates the effect of a range of factors on the variable of interest. The Social Value model allows for a precise quantification of how much influence each actor has on others in their network and so identifies the actors with high levels of networked influence, or “Social Value.” This enables individual-level hypothesis testing. In addition, the model can be used cumulatively to gain macro-level insights into the larger network itself, and how much behavior is socially driven overall versus not. This is accomplished by summing the Social Values of all actors, which allows the observer to calculate how much behavioral outcomes change due to the impact of all of the interactions, compared with other factors. In other words, it can quantify how much Social Value there is in the system compared with any remainder and so reveal how valuable a social network is compared with other causal forces. So, influence can be hypothesized and measured at both the individual and whole network levels, which we elaborate on in turn.
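The presence/absence logic can be illustrated with a deliberately naive difference-in-means sketch; the actual Social Value model uses machine-learning predictions rather than raw means, and all account names and numbers here are hypothetical:

```python
from statistics import mean

def social_value_estimate(observations, focal):
    """Toy version of the Social Value comparison: how do others' outcomes
    differ between time windows where the focal actor was active and
    windows where it was not?

    observations: list of dicts, one per time window, each with
        "actors_active" (set of active accounts) and
        "others_outcome" (aggregate outcome of everyone else).
    """
    with_focal = [o["others_outcome"] for o in observations
                  if focal in o["actors_active"]]
    without_focal = [o["others_outcome"] for o in observations
                     if focal not in o["actors_active"]]
    if not with_focal or not without_focal:
        return None  # no basis for comparison, as in a one-armed experiment
    return mean(with_focal) - mean(without_focal)

# Hypothetical windows: others' disinformation spread with/without spreader_1.
windows = [
    {"actors_active": {"spreader_1"}, "others_outcome": 120.0},
    {"actors_active": set(),          "others_outcome": 40.0},
    {"actors_active": {"spreader_1"}, "others_outcome": 100.0},
    {"actors_active": set(),          "others_outcome": 60.0},
]
print(social_value_estimate(windows, "spreader_1"))  # 60.0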
Individual Networked Influence
A prominent theme in networked influence research is the notion of influencers (Aral et al., 2009; Gruzd & Wellman, 2014). In cascade models, they are central to the activation of social influence in networks. Influencers are actors that are more influential than others and thus play a catalyzing role in spreading opinions and behaviors (Belanche et al., 2021). Unlike spontaneous communication, where individual personalities or similarities with others may strongly affect their ability to influence their peers, in the context of disinformation campaigns, spreaders are goal-oriented. Previous studies suggest that strategic actors are more likely to be susceptible to others’ influence when such behaviors can help advance their goals and bring tangible benefits (Diani & McAdam, 2003; Jung et al., 2014; Keller et al., 2020). In this study, as discussed earlier, we differentiated three types of disinformation spreaders: click-baiters, issue exploiters, and political manipulators. In addition, previous studies found that social media allow disinformation spreaders from multiple countries to easily coordinate actions (Keller et al., 2020). Moreover, such disinformation often cascades from a few core countries (e.g., the United States) to other countries and thus reaches diverse audiences (Golan & Himelboim, 2016). For disinformation related to global issues such as the COVID-19 pandemic, it is likely that actors’ countries of origin also play a role in their ability to accumulate networked influence. As such, we ask the following question:
RQ1: What types (in terms of disinformation spreader types and countries of origin) of disinformation spreaders accumulate more networked influence?
Whole Network Influence
In this study, we consider disinformation spreaders who repeatedly co-share messages about a social issue as members of a co-sharing network (Keller et al., 2020). Here, “whole network” refers to the entirety of interconnected individuals or entities participating in the dissemination of disinformation, encompassing all nodes and their interrelations within the system. Although these disinformation spreaders appear as separate accounts on social media, their repeated co-sharing of similar messages forms a network that may influence their ability to engage the public and drive message re-shares.
There are multiple potential theories for why coordinated networks benefit strategic actors. Piedrahita et al. (2018), for instance, found that the contagion effect in social networks often takes repeated activation to materialize. In other words, when users only encounter a message once, they are unlikely to act on them. However, a repeated encounter (“repeated activation” by Piedrahita et al., including examples such as repeated use of similar hashtags and repeated sharing of messages) can drive extra user engagement, which manifests as a sudden burst of online activities such as swift information cascades, trending topics, and viral hashtags. Another theory is that the coordinated network among strategic actors can be seen as a form of “public goods” to them. Shumate and Lipp (2008) found that when NGOs send hyperlinks to each other’s websites, such coordinated behaviors raise the overall visibility of their shared issue and direct more visitors to their websites. As the shared visibility rises, the condition benefits all NGOs working on the same issue area.
Although social media come in many shapes and flavors, two types of actions are fairly important across platforms: engagement and re-shares. In this study, we recognize social media engagement as a multifaceted construct encompassing cognitive, affective, and behavioral elements. Due to our reliance on observational data, we focus on behavioral engagements such as sharing, interacting, and endorsing behaviors (Dessart, 2017). On Facebook, for example, such behaviors include sharing, commenting, liking, and reacting with emojis (e.g., haha, wow, care, sad, and anger). In terms of re-shares, we specifically look at the likelihood of social media users to re-share or repost a message, which gives the message the possibility of propagating further on their social networks (J. W. Kim, 2018). To explore how a disinformation spreader’s whole network affects its ability to engage the public and garner more re-shares, we ask:
RQ2: To what degree does a disinformation spreader’s whole network drive their ability to engage with the public?
RQ3: To what degree does a disinformation spreader’s whole network drive the ability of their messages to get more re-shares?
Method
Sample
In this study, we test our research questions with COVID-19 vaccine-related disinformation collected from a sample of anti-vaxxer accounts. As many have noted, the identification of disinformation is difficult for a variety of reasons (see Krafft & Donovan, 2020 for a review). Although interviewing disinformation spreaders about their intentions is rarely feasible, studies can apply a number of characteristics of disinformation spreaders to reasonably classify some actors as disinformation spreaders. Key elements of disinformation spreaders are as follows: (1) special interests that would motivate actors to distort facts, (2) repeated offense and coordinated actions, and (3) the sharing of verifiably false information (Keller et al., 2020).
Following this guideline, we took several steps to identify our list of disinformation spreaders. First, previous research has identified a list of active anti-vaxxer accounts that repeatedly disseminated vaccine-related disinformation during the COVID-19 pandemic (Yang et al., 2021). This group of accounts is responsible for the lion’s share of vaccine-related disinformation disseminated on Facebook (Yang et al., 2021). We adopted this sample and verified accounts’ repeated involvement in vaccine-related disinformation through reviewing their “about us” information and account posting history. The repeated offense and presence of special interests are important characteristics of disinformation spreaders. Second, we further combined keywords such as COVID-19 and vaccine (including different variations of vaccine names) to search the full record of these accounts’ COVID-19 vaccine-related posts between March 1, 2020, and June 1, 2021 (this time period covers the first statewide shelter-at-home mandate to the time when more than 60% of residents in several states had received at least one dose of COVID-19 vaccines) (American Journal of Managed Care [AJMC], 2022).
We conducted our search on CrowdTangle (https://www.crowdtangle.com), which is a data archive hosted by a Meta-affiliated organization. It should be noted that CrowdTangle only includes public accounts, which means that we did not have access to information posted by private groups. Within the timeframe, these accounts shared 326 URLs. Further manual coding identified a total of 187 unique URLs that contained COVID-19 vaccine-related disinformation. It is important to clarify that this figure does not denote the total number of messages but rather highlights the recurrence of these 187 URLs across multiple accounts. A team of graduate students meticulously verified the factual inaccuracies in these URLs based on established scientific knowledge of COVID-19 vaccines, corroborated by reputable sources such as the Centers for Disease Control (CDC). The dissemination of false information stands as a fundamental aspect of disinformation. These URLs were disseminated by 6,252 actors spanning 76 countries. Notably, messages and account data from the top 10 countries were selected for further analysis, representing over 90% of the total actors involved.
In addition, we used the co-sharing of URL relationships among accounts to construct a co-sharing network (see visualizations in Appendix A). The construction of this network takes several steps. First, an edgelist between spreaders and URLs is loaded into Python using networkx to create a two-mode network. Second, we further transformed this two-mode network into a one-mode network by iterating over the edgelist of the two-mode network. Within this one-mode network, accounts A and B are considered to share a connection if they both post the same URL. Notably, this one-mode network exclusively comprises accounts, with follower information omitted to safeguard the privacy of private users. In total, the network consists of 531 actors interconnected by 29,548 ties, with no self-loops permitted, resulting in a density of 0.109.
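The two-mode-to-one-mode construction described above can be sketched with networkx’s built-in bipartite projection, which is equivalent to iterating the edgelist directly; the account and URL names below are hypothetical:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical (account, URL) edgelist, standing in for the study's data.
edges = [
    ("acct_A", "url_1"), ("acct_B", "url_1"),
    ("acct_B", "url_2"), ("acct_C", "url_2"),
    ("acct_D", "url_3"),
]

# Step 1: two-mode (bipartite) account-URL network.
B = nx.Graph()
accounts = {a for a, _ in edges}
urls = {u for _, u in edges}
B.add_nodes_from(accounts, bipartite=0)
B.add_nodes_from(urls, bipartite=1)
B.add_edges_from(edges)

# Step 2: project onto accounts -- two accounts are tied if they posted
# at least one URL in common; edge weight counts shared URLs.
G = bipartite.weighted_projected_graph(B, accounts)

print(sorted(G.edges(data="weight")))
```

Here acct_A and acct_B are tied through url_1, acct_B and acct_C through url_2, and acct_D remains isolated, mirroring the co-sharing rule stated above.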
Independent Variables and Control Variables
Accounts’ Country of Origin
We retained accounts from 10 countries for analysis. See Appendix B for details on accounts’ countries of origin and representation frequency.
Spreader Type
We coded the accounts into four spreader types: click-baiters (n = 212), issue exploiters (n = 100), political manipulators (n = 97), and others (n = 121).
Accounts’ Purposes
We manually coded the accounts’ self-reported “about us” section to identify their self-reported purposes. Two coders independently coded 10% of the sample and reached an intercoder reliability of 83%. After a discussion that resolved all disagreements, the two coders then proceeded to code the rest of the sample. We identified a total of 11 types of purposes: promote alternative medicine, promote alternative science, promote conspiracy theories, mobilize anti-lockdown, mobilize anti-all-vaccines, mobilize anti-COVID-19 vaccines, express religious objections, promote conservative ideologies, promote liberal ideologies, express race-related distrust, and others.
Account Social Media Status Information
This category of variables included information such as the accumulated likes that an account has received at the time of message posting and the number of followers.
Dependent Variables
In this study, we have two dependent variables that are analyzed in separate models.
Total Engagement
This variable refers to the total number of audience reactions that a post has received (including like, haha, love, care, sad, wow, and anger). It also includes the number of times a post received comments.
Re-Shares
This is the number of times that a post has been shared. When someone re-shared a post, not only did the person interact with the post, but she also needed to endorse it in her social network, which may require a higher level of commitment than other forms of engagement.
Analytic Procedure
As discussed earlier, in this project, we utilize the open-source software developed by Williams et al. (2021). The model estimation and codes are adopted from Williams et al. (2021) with modifications based on the current study. For original codes, example data, and data structure requirements, please visit https://github.com/eunakhan/social-value. Specifically, in this study, we took several steps to build models to estimate the amount of total engagement or re-shares for all actors. First, we built models that used data up to time t to predict actors’ total engagement or re-shares in interval τ = (t, t + τ). This step essentially records actors’ performance at each time point, makes an estimation about how they would perform at the next time point, and then compares the estimate against the observation at t + τ to record errors. Second, based on the co-sharing network at time t, we found all actors (U) whose neighbors were absent in this interval τ and considered the pairwise networked social influence each of these users had on their neighbors. This step uses the network information to consider who could have the chance to exert networked influence on a given actor. This step is akin to that of an experiment, in which one would observe how actors behave with and without the presence of a specific actor, and record the differences.
Next, we predicted actors’ total engagement or re-shares at τ for each actor u
In the last step, for each actor u ∈ U, we subtracted the sum of the pairwise networked social influence values of all absent neighbors on actor u from the total engagement or re-shares estimate of the previous step, yielding the networked-influence-adjusted amount.
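One plausible rendering of this adjustment, assuming that ŷ denotes the estimate from the previous step, A_u the set of actor u’s neighbors absent in interval τ, and s(v → u) the pairwise networked social influence of neighbor v on u (notation ours, not taken from Williams et al., 2021):

```latex
\tilde{y}_u(\tau) \;=\; \hat{y}_u(\tau) \;-\; \sum_{v \in A_u} s(v \to u)
```

Here \(\tilde{y}_u(\tau)\) is the networked-influence-adjusted estimate of actor u’s total engagement or re-shares in the interval.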
These models generated several value estimations as outputs. First, asocial value refers to the portion of an account’s behavior that is not attributable to others. Asocial value is the performance outcome an account would achieve regardless of its networked contacts. Imagine a person who lives on an isolated island and carries out a set of behaviors; those behaviors are asocial in the sense that no one influenced them. Second, social value refers to an account’s networked influence: the degree to which it shapes others’ behaviors. Third, an account’s asocial value and its impact on others, taken together, constitute its impact on the system, which is termed network power. Finally, an actor’s total value is the sum of their asocial value, social value, and others’ influence on them. Total value is calculated by estimating how much activity would disappear from the system if the actor were to leave (Williams et al., 2021).
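These four quantities compose additively. A minimal numerical illustration, with hypothetical values chosen only to show the arithmetic:

```python
# Hypothetical decomposition for one account (units: engagement counts).
asocial_value = 100.0         # activity the account generates on its own
social_value = 160.0          # activity it induces in its networked contacts
influence_on_account = 40.0   # activity others induce in this account

# Network power: the account's own impact on the system.
network_power = asocial_value + social_value

# Total value: all activity that would disappear if the account left.
total_value = asocial_value + social_value + influence_on_account
```

Note that in Appendix D the mean network power for total engagement (261.99) is, as this identity implies, approximately the sum of the mean asocial value (100.73) and mean social value (161.25).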
In our model estimation, we implemented random forest regression models inside the social value algorithm; these models had R² values of 82% for total engagement and 85% for re-shares, and accuracies of 61% and 92%, respectively. The random forest regression models help estimate the effect of actors’ presence on others. We set the number of trees in the forest to 100 for both models. See Appendix C for SHAP value estimations on both random forest models; SHAP estimation shows which predictors most strongly affect model performance.
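A skeletal version of such an estimator can be set up as follows. This is a generic scikit-learn sketch under stated assumptions, not the study’s configuration: the features (past engagement, number of active neighbors) and the toy data are illustrative stand-ins for the actual feature set.

```python
# Sketch of a 100-tree random forest regressor of the kind used inside the
# social value algorithm. Features and data here are hypothetical.
from sklearn.ensemble import RandomForestRegressor

X = [[10, 2], [4, 0], [6, 1], [8, 2], [3, 0], [7, 1]]  # toy feature rows
y = [14.0, 4.0, 8.0, 12.0, 3.0, 9.0]                   # toy engagement

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)
r_squared = model.score(X, y)  # in-sample R²; the paper reports .82/.85
```

SHAP values for a fitted forest like this can then be computed with a tree explainer to rank feature contributions, as reported in Appendix C.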
Results
One of our questions explored the degree to which disinformation spreaders’ whole networks shaped their ability to engage the public. We calculated every actor’s social value and other related values (see Appendix D for details). In addition, we computed the cumulative share that social value accounted for of the total change observed in the dependent variable (total engagement). The cumulative value is 61.55%, meaning that over half of the observed variance in actors’ total engagement is attributable to their disinformation spreader network. The remaining 38.45% is attributable to other factors, such as the content itself or other unobserved variables.
Another question explored the degree to which disinformation spreaders’ whole network shaped their ability to have their messages re-shared more frequently. Similarly, we aggregated individual actors’ networked influence values and computed the cumulative share that disinformation spreaders’ whole network accounted for in their re-shares. This time, the cumulative value was 26.06%, which means that disinformation spreaders’ network accounted for about one-quarter of the variance that drives message re-shares. The remainder, 73.94%, must therefore be driven by other factors, presumably the message content or other unobserved variables.
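The cumulative share reported in this and the preceding paragraph amounts to summing actors’ social values and dividing by the total observed change in the dependent variable. A minimal sketch with hypothetical numbers:

```python
# Hypothetical per-actor social values and the total observed change in the
# dependent variable (e.g., re-shares) over the study window.
social_values = [0.30, 0.25, 0.20, 0.15, 0.14]
total_observed_change = 4.0

# Fraction of the observed change attributable to the spreader network.
cumulative_share = sum(social_values) / total_observed_change
percent = round(100 * cumulative_share, 2)  # reported as a percentage
```

With these invented inputs the share is 26.0%, mirroring the form of the 26.06% figure above.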
Finally, a research question asked which types of disinformation spreaders accumulate more networked influence. To answer it, we used individual disinformation spreaders’ social values as the dependent variable and ran logistic regressions to examine how countries of origin, accounts’ purposes, and spreader type affect actors’ social value level, while controlling for the accounts’ social media status. Tables 1 and 2 report the regression results in detail.
Logistic Regression Model for Networked Social Influence in Terms of Total Engagement.
For the countries category, the United States is the reference group. For the spreader type category, click-baiters are the reference group. For the accounts’ claims category, promotion of alternative medicine is the reference group.
*p < .05. **p < .01. ***p < .001.
Logistic Regression Model for Networked Social Influence in Terms of Re-Shares.
For the countries category, the United States is the reference group. For the spreader type category, click-baiters are the reference group. For the accounts’ claims category, promotion of alternative medicine is the reference group.
*p < .05. **p < .01. ***p < .001.
Specifically, we found that in the context of total engagement, accounts originating from the United States are significantly more influential than accounts originating from Australia, Great Britain, Italy, and Mexico. In addition, among spreaders, although issue exploiters accounted for only a small segment, they were significantly more likely than click-baiters to be influential (coef = 240.41, p < .05). Accounts in the “others” category were also significantly more influential than click-baiters (coef = 85.32, p < .001). No significant difference was observed between political manipulators and click-baiters.
In the context of re-shares, again, accounts originating from the United States were significantly more influential than accounts originating from Australia, Great Britain, and Italy. In terms of spreader type, issue exploiters (coef = 0.42, p < .0001) were again significantly more influential than click-baiters. In comparison, accounts in the “others” category (coef = −.06, p < .001) were significantly less influential than click-baiters. No significant difference was observed between political manipulators and click-baiters. In terms of accounts’ purposes, those that posted about conspiracy theories, anti-lockdown and related measures, and religious objections were significantly less influential than accounts promoting alternative medicine.
In sum, across contexts, accounts originating from the United States that fell under the issue exploiter category were most likely to acquire considerable Social Value among disinformation spreaders. Accounts’ purposes seemed to affect Social Value only in the context of re-shares, where messages from accounts that habitually post alternative medicine content are significantly more viral.
Discussion
Our study set out to quantify the value of disinformation spreaders’ networks in terms of driving engagement outcomes and re-shares. We also explored what types of disinformation spreaders were most influential among spreaders. The findings of our study illustrate the considerable impact of disinformation spreaders’ co-sharing network, highlight the importance of targeting such networks in future intervention efforts, and provide a network-centric approach to identify the most harmful spreaders.
The Strategic Value of Disinformation Networks
Disinformation studies have well documented the coordinated networks among disinformation spreaders (Keller et al., 2020; Rojecki & Meraz, 2016). In addition, previous social movement studies have explored the strategic values of coordinated networks such as bringing attention to social issues, facilitating the propagation of ideas, and expanding sites for recruiting new supporters (Diani & McAdam, 2003; Jung et al., 2014; Sommerfeldt & Yang, 2017; Zhou & Yang, 2021).
Drawing from the literature, our study conceptualized disinformation spreaders’ co-sharing networks as tools that can perform strategic functions and applied an innovative algorithm to quantify the amount of impact that disinformation spreaders’ whole network has on their engagement and re-shares outcomes (Williams et al., 2021).
Our investigation underscores the profound influence wielded by the co-sharing networks of disinformation spreaders on public engagement. Through our analysis, we discovered that a significant portion—up to 62%—of their ability to elicit likes, comments, and emotional reactions from the public is attributable to the structure and activity within these co-sharing networks, rather than to the inherent qualities of the messages or individuals involved. This finding illuminates a potent social mechanism often overlooked amid the focus on individual actors.
Given the observational nature of our study, causal relationships remain elusive. However, existing literature offers compelling insights, such as the concept of “repeated activation” as articulated by Piedrahita et al. (2018). This phenomenon describes how the relentless exposure of the public to identical messages from various accounts can profoundly impact their attention and trigger contagious effects within social networks, fostering heightened levels of engagement and participation. In essence, within the bustling public sphere, coordinated efforts among networked actors play a pivotal role in ensuring repeated activation. By strategically aligning their actions and disseminating congruent messages, these actors maximize the likelihood of repeated exposure for many members of the public, particularly their followers. Thus, the strategic value of disinformation spreaders’ networks lies in their adeptness at orchestrating repeated activation—an essential mechanism for driving engagement. Nonetheless, the extent to which repeated activation serves as the primary mechanism behind the observed phenomenon warrants further validation through future experimental research.
Similarly, we found that disinformation spreaders’ co-sharing networks also matter a great deal for their messages to go viral in terms of being shared more frequently. Again, it is likely that repeated activation plays a role in motivating the public enough that they are willing to endorse disinformation among their social contacts, which needs to be further verified. Nevertheless, the effect is smaller for re-shares (23%) than for engagement. Our machine-learning model reveals that the type of account that shares these messages (account status and account stated purposes) matters a great deal for re-shares. It is likely that a match between the identity expressed by those accounts and the identity of the publics is a key factor that drives information sharing. Future studies should examine how social identity and self-categorization play out in the process of disinformation sharing, and how such factors interact with disinformation spreader networks.
In sum, our analysis confirms that a disinformation spreader’s co-sharing network is a crucial asset for accomplishing their strategic goals. In addition, we found that the effect of such networks varies depending on the type of behavior. For public behaviors that mostly involve individual reactions, the impact is more salient. For those that also involve social interactions, the impact is less prominent, as it is likely to be mediated by other factors.
Influencers Among Disinformation Spreaders
Our analysis also reveals that, among disinformation spreaders, some are much more influential than others and thus command considerably higher levels of networked influence. Among the three types of disinformation spreaders (click-baiters, issue exploiters, and political manipulators), issue exploiters are significantly more likely to accumulate more Social Value than others, and this finding is consistent across the context of engagement and re-shares. There are a couple of possible explanations for the observation.
First, issue exploiters produce and distribute disinformation to advance their missions or issue positions. Issue exploiters include nonprofits, religious groups, fringe groups, and special interest groups that strategically share disinformation to swing public opinion, persuade their followers, and attract support to their organizations. Some issue exploiters have the resources to strategically advocate on certain issues for decades (McDaniel et al., 2008). In the context of the current study, issue exploiters are anti-vaxxer organizations that promote anti-vaccine messages as their missions (e.g., some of the prominent groups in the sample are Children’s Health Defense, Informed Consent Action Network, Children’s Medical Safety Research Institute, and Stop Mandatory Vaccination). Given their persistent efforts in the anti-vaccination space, these issue exploiters may appear as “experts” on topics related to vaccines and therefore command more influence. Groups such as Children’s Health Defense are the sources from which many of the URLs originate. In comparison, although click-baiters are the largest sub-group in our sample and political manipulators (e.g., President Trump) tend to command more followers, they are not as influential as issue exploiters.
Second, the context of our analysis, the anti-vaccine issue, is in general not highly politicized (although during the COVID-19 pandemic, conservative political pundits were known to be more vocal against vaccine mandates; see Shin et al., 2022). This may explain why this issue context did not allow political manipulators to exert a big impact on other disinformation spreaders. In comparison, political manipulators may be more influential in contexts such as election-related disinformation. Future studies may explore whether different disinformation spreaders are influential in different issue contexts.
We also consistently find that accounts originating from the United States are significantly more influential than accounts from many other countries. This effect may be due to the fact that many of the most influential accounts, such as Children’s Health Defense, originate from the United States. It may also be due to the core-periphery system described in Wallerstein’s (1974) World System Theory. Research has found that this system is reproduced in the digital space, where information sources from “core” countries tend to dominate the international information system (Golan & Himelboim, 2016). In either case, this finding reminds us of the importance of regulating disinformation spreaders from such core countries. With the help of social media, core-country spreaders’ harmful impact may easily spill over to other countries, especially developing countries that may have even fewer resources to counter mis/disinformation.
Finally, we found that accounts’ social media status, such as the number of followers and the number of likes an account has accumulated at the time of message posting, may have a significant effect on their networked influence. For engagement, the number of followers significantly and negatively affected an account’s social value. This may be because many of the most-followed accounts in the sample are either political manipulators or click-baiters. For re-shares, we find a marginally significant effect of followership positively driving up accounts’ social value. These findings suggest that influencers identified based on social value are distinctly different from those identified based on followership. In other words, our approach offers a new way to reveal influencers.
Practical Implications
Disinformation is a multidimensional social problem that requires multidimensional responses. Intervention efforts that only address the information problem may not adequately curb the spread of disinformation. Our study shows that the network perspective is critical for addressing challenges associated with disinformation. Coordinated networks are a key strategic tool that disinformation spreaders build to amplify their engagement outcomes and drive up the re-shares of their messages. This research not only underscores the network perspective but also quantifies the impact, which proves substantial. Stakeholders such as social media platforms need to target such networks rather than merely blocking individual disinformation spreaders.
Moreover, our method suggests an alternative approach to develop a tracking system that can identify the most harmful disinformation spreaders with the greatest networked influence. This tracking system is different from existing approaches that are simply based on actors’ aggressive behaviors or followership. This approach considers the social network context to derive falsifiable scores, which can then be used to create categories of influencers.
Limitations and Future Research
This study has limitations that future research could address. First, our examination focused solely on disinformation spreaders within the context of vaccine-related disinformation disseminated during the COVID-19 pandemic. As previously highlighted, it remains uncertain whether certain findings, such as the prominence of issue exploiters relative to other types of disinformation spreaders, are exclusive to this context. It is plausible that in scenarios such as presidential elections or periods of war and armed conflict, other types of disinformation spreaders may gain prominence. Future research could entail comparative analyses across multiple disinformation contexts, aiming to systematically delineate the circumstances under which different types of disinformation spreaders wield the greatest influence. Our typology may serve as a useful foundation for theory building in such studies.

Second, our study exclusively focuses on disinformation spreaders. We do not know how Social Value diffuses beyond this circle, through the mass networks of general publics where disinformation circulates as misinformation. Future studies could collect discourse around the entire issue and explore how Social Value perpetuates beyond the circle of disinformation spreaders. In addition, owing to Meta policies designed to protect user privacy, we do not know who viewed the disinformation or how such exposure affected their attitudes toward vaccines. Future studies need to combine multiple methods to gain a comprehensive understanding of how networked influence diffuses through social networks and affects individual decisions. More importantly, future studies should also explore why actors are receptive to social influence.

Furthermore, our model indicates that social values derived from disinformation spreaders explain only approximately 25% of the variance, particularly concerning the re-sharing of disinformation.
Given our study’s focus on the network dynamics within this disinformation ecosystem, the factors accounting for the remaining 75% of variance remain unidentified. Previous research (Giglietto et al., 2020; Li, 2020; Zerback et al., 2021) suggests that various elements, including message content, communication strategies, actors’ characteristics, and motivations, likely exert significant influence. Subsequent investigations should aim to gather empirical evidence elucidating these factors and their interactions within the context of spreaders’ networks.
Conclusion
While previous research has focused on how networked influence occurs among individuals, in this study, we conceptualized such networks as strategic tools utilized by strategic communicators to serve their purposes. Furthermore, we explored the degree to which strategic actors derive their influence from their shared, coordination networks. We also explored which type of actors are more likely to be influential in such coordination networks. The findings provide strong support for the importance of incorporating the network perspective for disinformation spreader identification and intervention. They also revealed considerable differences among disinformation spreaders in terms of their influence levels. With continued research, we can build frameworks and models that holistically tackle the challenges posed by disinformation spreaders and offer more options for regulators and digital platforms to curb their harms.
Appendix
Descriptive Statistics of Actors’ Social Values and Other Related Values (columns 2–6: networked influence in total engagement; columns 7–11: networked influence in re-shares).

| Metric | Min | Max | Std | Mean | Total | Min | Max | Std | Mean | Total |
|---|---|---|---|---|---|---|---|---|---|---|
| Social value | 0 | 1,673.86 | 196.48 | 161.25 | 85,466.45 | 0.0 | 1.44 | 0.16 | 0.24 | 129.99 |
| Asocial value | 0 | 10,513.32 | 737.25 | 100.73 | 53,388.54 | 0.0 | 2.18 | 0.38 | 0.69 | 368.77 |
| Network power | 0 | 10,642.33 | 758.08 | 261.99 | 138,855 | 0.0 | 2.41 | 0.44 | 0.94 | 498.76 |
| Total value | 0 | 33,481.32 | 2,204.05 | 423.24 | 224,321.45 | 0.0 | 5.29 | 0.72 | 1.18 | 628.76 |
Acknowledgements
Prof. Williams and the University of Southern California have a financial interest in SVI, a company that has licensed intellectual property discussed in this article.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
