Abstract
This introduction for the special issue “Disinformation-for-Hire as Everyday Digital Labor” carves out a specific area of inquiry within the ever-growing field of disinformation studies, with its sharp focus on the commercial transactions, organizational logics, and entrepreneurial practices that propel the production of disinformation. Inspired by traditions of political economy, media production studies, and everyday life approaches, this framework draws analytical focus to (1) the slow-burn horror of disinformation as everyday digital labor; (2) the diverse industries and workers engaged in disinformation production; and (3) regulatory areas beyond social media content policy and platform-centric accountability—especially relevant in the Global Majority context. Furthermore, this essay discusses how digital labor studies need to engage more directly with the ways disinformation thrives in the gray in-betweens of formal/informal and licit/illicit digital economies. The essays in our collection mobilize “disinformation-for-hire” as a valuable analytical frame that lays bare disinformation as a product of commercial and political complicities in the late capitalist arrangement of transnational digital industries.
From state-sponsored propagandists silencing opposition forces by hiring click armies and troll farms, to commercially motivated data analytics firms selling insidious toolkits to politicians, to platform workers producing memes for overseas clients through “fake news factories,” disinformation production has become a professionalized and diversified industry with global reach and varied local histories. Who exactly are the buyers and sellers of (dis)information operations around the world? What are their social identities and professional backgrounds? Are there any regulatory solutions that can actually nip the problem of disinformation in the bud, or are awareness-raising exposés and investigative research the only consistent interventions to curb its production and marketization?
This special issue “Disinformation-for-Hire as Everyday Digital Labor” carves out a specific area of inquiry within the ever-growing field of disinformation studies, with its sharp focus on the commercial transactions, organizational logics, and entrepreneurial practices that propel the production of disinformation as everyday digital labor. Emphasizing the ordinariness of disinformation as always already embedded in everyday livelihood, local cultures, and the open global markets of digital industries is intended to be a provocation to the mainstream frameworks and dominant methodologies in the field. In our view, popular references to “dark firms” or “black markets” of social media influence operations tend to reinforce unhelpful dichotomies of good versus bad people and above-board versus below-the-table techno-commercial operations. This special issue instead spends its energies making sense of the morally gray areas where ruthless political ambition and unfettered social media commerce converge. This large and messy space is where lines between ethical and illicit are blurred, political and commercial motivations are entangled, and professional practice becomes the convenient cover for and enabler of cunning political investments. We aim to clarify that this framework is not about downplaying or normalizing disinformation. On the contrary, it is about interrogating disinformation’s longer histories, social structures, organizational arrangements, and precarious labor—inspired by the analytical traditions of political economy, cultural studies, and media and everyday life approaches that are often underrepresented in disinformation studies.
As the “dystopia beat” of today’s newsrooms, disinformation investigations have been hampered by how the traditional tools of reporting, correcting, and exposing can inadvertently extend the influence of the people flooding our information environment with falsehoods and hate. While researchers and journalistic allies have become more careful not to “oxygenate” hate by practicing strategic silence (Donovan & boyd, 2019; Phillips, 2020), we find it important to continue the conversation on how the tropes of “dystopian” disinformation research and reporting could better engage audiences and policymakers, rather than trigger their fatigue or defensiveness. Reporting that relies on the shock value of “wild” conspiracy theories or pokes fun at the shamelessness of social media influencers may do well in engaging political fans’ attention for a news cycle, but it takes risky investigative journalism and patient ethnographic research to get at the long histories and social structures behind singular moments of so-called dystopian social media activities. Thus, we argue in this essay that the study of disinformation needs to extend beyond “fake news” events (e.g., the bald-faced lie, the textuality of a tweet, and the hashtag campaign that trended on social media) to examine disinformation’s tentacles seeping into the everyday (e.g., the recruitment of operators, the professional covers that smoothed commercial transactions, and the shifts of political and moral allegiances over time). We also need to continually find ways to contextualize and analyze patterns of media manipulation as intentional and routine objects of organized and collaborative labor rather than depend on hype and outrage as triggers of public action.
A focus on how falsehoods and hate are executed by competitive and collaborative teams of task-minded laborers rather than by “extremist” or “fascist” believers of a coherent ideology is not an argument for moral relativism or a passivity to normative and regulatory discussion. Instead, we mobilize “disinformation-for-hire” as a valuable analytical frame that lays bare disinformation as a product of commercial and political complicities in the late capitalist arrangement of transnational digital industries. We argue that focusing on complicity broadens the field’s main targets of analysis, that is, “fake news” and “coordinated inauthentic behavior” mobilized for far-right ideology, by tracing their “family resemblances” to forebears in digital campaigning, influencer marketing, public relations (PR) spin, attack advertising, and state intelligence operations. In addition, the disinformation-for-hire framework can better analyze the ways in which ideologically driven disinformation, asymmetrically produced by right-wing groups across the Global North (Kreiss & McGregor, 2023; Starbird et al., 2023) and Majority World (Amrute et al., 2022; Ricaurte, 2022), is strategically funded—whether top-down from political elites and parties, or more horizontally through campaign contributions or in-kind donations from corporate elites seeking political favors.
This framework thus better realizes the promise of a “whole-of-society” approach to fighting disinformation (Donovan et al., 2021) as we reflect on how we can better engage stakeholders across sectors of advertising, PR, political consultancy, and international law. It is important to trace lines of power and responsibility across private and public sectors and expand legal and regulatory discussion in the disinformation space beyond its core focus on Big Tech platform accountability. This expansive approach is all the more relevant in countries in the Global Majority, where “fake news” panics have been weaponized by incumbent politicians to target oppositional and activist voices on social media (George, 2019), all the while offering protection to the professional consultants who engineered their successful campaigns (Ong, 2020).
This introductory essay summarizes important tropes and features of “disinformation-for-hire” research and reporting that lend themselves to important, if less-heard, interventions and solutions in the disinformation space. We also introduce the articles of this special issue.
Three Tropes of Disinformation-for-Hire Research
The term “disinformation-for-hire” was popularized by pioneering disinformation beat journalists Craig Silverman, Jane Lytvynenko, and William Kung in their 2020 BuzzFeed News article exposing PR firms that spread lies on behalf of corporate and political clients (Silverman et al., 2020). While the headline played up the “new breed” quality of combining human “brute force” with the automated tools of current PR practice, the article offered granular detail and international comparison of how different firms have profited from spam, search engine optimization, and astroturfing for clients in politics, entertainment, and private corporations. Based on original interviews with insiders, historical overviews, and engagement with recent academic research, the authors reported on how PR firms have cloaked influence operations from Eastern Europe, Israel, the Philippines, South Africa, and Taiwan. They documented the cat-and-mouse games private firms play with platforms taking down accounts and banning users. They also referenced ethnographic academic research that has previously argued how PR industry jargon has been used to neutralize the stigma of the digital labor of creating disinformation for political clients (Ong & Cabañes, 2019a).
This article brings to the spotlight the analytical and storytelling tropes that distinguish disinformation-for-hire research from the mainstream disinformation beat. Three key tropes we identify from disinformation-for-hire research are its discussion of (1) the slow-burn horror of disinformation as everyday digital labor; (2) diverse industries and workers engaged in disinformation production; and (3) regulatory areas beyond social media content policy and platform-centric accountability.
The first trope of the disinformation-for-hire subfield is a narrativity of the everyday labor of producing disinformation as the setting for amoral human activity. This focus on the unthinking and casual process of how disinformation campaigns are assembled imparts a creepy sense of existential dread to readers of powerful and granular investigative writing. Compared with the high-emotion beats of shock effect, shame, or ridicule struck by other fact-checking genres of disinformation reporting, disinformation-for-hire investigations drum on the amorality of workers by interviewing and quoting insiders, whistleblowers, and known industry figures previously or currently involved in digital operations. For example, in The Guardian’s (Kirchgaessner et al., 2023) undercover exposé of the “Team Jorge” Israeli contractors, direct quotes from the firm’s mastermind Tal Hanan justifying that “[he] had been working all [his] life according to the law” are used as a powerful if open-ended counterpoint to the article’s detailed account of the perpetrator’s boasting of fake account creation and media manipulation in the service of clients taking part in electoral races all around the world. The Guardian article homes in on the casualness and directness with which its antagonist Hanan assigns financial value to affairs that constitute international electoral interference (Kirchgaessner et al., 2023)—a portrayal of villainous qualities that is rather different from the critique of top-down state-sponsored oppressive force exerted by well-known authoritarian figures or the use of offensive and hateful language of social media influencers.
If mainstream disinformation studies tend to mobilize shock effects from their narration of people’s ideological investments in antidemocratic and extremist belief systems (e.g., Davey & Ebner, 2019) and the ways anti-establishment thinking is stoked by foreign adversaries like Russia (e.g., Jankowicz, 2020), disinformation-for-hire troubles readers primarily through its focus on the short-term and touch-and-go qualities of disinformation projects. In this perspective, horror is triggered not by the retelling of the dangerous online rabbit holes that radicalized lonely individuals or disenfranchised populations over time, but by the commercial success of tech-savvy hustlers who have managed to appease a revolving door of clients and maintain a skilled workforce to fulfill project lifecycles. The idea that “organized lying” is a natural, rather than illicit or extraneous, part of everyday life retells the problem of disinformation from the lens of an Arendtian “banality of evil” (Arendt, 1963).
For strategic communications scholar Lee Edwards (2020), this analysis requires media studies to think about how diverse media industries are historically culpable for the current crisis of disinformation. Although there is a strong presentist quality to the frame of disinformation as emerging from Web 2.0 and 3.0 social media, PR practices of creating misleading or false communication for the pharmaceutical, alcohol, and extractive industries—and even nonprofits—are commercial services that also require regulatory attention. Current debates about social media regulation have too often narrowly focused on content moderation and takedowns of falsehoods and conspiracy theories. It is crucial to be reminded how the PR industry must be held to account for its complicity in the problem of disinformation while it strategically protects the legitimacy of the profession at large (Edwards, 2020).
The second trope of disinformation-for-hire research is the representation of industry and worker diversity; this research area endeavors to discuss how multiple positionalities of class, gender, race, and nationality participate in a whole range of formal and informal labor regimes organized around digital deception. Social class, especially the interplay of different class positions (Wright, 2015), was a key focus in Ong and Cabañes’ (2018) ethnographic research “Architects of Networked Disinformation,” which visualized a hierarchy of elite politicians collaborating with upper-middle-class marketing consultants while exploiting the cheap labor of young and precarious digital workers mired in race-to-the-bottom work arrangements in the Global Majority. Ong and Cabañes’ (2018) work challenges the mainstream global narrative around Rodrigo Duterte’s Philippines that has often fixated on how an angry populist leader’s violent message and armies of online trolls mobilized low-income and less-educated voters to support his controversial policies. Interviewing political strategists, influencers, and social media operators working across various marketing firms and serving politicians across the political spectrum, Ong and Cabañes (2018) retold the ways in which older and younger professionals are recruited into various forms of political trolling while deploying myriad excuses and moral displacement strategies—that they were simply “doing their job” and that “everyone is doing it anyway.” Challenging the mainstream political analysis of angry populism in the Philippines as fueled by disinformation that is produced and consumed by voters of the low-income classes, the authors argue that the main culprits “hide in plain sight, wearing respectable faces, sidestepping accountability while the public’s moral panics about trolling are directed elsewhere” (Ong & Cabañes, 2018, p. 3).
This trope of disinformation-for-hire thus encourages a more precise articulation of power, responsibility, and accountability in both sociological and regulatory discussions within disinformation studies. All too often, especially in Global Majority contexts, it is poor, working-class, and young voters who are assumed to be the “dumb” audiences or producers of disinformation campaigns in service of populist politicians, while the responsibilities of the elites and middle classes are overlooked (Webb, 2022).
Indeed, prior disinformation studies help map out for us a class-divided yet collaborative hierarchy of workers in the disinformation industry: (1) elite entrepreneurs, such as tech firm owners and investors, innovation industry leaders, elite freelance coders or developers, PR strategists, and political campaign consultants (Benkler et al., 2018; Braun & Eklund, 2019; Briant, 2018); (2) digital and platform workers, such as middle-class creative workers in local media industries, gig platform workers, social media influencers, political pundits, and pollsters (Abidin, 2016; Lewis, 2018; Ong & Cabañes, 2019b); and (3) digital “sweat laborers,” such as menial laborers employed in tech factories and click farms, politicians’ staff exploited for their digital skills, and microtask workers on global platforms like Amazon’s Mechanical Turk or local click farm platforms (Confessore et al., 2018; Grohmann, Pereira, et al., 2022; Pohjonen & Udupa, 2017; Silverman & Alexander, 2016).
These workers are situated in various degrees of official/unofficial, paid/unpaid, formal/informal, and illicit/above-board forms of digital labor. The NATO StratCom COE (2018, p. 5) report “The Black Market for Social Media Manipulation” identifies overlapping categories of “the easily accessible open market, the dark web, and the offline word-of-mouth market.” The report also discusses the diverse clients that seek different services in this market: “Some customers want more likes on their photos, some want to profit financially at the expense of the ad industry, and others want to influence the outcome of elections” (NATO StratCom COE, 2018, p. 16). Given this worker and industry diversity in disinformation production, it is important we revisit popular assumptions about “troll farms,” “bots,” and “astroturfing,” and understand the exact organizations and transactions behind specific instances of media manipulation. As Ong and Cabañes (2018) have asserted in their ethnography, “nobody is a full-time troll,” and so we need to demystify the “human infrastructures” behind fake news (Nemer, 2021).
It is important that disinformation-for-hire research continues to shed light on the diverse entry points through which people are recruited into disinformation labor and the qualities of responsibility and accountability that we can assign to them at each level. Broadening out from a narrow focus on individual “scammers” or “fraudsters” (Grohmann, Pereira, et al., 2022), we can understand the layers of political and commercial power and assign responsibility to people at the top of the hierarchy. We can also consider and support the ways in which workers might resist and refuse unethical and exploitative labor relations within organizations and industries. Media democracy interventions in the disinformation space can actually learn lessons from tech activist strategies of providing legal protections for whistleblowers and other collective action initiatives against Big Tech firms (e.g., Valmary, 2023).
The third trope of disinformation-for-hire investigations is a rejection of platform determinist frameworks in diagnosing the disinformation crisis in favor of socio-historical analysis and diverse interventions beyond fixing and policing social media content. The fact that we live in a platformization context (Poell et al., 2019) should not mean that problems and their solutions depend solely on official top-down regulatory engagements with platforms nor on the pressures exerted by civil society partners in backchannel communications with their content policy teams (Ong et al., 2019).
Unfortunately, as Robyn Caplan et al. (2020) observe, “technological determinism, the idea that technology is a dominant factor in social change, has seeped into the ways we talk about platforms.” In contexts across the Global North and the Global Majority, platform determinist discourses have become convenient, persuasive, yet deeply problematic organizing frames that civil society actors mobilize as justification for their counter-disinformation efforts. Platform determinist explanatory frameworks, where Big Tech is spotlighted for “surprise” electoral victories of populist leaders, indeed mark a resurgence of the hypodermic needle view of media effects in global public discourse (Chakravartty & Roy, 2017). All too often, the set of solutions made available by this perspective is narrowly confined to “fixing the content” on social media—an approach mired in legal definitional battles around “fake news” and content policy minutiae of what words or images to downvote in the News Feed. Legal approaches overly focused on policing social media content are especially prone to authoritarian state capture—as has become the case in many Global Majority countries, where governments attempt to exert control over what counts as misinformation and punish its authors accordingly, resulting in witch hunts of opposition leaders, activists, and even academics. We especially support Global Majority civil society leaders who have spoken out against Global North policy advocacy, such as in the case of #PushbackUNESCO, where Southeast Asian leaders challenged the United Nations Educational, Scientific, and Cultural Organization’s (UNESCO’s) global guidelines that encourage governments to take greater control in regulating social media content (#PushbackUNESCO, 2023).
By framing the disinformation crisis as primarily an issue of content sanitation, platform determinist frameworks also tend to support fact-checking as a key intervention (Lelo, 2022). In many Global Majority countries, fact-checking startups have mushroomed and become a veritable “industry” by attracting foreign philanthropy and organizing local liberal supporters around what they tout as a practical intervention. Of course, the expansion of fact-checking models is accelerated by Big Tech companies themselves, who have recruited local third parties to help with their content moderation decisions. The idea that fact-checking can inadvertently popularize false claims has been thoughtfully advanced by Whitney Phillips (2020), who advocates for the use of strategic silence, or the quarantining of dangerous speech in cases when reporting could inadvertently draw more visibility and clicks to extreme perspectives. Writing about India, media anthropologist Sahana Udupa (2019) raises concern that fact-checking is highly vulnerable to accusations of partisanship. She observes in the Indian context that fact-checking and “fake news busting” have become “weaponized” and hyper-partisan, “following the patterns of political and ideological fissures” in the country.
Fact-checking also fails to address the broader structural conditions that allow disinformation campaigns to be produced and organized in the first place. Insofar as fact-checking initiatives sometimes rely on discourses that also fold in anti-poor sentiments blaming the less-educated, the low-income, and younger generations for being gullible audiences of disinformation (Media Commoner, 2023), we argue that such expressions also exacerbate the social divisiveness that populist leaders have stoked to their advantage and alienate the very audiences of these interventions.
Disinformation-for-hire investigations, therefore, discuss a broader set of tools for legal and ethical accountability and worker regulation. Campaign finance reform, political ad library monitoring, and advertiser pressure and call-out campaigns are some of the solutions advanced by researchers and civil society leaders following deep journalistic investigations of disinformation campaign operations. There are no easy answers and neatly transferable solutions from one context to another. For example, naming and shaming the key masterminds of campaigns might be seen as effective “cancellations” that curb their potential to seek new clients, but in some cases, notoriety can attract larger projects. “Left infighting” in the disinformation space is also all too real when new interventions to understand disinformation economies and their notorious players are accused of potentially “platforming” extreme perspectives.
Digital Labor and Disinformation in the Global Majority
The focus on disinformation-for-hire as everyday labor is even more important in a Global Majority context. In a scenario in which the main international bodies and policymakers advocate platform-centric regulation to resolve disinformation problems (following a Western perspective), it is necessary to locate and historicize how disinformation production occurs in different contexts. The everyday labor approach reminds us that there are contexts of production, business, markets, and arrangements that existed in political communication long before the emergence of digital technologies. In the Global Majority, the boundaries between formal/informal and the different gray zones of disinformation production are even more blurred. Thus, it is necessary to advocate for global perspectives in disinformation-for-hire research that localize contexts instead of obscuring them and reflect on shared learnings rather than depend solely on North-to-South policy flows.
For example, research in the Global North has cataloged media manipulators’ different techniques of “attention-hacking” (Marwick & Lewis, 2017) and “source hacking” (Donovan & Friedberg, 2019) to advance far-right and white supremacist ideology. Global Majority research meanwhile discusses disinformation production as less ideologically invested and more financially motivated, as aspiring middle-class workers seek added income in short-term, project-based arrangements for politicians (Lindquist, 2019; Pohjonen & Udupa, 2017) and platforms themselves—including Big Tech platforms hiring PR agencies to spread disinformation (Levy, 2022). These studies—when put into dialogue with research on labor precarity among IT workers in Australia (Gregg, 2011), digital influencers in Singapore (Abidin, 2016), and content moderators in the Philippines (Roberts, 2016)—shed light on socioeconomic structures, narratives of distress, and the slippery nature of labor practices that can lure vulnerable populations.
This human-centered perspective on the disinformation industry presents an opportunity to hear the rationalities, identity building, and moral justifications, or “deep stories” (Hochschild, 2016), of people who are paid to post manipulative or even hateful content on social media. This work involves operating across various interfaces of legal/illegal and formal/informal practices in media and creative industries. Focusing on the workforce also creates an opportunity for disinformation-for-hire to be seen as belonging not only to disinformation studies but also to digital labor studies. Digital labor studies have emphasized tech workers (Dorschel, 2022), creative workers (Duffy et al., 2021; Jarrett, 2022), and platform workers (van Doorn, 2017) in many sectors, without highlighting the role of disinformation—and its digital workers—in these activities or sectors. To date, these studies have underestimated the role of creative and media industries in supporting and encouraging disinformation-for-hire. In studies on workers behind automation, there is also little space to discuss how humans have played the role of “bots” and fake “artificial intelligence (AI)” for disinformation production.
In a context of “generative AI,” dominant narratives already announce the dangers of AI-driven electoral campaigns and, consequently, of AI-driven disinformation. The problem lies not just with AI, but with how industry and disinformation systems use AI for profit. Following scholarship on AI and labor (e.g., Le Ludec et al., 2023; Posada, 2022), it is necessary to pay attention to the workers who support so-called AI through training and data annotation. There are global geographic inequalities in relation to AI value chains (Ferrari, 2023; Jones, 2021), with the majority of workers located in the Global Majority and the majority of companies in the Global North. Thus, this is also a facet of disinformation-for-hire as everyday digital labor that argues in favor of global perspectives that capture inequalities and coloniality in disinformation production. This centrally involves the extraction of resources, bodies, and territories through data extractivism, as the rich literature on the topic has addressed (Crawford, 2021; Grohmann & Araújo, 2021; Ricaurte, 2022). This is also a call to address disinformation-for-hire in this subfield, especially how the everyday labor of data workers is entangled with global value chains of disinformation, not only in political campaigns, but also in entertainment sectors (Grohmann, Aquino, et al., 2022).
The everyday labor approach for disinformation-for-hire also means understanding sociological nuances in relation to the workforce behind disinformation production. Disinformation workers can themselves be precarious digital workers, although, in the Global South context, this often means not having a choice or choosing the less precarious option (Abilio, 2020; Graham & Anwar, 2019). Precarious labor conditions prompt reflection on the appropriate industry safety nets to prevent (mostly young, millennial or gen Z, aspiring middle-class) people from slipping into this kind of labor, especially in contexts where informal labor is a core feature of the world of work in the Global Majority.
Thus, the answer lies less in “regulating” companies linked to disinformation production—including platform-based companies—than in systematically and holistically addressing labor and technology policies, including the role of organized labor, unions, social movements, and governments. One of the ways to address disinformation-for-hire as everyday digital labor is to take a policy- and worker-oriented research approach, as “policies from below” (de Peuter et al., 2023). This means supporting worker justice initiatives around the world to empower workers to speak out against industries and organizations cloaking disinformation-for-hire. Therefore, disinformation-for-hire as everyday digital labor can shed light on aspects not yet sufficiently discussed in both disinformation studies and digital labor studies, as evidenced by the articles in this special issue.
Key Themes of the Special Issue
This special issue gathers empirical and theoretical articles dedicated to exploring different aspects of disinformation-for-hire as everyday digital labor. Showcasing diverse methodologies of interviews, participant observation, digital ethnography, discourse analysis, and critical literature reviews of the field, the different pieces explore a range of disinformation worker, whistleblower, and researcher contexts around the world.
In the first article, media anthropologist Sahana Udupa discusses emerging classes of political consultants and digital influencers involved in different degrees of “official” and “unofficial” campaign practices in service of politicians in Narendra Modi’s India. Udupa’s thoughtful discussion of the “unmooring of morality” that occurs when workers fixate on data analytics exemplifies the themes of the banality of evil in disinformation-for-hire. Her analysis of “shadow politics” gives us more precise language to think through the snake oil entrepreneurialism at the intersection of data analytics and political campaign strategy. In terms of solutions beyond the usual focus on fact-checking, Udupa suggests a whole menu: “[t]ransparency in election expenditure, regulations for campaign finance, professional code of conduct and co-regulatory models for commercial political consultancy, training and awareness raising among micro entities” (Udupa, this volume).
In the second article, communication scholars Marina Ayeb and Tiziano Bonini provide a breathtaking comparative deep-dive of troll farm work cultures in Tunisia, Egypt, and Iraq. Based on interviews with eight workers at different levels of the work hierarchy, they describe the “emotionally burdensome daily tasks, absence of legal job contracts, and highly surveilled work environments” in digital disinformation workplaces (Ayeb & Bonini, this volume). Adopting analytical frameworks from media production studies, the researchers demystify paid trolls not as “folk devils,” citing Stanley Cohen (1972), nor mercenary armies of the authoritarian state, but as precarious digital workers of their domestic creative media industries. Nevertheless, Ayeb and Bonini acknowledge that their informants’ accounts may be unreliable and unsympathetic at times, as evident in the ways they draw attention to the narrative contradictions of a “supervisor” of disinformation operations who had different strategies “of distancing himself from the troll farms’ world, but on the other hand . . . put us in contact with people working as disinformation operators in Egypt and Turkey.” Certainly, the narrative structure of this piece reflexively discusses the many methodological risks that researchers face when in close proximity with the masterminds and the foot soldiers of toxic campaigns.
In the third article, Johan Lindquist and Esther Weltevrede focus on the transnational market for buying and selling followers on social media, particularly Instagram. While popular writing about this market has relied on industrial metaphors of “click farms” and “follower factories,” the authors make a point of showing the front and back ends of the market and the licit/illicit boundaries it plays with. Even as platforms ramp up their own efforts at what they call “authenticity governance”—policies that authenticate real identities and delete undesirable behaviors—the authors use both ethnographic and experimental methods to trace how fake and “authentic-enough” followers are sold and resold across transnational labor chains (Lindquist & Weltevrede, this volume). Their article shows the painstaking work of “following the trail” left by their original key informants from Indonesia and Turkey to monitor “engagement services” that boost views and likes across multiple platforms while evading platforms’ detection and deplatforming capabilities.
The fourth article, by Frankie Mastrangelo and Gina Marie Longo, takes up the second theme of disinformation-for-hire research discussed above: the diversity of the industries and workers involved. Specifically, they analyze the essential oil market and the multilevel-marketing “pyramid” schemes of popular companies such as doTERRA and Young Living, along with their strategies of targeting women of faith with “promises of financial success, personal empowerment, community, and health and wellness without trading off their roles as traditional wives, stay-at-home mothers, and godly citizens” (Mastrangelo & Longo, this volume). Developing the concept of the “disinformation downline,” a gendered pipeline that radicalizes women essential oil distributors into QAnon and conspirituality belief systems, Mastrangelo and Longo contribute a powerful sociological analysis of radicalization that takes into account these companies’ “magical” marketing techniques, which obscure gendered, racialized, classed, and ableist structural inequalities.
In the fifth article, communication researchers Stephanie Baker and Michael James Walsh analyze the most popular antivaccine influencers at the height of the coronavirus disease 2019 (COVID-19) pandemic and the ways they used memes for commercial and political gain. The authors explore the specific textualities of antivaccine memes and the influencers’ insidious techniques of playing up persecution and victimhood, alongside disinformation producers’ strategic evasion of platform governance and their recruitment of followers to personal newsletters and websites. Baker and Walsh powerfully show how antivaccine meme production can be traced to a small yet savvy group of influencers who secure promotional deals from pharmaceutical companies and find ways to personally monetize engagement through readily available techno-commercial infrastructures on the Internet.
The sixth article, by communication scholars Charlotte Knorr, Margitta Wolter, and Christian Pentzold, invites critical reflection on the testimonies of whistleblowers exposing the malpractice of data analytics services they had once conducted on behalf of political clients and/or the surveillance state apparatus. The authors critique mainstream journalistic coverage of prominent whistleblowers, which tends to mold people into hero-villain caricatures without critically examining “how whistleblowers talk about themselves and try to seize control of their public image” (Knorr et al., this volume).
Data and consultants feature centrally in the article by Knorr and colleagues, refracted through whistleblowers’ memories. The authors argue that whistleblowers occupy a position of “privileged precariousness,” telling stories of enchantment, delusion, and awakening. In their narratives and memoirs, whistleblowers position themselves as “honest moral arbiters in service of the public interest.” The authors’ unique contribution to disinformation-for-hire research is their grounded analysis of whistleblowers’ ambiguous work arrangements and personal ambitions, and of the ways they (fail to) engage with questions of culpability and corruptibility, as well as their accountability and agency with regard to malpractice in digital industries.
The seventh and final essay is communication scholar Jayson Harsin’s critical review of mainstream disinformation studies and the ways a “for-hire” perspective can open new analytical frames for the field. For him, disinformation researchers suffer from a “disciplinary unneighborliness,” failing to read across disciplines and the relevant research areas of advertising, PR, propaganda studies, posttruth studies, and strategic communication research. Harsin’s frustration is evident in his examples of the vague and contradictory definitions and parameters around the term “disinformation”: “Is a Calvin Klein ad with an airbrushed/photoshopped model an example of disinformation . . .? Can a fictional TV show, a film, comic strip, or a novel be disinformation [or] misinformation? . . . Broader yet: does the fact that algorithms necessarily filter and shape information, visibilizing some, invisibilizing others, point to technologies that are structurally disinformational?” (Harsin, this volume). Finally, Harsin offers a scathing critique of a “cryptonormativity” in disinformation studies that hearkens back to a functionalist and falsely nostalgic analytical paradigm, one that imagines there was once a time with fewer “threats” to our information environments. Harsin’s essay mirrors our own earlier remarks on the ways shock effects are mobilized by clickbaity explanatory frames that pin problems of disinformation on exceptional “bad actors” throwing wrenches into the gears of liberal democracy.
These articles, exploring different dimensions of disinformation-for-hire, challenge the foundational methodologies and easy explanatory crutches of mainstream disinformation research and journalistic reporting. Together, the critical, comparative, and finely grained analyses in this special issue invite diverse and nontraditional ways of telling stories about the villains, victims, and heroes of the disinformation context.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research is supported by the Carnegie Corporation of New York and Luminate Group.
