Abstract
This article presents three critiques of disinformation scholarship, with an emphasis on “for-hire.” The article argues that disinformation is defined in unpromising and contradictory ways. Concepts have ontological and epistemological repercussions, and thus far, disinformation scholarship has failed to engage them. Partly because scholars are studying disinformation even when they do not use that word to label their work, the article argues that explicit disinformation scholarship tends to neglect neighboring fields and scholars—the second critique. By most definitions of the term disinformation, neighbors are researching the same object domain, which could provide rich resources for scholars newly attracted to “disinformation”: propaganda, public relations, promotional culture, political consulting/marketing, and post-truth studies. It discusses the neighbors’ deep historical and contemporary research on for-hire deceptive communication, including that pertaining to social media. The third critique argues that disinformation scholarship has a cryptonormative tendency, evident in language of disorder, threats, dysfunctions, and pollution; it therefore needs more overt normative justification (or defense of anti-normativity). The cryptonormativity also entails a tendency toward ethnocentrism. The article ends by questioning whether disinformation is conceptually suitable for the theoretical work with which it tasks itself.
Introduction
“Policymakers increasingly assess disinformation to be an existential challenge to democratic governance” and social media its main cause, leading to calls for increased platform filtering of “untruthful” content (Del Campo, 2021). On the other hand, others argue “the origins of public misinformedness and polarization are more likely to lie in the content of ordinary news or the avoidance of news altogether as they are in overt fakery” (Allen et al., 2020, p. 1). Indeed, “Several authors argue that the phenomenon of online disinformation has gained more influence through social media,” but “empirical evidence regarding the rise of online disinformation and its effects on society is inconclusive, and little is known about the situation outside the United States” (Humprecht et al., 2020, p. 494).
These different assessments of deceptive communication, inaccurate information, and its object domains (social media? TV news? Email?) haunt contemporary scholarship whose core concept is disinformation, including work on hired “disinformers.” Rushing to examine and evaluate producers (for-hire or pro bono) without a rigorous development of the core concept, disinformation, has ontological (what is the reality it describes or co-constructs?) and then epistemological consequences (what we think we know about that reality so-perceived, so-constructed). We need to figure out what the object domain of disinformation scholarship (DS) is or can be and what one means by disinformation before we can more rigorously explore the work of hired disinformation producers and production. 1 We might even dare ask: Is disinformation the key concept that can do the work we are asking of it (explain disbelief, belief, ignorance, violence, distrust, populism, insurrection, and the demise of liberal democracy, to name a few of the many things with which scholars task it)? I have previously employed the term but am now hesitant on this question, especially until it is more conceptually developed. At the very least, disinformation needs significant conceptual refinement.
While all the research I will discuss here is unmistakably valuable for understanding aspects of a complex set of relations, I aim to show how a critique of it can advance our collective knowledge production and further debate. Concerned especially with hired production, I aim to understand what disinformation is, as scholars construct it, and to critically examine the conceptual limitations of its actual use. Considering conceptual taken-for-grantedness may assist more precise conceptual application. It may be that more rigorous application would mean more sparing application or, in some situations, opting for other terms. In turn, the knowledge the field aspires to produce may be more reliable, and its normative political implications more forthrightly and systematically stated and theorized. In the spirit of advancing theory and analysis of disinformation-for-hire, I offer three critiques of contemporary DS. 2
The first critique is conceptual-definitional. In a conceptual vortex, DS’ definitions proliferate and are usually simply asserted rather than justified against alternatives, and almost never weighed against competing or overarching concepts that might absorb disinformation. The core emphasis on “information” is also taken for granted. Importantly, concepts create imaginations of reality and influence how we shape and react to it.
My second critique opens from the first: definitions of disinformation point to a vast terrain of past and present implicit DS: DS in all but name. I argue that five sub-fields deserve DS attention. These sub-fields’ object domains are ridden with professional (for-hire) deception: propaganda, PR, promotional culture, political consulting, and post-truth. I call DS’ disregard for the scholarship that precedes it, continues alongside it, and shares its objects of study “disciplinary unneighborliness.”
My third critique involves widespread cryptonormativity (Kim, 2021) in DS and has two parts. First, I argue explicit DS assumes that its artifacts and producers are morally deficient and/or dangerous (e.g., a “threat to democracy”). They may or may not be thus, but the onus is on such scholars to bring to the fore and develop the underlying normative assumptions about the communication they are reducing to good and bad “information” and (good or) “bad actors.” More rigorous treatments of disinformation need to do more than provide assertions of “best practices” and short-hand techniques for identifying “problematic information,” for “media literacy.” This question of normativity leads to a closely related ethnocentric and postcolonial critique, where DS would de-center its analytical gaze from the West, especially the United States, and shift it to the global South, on the one hand (because normatively, the concept assumes global application). And, on the other hand, instead of focusing almost exclusively on Russia, China, and Iran, DS of foreign influence should also turn back on Western liberal democracies such as the United States, which have been systematically attempting to “destabilize” states of interest through campaigns of deception fairly consistently for over a century into the present.
Critique I: Definitional Vortexes
Conceptual definitions have important methodological and epistemological implications (Gerring, 2011, p. 113; Lakoff & Johnson, 1980). 3 “Explicit disinformation” scholars (DS that explicitly names its object “disinformation”) offer potentially expansive definitions, yet their scope of objects is almost exclusively constrained to social media and thus presentist. At the same time, taken at its definitions, explicit DS has consistently ignored a large body of contemporary and historical scholarship on the subject, including a good deal on disinformation’s producers—the most likely reason being that its literature reviews look only for scholarship that tags itself with the term “disinformation.” I call “implicit disinformation scholarship” that which researches the same object domain indicated by definitions of disinformation but does not call its object “disinformation.” A November 2022 search of Ebsco’s Communication and Mass Media Complete database for “disinformation” in abstract OR keyword yields 469 peer-reviewed articles since 1975. Expanding the search interdisciplinarily to the Academic Search Complete database yields 2,700+ articles. What would DS become if one searched for the close cousin “propaganda” and included its results in the tally? 4 Given the object domain thousands of scholars demonstrably share, what does it mean to speak of “disinformation studies”? If definitions do not limit the activity or artifacts called “disinformation” to the realm of traditional politics or social movements, not to mention social media, then it seems they would necessarily open the door to a range of long-standing bodies of scholarship (discussed in the following sections of this article) that may lay claim to DS or suggest it should be resituated within those sub-fields.
In a recent systematic literature review of DS, Kapantai and colleagues (2021) stress the challenges of identifying, evaluating, and presenting a body of scholarship that uses “different terms for describing ultimately the same types or criteria” (p. 1306). They conclude, “[H]aving a global perspective around the problem could contribute to avoid profound effects in real-life domains” (p. 1306). Precision would not only potentially help “avoid profound effects in real-life domains,” but it would also help avoid potentially profound effects on “real-life” academic domains.
Across common dictionary and influential DS definitions (by citations, “high-impact” journals), disinformation becomes something that could be produced anytime, anywhere, by any means, through any media, in any field or sector of society: oral politics in a Greek amphitheater in 200 BCE, Voice of America or Radio Moscow radio segments in 1970s Africa (in several languages), TikTok self-promotion, or YouTube videos by an evangelical preacher or imam. Whether DS wants to constrain (or conceivably can—who has the authority?) these definitions and object domains in time, space, medium, social field, and corresponding academic field is debatable. Either way, implicit and explicit definitions can create very different knowledge about what is often, otherwise, the same field of occurrences. At other times, the choice of definition shifts attention away from what is an important referent for other scholars: from the dynamics of promotional capitalism to historical episodes, artifacts, and media—social media platforms, live events, books, websites, and content without interactive social capabilities, as with many fake news artifacts. Finally, some DS definitions impose a “harm” criterion, and others focus on lies/lying—both of which require a high burden of proof. 5
Critique II: Disciplinary Unneighborliness, 5Ps
Contemporary explicit DS deprives itself of rich historical and contemporary company that takes the same set of phenomena as its object of study. That company could help situate contemporary objects in a longer view and perhaps also help with theory building. Explicit DS may benefit from broadening its bibliographies and perspectives. Specifically, five sub-fields represent an immense resource of implicit but overlooked DS. A more active engagement with contemporary and preceding neighbors’ scholarship would substantively ground claims that DS is an interdisciplinary project (e.g., DiResta, 2021).
Propaganda
It makes sense to begin by considering propaganda and its research with regard to disinformation; propaganda research is the earliest to analyze the common object domain. Social psychologist Nicholas Di Fonzo (2010) writes that “[p]ropaganda is goal-oriented communication that employs (to one degree or another) deceptive tactics” (p. 1). To deceive is to mislead someone or hide the truth from them, to cause them to believe something false (Merriam-Webster, Cambridge, Collins dictionaries). These predicates all share language and meaning with disinformation. Also noteworthy, Di Fonzo describes propaganda as “communication,” not simply a kind of corrupted or false “information.” What is gained or lost in nesting disinformation in communication? Disinformation likely evaporates without its core—information. Would shedding “infocentrism” be an improvement?
Or consider Jowett and O’Donnell’s (2018) influential definition: “[p]ropaganda is the deliberate, systematic attempt to shape perceptions, manipulate cognitions, and direct behavior to achieve a response that furthers the desired intent of the propagandist” (p. 6). Their treatments of propaganda include advertising and PR campaigns. The emphasis on systematic application, as well as the examples they give, points not just to disinformation but to disinformation-for-hire. So close are some understandings of the two terms—disinformation and propaganda—that the Oxford Dictionary of Media and Communication even defines disinformation as “[a] form of propaganda involving the dissemination of false information with the deliberate intent to deceive or mislead” (Chandler & Munday, 2020). 6 Most leading academic and dictionary definitions of propaganda could be integrated with or even exchanged for most definitions of disinformation considered previously, perhaps with the exception of those stipulating the harm factor—and even there, definitions do not exclude intent to harm.
In his 1997 book Propaganda and Democracy, Michael Sproule mapped the changing terminologies applied to a similar object in academic work: persuasion, communication, information, deception, misinformation, rhetoric, and propaganda. More recently, the Oxford-Reuters’ project on “computational propaganda,” 2012–2021, has “been investigating the use of algorithms, automation and computational propaganda,” whereby “political bots are manipulating public opinion over major social networking applications.” They further describe their work as “engaged actively with European and North American policymakers to discuss the global challenge of digital disinformation” (Oxford Internet Institute, n.d.). However, it is not clear why they call it propaganda and not disinformation, or why they do not simply say, in the previous sentence, “digital propaganda.” 7 Nonetheless, puzzlingly, DS has ignored this subfield’s resources, given the overlaps in terminology and object domain (for an exception, see Anderson, 2021). We should note also, parenthetically, that modern fact-checking-for-hire, beyond the lone journalist and editor, developed out of wartime rumor “clinics,” such as Knapp’s (1944) famous study of rumor and propaganda that generated concepts still applied today (see also Harsin, 2006), and that propaganda has been twice euphemized, now as “public diplomacy.” There are optimistic signs that (explicit) DS is beginning to take “propaganda” as conceptually useful (e.g., Tripodi’s noteworthy The Propagandists’ Playbook, 2022), though one hopes DS will build on the enormous body of propaganda research (dating to the 1920s) when it invokes propaganda.
PR
What is the relationship of disinformation to PR? Isn’t PR, too, deception-for-hire? PR is frequently invoked and studied as part of contemporary and past disinformation processes, but is the invocation redundant? Since the beginning of the 20th century, PR has sold itself as an expert disinformer-for-hire, first in war propaganda (domestic and foreign) and then in peacetime, trailblazed by PR godfather Edward Bernays’ pseudo-events: events strategically organized to appear spontaneous or “natural” (Boorstin, 1961; Ewen, 1996). Depending on how one defines disinformation, as, say, intentionally deceptive information, it could conceivably include most PR and advertising. Since “disinformation-for-hire” is necessarily implicated in truth-telling, Lee Edwards (2021) describes it as PR:

Trust in truth-tellers becomes more fragile, and space opens up for claims not only to truth but also to truth telling, in what Harsin (2015) has described as “truth markets” (p. 328). In such circumstances, professions that claim communications expertise, such as public relations, are well-placed to compete for territory and legitimacy as purveyors of truth. However, for public relations at least, competing effectively requires both the profession and its audiences to overlook its historical and current practices of disinformation. (p. 4)
If we accept this entwinement of disinformation and PR, disinformation cannot be reduced to electoral politics, nor to social media.
Media manipulation is a term that is not usually distinguished from disinformation and misinformation but appears as if interchangeable with them (e.g., Lewis & Marwick, 2017). Mention of “media manipulation,” from a neighborly perspective, recalls Edward Bernays’ (1928) paean to propaganda, in which he matter-of-factly wrote:

The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country. (p. 9)
This means of “manipulation” he originally called “propaganda,” then PR (the former term having acquired unfavorable associations with attempted systematic brainwashing; Ewen, 1996). Bernays’ assertion points to the need for a more historical DS, one that seeks to understand what remains of an instrumentally deceptive kind of communication practice and form as it passed across media, from print to radio, film, and television, and into social media (and their “intermediality”), where it seems to thrive as never before. Indeed, it points to the need to ask historically why it appeared at all. Work such as Thompson’s (2020) on PR practitioners’ contradictory role in disinformation production/reduction is exceptionally neighborly (Ong & Cabanes, 2019).
Promotional Culture
Closely related to PR, “promotional culture” studies also knock on the explicit DS door. Are some structured social or economic relations based on ongoing, systematic disinformation? What is the relationship of disinformation to ideology? For example, Hearn (2011) writes provocatively:

[P]romotion works to perpetually persuade. A culture marked by the ubiquity of promotional discourse is . . . signaled by a lack of trust in language. Here, what matters most is not “meaning” per se or “truth” or “reason,” but “winning”—attention, emotional allegiance, and market share.
Thoroughly instrumentalist in their dogged social commodification, promotional processes “function . . . to bring about some form of ‘self-advantaging exchange’”—which would mean they are, by most definitions, disinformation?
Indeed, deciding (who does?) whether disinformation processes are restricted to the traditionally political (electoral politics, public deliberation, activism, and social movements) will open and close research areas. Does it make sense to speak of political DS and corporate DS, just as there is corporate propaganda, political propaganda, war propaganda? If disinformation is characterized by “[m]anipulated content [,] when genuine information or imagery is manipulated to deceive” (Wardle, 2018, p. 954), are there limits to deception’s referents?
The object domain, the process-product of disinformation, potentially opens widely, again. Is a Calvin Klein ad with an airbrushed/photoshopped model an example of disinformation (altering a person’s actual body for promotion, long a common feature of advertising)? Can a fictional TV show, a film, a comic strip, or a novel be disinformation? Can they be misinformation? What is not “information” and therefore potentially disinformation or misinformation (based on intent)? Focusing on their core—“false information which is intended to mislead” (Oxford University Press, n.d.)—don’t they pertain to business, military, government, and PR producers as well as audience-consumers? Broader yet: does the fact that algorithms necessarily filter and shape information, visibilizing some, invisibilizing others, point to technologies that are structurally disinformational (especially since they black-box their algorithms as “trade secrets”)? They are clearly implicated in a great many disinformation cases, though some scholars may not think of them as such. For example, Noble’s (2018) Algorithms of Oppression focuses explicitly on racist representations of people of color through search results (similarly, Nikunen, 2021). Taking the logic one step further: is social media itself fundamentally mis- and disinformational at its algorithmic base (filtering, selecting, shaping realities, or experiences)? If promotional capitalism (not simply the technology or even the human producers) embeds and shapes what is being called “disinformation,” DS might follow Pickard’s (2020, 2021) courageous lead and openly engage with capitalism itself (Harsin, 2015, 2018).
Political Consulting, Marketing, Strategic Communication
While most definitions of disinformation do not limit it to political “information,” the most visible scholarship in practice places it within politics, and democracy in particular; given its culture of influence-for-hire, this area merits extended attention here. Explicit DS has largely ignored a longer body of scholarship on legal, resource-rich paid political deception: political consulting (globally). Though Ong and Cabanes (2019) interview hired producers of “disinformation” in the Philippines, the hugely visible overviews and policy-oriented explanations of disinformation and misinformation (Kapantai et al., 2021; Lewis & Marwick, 2017; Tucker et al., 2018; Wardle & Derakhshan, 2017) fail to mention political consultants or strategists, political PR or “public diplomacy,” not to mention the division of labor in that economy, now heavily active in social media and highly publicized after Cambridge Analytica. So intimate (and obvious) is the relationship between political consultants and disinformation that the New York Times’ Max Fisher (2021) investigated the wedding of disinformation, political consulting, and social media in “Disinformation for Hire, a Shadow Industry, Is Quietly Booming.” I focus on a few relatively recent key examples of this research, which would certainly count as disinformation-for-hire by most definitions of disinformation (and hiring).
Michael Serazio’s interview-based research on political consultants is particularly pertinent to disinformation-for-hire. It establishes consultants and their work squarely within most definitions of disinformation (especially those emphasizing intent to mislead). He writes, “Through the speeches, advertisements, and PR materials they provide, consultants furnish much of the information context for that democratic decision-making and their inclination to impoverish ‘facts and details’ within those texts diminishes the political economy of civic communication” (Serazio, 2017, p. 15). Indeed, “these consultants are not even interested in formal, conscious deliberation from the audiences addressed if feelings can be conjured first to short-circuit rational calculations.” Serazio concludes that political consultants’ deceptive practices contribute to the intense distrust and cynicism that surveys say is rampant in contemporary democracies (i.e., they bear responsibility for “post-truth”).
When consultants speak of the need to co-opt “real people” in their ads, he writes, it bespeaks the “unreality” of the entire political spectacle and campaign branding project—one in which . . . shots of a candidate in a certain place or with certain people . . . as often betray truth as reveal it. (Serazio, 2017, p. 15)
Serazio even frames the work in terms of “information,” disinformation’s core: “Consultants may well feel that campaigns don’t need to be accountable to ‘independent facts’ if voters adrift in a fragmented information environment won’t necessarily hold them accountable.” Uncannily prescient of contemporary post-truth scholarship and DS, Serazio (2014) and his interviewed consultant remark, respectively:

For that reason, one direct mail and opposition research head fears that this is a “slippery slope” that could culminate “50 years from now, [where] politics could be this kind of cartoonish reality, where facts don’t matter.” The evidence here suggests that strategists are, in fact, designing toward an environment like that which could arrive much sooner. (p. 757)
While Serazio’s work is more recent (and neglected in DS), histories of political consulting compellingly demonstrate a long 20th century of disinformation-for-hire. Of the rise of the early 20th-century publicity agent crossing over from commercial to political sectors in the United States, before going global (Johnson, 2017, Ch. 18), Sheingate (2016) writes:

By invoking the ideal of publicity, figures as diverse as progressive reformer Gifford Pinchot, PR founder Ivy Lee, and Presidents Theodore Roosevelt and Woodrow Wilson could defend their methods as a modern way to inform the public. Yet, the success of a publicity campaign hinged at least in part on the public’s inability to distinguish between the objective presentation of facts and the subjective manipulation of information to appear fact-like.
His observation resonates for contemporary DS: “This ambiguity sparked important innovations in the tools of persuasion, laying the foundation for the modern consulting industry, but it also contributed to an ambivalence many Americans express toward politics that is still evident today” (p. 14). As Sheingate notes, “political consulting itself grew out of the field of PR, and we can trace polling directly to scientific advances in market research made during the 1930s and 1940s.” Thus, he concludes, “the effort to understand, measure, and manipulate political behavior grew out of similar efforts and technologies designed and applied first in the commercial realm” (p. 195).
Likewise, in his history Democracy for Hire, Dennis Johnson (2017) describes the work of political consultants like “tricky” Richard Nixon’s first campaign manager, which was “characterized by a vicious, snarling approach that was full of half-truths, full lies, and innuendoes, hurled at such a pace that Voorhis [Nixon’s opponent] could never catch up with them” (p. 37). We are used to hearing that democracy has never consisted of honest discourse free of factual errors or misleading language; less frequently do we hear that such discourse has long been systematic and for-hire (at least in the 20th-century United States and, postcolonially, in most of the world now).
Next to Serazio’s contemporary work, the portrait that these two histories of US political consulting reveal is the normalization of systematic political deception for-hire by the “resource rich.” This development is certainly not limited to the United States (though it gets export credit). Political consulting is practiced on a global scale, and US firms are hired to travel the world and, in postcolonial style, intervene as needed, as several books and even a documentary film have examined (Boynton, 2006; Harding, 2008; Lees-Marshment, Strömbäck, & Rudd, 2010; and in explicit DS, Ong & Cabanes, 2019).
Yet, it is not just that DS has largely ignored a significant body of scholarship about systematic industrialized deception, its history, commodification, and globalization. Just as important is the fact that such scholarship implicitly problematizes DS’ “infocentrism” at the expense of formal organization, narrative, and context. Thus, Johnson observes in his history of political consulting:

Invariably, the information gathered about a candidate is true—it makes little sense to make up facts and figures, and if caught doing so, the researcher and his candidate will be the first ones to suffer. The real sin is how facts, often unrelated, are put together to create a false narrative or accusation. (p. 143)
What is misleading is not the data-as-information but the strategic presentation of it, something beyond information as true or false statements.
Post-Truth
To these areas of research above that refer to a similar object domain, one would have to add the contemporary mushrooming work around the more historically periodizing notion of so-called “post-truth” politics. Demonstrably, it overlaps with the four sub-fields just discussed. Much of this work has explicitly identified “disinformation” as one of its major objects of reflection and critique. For example, Cosentino’s (2020) Social Media and the Post-Truth World Order: The Global Dynamics of Disinformation makes the correspondence clear. Farkas and Schou’s (2019) Post-truth, Fake News and Democracy also explicitly identifies the same object domain as DS. The encyclopedic overview “Post-truth and Critical Communication Studies” in Oxford Research Encyclopedia of Communication (Harsin, 2018) classifies types of post-truth communication such as conspiracy theories, rumor bombs, and bullshit as common sub-types of disinformation, making explicit links to implicit DS. 8 These works’ references to scholarship classified as disinformation by definition show how research around post-truth constitutes a growing body of implicit DS. Whether DS is uneasy with the term “post-truth” is somewhat beside the point, which is that its analyses share the same object domain and, in some cases, the term “disinformation.”
How may we sum up these taxonomical blurrings, which may have implications for the constitution of the object of knowledge production (of disinformation production processes)? If one leaves disinformation a broad definition, without constraints of particular intent (e.g., to harm) or of primary and secondary intent (intent to deceive in order to achieve another goal: economic, political, or psychological profit), the field will certainly be historical, not just presentist; not limited to social media but spanning multiple media, including print; about corporate campaigns to distort communication about public issues; about online harassment and trolling as much as political campaigns; about war and its continuation by other means—for starters. By explicit DS definitions, all these neighbors study disinformation.
Critique III: Cryptonormativity
DS’ Cryptonormative Discourse
To take a normative position: DS must be more nuanced about grades of deception and, in particular, power asymmetries among producers, audiences, and prosumers. The latter is especially important because nearly all definitions of disinformation stress the producer’s intention. The producer, along with the people and things that facilitate circulation at different levels, including technologies, their owners, and influencers, is structurally inculpated. But the implicit normative assertions therein are more complicated and need elaboration. So as not to commit the errors I am criticizing, what do I mean by “normative”?
Normative claims are judgmental, value-laden, and prescriptive rather than descriptive. They express how someone thinks things should be, not the way they are (Althaus, 2012). As Althaus (2012) notes, “normative concerns” commonly appear in research, but often as “assertions rather than assessments” (p. 97). When researchers make these “evaluative claims,” they “fail to mention the evaluative standards used to reach conclusions” (Althaus, 2012). This kind of evaluation without a supporting normative framework is what other scholars call “cryptonormativity” (Hesmondhalgh & Toynbee, 2008; Kim, 2021). It is rife in DS.
An oft-repeated instance of DS cryptonormativity is the title of Wardle and Derakhshan’s (2017) “Information Disorder,” playfully reversed by Bennett and Livingston as “disinformation order.” One also finds “information pollution,” “dysfunctional sharing” of disinformation, or simply “problematic information” (Chadwick et al., 2018; Freelon & Wells, 2020, respectively). The language is strangely reminiscent, though almost certainly unintentionally, of Parsonian and Mertonian mid-century functionalism and, occasionally, of cybernetics’ “systems” and “equilibriums.” As Lucas (2007) explains, for functionalists,

social systems are composed of interconnected parts; the parts of a system can be understood in terms of how each contributes to meeting the needs of the whole; and social systems tend to remain in equilibrium, with change in one part of the system leading to (generally adverse) changes in other parts of the system. (p. 4861)
The “consequence of viewing society as a system of interconnected parts,” Lucas continues, “is that any changes are seen as having the consequence of disrupting the entire system,” which some functionalists view “as a major threat.” Thus, the approach has been criticized as “being conservative in nature” (p. 4861).
DS frequently frames disinformation (dis-) order as “threatening”: for example, a “threat to democracy” (Bennett & Livingston, 2018, p. 135) or “a global threat” (Wardle & Derakhshan, 2017). Bennett and Livingston (2018) write: “As these radical right movements reject the core institutions of press and politics, along with the authorities who speak through them, there is a growing demand for alternative information and leadership that explains how things got so out of order” (p. 128).
I do not disagree that radical right movements are dangerous by any number of standards (e.g., terrorizing minority populations); I disagree with the implication that things were in good order at some previous time. Perhaps they were better, in some ways, for some people, but that needs spelling out and debating. Only then, as in a “SWOT analysis,” can one rigorously engage in threat assessment.
Without more nuanced presentation, this language appears nostalgic (and for more radically left media and communication scholars, even reactionary, as Bratich (2020) has recently goaded). The extended premise that a “system” or “order” exists and is or was good or bad necessitates more rigorous support that makes clear what was salutary in previous media forms, ownerships, and practices (and where?) that somehow got thrown out of whack. What specifically (not just “in the past there were problems, too . . .”) has been lost, for whom, why, when, and where—the United States? Liberal democracies (they have different media policies)? Everything, everywhere? Even when an attempt is made to apply a normative theoretical framework to disinformation practices as Tenove (2020) does, it is not clear why one is speaking of “disorder.” Disorder is conceptually meaningful in explicit theories that articulate ideal order; otherwise, it is question-begging.
In much cryptonormative DS where democracy appears in a functionalist lens, the crushing historical reality of resource-rich anti-democrats is overlooked in favor of moralizing about contemporary social media age “bad actors” who throw wrenches in the gears of liberal democracy. Recently, Anderson (2021), in passing, has rightly questioned the cryptonormativity of some DS with regard to metaphors of “health,” suggesting “misinformation” (and presumably “disinformation”) scholarship needs to engage with political theory. However, in DS, the history and practices of liberal democracy itself are rarely ever questioned as part of the problem/causal condition. Not only is its cryptonormativity liberal democratic, but it is also Western.
The fact that disinformation/misinformation scholarship is so far quite Western liberal democratic, especially US-focused, 9 may also explain its cryptonormativity. Beneath the metaphors of disorder and dysfunction is an assumption that deception and error are or should be oddities of political communication and are fundamentally bad. My sense is that they are not oddities, will likely never be eliminated from political communication, and are not even universally “bad.” Elaborating on what situations might be “bad” and why is precisely the kind of decrypting DS needs. In other words, contingent value judgments regarding “misleading” cannot be assumed but must be argued with regard to their normative framework, and, more critically, with an eye toward their historically and culturally specific entanglement in power relations. The point requires elaboration. 10
Disinformation Production and Power Asymmetries
So far I have argued that while some scholars cryptonormatively characterize disinformation as a threat to (liberal representative) democracy without critically analyzing a political communication culture where resource-rich parties, politicians, and corporations (think of tobacco and fossil fuel “disinformation”) have hired “professional” disinformers, other DS practices a cryptonormativity that takes democracy and its “orders” for granted. These oversights become even more problematic in a geopolitical, postcolonial context.
Despite DS’ cryptonormative treatments and emphases on US conservatives’ strategies, it would seem disinformation, in its assorted definitions, has no singular ideological stripe and may appear on any side of asymmetrical power relations (even as a tactic for “evening the playing field”). Indeed, in other terms, it has been called a “weapon of the weak,” echoing the title of James C. Scott’s Weapons of the Weak, an account of everyday forms of “disinformation” in a more authoritarian, non-Western political context. “[O]f course, acts of theft, sabotage, and vandalism have no authors at all. Thus, while there is a fair amount of resistance in Sedaka [Malaysia], there are virtually no publicly announced resisters or troublemakers,” he writes. “Even the more purely symbolic resistance—malicious gossip, character assassination, nicknames, rumors—we have examined follows the same pattern.” Thus, regarding power relations in Sedaka, gossip “is never ‘disinterested’; it is a partisan effort (by class, faction, family) to advance its claims and interests against those of others” (Scott, 1987, p. 282). We might also recall the Franco-Cameroonian philosopher Achille Mbembe’s (1996) normative complication of rumor bombs as threats to information and other “orders.” “Rumors are the poor person’s bomb,” he writes with regard to Cameroonian postcolonial politics and life under Paul Biya’s 1980s–1990s authoritarian regime ([my translation], p. 158).
Of the more recent use of disinformation as a “weapon of the weak” in authoritarian regimes such as China, Liu (2020) writes that “[r]umor and rumoring—i.e., the communicative practice of sharing rumors—on digital media platforms, such as online forums, the instant-messaging service QQ, Weibo, and the mobile messaging app WeChat” are both a major problem or public “issue” for the Chinese government and also a “weapon of the weak.”
Certainly, these Chinese “bad actors” (aside from the government’s own for-hire “fact-checkers”) avail themselves of rumor bombs as weapons of the weak. Through in-depth interviews, Liu shows “the understudied contentious nature of rumor communication, not least in a highly repressive context.” He concludes that the contentiousness “is evident in mobile rumoring as ‘covert resistance’ . . . against omnipresent censorship and social control” (p. 113).
Explicit and implicit DS is also connected to cryptonormative fact-checking solutions (i.e., fact-checkers are “good,” those they check usually “bad”). Yet, in a reversal of cryptonormative good/bad actors, some implicit DS has explored “fact-checking” itself as a form of disinformation through (accurate) information suppression. For example, pre-Trump DS scholars have referred to the Soviet Union’s own historic, ideologically topsy-turvy institutionalization of fact-checking as an attempt to control “disinformation from below” (Gustafson, 2012, p. 212). The same normative questions arise today around the “fact-checking” initiatives of China’s “50 cent army,” which trolls Chinese digital media to suppress “disinformation” and maintain nationalist “positivity” (Singer & Brooking, 2018, p. 184).
Global South scholars have also emphasized power in traditionally managed disinformation production and circulation around questions of extreme and hate speech. Working to advance DS’ meeting with work on “extreme speech and specific global variations,” Udupa and Pohjonen (2019) write: “[E]xtreme speech foregrounds the radical situatedness of online speech acts in different cultural, social, and political milieus globally. It takes seriously the cultural variation of speech acts, the normative orders bundled around them, and the historical conditions that underpin them. This implies that there is no easy-to-define boundary between speech that is acceptable and speech that is not” (p. 3051).
Similarly, George, prior to the recent DS explosion, wrote about hate speech as misinformation, a term more common than disinformation prior to 2016, and examined “production and dissemination of intentional false information” in Indonesia, India, and the United States (George, 2016). 11 The work on hate speech as disinformation has already received tremendous normative scrutiny, and of course remains hotly debated around questions of tipping points and lines that could distinguish it from other offensive speech (Álvarez-Benjumea & Winter, 2018; Kaye, 2022; Udupa & Pohjonen, 2019). Aside from the exceptions just mentioned, attention to normativity in hate speech and DS has not yet spilled into the broader explicit DS. These examples demonstrate the stark politics of disinformation’s conceptual adjudication and also the risks of ethnocentric scholarly tendencies (projecting contemporary US liberal democratic communication onto the world and its history). 12 Indeed, given these possible positions in the field of disinformation production, we might draw on de Certeau’s (1998) distinctions between strategy and tactics to speak of tactical and strategic disinformation uses (pp. 35–38): the former belonging to resource-poor and/or structurally oppressed deception, the latter referring to resource-rich and/or oppressive positions in the asymmetrical relation of power.
Ethnocentric, Postcolonial, and Democratic Realist Analyses
Bound to questions of normativity and disinformation production in liberal democracies and more autocratic regimes is the question of geographical focus (the definitions impose no constraint, but in practice it is a different story). Is the statement controversial: the most widely cited research around family resemblance concepts of disinformation is in English and about social media practices, cultural productions, and Global North domestic political speech (i.e., across the globe, not just within one country whose language is not widely spoken outside it)? The work previously cited is exceptional. Might DS also engage more vigorously not just with disinformation in the Global South but with disinformation and genocide in particular, lamentably ancient and ongoing: recently against Rwandans, Armenians, Sudanese, Darfuris, Bosnians, East Timorese, and Rohingyas, to name just a few? 13 Rightly, the Holocaust has attracted the attention of many interdisciplinary scholars, but the emergent field (if it is one) of disinformation, if it is to decolonize, will need to shift some liberal reformist and Global North research funding/attention to the Global South, and its scope of study to genocide and other obvious episodes involving dis- and misinformation in its many more specific forms, including rumor bombs, conspiracy theories, gray propaganda, and outright (provable) lies, provided DS adheres to current definitions.
Yet, part of that decentering of Western DS will also have to consider the relationship of Western deceivers-for-hire as they work beyond the West. A closer look at the professionalization of political deception will show a kind of postcolonial global flow of well-resourced savoir faire, demonstrated, for example, in Rachel Boynton’s (2006) still pertinent documentary, Our Brand Is Crisis. One can speak of “Americanization” not just with regard to tabloid style or management and content of news and programming (a subject of ongoing debate), but also in professional political deception or perception management—the latter being an important nuance to understand; we are not dealing exclusively or even mostly with communication reducible to true/false statements (Harsin, 2018).
Another critical about-face for uneven DS would be to re-focus on Western liberal democratic governments’ ongoing systematic disinformation production (on populations foreign and domestic). It appears that part of the contemporary renaissance of DS continues to use disinformation in the sense of “foreign influence campaigns” or propaganda (e.g., Elswah & Howard, 2020), while other parts expand its domain to electoral politics. Yet the hired producers, when not US presidents or shadowy companies like Cambridge Analytica, are almost exclusively Russia and China in this anglophone literature. DS might consider evidence that the US and other Western democratic governments also regularly produce foreign disinformation “influence campaigns” and have done so for quite some time (Bittman, 1990). Just recently, in August 2022, Facebook and Twitter made news when, for the first time (at least reported publicly), they took down accounts found to be part of a US foreign influence campaign (Nix, 2022). Gizmodo’s headline name-checked disinformation (Ropek, 2022). The New York Times further commented that “Researchers have long suspected that influence operations promoting U.S. interests abroad have been active, though no specific efforts had previously been documented and studied.” Indeed, it was not the first US disinformation operation to make news. In 2011, for example, the Guardian reported that the US military had employed software to create “fake on-line identities to spread pro-American propaganda.” These examples again raise the normative question: is any misleading communication “disinformation,” and is it supposedly condemnable? Just as importantly, if DS is to avoid being seen as an “administrative communication research” 14 wing of US and Western governments’ interests (or “soft” on them), it will need to spend more time studying US and other Western computational propaganda.
Conclusion
In conclusion, one might regionally boomerang the cryptonormative metaphor of “information disorder” or disinformation. Instead of assuming that disinformation is an external agent infecting healthy democracy and media “systems,” why not ask what might be the historical relationship of actual liberal democracies to the production processes of disinformation? Instead of focusing just on hired hands’ filtered, isolated true/false statements as corrupt “information” and on how false beliefs are dangerous (of course, they can be), and then on regulatory responses, might we take a more historical look at the multiple roots and development of “information disorders” and the ensuing panic about them, open to the possibility that failures of liberal democracy have helped create them? Why is there an “industry” of disinformation at all? What kinds of values and power relations are its conditions of possibility? Consider the values of Serazio’s (2018) political consultants, no doubt bracketed by the job’s instrumentalism: “On balance, interviewees constructed the audiences they targeted as rather childish: apathetic toward public affairs, incapable of handling complexity, and easily titillated by flashy visual and emotional devices” (p. 144). Might we look more at what kind of structural political change is necessary to overcome the temptations to produce (in primary and secondary participatory levels) and live in the perceptual path of the so-called disinformation, on social media and elsewhere?
Given all the resources poured into managing and nudging citizens’ attention, knowledge, motivation, and emotion from the mass to social media age in liberal democracies, there are thus deeper historical reasons, which stretch anxiously into the present, for which people are distrustful of macro-truth tellers trying to restore gatekeeping power and suppress panics; “truth-tellers” who, long before bots and social media in the modern liberal period, have schemed systematically to manage their fellow citizens’ political participatory power. 15 Many of them were hired hands.
In summary, to help clarify and deepen theories of disinformation (including for-hire), I made three critiques, relating to the contradictions of conceptual definitions, disciplinary unneighborliness, and cryptonormativity. The conceptual flux, the neglected neighbors, and the cryptonormative tendencies all entail a now-you-see-it-now-you-don’t effect for disinformation for-hire. Finally, I raised the potentially vexing question of whether “disinformation” and “misinformation” (and various corruptions, distortions, pollutions, or disorderings of “information”) are really the concepts scholars need to do the work of analyzing and theorizing the messy and unnerving political communication landscape some reluctantly call “post-truth politics.”
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
