Abstract
In recent years, government agencies, information institutions, educators and researchers have paid increasing attention to issues of misinformation, disinformation and conspiracy theorizing. This has prompted a seemingly endless supply of guides, frameworks and approaches to ‘combating’ the problem. In studies of mis- and disinformation, a constellation of analogous concepts is defined in multiple ways across multidisciplinary literatures and institutional contexts. Misinformation, disinformation and conspiracy theory are often conflated, lacking specific, portable definitions across fields of study. Linguistic metaphors are often leveraged in place of this definitional work. The larger conceptual metaphors that they connote contain normative assumptions that often impose values and moral imperatives, imply deficiencies, assume intent, and foreground individual agency or lack thereof. Metaphors are as restrictive as they are illuminating; once used, a metaphor also applies constraints to the way in which a phenomenon can be understood. Metaphors not only shape the ways in which science is communicated to the public, but also the kinds of questions that are asked, the theories and methods used, and the parameters of the research design. By analyzing instances of linguistic metaphor, this exploratory study identifies and develops two conceptual metaphors that are frequently evoked to discuss mis- and disinformation: embodied health metaphors and environmental health metaphors. The former includes linguistic metaphors like viral/virality, infodemic, infobesity, information hygiene, information dysfunction, and information pathology. The latter includes linguistic metaphors like information pollution, infollution, and digital wildfires.
Uncritically invoking such metaphors adopts tacit arguments deriving from the original field of study (e.g., public health’s tendency to equate individual embodied health with virtue), or the image of the metaphor itself (digital wildfires implies quick spread and immediate danger), or both. Widespread and uncritical use of such metaphors, we argue, rewards speed and epistemic homogeneity in mis- and disinformation research – ultimately discouraging in-depth inquiry.
Introduction
Mis- and disinformation are frequently discussed in metaphoric terms. Metaphors are often used to diagnose ‘problems’, and set the terms by which we research, debate, and set policy. One such metaphor, ‘infodemic’, was famously invoked by the World Health Organization as the COVID-19 pandemic ravaged the world. Coined by David J. Rothkopf in a 2003 Washington Post opinion piece, it was originally used in relation to the SARS outbreaks in the early 2000s. Another metaphoric term, ‘information pollution’, is rooted in foundational Information Studies research concerned with knowledge work and workers. Meanwhile, the concept of ‘digital wildfires’ can be traced back to a World Economic Forum report from 2013, which warned of a potential ‘viral spread’ of mis- and disinformation (Howell, 2013a). No matter their origins, terms like these are based in conceptual metaphors whose portability allows them to circulate across disciplines and between academic, policy, and journalistic realms, often changing function and definition as they increase in popularity. The rate at which this happens is exacerbated by the implicit urgency conveyed by these metaphors, with academics, journalists, and policymakers alike framing mis- and disinformation spread as a crisis, an emergency, a grand challenge – something that needs to be addressed as swiftly and decisively as possible. With this sense of urgency at our backs, it becomes increasingly difficult to examine and rethink our most fundamental approaches to the issue – including the language we use to discuss it.
In recent years, there has been some problematization of using metaphors uncritically to discuss misinformation, platform governance, and the Internet more broadly (Cowls et al., 2022; Simon and Camargo, 2023; Wyatt, 2021). This paper adds to that literature by identifying two new areas of conceptual metaphor, which we label as environmental health and embodied health. We trace these conceptual metaphors through their linguistic metaphoric instances as they appear in academic research articles and white papers. Environmental and embodied health metaphors have also made their way into journalistic contexts (American Heart Association, 2020; Elliott, 2013; Ovide, 2020; Snyder, 2020; Stolberg and Weiland, 2020). This paper is a theoretical intervention in the vein of Wyatt (2021) and Simon and Camargo (2023), who encourage critical Internet studies scholars to be wary of the ideological implications of repeated invocations of specific conceptual metaphors – and to perhaps think before they invoke common linguistic metaphors. Our paper asks: what are the conceptual metaphoric antecedents of the linguistic metaphors we observe in discussions of mis- and disinformation, and how might the implicit normative commitments and corrective imperatives contained in such conceptual metaphors shape the way we discuss and devise solutions to the spread of mis- and disinformation?
Habermas, for whom truth is only possible through consensus, differentiates between misinformation and disinformation by virtue of the creator’s intention (Southwell et al., 2017). This distinction between the two terms is generally accepted among scholars today: Benkler et al. (2018) define misinformation as ‘communication of false information without intent to deceive, manipulate, or otherwise obtain an outcome’, and disinformation as ‘dissemination of explicitly false or misleading information’ on purpose (p. 33). Despite various attempts to create umbrella terms for both mis- and disinformation, many scholars prefer the term disinformation due to its ability to convey the harmful intent of its creators (Fallis, 2014; Wardle and Derakhshan, 2018). Additionally, the term serves as a more effective lens for understanding power dynamics and inequality (Kuo and Marwick, 2021).
In the past eight years, academics, journalists, and policymakers have approached misinformation as though it is an urgent grand challenge in need of immediate and swift countermeasures. Yet many of the assumptions underlying studies of mis- and disinformation have recently come under scrutiny. Counter to the widespread belief that encountering misinformation erodes trust in institutional knowledge sources, Thorson (2024) found that exposure to misinformation on social media increases trust in traditional news media. Further, engagement with misinformation on social media does not equate to belief – it is unclear how frequently people become convinced of a given misinformation narrative because they see or engage with it on social media (Altay et al., 2023; Guess et al., 2023). Adams et al. (2023) argue that the framing of misinformation as an existential threat to humanity has little evidentiary support, and that efforts to reduce misinformation are out of proportion with the threat it poses (also argued by Altay et al., 2023). The authors conclude by questioning the moral imperative that undergirds much research on misinformation – that believing in ‘truth’ is morally righteous and believing in anything other than truth is morally reprehensible. Other recent critical studies have pointed out the whiteness of the mis- and disinformation studies field (Kuo and Marwick, 2021), the lack of attention to how misinformation spreads outside of the U.S. and in bounded, relational contexts (Malhotra, 2020, 2023), the significance of race in targeted disinformation campaigns (Freelon et al., 2022), definitional and methodological issues (Altay et al., 2023), and the tendency towards alarmist narratives (Altay and Acerbi, 2023; Jungherr and Rauchfleisch, 2022).
This paper, which situates itself within this critical turn, offers a theoretical intervention that encourages mis- and disinformation researchers to not only reflect on their own research processes, but to consider their epistemic situatedness and the consequences of overusing conceptual metaphors laden with moral implications.
The use of metaphor is widely understood as a crucial tool in communicating research both across disciplinary boundaries and in public-facing science communication. Metaphor is used to simplify an unfamiliar or complex idea through more familiar descriptive language, drawing on a comparison to something similar. This paper unpacks specific instances of linguistic metaphors to tease out and illustrate larger conceptual metaphors. Conceptual metaphors provide cognitive frameworks that structure our understanding of abstract or complex concepts by relating them to more concrete or familiar domains (Deignan, 2017). In the domain of Conceptual Metaphor Theory, recurring instances of linguistic metaphors are used to identify conceptual metaphors. This paper is a first step towards the identification of two overarching conceptual metaphors for mis- and disinformation: human (‘embodied’) health metaphors and environmental health metaphors. Embodied metaphors draw on images of disease. Linguistic metaphors in this area include terms like infodemic, infobesity, and information disorders – in addition to more general linguistic metaphors like contagion and virality. Environmental metaphors manifest through linguistic metaphors such as information pollution and digital wildfires, which rely on eco-imagery.
Taken at face value, embodied and environmental health metaphors make mis- and disinformation feel insurmountable – epidemics and environmental devastation are both grand challenges that require a depth of human cross-cultural collaboration not currently feasible in a late capitalist context. Often, the solutions to such grand challenges involve state or governmental action – most frequently, regulation and surveillance – and/or individual action, in the form of reducing one’s carbon footprint or doing one’s part to ‘flatten the curve’. Using metaphors based in embodied and environmental health oversimplifies a complex and ever-changing phenomenon, giving it seemingly indisputable moral weight.
Embodied and environmental metaphors for mis- and disinformation contain underexamined normative commitments and epistemic assumptions about correctness and incorrectness, rationality and irrationality. The portability of such metaphors – their ability to traverse both disciplinary boundaries and borders between policy, journalism, and academia – makes them easy to implement. We argue that these conceptual metaphors contain an unspoken argument that mis- and disinformation are rampant, unavoidable, platform-agnostic, uniquely fast-spreading, and highly dangerous to the public. The stakes are high, but their constant heightening – in response to perceived intensification of misinformed discourses – is not only ahistorical but also does little to help us address the problem. Instead, it may push us to work too quickly. This paper critically examines how the use of embodied and environmental conceptual metaphors for mis- and disinformation positions the phenomenon as a grand challenge imbued with an axiomatic urgency that disincentivizes in-depth examination. Further, the solutions that emerge follow the field each metaphor draws on – public health in the case of embodied metaphor and environmental science in the case of environmental metaphor.
Understanding metaphor
Responding to what he considered to be an inadequate understanding of the function of metaphor, Max Black (1955) put forth the interaction view of metaphor. Instead of metaphor simply substituting literal description with comparison to the familiar, Black claimed that metaphor also constructs a relationship between two entities. Lakoff and Johnson (2008) identified the utility of metaphor as a necessary coping mechanism for dealing with abstraction by translating it into the concrete and the familiar. They argued that metaphor was an essential means by which human beings understand themselves and their environment, not simply reflecting reality but shaping the sense of what is possible, desirable, or taboo. ‘Time is money’, for instance, both reflects and constitutes the structural similarities between time and money within a capitalist system. The metaphor simply and succinctly asserts that time and money are both finite resources, that they are exchangeable, and that they are valuable: we save, waste, spend and invest our time and our money.
Metaphors are as restrictive as they are explanatory. Once used, a metaphor applies constraints to the way in which a phenomenon can be understood. Black uses the example of describing a field of battle in the vocabulary of chess: ‘The enforced choice of the chess vocabulary will lead some aspects of the battle to be emphasized, others to be neglected, and all to be organized in a way that would cause much more strain in other modes of description. The chess vocabulary filters and transforms: it not only selects, it brings forward aspects of the battle that might not be seen at all through another medium’ (Black, 1955: pp. 288-289).
In highlighting one facet of a phenomenon through harmonious resemblance, a conceptual metaphor also obfuscates other aspects. Uncritical, repeated use of the same metaphors thus necessarily shapes both conceptualizations of complex phenomena and the responses devised.
Lakoff and Johnson (2008) extended the domain of metaphor beyond the linguistic and into the cognitive, identifying the role of metaphor in collective meaning-making. Charteris-Black (2004) built on this work with Critical Metaphor Analysis (CMA). CMA suggests that the same concept can be expressed through multiple metaphors, just as the same metaphor might be used in manifold ways according to both shared context and ideological perspective. As such, metaphors are powerful tools ‘for constructing social relations and creating, contesting or legitimating specific, social, cultural or political and ideological representations of the world’ (Castro Seixas, 2021). Lakoff (1991) famously illustrated that the conceptual metaphors used to discuss the Gulf War – which drew on the conceptual domains of business, fairy tales, and sports – both minimized the war’s violence and destructiveness and justified its cause. Lakoff went on to analyze the metaphors of the 2003 invasion of Iraq by the United States in similar terms, examining the ideological work of metaphor as expressed in political speech, national policy, and international development discourse. For Lakoff, the labor of metaphor makes possible an otherwise nonsensical framing of the war as just. Though attentive to some of the ways in which metaphor is deployed ideologically, Lakoff’s analysis remains ahistorical, eliding the entrenched histories of racialized metaphors working to dehumanize and isolate Muslim populations, casting Islam itself as contagion (Kolb, 2020; Puar, 2020).
Metaphors are central to science and health communication; Feminist Science and Technology Studies (STS) makes a critical intervention in understanding the consequences of metaphor in the sciences. Metaphors not only shape the ways in which science is communicated to the public, but also shape the kinds of questions that are asked and the parameters of research design. In doing so, metaphor in scientific discourse is an important means by which science enacts power. In her book Woman in the Body, Emily Martin (2001) compared the metaphors used (both historically and contemporarily) to describe menopause with language used to describe ‘failure’ to produce in an industrial sense. These combined ideological expressions of failure, filtered through the lenses of capitalism and heterosexist patriarchy, link the purpose of the uterus itself to the success and health of the economy. In her work on the history of genomics, Evelyn Fox Keller (2009) describes how the early linguistic attribution of agentic features to genes played a role in the development of genomics research by defining its terms. Cowan and Rault (2022) argue that for feminist STS, ‘metaphor can be both meaning and method’. In describing Deboleena Roy’s work and surfacing Roy’s prompt that ‘metaphors lead to paradigm change’, they argue for the ‘collective refusal of some metaphors, the re-evaluation of others, and the introduction of new metaphorical frames and figures to reorient our work’ (Cowan and Rault, 2022: p. 1; Roy, 2018).
The circumscription of meaning facilitated by metaphor can have serious consequences. In their essay ‘Decolonization is Not a Metaphor’, Tuck and Yang (2012) discuss the effects of the ‘metaphorization’ of decolonization. Metaphor in this instance transforms the material action of repatriation into a broader category of ‘progressive’ action, which Tuck and Yang argue ‘recenters the priorities, well-being, futures, innocence, and good intentions of white settlers’. The metaphor works to absorb radical possibility into a pre-existing frame that does little to threaten the status quo. Metaphors are used to diagnose ‘problems’, and set the terms by which we research, debate, form communities, and set policy. J. David Cisneros (2008) describes metaphoric clusters that typify public discourse around immigration in the United States. These analyses go beyond the domains of the mainstream press and academic research, extending into the language of policy and proposed legislation, framing the figure of the immigrant ‘as invader, as criminal, and as disease...immigrant as pollutant’ (p. 590). Such violent metaphors dehumanize immigrants and migrants and are often operationalized in tandem with other narrative tactics (Walsh and Hill, 2023). As Flores (2010) points out in her historical work on immigration and metaphor, ‘the ease with which these constructions appear suggest that they have become deeply embedded within the cultural commonsense’ (p. 381).
Metaphor is also often used as a heuristic device for theory formation or modeling, becoming an element of ‘doing science’. The hubristic assumption that disparate domains are interchangeable appears nearly everywhere one looks. ‘Artificial intelligence’ as a concept is predicated on implicit connections between human cognition and computer systems (Barnden, 2008). Swarm intelligence, a subfield of artificial intelligence, models the collective behavior of ‘self-organized systems’, focusing on interactions between entities and their environment. Algorithms based on everything from ant colony optimization to shuffled frog leaping are being deployed across a multitude of contexts, from predictive policing to ‘smart city’ applications (Furtado et al., 2007; Zedadra et al., 2019).
The Internet has always been conceptualized in metaphoric terms like the ‘information superhighway’ or the ‘web’ (Johnston, 2009). Olson (2005) investigated the legal ramifications of the ‘cyberspace as place’ metaphor, correctly predicting, at least in part, that the application of real-world property law to the Internet would result in increased privatization. Wyatt (2004) shows that the ever-changing nature of the Internet, its functions, and its structures leads policymakers, academics, designers, technologists, and journalists to use metaphor to discuss its newer, less familiar features. Pointing out that metaphors are not merely descriptive, Wyatt asserts that they ‘also have a normative dimension; they can be used to help the imaginary become real or true’ (p. 244). Markham (2020) carries this idea forward: ‘The metaphors we use to frame our experiences of the internet (then and now) matter, in that they can construct both the enabling and limiting features of our technologies. These frames spread through everyday terminologies and visual imageries. What we called surfing, we now call sharing. What was once cyberspace and The Net are now platforms. What we once called online or networked is now IOT and smart. All of these are metaphors, but we might be less likely to notice them as such, because this is how dominant metaphors work – as infrastructures’ (p. 9).
Markham makes explicit the connection between material realities and imaginaries – as infrastructures, metaphors determine how we think about existing technologies, often creating the blueprint for the design and implementation of new features and affordances. Even the bygone metaphors Markham mentions continue to shape technological realities, becoming ‘the root system upon which newer metaphors build’ (Tiidenberg, 2020: p. 16).
Many scholars and journalists have critiqued the use of specific metaphors in discussion of the Internet (Blavin and Cohen, 2002; Frischmann, 2018; John, 2017; Postrel, 1998). Some metaphors’ normative dimensions are clearer than others – Gillespie (2017) discusses how social media companies’ rebranding of themselves in the 2010s as ‘platforms’ allowed them to operationalize the metaphor not just in terms of its computational meaning, but also through its architectural and political associations – as a raised structure from which to broadcast announcements and ideas. Similarly, Wyatt (2021) demonstrates that considering data as the ‘new oil’ or ‘gold’ packages it in terms legible to nationalist and corporatized interests: ‘Both industry and policy makers draw on these resource-based metaphors to emphasize the importance of exploiting the economic potential of data for private or public gain’ (p. 411). Specific metaphors serve specific interests, and the most powerful corporate interests are often the most successful in their dissemination of metaphor as a portable means for discussing new technologies.
Building on her 2004 study, Wyatt (2021) urges digital studies scholars to think both critically and imaginatively about the metaphors they use to discuss the Internet. She points to the ever-increasing pressure to publish scholarly work as a disincentive for scholars to unpack the metaphors they use. Highlighting McCloskey’s (1982) warning that ‘unexamined metaphor is a substitute for thinking’, Wyatt calls on scholars to ‘consider the power of our own words and metaphors’. Similarly, Tiidenberg emphasizes that ‘Each term we use invites different moral assessment and regulation…’ (Tiidenberg, 2020: p. 17). Metaphors are not merely portable terms for making technology broadly comprehensible – they are themselves infrastructures upon which power and material reality are scaffolded. Wyatt encourages critical scholars of Internet and digital media studies to either come up with more imaginative metaphors that are not so steeped in corporate and state interests, or to ‘dispense with metaphor and be firmly literal’.
In mis- and disinformation studies, a constellation of analogous concepts is defined in multiple ways across multidisciplinary literature(s) and institutional contexts. Misinformation, disinformation, and conspiracy theories are often conflated or lack specific, portable definitions across fields of study. At times, they remain entirely unscrutinized. Metaphor is often leveraged in place of this definitional work, and alongside it come normative assumptions that impose values, imply deficiencies and/or guilt, and assume intent or agency. In recent years, government agencies, information institutions, educators and researchers have paid increasing attention to issues of misinformation, disinformation and conspiracy theories, prompting a seemingly endless supply of guides, frameworks, and approaches to ‘combating’ the problem.
Metaphors across mis- and disinformation literatures can flatten complex phenomena; within the metaphor there is no distinction between instances of misinformation, all contributing to poor health outcomes (embodied) or a polluted earth (environmental). Misinformation is then positioned as a problem that is somehow both intractable and clearly bounded. Solutions are predicated on histories of disease and environmental degradation, inheriting eugenicist rhetoric from public health discourses and invoking the profound existential threat of climate change. Feminist science and technology studies identifies metaphor as a key tool for obscuring the power relations contained within a given dynamic, but also provides us with the imperative and mechanism by which to expose those relations. In unpacking the normative and implicit assumptions contained within a metaphor, we can begin to see the true contours of misinformation.
Embodied health metaphors
In her book Illness as Metaphor (1978) and its companion essay ‘AIDS and its Metaphors’ (2006), Susan Sontag explores the moralizing undercurrent present in the companion histories of metaphor and illness. Historically, many diseases were considered expressions of character defects or moral failings, stigmatizing illness itself as well as those who become ill. Making the explicit link between the use of metaphor and the lived consequences for those on the other end of public health initiatives (or the lack thereof), Sontag also reminds us that the stakes are material. To technofeminist scholar Donna Haraway, the immune system is ‘an elaborate icon for principal systems of symbolic and material “difference” in late capitalism’. The immune system is an overworked signifier, defining and delineating the normative ideal from the non-normative ill (1991, p. 204).
Although this tendency predates the advent of the COVID-19 global pandemic, since 2020 we have reached a rhetorical fever pitch that relies on metaphor to characterize mis- and disinformation as harmful to the health of both the individual and the body politic at large. The use of metaphorical language such as viral/virality, infodemic, infobesity, information hygiene, information dysfunction, and information pathology is ubiquitous in research literature, public health communications and throughout the media. The relationship between belief, opinion, policy, social media, and media coverage is dynamic, complex, and highly charged in moments of collective crisis. These relationships are co-constitutive even as metaphoric language can flatten the dynamics at play. Whether employed as a means of public science communication or in less self-aware ways, the use of imagery and metaphor of illness calls attention to popular anxieties about health, ability, personal responsibility, agency, and failure.
Despite the radical changes precipitated in part by the advent of the world wide web and personal computing, panic and hyperbolic hand-wringing about how individuals, communities, businesses, and institutions deal with ‘information overload’ remain commonplace across the literature. Although this literature covers a wide swath of disciplinary and professional contexts, a general agreement exists that too much information has a potential for deleterious effect on a subject and that efficient sorting between appropriate and inappropriate information is beneficial (Bawden and Robinson, 2009; Melinat et al., 2014; Miller, 1960; Roetzel, 2019). This framing – relying as it does on a function/dysfunction binary in which the sheer volume of information renders a subject dysfunctional, and in which ‘cures’ are understood only in terms of literacies – prefigures the move towards pathologizing information behavior and consumption in recent years.
In their survey of literature on information overload, Bawden and Robinson (2009) discuss its lack of conceptual precision, describing information overload as a ‘...state of affairs where an individual’s efficiency in using information in their work is hampered by the amount of relevant and potentially useful information available to them’ (pp. 3-4). They collect myriad synonyms across the literature to illustrate the slippery nature of the phenomenon, oscillating between the diagnostic and the descriptive. The multiple concurrent histories of information technology are full of references to some form of information overload, alternately collapsing into a panic over access afforded by new technologies and identifying emergent technologies as the solution.
Medical terminology slips into the language across the literature surveyed, from the specific (changes in attention span and anxiety related to information search) to the general (the dichotomy of functional/dysfunctional). One implication of medicalized language is that information overload is both harmful and in need of a cure. A cognate concept referenced in the article, ‘infobesity’, attaches additional stigma to the subject, side-stepping debates about the scientific basis for the term obesity itself as well as the outsized impact of fatphobia on the health and well-being of fat people. Additionally, the invocation of ‘obesity’ in this sense captures a key paradox across the literature. Much like the literature on obesity itself, it simultaneously frames disinformation as a matter of public health and of individual responsibility. This history within Information Studies positions communities of research and practice to frame distinct phenomena as information that is ‘bad’ for you or ‘good’ for you, and positions the agency of the subject as absent, misguided or out of control.
Building on Bawden and Robinson (2009), Culloty and Suiter (2021) diagnose mis- and disinformation as inherently pathological, identifying three information ‘pathologies’: bad actors, platforms, and audiences. Labeling some information and information behaviors in this way necessitates a diagnostic stance and implies that external treatment is necessary, warranting coordinated action by institutions deemed to be credible. If there is an information disease, then there must be information doctors. The concomitant language of ‘inoculation’ against misinformation is informative here (van der Linden, 2022; van der Linden et al., 2017).
The bibliometric study conducted by Bran et al. (2021) provides a broad snapshot of convergent fields and ideas under the larger umbrella of what they refer to as ‘information disorders’, or ‘three different notions: “mis-information” (false information, but which is not intended to cause harm), “dis-information” (false information intentionally spread in order to cause harm) and “mal-information” (genuine information that is manipulated in order to produce harm)’. This study attempts to quantify research on information disorders by analyzing and evaluating literature in the Web of Science Core Collection database from 1975 to 2021 and finds a wide range of disciplines and considerable breadth of terminology used to describe phenomena. While there is general acknowledgment that the area of study is challenging in its scope and breadth, not only are definitions and terminology multiple and unstable, but the boundaries around such phenomena are also equally multiple and unstable. This suggests that while coherent categorization is indeed a problem, it is not an isolated one.
As is clear from Bran et al. (2021), metaphors of health, illness and disorder are not new in the literature. However, since 2020, investment in research related to mis- and disinformation has exploded, largely due to the urgency around several simultaneous global crises, including the outbreak and spread of the global COVID-19 pandemic, acute and ongoing challenges to the functioning of democratic institutions, continued erosion of civil rights, and widespread calls for reconstituting approaches to public safety. Research institutions, civil society organizations, public institutions and news media have worked in various ways to name and address mis- and disinformation. The findings of Bran et al. (2021) illustrate this sudden and dramatic increase in published research, but there has also been a proliferation of practical literature on navigating the information landscape aimed primarily at the public (pamphlets, guides, community syllabi, how-to manuals, etc.) (Keselman et al., 2022; Phillips and Milner, 2023).
In many cases, the use of these metaphors goes unexamined. Articles attempting to diagnose, quantify, describe, or address mis- and disinformation often invoke the imagery of disease, virality, and public health in the title of an article but rarely enumerate or define those terms (Kouzy et al., 2020; van der Linden, 2022). In particular, the use of infodemic occurs frequently and typically without explication. Across the literature, bad information is seen as having a causal relationship to an infodemic, a phenomenon we are expected to immediately understand and apprehend. Occasionally, the World Health Organization (WHO) is cited to define infodemic as 'too much information including false or misleading information in digital and physical environments during a disease outbreak'. In their report, the WHO goes on to describe how these conditions undermine public health responses and can be either ameliorated or accelerated by social media and other communication technologies. The WHO's director-general, Tedros Adhanom Ghebreyesus, has been quoted as saying 'We're not just fighting an epidemic; we're fighting an infodemic' (The Lancet Infectious Diseases, 2020). The term 'infodemic' was coined in 2003 by journalist David Rothkopf in a Washington Post article on the SARS outbreak. In his words, an infodemic is 'A few facts, mixed with fear, speculation and rumor, amplified and relayed swiftly worldwide by modern information technologies…' He goes on to call infodemics 'the most virulent phenomena known to man', and to say that 'These Internet – or media – borne viruses create global panics, trigger irrational behavior…' (Rothkopf, 2003). The term has proliferated across multiple contexts, leading to what a recent report on the annual WHO Infodemic Management Conference describes as a term that is 'conceptually conflated…often overworked, and was currently used to refer to different concepts in different fields or country settings' (Wilhelm et al., 2023).
Through the lens of embodied health, Lor et al. (2021) discuss how libraries and librarians can and should respond to the infodemic that purportedly accompanied the COVID-19 pandemic. Just as we were taught to practice physical hygiene in response to the COVID-19 pandemic, some information literacy approaches suggest that the infodemic must be approached with information hygiene (Grimes, 2020).
Simon and Camargo (2023) explore the origins and limitations of the infodemic metaphor, situating it historically and demonstrating that information does not, in fact, behave like a virus. Stressing the tension between the term’s popularity and its uncritical invocation, the authors argue that the term’s widespread use ultimately oversimplifies complex social behaviors, resulting in chilling effects on deep academic inquiry, robust public discourse, and considered policymaking.
Even the term ‘viral’ draws on embodied health metaphors by framing the exponential popularity of specific pieces of media in terms of infection. Information of all shades of truth can ‘go viral’, but the metaphor of virality implies uncleanliness. Contamination and contagion are powerful ways to think about information, the Internet, and the way we consume and produce knowledge and media.
Hansson et al. (2021) set out to define and analyze potential harms related to information disorder when specifically applied to the context of COVID-19. Even as the authors are careful to admit that ‘potential, harmful information was (and in some cases still is) difficult to fact-check or debunk because of the novel nature of COVID-19, the lack of scientific evidence, and the frequently updated official recommendations and regulations’, their analysis of risk factors for information disorder positions it as a failure in crisis communication and/or the inability of the individual to discern good or bad information. The relative virality of information is positioned as having less to do with the quality of the information itself, instead relying on the abilities, predispositions and associations of the individual. This casts some people as having a ‘weakened immune system’ with respect to information, one that presumably puts the otherwise ‘healthy’ populace at risk.
In her 2018 article in Nature, Heidi J. Larson identifies what she refers to as ‘digitally enabled emotional contagion’ as a core issue in the erosion of trust in vaccines and vaccine-related public health initiatives. The shift in characterizing the problem from information-based to emotion-based entrenches the rational/irrational dichotomy. Using the framework of ‘moral emotions’, Solovev and Pröllochs (2022) argue that emotions associated with morality (contempt, anger, disgust, shame, pride, and guilt) accelerate or contribute to the spread of mis- and disinformation. Lloyd and Hicks (2021) advocate for a more expansive understanding of information literacy as a social and discursive phenomenon, a domain within which certain strategies and behaviors that have been characterized only in negative terms (such as information avoidance) can be understood as a part of a broader set of strategies for navigating complexity.
Although illness does not always equate to disability, metaphors relating to human health will almost always cast problems or challenges in such terms. Drawing on an ‘ideology of impairment’ (Schalk, 2013) that equates embodied disability with broad informational disorder, embodied health metaphors for mis- and disinformation implicitly equate health with virtue. Such metaphors ignore the lived experiences of fat and disabled communities and the effects of structural oppression on their everyday lives. Ultimately, embodied health metaphors emphasize the parallels between embodied health and mis- and disinformation – allowing their dissimilarities and divergences to slip through the cracks.
Environmental health metaphors
Just as embodied metaphors trigger disease-related anxieties, environmental metaphors activate generalized anxieties around anthropogenic effects on the Earth and the looming devastation of climate apocalypse. While most embodied metaphors derive primarily from policy and public health arenas, environmental metaphors have their roots in Information Studies scholarship.
Discussing the tendency towards and limitations of technological determinism resulting from discussing technology as either a tool or a system, Nardi and O’Day (2000) suggest that the concept of an information ecology ‘...includes local differences, while still capturing the strong interrelationships among the social, economic, and political contexts in which technology is invented and used’ (p. 47). This metaphoric approach attempts to scaffold an understanding of the complexities inherent in information environments. Information ecology has been proposed as one way to get beyond individualized approaches to dealing with mis-, dis- and other kinds of ‘bad’ information (Ma, 2021).
While ecology itself can be defined as the relationships of organisms to one another and to their surroundings, the associated discipline of environmentalism contains an inherent corrective imperative, fighting against adverse effects on the environment caused by industrialized civilization. Cunningham (2014) defines information environmentalism as ‘...a normative discourse that seeks to protect and nurture the information commons’. This metaphor is stacked on top of the ‘Internet as commons’ metaphor, which has itself become infrastructural. As anthropogenic climate change continues to have devastating effects around the world, viewing the so-called information commons as increasingly polluted by mis- and disinformation makes a certain amount of sense; both exemplify the unrelenting ravages of capitalism. The information commons, however, is conceptual and entirely human-created, unlike the natural world.
Relying too heavily on metaphor can constrain how we understand mis- and disinformation as problems, and how we frame solutions in terms of the prevailing logics of an environmentalist space. This section reviews two such metaphors rooted in information environmentalism: information pollution and digital wildfires.
Information pollution, sometimes termed ‘infollution’, has been discussed in Information Studies spaces for decades. Orman (1984) situates the metaphor historically: The second half of the twentieth century has often been characterized as the information age. As the industrial age brought about the problem of industrial pollution, it is only natural to expect the information age to produce information pollution. Indeed, information pollution is crippling our decision-makers and breeding skepticism in the public at large (p. 64).
This characterization of information pollution is both rooted in history and prescient in its forecast of the consequences of public skepticism. Similarly, Cai and Zhang (1996) suggest that ‘information revolution may incur information pollution, noting that industrial revolution incurred environmental pollution’ (p. 3124). Immediately, the implication of this comparison is that without proper regulation, information pollution will affect the health of the populace, filling their minds with smoglike haze.
Orman defines information pollution as ‘the contamination of information supply with incomplete, inconsistent, or irrelevant information’ (1984, p. 65). Perceiving the problem of information pollution as primarily affecting decision makers rather than the populace, he offered a technologically deterministic solution of decision support systems. Defining information pollution as ‘...the presence and spread of undesirable messages in human society, in quantities large enough to produce significantly adverse effects on human activities and social life’, Cai and Zhang (1996) take an information behavior approach to information pollution, suggesting a variety of potential models. They propose that mitigation techniques for information pollution should consider the value of a message, beyond its utility or degree of uncertainty – ‘a message can be essential, useful, useless, harmful, or even poisonous’. Particularly considered among the other descriptors, characterizing a message as poisonous assumes a complete lack of agency on the part of the audience or recipient of that message.
Like Orman, Nielsen (1984) conceptualizes information pollution in terms of a specific class of people: knowledge workers. For Nielsen, information pollution occurs with workplace interruptions (chat, email, etc.) that significantly lower knowledge workers’ productivity. Bray (2007) continues with Nielsen’s conceptualization of information pollution as a knowledge work issue, characterized by multiple distractions that force workers into multitasking. This recalls the functional/dysfunctional dichotomy at the core of research on information overload. For Bray, information pollution is one possible downside of the ‘positive movement of internet-enabled empowerment’. Like scholars before him, he explicates the information environmentalism metaphor in detail: I suggest that the IS field has reached a critical moment of realization, akin to the ecological conservation moment which grew in reaction to how 18th and 19th century industrial economies overlooked the reality that the world has finite resources and limits on how much waste material it can receive before the environment changes dramatically. Similar to the limits of Earth’s ‘environmental load’ with regard to human-made pollution, some of the technologies we have built have led (unforeseeably) to increased information pollution (p. 2).
Rather than designing technological solutions, Bray insists that information professionals must begin replacing existing information with better information, avoid technologies that foster constant shifting from task to task, and treat attention and cognitive capacity as a ‘scarce resource’ to be ‘conserved’. Iqbal et al. (2019, 2020) and Iqbal and Nawaz (2021) examine the impacts of information pollution in the workplace, continuing Orman and Nielsen’s line of inquiry in the context of knowledge work.
According to many who engage with environmental metaphors of mis- and disinformation, the Internet has ostensibly worsened the problem of information pollution and is itself a chaotic informational environment in need of cleanup (Pandita, 2014). Firat and Kurt (2015) developed an information pollution scale for use in educational contexts. Özdemir (2016), who defines information pollution as being caused by ‘deterioration of shared/transmitted information’, ‘misleading elements’, or ‘malicious contents’, argues that information pollution is a particular problem among younger generations because of their capacity to adapt quickly to new technologies. boyd (2014) problematizes the prevalent idea that teen use of the Internet and social media is bad and/or dangerous by debunking hyperbolic dystopian or utopian narratives, including ‘the dystopian notion that teens are addicted to social media’ (p. 15).
Wardle and Derakhshan (2018) conceptualized their well-known notion of information disorder in information pollution terms. Acknowledging the historical precedent for different types of bad or incorrect information (e.g., the existence of rumor and gossip throughout history), they argued that the Internet and social media have created a perfect environment for ‘information pollution at a global scale’. In contrast with earlier work, Wardle and Derakhshan take the metaphor at face value; at this point, the connection between the industrial and information revolutions and their concomitant ecological effects is implicit.
Wardle and Derakhshan, among others, conceptualize information pollution as a kind of grand challenge. Meel and Vishwakarma (2020) tackle information pollution on the Internet, which they define as contamination, on purpose or by accident, of the information environment. They point to the low barriers to entry for producing and disseminating information on social media as the main cause of information pollution, emphasizing the need to ‘quarantine the malice of information pollution’. As in Wardle and Derakhshan’s notion of information disorder, there is some metaphorical slippage between the environmental and the embodied in Meel and Vishwakarma’s use of ‘quarantine’. Further, where earlier, more robust metaphors relied on logics of industrialization and framed information pollution as a disruption of the information ecosystem, Meel and Vishwakarma position it as a byproduct of our digital communication ecosystem.
Since 2020, much of the scholarly work on information pollution on the Internet focused on articulating its hazards. Meel and Vishwakarma consider it ‘very dangerous’; Serrano-Puche (2021) claims that information pollution is caused by ‘a crisis of public communication’. Similarly, policy scholars Malin and Lubienski (2022) argue that ‘...“information pollution” relative to U.S. politics and policy is presently at crisis levels…’ (p. 3), especially in the context of education policy. Humprecht et al. (2020) state that we are in an ‘age of information pollution’, and Lavorgna et al. (2022) characterize information pollution as a ‘social harm’. These myriad uses of the term invoke it at differing scales, but almost always – again – as a problem or crisis with dangerous and wide-ranging effects, which must be solved as soon as possible.
One of the most comprehensive approaches to conceptualizing information pollution in recent years is Whitney Phillips and Ryan M. Milner’s You Are Here: A Field Guide for Navigating Polluted Information (2021). Phillips and Milner frame their study in familiar terms by at once slipping between embodied and environmental metaphors and conveying urgency: ‘Polluted information is a public health emergency’ (p. 5). They prefer the term ‘polluted information’ to ‘information pollution’ because, they argue, the information itself is being polluted, and because the phrase emphasizes the interconnectedness of its online and offline effects. Phillips and Milner argue that using an ecological metaphor to understand the way that ‘toxicity’ spreads online focuses discussion of mis- and disinformation on consequences rather than intent. Stressing that the utility of environmental metaphors lies in their emphasis on individual reflexivity, the authors ask: how do we each contribute to the polluted information landscape, even if we are not an industrial polluter, so to speak? The authors continually emphasize that none of us is above contributing to polluted information on an individual level, even accidentally. Their conceptualization of mis- and disinformation in environmental terms is more careful than most, attuned as they are to the pitfalls of uncritical use of metaphor and the tension between the structural and the individualized when it comes to devising solutions.
The metaphor of information pollution has inevitably made its way into the policy arena. Some government representatives have vaguely gestured to ‘living in an era of increasing information pollution’ (US AID, 2020), and others, like Secretary of State Antony Blinken, are more specific in their fears. Discussing the political landscape in the Philippines, Blinken said ‘The Internet is a double-edged sword. It can help in making a more informed vote, I mean, and encourage young people's participation, that is, if young people know how to sort information, because there is also prevalent information pollution online…’ (US Department of State, 2021).
We can see a direct throughline between Özdemir’s (2016) palpable worries about the naivete of ‘digital natives’ and Blinken’s worries about young Filipino Internet users struggling with media and information literacy. That the inverse could be true is not considered – the agency and knowledge of young Filipino Internet users, and of young people more broadly, are not treated as significant factors. The media has repeatedly characterized teens’ relationship to the Internet in general and social media specifically as dysfunctional and oppositional; again, young people’s expertise is ignored or doubted (boyd, 2014; Stern and Odland, 2017). Youth are a convenient scapegoat on which to foist a generalized feeling of being out of control of the information flows at work on the Internet. Blinken and others (Blankenship, 2020; US AID, 2020; US Department of State, 2021) continually propose fact-checking and media literacy as mitigation strategies, both of which have been shown to be of limited effectiveness (Thorson, 2016; Uscinski and Butler, 2013).
Early uses of information pollution draw on perceived parallels between the industrial revolution and the information revolution – the metaphor builds upon its own normative assumptions over time. Many who discuss information pollution on the Internet today take great pains to position it as a highly hazardous byproduct of the networked information landscape, tacitly arguing that cleaning up such contamination is of the utmost importance. Further, comparing mis- and disinformation spread to environmental contamination ignores the slipperiest parts of the issue – true pollution is material, it hangs in air and clogs up waterways; mis- and disinformation is immaterial and often hard to identify (Altay et al., 2023). The metaphor of information pollution also draws on a human-created problem that we have not yet found the collaborative wherewithal to solve. If we are unable to solve the ongoing environmental catastrophe resulting from outsized industrial production, colonialism, and capitalism, how can we do so in the networked digital world? There is an implication as well in these metaphors that the situation is ever-worsening when that may not be the case.
The term digital wildfires was first used in 2013, in the World Economic Forum’s Eighth Global Risks report: The global risk of massive digital misinformation sits at the center of a constellation of technological and geopolitical risks ranging from terrorism to cyber attacks and the failure of global governance. This risk case examines how hyperconnectivity could enable ‘digital wildfires’ to wreak havoc in the real world (Howell, 2013a).
A so-called digital wildfire refers to any social media event in which information spreads quickly and unpredictably. The report goes on to warn that the Internet enables the ‘viral spread of information that is either intentionally or unintentionally misleading or provocative, with serious consequences’. Webb et al. (2016) argue that, ‘If we accept digital wildfires as a global risk factor, we are led to examine the role of governance in regulating the “havoc” they can cause and the potential for a global ethos promoting digital responsibility’ (p. 194). They ultimately conclude that new approaches to social media governance might be necessary to contend with digital wildfires. Langguth et al. (2023) frame COVID-19 and 5G conspiracy theories as a digital wildfire, adding to the lexicon by designating them a complex digital wildfire because they comprised a glut of different messages across multiple platforms. They argue for a more abstracted governance that would remove opportunities for financial or other kinds of profit from the spread of disinformation.
Kopilaš and Gajović (2020) conducted a study that retrospectively analyzed a 205-member WhatsApp group that had been created to discuss the health effects of 5G mobile networks and to collect signatures on a petition against their widespread use. Though the authors do not use the term ‘digital wildfire’, they do describe the dynamics of the group as ‘wildfire-like’, due to the rapid, intense, and short lifespan of the activity within the group. Wildfire-like patterns, they argue, are at play across digital environments and are agnostic to the veracity of the information that spreads. Kopilaš and Gajović argue against the normative framing of digital wildfires as involving only bad information, instead suggesting that, ‘...wildfire-like dynamics are typical for the digital environment, as a specific feature of the digital society, and would happen in favorable circumstances regardless of the content, certainly not only if the content is malicious’ (2020, p. 9).
By contrast, and building on their earlier work (Webb et al., 2016), Edwards et al. (2021) take the existence of digital wildfires at face value, arguing that their endurance is an indication that not enough has been done to address them through governance. Exploring different approaches to governance, Edwards et al. begin to slip into a different area of metaphor, one of carcerality, using words like ‘enforcement’ and ‘policing’. One of their primary findings is that the lack of government supervision of tech companies’ supposed ‘self-regulation’ results in an emergent ‘digital gangsterism’ that must be addressed using carceral means: ‘...the policing of emergent technologies presents a major site of debate about potentially harmful, albeit legal, practices and a broadening out of the politics of criminalization beyond the usual suspects of offline street crime to encompass the organization of serious crimes online…’ (p. 16). The authors further suggest that artificial intelligence and machine learning tools are highly useful to ‘police online harms’ because ‘...conventional law enforcement is limited by the volume of social media communications and the capacity for human oversight’ (p. 16).
A 2013 New York Times opinion piece titled ‘Only You Can Prevent Digital Wildfires’, written by managing director of the World Economic Forum (and coiner of the term) Lee Howell, suggests that new social norms must emerge on social media before effective regulatory frameworks can be put in place. Howell (2013b) also asserts that: ‘It will also be necessary for more of the consumers of social media to become more literate in assessing the reliability and bias of sources’. Similarly, press releases and articles with headlines like ‘How to police “digital wildfires” on social media’ (Webb and Jirotka, 2016) and ‘Users key to stop social media wildfires’ (Economic and Social Research Council, 2018) continue to put the burden on individuals. The carceral language of policing is also notable in the former headline, particularly when accompanied by the header image of someone recording a militarized police force with a tablet.
The digital wildfire metaphor for mis- and disinformation emphasizes harm, unpredictability, and spread. Where information pollution implies a slow-moving contamination that slowly envelops the entirety of the information landscape, digital wildfire hints at a singular event, bounded by time and platform(s), that presents immediate danger. Framing mis- and disinformation as sites of immediate and profound danger invites carceral solutions to the problem rather than offering time and space to think expansively and critically about new and different approaches.
Discussion
Information does not actually spread like an infectious disease (Simon and Camargo, 2023). Metaphors like infodemic can oversimplify the complexities of our informational environment, developing the contours of the phenomenon and ratcheting up affects of crisis without much concrete evidence (Altay et al., 2023; Simon and Camargo, 2023).
When we use environmental and embodied health metaphors, they shape the kinds of solutions we consider, and can have differing levels of utility, effectiveness, and resonance for different communities and audiences (Brugman, 2022). Just as solutions to climate change and public health crises are frequently polarized between individual behaviors and regulatory strategies, solutions to misinformation spread are often based either in literacy or governance. Literacy framings remain limiting, however – even if they acknowledge the structural, they still rely almost exclusively on the decision-making and information behavior of individuals.
Across all these areas that engage with embodied and environmental metaphors – public scholarship, journalism, policy, and academic scholarship – is a tone of urgency and an implied need for timely crisis management. A significant amount of hand-wringing and concomitant funding and research has been done on mis- and disinformation as aspects of political polarization and the role of platforms in the U.S. context. As Kreiss and McGregor (2023) illustrate, focusing on the role of platforms in political polarization – and catastrophizing polarization as the primary threat to democracy – ignores the historically embedded structures of inequality along racial, gender, and class lines, further entrenching the hegemonic power structures at the core of racial capitalism. Platforms were born of and continue to exist within this historicized context, and research on them and their democratic role must contend with this history.
Further, using metaphoric language like information pollution, infobesity, infodemic, digital wildfire and contagion implies that there is an unavoidable, platform-agnostic ubiquity to misinformation. If there were air on the Internet – and indeed, for many of us, the Internet is so integrated with our daily lives that it has become ‘like oxygen’ (Markham, 2020) – these framings imply that misinformation would be in the digital air we breathe. The constant heightening of the stakes in response to perceived intensification of misinformed discourses does not actually help us to devise solutions. In fact, it may make us work too quickly to do so. Altay et al. (2023) problematize the assumption that we are in a so-called ‘age’ or ‘crisis’ of misinformation, showing that both the pervasiveness and the consequences of online misinformation are somewhat overblown: ‘...the internet is not rife with misinformation or news, but with memes and entertaining content’. They contend that scientists focus on misinformation on social media – particularly Twitter – because it is ‘methodologically convenient’, not because it is a rampant problem. Further, they assert that the virality of a given piece of information is not necessarily determined by its truth or untruth; rather, it must be assessed in terms of harm or ideological motivation. They further suggest that much research on misinformation underestimates the ability of social media users to assess and think critically about the information they are consuming – just because misinformation is prevalent on a given platform does not necessarily mean it will result in outsized harm.
Conclusion
The findings in this exploratory study are limited – we introduce two new conceptual metaphors for misinformation, ‘embodied health’ and ‘environmental health’, and encourage critical scholars of digital and Internet studies to reflect on their use of these and other metaphors. How we ask questions determines the shape of the research results – by extricating ourselves from taking these methods and metaphors at face value, we may liberate ourselves from foregone conclusions that either emphasize individual responsibility to assess information critically, or construct governance policies that rely heavily on logics of surveillance and carcerality.
Further research might explore other types of metaphor that we did not have space to contend with in this article, such as those based in war and combat. Additionally, a longitudinal Critical Metaphor Analysis study using a large corpus of data from journalistic, academic, governmental, and/or industry publications to identify and trace other linguistic antecedents of these conceptual metaphors and map their functionality would offer a more generalizable conclusion than this theoretical intervention can. Further research on media literacy, information literacy, platform governance, and technology policy could build on this study as well. Language shapes the very questions we ask; it is inevitable that it shapes the material realities of the research we conduct.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
