Abstract
From pandemics to political campaigns, online misinformation has become acute. In response, a plethora of interventions have been offered, from debunking and prebunking to fact-checking and labeling. While the technical efficacy of these “solutions” is debatable, I suggest a more fundamental failure: they rely on a humanlike caricature, a rational and ethical figure who only needs better facts to disavow misguided misinfo practices. Instead I argue that misinformation studies must incorporate a more holistic human. Drawing from the broader humanities, this article conceptualizes the actually-existing human who can be emotional, factional, and bigoted – all qualities instrumentalized and amplified by online media. Reinserting this missing figure reintroduces agency and antipathy into misinformation studies. Misinformation is not something done to innocent subjects who merely need to be educated, but is an active practice shaped by identity and sociality that reflects the contradictions and frictions intrinsic to human nature.
Introduction
Misinformation, typically defined as the dissemination of false or misleading information, has become increasingly acute, whether influencing opinions during political campaigns or shaping public health behaviors throughout a pandemic. A raft of recent research has responded to this pervasive and increasing threat, attempting to use technologies to mitigate misinformation (Bodaghi et al., 2024; Fung et al., 2022; Sharma et al., 2019).
I argue these interventions are fundamentally flawed due to epistemological rather than technical reasons: they lack an adequate conceptualization of the human. Many studies presuppose a highly rational and ethical subject, a stereotype who is consistently logical and civil. Flawed foundations produce flawed framings. Building on the classic mis/dis schema (Wardle and Derakhshan, 2018), misinformation has become a mistake by those who simply need “better facts” to make better decisions, while disinformation has turned into an attack by “bad actors” on a passive populace. This flat figure is so far removed from actual humans, with their identities and socialities, their desires and enmities, that it effectively forms a void rather than a substantial subject. In short, misinformation is missing the human.
Effective misinformation solutions must begin with solid foundations. To this end, I first sketch misinformation’s humanlike caricature and then draw from the broader humanities to conceptualize the actually-existing human. The argument, while provocative, echoes recent research that critiques misinfo solutions as divorced from the real world (Broda and Strömbäck, 2024) and suggests misinformation must better account for its links with affect and emotionality (Young, 2021) and with individual and group identity (Reddi et al., 2023), as well as its enticements and entanglements with deep aspects of human nature (Ecker et al., 2022).
Humanlike stereotypes
The humanlike caricature seen across many misinformation studies consists of two assumptions. The first is that this figure is rational, a reasoning mind who applies logic and carefully weighs up points when confronted with information. This is the figure behind the common framing of misinformation as incorrect facts or “fallacious reasoning” (Cook et al., 2018: 1). If facts are the problem, then fact-checking is the solution, with 200 fact-checking initiatives proliferating across 60 countries (Kertysova, 2018). As manual fact-checking hits its limits in the face of data deluge, we see a push to automate it via machine learning, NLP, and other AI techniques (Santos, 2023). In a sample typical of the genre, researchers speak of developing a “robust fake news detection system that not only fact-checks information pieces provable by background knowledge, but also reasons about the consistency and the reliability of subtle details about emerging events” (Fung et al., 2022: 4790).
All of these interventions suggest that the reasoning human simply needs extra reasoning power provided by technology. They assume that misinformation is merely a result of an information deficit (Simis et al., 2016) or even equivalent to it (Kessler et al., 2022). Here humans are rational individuals but possess insufficient knowledge to effectively evaluate every claim they encounter. Framed like this, the way to counter misinformation becomes almost self-evident: to somehow bolster the vulnerable subject against the data deluge by providing them with additional facts or data. Across 98 tools for combatting disinformation, almost half were verification tools, 20 related to education and training, and 14 were used for credibility scoring (Kavanagh et al., 2020). In short, if the problem is insufficient information, the solution is more or “better” information.
But actual humans are no longer rational (if they ever were). The totalizing principles of the Enlightenment – scientific positivism, rationality in the service of progress, and an exclusive claim to represent reality – “have been fatally undermined” (Larbi, 2019). It is telling that recent years have seen the rise of fake news, post-truth, and alternative facts (Farkas and Schou, 2019; McIntyre, 2018). While these developments are certainly not condoned, they are important in signaling a broad dissatisfaction with the dominant paradigm of rationality. This is the culmination of a long and “decisive shift away from objectivity” toward a moment in which “facts have lost their currency” (O’Callaghan, 2020). As Adams et al. (2023) argue, the moral panic around misinformation stems from intersubjectivity replacing objectivity as a primary way of interpreting issues and events. Rather than lamenting this or pining for a return to Enlightenment ideals, we should incorporate an understanding of the human that acknowledges this shift.
Indeed, other domains have already done so. In healthcare, humans make decisions with their naturally limited, faulty, and biased decision-making processes; interventions that presume rationality must be reworked (McCaughey and Bruning, 2010). On platforms, users are uncertain and behave in “irrational” and varying ways; privacy interventions must incorporate this understanding to be effective (Acquisti et al., 2021). In contrast, misinformation studies, despite being a newcomer, clings to an outdated model of human behavior, a figure who only requires perfect information to make the “correct” decision. Such a figure was buried sometime in the mid-20th century, or really, never existed at all. As Latour (2012) stressed, we have never been modern.
Second, this humanlike caricature is assumed to be ethical. By ethical, I mean a figure who always respects the humanity in others (Kant, 1785), and whose habits, actions, and responses are directed toward the moral and the good (Mitchell, 2015). Many misinformation studies seem to presume that individuals are inherently honorable and tolerant and consistently achieve a virtuous life, even online. This assumption drives a view of misinformation as a malevolent technique carried out by “bad actors” on decent people (Kreps, 2020). This “scourge” takes the form of “attacks on liberal institutions, electoral processes, and social norms” executed by rogue states and social manipulators (Pherson et al., 2021: 316). Misinformation is framed as a morally objectionable form of “epistemic exploitation” that preys on the moral but vulnerable (Fritts and Cabrera, 2022).
A perfectly ethical human drives a view of misinformation as an ill that only needs to be unmasked in order to be disowned. Once information is revealed to be misinformation, good people will want nothing to do with it. This is why the primary focus of many studies is identifying misinformation (Islam et al., 2020; Su et al., 2020), with little rationale or follow-up. Informed of their mistake, the upstanding citizen corrects their practice and continues their careful dialog in the service of the public sphere (Habermas, 1974). While they may disagree with a particular stance or political ideology, they carefully check sources and veracity before posting, engaging in online debate with tolerance and civility.
Actual humans are not so innocent and austere. Whether Zoom-bombing anti-racist events (Ali, 2021) or shitposting in forums (McEwan, 2017), humans can be disruptive or actively antagonistic (as the next section explores). The hoax and the rumor in particular suggest a framing of misinformation that differs considerably from malevolent actors duping a passive public via propaganda. Hoaxes can be humorous, a practical joke staged for the public that exhibits performativity or even playfulness (Fleming and O’Carroll, 2010). Similarly, rumors are driven by a mixture of ambiguity and curiosity; they relieve anxiety, and their communal telling and retelling constitute a pleasurable social interaction (Mullen, 1972). Rumors that are funny or assist in sense-making are forwarded as a way of enhancing one’s self or relationships (Shen et al., 2021). Both hoaxes and rumors splice sociality, friction, and playfulness into an issue which is often framed as an incredibly serious ploy carried out by the exploiters on the exploited.
Bringing these insights together, the “human” at the heart of misinformation studies is only really a caricature. Rather than a thick description (Geertz, 2008), a layered and complex portrait of the human condition, this straight-laced or even self-righteous figure is a cardboard cut-out, a flat persona stripped of desire and plucked out of any sociocultural milieu. Comparing this generic ideal, for instance, against the online troll (Buckels et al., 2014; Cheng et al., 2017), a playful trickster who stirs up trouble for fun – or even just against living breathing people in social networks – highlights the poverty of the former and the rich particularity of the latter. Indeed, the figure that underpins many misinformation studies – particularly those which are solutions-focused – is so insubstantial as to be nonexistent. In other words, there is no human at all.
Reinserting the human
Remedying this oversight begins by reinstating a more holistic human at the core of misinformation research. Such a conception does not ignore the messy or irrational qualities of human nature, but actively incorporates them into a more robust and authentic model. Misinformation is “not the result of mere ignorance but is driven by factors such as conspiratorial mentality, fears, identity expression and motivated reasoning,” stress Ecker et al. (2022), “reasoning driven more by personal or moral values than objective evidence.” In this section, I sketch out the actually-existing human, focusing on three traits that are integral but have been overlooked. To build up this multifaceted portrait, I draw widely from across the humanities, including race and cultural studies, political science, and psychology – while paying particular attention to media studies to showcase how this subjectivity is instrumentalized and amplified in the networked environments where misinformation plays out.
Emotional
First, humans are emotional. “Emotional” here does not necessarily denote an intense feeling (though it may be accompanied by one), but is shorthand for a mode of acting that favors intuition over reason or the instinctual rather than the rational. Faced with an incredible volume and velocity of conflicting information, we turn to the non-thought of feeling, reactions, and routines. Conscious consideration is mentally expensive, saved for a small set of substantial decisions (Kahneman, 2015). For this reason, psychologists suggest we engage with the world in a “predominantly feelingful manner” (Cromby, 2007: 111). This mode is both intuitive and immediate. Zajonc (1984) asserts the primacy of affect, stressing it comes first and informs slower modes, while Pham et al. (2001: 185) contend feelings “frame subsequent thought generation through the spontaneous priming of feeling-consistent cognitions and the controlled retrieval of knowledge that helps explain the initial feeling response.” The growing emphasis on emotionality rather than rationality has been described as a paradigm shift in decision theories, a mountain of evidence suggesting that emotions constitute potent, pervasive, and predictable drivers of decision making (Lerner et al., 2015).
Online environments seek to capture and amplify this emotion. Social media feeds, driven by the prime metric of “engagement,” inherently privilege content that is controversial, attention-seeking, or outrage-inducing (Munn, 2020a). Indeed, we see a trend toward more visual and visceral formats. Facebook provided a text box and a prompt: “how are you feeling?” Instagram bypasses the textual, prioritizing the image and the hashtag, while TikTok aims to grasp the mood or vibe of subcultures through short form video (Dominus, 2020). This trend is exemplified by the emoji, the tiny pictorial package of affirmation, outrage, or opinion. “In their anti-semanticism, emojis oppose symbolic delegation,” writes Kornbluh (2024: 42), “aesthetically crystallizing the political economy of instantaneity and flow.”
Rational debates and nuanced deliberations are too slow, unable to capture the microseconds that characterize the attention economy (Munn, 2020b). Instead, the affordances and algorithmic logics on these platforms establish spaces where outbursts of affect win out. As one ex-Google engineer stated, the design of these environments privileges our “impulses over our intentions” (Lewis, 2017). The result is an environment where emotive rumors are shared rather than checked: falsehoods spread farther, faster, and more broadly than truth in all categories of information (Vosoughi et al., 2018). The design of online environments privileges emotion and immediacy over rationality and rumination. Given these conditions, I echo other studies (Young, 2021) in suggesting a more intuitive and expressive conception of the human must be at the core of misinformation studies.
Factional
Second, humans are factional. The human mind has evolved in a context characterized by a struggle against other human beings for available resources (Geary, 2005). With survival at stake, those who could form strong bonds with others and work together possessed distinct advantages. Indeed, psychological studies suggest these zero-sum game pressures may not even be necessary for competition and discrimination between groups (Doise and Sinclair, 1973; Ferguson and Kelley, 1964; Rabbie and Wilkens, 1971). All this evidence indicates “in-group bias is a remarkably omnipresent feature of intergroup relations” (Tajfel and Turner, 2004: 56). Individuals show favoritism to those within their coalition or community while ignoring or ostracizing those deemed to be outside this circle.
Certainly, these factionally driven judgments emerge most readily around controversial issues. There is little division about the existence of birds or the sum of two and two. Yet controversies are precisely where we see misinformation deployed, from politics and pandemics to hot-button topics like abortion and gun rights. The notion of the “fact” in these cases becomes debatable and simply less useful at an operational level. As Clark et al. (2019) observe, even if experts could agree on the facts, responses reflect not just empirical details but an individual’s political vision (what is happening here and what should our response be). Such tribal bias is a “nearly ineradicable element of human nature” (Clark et al., 2019: 587) that skews our evaluation of information in ways that benefit the self and the group.
Factionalism shapes how we assess and even approach information. The right associates misinfo with the radical left; the left associates it with the radical right – the credibility of information is determined by ideological bias (Hameleers and Brosius, 2022). When evaluating whether a story is misinformation, users employ motivated reasoning driven by partisan differences (Jennings and Stroud, 2023). In their Brexit study, Greene et al. (2021: 587) label this tendency “ideological congruency” and demonstrate how fake news denigrating the opposition created high rates of false memories. Pandemic news and political stories are interpreted in ways that match existing worldviews. Given a robust conception of the human, this should come as no surprise: information is not evaluated by a lone individual without a memory, but by someone with kith and kin, with beliefs and a background.
Humans thus do not consider claims and events in some idealized vacuum, but are highly aware of their social context and relational dependencies. Situated in these vital social networks, the “truth” – whatever that might mean – is just one of many factors that must be considered. Faced with a difficult issue, individuals will interpret this information in ways that preserve their identity, prioritizing group belonging over sound reasoning and judgmental accuracy (Kahan et al., 2017). To really understand the deep allure of misinformation, its repeated hold even in the face of corrections (Ecker et al., 2022), misinformation studies must incorporate an adequately factional human whose loyalties run deeper than logic.
Bigoted
Finally, humans can be bigoted. While this trait overlaps with factionalism, here we are dealing with out-group hatred rather than in-group favoritism, with intolerance for others rather than group unity. This shift in focus can also entail a shift in disciplinary expertise. Alongside psychology and political science, misinformation studies should draw from race studies, feminist studies, disability studies, and other fields that have long recognized the ability of humans to discriminate and subjugate.
On the one hand, such bigotry is not new but ancient; prejudice based on cultural, epidermal, or religious difference is timeless. Antisemitism (Braun et al., 2000; Lindemann and Levy, 2006) and antifeminism (Federici, 2021; Theweleit, 1987), to take just two strains of hate, have histories which stretch for hundreds or thousands of years. On the other hand, networked media allows these antipathies to be presented, organized, and practiced in novel ways. Digital affordances enable hateful ideologies to be circulated in the public arena, to take on compelling forms, and to be taken up by their adherents, often with devastating effects (Munn, 2023). Bringing these two insights together, bigotry is baked into human nature – but also stoked and instrumentalized by mediated misinformation.
Openly discussing these themes is already a stark difference from many studies of misinformation solutions, where race, discrimination, and prejudice are not even mentioned. This systematic silence, this bracketing out of positionality, of race, class, gender, and other differences, is itself a mechanism that perpetuates hegemony and inequality. Statements like “I’m colorblind, not racist” claim neutrality, but that neutrality is a myth (Anderson, 2010). This is a kind of ambient and quiet bigotry, a bigotry that refuses to even admit its own existence. Such bigotry is “a profoundly invested disingenuousness, an innocence that amounts to the transgressive refusal to know” (Williams, 2016: 27).
Bigotry shapes the production, consumption, and circulation of misinformation. Reddi et al. (2023) suggest that “identity propaganda” takes three forms: othering narratives that alienate non-dominant groups; essentializing narratives that create generalizing tropes of marginalized groups; and authenticating narratives that call upon people to prove or disown their membership in a group. Crucially, these narratives rely upon historical power relations; they hinge on and perpetuate pre-existing power structures. In this sense, the claim that cognitive biases cause bigotry (Friedman, 2023) should be flipped. Bigotry is the a priori and unconscious characteristic that shapes downstream thought.
This doesn’t imply that every human exhibits virulent racism or sexism. It simply acknowledges that humans are highly attuned to difference in its various forms and that prejudice, favoritism, and discrimination can manifest in systemic and subliminal ways. This is a far cry from the cookie-cutter caricature currently on offer, who is assumed to be liberal, civil, and infinitely tolerant. Yet if the actually-existing human is certainly more fraught, they also seem more authentic. Integrating this figure into misinformation studies would mean acknowledging that human relations also contain frictions, fears, and antagonisms.
Reconsidering the issue
Reinserting the human into misinformation should reshape how it is framed and fought. A new subject means a new problem and new solutions. Fact checking provides one brief example. Misinformation’s rational caricature has meant fact-based approaches have dominated. But the actually-existing human suggests these interventions are catering to an ideal who doesn’t exist. Instead of the “pure” evaluation of competing claims, we see subjects with lived experiences, communities, and preferences, who respond to information through a mixture of feelings, factionalism, and even animus for others.
Bringing together these three aspects – the emotional, factional, and bigoted – already starts to produce a more fine-grained portrait of the human and shed light on insights from emerging studies. Horner et al. (2021), for instance, found that participants were more likely to believe headlines that aligned with their existing beliefs and reacted with negative emotions to headlines that attacked their party. “Ideologically uncongenial” sources are discredited as fake news (van der Linden et al., 2020). The factional and emotional converge here strongly to shape how information is perceived and responded to. Another study found that participants’ dissemination of fake news was amplified by conditions of fast-paced and impulsive decision-making – and this tendency was particularly pronounced among respondents with higher right-wing authoritarian attitudes (Schulte-Cloos and Anghel, 2023). The emotional and bigoted overlap here in powerful ways, directly influencing how misinformation is consumed and re-circulated.
Reinserting the human reintroduces agency and antipathy into misinformation studies. Misinformation is not an attack by bad actors on innocent civilians, nor even an infodemic (Singh and Banga, 2022) that the public needs to be inoculated against, but is something that is actively constructed by society itself. “Misinformation is not something that happens to the mass public,” stresses Kahan (2017), “but rather something that its members are complicit in producing.” Individuals make choices about what content to consume and what rabbit holes to pursue, choices that protect their identity and confirm existing beliefs. These ideologies are not rational, based on perfect information, but driven by dreams, fears, and lived experiences. And these ideologies are not always civil or liberal but are shot through with ignorance, prejudice, and friction. These agonistic positions are key for humans and a broader democratic society (Laclau and Mouffe, 2014), and should not (indeed cannot) be smoothed out.
A messier human means a messier discipline. The egotistical and emotional human tramples through the clean and comforting logic that assumes misinformation can be solved with information. Instead, misinformation studies will need to reach across disciplines to develop more multifaceted responses that draw from media, race, and cultural studies, psychology, political science, and education. Such a meta-discipline would move beyond narrow frames like “fake news” to address complex epistemological challenges (Cook, 2023). After surveying 10 years of misinformation studies, Broda and Strömbäck (2024: 21) conclude “we have yet to develop a more comprehensive understanding of people’s motivations to engage with misinformation, disinformation, and fake news.” A rich interdisciplinary approach would be poised to address this gap.
While some promising research has begun to reframe misinformation (Ecker et al., 2022; Young, 2021) and productively question long-held assumptions (Adams et al., 2023), there is little systematic integration between this newer work and technical countermeasures, with many fact-checking approaches poorly linked to psychological research (Ziemer and Rothmund, 2024). To make progress on this issue, misinformation studies will need to robustly connect human insights with real-world interventions. This may mean bringing together stakeholders across the public and private sectors to design interventions for specific communities and applications. Climate change, conspiracy theories, and anti-immigrant misinformation instrumentalize identity, sociality, and raciality in distinct ways and call for distinct solutions.
Certainly, then, the human complicates matters, adding layers and undermining any belief in a singular, all-encompassing solution. But turning misinformation from a tractable computational problem into a wicked problem seems to be the more honest and ultimately beneficial move, opening up new perspectives and new interventions from a far broader community. Having reinserted the human, misinformation studies can finally begin in earnest.
