Abstract

Misleading and false information distributed online, often conceptualized as misinformation and disinformation, is a subject of increasing contemporary concern (Freelon & Wells, 2020; Starbird, 2019). Mis/disinformation is increasingly regarded as a threat to public discourse, democratic decision making, social cohesion, and ultimately our ability to identify and agree on solutions to the myriad global challenges we face. While false information has been a feature of human life since early history (Posetti & Matthews, 2018), there are good reasons to worry that current strands of mis/disinformation have devastating consequences that are only increasing. High-profile examples of mis/disinformation over the past few years include the claim that the 2020 US elections were stolen (Persily & Stewart, 2021), that Hillary Clinton operated a global pedophile ring out of a pizzeria (Kline, 2017), dangerously misleading cures for Covid-19 (Bleakley, 2021), the claim that climate change is a hoax (Sarathchandra & Haltinner, 2021), and claims that Ukraine operates secret US-funded bioweapons labs close to the Russian border (“Disinfo: Pentagon-Backed Labs Produce Bioweapons in Mariupol,” 2022).
These concerns about the effects of mis/disinformation have stimulated a vibrant research agenda, with academics from a wide variety of disciplines focusing on the subject. This cross-disciplinary interest is arguably a feature that makes misinformation distinctive as a research topic, with researchers in public health, psychology, and computer science, among others, investigating the problem (Huang, 2021). This special issue on Multidisciplinary Approaches to Mis- and Disinformation Studies highlights some of this cross-disciplinary research. The combination of computational approaches, which offer methods and techniques to measure and combat the problem, with social scientific disciplines that seek to understand its causes and consequences is potentially particularly powerful. It offers genuine hope that the information environment might be “cleaned up,” as indeed appears to have happened in some cases (Marchal et al., 2020).
While offering great promise, such a wide combination of disciplines focused on one field inevitably brings challenges as well. Some of these concern how the problem is conceptualized and understood. Some work, often in technical disciplines, seeks to engage with misinformation at the unit and content level: examining whether an individual claim, statement, or news article is factually accurate. Other work, perhaps more oriented toward the social sciences, seeks to understand information ecosystems as a “whole” and looks at the balance of content within them. Neither of these perspectives is inherently wrong: indeed, both are necessary. But further dialogue between micro and macro conceptions of misinformation is needed.
A second vexatious question concerns the impact of misinformation. When, why, and to what extent do false claims have impact? Measuring the impact of any type of communication has challenged scientists for decades (Zaller, 1996), and similar challenges have arisen in the field of misinformation in particular (Bail et al., 2019; Rocha et al., 2021; Vilella et al., 2021). Again, one challenge is that impact studies often focus on the level of individual content, whereas the individuals most affected by misinformation often seem to be those exposed to a steady diet of it, deeply embedded in a misinformation “community” (Enders et al., 2021). Without conclusive proof that misinformation regularly damages people exposed to it, the field will always struggle with a core legitimacy problem.
Related to this, there is the question of regulation and intervention. How should the problem of misinformation be solved? Solutions emerging out of technical disciplines are increasing in scope and scale, offering the promise of automated rebuttals, provenance information, downranking, and eventually the blocking and banning of misleading content. However, such solutions are criticized both as potential restrictions on freedom of speech and as reflective of a generalized “solutionist” approach to technology (Morozov, 2014) that does nothing to address the root causes of misinformation generation and consumption.
It is within the context of these wide-ranging cross-disciplinary debates that the Third Multidisciplinary International Symposium on Disinformation in Open Online Media (MISDOOM 2021) was held. MISDOOM is one of a small but growing number of conferences that explicitly seek to connect the social and technical sciences in one overarching conference format. Such conferences face logistical challenges arising from very different norms about what a conference is (a place to present work in progress or the moment of publication of an academic contribution) and how contributions should be structured. However, they also offer enormous value in exposing different disciplines to each other’s contributions.
The current special issue represents one of the core social science outcomes from the conference, alongside another work focused more on computer science (Bright et al., 2021). Specifically, the aim of this special issue was to highlight the potential of bringing together social theory and computational methods, showcasing the potential of a collaboration between social science and computer science. The selection of the articles in this special issue hence reflects the value of both computational methods and social theory, as well as the combination of the two. We summarize the articles here in the order they appear in the issue, broadly organized from theoretical to technical.
Developing from a focus on social theory, in “Misinformation on Misinformation: Conceptual and Methodological Challenges,” the authors identify six misperceptions about misinformation and propose avenues to overcome their associated research challenges. They highlight that varied definitions of misinformation affect the practical implications of results, that the volume of engagement with misinformation should not be conflated with belief, and that misinformation research must incorporate information environments outside of social media. They remind us that misinformation is a symptom, rather than a cause, of deeper socio-political problems.
“From Facebook to YouTube: The Potential Exposure to COVID-19 Anti-Vaccine Videos on Social Media” investigates the important question of how misinformation travels across various social media platforms and how this may reinforce the misinformation dynamic. It also reveals that the measures taken so far by social media platforms such as Facebook and YouTube to fight misinformation are not sufficient. The authors discuss what these results imply for public health agencies and their strategies for combating misinformation and promoting public health recommendations.
“White Supremacist Conspiracy Theories on YouTube: Exploring Affiliation and Legitimation Strategies in YouTube Comments” highlights how White Supremacists can use technology to spread their messages. Using a social semiotic approach, the authors attempt to understand the social bonds of those who engage with such groups’ content. Their approach sheds light on the types of people who are commenting on extremists’ YouTube videos, and the social bonds that the commenters might share with each other.
The article “Behind Blue Skies: A Multi-Modal Automated Content Analysis of Islamic Extremist Propaganda on Instagram” contributes knowledge about the visual platform Instagram through a research design that analyzes multiple elements of the platform. This multi-method article studies hashtags, visuals, and text published by the German group Generation Islam over a 2-year period. The authors provide nuanced findings on how Instagram’s affordances are leveraged by extremist propagandists.
The article “Unpacking Multimodal Fact-Checking: Features and Engagement of Fact-Checking Videos on Chinese TikTok (Douyin)” engages with the factors that make fact-checking interventions, a critical part of the fight against misinformation, successful and engaging for their audience. The authors identify three types of fact-checking video that appear to be particularly successful on a highly used Chinese social media platform.
Finally, social theories can also be used to develop computer programs. In the article “Developing Misinformation Immunity: How to Become Your Own Fact-Checker in a Human Computer Interaction Environment,” the authors describe how computer chatbots can be designed to inoculate against misinformation. Building such a program requires an understanding of Fallacy Theory, gamification, and the users’ experience with the chatbot. This article highlights how social theory can be entwined with computer design, resulting in a potential solution to misinformation.
In summary, we feel the articles in this special issue offer a wide-ranging overview of just how many different ways the problem of mis/disinformation is currently being tackled within academia, and they highlight the need for more cross-cutting dialogue and discussion to truly address what remains an immense societal challenge.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
