Abstract
The popular assumption that mis- and disinformation are distinguishable from true information based on easy-to-identify content features is challenged in an online context where multiple claims of truthfulness compete for legitimacy. When conventional and alternative narratives both rely on seemingly objective and fact-based truth claims, it is difficult for citizens to separate false from true information. In this setting, we rely on an inductive qualitative analysis of social media and alternative media platforms to explore how mis- and disinformation refer to expertise and objectivity. Our main findings suggest that expertise and objectivity in mis- and disinformation can be legitimized by (1) quoting or involving message-congruent alternative experts; (2) selectively decontextualizing or quoting established experts; (3) contrasting ‘honest’ alternative experts/critical citizens to ‘dishonest’ established experts; (4) emphasizing people-centric expertise, common sense, and critical thinking as foundations of truth-telling; and (5) referring to visual information and lived experiences as direct reflections of reality. The typology aims to inform empirical research on the detection of mis- and disinformation and can be applied in the design of interventions to raise awareness about how false information signals legitimacy.
Although misinformation is commonly defined as false information that lacks relevant expertise or empirical evidence (e.g., Vraga and Bode, 2020), false information often refers to experts, scientists, doctors, or other alleged authoritative sources that are used to prove the legitimacy of deceptive stories (e.g., Ylä-Anttila, 2018). In a competitive online arena where counter-factual narratives and established information battle for legitimacy and attention (Waisbord, 2018), signaling authenticity through decontextualized or selectively quoted experts may strengthen the credibility of mis- and disinformation. Yet, to date, we know markedly little about how mis- and disinformation refer to expert sources or other claims of objectivity to create an illusion of truth. Against this backdrop, this paper relies on a qualitative content analysis of alternative media websites and social media platforms flagged by fact-checkers as containing high levels of mis- and disinformation.
By referring to expertise, mis- and disinformation can exploit the credibility of cues commonly used in established journalism, thereby offering a credible and seemingly high-quality alternative to established information. Yet, references to expertise and empirical evidence are often deliberately decontextualized or selectively quoted in disinformation (Hameleers and Yekta, 2023), suggesting that they may be used to deceive audiences. By creating deceptive credibility cues, disinformation may create confusion among news users, who no longer know which sources to believe as both conventional and alternative media refer to expertise when making contrasting claims about reality (e.g., Ylä-Anttila, 2018). Moreover, it may be extremely difficult for (media literacy) interventions to warn people about suspicious content by simply pointing to a lack of expertise. For these reasons, it is crucial to gain a more refined understanding of how expertise and objectivity are referenced and (de)contextualized on alternative media platforms and on social media platforms.
To this end, we conducted an extensive qualitative content analysis of four prominent alternative news websites in the Netherlands, which have been flagged as suspicious by various independent fact-checking organizations. In addition, we relied on a database containing fact-checks from two independent Dutch fact-checkers, Nieuwscheckers and AFP Nederland, and selected all false narratives circulating on social media that they debunked between June 2022 and June 2023 (N = 78). Additionally, as alternative hyper-partisan platforms themselves may function as evidence for mis- or disinformation spread via social media, we also explore how alternative media platforms are used as legitimate expert sources for social media mis- and disinformation. In this way, our inductive analysis aims to capture a substantial part of the alternative media information landscape that may play a central role in the platforming and legitimization of mis- and disinformation.
Enhancing the credibility of mis- and disinformation through objectivity claims
The central aim of this paper is to map how false information relies on expertise and other objectivity claims to enhance the perceived credibility of mis- and disinformation. Disinformation can be defined as the intentional spread of false information (e.g., Bennett and Livingston, 2018; Freelon and Wells, 2020). The intention to deceive is thus central to the definition of disinformation (Chadwick and Stanyer, 2022; Hameleers, 2023), which distinguishes it from the honest mistakes central to misinformation (e.g., Wardle, 2017). As such, misinformation can be distinguished from disinformation by the criterion of deliberateness: False information can be classified as disinformation when actors knowingly spread false information in a goal-directed manner (i.e., to achieve political profit or harm political opponents).
In this paper, we are conceptually interested in how false information legitimizes truth claims to appear credible and trustworthy. Although intentionality cannot be inferred based on the content of statements alone, our analysis is focused on falsehoods that are associated with intentions to distort reality and strategically disinform targeted audiences. Based on an assessment of the context and sources of falsehoods (Hameleers, 2023), which is also offered by fact-checking information, we assume that the falsehoods analyzed in this paper are likely to be disseminated intentionally. For example, when conventional expert sources are labeled as ‘fake news’ paired with the use of a manipulated image by a source with a clear populist anti-establishment perspective, we assume that the ideological alignment with the narrative and the act of manipulating an image are driven by the intention to deceive.
Yet, it is important to clarify that the association of falsehoods with malicious motives is surrounded by a degree of uncertainty: Even the most dubious claims may be disseminated without the intention to deceive recipients, while malicious actors pushing disinformation narratives are motivated to keep their bad intentions hidden from recipients. Moreover, false information may align with political or ideological motivations without actors being aware of its inaccurate nature. Especially when false information strongly resonates with the ideology or convictions of senders, they may truly believe that the false information they disseminate represents accurate worldviews. For this reason, although we focus on false information that resonates with deceptive practices and ideological or strategic motives, we use mis- and disinformation as guiding concepts as we cannot in all cases determine actors’ awareness of falsehoods. However, we do focus on false information that likely used expert references deceptively and strategically to signal credibility and legitimacy to recipients.
Generally, the credibility of mis- or disinformation can be understood as the extent to which the message is perceived as fair, unbiased, accurate, telling the whole story, and trustworthy (Meyer, 1988). Extant literature on the credibility and legitimizing tools of propaganda has offered important insights into how mis- or disinformation can attempt to enhance the credibility of false information (Chaiken and Maheswaran, 1994; Hovland and Weiss, 1951). Specifically, different techniques may be used to signal the credibility of propaganda to receivers, such as name-calling, transferring authorities, generalizations, involving virtues of honesty and objectivity, falsely suggesting causal links, or misrepresenting facts (i.e., omitting information or promoting alternative truths) (e.g., Conserva, 2003).
In this paper, we consider the various propaganda techniques identified in extant literature as sensitizing concepts to give direction to the analysis of credibility-enhancing tools in mis- and disinformation. Given that mis- or disinformation constructs knowledge and truth claims that compete for legitimacy and attention alongside established accounts of reality (Waisbord, 2018), we are specifically interested in how constructions of objectivity and expertise are invoked to legitimize false statements, thereby signaling the various components of credibility as suggested by Meyer (1988). In line with extant literature on the legitimacy of counter-factual claims (e.g., Hameleers and Yekta, 2023; Heft et al., 2019; Ylä-Anttila, 2018), we aim to establish how mis- and disinformation may rely on various tools and arguments to claim expertise and objectivity contrasted to the alleged deception of established sources.
For our endeavor, it is important to problematize the role of expertise and objectivity in mis- or disinformation. Traditionally, expertise refers to a social position that grants people prominence in matters of public interest, opinion, and society (Evans, 2008). Expertise is a quality that is obtained by or attributed to persons based on their field-specific knowledge, credibility, and trustworthiness (Sprain and Reinig, 2018). Alternatively, lay persons or people with lived experience can also be regarded as sources of expertise. These actors gain knowledge and credibility through direct involvement and experience, bringing in common sense, opinions, and values as valid sources of expertise (Brown, 2009). In this paper, we approach expertise as a multifaceted construct, including references to authoritative figures with field-specific empirical knowledge and to people with lived experience of the issue at hand.
Objectivity has traditionally been defined as striving toward the highest level of correspondence possible between something ‘out there’ in reality and its description (Heidegger, 1943). Objectivity is, however, not something that exists outside its construction and context. Hence, in journalism, objectivity is also seen as a performance (Boudana, 2011) or a strategic ritual (Tuchman, 1972) to create an image of trustworthiness among audiences. In this setting, journalists’ daily routines may involve different practices, such as fact-checking, quoting experts, or reviewing statistics, to claim objective reporting and signal trustworthiness to their audience. The ideal of trustworthy reporting can thus be achieved by adhering closely to the standards and routines that maximize the correspondence between reality and its journalistic description, for example, by relying on independent and original sources who are directly involved in the described reality (Downie and Schudson, 2009).
As literature on the delegitimizing role of disinformation labels suggests (Egelhofer and Lecheler, 2019), references to experts can also be used to accuse the opposed camp of relying on ‘fake’ experts, or to emphasize that the other side is biased by selectively quoting confirming expert sources. In addition, as research on propaganda has documented, inappropriate sources of expertise can be used to legitimize one-sided claims (Conserva, 2003). Thus, expertise can be used in various ways, such as making counter-factual statements seem objective or delegitimizing opposing experts.
Although extant research on propaganda techniques has revealed various rhetorical tools that can be used to signal legitimacy and credibility (Conserva, 2003), we currently lack an understanding of how these techniques may be transferred to modern applications of mis- or disinformation through digital means. To advance this field, we use an in-depth qualitative content analysis of mis- and disinformation narratives in the Netherlands, which aims to answer the following central research question: How are references to expertise and objectivity used to signal credibility in online mis- and disinformation messages? (RQ1).
Beyond our main focus on expertise and objectivity, we expect that there are various ways of legitimizing counter-factual truth claims in mis- or disinformation. For example, mis- or disinformation may copy journalistic tools of mainstream media to make claims about objectivity beyond expertise, signal credibility by (selectively) quoting empirical evidence, or construct seemingly proven causal claims based on generalized conclusions (see also research on propaganda techniques, Conserva, 2003). In our analyses, we will stay open to such alternative forms of claiming objectivity.
Social media and alternative media as platforms of mis- and disinformation
Mis- and disinformation often reach people via social or online media platforms (e.g., Waisbord, 2018). Historically, mainstream media have also played a role in disseminating mis- and disinformation. For example, in media coverage on armed conflicts and wars, established media reporting is not neutral, and its coverage may demonstrate a clear bias favoring one side of the conflict, or even be based on mis- and disinformation (e.g., Bell, 1998). Mainstream media may also play a role by legitimizing disinformation narratives that originate from online settings or fringe platforms (e.g., Lukito et al., 2020; Tsfati et al., 2020). As argued by Tsfati et al. (2020), although misinformation exposure via digital means may be a relatively rare event (also see e.g., Acerbi et al., 2022), people may learn from mis- and disinformation narratives via the mainstream media that report on them (Al-Rawi, 2019). Mainstream media can thus play different roles in spreading mis- and disinformation. First, their coverage of certain issues may contain false information intentionally or unintentionally when covering contested issues such as immigration (Philo et al., 2013). Second, their attention to false information may amplify disinformation narratives originating from other platforms. Thus, it is important to recognize that mis- and disinformation existed before, and independently of, the rise of social media, given that the intentional dissemination of falsehoods can also be associated with established (offline) media.
Yet, we focus on social media for different reasons. First, communication via online platforms is less directly governed by traditional gatekeeping routines and professional norms of journalism, which allegedly strive toward balance, objectivity, and transparency. This means that online mis- and disinformation can reach audiences without the interference of fact-checking before information is published and disseminated (e.g., Tucker et al., 2018). Second, online media allow for the simultaneous presentation of established and counter-factual truth claims that compete for legitimacy and attention (Waisbord, 2018). Online, citizens can select the truth claims that best resonate with their own beliefs and identities, whereas the abundance of information may overload their capacity or willingness to critically assess the veracity of the encountered information. Third, online media are part of a participatory logic, where official news sources, alternative news sources, social media platforms, and ordinary citizens are all central players in the amplification of false information (e.g., Starbird, 2019). Fourth, the low threshold for disseminating false information via social media combined with the affordances of digital editing and Artificial Intelligence allow malign actors to create seemingly authentic news with relatively low effort (e.g., Westerlund, 2019).
Considering that our study aims to map the diversity of expert references in mis- or disinformation as exhaustively as possible, we also focus on online spaces most likely to offer a platform for mis- and disinformation. Because we are theoretically interested in alternative media platforms that are likely to spread mis- and disinformation, we focus on alternative media with a clear hyper-partisan or anti-establishment position (also see Heft et al., 2019). Considering that mis- or disinformation also often consists of a delegitimizing narrative contrasting conventional knowledge (e.g., Bennett and Livingston, 2018), we regard the sub-category of hyper-partisan counter-media as a favorable habitat for the dissemination of mis- or disinformation.
Regarding the signaling of credibility through expert references and objectivity, we tentatively expect that alternative media platforms and social media play different roles based on their affordances and different role expectations regarding credibility. Alternative media aim to offer a credible alternative to established news platforms, which could indicate that they rely on the same epistemologies and references to knowledge as traditional media to delegitimize conventional knowledge (e.g., Heft et al., 2019). The communication by ordinary citizens, politicians, and other actors via ungated social media platforms, in contrast, may be less governed by the mimicking of journalistic quality and truth-telling, leading to a more people-centric or populist construction of truth claims. This resonates with the idea of epistemological populism coined by Saurette and Gunster (2011). On social media, expertise may be more likely constructed based on lived experience and common sense (Brown, 2009) compared to the more authoritative truth claims spread by alternative media. We, therefore, raise the following research question on the comparison between expert references disseminated via alternative hyper-partisan media platforms versus social media that are less governed by journalistic values and the signaling or performance of objectivity:
RQ2: To what extent and how are references to expertise similar or different on alternative hyper-partisan media platforms versus social media?
Method
Data collection and sample
As a starting point for data collection, we used an open-source overview of alternative media platforms in the Netherlands that are flagged as containing suspicious content by different independent fact-checkers. 1 Often, the narratives spread by these platforms can be regarded as disinformation instead of misinformation as they were disseminated in informational contexts where the dissemination of falsehoods had electoral, financial, or other strategic advantages (i.e., creating an oppositional stance to established institutions by delegitimizing them with false information in a fact-free and uncivil manner, also see Hameleers and Yekta, 2023). Although we cannot fully exclude the possibility that some falsehoods on these platforms were disseminated unintentionally, the politicized nature of many of the false claims and the context in which they were communicated (i.e., an anti-establishment source spreading false information about the establishment) suggests a high likelihood of intentional deception. Yet, as false information can be used to advance strategic goals without actors being aware of the falsity of their strategic communication, we refrain from labelling all content as disinformation. What matters is that the contexts in which expertise is used to signal credibility and legitimacy correspond with a high likelihood that falsehoods are driven by the goal to persuade recipients about the validity and objectivity of truth claims.
We double-checked the list of included sources with the databases of different fact-checking platforms to ensure that we only selected platforms with a high likelihood to contain disinformation. The inclusion criteria for alternative media were based on existing overviews of platforms disseminating counter-factual knowledge (e.g., Heft et al., 2019). Specifically, platforms had to meet the following criteria to be included: They (a) claimed to offer an alternative to conventional knowledge and established media; (b) often delegitimized established information sources and conventional experts; (c) were less professional in their organization and platforming than established news sources; (d) contained hyper-partisan references.
Based on these inclusion criteria, the following platforms were included in the analyses: 9fornews, Niburu.co, Café Weltschmerz, and blckbx.tv. From each platform, 25 mis- or disinformation narratives were selected in the period covering June 2022-June 2023 (a full year). A narrative could consist of more than just one statement or claim: We considered a mis- or disinformation narrative as a wider false storyline in which counter-factual claims on reality were legitimized, for example, by quoting different alternative expert sources. The 25 narratives were included based on the principle of maximum variation: Narratives across the entire period and across different issues were included to make sure that the sample was as diverse as possible regarding the targeted development of themes related to expertise. For all articles included, we checked whether the claim could be considered disinformation by consulting existing fact-checks and/or empirical evidence.
Although the narratives that we included can also be disseminated without the intention to deceive and mislead, we looked for indicators suggesting that the narratives could be used deceptively. To determine whether the false information in our sample could be intentional and goal-directed, we used the following criteria: (1) the statement, storyline, or article is not based on relevant expertise, scientific consensus, or a transparent interpretation of contextualized empirical evidence (also see Vraga and Bode, 2020) and (2) the actor or source can profit from deceiving recipients, as there are indicators of deceptive intent (also see e.g., Hameleers, 2023). Such indicators include: Enhancing cynicism related to established information, fueling (ideological) polarization, and attacking opposed actors in a fact-free manner (e.g., Freelon and Wells, 2020). All mis- or disinformation narratives that were included in the analyses met the first criterion of illegitimate, inaccurate, or false references to expertise. Yet, this does not resolve the issue of intentionality, as lacking or inaccurate expert references may also indicate misinformation (Vraga and Bode, 2020). Therefore, all narratives that were included in the analysis had to also include at least one indicator of goal-directed deception, such as the presence of manipulated or fabricated references to experts and evidence, a hyper-partisan bias that favored the actor’s ideological or strategic perspective, or the fact-free delegitimization of opposed perspectives. Although we do not equate these features with disinformation automatically, we suggest that these indicators increase the likelihood of intentional deception.
Related to this, many mis- or disinformation narratives did not rely on authentic information but used manipulated graphs and decontextualized images or statistics. Although untrue information may be spread without the intention to deceive, decontextualizing (visual) information when the original context is well-known, or fabricating evidence, cannot be regarded as unintentional communicative acts: When real information is widely available to communicators but manipulated, altered, or left out, the likelihood that information was spread with the intention to inform honestly and accurately is low. Many of the false and debunked narratives we included in our sample contained fabricated evidence (over 25%), relied on an illegitimate use of experts (selectively quoting experts, referring to experts who do not have relevant knowledge on the topic) (over 50%), or refuted well-established scientific consensus (over 50%). At least one of these indicators had to be present for the identification of mis- or disinformation.
For social media content, we used databases containing fact-checks of two independent fact-checking platforms, Nieuwscheckers and AFP Nederland. We selected all narratives that were circulating online and platformed on social media between June 2022 and June 2023. The inclusion strategy aimed for maximum variation regarding topics (i.e., COVID-19, health-related misinformation, immigration, climate change) and sources (i.e., politicians, ordinary citizens, celebrities, and opinion leaders). The final sample size was driven by theoretical saturation: After analyzing 60 narratives spread in the period listed above, we included 18 narratives spread through the rest of the year 2022 to assess whether the additional mis- and disinformation narratives added new insights into the established themes and their variety.
Data analysis
The articles and posts were analyzed using the Grounded Theory approach (Charmaz, 2006). Although we did not follow all distinct procedures of this approach, including line-by-line coding and the aim of theory formation, we used the three different steps of data analysis to generate themes and dimensions (Charmaz, 2006). Starting with open coding, we labelled all segments of the data that related to constructions of expertise and objectivity. Based on the research questions, references to expertise and the use of sources as evidence were used as sensitizing concepts to guide and structure the coding of the raw data.
During focused coding, we aimed to achieve a higher level of abstraction and analytical depth related to the constructs of expertise and the signaling of authoritative sources of knowledge in alternative and social media. Concretely, we merged similar open codes and re-assigned labels with a higher level of abstraction and grouped codes that described variety on the key dimensions that emerged (e.g., diversity in how alternative doctors were quoted). Finally, during axial coding, we explored the analytical connections between the themes and dimensions.
Quality checks
To strengthen traceability, transparency, and trustworthiness, we conducted different quality checks. Most important, two researchers were involved in data collection and analysis. Through peer debriefing, we discussed the data inclusion and coding procedures in various sessions. Both researchers discussed the translation of research questions into sensitizing concepts, and only upon agreement, the open coding started. During the first stages, open coding was conducted together. Only after coding procedures were calibrated, the two researchers independently coded a sub-sample of articles and posts. At various stages of the coding process, the researchers compared the assigned codes for consistency and fit with the raw data. The first steps of focused coding and axial coding were done together to minimize interpretation differences. Nevertheless, the analyses are not free of bias or interpretation, which is also a feature of qualitative text analysis. However, by documenting the process of coding and data reduction, the steps taken in data collection and analysis are traceable.
Results
Expert references and objectivity on alternative media platforms
To answer the first research question on references to expertise and objectivity used to signal trustworthiness, we report on the main themes resulting from the stepwise coding procedure of the alternative media platforms. We will discuss the references to expertise used in social media narratives when we answer our second research question. Within each theme, we discuss the variety of ways in which expert references and claims of objectivity were made and how they were contrasted with references to objectivity in the mainstream media. Although we refer to the (relative) dominance of the themes in our sample, we refrain from offering quantitative indicators of prevalence as the sample is non-representative and a mis- or disinformation narrative could contain more than one theme. More examples can be found in the Supplemental Online Appendix.
Quoting or involving message-congruent alternative experts
Contrary to the assumption that mis- and disinformation may be devoid of references to experts, doctors, and scientists as sources of objective information, most mis- or disinformation disseminated via the alternative media platforms 9fornews, blckbx.tv, and Café Weltschmerz directly quoted or referred to expert sources. The most frequently mentioned expert sources were ‘alternative’ experts, that is, non-mainstream experts or supposedly silenced scientists. Especially in the context of Covid-19, the alternative media channels quoted alternative (health) experts to emphasize their anti-establishment narratives related to Covid-19 and vaccines. The following quote from 9fornews illustrates how alternative doctors were selectively quoted as evidence for the perspective that Covid-19 vaccines can kill: “Something new is coming, warns immunologist Jessica Rose. She bases herself on Big-pharma people and people involved in the Covid-19 response. She argues that the threat is real.” Besides mentioning the authority of experts, for example, by using statements such as “the influential scientist” or “world-famous and widely respected professionals”, direct quotes of doctors and experts were used in articles to legitimize authoritative knowledge.
Many statements did not refer to individual doctors, experts, or scientists. Rather, alternative media mentioned the ‘consensus’ and ‘widespread support’ for alternative truths among different experts and doctors. An article on a European summit on the Covid-19 response published by 9fornews, for example, emphasized the widespread expert-based support for the mis- or disinformation narrative that Covid-19 vaccines are extremely dangerous. The article selectively highlights the perspectives of professors and experts supporting this narrative: “Among others, Professor Didier Raoult and Pierre Kory will talk about the cure of hydroxychloroquine and ivermectin. Professor Arne Buckhardt will talk about the damage the vaccines cause to the heart and lungs. Pathologist Ryan Cole explains how the vaccines cause cancers.”
Although most references to alternative doctors, experts, and scientists were made in the context of Covid-19, authoritative sources of expertise were also quoted in response to other issues, such as the conspiracies surrounding 9/11. Yet, expertise played a much less substantial role in such ideologically underpinned conspiracies as compared to health-related mis- or disinformation on Covid-19. Across issues, references to alternative sources of authoritative expertise were not contextualized. Hence, although the names and sometimes the affiliations of experts were mentioned, there was no information on why these experts had relevant field-specific expertise, or in what context their statements about issues such as Covid-19 were originally made. This can be exemplified by an article by Blckbx.tv that characterizes the prime minister of the Netherlands as a psychopath. This label was legitimized by quoting a lawyer as an expert on this issue: “Lawyer and TBS-expert Kob Knoester used the show to claim that Mark Rutte [Dutch Prime Minister] is a psychopath: I can explain this very well. This is confirmed by scientific evidence on non-criminal psychopaths that you often see in high-ranked positions.” Although a lawyer and expert on criminal offences could be a relevant expert in some contexts, this source had no expertise on mental illnesses.
In response to RQ1, our findings suggest that references to alternative experts are made selectively to legitimize anti-establishment perspectives. Expert sources were quoted out-of-context, or used to legitimize anti-establishment narratives, even if the expert source had no specific knowledge on the discussed issue.
Selective quoting of established expert sources
Alternative media platforms also selectively quoted or referred to experts and authoritative sources of knowledge that were part of the established order. As an example, one article published by Blckbx.tv referred to the established Dutch data protection authority (AP) to legitimize the anti-establishment position that independent scientific evidence is consistently and strategically avoided in the discussion about Covid-19: “Privacy excuse of government in research on excess mortality was a lie, claims privacy watchdog AP. This is just another example of how the government is systematically hindering independent scientific research.” In line with this theme, statements made by an authoritative source of expertise were decontextualized and used to create the deceptive statement that independent research is deliberately hindered by the Dutch government.
Another variation on this theme is the combination or integration of different authoritative, established, and legitimate expert references with alternative sources to suggest an alternative counter-factual interpretation. One prominent application found across platforms is connected to the mis- or disinformation narrative that Covid-19 vaccines cause harm by killing young people. This narrative was often reconstructed by selectively referring to legitimate experts also cited by mainstream media. For example, Blckbx.tv shared factual and verified information on young people dying because of lung infections: “We are witnessing a large amount of bacterial lung infections as a complication of the flu, says Van Der Voort, head of the ICU of University Medical Center Groningen (UMCG).” This fact-based and expert-driven narrative was then followed up by suggestive interpretations and alternative expert references claiming that Covid-19 vaccination is the cause of the increasing infections: “Fact is that more and more doctors and scientists have warned that mRNA-COVID-vaccines can harm the immune system. This could explain why young people are dying from the harmless flu.”
‘Our truths versus their lies’ – contrasting ‘honest’ alternative experts and critical citizens to ‘dishonest’ established experts
References to expertise were often not constructed in a vacuum but emphasized the discrepancy between the inaccurate or even deliberately untrue interpretations of mainstream experts versus the alternative experts confirming conspiracies or anti-establishment narratives. The antagonistic construction of expertise was prevalent in most of the expert references on the alternative media platforms. Niburo.co often framed the boundary between the lies and deception of mainstream experts versus the critical mindset of ‘analytical’ ordinary people as a struggle between good and evil. In addition to attacking the mainstream media, Niburo.co even delegitimized other alternative media platforms, including Blckbx.tv. These platforms were accused of being afraid and driven by financial interests, whereas Niburo.co was allegedly focused on the truth: “They [the other alternative media platforms] are all afraid. But if you cover the truth and remain honest, you should not be afraid of anything. They can attack you. They can name and shame you and take you out of the air. But so, what?” Both Café Weltschmerz and Niburo.co often constructed a boundary between the deception of established fact-checkers as experts versus an in-group of ordinary people who can see the truth by using common sense and asking questions. Thus, fact-checkers were often delegitimized and contrasted to alternative “truth” constructions that were based on common sense and good will.
Another way to emphasize the clash between established sources of expertise and alternative media platforms was to deny established sources’ expertise and delegitimize their objectivity or fact-based rationale. Café Weltschmerz, for example, framed ‘real’ experts in opposition to established ‘climate idealists’ to legitimize a climate-skeptic perspective: “Climate idealists sound the alarm: Lake Garda will be in great danger according to them. But lake experts are annoyed: the reality looks different.”
Emphasizing people-centric expertise, common sense, and critical thinking as foundations of truth-telling
Expertise also revolved around people’s involvement in or practical experience with issues. Blckbx.tv, for example, referred to an investment banker as an expert on digital currencies. This expert was quoted to legitimize the mis- or disinformation narrative that elites are involved in freezing bank accounts of people involved in anti-establishment protests to silence them: “Investment banker Austin Fitts speaks from the blckbx.tv studio and says that our communities will become digital concentration camps. Cash will be illegal and replaced by digital bank currency, which allows them to control us.”
The platforms Niburo.co and Café Weltschmerz also referred to common sense and having a critical mindset as important indicators of expertise. Asking the right questions and being critical of elites was seen as a more reliable source of expertise than the mainstream’s interpretations and allegedly deceptive scientific evidence. Niburo.co, for example, devoted an article to the legitimacy of common sense and asking critical questions. These critical questions, then, could reveal the truth that is deliberately kept hidden from the people: “Thinking for yourself is perhaps normal for readers of this platform, but the majority of the world population refuses to do so […]. People are not even asking critical questions regarding the theory on global warming. Because of the MSM [mainstream media] people blindly accept it as a fact.” Generally, this platform often contrasted the establishment’s alleged lies and deception to the accurate perspective on reality that could be seen and revealed only by “people who are able to be critical and think outside of the box.”
In line with the idea that established forms of scientific expertise are inaccurate or deceptive, Café Weltschmerz also emphasized that field-specific expertise is not required to see what is really going on – instead, it is about asking the elite critical questions and not taking conventional wisdom for granted: “I, someone without a medical background, already knew that lockdowns would have severe consequences back in 2020, but I was referred to as crazy and received all hate from experts.” The theme highlighting that ‘critical’ thinkers can find the truth even without field-specific expertise forms an alternative construction of objectivity and truthfulness compared to references to alternative experts that confirm anti-establishment narratives. Deviating even more from conventional journalism, Niburo.co avoided references to experts altogether and prioritized common sense, seeing connections between events (i.e., conspiracist reasoning), direct observations, and a critical attitude toward established truth claims as means to reveal the truth.
Expert references and objectivity in online mis- or disinformation on social media
For our second research question, we asked to what extent references to expertise on social media are comparable to those on alternative media platforms. Our findings show that despite some overlap in how expertise was referenced, there were also considerable differences between the platform types. Notably, on social media, visual mis- or disinformation was very prominent. There were also fewer references to authoritative forms of expertise when legitimizing alternative truth claims on social media. Lived experiences of ordinary people were more central in the truth claims disseminated via social media. We will elaborate on these findings below.
Absence of references to expert sources
In contrast to alternative media platforms, experts, doctors, and scientists were rarely quoted or referenced in social media posts. Furthermore, when experts were mentioned, they were often not contextualized. For instance, in one narrative, a cardiologist named Dr Thomas Levy supposedly uncovered that the Federal Aviation Administration had relaxed its medical examination standards, permitting pilots with heart damage to continue flying. Although his expertise may have been relevant in this context, this connection remained unexplained, and the process behind his discovery was unclear. Moreover, our findings reveal that other social media users and alternative media platforms were often referred to as cues of expertise and trustworthiness to legitimize counter-factual claims. Thus, expertise was also signaled by invoking social cues or quoting the alternative truth claims spread by alternative media.
Referring to visual information and lived experiences as direct reflections of reality
In social media posts, deceptive claims were often supported by visual material. These visuals may enable the audience to observe the proof for the false claim firsthand and draw their own conclusions (also see Weikmann and Lecheler, 2023). Ordinary people’s visualized experiences served as expert cues in the sense that they depict lived experiences in a credible manner. Sometimes, the author of the message created these visuals, such as a screenshot from a weather application, to prove that the climate is allegedly not changing. Most of the time, however, the message contained manipulated or decontextualized images or videos created by others and re-shared on platforms like YouTube and TikTok. Traditional expert references were often absent in such truth claims. Thus, de-contextualized visuals may be used to signal authenticity and truth value without the need to legitimize mis- or disinformation with alternative or out-of-context expertise.
‘Our truths versus their lies’ – contrasting ‘honest’ critical citizens to ‘dishonest’ established experts
To underscore alternative truth claims within online messages, we observed that users often argued that the mainstream media or other elites are secretly holding back information from the public. For instance, one user shared a manipulated message about a fire in the Pentagon and argued that the “MSM [mainstream media] remains silent!” This pattern of delegitimizing mainstream media and experts, and pitting them against a public that has the right to know the truth, closely mirrors our findings on truth constructions spread by alternative media platforms. For both social media and alternative platforms, there is a recurring narrative that focuses on concealed truths whilst challenging the credibility of established sources of information.
Emphasizing people-centric expertise, common sense, and critical thinking as foundations of truth-telling
On social media, expertise was often conveyed by references to direct observations of witnesses or by relying on common sense. This theme was much more saturated on social media as compared to the alternative media sample. These witnesses were external individuals confirming a truth claim, such as “Neighbors in Beneden-Leeuwen [neighborhood in the Netherlands] confirmed this afternoon that they never saw this graffiti”. However, at times, the communicator of the message also acted as a witness, as seen in statements like “I saw a post about this on Telegram and I thought it was a joke. I went to look on Amazon myself and it really is there”. Additionally, legitimacy was established by appealing to common sense, for instance, by arguing against coincidences, as can be illustrated with the following quote: “Do we really have to believe that it is completely coincidental that (…)”.
In response to RQ2, then, mis- and disinformation narratives are constructed differently on social media compared to alternative media platforms. Alternative media platforms more closely mimic conventional journalistic tools and the legitimization of claims through expert references, whereas on social media, legitimacy mostly stems from visual proof and the experiences or emotions of ordinary people.
Discussion
Our main findings indicate that most alternative media platforms refer to expertise in the majority of mis- or disinformation narratives across different issues. Thus, contradicting the popular assumption that mis- and disinformation circumvent experts and empirical evidence, false information is mostly constructed by referring to alternative experts to signal the legitimacy of counter-factual narratives. Yet, the ways through which expertise is constructed differ across and within platforms. Although expertise is consistently used to legitimize counter-factual narratives that attack the established order (also see Heft et al., 2019; Ylä-Anttila, 2018), constructing expertise does not exclusively consist of references to alternative doctors, scientists, or other experts. More specifically, the alternative media platforms Niburo.co and Café Weltschmerz also referred to the common sense and critical thinking of ordinary citizens as valid sources of expertise, which were contrasted to the lies of established experts. This construction of valid and reliable expertise through critical inquiry and common sense connects to the concept of epistemic populism coined by Saurette and Gunster (2011): different from counter-factual knowledge (e.g., Ylä-Anttila, 2018), ordinary people’s viewpoints were seen as valid, whereas expertise following traditional journalistic routines was attacked and disqualified.
The constructions of (un)truths on social media follow a similar logic of epistemic populism. The main difference between alternative media platforms and social media discourse is that expertise and objectivity on social media were legitimized by referring to visual proof or the direct lived experiences of ordinary citizens. Thus, closeness to reality instead of authoritative expertise was emphasized as central to truth claims. This difference between alternative media and social media may be explained by the different affordances and values underlying knowledge dissemination across platforms. On social media, everyone with an internet connection can participate in the construction of truth claims. In this setting, all social media users can become an expert by sharing their lived experiences, conducting their own research, or sharing visual materials.
Alternative media, in contrast, are more organized and professionalized platforms that adhere more closely to conventional formats of news reporting and journalism – which include signaling credibility through expert references, empirical evidence, and less emotionalized coverage (e.g., Hameleers and Yekta, 2023). Alternative media platforms may, more than social media, strategically present themselves as an objective and fact-based alternative to established news formats that allegedly deceive the people by presenting them with a fake or fabricated image of reality. Therefore, they construct truth claims by referencing alternative experts, scientists, statistics, or other fact-based sources of information to legitimize their alternative epistemology that challenges the status quo and established journalism (Heft et al., 2019).
As a major implication of our findings, the discourse used to legitimize counter-factual narratives in mis- or disinformation may largely depend on the communicative context. While the logic of social media may empower a more visual and citizen-driven discourse of truth-telling that offers ‘proof’ for deceptive claims through a seemingly direct representation of reality (Weikmann and Lecheler, 2023), alternative and hyper-partisan media platforms may compete more directly for truthfulness with legacy media and journalism by using the same tools to signal credibility. The role perceptions of different platforms and channels of mis- or disinformation may thus be aligned with their central mode of truth-telling and of signaling truthfulness to their audiences.
The inductive findings can be transferred to a tentative typology of claiming expertise and legitimacy in mis- or disinformation narratives. Specifically, across different online contexts, we suggest that expertise and objectivity in mis- or disinformation can be legitimized by (1) quoting or involving message-congruent alternative experts; (2) selectively decontextualizing or quoting established experts; (3) contrasting ‘honest’ alternative experts and critical citizens to ‘dishonest’ established experts; (4) emphasizing people-centric expertise, common sense and critical thinking as foundations of truth-telling; and (5) referring to visual information and lived experiences as direct reflections of reality. Although the first three types mainly relate to the discourses found on alternative media platforms, the final two types of claiming legitimate expertise mainly apply to social media discourse.
Our findings also have important implications for policy and practice. In contrast to assumptions that we can clearly differentiate between mis- or disinformation and honest information based on identifiable ‘fingerprints’ (Carrasco-Farré, 2022), the distinction between true and false information is difficult to determine based on content features alone. Given that expert references, empirical evidence, and claims of legitimacy may follow similar patterns in mis- or disinformation and accurate information, media literacy and debunking interventions need to respond to the complex nature of legitimacy claims in mis- or disinformation. For example, it may be insufficient to warn people about missing expert references and empirical evidence in misinformation. Based on our findings, the detection of mis- or disinformation should take into account the ways in which mis- or disinformation may strategically mimic established information, and interventions should warn people about how expertise may be decontextualized or based on false inference. Generally, looking beyond fixed indicators of deception, news users need to be made more aware of how legitimacy claims can be used in honest versus deceptive contexts.
Our study has some limitations. First, we relied on a small sample of mis- and disinformation narratives that was biased by our selection criteria and entry points (i.e., existing databases and fact-checking resources). Although our aim was not to arrive at a representative overview of mis- and disinformation, we suggest that future research rely on more databases or even randomly selected content in quantitative endeavors. Second, we focus on only one country, which is known to be relatively resilient to mis- or disinformation due to high levels of media trust and low levels of polarization (e.g., Humprecht et al., 2020). Although we believe that the concepts of counter-factual narratives, alternative experts, and epistemic populism are general enough to transfer to other settings, we leave it up to future research to assess the transferability of the typology to other contexts, including most-different countries and (social) media platforms. Finally, we suggest that future research scale up data collection through (automated) content analysis to arrive at an estimate of how prevalent various narratives of truth-telling are in mis- or disinformation versus information that is not classified as false. This would allow for a better assessment of how dominant various forms of expertise are in mis- or disinformation as compared to their use in more reliable forms of information.
Supplemental Material
Supplemental Material for Look at what the real facts and experts say! The use of expert references and objectivity claims in disinformation: A qualitative exploration and typology by Michael Hameleers and Emma van der Goot in Journalism.
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
