Abstract
This study integrates criminological social learning and psychological explanations of individual factors and mechanisms for science denial to offer an individual-level analysis of ‘alternative lifestyle’ subcultural groups in cyberspace in order to understand the assimilation, success and proliferation of potentially dangerous health-related misinformation. Through a rigorous passive online ethnography of two relevant self-identifying ‘alternative lifestyle’ Italian- and English-speaking online communities observed over the initial stages of the COVID-19 pandemic, we observed the unfolding of online narratives and behavioural intentions of criminological and psychological interest. We identified in our data both individual factors and mechanisms for science denial and clues to social learning, and we showed how they interrelate. Furthermore, by looking at the linguistic and visual resources used to shape how participants think through social learning mechanisms, we identified four main narrative frames: informative; oppositional; empathetic; and agency and spirituality. The findings of this study provide a more comprehensive understanding of the reasons for and mechanisms behind medical misinformation online and suggest ways to mitigate the related harms.
Introduction
Health-related myths, ideas and practices (including fraudulent, harmful, or at best useless, pharmaceutical and therapeutic approaches) developed outside science-based medicine have boomed in recent years, especially owing to the commercialization of cyberspace. The latter has played a fundamental role in the rise of false ‘health experts’ and in the creation of filter bubbles and echo chambers (Cinelli et al., 2020) – often in the form of online communities – that have contributed to the formation of highly polarized debates on health, the propagation of health myths, and the promotion and selling of fake cures advertised as safe and effective. It has already been argued that the creation and diffusion of health-related misinformation is of criminological concern because such misinformation can cause severe social (and also criminal) harms (Horsburgh and Barron, 2019; Lavorgna and Di Ronco, 2017; Massa, 2019).
In early 2020, the world found itself facing a new challenge with the outbreak of a novel coronavirus disease – COVID-19 – spreading across countries, to the point that the outbreak was recognized as a pandemic by the World Health Organization (WHO) on 11 March 2020. The virus causes a severe acute respiratory syndrome; the mortality rate directly linked to the virus is estimated to be relatively low, but the indirect impact on public health of the pandemic is considered extremely serious because it is likely to lead to a progressive breakdown of many healthcare systems owing to the number of patients likely to require critical care. Physical distancing has been suggested or imposed in many countries as a fundamental factor to mitigate the pandemic and slow its spread – an approach leading to major changes in the behavioural patterns and routines of many, as well as impacting mental health and wellbeing (Editorial, 2020; WHO, 2020). In the unfolding of the COVID-19 pandemic, in both traditional and social media, a flurry of (at times conflicting) information has been published, building up a pile of relevant knowledge alongside potentially misleading news (Adhanom Ghebreyesus, 2020; Zarocostas, 2020). In this context, there has also been a proliferation of potentially very dangerous health-related misinformation, with potentially significant repercussions for individual and public health.
This study furthers the line of enquiry into the propagation of non-science-based medical (mis)information in cyberspace by examining two relevant self-identifying ‘alternative lifestyle’ Italian- and English-speaking online communities observed during the initial stages of the COVID-19 pandemic (January–April 2020). By integrating criminological social learning approaches and psychological explanations of individual mechanisms in science denial, this article offers a micro-level analysis of relevant subcultural groups in order to understand the assimilation, success and proliferation of potentially dangerous health-related misinformation, answering questions such as: What psychological factors and mechanisms facilitating science denial and medical misinformation are present in the communities analysed? What clues to social learning can be found? What are the main narrative frames enabling social learning employed in these communities?
Providers and supporters of non-science-based health practices: A brief overview
The study of the propagation of misleading health information, its causes and its effects, has recently received attention from criminologists too, after being a topic of interest for many years mainly for journalists and debunkers (for example, D’Amato, 2019; Gazzola, 2019) and researchers in the health sciences (for example, Cassileth and Yarett, 2012; Cattaneo and Corbellini, 2014; Kaptchuk and Eisenberg, 1998). There has also been an increasing body of research on online communities promoting deviant health attitudes, for instance in the fields of nutrition (Ayoob et al., 2002; Horsburgh and Barron, 2019) and eating disorders (Boepple and Thompson, 2014; Boero and Pascoe, 2012; Santarossa et al., 2019), vaccinations (Johnson et al., 2020; Kata, 2010) and addictions (Davey et al., 2012; Havnes et al., 2019). This is not surprising: cyberspace has been increasingly used to support health-related decision-making and to market health products (Mackey and Liang, 2017), with communication around illness being transformed from a largely private experience to (at times) a semi-public one (Conrad et al., 2016). Consequently, the social implications of online health misinformation have become extremely relevant for the discipline (and especially for those criminologists leaning towards social, rather than criminal, harms – see Hillyard and Tombs, 2008; Pemberton, 2007), because they may cause financial, physical and psychological/emotional harm to the primary victims, as well as public health problems and loss of confidence in professional scientific and medical norms (Cattaneo and Corbellini, 2014; Lavorgna, 2021).
Previous research suggests that the promoters of such misleading health information and/or non-science-based health practices can be very diverse: some seem to seek profit, social prestige or a combination of both; others are motivated by strong inner (holistic) beliefs and opinions; in a minority of cases, a sexual motivation has been found (Bratman, 2008; Lavorgna and Horsburgh, 2019). Similarly, supporters and utilizers of non-science-based health practices range from people with a general interest in ‘alternative’ medicine and/or lifestyle, to desperate patients (or their loved ones) trying to get some help/hope, to people aiming to ‘save’ society, being harshly critical of governments, the pharmaceutical industry and the medical establishment (Lavorgna and Di Ronco, 2017). Stressing the coexistence of different types of motivation, at times in apparent conflict with one another, is useful to reject simplistic representations of the providers and supporters of non-science-based health practices, which tend to depict these individuals, respectively, as ‘fraudsters’ or ‘quacks’ (Konnikova, 2016; Lerner, 1984) and ‘gullible’ (Ernst, 2019; Metin et al., 2020).
To explore this complexity further, in this study we decided not to investigate some specific health providers and their supporters directly, but rather to address more generally the narratives around science denial and non-science-based health practices carried out by self-identifying ‘alternative lifestyle’ online communities that, in the context of the pandemic, have proactively used cyberspace, and especially social media, to discuss and promote non-science-based health information and other fringe theories. If and when followed, the unfounded advice could lead to various types of social harm, as discussed above. In line with the findings of Kaptchuk and Eisenberg (1998), we believe that, in order to better understand the dynamics of non-science-based health practices and possibly think of better harm mitigation strategies, it is crucial to better understand (rather than overlook or simply dismiss) the epistemological dimension of certain communities, identifying and analysing their beliefs and contextual practices.
Using a psycho-criminological approach to better understand individual behaviours in a subcultural setting
A number of studies looking at deviancy in online communities have relied on social structural approaches, focusing on subcultural elements (for example, Blevins and Holt, 2009; Holt, 2007; Røed Bilgrei, 2018). In doing so, they have mostly looked at macro (societal) or meso (community) levels. In our study, we are more interested in considering the micro (individual) level because we want to observe individual behaviours within our groups of interest, and how different active members of the group engage with the rest of the network in promoting, discussing or otherwise sustaining anti-scientific beliefs. We are integrating a social learning approach with psychological understanding of individual factors and mechanisms for science denial.
The origins of (psychosocial) learning theories can be traced back to the seminal work of Edwin Sutherland, who introduced his differential association theory in the late 1930s. In brief, according to Sutherland, people learn criminal behaviour through contacts with others. Crimes are the result of specific social interactions in a process of communication: anyone can learn criminal behaviour (and also motives, drives, rationalizations and attitudes) because of contacts with criminal patterns and isolation from anticriminal patterns (Sutherland and Cressey, 1978). Sutherland’s work was integrated and expanded in the following decades with contributions from the field of social psychology, according to which people must be socialized and taught how to behave through various forms of direct (conditioning) and indirect (modelling and imitation) learning (Akers, 1977). Learning is therefore seen as a dynamic process that can be encouraged, strengthened or weakened through reinforcement; the social environment becomes the most important source of reinforcement, and peer association the key factor increasing the likelihood of criminal behaviour.
A further development of social learning has been offered by Albert Bandura (1977, 1986): in Bandura’s view, attitudes and behaviours can be learned simply by observing and mimicking the behaviour of others (observational learning) and encoding their behaviour. In observational learning, social modelling (of attitudes, values and lifestyles) can occur in a variety of forms (Bandura, 2008), including through social media and especially – we hypothesize – through online communities that might act as echo chambers, exposing individuals to information only from like-minded individuals, therefore amplifying ‘learning’ through contact with a relatively uniform environment.
Overall, social learning theories – by focusing on how and why individuals learn criminal behaviour and the impact of significant others in this process – proved empirically robust in explaining the variance in individual differences in delinquent and criminal behaviour, having been tested on a wide array of situations, including in cyberspace (for a recent overview of the literature, see Shadmanfaat et al., 2020). Even if we acknowledge that social learning theories need to be adapted when applied to online behaviours because modalities of association can differ greatly between the offline and the online contexts (Boman and Freng, 2017), we think they can offer valid guidance in navigating our research questions.
Of course, by relying on a theoretical understanding developed for the study of crime, we do not want to imply that the anti-scientific practices that are the object of our study should be criminalized. We are hesitant also to consider most of these practices as ‘deviant’, because that would imply a sufficient level of societal consensus around science-based approaches (or at least around a recognized ‘value’ of science in society), something that tends to fluctuate depending on many factors, such as the topic under discussion, the geographical location, and religious or political identification (Krause et al., 2019). Rather, we believe that these practices can create, more or less directly, social harms (Lavorgna and Di Ronco, 2019; Lavorgna and Horsburgh, 2019); it is the normalization and promotion of these practices within certain communities that is problematic, and we hypothesize that this normalization and promotion is a ‘learned’ behaviour, which can be properly explored through a theoretical approach aimed at explaining problematic socialization.
The promotion of anti-scientific approaches or, more broadly, science denial has been addressed in psychological studies. In a recent contribution, Prot and Anderson (2019) summarized several areas of research in psychology that can help explain the roots of denial, and why simply bombarding deniers with accurate scientific information does not lead to attitude change. Although they also illustrate a number of important group and intergroup processes that can be useful to interpret and explain the information being communicated, the current research integrates a social learning approach with Prot and Anderson’s individual-level mechanisms as a framework for understanding the assimilation, success and proliferation of potentially dangerous health-related misinformation. The seven individual-level mechanisms that lead to science denial set out by Prot and Anderson are: belief perseverance; confirmation bias and myside bias; reactance and ‘forbidden fruits effect’; cognitive dissonance; defensiveness; conspiratorial thinking; and training in research methods and rational thinking.
Belief perseverance refers to the phenomenon whereby people will cling to their original beliefs, despite being confronted with credible evidence to the contrary (Ecker et al., 2011; Ross et al., 1975). According to belief perseverance, when someone makes an initial observation or has a particular experience, they may try to develop causal explanations for what they witnessed or experienced. These causal explanations then become independent from the data on which they were originally based, meaning that individuals will continue to believe the causal explanation they generated even if the initial evidence is later refuted (Prot and Anderson, 2019).
This is closely related to the mechanisms of confirmation bias and myside bias discussed by Prot and Anderson. Confirmation bias refers to the process whereby individuals tend to perceive and interpret new information in a way that confirms their original beliefs, while rejecting or ignoring information that contradicts those beliefs (Peruzzi et al., 2019). Relatedly, myside bias refers to a propensity for people to evaluate and generate evidence that helps to confirm their own opinions (Stanovich et al., 2013).
The mechanism of reactance discussed by Prot and Anderson (2019) refers to how people tend to be averse to having their freedom or ability to act in a particular way restricted. When this happens, they become more likely to act in a way that attempts to restore their freedom. In relation to science denial, if scientific evidence is perceived to threaten an individual’s ability to act in a certain way, that evidence is likely to be rejected. Indeed, evidence suggests a forbidden fruits effect, where the restricted behaviour becomes more attractive (Rosenberg and Siegel, 2018).
Cognitive dissonance (Festinger, 1957) is a concept that is often discussed in relation to behaviour-change research; people will generally experience psychological discomfort if there are inconsistencies between their cognitions and/or their behaviour. Thus, a common idea in relation to health-related behaviours is that, if people are informed of the dangers of a particular activity (for example, smoking tobacco), then this information (‘smoking is bad’) should create some dissonance between what the person thinks about smoking (‘I should not do it’) and their behaviour (they smoke). Once this discrepancy between thoughts and behaviours arises, people are likely to take measures to reduce it. Typically, this can be done by changing their behaviour (for example, stopping smoking) or by changing their thoughts/attitudes. As Prot and Anderson (2019) point out, people often do not reduce the dissonance by adapting their behaviour but, instead, reduce the dissonance by challenging the validity of the information received. Thus, rather than stopping smoking, it has been shown that people are more likely to downplay the risks associated with smoking (Gibbons et al., 1997).
There is also evidence to suggest that people may process information in biased ways in order to protect their self-identity or self-esteem – so-called defensiveness (Harris and Napper, 2005; Sherman and Cohen, 2006): when people are presented with information that challenges or contradicts their self-identity or self-esteem, to prevent experiencing dissonance (or in response to experiencing it), they react in a defensive manner; the more threatened an individual is by the information, the more defensive and resistant to change they become. It has been consistently argued in the psychodynamic literature that people use defence mechanisms, including biased perceptions, judgements and cognitions, to protect themselves from negative emotions (Festinger, 1957; Freud, [1933] 1964; Pyszczynski et al., 1993). Such defence mechanisms can work to reduce the distress caused by cognitive dissonance (Vaillant, 2011).
Developing conspiracy theories is a common method of motivated reasoning, and another important mechanism for science denial. Theorizing that events or phenomena are caused by the workings of powerful people or organizations is a central component of conspiratorial thinking, where experts and authorities are determined as having corrupt motives (Landrum and Olshansky, 2019; Lewandowsky et al., 2013). In their study of conspiratorial thinking and science denial, Landrum and Olshansky (2019) found evidence that having a conspiracy mentality is strongly related to a susceptibility to believing in viral deception (‘fake news’) and misinformation about science.
Finally, deficiencies in training in research methods and rational thinking are deemed to have an important role in science denial: it is therefore hypothesized that one way to tackle science denial is through education in research methodology, which would increase people’s ability to adopt methodological and statistical reasoning (Prot and Anderson, 2019).
Methodology
In a context of ‘digital positivism’ based on big data research methods and computational criminology in online research, it is important to retain a space also for qualitative analyses focusing on interpretative and critical approaches (Fuchs, 2019). In our study, we were interested in understanding the behaviour of individuals within specific subcultural groups (and specifically in their online manifestations). Moreover, we wanted to ‘use’ a major, new health-related event (the COVID-19 pandemic) to observe the unfolding of online narratives and behavioural intentions of criminological and psychological interest as linked to potentially harmful science denial.
In this context, we selected two relevant self-identifying ‘alternative lifestyle’ online communities (hereafter identified as Community I and Community II), here defined as an aggregation of individuals interacting around a shared interest in a way supported and mediated by technology, and guided by some protocols and norms (slightly adapted from Porter, 2004). In practice, we delimited as at the ‘core’ of these communities those actively participating in the online discussions pivoting on the Facebook pages hereafter identified as I and II. Both Community I and Community II also have an online presence as digital magazines, but it is in the dedicated (open) Facebook pages that people can post and comment on news and opinions, thus becoming co-producers of relevant content (Fuchs et al., 2010). Community I (about 30,000 followers on the dedicated Facebook page) focuses on health-related information, whereas Community II (about 300,000 followers) tends to have a broader scope, discussing health-related information alongside other contemporary topics (for example, there are posts and discussions related to national and international politics and to children’s education). Although the above-mentioned pages were our starting point, we also analysed the content of textual material (for example, blogs, newspaper articles) and visual material (for example, pictures, videos) referred to in our initial pages up to two additional clicks (which often led us, for instance, to other open Facebook pages, YouTube pages, or other websites).
We selected two different communities, one in English (Community I) and one in Italian (Community II) because Italy and the main English-speaking countries were hit by the pandemic at different points in time (with Italy being two to four weeks ahead on the epidemiological curve), which prompted very different reactions from policy makers (with more or less restrictive lockdowns implemented at different points in time), especially in the weeks right before/after the outbreak was recognized as a pandemic by the WHO on 11 March 2020. We were expecting that different reactions and narratives from high-level policy makers, as well as different national healthcare systems, could have an effect on the dynamics of medical misinformation, and so investigating communities from different (offline) contexts would allow us to explore this hypothesis. For our analysis, we took into consideration the period 1 January to 15 April 2020 (a date when all the main countries of interest for our online communities – the UK, the US and Italy – were in some form of lockdown); towards mid-April, furthermore, we reached a point of data saturation, with no new relevant information or behavioural dynamic emerging for the scope of our analysis.
Our practical strategy was guided by the need to safeguard the privacy and anonymity of the participants in the online communities observed in our study, while ensuring methodological rigour and respect for existing guidelines for online research on social media and the policies of the platform accessed (Markham and Buchanan, 2012; Zimmer and Kinder-Kurlanda, 2017).
We are aware that the data used are at times socially sensitive but, because we limited the study to open groups and pages, we could assume that the participants expected the virtual space used to be public. Seeking informed consent from the Facebook (or YouTube) users observed would have been virtually impossible and seen as intrusive; because we observed only passively, this omission is in line with current research standards (BSA, 2017; Social Data Science Lab, 2019). As regards concerns related to users’ anonymity, we did not use personal identifiers and are not using identifiable quotes (Williams et al., 2017). To minimize the ethical risks (for example, storage of identifying or sensitive information that is not needed for the scope of this research), we collected our data (here defined as all material publicly available, such as words, images/memes, and videos) manually and anonymized it at the point of collection. When the content of the data collection was problematic in itself (for example, because of potentially sensitive information included), we took research notes instead; research notes were also used to capture images and videos. We submitted our data study plans to our university Research Ethics Committee for approval (approved – ERGO/55870).
Our approach can best be described as a form of passive online ethnography. We recognize that in online ethnography there is the need to accept flexibility and explore new routes to ethnographic knowledge in order to create relevant contextual understanding (Pink et al., 2016; Postill and Pink, 2012). By relying on the adaptive approach employed in this study, we do not claim to have carried out a fully fledged study of a specific virtual community, but rather we have used an ethnographic perspective and used elements of the ethnographic method (in line with Androutsopoulos, 2008).
In particular, by looking at online discussions we were able to capture and analyse relevant conversations pertaining to an issue (and in a context) of interest as they occurred, without having to incite or direct the conversation (Brooker et al., 2017; Jowett, 2015); these naturally occurring interactions reveal internal conventions, mechanisms and motivations (Giles et al., 2015; Goffman, 1983; Housley et al., 2017). Because imagery and picture sharing are an important part of online communities (visual conversation), of self-presentation and of construction of the digital self, we also took into consideration videos and images (such as memes but also ‘motivational’ pictures); they helped us to understand visual and socio-aesthetic elements of the communities investigated (McDonald, 2007; Mandoki, 2016).
Having collected the data and the research notes, we manually carried out qualitative thematic analysis – focusing on narrative and visual elements – on the files identified. We conceptualized our data in two distinct ways. First, the (theory-based) analytic focus was on identifying in our data individual factors and mechanisms for science denial (as presented above), as well as clues to social learning (definitions; reinforcement; modelling/imitation; motivation/values; references to significant others offline/online). Second, attention was paid to the linguistic and visual resources used to frame narratives around science denial and non-science-based health practices within the online communities considered, because these are seen as having a role in shaping how participants think through social learning mechanisms. Working closely with the raw data, we identified four main narrative frames: (1) informative frames; (2) oppositional frames; (3) empathetic frames; and (4) agency and spirituality.
Results and discussion
Science denial meets social learning
During our passive ethnography, we encountered instances of all the factors and mechanisms of science denial illustrated by Prot and Anderson (2019), as well as clues to social learning. In this section, we will present some of the examples found and how they interrelate.
Belief perseverance clearly emerged from the narratives used by numerous community members, who were posting comments simply to express their agreement, or that a certain post or video ‘gave voice to their thoughts’ (Community II). Even when confronted with credible evidence contrary to their beliefs (for instance, the testimony of a user writing from an area severely affected by deaths related to COVID-19 expressing concern over a post minimizing the effects of the virus), commenting users clung to their original beliefs, for instance suggesting that the effects of the virus had been particularly severe in a certain area because mass vaccination had been carried out in that area in previous months, hence (in their view) weakening immune systems (Community II). Sometimes, it was suggested that the provider was relying on a sort of Socratic (maieutic) approach (that is, a form of cooperative argumentative dialogue between individuals, used to elicit knowledge by a series of questions), enabling community members to explicate and clarify thoughts and knowledge they already have within them (‘this is what I already knew and you simply amplified my thoughts’, Community II).
What a ‘belief’ is, however, is not fixed; rather, it is part of a learning process that takes place through ‘definitions’. In social learning, definitions are evaluative expressions ranging from approval to disapproval of a given behaviour, formulated by an individual following exposure to the definitions of others. In our communities we could observe both general definitions on broad moral principles (for example, being critical and suspicious about what mainstream media say; or the fact that ‘infectious diseases are born within us . . . from our unhealthy lifestyle’, Community II) and specific definitions focused on very specific acts (for example, taking certain probiotics or watching certain videos). We have identified positive definitions (approving of a particular behaviour – for example, through encouraging comments or positive emojis), negative definitions (disapproving of a particular behaviour – for example, accusations of lacking coherence or being brainwashed), and even neutralizing definitions (providing a rationalization if not a justification for engaging in a particular behaviour – for example, we feel passive and anxious because this is how ‘they’ want us to feel). Of particular interest in terms of socialization processes, in Community II several posts were dedicated specifically to parents and guardians and children’s education. For instance, we observed links to an author (a member of the community) who prepared and interpreted stories and tales for children to explain the pandemic to them: he used a traditional tale and its archetypal characters as an allegory to highlight how we should not ‘submit to the dominant narrative’ that mainstream media carry about the pandemic, or this will ‘make us blind’.
Regarding interactions among adults, we could observe the development and the reinforcement of definitions. In both communities, typically, those who agree with – or post frequently in – the community receive more positive reinforcement than others. The reinforcement of definitions was evident especially in examples of confirmation bias and myside bias. For instance, when it was suggested that vitamin C, a certain type of fruit juice or a commercialized product could be helpful to combat COVID-19, users confirmed that the same substance had helped them get better from other types of seasonal illness (and therefore they would expect some benefits also with the new coronavirus; Community I). Similarly, when some maps were posted suggesting a causal effect between the presence of a coronavirus outbreak in a certain area and the presence of 5G masts (the fifth generation of wireless communications technologies, recently launched in many countries across the globe), most users commented that they were expecting that relationship, that ‘it is what they thought and knew’ (Community II).
However, there were only a few cases indicating that some users had been somehow indoctrinated into positive definitions (that is, beliefs were intentionally inculcated). For example, when one member cast doubt on the content of a post, another intervened to belittle him, and discussions in line with the content of the initial post were encouraged (Community II). This suggests that the mainstream representation of these types of online communities as ‘echo chambers’ might not be completely accurate, because most participants are not isolated or alienated from the ‘dominant’ culture. Instead, they bring into the online community a wide range of personal social networks (significant others) and experiences. For example, in Community I a user indicated that they had contacted their GP to get some clarification on medical misinformation read in the community; in Community II some members referred to their experience of living in a severely affected area, and others mentioned the opinions of family and friends. In other words, the frequency, duration and intensity of participation in certain online communities seem to be very heterogeneous, with only a minority of members being active participants within the community. This suggests that a majority of participants are less invested in the community, being occasional participants or bystanders.
Only a minority of providers and users openly present or defend notable and long-discredited conspiracy theories or fringe theories on the origins of COVID-19 ('it's a biological weapon to lower the world population'; 'the CIA is behind this' – Community I and Community II). Many providers and users, however, support emerging conspiratorial theories that, according to current science-based knowledge, have to be considered pure conspiratorial wild imaginings – first and foremost the alleged connection between 5G and COVID-19. Fear of the virus, according to some promoters and some commentators, is hetero-directed from the top down to favour pharmaceutical companies. They raise the suspicion that a vaccine for COVID-19 is already available but has been kept secret from the general population for the purposes of mass control (to impose mandatory vaccines in the near future or to pass repressive or intrusive laws, including the inoculation of microchips – Community II), or even that COVID-19 was invented in order to create a need for it (Community I). Although some of the concerns over the use of emergency powers to allow stronger surveillance mechanisms are legitimate, the conspiratorial element is to be found in the fact that the existence of a clear direction is assumed, and science fiction elements end up overshadowing realistic alarm about extreme dataveillance (Lettera Aperta, 2020).
Reactance and the 'forbidden fruits effect' were also evident: when people had their ability to go outside or to socialize restricted, these activities suddenly became more attractive. Particularly in Community II, it was possible to observe a clear temporal shift in the narrative. At the beginning of the ethnography, most Italian policy makers and the majority of the general public expected COVID-19 to be a health issue with limited effects in Italy. At this stage, the dominant frame in the posts and videos was overall in line with official recommendations, and the page administrators were explicitly advising the community members to follow the official guidance to protect both themselves and others. When the lockdown started to become stricter, the narrative changed: the health recommendations provided by the government were openly challenged, and those who ignored the official guidelines for the lockdown were described by some in positive terms ('champions of the freedom to take a walk', 'the scapegoats of the unsolved personal unhappiness of others').
Similarly, cognitive dissonance was observed especially when, in the face of the movement restrictions and social distancing policies implemented, people in both communities downplayed the risks or severity of the virus so that they could continue to visit family or go out in public places. In Community I, for instance, some users urged others not to succumb and be panicked by the 'madness that is COVID-19', because there are many more legitimate catastrophes that we ought to be concerned about. When the lockdown became stricter, the risks started to be seriously downplayed in Community II too. For instance, we observed posts minimizing the number of deaths (including deaths of those working on the healthcare frontlines), explicitly advocating not washing hands or cleansing objects too often so as not to destroy the 'good' bacteria, or underplaying the importance of self-isolation. One user explicitly commented that he was purposely going to go shopping without the officially recommended face mask. The change of tone is always set by the provider; the others follow. Again in Community II, for instance, following a post stressing that there is no need to be afraid, many comments repeated 'I am not afraid!'; in other instances, community members stated that they were 'waiting for [the provider]'s analysis' of a certain development in policy-making.
According to social learning, another way in which an individual incorporates definitions into their own belief system is through imitation of an admired model. In both communities we found several references to doctors and scientists recognized as experts within the community (they were always presented as well-recognized international experts) but not as such by the outgroup research community and/or debunkers. Members of the communities are generally encouraged to spread the work of particular doctors or self-appointed experts ‘in all of their social interactions’, even by actively sending letters to local newspapers and other media outlets (Community I). Providers are generally recognized as experts within their communities; other members at times seem to try to become recognized providers within their community (for example, they post links to their own blogs or personal pages) in an attempt to gain more recognition and status (‘I wrote an article in my style, linking some data, [one of my links] is to one of your books’, in a message directed to a notorious provider posting on the core page, Community II).
Even though scientific imagery is widely used (for example, through scientific-sounding names or references to 'good scientists' such as Einstein and Galileo), a lack of training in research methods and rational thinking is generally evident in both communities. For instance, to understand the extent of the pandemic in Italy, some users suggested starting a questionnaire among themselves to count how many people they knew who had tested positive for COVID-19. This reveals no understanding of basic sampling methods and their rationale ('after all, most of us have experienced the virus only through the media', Community II). Some providers insist that they always report the sources used to form their arguments, but these are of very low quality (for example, self-published material, predatory journals, or opinions posted by like-minded people on personal blogs; no empirical data or methodology are provided). Problematically, people viewing the content on these forums are learning about – or being taught – the type of 'evidence' that is considered acceptable within these communities, which perpetuates the cycle of science denial and the sharing of potentially dangerous misinformation.
Defensiveness was also often linked to the same scientific imagery, for instance when providers insisted that they are excluded from the debate (after proclaiming themselves experts), and that they are just as deserving of attention as those they call their counterparts, or nemeses, from 'official' science. Furthermore, members of the community react defensively when the perspective of outsiders is reported. Through the construction of a clear 'us vs. them' oppositional narrative (in line with what has been reported in Lavorgna and Di Ronco, 2017), they label those whose views differ from those defended within their community as 'thoughtless', 'compliant' (Community I), 'ignorant', 'gullible', but also 'risk-averse' (Community II). According to the predominant community view, risks are somehow exaggerated or constructed by those with the power 'to manipulate people' and to make them lose their 'mental clarity' (Community II); the mainstream scientific community is accused of trying to 'gaslight' people, manipulating them into accepting a vaccine (Community I).
Narrative frames
Using insights distilled from the study, we identified four main narrative frames, as listed above: (1) informative frames; (2) oppositional frames; (3) empathetic frames; and (4) agency and spirituality frames. All these frames are reflected in both the language and the aesthetics used.
Informative frames are a constant element in the communities observed. The communities are an important setting for gathering health-related information, and are also used to propagate information beyond the communities themselves – there are constant requests to share videos and information among friends and acquaintances to help propagate the ideas presented by providers within the community. Some of the advice provided genuinely promotes a healthy lifestyle: community members are encouraged to take agency for their lives by eating healthily, taking supplements, exercising and living in harmony with the self and the non-self (other people, the environment). However, some of the information presented can be very dangerous, with messages promoting potentially risky behaviours (for example, 'let's NOT wash our hands . . . it is better to cleanse our brain to avoid the COVID paranoia', Community II). In informative frames, some recurrent narrative mechanisms can be identified: the use of modal verbs (for example, 'could', 'might') is present at the beginning of the discourse but disappears in the conclusions; conjectures are presented as 'proofs' and 'demonstrations'; when providers do not offer information or answers, they insinuate doubts about the 'official' information. Also, details are rarely provided; when people ask for more details or for evidence, providers are typically very quick to respond, but they provide links to inadequate sources of information as if they were credible science. Members of the community are guided through a specific line of reasoning that, even if scientifically inaccurate or marred by logical fallacies, sounds reasonable and deserving of trust. Graphs or data are often presented and visually shown to prove a point (even if generally in a partial and misleading way). When providers are challenged, they remain calm and assertive in their response.
For instance, when one comment referred to the provider as ‘a quack’, they gave a bullet-point list to show all of the ways in which the person was wrong in their assertion, referring to a bogus scientific journal that might appear legitimate (Community I).
Oppositional frames (or 'us vs. them') are commonly found. Indeed, a dichotomy of science is constantly offered, as if there were a 'false science' and a 'real science'. The members of the observed communities are those (allegedly) following the real science, the one that is open-minded, open (to outsiders) and free (from all types of external conditioning and prejudice). This 'good science' can be traced back to Galileo and then to Einstein. According to this view, those going 'against the current' and accused of 'pseudoscience' are simply the courageous ones, censored for what they say; they depict themselves as 'the few against the many', 'the light fighting against the darkness' and 'transgressive' (Community II). On the other hand, the false, mainstream science is presented as arrogant and presumptuous, an 'ideology' degenerating into a 'scientocracy'. The 'false scientists' are compared to Cardinal Bellarmine, the Jesuit theologian who summoned Galileo in the context of the 'Galileo affair', which culminated in the trial and condemnation of Galileo Galilei by the Roman Catholic Inquisition in 1633 (Community II). In other examples, science is referred to as 'junk science' with an agenda (to the point that COVID is renamed 'CONvid') (Community I). There is a general feeling of outrage towards scientists who become well known in public debates, and who are personally attacked by community members and providers alike. For instance, in Community I, they are referred to as being under the influence of the media. In Community II, these scientists are even accused of being 'dangerous for public order' and more generally an 'enemy of democracy' because they would like to take decisions impacting on personal freedoms.
These sentiments are reinforced through visual elements. Positive definitions in the communities are strengthened through the use of calming images set in outdoor spaces or in bright and extremely white labs; light, upbeat music is generally used in videos. Experts in the videos are generally middle-aged or older white men, in serious yet approachable settings (for example, in a living room or a small studio with books, or in front of a whiteboard). The images relating to 'the false science', by contrast, are dark and sometimes cartoon-like to emphasize that they should not be taken seriously; there is a prevalence of indoor spaces, with people generally wearing masks or being medically tested. To strengthen the dichotomy further, famous artistic references are used: for instance, Goya's 'The Sleep of Reason Produces Monsters' is posted when arguing against 'mainstream' science; Anderson's short light-music composition 'The Typewriter' is used in a video showing WHO representatives and politicians, to stress, through visual and musical rhetoric, the bureaucratic nature of policy-making in the context of what had just been defined as a 'false pandemic' that will be used to push the agenda of 'dangerous false vaccinations' (Community II). Evocative metaphors are also present, most notably when in Community II the phrase 'strategy of tension' is used (it refers to a policy whereby violent struggle was encouraged rather than suppressed in the so-called Years of Lead in Italy, 1968–82, when the country suffered numerous terrorist attacks, by both far-left and far-right groups, which were often followed by government round-ups and mass arrests).
Whereas 'official' science is presented in negative terms, providers in the communities observed are characterized by empathetic frames. These providers always rely on calm and accessible language. The tone is easy and they proactively signal their approachability, for example by explicitly inviting people to keep in touch. Beyond offering information, they provide reassurance, both in the content of their posts and in the answers they provide through comments – a feature that seems to be particularly appealing to some members of the community (for example, a video was praised in the comments 'because it is optimistic', Community I) and can help explain how and why people invest psychologically in social discourses (Hollway and Jefferson, 2000; Verde and Knechtlin, 2019). The providers often make reference to the common fears and anxieties suffered by many during the lockdown (for example, the fear of losing jobs, of being away from loved ones). Community members are never belittled or alienated, and inclusive language is generally used ('we need to make sacrifices', 'we need to become informed to escape from the sirens of disasters', Community II).
Before concluding, we want to stress the presence of a fourth narrative frame (agency and spirituality), which is predominant in Community II but in some instances also present in Community I. This frame, we argue, should not be overlooked because it clearly emerged from our data that communities promoting non-science-based health information are often associated with an emphasis on existentialism, transcendence and personal growth. There are multiple messages from the pages and those commenting that encourage people to take agency and control over their own body and mind. For instance, some providers ran a webinar series all about the pandemic and how to protect yourself; any time vaccines or medicines were either posted by the page or brought up in the comments, other users were quick to suggest that the solution was not in medicine or vaccines but that we should instead be taught how to naturally boost our immune system and calm our minds (Community I). Members are encouraged not to be 'victims' of the situation, and to use their 'spiritual vision' and 'intuitive intelligence' to understand the progression of the pandemic and to use the pandemic as a time for personal growth. There are constant reminders to 'stay positive' and keep 'clarity of mind', to stay healthy and 'not to become manipulable', in a call for 'moral resistance' and, at times, the suggestion that 'we are primarily responsible for our diseases' (Community II). In a context where negative feelings are considered 'corrosive for the soul', what transpires is very superficial attention to mental health, with issues such as depression and anxiety being treated only from a psychosomatic perspective; it is suggested that a change of attitude – focusing on positive thoughts and keeping positive – is sufficient to overcome mental health issues.
Spirituality-related discourses are common, where spirituality is generally a mechanism for personal empowerment. Spiritual belief, it is worth recalling, has to do with an individual’s search for existential meaning and is not necessarily linked to religion (Speck et al., 2004; Tanyi, 2002); spirituality has been recognized as an important factor contributing to wellbeing and coping strategies (Koenig et al., 1999; Speck et al., 2004). Spirituality-related discourses generally arise through inspirational posts or videos and spiritualist prayers to provide support and reassurance (for instance, in Community II death is at times discussed, and providers try to give comfort by saying – among other things – that we are still close to our dead loved ones). Spirituality-related discourses sometimes also occur through allusions to angelology and Biblical references. For instance, mainstream science is compared to the ‘devil’ that needs to be kept away (Community I); in the context of conspiratorial thinking on the pandemic, there are references to the Book of Revelation of the New Testament and to the so-called devil’s number reported there (Community II).
Although presented as separate sections here, the individual mechanisms, social learning cues and narrative frames are interconnected. For example, the agency and spirituality frame links very clearly to the mechanisms of defensiveness and cognitive dissonance when communities argued that, to combat the mental health issues associated with COVID-19, a simple change in attitude is all that is required. Similarly, mechanisms of conspiratorial thinking and biased processing of information are evident in the oppositional frames, where scientists are considered dangerous for public order.
Conclusions
In this study, through the passive online ethnography of online communities observed over the initial stages of the COVID-19 pandemic, we observed both individual mechanisms for science denial and clues to social learning, and we showed how they interrelate. We also discussed the unfolding of online narratives, recognizing some core themes that appear to be pivotal to provide a more comprehensive understanding of the success of medical misinformation online. In these conclusions, we want to emphasize the implications of some key findings of this study, which we believe can have repercussions beyond criminology and psychology, because they suggest ways in which to mitigate the harms of the propagation of misinformation online by improving health-related scientific communication.
From our analysis it emerges that the success of the communities observed relies strongly on the fact that participants mutually support each other; reassurance is offered and community members are given a sense of agency, of control over their lives. In line with the findings of Kaptchuk and Eisenberg (1998), the narratives observed in our communities are not only persuasive but also restorative to some, enabling some participants to find a renewed sense of the self, of morality and of purpose. Research on effective online medical support groups has already stressed that there is a clear need to balance empathic and factual communication (Preece, 1999), and psychosocial criminology teaches us that the diversity of individuals' lived experiences is a core determinant of how we perceive risks, and indicates the role of emotions such as fear in denial (Hollway and Jefferson, 2000). Similarly, our study suggests that, if effective science communication (both from real experts and through traditional media and policy makers) is to be 'competitive' with the harmful information that is offered in certain online communities, then it needs to pay more attention to its own framing (for instance, taking emotions such as fear or spiritual needs more explicitly into consideration). Being dismissive is somewhat counterproductive, and leads to the dangerous alienation of potentially vulnerable members of society.
Additionally, and again in line with psychological research showing that simply bombarding people with accurate scientific knowledge is not enough to reduce science denial (Landrum and Olshansky, 2019), our analysis underlines the need for renewed attention to the how of scientific communication: community members do ascribe explanatory virtues to conspiracy or non-science-based theories, even when they are not endorsing them (see also Mirabile and Horne, 2019). Indeed, the large majority of the active participants observed 'liked' such theories or commented on them with positive interest, preferring them to the scientifically supported explanations when available.
Our in-depth, qualitative study partially disproves earlier quantitative big data analyses (for example, Del Vicario et al., 2016) suggesting that there are well-formed and highly segregated communities around conspiracy and scientific topics. Indeed, although it holds true that users mostly tend to select and share content related to a specific narrative and to ignore the rest, ideological, political and social homogeneity were not found. Further in-depth, qualitative analysis on different online groups (such as other types of online communities promoting or debating potentially harmful health attitudes, as exemplified above) is needed to corroborate our findings, but our exploratory analysis indicates that it might be important to be more critical in using the metaphor of an echo chamber to describe these types of online communities: they are not closed systems, and spreading this idea risks obscuring the complexity of the actual social learning occurring online.
Despite the heterogeneity of the participants in the online communities observed, there are some major underlying assumptions and themes that can be identified (in line with Kaptchuk and Eisenberg, 1998). From our study, it appears that these assumptions and themes are at the very heart of what allowed the development over time of a (very fluid) coalition undermining not only public perceptions of science-based medicine (with anti-vax movements being a very notorious example) but also rational thinking and public trust in experts.
Although in this study we referred to social learning theory as a micro-level theory, useful to investigate within-group behaviours, it is worth remembering that the learning process has been linked to social structure by Akers himself, who stressed the importance of understanding the specific learning environment in which individuals operate (Akers, 2009). Hence, an interesting avenue for further research would be to investigate further the synergies between group and intergroup processes for science denial (Prot and Anderson, 2019) and criminological subcultural approaches, shifting attention to a meso-level of analysis. Similarly, it would be interesting to explore the gender dimension of the communities analysed: existing research has suggested that, among providers, there is a prevalence of (male) middle-aged adults (Lavorgna and Horsburgh, 2019). Also, in certain online communities propagating medical misinformation, a traditional, gendered division of roles in the family is advocated; references have been found to libertarian forms of feminism, together with attacks on what is dismissed as ‘gender feminism’ (Lavorgna, 2021 forthcoming) – suggesting this would be an important avenue for further analyses.
