Abstract
The spread of health-related misinformation on social media has intensified users' efforts to tackle emerging risks. In this study, we propose a model of how users' perceptions of risk when encountering potentially fear-inducing pandemic misinformation influenced their intent to fact-check it. We collected data from adult Facebook users via an online survey and tested the model using structural equation modelling. Unlike previous risk perception models, we found a positive effect of cognitive risk perception on social media users' intentions to use internet tools for verifying the accuracy of information. The results also revealed that the more emotional risk users perceived, the more they intended to consult multiple sources and, indirectly, to use verification tools. Furthermore, participants' propensity to use online fact-checking tools grew with their intention to explore many information sources. Our study contributes to the field by connecting cognitive and emotional risk perception with multi-faceted fact-checking on social media, where individual fact-checking practices and information-seeking behaviour merge. It also contributes to human information behaviour research by highlighting heightened concern about disease danger as a possible user characteristic driving motivated misinformation debunking. Our findings may thus aid health practitioners and risk communicators in targeting and educating individuals with low risk perception in particular. Finally, we call on the general public and legislators to recognise the provision of accurate online information as a crucial part of the strategic communication agenda.
Plain Language Summary
The spread of false health information on social media has made people work harder to deal with new risks. Our study looks at how people's thoughts and feelings about risk when they see scary pandemic misinformation affect their desire to check the facts. We tested this using advanced statistics. We found that when people think more logically about risks, they are more likely to use online tools to fact-check. When people feel more emotional risk, they are more likely to look for multiple sources of information and then use online fact-checking tools. The more people want to gather information from different sources, the more they use online fact-checking tools. Our study shows how both thinking and feeling about risks are linked to thorough fact-checking on social media. It highlights that people who are very worried about disease risks are more motivated to correct false information. This helps health professionals and communicators know how to better target and educate those who are less aware of risks. We stress the importance of accurate online information as a crucial part of effective communication, urging the public and lawmakers to understand this vital role.
Introduction
Pandemics of highly contagious, previously unknown viruses pose a challenging quest for science and medicine to fill knowledge gaps and manage global health in a race against time. The lack of effective pandemic measures, declining physical and mental health, and a hunger for trustworthy information characterised the recent Coronavirus pandemic (Saqib et al., 2023). While the first COVID-19 outbreak dates back to 2020, new subvariants of the virus have continued to emerge; the Omicron subvariant discovered in 2023 was the most contagious strain of the virus to date, projected to grow in frequency internationally in the near future (Mahase, 2023).
The public, faced with the uncertain environment posed by the recent Coronavirus, showed a high demand for news and information, however, often at the expense of quality information regarding the virus (Bhagavathula & Raubenheimer, 2022; Morejón Llamas & Cristófol-Rodríguez, 2023; Van Aelst et al., 2021). The failure of public health officials, governments, media and others to deliver consistent and clear messaging, assess risk levels effectively, or propose appropriate protective behaviours (Freiling et al., 2023; Krause et al., 2020) pushed the public towards low-quality publications lacking professional journalistic standards, instead of traditional sources (Lewandowsky et al., 2022). People’s information sources shifted to non-mainstream social media news (Kalogeropoulos et al., 2019; Newman, 2021) and lay users acting as opinion leaders (Wagner & Reifegerste, 2023). During the pandemic, social media news consumption rose by 2.7% compared to the pre-pandemic years (Van Aelst et al., 2021). The digital news report (Newman, 2021) showed that weekly news use was 44% on Facebook and 13% on Twitter, with Instagram, TikTok and Snapchat following. These various social media platforms were used to retrieve health-related information (Cinelli et al., 2020), as was the web more broadly: Google Trends data revealed notable increases in Coronavirus-related searches during this period (Effenberger et al., 2020).
The situation became conducive to a ‘(mis)infodemic’ about the virus on social media (Freiling et al., 2023), which enforced the debate about the negative influences of fake news and a variety of phenomena, like information pollution (Wardle & Derakhshan, 2017). It encompassed several problematic information types, which differed according to their intent towards the user (Wardle & Derakhshan, 2017). Misinformation and disinformation, often called simply fake news (Wardle & Derakhshan, 2017), became challenging to manage (Freiling et al., 2023; Islam et al., 2020).
The outbreak of COVID-19 thereby amplified health risks, as individuals might select specific characteristics or aspects of information and interpret it based on their own perceptions. They might then pass these interpretations on in communication with other people, resulting in misinformation (Mayer et al., 2017). The consequences can be critical, leading to lower quality of life, a possibly increased chance of dying (Altman et al., 2023; Freiling et al., 2023; Swire-Thompson & Lazer, 2020), and even changed community-level decision-making in various contexts. For instance, the spread of misinformation resulted in hesitancy towards, or outright rejection of, public health interventions like vaccination, leading to outbreaks of preventable diseases such as measles (World Health Organization, 2019). Likewise, exposure to misinformation regarding COVID-19 on social media dramatically reduced individuals' propensity to accept the related vaccines (Roozenbeek et al., 2020). This may have resulted in incorrect resource allocation, poor public health management, or lower trust in institutions (van der Linden et al., 2017).
To prevent such outcomes, implementing thorough fact-checking is essential, and measures were introduced at several levels. Industry giants and governmental bodies introduced professional company-independent or in-house fact-checking, along with debunking platforms (EUvsDisinfo, 2023; Jin, 2020; Walsh et al., 2022). Various approaches were also proposed to aid users, for example, instructional activities, technological tools and literacy practices (Chan, 2022; Kruijt et al., 2022; Swart, 2023). Notably, users' own actions against misinformation, recognised as crowd-sourced fact-checking, may be even more effective than professional efforts (Calvo-Gutiérrez & Cervi, 2023; Lazer et al., 2018; Ruffo et al., 2023; Swart, 2023). Namely, users could rely on the opinions of others, seek additional sources, or use online fact-checking tools (Ahn & Kahlor, 2020; Kahlor, 2010; Liu & Huang, 2020; Ştefăniţă et al., 2018; Swart, 2023).
Fact-checking becomes critically important during crises such as the COVID-19 pandemic, where risk perception may be the denominator prompting users to consider the validity of the (mis)information they consume, encouraging them to engage in careful information processing, with the specific aim of identifying deceptive information (Lee et al., 2023). Research has shown that individuals exposed to validated, fact-checked information regarding COVID-19 risks exhibited more precise evaluations of personal and social threats, together with increased compliance with preventative actions (van der Linden et al., 2020). Nonetheless, there is limited understanding of how individuals’ perception of risk influences their intent to conduct fact-checking actions in crisis times. Risk perception may serve as a significant catalyst for information-seeking activities, as individuals who perceive elevated levels of danger are more inclined to seek supplementary sources to confirm or challenge received information (Renner & Schupp, 2011).
Existing research demonstrates various approaches towards examining risk perception’s role in facilitating health-related behaviours, like fact-checking or information-seeking intentions (Freiling et al., 2023; Jiang, 2022; Jiang et al., 2022; Lee et al., 2023; Malik et al., 2023; Nah et al., 2023). These studies have either been built on established theory-based models of risk perception, have proposed new models without relying on established theoretical models, or have not included modelling at all (Dryhurst et al., 2020). Indicating a research gap, Nah et al. (2023) looked mostly at how risk perception affects intentions to seek information, while its concurrent effect on information-seeking intentions across multiple source types (online fact-checking tools, user counterparts and other sources) has remained under-researched. Conversely, in studies where information-seeking intentions were examined sufficiently, risk perception was not included (Jiang, 2022). Thus, the literature is scarce on studies measuring a direct effect of risk perception on fact-checking (Jiang et al., 2022), which holds even more firmly for health fact-checking.
Conceptual Framework
In this study, we propose a conceptual model on how risk perception about pandemics may explain individuals’ intentions to engage in fact-checking information about pandemics in social media. We assume that users’ intent to fact-check information can be explained by risk perception (Jiang, 2022; Jiang et al., 2022).
We focused on social media news consumption, as it was heightened during the COVID-19 pandemic, as in previous pandemics, driven significantly by the retrieval of health information (Jang & Baek, 2018; Morejón Llamas & Cristófol-Rodríguez, 2023). We concentrated on user-generated news regarding COVID-19 disseminated on social media, rather than that produced by news organisations. We assumed that this type of content would not be produced by professional journalists, where filtering of misinformation traditionally takes place at one of the production stages. In such conditions, the possibility of misinformation spread could be higher (Wagner & Reifegerste, 2023). A large-scale quantitative analysis across multiple social media platforms, including Twitter, Facebook, Instagram and Reddit, revealed that misinformation circulated faster and reached a larger audience than science-based, verified content during the COVID-19 pandemic (Cinelli et al., 2020). On Twitter, emotionally charged content was spread more frequently via retweets, while on Facebook and Reddit, misinformation was shared mostly through group discussions and sharing mechanisms (Cinelli et al., 2020). In our study, the emphasis was on Facebook as the Slovenes' leading social media platform (Valicon, 2020).
Individuals’ subjective perceptions of the pandemic played the leading role in their response to COVID-19 updates on social media regarding the decision to check the information they engaged with (Lee et al., 2023). Engagement with the news was treated as reading the material, or participating in associated activities, such as commenting, liking, sharing, or contemplating it (Kožuh & Čakš, 2021; Swart, 2023). In what follows, we first present the concepts of risk perception and fact-checking intent, followed by substantiation of their interplay in a conceptual model.
Risk Perception About Pandemics
Risk perception pertains to individuals’ personal thoughts or judgements about the possibility of unfavourable events, such as accidents, illnesses, diseases and fatalities (Bae & Chang, 2020; Li et al., 2023; Paek & Hove, 2017). Risk perception consists of cognitive and emotional dimensions (Li et al., 2023), where the cognitive dimension refers to an individual’s knowledge and comprehension of hazards, encompassing their perception of susceptibility and the severity of dangers (Paek & Hove, 2017; Sjöberg, 1998). The emotional dimension of risk perception, also known as affective risk perception, relates to an individual’s apprehension or concerns regarding potential exposure to hazards (Li et al., 2023; Paek & Hove, 2017; Sjöberg, 1998).
Our study does not deem risk perception as a reaction to specific social media news content about COVID-19, but as an antecedent for individual fact-checking. According to the cognitive-behavioural theory (CBT) (González-Prendes & Resko, 2012), people’s exposure to and consumption of social media news about COVID-19 may induce worry or anxiety around their health (Liu, 2020). It may also be the case in wider non-pandemic contexts. Individuals who had visited medical websites and obtained information regarding specific disease symptoms exhibited elevated degrees of health anxiety (Norr et al., 2014).
The perception of risk was further found to strongly influence the motivation to seek more health information (Lee et al., 2023; So et al., 2016). When people face health-related news that contains misinformation, it is even more essential to know whether the risk they perceive plays any role in their subsequent behavioural intention, that is, fact-checking. Not to be overlooked, when they perceive a particular risk, they are motivated to change their health behaviour and intend to engage in protecting it (Chen et al., 2015; Li et al., 2023; Liu, 2020). Even when they already possess factual knowledge about the risk, perceiving it may still lead them to fact-check the information they are exposed to (Choi et al., 2023; Lee et al., 2023).
Fact-checking Intent
Fact-checking is a non-regulatory initiative to recognise scientific facts and perspectives. It enables people to access trustworthy information regarding measures to safeguard themselves against COVID-19 (Hameleers, 2022; Schuetz et al., 2021). If they practice fact-checking actively, they are less vulnerable to disinformation (Chan, 2022; Chen et al., 2022). Fact-checking is manifested at both the professional and non-professional levels. Professional fact-checkers ensure the accuracy of information by working for news organisations, fact-checking services, or other organisations (Ruffo et al., 2023). At the non-professional level, crowdsourced fact-checking operates, where social media users are non-expert fact-checkers, and it has also demonstrated good results in combating misinformation in the COVID-19 pandemic (Kou et al., 2021). The impact of crowdsourced fact-checking is also apparent beyond the context of COVID-19, in areas such as climate change and political debate. For instance, crowdsourced fact-checking was recognised as successful and as a possible complementary approach to professional efforts in combating misinformation about climate change (Saeed et al., 2022; Vu et al., 2023) and politics (Gao et al., 2024).
Users tend to seek and believe accurate information from authoritative sources or news outlets, such as official accounts and verified users, in order to verify information (Swart, 2023; Tandoc et al., 2017). Additionally, individuals can seek confirmation of the news from external sources, such as acquaintances, medical professionals and authoritative institutions (Swart, 2023; Tandoc et al., 2017). Individuals may also check the claims in social media news by using fact-checking tools, to find and remove inaccurate or fraudulent content or claims that are disseminated online, or through other mediums in the public domain (Brandtzaeg et al., 2018), such as FactCheck.org, Snopes, Fact Checker, TinEye or Google Reverse Image Search. We refer to specialised online platforms that analyse rumours and evaluate health and political assertions, located predominantly on social media, for example, the Slovenian Oštro.si.
In our study, we consider fact-checking as individual endeavours to scrutinise assertions that have been reported previously on social media (Krause et al., 2020). It can be performed elsewhere in the digital space and in multi-fold modes: users may check counterparts’ opinions, explore alternative sources of information, or utilise internet fact-checking tools (Ahn & Kahlor, 2020; Kahlor, 2010; Liu & Huang, 2020; Ştefăniţă et al., 2018).
Risk Perception Explaining Fact-Checking Intent
Existing research does not uniformly substantiate the impact of risk perception on fact-checking or information-seeking behaviours. Studies which have developed conceptual models to explain the role of risk perception in fact-checking intent are scarce, although those that exist grounded their models in well-established theoretical models (Jiang, 2022; Jiang et al., 2022; Lee et al., 2023; Nah et al., 2023).
These studies applied the Risk Information Seeking and Processing model (RISP) (Griffin et al., 1999), the Planned Risk Information Seeking model (PRISM) (Kahlor, 2010), the Comprehensive Model of Information Seeking (CMIS) (Johnson & Meischke, 1993), or the O-S-O-R model (Markus & Zajonc, 1985). Built on the RISP and PRISM models, Nah et al. (2023) proposed a conceptual model that examined misinformation beliefs in connection with vaccine hesitation, while focusing on the mediating roles of information-related factors (cognitive risk perception, negative affective responses, perceived lack of information and intents to obtain knowledge) and behavioural subjective norms. Following the CMIS model, Jiang et al. (2022) developed a conceptual model in which worry and risk perception are supposed to affect the experience of obtaining negative information, potentially leading to a decrease in health fact-checking behaviours. Guided by the O-S-O-R model, Jiang (2022) proposed a conceptual model in which health worry is supposed to influence excessive consumption of information on social media, leading to eventual exhaustion from social media use and decreased verification of health-related information.
Accordingly, the literature on risk perception models revealed no significant effect of cognitive risk perception on perceived information insufficiency or information-seeking intentions, but did reveal an effect of negative affective responses, that is, emotional risk perception (Nah et al., 2023). Likewise, Jiang et al. (2022) found no direct effect of cognitive risk perception on fact-checking. On the other hand, Lee et al. (2023) found that cognitive risk perception negatively predicts misperception. Concurrently, Jiang et al. (2022) found that cognitive risk perception about COVID-19 leads to a negative inclination to seek knowledge, which, in turn, decreases the practice of fact-checking.
As the literature does not state clearly what kind of relationship between risk perception and fact-checking exists, in our study, we propose a new model, with the central research question of how risk perception explains fact-checking intent (see Figure 1).

Figure 1. A conceptual model diagram of the relationship between risk perception and fact-checking intent, along with their components. Alt text: A conceptual diagram illustrating the relationship between risk perception and fact-checking intent. The diagram shows that risk perception affects fact-checking intent and consists of two components: cognitive risk perception and emotional risk perception. Fact-checking intent consists of three components: searching for reliable sources actively, seeking others' opinions, and utilising online fact-checking tools.
In a wider sense, our model relies on the RISP model (Griffin et al., 1999), which allows us to understand how individuals seek and process risk-related information. According to the RISP model, individuals’ perception of risk influences their motivation to seek information: increased perceived risk frequently results in greater information-seeking behaviour (Griffin et al., 1999). In our case, we integrated the cognitive and emotional dimensions of risk perception, facilitating a nuanced understanding of what motivates social media users' fact-checking behaviours. The RISP model further asserts broadly that individuals pursue supplementary information to bridge the disparity between their existing knowledge and what they deem adequate to mitigate the risk properly. Our suggested model narrows this focus by identifying fact-checking intent as a key behavioural outcome. It is operationalised into three constructs which could be aligned partly with the RISP model. Namely, intent in seeking reliable sources could be understood as systematic processing, seeking the opinions of others is compliant with heuristic processing, while utilising online tools for fact-checking might be understood as a technological enabler of information processing.
As outlined in the RISP model (Griffin et al., 1999), we propose different pathways through which cognitive and emotional risk perception affect fact-checking intent. Cognitive risk perception, which can be understood as the logical evaluation of the likelihood and severity of misinformation, may trigger systematic fact-checking intent, such as seeking reliable sources actively and utilising online tools. In contrast, emotional risk perception, which can be deemed an affective response like worry, may trigger more reactive and heuristic behaviours, such as seeking opinions from counterparts. These estimated pathways not only have a theoretical foundation, but are also grounded in different real-world scenarios, such as health-related and climate change misinformation. During the recent COVID-19 pandemic, emotional perception about vaccine misinformation motivated individuals to seek reassurance from friends or family (Singh et al., 2022), while others verified claims systematically through official health organisations (Ruggeri et al., 2024). Likewise, in a broader sense, the existing literature reveals that climate change information provokes emotional responses, which, subsequently, affect how individuals perceive the related risks and their intention to participate in pro-environmental behaviours (Myers et al., 2023).
In general, the described pathways cannot be disentangled easily in real-world contexts, nor has the existing literature explored them sufficiently. Thus, it is necessary to conduct an empirical study to elucidate the mechanisms by which cognitive and emotional risk perception affect users’ intent to fact-check information on social media. In the present study we focus on adults, as they are highly active in both consuming and sharing information on social media platforms, which represent a source for obtaining health information both actively and through passive exposure (Lim et al., 2022; Wong et al., 2021). Besides facing higher exposure to health misinformation, users with higher risk perception were also found to tend to be older and female, and more likely to live in urban areas (Bruine de Bruin, 2021; Dryhurst et al., 2020).
Methods
Ethics, Data Collection Procedure, Analysis and Presentation
The Institutional Review Board (IRB) of the Faculty of Arts at the University of Maribor granted ethical approval for the study. We also respected the Ethical Guidelines released by the Association of Internet Researchers (Franzke et al., 2020) and the Declaration of Helsinki (World Medical Association, 2024). As we employed an online survey questionnaire, the principles of research ethics were transferred into the design of the questionnaire. Namely, prior to answering the questions related to the study, the respondents were provided with four questions dedicated to informed consent, for example, ‘I confirm that I have read the introductory information about participating in the research’. Moreover, the data were stored securely and were accessible exclusively to authorised researchers. The respondents also had the right to stop filling out the questionnaire at any time without any consequences (Woodfield, 2017).
The data were gathered with an online survey questionnaire between 4th January and 28th February, 2021, when the second wave of the COVID-19 pandemic occurred in Slovenia. The data collection was held on Facebook, which was the world's most popular social media platform, with approximately 2.8 billion monthly active users at the time of the data collection (Statista, 2021b). The study population comprised individuals aged 18 years and older, who utilised at least one social media platform (Facebook) and resided in Slovenia. Specific data on the adult population on social media in Slovenia are not available, while, according to Statista (2021a), approximately 59% of the population was on Facebook in 2021, that is, 1,255,000 users. Our sample size was 433 participants.
The sensitive nature of the survey topic posed significant challenges to the data collection process. Namely, Facebook’s advertising policies prohibited publicising our survey, as it was related to COVID-19 (Facebook, 2020). Thus, we applied non-probability convenience sampling, based on reachability. The participants were selected based on their accessibility and proximity within the digital space, as the authors disseminated an invitation to participate through their personal Facebook profiles. While this approach facilitated efficient access to a diverse sample, it naturally restricted the generalisability of the findings, as the sample may not have represented the broader population accurately.
To address the research question, we utilised structural equation modelling (SEM). Its primary advantage lay in the simultaneous modelling of relationships among multiple independent and dependent variables, which enabled rigorous examination of relationships and assessment of model fit by integrating confirmatory factor analysis (CFA) with path analysis (Hair et al., 2021; Kline, 2023). Statistical analyses were conducted with SPSS 28.0 and AMOS 28.0 (IBM Corp., 2021a, 2021b). Overall, we executed the following steps (Han, 2023; Hair et al., 2021; Kline, 2023):
An initial measurement of the variables was selected, modified and adapted to fit the research objectives. The content validity of the measurement items was assessed rigorously by three experts in psychometric measurement, research methodology and media communication. It allowed us to ensure that the measurement items were relevant and comprehensive (Drost, 2011).
The measurement instruments were checked for validity and reliability. The construct validity was assessed through CFA, to ensure that the measurement items demonstrated adequate loadings, and thereby reflected the underlying latent theoretical constructs adequately. The construct reliability was assessed with Cronbach’s alpha coefficient, to confirm internal consistency (DeVellis & Thorpe, 2021; Hair et al., 2021; Kline, 2023).
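The internal-consistency check described above can be illustrated with a short sketch. This is not the SPSS procedure used in the study, only a minimal Python illustration of the formula behind Cronbach's alpha; the response matrix below is hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a matrix of shape (n_respondents, k_items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert answers: five respondents, three items of one construct
responses = np.array([
    [5, 4, 5],
    [4, 4, 4],
    [2, 3, 2],
    [1, 2, 1],
    [3, 3, 3],
], dtype=float)

print(round(cronbach_alpha(responses), 3))  # ≈ 0.955, comfortably above the usual .70 cut-off
```

When the items are perfectly correlated, the formula returns exactly 1; values above roughly .70 are conventionally taken as acceptable internal consistency.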
The measurement model's fit was assessed using various fit indices, including the goodness of fit index (GFI), normed fit index (NFI), comparative fit index (CFI) and root mean square error of approximation (RMSEA) (Hair et al., 2021; Kline, 2023).
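The fit indices named above follow standard closed-form definitions. As an illustration only (not the AMOS output of this study), RMSEA and CFI can be computed from chi-square statistics as follows; the values passed in are hypothetical.

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean square error of approximation (values below ~.06-.08 usually signal good fit)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_model: float, df_model: int, chi2_baseline: float, df_baseline: int) -> float:
    """Comparative fit index against the independence (baseline) model (>= ~.90-.95 is good)."""
    d_model = max(chi2_model - df_model, 0.0)
    d_baseline = max(chi2_baseline - df_baseline, d_model)
    return 1.0 - d_model / d_baseline if d_baseline > 0 else 1.0

# Hypothetical values: a model chi-square of 150 on 100 df, n = 433 respondents,
# and a baseline (independence) model chi-square of 2000 on 120 df
print(round(rmsea(150, 100, 433), 3))
print(round(cfi(150, 100, 2000, 120), 3))
```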
A composite reliability (CR), convergent and discriminant validity was conducted. The CR analysis inspected the reliability of constructs in the measurement model, while the convergent validity analysis inspected whether the indicators of every construct shared a substantial amount of common variation (Hair et al., 2021). The outputs were standardised factor loadings and Average Variance Extracted (AVE) values. The discriminant validity analysis inspected further whether the constructs/factors were clearly separate from each other (Hair et al., 2021). The output was the factor correlations matrix.
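The CR and AVE computations reduce to simple formulas over the standardised loadings. The sketch below is an illustration with hypothetical loadings, not the study's actual estimates; the Fornell-Larcker comparison mentioned in the final comment is the conventional discriminant-validity check.

```python
import numpy as np

def composite_reliability(loadings) -> float:
    """CR = (sum L)^2 / ((sum L)^2 + sum(1 - L^2)) for standardised loadings L."""
    lam = np.asarray(loadings, dtype=float)
    error_variance = (1.0 - lam ** 2).sum()      # item error variances under standardisation
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + error_variance))

def average_variance_extracted(loadings) -> float:
    """AVE = mean squared standardised loading; >= .50 is the conventional threshold."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())

# Hypothetical standardised loadings for one construct's three items
loadings = [0.82, 0.77, 0.71]
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
# Fornell-Larcker check: sqrt(AVE) should exceed the construct's correlations with other constructs
```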
Given that the collected data depended on the participants' self-reports, we used procedural and statistical strategies to mitigate the common method bias (CMB). In the data collection phase we employed anonymous survey responses, to reduce the potential biases from social desirability and priming effects (Chang et al., 2020). We also eliminated common scale properties, so that not all the questions were asked in the same response format (Podsakoff et al., 2012). In the data analysis, we examined CMB using Harman’s single factor test, to verify that a single component did not explain the majority of the variance (Podsakoff et al., 2012). We also conducted a full collinearity assessment for controlling CMB, where we calculated the variance inflation factors (VIFs) for all the constructs in the research study. Finally, a common latent factor was integrated into the model, to account for any shared variance among the items due to technique bias (Hair et al., 2021; Podsakoff et al., 2012).
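The full collinearity assessment rests on the variance inflation factor, VIF_j = 1 / (1 - R_j²), where R_j² comes from regressing construct j's scores on all the others. A minimal Python illustration with hypothetical data (not the study's constructs):

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """VIF for each column: regress it on the remaining columns (plus an intercept)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])  # intercept + other columns
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual_ss = ((y - A @ beta) ** 2).sum()
        total_ss = ((y - y.mean()) ** 2).sum()
        r_squared = 1.0 - residual_ss / total_ss
        out[j] = 1.0 / (1.0 - r_squared)
    return out

# Two perfectly uncorrelated predictors -> VIFs of exactly 1 (no collinearity)
X_orthogonal = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)

# Two nearly proportional predictors -> very large VIFs (strong collinearity)
X_collinear = np.array([[1, 2.1], [2, 3.9], [3, 6.2], [4, 7.8], [5, 10.1]])
```

VIFs near 1 indicate no collinearity, while large values flag constructs whose shared variance may reflect method bias rather than substance.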
A structural model was developed, to test the relationships between constructs. Standardised path coefficients and their significance levels (p-values) were assessed, to determine the strength and direction of associations. Again, the fit of the structural model was evaluated, and we measured the reliability and validity of the model.
Measuring Instrument
The survey questionnaire included three main sections, measuring risk perception, fact-checking intent and demographic background. Risk perception was measured through the cognitive and emotional dimensions. While the cognitive dimension measured the participants’ understanding of COVID-19 as a risk, the emotional dimension measured their feelings about this health-related risk (Bae & Chang, 2020; Paek & Hove, 2017). Relying on Paek and Hove (2017) and adapting the scale by Bae and Chang (2020), cognitive and emotional risk perception were each measured through four items with 5-point Likert-type response categories, ranging from 1 (strongly disagree) to 5 (strongly agree). An item example for cognitive risk perception: ‘There is a high likelihood that I will acquire COVID-19’. An item example for emotional risk perception: ‘I am worried that I will contract COVID-19’. The measurement of intent to fact-check was conducted using three dimensions: (1) Searching for reliable sources actively, (2) Seeking the opinions of others, and (3) Utilising online tools to identify deceptive material for fact-checking purposes. The measuring instrument for fact-checking intent was developed by adapting the existing measuring instruments (Ahn & Kahlor, 2020; Kahlor, 2010; Liu & Huang, 2020; Ştefăniţă et al., 2018), finally comprising three items per dimension, with 5-point Likert-type response categories, ranging from 1 (strongly disagree) to 5 (strongly agree). An item example: ‘The next time I engage with news related to COVID-19 I plan to use more than one piece of news as a source of information’. The participants’ demographic data were collected as well. The participants were instructed to provide information regarding their gender, age, level of education, active use of social media platforms and personal experience with COVID-19.
Active use of social media platforms was measured by asking which social media platforms the respondents used (Facebook, Twitter, Instagram, TikTok, Snapchat, LinkedIn or other), how frequently they conducted various activities on social media, how often they came across news related to COVID-19, and how often they engaged with such news. The frequency of conducting various activities on social media was measured with eight items (e.g., sharing the content) with response categories ranging from 1 (almost never) to 5 (every day). The frequency of coming across news related to COVID-19 and engaging with such news was measured with two questions, each with the same abovementioned response categories. Personal experience with COVID-19 was measured with one question, which asked whether the respondents had already acquired COVID-19. The adaptation of the questionnaire to fit the research objectives was ensured with a pretest, during which three experts from Media Studies reviewed and confirmed the design of the questionnaire (DeVellis & Thorpe, 2021). Likewise, a pilot test was conducted with 20 potential respondents, to ensure questionnaire clarity (DeVellis & Thorpe, 2021). The results of the validity and reliability analysis are presented in Section “Validity and Reliability Analysis”.
Results
Sample
The study included 433 adult individuals in Slovenia who use social media actively. 62.4% of the participants were female and 37.6% were male. Most of the participants had tertiary education (60%), followed by those with secondary education (27.3%) and primary education (1.2%). On average, the participants were 33.31 years old (SD = 11.92); according to recent studies, this younger age range is a critical period for the impact of social media use on development (Orben et al., 2022). The majority had not yet had COVID-19 (62.1%). Table 1 shows how frequently they used various social media platforms, and how many participants conducted particular social media activities at least daily.
Information on Social Media Use Among the Study Participants.
Data Analysis
Validity and Reliability Analysis
We first screened the data and identified missing values, which we addressed with listwise deletion, a technique routinely applied by SEM researchers (Hair et al., 2021).
The CFA revealed that the construct risk perception consisted of two variables, and fact-checking intent of three variables (see Table 2, column ‘Variable’). A reliability and validity analysis of the constructs followed. The retrieved factors had excellent internal consistency (DeVellis & Thorpe, 2021) (see Table 2, column ‘Construct reliability’). In the validity analysis, we excluded one item of the variable ‘Cognitive risk perception’, as its factor loading was lower than 0.5 (Hair et al., 2021). The remaining factor loadings can be seen in Table 2 (column ‘Construct validity’).
Constructs, Variables and Items, Along with the Results of the Construct Reliability and Validity Analysis.
Afterwards, we evaluated the model by calculating the fit indices (Hair et al., 2021) (see Table 3). All the scores, along with the chi-square value (χ²), degrees of freedom (df) and relative chi-square value (χ²/df), aligned with the recommended thresholds.
Results of Testing the Model Fit.
An analysis of composite reliability (CR), convergent and discriminant validity followed (Hair et al., 2021). The CR and AVE values, along with the factor correlation matrix, are shown in Table 4. The results revealed no issues regarding reliability and validity (CR > 0.7 and AVE > 0.5) (Hair et al., 2021).
Findings From the Investigation of Validity and Reliability.
CR = composite reliability; AVE = average variance extracted; ERP = emotional risk perception; CRP = cognitive risk perception; FCISS = fact-checking intent in seeking sources; FCISO = fact-checking intent in seeking others’ opinions; FCID = fact-checking intent in detecting misleading information.
The bold diagonal elements represent the square roots of the AVE.
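The CR and AVE criteria used above can be computed directly from standardised factor loadings. The following sketch illustrates the standard formulas; the loadings shown are hypothetical, not those reported in Table 2, since the study's dataset is not reproduced here.

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability from standardised loadings, assuming
    uncorrelated error terms: CR = (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2))."""
    l = np.asarray(loadings, dtype=float)
    num = l.sum() ** 2
    return num / (num + (1 - l ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardised loadings."""
    l = np.asarray(loadings, dtype=float)
    return (l ** 2).mean()

# Hypothetical loadings for a three-item construct (illustrative only).
loadings = [0.72, 0.81, 0.78]
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
print(f"CR = {cr:.3f}, AVE = {ave:.3f}")  # CR = 0.814, AVE = 0.594

# Thresholds applied in the study (Hair et al., 2021): CR > 0.7, AVE > 0.5.
# For discriminant validity (the bold diagonal in Table 4), the square root
# of a construct's AVE must exceed its correlations with other constructs.
print(f"sqrt(AVE) = {ave ** 0.5:.3f}")
```

This also clarifies the table note: each bold diagonal element is simply the square root of that construct's AVE, compared against the off-diagonal factor correlations in the same row and column.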
To inspect common method bias, we performed an exploratory factor analysis with Harman’s single-factor test (Podsakoff et al., 2012). The results revealed that the first factor explained 34.59% of the variance, which was below the threshold of 50% (Podsakoff et al., 2012). Accordingly, there were no concerns that a single factor dominated the variance. Further, the evaluation of multicollinearity revealed that the variance inflation factors for all constructs in the study ranged between 1.03 and 1.43, well under the threshold of 5.0 (Hair et al., 2021). This indicated that multicollinearity was not an issue in our study, and that the constructs were sufficiently independent. Finally, we incorporated a common latent factor into the CFA to examine the shared variance among the observed indicators attributable to common method bias. The findings indicated that the average common variance per item was 20.25%, suggesting that common method bias was unlikely to have substantially affected the results (Podsakoff et al., 2012).
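The two diagnostics above can be sketched numerically. The snippet below runs Harman’s single-factor check (share of variance captured by the first unrotated factor, here approximated by the first principal component) and computes variance inflation factors from the inverse correlation matrix. The data are synthetic and purely illustrative; they stand in for the study's survey items, which are not public.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in data: 200 respondents, 8 Likert-type items sharing
# a moderate common (method) factor.
common = rng.normal(size=(200, 1))
items = 0.5 * common + rng.normal(size=(200, 8))

# Harman's single-factor test: proportion of total variance explained by
# the first unrotated factor. Eigenvalues of the correlation matrix sum to
# the number of items, so the share is eigvals[0] / n_items.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]          # descending order
first_factor_share = eigvals[0] / eigvals.sum()
print(f"First factor explains {first_factor_share:.1%} of the variance")
# The study's criterion (Podsakoff et al., 2012): must stay below 50%.

# Variance inflation factors: VIF_i = 1 / (1 - R^2_i), which equals the
# diagonal of the inverse correlation matrix of the indicators.
vif = np.diag(np.linalg.inv(corr))
print(f"max VIF: {vif.max():.2f}")
# The study's cut-off (Hair et al., 2021): VIF < 5.0.
```

With weakly correlated items like these, the first-factor share lands well below 50% and the VIFs stay close to 1, mirroring the pattern reported in the study (34.59% and VIFs of 1.03 to 1.43).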
The Structural Model
We developed and estimated a structural model. The fit indices indicated that all values conformed to the recommended thresholds (Table 5). Evaluation of the model’s reliability and validity showed that all CR values surpassed the established threshold of CR > 0.7, while the AVE values were above 0.5 for most of the constructs, except for cognitive risk perception (AVECRP = 0.489) and emotional risk perception (AVEERP = 0.391). Notwithstanding these constraints, we decided to retain both cognitive and emotional risk perception in the analysis, owing to their theoretical importance in explaining the model’s structural relationships. Moreover, the reliability metrics of both constructs (CR and Cronbach’s alpha) fell within acceptable thresholds (Hair et al., 2021). This discrepancy may stem from the complexity and multifaceted structure of cognitive and emotional risk perception. Such constructs frequently have intricate elements that existing measurement scales may reflect inadequately, resulting in lower AVE values.
Results of Evaluating the Final Structural Model’s Model Fit.
The Final Model
The path diagram in Figure 2 displays the estimates from the structural model, shown on the paths. The estimates are standardised path coefficients, significant at the 95% or 99.9% confidence level. The latent variables within the model explained 82.62% of the variance in the outcome variable ‘Fact-checking intent in detecting misleading information’.

The final model (Significance level: *p < .05; ***p < .001). Figure 2 Alt Text: Figure depicting a structural equation model (SEM) illustrating the relationships between the latent and observed variables. The diagram displays arrows connecting the ovals (latent variables) and rectangles (observed variables), to demonstrate the statistically significant paths in the model. The double-headed arrows indicate correlations and the single-headed arrows denote directional relationships. The latent variables are Emotional risk perception, Cognitive risk perception, Fact-checking intent in seeking sources, Fact-checking intent in seeking others’ opinions, and Fact-checking intent in detecting misleading information.
The model firstly suggested that the primary predictor of fact-checking intent in identifying deceptive material was the intention to fact-check while searching for sources (β = .42, p < .001). This indicates that, as users increasingly looked for alternative sources of information regarding COVID-19, their reliance on online tools for fact-checking also increased.
Secondly, the intention to detect deceptive information through fact-checking was influenced by an individual’s intention to fact-check by actively seeking out the opinions of others (β = .25, p < .001). Moreover, the intention to fact-check when seeking others’ perspectives had a significant effect on the intention to fact-check when seeking sources (β = .41, p < .001). This indicates that individuals who are more inclined to seek opinions from other users about content on social media are also more likely to look for additional information sources and use online tools to verify the accuracy of that information.
Thirdly, the intent to fact-check to detect misleading information was found to be explained by cognitive risk perception (β = .09, p < .05). This means that individuals who viewed COVID-19 as a cognitive risk were more inclined to utilise online resources for verifying the accuracy of material found in social media news pertaining to COVID-19.
Lastly, our results revealed that emotional risk perception had a statistically significant effect on the intention to fact-check by seeking the opinions of others (β = .21, p < .001) and on the intention to fact-check by seeking sources (β = .21, p < .001). The findings indicate that those who perceived COVID-19 as a greater emotional threat are more likely to seek out other users’ opinions regarding the credibility of news sources. Moreover, individuals who have a stronger emotional response to the risk of COVID-19 are more likely to search actively for alternative sources of information, particularly in social media news.
Discussion
Our study proposed and tested a model on how adult Facebook users’ perceptions of COVID-19 as a risk affected their intention to fact-check. Unlike previous risk perception models (Jiang, 2022; Jiang et al., 2022; Lee et al., 2023; Nah et al., 2023), our study contributes to the field by revealing a positive effect of cognitive risk perception on social media users’ intentions to use internet tools for fact-checking information.
Firstly, our research uncovered that users who perceived COVID-19 as a cognitive risk to a greater extent demonstrated a greater propensity to employ online tools for fact-checking. The finding aligns with prior research highlighting that cognitive risk triggers systematic information processing, leading individuals to pursue validation from credible sources and instruments (Loewenstein et al., 2001). Similarly, our research showed that the more emotional risk individuals perceived, the more inclined they were to utilise ample sources to verify the information they came across about COVID-19 in the news. Furthermore, this perception of risk also indirectly influenced their tendency to employ internet tools for fact-checking. It was also discovered that people’s emotional vulnerability influenced their inclination to seek out other users’ thoughts regarding the news makers they interacted with. These findings align with research on user-generated content consumption. Pennycook and Rand (2019) found that, in information crises, emotional risk drives individuals to increase their trust in user-generated content, such as the opinions and reactions of their counterparts. Hence, they may be more motivated to check information in multiple sources to mitigate uncertainty.
In contrast to our findings, Jiang et al. (2022) found that negative experiences with obtaining information indirectly decreased persons’ risk assessment of health facts; that is, they did not substantiate a direct effect of risk perception on fact-checking (Jiang, 2022; Jiang et al., 2022). Our results thus add the specificity that the emotional component of risk perception leads individuals to intend to check information within their social media network, including family and friends (Kuru et al., 2023; Zhao et al., 2023), resulting in information seeking prior to using online tools for fact-checking, while the cognitive element of risk perception leads them solely to the intent to employ fact-checking tools. Kiviniemi et al. (2018) also found that emotional or affective risk perception is more influential in short-term behaviour change, whereas cognitive risk perception is more influential in long-term behaviour change. In our case, it may be that emotional pandemic risk perception led individuals to seek their counterparts’ opinions as a practice of correction (Kuru et al., 2023), instead of using online tools to fact-check information about the pandemic in the short term, due to a momentary online public discussion on social media. As they were aware that the discussion concerned the same topic, and may even have participated in it, they may have used it as a source of information. We may thus conclude that, in the long term, only cognitive risk perception is preserved when individuals intend to employ internet tools for fact-checking, as the risk may no longer be on the agenda of the online public discussion (Kiviniemi et al., 2018).
Secondly, our research revealed a connection between persons’ inclination to verify news regarding COVID-19 by seeking other sources of information and comments from others, and their inclination to utilise online tools for fact-checking. The finding aligns with previous research (Nah et al., 2023), which found that high cognitive involvement in the assessment of digital content leads to employing a variety of verification strategies, e.g., using fact-checking platforms. The finding can also be explained by the argument (e.g., Lewandowsky et al., 2012) that online conversations can lead to psychological reactance, causing individuals to become distrustful of certain information. Accordingly, we can assume that this skepticism may lead individuals to alter their ways of fact-checking, so that they use available online fact-checking tools. It may further prompt users to enhance their understanding of health, hence facilitating accurate decision-making (Barua et al., 2020) and fostering users’ inclination to interact with COVID-19-related content on social media. Such accurate decision-making may also be applied to selecting credible sources of information. Concurrently, active users of social media are more inclined to have confidence in the accuracy of the COVID-19 news presented on social media platforms (Kožuh & Čakš, 2021). Thus, we can speculate that the experience on social media could have a self-corrective nature, where users, through multi-stage activities driven by individual skepticism, may alter their own experience so as to diminish misinformation, engaging more with news that has been fact-checked.
Finally, our study extends the existing perspective in which individuals’ perceptions of COVID-19 as a risk culminate in reactions to protect themselves from the health threat; in terms of media content observed through risk perception, this manifests as a need for reliable information, which, in the final stage, can result in fact-checking. Individuals who have a strong desire for reflective and analytical thinking are more inclined to engage in information verification (Kaufman & Taylor, 2024), and could thus be less susceptible to the effects of disinformation. So, when the authorities took pandemic measures, users were also empowered to evaluate information (Liu & Huang, 2020). In the times of a new, barely known disease (Krause et al., 2020), this intention was provoked even more, and it was enabled primarily by the previous efforts of enthusiasts and institutional media measures that allowed fact-checking of the published content.
Fact-checking enabled users to discern reliable information regarding COVID-19 prevention, as opposed to the misleading assertions disseminated on social media platforms (Schuetz et al., 2021). Therefore, this personal trait of seekers for reliable information was vital in browsing through the ‘best available evidence’ (Krause et al., 2020) sourced from the government, public health officials, or other verified sources of information. Ensuring access to accurate and trusted information, whether in a pandemic or not, is, thus, crucial. It can be manifested through ensuring credible and accessible official sources of information, investing in the development of the public's critical appraisal skills, and informing about the benefits and threats of available ways of fact-checking.
Implications and Limitations
Theoretical Implications
Firstly, our study contributes to the literature with a new risk perception model which, unlike previous models, differentiates comprehensively between the sources of fact-checking while examining the effect of both cognitive and emotional risk perception related to pandemics. Our study revealed that cognitive risk perception and emotional risk perception play distinct roles, while both interplay in individuals’ decisions on where to fact-check information. Persons who perceive emotional risk are more likely to rely on subjective sources of information, such as reading others’ perspectives on the topic. On the other hand, individuals who perceive cognitive risk demonstrate a higher propensity to utilise fact-checking tools. These findings can be linked directly to overarching communication theories, such as the Theory of Planned Behavior (TPB) (Ajzen, 1991) and Self-Determination Theory (SDT) (Deci & Ryan, 2012), providing new opportunities for theoretical advancement.
The TPB suggests behavioural intention depends on attitudes, subjective norms and perceived behavioural control (Ajzen, 1991). In this context, our findings indicate that cognitive risk perception may align with attitudes, as users who assess the risks critically are likely to have an attitude which promotes fact-checking actions. Emotional risk perception may correspond with subjective norms, as this perception could stem from fear-induced expectations of society during the health crisis. Future studies could include cognitive and emotional risk perceptions as antecedents to attitudes, subjective norms, and perceived behavioural control, which would allow for enhancing the predictive accuracy of TPB in health-related domains.
Further, the SDT posits that individuals achieve self-determination when their needs for competence, relatedness and autonomy are satisfied (Deci & Ryan, 2012). Within the framework of our research, the way we distinguished between cognitive and emotional risk perception allows us to comprehend how the necessity for precise knowledge (competence) and self-directed validation of news (autonomy) are triggered when we react to the risks we perceive. Accordingly, future studies could explore how perceived competence and autonomy interact with emotions during health crises. This would allow a deeper understanding of user motivations for fact-checking.
Secondly, we showcased potential avenues for advancing the theoretical understanding of risk perception in situations where individuals seek to verify the credibility of news creators by seeking the opinions of other users. This could be connected to the study of third-person effect regarding digital disinformation (Liu & Huang, 2020), which highlights that individuals tend to believe they are less susceptible to the influence of COVID-19 disinformation compared to people they are close to or people they are not familiar with. Previously, it was already shown that individuals who exhibit self-other discrepancy are more likely to view others as being more affected by fake news than themselves (Iftikhar et al., 2022). Namely, exposure to misinformation on social media was associated with a disparity in perception between the individual and others.
This difference in perception serves as a way for individuals to cope with psychological stress and improve their overall sense of well-being. This could be the consequence of social media platforms’ self-promotion of anti-disinformation tools, and of public discussion on efforts to stop the spread of disinformation (Liu & Huang, 2020). As shown in our research, people are aware of the presence of those tools and are open to using them.
Practical Implications
Our findings revealed that low cognitive risk perception leads to little use of online tools for fact-checking, while the feeling of emotional risk does not directly influence the usage of these instruments. Instead, higher emotional risk perception leads to a higher tendency to employ several sources to validate information regarding COVID-19 in the news. These insights may assist practitioners at different levels in applying the study findings productively.
Firstly, health professionals may integrate risk communication into health professional training, enhancing cognitive and emotional risk perceptions to motivate more thorough fact-checking. Namely, health professionals may use narrative risk communication techniques to convey urgency and responsibility related to relevant diseases. They may also be equipped with digital tools, allowing real-time verification of information, and, thus, effective communication of risks (Dinçer, 2024) while promoting fact-checking among patients. Our findings also indicate the necessity of formulating evidence-based emotional messaging strategies for health communication campaigns that use fear appeals alongside efficacy messages.
Secondly, our findings may serve Information Technology specialists, who may employ Artificial Intelligence to improve user cognitive and emotional awareness and so boost fact-checking. As Machine Learning advances, the systems will progressively be able to discern people’s distinctive traits, convictions, requirements and susceptibilities (Kertysova, 2018). Concrete strategies in social media policies may involve employing machine-learning algorithms to identify possibly misleading claims and recommend credible sources, using gamified Artificial Intelligence chatbots to educate users, or providing personalised feedback based on the user’s history of engaging with verified material. Personalisation would consequently offer emotionally impactful prompts that fit user interests and fears, while highlighting the beneficial effects of user fact-checking initiatives.
Thirdly, educators may benefit from our research insights, as they get informed about what kind of fact-checking content could be integrated into formal and nonformal educational programmes to improve digital literacy and encourage fact-checking habits (Frau-Meigs & Corbu, 2024). Concrete strategies may include integrating fact-checking tools into school curricula, developing mobile applications offering quick tutorials on verifying claims, serious games for critical thinking, and organising workshops to instruct users in recognising misinformation (Frau-Meigs & Corbu, 2024).
Limitations and Future Directions
First, it is important to highlight that the individuals involved in our research were relatively young and highly educated. They may have been more mature in how they manage their behaviour on social media (Alshare et al., 2022), which may influence their tendency for fact-checking. Conversely, older or less-educated persons may exhibit distinct behaviour: older individuals are more prone to disseminate misinformation on social media than their younger peers, irrespective of their educational attainment (Guess et al., 2019). Likewise, emotional risk perception may depend on age, as older individuals were found to experience fewer negative emotions in their lives than their younger counterparts (Carstensen et al., 2020). Hence, it is important to interpret the results cautiously and avoid broad generalisations to the whole population. Future studies should focus on examining age- or education-based differences in risk perception and fact-checking behaviours on social media.
Second, we do not know where and how the risk perception about COVID-19 was established. There might be an influence of acquiring news in traditional media, of news authors towards whom users developed various levels of trust, or of the cultural background structure of the social media bubble. Future research would thus benefit from enhancing the research model by adding culture-based variables and attending to their intersectionality. It would also be advantageous to replicate the study with a longitudinal design, to capture temporal fluctuations and possible causal links. This approach could enable researchers to investigate the effects of external influences, including major societal events, communication awareness efforts and policy measures, on risk perception and fact-checking behaviour over time. A longitudinal approach could uncover crucial time points for effective interventions, and illuminate the stability or volatility of behaviours during various stages of a crisis.
Third, the content’s message was not in the focus of our study. It would be intriguing to conduct an experiment to figure out whether the message content affects users’ intentions to fact-check the information appearing in this content. The results might be different, whether the content is (in)consistent with users’ points of view, and whether it appeals mainly to users through logical arguments or emotions. Likewise, the nature of content is changing as new social media platforms emerge, which may influence misinformation dynamics significantly. It would be beneficial to examine the impact of ephemeral content (e.g., Snapchat), or content based primarily on algorithmic recommendations (e.g., TikTok) on users’ intent to verify the information they are exposed to. As the content shifts from textual to multimodal form, such research may provide distinct insights into cognitive and emotional leverages for fact-checking intent.
A further limitation stems from the methodological approach, as we employed non-probability sampling and conducted a statistical analysis on data collected through social media platforms. This may indicate that our participants were active social media users interested in the topic of the study, as it was their own choice to participate. The sample may thus not represent the broader population accurately. In the future, we recommend mitigating this issue by utilising probability-based sampling methods to enhance the representativeness.
Additionally, two limitations are related to the statistical analysis. We did not analyse the potential moderating or mediating variables that may influence the relationship between risk perception and fact-checking intent. Future research could, thus, incorporate moderating or mediating individual (e.g., age, gender, education level) or contextual variables (e.g., trust in media) into a more complex theoretical model. Furthermore, the limitation may also stem from the convergent validity of the structural model, as the AVE values for two constructs were slightly below the acceptable threshold. Although the internal reliability was sufficient, and, thus, both constructs were maintained in the analysis, future research should strive to modify the measurement scales to solve these issues. New indicators may be added to improve content validity, or qualitative pretesting should be conducted to ensure that the constructs are represented comprehensively (Hu, 2014).
Footnotes
Ethical Considerations
This study received ethical approval from the IRB of the Faculty of Arts at the University of Maribor (approval #038-33-89/2020/FFUM) on 21 December 2020.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Slovenian Research and Innovation Agency under Grant no. P5-0399.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data Availability Statement
The dataset analysed during the current study is available from the corresponding author on reasonable request.
