Abstract
This study investigates which intervention strategies most effectively increase privacy protection behavior. Drawing upon Protection Motivation Theory, we examine the short- and long-term effects of (combinations of) three strategies: (1) increasing awareness of the threat to privacy, (2) training effective privacy protection behavior, and (3) addressing and combating privacy fatigue. We conducted a longitudinal experiment in the Netherlands with three waves (
Keywords
In everything they do online, people share information—knowingly or unwittingly—with other users, and with commercial, non-commercial, and governmental entities (Acquisti et al., 2015). Digital participation is even believed to be impossible without sharing personal data (Ellison et al., 2007; Kane et al., 2014; Krasnova et al., 2010). We share personal information online to establish and maintain social connections, for our own enjoyment and convenience, to execute commercial transactions, to receive personalized messages and services, and to optimize the performance of websites and apps (e.g., Bansal et al., 2016; Ellison et al., 2007; Gibbs et al., 2011; Krasnova et al., 2010; Robinson, 2017). Beyond what people share themselves, companies also extract data. The continuous extraction of personal data and our ever-growing dependence upon the digital platforms that enable this are reflected in the notions of “surveillance capitalism” (Zuboff, 2019) and “data capitalism” (West, 2019).
The most important downside of this continuous data extraction is the decline of people’s informational privacy, which includes people’s right to control the collection and dissemination of personal information (Baruh et al., 2017; Nissenbaum, 2009). Having informational privacy means being able to determine for yourself when, how, and to what extent information about you is communicated to others (Westin, 1967). Managing and protecting online privacy has become an essential part of everyday life (Büchi et al., 2017). However, research shows that people rarely take action to protect their privacy online and often do not know how to do so (Boerman et al., 2021), supporting the idea that people often lack informational privacy.
Self-management of online privacy is particularly important as regulations and privacy law (such as the GDPR) mostly delegate the responsibility of privacy protection to users (Degeling et al., 2019; Strycharz et al., 2021). Research has shown that whether people protect their privacy depends on their privacy concerns and attitudes, knowledge, internet and digital skills, experience with privacy violations, education, gender, and age (e.g., Baruh et al., 2017; Büchi et al., 2017; Dienlin & Trepte, 2015; Smit et al., 2014). In addition, privacy protection behavior is negatively influenced by one’s level of privacy fatigue (Choi et al., 2018), privacy cynicism (Hoffmann et al., 2016; Lutz et al., 2020), and digital resignation (Draper & Turow, 2019). Thus, to increase one’s privacy protection behavior, interventions should boost the factors identified as positive predictors of the behavior and mitigate the negative ones.
To empower internet users and improve their resilience, this study aims to gain insights into which (combination of) intervention strategies aimed at boosting the positive and mitigating the negative factors most effectively increase privacy protection behavior amongst Dutch adults. Building on Protection Motivation Theory (Rogers, 1975, 1983; Witte, 1992), we propose, develop, and examine three strategies: (1) increasing awareness of the threat to privacy, (2) training effective privacy protection behavior, and (3) addressing and combating privacy fatigue. By doing so, this study contributes to the literature in three ways. First, although prior research has shown that interventions focusing on increasing knowledge and digital literacy can (indirectly) decrease privacy protection amongst adults (Strycharz et al., 2019, 2021) while improving it amongst children (Desimpelaere et al., 2020), this study is the first to test the effectiveness of intervention strategies that focus on other factors that influence privacy protection, namely perceived problem severity, efficacy, and privacy fatigue. By doing so, it answers calls to understand how we can combat digital resignation and empower people to manage their privacy (Draper & Turow, 2019). Second, whereas prior studies only measured immediate effects, we conduct a longitudinal experiment with three waves. This longitudinal approach allows us to test the effectiveness of the strategies immediately after the intervention, in the short term (2 weeks after the intervention), and in the long term (2 months later). Such an approach is particularly important because it helps us understand whether any intervention effects persist over time and thus truly empower people to protect their privacy in the long term.
Third, whereas prior studies focused on very specific privacy behaviors (e.g., rejecting cookies) or a limited set of behaviors, we test the effects on a range of 11 different behaviors that limit both data sharing by users and data collection by companies.
Antecedents of Privacy Protection Behavior
People can protect their online privacy in two ways: by adopting privacy protection measures to limit the data extraction by companies and by limiting the data they share themselves on the internet (Baruh et al., 2017; Büchi et al., 2017). These two behaviors, the use of privacy protection measures and limiting information disclosure, do not seem to be related to each other (Baruh et al., 2017). In this study, we focus on the actual measures that people actively take to protect their privacy by limiting the data extraction by others, such as deliberately rejecting cookies, using opt-out websites and add-ons that limit data tracking, and turning off ad personalization. This excludes limiting information disclosure, such as deliberately not filling out personal information, refraining from posting on social media, or untagging posts or pictures.
Research has shown that people who are more concerned about their privacy, or who have a high desire for privacy, are more inclined to protect their privacy (e.g., Baruh et al., 2017; Büchi et al., 2017; Dienlin & Trepte, 2015). In addition, people who have experience with privacy violations are more likely to take action to protect their privacy (Büchi et al., 2017; Chai et al., 2009). Furthermore, demographic variables such as education, gender, and age seem to be related to privacy protection (e.g., Baruh et al., 2017; Büchi et al., 2017; Dienlin & Trepte, 2015; Smit et al., 2014). More importantly, privacy protection seems to be positively related to individual levels of knowledge, such as privacy literacy (Baruh et al., 2017; Masur, 2020; Park, 2013), knowledge of data collection techniques (Ham, 2017; Ham & Nelson, 2016), and internet skills (Büchi et al., 2017). This is also why increasing literacy and knowledge is believed to be an effective way to empower people to protect their privacy (Büchi et al., 2017; Masur, 2020; Park, 2013).
Furthermore, a few studies have examined the effectiveness of knowledge interventions aiming to increase privacy protection. A study by Desimpelaere et al. (2020) showed that a privacy literacy training enhanced 9- to 13-year-old children’s general understanding of data practices and helped them better protect their privacy (i.e., by limiting information disclosure). However, studies amongst adult samples showed that interventions that increase technical and legal knowledge decreased the perceived severity of, and susceptibility to, the problem, indirectly making people less inclined to protect their privacy by turning personalization off (Strycharz et al., 2019) or rejecting cookies (Strycharz et al., 2021).
Thus, knowledge interventions may not always have the anticipated empowering effect. One reason why interventions focusing on knowledge may not be ideal is the so-called “control paradox” (Brandimarte et al., 2013). People with more knowledge are also more confident in dealing with privacy issues and therefore underestimate the risks, which can result in
Awareness of the Threat to Privacy
Based upon the Protection Motivation Theory (PMT; Rogers, 1975; Witte, 1992), we first propose an intervention strategy that addresses people’s awareness of the threat to their privacy. Numerous studies have applied the PMT to the context of privacy (e.g., Boerman et al., 2021; Dienlin & Metzger, 2016; Ioannou et al., 2021; Strycharz et al., 2019). The PMT stems from health communication research and was developed to understand the factors that drive people’s motivation to protect themselves against a health threat. The PMT proposes that the motivation to protect the self from a threat (such as a virus or, in this context, a threat to one’s privacy) depends upon the
Drawing upon the PMT, we argue that increasing the perceived threat is an important first step to motivate people to protect themselves against this threat. Prior work that applied the PMT to online privacy has indeed shown that the perceived severity of data collection, usage, and sharing is an important predictor of privacy protection behavior (Boerman et al., 2021; Strycharz et al., 2019, 2021). We therefore propose that one way to increase privacy protection behavior is to address the threat appraisal, by emphasizing both the severity and susceptibility of the threat to people’s privacy.
In the design of the threat strategy, we draw upon the Impersonal Impact Hypothesis (Slater et al., 2015; Tyler & Cook, 1984). This hypothesis posits that personal and societal risk judgments are two different things: even if people understand that there is a societal privacy problem, for instance as a result of media coverage of the issue, they may not believe it to be a personal problem (Slater et al., 2015). This notion clearly connects to the PMT’s distinction between perceived severity (i.e., the understanding that there is a severe privacy threat) and perceived susceptibility (i.e., the belief that you can actually experience this threat). Thus, to ensure that people understand that the problem is not only severe but also applies to them personally, we developed an intervention strategy that stresses that privacy threats are personally relevant by directly applying the issues to people’s own situation. This leads to the following hypotheses:
H1: A strategy addressing the threat of online data collection, usage, and sharing increases the threat appraisal (i.e., perceived severity and susceptibility) (a) immediately after the intervention, in the (b) short- and (c) long-term.
H2: A strategy addressing the threat of online data collection, usage, and sharing increases (a) privacy protection intentions immediately after the intervention, and privacy protection behavior in the (b) short- and (c) long-term.
Training Effective Privacy Protection Behavior
Further building upon the PMT, we also expect that strategies focusing on the coping appraisal can be helpful. Following the PMT, efficacy is an important driver of protection motivation: when people do not believe that they can counter a threat, they are unlikely to try to protect themselves against it (Rogers, 1975). In the context of privacy, the coping appraisal consists of people’s belief in their own ability to protect their privacy on the internet, that is, self-efficacy, and their belief that a given response effectively prevents threats to privacy, that is, response efficacy (see e.g., Boerman et al., 2021).
Previous research has shown that privacy and internet literacy and skills have a positive influence on privacy protection behavior (Bartsch & Dienlin, 2016; Büchi et al., 2017; Masur, 2020). Looking at the PMT specifically, research found that especially response efficacy influences privacy protection (Boerman et al., 2021). We therefore developed a strategy that focuses especially on increasing people’s self-efficacy and response efficacy.
In the design of the training, we focused on two important elements. First, we targeted people’s self-esteem and confidence (and thus, self-efficacy). Research has shown that building confidence, esteem, and self-efficacy can remove reluctance and any resistance people may have (Knowles & Linn, 2004). To raise self-efficacy, we developed and tested a strategy teaching participants step-by-step how to take specific privacy protection measures (i.e., how to opt out of personalized advertising, how to accept only necessary cookies, and how to install an add-on that blocks trackers). At the end of each step, participants were praised (“well done”), told what they could now do themselves, and rewarded for their effort with a digital badge. The second important element of the training emphasized the effectiveness of the learned measure (to increase response efficacy) by addressing what the measure actually does (i.e., “You can now see how many trackers on this page are blocked”). To examine the anticipated effects of this strategy, we hypothesize:
H3: Training people to use privacy protection measures increases the coping appraisal (i.e., self-efficacy and response efficacy) (a) immediately after the intervention, in the (b) short- and (c) long-term.
H4: Training people to use privacy protection measures increases (a) privacy protection intentions immediately after the intervention, and privacy protection behavior in the (b) short- and (c) long-term.
Acknowledging and Combating Privacy Fatigue
The inability to protect oneself from a threat is assumed to induce irrational feelings such as helplessness and loss of control (Rogers, 1983). Therefore, we propose that, in addition to the two more cognitive, rational strategies, one could also target a more emotional appraisal that is highly relevant in the context of online privacy:
Research has shown that individual levels of privacy fatigue and privacy cynicism are important predictors of privacy protection behavior (Choi et al., 2018; Hoffmann et al., 2016; Lutz et al., 2020). People who are more fatigued and cynical feel more powerless, put less effort into making privacy decisions, and are thus less likely to protect their privacy, instead “doing nothing” (Choi et al., 2018; Stanton et al., 2016). Other studies also suggest that people who have resigned from engaging in privacy protection often feel that such efforts are futile or unsuccessful (Draper & Turow, 2019; Selwyn & Pangrazio, 2018).
We therefore develop and test a strategy that specifically aims to diminish feelings of privacy fatigue in order to increase privacy protection behavior. However, if people have strong feelings of privacy fatigue, they may resist a message that tries to change these feelings, for instance through self-assertion (Fransen et al., 2015; Jacks & Cameron, 2003). Self-assertion entails reminding yourself that you are confident in your attitudes (in this case, your privacy fatigue) and that nothing can be done to change them. To overcome this self-assertion, we developed a strategy that focuses on (1) acknowledging the privacy fatigue and (2) combating this fatigue by showing that privacy protection is both simple and effective. By acknowledging the privacy fatigue, we also acknowledge any resistance people may have, which has been shown to effectively defuse resistance and make a message more persuasive (Knowles & Linn, 2004).
Privacy fatigue consists of two aspects (Choi et al., 2018): emotional exhaustion (i.e., feeling useless and incapable of doing something about your own privacy) and cynicism (i.e., the feeling that privacy protection is futile). Therefore, for each example, the strategy emphasizes that the action is both simple, countering the feeling of being useless and unable to act, and effective, countering the idea that such actions are futile. Expecting this strategy to work, we hypothesize:
H5: A strategy acknowledging and combating privacy fatigue decreases privacy fatigue (a) immediately after the intervention, in the (b) short- and (c) long-term.
H6: A strategy acknowledging and combating privacy fatigue increases (a) privacy protection intentions immediately after the intervention, and privacy protection behavior in the (b) short- and (c) long-term.
Finally, applying one, two, or all three strategies in combination may create a synergy effect, rendering them even more effective. To examine whether this is the case, we also compare all possible combinations of the three strategies.
RQ1: Which (combination of) strategies has/have the largest, positive effect on privacy protection behavior in the short- and long-term?
Method
Design and Sample
To test our strategies, we conducted a longitudinal experiment with a 2 (threat strategy vs. no threat strategy) × 2 (training strategy vs. no training) × 2 (privacy fatigue strategy vs. no privacy fatigue strategy) between-subjects design. We manipulated which strategy or strategies people were exposed to; the full-factorial design resulted in eight experimental conditions (i.e., no strategy; addressing the threat; training effective behavior; acknowledging and combating privacy fatigue; threat and training; threat and fatigue; training and fatigue; and threat, training, and fatigue).
The data were collected in November 2020 (wave 1,
Sample Descriptions for the Three Waves.
Procedure
Participants were invited via the online panel and were redirected to our experiment in Qualtrics. In the first wave, we first screened the participants by asking them to confirm that they were participating on a laptop or computer, preferably using Firefox or Chrome. We decided to only allow participation on a laptop or computer to ensure that the strategies were clearly visible and readable (they were not mobile-friendly). In addition, some of the steps in the training (e.g., installing Ghostery) were specific to laptops and computers. All participants who confirmed using a laptop or computer were asked to read the study’s information and give their informed consent. We asked them to read the information carefully and to follow the instructions. They were also told that they could go back and forth if something was unclear. Participants were then randomly assigned to one of eight conditions and were shown either none of the strategies, or an intervention using one strategy or a combination of our strategies. After the intervention, we asked participants about their current privacy protection behavior, followed by their intention to perform these behaviors. We then administered an attention check and measured knowledge, susceptibility, severity (self and other), self-efficacy, response efficacy, privacy concerns, response costs, attitude toward personalization, privacy fatigue, and digital literacy. We ended the questionnaire by asking participants for their response to the intervention and their demographic information (see all questions and their order in all waves in Table 2).
Measures in Questionnaires per Wave in Order of Appearance.
In the second wave, participants were all directed to a questionnaire that matched their condition in wave 1. These questionnaires started with informed consent and then showed a shortened version of the strategy or strategies. We then asked participants about their current privacy protection behavior, followed by the same measures as in wave 1.
The third wave repeated the informed consent and the questions in the same order but did not include any strategy reminders. At the end of wave 3, all participants were also given the opportunity to download a PDF of the training strategy.
Stimulus Materials
We created and pretested several versions of each strategy twice. In Pretest 1, 86 students (
In Pretest 2, we added a control group (no strategy) that served as a baseline, and measured the same variables. We randomly assigned 92 students (
All final strategies (see Figure 1 for screenshots) started by explaining the issue: “You’ve probably heard that companies on the internet collect, use, and share your personal information with other companies in a variety of ways. Are you doing anything to protect your privacy online? You should.” (see Panel A in Figure 1). This was all the information participants in the no-strategy condition received.

Example screenshots of strategies.
The threat strategy continued with: “We know from research that people do not see the collection, usage, and sharing of personal information on the internet as a severe problem. With three examples, we would like to show you that it is.” We then explained three risks: (1) sensitive personal profiles that use private information to target vulnerable groups and influence you to buy (see Panel B in Figure 1), (2) having no control over which companies hold what information about you, and (3) personalized pricing.
The training strategy taught participants how to perform three specific behaviors step-by-step: specifically, how to (1) turn off personalization of ads (opt-out) via https://www.youronlinechoices.com (see Panel C in Figure 1), (2) only accept necessary cookies, and (3) install Ghostery to block trackers.
The fatigue strategy first acknowledged people’s privacy fatigue (“We know that you are probably tired of privacy issues and do not want to worry about your privacy online. You may even doubt whether this is necessary or whether it helps.”) followed by emphasizing how simple and effective three examples of privacy protection behaviors (identical to the training strategy) are (e.g., “Simple: installing of Ghostery only takes a minute. Effective: Ghostery shows how many and which companies collect your information and can block this automatically. Your information is no longer collected.”; see Panel D in Figure 1 for another example).
Measures
Table 2 provides an overview of all measures in the questionnaires of each wave. Tables 3 to 7 present the descriptive statistics of all relevant measures in the three waves.
Means and Standard Deviations of Privacy Protection Intention Scores Across the Eight Conditions at Wave 1.
Means and Standard Deviations of Privacy Protection Behavior Scores Across the Eight Conditions and the Three Waves.
Means and Standard Deviations of Perceived Severity and Susceptibility Scores Across Threat Strategy and the Three Waves.
Means and Standard Deviations of Perceived Self-Efficacy and Response Efficacy Scores Across Training Strategy and the Three Waves.
Means and Standard Deviations of Privacy Fatigue Across Fatigue Strategy and the Three Waves.
Privacy Protection Behavior
In all waves, we measured participants’ past privacy behavior. We stated that there are several ways to protect your personal information and privacy on the internet, and then asked participants how often (1 =
In wave 1, the past privacy behavior question specified that we were curious about their behavior before participating in this study. In addition, to be able to test immediate effects, we asked participants how often they intended to perform the eleven behaviors in the future. Mean scores of the three factors can be found in Table 3 (intention in wave 1) and Table 4 (past behavior in all waves).
Threat Appraisal
In each wave, we measured perceived susceptibility by asking participants to indicate to what extent they agreed (1 =
Coping Appraisal
We measured self-efficacy with the statements: “I am able to protect my personal information and online behavior (such as my name, location, and search and surfing behavior) on the Internet”; “I feel confident that I can secure my privacy on the Internet,” and “I can ensure that companies cannot collect my personal information and behavior on the Internet” (Boerman et al., 2021; Cronbach’s α = .85). Next, we measured response efficacy by asking to what extent (1 =
Privacy Fatigue
We measured individual levels of privacy fatigue using seven items from the scale by Choi et al. (2018), including statements regarding emotional exhaustion and cynicism, such as “I am tired of online privacy issues” and “I have become less interested in online privacy issues.” The mean of the seven items was used as a measure of individual privacy fatigue (Cronbach’s α = .88).
Attention Checks
All waves included two attention checks. Participants who failed both checks were redirected to the end of the questionnaire and excluded from the data. Following Kees et al. (2017), we included one question saying: “Research shows that people often pay little attention to reading the questions. We therefore want to check if you read this. If you are reading this, please fill out ‘[answer option]’. What is this study about?”, followed by four answer options. We also included an item in one of the scales asking people to tick a specific answer (“This is a question to test your attention, answer ‘Agree’ here”).
Results
To analyze the data, we conducted (1) ANOVAs to examine the immediate effects of the strategies on perceptions and behavioral intentions (in wave 1), and (2) mixed-effects models, with random effects for individual participants, to examine short-term (changes from wave 1 to wave 2) and long-term effects (changes from wave 1 to wave 3). For the analyses, coding, and typesetting, we used R (Version 4.0.3; R Core Team, 2018) and the R packages car (Version 3.0; Fox & Weisberg, 2019), psych (Version 2.0.12; Revelle, 2021), stats (Version 3.6.2; R Core Team, 2018), lme4 (Version 1.1; Bates et al., 2015), and tidyverse (Version 1.3; Wickham et al., 2019). The results are discussed per strategy to address the hypotheses and the research question. For clarity and conciseness, we mainly focus on significant effects in the description of the results. Tables 3 to 7 present mean scores for the different variables and strategies in the three waves. Tables 8 to 10 show a summary of the effects of the strategies on the three mechanisms, and Tables 11 and 12 show a summary of the effects of the strategies on behavioral intentions and the three privacy protection behaviors.
Fixed and Random Effects Models for Effects of the Threat Strategy.
Fixed and Random Effects Models for Effects of Training.
Fixed and Random Effects Models for Effects of Fatigue Strategy.
Three-Way ANOVA for Privacy Protection Intention.
Fixed and Random Effects Models for Privacy Protection Behaviors.
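The logic of the wave-1 between-subjects tests can be made concrete with a small sketch. The analyses reported here were run in R (stats, car, lme4); purely as an illustration, and with invented scores rather than the study's data, the F statistic underlying a one-way between-subjects ANOVA can be computed as follows:

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way between-subjects ANOVA.

    groups: one list of scores per condition.
    """
    k = len(groups)                      # number of conditions
    n = sum(len(g) for g in groups)      # total sample size
    grand = mean(x for g in groups for x in g)
    # Between-groups variance: how far condition means sit from the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-groups variance: spread of individual scores around their condition mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented 7-point intention scores for two hypothetical conditions
no_strategy = [3.0, 4.0, 3.5, 4.5, 3.0]
strategy = [4.5, 5.0, 4.0, 5.5, 5.0]
f_stat = one_way_anova_f([no_strategy, strategy])  # df = (1, 8)
```

With two conditions this F equals the squared independent-samples t statistic; the study's actual models additionally accommodate the 2 × 2 × 2 factorial structure and, for the wave-2 and wave-3 analyses, random participant effects (lme4).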
Effect of Threat Strategy
H1 proposed that the threat strategy would increase perceived severity and susceptibility. Results of a one-way ANOVA showed no significant immediate effect of the threat strategy on perceived severity,
The threat strategy also did not have an immediate effect on perceived susceptibility,
Regarding privacy protection behavior (H2), the threat strategy did immediately increase the intention to block tracking (threat strategy
Effect of Training Strategy
H3 proposed that the training strategy would increase self-efficacy and response efficacy. The training strategy was successful at immediately increasing self-efficacy (training
Regarding response efficacy, the training strategy was successful at immediately increasing perceived efficacy of tracking blocking behavior (training
Second, the training strategy did not immediately impact the perceived efficacy of rejecting cookies,
Third, the training strategy did not impact perceived efficacy of cookie deletion,
Regarding privacy protection behavior (H4), a one-way ANOVA revealed a main effect of the training strategy on the intention to block tracking (training
Second, regarding cookies rejection, we observed a main effect of the training strategy on the intention to do so at wave 1: participants exposed to the training strategy showed slightly more intention to reject cookies (training
Third, we observed a main effect of the training strategy on the intention to delete cookies and history at wave 1 (training
Effect of Privacy Fatigue Strategy
H5 proposed that the privacy fatigue strategy would decrease privacy fatigue immediately and in the short and long term. The strategy aimed at combating fatigue did not influence privacy fatigue immediately at wave 1,
Regarding the impact of privacy fatigue strategy on privacy protection behavior (H6), we did not observe an effect on the intention to block tracking at wave 1, nor on the tracking blocking behavior in the short- and long-term. For cookie rejection, we did not observe an immediate effect on intention, but a significant effect in the short-term, meaning that cookie rejection behavior increased slightly more for participants exposed to the fatigue strategy (fatigue strategy
Effect of Combinations of the Strategies
RQ1 asked what combination of strategies is most effective in fostering privacy protection behavior. Regarding intentions measured at wave 1, we observed a significant interaction between the threat and fatigue strategy on cookie and history deletion intention,
Regarding privacy protection behavior, we observed a significant interaction between the training and fatigue strategies in the short-term (wave 2) for cookie rejection behavior,
Furthermore, we observed an interaction between the training and the threat strategies on cookie rejection behavior in the short-term,
Discussion
To empower internet users and improve their resilience, this study aimed to gain insights into which (combination of) intervention strategies most effectively increase(s) privacy protection behavior. Based upon prior research, we know that knowledge interventions may not always have the anticipated empowering effect (Strycharz et al., 2019, 2021). Therefore, we argue that an intervention designed to help people protect their privacy should focus on factors other than just knowledge. Drawing upon Protection Motivation Theory (Rogers, 1975; Witte, 1992), we proposed, developed, and examined the immediate, short-, and long-term effects of (combinations of) three intervention strategies: (1) increasing awareness of the threat to privacy, (2) training effective privacy protection behavior, and (3) addressing and combating privacy fatigue. The study’s longitudinal approach contributes to our understanding of which strategies can effectively empower people to protect their privacy in the long term.
Results showed that the training strategy achieved its anticipated effect. Teaching internet users how to take specific actions to protect their privacy increased both perceived self-efficacy to combat the threat and the perceived efficacy of the privacy protection measures included in the training. Moreover, the training strategy increased privacy protection behaviors. In particular, it immediately increased intentions to block tracking, reject cookies, and delete cookies and browser history. In addition, the training positively impacted tracking blocking behavior in the short and long term, actual cookie rejection in the short term (2 weeks later), and deletion behavior in the long term (2 months later). This means that the most effective behavior to safeguard privacy (blocking tracking) was effectively trained and that this effect persisted over time. The short-term effect on cookie rejection shows that the training registered, but that its effect wore off. This may be explained by the temporal costs of these protective behaviors. Whereas blocking tracking involves a one-time action (e.g., installing a tracking blocker), rejecting cookies requires repeated effort (i.e., rejecting cookies whenever a new website presents a cookie consent request or cookie wall), possibly raising the costs of this action. As past research on the PMT has shown, the more negatively the protective action is experienced, the less motivated users are to execute it (S. Milne et al., 2000), which may explain the wear-off effect for rejecting cookies.
Furthermore, we find that the other intervention strategies did not have the anticipated effects. The strategy aimed at increasing the threat appraisal (threat strategy) did not increase perceived severity and susceptibility, and the strategy combating fatigue (fatigue strategy) did not diminish privacy fatigue. The threat strategy did cause an immediate increase in the intention to block tracking, and the fatigue strategy only had a short-term effect on cookie rejection behavior.
Moreover, the results show that some combinations of strategies create a potential synergy effect, whereas others diminish the effectiveness of individual strategies. In particular, our findings demonstrate that while the threat and privacy fatigue strategies do not significantly impact the intention to delete cookies and browser history on their own, a combination of these strategies does. In addition, combining the fatigue and the training strategy increases cookie rejection in the short term more than either strategy does on its own. However, making privacy threats salient (i.e., the threat strategy) seems to decrease the effectiveness of the training on cookie rejection behaviors. This means that an intervention that aims to empower users to protect their privacy by having them reject tracking cookies should combine the actual training of privacy protection behaviors with a strategy that addresses privacy fatigue, while avoiding making privacy threats salient.
Theoretical Implications
This study reiterates the relevance of PMT in the context of online privacy, as asserted in previous studies (e.g., Boerman et al., 2021; Dienlin & Metzger, 2016; Ioannou et al., 2021; Strycharz et al., 2019). PMT posits that both the threat appraisal and the coping appraisal are important for protection motivation. Our findings indicate that the effectiveness of our strategies relied mostly on their influence on the coping appraisal rather than the threat appraisal. Self-efficacy scores in particular were rather low (overall mean scores ranged between 4.37 and 4.46 across the three waves), indicating that people are not very confident in their ability to protect their privacy. Additionally, the finding that especially the training effectively influenced the coping appraisal (i.e., self- and response-efficacy) and, ultimately, privacy protection underscores earlier claims that increasing skills and literacy is an effective way to empower people to protect their privacy (Büchi et al., 2017; Masur, 2020; Park, 2013).
Moreover, the threat strategy did not prove effective in our study, most likely because there was not much to gain when it comes to the threat appraisal. The means of both perceived severity and susceptibility were consistently high in all waves (overall mean scores ranged between 5.49 and 6.09), indicating a possible ceiling effect. In line with previous work (Boerman et al., 2021), these means suggest that the perceived threat to online privacy is already high. People may thus not require interventions to make them aware of the threats to their privacy; rather, they need to learn how to protect themselves.
Finally, our research demonstrates the importance of countering resistance and digital resignation by combating privacy fatigue. Although addressing fatigue alone did not influence perceived fatigue or most privacy protection behaviors, our research does suggest that addressing privacy fatigue can strengthen the effectiveness of the training, in particular by increasing the rejection of tracking cookies. This demonstrates the importance of focusing not only on cognitive, motivation-driven behavior, but also on less rational, more intuitive states (such as privacy fatigue) within the context of online privacy.
Practical Implications
As previous research showed no effect of interventions focusing on increasing technical or legal knowledge, this study makes a first step in unraveling which intervention strategies could work. Although the observed effects are not very large (i.e., differences never exceed one point), the results give hope that training that teaches people how to perform effective privacy protection behavior could empower consumers and motivate and enable them to protect their privacy. Providing such training can effectively boost people’s confidence (i.e., increase self-efficacy) and change privacy protection behavior, even in the long term.
The interventions were specifically designed to resemble existing online tools and toolkits (such as the Fix Your Privacy Tool Kit by Bits of Freedom, https://www.fixjeprivacy.nl). Thus, these types of online training interventions could be easily implemented in existing media and digital literacy programs and made available on the platforms of consumer and privacy organizations such as Bits of Freedom and Privacy Rights Clearinghouse. Furthermore, our study suggests that the effectiveness of such training can be boosted by also addressing the more emotional and intuitive factor of privacy fatigue.
Limitations and Future Research
Although we paid considerable attention to the development of our strategies, neither the threat strategy nor the fatigue strategy had the anticipated effects. The ineffectiveness of the threat strategy could be due to a ceiling effect; however, this explanation does not hold for the fatigue strategy. The mean scores of privacy fatigue certainly left room for improvement, but unfortunately our strategy did not achieve this goal. As privacy fatigue broadly encompasses feelings of uselessness and powerlessness, and the idea that privacy protection is futile, and based upon the information provided by our participants in the pretests, we decided to focus on emphasizing that protecting your privacy is both simple and effective. However, the text and examples used in our strategy did not seem to work sufficiently. Future research could further examine which strategies do diminish people’s feelings of privacy fatigue.
Furthermore, our intervention strategies were not mobile-friendly, and some of the training steps were specifically designed for desktop browsers. As online privacy does not only concern desktop users but also extends to mobile phones, further research could develop mobile-friendly versions of the intervention strategies and test their effectiveness.
In addition, privacy protection behavior was measured via self-report, which carries the risk of under- or overestimating behavior. Additionally, our longitudinal approach required us to repeat these questions, making our participants more familiar with them, which may have led to more socially desirable answers. Nevertheless, our data do not point in this direction, as more socially desirable answers would have increased reported privacy protection behaviors, which was not the case.
Finally, in this study we did not investigate whether the effects of the strategies vary across contexts and groups of people. The current study was conducted in the Netherlands, a member of the European Union in which the General Data Protection Regulation (GDPR) is in force. This regulation aims to set high standards for the collection and processing of personal data and to enhance consumer empowerment. As a result, the GDPR impacts how data collection on the web is designed, what data are collected, how users are informed about these practices, and what rights they have (Degeling et al., 2019). This might mean that while training privacy behaviors is effective in increasing protection behavior in the Netherlands, this may be different in countries in which consumers are offered less information and fewer privacy rights. Hence, future research could examine the interventions in a non-GDPR context. Furthermore, research has shown that there are important differences between people, making some of them more vulnerable to privacy threats than others (e.g., Kezer et al., 2016; Tifferet, 2019), and has emphasized the existence of new digital divides that are not universal but created by the context of online data collection (Helberger et al., 2021). Factors that influence vulnerability include, among others, age (Kezer et al., 2016), gender (Tifferet, 2019), and data collection context (Matz et al., 2020). Future research should examine whether intervention strategies that are more tailored to personal characteristics, needs, skills, and the context of data collection could be more effective in boosting the resilience of individuals.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
