Abstract
The growing trend of collecting data about individuals to track past actions and infer future attitudes and behaviors has fueled popular and scholarly interest in the erosion of privacy. Recent shifts in technologies around machine learning and artificial intelligence have intensified these concerns. This editorial introduces the articles in the special theme on digital resignation and privacy cynicism: concepts developed in the past decade to explain the growing powerlessness individuals feel in relation to their digital privacy even as they continue to experience consternation over the collection and use of their personal information. The papers in this special theme engage and extend existing research on these topics. The original articles and commentaries pose theoretical and practical questions related to the ways people confront the powerful institutional forces that increasingly shape many aspects of the information environment. They employ several methodologies and theoretical perspectives and extend the range of geographic, political, cultural, and institutional contexts in which privacy cynicism and digital resignation can be identified and examined. In addition to contextualizing these contributions, this editorial maps a range of related concepts including digital resignation, privacy cynicism, privacy apathy, surveillance realism, privacy fatigue, and privacy helplessness. It concludes by identifying key themes across the papers in this collection and provides directions for future research.
This article is a part of special theme on Digital Resignation and Privacy Cynicism. To see a full list of all articles in this special theme, please click here: https://journals.sagepub.com/page/bds/digitalresignationandprivacycynicism
Introduction
The pervasive adoption of digital technologies throughout various social, political, and economic spheres has fueled public and scholarly interest in privacy. Social media platforms, smart speakers, search engines, e-commerce platforms, smart toys, and mobile payment apps are just some of the technologies that routinely harvest large amounts of data. Myriad articles in critical data studies, including in Big Data & Society, have examined the consequences of this pervasive data collection.
Empirical studies, which have largely been conducted in Western countries, show persistently high levels of privacy concerns among citizens around the world (Engström et al., 2023; Eurobarometer, 2019; Pew, 2019). Yet, corporations with deep investments in data surveillance, such as Amazon, Microsoft, Meta/Facebook, Alphabet/Google, Apple, Tencent, and Alibaba, are emerging as critical infrastructures on which users must rely for digital inclusion (Van Dijck et al., 2018). While the project of what Andrejevic (2003, 2022) has called the “digital enclosure” has been well underway for two decades, the Covid-19 pandemic has further accelerated the reliance on digital infrastructures and led to novel privacy concerns including those raised by contact tracing apps and workplace monitoring systems (Newlands et al., 2020; Vitak and Zimmer, 2020).
For two decades, research on the privacy paradox (Barnes, 2006) has aimed to understand the apparent misalignment between individual concerns about privacy and corresponding protective actions. Researchers have sought to identify the necessary conditions under which concern might lead to action, for example in the form of privacy-protective behavior or restraint in self-disclosure (Baruh et al., 2017; Kokolakis, 2017). Several strands of research attempt to account for the privacy paradox. Some take a calculus perspective, arguing individuals weigh the benefits of a transaction against its costs (Dinev and Hart, 2006). Others consider the role of cognitive biases and heuristics (Acquisti and Grossklags, 2006). Still, others employ interpretations grounded in social theory (Lutz and Strathoff, 2014; see Kokolakis, 2017 for an overview).
In recent privacy research, a new perspective is emerging that centers on user agency—or the lack thereof—as a critical but under-examined aspect of privacy attitudes and behaviors. Independently and across different geographic, political, and social contexts, scholars have noted the manifestation of resigned attitudes towards threats to digital privacy, variably termed privacy apathy (Hargittai and Marwick, 2016), privacy cynicism (Hoffmann et al., 2016; Lutz et al., 2020; Ranzini et al., 2023), surveillance realism (Dencik and Cable, 2017), privacy fatigue (Choi et al., 2018), digital resignation (Turow et al., 2015; Draper and Turow, 2019), and privacy helplessness (Cho, 2022). Despite differences in terminology and empirical approaches, these works share the conclusion that a sizable share of individuals feels overwhelmed and disempowered when it comes to protecting their privacy from the threats posed by the digital infrastructures on which they rely for social, economic, and political inclusion.
Initial efforts to identify, conceptualize, and investigate attitudes of resignation have raised questions about the contours of these frustrations as well as their predictors and outcomes. The authors featured in this special theme explore, extend, and challenge current understandings of privacy cynicism, digital resignation, and related concepts. The papers engage theoretical and practical questions related to the ways people confront the powerful institutional forces that increasingly shape many aspects of the information environment. The articles and commentaries collected here take a range of approaches. While some are empirical efforts to measure and understand futility and resistance in the face of digital surveillance, others are theoretical pieces that consider the benefits and limitations of these concepts with the goal of deepening the conceptualization of power, equity, and resistance in the digital era. Some articles offer policy analyses that reveal the practical applications of these concepts in regulatory environments. Together, these articles extend the geographic, political, cultural, and institutional contexts in which privacy cynicism and digital resignation can be identified and examined. We hope these contributions will stimulate debate and drive novel insights around the nature of data and information privacy.
Digital resignation, privacy cynicism, and related concepts
For some time, critical researchers have questioned claims that individuals are either indifferent to privacy or willing to minimize the importance of privacy when doing so appears necessary for achieving some other good. Beginning in 2015, researchers identified attitudes toward privacy that captured different dynamics such as cynicism (Hoffmann et al., 2016) and resignation (Draper and Turow, 2019), rejecting claims that individuals engage in conscious calculations that weigh the benefits and risks of asserting privacy rights under various scenarios. Instead, they contend that what looks like privacy indifference can more often be explained as a sense of futility: a belief that individual efforts to protect information privacy in an environment increasingly characterized by pervasive corporate and governmental surveillance are unlikely to succeed. In this section, we outline some of these early efforts to define, measure, and explain this phenomenon. Table 1 presents an overview of adjacent but distinct concepts. All point to a sense of disempowerment in the platform society that engenders feelings of futility with regard to privacy-protective behavior.
Table 1. Adjacent concepts to privacy cynicism and digital resignation.
Hoffmann et al. (2016) introduced the concept of privacy cynicism based on in-depth focus group data from Germany, defining it as an “attitude of uncertainty, powerlessness, and mistrust towards the handling of personal data by online services, rendering privacy protection behavior subjectively futile” (p. 2). Subsequently, the concept was positioned in the context of institutional privacy concerns (Lutz et al., 2020) and its multi-dimensionality was empirically consolidated.
Hargittai and Marwick (2016) described a rising feeling of “privacy apathy” among young Internet users in the United States, who feel that they cannot effectively protect their personal data from online platforms but continue to use them anyway, often due to social pressure. These users see privacy violations as inevitable, or as preventable only if they opt out of online services entirely—a solution that is deemed unrealistic. The authors describe this stance as “quite cynical” (p. 3751). They also find that apathy is related to lower technical skills.
Situated in the context of state-corporate surveillance programs, Dencik and Cable (2017) proposed the concept of “surveillance realism”: “a simultaneous unease among citizens with data collection alongside the active normalization of surveillance that limits the possibilities of enacting modes of citizenship and of imagining alternatives” (p. 763). The authors find that while those in the United Kingdom are aware of, and worried about, persistent surveillance, they rationalize a lack of countermeasures by pointing out their futility in an environment where the absence of pervasive corporate surveillance is unimaginable. Importantly, the authors stress that users frequently lack an actual understanding of surveillance mechanisms and instead use broad generalizations or superficial explanations to describe institutional monitoring.
Another important concept closely related to privacy cynicism is “digital resignation” (Turow et al., 2015), which the authors define as “the condition produced when people desire to control the information digital entities have about them but feel unable to do so” (Draper and Turow, 2019: 1824). The authors build on their 2015 study of Americans to argue that resignation is a rational response to conditions where institutional forces exert significant power over individuals. Rather than trying to subvert the data practices of powerful companies, resignation offers a strategy for adapting to the surveillance environment. Importantly, Draper and Turow (2019) argued that digital resignation can constitute a purposeful form of inaction in the face of corporate activities that encourage a sense of individual futility. They point out that privacy resignation does not entirely preclude protective behavior, but note that efforts to interrupt corporate surveillance are sporadic and individualistic rather than sustained and collective.
Choi et al. (2018) build on studies of burnout to develop the concept of “privacy fatigue.” They argue that Internet users in South Korea frequently encounter privacy breaches (or reporting on such breaches), which leaves them increasingly emotionally exhausted and unwilling to address privacy concerns or engage in protective behavior. This ultimately results in “disengagement,” which is akin to resignation (Turow et al., 2015; Hoffmann et al., 2016). Finally, Cho (2022) proposes “privacy helplessness” to describe a trait-like characteristic in which Americans generalize from repeated aversive experiences, such as privacy breaches, leading to a sense of resignation to the inevitability of these threats. Similar to privacy cynicism, a perceived lack of control is central to the development of privacy helplessness.
The various concepts discussed above provide a nuanced understanding of how a lack of agency and a sense of futility can impact privacy, yet they also present blurred boundaries and significant overlaps. In their meta-review, Van Der Schyff et al. (2023) advocated for theory building that focuses on finding similarities between different constructs, so as to more effectively contribute to our understanding of online privacy. With this special theme, we attempt to find areas of affinity, as well as difference, between a multitude of voices describing limitations to users’ agency when it comes to navigating digital privacy. We also aim to extend the geographical, contextual, and institutional scope of present research into privacy cynicism and digital resignation.
Contributions of this special theme and article summaries
As researchers have taken up and responded to the concepts and theories laid out above, important theoretical and methodological questions have emerged. Where do these concepts overlap and where are there meaningful distinctions? What are the most appropriate empirical measures and do they apply in different sociopolitical contexts? Are digital resignation and privacy cynicism strictly micro-level concepts, focused on the individual, or can they exist at collective or even institutional levels? What modes of resistance are available to combat feelings of apathy, powerlessness, and fatigue with respect to digital privacy? Do these attitudes reflect the privilege of being concerned about privacy rights without the presence or absence of those rights having an immediate impact on one's welfare or safety?
Together, the articles in this special theme respond to some of those questions. They speak to a range of international perspectives including those from North America, Europe, and East Asia, but, notably, not yet from Africa and South America or from South Asia and Oceania. They also consider different sociopolitical systems including capitalist democracies, social democracies, and socialist market economies. In addition to offering methodological diversity, the articles featured here also provide a range of macro- and meso-level analyses, including considerations of how resignation and cynicism may manifest within institutions and governments. In de-centering the individual “user” as a canonical unit of analysis, these articles provide opportunities to consider the systemic consequences of privacy disempowerment in digital contexts.
Three of the articles in the special theme suggest the need for more contextually specific frameworks for conceptualizing and measuring privacy attitudes and behaviors. In the article “The Interplay of Rational Evaluation and Motivated Reasoning in Privacy Helplessness: An Integrative Approach,” Cho (2024) investigates privacy helplessness among social media and mobile app users. Using data from national surveys of Facebook users and mobile application users in the United States, the research shows how privacy helplessness is significantly influenced by factors associated with both rational evaluation (perceived privacy control, trust in service providers, and response efficacy) and motivated reasoning (perceived rewards and costs). The study identifies interaction effects between these factors, indicating that they can either amplify or mitigate each other's influence on privacy helplessness. This suggests that individuals’ privacy attitudes and behaviors are shaped by a complex interplay of thoughtful assessments and biased processing. The study affirms findings from recent research on privacy cynicism, but also calls for more integrative and context-sensitive approaches in privacy scholarship to encourage a holistic view of the cognitive mechanisms at play in the digital ecosystem.
In their commentary, “Dimensionalizing privacy to advance the study of digital disempowerment,” Quinn and Epstein (2023) also addressed the need for theoretical and empirical approaches to privacy that allow for overlapping and, at times, contradictory perspectives. The authors argue for a deconstruction of privacy into horizontal (social) and vertical (institutional) dimensions to better understand the causes and consequences of digital disempowerment. They offer a critique of established privacy theories citing the limits of a persistent focus on individual control and decision-making, which the authors argue does not adequately address the power imbalances in today's digitally mediated environments. The commentary highlights the challenges in studying privacy, including definitional and measurement issues. Quinn and Epstein (2023) suggest that existing research on privacy has too narrow a focus on the role of institutions and overlooks horizontal dimensions of privacy, such as social norms and relationships. Ultimately, the authors call for a multi-dimensional understanding of privacy behaviors, pointing out that resignation in one area—for example, a sense that commercial surveillance is unavoidable—may or may not carry over into concerns people have about sharing information within social groups. Moreover, they point out the need for a dimensionalized approach that would capture behaviors that may result from resignation, but that are largely invisible, such as reducing opportunities for exposure or disconnecting from institutions or groups.
In their contribution, Hoffmann et al. (2024) reveal how a multitude of constraints, operating at the individual, interpersonal, and societal levels, impose boundaries on people's agency in terms of data sharing and participation within the digital society. The result of extensive limitations of agency is likely a state of resignation, in which individuals abandon the belief that they can be proactive about their data; to cope with those circumstances, they resort to privacy cynicism.
The remaining five articles in the special theme extend theories of privacy cynicism and digital resignation into new sociopolitical and institutional contexts. In the article, “How pro- and anti-abortion activists use encrypted messaging apps in post-Roe America,” Martin et al. (2023) considered the consequences of digital resignation among social activists. The authors examine how fears about government surveillance among pro-abortion activists and their anti-abortion counterparts in the United States influence their use of encrypted messaging apps (EMAs). Martin et al. situate their study in the aftermath of the U.S. Supreme Court's 2022 decision in Dobbs v. Jackson Women's Health Organization, which overturned Roe v. Wade.
In “Digital Resignation and the Datafied Welfare State,” Bagger et al. (2023) argue that the concept of digital resignation should be expanded from its original formulation—centered around the individual and framed within commercial exchanges—to broader societal contexts, such as the societal reliance (and dependency) upon digitalization and datafication. The Danish welfare system, and especially its educational pillar, serves as an example of how an entire ecosystem of interconnected individuals (educators, teachers, children, and parents) is expected to participate in digital sharing, how active participation becomes a proxy for good parenting, and how such coerced digital participation fosters digital resignation. In the Danish context, similar dynamics take place in the healthcare system, where being a digital patient is considered the norm. Bagger et al. argue that this arrangement operates to the detriment of people's trust in the medical system, raising concerns over data being shared beyond their practitioner's office. Lastly, the authors focus on news consumption as yet another area where digitalization is impossible to avoid and a degree of resignation is inevitable. Based on this broader approach to resignation, which looks less at the individual and more at the societal pressures that make data sharing ubiquitous even within the context of public welfare, Bagger et al. signal the need to investigate the mechanisms through which digital resignation is coerced and cultivated.
Zhu's (2024) contribution, “Privacy cynicism and diminishing utility of state surveillance: A natural experiment of mandatory location disclosure on China's Weibo,” examines the reactions of users discussing a politically controversial topic—the Ukraine war—on Sina Weibo to the introduction of a new surveillance technique: the mandatory disclosure of the account's location. The study presents a natural experiment, as the Weibo platform introduced geo-tagging of users and posts in a stepwise, selective process. Zhu (2024) shows that the introduction of mandatory location disclosure triggered a backfire effect among users, as those most likely to be tagged actually posted more content to the platform. It also did not deter other users from commenting on the geo-tagged posts. Today, mandatory location disclosure is applied to all Weibo users. It is codified by law and has been extended to other Chinese social media platforms. Zhu (2024) discusses the role of geo-tagging as a novel surveillance feature in the context of China's larger surveillance infrastructure that includes the “Great Firewall,” the so-called “Golden Shield,” the Social Credit System, extensive platform content censorship, and a mandatory real-name registration system for online platforms. The author argues that within this context of extensive, pervasive surveillance, additional measures, such as mandatory location disclosure, are subject to a law of diminishing returns. The study connects concepts such as privacy cynicism to the surveillance literature, extending the contextual focus of cynicism studies to a non-Western authoritarian system. It also centers on a novel technology, and examines a still under-investigated (non-Western) platform through innovative computational methods. Finally, it addresses a key point of contention in previous research on resignation and cynicism: the question of whether resignation must always exert a disempowering influence. Zhu (2024) argues that experiences of cynicism can actually “function as a source of resilience,” and can stoke resistance by psychologically immunizing individuals against fears of repercussions.
In “Role-Based Privacy Cynicism and Local Privacy Activism: How Data Stewards Navigate Privacy in Higher Education,” Popescu et al. (2024) considered privacy cynicism in an institutional context. The authors explore the conflicted role of “privacy stewards” in organizations through the lens of the privacy cynicism literature. The little research that exists regarding privacy professionals in organizations indicates that they may be pushed toward such cynicism because of inherent conflicts in their roles. Popescu et al. (2024) investigate the extent and nature of this dynamic in a university setting which, they state, is an “increasingly surveillance-heavy work environment” (p. 11). Based on fifteen semi-structured interviews with “data custodians” in the California State University system, the authors identify four role-based strains, for example, discrepancies between data stewards’ own interpretation of privacy and institutional definitions, and the tension between the risks of surveillance in general and its benefits in a university setting. Popescu et al. (2024) observed that data custodians deal with these tensions in various, complex ways. Crucially, data workers’ belief that “people don’t care about privacy” intensifies their privacy cynicism, further demotivating them from engaging in privacy protection at an institutional level. When data stewards did note areas where the privacy of students ought to be protected, they did so “not because of their role but despite it.” These findings highlight the sometimes-tortured conflict between data protection and data exploitation in organizations.
In “The after party: Cynical resignation in Adtech's pivot to privacy,” McGuigan et al. (2023) also take an institutional approach in their investigation of the digital advertising industry's response to growing privacy concerns. Characterizing its approach as “cynical resignation,” they argue that despite facing increasing regulatory, technological, and societal pressures to adopt privacy-preserving practices, adtech companies primarily aim to preserve their data-driven business models rather than genuinely safeguard user privacy. The authors identify three strategies—sanitizing surveillance, party-hopping, and sabotage—that these companies employ under the guise of privacy compliance. These strategies involve obfuscating data flows, exploiting first-party data through acquisitions, and hindering competitor access to valuable data, respectively. The authors argue that this cynical pivot to privacy represents a significant policy failure, because current regulations inadequately address the structural issues underlying corporate data practices. They suggest that without decisive action to alter the fundamental business models of data capitalism, adtech firms will continue to manipulate privacy narratives to their advantage, further entrenching their market dominance and exacerbating privacy concerns. A key contribution of this piece is its innovative shift in focus from the user as the cynical stakeholder, to a critical examination of company attitudes toward privacy. This reframing suggests that organizations, too, can be cynical actors in the privacy landscape, albeit in a markedly different manner from individual users. Furthermore, the emphasis on the political economy of privacy aligns with the broader discourse on surveillance capitalism (Zuboff, 2019), data capitalism (Sadowski, 2019), and critical data studies (Iliadis and Russo, 2016; Kitchin and Lauriault, 2014).
In “The Corporate Cultivation of Digital Resignation in Policymaking: How Weak US Regulations Enable Data Trafficking to China,” Kokas (2024) extends the institutional approach of the previous two papers in her application of Draper and Turow's (2019) framework regarding the corporate cultivation of digital resignation to the United States government. Kokas (2024) argues that companies’ ability to keep Americans resigned to corporate data use about them also influences government officials’ approaches toward that data use. Kokas (2024) further argues that this lax approach by the US government to the protection of citizens’ data from corporate use has international repercussions as it relates to China. She notes that in contrast with this lax US approach, “China's digital oversight offers a much more comprehensive view of what constitutes data that is worthy of protection and oversight,” with the goal that tech firms in China should not “endanger national security or the public interest.” Moreover, China claims national security sovereignty over data of Chinese firms and all companies operating in China. Kokas (2024) claims that “both the US and China exploit users in their own ways.” Her main aim in the article is to suggest that a key consequence of the difference between the data approaches of the two nations is that Chinese firms can use data-exploitation laws in the US to help the Chinese government gather data about Americans in multiple ways. The implications of digital resignation, then, span directly from “the corporate landscape of weak data security oversight” to the U.S. government “failures to address data security on a national and international scale.”
Throughlines and future research
The eight contributions to this special theme provide novel insights into the complex and often fraught relationship between the data agency of individuals, the platforms on which they communicate, and the institutions with which they interact. Articles in this collection extend the concepts of privacy cynicism and digital resignation into new geographic, sociopolitical, and institutional contexts, and point toward a research agenda that takes seriously the structural conditions underlying privacy disempowerment.
We hope that the articles and commentaries in this special theme will spark interest and conversations about the novel concepts of privacy cynicism and digital resignation, and inspire research that expands our understanding of their roots, manifestations, and impacts. We thank all contributing authors for their valuable and innovative contributions to this emergent field. We also thank all involved in making this special theme possible, including the editorial team and reviewers at Big Data & Society.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Norges Forskningsråd (grant number 299178).
