Abstract
This study examines the impact of role-based constraints on privacy cynicism within higher education, a workplace increasingly subjected to surveillance. Using a thematic analysis of 15 in-depth interviews conducted between 2017 and 2023 with data stewards in the California State University System, the research explores why data stewards express privacy cynicism despite their knowledge of privacy and their ability to protect it. We investigate how academic data custodians navigate four role-based tensions: the conflict between institutional and personal definitions of privacy; the mutual reinforcement between their privacy-cynical attitudes and their perceptions of student privacy attitudes; the influence of role constraints on data stewards’ privacy-protective behaviors; and the contrast between negatively valued societal surveillance and positively valued university surveillance. The findings underscore the significance of considering organizational privacy cultures and role-based expectations in studying privacy cynicism. The study contributes to the theoretical understanding of privacy cynicism and offers practical implications for organizations, emphasizing the importance of aligning organizational definitions of privacy with employees’ understanding. Future research should further explore the mutual reinforcement of privacy cynicism in the relationship between data providers and data consumers (which we call the “spiral of resignation”) and consider the impact of role-based constraints in other organizational contexts.
This article is part of the special theme on Digital Resignation and Privacy Cynicism. To see a full list of all articles in this special theme, please visit: https://journals.sagepub.com/page/bds/digitalresignationandprivacycynicism
The global education technology market is rapidly expanding, and higher education institutions are actively involved in it as both producers and consumers of big data. The political project of distributing the governance of public education across a vast array of private actors (Williamson, 2015) has resulted in universities adopting surveillance-heavy technological infrastructures that promise to harness big data and calculative logic to streamline students’ pathways to the workforce. However, to date, the educational sector lacks a coherent data governance framework (Hillman, 2023).
Existing privacy regulations in the United States provide little protection against the onslaught of student data extraction. The applicable regulations consist of a patchwork of area-specific and somewhat outdated state and federal data privacy laws. In California, new privacy laws modeled after the General Data Protection Regulation (GDPR), such as the California Consumer Privacy Act of 2018 (CCPA) or the more restrictive California Privacy Rights Act of 2020 (CPRA), do not apply to nonprofit universities.1
Understanding how university data professionals navigate privacy tensions in their roles remains an understudied area of research on educational “datafication.” Although performing critical “data work” (Ozga, 2009), data professionals operate as “invisible workers” (Selwyn et al., 2018; Szekeres, 2011) whose everyday data practices and decisions profoundly shape institutional data governance. With their high data literacy, data professionals are best positioned to implement privacy controls. However, due to their roles, they are also exposed to numerous examples of privacy threats that can easily lead to a sense of digital resignation (Draper and Turow, 2019) or privacy cynicism (Hoffmann et al., 2016). The privacy attitudes of key data actors can significantly affect their ability to develop and pursue alternative methods to safeguard privacy against prevailing data-centric business models.
The perceived severity of privacy threats resulting from university surveillance stands in uneasy balance with data professionals’ perceived effectiveness in responding to these threats; both perceptions may affect the privacy behaviors of institutional actors but in opposing ways, as the literature on privacy cynicism suggests (e.g. Van Ooijen et al., 2022). Moreover, because of their role, university data professionals find themselves in a dual position: they must adjudicate between their responsibilities to campus members and the institution's strategic focus on extracting student data as the key to student success. They must also reconcile their professional data stewardship with their personal privacy attitudes and behaviors as digital users. How do data professionals negotiate tensions between their institutional roles and personal privacy orientations? How does their privacy cynicism influence their privacy calculus and decisions about protecting the privacy of various stakeholders in an educational setting? What are the consequences of these tensions for organizational privacy policies? Exploring these questions can help us understand the complex privacy-related decision-making processes at an institutional level and shed light on the mechanisms of privacy cynicism at the individual level.
In the following, we draw on critical data studies in education and insights from research on privacy cynicism to explore how data professionals make decisions related to privacy in a large public university system in California and with what institutional consequences. Dencik and Cable (2017) emphasized that the norms and assumptions that shape our social imaginaries often ignore or marginalize valid alternatives. By not challenging the status quo, we hinder the potential for creatively developing new solutions to address the adverse effects of data capitalism. Therefore, this line of inquiry, which connects role-based privacy decision-making to organizational cultures and policies, provides a much-needed analysis of educational data governance at a meso level and can help establish more comprehensive principles of data governance in areas beyond education. In this regard, the article contributes to both critical studies of school datafication and studies of privacy cynicism by revealing the interplay between institutional data practices and the privacy cynicism of data workers.
We start by briefly discussing the trend toward educational datafication. We then review the role of privacy cynicism in shaping individuals’ privacy decision-making and highlight the importance of connecting individual actions to organizational privacy culture. Then, after explaining the context of our research, we present the findings of a thematic analysis of 15 interviews with data stewards. These interviews shed light on how data stewards navigate the tensions between valuing privacy, individual agency, and organizational constraints. We focus on several areas of friction: the tension between the risks of surveillance in general and the perceived benefits of university surveillance; the clash between data stewards’ personal privacy views and the constraints of their role; the disparity between the institutional definition of privacy and the data stewards’ own interpretations; and the mutually reinforcing effects between data stewards’ cynical privacy attitudes and their perceptions of students’ privacy orientations. We conclude by outlining productive avenues for further research on privacy cynicism.
School datafication and invisible data work
Higher education institutions have become significant producers and consumers of big data, aligning their traditional role with the entrepreneurial dynamics of the business world. The survival of universities as institutions and the achievement of “student success” now purportedly require a “re-envisioning” of higher education through the establishment of “smart” universities that “scale” data analytics to track students’ “progress from cradle to career” (see generally Furco et al., 2021; Lane, 2014). Educational intermediaries like EDUCAUSE, a nonprofit dedicated to promoting education technologies, advocate for the use of big data to generate increasingly sophisticated representations of students and their behaviors. For example, according to the 2023 EDUCAUSE Review, four of the top 10 IT issues that contemporary universities should consider involve the use and management of institutional data analytics to create the “Ultra-Intelligent Institution” as a part of a “Great Rethink” of higher education (Grajek, 2022). Such “datafication” discourses legitimize the larger political project of reforming higher education around market values (Williamson, 2015, 2018).
Using a critical data studies lens, a growing body of literature has started charting these ongoing processes of educational datafication, highlighting how datafication is mediated through “messy” assemblages of data, people, discourses, devices, software, and data standards (Decuypere, 2021; Hartong and Förschler, 2019; Williamson, 2018). This research also reveals how nonhuman actors (educational platforms, data standards, algorithms, dashboards) increasingly assume governance functions by promoting institutional data expertise as a legitimate supplement to (and often substitute for) instruction, thereby transforming traditional pedagogy into calculative strategy (e.g. Jarke and Macgilchrist, 2021; Prinsloo, 2020; Williamson et al., 2020).
But people remain essential to data infrastructures. An important outcome of educational datafication is the emergence of specialized professionals in charge of “data work” (Selwyn et al., 2018). According to Ozga (2009), data work involves a broad range of tasks and responsibilities, such as inputting student performance data into tracking systems, interpreting data sets to make informed decisions, and engaging with performance metrics to assess school effectiveness. Extensive research has explored how such data work reshapes teacher and student subjectivities in service of educational platforms (e.g. Macgilchrist, 2019; Williamson, 2017; Yu and Couldry, 2022). However, limited attention has been given to the data practices of educational actors that Szekeres (2011) calls “invisible workers,” including IT personnel or staff within Institutional Research (IR) units. Their data work encompasses both the more straightforward but less visible processes of inputting and cleaning data, as well as the more nuanced efforts of shaping realities through performative activities like selecting data proxies (Whitman, 2020) or resolving “data discrepancies” in response to conflicting policy demands (Hartong and Förschler, 2019, referring to state board members). Describing, understanding, and conceptualizing these data practices should remain at the forefront of critical data studies. This would require temporarily suspending overarching theoretical discussions about surveillance or control and shifting focus to on-the-ground analyses of institutional data practices (Decuypere, 2021), such as the one proposed in this article.
Privacy cynicism and the need for meso-level perspectives on privacy
The previous discussion emphasizes the growing need to understand privacy decision-making in practice, within organizational privacy cultures, by considering the complex interplay of legal requirements, organizational dynamics, and end-user expectations. However, with few exceptions (e.g. Bamberger and Mulligan, 2015), privacy research has predominantly focused on privacy experiences at the individual level, such as individual expectations, literacy, and privacy management behavior, or at the sectoral/national level (Chan and Greenaway, 2005). This is also the case with research on privacy cynicism, which focuses on its role in the decision-making of individuals regarded as “consumers” or users of online platforms.
At an individual level, this body of research begins by raising the crucial question of the extent to which individuals can rationally weigh the costs and benefits of their privacy-related behavior. Within this debate (e.g. Baruh and Popescu, 2017; Dienlin and Trepte, 2015), two central claims have emerged. First, studies have challenged the assumption that individuals are willing and able to weigh the risks and benefits of information sharing rationally. Instead, studies suggest that privacy-related decisions are often influenced by emotions and cognitive shortcuts driven by incomplete information about abstract, long-term risks, leading to biased decision-making (Acquisti et al., 2017; Masur, 2019). Second, over the last decade, researchers have increasingly explored the possibility that people may fail to act upon their privacy concerns due to a heightened sense of powerlessness when attempting to protect their privacy. Concepts such as digital resignation (Draper and Turow, 2019), privacy apathy (Hargittai and Marwick, 2016), privacy cynicism (Hoffmann et al., 2016), privacy fatigue (Choi et al., 2018), and surveillance realism (Dencik and Cable, 2017) highlight how individuals’ feelings of powerlessness when dealing with surveillant institutions connect to the perception of an inevitably compromised personal privacy. These concepts underscore people's multifaceted response to the extraction of their data, including mistrust in organizations handling data, a sense of powerlessness due to the lack of viable options for protecting privacy, and subsequent resignation marked by a belief in the futility of privacy protection attempts (Hoffmann et al., 2016; Lutz et al., 2020). Despite their varying conceptualizations, these ideas offer valuable insights into the potential antecedents and consequences of the sense of powerlessness and helplessness experienced by individuals confronted with increasing surveillance.
Regarding antecedents, existing literature suggests that the increasing demands for personal information contribute to digital users experiencing learned helplessness when attempting to manage their online data (Hoffmann et al., 2016; Seberger et al., 2021). This feeling may arise from the prevalence of data breaches and other privacy violations, coupled with repeated failures to maintain control over one's information, leading users to believe that their privacy is determined by factors beyond their control (Cho, 2022). From this perspective, resignation behavior appears as a form of motivated reasoning (Cho, 2022) or rational fatalism (Xie et al., 2019) that emerges when individuals perceive that their privacy will be compromised regardless of their protective efforts (Van Ooijen et al., 2022).
This sense of helplessness in the face of “surveillance capitalism” (Zuboff, 2019) is well-founded. Digital platforms rely on the extraction and harvesting of personal data as an integral part of their business models, presenting data sharing as a necessary condition for accessing the platform's benefits (Lutz et al., 2020). Individuals have little choice but to accept these practices as unavoidable. Digital resignation can also be attributed to intentional corporate tactics, such as placation, diversion, and the use of complex legal jargon, designed to discourage individuals from asserting control over their private information (Draper and Turow, 2019). Seberger et al. (2021) describe the resulting privacy choices available to users as “conditional empowerment,” a form of limited control that masks the structural impossibility of having complete control over one's informational privacy.
In terms of consequences, the ongoing debate surrounding privacy fatigue, privacy cynicism, and digital resignation highlights how individuals’ feelings of helplessness potentially hinder adaptive privacy behaviors, a relationship further complicated by factors such as individuals’ perceived risk levels, feelings of resignation, motivation, and the types of privacy actions they undertake (Cho, 2022; Lutz et al., 2020; Van Ooijen et al., 2022). Indeed, Cho (2022) proposes that helplessness may hinder individuals from engaging in privacy-protective behaviors. The normalization of surveillance may lead people to perceive it as unavoidable and beyond their control (Dencik and Cable, 2017), resulting in their skepticism about the effectiveness of privacy-protective actions and reduced likelihood of employing them (Lutz et al., 2020; Van Ooijen et al., 2022). However, the relationship between perceived risks, helplessness, and privacy actions is not straightforward. For instance, Cho (2022) suggests that in the absence of motivational deficits, helplessness might, in fact, trigger privacy-protective behavior. Additionally, Lutz et al. (2020) found that while resignation negatively impacts protective behavior, other aspects of privacy cynicism, such as institutional mistrust and powerlessness, do not diminish such behavior. These findings indicate that privacy cynicism does not exclude the possibility of privacy-protective behavior, supporting Draper and Turow's (2019) argument that digital resignation should not be mistaken for indifference toward privacy; instead, it is a critical response to a system that obstructs individuals’ efforts to protect themselves.
One potential consequence is the inclination to limit one's digital communication or even withdraw from online platforms altogether (Büchi et al., 2022). Such withdrawal can significantly impact our ability to envision and implement better privacy protection alternatives as a society. When individuals opt out of the digital marketplace, the “signals” used by markets and policymakers to gauge privacy preferences may become skewed in favor of those who choose to remain in the market due to their lower privacy concerns (Baruh and Popescu, 2017). In other words, a paradoxical consequence of such withdrawal would be reduced incentives for organizations to respond effectively to consumer privacy protection preferences.
At the macro (sectoral/national) level, privacy research has primarily focused on how organizations implement the principles outlined in relevant regulatory frameworks. However, there is a notable lack of meso-level privacy research that bridges the gap between the sectoral/national and individual levels (Gotsch and Schögel, 2023) by exploring how organizational dynamics play a crucial role in privacy-related decision-making. In a rare exception, Waldman (2019) emphasizes the phenomenon of legal endogeneity, where corporations redefine adherence to privacy laws as a mere compliance task, prioritizing operational efficiency and risk reduction over consumer interests. This shift from substantive goals, such as consumer protection and equality, to symbolic compliance structures, such as paper trails and privacy checklists, reflects a neoliberal trend that favors efficiency and innovation over individual wellbeing. Privacy professionals face challenges in navigating this growing managerialization of privacy law while striving to protect individual privacy.
Changing organizational processes and culture (Gotsch and Schögel, 2023) presents an uphill battle for privacy professionals. Based on interviews with privacy officers, Waldman (2019) identifies one key challenge as maintaining the delicate balance between advocating for privacy and fostering rapport with organizational stakeholders, particularly upper management. Pushing too strongly for privacy may jeopardize relations with organizational stakeholders, reducing the possibility of making longer-term changes in the organizational culture. Additionally, technical staff and engineers may have different priorities and perspectives on privacy, further complicating the implementation of internal privacy rules in technology design.
These observations underscore the need for more in-depth investigations of how individuals in diverse roles, both as professionals and end-users, navigate the complexities of reconciling legal principles, organizational priorities, and informal and formal organizational procedures with their personal privacy beliefs. Building on the insights from privacy cynicism literature, which suggests a defeatist attitude among end-users due to a perceived lack of control over privacy decisions, we explore how such a reaction might manifest among data stewards. These professionals constantly grapple with the ongoing challenge of harmonizing their responsibilities and personal values with the institutional emphasis on extracting student data. What are the consequences of the belief that, regardless of their efforts, both end-user behavior and institutional inertia push toward increased data sharing and collection? It is in this context that we employ privacy cynicism and related concepts to connect a micro-level analysis of individual behavior with a macro-level examination of how organizations shape their data practices.
Methodology
This article draws from ongoing ethnographic fieldwork conducted at California State University (CSU), focusing on the practices surrounding institutional data and their impact on educational data governance. With approximately half a million students and around 50,000 faculty and staff, the CSU system is the largest four-year public university system in the United States. The research began in 2017, two years after one of the authors took on a faculty role responsible for professional development with educational technologies. The position, embedded within the IT division, served as a “bridge” between campus decisions on academic technologies and faculty needs and concerns. In this capacity, the researcher engaged as an active participant observer in over 70 meetings with campus and system leadership to discuss technology implementation, attended more than 30 conferences on educational technology, and analyzed available administrative documents on data governance. The specific focus of this article, how data professionals navigate privacy concerns as both institutional actors and private individuals, emerged organically through the constant comparison of fieldwork memos as one of several larger themes worthy of further exploration. Here, we present the analysis of the in-depth interviews conducted as a result. In line with Whitman (2020), we consider these interviews to be a key venue for accessing institutional actors’ role-based sociotechnical conceptualizations of data work, their norms of responsive institutional data practices, and their “on-the-ground” privacy-protective behaviors in both official and private contexts.
Sampling and procedure
We draw on 15 semistructured in-depth interviews with data custodians in the CSU system. “Data custodian” has a specific meaning defined by the primary source of policy guidance for CSU privacy and information security, the Integrated Administrative Manual (IAM) Section 8000. Section 8000 outlines the responsibilities of three key roles involved in managing institutional data. Each operational domain has a data owner or data authority responsible for managing domain-specific information assets and implementing protocols for ensuring data integrity, security, and confidentiality. These data domains may include, for example, human resource data, student academic information, law enforcement data, and others. Data owners work with a data custodian (or data steward) to ensure compliance with information security protocols. Among data custodians, the role of the Information Security Officer (ISO) is critical. Each campus has an ISO who coordinates the campus information security plan, ensures overall compliance with data laws and applicable CSU directives, oversees the campus data risk assessment and incident response, and develops campus information security training. Our sample included seven ISOs and eight data custodians in various campus operational areas (Advising, Institutional Research, Office of the Registrar, IT) across eight CSU campuses. We assembled the sample using a snowball method. About 70% of the contacted data custodians responded to our interview requests. The interviews, conducted via Zoom, lasted 60–90 min and were professionally transcribed afterward.
The interviews covered several common questions. Participants were asked to describe the nature of their data work and their reporting lines, explain the meaning of privacy and provide examples of data protection practices in their roles and personal lives, discuss the potential benefits and harms of university data analytics, and explore how, if at all, the campus could improve in protecting student and staff privacy.
GDPR and COVID-19
We had completed 13 interviews before COVID-19 led to a nationwide transition to distance learning and the widespread digitization of previously analog processes on many campuses. To check whether this transition affected data stewards’ beliefs about privacy, we compared those interviews with two additional interviews conducted in 2023 and included in the present analysis. In them, we asked explicitly how COVID-19 affected the participants’ privacy attitudes and decision practices. We discovered that, while distance learning was perceived to have added to students’ incidental privacy risks (e.g. students’ risk of inadvertently displaying private spaces on Zoom), the data stewards did not report a significant change in their privacy attitudes. To our knowledge, the system did not substantially alter its data protection policies due to COVID-19.2
The CSU system does not have a systemwide privacy officer; to our knowledge, neither do individual campuses. At the time of our interviews, US higher education was beginning to grapple with the challenges posed by the EU's General Data Protection Regulation (GDPR) and similar international regulations for managing foreign student data. These challenges are still being addressed, and to date, no definitive guidance on how the GDPR will affect campus operations has been issued. This situation further supports the notion that the passage of time did not substantially affect the opinions expressed in these interviews.
Ethical mitigation
Because one of the three co-authors has a high-level administrative role in the system, we were concerned that his position might exert undue pressure on the participants or lead to self-censorship in their statements. To ensure respondents’ voluntary consent and confidentiality as specified in the IRB protocol, that co-author was not involved in data collection and did not have access to the raw interview data, only to the finalized and anonymized data analysis report. To ensure anonymity, each interview transcript received a random number from 1 to 15. One researcher carefully reviewed all transcripts and removed any references to personal or campus names. These anonymized interview transcripts were then shared between the two researchers who conducted the data analysis.
Data analysis and validity
We conducted a thematic data analysis following Braun and Clarke's (2006) recommended process. First, working independently, the two researchers involved in the data analysis read through all the interviews in detail. Using the software MAXQDA 2022 and an inductive coding approach, they generated a plethora of initial codes and memos without a predetermined theoretical orientation. Once this phase was completed, the researchers collaborated to identify possible themes by comparing their observations and examining patterns within the data. Of particular interest during this stage was coding for evidence of privacy cynicism; for coding guidance, we used the instrument developed by Lutz et al. (2020). After identifying pertinent themes, one researcher assessed them in relation to the existing codes and relevant code segments, consolidating them and recoding the entire text corpus. The two researchers then reviewed the coding report and, working together, defined and named the themes. Drafting the data analysis report required a constant dialogue between the data and the identified themes and a further process of collective sense-making. The third researcher, acting as a “native” of the culture described, provided feedback to validate the data analysis findings.
Results
In the following, we use the terms “digital resignation,” “privacy resignation,” and “privacy cynicism” interchangeably to refer to discursive phenomena that explain participants’ disincentives to engage in privacy-conscious behaviors. We reserve the term “institutional apathy” to describe the arrangements of policies, organizational structures, and cultures of practices that discourage institutions from actively protecting the privacy of their campus members.
The normalization of university surveillance and data-driven action
As data professionals, data custodians are surveillance realists (Dencik and Cable, 2017): participants appeared keenly aware of the pervasive dataveillance (Clarke, 1988) that characterizes digital capitalism. Due to the nature of their roles, the participants actively kept abreast of research and regulatory developments in this field (three participants quoted books on surveillance they had recently read). Out of the 15 people interviewed, nine expressed a sense of being constantly monitored and a perceived loss of control over personal data, a situation which one participant compared to “the cows [that] have already left the barn”: “…this is America. You’re being tracked everywhere all the time” (Int6); “…these days everything is based on data, and everybody is trying to get, buy, or somehow solicit or steal your data” (Int9). A more unusual version of surveillance realism offered a seemingly hopeful outlook resting on the belief in the “value depreciation” of personal information: “I think that the amount of information that will be generated will continue to grow exponentially, though I think that how worthy this information is may continue to actually decrease. …When people started creating their Friendster or Facebook accounts, giving away their personal information, I think [their data] would have been worth a lot more because there wasn’t as much information about a specific person that you could actually leverage. Whereas now, your data replicates more, but I don’t know if it's that much more unique anymore.” (Int14)
On this view, efforts to protect data are not only futile but, in a sense, counterproductive. With surveillance already normalized and personal information becoming more readily available, the oversupply of personal data may eventually cause its value to drop. Therefore, the rational response is to continue with one's usual activities and trust in the market to stabilize the level of societal surveillance.
However, for the participants, the perceived benefits of institutional data mining and learning analytics stood in stark contrast to the perceived severity of societal surveillance. Only a few participants expressed unease about university surveillance, and those concerns were primarily related to extreme cases of intrusion required by their role: “…I’m concerned about the fact that there is—I mean, even if you just looked at our wireless access logs. And I only pretty much do this in a criminal-type investigation. But it gets to the point where I feel like I can see when someone's breathed out and then when they’ve breathed in.” (Int6) Whereas participants recognized societal-level surveillance as a privacy risk, the perception of university surveillance, barring those extremes, was almost unanimously normalized as either beneficial or an operational byproduct.
For instance, our participants did not consider it a privacy risk to release years’ worth of anonymized student data to vetted commercial sources. When commenting on the implementation of EAB, a large platform that claims to assist advisors in identifying struggling students based on large algorithmic models derived from past performance, one participant noted: “What we were sold on and what we did the implementation on, it was the ten years of historic data from this university…You know, unfortunately though—or fortunately, I don’t know—it's not as useful as we might have hoped…I think because there are so many other variables when you’re doing advising with students, you know, just life in general…[EAB] can inform the decision, but if we’re doing really good advising it's more holistic: looking at the student as a whole…The one good thing it does–it's not really predicated on the analytics–is that you can run campaigns…where you can identify cohorts of students and do marketing campaigns or information campaigns or bring them in for advising appointments.” (Int1)
Despite claims of altruism, educational platforms often prioritize profits by targeting individuals for various actions. While these actions may benefit individuals, as data custodians and administrators like to believe, the companies in the education technology market are the ones who benefit the most. Candid perspectives like the one above are rare, with most data stewards justifying extensive student data collection and sharing by framing it as advancing the university's strategic mission. Indeed, our participants provided a generous inventory of positively valued data-driven initiatives aimed at “student success,” such as improving student advising, streamlining the curriculum, identifying problematic classes, designing better pathways to graduation, identifying early predictors of academic success, and improving course scheduling. Those interventions were not only highly valued but also seen as an organizational responsibility, as expressed by one data custodian: “At the same time, when we have that data, and we have those insights and we don’t do anything about it, then we bear some type of responsibility for… ‘blissful ignorance.’ We know this [student situation] is a problem. We see what problems that cause, and we see what happens with the students who have this issue, but let's avoid overreaching! [So] we’re really trying to get this level of granularity together, which may have been referred to in the past, as you know, as a Big Brother type view…” (Int15)
These comments illustrate how, in higher education, big data practices and calculative logics (standardization, classification, prediction) have spurred a powerful sociotechnical imaginary of real-time data-based interventions in learning which, as suggested by Selwyn et al. (2018), normalize data work in universities. But what exactly does data work entail for data stewards? The next section explores this question.
“I put the ‘no’ in innovation”: Institutional pressure, role tensions, and local privacy practices
Data security and data privacy
In a university setting, data is generated both intentionally (e.g. for administrative purposes, such as tracking program enrollment or to understand how students learn—data known as “learning analytics”) and automatically (known as “transactional data”) through people's interaction with various interfaces (Jarke and Breiter, 2019). Data work, the translation of people's data into legible user metrics, transforms personal data into institutional data: a class of assets that carries competitive value in the educational marketplace and decision-making value for the institution (Birch et al., 2021; Birch, 2023).
Data stewards’ primary responsibility is safeguarding university data assets from unauthorized access, disclosure, alteration, or destruction through data security practices. IAM's Section 8000 directs each campus to identify the types of data that require special protection, establish data ownership and stewardship, and define the risk management protocols that maintain the integrity and security of all university information assets. Data security should not be mistaken for data privacy. Data privacy practices focus on properly handling personal data to respect individuals’ rights and expectations of confidentiality. However, in Section 8000, student or staff privacy appears almost as an afterthought. Only section 8025.00 addresses the privacy of campus members. This is achieved by placing limitations on which institutional data may be accessed by other parties and under what conditions (purpose limitation), rather than focusing on users’ rights to, for example, stop data collection altogether.
Notably, the document does not provide a specific definition of privacy but instead establishes a tiered control system that classifies data based on sensitivity. Level 1 or confidential data (e.g., passwords, SSNs, biometric information) and Level 2 or “internal use” data (e.g., a student's academic record or an employee's financial record) are considered sensitive information to be safeguarded by campus-designated data owners. Level 3 or “directory” data refers to information available for public use, such as information found on university web pages, and is subject to minimal data access rules. In short, at least for now, the CSU system operates with an implicit understanding of privacy, focusing on establishing institutionally codified processes that ensure purpose limitation in data collection and prevent improper data access.
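To make the tiered scheme concrete, the sketch below encodes the three sensitivity levels and a minimal access rule in Python. It is purely illustrative: the names and the rule are our own simplification for exposition, not an implementation drawn from Section 8000, which specifies organizational processes rather than code.

    # Illustrative sketch only: a simplified encoding of the three-tier
    # classification described in IAM Section 8000. All names are ours.
    from enum import IntEnum

    class DataLevel(IntEnum):
        CONFIDENTIAL = 1   # Level 1, e.g., passwords, SSNs, biometric information
        INTERNAL_USE = 2   # Level 2, e.g., academic records, employee financial records
        DIRECTORY = 3      # Level 3, e.g., public information on university web pages

    def may_access(clearance: DataLevel, asset: DataLevel) -> bool:
        # A requester cleared for level N may read assets at level N
        # or at any less sensitive (higher-numbered) level.
        return clearance <= asset

    # A staff member cleared for internal-use data may read directory data
    # but not Level 1 confidential assets.
    assert may_access(DataLevel.INTERNAL_USE, DataLevel.DIRECTORY)
    assert not may_access(DataLevel.INTERNAL_USE, DataLevel.CONFIDENTIAL)

Note that such a rule operationalizes access control, not privacy: nothing in it gives data subjects a say in whether their data is collected in the first place.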
This implicit positioning of data privacy as subservient to the security of institutional data assets creates tensions in the role of a data steward: protecting data privacy may involve providing more options for campus members to limit or customize the collection or sharing of their personal data, whereas establishing data security protocols primarily concerns how the institution secures its data assets. Consequently, when data stewards take action to protect student privacy, they do so not because of their role-based responsibility but in spite of it. To resolve these tensions, as described below, data stewards must actively construct and assert their privacy-protective agency as a form of “privacy activism,” sometimes despite potential role frictions and clashing organizational reporting lines.
Data custodian role responsibilities and authority creep
Data custodians (particularly the ISOs) are primarily responsible for protecting the secure circulation of institutional data: “It's all about the data. So, the data at rest, the data in transit, the data on a piece of paper, the data in the cloud, the data on a USB stick, on a hard drive. So, how are we protecting the data, in whatever form it's at, wherever it's at, in what transportation form it's at” (Int7). All data custodians felt a deep sense of responsibility for the institutional data they oversaw and were explicit about adopting a principled approach to evaluating data access requests. Although expressed differently, the most pervasive principle was data minimization, which specifies a “need to know” requirement for potential data clients. For example, about data access requests for student data, “…there has to be a business reason or a reason for you to be able to access that student record. Just because you are curious for the student information is not a need-to-know…A need-to-know is based upon your specific job duties and what you are doing for that particular student” (Int5).
However, safeguarding the circulation of institutional data does not automatically imply protecting privacy. As mentioned earlier, no data custodian is officially in charge of protecting privacy by determining the “when / how / how much” of campus data collection or ensuring that all campus members can exercise their privacy rights. Unanimously, the ISOs recognized data security and privacy protection as distinct responsibilities and struggled to reconcile the two. In the absence of an explicit mandate for privacy protection, they claimed for themselves what could be described as “authority creep”: “So technically in an org chart situation, I am not responsible for privacy. I might be responsible for data protection. So the discussion of what we collect and how it's used—as long as it's authorized by somebody with authority to authorize that, technically that's where my authority ends. And I’m only involved because nobody is really aware that I don’t have the authority from an organization perspective.” (Int6)
Authority creep also arose because ISOs, as the ultimate data authority on campus, were occasionally tasked, sometimes serendipitously, with making decisions regarding data access that leaned more toward privacy concerns rather than data security issues. Sometimes such situations involved well-defined data access requests that followed established protocols, such as requests for student data access where the campus Registrar, rather than the ISO, had decision-making authority. However, data custodians frequently encountered unique data access scenarios that fell into gray areas lacking routinized decision-making processes.
In our interviews, we found that data stewards employed three informal privacy-protective behaviors: risk-passing, active consultations, and forcing reevaluation. Risk-passing involved escalating a data access request up the decision-making chain or redirecting it laterally to a likely data owner to shift liability to the most appropriate authority: “My position is to present risk and if the risk is high enough, then I’m looking for someone to sign off on that risk. So the [decisional] no would be if someone doesn’t sign off on the risk; then I wouldn’t let [the data access request] go through until someone did” (Int10). This response was more common among non-ISO data stewards and most clearly aligned with an institutional approach to privacy understood as a compliance task (Waldman, 2019).
In contrast, most ISOs engaged in a combination of active consultation and forcing reevaluation practices. Active consultation involved seeking formal or informal input from various stakeholders (despite an ISO not being required to do so) to facilitate collective decision-making meant to distribute and institutionalize the risk. The specific stakeholders varied across campuses based on factors such as campus culture, the ISO's reporting lines, or whether the campus had an established data governance committee. Forcing reevaluation occurred when the ISO, after becoming involved in a consultation process at someone else's request, presented an alternative perspective or, as expressed by one ISO, provided “the counterintuitive point” (Int10). Sometimes ISOs used the process of questioning as a deliberate delay tactic to push for what they believed to be the right thing to do: “So, I just happened to get wind of something like this [a data access request] and it came across [my] desk because they put it through our procurement process, somehow. And they’re like, ‘Oh, yeah, we’re just doing this. We’re sharing data with them; they’re going to do this. Just approve it.’ I said, ‘No.’ I’m like, ‘What are we doing here?’ and ‘What data is it?’ and I started playing 20 questions…And we went around and around for like two months and it's like, ‘[ISO name], we’re waiting for your—can you just sign the thing?’ And I’m like ‘No. It's got to go through procurement; it's got to go through all the processes now.’” (Int7)
Risk passing, active consultation, and forcing reevaluation are not the conventional privacy-protective behaviors we would expect. Instead, these tactics represent adaptations to an institutional culture of privacy apathy. For university data stewards, these behaviors are a form of role-shaped self-efficacy to navigate and mitigate institutional privacy risks. However, the data custodian's position within the campus organizational structure can impose additional constraints, as we will explore in the following section.
Role strain
ISOs’ placement within the organizational reporting structure is crucial for their authority to protect privacy and sometimes even for their ability to fulfill their regular data security responsibilities. At the time of the 2019 interviews, about one-third of ISOs reported directly to their campus President, another third to the Chief Academic Officer (i.e. the Provost), and the remaining third reported to either the Chief Financial Officer (typically located in Administration and Finance) or the Chief Information Officer (typically in IT). These reporting lines had an impact on the ISOs’ ability to “get wind of” technologies adopted outside normal university procurement processes: “And we view…[e]verything to do with any third-party vendor, when [the contracts] are done right. A lot of times [the contract] circumvents [the process] and if it's like the VP of Student Affairs who has to sign it, I may never see it” (Int7).
Moreover, the ISOs’ reporting lines often led to role strain. As famously formulated by Merton (1938), strain theory predicts that, at least in certain situations, the gap between one's role responsibility and one's personal orientation produces nonconformist behaviors like those discussed earlier. As security professionals, ISOs engage in data-protective behaviors that occasionally run counter to the rhetoric of technological innovation. The strategic imperative to innovate with learning technologies often translates into collecting additional student data, which then becomes university property and can be further mined for different purposes. In contrast, the ISOs defined their role as fencing or stopping the flow of new information. One participant described it: “I’m information security. I put the no in innovation” (Int7). However, for those reporting to the Chief Information Officer (CIO) of the campus, typically responsible for improving the campus technological infrastructure, their position in the hierarchy often created a conflict of interest: “…we are more concerned with security of the data, but from [the IT] side they’re more concerned with new technologies and operation. So for them…where there is a new product they are going to install tomorrow, you just found out about it, and the people who want it in place did not do any security assessment on it. I mean, that's part of our daily lives. Most times with the CIO in charge, I don’t have the power to stop it, so the project will come on. Security becomes an afterthought, instead of being embedded in the process.” (Int9)
That security, privacy, and innovation stand in an uneasy balance in the campus organizational chart became apparent when the participants discussed the challenge of implementing the GDPR in American higher education. While US universities are not obligated to appoint a privacy officer under GDPR, several ISOs commented on the advantages of having a designated privacy officer to aid in GDPR compliance and effectively handle data protection issues. Where that position would reside, however, was a matter of debate: many ISOs expressed concerns about conflating ISO duties with the responsibilities of the privacy officer, as it would create a “significant propensity for conflict”: “…I think it's risky to make them the same person, because now you’ve got these potentially conflicting issues…I just think if I’m fundamentally looking at making sure that data stays protected, has integrity, and is available for the operation, I’m not necessarily going…questioning around a contract as someone who's really making sure that student privacy is protected.” (Int8)
These concerns indicate that the ability of US higher education to respond effectively to the regulatory challenges posed by international privacy laws will depend significantly on the organizational placement and responsibilities of the new data stewards charged with privacy protection.
In sum, although data stewards are not explicitly tasked with safeguarding user privacy, our participants took a broad interpretation of their data protection mandate. In a system not prepared to accommodate rapidly developing privacy challenges, data custodians, especially the ISOs, engaged in adaptive behaviors and sometimes were locally effective in implementing pragmatic data-protection tactics as a form of privacy activism. These tactics emerged due to the constraints imposed by the definition and responsibilities of a data custodian's role and by the organizational structure of those role positions on campuses. The following section will address the tensions between the role-based enactment of privacy and the data stewards’ understanding of privacy outside their roles.
Mutually reinforcing resignation: Privacy cynicism and privacy generational gaps
The privacy cynicism framework assumes that individuals are either data providers (“users”) or data workers (e.g. corporate agents), but not both. Yet, university data stewards occupy both positions at once: they act as data workers within their professional capacity and as data providers outside of it. Therefore, a key aspect of our research was to explore whether their understanding of privacy and data protection practices differed across these role boundaries and, if so, with what consequences for their expressions of privacy cynicism.
Recall that the IAM does not provide an explicit institutional definition of privacy. When asked to define privacy, participants attempted to articulate their own definitions. These definitions aligned with two main conceptions of privacy: institutional trust and individual choice. Out of the 15 participants, six provided a trust-based definition, two provided a choice-based definition, five did not define privacy explicitly, and one distinguished between role-based privacy, conceptualized as institutional trust, and personal privacy conceptualized as a personal choice.
“Privacy as trust” (Hartzog, 2018; Richards and Hartzog, 2015) suggests that privacy is fundamentally about establishing practices that foster trust in how personal data is handled. For the participants, privacy as institutional trust was described as: “being able to get the data that we need as an institution…knowing that it won’t be shared” (Int5); “Data protection…is what we do to safeguard your data from unauthorized use, and privacy is how we safeguard your data from inappropriate use” (Int11). Therefore, privacy as institutional trust aligns with a compliance-based definition that can be inferred from the IAM. In contrast, a conception of privacy as a “right,” which has a complex history in American jurisprudence, is grounded in legal and ethical principles that emphasize individuals’ autonomy and personal freedom from surveillance. For the participants, privacy as choice referred to: “an individual's right to say ‘I do not want my data to even be collected. It's private to me. I control it.’” (Int6); “details about me [that are not] readily available to others [with whom] I would never choose to share the information with” (Int8). The two conceptions of privacy are not contradictory, but they do prioritize different policy objectives: improving the audit of institutional data practices vs. bolstering individual choice in matters of personal data collection (e.g. through notice and consent).
Interestingly, participants’ definition of personal privacy appeared to be linked to their levels of “privacy cynicism” (Lutz et al., 2020), which was evident when discussing their personal data practices outside their roles as data stewards. Indeed, nine data custodians expressed privacy cynicism as a combination of feelings of mistrust, powerlessness, and resignation: “I mean, I’ve been in IT and information security a while, and I’ve taken every possible control I can take for my personal information, but two years ago when I got a T-Mobile phone for my wife…even though at my work and home I don’t put stuff online…I got a letter that I’m one of those thousand people whose Social Security is at risk. Everything a hacker needs to steal your identity was compromised. And what can you do? At that point they offer you one year of worthless credit service check…Any additional information you want, they want you to sign up and pay more to even get your own information, even though they’re the cause of the reason they hacked you. They’re just using it as a marketing tool to sell you more services…So I don’t trust the system. It's all about following the money.” (Int9) “…you know, our privacy is rapidly eroding. For the most part, we’re accepting that as the cost of convenience of being able to be part of these worldwide social media networks and everything from shopping online to Facebook and LinkedIn. There are benefits to me having my information there, and then in exchange I’m losing privacy at a relatively rapid pace…I don’t do as much as I should. I’ll think a lot about it, but I also feel like in a lot of ways we’re so deep into this that trying to—you know, the cows left the barn.” (Int8)
In line with understandings of digital resignation, the privacy cynics were unanimous in their belief that the only cure for privacy threats was staying, unrealistically, “off the grid.” As summarized by one participant, “…we’re ultimately sacrificing convenience for privacy and, I think, to a large extent many of us are doing it knowingly. But coupled with that is, we are also potentially doing it without an alternative in many cases. I think there are some things now that you can only do online. And if you don’t want to put that information out there, you almost don’t have an alternative anymore” (Int15).
Interestingly, out of the nine privacy cynics, five provided a role-based definition of privacy, with an equal split between institutional trust and choice (one individual provided both definitions). In contrast, those who did not express privacy cynicism overwhelmingly defined role-based privacy as institutional trust. This finding suggests that a misalignment between personal and organizational definitions of privacy could contribute to the sense of digital resignation among institutional actors.
For those who expressed privacy cynicism, their sense of futility was further heightened by the parallel belief that students, one group of “data users” served by the data stewards, did not value their privacy. About half of the participants thought that students either did not care about privacy or were willing to trade it for the sake of convenience and free access to social platforms. These perceptions were especially prominent in later interviews: “[Students] don’t really seem to care unless something bad happens to them. They don’t really care.” (Int13); “I would also put a bet on that 95% of our students, or maybe even higher, would not care to learn about [privacy]” (Int14).
Data custodians’ perceptions of the younger generation's lack of care persisted even when they explicitly acknowledged that the university—and the educational system in general—did not train students to think critically about the implications of societal surveillance. As one participant declared: “I think the parents do the best they can…[But] I think schools probably should take a much more active role…In all honesty, that should probably be something that's taught in elementary school. You see second graders with phones. They should, you know, understand what that means to have a phone and share messages and send messages.” (Int15)
The participants frequently attributed students’ disinterest in privacy-protective behaviors to sharp generational differences between the privacy attitudes of the data stewards’ age group and those of the millennials populating the school halls. The participants’ privacy-cynical attitudes and their perception of students’ indifference toward privacy seemed mutually reinforcing, which could potentially lead to increased feelings of privacy resignation, further discouraging data stewards from protecting stakeholders they perceive as careless about their privacy.
All participants agreed that campus members (whether students, faculty, or administration) could stand to benefit from more education on data ethics and digital literacy. Numerous calls for critical data literacy in education have already been made (Atenas et al., 2023; Kuhn, 2023; Raffaghelli et al., 2020; Wilson et al., 2017). However, when asked to consider the nature of such a program, its curricular placement, or the office in charge, it became apparent that neither the university curriculum nor specific campus initiatives provided a natural space for this training, beyond the routinized data security training required for those with higher levels of data access. These findings underscore the limitations of students’ agency in engaging in critical data literacy as a form of resistance to educational datafication (Thompson and Prinsloo, 2023). Indeed, the importance of a meso-level analysis of organizational and role-based pressures in implementing new data infrastructures is precisely in highlighting how little agency students really have in opposing these data processes (Prinsloo et al., 2018).
Conclusion
This study examined how academic data stewards in a large public university system in California navigate the demands of their role in increasingly surveillance-heavy work environments. The goal was to understand the relevant tensions that might impact institutional data governance policies. The findings present promising avenues for further inquiry into institutional data governance, particularly in educational settings and for the conceptualization of privacy cynicism.
First, we observed a contrast between the negative perceptions of surveillance in general and the positive perceptions of university surveillance, which was seen as necessary and beneficial. This observation becomes more significant when considering approaches like “privacy by design,” which emphasize embedding privacy into the design and architecture of digital services and business practices from the outset, rather than treating it as an afterthought (Cavoukian, 2010). Adopting a privacy-by-design approach could be advantageous in rethinking university data governance as a counter to institutional privacy apathy. Gotsch and Schögel (2023) suggest four key themes that extend readily to educational settings: (1) strategic initiatives (e.g., adopting privacy-conscious technologies), (2) structural improvements (e.g., aligning data practices across data domains), (3) human resource education (e.g., incentivizing university employees to develop privacy skills), and (4) service development (e.g., offering digital literacy courses as part of students’ general education requirement). However, privacy-by-design approaches carry normative assumptions: that organizational cultures facilitate proactive privacy protection and that the transparency and availability of choices necessarily empower end-users. These assumptions may not hold across all settings. Indeed, critics argue that the availability of choices may not empower individuals and could even reinforce data capitalism and strengthen institutions’ power over individuals (Baruh and Popescu, 2017; Draper and Turow, 2019; Seberger et al., 2021). Addressing institutional privacy apathy therefore requires a comprehensive understanding of the organizational and managerial processes that hinder a proactive approach to privacy.
Second, we found that data stewards engaged in local privacy activism as a form of data work that pushes against the constraints of their organizational roles. Risk passing, inviting active consultation, and forcing a reevaluation of data access requests are short-term managerial tactics used to cope with perceived institutional obstacles. These short-term tactics, however, can become band-aid solutions that, once institutionalized, restrict data stewards’ ability to propose better structural approaches to privacy protection, leading to further institutional privacy apathy. Here again, privacy by design could prove beneficial. This approach, which informs regulatory frameworks like the GDPR and California's CCPA (as amended by the CPRA), would involve proactively anticipating and addressing privacy risks before they occur and creating fair information principles tailored to big data environments. Such an organizational transformation would enhance data quality and processing transparency, giving consumers a voice in how their data are used. However, as noted in the introduction, the CCPA's privacy restrictions do not currently apply to universities, and it remains unclear whether and how the GDPR's restrictions will apply. From a policy perspective, it is worth seeking clarity on the criterion for determining an organization's participation in the data economy. Should this criterion be whether the organization generates profit from accumulated data (as the CCPA assumes) or whether the organization collects and manages personal data as an organizational asset, subject to asset-like protections? If the latter, one viable counter to educational datafication could be subjecting higher education to CCPA-like privacy regulations.
Third, we emphasized the importance of understanding how data stewards define privacy and whether their definition aligns with the institutional definition. Consistent with the literature on digital resignation (Turow et al., 2015), it may well be that digital resignation develops due to the tension between the organizational definition of privacy, which usually prioritizes the benefits of extensive data collection for growing the organization's information assets, and the legal definition of individual privacy, which prioritizes choice and personal control. An organization's definition of privacy affects the ability of its employees to perceive and respond to the full spectrum of privacy risks posed by emerging data economies. Therefore, an organization's privacy culture can influence the privacy behaviors exhibited by its employees, even when their knowledge and position provide them with more opportunities than most to minimize privacy risks from digital engagement. More generally, these conclusions suggest that future research on privacy cynicism should investigate the effects of privacy cultures in work environments on individuals’ privacy orientations.
Fourth, our participants identified organizational gaps in how campuses educate students about privacy and security awareness. Their observations align with research indicating that students lack an understanding of how universities use their personal information. Indeed, in 2020, EDUCAUSE surveyed over 16,000 students from 71 universities in the United States and found that only 22% of students felt adequately informed about how their data was used by their university (Gierdowski et al., 2020, online). These findings indirectly support the idea that privacy resignation is a rational response to consumers’ mistrust of organizational data practices. In true neoliberal mode, our participants tended to blame students for their poor privacy practices, even though these data stewards acknowledged that students could not be expected to know how to protect themselves or indeed care about privacy in an institutional culture that does not socialize them into privacy-conscious behaviors or offer meaningful data choices. This observation highlights the important role that schools ought to play in countering privacy resignation by proactively educating their campus members, modeling responsible personal data sharing through privacy-protective data governance practices, and making those practices understandable to the campus community.
Lastly, data stewards' dual position as both data professionals and private users should remind us that individuals engage in multiple dyadic data relationships, sometimes as or on behalf of “data consumers” and sometimes as “data providers.” Exploring how individuals navigate these data relations simultaneously, and with what consequences for privacy cynicism, may be a fruitful avenue for future research. Importantly, our analysis suggests that how “data consumers” perceive the privacy orientations of “data providers” may create a mutually reinforcing effect, leading to a spiral of resignation. Data custodians’ privacy cynicism appeared associated with, and probably reinforced by, their perceptions that the “clients” they serve (i.e., the students) were careless about privacy. At the organizational level, these findings support the notion that digital resignation may be “organizationally cultivated” (Bagger, 2021). Therefore, any effort to change organizational privacy governance must consider how the institutional actors responsible for implementing privacy policies perceive the privacy attitudes of those they are protecting, and why. At the individual level, these findings suggest a more general bandwagon effect that contributes to a self-fulfilling prophecy: data workers’ belief that “people don’t care about privacy” reinforces their privacy cynicism, further demotivating them from engaging in privacy-protective initiatives that could make a difference at the institutional or corporate level. Future research should incorporate considerations of both role duality and organizational culture into studies of privacy cynicism.
Acknowledgements
The authors would like to express their gratitude to Dr Javier Torner for his invaluable assistance in facilitating access to the study participants and for his insightful contributions to the authors’ understanding of data governance. The authors also extend their appreciation to the anonymous reviewers and the editors of the special issue for their valuable comments and suggestions on the manuscript. An earlier version of this article was presented at the 2019 Convention of the International Association for Media and Communication Research.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
