Abstract
Personal data are produced through our daily interactions with digital technologies like search engines, social media, and online shopping, and are often referred to as our “digital exhaust.” They have been characterized as the key resource or asset for our economies in the 21st century. This paper focuses on the socio-technical imaginaries of digital personal data as a way to understand how desired forms of data governance are co-produced with collective understandings of personal data as a political-economic asset. We examine the different socio-technical imaginaries that underpinned different developments in data regulations in the United States and EU from 2008 to 2016, focusing specifically on the mutual constitution of law, political economy, and technoscience. We do so in order to understand the “prehistories” of contemporary data governance. We analyze the institutional and legal context around the development of data privacy regulation and data commercialization in these two important jurisdictions and reflect on how this institutional and legal context configured their respective approaches to data governance.
Introduction
In December 2020, the European Commission (EC) put forward its proposal for a Digital Markets Act (DMA) as part of a package of reforms to strengthen the European Union's market framework conditions in response to fears about the growing power of “Big Tech”—usually defined as Apple, Amazon, Microsoft, Alphabet/Google, and Meta/Facebook (Birch et al., 2021; Moore and Tambini, 2022). The DMA is supposed to come into force in 2023, following approval by the European Parliament and European Council. 1 Its aims are to establish a framework to rein in the market and social power of Big Tech firms. The political, policy, and public concerns about Big Tech usually relate to their position as key intermediaries in our economies and societies, providing the underlying digital infrastructures, platforms, and ecosystems for work, social life, and democratic decision-making. The DMA specifically defines and describes Big Tech firms as “gatekeepers” that can disrupt markets and innovation through their anti-competitive actions and strategies (e.g. self-preferencing, terms and conditions agreements, control over platform rules, etc.). As such, the DMA builds upon earlier policies in the EU to establish data governance regulations like the influential 2018 General Data Protection Regulation (GDPR). As a globally significant example of contemporary data governance, the GDPR has helped to shape and reshape other jurisdictional approaches to data governance around the world; nevertheless, it has also been subject to critique for the failure of the EU to address issues with enforcement (Edwards, 2018).
Data governance, like GDPR, is of critical importance today as a result of the increasing ubiquity of the mass collection and use of digital personal data—henceforth “personal data”—which can have significant negative consequences for individuals, businesses, and governments. 2 Before the advent of Big Data, “small” or offline data were inherently difficult to collect en masse, requiring extensive resources (e.g. time, labor, and money) (Pasquale, 2015). However, personal data today do not face the same limitations and can be collected automatically and stored indefinitely (Cohen, 2019; Prainsack, 2019; Birch et al., 2021; Viljoen, 2021). Many of these data are produced through the billions of daily interactions individuals have with digital technologies developed by Big Tech, like social media, search engines, and online marketplaces. These personal data have been called our “digital exhaust,” which is collected, aggregated, stored, and analyzed in different ways and for different purposes by commercial players like “data brokers,” Big Tech multinationals, AI startups, and others (Nissenbaum, 2017; Zuboff, 2019; Birch et al., 2021). Consequently, data governance has become a critically important issue in public, political, and policy debates, much of which is defined by increasing fears about the exploitation of personal data as a private resource or asset that underpins much of our economies in the 21st century (Nissenbaum, 2017; Zuboff, 2019; Birch et al., 2020, 2021).
How personal data are collected and used depends upon data governance regimes, which can be quite distinct from one another. Since different countries and jurisdictions have pursued quite different approaches to data governance, this raises several questions: why has this happened? What led to the development of these distinct forms of data governance? To answer these questions, it is important to stress that data governance policies, regulations, and frameworks are not effects of “technical” decisions or technological systems; rather, the development of data governance regimes reflects a socio-technical settlement entailing what Jasanoff (2015: 14) conceptualizes as the “mutual emergences in how one thinks the world is and what one determines it ought to be.”
With personal data, then, analyzing the specificity of this socio-technical settlement necessitates an examination of how data regulation is co-produced with a particular understanding of personal data as a political-economic asset (Birch and Muniesa, 2020; Birch et al., 2021). Comparative analyses particularly lend themselves to this task, which is why we focus on comparing the United States and EU in this paper. To do this, we examine the development of data governance in the United States and EU between 2008 and 2016. We focus on this specific time period for two reasons. First, it provides the context for the development of the GDPR in the EU, which has had a significant global impact (Edwards, 2018). We can then compare this development of EU data governance with the United States, as another important global policy actor and one which has taken a different route. To do this, we examine what we are calling the “prehistories” of contemporary data governance in these two jurisdictions. Second, although the United States has a long institutional and legal history, the current constitutional settlement in the EU is more recent, being formalized in the Lisbon Reform Treaty signed in December 2007. 3 This means that 2008 represents a useful starting point for analyzing the EU's institutional and legal context for the development of contemporary data privacy regulations and personal data commercialization.
Our argument is that the development of data governance in these two jurisdictions reflects their distinct socio-technical imaginaries of personal data (Jasanoff and Kim, 2009, 2015), and that these imaginaries reflect jurisdictional differences in how the United States and EU understand and frame the commercialization of personal data and its regulation. Consequently, socio-technical imaginaries are implicated in the emergence of specific varieties of technoscientific capitalism (Rahman and Thelen, 2019; Birch and Muniesa, 2020); for example, through the framing and regulation of personal data as a political-economic asset (Birch et al., 2021). Hence, adopting socio-technical imaginaries as an analytical approach helps us to understand the different views of and ways of dealing with personal data in different jurisdictions, both of which inform the development of distinct data governance frameworks. Rather than identifying a single point of convergence or unifying logic (Jasanoff, 2015), we focus on the particularities of data governance in the United States and EU, and especially how differences in laws, regulations, and commercialization of personal data reflect specific socio-technical imaginaries.
The paper is structured as follows: first, we discuss the literature on “imaginaries,” highlighting the theoretical implications of socio-technical imaginaries for our analysis; second, we discuss our methods; and third, we analyze U.S. and EU policy discourses and frameworks on privacy law, regulation of personal data, and commercialization of personal data as manifestations of their respective socio-technical imaginaries. We then conclude with a summary of our findings and reflection on the implications of our work in light of recent policy initiatives directed at reining in the power of Big Tech (Birch and Bronson, 2022).
Socio-technical imaginaries
There is a long intellectual history to the concept of imaginaries, stretching back, at least, to Anderson (1983) and Castoriadis (1987). Our concern is with socio-technical imaginaries as developed by Jasanoff and Kim (2009, 2015) and others (e.g. Pickersgill, 2011; Hilgartner, 2015; Birch, 2019; Hassan, 2020; Hodson and McMeekin, 2021). Jasanoff and Kim (2009, 2015) and Jasanoff (2015) conceptualize socio-technical imaginaries as “collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of social life and social order attainable through, and supportive of, advances in science and technology” (Jasanoff, 2015: 6). Jasanoff and Kim (2009) originally defined them as national in scope and scale, as well as embedded in national histories and politics. As such, although some imaginings emerge at the individual level, they only become a socio-technical imaginary once collectively adopted (Jasanoff, 2015). Consequently, there is competition over imaginaries as different stakeholders promote different visions in policy and political discourse (Hilgartner, 2015). As Jasanoff (2015: 4) argues, it is possible for several imaginaries to “coexist within a society in tension” with it falling to “institutions of power to elevate some futures above others, according them a dominant position for policy purposes.”
These individual and collective visions are manifested in normative and public discourses about how best to rearrange social practices to benefit from technoscientific innovation (Pfotenhauer and Jasanoff, 2017), thereby reconfiguring both society and technoscience in mutually constitutive ways. Here, the “technoscientific problematic” posed by something like personal data is “framed by normative and practical visions concerning the optimal organization of society and science itself” (Tyfield, 2012: 158). For example, Jasanoff and Kim (2009) analyze the imaginaries of nuclear power in South Korea and the United States, outlining the dominant and stabilized framings in each country that shape nuclear research. Pickersgill (2011: 28), though, sees socio-technical imaginaries “as one way that anticipatory discourse and practices are structured, and thus as a mechanism through which futures are designed.” Thus, socio-technical imaginaries also help to frame desirable futures. Proponents of particular imaginaries do not leave the emergence of desirable futures to chance—they (re)shape institutions, regulations, and policies to support their desired futures and minimize deviations from the imaginary (Birch et al., 2014).
Consequently, socio-technical imaginaries are often performatively manifested in policy narratives and frameworks (Felt et al., 2007; Birch et al., 2014; Birch, 2019), and there comes a point where the imaginary transitions from ideational to having “real” policy effects. How this process occurs varies. Felt et al. (2007) conceptualize the transition by articulating a hierarchy of narrative imaginaries that goes from abstract to concrete. They define “master narratives” as ways to set horizons, defining what is possible or desirable, who relevant actors are, and what historical narrative to highlight. After the horizon has been set, policy narratives lay out roles to be performed—allowing actors to perform the ideals set out in the master narratives. Nowotny (2014) defines the success of collective political imaginaries as dependent upon utilitarian calculations. For Nowotny, imaginaries are presented to the public in terms of short- and long-term benefits. By extension, any imaginary that fails to convince the public that the good outweighs the bad will be contested.
Focusing on economic imaginaries, Jessop (2004, 2010) argues that “successful” imaginaries are those that have a constitutive effect on the world. Institutional stakeholders play an important role in anchoring imaginaries, socio-technical or political-economic. According to Jessop (2004: 7), government and public stakeholders (e.g. think tanks, business associations, etc.) are able to “manipulate power and knowledge” in order to establish their preferred political-economic order. Jasanoff (2015: 6) theorizes this process by focusing on government and legal institutions such as the courts, regulators, and so on, which she argues are capable of “elevating” an imaginary. In her work with others on technoscientific “constitutionalism” (Hurlbut, Jasanoff, and Saha, 2020: 983), for example, they argue that: “The institutions within which novel technological configurations of life are subjected to ethical evaluation and moral ordering, such as parliaments, courts, or ethics committees, are themselves bound by imaginations and practices of legitimate rule, right modes of public reason, and felt entitlements of political subjects.”
Here, technoscientific constitutionalism highlights the need to analyze how distinct forms of technoscientific governance are enabled or constrained by different legal regimes, which themselves reflect a distinct jurisdictional settlement (Hurlbut, Metzler, Marelli, and Jasanoff, 2020). The question of technoscientific constitutionalism is increasingly important in debates about personal data, where there is growing concern about contemporary modes of data governance that enable the mass collection and commercial use of personal data with limited recourse to change these practices (Cohen, 2019; Prainsack, 2019; Zuboff, 2019). As our analytical aim in this paper is to explore what we are calling the “prehistories” of contemporary data governance to understand current differences between jurisdictions, we focus on analyzing the distinct socio-technical imaginaries in the United States and EU, reflecting the technoscientific (e.g. data collection), political-economic (e.g. data commercialization), and legal (e.g. data privacy) context in which data governance has evolved over time as a “process of becoming” (Hodson and McMeekin, 2021).
Methodological framework
To undertake our analysis of U.S. and EU imaginaries, we examined the framing of personal data by institutional stakeholders in policy documents (Pickersgill, 2011; Hassan, 2020). As Jessop (2004) and Jasanoff (2015) note, institutional stakeholders (e.g. government agencies) can “elevate” an imaginary through their influence and knowledge. Hence, we directed our attention toward institutional stakeholders and how they elevate, anchor, and stabilize their preferred socio-technical imaginaries. Jasanoff (2015) argues that the context in which claims are made matters since the selective use of history can be employed to distance problematic events from desired socio-technical development. Our analytical focus on socio-technical imaginaries means that we are specifically interested in collectively held and generated visions, although we acknowledge that there are instances where these may be contested and unsettled by the individual or institutional stakeholders (e.g. individual judges, constitutional courts).
Regarding document selection, we drew primarily from government policy documents produced between 2008 and 2016 as representative of the key institutional stakeholders in the United States and EU. We also considered documents produced by other institutional stakeholders, such as privacy advocates, industry associations, and NGOs, though to a lesser extent. We chose this timeframe to reflect the “prehistories” of contemporary data governance in these two jurisdictions. Although the United States has a longer institutional and legal history, the current constitutional basis of the EU was formalized in the Lisbon Reform Treaty, signed in December 2007. This means that 2008 is a good starting point for comparing the EU with the United States through an analysis of their respective institutional and legal contexts underpinning the development of contemporary data regulations and data commercialization.
Our choice of institutional stakeholders in the United States includes government and non-government actors. The former includes the White House, Office of Science and Technology Policy, Federal Trade Commission, and the President's Council of Advisors on Science and Technology. The latter includes non-governmental stakeholders like the Electronic Privacy Information Center (EPIC) and the Direct Marketing Association. While the White House may not be able to implement legislation alone, it is an ideal place to look for imaginaries, as each Administration is conscious of its legacy and often aims to effect societal change. As well, Presidents have various sources of influence over Congress and spend a significant amount of their time drafting, presenting, and arguing for their respective legislative agendas to Congress, while also trying to build public support for these initiatives. In turn, Congress spends much of its time responding either to Presidential initiatives or to public opinion (Edwards, 1976). While some Congressional documents appear in the U.S. analysis, Congress as a body is too amorphous to advance a consistent imaginary.
Like the United States, EU institutional stakeholders include government and non-government actors. The former includes government stakeholders comprising EC, European Data Protection Supervisor (EDPS), European Council, and their representatives. The latter includes non-government stakeholders centered on the Big Data Value Association (BDVA), a leading private industry association. We drew from EU-level documents rather than from member states since EU institutions are responsible for the future direction of the EU. On a practical level, moreover, EU documents are available in English and address a broad audience. Regarding the EDPS, although they do not have direct enforcement power, they influence the EU imaginary through their consultative and cooperative role with other national data protection authorities (see EDPS, 2009).
Empirical analysis: Comparing the U.S. and EU socio-technical imaginaries
In the following empirical sections, we first provide an overview of the legal and institutional context of the United States and EU. We then analyze the following topics: the development of data protection and privacy regimes; the regulation of personal data; and the commercialization of personal data.
US legal and institutional context
Between 2008 and 2016—and before that—there was no national legal framework in the United States responsible for regulating personal data; rather, regulation of personal data was addressed sectorally. U.S. citizens enjoyed broad privacy protections provided by the First (i.e. free speech) and Fourth (i.e. unreasonable search and seizure) Amendments in the U.S. Constitution. However, the First and Fourth Amendments apply primarily to public actors and do not restrict the private sector in the same way (Mendez and Mendez, 2010). The U.S. approach to regulating personal data can be seen as “fragmented,” with personal data addressed sectorally through various federal and state laws (Sotto and Simpson, 2014; Rahman and Thelen, 2019). There were a few laws in place in the United States that regulated electronic marketing, although these were mostly directed at specific types of marketing; for example, text message marketing was regulated under the 1991 Telephone Consumer Protection Act (Sotto and Simpson, 2014: 192).
Individually, U.S. citizens had a varying degree of control over their personal data, much of which followed the principle of “notice and consent” (Nissenbaum, 2017). Once an individual's personal data had been collected, they had little control over how it was used as a result of the United States' standing doctrine (Cohen, 2019). This requires the identification of “concrete,” “actual,” and “traceable” harms to an individual before they have “standing” to pursue a legal suit (p. 146). As well, individuals had little recourse for correcting inaccuracies in the personal data that were collected or knowing what had been collected on them—with California being the exception to this (see EPIC, 2005). The U.S. legal system set a relatively high threshold for the recognition of privacy harms and breaches to consumers. For instance, in the event of a data breach, Sotto and Simpson (2014: 197) state that “consumers would need to establish that they suffered actual damages as a direct result of the organization's negligence in order to succeed in their claim.” Furthermore, compensation depended on the infringed sectoral law. Given the difficulty of quantifying privacy harms generally, proving that an individual suffered “actual harms” was a high threshold (boyd and Crawford, 2012).
Between 2008 and 2016, the primary regulator of personal data in the United States was the Federal Trade Commission (FTC), reflecting the “market model” that dominates the socio-technical settlement in the United States (Hurlbut, Metzler, Marelli, and Jasanoff, 2020). The FTC's main enforcement tool was outlined in Section 5 of the Federal Trade Commission Act which “prohibits ‘unfair or deceptive acts’ or practices in or affecting commerce” (Sotto and Simpson, 2014: 191). The FTC interpreted this mandate broadly and enforced privacy violations by entities deemed to be engaging in unfair or deceptive acts (ibid.). Also notable was the lack of a government-mandated Data Protection Officer in private firms; it was up to the individual firm to determine whether or not they needed a “chief privacy officer” whose role, where present, was determined by business needs and not federal or state mandates (ibid.: 196).
Lobbying represented another institutional force within the United States. Operating through Congress, business associations, privacy advocates, and other stakeholders argued for their preferred approach toward the regulation and commercialization of personal data. Commenting on the influence of personal data lobbyists, the Financial Times stated, “the US tech sector has shed its distaste for lobbying” (Financial Times, 2014). For example, Google's political donations to the Republican Party overtook those of Goldman Sachs, making Google the largest corporate donor to that party in the United States (ibid.). Data brokers also engaged in lobbying, with Tom Hadley (Vice President of Government Affairs and Public Policy for Experian) expressing “confidence” that Congress would not place broad regulations on how online marketers use personal data (The Hill, 2014). In this sense, lobbying constituted an important force within the U.S. institutional and constitutional setup.
U.S. socio-technical imaginary
First, the Obama Administration viewed data protection and privacy through a “trade-off” lens; see Turow et al. (2015) for the origins and implications of this “trade-off” thinking. According to the White House, personal data chiefly brought social and economic benefits and, as such, the White House claimed that any laws pertaining to privacy must “evolve in a way that accommodates the social good that can come from personal data use” (White House, 2014: 61). Personal data regulation is framed here as a response to technological imperatives, rather than an issue in itself (see Pfotenhauer and Jasanoff, 2017). The reason is that there were fears that the benefits of the data economy could bypass the United States; in response, the White House attempted to “shape the behavior” of stakeholders and the public—corresponding to Felt et al.'s (2007) argument that visions help to lay out roles to be performed—using its authority to project certain expectations. Like the White House, the President's Council of Advisors on Science and Technology (PCAST) viewed personal data and privacy as a trade-off. PCAST stated, “It is an unavoidable fact that particular collections of personal data and particular kinds of analysis will often have both beneficial and privacy-inappropriate uses. The appropriate use of both the data and the analyses are highly contextual” (PCAST, 2014: 47). Furthermore, PCAST worked to normalize the negative privacy impact of personal data by pointing out that, over the course of American history, emerging technologies have frequently conflicted with privacy. In this sense, the White House and PCAST both normalized a privacy trade-off in the context of promoting innovation, reflecting a tendency in the U.S. tech sector to understand data protection and privacy in terms of notice and consent, rather than the introduction of limitations on data collection itself (Waldman, 2021).
Privacy advocates such as EPIC challenged the trade-off argument. Rather than focus on the political-economic benefits, EPIC (2015) emphasized the potential risks and harms that come from the continued commercial use of personal data, stating, “The current personal data environment poses enormous risks to ordinary Americans.” During the consultation phase of the White House Big Data Report (2014), for example, the President of EPIC, Marc Rotenberg, insisted that the upcoming report critically assess the existing regulatory regime and potential harms that personal data posed, in addition to opening up the report to public comment. Rotenberg reasoned that allowing public comment was essential, given that it was the public's personal data that was being collected, and potentially their privacy at risk (EPIC, 2014). Subsequently, the Office of Science and Technology Policy opened up a Request for Information (RFI) to facilitate public commentary (EPIC, 2015). Thus, U.S. institutions seemed at least partially responsive to privacy concerns, and ones broader than individual harms, reflecting the interest group pluralism present in the U.S. political system (see Foley, 2007).
Second, personal data were primarily governed through industry self-regulation and market-based approaches (Hurlbut, Metzler, Marelli, and Jasanoff, 2020). Market-based regulation was framed in a number of ways: for example, as improving cybersecurity standards or encryption (PCAST, 2014), or as promoting “industry best practices” and voluntary privacy frameworks, such as the FTC's privacy framework (2012). Underlying the preference for market-based solutions was the White House view that the sectoral and industry-specific U.S. regulatory regime encouraged innovation and economic growth; by not regulating broadly, the aim was to allow industry to innovate and not foreclose future innovative uses of personal data until after risks were well understood (White House, 2014).
Even the FTC's criticism of the data industry reflected the market-based model. For example, the FTC's “Reclaim Your Name” initiative, proposed by Commissioner Julie Brill, criticized the data broker industry for not providing sufficient consumer choice and operating secretively (Brill, 2013; Pasquale, 2015). Brill's criticism of data broker industry practices was harsh, but still sought to restore consumer trust through market-based reforms. Specifically, Brill (2013) stated that she wanted the industry to “develop a user-friendly, one-stop online shop,” essentially offering consumers a personal data clearinghouse to view and edit what information data brokers have on them, with the option to opt-out of collection altogether. More broadly, the conflict between the FTC and the data broker industry demonstrated that the FTC envisioned problems posed by the regulation of personal data in terms of consumer trust. The clearest example of the market-based model was the White House's proposed Consumer Privacy Bill of Rights (CPBR)—published in 2012 and proposed as a bill in 2015. It envisaged the FTC in an enforcement role, establishing “enforceable codes of conduct” (FTC, 2012: 29). As such, the CPBR fitted into FTC authority, which could already act against companies (FTC, 2012). This framed the FTC as being able to “effectively protect consumer data privacy within a flexible and evolving approach to changing technologies and markets” (ibid.: 29). Throughout this process, then, the FTC played a key “anchor” role—in Jessop’s (2004) terms—in buttressing the United States' market model of personal data commercialization.
Finally, institutional stakeholders framed the issue of commercialization differently. According to the White House, personal data provided numerous socio-economic goods and benefits, equally distributed and accessible to small and large firms alike (White House, 2014). For instance, the White House envisioned personal data as capable of providing banking services for the “unbanked” who were previously unable to access credit or banking services. Other stakeholders, such as civil liberties groups, raised concerns regarding possible pricing biases that are embedded within the algorithms that process data (see also boyd and Crawford, 2012). The White House (2014: 46) responded to the algorithmic concerns of civil rights groups by framing the issue as stemming from “[a] lack of transparency and accountability,” suggesting that providing consumers with these elements (e.g. through FTC engagement) would address these risks.
Whereas the White House framed the commercialization of personal data around broad societal benefits, the data broker industry had a narrower focus (Nowotny, 2014). Much of the data broker industry presented the commercial use of personal data as an economic boon. For example, the DMA—an industry association that represents companies that use personal data—asserted that 70% of the value derived from personal data comes from its free exchange and analysis (Deighton and Johnson, 2013). Without widespread commercialization, the U.S. economy, according to the Data-Driven Marketing Institute (the DMA's academic and non-partisan think-tank), “would be significantly less efficient. US companies would have to spend considerably more than $110 billion to maintain current output levels” (p. 2). Measuring the economic benefits and costs in monetary terms was a way to enroll supporters of the digital marketing industry, transforming the abstract notion of personal data into concrete societal benefits (Felt et al., 2007). Though this economic framing may be influential, these claims were contested by other stakeholders.
The FTC and data broker industry framing of commercialization illustrated how the legal and institutional setup in the United States “afforded” changes in post hoc consumer and market transparency and accountability, but constrained the implementation of ex-ante data protection regulations. In response to growing public concerns in the United States regarding how the data broker industry operated, for example, various branches of the U.S. government and the FTC launched investigations into the matter (see GAO, 2013; US Senate, 2013; FTC, 2014). Yet, it was not the existence of or commercialization of personal data that raised concern among government stakeholders, but rather the lack of transparency and accountability to consumers. This evidenced a particular problematization of commercialization for the FTC, where problem and solution are codependent (Neyland, 2016). In particular, the FTC highlighted consumer transparency and choice as significant goals, affirming, “[a] lack of transparency and choice remain a significant source of concern about this industry” (FTC, 2014: vii). As a solution, the FTC appealed to Congress to create legislation that reveals data broker practices and other measures that facilitate consumer choice under a continuing notice and consent framework (Nissenbaum, 2017) and within existing legal protections for data collectors designed to protect their aggregated collections of personal data; for example, the 1986 Computer Fraud and Abuse Act that “treated hacking [of data] as theft” (Pistor, 2020: 109).
The U.S. socio-technical imaginary of personal data was characterized by a benefits trade-off between privacy and commercialization in which market-based regulation and consumer trust came to the fore to reinforce rather than challenge commercialization. The U.S. benefits trade-off contained a contradictory view of its public (Welsh and Wynne, 2013), however, where the public was framed as both beneficiary and disruptor of the benefits provided by commercialization. To avoid unruly disruption, institutional actors presented narratives to normalize expectations surrounding past and present privacy-technology conflicts. Regulation in the U.S. imaginary encouraged multi-stakeholder involvement (i.e. pluralism) as a way to enroll a critical mass of supporters (Jasanoff, 2015). Yet, institutional actors in the United States maintained the direction of regulation by tacitly setting desired horizons (e.g. by including or excluding actors) and by co-opting unruly voices to retain political legitimacy (Felt et al., 2007). The FTC was positioned as the key institutional force in correcting perceived market failures, while commercialization of personal data was understood through a continuation of the standing doctrine; this emphasized “good” commercialization as resulting from improving consumer trust and “bad” commercialization as involving the opposite (Jessop, 2010).
EU institutional and legal context
Data protection and privacy have long been recognized as rights in the EU. Beginning in the 1970s, European nations began to implement their own national data protection regulations; for example, the 1970 Data Protection Act of the German state of Hesse and the 1973 Swedish Data Act. Moreover, the Council of Europe (not an EU-level institution) enacted the “Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data” in 1981 (Reichel and Lind, 2014). These acts laid the foundation for the EU's 1995 Data Protection Directive (DPD), which protected EU citizens from abuse of their personal information (e.g. data collection errors and inaccuracies) by government and non-government actors. These protections held even if EU citizens' data were processed in another country, although implementation of the DPD resided with individual member states (Reichel and Lind, 2014). The EU's “Right to Privacy” is broad, encompassing several other rights: a “Right to Personality” (e.g. a right to personal autonomy); freedom from state and non-state interference, which extends to professional or legal persons; and the express right to data privacy, outlined in Article 8.1 of the EU's 2000 Charter of Fundamental Rights. Article 8 guarantees the protection of citizens' personal data, outlines how those data should be handled, and provides for oversight of these rights by an independent data protection authority. Other relevant directives and regulations included Regulation 45/2001, which concerned the processing of personal data by EC institutions and the free movement of such data, and Council Framework Decision 2008/977/JHA, which concerned the protection of personal data in the context of police and judicial cooperation.
In 2008, there were two levels of data protection authorities: national and EU. At the EU level, the European Data Protection Supervisor (EDPS) defined its role as “to make sure that the fundamental right to protection of personal data is respected by the Community institutions and bodies” (EDPS, 2009: 4). The EDPS was directly responsible only for data processed by Community institutions and bodies; it was not responsible for data processed at the national level (i.e. where the bulk of personal data were generated) and had no supervisory authority over national data protection authorities (ibid: 5). Yet, the EDPS operated in a supervisory role, advising and cooperating with national data protection authorities; for instance, it frequently issued guidance on emerging technologies that affect data protection. National data protection authorities interpreted their obligations to enforce data protection compliance in diverse ways.
The EC proposed a reform of the DPD in 2012; this was significant for legal and institutional reasons. Article 288 of the Treaty on the Functioning of the European Union, which lays out the legal actions available to the EU, explains the difference between a directive and a regulation: A directive is binding, as to the result to be achieved, upon each member state to which it is addressed, but leaves to the national authorities the choice of form and methods. By contrast, a regulation shall have general application. It shall be binding in its entirety and directly applicable to all member states. (Schiedermair, 2014: 73)
Because the DPD was a directive and not a regulation, it allowed for differing implementations by EU member states, leading to divergence rather than the intended supra-national harmonization, a divergence compounded by the proliferation of directives and decisions. As such, the DPD had come to be viewed with “general dissatisfaction” (ibid: 74).
In 2011, the EDPS responded to the EC's plan for a revision of the DPD. In their opinion responding to the EC's “A comprehensive approach on personal data protection in the European Union,” the EDPS laid out the constitutional drivers—or “affordances”—of future data protection proposals in the EU. For example, the EDPS highlighted: technological changes like cloud computing; globalization and the cross-national transfer of personal data; the 2007 Lisbon Treaty and its creation of “a direct legal basis for a strong EU-wide data protection law”; and debates in international organizations like the Council of Europe and OECD (EDPS, 2011: 5–6). These drivers formed the basis for the subsequent development of data protection regulations, which the Commission proposed the following year and which culminated in the GDPR in 2016.
The EDPS opinion spells out the EU approach to data governance: The EDPS underlines that data protection is recognised as a fundamental right. This does not mean that data protection should always prevail over other important rights and interests in a democratic society, but it does have consequences for the nature and scope of the protection that must be given under an EU legal framework, so as to ensure that data protection requirements are always adequately taken into account. (ibid: 8)
As such, it differs significantly from the U.S. approach, discussed above, in which data protection was understood in relation to consumer and market transparency and accountability.
EU socio-technical imaginary
First, data protection and privacy legislation figured prominently in the EU policy discourse as both an important legal tool and a cultural landmark. Privacy laws and legislation were seen as one of the main means by which EU institutions both regulated data markets and sought to “build Europe as a polity and Europeans as citizens” (Felt et al., 2007: 44). Similarly, the Commission viewed the 1995 DPD as the “central legislative instrument for the protection of personal data in Europe, and [as] a milestone in the history of data protection” (EC, 2012: 3). Aside from its legal significance, the EC and EDPS both viewed the DPD as a cultural achievement. Reflecting on the DPD, the EDPS viewed the EU as a “trailblazer” in the realm of data protection, stating that it “continues to command the close attention of countries who are considering establishing or revising their legal frameworks” (EDPS, 2015: 9). On the one hand, the EC and EDPS viewed data protection legislation as binding Europeans together; on the other hand, they framed it as a trade-off between rights and commercialization.
Given the legal and political importance accorded to previous data protection legislation, the EC sought to reform the DPD to address new challenges posed by personal data. The EC asserted that the DPD had not kept pace with the “new, challenging digital environment,” which required increased harmonization and efficiency to protect personal data. Notably, the EC framed this issue as best addressed by shifting from a data protection directive to a regulation (EC, 2012). The EC advocated a change in legal instrument rather than in underlying principles to address the new privacy challenges posed by personal data, especially through harmonization across member states. This differed significantly from the United States. It entailed an explicit shift in the legal form of data governance, where the discretion of member states in their implementation of data protection was to be removed. This harmonization can be seen as a technoscientific constitutional settlement (Hurlbut et al., 2020), which was largely driven by the emphasis on privacy as a fundamental right in the EU legal and constitutional regime (Barber et al., 2019). The EC and stakeholders viewed the DPD as problematic because it allowed for a diversity of interpretations among member states. In effect, the EC had equated inconsistent levels of data protection with inconsistent access to fundamental rights (EC, 2012).
The EU conceptualized privacy as a trade-off, like the United States, but with citizen rights trumping commercialization. Speaking at a conference in Brussels on “Competition Policy and Privacy in Markets of Data,” Joaquin Almunia (VP of EC Competition Policy) explained how the trade-off was understood in the EU: “there is a delicate trade-off between privacy and better service and this is precisely why we need debates such as the one we’re having today” (Almunia, 2012). The debate was “delicate” due to the enshrined nature of privacy rights in the EU Charter of Fundamental Rights, as Almunia noted: The respect of private and family life and the protection of personal data are much bigger issues than just a commercial debate… In fact, they are freedoms enshrined in the Charter of Fundamental Rights of the European Union. (ibid.)
In sum, the fundamental privacy rights guaranteed by the EU Charter eclipsed other considerations in the context of personal data, including commercialization.
Second, however, the European Council also viewed state intervention as necessary and beneficial in the regulation and commercialization of personal data. The Council viewed personal data as “[an] important enabler for productivity and better services,” and stated that: EU action should provide the right framework conditions for a single market for personal data and Cloud Computing, in particular by promoting high standards for secure, high quality and reliable cloud services. (European Council, 2013: 2)
The “right framework conditions” involved collaboration and cooperation with member states to reinforce the benefits of market competition. As a legal and market regime, the EU differs significantly from the United States in seeking to establish ex-ante rules for market actors rather than regulating those market actors after the fact (Wörsdörfer, 2020). Here, the focus on personal data reflected the Council's view that digital markets were a “strategic technology” whose realization required the assistance of member states.
Having EU institutions intervene in the regulation of personal data was also portrayed by the EC as a commercial necessity. The EC argued that EU data firms were “falling behind” compared to those in the United States, stating: Big Data is one of Europe's key economic assets. Harnessing its potential could give European industry a huge competitive advantage. However, today only 2 of the top 20 companies changing lives and making money out of personal data are European. (EC, 2014)
The “falling behind” rhetoric advanced by the European Council and EC can be viewed as part of defining an external threat—a necessary step in legitimizing EU-level solutions and interventions (Rosamond, 2002; Birch et al., 2014). To “catch up,” the EC argued, “Europe needs to invest and strengthen all parts of the ‘data value chain,’ the people, organizations involved in data whether producing, analyzing, using, or creating value from it” (EC, 2014: 1–2). The personal data value chain involved the production, collection, analysis, and so on of personal data (see Birch et al., 2021). In short, the EC was justifying a multi-faceted intervention (i.e. state and market) at all phases in the development of a personal data industry in order to realize its benefits. This included significant state-level intervention through public–private partnerships (PPPs), which not only offered economic benefits (e.g. employment) but also would enable the EC to actively guide member states and stakeholders to implement the desired data governance regime (EC, 2014). This PPP approach was depicted by the EC as a “win-win” for all stakeholders involved, a view shared by the industry-based Big Data Value Association (BDVA, 2015).
Finally, rather than hinder the commercialization of personal data, the EDPS asserted that privacy-enhancing technologies (PETs) represented an emerging market opportunity (EDPS, 2014). Commenting on the lack of commercialization, the EDPS stated, “Thus far, relatively few companies in the digital economy have detected financial advantage in enhancing the privacy of their offerings” (p. 11). However, the EDPS suggested that PETs could be “fostered” to become a competitive advantage for European industry. To facilitate commercial development of PETs, the EDPS concluded, “A more joined up approach to data protection and competition could help stimulate a similar level of competition in online services” (p. 34). In this regard, the EDPS suggested that a competitive market for PETs would actually be stimulated through improved data regulation. In contrast to the U.S. emphasis on notice and consent aspects of privacy (Waldman, 2021), the EU promoted new digital technologies and markets to reduce the potential harms from widespread data collection and use.
In addition to facilitating a future market for PETs, the EU recognized the commercial value of personal data (WEF, 2011). For the EDPS, private industry's failure to account for personal data as an asset was problematic, given that “personal [Big Data] has become a substantial intangible asset used for the purposes of value creation, comparable to copyright, patents, intellectual capital and goodwill” (EDPS, 2014: 9). What made this omission worse for the EDPS was that many companies gave their users the impression that the services provided were “free,” when users in fact “pay” for services with their personal data (p. 10). The EDPS concluded that if private industry continued to label its services as “free,” this could lead to a re-definition of key regulatory concepts such as “transparency,” “market dominance,” and “consumer welfare” (p. 34). Altogether, the EDPS recognized personal data as a valuable (if intangible) asset that could have consequences for the future regulation of personal data if private industry did not alter its practice of labeling its services as “free” (see Birch et al., 2021). Again, the EU approach was distinctive here, reflecting earlier extensions of intellectual property rights to digital databases with the 1996 Legal Protection of Databases Directive, which contrasted with the United States, where similar moves were not taken (Open Future, 2021).
Producing and fostering consumer trust was seen as desirable in the EU, but primarily as a way to address citizen trust in the wider EU effort to confront “civic dislocation,” where citizens “lose trust” in their government and disrupt science and technology agendas (Felt et al., 2007). In a speech at the Digital Life Design conference, then Vice President of the EC Viviane Reding explained the economic importance of maintaining consumer trust: …only if consumers can ‘trust’ that their data is well protected, will they continue to entrust businesses and authorities with it, buy online, and accept new services – the new services, you in this audience, invent and develop. Reliable, consistently applied rules make data processing safer, cheaper and inspire users’ confidence. (Reding, 2012)
Reding's statement demonstrated two things: one, consumer trust was envisioned as a precondition for the successful commercialization of personal data; and two, consumer trust would be generated through “reliable, consistently applied rules” (i.e. those found in the EC's data protection reform package). In this sense, commercialization, trust, and data regulations were viewed by the EC as self-reinforcing.
In conclusion, the EU revealed a preference for a legislatively based and combined state-market regulatory and commercialization regime. Regarding privacy legislation, legal and political significance was accorded to data protection regulation, even considering new challenges posed by personal data. Within data protection, harmonization was envisioned as both a problem (i.e. lack of it) and a solution (i.e. the need for more of it). Any discussion of a trade-off between privacy and commercialization skewed in favor of European fundamental rights. Regarding commercialization, regulatory cooperation between data protection and competition was meant to foster an emerging market for PETs. The intangible yet “real” value of personal data had been identified as problematic in the EU political discourse, reflecting a need to rethink key legal concepts (e.g. market dominance, consumer welfare, and harm) within the EU. Lastly, consumer trust was viewed as a precondition to consumer participation in the digital Single Market. Yet the reverse was true as well—failure to uphold consumer trust would result in negative economic performance.
The EU imaginary was characterized by a rights trade-off conceptualization of privacy, state intervention in regulation, and consumer trust-based commercialization. In the EU trade-off, citizen rights were dominant, although there were deliberate attempts to “design future[s]” for the commercialization of personal data (Pickersgill, 2011). The rights-based trade-off was reinforced by the cultural and political significance accorded to data protection by EU institutional stakeholders and afforded by a specifically EU constitutional framework (Hurlbut et al., 2020). Harmonization was framed as both a problem and a solution to the challenges presented by personal data; in short, consistent data protection regulation was seen to construct a specifically European imaginary that would lead to significant economic benefits. State intervention in the regulation and commercialization of personal data was normalized and operationalized, especially via PPPs (Jessop, 2004), enrolling supporters through practical appeals to future benefits (Nowotny, 2014). Commercialization in the EU imaginary was also understood through a consumer trust framing, but one in which data protection regulation and state intervention in the market were seen to facilitate trust by providing the framework for market conditions in which social actors could operate and thrive.
Conclusion
In this paper, we have outlined and analyzed the prehistories of data governance in the United States and EU from 2008 to 2016. The United States was characterized by a trade-off that favored the socio-economic benefits of data economies over privacy risks, whereas the EU emphasized privacy rights (e.g. data protection). On regulation, the United States reflected a post hoc and market-based model for addressing challenges posed by personal data, while the EU reflected an ex-ante and state-market model as the solution. These preferences extended to the commercialization of personal data: private industry in the United States was defined as responsible for providing reassurances that would boost consumer trust, while the EU sought to generate citizen trust via harmonized data protection regulation (e.g. the 2018 GDPR) and new privacy markets. The U.S. trade-off contained a contradictory view of its public, who were imagined to benefit most from opening up personal data but also to pose the greatest threat to the development of a data economy. In contrast, in the EU the privacy rights of EU citizens trumped commercial considerations. This logic was embedded in the symbolic and cultural importance accorded to data protection in the harmonization project of EU institutional stakeholders. Here, harmonization constituted both the problem and the solution to the privacy issues raised by personal data, where harmonized data protection regulation was thought to enroll all EU citizens in an emerging EU political and economic project.
The difference in data governance reflects the different socio-technical imaginaries of personal data in these two jurisdictions; that is, the “mutual emergences in how one thinks the world is and what one determines it ought to be” (Jasanoff, 2015: 14). We analyzed the dominant imaginaries in the United States and EU, which “imbue” data governance with particular desirable outcomes from the regulation and commercialization of personal data and are “encoded” in collective visions and narratives of the future (ibid). On the one hand, the U.S. imaginary reflected a market-based regime in which self-regulation and existing legal doctrines (e.g. “standing”) and market institutions (e.g. FTC) were deemed adequate and best placed to manage data governance. On the other hand, the EU imaginary reflected a state-market regime in which new market framework conditions were deemed necessary by state actors, leading to the design and development of new regulations and new markets to manage privacy.
Our analysis does not imply that there is no overlap between the United States and EU. For example, both imaginaries still reflect a push behind the commercialization of personal data through building consumer trust. In the U.S. imaginary, institutional stakeholders provided a discursive framework for consumer trust, outlining what “good” and “bad” forms of commercialization look like. Institutional stakeholders deployed material and immaterial resources (e.g. FTC enforcement actions to correct the market and op-eds issued in the media). The EU also imagined consumer trust through a discursive frame that defined “good” and “bad” forms of commercialization: “good” commercialization fostered consumer trust (data protection), while “bad” commercialization hampered it (misleading consumers). The EDPS sought to effect this framing by reorienting commercialization around an untapped privacy market, and by recognizing personal data as an intangible asset in the EU state and consumer trust imaginaries.
Our analysis of these socio-technical imaginaries helps us understand the pre-histories of contemporary data governance regimes in the United States and EU. The clearest indication of how different socio-technical imaginaries of personal data configured data governance is the expansion of regulatory frameworks in the EU, as compared with the United States. For example, the EU's GDPR reflects the more state-market model in the EU, as do the Commission's recent proposals for a Digital Services Act and Digital Markets Act. They represent a further entrenchment of the drivers we have identified in the socio-technical imaginaries of personal data; namely, a preference for state construction of markets, a desire for EU-wide harmonization, a focus on EU identity-making, and an emphasis on data protection and privacy as key aspects of market framework conditions. In contrast, the United States has failed to develop nation-wide regulatory frameworks for personal data, leaving much of data governance to State governments—like California's Consumer Privacy Act, in force since 2020—or to the market, defined by the standing doctrine and court actions (Cohen, 2019). While we think the EU's model provides a more attractive data governance model to date, it is still not clear whether the EU's constitutional settlement will actually strengthen personal data protection and privacy rights in response to concerns about Big Tech (Birch and Bronson, 2022), or will simply lay the groundwork for a broader extension of property rights to personal data (Open Future, 2021).
Acknowledgments
We thank the reviewers, Jennifer Gabrys, and Matthew Zook for their helpful comments. Usual disclaimers apply.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article. This work was supported by the Social Sciences and Humanities Research Council of Canada (grant number 435-2018-1136).
