Abstract
Notice and consent approaches, being the most prevalent legal frameworks, have in recent years come under fire. I suggest they fail because they rest on a historical approach to privacy justice, whereby the justice of a particular state of affairs is a function of whether each transaction on the way was just. Instead, I make use of a background justice framing. Even where consent is present it is inadequate to secure the values at stake. When we only assess the fairness or freedom of individual information transactions, we fail to see the way many can undercut the very values we seek to secure by requiring consent for disclosures in the first place. I propose a patterned principle to regulate the distribution of individual control over privacy, and to set the background against which individual notice and consent can still play a role, albeit a limited one.
Introduction
Under a privacy self-management approach, individuals are presented with a privacy policy, detailing how their information will be used, which they must read and agree to before accessing a particular service. Such models, being the most prevalent legal frameworks, have in recent years come under fire, along with the understanding of informed consent they employ, for failing to acknowledge the cognitive and practical limitations of those who are disclosing, as well as the more structural ways in which control over privacy can be undermined through multiple instances of consenting disclosure (Gould, 2019; Solove, 2012; Susser, 2019).
These concerns are not new. In a 1980 paper, Ruth Gavison (1980) used an example involving a priest at a party who inadvertently outs a confessor as a murderer to demonstrate the ways in which pieces of personal information, when brought together, might reveal more than intended. Such cases are no longer merely hypothetical, or occasional. In a recent case, a Catholic priest was outed as gay by a newsletter, which had purchased location information from the dating app Grindr via a data broker. By correlating information about known locations the priest had visited, such as his home and workplace, they were able to identify the priest and the locations he visited secretly, including gay bars. Neither he, nor those who found him, did anything illegal (White, 2021).
There are numerous other concerning cases. In 2018, researchers used data from home genetic testing kits to demonstrate that they could identify 60% of Americans of Northern European descent from an otherwise anonymous DNA sample, predicted to rise to around 90% by this year, regardless of whether they submitted their genetic information to databases themselves (Erlich et al., 2018). While many alarming cases involve deanonymisation, many more do not. Researchers recently used a dating website’s data set to train artificial intelligence (AI) to predict the sexuality of other individuals using only pictures of their faces (Wang and Kosinski, 2018). Sexuality and genetic information are obviously very privacy relevant, but profiling often makes use of seemingly innocuous information, such as shopping habits, music streaming history or Facebook likes (Schneier, 2015). Researchers also inferred individuals’ sexuality by analysing who was in their social media networks (Jernigan and Mistree, 2009).
These cases intuitively seem unjust, and yet they have been arrived at via a multitude of at least supposedly free exchanges of information, and thus do (or could) meet the standards of privacy self-management. Privacy self-management presupposes a historical approach to privacy justice, whereby the justice of a state of affairs is a function of how it was arrived at, whether each transaction was just. This struggles to make sense of these types of cases. Instead, I make use of a background justice framing. Meaningful consent is absent in many contemporary situations involving large amounts of personal information changing hands or being made use of, but even where it is present it is inadequate to secure the values at stake, namely, our capacity for social relationships, autonomy and non-domination. When we only assess the freedom of individual information transactions, we fail to see the way in which multiple such transactions undercut the very values that underpin consent in this context. To this end, I propose a patterned principle to regulate the distribution of individual control over privacy, and to set the background against which individual notice and consent can still play a limited role. The background privacy principle (BPP) states that the highest level of control over privacy that is consistent with an equal level of control over privacy for all should be secured for individuals, except where this distribution will create risks of domination. This should guide the design of laws and institutions to prevent multiple privacy transactions from undermining the very values that underpin control over privacy over time.
The next section considers first the question of how best to define privacy, suggesting privacy as a condition is best understood in terms of access. I then make the case that individual control over privacy is valuable for a number of interconnected reasons relating to security, relationships, autonomy and non-domination. As such, the right to privacy is the right to control the extent to which we are in the condition of privacy. The second section explores the inadequacy of privacy self-management in light of cases such as those outlined above, and considers and rejects a possible Nozickean defence. In the third section, I make the case for the BPP. Finally, I briefly consider some of the ways in which legislation and the design choices made for new technologies can help to realise this account of privacy justice.
Privacy and its Value
There are two main ways of defining the condition of privacy: access accounts and control accounts. On control accounts, privacy is understood as necessarily involving control. So, a control account of informational privacy could hold that for an individual to experience a loss of privacy, they must have lost control over access to information about themselves. 1
I adopt an access definition of the condition of privacy:
Privacy: An individual is in the condition of privacy to the degree that others do not access information about them.
This refers to the condition of privacy, rather than the right to privacy. It also refers only to informational privacy. The extent to which one has privacy is not just a matter of the amount of information about one that others access, but also how sensitive that information is. We can say that sensitivity of information is a scalar property, in that, it comes in degrees, and very roughly that information is sensitive to the extent that it is information that one ‘might reasonably not want others to know’ (Benn and Lazar, 2021: 4).
While various versions of the control account predominated for some time, 2 there has been a recent trend towards access accounts. 3 Although I cannot do justice to this debate here, I outline a few reasons why we ought to prefer an access account. Important counterexamples have been presented against control accounts, falling into two main categories. In threatened loss cases, control accounts give us the counterintuitive answer that privacy has been lost even where no access to information has occurred. If I leave my diary on the table in a café by mistake, then return later to find a stranger has picked it up but not read it, I have for that time lost control over informational access, but intuitively have not suffered a loss of privacy unless the stranger actually reads it. This is not the answer a control account will give (Macnish, 2018). In voluntary divulgence cases, control accounts suggest that we have not lost privacy even if we voluntarily give up a lot of information about ourselves, so long as we maintain control. For instance, if someone willingly gives a public talk about their mental health, control accounts may wrongly conclude that they have not lost control and therefore have the same level of privacy (Menges, 2020). The only way to avoid these counterexamples is to adopt a conception of control that makes access a necessary part of a loss of privacy, which then collapses the control account into an access account (Mainz and Uhrenfeldt, 2021).
We might assume that merely being in the condition of privacy is of value to individuals. Others not knowing my bank details or the contents of my diary does serve to protect me from various harms. However, being in the condition of privacy may be valuable or not depending on the context, and more privacy is not necessarily better. Recall the feminist critique of privacy, whereby being in the condition of privacy beyond their control has in many instances enabled the subjugation of individuals and groups, including women and children (Mackinnon, 1989).
What is less context dependent is the value of individual control over privacy. 4 While the condition of privacy is best defined in terms of access, the value of privacy, and therefore the right to privacy, is not merely a matter of access but also of control. An individual's right to privacy can be defined as the right to control their degree of privacy. When I say control over privacy, I mean that others cannot access information if an individual does not want them to, and that third parties are not preventing that individual from granting access. By the first part, I mean that if others were to try to access information about me, I have control over it to the extent that I could prevent them. I might lack control over privacy even where no actual attempt at access occurs. If my bitter former lover shares my nudes on the Internet, I lack control over privacy with regard to them, whether or not anyone else decides they want to look. With regard to the second part, we should be clear that this does not imply that others actually do access that information about me, but rather that I am free to grant access if I so choose. A censorship law prohibiting me from publishing nude images of myself online would indeed limit my control in this way, but the lack of desire on the part of the population to access them once published would not. There is a grey area, but for now a rough distinction with some clear cases on either side is enough for our purposes. Putting these aspects together, control over privacy in the sense I take to be valuable is about having, and indeed sustaining in the face of information disclosures, the ability to disclose and to withhold information, and in some instances to retract previously disclosed information.
It is not unusual for a right to something to include control even as the object of that right is defined without it. Mainz and Uhrenfeldt suggest that an analogy with property might be helpful. Consider the difference between being in possession of a car and having property rights (which involve control rights) over a car (Mainz and Uhrenfeldt, 2021). If one is still deeply worried about referring to the right I have described as a right to privacy, and would prefer to refer to it as the right to control one's privacy, then one may do so at slight stylistic cost but no real conceptual or normative cost. 5
I take it that rights are grounded in interests that persons have. There are numerous ways in which having control over privacy is instrumentally valuable, protecting us from fraud, theft, social censure and so on. 6 However, control over privacy also connects to values in more fundamental ways. This grounds the right to privacy. First, it allows us to have a multitude of different types of relationships, from the personal to the professional. When we can choose what to tell others and what to hold back, personal information becomes a kind of social capital, the spending of which is partly constitutive of personal relationships (Fried, 1970; Rachels, 1975).
A number of theorists also argue that privacy claims arise in response to some form of inviolate personality (Bloustein, 1964; Warren and Brandeis, 1890), or that control over privacy is a social ritual vital to the constitution of fully individuated selves (Reiman, 2017), or, as I suggest, that privacy rights violations can impair an individual's ability to form and maintain an integral sense of self. We have an integral sense of self to the extent that we find some degree of stability and coherence between the past and present versions of ourselves, and see ourselves as authors of our own lives. This can be fractured in various ways, by outside events, such as violence and privacy invasions, as well as internal occurrences, such as episodes of poor mental health (Sangiovanni, 2019). Control over privacy is not only of value to autonomous persons; it is also vital to the very creation and maintenance of such individuals. 7
An individual can be wronged where an undetected violation of privacy occurs, but no further harm beyond this actually follows from it, because it leaves them vulnerable to domination. This also makes sense of the way in which the deep inequalities that have emerged, in terms of who has privacy and control over it, are a structural concern. Even where the actor in question chooses not to interfere, they nonetheless have the power to do so (Hoye and Monaghan, 2018; Newell, 2014; Roberts, 2015; Williams and Raekstad, 2022). The capacity of actors who own or control a lot of personal information to interfere, rather than actual instances of interference, ought to be of concern (Martin, 2022).
Imagine an individual who spies on another, collecting every possible piece of information about them, but is never able to interact with that person or use it against them. Here, the voyeur changes the circumstances under which an individual acts in a way that would otherwise shape their reasons for action. When I decide what to do with my evening, at home, alone, my decision is only autonomous in so far as I am aware of information that could be relevant to my decision. Whether or not I am being observed is relevant. If, unknown to me, I am being observed, then my option set is actually significantly different to what I believe it to be. When I choose to make pasta for dinner, or to do a little dance in my underwear, I choose to do so unobserved. Unbeknownst to me, this option is actually not available because another actor has chosen to secretly observe me. As such, the voyeur deceives the individual by changing the circumstances under which they act while keeping this from them, undermining their autonomy and treating them as a means to their own ends, and so wrongs them (Benn, 1980; Nathan, 1990).
A critic might argue that what is at stake here is in fact just contextually appropriate access to information, and that this can be accounted for without complicating things with this discussion of control over privacy. Indeed, sometimes mere lack of access is valuable. Sometimes, it is disvaluable. This is often the case when we are considering the more instrumental ways in which privacy has value. However, the role privacy plays in personal relationships and autonomy, as well as non-domination, is more closely linked to control. Knowing things about one another is a central part of forming and sustaining a friendship. However, it is as much the sharing, and the decision to share, that grounds this, as merely knowing. Two persons who by random chance (or mutual surveillance) come to know a set of facts about one another, and indeed precisely the set of facts their respective friends know about them, might be no closer to being friends than they would be in the absence of this knowledge. The point is not just that, given the context of friendship, a certain level of informational access is appropriate or legitimate, although of course this may also be true. Rather, the very context of friendship is created in part through the act of mutual disclosure.
Similarly, my decisions to disclose or withhold information about myself, while maintaining control over the further flows of this information, are a vital part of how I come to be constituted as an autonomous person, separated from other autonomous persons. It is the act of deciding where this boundary lies, of controlling what parts of information about me are accessible and to whom, that fulfils this role. It is not merely the fact of others accessing or not accessing this information that helps to define me as an autonomous agent within that boundary, but rather others respecting that boundary as I have chosen it, and as though I have the right to choose it. A similar point can be made regarding how privacy connects to the risk of domination along the lines of the feminist critique already discussed. Being in the condition of privacy against one's will might render one vulnerable to domination. Similarly for having information about one accessed. We often secure ourselves against arbitrary interference, then, when we control the extent to which we are in the condition of privacy, and indeed maintain control of this even as we give up privacy itself under certain circumstances.
The Inadequacy of Privacy Self-Management
The right to privacy suggests individuals should be allowed control over information about themselves absent other weighty countervailing considerations. Such a conclusion seems at first to offer support for privacy self-management. However, numerous critiques have been offered. Several individualistic issues with privacy self-management, which have been thoroughly explored elsewhere, can be summarised as:
(1) people do not read privacy policies; (2) if people read them, they do not understand them; (3) if people read and understand them, they often lack enough background knowledge to make an informed choice; and (4) if people read them, understand them, and can make an informed choice, their choice might be skewed by various decisionmaking difficulties (Solove, 2012: 1888).
While these worries about consent practices are troubling, they are not my primary concern here. Even where the bar for individual consent can be and is met, privacy self-management will still be inadequate to secure the right to privacy and the values that ground it because current big data practices also create two interconnected structural problems. The first, the problem of unequal control, is that through many free transactions, individuals can over time end up with dramatically different levels of control over privacy. This should worry us given the way in which control over privacy is linked to personal autonomy, something we are all equally entitled to attain and exercise. The second, the problem of accumulation, is that particular individuals or groups might gain control over or access to the information of many others. Where there is the massive accumulation of personal data in a single set of hands, this often creates risks of domination, as well as other risks of eroding the autonomy individuals have. Both of these problems arise through privacy externalities, whereby an individual disclosing information about themselves allows a data collector to know more or better about other individuals (Choi et al., 2019), and self-exposure, whereby individuals inadvertently reveal more about themselves than they intended, and through the complex interactions between these two phenomena. I will consider each in turn to show how this occurs.
Privacy Externalities
The first set of concerns relates to privacy externalities, which arise when an individual or group giving up information about themselves allows the collector to know more or better about other individuals or groups who may have chosen not to disclose (Choi et al., 2019). 8 In the Wang and Kosinski (2018) case, we can see that numerous individuals consenting to the use of the data they uploaded to their dating profiles by third parties has enabled the creation of a technology that can determine similar information about other individuals who have not consented. Similarly, individuals willingly uploading their genetic information to online databases have made it possible to identify other individuals who have not consented. Note that in many cases, those disclosing are not revealing any information directly about others, so it is not immediately obvious that they are violating anyone's consent. The difference here is roughly the same as the difference between (a) tweeting that my partner is not home and (b) tweeting that I am having a night in by myself, from which it can be inferred that my partner is not home or busy.
Given that individual control over privacy is valuable and ought to be secured, these types of cases where some people’s decision to disclose has a negative impact on the control over privacy enjoyed by others give us reason to limit what some individuals can disclose, or what can be done with what they disclose. Privacy externalities contribute to the problems of accumulation and of unequal control. Those who have more information can use that information to infer more still, deepening further any existing asymmetries in terms of both how much control individuals have over their own information and the extent to which different individuals and groups have control over and access to the information of others. It may create a situation where the level of control over privacy enjoyed by individuals is much lower than it could be.
Self-Exposure
Self-exposure arises when individuals reveal too much about themselves, eroding their own long-term control over privacy. I will consider types of cases where individuals consent to the sharing of 'anonymised' information that is then deanonymised, or where innocuous-seeming information is mined for more sensitive insights. I also consider types of cases where individuals disclose information about themselves within a particular context, only for that information to also be accessed in another time or place where its meaning or implications are different, and where the individual might not have consented to the disclosure. These cases show us that consent to disclosure at one particular time fails to secure what we value about individual control over privacy. Consent needs to be ongoing and revocable, and we ought to secure a similar level of protection for information that has supposedly been anonymised as for that which has not.
Consider the outed priest. Let us assume that Grindr in its terms of service specified that it would share ‘location data only’, without name or other information attached, rather than specifying that the information would be ‘anonymised’. While in agreeing to Grindr’s terms of service he seems to have consented to all of the location information in question being made public, and thus being accessed by those who managed to find his data within the set, it seems clear that he did not anticipate the information being linked in the way that it was to other information he had also deliberately made public. In many cases, individuals have willingly, through consenting disclosure, revealed information about themselves, which resulted in others knowing rather more about them than they might have hoped or intended. There are numerous other instances of supposedly anonymous data sets being deanonymised. Location data sets (De Montjoye et al., 2013), Netflix data (Narayanan and Shmatikov, 2006) and credit card metadata (De Montjoye et al., 2015) have all been found to be vulnerable to deanonymisation. Truly anonymising many data sets may be impossible (Narayanan et al., 2016).
Not all instances of self-exposure work in this way, by deanonymising information that has been made publicly available. Information willingly disclosed often takes on new meanings or implications in a different context. Census information collected in the Netherlands prior to the Second World War later streamlined the rounding up of the Jewish population (Van den Hoven and Weckert, 2008: 311–312). While expressing socialist leanings was something of a fad in the 1930s, by the 1950s, the information that you had identified as such in the past could lead to you being accused of subversion (Schneier, 2015: 109; Solove, 2011: 8). Such dramatic cases are only part of the picture, however, and fail to capture the everyday nature of these effects. Many people now share nude images with their partners only to regret doing so after a breakup. Even if one’s partner does not share such images further, we might think that one has a legitimate claim to have them returned or destroyed when the relationship ends. Many of us would still continue to share this intuition even if the intimate images could be successfully anonymised to prevent any possibility of future identification. Information about your political views, sexual orientation, nationality or health that may seem innocuous in one time and place, leading you to freely disclose it, could prove harmful or dangerous in a new context, or indeed could just become something you simply do not want known.
This is especially amplified by the advances in technology we have seen over the past 20 or so years. Information can very cheaply be stored almost indefinitely, and be accessed, duplicated and shared with ever greater ease. As such, information about an individual, which is disclosed with their full consent in a particular context, is more likely than ever to end up in a context where it is harmful to them or they would otherwise prefer it not to be accessed. If control over privacy is important for individuals, we need to secure control over privacy for individuals across time, and control over whether information is shared further or put to any new use. What this may mean is that we need to limit what individuals disclose about themselves, or, preferably, limit the loss of control that goes with that disclosure.
This is essential to allow people to continuously write and rewrite their own moral biographies, a vital component of autonomy, and to secure them against domination. The idea that consent can be given once and never retracted in relation to privacy may arise because we have been led to think of our information as being similar to external property. If we understand it instead as more similar to our rights in relation to our bodies or ourselves, the requirement of ongoing, revocable consent makes sense. After all, it is not controversial to say that consent must be continuous and at any point revocable in sexual or medical contexts. Something similar can be said about the right to privacy.
All of this is to say that any given individual may be limited in how much information about themselves or control over it they can give up, especially to powerful actors, to protect the autonomous personhood of both themselves and others, to ensure that they can now and in the future engage in various social relationships should they choose to do so, and to prevent domination. This can be secured by preventing individuals from sharing information, which I take to be the less desirable option given the connected value of freedom of expression, or, potentially, by ensuring that individuals maintain meaningful control over the information they do share, and limiting how much certain actors can collect, which I see as more desirable. Given the value of individuals being able to disclose what they choose to, it is preferable to have a system that limits accumulation of data, and therefore, power, and limits the amount of control over privacy individuals give up when they give up some of their privacy, as well as the extent to which they expose others when they do so, than one that limits the amount of privacy individuals can give up.
A Nozickean Objection
One response to all of this would be to bite the bullet. Such a response has been framed as operating analogously to Robert Nozick's critique of patterned principles, and corresponding advocacy for a historical account of justice. How is it, asks Nozick, that a particular distribution can become infected with injustice when all the steps that led to it, from initial acquisition to the transfers that followed it, were completely just? Simply, it cannot. Even if, intuitively, we feel that a particular state of affairs is unjust or unfair, we must accept it as just insofar as no injustice was involved in reaching it. We may refer to such an account of justice as historical, given the backward-looking way in which we seek to establish the justice or injustice of a current state of affairs, to be contrasted with the patterned nature of the background justice position I advocate. On a patterned account, some states of affairs are unacceptable regardless of how we got there.
Rumbold and Wilson (2019) argue that inferences from justly held information could indeed constitute privacy violations. In response, Jakob Mainz has taken this Nozickean position and applied it to privacy and the question of whether we might wrong individuals by making inferences about them that they did not foresee or intend. According to Mainz, any epistemically legitimate inference made on the basis of legitimately acquired personal information cannot violate the privacy rights of the individual about whom it was made. Surely the right to privacy cannot consist in a right that others not have certain thoughts about us. Rather, the right must relate to the process by which those thoughts came to be – that is, how a particular individual came to hold information about us 9 (Mainz, 2021b).
The Rawlsian rejection of this, and relatedly the position I will take, is based at least in part on the claim that there are values underlying the right of individuals to freely transact, be that with various other resources or with information about themselves. For instance, we might think a certain conception of freedom or fairness is at play. Under some circumstances, that is, when conditions of background justice fail, we see these very values undercut. If we have only principles governing the freedom of transactions we may, for instance, see the emergence of vast wealth inequalities, leaving many with dramatically reduced bargaining power and an inability to freely transact or interact with others as equals. The defender of Nozick could reply that the claimed values simply do not underlie the value of free transactions between individuals. They could, for instance, say that self-ownership underpins the foreground principles of just transactions, and that this is not undercut problematically (Sinclair, 2013: 371). Regardless of whether this response works in that context, it is closed to the privacy theorist. On any plausible account of the value of control over privacy, the cumulative effect of multiple information transactions will undercut the very values that justify securing such control for individuals. The capacity of individuals to have a variety of social relationships, to develop and exercise their capacity for autonomy, and to be free from domination all suffer under the cumulative effects of privacy transactions.
Mainz's position fails because he does not consider the values underlying control over privacy. Throughout the article, he chooses to 'assume for the sake of argument that privacy rights exist' (Mainz, 2021b: 210). Choosing not to commit to a particular account of the value of the right to privacy in making his argument seems reasonable enough at first. However, once we plug in any plausible conception of the value of control over privacy, his argument becomes vulnerable to the background justice argument. Any plausible account of the value of privacy seems to rest on values that would be threatened by allowing unlimited free information transactions. As we have seen, free individual transactions over time lead to the problems of accumulation and unequal control, which in turn have negative implications in terms of individuals' ability to form valuable relationships, develop and exercise autonomy, and remain free from dominating control. Regardless of whether the Nozickean challenge succeeds in its original formulation, Mainz's parallel argument does not work in the context of privacy rights. Whoever is right about individual-level (transactional) privacy violations via inferences, Rumbold and Wilson or Mainz, we know that such inferences can create unjust background privacy conditions, and so we have reason to be concerned about them, and to implement a patterned conception of privacy justice.
Background Justice and Patterned Privacy Principles
Current proposals either move from an analysis of this problem very quickly onto specific policy proposals or technological solutions (Loi et al., 2020; Véliz, 2020), or advocate for different or more inclusive methods of decision-making to ensure those affected by privacy transactions get a say (Gould, 2019). Regarding the first type of solution, an ad hoc selection of practical responses may help in some instances to mitigate our concerns, but ultimately fails to establish privacy justice, not least because it is not clear exactly what overarching normative framework justifies those responses, nor exactly what state of affairs they are aiming to bring about. The second type of solution is likely to face many of the same concerns as notice and consent models, because some states of affairs are not acceptable regardless of the fairness or inclusivity of the processes that brought them about. Procedural requirements can only ever go part of the way toward satisfying what privacy-related values require of us. There must also be a substantive element. As Sangiovanni has pointed out, there is no reason why Gould’s arguments in favour of greater democratic control ought not to commit her to more radical measures, such as banning the collection and use of personal information under some circumstances, ‘whatever a democratic majority might say about it’ (Sangiovanni, 2019). As such, a conception of privacy justice that provides not just an account of informed consent but also patterned principles for the distribution of control over privacy is required. This is not to say that these forms of democratic process will not have their place in choosing the exact way to implement the BPP, but rather that without certain limitations they are inadequate.
I offer an account similar to the argument from background justice as presented by Rawls. Free and fair transactions, by themselves, may lead to a situation that is undesirable from the perspective of the very values that justify them (Rawls, 1993). We undercut the very values we seek to secure through granting individuals control over privacy if we allow them to consent to whatever disclosures they choose against current unjust background conditions. Individuals are left with dramatically different levels of control over their privacy, and some may acquire vast troves of information about others. This has repercussions for the autonomy of individuals, and for power relations. As such, individual consent needs to be supplemented with more structural principles to govern the distribution of control over privacy and secure the background conditions required for consent to operate against, placing limits on how much individuals can consent to disclosing and, perhaps more importantly, how much personal information can be collected and by whom. In many interpersonal cases, privacy externalities and self-exposure will not worry us from a moral perspective, but many contemporary instances involving large corporations or state agencies and new technologies should raise concerns. This gives us a strong reason to limit the amount of personal information which, for example, a social media company can collect and process, but is much less likely to place limits on what the parties on a date can tell each other, or what my friend can tell me about his weekend.
We might initially propose something like the following:
Equal privacy principle (EPP): The highest level of control over privacy consistent with an equal level of control over privacy for all should be secured for all individuals.
I take it that the equal privacy principle (EPP) follows fairly straightforwardly from the fundamental equality of persons, and the applicability of the consent requirement to all persons. Individuals’ claims to control information about themselves in the sense of freely exercising consent are likely to conflict at points, and as such, a principle is needed to resolve any such conflicts as they arise. My freely sharing my information can erode the control you have over your own privacy, and as such may need to be limited. Note that this principle does not require that all individuals enjoy the same level of privacy, that is, that they are in the condition of privacy to the same extent. Rather, they may choose to keep or give up privacy within the widest range compatible with their doing so not eroding the control over privacy they or other individuals will enjoy, now or in the future. This generates a limitation on the information individuals can share about themselves where such sharing allows others to know more, or know better, about other individuals without their consent, and a limitation on how much control over their information individuals can give up, so as to secure their control over privacy over time. 10 EPP does not imply that control over privacy is the only or ultimate value in a complete account of justice, and it may of course potentially be outweighed by other, external values.
At first glance, EPP may seem adequate to secure what is valuable about individual control over privacy. EPP solves the problem of unequal control, and we might think it would be likely to solve the problem of accumulation too. After all, how could one actor accumulate so much more information about others that they would be able to dominate them, while everyone has equal control over privacy? However, this would be too quick. Consider the following:
Tom from Hi-space: Tom runs a social media platform and is friends with everyone on it. As such, Tom knows quite a lot about me. I also know the same amount about him. Indeed, this is the case for everyone in the world – we all know about Tom the same amount as he knows about us. However, no one other than Tom knows this amount about everyone else. Most of us only know this amount about a few friends and relatives, and of course Tom.
In Tom from Hi-space, we all have equal control over our own privacy, so EPP would be satisfied. However, Tom has access to, and control over, information about many more people than anyone else does. This all-encompassing surveillance capability is likely to create risks of domination. Knowing a certain amount about one individual often does not give us a great deal of power. Knowing about everyone, and being able to potentially mine these data for further insights, generally does. As such, we ought to revise our first principle to account for these concerning cases. This will give us the following:
Background privacy principle (BPP): The highest level of control over privacy that is consistent with an equal level of control over privacy for all should be secured for individuals, except where this distribution will create risks of domination.
BPP allows us to account for both individual and more structural or collective aspects of privacy, and the ways in which they are connected. We need BPP because of the problem of accumulation, whereby an actor might acquire the capacity to dominate others through multiple consenting information transactions. Such an actor might be granted less control over their privacy because they would be subject to additional demands in terms of transparency. This would allow their power to be constrained or limited somewhat, and for their exercise of it to be accountable to those over whom they wield power. We might also grant those who might be especially vulnerable less control over their privacy, forcing them into the condition of privacy against their will to protect them. Clearly, this would be required by BPP, but prohibited by EPP alone. The problem of accumulation may also threaten to undermine democratic systems of governance, which otherwise might allow for power to be exercised without the risk of domination. Individuals might be limited in how much control over privacy they can give up where this will leave themselves or others vulnerable to domination.
One worry we might have at this point is whether BPP is too strong. How we administer therapy, health care, school, parenting and numerous other valuable practices might have to change dramatically, if they can even be sustained at all in a way that is consistent with BPP. These practices rely on the keeping of extensive records of incredibly sensitive information about people, as well as deep asymmetries in terms of informational access between service providers and users. 11
This objection loses some of its sting when we recall that the extent to which an individual is in the condition of privacy and the extent to which they have control over privacy are distinct, and can come apart. BPP does not imply that, for example, we cannot have extensive health care records that doctors access. Rather, it means that even as I engage with a health care practitioner, I maintain to the greatest extent possible control over the records they keep about me. While the amount of privacy my doctor and I have with regard to each other is deeply unequal, it need not be the case that the control we have over our privacy be radically different. Consider, for example, the possibility that my health care records might be stored in a decentralised way, whereby I might grant access as and when necessary, rather than being stored and controlled by the service provider. We can also secure patients against domination in various ways when the existence of a particular informational asymmetry will be immensely valuable to the individual who might be exposed. For instance, we can implement fiduciary duties on the part of therapists, lawyers and doctors.
School and parenting are particularly interesting cases, insofar as children are emerging as autonomous agents. How we ought to treat individuals whose capacity for autonomy is limited or still developing is complex. It may be that we ought to allow children more control over their privacy than we currently do, especially with regard to their interactions with total institutions, such as schools. Control over privacy is not simply something given to fully autonomous individuals. Rather, a part of how individuals come to be autonomous is that they are treated as though they have a right to privacy, and in this way come to see their thoughts, bodies and lives as their own to control.
The BPP in Practice
The final question is whether BPP is actually feasible, and how this conception of privacy justice might be put into practice. The primary work of this article was to establish that a background justice framing is the best way of making sense of privacy justice in a world of ever more complex data flows, and that we need a patterned principle to establish background justice in the distribution of control over privacy. This final section, then, offers neither an additional set of conclusions nor what I suggest is necessarily the best approach; rather, it is a demonstration of feasibility. Deciding the best way of implementing BPP is far beyond the scope of the current work. Numerous considerations, political, legal, economic, social and technical, would need to be accounted for to provide a full answer to what privacy justice requires. There may also be a variety of different ways we might try to satisfy BPP, and choosing between these ought to be the work of democratic decision-making. 12 My more modest aim here is to demonstrate that, in spite of the current lack of incentives, there are already technological and legal approaches in theory and practice that render BPP practically feasible.
While on the face of it, BPP may appear primarily to place limits on the amount of control over privacy an individual may legitimately give up, we should consider it from the opposite perspective – that is, to consider what implications it has not so much for those who would share information about themselves, but for those who would collect this information about them and others. The principle applies to any agent with the power to change the background conditions against which individuals make their privacy decisions. Primarily, this will mean corporations that collect and process data, and the political institutions that regulate them. It is too complex for individuals to consider and act on the large-scale effects of apparently consenting individual privacy transactions, and to coordinate the collective response required. This requires an institutional response, placing obligations on states, private companies and so on.
States have an obligation to protect the rights of individuals. In this instance, that means ensuring that just background conditions are met, and that instances of consent that happen against this backdrop are fair and free. One way to fulfil this obligation would be to legislate to set clearer lines and higher penalties for corporations that violate individuals’ rights. This may be effective to an extent, but it is still too limited in its content and reach to take seriously the rights of individuals. When we say that an individual has a right, we are claiming that there are certain things that others should not do to them, and that the state should prevent others from doing. This is not to say that a state has failed in its duty every time a right is violated; perhaps individuals have a complaint if they do not have ‘secure access’ to the objects of their rights, rather than if they simply suffer a one-off rights violation within an institutional setting where their rights are generally well-secured. 13 The state reneges on its obligation if it fails to act sufficiently to prevent such violations.
Given what I have said about the values privacy connects to, it seems reasonable to suggest that this creates ex ante duties on the state to protect individuals from corporations, rather than duties satisfied through sanctions ex post. In addition, states are uniquely well placed to manage the complex coordination required to secure just background conditions. Given the complex and international nature of many companies that make use of big data, coordination between states will also be required. This may also require the creation of global regulatory institutions of some sort, although a fuller outline of what these might look like is beyond the scope of this article.
If we take it that states are primarily responsible for enacting these changes, the next question is what this could look like in practice. Control over privacy can be secured either formally, for instance, by a law against trespassing in someone’s home, or materially, by putting a lock on the front door (Reiman, 2017). Both will be required. One approach would be to use legislation alongside what has been described as the ‘law’ of cyberspace itself – code (Lessig, 1999). Recently, the idea of value-sensitive design has gained ground. To help rather than hinder us in realising values, technology must be designed with those values in mind (Friedman et al., 2013). Privacy by design is an approach where privacy-preserving principles guide the design of technologies in a way that is preventive rather than remedial. While some work in this area encourages voluntary uptake of industry codes of practice (Cavoukian, 2009; Clarke, 2009), or the idea that governments should work closely with industry to develop ‘co-regulatory frameworks’ (Rubinstein, 2011), this is unlikely to go far enough. Despite being easier to implement and to win support for in industry, such approaches fail to secure the values at stake. Instead, we may require regulatory models that force particular design choices on those who create and profit from technologies making use of big data.
Of course, this may affect profits and slow innovation in some instances. However, the strength of these objections may be somewhat limited. It should be remembered that many made similar claims when seatbelts were first mandated, and although innovation is of course often a positive thing, perhaps we should not be too troubled at the prospect of slowing down slightly an industry which for a long time had ‘move fast and break things’ as its unofficial slogan. ‘Move slightly slower and try very hard not to break anything’ does not pack quite the same rhetorical punch, but given the values already discussed to which control over privacy is inextricably linked, perhaps we ought to take seriously the possibility of adopting it.
One way to implement this approach might be to ‘re-decentralise’ the web. The Solid project, for example, offers a set of conventions and tools for building web apps. Content is decoupled from the app itself – individuals’ data are stored separately, on a server which they control. When an application requires access to these data, it would have to effectively ‘log-in’ to the user. Users thus maintain meaningful control and ownership of their data, and the ability to revoke permissions, while also still being able to make use of or access many of the services we currently enjoy (Sambra et al., 2016). This may still need to be coupled with ‘multi-layered notices’ or similar. Although it would not fully address the cognitive issues with consent previously discussed, this would make consent more meaningful, by providing simplified information about data practices alongside a complete policy for compliance purposes (McDonald et al., 2009).
Another proposal which might help to secure for individuals greater, if not perfect, control over their information, while also still making use of it as an asset, is for control over personal information to be pre-distributed, with collectives of individuals co-owning and controlling personal data through platform cooperatives, as a way of shifting the current monopolistic data economy towards a property-owning democracy. These cooperatives would give individuals, collectively, the ability not only to make decisions about the use of their data, but also to shape the very choice architecture within which such decisions are made (Fischli, 2022; Loi et al., 2020).
For those who are concerned about whether limiting the collection of data on people might affect important and socially valuable research, it is worth adding that we might have different processes in place for different uses of data. We might, for instance, allow the use of anonymised medical data sets within a secure research environment, where we can limit the number of queries to reduce the chances of deanonymisation. Such research can also be subject to approval by ethics boards and so on, to establish its necessity and safety before data are accessed or used.
Concluding Remarks
While many have prematurely rung the death knell on privacy, numerous activists, scholars and ordinary people maintain that privacy and its value is alive and well, albeit under siege. Given the many complex ways in which the privacy decisions we make as individuals interact, and the current tendency towards ever larger data silos owned and controlled by a small minority, any plausible rescue operation cannot be individualistic in nature. As with all collective action problems, we require a collective, institutionalised response to restore the background conditions against which at least some privacy transactions can be free and fair. If control over privacy is to function as it should, securing for individuals autonomy and freedom from domination, individuals ought not to have dramatically different levels of control over their own information, nor should any individual or group own or control far more information about others than anyone else. The best way to secure this is to implement the BPP.
Footnotes
Acknowledgements
The author is indebted to the two anonymous reviewers and the editors of Political Studies for their comments, which undoubtedly improved the article greatly. The author would like to thank attendees of the Mancept PhD WIP Seminars and the Mancept Seminars for helping to shape an early draft of this paper. Most of all, the author would like to thank their doctoral supervisors, Miriam Ronzoni and Richard Child, for their ongoing support.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship and/or publication of this article: This paper was written as part of a doctoral research programme funded by the School of Social Sciences at the University of Manchester.
