Abstract
In our attempts to achieve privacy and reputation deliverables, it is easy to advocate for service providers and other data managers to open Big Data black boxes and be more transparent about consent processes, algorithmic details, and data practice. Moving from this call to meaningful forms of transparency, where the Big Data details are available, useful, and manageable, is more difficult. Most challenging is moving from that difficult task of meaningful transparency to the seemingly impossible scenario of achieving, consistently and ubiquitously, meaningful forms of consent, where individuals are aware of data practices and implications, understand these realities, and agree to them as well. This commentary unpacks these concerns in the online consent context. It emphasizes that a self-governance fallacy pervades current approaches to achieving digital forms of privacy, exemplified by the assertion that transparency and information access alone are enough to help individuals achieve privacy and reputation protections.
This article is a part of special theme on The Black Box Society. To see a full list of all articles in this special theme, please click here: https://journals.sagepub.com/page/bds/collections/revisitingtheblackboxsociety
In 1913, Justice Louis Brandeis famously published “What Publicity Can Do” in Harper’s Weekly, delivering the oft-quoted assertion that “[s]unlight is said to be the best of disinfectants; electric light the most efficient policeman.”
A few years after the publication of “What Publicity Can Do,” in 1927, Walter Lippmann articulated the aforementioned position criticizing impractical self-governance models, or what he termed the “mystical fallacy of democracy” (Lippmann, 1927: 28), in The Phantom Public:

[T]he man does not live who can read all the reports that drift across his doorstep or all the dispatches in his newspaper. And if by some development of the radio every man could see and hear all that was happening everywhere […]
In the Big Data context, which presents the latest iteration of the self-inflicted, self-governance challenge to our normative ideals of democratic governance, the over-simplified impracticality continues. This ongoing problem is linked to the perpetuation of the similarly incomplete Fair Information Practice Principles, which, through the “notice and choice” framework, place the individual in a situation that Lippmann once described as an “unattainable ideal” (Lippmann, 1927: 29). This top-down policy approach for achieving personal information protection for individuals in the digital space involves two components. The first is “notice,” which involves attempting to provide people with information about data practices, including, but not limited to, data collection, management, sharing, retention, and use. Each of these data practices can be expanded considerably, with “use,” for example, encompassing the creation of models for facilitating data-driven decision-making, automation, secondary and other subsequent analyses, as well as aggregation. It is important to emphasize here that algorithmic auditability fits in this space, whereby notice practice could offer individuals not only information about data sets, but also about the systems that analyze those data sets and answer the myriad questions asked of them. Information about these processes, as well as other information relevant to engagement with the service provider, often reifies the “notice” principle through consent mechanisms linked to privacy and terms of service policies. Similarly, “choice” refers to attempts to provide people with access to and control of their data and its use. This might involve the opportunity to access, review, and correct data at each of the myriad entities involved in data practice.
In “Big Data and the Phantom Public: Walter Lippmann and the Fallacy of Data Privacy Self-Management” (Obar, 2015), I repurposed Lippmann’s argument about problematic approaches to self-governance to suggest that similar concerns exist in the Big Data space. While users may get access to information, they certainly do not have access to the time, omnicompetence, or strategies (e.g. interfaces) needed to read every notice manifestation and review the limitless, ever-expanding data point and data use variegations swirling across the globe. Finding agency would be difficult enough if the madness ceased for a moment and allowed us to walk through a Big Data universe frozen in time. But that is not the universe we live in; everything is constantly changing, amplifying, and growing. As we attempt to read one privacy policy, many others are being created or modified. As we wait on the phone to speak with one internet service provider, many others, including transit providers, move and hold our data. As we attempt to access and review one data set or one dossier at one data manager, thousands more mold, shape, and grow their (likely imperfect) digital manifestations of us, and increasingly the resulting decisions that direct our lives.
One place for further reflection on this dilemma is Frank Pasquale’s The Black Box Society:

Let’s assume, for now, that a full transparency agenda comes to the realm of reputational information – data brokers, credit scorers, and all the algorithmic raters and rankers we’ve encountered so far. […] Would that really allay our concerns about the new reputation economy? Probably not. […] as data use intensifies, it will be hard for persons (even with the aid of new software) […] Discovering problems in Big Data (or decision models based on it) should not be a burden we expect individuals to solve on their own. Very few of us have the time to root through the thousands of databases that may be affecting our lives. (Pasquale, 2015: 150)
Unpacking the consent challenge
Calling for transparency is easy. Achieving meaningful transparency is difficult. Delivering meaningful consent from that transparency appears almost impossible, in the Big Data context especially. One reason these problems persist, evidenced by repeated approaches to policy design, is that policymakers, advocates, and scholars seemingly begin and end the discussion of user control at the point of access. Indeed, access to information is an essential component of delivering notice and choice provisions. What continues to be lost, however, is the following question: once individuals have access to notice and choice manifestations, then what? Before beginning to address some suggestions for assistance in this space that acknowledge the need to move beyond access, it is important to further unpack the problem of access.
The consent challenge persists because individuals lack the tools for converting notice materials into meaningful consent. Transparency alone is not enough to help individuals be self-governing in Big Data contexts (Ananny and Crawford, 2018; Kroll et al., 2017). But what are some other reasons for the consent challenge?
In current manifestations, individuals aren’t that interested in online consent opportunities. Individuals seemingly view them as boring, time-intensive, and overwhelming (Oeldorf-Hirsch and Obar, 2019). What’s more, individuals appear to view these opportunities as unwanted distractions from the primary purpose of accessing a digital service. People download apps to use them, not to engage in tangential discussions and exercises relating to privacy. Indeed, “[individuals] view policies as nuisance, ignoring them to pursue the ends of digital production, without being inhibited by the means” (Obar and Oeldorf-Hirsch, 2020). Therefore, even if meaningful transparency (Suzor et al., 2019) is available, the challenge of getting individuals to engage requires overcoming these concerns. How can access alone achieve this?
A large part of the problem is that while digital service providers are in the dominant position to determine the future of internet governance (DeNardis and Hackl, 2015), they seem uninterested in addressing the challenge. In the consent context, the clickwrap exemplifies this issue. A clickwrap “is a digital prompt that enables the user to provide or withhold their consent to a policy or set of policies by clicking a button, checking a box, or completing some other digitally mediated action suggesting ‘I agree’ or ‘I don’t agree’” (Obar and Oeldorf-Hirsch, 2018). It is often presented as an appealing “agree” button, placed above a less-appealing, harder-to-see link (not a button) to the policies. Clickwraps serve an agenda-setting function in this respect: users, already directed by the aforementioned desire to access services and reading from top to bottom, see the appealing button first and likely miss the less-appealing policy links (Obar and Oeldorf-Hirsch, 2018). The clickwrap provides users with a “fastlane to monetized sections of services, as opposed to diverting attention to dissent possibilities” (Obar and Oeldorf-Hirsch, 2018). The prevalence of clickwraps that distract and even train users to ignore consent opportunities, serving a political economic function (Obar and Oeldorf-Hirsch, 2018), advances the suggestion that service providers are both perpetuating and ignoring the online consent challenge. Thus, even if privacy and terms of service policies detail everything individuals need to know, clickwraps keep people away.
A related concern is what Draper and Turow (2019) refer to as “the corporate cultivation of digital resignation.” They emphasize how studies reveal individuals feel helpless in their attempts to address what notice and choice provisions aim to achieve. The authors suggest these feelings can be linked to service provider practice in the consent context in particular, asserting that various obfuscation efforts by providers demonstrate how converting transparency to meaningful consent appears to be a low priority. They describe the following problematic “tactics” of service providers:

Placation involves efforts to falsely appease concerns. Diversion refers to efforts to shift individuals’ focus away from controversial practices. The use of jargon—terminology that is difficult for those outside a specific group to understand—not only generates confusion, but may frustrate efforts at comprehension. Similarly, misnaming describes efforts to occlude industrial practices through the use of misleading labels. (Draper and Turow, 2019: 1830)
Strategies for supplementing access
If access is a place to start, where do we go from here? Before considering how access might be supplemented, it should be noted that there are many concerns with achieving meaningful transparency and access in the first place. Although terms of service and privacy policies combined now reach tens of thousands of words, requiring an impractical number of hours for reading and re-reading (McDonald and Cranor, 2008; Obar and Hatelt, 2019), many policies do not include information that would actually educate users and help them advance their privacy advocacy and other self-governance efforts (e.g. Obar, 2019; Reitman, 2017). Assuming we reach a position where engagement with the content of notice manifestations would provide an actual opportunity to be self-governing in the digital space, the question to answer is how to ensure that agency, in relation to these materials, is possible for the multitude.
In The Black Box Society, Pasquale points toward such an alternative, where individuals are not expected to shoulder the burden of Big Data discovery on their own.
Here, I will mention two ways that the literature is beginning to address this alternative: the information fiduciary and the infomediary. Both suggest that individuals must enter into principal–agent relationships with another entity to achieve a division of labor that can produce results. The individual (principal) would delegate to either the fiduciary or the infomediary (agent) as a way of getting to the scenario that Pasquale describes, where more simplified options, perceived in less technocratic and more individualized ways, allow users to make decisions based more upon their own experience and interest.
The fiduciary model assumes that the agent being delegated to for the purpose of providing a service (e.g. social networking, banking, email, cloud storage, and so forth) should also serve as a trustee that will operate in the individual’s best interest (Balkin, 2015). Beginning with this moral assumption, one might suggest that the fiduciary would not only aim to protect personal information in general, but, where complications arise (such as with consent processes), construct a scenario that both pre-determines how a user’s best interest is to be served and ensures that manifestations of the communication process help deliver that outcome. So far, a preponderance of consent materials that distract and overwhelm appears to be the current strategy many service providers are employing. Far more needs to be done if the fiduciary model is to be realized, allowing individuals, in the consent context, to evaluate consent opportunities as they evaluate other marketplaces where fiduciaries operate. It is important to emphasize that even if one fiduciary were to construct a pragmatic scenario, and even if a thousand providers did the same, the task of meaningfully consenting, and re-consenting, to all fiduciaries still presents a problem. The solution must present strategies for addressing this challenge as well.
One place to look for such a solution might be the infomediary model, which could operate in conjunction with the fiduciary model (see Obar, 2020). Infomediation suggests that agents specializing in privacy and reputation protections (the infomediaries) would operate between the individual (principal) and the entities involved in data practice (fiduciaries and others). The delegation to an infomediary would involve engagement with for-profit and/or non-profit models for ensuring principal–agent relationships that could deliver consent scenarios that would be meaningful. Similar to the state of the fiduciary relationship, it is unclear exactly how the infomediary model might work. What is clear is that, as principals, we delegate to agents all the time to interpret, represent, and protect our personal information in varying, highly complex scenarios. Two clear analogies are the lawyer and the accountant, professions that each offer for-profit and non-profit options through which individuals can engage, delegate, and benefit. An early attempt to consider what infomediation might similarly deliver includes four potential strategies, suggesting infomediaries might: “1) help overcome information asymmetries, 2) help overcome time limitations, 3) generate narrow choices, and 4) ensure narrow choices lead to clear results” (Obar, 2020). Again, these strategies would operate with the aim of achieving Pasquale’s scenario, where individuals aren’t buried in the details.
Realizing how often we end the discussion at transparency and access is an essential first step to solving the online consent challenge. Moving beyond the self-governance fallacy is challenging, as it is ingrained in perceptions of the Western democratic tradition. In truth, we have never been fully in charge, and never will be. Until we acknowledge our own limitations and, concurrently, ensure governance structures are designed to actually help, we will not reify the privacy policing or digital disinfection we need.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
