Abstract
The Black Box Society was one of the first scholarly accounts to propose a social theory of the use of data in constructing personal reputations, new media audiences, and financial power, illuminating recurrent patterns of power and exploitation in the digital economy. While many corporations have a direct window into our lives through constant, ubiquitous data collection, our knowledge of their inner workings remains partial and incomplete. Closely guarded by private companies and inaccessible to most researchers or the broader public, too much algorithmic decision-making remains a black box to this day. Much has happened since 2015 that vindicates and challenges the book’s main themes. To answer many of the concerns raised in the volume in light of the most recent developments, we have brought together leading thinkers who have explored the interplay of politics, economics, and culture in domains ordered algorithmically by managers, bureaucrats, and technology workers. While the contributions are diverse, a unifying theme animates them. Each offers a sophisticated critique of the interplay between state and market forces in building or eroding the many layers of our common lives, as well as the privatization of spheres of reputation, search, and finance. Unsatisfied with the narrow methodologies of economics or political science, they advance politico-economic analysis. They therefore succeed in unveiling the foundational role that the turn to big data plays in organizing economic and social relations.
This article is part of the special theme on The Black Box Society. To see the full list of articles in this special theme, please visit: https://journals.sagepub.com/page/bds/collections/revisitingtheblackboxsociety
Throughout the 2010s, scholars explored the politics and sociology of data, its regulation (via privacy and trade secrecy laws, among many other rules), and its role in informing and guiding policymakers. COVID-19 has dramatized the importance of this work. Quality health data has been vital to successful efforts to ‘flatten the curve’. In less fortunate polities, health data is politicized, ignored, or manipulated.
To the extent data collection, analysis, and use are transparent and accountable, we can imagine a ‘grand bargain for big data’: rapid analysis of novel data sources for public health purposes, in exchange for enforceable promises that the data is anonymized as well as possible, and is only used for those purposes. However, all too much of this work is being done in ‘black box societies’: jurisdictions where the analysis and use of data is opaque, unverifiable, and unchallengeable.
Corporations and governments around the world are collecting the location data of millions of internet and smartphone users to find patterns in how the virus spreads. The resulting analytics often appear opaque to outsiders, undermining accountability. They can obscure complex social realities, with lasting consequences for civil liberties and public health. Beyond obvious privacy concerns, we already have evidence that location data collection and publication during the COVID-19 pandemic is having a discriminatory impact on minority groups and has, in some cases, led to additional stigma and violence against specific disadvantaged neighbourhoods, local communities, or religious groups. For example, as a result of the disclosure of personal information about people who tested positive in Seoul, LGBTQ individuals have been exposed to an outburst of intensified homophobia and harassment (Human Rights Watch, 2020). A key Chinese coronavirus app has classified persons in an opaque way: as a result, Africans living in Guangzhou, China have reported being evicted from their homes because of an erroneous association with COVID-19 transmission (Marsh et al., 2020). U.S. states are rushing into the app space without adequate guidance about the privacy and security implications of novel forms of surveillance. These systems are thus replicating structural power inequalities, instead of addressing them.
What is missing from mainstream debates is an understanding of how data analysed algorithmically is used as a tool for social, political, and economic control. For example, big data biases often distort decision-making, diverting the deployment of crucial funds. Even worse, the emphasis on data and automated tracking of individuals detracts from far more important foundations of public health measures: universally available, free testing and treatment that can establish a foundation of knowledge about disease prevalence and infection fatality rate (both generally and with respect to the specific vulnerability of groups and subgroups). Nevertheless, narratives of tech solutionism and even salvationism abound.
Given the urgency of these problems, there could not be a more appropriate time to reflect on the accelerating deployment of big data-driven analysis by corporate and state interests that operate in proverbial black boxes.
Much has happened since 2015 that vindicates and challenges the book’s main themes. Cambridge Analytica, social credit scoring, and the ‘robodebt’ debacle have vaulted algorithmically inflected decision-making into the headlines. The General Data Protection Regulation has intensified debates over data protection. Europe’s emerging ‘right to an explanation’ of automated profiling has provoked lengthy debates on the processing of personal information. Yet recurring examples of algorithmically driven injustices raise the question of whether transparency – the foundational normative value in the book – is sufficient to check the power of corporate and state actors.
To address these questions, this symposium features the work of leading thinkers who have explored the interplay of politics, economics, and culture in domains ordered algorithmically by managers, bureaucrats, and technology workers. It is hard to think of an aspect of life that has not been affected by the use of algorithms, automation, and big data, including medicine, education, welfare, voting, dating, communication, law enforcement, warfare, and cyber-security. However, as algorithmic decision-making becomes ever more prevalent in our public and private lives at the local and global level, these systems all too often remain in the hands of power brokers such as corporations and governments, inaccessible to researchers or the broader public. By bringing social scientists and legal experts into dialogue, we aim both to clarify the theoretical foundations of critical algorithm studies and to highlight the importance of engaged scholarship, which translates the insights of the academy into an emancipatory agenda for law and policy reform.
Starting with illuminating examples from data analytics applied to managers and workers, Ifeoma Ajunwa, in ‘The Black Box at Work’, describes the data revolution of the workplace, which simultaneously demands workers surrender intimate data and then prevents them from reviewing how it is used. Far too many workers remain ignorant of the algorithms governing their workplace and measuring their productivity. Ajunwa examines three dangers of this data revolution: the concealment of disparities in hiring, the potential for ‘data-laundering’, and pervasive surveillance. As she wisely observes: ‘The folly in this oracular reliance on big data-driven algorithmic systems is that without proper interpretation, the decision-making of algorithmic systems could devolve to apophenia’. When patterns are made (and not found via some publicly justifiable method), evaluations of workplace performance are suspect, eroding the dignity and autonomy of workers.
Reaffirming the urgency of intelligible evaluation as a form of dignity, Mark Andrejevic, in ‘Shareable and Un-Shareable Knowledge’, focuses on what it means to generate actionable but non-shareable information. To act on data without being able to explain it decentres and devalues narratively intelligible accounts of action. Indeed, as theorized in 20th-century philosophy of action, such effects in the world may not be action at all, but mere observable regularity. As unknown automated systems sort and classify ever more benefits and burdens, the public wants to know how persons are being sorted and judged. Regulators could respond by simply forcing the release of code or data. However, we often need more – an understanding of what managers and coders were planning when they designed the system, and how they react to challenges or unexpected outcomes. Narratives help us make sense of such plans, framing complex scenarios into more accessible, causally connected versions of events. Narrative is no cure-all, of course: sometimes it deflects deeper understandings of critical issues or oversimplifies complex causation into hero/villain dichotomies. Nevertheless, it is hard to read Andrejevic’s intervention here (as well as his important recent book) without concluding that narratively intelligible accounts of automated decision-making remain indispensable.
Lack of public understanding of targeted advertising in politics motivates Margaret Hu’s intervention. Having made several important contributions to legal discussions of the role of big data in public decision-making, Hu here turns to the critical interplay between private media and public influence. She helps us reassess the Cambridge Analytica–Facebook scandal (which involved millions of Facebook users’ data being released and exploited without proper authorization). Hu’s article ‘Cambridge Analytica’s Black Box’ surveys a range of legal and policy remedies that have been proposed to better protect consumer data and informational privacy. She convincingly demonstrates that effective reform should include both increased oversight by the Federal Trade Commission and the potential for imposing procedural and substantive due process-type requirements on private actors like Facebook. While the voting booth (or mail-in ballot) itself must be private, the larger public sphere of campaigns must be open to public scrutiny, lest shadowy actors pursue hidden agendas designed to undermine the interests of the very groups they purport to help.
The automated public sphere serves political information to voters in deeply privatized and hidden ways, and now the education sector is beginning to adopt similar technologies of monitoring and personalization. In response, Paul Prinsloo examines ‘Black Boxes and Algorithmic Decision-making in (Higher) Education’. Prinsloo has a strong record of critically interrogating the role of technology in education, and that analytic acuity animates this piece. The use of student data by higher education institutions has become central to operational and strategic planning, as well as the delivery of tailored learning experiences. Like grades, AI-driven evaluations will have significant and permanent ramifications for students’ lives, but unlike grades, they largely remain unknown to students. Prinsloo’s normative framework elevates the importance of autonomy, privacy, social well-being, freedom from bias, fairness, epistemic agency, and ease of information seeking. Each of these weighs against the pervasive use of black box algorithms at higher education institutions.
Education never ends, and many well-intentioned policymakers who neglect urgent issues need it just as much as students. Continuing her work to place the climate emergency at the centre of policymaking concerning the development of AI, Benedetta Brevini argues that we must account for the environmental costs of AI. Brevini has, in past work, illuminated unexpected and disturbing connections between communications and environmental degradation. Here, in ‘Black Boxes, not Green: Mythologizing AI and Omitting the Environment’, Brevini documents how AI runs on technology, machines, and infrastructures that deplete scarce resources in their production, consumption, and disposal, thus exacerbating problems of waste and pollution. AI also relies on data centres that demand impressive amounts of energy to compute, analyse, and categorize. If we want to stand a chance of tackling the Climate Emergency, we must stop ignoring the environmental problems generated by AI, which remain scarcely remarked upon in popular accounts of it.
As governments and firms repeatedly fail to address climate change, they create the conditions for increasing scarcity and precarity, particularly in cities. These dire conditions create demand for intensified, automated surveillance to sort benefits and burdens. Surveying these grim horizons, Gavin Smith develops the concept of our ‘right to the face’ in ‘The Face is the Message: Theorising the Politics of Algorithmic Governance in the Black Box City’. Smith’s analysis builds on his widely recognized contributions to the sociology of normalization – how certain behaviours are deemed ‘excessive’ and targeted for shaping by authorities. Algorithms now conduct much of the surveillance of cities, constantly passing judgment on mundane activities. Smith examines recent studies of the Australian cities of Darwin and Perth, looking first at how these cities promise to empower residents through AI, before considering some of the political and ethical implications of these structures for notions such as the right to the city and civil liberties.
In ‘Big Data: From Fears of the Modern to Wake-up Call for a New Beginning’, Nicole Dewandre applies her deeply nuanced critique of modernity to algorithmic societies. She argues that Big Data may be hailed as the endpoint or materialization of leading promises and fears of a Western modernity dating back to Cartesian metaphysics, or as a wake-up call for a new beginning. Dewandre advances the latter approach, exploring the downside of calls for transparency rooted in Enlightenment principles. In Dewandre’s framework, calls for more transparent corporations and governments are not merely a burden on the watched, but also on the watchers, who engage in what she vividly deems the ‘work of watchdogging’.
Jonathan Obar confirms this problem empirically in ‘Sunlight Alone is Not a Disinfectant: Consent and the Futility of Opening Big Data Black Boxes’, and proposes solutions to more equitably share the burden of understanding. Obar’s past work in communications has shed important new light on the automated public sphere. Here, Obar shows how a focus on fragmented end-users’ rights and obligations can easily go awry. Most persons do not have the time or inclination to engage with richly meaningful and detailed forms of transparency and consent online. A robust system of information fiduciaries and infomediaries could represent their interests, but much more work must be done to realize the promise of these institutions. A new governing framework for data is necessary.
In ‘Cyborg Finance Mirrors Cyborg Social Media’, Kamel Ajji describes 21 Mirrors, a nonprofit organization that analyzes, rates, and reports to the public on the policies and practices of social media platforms, web browsers, and email services, regarding their actual and potential consequences for freedom of expression, privacy, and due process.
While the contributions are diverse, a unifying theme animates them. Each offers a sophisticated critique of the interplay between state and market forces in building or eroding the many layers of our common lives, as well as the kaleidoscopic privatization of spheres of reputation, search, and finance. Unsatisfied with the narrow methodologies of economics or political science, they advance politico-economic analysis. They therefore succeed in unveiling the foundational role that the turn to big data plays in organizing economic and social relations.
From a purely economic perspective, we might seek better markets for data, or more targeted subsidies and taxes to ensure its proper production and vetting. From a purely political perspective, we might propose new forms of data governance. These are worthy initiatives. But they must be supplemented by the theoretical rigor of political economy, which exposes how the iterated interplay of political and market victories and losses can either entrench or challenge power.
Political economy is a venerable discipline. Integrating the long-divided fields of politics and economics, a renewal of modern political economy could unravel wicked problems neither states nor markets alone can address. It is an approach that forces us to rethink the relations between private experience and common purpose, to pursue viable alternatives to marketization and datafication via surveillance capitalism. All the contributors help us imagine practical changes to prevailing structures that will advance social and economic justice, mutual understanding, and ecological sustainability. For this and much else, we are deeply grateful for their insightful work.
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
