Abstract
This article proposes an ethical framework to navigate life in the datafied world that combines the relational ethics approach of the philosopher Paul Ricoeur with the idiom of co-production from the field of Science and Technology Studies (STS). Mainstream data and computing ethics approaches, which tend to view ethics itself as a technology to produce particular outcomes, fail to adequately consider the context of the datafied world for ethics. The datafied world is a condition in which data and computing technologies form the ineluctable infrastructure for daily life, structuring social order, forming power relations, and supporting visions of desirable futures. I argue that the datafied world is not just a background upon which ethics unfolds, rather it demands a novel framework for ethics that understands the contexts of data and computing technologies and their consequences for human action. The idiom of co-production suggests how action gets tied to visions of the good in the datafied world. In particular, it draws attention to the evolution of these actions in the identities, institutions, representations, and discourses of the social world, creating specific forms of life and meaning in datafied societies.
In an ethical moment
In the fall of 2019, University of California (UC) Berkeley students joined a protest against the University's association with the company Palantir. Palantir contracted with the United States Immigration and Customs Enforcement (ICE) agency to supply a technology that agents used under the Trump administration to perform raids and detain undocumented immigrants. Among the protesters’ demands was that the University administration disallow Palantir from participating in the Corporate Access Program (CAP) of the Department of Electrical Engineering and Computer Sciences. The CAP program allows companies to pay a fee to be able to recruit UC Berkeley students. Here was a public technology scandal: a powerful technology corporation, supporting controversial practices of a national institution, challenged by students of a public university concerned about their university's commitment to social justice.
Capturing UC Berkeley in the crosshairs, Jimmy Wu, a San Francisco-based activist and writer on technology and culture, tweeted a statement from a student protester: “Every Data Science major must take an ethics course to graduate. UC Berkeley should take a page out of its own curriculum” (Wu, 2019). The tweet hit home. The referenced “ethics course” was none other than the one that I had co-taught. With my course in the spotlight, my co-instructor and I faced the question of what role we and the course should play in the crisis. What exactly should an education on the ethics of data and computing provide to its students? How can instructors offer a path—the vocabulary, ways of thinking, and examples—by which students could evaluate a situation of ethical consequence, develop an opinion, and act?
In this article, I propose an ethical framework intended to support students of all ages to navigate life in the datafied world. The datafied world is a condition in which data and computing technologies form the ineluctable infrastructure for daily life, where they structure social order, form power relations, and support visions of desirable futures. This condition of life is not just a background upon which ethics unfolds; rather, it demands a framework for ethics that considers human action in the datafied context.
To bridge an understanding of ethics with the context of the datafied world, I combine the ethical thought of Paul Ricoeur with Sheila Jasanoff's (2004) idiom of co-production in Science and Technology Studies (STS). I build upon the work of scholars in the philosophy of technology who, heeding Hans Jonas’ call to identify an “ethics for the technological age” (Jonas, 1984), investigate the relationship between human and technical action and its significance for how people ought to live enmeshed within sociotechnical systems (Coeckelbergh, 2020; Ess, 2014; Michelfelder, 2009; Vallor, 2016; Verbeek, 2005). Specifically, my project takes up recent work in ethics and political theory by Jarrett Zigon (2019) and Louise Amoore (2020) who consider the conditions for an interpersonal approach to ethics in the datafied world. My framework attempts to support people to make sense of what is at stake, ethically, in the situations they find themselves in, to engage with others in their community in dialogue, and take steps towards constituting forms of life that support collective flourishing.
The trouble with ethics
Many data and computing ethics initiatives exist today (Jobin et al., 2019). For example, a community of data scientists led by Data for Democracy, Bloomberg, and BrightHive have drafted an ethics code for data practitioners; Google has published a set of “AI Principles” for its employees; Omidyar has created the “Ethical Operating System (OS)” for technology companies to use to anticipate the social consequences of their products; and the UK government has instituted a “Data Ethics Framework” to govern data use in government and the public sector. Meanwhile, in academia, new approaches to research and teaching data ethics are in development, such as Santa Clara University's shareable data ethics modules, the Harvard SEAS “Embedded Ethics” project and hundreds of “technology ethics” classes in universities around the world. The need for these initiatives is confirmed by near-daily revelations of data breaches and privacy violations in societies grappling with how to regulate data practices and technologies. And yet, life in the datafied world demands something more than what these initiatives provide: it demands that ethics shed light on its own tangled relationship to technology. Before proposing a framework, I outline a problem with mainstream approaches to data and computing ethics.
One characteristic of mainstream approaches to data and computing ethics is their reformulation of ethics as a kind of technology. Ethical interventions are designed as correctives for human failure (whether accidental or deliberate). Ethics is conceived as a set of processes to arrive at desirable outcomes—such as the provisions of “privacy,” “fairness,” or “security”—in spite of human factors. In data and computing ethics, as in earlier applied ethics traditions like bioethics (Evans, 2006) and engineering ethics (Fleddermann, 2012), ethics is considered to be a body of technical knowledge whose theories can be learned and applied to technical problems, just as a physicist might learn and apply physical theories to engineer a functional technical system. This understanding of ethics as a technical body of knowledge to avert enumerated failure modes motivates the development of ethical “codes,” “operating systems,” or technological artifacts like machine learning bots trained to detect and eliminate human error or malice. As in bioethics (Hilgartner, 2018), many data ethics initiatives focus on treating the social consequences of these technologies instead of supporting the capacity of people to critically analyze the content and authority of data and computing.
In the ethics-as-technology paradigm, agency and responsibility for sociotechnical systems fall on human actors (Elish, 2019), and in particular on individual engineers or data-practitioners. This ignores the complex configuration of human and technical agency that characterizes all sociotechnical systems, and especially the datafied world. For example, Palantir's cooperation with ICE is concerning not just because people disagree with the activity of ICE as an institution, but because Palantir’s technology changes ICE’s practices (Woodman, 2017). Palantir’s tools make it possible for ICE agents to pool an unprecedented amount of data from disparate sources and identify which individuals to target through associations that the individuals have to collectives, a networked approach that is rife with discrimination and misidentification (Eubanks, 2018). This illustrates one way in which the technology informs an institution’s ability to exercise its power, in a way that has direct bearing on what counts as ethical action for the agent, for the technology company, and for the current or prospective workers of the organization. While philosophers have drawn attention to the ways in which technology configures human agency with consequences for ethics (see, for example, Coeckelbergh and Reijers, 2016; Jonas, 1984; Vallor, 2016), this thinking is not taken up in most data and computing ethics initiatives.
The second problem is that, by assigning the responsibility for ethics to the data-practitioner, engineer, or technology corporation leader, data and computing ethics initiatives foreclose the possibility of society’s deliberation on the place and role of the technologies in their midst. The ethics-as-technology remedy is focused on avoiding negative consequences or preventing harm (Jobin et al., 2019) instead of opening up questions about the meaning of the good that societies aim at with technical projects. For example, the “Ethical Operating System (OS),” a system of codes and step-by-step interventions developed by Omidyar’s Technology and Society Solutions Lab, invites technologists to think proactively about the “dark side” of the technologies they create in order to prevent bad scenarios from becoming realities (Omidyar Network, 2018). Avoiding harm with technology is thus implicitly presented as the same pursuit as doing good. Yet there is an important, and mostly ignored, difference between the two. The strategy to minimize harm is prioritized over debating the merits of the good: including what the good means, for whom, and how to pursue it. For instance, in Mark Zuckerberg's testimony in the US Congress about Facebook's role in the Cambridge Analytica scandal, there was no discussion of the meaning of the company's allegedly “good” mission to connect the world. Instead of asking what is the vision of the good that the Facebook platform aims at, whose vision of the good it is and for whom it is created, the focus of data ethics debate is on whether the company may have strayed in pursuit of its mission in ways that have caused harm to individuals or society (Zuckerberg, 2018).
But how do you know that you are maximizing the good by minimizing harm when you do not define what the good is? Mainstream approaches address this by using a few key values as proxies for the good. For example, Mike Loukides, Hilary Mason, and DJ Patil's ethical checklist for data-practitioners can be broken down into concerns about four key values: privacy, fairness, transparency, and security (Loukides et al., 2018). This already narrow set of values acquires even narrower scope by being defined in ways that are inseparable from the statistical and computational solutions that are developed to address them. Thus, “privacy” becomes non-consensual exposure of data that can be remedied through differential privacy; “fairness” is seen as the opposite of statistical bias, and corporate-made tools exist to “de-bias” algorithms of any slant; “transparency” is understood as practices of documentation and reproducibility that technologists can make sense of, without consideration for public oversight and how it can be enacted; and “security” is defined in terms of the impermeability of technical systems, not human users. This way of defining values removes them from the realm of human relationships, where they have ethical import, to the realm of ethics-as-technology. Data and computing ethics become preoccupied with preserving these goods rather than supporting public deliberation on questions about their very significance in the datafied world, such as: for whom and against whom do privacy and security matter in a datafied world where surveillance capitalism is the dominant political economy (Dwork and Mulligan, 2013; Zuboff, 2019)? How do we ensure that algorithms produce not just unbiased outcomes within the same unjust institutions but actively acknowledge structural injustice and support society's evolution to justice? To whom should the sociotechnical system be made transparent in order that these systems gain public trust?
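The reduction of “privacy” to a computational property can be made concrete with a sketch of the Laplace mechanism, the standard construction behind differential privacy. The function name and data below are hypothetical and illustrative only: “privacy” becomes a tunable numeric parameter (epsilon) of a release computation, abstracted away from the interpersonal relationships in which the value originates.

```python
import math
import random

def dp_count(values, threshold, epsilon=1.0):
    """Release a differentially private count of values above a threshold.

    A counting query has sensitivity 1 (one person's record changes the
    count by at most 1), so adding Laplace noise with scale 1/epsilon
    yields epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if v > threshold)
    # Sample Laplace(0, 1/epsilon) noise via the inverse-CDF transform.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical data: the analyst releases a noisy aggregate, not records.
incomes = [30_000, 45_000, 52_000, 120_000, 75_000]
released = dp_count(incomes, threshold=50_000, epsilon=0.5)
```

A smaller epsilon means more noise and a stronger formal guarantee. The point, for the argument above, is that the ethical question of privacy for whom, and against whom, is replaced by the calibration of a single parameter.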
In light of these problems with mainstream data and computing ethics, we need a framework of ethics that confronts how computing and data science are constitutive of conceptions of the good and the means for pursuing it. We need a framework that connects an individual's idea of the good and the limited, situated, and context-bound opportunities for agency of the individual with collectively held ideas of the good and institutional forces—all in the context of their dynamic relationship with technologies.
Paul Ricoeur's understanding of the human as always in formation through the process of action and accounting for one's actions is a starting point of a contrasting approach to ethics. It supports a definition of ethics that is focused on situated human relationships, instead of on “goods” such as “privacy,” “fairness,” or “security.” Any attempt to examine ethics in human relationships today requires considering the constitutive role of science and technology. The STS idiom of “co-production” holds that how people know the world, and the technologies they build to order it, are inseparable from how they choose to live in it (Jasanoff, 2004). When we order knowledge, or design technological tools for making or deploying this knowledge, we are also engaged in the normative and political project of ordering society. If we recognize co-production of technology and society to be at work in the datafied world, then it does not make sense to separate human agents and actions from technological agents and actions. If we accept that all individuals, not just technological agents, have an ethical position and an opportunity for ethical formation on issues that comprise the datafied world, then we need models that offer insight into the relationship among individuals and technologies. Within the tradition of philosophers working on the question of configuration of human and technical agency and its significance for ethics, I highlight the work of Jarrett Zigon and Louise Amoore because of their attention to conditions of life in the datafied world and interpersonal dynamics of ethics. Relational ethics defines the human to be a relational being and ethics as profoundly shaped by and enacted in relationships among people. A relational ethics approach for data and computing invites analysis of how these technologies mediate and reconfigure the relationship between the self and other.
Recent scholarship on relational ethics and data and computing takes its point of departure from the African philosophical tradition, and in particular ubuntu (Birhane, 2021; Mhlambi, 2020; Wodajo and Ebert, 2021). In this article, I show how relational ethics is also part of some Western philosophy and moral anthropology. The philosophies of Paul Ricoeur, Emmanuel Levinas, Judith Butler, and Adriana Cavarero direct attention to the relationship between self and other, explaining how essential the “other” is to the possibility and formation of the self (Butler, 2003; Cavarero, 2000; Levinas, 1998; Ricoeur, 1992), and how this relationship calls upon the self to respond or “attune” when encountering another (Butler, 2003; Zigon, 2019). In contrast to ethical questions framed in terms of the good (“What is the good?”) or the right (“Did she act rightly?”), the main ethical question of “relational ethics” is “How is it between us?” (Zigon, 2019). With this question, relational ethics draws attention to the space between the “I,” or the self, and the “Other.” Zigon's approach helpfully focuses attention on the specific role that algorithms play in shaping the encounter with other human beings.
Political geographer Louise Amoore attends to the ways in which algorithms “[establish] new patterns of good and bad, new thresholds of normality and abnormality, against which actions are calibrated” (Amoore, 2020). Her concept of “cloud ethics” is shaped by this decidedly co-productionist account of algorithms and human life. Like Zigon's relational ethics, “cloud ethics” is anchored in human relationships among others and in the need to account for oneself and one's actions as an inescapable condition of being human. Ethics, writes Amoore, is “the inescapably political formation of the relation of oneself to oneself and to others” (Amoore, 2020: 7). This understanding of ethics as about human relationships rather than a set of principles or rules, changes the way in which we see what is ethical about algorithmic models. “Cloud ethics” draws attention to how algorithms generate “detailed, active, partial way[s] of organizing worlds” (Amoore, 2020: 20). Algorithms insert themselves into the ethico-political playing field of human accounts and sense-making under conditions of uncertainty and undecidability, the same conditions that algorithms are deployed to resolve.
Aiming at the good life
In this article, I share a working definition of ethics that combines Paul Ricoeur's concept of the “ethical aim” with co-production. I propose to engage more, not less, with ethics: to think more expansively about what ethics is and what relationship it has to technology. Frustrated with the inability of conversations about ethics to get at crucial issues of power and inequalities, a number of analysts and activists in the area of data, computing, and society have moved away from discussing ethics to focus on politics, power, and economy (Crawford, 2021; Pratyusha, 2020; Zuboff, 2019). While I agree with the need to center power and inequality to understand the world and how it is made, I think the problem is not ethics as such, but impoverished applications of ethics to data and computing. When students ask about what they should do and who they should be in relation to how they see data and computing shaping the world, they are asking about ethics, in addition to politics. Instead of ignoring the more intimate ethical aspect of the question, we need to be more aware of the place of ethics in relation to self, community, and technology in order to create from ethics a more powerful tool for making sense of, scrutinizing, and shaping data and computing in the world.
Paul Ricoeur (1913–2005) was a French philosopher who proposed a way of thinking about ethics that was bound with the formation of a person's identity. Ethics is not a code imposed upon the person from outside; rather, it is constitutive of the process of self-formation that is social and relational through and through. Human actions vis-à-vis one another—the same actions that create relations of domination, violence, or community—become the building blocks of identity and constitute one's ethical being. Due to his understanding of ethics as a process of self-formation in community, Ricoeur spoke not of “ethics” as such, abstracted from the person and process, but of the “ethical aim,” which he defined as “aiming at the ‘good life’ with and for others, in just institutions” (Ricoeur, 1992: 172). Before we unpack this definition, it is critical to understand how Ricoeur saw the formation of ethics, via identity, interaction, and narration.
Identity for Ricoeur is enacted in front of other people and formed in the process of interaction and narration. Narratives, whether the narratives we hear and read, or the narratives that we create by recounting our actions, play a crucial role in the formation of our identities because we are temporal beings, living in time. The development of identity happens through configuration by narrative, as it is essential for humans to internalize their experiences and actions while also accounting for actions in front of others.
Ricoeur's theory of narrative has been extended by philosophers of technology David Kaplan, Mark Coeckelbergh, and Wessel Reijers to analyze the human relationship with technology. Kaplan proposed that Ricoeur's view of narrative mediation of individual and collective life can help philosophers of technology to deepen the understanding of “the different ways that technologies figure into our lives” (Kaplan, 2006: 50). Coeckelbergh and Reijers (2016) follow this suggestion with an analysis of the ways in which technologies can “configure,” using Ricoeur's term, characters and events similarly to the way that narratives do. They argue that all technologies are “narrative technologies”: they participate in “co-authoring” narratives with human beings that organize human relations in time and become scripts (Akrich, 1992; Latour, 1992) according to which people live.
Coeckelbergh and Reijers take up Ricoeur's theory of narrativity because they rightly believe that it can help to account for the linguistic and social dynamics of technology. Having shown how “technologies co-shape meaning and human action,” they conclude that it is possible to “deploy the theory of narrative technologies as an ethical theory of technology” (Coeckelbergh and Reijers, 2016: 344). Their framework foregrounds how technologies configure human actions through the technology's narrative dimensions, opening up another axis according to which technology can be evaluated. I pick up this invitation to think with Ricoeur about the ethics of technology, not, however, from the starting point of narrative technology, but from the narrative identity of the human and its ethical import. As we will see, when the starting place of analysis is the narrative subject, ethics turns from an evaluation of technology to a dynamic process of ethical action in technological contexts.
Ricoeur distinguishes two components of identity, idem and ipse. Idem identity is that part of us which we believe to be constant about who we are. Ipse identity is the part of us that changes with time through our life. These two aspects of identity, what stays the same and what changes, are constantly balancing one another and evolving as people act in the world and narrate those actions to themselves and others. Actions, and the narratives about them, are integral to both idem and ipse identity formation because actions, while each having ends in themselves (i.e. the specific thing that the action is meant to achieve), are directed at an “ethical aim”; that is, actions aspire towards the “good life with and for others in just institutions” (Ricoeur, 1992: 172). Actions are not just instrumental or purposive; they are also ethical (in the sense that they aim at the good life), and they both shape outcomes and form our identity as the basis for future actions and narratives.
Notice how this theory of identity formation, as a basis for a theory of ethics, is distinct from the ethics-as-technology approach. With ethics-as-technology, the person is a rational actor wielding ethics as a purposive tool. Rational action, in this view, reflects economic theory and means action which takes the most effective means to achieve a predetermined end. In contrast, Ricoeur's definition invites us to view ethics as a process of self-knowledge that individuals, situated in their identities and narrative formation, undertake. This process is social (practiced with and through others), and is influenced by institutional realities at the same time as it shapes those realities.
Let's take a closer look at the three components of Ricoeur's definition of the ethical aim of action—“aim at the good life,” “with and for others,” “in just institutions”—and consider what happens to each in the datafied world.
Ricoeur defines “the good life” as “for each of us, the nebulus of ideals and dreams of achievements with regard to which a life is held to be more or less fulfilled or unfulfilled” (Ricoeur, 1992: 179). It is the “good life” as we individually understand it from the perspective of our own in-time identity, but which can change through the process of pursuing it. We “aim” our actions at this vision of the good life. By characterizing ethics as an “aim” rather than as a predefined principle, imperative, or norm, Ricoeur suggests that ethical action is an active pursuit rooted in what is lacking and what we wish for and aspire to. This is fundamentally different from ethics-as-technology where the human is reduced to an operator who must conform to a set of rules, leaving no space for the individual's aims and aspirations to unfold. In contrast, ethics as an “aim” orients our actions and allows the development of our identities, which Ricoeur thought to be experienced as always incomplete self-awareness and always in formation. As such, ethics is rooted in the process of living, in desire for an ideal that emanates from the self at the same time as that ideal becomes a measure for the self.
The second part of the definition moves from the individual's sense of the good life to the realm of interpersonal relationships. “With and for others” refers to friends and people one meets face-to-face during the course of one's life: the proximate others whose attitudes, actions, and perspectives both inform our sense of self and the good while also being the necessary playing field (and set of constraints) in which our ethical aims unfold and evolve. “With” others, because it is by pursuing the good in communities, by giving an account of our actions to others, that we make sense of what is “good” and how we are doing in aspiring to it. “For” others, because our actions necessarily inform the good for, or on behalf of, those proximate others and cannot achieve their aim without engaging with them. This part of the definition recognizes ethics as interpersonal and recognizes action as responsible not only to oneself but also to those in our communities, similarly to approaches from feminist ethics (especially ethics of care) (Gilligan, 1982; Walker, 2007). The interpersonal dimension of ethics and technology's significance for informing it are proclaimed in classic works of philosophy of technology (Jonas, 1984; Verbeek, 2005; Van Den Eede, 2010) and in recent scholarship focused on algorithms and ethics, such as Zigon's “relational ethics” and Amoore's “cloud ethics.”
The interpersonal dimension of ethics is developed further by the third part of Ricoeur's definition: we pursue the ethical aim “in just institutions,” that is, in the wider realm of society and politics that guarantees the very ability to act ethically. The social institutions we create and participate in should share in our sense of the good life, should help us to achieve the good life, and serve as the collective expressions of the good. They are essential to living a good life because we cannot pursue this life without acting together with others in structured ways that are encoded in institutions. Freedom for Ricoeur is the ability to move through the thought of ethics (a temporal, narrative process), to confront morality (principles that are perceived as obligations), and to balance between these two with conviction, or something of one's idem identity that persists and makes us return to the same belief about who we are. The process of action starts with the self, passes to the universal (via the interpersonal and institutional), and returns to the self again in conviction, to inform individual action. The ability to pass through this process is what just institutions are meant to protect (Ricoeur, 2000). This understanding of institutions acknowledges that they are not only functional in object-oriented ways, such as securing the distribution of resources; they are also essential to self-formation and to the pursuit of the ethical aim.
Ricoeur's perspective on ethics begins to address problems of data ethics as “ethics-as-technology.” First, it provides a framework for thinking about human action and its relationship to ethics. If ethics is always related to the self, then ethics concerns not only discrete decisions in work with technology but also the broader question of the pursuit of a good life that transpires through human relationships, both proximate and distant. While ethics is at stake in discrete decision points of working with data (Boenig-Liptsin et al., 2022), the actions at each point are framed, shaped, and formed by a larger level of life plans and experiences (including, for example, choice of profession) and of our own self-evaluation following the act of interpreting our actions and accounting for them in narratives in front of, and with, others. The focus on life aims and narratives links individual ideas of the good with those of the collective and avoids the assumption that good individual actions add up to collective good and that ethics can be a code to direct actions.
Second, in a definition of ethics grounded in an interpersonal formation of the self we also come to recognize the dynamic concept of the good. The concept of the good life that one aims for is influenced by the means one uses to pursue it. Furthermore, the concept of the good life evolves over time, through changing self-understanding and relationships (sometimes struggles) among oneself, friendships, and structures of social institutions. This definition of ethics acknowledges that our understanding of the meaning of a good life is ever-evolving, because we perpetually recast ourselves and our relationships via narratives that attempt to make sense of the dynamic integration between human and technical agency that comprises the datafied world. Whereas ethics-as-technology restricts the focus of ethics to discrete decision points, technological actors, and more narrowly defined outcomes, Ricoeur's framework offers an alternative. It exposes how each of us as individuals (in our whole and evolving identity) has a stake in the problems, choices, and social and institutional relationships through which technologies are designed, used, and controlled to shape our lives.
A framework for ethics that begins with an evolving and socially constructed self and that acknowledges a dynamic conception of the good life is consistent with recent work on ethics. Judith Butler's perspective of human action, responsibility, and formation of the self similarly stresses the central place of accounting for one's actions through narrative as constitutive of both the self and ethics (Butler, 2003). Butler describes how the subject fears the inability to narrate herself or to give a complete account of herself. This fear, however, is necessary since it confronts us with the limits of our knowledge and lack of transparency of ourselves to ourselves. Butler sees the ability to forgive ourselves for this lack of transparency to be the precondition for being ethical and taking responsibility for our actions. While, for Butler, the narrative dimension of ethics is what allows the person to humbly come to terms with their own irreducible unknowability, for moral anthropologist Webb Keane the narrative dimension of ethics presents yet another opportunity. Keane describes the importance to ethics of the “third-person stance” that allows people to give an account (to themselves and others) of their actions (Keane, 2015). To evaluate one's actions from a distance of the third-person is a way to reform the self through the account and draw upon the common resources of one's culture and society's shared reference points that support a socially embedded and dynamic conception of the good life.
Ricoeur's concept of the ethical aim points to salient aspects of the debate among UC Berkeley students about Palantir and begins to help people consider their relationships to, and positions on, the issues and identify possible actions. With Ricoeur's definition of ethics, students see that ethics starts with understanding the human contexts, that is, the identity of the self in relation to others in a datafied world. Students learn to listen to the narratives of all stakeholders to understand how those narratives form concepts of the good life and of everyone's self-defined identities and roles in moving toward that life. In considering the Palantir case, students assess the place from where their action would begin: their sense of self and expertise as computing and data professionals in training, the state of life that they might envision their profession to bring them, their relationship to immigration or to the Federal government, etc. Ricoeur's definition immediately draws attention to the “who,” or the specific identity and positionality of the acting person in their multiple and dynamic identities, and supports examining the variety of ethical stances towards an issue from these multiple perspectives. The approach encourages not only the analysis of each position singularly, but also of the way in which the positions bear upon one another, mediating the vision of the good life of each and of the collective as a whole, as well as the institution that the collective constitutes. Engaging in this process of narrating and listening leads students to expose, in the protected and social space of the university, the differing aims and concepts of the good.
Unlike ethics codes and operating systems that treat ethics as a set of guidelines for operators to follow to avert catastrophe or criticism, Ricoeur offers a path that reflects on individual and social identity and on the meaning of the good life, and that works toward reconciliation across perspectives in the social sphere. It is not just a set of rules developed by “experts” for “the few” technologists; rather, it results in a broader internalization of, and reflection on, the ways technology forms the plurality of identities of people living in the datafied world, inserts itself into our relationships, and is formed by and for the community. In opening up the variety of these questions about the Palantir case, this definition of ethics does not—it cannot—serve as a technology that instructs in a preset course of action. Instead, it supports practical ethical work: the work of self-knowledge and formation in communities and within specific contexts of the datafied world.
Relational ethics in the datafied world
Ricoeur's definition of ethics, as discussed above, sets us on a different track by treating ethics as the pursuit of the good life, with others, in just institutions. Although this definition understands ethics as a dynamic product of the individual (at interpersonal and socio-political levels) and recognizes the situatedness of ethics in specific cultures and times, we need to add specific insights from the philosophy of technology and STS about human agency and “the good” when living in a world where data and computing technologies are a necessary aspect of daily life. In other words, using the definition requires acknowledging the ways in which science and technology relate to the human sense of self, interpersonal relations, and institutions, ways that are essential to observe and understand if we are to discuss the good life in the datafied world. Below, I examine what is unique to ethics in the datafied world by looking at the interplay of the datafied world, human agency, and the concept of the good life.
The question of who is acting is central to ethics across time, but it takes on a unique meaning in the datafied world. In traditional data and computing ethics approaches, the question of who is acting usually remains an unexamined assumption, namely that it is the technologist (technology worker or leader) who takes the ethically significant action. More generally, ethics traditionally considers the human as the actor (Jonas, 1984). How, however, is the capacity to act informed or altered by technology? For a long time, people have empowered technologies to take specific actions in their place (Latour, 1992), and humans regularly delegate agency to technological artifacts and to the sociotechnical systems that support their functioning (Jasanoff, 2016). We delegate to technologies the power to make decisions for different reasons, including safety, efficiency, and in the name of values that are harder to define, like impartiality or the perceived limits of human decision-making. The history of technology teaches that, instead of a clean separation between decisions humans make and decisions technologies make, there is ever greater interdependence between them (Mindell, 2015). Our present challenge and opportunity arise from the integrated power to act that is shared and distributed among humans and technologies. Scholarship on how technological mediation changes the possibility and significance of human action (Borgmann, 1984; Ess, 2014; Hollan et al., 2000; Jonas, 1984; Moricot, 2020; Verbeek, 2005) is necessary for thinking about ethics in the context of the datafied world.
A further consequence of human-technological agency in the datafied world is a transformed nature and distribution of risk. As in older sociotechnical systems, the distribution of risk in the datafied world is uneven. Less advantaged populations and innocent victims (e.g. those living around a nuclear reactor, or passengers on an airplane) are disproportionately affected because they cannot mount the same effort to avoid or defend themselves from the risk (Elish, 2019; Jasanoff, 2016). Aggravating this problem in the datafied world, technologies create the possibility of “informational harm” (representation, identity, influence, surveillance) whose “risk” is harder to identify and quantify (Metcalf et al., 2016). Informational harms include influence on democratic processes, such as public expression and deliberation, and transformations to how people represent themselves and are represented by others. These harms make failures of sociotechnical systems in the datafied world more difficult to identify, measure, and avoid than the more traditional physical or environmental harms of non-computational sociotechnical systems. Furthermore, managing risk by restricting the use of technologies to only a particular aim is more difficult because the datafied world context can make actions both faster (real-time interplay of human and mechanical agency, as in robot-assisted surgery) and more extended in time (datasets offer continual opportunity for re-analysis in the context of new datasets and research questions). New forms of risk in the datafied world require its inhabitants to articulate new narratives about action and responsibility, reconcile these narratives with evolving understandings of the good life, and advance collective abilities (via participation through institutions) to manage these issues.
STS findings on the dynamics of human-technology action should inform our thinking about ethics in the datafied world. Consider, for example, what the integration of human and technological action means for the central ethical task of giving an account of one's actions. In machine learning, engineers purposefully program algorithms to carry out analyses that human beings are thought to be incapable of, and the power to decide on a course of action is delegated to algorithms and their material implementation in robots (e.g. “autonomous” vehicles). Computer scientists further program algorithms to create emergent learned behavior in machines. What happens to the capacity of a person to give an account of their actions when human agency is intertwined with algorithmic and mechanical agency such that portions of a human actor's account may not be understood by or accessible to the person? What is the significance of a changed temporality of action with data, where yet unknown future applications can reframe not only old data but old decisions? What happens to the capacity to take action aimed at the good life with and for others when the perception of those others is mediated by an algorithmically generated risk score? Or when the human speaking with you is merely giving voice to an automated script (Zigon, 2019)?
By intermingling human and technical action, distributing action in space and time, obscuring human oversight, and redefining risks, the datafied world context undermines the adequacy of reductive and instrumental ethical prescriptions. Ethics-as-technology approaches might serve as limited guardrails, but they cannot address the full significance of the ethical issues of data and computing, for either the individual or the collective. By contrast, Ricoeur's definition of ethics offers a process by which people can examine their identities as intertwined with technologies; formulate narratives about how they live, act, and interact; examine those narratives against their understanding of the good life; examine how their understanding of the good life is shaped by technology and challenged by these new narratives; and come full circle to advance their understanding of themselves in relation to the world around them.
Technologies and the good
As societies, we are much better at trying to avert catastrophes (through the mechanisms that have been created for evaluating and managing the risks of technologies) than at thinking about the nature of the good that any given technology embodies and organizes. To fully consider ethics in the context of the datafied world, we need a way to think about the relationship between data technologies and conceptions of the good (Ess, 2014; Higgs et al., 2000; Wang, 2015).
Ironically, it was in response to novel forms of action, made possible by the integration of human and technological action, that society evolved the reductive and instrumental view of ethics applied to technology. Confronting the reality of new forms of risk created by the sociotechnical systems of modernity (Beck, 1986), late 20th-century governments embraced techniques of risk assessment and management (Jasanoff, 1999, 2016), of which research ethics, codes of conduct, ethics checklists, and toolkits are primary tools (Hilgartner et al., 2016; Tallachini, 2015). This culture of risk, and the mechanisms for dealing with it, frame ethics as being primarily about risk management and catastrophe avoidance, rather than about shared definitions of the good life, developed and adopted through just institutions.
Similarly, for technologies of data and computing, the idea of the good at which human-technological action aims is usually not analyzed, but remains present only by proxy, through concepts like “privacy,” “fairness,” “transparency,” and “security.” When we realize that ideas of the good (such as certain visions of fairness, justice, efficiency, etc.) are at the origin of technologies, we see that any technology carries, and does work to realize, a particular conception of the good. We begin to pay attention to the ways in which technologies’ design and deployment in pursuit of the good further re-figure and actualize certain conceptions of the good, usually at the expense of others. This is part of the dynamic thinking of the good life that Ricoeur's concept of the ethical aim points to: a recognition that the idea of the “good,” which serves as the aim, emerges from the situated and contextualized process of ethics.
Shannon Vallor has contributed extensively to theorizing the relationship of technology and the good life. In her book Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, Vallor calls for “rewrit[ing] the conventional script of philosophical ethics” (Vallor, 2016: 9) in light of the need to “include an explicit conception of how to live well with technologies” (Vallor, 2016: 3). Vallor proposes virtue ethics as a solution. She calls for the cultivation of “technomoral habits and virtues,” qualities of an individual person's character that can serve as a “cane” or “strategy” for making technomoral choices under conditions of “acute technosocial opacity.” A virtuous character, Vallor argues, can support human decision-making about technologies in situations where those technologies present novel options and envelop human reasoning and foresight in a “technological fog.”
Although Ricoeur's framework of ethics is built upon a strong foundation in virtue ethics and the cultivation of a person's character over a lifetime, virtue is not the only, nor the central, element of the narrative approach to ethics. Ethics as “the aim at the good life with and for others in just institutions” shifts the emphasis away from individual choice under conditions of uncertainty toward the continuous and dynamic configuration of the self in relation to collectives and institutions. In this context, the interesting questions about technology and the good are focused less on making good “technomoral choices” and more on the arrangements of technology and the good. Within the co-productionist idiom, an analytic tool for identifying these arrangements is the framework of sociotechnical imaginaries (Jasanoff and Kim, 2009, 2015). Sociotechnical imaginaries are “collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology” (Jasanoff, 2015: 4). Visions of desirable futures contain tacit and explicit commitments to the good, such as ideas about what kinds of beings are worthy of esteem and what forms of life are worth living.
I propose four different ways that we can engage with the framework of sociotechnical imaginaries in the process of ethics that aims at the good life in the datafied world. These are ways to think about the co-production of technology and the good: (1) What ideas of the good influence the creation of the technology? (2) What idea of the good is embodied in a technology? (3) How has the use of technology altered users’ idea of the good over time? (4) Where and how can we challenge the alliances between ideas of the good and the technologies that claim to realize those goods (Benjamin, 2019; Haraway, 1985)?
Ricoeur reminds us that the concept of the “good life,” that “nebula of ideals and dreams of achievements” according to which a life is evaluated, looms over human actions. Co-production and sociotechnical imaginaries corroborate this and provide the crucial insight that it is not possible to conceive of these ideals and dreams as separate from the technologies among which we live. In contrast to the ethics-as-technology paradigm, which aims to deliver technologies that bring about a narrowly technical conception of the good, the co-productionist framework for ethics draws attention to the entanglement of concepts of the good and technology as a starting point for inquiry into ethics. It further recognizes that the “good” is not the goal of ethics in and of itself, but rather a means to the process of individual, interpersonal, and societal formation.
Conclusion: practicing co-productionist ethics
This article moves away from the idea that ethics is a technology that is the responsibility of technologists alone to wield such that, when coupled with data and computing technologies, it will produce social good. Rather than centering “good products,” I propose to center the self in the context of human relationships as the space within which ethics happens. The co-productionist framework for ethics is rooted in a temporal, embodied experience of being human among other people, within social institutions, and in a context of dynamic possibilities of action and the good, as these are expressed in the datafied world. When a person puts co-productionist ethics into practice they draw attention to four dimensions of ethical life in the datafied world: identities, institutions, representations, and discourses (Jasanoff, 2004).
First, where the ethics-as-technology approach addresses the person in the singular dimension of their relationship to the case as the technologist, designer, or entrepreneur, the co-productionist approach recognizes people's varied and dynamic relationships to the case and the fact that identity is constituted by one's actions. It invites each student of the ethical aim in the datafied world to inquire about their various positionalities (Crenshaw, 1991) and relationships to the case and to think through these together. For example, a person at the Palantir protest may be a computer science student and a child of an immigrant. They may have friends who are undocumented or who already work for Palantir. They may be a citizen of the US or a resident of a city where an ICE raid occurred. They may be an administrator or alumnus of the University or a teacher of ethics. Each person's positionalities inform what agency is available to them and to whom, and for what, they have responsibility. This multidimensionality should not be seen as a source of moral confusion or incapacity to act, and we should not seek to reduce this complexity to a single dimension. Instead, we need to actively investigate the ways in which the agencies and responsibilities associated with these various positionalities are reinforced, brought into contradiction, and brought into being. Having considered their own varied relational identities, possibilities for agency, and obligations, each person is then invited by the co-productionist ethics approach to think about others’ relationships to the case in a similarly multidimensional and dynamic manner.
Second, each person's multidimensional and constitutive identity in relation to the case is an invitation to consider the collective and institutional means of pursuing the ethical aim. While in the ethics-as-technology approach agency is expressed primarily in the actions and choices of an individual (the designer, technology worker, CEO, user), in the co-productionist ethics approach, agency is necessarily distributed, collective, and institutionalized. The co-productionist approach invites people to reframe the perceived ethical problems, choices, and solutions into collective terms. In protesting Palantir's ability to advertise to students, the protesters recognized and mobilized the power of collective action in the name of a university committed to public service and social justice, performing that commitment and building the institution in the process.
Third, co-productionist data ethics draws attention to the way in which the “good life,” as the goal of the ethical aim, is itself prefigured by ways of knowing and practicing with data technologies. Instead of the ahistorical, de-contextualized, and allegedly universal recommendations of ethics-as-technology, the co-productionist approach attends to how the representations of the “good life” present in any technology are products of culturally-specific and historically-contingent conditions and implicit social structures. For example, Palantir participated centrally in constituting post-9/11 regimes of security via new data techniques of risk prediction and management (Amoore, 2013) and, in so doing, contributed to creating the “undocumented” subject as an entity that the Federal government could act upon via its Palantir technology-armed ICE officers. The co-productionist perspective reveals how the techniques of data and computing are always already oriented towards a conception of the good of security in which the undocumented person is perceived as a threat. Instead of seeing technology as a morally neutral tool that can be directed towards “good” or “bad” ends, the co-productionist approach reveals the extent to which representations of the good are constitutive of technologies, as well as of the identities and values that they make cohere.
This insight brings to light a real-world tension in the analysis of the student protests against Palantir. On the one hand, it suggests that even if the protests succeed and Palantir is barred from recruiting students on campus, students of data science and computing still absorb, along with these sciences and techniques, the values of security that they deem so clearly problematic in the Palantir case. On the other hand, it shows that the protestors’ work to deny Palantir a recruitment platform is a powerful way of contesting this alliance of technology and the good around security and surveillance regimes. Again, this coming to light of the complexity of the ethical aim should not be seen as a problem or as justification for inaction or moral relativism. Instead, it can further prompt students of co-productionist ethics to consider the plurality of actions available to them, from advocating to change university practices, to choosing which courses they take and how aware they are of the history of the techniques they learn, to the means and forms in which they account for their actions to others.
Fourth, the insights about identities, institutions, and representations from co-productionist data ethics discussed above return full circle to the narrative identity of the protester or of students in the classroom. The students give an account to the world of why they have gathered to protest the university's relationship with Palantir. In so doing, they affirm in the narrative their opinions and demands, and follow through with their ethical aim, which results in the re-making of a discourse. “UC Berkeley should take a page out of its own curriculum” identifies the students as caring about the ethical contexts of data technologies and affirms their will to hold the educational institution accountable to public service and social justice. Through the statement, the students affirm, with conviction, who they are and what they stand for.
To center the narrative account of the case requires considering the medium in which the narrative was made. As a tweet, carried by a key communication platform of the datafied world, the narrative can reach people around the world and be linked with other protests to amplify resistance. Read in a different way, the tweet can be interpreted to collapse the narrative of the protest into an ethics-as-technology statement, that is, to define the ethical action as that which prescribes removing Palantir from the campus program as the only ethical response. The way in which narratives are deployed in this case and how they produce and shape the ethical aim supports the need to make sense of the ethical aim in the context of the datafied world in which it invariably unfolds.
Taken together, this reframing of the case with the co-productionist ethics framework suggests (but does not prescribe) certain kinds of action for aiming towards the good life in the datafied world. These actions solicit and draw upon the insight that emerges from people's varied identities, relationships with, and perspectives on the situation of interest; they privilege acting with and through collectives and institutions; they question the representational alliances between the good and the technologies at play; and they deliberately deploy discourses to form one's own and others’ identities and convictions in relation to the situation. Unlike ethics-as-technology approaches, which seek to solve the ethical aspects of data and computing technologies by bracketing off discussion of what “the good” means and narrowing the scope of action taken to pursue it, the co-productionist ethics approach is an instrument of humility (Jasanoff, 2003). It is based upon the recognition of a partial, situated self-in-formation and of the limited capacity of anyone to know and pursue “the good,” therefore necessitating collective and processual sense-making. Such a humble instrument, however, may be our strongest ally in the task of aiming at a good life in the datafied world.
Acknowledgements
This article benefited from a fellowship at the Paris Institute for Advanced Study (France) as part of the Sorbonne University – Paris IAS Chair on “Major Changes.” It is dedicated to the late Anne E. Monius, Professor of South Asian Religions at the Harvard Divinity School, who first introduced me to narrative ethics and with whom I had the great fortune to discuss early ideas that became this article.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
