Abstract
Some recent uses of artificial intelligence (for example, in facial recognition, resume screening, and sorting photographs by subject matter) have revealed troubling disparities in performance or impact based on the demographic traits (like race and gender) of subject populations. These disparities raise pressing questions about how using artificial intelligence can work to promote justice or entrench injustice. Political theorists and philosophers have developed nuanced vocabularies and theoretical frameworks for understanding and adjudicating disputes about what justice requires and what constitutes injustice. The interdisciplinary community committed to understanding and conscientiously using big data could benefit from this work. Thus, in the spirit of encouraging cross-disciplinary dialogue and collaboration, this piece examines contemporary scholarship in political theory and philosophy to illustrate some of the vocabularies and frameworks political theorists and philosophers have developed for thinking about justice and injustice. It then draws on these frameworks to illuminate how the use of artificial intelligence can implicate questions of justice, with a focus on institutional discrimination, structural injustice, and epistemic injustice. Ultimately, the piece argues that the use of artificial intelligence—far from representing a decision to take power out of human hands—represents a novel way of harnessing human power, making questions of justice central to its conscientious undertaking.
The idea that with technological progress comes emancipation stretches back to Aristotle, who wrote that autonomous machines would render class hierarchy and slavery obsolete: “…if every instrument could accomplish its own work, obeying or anticipating the will of others…if…the shuttle would weave and the plectrum touch the lyre without a hand to guide them, chief workmen would not want servants, nor masters slaves” (2001: 1131). In an age of burgeoning artificial intelligence (AI), this sentiment offers great solace.
But the solace is illusory. AI systems are yet more tools with which people exercise power over each other. As ever, we can exercise power to promote emancipation or subjugation, justly or unjustly. AI does nothing to change this. In fact, if already-privileged people are disproportionately represented among the creators of AI systems, and if their experiences are disproportionately represented in the data used to train them, one could even argue that AI will reinforce existing unjust hierarchies. (Benjamin (2019) illustrates several ways this could happen.) On this understanding, AI looks like one of the “master's tools” that Lorde (1984: 112, 123) warns us “will never dismantle the master's house,” not a tool of genuine liberation.
Indeed, recent history yields several examples of AI's potential to reflect or reinforce human prejudice. Consider Amazon's recently developed (then abandoned) AI-powered resume screener (Dastin, 2018). It effectively automated sexism—systematically favoring men's resumes, apparently because it was trained on data collected from resumes previously submitted to Amazon, most of which were from men (Dastin, 2018). Similarly, recall Google's AI-powered photo sorter, which mislabeled Black people as “gorillas” (BBC, 2015), reflecting and giving undue credence to a racist stereotype. Or consider Buolamwini and Gebru’s (2018: 8) analysis of three AI-based facial recognition programs, which revealed that all three perform worst on darker-skinned females. Given this, deploying these facial recognition programs (e.g. in law enforcement) could unfairly disadvantage women of color, who could be falsely identified as criminals (Buolamwini and Gebru, 2018: 1–3).
As these examples show, and as recent calls for fairness, accountability, and transparency in AI recognize, the use of AI raises pressing issues of justice. To thoughtfully navigate these issues, I propose we turn to political theory and philosophy, which have developed nuanced theoretical frameworks for understanding and adjudicating questions of justice. Here, I examine contemporary political theory and philosophy to illustrate some of these frameworks, drawing on them to illuminate how the use of AI can implicate questions of justice. Ultimately, I argue that using AI—far from removing power from human hands—is a way of harnessing human power, making questions of justice central to its conscientious undertaking.
What is justice?
Before examining what specific theories of justice can teach us about the ethical pitfalls of specific AI systems, we must clarify how specialists (i.e. political theorists and philosophers) understand the concept of “justice” more generally—both the ideas about justice they generally agree on and some of the main disagreements dividing them. According to many political theorists and philosophers, “justice” provides a set of standards by which to fairly adjudicate certain kinds of claims people make on one another and on their shared institutions. On Rawls’ (1999: 3–6) especially influential account, justice is “the first virtue of social institutions”: its principles specify how the major institutions of a society should distribute the benefits and burdens of social cooperation.
Not all political theorists define justice in Rawlsian terms. Some (e.g. Cohen, 1997) challenge Rawls’ assumption that justice applies primarily to institutions, arguing that it applies equally to individuals’ everyday choices. Others (e.g. Okin, 1989) emphasize that justice doesn't apply only to formal institutions, like governments, but also to informal institutions, like the family. Others (e.g. Beitz, 1999; Caney, 2005; Ypi, 2012) argue that principles of justice like those Rawls envisions governing a single society actually apply worldwide.
Theorists also disagree about how real-world injustices and the social divisions (e.g. of race and gender) along which they often manifest should inform our reasoning about justice. Rawls (1999: 6–19) argues that, to determine what justice requires, we should abstract away from real-world conditions, imagining what principles of justice we would endorse if we didn't know anything about our identities (e.g. our race or gender) and if we assumed the principles we chose would be complied with. Conversely, Mills (2019) argues that, in a world riven by racial injustice, ignoring race at the level of theory distorts our understanding of what justice requires and robs us of the historical and political knowledge necessary to dismantle racist institutions. Similarly, Crenshaw (1991) argues that people may experience oppression differently based on their overlapping identities (e.g. Black women may experience oppression not experienced by other women or other Black people). Thus, Crenshaw (1991) argues that only attentiveness to how differently identified people experience oppression can enable us to understand “the social world” (1991: 1245) and ensure emancipatory political movements represent everyone they seek to liberate (rather than, e.g. obscuring Black women's distinctive interests by subsuming them under the category “women's interests”).
Moreover, some present injustice as constituted by and located in real-world institutions and power structures and argue that we must dismantle them to achieve justice. Getachew (2019) reconstructs the views of several anticolonial thinkers, like Nkrumah, Williams, Manley, and Nyerere. Though their views weren't identical, Getachew (2019) shows that they all saw the domination to which imperial powers subjected colonies and former colonies as created by certain political-economic arrangements (e.g. the concentration of economic power in the global North and its translation into geopolitical power). Correspondingly, they argued creating a world free of domination required dismantling those arrangements and replacing them with new ones embodying anticolonial commitments (Getachew, 2019). Similarly, Fanon (2004) argues that a central element of colonialism is the way it constructs the (mutually defining) identities “colonizer” and “colonized” and imbues the people it puts in those categories with corresponding mindsets. Hence, Fanon (2004) attempts to understand these dynamics to understand how to overcome them—which he argues is necessary for true decolonization.
If injustice inheres in real-world institutions, practices, and mindsets and achieving justice requires new arrangements explicitly designed to subvert them, we may learn little about justice by abstracting away from the cleavages (e.g. of race) that structure the status quo. Instead, one could argue, we must study these cleavages to learn how they operate and how to undo the injustice people use them to create.
Some also argue that achieving justice requires attention to the identities that often separate people because liberating politics must bring people with different, overlapping identities into its fold. Thus, Lorde (1984: 110–23) argues that women must not ignore or suppress their differences, but use them as a well of power to escape the patriarchal structures binding them. And Crenshaw argues that “the social power in delineating difference…can…be the source of social empowerment” (1991: 1242).
Nonetheless, we could interpret many who disagree about the precise scope, site, or proper method of investigating justice as endorsing the more general idea above—that “justice” provides a set of standards by which to fairly adjudicate certain kinds of claims.
Political theorists also widely endorse three other ideas about justice. First, justice is distinct from well-being. That a policy would make (some) people richer or happier doesn't mean it would be just; that it would make (some) people poorer or less happy doesn't mean it would be unjust. If segregationist shopkeepers in the United States were displeased when they had to integrate their businesses, their displeasure did nothing to diminish the fact that justice required integration.
That said, some think justice and well-being are connected because people have justice-based claims to a certain level of well-being. Shue (1996) argues people have rights to subsistence goods. Miller (2007: 178–85, 207–8) argues people have rights to the goods necessary for living a “minimally decent human life.” Others (e.g. Anderson, 1999; Nussbaum, 2003; Sen, 1980) argue justice requires people have (or have the opportunity to develop) certain capabilities, such as that to participate in democratic politics on equal terms with other citizens (as in Anderson, 1999).
Second, justice is not the same as legality. But it's often thought that requirements of justice should be legal requirements, too. Conversely, some argue the limits on what we can feasibly guarantee via legal institutions should limit what we identify as requirements of justice (see, e.g. O’Neill, 2005).
Third, there is significant disagreement about what justice requires. But this does not render the concept empty or the disagreement idle: our actions and institutions inevitably reflect some ideas about justice over others, so we cannot avoid taking a stand on which ideas to privilege.
In sum, even theorists who disagree about justice often agree that standards of justice serve important social functions. They allow us to fairly adjudicate certain kinds of claims. They provide moral, as distinct from legal, standards dictating how people should be treated—which can't be reduced to commands to make people happier or richer. People disagree about what justice requires. But our actions and institutions inevitably reflect some ideas about justice over others. We are continually faced with the question of which ideas to privilege, and continually challenged to remake our shared practices and institutions in the service of justice.
Selected questions of justice
With this basic understanding of justice at hand, I’ll now examine what some specific theories of justice can teach us about the AI systems described earlier: a resume screener, a photo sorter, and facial recognition software. These are not the only ideas about justice relevant for the ethics of using AI, but they are especially important for understanding the ethical pitfalls of these particular technologies. I examine them here to give some concrete examples of how political theorists’ and philosophers’ insights about justice can illuminate the ethical issues raised by the use and design of AI systems.
Discrimination and structural injustice
Some injustices are created by the cumulative force of countless actions, none of which need be ill-intentioned. Actors may participate in a shared social system—like a legal or economic system. But they need not intentionally collaborate to advance a shared goal. Each may act independently, on their own motives. Nonetheless, their actions, taken together, can produce injustice.
Shelby, for example, describes institutional racism as arising when racial bias—explicit or implicit—informs an institution's ordinary operations. If such bias is implicit and unintentional, institutional racism could arise even without people consciously coordinating to discriminate. Discrimination based on criteria besides race could be similarly carried out, rendering it similarly “institutional.”
AI can certainly facilitate institutional discrimination. Consider the resume screener introduced above. Far from eliminating human prejudices, it automated them—systematically favoring men's over women's resumes, apparently because it was trained on data collected from resumes previously submitted to Amazon, most of which were from men (Dastin, 2018). The screener favored resumes using words more often found in men's resumes and downgraded resumes that contained the word “women's” (Dastin, 2018).
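To make this mechanism concrete, consider a minimal sketch of how such bias can arise. The following code is a hypothetical illustration, not a reconstruction of Amazon's system (whose details were never made public): the resume texts, hiring labels, and choice of model are all invented for demonstration. A classifier trained on historically biased outcomes learns to penalize the token “women” simply because it co-occurs with past rejections.

```python
# Hypothetical sketch: a text classifier absorbing historical hiring bias.
# All data below is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "historical" data: past hiring decisions made under biased norms.
# Resumes mentioning women's organizations were disproportionately rejected.
resumes = [
    "captain of chess club, software engineering intern",
    "executed projects in java and python, hackathon winner",
    "captain of women's chess club, software engineering intern",
    "women's coding society president, hackathon winner",
    "built distributed systems, led engineering team",
    "women's engineering society member, built distributed systems",
]
hired = [1, 1, 0, 0, 1, 0]  # biased historical outcomes, not true merit

vectorizer = CountVectorizer()  # note: "women's" tokenizes to "women"
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model "learns" that mentioning "women" predicts rejection,
# reproducing the bias embedded in its training labels.
weight = model.coef_[0][vectorizer.vocabulary_["women"]]
print(f"learned weight for 'women': {weight:.2f}")  # negative
```

Notice that nothing in this pipeline refers to gender explicitly; the bias enters entirely through the historical labels, which is what makes it so easy to automate inadvertently.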
Assuming the training data was disproportionately male partly because of past unfairness—patriarchal norms discouraging women from professional work, popular sentiment that women weren't qualified for technical work, unequal educational opportunities—the screener's reliance on this data is especially troubling. If used on real applications, it would ensure that these historically common forms of discrimination persisted into the present, projecting past injustice into future hiring decisions.
Arguably, such bias would also contribute to “structural injustice” (Young, 2006). According to Young (2006), different people occupy different positions within “social structures,” each with its concomitant expectations, advantages, and disadvantages. Social structures become sites of injustice when they systematically empower people in some positions by disempowering others (Young, 2006)—as men in a labor market using Amazon's resume screener would be empowered because women were disempowered. Moreover, Young (2006) argues that, by participating in a social structure, we help perpetuate it, thereby contributing to the creation of any injustice it generates; therefore, we are responsible for undertaking collective action to remedy this injustice—even if we are not blameworthy for it (e.g. because we didn't design the structure itself or intend the harms it produces).
Young can also help us understand the gravity of Google's photo sorter mislabeling Black people as “gorillas” (BBC, 2015). The moral problem with this is not (only) that it provoked offense, but that it reinforced a mischaracterization of people of color (as less-than-human) that's historically been invoked to defend injustices like colonialism and slavery. Present-day social structures arguably bear the marks of these injustices. Anghie (2006) and Mutua (2000) argue colonialism's central ideas and objectives shaped the development of international law and still structure global politics. If this is right, when we design and use technology that reflects and reinforces those ideas—which underpin the social structures comprising international legal and political institutions—we perpetuate these structures’ constitutive injustice.
Epistemic injustice
Fricker (2007: 1) defines epistemic injustice as injustice “done to someone…in their capacity as a knower.” Someone suffers “testimonial injustice” when others discount their credibility because of some prejudice; someone suffers “hermeneutical injustice” when “a gap in collective interpretive resources puts someone at an unfair disadvantage when it comes to making sense of their social experiences” (as, e.g. women struggled to understand their experiences of what we now call “sexual harassment” before that concept gained currency) (Fricker, 2007: 1).
The use of AI clearly implicates epistemic (in)justice. Amazon's resume screener was trained on data representing certain people (men) to the exclusion of others (women). Consequently, the AI “learned” that “having a good resume” meant “having a resume like those of previously successful men.” In addition to perpetuating structural injustice, a society that adopted this system might begin to see this equation of being “qualified” with being “like previously successful men” as an objective truth. This could encourage testimonial injustice by cultivating the impression that women are not “qualified” by the standards of the tech industry, thereby undermining their credibility in that field. Moreover, if dominant ideas about what it means to be “good” or “qualified” are constructed based on men's data, to the exclusion of women's, this is arguably an instance of hermeneutical marginalization.
Similarly, Buolamwini and Gebru (2018) evaluate three commercial programs that use machine-learning-based facial recognition software to classify images as “male” or “female.” These programs’ accuracy varies substantially based on the gender and skin tone of the subjects (Buolamwini and Gebru, 2018). All three programs perform better on males than females and on lighter-skinned rather than darker-skinned people, and all perform worst on darker-skinned females (Buolamwini and Gebru, 2018: 8).
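The structure of such an audit is straightforward: rather than reporting a single overall accuracy, one computes accuracy separately within each intersectional subgroup. The sketch below uses invented records purely for illustration; it mirrors the form of Buolamwini and Gebru's analysis, not their code or data (their evaluation used the Pilot Parliaments Benchmark they constructed).

```python
# Sketch of an intersectional accuracy audit in the spirit of
# Buolamwini and Gebru (2018). The records below are invented.
from collections import defaultdict

# (true_gender, skin_tone, predicted_gender) for each image
records = [
    ("male", "lighter", "male"), ("male", "lighter", "male"),
    ("female", "lighter", "female"), ("female", "lighter", "male"),
    ("male", "darker", "male"), ("male", "darker", "female"),
    ("female", "darker", "male"), ("female", "darker", "male"),
]

correct = defaultdict(int)
total = defaultdict(int)
for true_gender, skin_tone, predicted in records:
    group = (true_gender, skin_tone)
    total[group] += 1
    correct[group] += int(predicted == true_gender)

# A single overall accuracy can mask large subgroup disparities.
for group in sorted(total):
    print(group, f"accuracy = {correct[group] / total[group]:.0%}")
```

In this toy data the overall accuracy is 50%, which conceals a spread from 100% for lighter-skinned males to 0% for darker-skinned females, exactly the kind of disparity an aggregate figure hides.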
A society relying heavily on these programs—or perhaps a company using them to regulate employees’ movements around its corporate campus—might come to identify looking “like a man” with looking like a lighter-skinned man, and looking “like a woman” with looking like a lighter-skinned woman, since these are the people the programs classify most reliably. Shared ideas about gendered appearance would then be constructed in ways that marginalize darker-skinned people, subjecting them to a form of hermeneutical injustice.
Moreover, given its relative inaccuracy when identifying women, people of color, and young people, law enforcement's use of facial recognition software could engender discrimination (Buolamwini and Gebru, 2018: 1–3). Buolamwini and Gebru (2018: 1) speculate: “someone could be wrongfully accused of a crime based on erroneous but confident misidentification….” If women, people of color, and young people were more vulnerable than others to such false accusations, this would arguably be an example of structural injustice. Further, in a society where these groups were already denied credibility or hermeneutically marginalized, the general public might be ill-suited to generate the conceptual resources to understand this form of discrimination, subjecting them to further hermeneutical injustice.
AI and human power
Delegating tasks to AI is sometimes described as taking power out of human hands and entrusting decisions to neutral, objective machines. But this description misleads. Humans decide which tasks to delegate, design the systems that perform them, select the data on which those systems are trained, and choose whether and how to act on their outputs. Delegation to AI is thus an exercise of human power, not an abdication of it.
Similarly, we’d be remiss not to see the use of AI for law enforcement, surveillance, or autonomous weapons as a way for some people to exert power over others. In these cases, human power operates through computer programs, but they are programs written by humans, trained on human-created data, and put to work by some humans to monitor, regulate, control, and exterminate others.
I won't claim that AI can't do good. (There have been efforts, like the Algorithmic Justice League's opposition to abusive uses of facial recognition programs, to promote AI's use for good ends while limiting its potential to create injustice (Benjamin, 2019: 183–4).) My point is that AI is a tool with which humans exercise power, rather than a replacement for human agency. That is precisely why questions of justice must remain central to its conscientious use.
Acknowledgments
The author would like to thank Chloé Bakalar, Eric Lawrence, and Gregory J. Stein for helpful comments on this piece; and Princeton University's Center for Information Technology Policy, University Center for Human Values, and their collaborative venture, Dialogues on AI and Ethics, for inspiring it in the first place.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship and/or publication of this article.
