Abstract
A reparative approach to algorithmic justice provides a compelling alternative to existing fairness-based frameworks, which are often inadequate for challenging the technological perpetuation of unjust social hierarchies. The definition of “reparations,” however, is philosophically contested. I discuss two interrelated but distinct notions of reparations: reparations as accountability and redress for past injustice, and reparations as a constructive worldmaking project focused on present and future justice. Each of these perspectives offers different recommendations and provocations for how to implement algorithmic reparations. I apply these frameworks to a case study of housing injustice in the US and offer three interpretations of “algorithmic reparations” in context: first, we can litigate instances of algorithmic discrimination in housing. Second, we can use computational methods to compute damages and demand redress for past structural housing injustice. Finally, we can repurpose algorithmic methods to imagine more radical resistance efforts that connect incremental reform to large-scale structural change for the future.
This article is part of the special theme on Algorithmic Reparation. To see a full list of all articles in this special theme, please visit: https://journals.sagepub.com/page/bds/collections/Algorithmic%20Reparation
Introduction
Predictive algorithms that inform socially consequential decisions often reproduce unjust social inequalities. It is generally mathematically impossible for any predictive algorithm to be both fair and accurate if there presently exist outcome inequities (Corbett-Davies and Goel, 2018; Menon and Williamson, 2018). Additionally, the decision about which predictive algorithms to create is itself affected by socio-political context. For instance, the decision to employ recidivism risk assessment algorithms in sentencing assumes that it is desirable to base sentencing decisions upon predictions of crime risk. Any algorithm used for this purpose, even a fair one, fails to interrogate fundamental assumptions that uphold the US carceral system (Green, 2020; Hoffman, 2019).
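This impossibility can be made concrete with a standard identity relating a classifier's false positive rate to the outcome base rate, positive predictive value (calibration), and false negative rate. The sketch below is purely illustrative, with hypothetical numbers: if two groups have different base rates but the classifier is equally calibrated and has equal miss rates for both, their false positive rates cannot be equal.

```python
def fpr(base_rate, ppv, fnr):
    """False positive rate implied by a group's base rate,
    positive predictive value (PPV), and false negative rate (FNR),
    via the identity FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)."""
    p = base_rate
    return (p / (1 - p)) * ((1 - ppv) / ppv) * (1 - fnr)

# Two groups with different outcome base rates (hypothetical values),
# but identical calibration (PPV) and miss rate (FNR):
fpr_a = fpr(base_rate=0.3, ppv=0.7, fnr=0.2)
fpr_b = fpr(base_rate=0.5, ppv=0.7, fnr=0.2)

# The implied false positive rates necessarily differ between groups.
print(round(fpr_a, 3), round(fpr_b, 3))
```

Whenever base rates differ, equalizing calibration and one error rate forces the other error rate apart, so some notion of fairness must be sacrificed.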
Davis et al. (2021) propose Intersectional algorithmic reparations as an alternative framework for algorithmic justice. An Intersectional approach recognizes how the social construction of identity categories such as race and gender creates and upholds interlocking systems of domination (Combahee River Collective, 1995). The reparative approach names, challenges, and disrupts harmful social structures and institutions. However, the philosophical definition of “reparations” is contested. In one view, “algorithmic reparations” means redress for harm perpetuated by algorithmic decision-making systems. Another interpretation highlights the use of algorithms to compute damages done by unjust past policies and practices, regardless of whether those harms were themselves algorithmic in nature. A third interpretation focuses less on past wrongdoing, arguing that “algorithmic reparations” ought to use computation to enact structural social justice for the future. These three interpretations are not mutually exclusive, but each recommends different specific reparative actions. In this discussion, I clarify the moral grounds for each perspective on reparations and apply them to the context of housing justice.
Two reparative frameworks
Reparations as redress and accountability
The standard account of reparations comprises a two-part framework of redress and accountability. First, reparations repair past harm caused by wrongdoing. Reparations may entail restitution, the return of some lost item, right, or privilege, and it may involve compensation, which consists of alternative, substitutionary payment for damage. But reparation is distinguished from restitution and compensation, as neither necessarily involves past wrongdoing (Boxill and Corlett, 2022; Posner and Vermeule, 2003). Second, reparations frequently refer to large-scale instances where a corporation, a government, or a country must be held collectively accountable for mass injustice (Posner and Vermeule, 2003). This sometimes requires a relaxation of typical legal standards of sovereign immunity (28 US Code §2674).
Taken together, reparative acts must explicitly identify an action that wrongfully harmed a group of people and engage in compensation, apology, and/or restitution to repair that harm. This plays out in many existing massive-scale reparative projects; for instance, federal reparations programs for human rights violations in Argentina, Chile, and El Salvador, reparations for Japanese internment in the US, and reparations by the German government for Holocaust war crimes all provide recourse to victims and families of a specific historical atrocity (De Greiff, 2008). In the case of US reparations to Black Americans, Darity details an economic history of injustice, from exploitation through slavery to Jim Crow era discrimination to present-day inequities in the wealth gap (Darity and Mullen, 2020). Darity proposes sending an itemized reparations bill to the US government and to institutions that have benefited and continue to benefit from racial injustice, and he also suggests non-economic compensation in the form of education and apology. The strength of this account is its specificity and its compliance with common intuitions about repair and accountability.
Reparations as constructive worldmaking
Táíwò (2022)'s alternative reparative framework similarly acknowledges how historical injustices have generated present inequality. However, it emphasizes that mass injustice, such as harm done by global racial empire, often cannot fully be attributed to specific institutions or entities. Well-intentioned individuals operating within racist social structures may perpetuate systemic harm.
Thus, the constructive mode of reparations pursues distributive justice without engaging in the legal calculus of identifying perpetrator and victim. Additionally, reparative legislation ought not to fall only under the jurisdiction of individual nation-states; it should also include a broader agenda of distributive justice throughout the international superstructure: “worldmaking” towards an equitable and just future. Inspired by Elizabeth Anderson, Táíwò argues that the object of distributive justice ought to be the development of capabilities that allow individuals to stand in just relation to each other. Distributive justice over capabilities requires not only just allocations of material goods, but also just constructions of the built environment, social arrangements, and “patterns of care, concern, and attention that we learn in our mundane interactions” (Táíwò, 2022).
While the constructive view shares similarities with the redress view, the difference lies in their theories of accountability: moral responsibility in the redress view comes from our relationship to past wrongdoing, whereas responsibility in the constructive view comes simply from holding present advantages that have arisen contingently from that history (Táíwò, 2022).
This framework aims to address several limitations of the redress model. First, we cannot easily calculate counterfactual damages for slavery by simply estimating the income earned by enslaved persons and their descendants had slavery not occurred. Social constructions like “economic value” are historically contingent, and in the counterfactual world, the descendants of enslaved individuals would simply not exist as we know them. Second, it is difficult to justify intergenerational reparations if descendants of enslavers or later immigrants are not technically responsible for enslavement. Rather, in the constructive view, a responsibility to pay into a reparations tax fund would arise out of the fact that we reap the material benefits of living in a socio-economic structure built upon the existence of slavery.
Under this view, it can be semantically difficult to distinguish reparation from welfare. Concerns remain that it may insufficiently hold specific actors and institutions accountable for harm. However, the constructive view articulates a responsibility for reparations in situations where the specific perpetrators of injustice cannot be causally identified. Crucially, Young argues in a similar account of forward-looking reparations that in such situations, we are still not excused from understanding how past injustices connect to present inequities. Rather, a program of repair must identify how existing systems are broken; this requires understanding how history has created dysfunction in present-day structures (Young, 2010).
Algorithmic reparations in practice
Each of these frameworks recommends different ways to operationalize algorithmic reparations in the context of the US housing system. Algorithmic decision-making systems enter into a space fraught with a history of individual and systemic bias, from implicit bias by real estate agents (Oh and Yinger, 2015) to structural and market discrimination that has generated a racial homeownership and wealth gap (Taylor, 2019).
Redress and accountability for algorithmic harm
In recent years, litigants have accused algorithmic decision-making systems of wrongful harm under the provisions of the Fair Housing Act (FHA), which prohibits housing discrimination based on race, sex, religion, disability status, or national origin.
In 2019, the National Fair Housing Alliance filed a lawsuit against Facebook, now Meta, for discrimination in online housing advertisements. 1 The complaint discusses how Meta developed algorithms to help real estate companies target housing advertisements “by relying on race, sex, and other FHA-protected attributes.” The lawsuit was settled by charging Meta the maximum civil penalty of $115,054. In compliance with the settlement, Meta also developed a Variance Reduction System (VRS), which uses methods in reinforcement learning and differential privacy to maximize group fairness in ad exposure (Bogen et al., 2023).
More recently, the case Louis et al. v. SafeRent Solutions was filed in January 2023 under the FHA. 2 SafeRent Solutions developed an algorithmic tool designed to help landlords assess the credit risk of potential tenants. Louis et al. allege that such scores are discriminatory because they include features like non-tenancy debt and exclude voucher status. Because Black and Hispanic tenants are more likely to have non-tenancy debt and are more likely to pay rent with vouchers, the plaintiffs argue, SafeRent's scoring system produces racialized disparate impact.
These anti-discrimination cases are algorithmic reparations in that they seek accountability and redress for algorithmic harm, though many cases are fairly limited in scope and are constrained by existing stipulations of civil law, such as the ceiling on civil penalties (42 US Code § 3614). Furthermore, proposed solutions, such as Meta's VRS, may be inadequate for enacting a structural program of housing justice (Green and Viljoen, 2020; Selbst et al., 2019).
However, they are also useful: first, they raise public consciousness about how technological decision-making systems perpetuate housing injustice. Second, they highlight how data is a social and political artifact, showing how bias can be encoded in variables such as credit score and debt (D’Ignazio and Klein, 2020). Third, they can aid victims of housing discrimination by setting a precedent for demanding compensation and holding institutions accountable for harm.
Algorithmic methods for computing damages for historic harm
Algorithmic reparations can also refer to computational methods that aid in redress for diffuse harm caused by historical policies and conditions. For instance, in the first half of the 20th century, the Home Owners’ Loan Corporation (HOLC) created “redlining” maps, intended to be a guideline for banks and lenders, that color-coded the “residential security” of different neighborhoods. The HOLC considered the presence of racial minorities to be a risk factor, creating a self-fulfilling prophecy that perpetuated segregation, risk, and property undervaluation in predominantly Black neighborhoods (Rothstein, 2017; So et al., 2022).
In one case study of redlining reparations, the city of Evanston, Illinois promised to allocate at least $10 million from taxes levied on cannabis purchases to a reparations fund (Loc, 2021). Black residents or descendants of residents who lived in Evanston between 1919 and 1969 can claim $25,000 towards home repairs, home purchases, or mortgage assistance. The city states that this policy was meant to repair “the harm caused to Black/African American Evanston residents due to discriminatory housing policies and practices” (Loc, 2021). This program complies with intuitions about reparations as accountability and redress, explicitly acknowledging harm and disbursing funds to specific victims.
Such a policy is admirable for being the first of its kind in the US, but it also raises several concerns. First, there is no obvious causal connection between marijuana taxes and housing injustice. Where should the funding come from if this program were enacted on a larger scale? As of early 2023, the city has only raised enough money to pay out reparations to 16 families (Felton, 2023). Additionally, the limitations on the use of reparative funds have been controversial; residents have noted that reparations payments for mortgage purchases simply redirect funds to present-day banks and real estate agents (Misra, 2021). Finally, while the program may perform public accountability and apology, critics have claimed that the financial harm from redlining far exceeds $25,000 and that the effects of redlining are not uniform. Should certain individuals who have been more adversely impacted by redlining be entitled to a greater payout?
These questions echo earlier philosophical debates about repair, accountability, and desert. Regardless, reparative algorithms can implement redress by calculating counterfactual damages or predicting which reparative allocations will promote maximal welfare. Darity et al. (2022) suggest computing how much wealth Black Americans lost by living in racially segregated, redlined neighborhoods. However, Darity argues later that such counterfactuals may be unwieldy, advocating instead that a reparations bill be based on the current wealth gap, which may be the “best single economic indicator of the cumulative, intergenerational impact of White racism over time.”
Algorithmic methods toward worldmaking
The constructive/worldmaking view argues that the reparative project should focus on bringing about a social structure that ensures that all will have safe, well-maintained living conditions.
One example of constructive algorithmic reparations uses methods in machine learning to calculate the amount of funding necessary to reverse the algorithmic denial decision for every Black mortgage loan applicant (So et al., 2022). This tool identifies the scale of funding needed to guarantee housing justice for all Black applicants and rhetorically challenges an exclusionary algorithm. It also raises important questions in implementation: should we prioritize Black applicants descended from those who were directly impacted by unjust housing policies, as in Evanston? Is it fair, or just, to give different amounts of reparative aid to applicants?
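The core computation behind such a tool can be sketched in a few lines: given model decisions on a set of applications, total the loan amounts algorithmically denied to Black applicants. This is an illustrative simplification, not So et al.'s actual implementation, and the records and field names are hypothetical.

```python
# Hypothetical mortgage application records with model denial decisions.
applications = [
    {"race": "Black", "denied": True,  "loan_amount": 250_000},
    {"race": "Black", "denied": False, "loan_amount": 180_000},
    {"race": "White", "denied": True,  "loan_amount": 300_000},
    {"race": "Black", "denied": True,  "loan_amount": 120_000},
]

# Funding required to reverse every algorithmic denial of a Black applicant.
funding_needed = sum(
    a["loan_amount"] for a in applications
    if a["race"] == "Black" and a["denied"]
)
print(funding_needed)
```

The rhetorical force of the real tool comes from running this aggregation at national scale, turning an exclusionary model's own outputs into a price tag for inclusion.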
Young argues that a primary way to engage in a forward-looking reparations project is to engage in collective action (Young, 2010). The Anti-Eviction Mapping Project's “Evictorbook” and JustFix's “Who Owns What” are interactive data platforms that identify property management companies and serial evictors by building and address. These data tools intend to “rebalance the power dynamic between tenants and landlords” by aiding tenant organizing across different properties owned by the same management company, building momentum for collective action against unjust and potentially illegal evictions (AEMP, 2023; JustFix, 2023). Currently, these databases do not incorporate any predictive algorithms. But one can imagine ways machine learning could be used here: for example, with attention to privacy and in collaboration with community organizers, we could use ML to understand and predict the behavior of the worst-offending landlords. Young also cites community land trusts (CLTs) as an example of collective action. CLTs are an alternative to private land ownership, where nonprofits hold land in trust on behalf of a place-based community. CLTs have been shown to prevent gentrification and keep housing affordable, and there is ripe opportunity for data-driven analysis in studying the development and growth of CLTs (Choi et al., 2018). In all these ways, reparative algorithms can quantify and formalize methods of instigating structural change that transfer power into the hands of community stakeholders (Abebe et al., 2020).
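The descriptive core of tools like Evictorbook is an aggregation step: linking eviction filings to a common owner and ranking serial evictors, on top of which any predictive layer would sit. A minimal sketch of that step, with entirely hypothetical records and field names:

```python
from collections import Counter

# Hypothetical eviction filing records, each linked to a property owner.
filings = [
    {"address": "12 Elm St", "owner": "Acme Property LLC"},
    {"address": "14 Elm St", "owner": "Acme Property LLC"},
    {"address": "7 Oak Ave", "owner": "Beta Realty"},
    {"address": "12 Elm St", "owner": "Acme Property LLC"},
]

# Count filings per owner and rank from most to fewest.
by_owner = Counter(f["owner"] for f in filings)
worst_offenders = by_owner.most_common()
print(worst_offenders)
```

In practice, the hard work lies upstream of this aggregation, in resolving the shell companies and LLC networks behind a single effective owner; the ranking itself is what gives tenant organizers a shared target.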
Conclusion
Different accounts of reparations are grounded in different theories of moral responsibility. “Algorithmic reparations” can thus have several different meanings: litigating against harm done by algorithmic systems, computing damages for historical policies, or using data-driven methods to disempower and delegitimize unjust institutions in the present. In many situations, there exist obvious perpetrators of harm, algorithmic or otherwise, and computational methods can be used within existing legal mechanisms for accountability and redress. In tandem with this approach, our collective responsibility for structural reparations enjoins us to use data and computational methods to inform collective action that disturbs status quo distributions of power, allowing us to imagine a future where safe, affordable housing is accessible to all.
Acknowledgments
I would like to thank the participants at the Algorithmic Reparations workshop, as well as Wonyoung So and Catherine D’Ignazio in particular, for early conversations that inspired this paper. I am also grateful for the continued support of Peko Hosoi and the members of the Systemic Racism & Computation Housing Working Group at MIT.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was generously supported by funding from the MIT-IBM Watson AI Lab (grant number W1771646).
