Abstract
Power and information asymmetries between people and digital technology companies are further legitimized through contractual agreements that fail to provide meaningful consent and contestability. In particular, the Terms-of-Service (ToS) agreement is a contract of adhesion in which companies effectively set the terms and conditions of the contract. Because ToS reinforce existing structural inequalities, we seek to enable an intersectional accountability mechanism grounded in the practice of algorithmic reparation. Building on existing critiques of ToS in the context of algorithmic systems, we return to the roots of contract theory by recentering notions of agency and mutual assent. We evolve a multipronged intervention we frame as the Terms-we-Serve-with (TwSw) social, computational, and legal framework. The TwSw is a new social imaginary centered on: (1) co-constitution of user agreements, through participatory mechanisms; (2) addressing friction, leveraging the fields of design justice and critical design in the production and resolution of conflict; (3) enabling refusal mechanisms, reflecting the need for a sufficient level of human oversight and agency, including opting out; (4) complaint and algorithmic harms reporting, through a feminist studies lens and open-sourced computational tools; and (5) disclosure-centered mediation, to disclose, acknowledge, and take responsibility for harm, drawing on the field of medical law. We further inform our analysis through an exploratory design workshop with a South African gender-based violence reporting AI startup. We derive practical strategies for communities, technologists, and policy-makers to leverage a relational approach to algorithmic reparation, and propose that there is a need for a radical restructuring of the “take-it-or-leave-it” ToS agreement.
This article is a part of special theme on Algorithmic Reparation. To see a full list of all articles in this special theme, please click here: https://journals.sagepub.com/page/bds/collections/Algorithmic%20Reparation?pbEditor=true
Introduction
Community-based participatory design is an approach to designing computing technologies with and for different publics (Simonsen and Robertson, 2013), with the aim of forming more equitable relationships between algorithmic systems and often-marginalized publics (Costanza-Chock, 2020; Katell et al., 2020). For the purposes of this article, we use the terms algorithmic systems, machine learning (ML), and computing systems interchangeably to refer to products or services that leverage automated decision-making processes; with recognition that not all ML systems involve automated decision-making nor do all automated decision-making systems involve ML. Computing systems are rarely developed entirely by the publics they serve (Fiesler et al., 2016); and in this way, participatory design is a situated practice of future-making, through which heterogeneous communities collaboratively imagine new sociotechnical futures (Ehn et al., 2014). While participatory design has a long tradition in shaping the design of computing systems (DiSalvo et al., 2012; Shilton et al., 2008), it has more recently become a means to co-create artificial intelligence (AI) transparency and accountability artifacts, such as model cards (Shen et al., 2022), design workbooks (Wong et al., 2017), and user agreements (Chung and Kim, 2022; Rossi et al., 2019; Rossi and Palmirani, 2019).
Accountability artifacts are part of overarching algorithmic governance structures. User agreements, such as community guidelines, terms of service (ToS), and privacy policies, contribute to the kinds of relationships formed between technologies and publics (Bygrave, 2012). However, it is a common critique that user agreements are often cumbersome (Tesfay et al., 2018), difficult to understand (Fowler et al., 2020; Sunyaev et al., 2015), and developed in isolation without input from potential users (Obar and Oeldorf-Hirsch, 2020; Ugwudike, 2021). Bringing participatory design to accountability artifacts is a critical intervention that “facilitate(s) collective and informed decision-making in their own community contexts” (Shen et al., 2022, n.p.), and offers grounded paths to undo forms of algorithmic harm, referring here to the “adverse lived experiences resulting from a system's deployment and operation in the world” (Shelby et al., 2023: 1). As algorithmic systems are embodied reflections of sociocultural and political design decisions (Davis, 2023), harms from algorithmic systems are similarly sociotechnical, arising through the interplay of social power dynamics and technical system components.
While reparative algorithms name and undo algorithmic harms (Davis et al., 2021; So et al., 2022), we envision a reparative approach to user agreements as similarly proactive: “incorporat[ing] redress … [and] embedding an equitable agenda into the material systems that govern daily life” (Davis et al., 2021). Following Jenny Davis’s (2023) mechanisms and conditions framework of ML affordances, we define a reparative user agreement as one that has mechanisms that allow and encourage the repair of algorithmic harm, condoning and legitimizing the conditions for repair. As both the potential harms from algorithmic systems and needs of the community are situated and contextual, developing a reparative user agreement requires meaningful collaboration between technology companies and the publics they engage. Extending feminist and postcolonial calls to bring community engagement to user agreements (Rossi et al., 2019; Varon and Peña, 2021), this article outlines five dimensions to scaffold community-centered and reparative user agreements:
1. The participatory development of user agreements with local, heterogeneous communities to co-constitute reparative relationships;
2. Future-oriented dialogue addressing friction, leveraging the fields of design justice and critical design in the production and resolution of conflict;
3. Opportunities for informed refusal in the development of collective agreements that enable communities to contest aspects of algorithmic systems that do not serve their needs;
4. Complaint mechanisms that empower people to report algorithmic harms through open-sourced computational tools; and
5. Inclusion of disclosure-centered mediation through reparation and apology.
These Terms-we-Serve-with (TwSw) dimensions foster more equitable technological assemblages by incorporating a wider range of perspectives into user agreements, anticipating algorithmic harms and advancing accountability for them, should they arise. In computing, principal component analysis (Kong et al., 2017), commonly used for dimensionality reduction, is a method for increasing interpretability by identifying the dimensions (principal components) of complex data in a way that preserves the most information. The TwSw dimensions are methods for similarly cultivating and preserving critical knowledge and equitable relations in user agreements. These dimensions offer practitioners, especially startups and policymakers, pathways to co-create algorithmic systems that empower communities historically marginalized in the development of algorithmic systems, including disabled people (Bennett and Keyes, 2020), people in the Global South (Kak, 2020; Mohamed et al., 2020; Sambasivan et al., 2021), and transgender and non-binary people (Haimson et al., 2021). We recognize there can be infinite dimensions, and hold space for new TwSw dimensions to emerge.
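For readers unfamiliar with the analogy, principal component analysis can be sketched in a few lines of code. The example below is a minimal illustration on hypothetical toy data (not part of this study): a five-dimensional dataset with underlying two-dimensional structure is reduced to two principal components while preserving nearly all of its variance.

```python
import numpy as np

# Hypothetical toy data: 100 samples of 5 correlated features,
# generated from an underlying 2-dimensional signal plus small noise.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))
data = base @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(100, 5))

# Center the data, then find principal components via singular
# value decomposition of the centered data matrix.
centered = data - data.mean(axis=0)
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)

# Proportion of total variance explained by each component.
explained = singular_values**2 / np.sum(singular_values**2)

# Project onto the top two components: 5-D data reduced to 2-D
# in a way that preserves most of the original information.
reduced = centered @ components[:2].T
print(f"Top-2 components explain {explained[:2].sum():.1%} of variance")
```

As with the TwSw dimensions, the point of the method is not to discard information but to surface the axes along which the most meaningful variation lives.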
In what follows, we briefly outline literature on participatory AI and human-centered user agreements. We then describe each dimension drawing on multi-disciplinary literature from computing research, feminist Science and Technology Studies, and contract law. Next, we share a community-based discussion and initial findings from applying the TwSw framework in practice and offer reflexive questions to help practitioners operationalize it in their respective contexts. We conclude with directions for future research on the reparative role user agreements could play in minimizing and acting on algorithmic harms.
Participatory AI and algorithmic accountability
With growing recognition of algorithmic harms, there has been a participatory turn in AI, with increased movement towards collaborative methods and design practices (Arana-Catania et al., 2021; Van der Velden and Mörtberg, 2015). Community-based participatory design is an intentional effort to shift relations away from “designer-and-user to … co-designers and co-creators” (Birhane et al., 2022: 2) and encompasses an evolving set of practices concerned with community participation and enabling different publics to bring their situated knowledge to bear on the design, evaluation, and governance of algorithmic systems (Brandt et al., 2012; Costanza-Chock, 2020; Lee et al., 2019). Importantly, the social relations that cohere different communities are fluid and plural (DiSalvo et al., 2012) and may be shaped by intersecting social categories of difference (e.g., gender, race, sexuality, disability, or nationality), cultural histories or geographic boundaries, and shared interests and practice, among others. Rather than be prescriptive about what constitutes a “community,” it is thus important to recognize the multiplicity of experiences within any given construction of community.
Momentum to foster greater community participation in the creation and governance of AI is motivated by concerns about the disproportionate power technologists hold in shaping the structure and assumptions built into algorithmic systems (Baumer, 2017) and the dearth of multi-stakeholder input into algorithmic systems and frameworks that have significant consequences for people's lives (Green and Viljoen, 2020). Harms from algorithmic systems arise from the complex interplay between technical system components and intersecting social power dynamics (Shelby et al., 2023; Young et al., 2019); thus, communities who already face systemic and structural forms of inequality disproportionately experience algorithmic harms rooted in social categories of difference (Benjamin, 2019, 2020; Eubanks, 2018; Noble, 2018). These include key so-called ML fairness harms (Microsoft, 2022), including (1) representational harms that reinscribe demeaning social stereotypes (Barocas et al., 2019) and function as what Patricia Hill Collins (2002) terms “controlling images” that justify social oppression; (2) allocative harms that lead to economic and opportunity loss through inequitable resource allocation (Barocas et al., 2019); and (3) quality-of-service harms, such as when algorithmic systems systematically provide degraded performance based on aspects of identity, including computer vision systems that rely on biometric data (Buolamwini and Gebru, 2018) or speech recognition systems (Koenecke et al., 2020; Mengesha et al., 2021). While computing research has accumulated a growing body of knowledge on different harms from algorithmic systems (see Shelby et al., 2023), without direct input from communities with differently situated knowledge (Haraway, 1988), it can be challenging to precisely identify the nuanced ways algorithmic harms appear in different contexts and intervene in them.
As such, community engagement offers a critical intervention in “hegemonic AI” practices to develop algorithmic systems more accountable to the publics they reach (Weinberg, 2022), and builds on a long feminist tradition of transforming power relations in algorithmic systems, including the “Feminist Principles of the Internet” (2014), the “Digital Defense Playbook” (Our Data Bodies, 2019), and the Carceral Tech Resistance Network (2020).
Community-based participatory design foregrounds the relational understanding of algorithmic impacts, calling for a normative stance, as “developers and operators should be responsive to the people who use or are otherwise affected by their algorithmic systems” (Metcalf et al., 2022: 3). For fostering algorithmic accountability, participatory methods “promote new organizational relationships and ways of communicating that strengthen the internal capability to take ownership of algorithmic systems and repair them when failures arise” (Delgado et al., 2022: 6). Community participation is not a panacea, however (Hoffmann, 2021). When done in extractive ways, participatory design occludes accountability and can become a means of exploitative “participation-washing” (Birhane et al., 2022; Schiff et al., 2021). The ability to meaningfully participate is also unevenly distributed and shaped by center/periphery dynamics. Moreover, status quo forums for participation may be structurally inaccessible and participation itself may carry disproportionate risks for certain communities, especially undocumented communities. Thus, equitable participation requires developing modes of participation that enable transparency, generative friction, and meaningful forms of knowledge exchange (Katell et al., 2020; Sloane et al., 2020), prioritizing the needs of the margins.
User agreements, consent, and the fiction of mutual assent
User agreements can be a site of justice or injustice. The dominant paradigm for user agreements is “notice and choice,” in which the notice is the presentation of a privacy policy and the choice is an action (e.g., clicking a button or using a website) that signals acceptance of the terms (Feng et al., 2021; Sloan and Warner, 2014). What this paradigm affords is a unidirectional demand placed on users, one that discourages input or feedback (Davis, 2020). This paradigm has long been criticized for failing to foster meaningful consent, as people may only be able to “opt out” or consent to all terms offered (Bruening and Culnan, 2015; Kirsch, 2011). As outlined by the United Nations Declaration on the Rights of Indigenous Peoples (2007), the principle of Free, Prior and Informed Consent confers on Indigenous and tribal peoples the right to give or withhold their consent for any action that would affect their lands, territories or rights. Similarly, as outlined in the European Union General Data Protection Regulation, user consent should be valid, freely given, specific, informed and active; thus, recent scholarship proposes mechanisms of meaningful user consent that involve agency (Bergram et al., 2020), transparency (Shen et al., 2022), accessible language (Luger et al., 2013), and the ability to revoke consent (Human and Cech, 2021).
There are other challenges to fostering meaningful consent, however. User agreements are often constructed as form contracts containing generic, boilerplate language (Marotta-Wurgler, 2007), and are drafted by organizations (drafters) and offered to individuals (signers) with little to no opportunity to negotiate their terms (Eigen, 2008). While people often pay insufficient attention to reading and understanding ToS (Ben-Shahar, 2009; Fiesler et al., 2016; Reidenberg et al., 2015), a key challenge with user agreements is that while the benefit gained may be known to the user (they are able to use a product or service), what is given up, sacrificed, and even lost, is not clear (Eigen, 2008). These information asymmetries can lead to misalignment between user expectations and the intended use and expressed functional limitations of algorithmic applications communicated in user agreements (Gambier-Ross et al., 2018; Fiesler et al., 2016). This may result in frustration and anger when users realize what rights have been granted (Angwin and Valentino-Devries, 2011), blocking forward-thinking means of repair in instances of algorithmic harm.
As form contracts often contain boilerplate language, the extent of community engagement in developing user agreements is largely limited to regulatory bodies (Belli and Venturini, 2016). While an important means of protecting individual user rights, legislation is often data-driven rather than code-driven, meaning it is not focused on how algorithms may produce harms (Hildebrandt, 2018). Consequently, user agreements for computing systems often focus on privacy-related harms that arise through the collection and sharing of personal data (Solove, 2012), while more contextual and inequality-driven algorithmic harms, such as representational, allocative, and quality-of-service harms, are often absent. Furthermore, user agreements rarely afford recourse when algorithms cause harm, often leaving users without appeal and preventing researchers and investigative journalists from being able to audit AI systems (Fiesler et al., 2020; Vaccaro et al., 2015; Vaccaro et al., 2020; Vincent, 2021). Unlike regulatory measures, which must account for the general public and act on behalf of society as a whole across a broad range of use cases, user agreements are capable of being highly specific and tailored to contextual use cases.
In the context of the contractual terms between people and technology companies, users are perceivably given an individual choice, which is increasingly itself a fiction (Hart, 2011; Leonhard, 2012). With the rise of software and platforms, Mark Lemley (2022: 11) articulates a death of (traditional) contracts and a surge in shrinkwrap licenses, in which by “tearing open the shrinkwrap,” parties agree to the terms of use. Accordingly, software contracts became legal artifacts that no longer required explicit mutual assent, referring to how different publics foster agreement and engage in a “mutually advantageous cooperative venture” (Rawls, 2004: 112). Instead, the mere act of using a product suffices as agreement to its terms of use. With advancements in software products and services, the shrinkwrap agreement has evolved into the clickwrap agreement (i.e., the consumer clicks to accept the terms) and, in some cases, the browsewrap agreement (i.e., merely visiting a website constitutes agreement to its terms). In browsewrap agreements, consumers cannot even see the terms before agreeing to them (Lemley, 2022). These are also mechanisms of static consent, rather than active and ongoing consent, as technology companies have the power to alter contractual agreements without explicitly letting their users know (Lemley, 2022). In effect, the construction of ToS agreements has evolved to intentionally reduce consent to a binary transaction. The normative affordances of form contracts are rarely designed to be mutually consentful but transactional, often affording greater protection to those setting the terms (Lobel, 2022).
There is increasing momentum to develop more equitable approaches to user agreements. Community-driven projects, such as Terms of Service; Didn't Read (2023), EULAs of Despair (2023), and Privacy Not Included (Mozilla, 2023), aim to increase public awareness of the agreements governing the use of algorithmic systems. Similarly, researchers and practitioners have developed paradigms for fostering more equitable community-technology relations, such as Allied Media Project's “Building Consentful Technologies” (Lee, 2017) and “The Feminist Data Manifest-No” (Cifor et al., 2019). These projects tap into and return to contract law's notion of mutual assent and align with movements towards collective data governance (Micheli et al., 2020), as “individualist data-subject rights cannot represent, let alone address, these [collective] population-level effects” (Viljoen, 2021: 573). Development of equitable algorithmic systems “requires inclusion from the beginning of the ideation process of an AI system … [and] a willingness to achieve collective consent reinforcing multiplicity and plurality” (Varon and Peña, 2021: 22). In sum, there is a need for sociotechnical interventions that enable a return to meaningful mutual assent by redistributing power imbalances.
The terms-we-serve-with framework
Implementing algorithmic reparation in practice requires “undoing standard power asymmetries between those who make, and those who are affected by ML systems” (Davis et al., 2021: 7). In this section, we lay out the dimensions of the TwSw framework that can render user agreements a site of justice rather than injustice, and how they support distributed reparative actions in service of algorithmic justice.
Dimension 1: co-constitution of user agreements
Co-constitution is an opportunity to challenge one-sided and coercive ToS through the participatory development of user agreements. We envision multi-stakeholder engagement that empowers local and heterogeneous communities — who may cohere through geography, axes of discrimination, or shared affinity or experiences (DiSalvo et al., 2012) — to take part, and be compensated for their participation, in drafting the user agreements for the technologies that concern them. The particular algorithmic application (e.g., health, lending, or gender-based violence) shapes the relevant stakeholder groups, and AI developers should prioritize communities who already face systemic inequalities. Engaging with domain experts and understanding the extant social power dynamics of that domain is required to identify relevant stakeholder communities. When marginalized communities are brought in as co-drafters, the collective can better establish both the desired uses of an algorithmic system and the desired responses to potential algorithmic harms, including but not limited to functionality failures. This then empowers more equitable sociotechnical relations for better information sharing, transparency, and trust (Corbett and Denton, 2023; Gordon-Tapiero et al., 2022).
Co-writing user agreements contributes to “bringing a wider community into the agenda-setting” (Hagan, 2020: 7). However, the co-design process also needs to intervene in how user agreements currently normalize contracting by proxies of consent. Proxies enable consent to be transformed into an act (e.g., opening the shrinkwrap, clicking the “Accept” button, etc.). Yet placing the “act” of agreement at the center of contractual assent muddies the notion of consideration in contracting. Consideration refers to the benefit each party receives in exchange for what is sacrificed. In a simple sale-of-goods agreement, the seller gains monetary value in exchange for giving up the goods, and the buyer gains the goods in exchange for payment. In this simplified context, it is clear to both parties what is sacrificed and what is gained. In contrast, ToS agreements are highly transactional “one-way contracts” (Ben-Shahar, 2010). Frequently, they use complex legal verbiage and are disaggregated across various documents, including community guidelines, privacy policies, and the like.
In the context of sociotechnical risks and harms of AI, the information asymmetry across parties further aggravates a lack of knowledge around the true conditions of service. In effect, there cannot be any reliable consideration given by the user, as the parameters of the contractual exchange are neither known nor defined. Conversely, users may be entirely aware of the harms and risks associated with use but are left without choice, as they are not considered direct parties to the ToS. This is typically the case for algorithmic systems that are mandated by an institution. For example, consider exam proctoring software that is imposed on students by a university. In this situation, there is no direct relational exchange between the individual user and the organization. Instead, policymakers need to consider the role of individual and collective forms of co-design (Hagan, 2020) of user agreements centered on mutual assent.
Consistent with prior literature in the space of data privacy (Gordon-Tapiero et al., 2022), contractual terms that have material effect between parties and/or have undefined risks should allow for explicit engagement. Empirical studies have explored the possibilities and impact of participation in drafting contracts. Eigen (2012) demonstrated that when people were informed of the relevant conditions of the contract, and offered the choice to change even a single term, they were actively engaged with the contractual exchange. That is, “they negotiated for its inclusion in the contract” and that this happened “before they consented to the contract” (Eigen, 2012: 7). Drawing inspiration from form-contracts research, we see how contracts can be remedied to reduce coercion and improve agency. In effect, co-constitution, through participatory construction of user agreements, behaves as a tool of empowerment and rebalances the negotiating power between users and organizations. Co-constitutive drafting then reaffirms the relational exchange between parties, transforming the fiction of mutual assent to reality by reintegrating the voice of communities and individual users.
Dimension 2: addressing friction
Whereas co-constitution is about enabling different communities to collectively develop user agreements, friction involves ensuring dialogue among communities is meaningful and oriented towards materializing algorithmic justice. We conceptualize friction in terms of (1) disagreement, misalignment, or conflict between stakeholders and their incentive structures and (2) the interactions between (un)intended users and functionality failures of deployed AI systems. Addressing friction is required to evolve reparative algorithms and redistribute the allocation of benefits and burdens among various groups of people. For developing reparative user agreements, anticipating and addressing friction can disrupt dark design patterns that mislead users (Mathur et al., 2019; Nguyen and McNealy, 2021) and surface touchpoints to co-design alternative sites and resolution of value conflicts (Costanza-Chock, 2020).
Many cases of friction in AI are intimately connected to algorithmic systems’ failure modes (Raji et al., 2022). When failures occur, people are often left with few options for seeking recourse, due in large part to indemnification clauses in ToS agreements. These clauses stipulate that a user agrees not to hold the indemnitee liable for any damage or loss caused by functionality failures. The fictional consent enabled through conventional ToS agreements (Lemley, 2022) can be understood as a kind of dark design pattern that forecloses recourse for system failures (Stanley, 2017). Continuously engaging communities to surface potential frictions before and after a system is deployed (Birhane, 2021; Costanza-Chock, 2020; Krafft et al., 2021) enables better anticipation of how such frictions may materialize into downstream harms in a world shaped by intersecting power dynamics.
We conceptualize AI frictions also as reparative community interventions into power-laden algorithmic systems. For example, activism in disability communities is illustrative of how technology can both enable and disrupt injustice. Hamraie and Fritsch (2019: 1) describe the role of disabled people as experts and designers of everyday life, naming crip technoscience as the “practices of critique, alteration, and reinvention of our material-discursive world.” A key principle in crip technoscience is committing to a praxis that perceives access barriers as friction, “particularly paying attention to access-making as disabled peoples’ acts of non-compliance and protest” in exclusionary systems (p. 10). Disabled activists’ use of technology to expose frictions in an inaccessible physical environment leverages a speculative approach to illuminate and critique systems of power, privilege, and oppression. Here, imagining new equity-oriented forms of technological design is not a solution but a means to challenge dominant norms, values, and incentives (Dunne and Raby, 2013; DiSalvo and Lukens, 2009). In the context of AI, speculative design practices can foster reparative modes of addressing the frictions between marginalized communities and complex sociotechnical algorithmic systems.
Users of algorithmic systems speculate about the ways algorithms interfere in their social relations by developing and maintaining folk theories (DeVito, 2021; Ytre-Arne and Moe, 2021). Understanding communities’ folk theories informs a broader understanding of the lack of transparency and the information asymmetries between users and AI systems; however, there has been insufficient focus on moments of disagreement. This TwSw dimension is committed to closing that gap by understanding the root causes of friction, negotiating and encouraging awareness of existing frictions, and co-designing intentional frictions, for example, nudges and choice architecture that empower transparency, slowing down, self-reflection, learning, and care. For both individuals and communities, understanding friction offers the vocabulary to knowingly refuse contractual terms of use and, in adverse circumstances, hold organizations liable through complaints.
Dimension 3: enabling refusal mechanisms
Within this dimension of the TwSw framework, we propose that the practice of refusal needs to be made explicit in the relationships between involved stakeholders. We build on prior work conceptualizing informed refusal (Benjamin, 2016; Cifor et al., 2019) in the context of algorithmic reparation, arguing for (1) enabling refusal mechanisms grounded in a relational, justice-oriented approach enacted through the lived experiences of those at the margins, and (2) pairing refusal with a search for equitable alternatives. Such refusal mechanisms need to be explicitly outlined in user agreements.
Fostering meaningful consent in user agreements includes the ability to refuse coercive ToS, particularly when consent is solicited by proxy (i.e., via clickwrap and/or browsewrap). The notion of informed refusal is a justice-oriented approach to constructing more reciprocal relationships between institutions and communities (Benjamin, 2016; Ganesh and Moss, 2022). Whereas informed consent understands the transmission of information as centered on granting permission, informed refusal shifts the expectation of participation to “the expectation that individuals may very well decline participation” (Benjamin, 2016: 18). Refusing participation is an act of agency and a contestation of the terms of inclusion, and for AI systems specifically, it confronts the terms on which digital participation is understood (Ganesh and Moss, 2022). In this way, refusal is a practice of generative boundary setting (Barabas, 2022) and a tool for interrogating unequal power dynamics and disrupting algorithmic injustice (Benjamin, 2020; Cifor et al., 2019).
Transformative modes of refusal extend beyond the rejection of a user agreement and incorporate future-oriented means to address frictions. In contrast to individual consent forms, such as agreeing to a policy at the point of data collection, the TwSw notion of informed refusal demands, by design, ongoing consent/refusal mechanisms. This may include the proactive inclusion of collective forms of refusal into user agreements, for example, bug bounty programs to address performance failures (Kenway et al., 2022) and community-led audit studies (Matias et al., 2015; Shen et al., 2021). Incorporating collective refusal practices into user agreements disrupts unidirectional and one-time modes of consent to operationalize refusal in service of developing reparative algorithms. Furthermore, integrating active and ongoing refusal mechanisms into how we engage with AI enables asking questions about how intersecting power dynamics shape the design of algorithmic systems (Barabas et al., 2018; Garcia et al., 2022). Ultimately, refusal materializes reparation by “resisting, reframing, and redirecting colonial and capitalist logics” (Wright, 2018: 1). Informed refusal is thus a generative stance, playing an active and material role in reforming the relationships between AI systems and often-marginalized communities.
Dimension 4: complaints and algorithmic harms reporting
Complaints are expressions of dissatisfaction, pain, or grief (Ahmed, 2021), and as a TwSw dimension, are a means of proactively and collectively establishing how to understand and act on adverse experiences with algorithmic systems. Incorporating mechanisms of user feedback into AI applications is a common means of understanding user perspectives, including through public-facing app reviews (Fu et al., 2013; Khalid et al., 2014), social media (Griffin and Lurie, 2022), or company-facing user feedback forms (Panichella et al., 2015). While users complain to communicate frustration, the primary motive is to resolve the problem (Holloway and Beatty, 2003). Proactively and collectively deciding how to address systemic algorithmic failures upfront in user agreements — to the extent possible — fosters more equitable and reparative relations between AI developers and publics.
Anticipating the range and scope of algorithmic failures that could arise is challenging (Boyarskaya et al., 2020), especially as algorithmic systems are situated in a complex social world shaped by intersecting social inequalities. Engaging with this TwSw dimension does not demand the impossible task of perfectly anticipating algorithmic harms. Rather, it seeks to repair trust relationships and collectively establish how to respond when harms appear. This needs to be grounded in distinct avenues to report algorithmic harms to archives and knowledge hubs facilitated by trusted third parties situated externally from technology companies. Feminist scholar Sara Ahmed (2021) describes how, while complaints lodged with an organization may catalyze action, action is never the starting point of a complaint; there is an underlying root cause. How organizations respond to complaints illuminates their commitment to interrogating and addressing those root causes.
Reparative interventions must be grounded in an understanding of the fundamentally sociotechnical nature of algorithmic harms. By engaging with this dimension of the framework, practitioners can establish contestability mechanisms that empower people to collectively voice and make sense of potentially harmful concerns as testimonies to structural and institutional problems. For example, we envision policy requirements that enable third party oversight (Gordon-Tapiero et al., 2022) and the use of open-source tools for algorithmic harm reporting. Such mechanisms could act as a partner to users and the broader algorithmic auditing ecosystem, contributing to improved justice outcomes.
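To make the idea of open-source harm-reporting tools concrete, the sketch below shows one minimal, hypothetical shape such a report record could take. All field names, categories, and the `acknowledge` step are illustrative assumptions, not drawn from any existing tool or from the TwSw framework itself; the point is only that a report can be structured to carry the reporter's testimony to a third-party archive and to make the organization's response (or non-response) legible.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class HarmReport:
    """A single user-submitted report of a suspected algorithmic harm,
    intended for a third-party archive rather than the vendor alone."""
    system_name: str                      # the algorithmic system involved
    description: str                      # the reporter's account, in their own words
    harm_category: str                    # e.g., "misclassification", "denial of service"
    affected_group: Optional[str] = None  # optional self-identified community context
    submitted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    status: str = "received"              # lifecycle: received -> acknowledged -> resolved

def acknowledge(report: HarmReport) -> HarmReport:
    """Record the minimal first response a complaint should receive:
    confirmation that it has been seen, before any resolution."""
    report.status = "acknowledged"
    return report

# Example: structuring a report about a chatbot giving incorrect guidance
report = HarmReport(
    system_name="example-chatbot",
    description="The chatbot gave incorrect legal guidance.",
    harm_category="misinformation",
)
record = asdict(acknowledge(report))
```

Keeping the status lifecycle explicit in the record, rather than in a company's internal ticketing system, is one way a trusted third party could audit whether complaints are acknowledged and followed up on at all.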
Dimension 5: disclosure-centered mediation
This dimension of the TwSw framework bridges two seemingly disparate processes, disclosure and mediation, that together foster reparation. We propose reframing the dispute resolution mechanisms currently available in user agreements, for example, the Limitation of Liability clauses in ToS.
Disclosures seek to acknowledge the agency and autonomy of individuals. Calls to require disclosures in the context of AI systems appear in policy recommendations on algorithmic auditing (Costanza-Chock et al., 2022; Raji et al., 2022), in regulatory frameworks by the European Union (see articles 13 and 22 of the GDPR and articles 51, 52, and 60 of the AI Act), and in the US Congress (Klobuchar, 2018; Trahan, 2021). Meaningful disclosure affirms that the individual has final decision-making power over how they want to proceed. Proper institutional design and implementation of disclosure are necessary counterparts (Ho, 2012; Norval et al., 2022). For example, in informed consent practices the organization or its representative is often responsible for disclosing harms and risks, enabling both an information asymmetry and a power imbalance to emerge (Cohen, 2020). While, in theory, there are distinct legal standards around who drives the scope of information to be disclosed, in practice, organizations remain largely responsible for defining those parameters, and people must make sense of the potential risks and benefits presented to them (Cohen, 2020). As a result, the layperson must largely trust that the expert has their best interests in mind (Chipidza et al., 2015) and, as such, does not necessarily have a real choice. Disclosures alone, then, do not sufficiently offer a venue of recourse.
In contrast, mediation is a type of Alternative Dispute Resolution practice (Alexander, 2003) centered on apology and reconciliation. Mediation is both a process and a forum for resolving differences through engagement with a mutually selected impartial individual (Wall and Dunne, 2012). Mediation is frequently employed in cases of medical error, where trust between physicians and patients has broken down, and apology plays a reparative role in these circumstances (Robbennolt, 2009). Both patients and physicians express their desire for explanation and apology following medical errors. Expressions of regret acknowledge imperfection and create space for change. An apology is an act of taking responsibility for causing harm and is the first step to repairing a relationship.
We see this TwSw dimension as embodying an analogous enforcement mechanism: disclosure-centered mediation as a form of accountability grounded in the user agreements between people and AI. In this reparative approach, apology closes the loop between disclosure and mediation; it substantiates the information disclosed and gives it weight. We acknowledge that justice requires more than an apology; it needs material resources, legal frameworks, processes, and institutions that guarantee non-repetition. We therefore consider that a dispute resolution forum that compounds accountability with apology and disclosure of error can contribute to reparative algorithms that unmask and undo algorithmic harm (Davis et al., 2021).
Operationalizing the framework through reflexive questions
The TwSw is a sociotechnical intervention into user agreements to empower different actors to engage in the practice of algorithmic reparation, thus accounting for intersectional axes of inequality (Hoffmann, 2019). It offers scaffolding for illuminating existing structural injustices and enacting a reparative approach to algorithmic systems that centers the margins in the act of restructuring power. Practical interventions resulting from use of the framework should be implemented at different stages of the AI lifecycle (UNESCO, 2021) and documented in the user agreements surrounding the system's deployment and use in particular contexts. It is a framework for practitioners to both think with and act with. In support of this effort, we offer reflexive questions that serve as a starting point for operationalizing each TwSw dimension.
We derive the reflexive questions in Table 1 from the theoretical analysis in the prior sections and initial findings when applying the TwSw framework in practice together with the South African startup, “Kwanele - Bringing Women Justice” (Kwanele, 2023). Kwanele aims to help women and children report and prosecute crimes involving gender-based violence (GBV). The team is developing an AI chatbot to guide users in reporting GBV cases and answer any questions related to South Africa's legislation. Recognizing the broader social context, Kwanele sees the chatbot as embodying three roles: (1) a legal analyst, helping make the legalese within government regulations easier to understand; (2) a crisis response social worker, guiding people to report GBV and seek help; and (3) a mental health therapist, conversing with victims in a psychologically and potentially physically vulnerable state. Kwanele's team wanted to leverage the TwSw framework in determining ways to incorporate AI in a manner that aligns with their mission, values, and the needs of their users.
Table 1. Terms-we-Serve-with dimensions and reparative outcomes mapping.
Workshop methods and participants
Embodying a reparative approach necessitated that we engage with marginalized practitioners in the co-design and evaluation of the TwSw dimensions. We recruited a purposive sample (Onwuegbuzie and Collins, 2007) of 15 experts in AI transparency and accountability through Mozilla's Trustworthy AI Working Group (Mozilla, 2019). Participants included members of Kwanele's team, academic scholars, civil society, and policymakers. During a virtual workshop, participants were split into five breakout groups corresponding to the five TwSw dimensions; each group was facilitated by an assigned moderator responsible for documentation. Each breakout session lasted an hour and included discussion questions (see Table 1), following a design fiction method (Lindley and Coulton, 2015). Data gathered from the workshop were analyzed inductively, using reflexive thematic analysis (Braun and Clarke, 2021).
Workshop findings
Our findings converged along five thematic interventions that need to be made explicit through reparative user agreements: (1) improving communication and engagement in user agreements, creating contextual scenarios instead of binary yes/no decisions that prevent meaningful mutual assent in agreeing to contractual terms; (2) clear pathways for escalation of algorithmic harms, sensitive to the different needs of different identities and communities; (3) a process-based approach to complaint handling that encompasses confirmation, recognition, acknowledgement, and follow-up with impacted users; (4) a compassion-centered approach to the user interface, promoting transparency and self-care; and (5) improved feedback loops between product teams and the frontline workers who process user reports of algorithmic harms. The critical feminist interventions that emerged during this workshop are a step towards centering work around the lived experiences of members of communities affected by AI chatbot systems. Operationalizing these interventions in practice will need to take into account existing social, legal, and institutional barriers (Davis et al., 2021). Kwanele is an example of a company that has now taken steps towards a practical implementation. By positioning the TwSw as a multipronged approach grounded in five intersecting dimensions, we hope to equip transdisciplinary practitioners and policymakers with tools and generative questions to reorient their work towards a reparative approach.
Conclusion and future work
Building on feminist and critical algorithmic justice projects and scholarship, this article argues the need and lays out pathways to transform how contractual agreements between people and technology companies are constituted. The Terms-we-Serve-with framework offers five entry points for technologists and policymakers to co-create algorithmic systems that shift existing power imbalances to replace coercive user agreements, foster more meaningful forms of consent, and enable more transformative modes of algorithmic accountability. Our theory of change is centered on engaging with the reparative role that relational user agreements could play in minimizing sociotechnical harms and risks in AI. Similar to other benefits of participatory AI (Sharp et al., 2022; Wong et al., 2022), the value TwSw dimensions offer is mutual learning and understanding, which we argue can foster more equitable, creative, and reparative futures. Realizing this role requires forging meaningful community participation, and a commitment from technologists to participatory methodologies. Policymakers, too, can enable relational user agreements by legitimizing their need in the regulation of contractual relationships.
Our framework underscores limitations in normative user agreements, particularly around coercion and dark patterns. User agreements and practices are important sites of justice. If user agreements are to contribute to algorithmic reparation, they need to explicitly incorporate meaningful modes of redress and an equitable, future-oriented agenda to address instances of algorithmic harm. While improving opportunities for meaningful consent in user agreements is urgent, without multifaceted feedback mechanisms to identify frictions and intervene in community-identified problematic aspects of algorithmic systems, reparative algorithms will be hard to realize.
The TwSw dimensions are a starting point, rather than the final word on developing reparative user agreements. As our current analysis is algorithm-agnostic, future research could more thoroughly investigate the potential of a reparative and relational approach to user agreements in the context of different algorithms and their associated failure modes and harms (e.g., large language models, generative machine learning models, computer vision models). How harm manifests from the algorithmic system at hand will shape the form the TwSw dimensions take and the social domains and contexts in which they are deployed. Future work will be required to link the ideas we lay out to policy recommendations and practical implementation in particular domains. Through a research agenda committed to algorithmic reparation, we can enable more equitable and accountable technological assemblages.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
