Abstract
This article examines how algorithmic accountability is translated into action at the municipal level in the United States. Based on a review of task forces, ordinances, and policy toolkits from New York City and Seattle, I demonstrate the ways municipalities and local publics operationalize abstract notions of accountability. Municipal interventions often prioritize revealing computational tools (transparency) and their effects on people (impact assessments). While these two forms of accountability are crucial, they may neglect to examine institutions—and how they change—as those institutions incorporate automated decision systems. I thus propose a political-economic approach that recognizes algorithmic systems as part of municipal institutions and focuses on their role in intensifying data collection and commodification between public agencies and markets. I argue that algorithmic accountability, especially in public agencies, needs to focus on incompetence and asymmetries of power within a network of governments, tech companies, community groups, and technologies. With a mix of transparency, impact assessments, and political-economic review, the paper proposes a more comprehensive assessment of automated decision systems through their development, procurement, use, impact, and decommissioning.
Introduction
As algorithmic systems proliferate in public services, news feeds, workplaces, and urban environments, calls to hold them accountable have become loud and persistent. Pushing back against the notion that algorithmic systems are inherently obscure and impenetrable, algorithmic accountability, in broad strokes, aims to render these systems answerable to the public and responsible for their decisions. Nowhere is such concern for accountability more prominent than in the public sector, where algorithmic tools are increasingly part of the coercive and redistributive capabilities of the state (Brayne, 2020; Calo and Citron, 2021; Eubanks, 2017; Green, 2019; Safransky, 2020). Academics and journalists run ad hoc computational audits, public scandals follow news reports, and the industry makes superficial attempts at ethics boards and guidelines, but there is little substantial movement toward instituting algorithmic accountability in public agencies. Despite the breadth of documentation of biased decisions and theoretical work on algorithmic accountability (Ananny and Crawford, 2018; Barocas and Selbst, 2016; Citron and Pasquale, 2014; Engstrom and Ho, 2020), the combined opacity of algorithmic systems (Burrell, 2016) and government office operations means we know little about what accountability looks like in practice.
Recently, local governments have become sites of policy interventions and public advocacy for algorithmic accountability in public agencies. As tech companies partner with cities and municipal governments to test work-in-progress automated decision systems, advocacy groups and public officials call on municipal institutions for regulatory action. These efforts have been piecemeal so far, reflecting various kinds of accountability targeting automated decision systems. Expressed or actual limitations inside city agencies, diverse policy objectives, and, in some cases, specific scandals lead municipal agencies and local publics to promote distinct types of accountability.
This paper discusses cases in the United States to examine how algorithmic accountability has been translated into action in local contexts. I first review a pair of well-publicized, multi-stakeholder efforts on algorithmic accountability and equity as examples of two models of holding automated systems accountable: (1) focusing on transparency to reveal the uses of automated systems inside municipal agencies and (2) documenting the differential impacts of automated decision systems on local publics. As the United States has routinely avoided regulating automated systems, these two local initiatives allow insight into the breadth of political imagination for algorithmic accountability in a lax regulatory environment. I then propose a third approach, grounded in a political-economic perspective, which considers the bureaucratic institutions that design, adopt, and test these systems. I use a public controversy from San Diego over smart streetlights to illustrate why the first two approaches, even if executed well, are insufficient for comprehensively holding automated decision systems accountable.
Distinguishing these three forms makes it possible to demonstrate the ways municipalities and local publics operationalize otherwise abstract notions of accountability. The first two models reveal computational tools (transparency) and their effects on people (impact assessments). While they are crucial, they may not pay enough attention to how institutions change as they attempt to incorporate automated decision systems. The political-economic approach I propose recognizes algorithmic systems as embedded within economic and political institutions—rather than as separate computational tools—and focuses on the role of automated systems in intensifying data collection and commodification between public agencies and markets. This approach, I suggest, allows scholars and advocates to examine the ways municipal agencies justify, procure, and use automated decision systems and thus to grasp the discrepancy between their intended and practical uses. It also more explicitly addresses the consequences of how these technologies are tested, adopted, and, in some cases, discarded inside public institutions as they contribute to creating new markets.
This paper offers two contributions to the existing literature on algorithmic accountability and government use of algorithmic systems. First, by demonstrating how municipalities and residents have tried to operationalize abstract notions of algorithmic accountability, it shows the assumptions and limitations of current efforts. Second, I argue that algorithmic accountability, especially in public agencies, needs to focus on incompetence and asymmetries of power within a network of governments, tech companies, community groups, and technologies. Rather than treating automated decision systems as exceptional tools, a political-economic understanding of algorithmic accountability could foster bureaucratic responsibility about what these systems are supposed to do and whose interests they serve. By adding to the mix of transparency and impact assessments a political economy dimension, I propose a more comprehensive view of automated decision systems through their development, procurement, implementation, testing, use, impact, and decommissioning.
Holding socio-technical systems accountable
Over the last decade, algorithmic accountability has emerged as a political response to the outsized power of big tech companies and the promises of innovation and techno-solutionism that work to crowd out other political possibilities for attending to social problems. From scrutinizing computational systems to examining their implications in social life to addressing the larger political-economic context in which they are designed and implemented, considerations of algorithmic accountability aim to understand how these socio-technical systems work (Diakopoulos, 2015; Fink, 2018), redress their harms (Koene et al., 2019), and re-imagine how they might be configured differently (Benjamin, 2019a).
While the term “algorithmic accountability” is relatively recent, debates over the erosion of responsibility in computer systems have a long history. Investigating the moral and legal processes of accountability, for example, Helen Nissenbaum (1996) cites four issues that obfuscate clear checks and balances in a computerized society: the problem of identifying who is accountable (“many hands”); the dominant view in computing that software errors are inevitable (“bugs”); distancing human actors from responsibility by blaming computers (“the computer as scapegoat”); and industry demands for property protection while denying accountability (“ownership without liability”). Twenty-six years after Nissenbaum wrote, these four concerns endure even as an entrenched tech industry develops algorithmic systems that reach farther and wider than before.
Legal scholar Frank Pasquale (2019) observes that while a first wave of algorithmic accountability prioritized improving existing systems through due process, platform neutrality, and non-discrimination principles, a second wave now addresses the political and economic structures underlying the making and use of algorithmic systems. In this new wave, scholars make prominent calls for a “genuine accountability” accessible to groups harmed by computational systems (Hoffmann, 2019; Powles and Nissenbaum, 2018) and advocate room for refusal to build or use some technologies (Gangadharan, 2021). Amid diverse goals, several methods have emerged as practical solutions within the tech industry and among advocacy groups, media, and the academy.
A recent report (Ada Lovelace Institute, AI Now Institute and Open Government Partnership, 2021) examines policy implementations of algorithmic accountability across public agencies in the United States and Europe. It identifies eight approaches: principles and guidelines, prohibitions and moratoria, public transparency, impact assessments, audits and regulatory inspection, oversight bodies, rights to hearing and appeal, and procurement conditions. These different dimensions of accountability point to a wide range of experimentation and governance models, all of which are inevitably shaped by the politics of the local and national contexts in which interventions occur. This paper thus joins academic efforts to better understand the trials of algorithmic accountability in practice in the context of U.S. cities. The following section briefly reviews how cities have become the target sites of policy debates regarding algorithmic accountability.
Local governments and algorithmic systems
With the rise of smart cities (Wiig, 2015) and the internet of things, local governments have been on the radar of small and large tech companies. Many municipalities partner with them to test novel technologies across the city and inside public agencies (Halpern et al., 2013). The critics of smart cities rightly point out that the primary purpose of these public-private partnerships is the ongoing privatization of public services via techno-solutionism (Hollands, 2015; Green, 2019; Mattern, 2021; Sadowski and Bendor, 2019). Tech companies wield enormous symbolic and economic power over austere municipalities by intentionally magnifying the problems of cities and hyping the promise of intelligence ostensibly sourced from novel data collection methods (Baykurt and Raetzsch, 2020; Halegoua, 2020; Kitchin, 2014; Powell, 2021; Shapiro, 2020). Once urban testbeds or pilot programs are rolled out, cities are often stuck trying to invent uses for half-developed technologies (Baykurt, 2019; Brauneis and Goodman, 2018).
Criticism and failures rarely stop municipalities from adopting computational tools for local governance. Under various terms (e.g. artificial intelligence, algorithmic systems, automated decision systems, or smart technologies), public officials rely on metrics and software in criminal courts, predictive policing, welfare distribution, and urban planning (Brayne, 2020; Christin, 2017; Clark, 2020; Eubanks, 2017; Tavmen, 2020). Governments purchase personal data from private vendors or collaborate with tech companies to furnish urban environments with new surveillance technologies. The resulting data are put to algorithmic work to make vital decisions about public services. New and poorly understood computational tools, market practices, and bureaucratic processes inevitably transform statecraft in the digital age (Fourcade and Gordon, 2020).
In response to the increasing adoption of smart technologies in urban environments and automated decision systems in municipalities, community groups and civil society organizations have recently turned to cities as sites for regulatory innovation. From local laws banning facial recognition technologies to organizing against big tech companies such as Amazon in New York City or Google in Toronto, citizen mobilizations push local governments to launch regulatory mechanisms against the harms of digital technologies. Most of the ensuing task forces, policy frameworks, and pieces of legislation are specific to localities. They may be limited in their effect but they introduce possible visions and practices for holding technologies accountable, especially in the absence of national and international guidelines.
Methods and data
To understand how algorithmic accountability is conceptualized in practice at the city level, I begin by reviewing two well-known examples in U.S. cities, both undertaken in 2017. The first is New York City, which created the first municipal task force for regulating automated decision systems. The second is Seattle, which passed one of the most ambitious surveillance oversight ordinances in the country. I pick these cities for two reasons. First, both are municipal-level, multi-stakeholder cases of direct policy interventions in algorithmic accountability and equity. Second, they present a pair of accountability models that enable me to tease out the different assumptions, political actions, and limits of these efforts to operationalize algorithmic accountability.
Through an interpretive, comparative lens, I reviewed the ordinances, public reports, meeting minutes, and local and national press coverage associated with these two municipal initiatives. I accessed policy documents through each city's local government websites. In both cases, because there was strong participation by engaged scholars, nonprofit organizations, and community organizers, I also consulted their reports, websites, and scholarly work as primary source material for accounts and critiques.
I then turn to another city, San Diego, where a coalition of activists and researchers organized against the city's smart streetlights initiative, launched in 2016 and shut down in 2020. As with New York City and Seattle, I accessed the policy documents (e.g. ordinances, procedural documents) that were publicly available either on government websites or thanks to the work of investigative journalists. In addition, there was strong community organizing by the TRUST San Diego Coalition, whose website offered a wealth of resources. In San Diego, too, some engaged scholars reported on their involvement in public reports and academic papers, which I reviewed as primary source materials.
Instead of treating these examples as traditional case studies, I prioritized identifying the range of actions, the timeline of events, and the substance of critiques in order to theorize the models of algorithmic accountability in the city exemplified by NYC and Seattle. I studied the similarities and differences between the two cities and analyzed how far the two models of algorithmic accountability extend in practice. I used the San Diego case to illustrate the limits of those two models and to clarify the need for a political-economic approach to algorithmic accountability. Together, these examples showcase different frames and practices of accountability, thereby moving toward a more comprehensive understanding of algorithmic accountability in the city.
Algorithmic accountability through seeing technologies
Since most automated decision systems are perceived as black-box technologies, transparency often emerges as a common theme for holding these socio-technical systems accountable. When New York City created the first task force offering recommendations for regulating government use of automated decision systems in 2017, transparency was at the heart of their efforts. The task force's original goal was to “address how the City uses automated decision systems (ADS), how ADS are managed, how information about them is retained, and what happens when the public asks about, or has a concern about, a specific ADS” (New York City, 2019a). The aim was to shed light on the actual automated decision systems inside municipal agencies and how city officials used them in decision-making.
The task force first convened in 2018 with seventeen members and three co-chairs from city agencies, academic institutions, and civil society organizations (Richardson, 2019). A broad field of actors and organizations outside the task force also produced policy reports, offered expert views, and organized community meetings to generate recommendations about the use and governance of algorithmic systems (Heimstädt and Ziewitz, 2019). The task force spent the first several months debating the definition of an automated decision system (Richardson, 2019). The starting definition in Local Law No. 49 (2018) was expansive: “‘automated decision system’ means computerized implementations of algorithms, including those derived from machine learning or other data processing or artificial intelligence techniques, which are used to make or assist in making decisions.”
Public agencies use many technologies that assist in decision-making, relying on various algorithms, machine learning models, or other statistical methods, so city officials were concerned about the nebulous definition of an ADS. Some even questioned whether an Excel spreadsheet would be considered an automated decision system (Richardson, 2019). The task force also had trouble defining what to count as a "decision" when computerized systems assist at several stages of a public agency's decision-making process (New York City, 2019b). These long discussions were not necessarily a delay tactic or mere semantic confusion. The stakes of precisely defining automated decision systems were high, as this choice would determine the ambit of what could be regulated.
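The definitional anxiety is easy to make concrete. Consider a minimal, hypothetical sketch (invented for illustration; no NYC agency is known to use anything like it): even a few lines of code comparing applicant attributes against fixed thresholds plausibly meet Local Law No. 49's definition of a "computerized implementation of algorithms ... used to make or assist in making decisions," and so would a spreadsheet formula performing the same comparison.

```python
# Hypothetical illustration only: a toy eligibility screen of the kind
# that arguably falls under Local Law No. 49's broad ADS definition.
# All names, rules, and thresholds here are invented for this sketch.

def flag_for_review(household_income: float, prior_violations: int) -> bool:
    """Assist a caseworker by flagging an application for manual review."""
    # A fixed threshold rule: computationally trivial, yet it "assists in
    # making decisions" exactly as the statutory definition describes, and
    # an equivalent Excel formula would do the same work.
    return household_income < 20_000 and prior_violations > 2

applications = [
    {"id": "A-101", "income": 18_500, "violations": 3},
    {"id": "A-102", "income": 42_000, "violations": 0},
]

for app in applications:
    flagged = flag_for_review(app["income"], app["violations"])
    print(app["id"], "flagged for review" if flagged else "not flagged")
```

Nothing computational separates such a script from a spreadsheet macro or a machine learning model at the level of the definition; the boundary of what is regulable has to be drawn politically rather than technically, which is precisely why the definition consumed so much of the task force's time.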
The definitional challenges were exacerbated by the reluctance of NYC agencies to identify the automated decision systems in use in city hall. Once the city staff's unwillingness to provide information on existing tools became too obvious to excuse or ignore, some task force members became openly critical (Lecher, 2019). One member said, “[Not sharing information] undercut the value of the task force, which aimed to escape the theories and generalizations of the ivory tower to examine how these tools were operating in the real world, using the country's largest city as our test case” (Cahn, 2019). The AI Now Institute at New York University released a chart of publicly acknowledged examples of automated tools used in the city (AI Now Institute, 2019) but the agencies remained stubborn in their refusal to provide a complete list during the tenure of the task force (Heilweil, 2019).
The expressed aims of the task force encompassed an expansive conception of algorithmic accountability: to create procedures to establish transparency, promote explainability, and ameliorate disparate harm in the use of automated decision systems in New York City government. Over the eighteen months of their work, however, task force actions narrowed to debating a definition of automated decision systems and what technologies inside public agencies should be subject to oversight (Cahn, 2019; Heilweil, 2019; Richardson, 2019). Coupled with the reluctance of public agencies to share the complete list of automated systems in use, algorithmic accountability in New York City became limited to identifying algorithmic technologies. The task force's final report of November 2019 did not include a full list of existing automated decision systems (Heilweil, 2019; Lipton, 2019). It instead tried to come up with criteria to identify which automated systems should be subject to public oversight (New York City, 2019a).
Scholars in critical data studies routinely argue against an essentializing view of algorithms, that is, against seeking to pinpoint and define them (Dourish, 2016; Neyland, 2016; Seaver, 2017). However, NYC's task force—and the public debate that ensued over the task force—dealt predominantly with definitional anxieties and efforts to locate automated decision systems inside city agencies. Outside experts and advocacy organizations pushed the task force and the city government to adopt a more expansive notion of algorithmic accountability, one focused on public participation, the harms of these tools to various communities, and power asymmetries within the procurement and development of automated decision systems (Richardson, 2019). Not only did this broader view not materialize but the task force's limited transparency approach failed to produce the intended results (Kaye, 2019).
NYC's attempt at algorithmic accountability also fell short of meaningful public participation over the eighteen months of the process. Despite consistent engagement by a broad coalition of community organizers, civil rights advocates, and academic researchers, very little of their input was incorporated into the final report. Nor was there a deep dive into the impact of automated decision systems on various communities across NYC (Richardson, 2019). One outcome of the task force was a 2019 executive order that established the role of an Algorithms Policy and Management Officer to "help provide protocols and information about the systems and tools City agencies use to make decisions" (New York City, 2019c). In 2021, this office published its first report on the number of automated decision tools used inside each municipal agency, with brief descriptions of what they do (Thamkittikasem, 2021). The office also organized a few public events to introduce the city government's agenda on algorithmic accountability.
New York City's experience with the task force and the ensuing controversies demonstrate how transparency as a type of algorithmic accountability is not a straightforward endeavor. It is undoubtedly an essential step toward accountability, especially since automated decision systems are complex computational tools often hidden from public sight. Instituting a task force and collective efforts toward transparency generated broader conversations around cataloging and counting algorithmic systems and pushed municipal staff, at least to an extent, to articulate how they use these technologies in decision-making. However, transparency alone was not a complete frame for understanding the impact and consequences of these systems in municipal decision-making.
Algorithmic accountability through assessing impact
New York City approached accountability with a focus on transparency, emphasizing ways to make the municipal use of technology more visible to the public. Seattle pursued another strategy: impact assessments meant to reveal the effects of automated decision systems. The city government adopted one of the most robust regulations of surveillance technologies, the Seattle Surveillance Ordinance, in 2017 (City of Seattle, 2017). As the city recognized the disparate impact of surveillance technologies on different populations and the lack of transparency at city hall, the ordinance laid out clear procedures for acquiring technologies. These included maintaining a publicly available list of technologies in use or at any stage of procurement, inviting public comment and requiring city council approval before acquisition, and delivering routine equity and impact reports for public review.
The ordinance was not originally framed as algorithmic accountability legislation but many of the surveillance technologies it targeted involved computer vision and automated inference techniques, bringing algorithmic systems within its purview. It aimed to establish a robust process for evaluating transparency and outcomes by creating a repository of surveillance technologies and requiring regular impact assessments. By naming the city council a locus of responsibility, the ordinance encouraged community input through council members' direct public outreach and proposed a stronger oversight process.
Even though this ordinance stood out with the “strength of its detailed reporting processes, public engagement mechanisms, and direct political oversight functions” (Young et al., 2019: 3), its execution encountered practical challenges. One study, for example, found that the city government was overly focused on “the data collection function of surveillance technologies” and ignored data analysis processes (Young et al., 2019: 12). Further, city employees lacked consensus on which existing computational tools would be subject to oversight.
Such gaps between policy and implementation inspired civic organizing in 2019. A research group at the University of Washington, the Critical Platform Studies Group, joined with the ACLU of Washington, the Tech Fairness Coalition, and the Council on American-Islamic Relations of Washington. They launched a series of co-design workshops on building and evaluating automated decision systems and surveillance technologies, organizing panels to incorporate the lived experiences of groups affected by automated decision systems in immigration, employment, housing, and welfare (Katell et al., 2020). After a year of collaborative work, the group designed the Algorithmic Equity Toolkit (AEKit), released publicly in May 2020. Each of the kit's four main components poses questions about the impacts and oversight of automated decision systems. Through flowcharts, definitions of key terms, and guiding questions, the AEKit aimed to make automated decision systems more legible and explainable to community advocates. Through the kit, the civic coalition sought to recognize "the potential for algorithmic harm" by educating and guiding city residents in understanding how these technologies work (Katell et al., 2020: 47). It also served to inspire a model of collaboration across multiple groups, as the coalition saw "equitable process as determinative of equitable outcomes" (Katell et al., 2020: 47; emphasis in original).
City officials may use constantly changing definitions and operations of algorithmic systems as excuses to punt on regulation. The accountability model of the AEKit thus emphasized the significance of laying the groundwork for ongoing community education and advocacy to bridge the knowledge gap. This model of algorithmic accountability joins a growing movement around conjuring responsibility from the ground up and seeking to document people's experiences, especially in marginalized groups, with automated decisions and data collection technologies (Bloch-Wehba, forthcoming; Lewis et al., 2018). By decentering technology in algorithmic accountability and “beginning and ending with marginality” (Gangadharan and Niklas, 2019: 896), this view prioritizes lived experience and community-led efforts. It centers the voices of those directly impacted by algorithmic systems in design and evaluation (Costanza-Chock, 2020) in contrast to myopically spotlighting the inputs and outputs of systems.
The algorithmic impact assessments exemplified by Seattle's civic efforts do not limit accountability to seeing automated decision systems or opening them up to understand how they work. They focus on revealing disparate effects of algorithmic systems while trying to identify the intentions of designers or governments and the potential harms that may be associated with the use of these systems (Kulynych et al., 2020; Moss et al., 2021). As a community-driven initiative, the AEKit has yet to prove effective in instituting citywide algorithmic accountability. Still, since it builds upon the city's existing surveillance ordinance with clear procedures to document and politically approve or reject technologies, algorithmic accountability in Seattle could potentially move beyond seeing impact and offer processes for responsibility and redress at the municipal level. However, as Metcalf et al. (2021) observe, how and when assessments happen and in what ways they lead to accountability are in no way straightforward in practice and are still very much shaped by power relationships.
If algorithmic transparency centers on the opacity problem, algorithmic impact assessments prioritize the disproportionate harm and discrimination caused by automated systems. Similar to impact statements in environmental policymaking or privacy assessments, algorithmic impact assessments aim to establish a governing relationship by accounting for actual or potential harm across automated decision systems' design, maintenance, and operation (Ada Lovelace Institute, AI Now Institute and Open Government Partnership, 2021). While a focus on impact as a type of accountability significantly complements transparency, it does not target the institutions that decide to adopt or design these systems in the first place. Nor does it offer a clear way into the broader consequences of incorporating algorithmic systems into municipal decision-making and how these systems change workflows inside public agencies.
A political-economic view of algorithmic accountability
Transparency and impact assessments are crucial models for algorithmic accountability but they may fail to spotlight what happens inside municipal agencies as they use automated decision systems. What ought to be done, for example, when algorithmic systems are launched and then quietly scrapped without anyone answering for errors or failures to deliver on promises? How can we know, in either NYC or Seattle, how automated systems factor into decision-making processes if we review only the technologies and their results, and not how they are accepted or refused inside municipal agencies? Without considering the role of bureaucratic institutions, accountability practices are unlikely to identify the incompetence, abuses of power, and malpractice that unfold as municipal agencies fold automated decision systems and surveillance technologies into workflows (Brayne and Christin, 2021; Levy et al., 2021).
I thus propose to complement transparency and impact assessment forms of algorithmic accountability at the municipal level with a political-economic framework. By this, I mean a model of algorithmic accountability that starts from the assumption that automated decision systems are designed and used by entities whose practices reflect particular economic and political interests. This starting point enables a focus on disjunctures between civic concerns and the interests of business and government (Gandy, 1992). Beyond the intended objectives and uses of automated systems, a political-economic approach to algorithmic accountability recognizes the broader goal of the public agencies and private organizations that use and design these systems: an ongoing process of data production and commodification (Bloch-Wehba, 2021; Gandy and Nemorin, 2019; Sadowski, 2019). Thus, it aims to examine how automated decision systems are justified in the first place and how they are tested, negotiated, used, and sometimes discarded inside public agencies. This perspective sets itself apart from transparency and impact assessments by explicitly seeking to reveal the power asymmetries, lapses in contractual enforcement, and bureaucratic incompetence that mark relationships between municipalities and industry.
An example from San Diego illustrates the necessity of a political-economic approach to algorithmic accountability. After the city government procured and installed an automated system (smart streetlights), community groups revealed the failed promises and overreach of this particular technology. In 2016, the City of San Diego partnered with Boston-based GE Current to launch a Smart Streetlights project that used the company's CityIQ smart city management platform. Streetlights were equipped with cameras, microphones, and object detection algorithms intended to automate the dimming and brightening of lights while collecting anonymized data. The city government touted smart streetlights as a cost-saving and sustainability project. It promoted the data collection as a public good whereby local developers and residents could use the data to create new applications or find novel solutions to San Diego's problems.
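To make the stakes of this dual-purpose design concrete, consider a speculative sketch of the logic involved; CityIQ's actual internals are not public, and every name, field, and value below is invented:

```python
# Speculative sketch of a dual-use sensor pipeline; CityIQ's internals are
# not public, and all names, fields, and values here are invented.
import datetime

def control_brightness(detected_objects: list[str]) -> int:
    """Set lamp brightness (0-100) from the current detection result."""
    # The advertised, sustainability-oriented purpose: brighten on activity,
    # dim when the street is empty. This task needs no data retention.
    return 100 if detected_objects else 30

def log_detection(sensor_id: str, detected_objects: list[str]) -> dict:
    """Retain the same detection event as a queryable record."""
    # Retention is where contested uses begin: stored records, not the
    # dimming function, are what can be repurposed well beyond lighting.
    return {
        "sensor": sensor_id,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "objects": detected_objects,
    }

event = ["pedestrian", "vehicle"]
print(control_brightness(event))       # the advertised use: adaptive lighting
print(log_detection("SL-042", event))  # the contested use: retained records
```

The point of the sketch is that the advertised function and the surveillance capacity share one pipeline, so auditing the lighting feature alone would say little about what the retained data make possible.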
A year later, it became clear that the San Diego Police Department, rather than developers or residents, had become a significant user of smart streetlights without any oversight (Whitney et al., 2021). An academic report showed that smart streetlights’ transit and mobility data were highly unreliable and not used by citizens or entrepreneurs (Irani and Whitney, 2020).
Following robust community advocacy, journalistic investigation, and city council lobbying, the municipality decided to shut down the smart streetlight cameras in September 2020 (Whitney et al., 2021). It became clear, however, that unplugging the cameras would leave the streetlights dark, since both relied on the same power supply (Marx, 2020). Instead, the cameras continued recording but the data were accessible only to the private company, not to the city or the public. Following this revelation and ongoing advocacy, the smart streetlights were eventually turned off and, in November 2020, civic organizers successfully pushed for an oversight ordinance for surveillance technologies in the city (Irani and Alexander, 2021).
The public controversy around San Diego's smart streetlights exposed the broken system behind acquiring smart technologies in partnership with tech companies, which use cities to test work-in-progress technology products while dictating the terms of collaboration (Irani and Alexander, 2021). It showed how weak expertise and negotiating capability inside municipal agencies shaped the adoption and use of these technologies, and it laid bare the gap between what is publicly hyped about such projects (innovation, sustainability) and their actual uses (unaccountable surveillance). The smart streetlights appeared not as a complex socio-technical system but as a half-developed technology prone to failure, whose failures carry severe consequences in and outside of city government.
Would accountability have been secured if algorithmic transparency or impact assessments had been in full force when San Diego adopted smart streetlights? Those strategies might not have produced documentation complete enough to investigate the layers of power asymmetry, lack of contractual enforcement, and bureaucratic incompetence we witness in this case. It turned out, for example, that city officials themselves lacked the capacity to enforce legal contracts or to evaluate the operational relationships between public agencies and private companies. If accountability documentation reveals only computational systems and their impact on city residents, the public cannot see inter-agency relationships or how a project comes to be quietly shut down.
San Diego's civic coalition did not frame their interventions as a matter of algorithmic accountability but I suggest their method for holding municipal institutions' automated systems accountable could complement algorithmic transparency and impact assessments. Many U.S. cities that test or adopt automated decision systems have had experiences similar to San Diego's (Baykurt, 2019). They display various levels of technological naivete, intentional or unwitting opacity, a lack of concern for equity and civic participation, and layers of incompetence, discretion, and impotence inside institutions. By clarifying "the distribution of power" (Kasy and Abebe, 2020) in the process of acquiring and using algorithmic systems, what I call a political-economic view of algorithmic accountability could examine (1) whether automated decision systems do what their designers and users promise they will do, (2) what kinds of questionable (not just discriminatory but also irrational) assumptions are built into their design, and (3) which areas of public life they might or might not be suitable for. A political-economic framework considers the failures and misuses of automated systems even before they are adopted and asks (4) how trials and errors with new technologies shift work priorities and funding decisions inside public agencies and (5) how responsibility and consequential actions are displaced once agencies adopt automated technologies. Explicitly tying the adoption of automated decision systems to commodification and datafication (Gandy and Nemorin, 2019) enables detailed questions about what happens to extracted data or how data travel across different markets and bureaucratic organizations.
I propose this political-economic view of algorithmic accountability for both public administrators who procure and use these systems and civic groups that critically assess their design and adoption. If public administrators incorporate a model of algorithmic accountability that closely observes what happens inside organizations as they adopt automated decision systems, they will place the review of algorithmic systems within existing institutional checks and balances. By shifting attention from the promises of computational systems toward institutions that design, procure, and use them, a political-economic approach asks bureaucracies to see mistakes and risks not as aberrations but as critical consequences of adopting automated systems. It pushes them to justify and answer for the uses of these systems even before adopting them. It would be advantageous for civic groups to embrace this approach as well, since internal reviews might not be enough to fully account for what happens in public agencies.
What I propose for city officials, activists, and scholars is to formalize a model of algorithmic accountability that routinely “studies up” and documents how bureaucratic actors make decisions to incorporate, negotiate, and push back against automated decision systems (Barabas et al., 2020; Nader, 1972). This complementary model of accountability is already mobilized by activists who organize against technologies of predictive policing by offering a systemic critique of police accountability and the criminal justice system rather than solely focusing on the technologies and their properties (Benjamin, 2019b; Hamid, 2020).
A similar model is crucial inside and outside municipal agencies, where quietly tested automated systems displace bureaucratic discretion to less accountable areas (Brayne and Christin, 2021; Horgan, 2022; Schwarting and Ulbricht, 2022). A political-economic strategy can better trace connections between municipal algorithmic systems and automated carceral technologies (Roberts, 2019). In addition, many municipal systems are embedded within various structures of classism, racism, and sexism, which calls for a robust review at every stage of adoption, from procurement to design to policymaking. By examining how municipal agencies justify using automated decision systems in underdeveloped forms and observing how these technologies change organizational practices and shift responsibilities, algorithmic accountability could expand to encompass the broader set of systemic issues for which unexamined, incomplete automated decision systems are assumed to be solutions.
Discussion
Using two U.S. cities as examples, I have distinguished between two recent algorithmic accountability practices. In the case of NYC's task force, algorithmic transparency takes a technology-centric view of accountability, intending to reveal more about the automated decision systems taken up inside public agencies. It begins with the assumption that these municipal systems are not only minimally known to the public but also operate opaquely. Algorithmic impact assessments such as Seattle's, on the other hand, turn their focus toward people, particularly people at the receiving end of decisions made by automated systems. Starting from the assumption that the consequences of automated decision systems are not uniform, impact assessments aim to identify the differential harms and consequences of algorithmic tools. As we see in Seattle, these people-centric initiatives seek to educate residents about how technologies operate and how to ask the right questions to understand their impact.
These approaches are necessary steps toward algorithmic accountability. However, as the example of San Diego shows, they may fall short of documenting how municipal agencies work with tech companies and do not formalize any processes that reveal what kinds of expertise and power dynamics underlie relationships between these public and private institutions. They also lack the ability to lay bare the ways agencies justify adopting these technologies and to track the circulation of data, decisions, and responsibilities within bureaucracies.
The third form of accountability I propose focuses on the in-between space: bureaucratic organizations that design, adopt, and test these systems. It starts from the assumption that these systems are prone to making mistakes and that those who design and use them have their own economic and political interests, which may or may not overlap with the intended goals of automated systems. It raises questions: Is this particular technology necessary for the task? How do trials and errors with these new technologies shift work priorities and funding decisions inside public agencies? What happens when these systems do not deliver on their promises? How do data travel across different markets and bureaucracies? How are responsibility and consequential actions displaced once automated technologies are adopted? This is not a comprehensive list but such questions offer a concrete way for public administrators and civic groups to address specific malfunctions and misuses of automated decision systems. They also emphasize reviewing the political and economic interests served by the ongoing datafication and commodification these technologies enable.
Existing frameworks of digital or AI governance in the public sector already grapple with some of these questions (Dunleavy et al., 2006; Gasser and Almeida, 2017). These reviews, however, often happen either at the start of procurement processes or as part of annual reviews and do not diligently document what happens inside public agencies as they use such systems. Reviews at the procurement stage may depend on third-party vendor expertise, thereby failing to account for the ways city officials change or adapt policy as they incorporate such systems into decision-making (Mulligan and Bamberger, 2019). They also may not question ongoing commodification and datafication in public policymaking if they view the adoption of automated decision systems as inevitable. A study of U.S. federal agencies that use automated decision systems found substantial in-house development of algorithmic systems inside government offices (Engstrom et al., 2020). It is not clear to what extent municipal agencies have similar creative capacity, nor are there established processes to review the in-house design and use of municipal algorithmic systems.
Automated decision systems also shift agency policymaking at macro, meso, and micro levels at different stages of their life cycles (Veale and Brass, 2019). A political-economic model of algorithmic accountability proposes reviewing these new technologies more comprehensively while recognizing that, at each stage, accounting for such systems' impact in and outside municipal agencies will be partial and contingent (Amoore, 2020). Tracking the life cycles of algorithmic systems both internally, by city officials (U.S. Government Accountability Office, 2021), and externally, by civic groups, would better document how automated decision systems factor into municipal decision-making and keep in view potential conflicts between political-economic interests and civic concerns.
A complementary perspective on algorithmic accountability that spotlights bureaucratic institutions is also necessary to establish stricter processes of accountability regarding misuse, failure, or abuse. By treating complex computational systems as unique, algorithmic transparency and impact assessments tend not to offer robust mechanisms for responsibility and redress. Accountability, however, is as much about consequences and correction as it is about scrutiny and judgment. The possibility of positive and negative consequences (Bovens, 2010) establishes a more committed accountability relationship. That is especially crucial in the case of algorithmic systems, where accountability can be used not just to control and justify but also to reflect and learn collectively (Olsen, 2014).
In practice, accountability gaps emerge from the obscurity of violence and violators within automated decision systems, corporate secrecy, weak expertise inside public agencies, and an overall lack of remedies for harm caused by technologies (Land and Aronson, 2020; Wieringa, 2020). Without an effective legal or rule-making mechanism for consequences in cases of harm and failure, transparency and impact assessments do not produce robust accountability relationships. In contrast to conceptualizing algorithmic systems as an exceptional category, a political-economic approach that locates algorithmic accountability within existing structures of accountability lends itself to clearer mechanisms for responsibility and redress.
Conclusion
Algorithmic accountability has emerged as a response to the rise of automated decision systems in social life, where algorithmic systems decentralize, distribute, and obfuscate governing relationships in public and private contexts. Everyone agrees we need to hold algorithmic systems accountable but there is no settled regulatory method. To understand the gap between abstract debates and empirical trials, this paper has reviewed how U.S. cities have attempted to formalize algorithmic accountability at the municipal level. Using the examples of NYC's automated decision systems task force, Seattle's surveillance ordinance, and the civic project of the Algorithmic Equity Toolkit, I argue that current policy initiatives on algorithmic accountability are very much driven by the work of advocacy groups and academic discussions of accountability. In NYC, the focus was on identifying the automated decision systems used inside public agencies and cataloging how to regulate them (algorithmic transparency) while Seattle emphasized the disproportionate harms and risks borne by different populations through impact and equity reviews conducted by both public agencies and community members (impact assessments).
Algorithmic transparency and impact assessments are useful but they may not pay enough attention to how public and private institutions acquire and use technological systems. Many local governments adopt automated decision systems in half-developed forms or test them in partnerships with tech companies. Mostly experimental, these tools often fail to work as intended and are then discarded, sometimes quietly, sometimes spectacularly. I thus suggest expanding the scope of algorithmic accountability efforts to review public institutions and how they make decisions about and with automated decision systems. Even when algorithmic systems work without glitches (which is rare), they do not work autonomously. Public officials often must strike a balance between the results of algorithmic systems and their own decision-making (Brayne and Christin, 2021; Eubanks, 2017). In addition to formalizing algorithmic accountability to reveal the black box of algorithmic systems and their effects on city residents, I propose to systematically document how public agencies adopt these systems and change their organizational practices in the process.
A more comprehensive framing of algorithmic accountability may sound too ambitious for policymakers when we know that even the limited versions of algorithmic transparency and impact assessments have yet to be widely adopted in U.S. cities. Future research could address the motivations and processes necessary for incorporating a political-economic view into existing municipal decision-making. Through interviews and ethnographic research, scholars may further identify dissonances between different models of algorithmic accountability and their implementation. The scope of this empirical assessment could be expanded to countries with different socio-political structures to identify which bureaucratic settings better support different types of municipal algorithmic accountability.
An expansive approach to algorithmic accountability, one that also zooms in on institutions, could shift bureaucrats from hand-wringing over what counts as an algorithmic system toward taking responsibility for algorithmic decisions. By more precisely naming the actors responsible for automated decision systems, an expanded vision can more directly tackle what most practices of algorithmic accountability neglect: setting up precise mechanisms for consequences. Whether it centers technologies, effects, or institutions in analysis, algorithmic accountability will be incomplete if it stops at documentation and revelation. To achieve democratic accountability, as opposed to bureaucratic exercises, municipal practices need to establish political mechanisms for how actors will take responsibility for failures, redress harms, and collectively learn from failed experiences with automated systems.
Acknowledgments
I would like to thank Christopher Ali, Seyram Avle, Joshua Braun, Martha Fuentes-Bautista, Devon Greyson, Carin McCormack, Tim Wood, Weiai Wayne Xu, Kevin Zheng, and anonymous reviewers for their helpful comments.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received funding from the Center for Advanced Internet Studies (CAIS) in Germany for the publication of this article.
