This article is part of a special theme on The Black Box Society. For a full list of all articles in this special theme, see: https://journals.sagepub.com/page/bds/collections/revisitingtheblackboxsociety
Introduction
The artificial intelligence program will be watching, we won’t be.
Josh Sattler, Darwin Council
An increasing number of sensing technologies now interlace the city. These entities capture and convert urban dynamics into a series of simulated data flows that can be profiled and manipulated by an assemblage of non-human algorithms. Such agents, infrastructures and processes are discursively bundled up in promissory discourses which emphasise the inadequacy and archaic nature of previous governance arrangements, while simultaneously accentuating the virtuousness of data-driven modalities. The technologies, platforms and processing codes are idealised in celebratory, abstract and reductionist terms, with the city presented as being more intelligent, responsive, efficient, innovative, democratic, safe and resilient as an outcome of its ‘smartification’. As Kitchin (2011) has argued, a key imperative of these transformations is to render the city programmable so that it can be subject to emerging forms of technoscientific management. Digital technologies, artificial intelligence and automation are touted by public agencies and commercial companies alike as being the pre-eminent tools for solving all manner of known and indeterminate social and ecological crises associated with ‘cityness’: poverty, crime, congestion, pollution, wastage, and so on (Sadowski and Bendor, 2019; Sassen, 2013). In the context of the unfolding COVID-19 pandemic, we can observe city authorities in countries like Russia turning to facial recognition software solutions as a means to govern people’s movements and mobilise swift responses to those violating the lockdown restrictions.
As a consequence of these sociotechnical imaginaries and transformations, everyday surveillance work in the city is increasingly enacted by non-human algorithms. We might conceptualise these entities as machinic flâneurs that engage in practices of distanciated flânerie: subjecting the flows of the urban environment to a dispassionate, mechanistic and calculative gaze. In contrast to their human and pre-mediated precursors, this witnessing is orientated less by pleasure, aesthetics, spontaneity and curiosity, than by more instrumentalised motivations: to objectivate a growing collection of urban phenomena and circulations so they can be better governed. While the flâneur imagined by Walter Benjamin was physically, temporally and viscerally located in the space and action encountered, the machinic version conversely projects a presence into the space while assuming an intangibility from – and indifference to – it. Acquiring stimulus from the manifold sensors and digital technologies that track and intermediate urban relations, the algorithms constituting practices of distanciated flânerie are expected to perform several analytical repertoires simultaneously, from facial, number plate and even silhouette recognition to congestion tracking, social interaction analysis and remote coordination of critical infrastructures. The rationale for this software is to further economise, commercialise and securitise urban dynamics, and to generate insights for actionable responses. And yet, the nature, arrangement and operativity of these mechanisms remain entirely unintelligible and unaccountable to the publics and public spaces they routinely expose and modulate.
This paper provides some theoretical reflections on the nascent forms of algorithmic practice materialising in the Australian cities of Darwin and Perth, and some of their implications for urban relations and social justice. It looks at the idealisation – and operational black boxing – of automated watching programs, before considering their impacts on notions such as ‘the right to the city’ and, in some instances, ‘the right to the face’. It will argue that the turn to facial recognition software for the purposes of automating urban governance – and exercising social control – reconstitutes the meanings and phenomenology of the face, giving rise to a dynamic biopolitics.
The work of watching in the black box city
Further fine-grained research is required to open more black boxes to reveal their divergent contents, particularly in an age when the making and profiling of visibility fields … is progressively the work of the computerized algorithmic code; a watching repertoire that is pre-programmable. (Smith, 2014: 162)
A few years back, I concluded my ethnography on the backstage watching practices and cultures of CCTV camera operators in the UK by forecasting the progressive displacement of these human surveillance workers by non-human algorithmic equivalents. The research was conducted at a time when notions of the ‘sentient’ and ‘smart’ city were starting to grip the attention, parlance and activities of urban planners and managers (as well as industry technologists and marketers). Significant developments were occurring in the design and uptake of networked sensor devices, and machine learning software and data science were becoming increasingly potent. Part of this drive to marginalise the presence of human surveillance operators in various ‘centres of calculation’ (Latour, 1987) was undoubtedly motivated by a desire to lower operating costs and maximise system efficiency levels, but also by cunning marketing on the part of those vending the algorithmic ‘solutions’. Algorithms, for instance, do not need to be remunerated for their continuous hours of observation and notification work; they do not belong to unions; they do not get sick; and they do not request leave. Moreover, they are manipulable and only require minimal supervisory oversight.
But the turn to software solutions was equally precipitated by their statistical basis and thus scientific genealogy, and their calculative and unfeeling nature. That is to say, algorithms do not imbue what they witness with emotion and meaning, and they do not get adversely affected by their labour. Instead, they can simultaneously and dispassionately enact multiple repertoires of listening and watching, before automating a response.
All of this is in contradistinction to the discerned physiological and psychological limitations of their human counterparts, in terms of the constraints of (a) the human eye and brain, which can only focus on and process limited stimuli for restricted periods of time, and (b) humans performing pattern recognition at scale (identification work) and memorising – and instantaneously recalling – infinite details.
Indeed, video analytic software has an additional entrepreneurial advantage vis-à-vis the potentially infinite number of hours of footage generated from CCTV camera systems that has historically gone – and continues to go – unseen and thereby wasted: it has the capacity to extract surplus value from rapid processing of captured visual images, be that in retrospect or in real time (qua the correlations identified, intelligence generated and time/labour savings made). Moreover, various high-profile data breach scandals in recent times have exacerbated public concerns regarding the conduct of human surveillance workers.
A further, perhaps more significant, factor explaining the replacement of humans by non-humans in practices of watching, beautifully illustrated in the above epigraph, is the notion that it is more palatable – maybe even comforting – for the watched to imagine an automated computer program shadowing their movements, than an unknown, and intrinsically subjective and judgemental, set of human eyes. There is an interesting evocation in Josh Sattler’s opening statement that scientifically derived, machine-driven profiling has a depersonalised and value-free orientation, and it is thereby more impartial – and less intimidating and fallible – than a human profiler. Publics are invited to invest trust (and substantial taxes) in the purity of the numbers and calculative rules underpinning processes of automation, and are presumed to prefer the more anonymised relations of ‘civil inattention’ (Goffman, 1972: 385) these codes purportedly generate. Machinic automation, it is intimated, has a virtuousness about it, helping curb the arbitrariness of human interference and mediation.
Frank Pasquale’s (2015) influential book, The Black Box Society, is instructive in this regard, offering a compelling account of the political, economic, social and cultural circumstances that have given rise to the pre-eminence of code as a major force of social organisation, while normalising the kinds of techno-utopian sentiment and solutionism seen above. Pasquale uses the notion of ‘black boxing’ to describe how social and economic power is exercised and leveraged by corporations and institutions in ways which have become increasingly complex, opaque and unaccountable. A notorious example of this process was the Australian government’s use of the now discredited ‘Robodebt’ program, a scheme which saw thousands of individual welfare recipients algorithmically lumbered with erroneous debts and penalties that derived from unlawful practices of ‘income averaging’.
Pasquale elucidates the latent nature of algorithmic processes, and the logics of secrecy and illegibility they typically embody, while also highlighting their paradoxical visibility as discursive typifications in marketing and governmental discourses. He points to the hyped claims made by advocates of algorithmic solutions, who accentuate the superior scope, objectivity, efficacy – and scientific foundations and prophetic features – of machinic and artificial intelligence, and who praise – and evangelise – the virtues of automation. As he notes, ‘The allure of the technology is clear — the ancient aspiration to predict the future, tempered with a modern twist of statistical sobriety’ (Pasquale, 2015: 216). Crucially, Pasquale demonstrates the way these entities operate in accordance with how they have been historically, socially and organisationally scripted, functioning to both compress and distort material phenomena and processes into metrics, scores and profiles, which tend to reflect, reify and reproduce the interests of privileged groups and which ensure resources and penalties are unevenly distributed. That is to say, major insights from his analysis are the invisible, and yet ubiquitous, character of algorithmically driven power; the potential for these systems to intensify social injustices; and the cunning promotional discourses which frame algorithms as a panacea to social crises, helping them colonise social relations and activities in proliferating domains. Neoliberal urbanism is a very receptive milieu for the implementation of such infrastructure, and this paper adapts Pasquale’s broader metaphor to conceptualise ‘the black box city’, where urban flows and relations are increasingly registered, recognised and modulated by agentive sensors animated by unseen software.
Importantly, the black box city manifests under a corporatised nomenclature of ‘smartness’, the guiding mantra of which is enhanced visibility, automation and optimisation of urban processes, and the progressive displacement of humans from the apex of governing hierarchies. An outcome of this arrangement is the steady disappearance of accountability in urban governance.
In what follows, I wish to suggest that contemporary regimes of data-driven/non-human urban governance manifest an absent–presence. This quality makes them exceptionally hard for outsiders to tangibly observe, understand and interrogate. Thus, although the devices (e.g. smartphones, air sensors and CCTV cameras) that intermediate everyday urban actions have a discernible, if increasingly unexceptional, material presence in the city, the immaterial codifications (what Mackenzie, 2006 refers to as ‘secondary agency’) orientating them are distinctly absent. As devices, data and codes stretch between theatres of the street and centres of calculation, they come to embody both a foregrounded (public) and a backgrounded (private) form. Moreover, when melded to the reductionist, promissory discourse of smartification – where smartness is a referent for ‘virtue’ and is constitutive of a set of dividends – the complex technical scripting and operating procedures of these systems are further black boxed under a veneer of progress and deliverance. This discursive framework serves to obscure: (a) how such codifications are derived from selective and simplified constructions of the past and (b) the kinds of future-ing they do and will likely contribute to. Somewhat ironically, given the metaphor of enlightening which accompanies the discourse of smartness, this leaves publics in the proverbial darkness regarding how non-human surveillance work is actually being conducted, and without the necessary information – or deliberative scope – to assess its purpose, value and fairness; or to effectively challenge its presence and legitimacy.
While significant, this situation is not new. The details of previous urban governance measures, like the introduction of CCTV camera networks into public space, were also rendered opaque to the public: in terms of the evidence-base and methodologies from which assertions were derived; and the broader funding, operating and accountability arrangements of the systems (Smith, 2014). It has since transpired that the hyperbolic claims made by self-interested parties regarding CCTV’s impact on reducing crime and disorder have not been realised by the technology’s human-mediated operating realities (see Norris and Armstrong, 1999; Smith, 2014). As a consequence, the vast sums of public monies invested and private profits made in the course of their implementation and operation have been called into question.
And yet, notwithstanding the apparent failure of this technocratic policy to actualise the promises made, we see a similar modus operandi being mobilised by urban authorities and legislators to craft their mono-vision of city smartness: a smartness that is almost entirely contingent on the assumed superior techno-cognitive capacities, attentiveness and responsiveness of non-human agents, be they data or codes. A distinguishing feature of this deterministic discourse is the progressive purging of human contaminants – and thus perceived fallibilities – from the machinic repertoires of surveillance work. Notwithstanding the long tradition of scholarship which accentuates the socio-cultural shaping of all technical apparatuses – from their design and development, to their implementation and operationalisation – there is an important methodological and ethical implication of this move from human to non-human forms of watching. While the former has been physically situated behind screens in closed settings, there has always been the possibility of doing empirical studies on the symbolic and material dimensions of surveillance practices, exploring the situated meaning worlds and socio-technical actions of embodied agents as they engage in surveillance work, either in situ or retrospectively. But with the rise and use of the latter, which are predominantly immaterial and indeterminate in form and action, this potential has been practically foreclosed.
The machinic flâneur: Automating control and its implications for ‘the right to the city’ and ‘the right to the face’
In recent times, a number of newspaper articles and local council press releases have reported the turn to data-driven and algorithmic solutions by urban officials in the Australian capital cities of Perth, WA, and Darwin, NT. Both of these places are experimenting with and implementing various digital and sensorised measures, and algorithmic analytics, in a bid to generate what proponents of these systems present as a set of efficiency, security and insight dividends:
We feel this will enable us to respond quicker to community safety concerns and also enable us to react to situations, such as missing children et cetera … in a quicker, more efficient way … Facial recognition technology assists frontline staff in responding faster to potential incidents and undertaking preventative measures.
Daniel High, City of Perth
[It will tell us] where people are using Wi-Fi, what they’re using Wi-Fi for, are they watching YouTube etc, all these bits of information we can share with businesses … We can let businesses know ‘hey, 80 per cent of people actually use Instagram within this area of the city, between these hours’.
Josh Sattler, Darwin Council
Another rationale offered for these systems is the symbolic and economic importance of leading what might be framed as the national ‘race to smartness’. Perth authorities have been deploying facial recognition software on some of their public space CCTV cameras, as well as developing video analytic capabilities to count and track people, vehicles and cyclists, and predict crowd and anti-social behaviour.
Darwin, meanwhile, has installed ‘poles fitted with speakers, cameras and Wi-Fi [which] allow council to gain data on how many people walk on what footpaths and where they use certain websites and apps in the city’,
and it has introduced 24 environmental sensors and smart parking sensors to monitor various urban flows. These developments, and their situation within particular models of neoliberal urban governance, bring significant implications for meanings and uses of public space, as well as for democratic principles and civil rights such as privacy, anonymity, accountability and equity. They enable urban authorities to use non-human surveillance workers to ambiently, but anonymously and unilaterally, track the mobilities of citizens as they enter and transit through space. On some of these systems, biometric capabilities enable the linking of scanned faces to various backgrounded databases, the one in Perth tellingly described as a ‘Black Watchlist’: ‘It proposes to use the system in East Perth to detect known troublemakers and people wanted by the police on a council-run “Black Watchlist.”’
Who is on the list and for what reason, and how/whether people can get off it, is a matter veiled in opacity. By virtue of this, and in combination with other system capacities which permit the scraping and assembling of metadata extracted from smartphones, these machinic flâneurs will be able to gradually detect and memorise the identities and potential habits of all street actors, and likely conduct forms of trend analysis on aggregations, breaking down who uses what apps, when and where. Crucially, it appears that this can be done without the knowledge and consent of those from whom the data derives, and over whom such currencies can be consequently leveraged in differentiated and differentiating ways.
In the case of Perth, the turn to recognition software for urban governance initiates a complex biopolitics of the face. The face is transformed from its being a fleshly, expressive and indeterminable medium – something we are all literally and metaphorically attached to – to its becoming an immaterialised static object for targeting, measurement and identification – something we are separated from as it is converted into a set of machine-readable binary numbers. In this process, the meaning and phenomenology of the face is reconstituted. In scanning crowds of faces or faces in the crowd, the recognition software instantaneously detaches impressions of the faces encountered. It converts these into a disembodied and one-dimensional simulacrum which then enters into a hidden and encoded dialogue with the virtualised faces situated in the database. If a match exists, the disembodied face is instantaneously identified, meaning the embodied face can then be anchored to an institutionalised identity, but also tracked in real time. Therefore, the facial simulacrum helps to betray its bearer’s presence and biography, combining with other techniques to detect and modulate mood or to determine how that individual will be treated. The asymmetrical and faceless nature of these machinic programs of recognition unsettles the notions of civil inattention and bodily sovereignty that Goffman (1963) noticed in his studies, and the prioritisation given to pattern recognition renders them amenable to ideas/ideals from phrenology and physiognomy. There is a distinct sense that a person’s sovereignty over the materiality of their face is progressively eroded by the presence and agentive powers of its simulated derivative, an entity as likely scraped from previous social media activity as scanned from official documentation. 
This situation evidently produces significant power differentials, leaving the exposed subjects without recourse to contest the biobanking of their images for recognition analytics, revenue generation and the flexing of biopower. As Civil Liberties Australia president Kristin Klugman recently told a Parliamentary inquiry:
To override a person’s consent in the name of safety and security is not respecting Australian adults; it is treating them as you would a child and so is the very definition of paternalism … it is only a matter of time before obscuring one’s face in public becomes a crime.
In this way, algorithmic governance may generate not only forms of facial vulnerability and estrangement (where the scanned face betrays bodily presence, generates misrecognition or is deterministically profiled), but also facial artifice, where individuals come to develop tacit and artful ways of de-facing (obstructing facial accessibility) and re-facing (changing facial appearance) in order to subvert the processes of recognition which leverage these modes of biopower. While biometric surveillance systems might assume a stable object to be instrumentally determined, people are quite literally attached to the materiality of their face, and they subjectively experience this bodily border as the primary stage on which social relations and interaction orders are intermediated. Indeed, people invest considerable time working on their faces (beautifying them, as well as manipulating the sentiments and impressions they communicate), and some are likely to experience the indiscriminateness, formality and asymmetry of facial recognition analytics as a form of dispossession. Moreover, individuals believe in their autonomy to subjectively disguise, decorate, cover or cosmetically alter the appearance of their faces contingent on social situation, and demand barriers to facial accessibility for a range of cultural, religious, biomedical, psychological or interpersonal reasons. Faces, therefore, are not merely passive or innate objects for mechanised modes of scanning and tracking, but rather complex masks that are imbued with considerable symbolic meaning and social significance. In this way, facial recognition programs fundamentally impinge on the right to the face, and for this reason – and the myriad ways they unilaterally mine and exploit the face – we can expect these technologies of governance to mobilise considerable resistance.
More than merely recognising and cross-matching the faces and appearances of moving entities with their static digital identifiers, the machinic flâneurs are capable of projecting a disembodied voice onto the street via the audio speakers, and enacting virtual fencing around particular places regarded as high risk or value. If these perimeters are transgressed, an automatic response is duly triggered. As City of Darwin's General Manager of Innovation, Growth and Development Services explains, ‘We’ll be getting sent an alarm saying “there’s a person in this area that you’ve put a virtual fence around” … boom an alert goes out to whatever authority, whether it’s us or police to say “look at camera five.”’ This is a prime example of commercially generated code colonising public space with proprietary models and rendering it both extractable and defensible. This kind of artificial ‘walling’ or arbitrary division of the city undermines urban ideals such as the right to the city, and it impinges on civic values and liberties such as freedom of movement and freedom of assembly and association. With all these devices exhaustively tracking and tracing social activity, it is not unreasonable to refer to the post-public surveillance space of contemporary cities, where bodies – much like vehicles entering toll roads – must pay a price for their presence: that levy being hypervisibility qua the data traces they continuously leak, but also the progressive reduction of the person and their biography to a sequence of de-contextualised flows. A person’s inability to evade monitoring and tracking, and need to manage the performativity of leaky data impressions they cannot fully know and control, has implications for their right to rest and leisure: to access and occupy a ‘free’, anonymous and restorative space devoid of surveillance, and various ‘data work’ obligations. 
As I have previously argued (Smith, 2016), surveillance work is performed as much by watchers as by the watched, and it is an exhausting, stress-inducing and often immaterial mode of labour. Given previous empirical research has consistently documented the subjectively driven and thereby prejudicial nature of surveillance operativity, it is not hard to imagine distanciated flânerie being configured to differentially profile targeted faces and bodies, breaching in the process the right to equality:
The gaze of the cameras does not fall equally on all users of the street but on those who are stereotypically predefined as potentially deviant, or through appearance and demeanour are singled out by operators as unrespectable. (Norris and Armstrong, 1999: 201)
New surveillance technologies are regularly tested on marginalised communities that are unable to resist their intrusion. (Magnet and Gates, 2009: 11)
When considering how these governance programmes are likely to be arranged and how their everyday operations will be socially mediated and felt, it is worth highlighting a few contextual details. Located respectively on Australia’s furthermost north and west coasts, both Darwin and Perth have materialised as municipalities on the back of centuries of white settler colonialism, where practices of violence, dispossession, displacement, persecution and assimilation have been effected by the white colonies on Indigenous peoples. In both areas, there are high concentrations of Aboriginal and Torres Strait Islander residents relative to the national population, and significant levels of racialised division and inequity, in terms of the disproportionate numbers of Aboriginal people who are incarcerated, homeless, on benefits or experiencing chronic health issues.
As Lobo writes (2013: 456) with reference to Darwin, ‘global and local discourses interpellate asylum seekers and socio-economically disadvantaged Aboriginal people as dehumanised, burdensome welfare dependents and toxic bodies who invade and contaminate the space of the city and nation’. Given the cultural histories and social demographics of each city, and the influence of Chinese governmental models on the ‘Switching on Darwin’ scheme
– where municipal authorities in China are experimenting with facial recognition and other surveillance programs to enact a social credit system for dealing with social, political and viral risks – it is perhaps not unreasonable to assume that these systems will disproportionately target and punish people of colour, and the urban poor more broadly. But because of the algorithmic and closed nature of the governance circulations, this potential outcome will be difficult to ascertain, as it is technically and legally challenging to hold the secondary agency of coded decision-making to account. These concerns are further concretised when one considers the lack of (a) reliable empirical evidence presented to show that these systems are needed, will work and are subject to appropriate oversight, and (b) adequate public engagement and community consultation conducted in the lead up to these technologies being deployed. As The Guardian (June 12, 2019) recently reported:
The [Perth] council confirmed that authorities would not be required to provide any kind of warrant when asking for the facial recognition capabilities to be switched on, only to provide an image or series of images of a person of interest.
The facial recognition aspect of the new camera network was news to me. I feel that the community should have been better informed.
Lyn Schwan, East Perth Community Safety Group Secretary
Very few people in the community are aware of this trial … We haven’t been asked for consent. This is our own biometric data, as unique as our DNA, and there has been no consultation or permission obtained from Perth’s constituents to capture this data or track us as we walk down the street. There is no opt-out function, and no choice for participation by minors. My children are supposed to grow up under these circumstances. I haven’t consented to this, and neither have they.
Lauren Mac, Perth Resident
But the beleaguered [Perth] authority … is refusing to say exactly which cameras will screen citizens because of ‘security reasons’.
Moreover, at the time of writing, Perth is being temporarily managed by Government-appointed commissioners following the suspension of the city’s council and mayor in early 2018 on various allegations, and so there are also issues of due process and accountability at stake in terms of how important social and infrastructural decisions are formulated and justified.
Conclusion: The struggle for rights in the black box city
This paper has analysed some of the drivers and implications of surveillance work being outsourced to non-human algorithmic contractors to perform. It has conceptualised how these corporatised and machinic flâneurs engage in practices of distanciated flânerie: subjecting relations and flows within the urban environment to a dispassionate, calculative and expansive gaze. The focus has been on the emerging black box city, where techno-utopian discursive frameworks and economic/extractive models of smartness reign supreme, and where a biopolitics of the material and immaterial face is eventuating as a consequence of desires to better recognise and govern bodies moving in space. This biopolitics concerns the blanket machinic tracking of publics through the ‘wearable technology’ of the face: an immoveable and inescapable communicative medium that is intrinsic to the embodied subject, and that, as Erving Goffman was at pains to illustrate, is sacred to the socialities of everyday life. We have considered a few of the many social impacts and consequences of these transformations, not least how such diagrams – by taking faces and movement as their objects of governance – encroach on civil and bodily liberties, and are socially and culturally situated within highly racialised and neoliberalised histories which they are only likely to reify and perpetuate in various ways. But of course, they also act to transform the phenomenology and politics of being in space: where physical presence is exhaustively tracked and registered via the contours of the virtualised face (so material faces can be individually identified and subjects nudged to different ends depending on how they are scored and what aggregations they are consequently allocated to), while more specialised assessments and inferences can, on the back of phrenological and physiognomic knowledges and practices, be potentially made about belonging, disposition, mood, intent, wellness and the like.
It will likely prove difficult to conduct close observation on regimes of algorithmic governance for a number of methodological and political reasons. For instance, researchers will need to have requisite technical competencies to apprehend the codes scripting the machinery. And they will need access to gatekeepers who are comfortable with the idea of accountability, and who will tolerate the inquisitive – and potentially troubling – presence of an outsider. Moreover, the highly abstract nature of algorithmic governance makes it hard for publics to comprehend, let alone verbalise, how they are being dominated through these techniques. Notwithstanding the promissory techno-utopian discourses that emphasise the inevitability and efficacy of these modes of urban governance, in practice – and on the basis of much STS research – we can anticipate machinic operations being mediated by considerable negotiation, messiness, failure, inadvertence and friction, both in terms of how these sociotechnical enterprises actually interface with local geographies, cultural politics and environmental factors, and how they attempt to reduce, profile and encode in highly technical ways what are irreducible and unwieldy social forces, actions, experiences and realities. Therefore, as with all telemetric systems (see Smith and O’Malley, 2017), we can expect a series of struggles to ensue over both the right to the city – and various social groups feeling they are being purposefully programmed out of this space by these sociotechnical assemblages – and the right to the face – and various individuals, such as Lauren Mac, defending the sanctity of the face from colonisation by its virtualised referent. That these faceless systems of governance will dispassionately stare into the face of rising levels of social insecurity, as they disproportionately recognise and track the faces most affected by this condition, is a cruel pun not lost on those targeted by them. 
As with many preceding technologies of power, the core aim of algorithmic governance in the city is not redress of the deeply entrenched structures of social injustice and inequality pervading urban life. Instead, the aim is to manage the symptoms these structures produce using techniques that present as being more efficient and effective.