Abstract
This article aims to contribute to digital criminology by proposing a framework of rhizomatic harms of algorithmic policing. We propose to expand zemiological insights with a technological and relational component, and to broaden the concept of ‘social harms’ to ‘rhizomatic harms’. Rhizomatic harms must be understood in all their complexity, as they emerge from multiple entry points and create complex layers of harm as a result. By focusing on the genealogy of rhizomatic harms of algorithmic policing in our analysis, we aim to make visible the collective, relational, cumulative and intersectional dimensions of harms and the role that macro, meso and micro processes play in harm production. The Top400 list and the use of the ProKid + algorithm in Amsterdam, The Netherlands, will be used to exemplify our framework.
Keywords
Introduction
Many questions were troubling the traveler, but at the sight of the prisoner he asked only: ‘Does he know his sentence?’ ‘No,’ said the officer, eager to go on with his exposition, but the traveler interrupted him: ‘He doesn’t know the sentence that has been passed on him?’ ‘No,’ said the officer again, pausing a moment as if to let the explorer elaborate his question, and then said: ‘There would be no point in telling him. He’ll learn it on his body.’
In Franz Kafka's In the Penal Colony, a traveller arrives at a remote penal colony with the intent of exploring the discrepancies in legal systems across diverse territories. Within this setting, a peculiar apparatus is employed to inscribe punitive sentences directly onto an individual's skin. The apparatus necessitates six hours for the condemned to fully comprehend the nature of their transgression through the tactile sensation of the engraved lines. Yet, when a high-ranking officer subjects himself to the machine's seat, it unexpectedly malfunctions, causing fatal harm. The apparatus serves as a metaphor for the opaque and often inscrutable mechanisms pervasive in algorithmic policing within our contemporary ‘pre-crime society’. Kafka's narrative mirrors present-day shifts, wherein individuals face penalties before committing a crime, navigating through ambiguous accusations and suspicions formulated by police using crime prediction algorithms. The story not only reflects errors inherent in such technology but also the diverse array of resulting harms. The accused ‘suspects’ find themselves indicted without explanation, mirroring the lack of transparency prevalent in contemporary policing practices influenced by algorithmic authority.
Large-scale collection and processing of information by algorithmic technologies is increasingly embedded in police practices. This evolution should be seen within a larger temporal and sectoral shift at the turn of the 21st century from a post-crime to a pre-crime society, whereby the role of the state as the most important provider of security has shifted (Van Brakel and De Hert, 2011; Zedner, 2007). Its responsibility for providing public security is redirected towards private, communal and individual actors (Loader, 1999; Zedner, 2007), and now also to technology (Haggerty, 2012; Van Brakel and De Hert, 2011). These pre-crime ‘solutions’ distract attention from other urgent threats connected to privileged actors by prioritizing certain phenomena. In doing so, they inadvertently introduce new risks under the guise of addressing existing ones (McCulloch and Wilson, 2015). Crime control has thus shifted from being reactive to being proactive due to significant changes in criminal justice governance (Ericson and Haggerty, 1997; Feeley and Simon, 1994; Van Brakel and De Hert, 2011).
The use of algorithmic surveillance and profiling systems by police departments has accelerated this shift (Arrigo and Shaw, 2022; Bennett Moses and Chan, 2018; Van Brakel, 2021). The rollout of these technologies has gone hand in hand with growing scientific evidence that they are riddled with error and bias (Babuta and Oswald, 2019; Buolamwini and Gebru, 2018; Lum and Isaac, 2016; Van Brakel, 2016a). Yet despite this evidence, the enthusiasm of police and policy makers for implementing algorithmic policing is not fading. These developments underline the need for up-to-date accountability mechanisms to prevent potential harms caused by these algorithmic systems.
Drawing on insights from zemiology, science and technology studies, Deleuze and Guattari studies, and surveillance studies, the main aim of this article is to contribute to the burgeoning field of digital criminology (Kaufmann and Lomell, forthcoming 2024; Powell et al., 2017), by proposing a framework to study rhizomatic harms of algorithmic policing in the pre-crime society. The starting point for this endeavour is zemiology, which is the study of social harms that focuses on the embeddedness of non-criminalized harms in systems of social relations (Pemberton, 2015). We explore how these zemiological insights can be expanded with a rhizomatic focus, so as to broaden the concept of ‘social harms’ to ‘rhizomatic harms’. By doing this, we aim to open the discussion and contribute to a new direction of criminological scholarship, which focuses on exploring the impact of new algorithmic surveillance systems on criminal justice practices and social justice in a holistic way. By focusing on harms and the genealogy of harms, it is possible to ‘add a sensitivity to the experiences, meanings and images of crime and justice insofar as they are transformed by digital technologies’ (Wood, 2019: 572). In this way, we aim to add a necessary step in the construction of analytical tools to analyse these phenomena as calls for police and public accountability are rising.
The article is structured as follows: in the first part, we provide a brief overview of the literature on algorithmic policing and harm within a pre-crime logic. In the second part we discuss the recent case of the Top400 list in the city of Amsterdam in The Netherlands and the use of the ProKid + algorithm. In the third part, we theorize harm caused by technology within (digital) criminology and zemiology. In the final part, we propose a framework of rhizomatic harms of algorithmic policing.
Algorithmic policing and harm in the pre-crime society
Technological developments have always changed the form and function of policing (Chan, 2001; Deflem and Chicoine, 2014; Van Brakel and De Hert, 2011). Today, police forces rely increasingly on algorithmic tools for surveillance, data collection, profiling, and prediction. These police surveillance technologies collect and analyse information driven by algorithms and big data with the aim of predicting crime and intervening before it happens (Van Brakel, 2020).
Research has indicated that pre-crime algorithmic policing practices as a surveillance practice can cause collective and social harms such as cumulative disadvantage, structural discrimination, and chilling effects (Browne, 2015; Fussey and Murray, 2019; Gandy, 2009; Van Brakel, 2020). In line with this, Marjanovic et al. (2021: 391) refer to algorithmic pollution, by which they mean the ‘unintended harmful societal effects of automated algorithmic decision-making in transformative services (e.g. social welfare, healthcare, education, policing, and criminal justice), for individuals, communities, and society at large’. They make their case by linking the idea of social pollution, and more specifically ‘pollution-as-harm’, to the widespread use of algorithms by a myriad of actors. It is by looking through a harms lens that they stress ‘the need for transformative actions to prevent, detect, redress, mitigate, and educate about algorithmic harm’ (Marjanovic et al., 2021: 392).
When zooming in on algorithmic pollution in policing, many of the socio-technical systems and practices involved are riddled with manipulation, error, and bias. Existing discriminatory structures are woven into the algorithms, which reinforce the status quo of structural inequalities and discrimination against minorities (Browning and Arrigo, 2021). The use of pre-crime algorithmic policing tools not only reinforces but worsens social inequality, as these tools have a negative impact on vulnerable communities and social justice (Williams and Clarke, 2018). For example, data-driven predictions may increase police interaction in already overpoliced areas, resulting in feedback loops that feed cycles of suspicion (Lum and Isaac, 2016). Digital risk assessments generate scores based on the probability that a person will commit a future crime, and these scores show marked racial disparities (Browne, 2015): Black people are disproportionately targeted. This makes it key to incorporate the nexus of technology and racial justice in our framework when further analysing pre-crime technologies in their broader relational, social, legal and technical context (Ugwudike, 2021).
As discussed above, research into the various harms of algorithmic policing is growing. Yet a clear framework for addressing algorithmic policing harms driven by a pre-crime logic is missing, a gap we attempt to fill in this article. To explore this in more depth, we will focus on one type of pre-crime algorithmic policing: individual algorithmic risk assessments, and more specifically the use of the ProKid + algorithm and its related Top400 programme.
The case of the Top400
In 2016, the city of Amsterdam launched a predictive policing programme, the Top400, which targets 400 young ‘high potentials’ between 12 and 24 years old. The youngsters were selected by the ProKid + algorithm. Notably, these individuals exhibit no record of significant criminal offences; however, their conduct is deemed problematic and disruptive (Gemeente Amsterdam, 2022). The goal of the intervention is to prevent them from committing a crime in the future. Once on the list, youngsters receive intensive counselling and are placed under police surveillance, which deeply interferes with their daily lives. They are stopped more often, are visited by the police more frequently, and must participate in mandatory meetings between the police, school, and social workers (Visser, 2022). For instance, street coaches could investigate social media to check the minors’ behaviour or to enforce a curfew or location restriction (Jansen and Klaas, 2022).
In 2016, Amsterdam municipality officials decided to use ProKid + to populate the Top400 list. This is an actuarial risk assessment algorithm used as an early warning tool to determine the risk of juveniles committing violent or property crimes in the future (Delsing and Scholte, 2016). ProKid + was developed by the East-Holland police unit, in cooperation with the Amsterdam police. The algorithm is based on police data on 31,771 youngsters (Delsing and Scholte, 2016; Gemeente Amsterdam, 2022; Wientjes et al., 2017). ProKid + provides an assessment of the likelihood of engaging in delinquent behaviour, employing weights attributed to factors empirically established as correlated with delinquency (Delsing and Scholte, 2016). This makes it possible to compare individual risk scores with others based on the same criteria (Singh et al., 2011). ProKid + is the updated version of a pilot project, ProKid, which ran from 2010 until 2012 to detect children up to 12 years old who could become ‘a risk’ and cause future crime-related problems. An evaluation of the programme exposed a range of system errors and technical problems: thirty-six percent of the matches were false. The authors of the evaluation recommended not using the system until these errors were resolved (Abraham et al., 2011). It is not clear to what extent this recommendation was implemented. The ProKid + algorithmic software used police databases to identify which minors had had an encounter with the police. Besides their personal contact with the police, profiles of their immediate environment (family, friends) were also considered (Wientjes et al., 2017).
ProKid + was inspired by theories on the developmental paths of crime with three trajectories: (a) an overt (aggressive) path, (b) a covert (non-aggressive) path and (c) a path of authority conflicts (see the pyramid structure in Loeber et al., 2001). These insights were used to determine predictors for the ProKid + algorithm, such as data regarding the criminal history of inmates and co-defendants, frequency of criminal offences, variation in offences, sex, and age. The algorithmic tool is based on data from the police registration system, Basisvoorziening Handhaving (Basic Enforcement Provision). To assess the risk, the algorithm collects data across three levels of analysis: ‘1) The person: information from incidents in which the person was a suspect (including what, when, how often, in what pattern), 2) The home environment: information on incidents on the address of the person that is registered in police systems (including what, who, what relationship to the person, type of involvement, how often, and frequency), 3) The social environment: information about co-perpetrators with whom the person shares one or more police mutations’ (Delsing and Scholte, 2016; Jansen and Klaas, 2022: 31).
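To make the actuarial logic described above concrete, a weighted-factor risk score of this general kind can be sketched as follows. Every feature name, weight, and value below is a hypothetical illustration chosen by us; none of them are the actual ProKid + predictors or coefficients, which are not public in this form. The sketch only shows the general mechanism: factors drawn from the three levels of analysis are multiplied by empirically derived weights and summed, so that individuals scored on the same criteria become comparable.

```python
# Hypothetical sketch of an actuarial risk score; feature names and
# weights are invented for illustration, NOT the real ProKid + model.

# Weights per factor, as if empirically derived (hypothetical values).
WEIGHTS = {
    "suspect_incidents": 0.40,       # person level: incidents as a suspect
    "home_address_incidents": 0.25,  # home environment: incidents at the address
    "co_suspect_contacts": 0.20,     # social environment: shared police records
    "offence_variety": 0.15,         # variation in types of offences
}

def risk_score(features: dict) -> float:
    """Weighted sum of factors; comparable across individuals
    because everyone is scored on the same criteria."""
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)

# Two hypothetical juveniles described by the same criteria.
juvenile_a = {"suspect_incidents": 3, "home_address_incidents": 1,
              "co_suspect_contacts": 2, "offence_variety": 1}
juvenile_b = {"suspect_incidents": 0, "home_address_incidents": 4,
              "co_suspect_contacts": 0, "offence_variety": 0}

print(risk_score(juvenile_a))  # 2.0
print(risk_score(juvenile_b))  # 1.0
```

The sketch also makes visible a point developed later in this article: juvenile_b accrues a non-trivial score purely from the home environment, that is, from incidents registered at an address rather than from anything the child did.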
In 2016, 125 minors who met the specified criteria were selected by the algorithm to populate the Top400 (Gemeente Amsterdam, 2023).
As the algorithm did not identify the requested 400 minors, the criteria were expanded at the request of politicians in office. According to Jansen and Klaas (2022: 19): ‘It was decided to add variables that allowed the city to include minors who were believed to be part of prioritized youth gangs and criminal groups and explore criteria that allow the identification of anti-social behaviour that was not captured in the original Top400 criteria.’
Theorizing harms caused by technology within (digital) criminology and zemiology
Legal and policy responses to harms of algorithmic policing are embedded in an individualist rights framework, which also permeates criminology. Zemiology offers a starting point for moving beyond this by focusing on social harms, with a clear commitment towards social justice (Canning and Tombs, 2021; Pemberton, 2015). Its main object is to ‘think anew about social harms and responses to them, increasingly produced by the profit-driven, unaccountable, non-criminalised destructive harms of global capitalism—harms which were being increasingly assumed to be legitimate subjects for criminology to study’ (Canning and Tombs, 2021: 23). Zemiology makes it possible to address the interplay of micro-level aspects of harms and their drivers at a structural level (Canning, 2018). By focusing on ‘crime’, criminology obscures the fact that what harms us most is often not the result of crime. Criminology reproduces the notion of crime as a social truth; as a result, the discipline becomes a political tool in the production of state power (Canning and Tombs, 2021). Hence, zemiology can offer valuable insights to understand the harms of algorithmic policing.
In the field of zemiology, there are strong internal debates on fundamental questions around the ontology of harm. Canning and Tombs (2021) conclude that there is still much conceptual work to be done. But ‘while striving to understand the ontology of harm, the ultimate zemiological value is to challenge and eradicate harm by developing novel modes of responsibility and accountability beyond criminal justice and through an alternative social organisation’ (Malik et al., 2022: 184). The analytical strength of zemiology lies in the acknowledgement of the rippling effect of social harms, whereby various dimensions of harm unfold and disperse spatially and longitudinally. Social harms are layered and interact with each other, producing new ‘levels of harm through their synergetic effects’ (Tombs, 2019: 62). To see harms as unfolding in ripples is to acknowledge their endurance over time and the contingency of their composition, as new levels create new perspectives on how harms work (Tombs, 2019). Although this approach is insightful for the framework we suggest in the following section, its focus is limited to social harms; socio-technical harms, in combination with a broader relational analysis conducive to understanding harm, remain underexplored. The social harm framework is useful for analysing personal harms in a broader social context but less successful in connecting with socio-technological specificities and their wider consequences (McGuire and Renaud, 2023). Technology as an actor and mediator needs to be given a more central role in the construction of a framework for harm in the pre-crime society.
Actor–network theory offers an opening when discussing the agency of technologies and their role in mediating our actions and how we perceive the world around us (Verbeek, 2005). When applying these insights to a harm-based approach, Wood (2020: 643) argues that: the flat ontology of actor–network theory results in accounts that rightly acknowledge the causative power of technologies but provide little ontological basis for examining how harm-generating mechanisms might inhere in the emergent properties of technologies either by design (utility harms) or unintentionally (technicity harms).
Wood (2022: 509) takes this further by proposing a stratigraphy of harm, looking at human–technology and technology–harm relations to ‘delineate the socio-technicality of harmful events’. He distinguishes four technology–harm relations: (1) instrumental utility harms; (2) generative utility harms; (3) instrumental technicity harms; (4) generative technicity harms. Instrumental harms point at how individuals might use technology to enact harms, while generative harms examine what technologies do to actors. In doing so, he makes a distinction between intended and unintended forms of harm. Wood (2020) himself remarks that this model is not sufficient if one wants to look at harm in a relational way: the socio-technical character of norms and power relations cannot be reduced to technology–harm relations alone. As Wood (2020) rightfully argues, social structures are entangled with technological structures. As a result, zemiology should reconsider its ontology of harm by recognizing technology's embeddedness in social structures and, consequently, distinguish between different types of harm when analysing socio-technical relations.
In their article on the dynamics of social harms in an algorithmic context, Malik et al. (2022) criticize Wood (2020) for predominantly focusing on micro-level analyses of human–technology interactions. They provide an answer by building on his theorization to conceptualize how algorithms take part in transforming the dynamics of the production of social harm at macro and meso levels. By illustrating how algorithmic technologies affect harms qualitatively, the study examines the mechanisms through which algorithmic harms manifest and the degree to which they affect individuals. For Malik et al. (2022), the technological aspect of harm production is as important as its social and economic aspects. As Henry et al. (2020: 1847) rightfully synthesize, technology in the digital era is ‘not merely a tool of abuse, coercion, and harassment, but also often integral to the perpetuation of harm, suffering, and stigma of victims’. We will build further in this direction towards a more holistic account of harms by proposing the concept of ‘rhizomatic harm’.
Towards a framework of rhizomatic harms
In the first part of this article, we noted the absence of a robust framework for examining the harmful effects of algorithmic police surveillance beyond individual-based harms. Departing from conventional criminological theories, we pivoted our focus towards a harms-based approach, aiming to encompass social, technological, and relational facets within this model. Upon further exploration of the harms-based perspective rooted in zemiology, we identified a notable lacuna in acknowledging technology as an active agent in this sphere. Within digital criminology, scholars have slowly started to engage with the concept of the rhizome (Powell et al., 2017) and more specifically ‘rhizomatic harm’, in the context of harms of digital forms of sexual violence (Dodge, 2022). We will take this one step further by engaging with assemblage theory and propose a rhizomatic harm-based framework for understanding the harms of algorithmic policing.
Rhizomes and assemblages
The concept of the rhizome was introduced by Deleuze and Guattari in A Thousand Plateaus (1987) as a new epistemological way of thinking and offers the possibility to grasp the complex, volatile and relational nature and interactions of entities in an agencement or assemblage in a non-categorical and non-hierarchical way (Buchanan, 2015; Thomas, 2020). An assemblage constitutes ‘[a] multiplicity which is made up of many heterogeneous terms and which establish liaisons, relations between them’ (Deleuze and Parnet, 1987: 69). Using this concept indicates the intricacy and knottiness of a system whereby its application as an analytical tool problematizes the understanding of social reality as stable or one-sided (Thomas, 2020).
In this world of assemblages, everything appears only as a partial object, a temporary relation of faculties that appear. When these partial objects are activated, Deleuze and Guattari (1987) speak of a flow. The faculties become temporarily activated in a particular way and bring about the establishment of relationships. All relationships are merely what things do, never their essence. This is an active process whereby self-organizing dynamics can never be reduced to their separate elements but need to be addressed as relations between the elements (Schuilenburg, 2008). Moreover, in the words of Bergson (1990: 94): It is […] the performance of the movements which follow in the movements which precede, a performance whereby the part virtually contains the whole, as when each note of a tune learned by heart seems to lean over the next to watch its execution.
To use the rhizomatic assemblage as a productive analytical tool to study the harms of algorithmic policing, we need to ask ourselves what it ‘enables us to see that we couldn’t see before?’ (Buchanan, 2021: 2). Wood's (2020) typology of harms, as discussed above, offers a solid framework for reflecting on in what situations, when, and where a particular harm happens. However, it provides no answer to the question of how to understand the underlying causes of these harms and the relations and desires that characterize their assemblage. Assemblage theory makes this possible as it brings a ‘why?’ question into the analysis. This includes dissecting the structure of authority that enforces certain policies so as to question how it is constituted (Buchanan, 2015).
Within surveillance studies, Haggerty and Ericson (2000) have proposed using the concept of the assemblage—in contrast to the Panopticon—to better understand the architecture of surveillance in light of new technological developments at the turn of the 21st century. According to the authors, surveillant assemblages operate by abstracting human bodies from their territorial settings, by separating them into a series of flows. Surveillance assemblages are always in motion and every surveillance technique and the corresponding social norms that govern its application are intrinsically embedded in its unique socio-historical context (Van Brakel, 2018). This approach enables a broader focus on harms that goes beyond individual transgressions and considers the social ramifications (Van Brakel, 2021).
The assemblage concept has been crucial in illuminating the subtle ways in which surveillance technologies interact with other socio-technical systems to become more powerful. Advanced surveillance systems cannot be considered to operate independently or even at a single operational register. Instead, it has become increasingly important to acknowledge the interoperability of various socio-technical infrastructures (Fussey et al., 2021; Wilkinson and Lippert, 2012). Hence, to understand algorithmic policing as a rhizomatic surveillant assemblage, the structural macro level needs to be combined with the micro-political level of how desire functions.
Rhizomatic harms
By building on assemblage theory and the concepts of the rhizome and the surveillant assemblage discussed in the previous section, ‘rhizomatic harm’ can shine a new light on underexposed or understudied aspects of the harms of algorithmic policing. It implies that it is necessary to combine all levels of analysis in a micro (individual), meso (institutional) and macro (systemic) approach, including the relations in between. For instance, a sole focus on ‘bias’ in discussing and understanding the harms is too limited, as bias can be perceived as the flawed or unintentional result of an individual or technology, which does not touch upon its systemic, institutional character (Dave, 2022).
The choice of terminology for rhizomatic harms and not for example ‘assembled harms’ is deliberate and twofold in nature. First, it is made to circumvent potential confusion with the concept of the surveillant assemblage, as articulated by Haggerty and Ericson (2000). Both concepts draw inspiration from the work of Deleuze and Guattari yet serve distinct analytical purposes. Second, the choice to use ‘rhizomatic harms’ is intended to underscore the resilience of social reality, symbolizing a departure from entrenched, linear conceptions. It aims to evoke a sociological imagination that breaks through the soil of the earth and to highlight the perpetual dynamism and movement inherent in social existence, ‘make rhizomes, not roots, never plant!’ (Deleuze and Guattari, 1987: 24).
Rhizomatic harms present a novel analytical framework that enables a comprehensive understanding of the harms of algorithmic policing. This approach transcends the narrow confinement of individual-based harms and allows not only for a more holistic examination and thicker understanding of the potential qualitative and quantitative impact of algorithmic policing, but also for tracing the genealogy of harms by considering causal dynamics at the micro, meso, and macro levels. Moreover, this approach broadens our examination of harms to encompass relational and socio-technical harms, which denote the harms experienced by individuals intimately connected to the main surveillance subject. Further, drawing from zemiology, we are encouraged to consider harms as unfolding in ripples. This implies that harms manifest in interconnected layers, generating new levels of harm through synergetic interactions. By merging the concept of the ripple with intersectional insights that underscore historical systems of oppression and advocate for a multidimensional power approach (hooks, 1984), we aim to illuminate the cumulative impact arising from diverse surveillance technologies. The writings of pioneering intersectional black feminist scholars such as Crenshaw (1991) and Davis (2011) resonate with Deleuze and Guattari's epistemological theory when discussing the importance of intersectional feminism (Romangnoli and Silva, 2022). An intersectional approach breaks with the arborescent, hierarchical dominance of modernist categorizations such as individual rights. According to Romangnoli and Silva (2022: 2), intersectionality: operates not through the separation of the categories of gender, race, class, sexuality and other possible ones, allowing visibility to social problems, revealing structural and dynamic consequences of the complex intersections between two or more subordination axes, that get inter-crossed and potentialized.
To make visible the collective, relational, cumulative and intersectional dimensions of harms and the roles that macro, meso, and micro processes play in harm production, we propose to focus on the genealogy of harm. In what follows, we will discuss four socio-technical processes of harm production, which become visible when approaching algorithmic policing as a rhizomatic assemblage: (1) technological agency and affordances; (2) rationalities behind algorithmic policing; (3) everyday practices and social structures; (4) interventions that are the outcome of algorithmic policing.
First, technologies mediate our interactions with and perception of the world around us. As Wood (2020) correctly points out, this is an important aspect of the production of harm. Technologies do things (Verbeek, 2005); it is therefore important to address technological agency as an essential part of harm production. In the context of technology-facilitated violence, Wood et al. (2023) underline the role that a technology's design can play in generating harm and how technological affordances play a role in harm production. These insights can also be applied to the harms of algorithmic policing. Errors and bias in the way algorithms are designed and in the data they are trained on can lead to erroneous outcomes, often in the form of false positives and false negatives. These harms have an impact not only on the micro level but are also entrenched structurally at the meso and macro levels (Malik et al., 2022). Harms can appear throughout the lifecycle of the technology. For instance, the characteristics of the technology used by the police co-determine the range of errors, its reliability, and its validity. In the case of algorithmic policing, the algorithms used will have both advantages and disadvantages. Bennett Moses and Chan (2018: 811) show how, even though different algorithms have different limitations, all are characterized by bias and make assumptions: ‘In some cases based on an assumed model of crime and in other cases based on general factors such as simplicity versus flexibility (with the associated potential of overfitting) and predictive power versus other goals such as comprehensibility, preservation of provenance or non-discrimination.’
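The mechanism by which a predictive system produces false positives and false negatives can be sketched in a few lines. The scores, outcomes, and threshold below are toy values we invented for illustration; they bear no relation to the actual ProKid + model. The point is structural: once a continuous risk score is cut by a threshold, some children who would never offend are flagged (false positives) and some who would are missed (false negatives), and both error types carry their own harms.

```python
# Minimal, hypothetical sketch of how thresholding a risk score yields
# false positives and false negatives. All values are invented toy data.

THRESHOLD = 0.5  # hypothetical cut-off above which a child is flagged

# (predicted risk score, did the child actually offend later?)
cases = [
    (0.9, True),   # correctly flagged            -> true positive
    (0.7, False),  # flagged but never offends    -> false positive
    (0.2, True),   # missed by the system         -> false negative
    (0.1, False),  # correctly left alone         -> true negative
]

false_positives = sum(1 for score, offended in cases
                      if score >= THRESHOLD and not offended)
false_negatives = sum(1 for score, offended in cases
                      if score < THRESHOLD and offended)

print(false_positives, false_negatives)  # 1 1
```

Note that lowering the threshold trades false negatives for false positives and vice versa; the choice of cut-off is therefore itself a normative decision, not a purely technical one.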
The ProKid pilot project, on which the ProKid + algorithm is based, exhibited a range of system errors and technical problems, with 36% false positives. These were system or registration errors or reports based on irrelevant incidents (Abraham et al., 2011). Moreover, limiting the datasets to only two specific police regions raises concerns regarding the potential transfer of biases from one region to others upon deployment across diverse parts of The Netherlands (Wientjes et al., 2017).
Beyond errors within the socio-technical systems, the potentially harmful impact of technological affordances manifests itself during implementation, in what Fussey et al. (2021) term an inclination towards deference to algorithms among law enforcement officers. The recommendations put forth by algorithmic decision-making systems are typically embraced and viewed as reliable by police personnel, despite instances where the computer-generated output lacks verifiable accuracy (Fussey and Murray, 2019). Hence, harms emerge not only from within the technology but also as the result of broader socio-technical aspects. In other words, ProKid + as an automated risk assessment tool cannot be evaluated solely on the basis of an assessment of the accuracy, quality, or completeness of data. When used without critical scrutiny, especially when the risk score serves as the sole criterion without professional assessment, there arises a significant risk of automation bias and unwarranted deference to the algorithm within the context of the Top400 list. This was exemplified by the ProKid + algorithm's addition of 125 minors to the list without any critical assessment of the selection criteria for their inclusion. Remarkably, the algorithm continued to wield influence even after the municipality's decision to discontinue its use, underscoring the persistent impact of automation bias and unquestioned reliance on algorithmic outputs (Jansen and Klaas, 2022).
Second, the rationalities behind algorithmic policing that characterize the technology co-determine its outcomes and can lead to potential harms. This is directly linked to the desiring-production in our society. Deleuze and Guattari (1987) conceptualize desire as a productive force and a constitutive part of the assemblage. Seeing desire as the result of a societal process links it to the functioning of the rationalities behind algorithmic policing. Rationalities are the result of an interplay between the molar and the molecular level, between political-economic structures and the micro level of desire. As a result, the categories used for labelling carry a political dimension and should be treated as such. The Amsterdam municipality claims a scientific basis for ProKid + (Wientjes et al., 2017; Yeşilgöz-Zegerius, 2022), determining behavioural indicators such as truancy, school absence, changing elementary school at least three times, and being arrested as a suspect between the ages of 12 and 14, which are regarded as risk factors for criminal careers (Gemeente Amsterdam, 2023). The relevant ministers indicate that research conducted by the Wetenschappelijk Onderzoek- en Datacentrum shows that criminal behaviour at a young age is an important predictor of a long and serious criminal career, and this research has been the basis for the early detection and intervention approach (Yeşilgöz-Zegerius, 2022).
Previous criminological research, however, has shown that the use of risk factors that have been identified in criminological research is concerning and can lead to labelling effects and (digital) stigmatization (Deakin et al., 2020; Farrington et al., 1978; Van Brakel, 2018). The children on the Top400 list are systematically targeted as potential criminals, whereby ‘the resulting stigma is almost impossible to “shake off” potentially leading to a self-fulfilling prophecy whereby the stigmatised child begins to engage in crime because they internalise it as a fait accompli’ (Van Brakel, 2016b: 193). Further, scholarly research highlights the methodological shortcomings inherent in risk factor research underpinning such systems. These flaws encompass over-simplification, imputation, determinism, bias, and a limited evidence base, often resulting in erroneous identifications and discriminatory outcomes (Case and Haines, 2013; Van Brakel, 2016b, 2018). The utilization of algorithmic prediction for forecasting future events implies that these models, far from being objective portrayals of crime, are instead influenced by political agendas, normative assumptions, and perpetually evolving standards (Jansen and Klaas, 2022: 5).
Third, the everyday practices and social structures in which algorithmic policing is embedded can play a role in harm production. These are coded with systemic harms produced by ever-shifting power relations (Lyon, 2001) and the political economy in which they are embedded. In the case of algorithmic policing, they are, among other things, informed by a long history of racial inequalities and the policing of non-white individuals (Browne, 2015) and by practices of sexual or gender-based violence (Barter and Koulu, 2021). Graeber (2015) describes this phenomenon as the extension of government surveillance as a bureaucratic process and security project central to the production of the liberal democratic capitalist order; it is also what Scott (2020) calls the social engineering practices of the state, whereby powers ‘from above’ try to make societal processes legible and thus controllable.
In the public arena, security, law and order, and safety are part of a discourse used to gain political power, as society is presented as being in crisis, with crime fighting as a political solution (Garland, 2001). The aftermath of 9/11, combined with lingering fears and concerns about the presence of ethnic minorities (often driven by racist motives), became the breeding ground for the Pim Fortuyn movement in The Netherlands. The aftermath of Fortuyn’s assassination in 2002 can be seen as the start of a politics of discontent, driven by a crisis-driven discourse (Pakes, 2004). In the context of migrants, the term ‘crimmigration’ captures the perceived relation between crime, security, migration and integration, leading to targeted policy (van der Woude et al., 2014). This emerging crime-and-security complex resulted in ‘the realignment of crime as an issue of security rather than of justice and a reappraisal of a variety of policies of non-enforcement’ (Pakes, 2004: 285). This social environment can explain the pressure exerted on the administration to stretch the criteria of the Top400 list.
The ProKid + algorithm presents itself as incorporating many levels of analysis to arrive at an accurate picture of who has the highest chance of becoming a criminal. In practice, the opposite happens. In the case of the Top400 list, data on ethnicity and socio-economic status are lacking. Official documents declare the absence of ethnicity and nationality in the ProKid + algorithm (Yeşilgöz-Zegerius, 2022). Nevertheless, these groups are targeted indirectly, as the distribution of listed minors is skewed towards low-income and migrant neighbourhoods. In the case of the Top600 list, an over-representation of suspects with a Moroccan and Surinamese ethnic background is observed (Jansen and Klaas, 2022). The ProKid + algorithm reproduces social inequality as it projects the actions of the environment onto the youngsters. In this sense, it could be more accurate to call crime prediction algorithms crime production algorithms (Benjamin, 2019).
Fourth, the interventions that are the outcome of algorithmic policing can lead to harm. The non-profit organization De Moeder Is De Sleutel (The Mother Is the Key) was founded by Diana Sardjoe, a mother of two sons who appeared on the Top400/600 list. This organization offers insight into the harms produced by these police interventions. It unites mothers with shared experiences and aims at rebuilding trust, whereby peers can provide care and safe spaces. De Moeder Is De Sleutel also offers a public platform to speak about their harmful experiences, for example in the docufilm Mothers (Peled, 2022). Once included on the list, not only do the youngsters have to deal with police officers and social workers, but so do their family members, who often have little information except that a person is on the list. Subjects are monitored constantly by myriad organizations, police, and communal services. According to one of the mothers of a son on the list: ‘I was completely alone in this struggle, and there was nothing I could do. During this time, my sons withdrew further and further. I felt like a prisoner, watched, and monitored at every turn, and I broke down mentally and physically, ending up on cardiac monitoring.’
In the Top400 case, the use of the ProKid + algorithm directly led to emotional distress, stigmatization, and economic loss for the persons on the list and their families (Peled, 2022; Sardjoe, 2022). People in precarious situations are even more vulnerable, as the loss of income of one family member can have large consequences for the entire household. Some mothers lost their jobs as the state's strict follow-up became oppressive. Mothers testified about living a double life: during the day, they did their job; during the night, they could not sleep as they worried about their child's future. They ended up in a Kafkaesque situation where they felt surrendered to the power of the authorities. They were flooded with official documents but received no help in understanding their bureaucratic language. They did not know what to do and received no explanation of how their child ended up on the list in the first place (Peled, 2022). These experiences show how algorithmic policing reproduces existing discriminatory practices in policing and facilitates the reproduction of social, racial, and economic inequalities. The above shows the importance of also including in the analysis the harms experienced by family members or those who have a close relationship with the surveillance subject.
Although the four socio-technical processes of harm production discussed above are rhizomatically entangled, we have attempted to analytically disentangle these processes into separate categories to serve as a useful tool to illuminate further nuances in the production of harm. In sum, by focusing on the genealogy of the harms of algorithmic policing we have attempted to develop a deeper understanding of harm within the limits of this article.
Conclusion
The starting point of this article was the absence of a viable framework in traditional criminology for studying the harms of algorithmic policing. In the article, we presented a rhizomatic harm-based approach, starting from zemiologist insights, to fill this gap in the literature and to contribute to the field of digital criminology. However, even though zemiology plays an important part in moving beyond a focus on individual-based harms and legal-based categorizations when discussing algorithmic harms, it has tended to overlook the role of technology in its ontology. Technology-as-actor should be taken into account, which becomes possible once technology is recognized as embedded in social structures (Wood, 2020). We argue that, to grasp the complexity of the algorithmic policing assemblage, the technological part in the production of harms needs to be treated as equal to its social and economic counterparts (Malik et al., 2022). We believe a rhizomatic harms-based approach is best suited to grasp the complex, volatile and relational nature of algorithmic policing, as well as the power relations between, and the drivers of, the entities involved.
Our proposed framework involved constructing a genealogy of harm, aimed at unveiling the collective, relational, cumulative and intersectional dimensions of harms, while delineating the roles played by macro, meso, and micro processes in the generation of harm. We identified four socio-technical processes of the production of rhizomatic harm in algorithmic policing: (1) technological agency and affordances; (2) rationalities of algorithmic policing; (3) everyday practices and social structures; (4) social practices that are the outcome of algorithmic policing. To illustrate the framework, we employed the example of the Top400 list and the ProKid + algorithm, providing concrete instances that exemplify its application.
The development of this framework has only just begun. First, more empirical cases in different criminological settings need to be analysed to make the framework more robust. In addition, harm-oriented research needs a further boost within and outside of criminology. By going beyond the individual, this approach will benefit the most vulnerable in an intersectional way, as it focuses on both structural and micro-political effects. It is also possible to expand this framework beyond police research to other criminal justice settings and bureaucratic government practices. This could range from the use of algorithmic technology in prisons, mental health agencies, substance abuse agencies, and asylum and migration agencies to the policing of environmental crimes and warfare, in the hope of further uncovering which socio-technical processes produce harms and, by knowing them, tackling them.
Second, emphasizing the rhizomatic nature inherent in the harms stemming from algorithmic policing, as well as algorithmic criminal justice more broadly, could foster novel perspectives on addressing the discussed complexities in harm production. Traditional responses to addressing harms of the use of police technology are ill suited to mitigate these rhizomatic harms effectively. A potential path forward lies in exploring more bottom–up and participatory approaches such as positive criminology (Schuilenburg et al., 2014), restorative justice (Dodge, 2022) and relational ethics (Van Brakel, 2022). By embracing these inclusive and caring approaches, the field of digital criminology can play a significant role in addressing rhizomatic harms more effectively within algorithmic criminal justice.
In this article, we aimed to contribute to digital criminological theory by proposing a rhizomatic perspective to offer a way forward in understanding the intricacies of the current harms of algorithmic policing. In Kafka's narrative, the suspect became aware of their conviction only upon its physical manifestation on their body. Through a rhizomatic lens, we transcend the limited view of the individual body, perceiving it as interwoven with the broader world, intricately linked through multifaceted relationships that carry the weight of harms across diverse territories. Embracing a rhizomatic approach signifies a departure from the entrenched belief in the individual as the centre of the world, emphasizing instead the intrinsic value of the intricate network of relations—human and non-human—that collectively shape our reality. When the apparatus etches its verdict, it is not merely an individual body that bears the mark, but an entire rhizomatic assemblage entangled within the consequences.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship and/or publication of this article: this work was supported by the Interuniversity Flemish BOF project: Future-Proofing Human Rights: Developing Thicker Forms of Accountability.
Author biographies
Rosamunde Van Brakel is an assistant professor and postdoctoral researcher at the Fundamental Rights Centre and Crime & Society Research Group, Vrije Universiteit Brussel, specializing in surveillance, AI and crime control.
Lander Govaerts is a PhD candidate at the Crime and Society research group at the Vrije Universiteit Brussel, specializing in police power and technology.
