Abstract
Technological innovations including artificial intelligence (AI) and cognate systems with accessible user interfaces are informing policing decision-making around the world. The systems form part of the ‘smart justice’ technologies increasingly replacing analogue modes of law enforcement such as paper-based information analysis. This article maps the evolution of technological innovations in policing over the past 25 years, that is, since the 1990s. The article situates its analysis within relevant socio-political contexts and discusses key prospects, impacts, and challenges.
Introduction
Police forces across the United Kingdom (UK) have historically deployed a variety of technologies for crime prevention and crime control. Examples include telephones, radio systems such as two-way and subsequently multiway digital radios, as well as computer-aided dispatch systems for real-time communication within and between forces. Guns, cars and air transport services also have a long history in policing and have all been used for general crime control.
Police in the UK have additionally deployed conducted energy devices (also known as conducted electrical weapons and described colloquially as ‘TASERs’) since the early 2000s. The devices incapacitate people to defuse violent situations including altercations with the police. Although they are said to be less lethal than guns, injuries and deaths have been reported (Bleetman et al., 2023; Dymond, 2019). For some commentators, many earlier and contemporary police technologies mirror those deployed by the military. Fox (2022), for example, notes that ‘wars and military development of technology have made a major impact on policing’ (p. 16). Airborne technologies such as drones represent contemporary examples.
This article focuses on the use of digital technologies for policing in the past 25 years. It explores developments from the late 1990s and the new millennium onwards when technologies such as personal computers and mobile wireless computing systems became widespread across UK police forces (Allen and Wilson, 2005) along with other mobile technologies such as portable or handheld mobile devices with integrated global positioning systems. Together, these and other digital technologies have enabled police forces to access and share information via audio-visual affordances, in the process advancing their communicative, coordinative and incident response capabilities.
Body-worn cameras, for example, are additional technologies that are increasingly deployed for evidence gathering and enhanced accountability during police encounters with individuals. They have been used in the United States (US) (Crow and Smykla, 2019) and the UK (Ariel, 2017). Data-driven air transport services are also being used by police forces and these include airborne technologies such as drones and cognate devices (Fox, 2022).
Additional examples of digital technologies that are being deployed by police forces include licence plate recognition (LPR) technology, and biometrics systems such as live facial recognition technology and DNA profiling systems. Crime ‘hot spot’ GIS technologies, and more recently, predictive policing algorithms are further examples. Together, these data-driven digital technologies have been deployed for various operational purposes including crime risk forecasts, surveillance and other forms of intelligence gathering. A National Policing Digital Strategy 2020–2030 has also been launched in recent years to advance the digital transformation of policing (National Police Chiefs’ Council and Association of Police and Crime Commissioners, 2020). This article maps the historical terrain of data-driven technologies deployed by police services over the past 25 years, from the GIS techniques of the 1990s to the AI models currently informing policing decision-making. The article also analyses their prospects and challenges.
Policing with digital technologies: A recent history
Digital technologies in policing have a long history but have become widely available since the 1990s for operations such as surveillance, intelligence gathering, crime risk prediction and suspect identification. They include drones, predictive policing algorithms, LPR systems, and biometric technologies which comprise, inter alia, live facial recognition technologies (FRTs), national DNA databases and automated fingerprint identification systems (AFIS).
Some of the technologies are multifaceted, combining various AI models while providing easy accessibility through ‘smart’ interfaces that allow police forces to perform tasks such as producing surveillance maps to track people in real time. In many ways, some are arguably offshoots of earlier variants. Currently deployed biometric technologies such as AFIS, for instance, re-enact the biometric classification systems developed in the nineteenth century to aid forensic operations (Smith and Miller, 2021). In the same vein, airborne technologies such as drones are being deployed by police forces but have long been associated with the military and used for tasks such as surveillance and locating missing people. Their use has evoked concerns about what appears to be the evolving militarisation of the police (Fox, 2018, 2022). Further, predictive policing AI are arguably more advanced extensions of older police technologies such as the manual or computerised GIS-tagging or crime-mapping tools for identifying crime ‘hot spots’ (Sherman, 2013). These were data-driven crime and incident software designed to analyse patterns or trends in crime data to visualise and respond to crime ‘hot spots’ (Ashby et al., 2007).
Predictive policing algorithms share similar aims although their specific goal is to forecast crime risks and prompt pre-emptive intervention. Recognising the continuity between ‘hot spot’ policing and current predictive techniques, Ferguson (2017: 72) remarks that, ‘The move toward predictive policing, then, is more a shift in tools than strategy’. Together, the various techniques are essentially intelligence-focused and augment key areas of police work from surveillance and investigations for crime detection, to overall crime prevention and control.
In the UK, the origins of currently used closed circuit television (CCTV) technology have also been traced back to the 1950s, pointing to the notion of continuity rather than change (Sandhu and Fussey, 2021). That said, recent CCTV systems are more aligned with the technologically advanced versions that proliferated in the 1990s. Sandhu and Fussey (2021) contend that they were introduced at the time amid heightened anxieties that rising hostilities in Northern Ireland would permeate mainland Britain. Policymakers depicted the systems as effective crime prevention tools, particularly after they aided the identification of suspected terrorists.
The origins of digitised CCTV systems have also been linked to other developments such as the drive for situational crime prevention strategies involving collaborative work between law enforcement and responsibilised communities, e.g. via local partnerships that bring together public and private sector representatives (Fussey, 2009). Currently used CCTV technologies may have evolved from earlier variants, but they offer new operational facilities. As Fox (2018: 73) observes, more recent CCTV technology, ‘still allows actions to be coordinated at a distance but combines photonics, thermal imaging, facial and behavioural recognition capabilities.’
Taken together, the foregoing suggests that currently used police technologies expand the capabilities and functionalities of older versions although they also open up new prospects and challenges that are considered later in this article.
The rise of AI
Some of the new and emerging police technologies are AI models and they are increasingly being applied by justice systems around the world (AI and Big Data Global Surveillance Index, 2023). Broadly conceived, AI refers to technologies such as computer software programs that are designed to emulate the cognitive abilities and other qualities usually attributed to human beings. This definition encompasses a wide array of technologies, from those that rely on basic statistical analysis such as logistic regression models, to the most sophisticated machine learning algorithms. It is a definition that is now commonly used, and it depicts AI as a field encompassing various systems from the fundamentally rudimentary to the most esoteric.
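To make the breadth of this definition concrete, the sketch below implements a toy logistic-regression ‘risk score’ in plain Python. It is purely illustrative: the input features, weights and bias are hypothetical and are not drawn from any deployed policing system.

```python
import math

def sigmoid(z: float) -> float:
    """Map any real number to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def risk_score(prior_arrests: int, age: int) -> float:
    """Toy logistic-regression model with hand-picked, illustrative
    weights (not fitted to any real data)."""
    bias, w_arrests, w_age = -1.0, 0.8, -0.05
    return sigmoid(bias + w_arrests * prior_arrests + w_age * age)

low = risk_score(prior_arrests=0, age=30)   # lower score
high = risk_score(prior_arrests=5, age=30)  # higher score
```

Even a model this simple falls within the broad definition of AI used above, and the ‘bias in, bias out’ concern applies to it just as it does to more sophisticated systems: had the weights been fitted, they would encode whatever patterns exist in the training data.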
The technologies are transforming the field of criminal justice. In probation services, AI applications are being deployed for risk assessment and management (Ugwudike, 2020). In court, lawyers utilise them to optimise case analysis and judges draw on them to craft appropriate sentences (Hamilton and Ugwudike, 2023; Ugwudike, 2020; Ugwudike, forthcoming). Police forces are also using AI systems for various tasks including individual risk prediction (Oswald et al., 2022), spatiotemporal crime risk forecasts (Chapman et al., 2022), and biometric identification including live facial recognition (Fussey and Murray, 2019).
Studies suggest that the technologies are mirroring issues such as biases and lack of transparency (see Chapman et al., 2022; Ensign et al., 2018; Fussey et al., 2021; Lum and Isaac, 2016) associated with other data-driven AI deployed across various sectors from insurance (Tanninen, 2020), and recruitment (Ajunwa, 2021), to health (Price, 2019) and social welfare (Eubanks, 2018).
Predictive models that have been trialled or deployed in recent years include the predictive policing algorithms that statistically forecast spatiotemporal crime risk locations. Examples are ProMap (UK), PredPol, now rebranded as Geolitica (UK and US), HunchLab (US), Geographic Data Analysis and Statistics Hub (GeoDash) (Canada), Pre-Crime Observation System (PRECOBS) (Germany) and KeyCrime (Italy). The Harm Assessment Risk Tool has been used to predict risks of reoffending in the UK (Oswald et al., 2022). These and other AI technologies are generally trained (using voluminous data sets) to perform predictive tasks.
Information about the data sets from which the technologies learn how to perform predictive and other tasks autonomously or semi-autonomously is not readily available. But insights from the designers suggest that they rely on administrative data including police recorded crime data (Geolitica, 2023). The technologies also draw on other large-scale digital data sets culled from disparate sources including: biometric; communications; health; demographic; locational; and socio-economic records held by public services, private sector organisations and other vendors (see Hannah-Moffat, 2019 for an analysis of data provenance and algorithmic bias).
Prospects
The rise of technology-driven policing over the past 25 years may in part be attributable to the efforts of successive governments to streamline justice systems and reduce costs while improving efficiency. Part of this has involved the partial automation of policing operations (Ericson and Haggerty, 1997; Manning, 1992).
Sandhu and Fussey (2021) trace the advent of predictive policing AI to the extension of the ‘empirical turn’ in policing, marked by a shift away from subjective decision-making towards technology-driven objectivity. They note that this is in part connected to the growth of intelligence-led policing rather than ‘intuition-led’ approaches. Further, they trace the origins of intelligence-led policing to the late 1990s, which saw the emergence of collaborative work between technology designers and police services. The aim was to develop software that would render crime data more accessible to police officers and facilitate efficient allocation of police resources.
In their analysis of the provenance of new and emerging data-driven police technologies, Dencik et al. (2018) go further to identify the 9/11, Madrid and 7/7 terrorist attacks as key catalysts. They imply that predictive policing AI has become a key part of the state's attempt to enact pre-emptive and surveillance-based strategies in response to ‘threatening events’ detrimental to public order (see also Andrejevic, 2017).
What these suggest is that various socio-political developments explain the rise of technology development and application in policing in the past 25 years including: (a) the drive for scientific objectivity and effectiveness via technologies powered by increasingly available big data, (b) the belief that technology-driven resource allocation will improve efficiency and reduce costs, and (c) the emphasis on pre-emptive policing for public protection (see also, Babuta and Oswald, 2019). Several challenges are, however, associated with the technologies and are considered in the next sections.
Challenges
Proponents of police technologies contend that they offer opportunities for scientific objectivity, improved efficiency and the cost-effective targeting of scarce resources. But governments, academics, journalists, civil society organisations, digital regulators and other stakeholders have identified several ethical challenges, particularly as police forces around the world increasingly deploy advanced technologies from AI systems to robotics (AI Now, 2019; Big Brother Watch, 2018; Centre for Data Ethics and Innovation, 2020; European Commission, 2021; Ferguson, 2017; Law Society, 2019; Liberty, 2018; OECD.AI, 2023; Reisman et al., 2018; Richardson et al., 2019).
Some of the challenges have long been associated with police technologies and they relate to issues such as racial bias, intrusive and unwarranted surveillance, poor transparency, deprivation of autonomy and user resistance. Other issues associated with the technologies are relatively novel. A key example is the lack of explainability due to the opacity of AI tools. In their review of predictive policing AI, Bennett Moses and Chan (2018: 13) note that, ‘full transparency and comprehensibility is rarely possible in predictive policing’. This pessimism stems from the opacity surrounding key aspects of algorithm design including data provenance, processing and storage. Below, I provide an overview of the key challenges associated with data-driven police technologies and consider remedial strategies.
Bias
The problem of biased decision-making is a longstanding one and has been found to influence police practices such as ‘stop and search’ and other law enforcement activities (Baumgartner et al., 2016; Bowling et al., 2005; Murray et al., 2020). Indeed, official statistics in the UK consistently show that certain minorities, particularly Black people, are grossly over-represented in arrest and other policing data (Ministry of Justice, 2021). Studies suggest that this has more to do with racially biased policing, including discriminatory ‘stop and search’ practices, than offending propensity (e.g. Weber and Bowling, 2014).
Commentators argue that the technologies provide a veneer of objectivity and recast policing as both scientifically objective and devoid of the discretionary powers linked to historically ascribed biases (Sandhu and Fussey, 2021). The technologies obfuscate the ways in which police officers continue to apply their subjective professional judgement; for example, when deciding how and when to deploy the technologies.
Apart from issues to do with misuse, critics argue that the actual design of technologies such as the data-driven AI deployed by police forces can foment bias (Ferguson, 2017, 2020). Across western and non-western countries, the AI systems rely on various data including police records such as racially biased arrests and can reproduce biases embedded in such data: the perennial ‘bias in, bias out’ problem. In the UK, police officers themselves have recognised this problem and expressed concerns (Babuta and Oswald, 2019). Studies of police technologies around the world have indeed reported the problem (Barreneche, 2019; Chapman et al., 2022; Ensign et al., 2018; Lum and Isaac, 2016; Minocher and Randall, 2020; Richardson et al., 2019).
How does the problem of bias occur? It occurs via various conduits. In the case of predictive policing algorithms, for example, where such algorithms rely on arrest data, their crime risk predictions will simply direct the police back to the same locations they routinely target and police (areas with high arrest rates). Affected areas, typically low-income locations, could consequently become designated as ‘high crime’ areas (Lum and Isaac, 2016).
Meanwhile, studies by the designers, proponents and others imply that the over-policing of such locations (inspired by ecological perspectives) improves crime detection, prevention and control rates (cf. Mohler et al., 2015; Saunders et al., 2016). But these studies disregard the correlation between a high police presence and recorded crime rates in a location. Proximity and interactions between residents and the police can artificially inflate recorded crime rates because the police are likely to observe and record more crimes in such areas than in locations that are not as heavily policed. As such, the technologies can reproduce historical biases embedded in the data and expose already over-policed and over-criminalised communities to further surveillance (Brayne, 2017; Kaufmann et al., 2019) while legitimising their enhanced tracking (Sandhu and Fussey, 2021). Reinforcing this, Browning and Arrigo (2021) note that the models are also reproducing the outcomes of racially biased stop and search practices.
As Lum and Isaac (2016: 18) noted in their study of the PredPol predictive algorithm used by Oakland Police in California: ‘It is then plausible that the more time police spend in a location, the more crime they will find in that location’. This problem can particularly disadvantage minorities. In western jurisdictions such as the UK and the US, for example, they are more likely than others to reside in heavily policed areas designated as high crime areas, partly because of their experience of social exclusion (Selbst, 2017; Weber and Bowling, 2014).
The interdisciplinary study by Chapman et al. (2022) brought together experts from four disciplines – criminology, data science, physics and mathematics – to explore the internal mechanisms of a predictive policing algorithm and identify conduits of bias. Their study assessed how the design of predictive policing AI influences crime risk predictions. They used unbiased synthetic crime data containing two variables, location and time, to observe how design rationalities influence predictions. The study's methodological contribution lies in its use of synthetic data sets which, unlike the data used in studies of PredPol by Ensign et al. (2018), Lum and Isaac (2016) and others, provide the flexibility required for testing how predictive policing algorithm outputs change under varied data conditions.
With the synthetic data sets, the researchers ran large-scale tests of the predictive policing algorithm by updating it with the crime data and observing its reactions to cells (locations) updated with more crime data than others. They found that once the algorithm detected that some areas had higher crime rates than others, it got itself stuck in runaway feedback loops that made it continuously identify those areas as high risk regardless of fluctuations in background crime rates.
The algorithm could not correct itself and readjust its predictions when crime rates returned to their normal levels in the affected locations. This is because it was modelled on the near-repeat thesis, which assumes that a crime incident will trigger a nearby crime shortly after the initial event, and so it learned to identify the areas with higher crime rates as future crime risk areas. An implication of this type of bias is that, left unchecked, the algorithm can foment disparate outcomes via repeated police dispatch to the same over-policed areas.
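The runaway feedback loop described above can be reproduced with a few lines of simulation. The sketch below is a deliberately minimal, hypothetical model (not the code of any system discussed here): patrols are dispatched to the area with the most recorded crime, crime is only recorded where a patrol is present, and the true crime rates equalise partway through. The simulated model nevertheless stays locked onto the initially flagged area.

```python
import random

random.seed(0)

N_AREAS = 5
true_rate = [0.2] * N_AREAS
true_rate[0] = 0.6            # area 0 genuinely starts with more crime

recorded = [1] * N_AREAS      # prior recorded-crime counts (uniform)
dispatches = []

for day in range(200):
    if day == 50:
        true_rate[0] = 0.2    # background rates return to normal...
    # dispatch today's patrol to the area with the most recorded crime
    target = max(range(N_AREAS), key=lambda a: recorded[a])
    dispatches.append(target)
    # crime is only *observed and recorded* where the patrol is present
    if random.random() < true_rate[target]:
        recorded[target] += 1

# ...yet every dispatch, before and after day 50, goes to area 0
```

Because only the patrolled area can accumulate new records, the initial disparity is self-reinforcing: the feedback loop, not the underlying crime rate, drives the dispatch decisions, which is precisely the self-correction failure discussed above.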
Apart from biases linked to criminal justice data such as arrest records, other data-related biases have also been observed. One stems from the under-representation of minorities in the data used to train policing AI such as live FRTs. An external audit of commercial FRTs found differences in error rates linked to gender and skin colour. Darker-skinned females were more vulnerable to misclassification and experienced error rates of up to 34.7%, compared with 0.8%, which was the highest error rate for lighter-skinned males (Buolamwini and Gebru, 2018).
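Subgroup audits of this kind reduce to simple disaggregated arithmetic. The sketch below uses hypothetical confusion counts (chosen only to echo the magnitudes reported above, not the audit's actual data) to show how a single aggregate accuracy figure can mask large per-group disparities.

```python
# Hypothetical per-group audit counts; not the Gender Shades data.
audit = {
    "darker_female": {"errors": 104, "total": 300},
    "lighter_male":  {"errors": 3,   "total": 380},
}

def error_rate(group: str) -> float:
    """Misclassification rate for one demographic subgroup."""
    g = audit[group]
    return g["errors"] / g["total"]

def overall_error_rate() -> float:
    """Aggregate rate across all groups, hiding any disparity."""
    errors = sum(g["errors"] for g in audit.values())
    total = sum(g["total"] for g in audit.values())
    return errors / total

# The aggregate rate (~16%) masks a roughly 44-fold gap between groups.
```

The design point is that evaluating such systems only on aggregate accuracy, without disaggregating by subgroup, is exactly what allows disparate error rates to go unnoticed.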
Studies such as these suggest that the data-driven AI deployed by police services can produce disparate outcomes with implications for human and civil rights (see also Fussey and Murray, 2019). Recent media-reported cases of arrests, prosecutions and convictions on the basis of flawed biometric misidentification via FRTs for example, illustrate this (Johnson, 2022; New York Times, 2023).
Surveillance, control and privacy violations
Some commentators argue that contemporary data-driven technologies used for policing are expanding the surveillance and control capabilities of the state and extending its reach into private lives in the community while blurring the boundaries between the public and private (Benjamin, 2019). Reinforcing Foucault's (1977) view about the dispersal of discipline and state control across society, Cohen's (1985) text, Visions of Social Control captured this tendency decades ago. Using a fishing net metaphor in the book, Cohen shed light on the subtle ways in which community-based penalties were enabling the state to expand its social control mechanisms into the community, drawing more and more people into the net of penal control.
Civil society organisations, researchers and parliamentary bodies are highlighting similar challenges in relation to new and emerging police technologies by emphasising the capacity of AI technologies, for example, to analyse voluminous data on human activities across and beyond the internet, for surveillance and other purposes that can violate privacy rights (Diaz, 2020; House of Lords, 2021; Lee and Chin, 2022; Liberty, 2018). It has been argued that any biases embedded in the data on which such technologies rely expose already over-policed, historically disadvantaged and over-surveilled communities to problems such as excessive surveillance and misidentification, intruding on their privacy while drawing them unnecessarily into the net of social control (Brayne, 2017; Browning and Arrigo, 2021; Fussey and Sandhu, 2020; Kubler, 2017; Lee and Chin, 2022; Lyon, 2014). Recognising this, in the UK, the Court of Appeal ruled in 2020 that the use of live FRTs by South Wales Police without guidance on how it should be deployed and without notifying the public, was an unlawful violation of privacy rights (R v South Wales Police, 2020).
Deprivation of autonomy
Another challenge associated with technology use by the police in the past 25 years, and more recently as new AI systems emerge, is the adverse impact on professional autonomy. Perceived deskilling and fears concerning loss of autonomy and being replaced by machines can evoke resistance by police staff towards technology. This is particularly likely where the introduction of technology risks eroding experiential knowledge and professional judgement. Brayne and Christin (2020) explored the use of predictive policing algorithms by some police services in the US and found instances of resistance fuelled by anxieties about deskilling and micro-management. Regarding concerns about loss of autonomy, the study revealed that the technologies were not eliminating professional discretion; rather, discretion was being displaced to less visible stages of decision-making, eroding accountability.
Limited transparency and accountability
Lack of transparency is another commonly cited ethical issue linked to police technologies. With AI systems, for example, clear information regarding their provenance and prevalence is lacking. Researchers and others such as civil society organisations have had to rely on Freedom of Information requests to gain insights. Through this approach, Oswald and Grace (2016) discovered that several police forces in England and Wales were using predictive algorithms in 2016.
The inscrutability of AI technologies is yet another factor that undermines transparency. Some are complex systems capable of analysing multiple datapoints autonomously, using complex calculations. Eventually, their decision-making processes (black boxes) can become too complex and inaccessible even to their designers. Moreover, trade secret laws mean that designers are not required to disclose the contents of their commercial algorithms. Such lack of explainability and access undermines accountability and has been found to also affect technologies deployed by other sections of the justice system, such as probation services.
The 2016 case of State v Loomis illustrates this (Harvard Law Review, 2017). In that case, a defendant who had been ascribed a high-risk score by the Correctional Offender Management Profiling for Alternative Sanctions algorithm and was sentenced to six years in prison argued that the trial court's decision to deny him the opportunity to assess the algorithm and challenge the score had violated due process provisions which empower defendants to rebut incriminating evidence. The Supreme Court of Wisconsin disagreed, stating that the verdict would have been the same regardless of the algorithmic score. However, the court held that in future, courts should be warned of the algorithm's limitations, and ruled that such algorithms can only be used after relevant warnings have been issued (State v Loomis). An additional factor that undermines transparency is the current outsourcing or procurement of technology design to commercial non-state actors (Joh, 2019; Leese, 2023). This blurs the lines between the public and the private and convolutes accountability chains.
Conclusion
Police forces have long deployed various technologies for information-focused and intelligence-driven operations but data-driven varieties including advanced AI systems have emerged in the past 25 years. Proponents argue that they offer various capabilities and can support the cost-effective targeting of scarce police resources while also offering levels of scientific objectivity required for improved efficiency.
As police services around the world move increasingly towards the use of AI applications, it has become imperative to pay attention to their ethical challenges. Bias, intrusive and unwarranted surveillance, deprivation of autonomy, resistance and lack of transparency are frequently cited issues. These suggest that legal and regulatory frameworks are required to bring current applications into alignment with existing laws governing other aspects of policing, to safeguard both human and civil rights.
Independent research and third-party audits can also flag potential issues ex ante before the technologies are rolled out or identify problems ex post (Brown et al., 2021; Ugwudike, 2022). The UK government has recently launched the Algorithmic Transparency Recording Standard Hub where public sector organisations can provide information about their algorithms (Centre for Data Ethics and Innovation, 2022). Such interventions, if successful, can help protect rights and ensure public support. Highly publicised incidents of AI bias (Heaven, 2020, 2021) and other challenges such as the failure to comply with ethical and legal standards (Fussey and Murray, 2019; Minderoo Centre for Technology and Democracy, 2022) can erode the legitimacy vital for public support and cooperation.
Footnotes
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
