Abstract
Militarized policing strategies aiming to identify and nullify risks to national security in Western nations have become central to the biopolitical regulation of racialized populations. While the disproportionate impact of pre-emptive counter-terrorism policing on ‘Muslim’ populations has been highlighted, the post-racial techno-politics of predictive policing as a mode of securitization remain overlooked. This article argues that the ‘war on terror’ is governed by a state of crisis that conditions a pre-emptive biopolitics of containment against (unknown) future threats. We examine how predictive policing is progressively dependent on the computational production of risk to avert impending terror. As such, extant forms of counter-terrorism algorithmic profiling are shown to mobilize post-racial calculative logics that renew racial oppression while appearing race-neutral. These predictive systems and pre-emptive actions, while seeking to securitize the future by identifying and nullifying suspects, evasively remake race as risky, thus rendering security indistinguishable from insecurity. Hence, we assert that state securitization is haunted by a profound sense of racialized dread over terrorism, for it can only resort to containing, rather than resolving, the perceived threat of race.
‘What is being preempted is not the danger of the known subject but the danger of not-knowing’
Introduction
The declaration of a global ‘war on terror’ (WoT) in response to the 11 September 2001 (9/11) attacks on the USA led to 20 years of war in Iraq and Afghanistan, killing 90,000 civilians and costing $8 trillion (Davidson, 2021). The mission to eradicate the threat of so-called ‘Islamic terrorism’ against Western nations remains unfulfilled, while counter-terrorism and mass surveillance operations on ‘Muslim’ populations have become widespread. The ethno-racial profiling of suspect populations, premised on ‘knowing the future’, has been studied as an imperious practice of homeland counter-terrorism surveillance. However, it is ‘the danger of not knowing’ that lies at the heart of the expansion of strategies of militarized predictive policing as a mode of state securitization, and this requires further scrutiny and a novel framework of analysis.
In this article, we explore the relationship between post-race logic, risk and the pre-emption of terrorism. That is, the constructed relationship between race and terror is governed through a discourse of risk which, in turn, rests on post-race logic to rationalize a pre-emptive biopolitics of containment. Notions of the risk of terror emerge through discourses of extremism that evasively frame Muslims as the embodiments of future violence. We unpack how computationally driven surveillant assemblages, supposedly race-neutral forms of counter-terrorism risk profiling, obfuscate their racialized epistemologies. These racialized assemblages are shown to be predicated on the fear of Islamic extremism functioning against a supposedly non-threatening standard of national identity that appears unrelated to race while embedded in notions of white normativity. As such, post-racial logic indistinctly mobilizes race as the ultimate risk, as harboring deadly potential and as necessitating constant surveillance, calculation and corrective pre-emptive intervention.
Our analysis of how post-race mythology determines securitizing logics and practices of pre-emption under the WoT is organized into three parts. The first part examines how the predictive politics of militarized counter-terrorism policing is constituted by race vis-à-vis biopolitical intimations of risk. We demonstrate that race, as a modality of constructing fear-inducing ‘abnormality’ and ‘otherness’, determines acute anxiety over the possibility and, thus, risk of terror. The second part interrogates how extant targeted efforts to prevent terrorism have recalibrated toward its pre-emption. We contend that the post-racial is a key determinant of the operational logics underpinning pre-emptive actions under the WoT. In particular, the deployment of counter-terrorism machine-learning algorithms to identify possible ‘suspects’ or acts of terror is not dependent on pre-determined racial categories or behaviors. Rather, the pre-emption of terrorism–the thwarting of a future yet to materialize–is determined by inductive computational post-racial logics of risk, operating in zones of uncertainty and unknowability. The final part of the article further explores the relationship between post-racial and pre-emptive logics to show how it functions through regimes of white normativity that generate racially-charged indistinction between security and insecurity, thus engendering a sense of racial dread that rationalizes a biopolitics of containment.
Prediction, race and risk
The objective of preventing crime, rather than simply responding to already committed crime, has a long history in Western nations. Some of the earliest forms of militarized policing in Britain’s colonies reflected a ‘commitment to a preventative function’ (Brogden, 1987: 11), as part of a broader strategy of racial regulation. However, it was during the 1970s in the USA that prevention became an explicit strategy in attempts to reduce rising levels of crime and make policing more efficient and effective. This heralded a future-orientated intervention-based approach premised on the prediction and pre-emptive management of crime.
The impact of advances in digital technology on policing over the last few decades has accelerated and entrenched a belief in the possibility of data-driven crime prediction. What is now commonly referred to as ‘predictive policing’ has been understood as the proactive use of algorithmically mediated data analysis for the purpose of finding patterns in datasets, based on which risk estimates are produced for either individuals or locations and are operationalized in the form of targeted prevention measures (Egbert and Leese, 2021: 19).
The growth of predictive policing is connected to an increasing number of crime-prediction software applications. In particular, the for-profit company PredPol (renamed Geolitica) pioneered the adoption of such crime-forecasting tools among police departments in the USA and UK.
The PredPol software was developed in 2010 through a collaboration between the Los Angeles Police Department and researchers from the University of California. A co-founder of the company, the anthropology professor Jeff Brantingham, used a time- and place-based approach to identify urban geolocations that were supposedly more at risk of certain crimes. Brantingham, working with other colleagues, repurposed algorithms used for predicting earthquake aftershocks to anticipate the location of crime ‘hot spots’ based on ‘near-repeat’ offenses and other behavioral criminology theories (Shapiro, 2017). However, often overlooked are the militarized origins of the development of crime-prediction software programs. Brantingham was motivated to develop PredPol through his original Pentagon-funded work forecasting battlefield casualties in Iraq. Other projects included research funded by the US Department of Defense that attempted to model ‘gang’ violence in California using techniques developed to analyze Iraqi insurgent attacks (González, 2015). 1
A key innovation of crime-prediction software has involved utilizing machine learning algorithms, increasingly referred to as artificial intelligence (AI) systems, which are claimed to ‘learn’ from data. 2 Machine learning is based on approaches including ‘supervised’ and ‘unsupervised’ learning in which an algorithmic model is trained on data to predict outcomes. Supervised approaches depend on labeled (structured) data and known outcomes to learn or model the relationship between inputs and outputs. In contrast, unsupervised learning uses unlabeled (unstructured) data and discovers statistical correlations or patterns (significant features) to generate a model without knowing the outcomes in advance.
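The distinction between the two approaches can be illustrated with a minimal, self-contained sketch. Everything below is invented for illustration (the two-cluster toy data, the nearest-centroid classifier standing in for supervised learning, and the bare-bones k-means routine standing in for unsupervised learning); it bears no relation to any actual policing system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated clusters of 2-D feature vectors.
a = rng.normal(loc=(0, 0), scale=0.5, size=(50, 2))
b = rng.normal(loc=(3, 3), scale=0.5, size=(50, 2))
X = np.vstack([a, b])

# --- Supervised: labels (known outcomes) are given in advance. ---
y = np.array([0] * 50 + [1] * 50)          # labeled (structured) data
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])

def predict(point):
    # Classify a new input by its nearest class centroid, a relationship
    # between inputs and outputs modeled from the labeled data.
    return int(np.argmin(np.linalg.norm(centroids - point, axis=1)))

# --- Unsupervised: no labels; discover structure in the data alone. ---
def kmeans(X, k=2, iters=20):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(
            np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([X[assign == j].mean(axis=0) for j in range(k)])
    return assign

labels = kmeans(X)
```

With labels, the classifier reproduces the known classes; without labels, k-means recovers the same two groupings purely from statistical structure, never knowing the outcomes in advance.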
PredPol appears to use supervised machine learning to make area-based predictions of crimes (Haskins, 2019). It relies on ‘hot spot’ mapping and ‘near-repeat’ theory to produce risk estimates by analyzing existing police data (date/time, location and crime type). 3 More recently, there have been innovations in forecasting crime that move beyond predefined police datasets to (dubiously) claim greater accuracy. HunchLab has been a leader in deploying ‘big data’ predictive analytics to generate geospatial crime predictions, utilizing both supervised and unsupervised approaches (Mittelstadt et al., 2016).
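PredPol’s actual model is proprietary, but the ‘aftershock’-style near-repeat logic described above can be sketched as a self-exciting intensity function: a constant background rate for a grid cell, plus an exponentially decaying boost from each recent recorded crime in that cell. All parameter values here are invented for illustration:

```python
import math

def cell_risk(t_now, past_event_times, mu=0.1, kappa=0.5, omega=1.0):
    """Simplified self-exciting ('aftershock'-style) risk score for one
    grid cell: background rate mu, plus an exponentially decaying boost
    for each past recorded crime in the cell. This encodes near-repeat
    logic: recent events raise short-term risk. Parameters are
    illustrative, not PredPol's."""
    boost = sum(kappa * omega * math.exp(-omega * (t_now - t))
                for t in past_event_times if t <= t_now)
    return mu + boost

# A cell with a recorded burglary yesterday scores higher than an
# identical cell whose last recorded event was ten days ago; both
# decay back toward the background rate mu over time.
recent = cell_risk(t_now=10.0, past_event_times=[9.0])
stale = cell_risk(t_now=10.0, past_event_times=[0.0])
```

The sketch makes the circularity visible: every recorded event mechanically raises the cell’s near-term score, regardless of how that record was produced.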
HunchLab’s diversified data sources encompass infrastructural and environmental characteristics, including risk terrain modeling (based on weather patterns, locations of amenities and socioeconomic indicators). In HunchLab’s own words: ‘The system automatically learns what is important for each crime type and provides recommendations of where to focus the resources that you have available. If you don’t have particular datasets (such as bars or bus stops), the system simply adapts to use the data available in a given jurisdiction’ (Azavea, 2015: 10).
Notwithstanding the problematic nature of ‘crime theories’, HunchLab implicitly expounds an ideology of ‘big data’ that is predicated on ‘data science’ empiricist claims of greater objectivity and accuracy in the algorithmic modeling of complex social phenomena. In the case of crime forecasting, infrastructural and environmental data supposedly mitigate against the possible historical partiality of police data or the politics of police knowledge.
Azavea, the original company behind HunchLab, sold the latter to ShotSpotter in 2019, which renamed HunchLab as ‘Community First Patrol Management Software’. The software claims to mitigate bias by using ‘objective, non-crime data and purpose-built mechanisms’ (ShotSpotter, 2022), thereby acknowledging what is known as the ‘garbage in, garbage out’ computing problem of automated decision-making systems. This approach purportedly addresses how institutional racism has created conditions for the disproportionate targeting and overpolicing of racially defined communities, thus producing crime data determined by racialized relations of power. If these data alone are used to train crime-prediction algorithms, racism is further entrenched–what has been described as the problem of ‘runaway feedback loops’ (Ensign et al., 2017).
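The feedback dynamic can be made concrete with a toy simulation, loosely styled after the urn-model argument in Ensign et al. (the districts, rates and seed counts below are all fabricated): two districts have identical true crime rates, but only patrolled crime is recorded, and patrols follow the records.

```python
import random

random.seed(1)

def simulate(days=200, true_rate=(0.3, 0.3), seed_counts=(5, 1)):
    """Toy 'runaway feedback loop': two districts with the SAME true
    crime rate, but district 0 starts with more recorded incidents.
    Each day the single patrol goes to the district with more recorded
    crime, and only patrolled crime is observed and recorded, so the
    initial disparity feeds on itself. All numbers are illustrative."""
    counts = list(seed_counts)
    for _ in range(days):
        patrol = 0 if counts[0] >= counts[1] else 1
        if random.random() < true_rate[patrol]:  # crime occurs & is seen
            counts[patrol] += 1
    return counts

counts = simulate()
```

Because district 0 starts with more recorded incidents, it receives every patrol, its record grows, and district 1’s record never changes: the algorithm ‘confirms’ a disparity that its own allocation rule created.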
ShotSpotter advocates that by improving (expanding and diversifying) datasets and deploying advanced machine learning algorithms, predictive policing will deliver neutral, scientifically objective crime forecasting. The company promotes an AI dynamic modeling of crime premised on the fallible notion that the system improves as it ‘self-learns’. What ShotSpotter fails to address is that data in the form of neighborhood boundaries, types of amenities and other socioeconomic indicators can, when correlated, operate as discriminatory proxies of race. This may amplify risk estimates of crime in racialized and marginalized areas (Chun, 2021).
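A minimal simulation shows how an ostensibly race-neutral input can function as a proxy. The population, the ‘neighborhood’ variable and the 0.9 correlation strength below are all fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Synthetic population: a protected attribute, and a 'neutral' proxy
# (e.g. a neighborhood code) that is strongly correlated with it via
# simulated historical segregation. No real data is used.
group = rng.integers(0, 2, size=n)                # protected attribute
neighborhood = np.where(rng.random(n) < 0.9, group, 1 - group)

# A scoring rule that never 'sees' the protected attribute, only the
# proxy, still reproduces the group split almost perfectly.
score = neighborhood.astype(float)                # risk follows the proxy
agreement = (score == group).mean()               # close to 0.9 here
```

Dropping the race column from the inputs does nothing to the outputs when a correlated proxy remains: the model is formally ‘race-blind’ and substantively race-tracking.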
It is worth momentarily pausing to clarify that what is called ‘crime’ cannot be grasped outside the function of policing and its relationship to race-making and racial regulation. To elaborate, classic and contemporary work examines how police actively and strategically mobilize race as a social category in the production and publication of crime statistics (see Bridges, 2015; Bridges and Gilroy, 1982; Gutzmore, 1983). This mutual process of racializing crime and criminalizing race legitimizes racist patterns of policing. While such relations of power are harmful in and of themselves, their point is ‘not just to punish Black communities but to mark them’ (Kelley, 2016: 28). In other words, racial regulation (re)produces formal police knowledge that (re)institutionalizes race as a law-and-order problem. As Joshua Scannell (2019: 108) maintains: ‘Policing does not have a “racist history”. Policing makes race and is inextricable from it. Algorithms cannot “code out” race from American policing because race is an originary policing technology, just as policing is a bedrock racializing technology.’
Scannell draws on the work of the abolitionist scholar, Ruth Wilson Gilmore, when characterizing the criminal justice system as organized through racism. Gilmore (2007: 247) defines racism as ‘the state sanctioned and/or extralegal production and exploitation of group-differentiated vulnerability to premature death’. From this standpoint, Scannell (2019) argues that ‘In the inverted world of predictive policing, group differentiated vulnerability is translated into probable criminal “risk”. Predictive policing software uses almost every conceivable measure of vulnerability and victimization under American racial capitalism’ (p. 111).
Our point is that the long, enduring and institutionalized relationship between racism and criminal justice makes the pursuit of ‘objective’ crime prediction impossible. The elusiveness of such an endeavor is further compounded by the biopolitical (and hence racialized and militarized) concept of statistical risk, on which predictive policing is predicated.
Dan Bouk (2015) traces the connections between prediction, risk and race in the history of actuarial science and life insurance policy in the USA. A burgeoning post-Civil War insurance industry effectively created–from what we now call ‘big data’–a ‘statistical individual’ as a commodified risk. Significantly, the operation of ‘white data politics’ materialized via the construction of an exclusionary, normative ‘white’ category in the classificatory practices of insurance companies. Black populations, in contrast, were racially marked not only because of perceived biological differences but also in terms of a black pathology characterized by their itinerancy and indeterminacy (e.g., lacking legitimate medical or property records). Bouk notes how ‘these systems perpetuated inequality in the ways they made risks, in their systematic preference for making African Americans into substandard, subprime risks’ (p. 185).
What emerged from the production of Black populations through notions of probability and prediction was biopolitical, statistical racism, based on (ethno)racially coded ‘otherness’ as ‘risky’, to manage uncertainty, pre-empt future outcomes and inform present actions. Furthermore, the relations of exclusion, isolation and marginalization that materialized from such biopolitical arrangements determine contemporary algorithmic calculations. For instance, financial policies that redlined African American neighborhoods deemed risky ‘have greatly influenced modern algorithms because they generated massive datasets that consist of decades of information built on exclusion and discrimination’ (Allen, 2019: 234). Today, the data used to calculate credit scores, approve or deny mortgages, determine interest rates on loans and make other automated decisions include not only financial transactions but also a host of ‘suspect, race-related data’ (p. 238). This marks a process of ‘digital redlining’ where algorithmic calculations rest on, and reinforce, long-standing relations of racial denial, exploitation and segregation (Noble, 2018).
With race being a key determinant of risk production, contemporary technological practices of predicting and mitigating criminal risk are bound to institutional racism in the criminal justice system and manifest the racial logics and consequences of earlier forms of ‘corporate risk-making systems’ (Bouk, 2015). It should be of no surprise that the ‘big data’ opaque machine learning models of crime-prediction software programs have been entangled with the innovation of predictive analytics of the financial industry. For instance, IBM spent over $14 billion developing analytics software for corporate, law enforcement and state security. The techno-solutionist turn to manage uncertainty has led to the ‘language of race’ being displaced by a ‘language of risk’ (Wang, 2018: 251). This coding of racial vulnerability as risk, and of risk as racially determined, is critical to the construction of a ‘digital carceral infrastructure’. And, as Jackie Wang observes: ‘It is important that we pay attention to this paradigm shift, as once the “digital carceral infrastructure” is built up, it will be nearly impossible to undo, and the automated carceral surveillance state will spread out across the terrain, making greater and greater intrusions into our everyday lives’ (p. 251).
We have sought to emphasize that race is central to the constitution of risk and its attendant practices of predictive policing. The widespread adoption of racially determined predictive policing systems, in turn, is integral to the intensification of a ‘digital carceral infrastructure’. Such developments are part of the ‘biopolitics of securitisation’ (Nijjar, 2022), where the racialization of risk, and the production of race as risk, determines racial and martial regimes of police surveillance, analysis and pre-emptive action. As we discuss in the next section, the relationship between race, surveillance, analysis, intervention and securitization is central to instigating an inimical development from prevention to pre-emption in the post-racial ‘calculus of risk’ (Amoore, 2013).
Pre-empting racialized terror
Discussions and critiques of predictive policing focus on ‘everyday’ types of crime and generally do not address state securitization, such as counter-terrorism. Predictive policing software programs algorithmically model ‘future’ crime by seeking to identify patterns of occurrence through analyzing historical data. Embedded in the martial logic of pre-emption, this militarized mode of policing has intensified under the WoT, which has mobilized a seemingly benign ‘community policing’ discourse to embed surveillance, data collection, risk analysis and pre-emptive action into the routine machinery of public policy and daily life (see Kalra and Mehmood, 2013; Kundnani, 2016; Webber, 2016). However, the heightened emphasis on pre-emptive national security policies and measures is complicated and problematized by the fact that terrorism is a high-impact, improbable event that is remarkably difficult to algorithmically model.
In professional security circles, attempting to predict acts of terrorism is commonly compared to the futility of ‘finding the needle in the haystack’ (Aradau and Blanke, 2017). For example, a large-scale study by Weisi Guo (2019) examined over 30,000 acts of terrorism in more than 7000 cities worldwide since 2002. The findings indicate that the frequency with which terror attacks occur is random and that such attacks are ‘memoryless’ (the chance of a new attack occurring is essentially independent of previous attacks). It is almost impossible to build accurate predictive terrorism models based on historical data because this would assume that terrorism has stable, discernable patterns of occurrence and is thus rhythmic and non-random (Munk, 2017).
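The ‘memoryless’ property has a precise statistical meaning: if waiting times between events are exponentially distributed, then P(T > s + t | T > t) = P(T > s), so the time elapsed since the last event carries no predictive information about the next one. A simulated check (the 30-day mean waiting time is an arbitrary illustrative value, not an empirical estimate):

```python
import numpy as np

rng = np.random.default_rng(7)

# Inter-event waiting times drawn from an exponential distribution,
# the canonical 'memoryless' process (mean wait of 30 days, chosen
# purely for illustration).
waits = rng.exponential(scale=30.0, size=1_000_000)

s, t = 10.0, 50.0
p_uncond = (waits > s).mean()               # P(T > s)
survived = waits[waits > t]                 # condition on waiting past t
p_cond = (survived > s + t).mean()          # P(T > s + t | T > t)
```

The two probabilities agree (both near e^(-10/30)): having already waited 50 days without an event changes nothing about the chance of surviving another 10, which is exactly why historical attack data offers so little leverage for timing predictions under such a model.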
Notably, Nassim Taleb (2010) has elaborated on how terrorism can be grasped as a ‘black swan event’, which has the following key attributes: First, as an exceptional event, terrorism cannot be easily anticipated and lies beyond our regular expectations; second, it is an event with extreme impact that can cause profound social change; and third, while an act classified as terrorism is rare, we are nonetheless compelled to ‘concoct explanations for its occurrence after the fact, making it explainable and predictable’ (p. xviii).
The 9/11 attacks and subsequent attacks in Europe heightened a realization and a sense of fear that improbable events do occur, with devastating consequences. In response, the reactionary declaration of a WoT presented ‘Islam’ as a capricious threat to national security. Two decades of the ongoing WoT have made Islam synonymous with overlapping anxieties about extremism and the threat of terrorism (Kundnani, 2015). The figure of the ‘Muslim as a potential terrorist’ rationalized a racialized and militarized regime of state surveillance that has eroded democratic rights and civil liberties (Bunyan, 2010; Norris, 2017; Sharma and Nijjar, 2018). In addition, the WoT effectively legitimized state-sanctioned racial violence and death production, in the form of rendition, indefinite detention without trial and torture against ‘suspect’ Muslims, as exemplified by the notorious Guantanamo Bay military prison (Giroux, 2010).
Western states have abjectly failed to acknowledge that neocolonial domination and neoliberal global instabilities fomented what has been reductively labeled as ‘Islamic’ terrorism. Domestic deregulation, privatization and de-industrialization, alongside foreign bombardment, have reorganized racial order to produce ‘masses of surplus populations’ (Kundnani, 2021: 52). Rather than addressing terrorism as a complex geopolitical problem that implicates and incriminates Western states and their economic imperatives, the WoT has fixated on the failure of securitization alongside the perceived threat of Islam. In particular, the inability of intelligence agencies to predict the 9/11 terror attacks incited a turn toward a techno-solutionist ideology: ‘Intelligence, counter-terrorism, policing and counterinsurgency have been transformed by the promise of big data and predictive analytics to uncover unexpected patterns and pinpoint potentially suspect ‘needles’’ (Aradau and Blanke, 2017: 374).
However, if it is near impossible to accurately predict low-frequency improbable events, then upon what grounds do counter-terrorism predictive analytics operate? The deployment of such analytics to predict acts of terror has more to do with their performativity than their accuracy. That is, the ‘performativity of prediction’ can alter the space it inhabits and bring into being what it predicts (MacKenzie, 2015). This mode of prediction, premised on fabrication, is not strictly manifested as prevention based on identifying and managing determinate threats in a knowable world. Rather, it operates through a logic of pre-emption that confronts indeterminate threats in an unknowable world (Massumi, 2010, 2015).
Both prevention and pre-emption aim to neutralize threats and regulate the future, but Brian Massumi (2015) contends that they differ in operational logics. Prevention works based on a calculable threat existing prior to its intervention, with the possibility of identifying and addressing its causes to prevent it from materializing. In contrast, the WoT advanced pre-emption as a response to threats to national security that are yet to materialize, and that may not materialize, but are nonetheless imagined as always capable of doing so. The ‘black swan’ event of 9/11 instituted a ‘military doctrine’ of pre-emption that rendered terrorism a permanent, ceaseless threat, ‘tirelessly agitating as a background condition, potentially ready to irrupt’ (Massumi, 2015: 30). David Theo Goldberg (2009: 44) makes a similar point, noting how ‘the ordinary calculation of risks becomes the risk society’, how ‘threats to (a) society become the threatened society’ and how socio-political arrangements become predicated on (constant) fear.
We can further grasp the sense of permanency around the notion of threat through the work of Louise Amoore (2013), who maintains that the risk of terrorism has not necessarily changed. What has changed is the calculus of risk, in conditions of constant emergency, of ceaseless foreboding, of un-ending fear and, as we discuss below, of continuous dread over terrorism. This modality of risk is not expressly involved in the prevention of a probable future event based on existing knowledge. Rather, it looks to pre-empt an unfolding and emergent event in relation to an array of possible projected futures. It ‘seeks not to forestall the future via calculation but to incorporate the very unknowability and profound uncertainty of the future into imminent decision’ (p. 9).
The perpetual, background, indeterminate threat of terrorism is the object of pre-emption and its associated practices. That is to say, the performativity of pre-emptive action imperiously makes an indeterminate threat determinate; it brings a threat into being to render it the focus of attention–in this case, the threat of terror. As Massumi (2015: 12) writes, ‘the most effective way to fight an unspecified threat is to actively contribute to producing it’.
Massumi reminds us that the WoT presented the nature and motives of the 9/11 attacks as ‘incomprehensible’, while former US president George W. Bush’s formulation of an ‘axis of evil’ framed terrorist states and actors as ‘inhuman’. We would stress that the discourse of the WoT animates a profoundly racialized account of a terrifying and terrorizing Muslim ‘other’, with Islam long deemed the deadly antithesis of white Euro-modern norms (Goldberg, 2009). While generating profound popular and political fear itself, the specious link between Islam and the threat of terrorism corresponds to ‘the existential terror of not knowing what is going to happen’ (Wang, 2018: 238). That is, fear over Islam and potential terrorist violence interacts with acute anxieties over a fundamentally unknowable and uncontrollable future. Deep insecurity over race and the future of civilization nonetheless fuels militarized and academic-driven counter-terrorism policies, which inform and institutionalize novel techno-solutionist practices to pre-empt impending terror and remediate racially-coded trepidation (see Martin, 2014).
Accounts of pre-emption single out the events of 9/11 as a pivotal moment for state in/security, managing uncertainty and the unknown terrorist. For Massumi (2015), this is presented as an epochal shift in the modality of power and control. However, our contention is that in the aftermath of 9/11, not only has the calculus of risk and pre-emption been instituted, but this process is immanently conditioned by race vis-à-vis the biopolitical containment of Muslim populations. It is not simply that the consequences of pre-emptive strategies–for example, the UK government’s Prevent program (discussed below)–lead to the racial profiling and militarized policing of Muslim groups; rather, we maintain that the force of race, as an ultimate ‘other’ of, and a ceaseless threat to, Western civilization, is what underpins and impels the logic of pre-emption as a modality of regulation.
To further grasp the performativity of pre-emption, we can interrogate predictive algorithmic systems as biopolitical ‘racializing assemblages’. Alexander Weheliye (2014) conceives racializing assemblages as construing ‘race not as a biological or cultural classification but as a set of sociopolitical processes that discipline humanity into full humans, not-quite-humans, and non-humans’ (p. 4). In other words, the machinery of racializing assemblages is fundamental to the biopolitical ‘differentiation and hierarchization’ (Weheliye, 2014: 5) of populations. Linked to this is the biopolitical management of populations imagined as abnormal, which lies at the heart of modern state racism(s) (Foucault, 2003a). In the context of pre-emption and state securitization, we can grasp a racializing surveillant assemblage as a ‘technology of social control where surveillance practices, policies and performances concern the production of norms pertaining to race and exercise a “power to define what is in or out of place”’ (Browne, 2015: 16; see also Sharma and Nijjar, 2018).
It has been highlighted that building predictive models based on establishing stable norms for extemporaneous ‘black swan’ terrorism events is near impossible (Huey et al., 2015). We suggest that pre-emption, as a racially-determined martial logic, relates to the post-racial to constitute a modality of knowing the future, managing uncertainty and containing the perpetual threat of racialized ‘otherness’, abnormality and difference–‘what is in or out of place’. The post-racial marks the mutability of racial differentiation that obfuscates its own reality, a ‘critical affirmation of proliferations of racism in a contemporary neoliberal order that claims to have gone beyond the racial’ (Sharma and Sharma, 2012). A constituent element of post-racial conditions is that amid the apparent death of race, possibility and potential serve as critical aspects of racial regulation under the WoT. Key to such developments in the coterminous politics of racism and militarization are policing strategies of pre-emptive risk analysis. Such strategies, emanating from a politics of race and time, encompass uncertainty and unpredictability over the future terrorist actions of ‘Muslim’ populations: ‘an ever-presence of indiscriminate threat, riddled with the anywhere-anytime potential for the proliferation of the abnormal’ (Massumi, 2015: 26).
The post-racial condition, at face value, appears indifferent to determining the normal/abnormal, the safe/risky subject in biopolitical societies of security (or control). Security apparatuses concerned with constant surveillance generate ‘differential normalities’ that operate as an ideal norm. In other words, biopolitics modulates differences in populations and regulates degrees of ‘normality’–it is not simply stable or invariable (Foucault, 2007). As such, while it seems to disavow the salience of race, post-racial logic combines with pre-emption to determine and proliferate the abnormal, in a way that oversees the elusive biopolitical production of racialized assemblages of protean high-/low-risk populations (Kafer, 2019).
Goldberg (2016a) intimates the paradoxical condition of the post-racial, referring to it as ‘racism’s contemporary articulation’. Thus, if the performativity of pre-emption constructs or brings into being a perpetual background threat, then the post-racial performativity of pre-emption is deeply involved with both threat-making and race-making–fabricating permanent threat through racial logic and race through notions of permanent threat. Amoore (2013) highlights that security practice works ‘on and through the emptiness and the void of that which is missing… It is precisely across the gaps of what can be known that new subjects and things are called into being’ (p. 3). These ‘new subjects’, we contend, are charged by race, emerging via the inductive machine-learning predictive analytics of counter-terrorism.
While machine learning algorithms are largely opaque by design, it is worth looking more closely to unravel their post-racial logics. Ethno-racial classification operates based on differentiation and hierarchization measured against a white norm. In the case of identifying potential terrorists, ethno-racial classification is found wanting because it lacks stability and cannot determine a terrorist. In short, ‘not all Middle Eastern Muslims are terrorists, and not all terrorists are from the Middle East’ (Munk, 2017). However, in practice, the state security pre-emptive ethno-racial profiling of Muslims as suspected terrorists is profoundly racist (see Sharma and Nijjar, 2018). What the post-racial performativity of counter-terror algorithms does is confound such judgments, because it appears to eschew directly relying on pre-existing ethno-racial categories or presumed characteristics.
Among statisticians, the aphorism that ‘all models are wrong, but some are useful’ is commonly cited when defending models as approximations of the ‘real world’. Not only is this line of reasoning presupposed in the development of predictive algorithms but it is also fundamental to legitimizing their utility in the face of unknowability. Machine learning models are increasingly designed to detect patterns in potentially boundless amounts of data: ‘Nearly all pivot around ways of transforming, constructing or imposing some kind of shape on the data and using that shape to discover, decide, classify, rank, cluster, recommend, label or predict what is happening or what will happen’ (MacKenzie, 2015: 415). The approach used to create a model identifying patterns can differ based on the form of machine learning.
‘Deep learning’ algorithms are exalted for their ability to ‘inductively’ learn from structured and, increasingly, unstructured data. These algorithms are a subset of machine learning and utilize artificial neural network architectures (McQuillan, 2022). 4 They are not necessarily dependent on statistical ‘variables’ assumed to have some type of causal relation in creating a model of limited dimensionality; for example, variables such as age, gender, occupation, income, and so on. Instead, deep learning is based on features (combinations of attributes) that can be ‘learned’ through exposure to forms of ‘big data’, such as a user’s social media posts, reactions and hashtags, web search history, sentiment, associations, financial transactions, phone calls, reading and viewing habits, travel destinations, type of education, country of origin, markers of religiosity, biometrics and so on. Features can encompass vast types of attributes existing in a space of high multi-dimensionality (Amoore, 2021; Munk, 2017).
One of the most compelling advances in machine learning is the claim that it can produce meaningful results from ‘big data’ in which seemingly limitless numbers of attributes may be relevant. The computational challenge of analyzing these multitudinous feature spaces has given rise to the application of sophisticated statistical techniques in developing deep learning predictive models. How these models empirically work in high-dimensional space is largely opaque and often the focus of efforts to improve machine learning (Neuman et al., 2022). Yet, to only insist on the transparency and explainability of these algorithms falls short of addressing their socio-technicity. While these algorithms differ in modes of operation, they all entail ‘kinds of value’ (MacKenzie, 2015), which are obfuscated by their purported mathematical objectivity. 5
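The high-dimensional feature spaces described above ultimately collapse into a single output. As a purely illustrative sketch, with every feature name and weight invented (no real system discloses its parameters), the generic final step of a learned risk scorer can be reduced to a weighted combination of features passed through a logistic function:

```python
import math

# Purely illustrative: four invented features standing in for the
# thousands of learned parameters of a real model. No actual system,
# feature, or weight is depicted here.
WEIGHTS = {
    "travel_pattern_score": 0.8,
    "network_centrality": 1.2,
    "content_similarity": 0.6,
    "transaction_anomaly": 0.9,
}
BIAS = -2.5

def risk_score(features):
    """Collapse a feature vector into a single score in (0, 1) via a
    logistic function, the generic final step of many classifiers."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

person = {
    "travel_pattern_score": 0.2,
    "network_centrality": 0.9,
    "content_similarity": 0.4,
    "transaction_anomaly": 0.1,
}
score = risk_score(person)  # a number between 0 and 1, read as 'riskiness'
```

The point of the sketch is structural: no ethno-racial variable appears on the surface, yet each input feature is itself the product of prior measurement choices, which is precisely where the critique developed here locates the racializing work.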
In the case of deep learning exposed to ‘big data’, there is an implicit belief that the ‘data speaks for itself’ (D’Ignazio and Klein, 2020)–the more data available (high-dimensional space), the more patterns can be discovered, supposedly leading to improved accuracy of predictions. However, framing machine learning as an objective computational technique analyzing a high-dimensional feature space fails to address what kind of ‘space’ these systems are precipitating, in relation to the predictive decisions they are designed to make. Amoore (2021), writing about border security, makes this clear: ‘The feature space is thus always also a political space that can settle on what is important, can decide which features matter. More than this, the feature space is a political space that is positively enhanced by its exposure to volatility and social instability . . . and therefore can both withstand and profit from the societal fractures or geopolitical violence it is exposed to’ (p. 4).
At stake here is the need to challenge the view that the statistical operations of algorithms are neutral and, by extension, post-racial, and that only when exposed to flawed data are algorithms liable to produce spurious patterns and erroneous predictions. As we have argued above, algorithmic systems involved in the calculus of risk for state securitization are racializing assemblages. The features that matter in identifying a future terrorist and determining levels of threat operate, as Gary Kafer (2019) insists, ‘upon logics of racialisation that are encoded into specific computational parameters. Algorithms do not become racialised when encountering data imbued with elements of sociopolitical difference but rather mobilise logics of racialisation in order to process data assemblages’ (p. 31).
The outputs of these machine learning algorithms take the form of risk estimates and are interpreted as degrees of riskiness. How risk materializes through the performativity of pre-emption–the proliferation of the abnormal–is not merely the result of statistical probability calculations or fuzzy mechanisms of clustering. Rather, risk is conditioned by the post-racial mutability of race–the algorithmic modulation of the racially determined normal vs abnormal which works evasively to ‘render bodies transparent or opaque, secure or insecure, risky or at risk’ (Puar, 2007: 160), as evidenced partly by recondite expressions like ‘person of interest’ or ‘suspect’ in counter-terrorism discourse. The shifting grounds of determination are governed by the force of biopolitical normalization ‘producing and rearranging racial difference’ (Saldanha, 2007: 197). It is on such shifting grounds that security algorithms driven by post-racial logic create, to deploy Arun Saldanha’s materialist account of race, ‘aggregates—racial formations, racial clusters. These clusters emerge immanently . . . Racial formations comprise multiple spatial scales and continually change over time’ (Saldanha, 2007: 190).
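One concrete, if deliberately simplified, sense of these ‘shifting grounds of determination’ is that the categorical label attached to a continuous risk estimate depends on an operating threshold set outside the model. The labels and thresholds below are invented for illustration only:

```python
def classify(score, threshold):
    """Map a continuous risk score onto a categorical label.
    The threshold is a policy choice, not a statistical fact."""
    return "person of interest" if score >= threshold else "no further action"

score = 0.28  # one individual's unchanged model output

# The same person, under two different operating points:
label_routine = classify(score, threshold=0.5)
label_heightened = classify(score, threshold=0.2)
```

Nothing about the individual changes between the two calls; only the operating point does. The boundary between ‘normal’ and ‘abnormal’ is thus tunable by whoever configures the system, which is one mechanism through which the modulation described above can proceed without any explicit ethno-racial category.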
The ‘motility of race’ (Stoler, 1995) has a long history of violence and has been conceived in terms of race as a ‘floating signifier’ (Hall, 1997) that organizes arbitrary human and non-human characteristics and differences according to shifting socio-political arrangements. How regimes of racial signification determine a terrorist ‘suspect’ is complicated by the contemporary post-racial performativity of pre-emptive counter-terrorism algorithmic assemblages. The ‘immanence’ of race exceeds its representation and signification in relation to how seemingly disparate attributes are mobilized and charged by race. A suspect population is in an incessant state of algorithmic emergence–as ‘Muslim’ and as hazardous to national security–but does not depend on a priori ethno-racial classification or profiling. Critically, post-racial pre-emption exists in zones of uncertainty, which bind ‘suspect’ populations to a haunting sense of possibility. Nonetheless, as we discuss below, the potential and perpetual threat of their raciality and abnormality must be contained.
Post-racial pre-emption, in/security and containment
In the above section, we suggested that post-racial logic is a critical determinant of pre-emption. From this analysis, the main question that surfaces is: what are racially determined modes of algorithmic counter-terrorism risk analysis and pre-emption trying to achieve? We assert that the relationship between the post-racial and pre-emption prompts a profound but obscure biopolitics of containment. Formal securitizing efforts under the WoT to predict and control future action fail to meet their purported aims (Sharma and Nijjar, 2018), with the resulting racial paranoia leaving Western security states unable to proclaim a definitive ‘victory’. However, stubborn denial of, and a general reluctance to accept, the inherent failures that haunt racially targeted pre-emptive measures mean that Western security states do not concede outright ‘defeat’. This indiscernibility speaks to a sense of continuity, an everlastingness which is consistent with the long, enduring and evolving overlap between racism and warfare that underlies modern biopolitical arrangements (see Foucault, 2003b; Goldberg, 2016b).
Indeed, the WoT is described as indicative of what Elad Uzan (2019) calls ‘never-ending wars’. Such wars are ceaseless because they are characterized by ‘[t]he absence of an ethics of conflict termination’, with ‘victory’ unclear, contested and shifting, while ‘defeat’ is deemed unacceptable and unfathomable. We suggest that the WoT is unresolvable because it is ultimately about something much deeper than addressing the problem of ‘terror’. The formalized myth that ethno-racial abnormality conditions future terrorism is revealing in this regard, for it means that the WoT aims to contain the perceived riskiness of race under revered Euro-modern hallmarks. It is this concern with containing what is formally imagined as threatening ethno-racial ‘difference’, in the name of national security, that rationalizes a never-ending relation of war between Western states and Muslim populations.
Prevent, one of four strands of the British government’s Counter-Terrorism Strategy (CONTEST), aims to govern the future by intervening at individual and collective levels. A key element of Prevent is Channel, which is a pre-emptive counter-terrorism program that rests on identifying ‘indicators’ of extremism and impending terror, assessing ‘the nature and extent’ of racialized risk to national security and developing de-radicalization interventions that target purported extremists deemed future terrorists. 6 This militarized move to ‘know’ and control racialized terror in advance is based on Orientalist abnormalities. Such deviations from a white normative standard not only produce an emergent Muslim terrorist but also rationalize pre-emptive interventions to securitize a profoundly insecure future haunted by the imagined anti-modern specter of Islam (see also Anderson, 2010; Martin, 2014).
Channel ‘support packages’, which vary according to the degree of risk an individual seemingly poses, aim to normalize various aspects that constitute Muslim subjectivities. They include apolitical and individual-focused (disciplinary) measures such as ‘mentoring support contact’, ‘anger management sessions’, ‘cognitive/behavioral contact’, ‘careers contact’ and ‘family support contact’ (see HM Government, 2012b: 21). Furthermore, biopolitical measures like community cohesion policies seek to suppress the flow of extremist ideas within the population at large (Martin, 2014). Here, police coordinate multi-agency efforts to enforce a commitment to ‘British values’ within the routine machinery of public and private infrastructure occupied by Muslim communities.
What gets overlooked is that disciplinary and biopolitical interventions concerned with nullifying the risk of terrorism are premised on a post-racial white normative framework. Such normalizing power, while overtly nationalistic, intends to align Muslims with ‘British values’ that do not reference race directly but are racialized as white, given their status as hallmarks of racially conceived modernity (see Goldberg, 2002). Accordingly, counter-terrorism pre-emptive measures reflect the workings of what Goldberg (2015) calls an ‘epistemology of deception’. Put differently, pre-emptive normalizing power appears racially insignificant, yet beneath its surface, race actively operates to produce an obscure but prevailing sense of doubt over Muslims, which reifies the latter as risky. Hence, the WoT marks a set of circumstances where post-racial logic is critical not only to the production of risk to national security but also to its irresolvability.
It is unsurprising, then, that securitizing attempts to normalize race and govern the future are a ‘ruse’ (Pemberton, 2013), an illusion. At one level, the risk of terrorism is deemed negotiable through the assertive disciplinary and biopolitical alignment of perceived Orientalist abnormality with Euro-modern norms. However, at another, deeper level, pre-emptive power seeks to situate Muslims within a normative framework that constitutes and is constituted by whiteness, and that determines formal and informal racial hierarchy, order and relations. This contradiction is significant, for it means that Muslims remain associated with fear-inducing feelings of capability, possibility and immanency, which impel the logic of pre-emption. In sum, the WoT is haunted by an inability to resolve racialized risk to national security. While counter-terrorism pre-emptive power vigorously pursues security, it does so on racialized terms that simultaneously retain and re-energize the fabricated relationship between Islam, abnormality and the risk of terrorism, which obstructs any real sense of resolution.
This racially conditioned contradiction at the heart of counter-terrorism intervention is part of a broader set of prevailing circumstances, in which ‘indistinction has become generalized’ (Goldberg, 2021: 14). Writing about changes prompted by structural shifts and technological developments, Goldberg notes that what were once established conceptual boundaries that offered clarity and certainty, distinction and definition, have now become blurred. Our analysis shows that the relationship between the post-racial and pre-emption subtly makes the WoT an integral aspect of socio-political indistinction. The racial terms through which counter-terrorism pre-emptive power functions, while obscure, ensure that Western security states cannot distinguish between feeling secure and insecure or safe and at risk. Thus, with notions of Orientalist abnormality and white normativity being its underlying driving force, pre-emptive intervention is a militarized measure that occupies the space of indistinction. It clamors for security, safety and certainty, while reproducing what it seeks to resolve–an enduring and profound sense of uncertainty, ambivalence and anxiety.
With security and insecurity indistinguishable, we assert that a deep sense of dread haunts the WoT. Returning to Goldberg’s (2021) insights, ‘dread has become the driving affect best characterizing the palpable anxieties of our time . . . Dread operates in the space of indiscernibility’ (p. 14). Our contention is that the post-racial politics of pre-emption is a critical condition of contemporary dread, for dread is produced by the indefinable, indiscernible, undecidable and unpredictable, all of which are hallmarks of the WoT. Accordingly, dread’s intensity, expressions and modes of biopolitical regulation respond to the obscurity, the lack of transparency and, so, the doubt and indecisiveness that underpins, permeates and penetrates racially-coded counter-terrorism pre-emptive power. It is in this scheme of things that the WoT, far from occupying the terrain of resolution, regenerates myths about race as risky to national security, which then rationalizes more indiscernibility-producing and dread-inducing pre-emptive interventions.
We suggest that counter-terrorism pre-emptive power is a mode of crisis management in crisis. The post-racial politics of pre-emption positions Muslims, at best, on the edge and, at worst, on the outside of Euro-modern normativity. Hence, Muslims are shored up as embodiments of nagging uncertainty, doubt and potential catastrophe, by being fabricated as unable to fully transcend the designation of abnormality. This is apparent in the Prepare strand of CONTEST, which aims to ‘ensure a rapid response to end any attack . . . and minimize the impact on local communities and those affected by the attack’ (HM Government, 2018: 63). Concerned with responding to the perceived threat of race to national security, Prepare is noteworthy because it marks a mode of pre-emption that rests not on preventing future terrorism but on making pre-emptive moves that mitigate its impact. As such, counter-terrorism pre-emptive power concedes to ‘the impossibility of total security’ (Martin, 2014: 66) and, so, to a ceaseless, agonizing and antagonizing sense of racialized insecurity.
The crisis of in/security underpinning counter-terrorism pre-emptive power renders it, in the final instance, a biopolitics of containment. Containment is conventionally associated with militaristic measures to manage public protest, like ‘kettling’, whereby police confine crowds of demonstrators within a cordoned area to quell the potential for civil unrest and social breakdown (see Pickard, 2018). Not dissimilar to the war on public protest, the WoT looks to protect and preserve the future of Western civilization by containing race. Of course, counter-terrorism pre-emptive interventions aiming to normalize Orientalist abnormality are not as spectacular as containment strategies like ‘kettling’. The ‘correction’ of Muslims is a more understated violence than the overt belligerence of ‘kettling’ enemy figures. However, like ‘kettling’, pre-emption premised on normalizing race is a modality of containment, as it dreads the prospect of political catastrophe erupting, which aligns it with a broader set of crisis-management tactics that exemplify a never-ending drive to keep the supposed risk of racial terror in check.
Containment is a form of biopolitical regulation that does not seek to ‘resolve’ racialized riskiness, nor does it concede that the perceived riskiness of race is ‘irresolvable’. Rather, containment is an outcome of the dread-inducing indistinction between security and insecurity, victory and defeat, and the settled and the unsettled. In other words, the WoT is not preoccupied with normalizing Muslims per se, in ways that definitively suppress those ‘alien, invasive, pervasive forces that take hold of the social body’ (Venn and Terranova, 2009: 7). Yet, neither does the WoT admit the inherent failures that plague racialized and militarized surveillance and attendant modes of risk analysis and pre-emptive intervention (see Sharma and Nijjar, 2018). Rather, the WoT concedes to the logic of containment, which marks an ambiguous drive to enclose apparently abnormal Muslim populations within white Euro-modern normative bounds that the former always threatens to evade with deadly effect.
Conclusion
Under the ongoing WoT, Muslim populations have been subject to militarized and multi-faceted modes of surveillance, amid concerted claims about the structural insignificance of race. However, the post-racial security state also sanctions attendant counter-terrorism practices of risk analysis and pre-emptive intervention. This form of racialized and militarized policing, as discussed above, supplements intensified and targeted surveillance through its concern with ‘knowing’ and governing what is yet to materialize, or what Amoore and De Goede (2008) call ‘an invisible political violence’. While the application of racially-targeted pre-emptive measures has been noted by scholars, this article has foregrounded how post-race logic relates to pre-emption to determine martial techniques of racial regulation that transcend the hyper-intrusive monitoring of race.
Police, politicians and academic ‘experts’ have claimed that because the ‘science’ behind ‘revealing’ the future is apparently removed from politics and power, pre-emptive police action is equally apolitical and impartial. Such arguments are part of an attempt ‘to solve the police’s crisis of legitimacy’ (Wang, 2018: 237), as signaled by growing discontent over, and global demonstrations against, racist police brutality, profiling and criminalization. We have sought to challenge such claims about the neutrality of computationally-driven risk analysis and, more specifically, the obsolescence of race in processes of algorithmic profiling. This has meant emphasizing that post-racial politics not only serve as a determining factor of algorithmic risk analysis and pre-emptive intervention but do so in a way that remakes, rather than resolves, race as a risk to national security.
Pre-emptive intervention rests on a long-standing and prevailing notion that ultimately regards whiteness as the main marker of modern normativity and civility. Juxtaposed against this inextricable entanglement between racial codification, modernity, normality and civilization is the Orientalist association of Islam with anti-modern abnormality and incivility. This constructs a racialized risk of terror whereby securitizing algorithms mobilize post-racial logic to elusively produce an emergent Muslim subjectivity that embodies impending horror. Accordingly, Muslims are made the primary target of militaristic pre-emptive action. However, post-race logic relates to pre-emption by making the latter appear race-neutral in seeking to securitize the future, while racially determining that future along the blurred lines of a white normative standard, thereby reproducing an enduring sense of insecurity over Muslims.
Recognizing this profound indiscernibility between insecurity and security as a governing force of the WoT is vital because it occupies the space between ‘victory’ and ‘defeat’ and is thus generative of race as a source of dread. Dread, as Goldberg (2021: 23) remarks, ‘is chained to the condition of unknowability’ and to attendant feelings of potential and possibility being realized out of nowhere. Our point has been that such racialized dread in relation to (national) security closes the prospect of resolving the WoT, given the underlying status of race as Euro-modernity’s enduring archetypal enemy ‘other’. Instead, the WoT comes to signify a corresponding biopolitics of containment, which is part of the broader timeless, rather than time-bound, character of modern racial warfare.
Footnotes
Funding
The author(s) received no financial support for the research, authorship and/or publication of this article.
