Abstract
This article examines the “aesthetic” and “prescient” turn in the surveillant assemblage and the various ways in which risk technologies in local law enforcement are reshaping the post hoc traditions of the criminal justice system. The rise of predictive policing and crime prevention software illustrates not only how the world of risk management solutions for public security is shifting from sovereign borders to inner-city streets but also how the practices of authorization are allowing software systems to become proxy forms of sovereign power. The article also examines how corporate strategies and law enforcement initiatives align themselves through media, connectivity, and consumer-oriented opt-in strategies that endeavor to “mold” and “deputize” ordinary individuals into obedient and patriotic citizens.
Keywords
Introduction
Haggerty and Ericson’s seminal article, “The surveillant assemblage” (2000), described a world where social control was no longer exclusively authored by human eyes or security forces but relied on a vast electronic ecosystem of sensors and software. Drawing on Deleuze and Guattari’s writings on assemblage (1987) and Deleuze’s later work on control societies (1992), Haggerty and Ericson detail a near invisible surveillance apparatus that tracks, analyzes, and monitors people by transforming their personal data trails from social interactions, commercial transactions, and institutional documentation into distinct numerical composites or data doubles. In contrast to Foucault’s bodily-obsessed, state-run, brick-and-mortar enclosures of discipline and punishment, the surveillant assemblage, Haggerty and Ericson assert, constitutes a paradigm shift in power: the control of populations no longer focuses on confining or soul-training individuals into conformity and obedience but rather encourages their mobility, consumption, and connectivity. This abstraction of physical bodies into pure information, the data double, becomes the key to this new configuration of political and commercial power. For the data double makes possible a highly mobile, immutable, classifiable, and comparable form of information capital, which in turn allows the forces of commerce, governance, and security to more easily target and exploit certain groups, individuals, or populations as potential customers, criminals, terrorists, or illegal migrants.
In hindsight, Haggerty and Ericson seem even more credible and poignant, especially when we consider the ubiquity of coded spaces, social networking, and the Internet of Things. Importantly, the practical harnessing of the vast flows and discrete nodes of the assemblage is no longer speculative fiction but a defining reality of the post-9/11, post-Facebook/Google, post-Snowden world (Lyon, 2015). Significantly, what has changed since Haggerty and Ericson’s seminal work is not so much the geospatial integration of the assemblage but rather the affective nature of its visual and temporal orientation. In other words, Haggerty and Ericson’s surveillant assemblage is increasingly becoming a future-oriented enterprise dedicated to preempting possible or uncertain futures by bringing them into the present.
Since 9/11, the politics of preemption and the economy of risk have created an increasingly porous alliance of law enforcement/security agencies, communications/tech companies, and other corporate enterprises dedicated to constructing a multipurpose, networked juridical and disciplinary precrime assemblage. Initially promising to identify threats by data-mining online behavior, including search habits, financial transactions, credit card purchases, travel history, and email communications, next-generation security systems have evolved. No longer nescient machines that simply connect given dots from the past, predictive policing technologies are instead becoming intelligent assemblages capable of integrating data from a multitude of nodes in order to foresee and preempt harmful futures. These developments are amplified through continued advances in cloud and scalable computer systems, quantum processing, machine learning algorithms, and effectively limitless data storage. Such efforts have been largely driven by neoliberal incentives to take human resources out of the security loop and, it is argued, to increase efficiency, reduce crime rates, and eliminate human error.
While beta versions from the precrime establishment (such as IBM and Hitachi) are being piloted in real-world environments, the political and social implications of predictive policing are far reaching and understudied. In this regard, this article addresses two main areas of concern. First, the pervasive growth in predictive analytics for law enforcement signals a paradigm shift in criminal justice. The postcrime orientation of the criminal justice system is increasingly overshadowed by the risk rationales, preemptive strategies, and technologies of national security and follows the same axis that the state embraces in its logic of preemptive war and targeted assassination (McCulloch and Pickering, 2009; McCulloch and Wilson, 2016; Massumi, 2015). In the emerging world of precrime, external threats and internal crimes, law enforcement and military action blur together; people are guilty until proven innocent, verdicts are reached before trial, and punishments are imposed before any crime is committed. Second, as precrime practices spread from sovereign borders to inner-city streets, corporate-state initiatives are deepening the reach of veillance into areas of social life once unimaginable. This extension of juridical reach and disciplinary sight is augmented through the growth in social networking apps that promote Ikeaveillance, an Orwellian co-opting of Mann’s (2002) sousveillance (human-focused, bottom-up, individualistic approaches to the monitoring of authority). Ikeaveillance encourages citizens to do the securitization footwork of the state by offering them the opportunity to participate in do-it-yourself, reward-centered, proactive, networked and, at times, gamified versions of automated governance.
Projecting the writings of Haggerty and Ericson into the realm of forward-thinking sentient machines, I argue that while the precrime assemblage seeks legitimacy by offering the widely admired (if often unproven) predictability, impartiality, and objectivity of techno-scientific solutions, its immediate goal is to preempt nonimmediate threats to the body politic by extending juridical reach and disciplinary sight. The longer-term stakes, and arguably intent, of the precrime assemblage are to preserve the domains of its masters, who will control immense existential and predictive data that will allow them to shape public perceptions, mold social behavior, and quell possible opposition, thereby ensuring the state of exception an incontrovertible and infinite life.
I believe Massumi’s theory of ontopower (as a subset of Foucault’s biopolitics) is innately suited for examining the prescient and aesthetic turn in securitization. For although Foucault’s work on biopolitics is far reaching and in many ways still applicable to the modern condition, his theory of a power that makes life its referent object is incomplete due to major social, historical, technological, and political transformations after his death: specifically, the exponential growth of individual and commercial practices of veillance, the rise of the risk regime, 9/11, and the subsequent war on terror. As Massumi (2015) contends, since 9/11 biopolitics has morphed into the “more processually-intense and far-reaching mode of power” (234). As such, the defining signature of ontopower is its proprietary logic of preemption, which operates through largely speculative and imaginative frameworks. The precrime assemblage operationalizes ontopower’s practical capacity to anticipate, preconstruct, and preempt the emergence of harmful yet uncertain futures through networked forms of automated governance. On an aesthetic level, the precrime assemblage legitimates ontopower’s logic of preemption through contemporary data visualization practices and networking technologies that not only reconfigure criminality, the body, and social control but, in doing so, promise to deliver a scientifically accurate, neutral, and objective precognitive experience.
Accordingly, I examine the shift from postcrime to precrime society, highlighting the constitutive elements of precrime as well as citing specific case examples from Japan, Australia, and the US. Next, I explore the prescient and aesthetic turn in the surveillant assemblage as it pertains to the burgeoning field of predictive policing technologies. Lastly, I map the rhizomatic expansion of the precrime assemblage by analyzing various corporate-state synergies that align themselves through consumer-oriented opt-in strategies designed to “mold” or “deputize” ordinary individuals into obedient and patriotic citizens.
From postcrime society to precrime society
Although risk modeling and statistical prediction approaches to public security can be identified as early as the situational crime prevention studies of the early 1970s, the anticipation, preconstruction, and preemptive response to vague and uncertain threats before they emerge is, in fact, a distinctly 21st-century phenomenon. In Ontopower (2015), Massumi contends that a mode of power driven by an operative logic of preemption is spreading throughout the various structures, systems, and processes of modern life. While acknowledging preemption to be as old as war itself, Massumi argues that the decision to act on speculative feelings of nonimminent threats was born out of the tragic events of 9/11. Similarly, Amoore and de Goede (2008) and Amoore (2013) have argued that it is not that the present world faces greater dangers and threats, but that since 9/11 a dramatic shift has occurred in the way society understands itself, the future, and security through the lens of risk management and its related technologies of securitization. Concurring, Martin (2007) suggests that the logic of preemption and the perception of security as an investment, adjusting for risk and leveraging it for public safety and financial gain, have infiltrated almost every aspect of modern life. He writes, “[r]isk is not simply a construct that one abides but something somatized as a way of being” (21).
For Zedner, the risk practices of identifying, classifying, and forecasting threat as a means to prevent the possibility of crime now take precedence over the post hoc responses of criminal justice to wrongful or harmful acts against society. In “Pre-crime and post-criminology?” (2007), she argues that criminology is on the verge of a paradigm shift from a postcrime society consisting of “crimes, offenders and victims, crime control, policing, investigation, and trial and punishment” to a precrime society focused on “calculation, risk and uncertainty, surveillance, precaution, prudentialism, moral hazard, prevention and archiving over all of these, the pursuit of security” (262). The goal of precrime society, Zedner contends, is to shift the emphasis from the past (taking action against criminal acts) to anticipating and forestalling risks (preempting crimes that have yet to occur or possibly will not happen). As a result of this reordering, rather than focusing on the apprehension or prosecution of individuals, law enforcement agencies are becoming predisposed to monitoring, disrupting, and coercing targeted populations or groups for the threats that they may “collectively pose” (265).
An illustration of ongoing precrime practices can be found in Japan. In 2010, over 100 documents from Tokyo’s Metropolitan Police Department (MPD) were leaked online that revealed the blanket surveillance of over 72,000 members of its Muslim community (Bakkarly, 2016). The surveillance profiles contained extensive details of personal bank account information, domestic movement, work history, friendships, mosque affiliations, and passport records. Moreover, the leak exposed the MPD’s systematic use of undercover agents and informants as well as its monitoring of places of worship, halal food stores, restaurants, and Islamic charity organizations. As a result of the revelations, 17 residents named in the documents sued the MPD and government, hoping to have the surveillance declared illegal. The plaintiffs argued that state surveillance of citizens based on religion or ethnic background was not simply prejudicial, discriminatory, and therefore unlawful but also a violation of their human right to privacy. While the Tokyo district court eventually ruled in favor of the plaintiffs for damages caused by the police’s mismanagement of information, it completely sidestepped the legality of the MPD’s surveillance program and the profiling practices targeting the Muslim community. In delivering his verdict, the presiding judge defended the MPD’s intelligence gathering activities as “necessary and inevitable” without bothering to add further explanation. In 2016, the plaintiffs’ appeal to have the surveillance program declared unconstitutional was dismissed by Japan’s Supreme Court. In siding with the lower court’s initial decision, the presiding judge stated that the surveillance program was not unconstitutional, reiterating almost verbatim the previous judge’s words (Payton, 2016).
While the final verdict shows that the judicial system in Japan is not independent of the government, it also highlights the arbitrary legal interpretation of human rights in Japan under the draconian regime of Shinzo Abe. Blanket surveillance of Japan’s Muslim community puts into sharp relief precrime’s collapse of the distinction between national security and law enforcement. But it also makes clear how ontopower invokes a discretionary, almost colonial rule of law, where normlessness and exceptionalism become the legitimate legality.
For McCulloch and Pickering (2009), this collapse of traditional distinctions between the political and judicial, public safety and national security, the state of exception and normal life is the culmination of laws, legislative acts, preemptive doctrines, and counterterrorism frameworks put into place after 9/11 and the ensuing US-led Global War on Terror. Expanding on Zedner’s seminal framework, they point to legislators’ increasing willingness, in the wake of 9/11, to follow the same logic that national security policymakers adopt in their justifications for preemptive war and targeted assassination. Critically, the authors argue that precrime grants police and the state special legal powers that exceed the traditional post hoc boundaries of due process and criminal law. In Australia, for instance, the New South Wales (NSW) bar association warned that its government is expanding the preemptive and discretionary powers of police in order to create a rival criminal justice system based on intuition, rumors, suspicion, and prejudice (Farrell, 2016). In 2008, under the banner of counterterrorism and anti-organized-crime legislation, NSW passed the “serious crime prevention orders” that allowed the courts to convict persons based merely on association. With the more recent “public safety orders” of 2016, the NSW police no longer need to prove reasonable doubt in restricting the movement, prohibiting employment, or setting up curfews for persons who have never committed an offence (Angus, 2016). Moreover, the “investigative detention” provision of the new legislation gives police the authority to hold persons as young as 14 without charge, and without any recourse for contacting an attorney, family, or friends, for up to two weeks. Effectively, the NSW “public safety orders” and their precharge component grant police the ability to apply the same powers they have in forestalling terrorism to civic law enforcement.
Importantly, they shift the balance further away from the principles of due process where people are innocent until proven guilty and more toward a new era where crimes are committed before they happen, citizens are disappeared without recourse to defense, and where guilt and imprisonment are based on suspicion, rumor, association, or simply left to the intuitive “gut feeling” of police officers.
Crucially, the preemptive measures of precrime should not be confused with crime prevention strategies. In normative criminology, crime prevention is understood as nonpunitive measures that strive to lessen the chances of crimes being committed, or strategies that focus on social or environmental circumstances (McCulloch and Pickering, 2009; McCulloch and Wilson, 2016). Crime prevention addresses known threats by working objectively through empirical data in order to prevent the recurrence of crimes. As Massumi (2015) suggests, it assumes that “uncertainty is a function of a lack of information, but in which events run a predictable, linear course from cause to effect” (5). Preemption, on the other hand, embraces Donald Rumsfeld’s nebulous warning of the “unknown unknowns.” It does this by perceiving the universe, primarily, as a threat environment where randomness and uncertainty ensure that no degree of intelligence or security can guarantee harms will not happen. In the world of “unknown unknowns,” the only way to halt the fruition of harm is by adopting a proprietary logic that seeks to anticipate vague and nonimminent threats and thwart them before they emerge. To do this, precrime strategies function largely through speculative and imaginative frameworks that preconstruct crimes (often to the point of fabricating them) in order to preempt their “alleged” eventuality. Unlike the nonpunitive character of crime prevention, precrime operates through “measures that link substantial coercive police or state action to suspicion without the need for charge, prosecution or conviction” (McCulloch and Pickering, 2009: 2–3). For example, in The Terror Factory (2012), investigative journalist Trevor Aaronson reveals how the FBI, under the pretense of counterterrorist frameworks, has bolstered its conviction rates by orchestrating elaborate schemes of entrapment.
Aaronson notes that in the wake of 9/11, the FBI came under pressure to root out homegrown terrorism, not only to justify enormous increases to its terror-fighting budget but also to help legitimate the self-fulfilling prophecy of the War on Terror. Citing case after case, he reveals how undercover FBI agents, along with over 15,000 highly paid informants, not only manufactured hundreds of elaborate yet phony terrorist plots but also lured jihadist wannabes by supplying them with the finances and raw materials needed to carry out terrorist attacks. More recently, in light of the wave of Islamic State-inspired lone-wolf attacks in the US and abroad, the FBI has stepped up its covert surveillance and usage of informants in the US Muslim community (Currier, 2016). This transformation of the FBI from a reactive law enforcement agency to a group of terrorist-hungry agents provocateurs dedicated to preemption highlights just one facet of ontopower and the precrime shift now occurring from within law enforcement and the criminal justice system. It also speaks volumes about the inherent moral and ethical dilemmas raised by covert policing of suspect populations in presumed democracies.
Policing the future
One reason why the shift from postcrime to precrime society goes largely unnoticed by the general public is that much of the transformation is occurring at an institutional level within criminal justice. Today, what little the public understands about precrime is more commonly associated with computational advances in crime science that promise to reduce crime risk or solve past crimes. For law enforcement agencies and the private sector, the preferred term is “predictive policing,” for it highlights the more practical applications of the technology while simultaneously downplaying the less credible “futuristic” connotations of precrime made famous by Philip K. Dick’s short story Minority Report (1956; adapted to the Hollywood screen in 2002). Yet in many ways, predictive policing does call to mind many of Dick’s social, moral, technological, and juridical warnings. Indeed, predictive policing forces an acknowledgment of contemporary patterns of domination/subordination and the moral dilemma posed by interfacing organic intelligence and intelligent machines to catch and convict individuals before they commit crimes. To be sure, dystopian fears of a lack of human accountability when sentient machines become surrogate forms of authority are a dominant concern, especially if we consider the emerging era of precrime as a kind of inverse Turing test, where instead of the human being tasked with the responsibility of determining the legitimacy of the machine, the onus is on the machine to judge the legitimacy/legality of the human.
In the world of precrime science, the focus of preemptive law enforcement shifts from visual evidence taken directly from individuals to algorithmic projections of crimes yet to happen, based on real-time data streams and archival criminal metadata of what others have done in the past. As Finn (2009) points out, “[c]riminality is no longer associated with the individual physical body of an offender but is now something that could be found in all bodies and in the visualization of data that represents those bodies” (107). Instead of mug shots, fingerprints, or DNA strands as visual representations/evidence of criminal identification, risk terrain models, target lists, and color-coded hotspot maps become the basic currency of precrime forensic practice and precognitive truth. Importantly, the predictive accuracy of such technologies relies on tapping the vast data streams of the surveillant assemblage and channeling them into a centralized system that learns and grows its predictive acumen by sifting through greater and greater realms of data in order to recognize patterns, correlations, and anomalies.
For instance, in the aftermath of the Ferguson unrest in 2014, local police officers outfitted themselves with Hunchlab, a risk-terrain modeling software that calculates the probability of various threat risks (larceny, assault, carjacking) occurring in specific geographical locations. Besides incorporating data parameters based on past crimes and arrests, Hunchlab also collects and collates data from upcoming public events such as concerts, sporting events, and political rallies as well as environmental factors such as weather conditions and moon phases (Hunchlab, 2016). By looking at the software’s map grid, police officers are able to visualize the likelihood of a crime being committed in a given location according to the risk score assigned to each square on the grid—the darker the color shading of a “hotspot,” the more likely a crime will occur. Similarly, Hitachi’s Public Safety Visualization Suite 4.5 positions itself as a turnkey security solution that utilizes cloud computing and machine learning algorithms to harvest and archive information from various public and private “assets.” This includes mining data from an array of nodes such as remote video systems (hotels/city streets/commercial and private properties/transportation lines), gunshot sensors that alert CCTV cameras, vehicle license plate recognition systems, wireless communications, Twitter and other social media, and mobile surveillance systems as well as useful data from smart parking meters, public transit systems, and online newspapers and weather forecasts (Hitachi, 2015). As Mark Jules, Hitachi’s public relations officer for its Smart City Solutions, states, the goal of the fusion center is to visualize future threats on a “single pane of glass” (Hitachi, 2015).
Notable is Jules’ favored tagline “on a single pane of glass,” for it illustrates not only how corporate solutions of preemption are leveraged through convergence, connectivity, and the collapse of public/private distinctions but also how the aesthetics of precognition vis-à-vis information visualization become a self-legitimizing force of precrime technology.
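The grid-based risk scoring described above can be sketched in a few lines. The following is a deliberately minimal toy, not Hunchlab’s actual model: the weights, the contextual factors, the 5×5 grid, and the placement of a “stadium cell” are all hypothetical assumptions made for illustration.

```python
import random

random.seed(7)

GRID = 5  # a hypothetical 5x5 patrol grid

def risk_score(past_incidents, event_nearby, bad_weather):
    """Blend historical and situational signals into one cell score.

    The weights are invented for demonstration only.
    """
    score = 0.6 * past_incidents           # historical crime density
    score += 2.0 if event_nearby else 0.0  # e.g. a concert or rally tonight
    score -= 1.0 if bad_weather else 0.0   # weather as a suppressing factor
    return max(score, 0.0)

cells = {}
for x in range(GRID):
    for y in range(GRID):
        cells[(x, y)] = risk_score(
            past_incidents=random.randint(0, 10),  # stand-in for archival data
            event_nearby=(x, y) == (2, 3),         # one hypothetical event cell
            bad_weather=False,
        )

# The "hotspot map" reduces to a ranking: darker shading = higher score.
hotspots = sorted(cells, key=cells.get, reverse=True)[:3]
print(hotspots)
```

The point of the sketch is how thin the layer between “data parameters” and the shaded map really is: every choice of weight and factor is a human judgment that the final visualization no longer shows.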
Lang (2010) states that the aesthetic function of data visualization is twofold. First, it allows us to see things only a machine can see. In computer science, algorithms are operational instructions or sequential procedures that get operationalized through code (software as text). Although specialists can read the language of code, only an algorithm’s effect can be seen (Burrell, 2016). By compressing vast amounts of invisible data into visible signifiers (in this case, graphic representations of emerging threats), data visualization accelerates the mental processing of information (Lang, 2010). Thus, one of the aesthetic functions of predictive data visualization is to transform abstract data, opaque instructions, and imperceptible logic into observable crime patterns and trends. However, as McCulloch and Wilson (2016) suggest, when it comes to such technologies, the very human politics and social constructions of suspicion that go into the speculative and imaginative decision-making processes of what constitutes a high- or low-risk threat are not seen in their final representation. Instead, a “black-boxing” (Latour, 1986) occurs, with a propensity to perceive data visualizations of the future as neutral and empirical truth. McCulloch and Wilson (2016) call this black-boxing of precrime “the scientification of suspicion and speculation” (82).
The second aesthetic function of data visualization concerns the sensory aspects of perception. Whereas cognition is usually defined as the “mental” processes that allow us to gain knowledge, perception is considered to be the “sensory” processes through which knowledge is gained through external/environmental stimuli. In the virtual world of predictive policing, the aesthetic experience of precognition is based not only on the ocular dimensions of “what” the screen is telling its user but also on “how” the predictive tool manages to affect the stimuli responsible for insight. In other words, the aesthetic experience of predictive policing technologies is also contingent on the performative qualities of the interface, such as navigability, scalability, and tactility. For example, IBM’s COPLINK promises to transform the average policeman patrolling his earthly beat into a 21st-century cyber-detective. Purposed as an intelligent mobile application, COPLINK positions itself as a user-friendly precrime tool bundled with an array of interactive crime-fighting applications. Some of the features listed in the product literature include a facial recognition app that allows police to snap photos of suspects from their mobile phones and have them matched against a photo bank of convicted criminals and their associates; interactive visual flow charts showing hierarchical relationships between members of suspicious groups, organizations, or gangs; and navigable street maps that not only help pinpoint the geographical location of past crimes, fires, and 911 calls but also analyze temporal and geospatial patterns or associations between these incidents (IBM, 2016). Thus, the sensory stimuli generated by the precrime interface transform a potentially static graphical representation of the future into a dynamic, convergent, and navigable space of emergence.
Unlike the fixed and, arguably, hegemonic frame of Grusin’s (2010) premediation,1 the performative aesthetics of precrime technologies allude to granting users the ability to “control” and ultimately preempt the unfolding of emergent futures. Yet like any interactive medium, user control is beholden to the operational logic of the system. As Rehak (2003: 20) asserts: “[i]nterfaces are ideological. They work to remove themselves from awareness, seeking transparency – or at least unobtrusiveness – as they channel agency into new forms.” Although predictive policing grants agents of the state the power to decide whether a given population is labeled trusted or suspect, the precrime machine only replicates the biases of its data parameters, which are always dependent on mutable conceptions of criminality and errant behavior.
While risk technologies may in the future bridge the security gap by facilitating more timely police intervention, the forecasting of certain places, persons, or groups as risks is problematic. First, civil libertarians argue that the prophesying of hotspots will only amplify police presence in specific neighborhoods or commercial establishments such as malls, liquor stores, or public parks. In turn, they suggest that this greater visibility of authority will aggravate existing tensions and hostilities between police and local communities. Second, much of what constitutes the actual data and variables used in the predictive calculus depends on historical records of previous crimes and offenders. Lynch (2016) insists this reliance on the past will only lead to an increase in and reinforcement of racial, ethnic, or class biases, leading to greater unlawful police practices such as profiling and harassment. Third, there is a generalized fear among civil libertarians that approaching law enforcement through the lens of risk management will encourage greater degrees of pervasive surveillance, thus further widening the categories and populations deemed suspicious or dangerous to society. Fourth, the preemptive character of predictive policing will, ultimately, dictate earlier and earlier strategies of intervention in order to mitigate the possibility of risk. As Lynch contends, the inclination toward identifying and targeting risks will eventually overshadow any attempts to address the underlying social causes of criminal behavior. Concurring, Massumi (2015: 223) writes “[t]he problem posed by this now increasingly dominant processual matrix concerns perception and time, more than justice and fairness.” As precrime technologies and practices become the new norm and standard operating procedure of law enforcement, the gap will widen between the systems and processes themselves and any consideration of morality.
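The second concern, that training on historical arrest records reinforces existing biases, can be made concrete with a deliberately simple feedback-loop simulation. The two districts, their rates, and the patrol rule below are hypothetical assumptions invented for illustration, not data from any actual deployment: both districts have identical underlying crime rates, but one is over-represented in the archive, patrols are allocated in proportion to past records, and patrols in turn generate new records.

```python
# Two hypothetical districts with identical true crime rates, but district A
# starts with more recorded arrests in the archive.
true_rate = {"A": 0.10, "B": 0.10}   # identical underlying rates
recorded = {"A": 30, "B": 10}        # A is over-represented from the start

for year in range(10):
    total = sum(recorded.values())
    for d in recorded:
        # Patrols are sent in proportion to where the records already are...
        patrol_share = recorded[d] / total
        # ...and more patrol presence means more incidents observed and logged.
        recorded[d] += round(100 * patrol_share * true_rate[d])

share_A = recorded["A"] / sum(recorded.values())
print(f"district A share of records after 10 years: {share_A:.2f}")
```

Even though nothing in the simulated world distinguishes the two districts, the archival disparity persists and compounds, because the record-generating process itself follows the records.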
In fact, recent findings indicate that while predictive technologies have a marginal effect on reducing crime in stable hotspots, their effectiveness in dynamically changing hotspots is uncertain (Mohler, 2016). Similar conclusions were reached by the Rand Corporation in a three-year-long study of an experimental predictive policing program run by the Chicago Police Department (Saunders et al., 2016). Besides finding little validity in the crime forecasting software’s ability to actually predict crimes, the researchers were concerned with the ways in which officers were actually utilizing the technology. Specifically, they noted the misuse of the much-hyped, algorithmically driven “heat lists.” Originally, such lists were promoted as a way in which officers and social counselors could identify “at-risk” persons as much as high-risk offenders. The idea promoted to the general public was that social workers would visit a potential offender’s family members and friends to perform counseling interventions before any preemptive police action occurred. But the researchers found that heat lists were simply being used as target lists to profile, arrest, and, in many cases, subject individuals to unwarranted surveillance. The Rand researchers concluded that while such technologies may be capable of forecasting the “risk” of future events, they cannot predict actual events. As such, there appears to be no conclusive evidence as yet to suggest that predictive policing technologies have led to major crime reductions.
Although the future of predictive policing technologies is predicated on continued advances in cloud and scalable computer systems and quantum processing, like the regulatory history of DNA profiling, their ultimate legitimacy lies in transparency, standardization, and the regular audit of their algorithms. Yet at the moment this seems improbable. As Burrell (2016) notes, not only are transparency and accountability generally discouraged in the technology industry, for competitive reasons as well as the possibility of hacking, but intelligent machines are also thinking for themselves and developing their own algorithms and logic that humans may not themselves understand. Importantly, she adds, the accuracy of machine learning algorithms is designed to improve with greater amounts of data. As discussed in the next section, the access to and control of such data for criminal identification and public safety is further problematized by corporate-state synergies and initiatives predicated on furthering the reach of the precrime assemblage into areas of social life once unimaginable. Efforts are already underway to undermine fundamental distinctions between public and private space and to tap much wider data streams in society for the purposes of identification, social sorting, and control.
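The claim that machine-learning accuracy is designed to scale with data can be seen even in a toy example. The sketch below uses an invented one-feature “risk” task and a basic nearest-neighbour rule; it is purely illustrative of the scaling dynamic driving the hunger for data, not of any deployed policing system.

```python
import random

random.seed(0)

def sample(n):
    """Generate n labelled points for a synthetic task: the label is 1 when
    a single feature exceeds 0.5, blurred by noise (hypothetical data)."""
    pts = []
    for _ in range(n):
        x = random.random()
        label = 1 if x + random.gauss(0, 0.1) > 0.5 else 0
        pts.append((x, label))
    return pts

def accuracy(train, test):
    """Score a 1-nearest-neighbour classifier: predict each test point's
    label from the closest training example."""
    hits = 0
    for x, y in test:
        nearest = min(train, key=lambda p: abs(p[0] - x))
        hits += (nearest[1] == y)
    return hits / len(test)

test_set = sample(500)
results = {}
for n in (10, 100, 1000):
    results[n] = accuracy(sample(n), test_set)
    print(n, round(results[n], 2))
```

The same archive-hungry dynamic holds for real systems: whatever the model, a larger store of labelled behavior tends to sharpen its predictions, which is precisely why the assemblage keeps widening its nodes.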
Widening the nodes
Similar to Deleuze and Guattari’s belief that assemblages expand in a nonhierarchical, rhizomatic fashion, precrime exponents argue that, regardless of any technological advances in machine learning, quantum processing, or artificial intelligence, the increased accuracy of predictive policing will largely depend on its ability to shift from mining historical data of past crimes and criminals to harvesting “fresh” data streams (Heaton, 2012). This strategy implies the real-time extraction of personal data from an individual’s daily life—monitoring their patterns, routines, habits, emotional tendencies, preferences, idiosyncrasies, and geospatial coordinates. Such a colossal effort entails not simply the bulk mining of social media interactions, email, and phone communications but also videogame playing, TV watching, and online shopping. In other words, practically everything that constitutes virtual life. Precrime enthusiasts contend that exhuming behavioral data from the various social and financial flows of contemporary life will facilitate easier identification of “criminal signatures” and allow law enforcement to intervene at ever earlier stages.
Yet interest in harvesting the intimate and often mundane data details of an individual’s daily life for predictive endeavors is not limited to law enforcement, nor is it an entirely new phenomenon. For some years now, commercial forces have regarded behavioral data as the holy grail of persuasive advertising. Google was the first to discover the latent surplus value of “data exhaust”—the behavioral data created through its users’ search habits (Mahdawi, 2013; Williams, 2013). At first, Google simply treated user data as a way to refine its analytic capabilities in order to provide customers with a better search experience (Pariser, 2011). However, it did not take long to realize that this approach limited its business model unless the company took the unlikely step of charging a user fee for its search engine service. Instead, it decided to use its analytics and huge cache of behavioral data to boost advertising revenue by matching a user’s key search words to certain ads. Every time a user performed a search, Google’s engine would provide a list of links crafted to their past preferences. But now it would also bring up advertisements with direct relevance to that user’s stored behavioral data. Today, however, Google’s product goal is no longer ad relevancy but personally tailored lifestyle choices. Zuboff (2015) contends that Google pioneered a new genus of wealth accumulation called “surveillance capitalism”—a pervasive and unilaterally intrusive form of systematic observation. But she also warns of an Orwellian endgame lurking in the shadows. Surveillance capitalism is not simply about predicting a consumer’s next shopping choice but rather strives to change the way they think and behave.
Today, for example, electronics giants such as Toshiba, Samsung, Sony, and LG explicitly state in their privacy agreements that their smart TVs and gaming consoles will collect, use, share, and store information gathered through gesture controls, voice commands, and facial recognition technology. While they maintain that these huge data caches are primarily geared toward enhancing the customer experience, the trade in behavioral data with third parties is not only growing but, as many suggest, an unstoppable tsunami (Zang et al., 2015).
If, in fact, we are entering a new era in which an advertiser’s goal is not simply to persuade but to modify consumers’ habits of consumption through reward-centered veillance, then it stands to reason that the precrime assemblage may function in a similar way to reduce crime and preempt possible threats. Yet there is also the distinct dystopian possibility that, in its never-ending ontopolitical pursuit to colonize and regulate all aspects of social life, it may suppress dissent and discourage nonconformist thought or behavior. Such practices are already occurring today in the increasing trend of self-censorship on social media, driven by fear of state surveillance and authoritarian reprisal. Penney (2016) has shown that internet traffic to Wikipedia articles related to terrorism fell more than 30% after the Snowden revelations, and that traffic dropped even more significantly for privacy-sensitive topics. For Kitchin and Dodge, self-censorship is only one effect of an emerging future of algorithmic soul-training. In Code/Space, the authors write: “the use of software is changing how governance unfolds. New forms of regulatory technologies are qualitatively and quantitatively transforming the nature of surveillance, both deepening the level of discipline, and actively reshaping individual behaviors” (2011: 233). Such forms of automated governance and algorithm-based soul-training are already in place throughout the world. A prominent example is the burgeoning field of automotive telematics. In the mid-1990s, companies such as OnStar, a General Motors subsidiary, began developing onboard sensors and alert systems for commercial fleet-oriented industries such as rental car, transport, and courier companies.
The initial idea was simply to collect and transmit information to company operators about vehicle operating problems, mileage efficiency, location, and, in specific industries such as transport, internal cabin temperatures for refrigerated cargo. As both cellular technology and computer analytics developed, OnStar expanded its services to ordinary car consumers with added features such as navigation, 911 calling during emergencies, and security measures such as remote unlocking for locked-out drivers or shutting down a vehicle in the case of theft (Griffith, 2016).
Today, telematic technologies are the nexus of landmark joint ventures between automobile makers and insurers. The goal is for all future cars to be equipped with black boxes that collect and transmit a driver’s behavioral data to insurers. Currently, North American insurers offer the black boxes to their customers on a voluntary basis. On the one hand, the data will be used to assess and reward good driving behavior with lower premiums and, perhaps, third-party product discounts. On the other, it will set punishments such as higher policy rates if a driver brakes too hard, accelerates too fast, or swerves too drastically; all this data will be collected and transmitted to an insurer’s own databases for analysis. Moreover, insurers will use the recorded data to better assess accident reports and claims, insurance fraud, and possible litigation. Already, the privacy agreements set by automakers allow them to collect and retain a customer’s driving data. Furthermore, if requested, automakers are legally bound to hand this data over to courts (Bond, 2014).
Police departments have adopted telematics to improve the driving skills of their patrol officers. Yet given the aforementioned precedents and agreements, and the increasingly eroding boundaries between commercial interests and risk regimes, the possibilities of control creep are not so remote. If a driver’s data can be shared with courts under the post hoc traditions of our judicial system, then, following the axis of its emerging precrime reorientations, automotive telematics may assume greater powers of authorization than simply raising insurance premiums: for instance, the real-time issuing of electronic traffic tickets; alerting police officers to dangerous and drunk drivers; or even locking or shutting down a suspect’s car in the case of road rage, police pursuit, or a spot check. To be sure, the rise of automotive telematics and its ability to preempt harmful futures before they happen, by molding behavior through reward-centered feedback loops, is an exemplary modality of Ikeaveillance: a do-it-yourself, voluntary, opt-in approach to algorithmic governance. But it is also an important ontopolitical marker of the processually intense nature of the precrime assemblage. For telematics not only widens the possibilities for full-spectrum surveillance but also gives new meaning to F.W. Taylor’s “principles of motion economy”: the transformation of spatial mobility into an automated yet fluid disciplinary space designed to modify and monetize human behavior in the pursuit of preempting harmful futures.
A more current example of Ikeaveillance is Sesame Credit (2015), a Chinese government-led effort to gamify obedient citizenship. Developed in partnership with the online retail giant Alibaba, Sesame Credit is a social networking game that mines players’ online social interactions and financial transactions to gauge how well they adhere to state initiatives promoting good citizenship. For instance, if a gamer tweets negative criticisms of recent government policies or posts incriminating photos of dysfunctional government services on Facebook, their score goes down. But if gamers share state news about the rise of Japanese militarism or US imperialism, the score goes up. Additionally, since Alibaba is the largest online retailer in China, Sesame Credit is able to pull data from a gamer’s purchases (Osborne, 2015). So if they buy what the government deems socially valuable products, such as gardening tools, work shoes, or local goods, they are rewarded with bonus points. If they fail to pay a utility bill, or purchase goods from Japan, their score goes down (Hodson, 2015). Significantly, what makes Sesame Credit a foreboding interpellative mechanism is that it imparts real-world consequences—all the game levels come with patriotic rankings, and high scores mean actual perks. For instance, a score of 600 allows citizen-players to rent cars or book hotels without security deposits, 650 or higher lets them check into hotels faster, and over 700 speeds up the processing time for travel visas (Hodson, 2015). At present there are no penalties for players with low scores, but the more Orwellian aspect of Sesame Credit, besides the government’s mandatory 2020 opt-in date, is its ability to identify a player’s social network of friends and relationships (Osborne, 2015). When players check their score they can also see the scores of other players in their social networks.
Civil libertarians argue that players will eventually be penalized, or lose points, for having friends with low scores and that this, in turn, will increase social conformity and obedient behavior. By subverting the democratizing potential of Mann’s sousveillance into a consumer-oriented, postpanoptic massively multiplayer online game, Sesame Credit attempts to govern the forms of self-government, structuring and shaping the field of possible actions of subjects in alignment with the wishes and agenda of the state through interactive yet ideologically pretuned systems of control.
While the Orwellian implications of Sesame Credit seem more credible in China, a country widely criticized for its authoritarian tendencies, consumer-oriented security apps for mobile phones in the West function in a similar capacity, with comparable dystopic endgames in mind. For example, the iSay-iSay app for Android devices (and the discontinued PatriotApp) enables individuals to become part of a networked juridical and disciplinary apparatus of securitization by allowing them to report suspicious activities and persons directly to local, state, and federal agencies. By touching one of several icons, users can alert the FBI, the Centers for Disease Control, FEMA, or local law enforcement agencies to suspicious-looking persons or activities. There is even a whistleblower icon that allows users to alert the Government Accountability Office to institutional wrongdoing. In 2012, as part of a wider see-say program, the Massachusetts Bay Transportation Authority launched its own Transit Police app, which deputizes riders by encouraging them to share photos, text messages, or location details of suspicious individuals, missing persons, or real-time crimes with local police. Not surprisingly, the US Department of Homeland Security also jumped on the security-app bandwagon several years back with its own version. Besides its less-than-creative name, MyTSA does not offer any of the wider panoptic features of its commercial competitors. Rather, it functions more as a pedantic heuristic for unenlightened citizens of the security state, offering travelers information on permissible carry-on items, acceptable IDs, and, interestingly, updates on pre- and postsecurity processing times. Moreover, the majority of these security apps come with a prominent color-coded terror alert bar that adds a temporal dimension of insecurity to reinforce constant vigilance.
Interestingly, the City of Boston’s security app, Citizen Connect, offers a more upfront, reward-centered feature called “street cred.” Designed like a loyalty program, “street cred” allows users to create personal profiles and earn recognition points as frequent contributors. Citizen Connect users who actively report suspicious persons, ongoing crime, random acts of violence, or municipal infrastructure hazards are promoted to special “patrols,” where they earn special badges of civic distinction. Similar to the benefits offered by trusted-traveler programs, Citizen Connect encourages a neoliberal subjectivity that envisions civic-minded security as a consumable product rather than a public good. Like the performative aesthetics of predictive policing technologies, security apps allude to empowering individuals by offering them the chance to participate in a dynamic, convergent, and navigable space of automated governance.
Importantly, the growth in reward-centered feedback loops such as automotive telematics, Sesame Credit, and consumer-oriented security apps is emblematic of the emerging era of Ikeaveillance, where automated governance is relegated to software systems that “… people willingly and voluntarily subscribe to and desire their logic, trading potential disciplinary effects against benefits gained” (Kitchin and Dodge, 2011: 11). Ikeaveillance offers what Massumi (2015) calls “collective individuation … a singularly multiple subjectivity without a subject” (239) through the soft tyranny of interactivity. As such, it represents an emerging modality of automated governance that endeavors to “mold” and/or “deputize” ordinary individuals into obedient and patriotic citizens. Moreover, Ikeaveillance illustrates not only the rhizomatic expansion of the precrime assemblage (the continual nonhierarchical growth of new nodes, modalities, and possibilities of full-spectrum surveillance) but also how ontopower’s proprietary logic of preemption is orchestrated, navigated, and, ultimately, legitimated.
Conclusion
As I have argued, the precrime assemblage signals the prescient and aesthetic turn in securitization. The rise of predictive policing illustrates how the political economy of risk is not only shifting from sovereign borders to city streets but also altering the temporal orientation of our legal system. While the precrime assemblage advocates mutable boundaries between public and private spaces of social interaction and flows in communication, its risk models rely on the rigidity of actuarial-oriented decisionism. Yet such data-driven predictions hide racial, religious, and often socioeconomic disparities behind a veneer of scientific impartiality. Rather than creating more equitable and fair law enforcement practices and judicial processes, the precrime assemblage judges persons not as individuals but rather as numerical signifiers—placing them in actuarial categories based on what others have done in the past. Yet far from Foucault’s omniscient and all-seeing Panopticon lording over a select group of helpless citizens, the widening of the precrime assemblage indicates, at least to some degree, complicity on the part of the masses to voluntarily trade privacy or sacrifice anonymity for product discounts, benefits, and services or to self-enhance notions of civic-minded servitude. This rhizomatic expansion of the precrime assemblage is encouraged through media, connectivity, and consumption—notably, voluntary opt-in strategies that offer the neoliberal subject optimized environments of opportunity, convenience, and safety. In doing so, the precrime assemblage illustrates an emerging modality of power that delegates social control (and sovereign power) to software systems which simultaneously monitor, interpellate, discipline, and monetize populations into new realms of civic capital that can, in turn, better contribute to the preemption of harmful futures.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
