Abstract
In this paper, I use The New York Times’ debate titled “Can predictive policing be ethical and effective?” to examine what are seen as the key operations of predictive policing and what impacts they might have on our current culture and society. The debate focuses substantially on the ethics and effectiveness of the computational aspects of predictive policing, including the use of data and algorithms to predict individual behaviour or to identify hot spots where crimes might happen. The debate illustrates both the benefits and the problems of using these techniques, and takes a strong stance in favor of human control and governance over predictive policing. Cultural techniques is used in the paper as a framework to discuss human agency and to further elaborate how predictive policing is based on operations that have ethical, epistemological, and social consequences.
In 2015, The New York Times published a debate titled “Can predictive policing be ethical and effective?” Predictive policing, as defined by RAND Corporation’s (Perry, McInnis, Price, Smith, & Hollywood, 2013, p. xiii) report, is “the application of analytical techniques—particularly quantitative techniques—to identify likely targets for police intervention and prevent crime or solve past crimes by making statistical predictions.” The use of predictive policing in law enforcement is part of a longer historical shift from reactive policing to proactive policing. As described by Sarah Brayne (2017, p. 989), since the 1980s, police work has moved from reactively chasing crime suspects toward proactively policing particular hot spots where crimes may occur. According to Brayne (2017), “[p]redictive policing is an extension of hot spots policing, made possible by the temporal density of big data (i.e., high-frequency observations)” (p. 989). As explained by PredPol, one of the service providers, predictive policing uses algorithmic analysis of “criminal behavior patterns” together with three data points, “past type, place and time of crime,” to provide “law enforcement agency with customized crime predictions for the places and times that crimes are most likely to occur” (PredPol, 2016).
The New York Times debate does not focus on particular predictive policing technologies or service providers but discusses predictive policing on a general level. The debate shows how predictive policing has been praised for its effectiveness, while its ethicality has been criticized. As Kami Chavis Simmons (2015), a former federal prosecutor and a professor and the director of the criminal justice program at Wake Forest University School of Law, puts it in the debate, “some researchers claim [that predictive policing software] is better than human analysts at determining where crime is likely to occur and, thus, preventing it.” The caveat, she (Chavis, 2015) continues, is for example that Algorithms cannot inform police about the underlying conditions in the “hot spot” that contribute to crime in that area. A computer cannot tell the police department that rival gangs are about to engage in a violent confrontation about territory, but a local resident could.
While some of the commentators in the debate, such as Sean Young (2015), executive director of the University of California (UC) Institute for Prediction Technology, believe in the power of algorithmic sorting of data and are “developing a platform to analyze this social data and spit out real-time predictions about future events and help public health officials prevent disease outbreaks, stop violent crime and reduce poverty,” others like Faiza Patel (2015), the co-director of the Liberty and National Security Program at the Brennan Center for Justice at New York University Law School, are more skeptical about the effects, arguing that “algorithms used to predict the location of crime will only be as good as the information that is fed into them.” In her commentary, Seeta Peña Gangadharan (2015), an assistant professor in the Department of Media and Communications at The London School of Economics and Political Science, even states that “these technologies are fundamentally discriminatory.” These examples illustrate how the ethics and effectiveness of predictive policing are located somewhere between new technologies and the techniques that go into their use, and are consequently produced by them.
Using The New York Times debate as an inspiration, I examine the intertwining of the ethics and effectiveness of predictive policing through a framework German media theory has called Kulturtechniken. 1 Kulturtechniken, or cultural techniques, as the concept is now being translated, offers a way to understand how “media and things” provide “their own rules of execution” and “structure our possibilities in praxis” (L. C. Young, 2015). By looking at The New York Times debate, I ask what techniques are important to predictive policing. Like the debate, my approach does not focus on particular technologies or service providers but addresses the general principles according to which predictive policing is seen to operate. Following Bernhard Siegert (2008), I want to highlight “the operations or operative sequences that historically and logically precede” and help us to generate and understand “media concepts” such as predictive policing (p. 29). The debate itself consists of six brief commentaries. The commentators Kami Chavis Simmons, Aderson B. Francois, Seeta Peña Gangadharan, Andrew Papachristos, Faiza Patel, and Sean Young are experts in their own fields, and each of them represents a slightly different approach to predictive policing. Importantly, I will focus on what the experts say about predictive policing in this particular debate instead of looking at the corpus of their actual research on the matter. What I aim to achieve by this delimitation is a description of the fundamental recursive operations of predictive policing as they are described for the general audience, and of the distinctions or effects that these operations are seen to produce in our current culture and society. In practice, each of the subchapters of this article begins with an introduction to one of these operations as mentioned in The New York Times debate. What I aim to bring forward from these operations is what Siegert (2015, p. 3) calls the technical a priori: the technological conditions that determine and define how the ethics and effectiveness of predictive policing become possible in the first place.
Cultural Techniques or Ethics as Method, Method as Ethics
In her article, “Ethics as method, method as ethics,” Annette Markham (2006, pp. 50-51) describes method as a way of getting something done and ethics as a dialogical process of making sense of the world. Ethics, then, is neither a pre-established value system nor a set of beliefs about what is right or wrong but an active process of becoming. Getting something done always involves ethical choices, and ethical choices are made to get things done. Although Markham focuses specifically on researchers and their ethics and methods, what I want to suggest is that her understanding of ethics as method and method as ethics could be extended to the non-human realm as well.
If we remove both words “ethics” and “method,” we are left with the idea of getting things done as a process that makes sense of the world and vice versa. I posit both getting things done and making sense of the world as particular techniques. Techniques, here, are not merely different human skills, aptitudes, and abilities but operations that are co-constituted with non-human objects and the materialities that enable them (Cf. Winthrop-Young, 2013, p. 6). This take on techniques is specific to an approach called cultural techniques—an approach that has been developed in German media theory and only recently introduced to the Anglo-American research tradition.
Now cultural techniques as a notion has different meanings connected to different historical periods (see Siegert, 2013; Winthrop-Young, 2013), but here, I am relying on Geoffrey Winthrop-Young’s (2015) recent formulation of cultural techniques as operative chains composed of actors and technological objects that produce cultural orders and constructs which are subsequently installed as the basis of these operations. At the core of this [. . .] meaning of cultural techniques is the notion that fairly simple operations coalesce into complex entities which are then viewed as the agents or sources running these operations. (p. 458)
Winthrop-Young exemplifies this statement by noting that people drew pictures before the concept of the image was conceived and played music before the concept of tonality existed. Concepts do not have ontological priority; rather, they emerge from practice (Winthrop-Young, 2015, p. 459).
In The New York Times debate, predictive policing is defined as a technique composed of an incongruous mixture of systems, technologies, events, and actors. It is seen as a computationally based practice that has the capacity to prevent crime, but one that also carries the capacity to bring along new unjust conditions. One interesting feature of the debate is that many of the critics of predictive policing focus on its computational techniques: the practices of using computational systems to make future predictions, and of acting upon those predictions in the real. For example, Gangadharan (2015) in her commentary warns us that predictive policing can lead to ceding judgment to predictive software programs and justifying actions because “the computer said so.” In these critiques, predictive policing shapes our relations to the surrounding world autonomously and without human control.
Jussi Parikka (2015a, p. 30) notes that cultural techniques have epistemological, organizational, and social consequences. One particular technique of predictive policing is focused on flagging hot spots based on the probability of crime. As I will discuss in the following sections, these predictions are seen to have the capacity to change not only how we understand certain locations as high-crime areas but also our orientation, including law enforcement officers’ orientation, toward those areas and the people living there (Chavis, 2015). This exemplifies what Siegert (2013, p. 57) means when he notes that the human is always a product of cultural techniques and has no ontological priority. Cultural techniques are the means by which humans come to understand themselves and are formed as subjects (Parikka, 2015a, p. 30). This is the point where the approach of cultural techniques not only bears similarities with many traditional takes on posthumanism but also differs from them. Winthrop-Young (2014, p. 386) points out that posthumanism argues for the hybridization of the human through and with technology, but for cultural techniques, the human never existed without the non-human.
The idea that we are entering an era where things happen because the computer said so is very much in line with how the cultural techniques approach frames human–technology relations. For the cultural techniques approach, the human mastery over a technology is problematic. Cornelia Vismann (2013), for example, maintains that Cultural techniques define the agency of media and things. If media theory were, or had, a grammar, that agency would find its expression in objects claiming the grammatical subject position and cultural techniques standing in for verbs. Grammatical persons (and human beings alike) would then assume the place assigned for objects in a given sentence. (p. 83)
What Vismann (2013, pp. 83-85) suggests is a change of perspective where the sovereign subject, or the autonomously acting person, is disempowered and situated within objects and technologies, which have agency and as such determine the possible courses of action. Her example, adapted from Wolfgang Schadewaldt, is of the bather and the spear thrower, which denote two different ways to think about agency. According to Vismann (2013), “the bather is carried by the water” and “the trajectory of bathing remains bound to the medium of water,” while the hand that throws the spear only initiates a process with a goal in mind (p. 85). However, while the example of water as a medium is more in line with cultural techniques, the difference between these two processes is actually only superficial, because all “things and media will always function as carriers of operations, irrespective of what is at stake in their execution” (Vismann, 2013, p. 86). Like the water, the spear also determines an act, and the operation produces “a subject, who will then claim mastery over both the tool and the action associated with it” (Vismann, 2013, p. 83).
This is where the effectiveness of predictive policing meets its ethics. Predictive policing, when executed, carries specific operations. These operations are tied, for example, to the computational power of processing large datasets and analyzing information from various sources, ranging from criminal records to weather reports and, in some cases, even social media data. Computers and algorithms are more effective than human beings in making connections between different datasets. If we look at the current techniques of analyzing data in predictive policing from the perspective of effectiveness, we can quite easily accept Friedrich Kittler’s (2017) notion that we are in a situation where “computers and cybernetics” are becoming “increasingly necessary” and humans are becoming “increasingly random” (p. 13). If, for the sake of effectiveness, we are moving to computational cultural techniques, then the question is how ethical a machine or a computational technique can be. What is the role assigned to humans? In other words, how does ethics function through cultural techniques?
Location and Data
Geoffrey Winthrop-Young (2013) proposes that “Rather than tackling the question ‘What are cultural techniques?’, it makes more sense to ask: ‘What is the question to which the concept of cultural techniques claims to be an answer?’” Following this line of questioning, let us begin to explore The New York Times debate and see what particular cultural techniques are highlighted in the context of ethics and effectiveness and what questions they aim to answer. One particular question related to the effectiveness of predictive policing is: where will a crime potentially take place?
According to Josh Scannell (2015), many of the current predictive policing technologies are based on disease or weather prediction models: these technologies focus on locations and operate preventively by, for example, increasing law enforcement presence in areas of potential crime. In other words, the mapping of a location is based on techniques of data analytics and data visualization, which are then used to control certain areas. PredPol (2016), for example, “enables law enforcement to enhance and better direct the patrol resources they have” by automatically generating 500 feet × 500 feet areas on the map as potential places where crime can occur “for each shift for each day.” Here, we are reminded of Siegert’s (2013) notion that spaces never “exist independently of cultural techniques of spatial control” (p. 57). The locations predictive policing distinguishes are very particular because their borders are defined through predictive modeling rather than zip codes, street corners, or the physical boundaries of the place. These areas can be, for example, areas of high crime and nearby areas determined to be at risk for “subsequent crime” (Brayne, 2017, p. 989).
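The grid-based flagging operation that PredPol describes can be made concrete with a minimal sketch. To be clear, this is not PredPol’s actual, proprietary algorithm, which models criminal behavior patterns over past type, place, and time of crime; the sketch below only illustrates the elementary operation of binning event coordinates into fixed-size cells and flagging the densest ones, with all data and parameters invented for illustration:

```python
from collections import Counter

CELL = 500  # hypothetical cell size in feet, echoing PredPol's 500 x 500 feet areas

def flag_hot_spots(events, top_n=3):
    """Bin (x, y) event coordinates into CELL-sized grid cells and
    return the top_n most event-dense cells as candidate 'hot spots'."""
    counts = Counter((x // CELL, y // CELL) for x, y in events)
    return [cell for cell, _ in counts.most_common(top_n)]

# Invented coordinates (in feet) standing in for past crime reports
events = [(120, 80), (130, 95), (140, 60), (2600, 2700), (90, 400)]
print(flag_hot_spots(events, top_n=1))  # the single densest cell
```

Even this toy version makes the debate’s point tangible: the “hot spot” is not found but produced, and its borders follow the arbitrary grid rather than any physical boundary of the place.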
The importance of spatial control, and of cultural techniques able to distinguish particular locations based on their crime potential, is also a significant focus in The New York Times debate. Many of the experts, including Patel, Gangadharan, and Chavis Simmons, raise the concern that the creation of “hot spots” is never an entirely objective process. In his commentary, Aderson B. Francois (2015), professor of law at Howard University and supervising attorney of its Civil Rights Clinic, notes that “predictive models carry an inherent risk of racial profiling.” These authors point out that even if predictive policing is seen as only mapping locations, it has effects on the individuals who are either passing through or trying to live their lives in those locations. To be more explicit, Patel (2015) argues that even if predictive policing technologies used only locational crime data, they would hardly be neutral: “If an algorithm is populated primarily with crimes committed by black people, it will spit out results that send police to black neighborhoods.”
What these authors maintain is that hot spots are actively produced by particular techniques, and these techniques have their own biases and problems. Francois (2015), in his New York Times commentary, points to the history of crime prediction techniques: Using data to forecast crime is not a new concept; in essence, it relies on the ancient truism that criminals are creatures of habit and, as such, will tend to commit the same crimes, at the same times, in the same places.
Models predicting the likelihood of a parolee’s reoffending have been developed since the late 1920s, and data-based risk assessment has been part of the justice system for the past three decades (Brayne, 2017, p. 981). “What is new about modern predictive policing,” Francois (2015) continues, “is the promise that, using so-called big data, law enforcement can use sophisticated objective statistical and geospatial models to forecast crime levels, thereby making decisions about when, where, and how to intervene.” Gangadharan (2015) calls this promise a “myopic view of technology’s role in public safety. A misguided belief in the objectivity and neutrality of predictive technologies permeates every step of the process.” Francois, Patel, Chavis, and Gangadharan all use racial profiling as an example of how the objectivity of statistical and geospatial models is a fallacy. Francois (2015) points out that racial profiling was a problem in crime forecasting long before the computational analysis of data. Gangadharan (2015) maintains that over-reporting of crime incidence by law enforcement in minority communities—whether due to implicit or explicit racial bias—will literally color the computational analysis, designating these areas a “hot spot” for more policing, which will probably lead to increased incarceration rates there.
If we think about predictive policing as a cultural technique rather than a technology, our register immediately moves from neutrality or objectivity toward acknowledging that these systems are constantly drawing distinctions and shaping our culture in their own ways. Techniques and technologies are never neutral; they, for example, establish and maintain “power-laden boundaries across race, gender, and class” and have differing consequences for different people based not only on identity-related factors but also, for example, on access to the technologies and techniques in question (Noble & Roberts, 2015, pp. 2, 9). Drawing distinctions is not only a question of how techniques or technologies are being used by people. As Liam Cole Young (2015) notes, “[t]he study of cultural techniques holds that media and things are not simply passive objects to be activated at the whim of an intentional (human) subject. Media and things supply their own rules of execution.” He is here referring to Siegert’s famous example of the door as a cultural technique. For Siegert, the door has particular affordances, which limit its potential usage. One can, for example, open or close the door, and when it is opened, one can move from one space to another. To rephrase, the big data techniques used in predictive policing can be seen as solving some ethical problems, but in doing so, they open up others. For example, in the context of locational data, Brayne (2017, p. 997) points out that big data crime prediction can eliminate some problems, such as the human tendency to rely on stereotypes regarding class or race when facing incomplete information about a potential suspect. However, referring to previous research, she (Brayne, 2017, pp. 997-998) is also quick to note that big data techniques only appear neutral on the surface and that crime data are often incomplete: crimes taking place in public places are overrepresented because they are more likely to be reported, crimes go unreported by groups and individuals who do not trust the police, and police attention is focused on particular neighborhoods at a disproportionately high rate. “These social dynamics inform the historical crime data that are fed into the predictive policing algorithm,” she (Brayne, 2017, p. 998) notes.
Many of the critics of predictive policing in The New York Times debate seem to suggest that we cannot really know where a crime will take place because the data are biased. They highlight that using biased data in locating hot spots may even create “tension and further destabilizes an area most in need of police protection” (Chavis, 2015). Could we solve the problem by building more extensive and comprehensive mechanisms for data analysis, which would then better inform us about potential crimes? What the cultural techniques approach would argue is that data are only part of the problem. Paraphrasing Siegert (2008), cultural techniques never merely communicate or exchange information; they are acts that create “order by introducing distinctions” (p. 35). The larger epistemological and ontological problem, then, is related to drawing distinctions between different areas and the people living in those areas in the first place: in the Roman Empire with a plow, today with data, and tomorrow who knows how. Drawing a line, mapping a location, or defining a hot spot is never neutral or objective but an ethical decision and a political act. It marks “the distinction between inside and outside, civilization and barbarism, an inside domain in which the law prevails and one outside in which it does not,” as Siegert (2013, p. 60) puts it.
Individual
In her New York Times commentary, Patel (2015) notes that sometimes predictive policing is used not only to “forecast where crime will likely take place” but also to predict “who is likely to commit a crime.” This question moves us to cultural techniques that target individuals and operate on the basis of future potential. Here predictive policing introduces cultural techniques that no longer evaluate the human based on his or her individual characteristics, or even his or her past history, but on the future capacity to act. It is the predicted future, the potential, that begins to condition human agency. What is, of course, important here is that the predicted future is achieved through particular cultural techniques. In The New York Times debate, the particular cultural techniques that try to predict individual behavior are discussed by Patel (2015) and Sean Young (2015).
The data we produce about ourselves, by ourselves, through social media sites have an increasing role in calculating an individual’s potential future. Andrew Guthrie Ferguson (2017, pp. 1139-1140) notes that, for example, the Chicago Police Department studies “social networks, and even social media” to map the relationships between the gang members of the city and to defuse retaliatory violence. Sean Captain (2015), in a news story discussing Hitachi’s Visualization Predictive Crime Analytics software, notes that “Social media plays a big role in predicting crime, they [Hitachi] say, improving accuracy by 15%.”
The importance of social media data in predictive policing is further exemplified by Sean Young (2015), who in his New York Times commentary argues that Just five years ago, many people thought social media was a pointless tech fad. But social media’s use is no longer in dispute. It allows people to connect with others, express themselves and advertise brands, of course, but it is more than a tool for business and self-promotion. Social media can be used to help predict and prevent crime.
Young (2015) references the school shooting at Marysville-Pilchuck High School in 2014 and notes that information that the shooter might harm himself and others had circulated on social media a month prior to the events. For him, early detection and immediate treatment of the person-at-risk might have prevented the situation. Young (2015) notes, As predictive technology becomes more available and reliable, it could be used to provide immediate treatment (through a collaboration between law enforcement and mental health professionals) for a person-at-risk to prevent deaths, as well as provide services and information to those in danger.
For Sean Young, the power of predictive policing culminates in the idea of recognizing and identifying risk subjects. These techniques are of course already in use in different spaces; Louise Amoore’s research, for example, focuses on explicating how border control is identifying risk subjects with big data techniques. According to Amoore (2011, p. 27), contemporary risk calculus is not seeking causal relationships between data points but is based on calculating uncertainty and opening the world of probabilities. As an example, Amoore gives us an equation: “if *** and ***, in association with ***, then ***” and explains that [i]n the decisions as to the association rules governing border security analytics, the equation may read: if past travel to Pakistan and duration of stay over three months, in association with flight paid by a third party, then risk flag, detain.
This is what Amoore calls a data derivative.
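Amoore’s association rule has the shape of a simple conditional, and it can be sketched in code. The attribute names and thresholds below simply transcribe her hypothetical border-security example; they do not describe any real system:

```python
def risk_flag(record):
    """Illustrative transcription of Amoore's association rule:
    if past travel to Pakistan and duration of stay over three months,
    in association with flight paid by a third party, then risk flag."""
    if (record.get("past_travel") == "Pakistan"
            and record.get("stay_months", 0) > 3
            and record.get("flight_paid_by") == "third party"):
        return "risk flag, detain"
    return "no flag"

traveller = {"past_travel": "Pakistan", "stay_months": 4,
             "flight_paid_by": "third party"}
print(risk_flag(traveller))  # -> risk flag, detain
```

The sketch shows what Amoore means by calculating uncertainty rather than causality: the rule merely associates data points into a speculative future subject without asserting any causal link between them.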
According to Amoore (2011, p. 27), the data derivative is a specific form of abstraction deployed in contemporary risk-based security calculations. It is a technique that speculates on future value, the potential, rather than actual value. Data derivatives are techniques for identifying potential terrorists at airports (Amoore, 2011), but they are also used for targeted marketing and identifying potential audiences (Arvidsson, 2016). Adam Arvidsson (2016), who builds on Amoore’s work, argues that derivatives have two fundamental characteristics: first, “derivatives operate with derived qualities: qualities that have been derived from an underlying entity, or simply an ‘underlying’” and second, derivatives are paths “projected into the future” (pp. 5-6). If we combine these two characteristics, what is sought with derivatives is the future value of an underlying (assets, goods, etc.). What is emphasized by both Amoore and Arvidsson is that the process of de-constructing the underlying into qualities, constituent elements, and attributes, and then re-constructing them into derivatives, itself constructs a reality of its own without any necessary (representational) relation to the underlying entity. The question is no longer “who we are, nor even on what our data says about us, but on what can be imagined and inferred about who we might be—on our very proclivities and potentialities” (Amoore, 2011, p. 28).
According to a story in The Washington Post, the predictive policing system Beware, for example, translates data into threat scores to inform police about the situation or person in question: As officers respond to calls, Beware automatically runs the address. The searches return the names of residents and scans them against a range of publicly available data to generate a color-coded threat level for each person or address: green, yellow or red. (Jouvenal, 2016)
Furthermore, Justin Jouvenal (2016), the reporter behind the story on Beware, notes that Exactly how Beware calculates threat scores is something that its maker, Intrado, considers a trade secret, so it is unclear how much weight is given to a misdemeanor, felony or threatening comment on Facebook. However, the program flags issues and provides a report to the user.
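Because Intrado treats the scoring as a trade secret, any reconstruction of Beware is necessarily speculative. The sketch below is a hypothetical stand-in that only illustrates the general operation Jouvenal describes: collapsing heterogeneous flagged items into a green, yellow, or red signal. All weights, categories, and thresholds are invented for illustration:

```python
# Invented weights; how Beware actually weighs these items is a trade secret.
WEIGHTS = {"misdemeanor": 1, "felony": 3, "threatening_post": 2}

def threat_level(flags):
    """Collapse a list of flagged items into a color-coded threat signal."""
    score = sum(WEIGHTS.get(flag, 0) for flag in flags)
    if score >= 5:
        return "red"
    if score >= 2:
        return "yellow"
    return "green"

print(threat_level(["felony", "threatening_post"]))  # -> red
```

The point of the sketch is that only the color, not the weighting or the reasoning behind it, is what reaches the user.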
What can be imagined and inferred about people connects to another cultural technique: how these imaginings and pieces of inferred information are mediated to the people who need them. The color-coded threat score is a result of data analytics but also of techniques of mediating and visualizing information. The color-coded threat level indicator has a precedent in the world after 9/11. Brian Massumi (2005) has shown how the Department of Homeland Security developed and used a color-coded threat alert system not only to inform people after the terrorist attacks but also to “calibrate the public’s anxiety” (pp. 31-32). What is important here for Massumi’s argument is the idea that the color-coded threat charts do not describe the content of the threat to the public. Massumi calls these color codes signals without signification. Threat charts and threat scores make data relations perceptible, but as Munster (2013) notes, “[t]o make something perceptible [. . .] is not the same as perceiving something” (p. 82).
When predictive policing systems generate a color-coded chart to visualize the threat level rather than describing how the threat is conceived, the police are—from the perspective of epistemology—controlled by the technique. Parikka (2015a) calls this metaprogramming: “coding the humans as computational aspects of an organization” (p. 45). He (Parikka, 2015a) is interested in organizations as “software computerized environments” where labor is trained to follow abstracted commands and to adjust to the patterns of organizational logic (p. 45). Metaprogramming here relies on the psychological and physiological modulation of a human being through different cultural techniques. From the perspective of metaprogramming, the police officer using predictive policing becomes one operative chain in the process through which the technology is trying to get things done. If the identity of the risk subject or potential criminal is based on mathematical modeling, the capacity of the police to act is based on a color in a diagram. Human agency and the epistemological groundings for the capacity to act are conditioned by the color-coded threat chart and the ways in which the police are trained to use the information they receive. 2
Policing
Ross Coomber, Leah Moyle, and Myesa Knox Mahoney (2017) use the concept “symbolic policing” to describe a form of policing that does not address the sources of crime directly but tries to prevent crimes by signaling that the areas are under control. Predictive policing, with its methods of highlighting hot spots in order to locate police presence not to stop a crime as it is happening but to prevent it before it has even started, or of identifying risk subjects and intervening before they are in harm’s way, seems to fit very well under this definition.
Siegert (2015, p. 13) notes that the material and the symbolic often go hand in hand with cultural techniques. Cultural techniques operate with distinctions, and through distinctions, “the symbolic is filtered out of the real” and, “conversely, the symbolic is incorporated into the real.” To illustrate this process, Vismann (2013, p. 84) refers to the Roman Empire and the cultural technique of using a plow to draw a line, which marks the limits of the city. Inside the lines is the material and symbolic regime of the human, with walls and laws, moral codes, and marketplaces. What is left outside is nature, which becomes the symbol of unruliness and barbarity. The line the technique draws both materially and symbolically differentiates us from them (Vismann, 2013; L. C. Young, 2015). Similarly, predictive policing operates with the logic of pre-emption, and it shows how the symbolic (the potential for crime) is carved out from the real (the spatio-temporal data), and how, after algorithmic filtering, the symbolic (the presence of police) is incorporated into the real (the street).
The effectiveness of predictive policing here is premised not only upon the effective use of data but also upon the effective use of police resources and the physical presence of human bodies. Interestingly, Francois (2015) in The New York Times debate asks whether using law enforcement is the most ethical way to respond to predictive modeling of crimes: the deepest flaw in the logic of predictive policing is the assumption that, once data goes into the model, what the model predicts is the need for policing, as opposed to the need for any other less coercive social tools to deal with the trauma of economic distress, family dislocation, mental illness, environmental stress and racial discrimination that often masquerade as criminal behavior.
Papachristos (2015) notes that in cities like Chicago, potential criminals are being identified with predictive policing systems, but instead of arresting or judging these people, more subtle ways of steering them out of harm’s way are used: “Police and community members sit down at the same table with those at risk. The police warn of legal consequences; community and family members raise a moral and compassionate voice against gun violence; and service providers offer access to employment and health services.”
Policing here no longer refers to the duty of a police force to enforce the law but rather to techniques of making sense of the world by other means. Papachristos (2015), for example, suggests a victim-centered public health approach; here, the techniques of predictive policing—that is, risk assessments and observations—would no longer be techniques of law enforcement alone but also something social services and community members could use.
These views reveal the complexity of predictive policing. On one hand, the debate shows that ethics and effectiveness could be located in the technology itself. For example, Sean Young (2015) states that “Technologies, whether they be computer models or novel medical procedures, have risks and benefits. [. . .] We, as a society, should continue to study these ethical questions as we implement innovation.” On the other hand, what Francois’ and Papachristos’ examples above exemplify is that, to be constructive, the criticism of predictive policing needs to extend from questions of technology, data, and algorithms to the various physical settings where that data can play a role, where it can be used, and who uses it. This is a movement from technology to techniques, a perspective through which neither the effectiveness nor the ethics of predictive policing can be predicated solely on, for example, issues related to big data and algorithms; as Francois (2015) maintains, we also need to account for the roles and cultures of law enforcement and their current policies.
Coda: Ethics
“Can predictive policing be ethical and effective?”, The New York Times debate asks. In his answer, Papachristos (2015) notes that “algorithms might help narrow the focus and reach of the justice system, leading to fewer and fairer contacts with citizens. But it cannot happen if police and prosecutors use data without oversight or accountability.” A similar view is echoed by Chavis (2015), who also warns against over-reliance on predictive policing technologies and stresses that there is always a need for human analysis. Importantly, positioning humans as ethical governors or gatekeepers of predictive policing does not answer whether predictive policing can be ethical and effective but tries to find ways in which it could be both. The more routinized these techniques become and the more widespread they are across the different fields of our society, the more our perspective changes; we are no longer asking whether predictive policing should be used but where and how it could be used. Predictive policing is becoming a cultural technique in its own right, and our ethical understandings need to adapt to this new technique.
In the existing scholarship, very little has been said about the connection between ethics and cultural techniques, perhaps because the technical a priori seems to leave little room for human-based ethics. Yet, if “every choice one makes about how to get something done is grounded in a set of moral principles,” as Markham (2006, p. 50) notes, then ethics, too, seems to have an important role in the discussion of cultural techniques. When predictive policing is used, it sets particular orders and constructs into the world. These orders and constructs are cultural techniques through which ethics function and ethical models are invented. They are what Parikka (2015b) calls a “systematic rearranging” of relations of sense and sensibilities which “are not merely anymore expressed in what is directly perceivable by the senses” (p. 181). These orders and constructs do not emerge out of the blue but are part of recursive chains of operations; the movement from reactive policing to proactive policing is tied to the development of statistical analysis, computational big data predictions, and even data visualization. Proactive policing and predictive analytics bring with them the technique of identifying and targeting hot spots, which can then be transformed into techniques of identifying and targeting individuals. These techniques can be adapted from the fields of law and security into other fields of our culture and society as well. As optimists like Sean Young (2015) put it, “prediction technology gives us a class of tools that were previously only accessible by secretive agencies like the CIA and NSA. Let’s use them.”
The New York Times debate begins from the implication that there is a distinction between ethics and effectiveness. In the debate, if we follow this logic, we see that effectiveness is often defined by computational techniques (and effectiveness here simply means that these techniques produce something in the world, not necessarily that they produce, for example, accurate results), while ethics is located in the human realm. Interestingly, The New York Times debate does not mention that one proposed solution for the ethical governance of computational systems is to imagine “the construction of ethics, as an outcome of machine learning rather than a framework of values” (Ganesh, 2017). Specifically, computational ethical applications have been discussed in the context of autonomous weapon systems, where researchers suggest that ethical problems could be overcome by designing an “ethical governor,” a system where moral decision-making becomes a function of a machine (see Arkin, Ulam, & Duncan, 2009). The effectiveness of this system is based not only on the ethical constraints coded into the component but also on the use of data and statistics for making predictions. In principle, these techniques could be used not only in the context of military technology but also in other fields of our culture where automation and computational power have an important role and where ethical governance is needed and demanded (see Arkin, Ulam, & Wagner, 2012; Böhlen & Karppi, 2017, p. 13). The ethical governor component thus points toward the possibility of a computational approach to ethics. But before these systems are implemented, let us return to the role of the human in the debate.
The “study of cultural techniques raises questions about how things and media operate,” Vismann (2013, p. 87) argues. On one hand, the more we know about how predictive policing operates, the more the demands for ethical human governance and control of computational systems start to seem like a paradox: the effectiveness of these operations is based on going beyond the human threshold. As such, predictive policing highlights what Vismann (2013) calls “the vantage point of cultural techniques,” where “the sovereign subject becomes disempowered, and it is things that are invested with agency instead” (p. 86). But on the other hand, the demands for human-based ethical governance of these systems are also a manifestation of how we as humans are forced to find new roles in a culture where many fundamental parts of society are being reorganized through computational techniques. Paraphrasing Bruno Latour (2009, p. 174), we “are never faced with people on the one hand and things on the other,” but rather we “are faced with programs of action, sections of which are endowed to parts of humans, while other sections are entrusted to parts of nonhumans.” For Vismann (2013), the sovereignty of a subject is not only limited by cultural techniques, which “determines the scope of the subject’s field of action” (p. 84); cultural techniques also make sovereignty possible, at least in some form (p. 88). In other words, if the sovereign human subject has become disempowered in the field of big data and predictive analytics, maybe the field of ethics could be a place where we as humans can find a new meaningful role.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was partly funded by a research grant from the Kone Foundation.
