Abstract
The surveillant capacities of smartphones have generated an array of safety apps targeting cis female users. Current feminist scholarship studies these apps from a variety of disciplinary perspectives that stress their drawbacks, namely, that they are largely ineffective and that they instead burden the user with the labor of continuously assessing oneself and one’s surroundings. This article acknowledges the apps’ numerous failings while at the same time turning attention to the surveilled, responsible, projected user they reproduce in order to tease out some of the internal contradictions and nuances of this figure and its place in digital culture. The study samples a number of safety apps that focus on gender violence in public spaces and finds that the apps solicit a form of gendered labor that asks largely cis women users to work towards ‘feelings’ of safety.
Keywords
Starting in the early 2010s, a surge of more than 200 apps addressing sexual violence entered the market (Bivens and Hasinoff, 2018). While the apps target a variety of issues from domestic violence to the sexual abuse of people with disabilities, a large category of personal safety apps focuses on sexual violence against women in public space. Most of the feminist literature on these apps categorizes their functions and many failures (Ali et al., 2015; Bivens and Hasinoff, 2018; Eisenhut et al., 2020; White and McMillan, 2020; Ford et al., 2022; Wood et al., 2022). The safety app industry itself conducts a similar inventory of features, distinguishing one app from another for the purpose of promoting premium versions that come with more comprehensive forms of risk-management. In place of this more taxonomic analysis, I’m interested in the projected user the apps produce, the labor demanded of her, and how she fits into and perpetuates cultural imaginaries of sexual violence against women. Most of the apps gender the user as female; this article refers to the user as a woman, not because it assumes all users of the apps are women but because of the user the apps construct. Feminist literature on safety apps covers how safety apps, and phones in general, naturalize neoliberal regimens of self-monitoring and responsibilization (Cumiskey and Brewster, 2012; Bivens and Hasinoff, 2018; White and McMillan, 2020; Das, 2021). This article deepens this line of inquiry to explore some of the tensions and contradictions attached to the neoliberal functions of the safety apps and the tracked user that they construct. It introduces new concepts for thinking about the ideological underpinnings of safety apps and theorizes the promises the apps make around ‘feeling safe’. I expand upon this idea of ‘feeling safe’ to map the apps’ affective mechanisms and the tensions they present.
This article pivots attention toward the imagined user and the perceptions of sexual violence that this figure generates.
This article presents a theoretical discussion and discourse analysis of the safety apps. It makes no claims about people’s experiences of the apps. I look at the experience the apps construct through their marketing materials and design to make arguments about their discursive function in popular understandings of women, gender violence, and technology. The constructed female user is promised a projected yet impossible state of feeling safe that is deeply rooted in neoliberal discourses of gendered labor, technology, and rape. The aspiration of feeling safe relies on the solicitation of an affective labor that produces a datafied and traceable risk-attracting user at odds with the safe experience the apps promise to deliver. The apps do not construct safety as a right, but rather as a routine of upkeep performed by the risky, vigilant woman traveling through public space. Feeling safe is the central promise of the safety app, expressed in two main techno-affective mechanisms, cross-tracking and the machine legible body, which ultimately misdirect attention away from the patriarchal ideologies that underpin sexual violence.
Cross-tracking is the live self-tracking of the user’s movements combined with the tracking conducted by friends and family. Cross-tracking is meant to alert a friend or loved one should something happen. By machine legible, I mean the apps glorify a direct access to bodily information that bypasses user decision-making. Feeling safe is defined by a set of contradictions that play out through these two primary mechanisms, which make safety ultimately unachievable. The apps claim to be preemptive, but their features are largely retrospective in nature and focused on gathering evidence of violence after it has occurred. They emphasize user agency and self-monitoring before, after, and during a violent attack, but they offload this same personal safety labor to automated forms of threat assessment and response. The apps define the user as at risk at the same time that they offer an experience of safety. The user is meant to inhabit a state of simultaneous hypervigilance and calm. The apps’ marketing materials paint a female user who walks through the world with a confidence that is balanced with the proper degree of caution. These contradictions are circumscribed within larger neoliberal, preemptive, gendered cultures of safety labor, and the digital curatorship of the self through social media. Safety apps extend these familiar tensions, but they harness them in new and interesting ways that define safety as a feeling or sensation tied to the at-risk user, her physical space, and digital expressions of her proactiveness.
Safety app scholarship
Most safety apps emerged in the early to mid-2010s. This initial wave coalesced around the global response to the particularly heinous 2012 New Delhi gang rape and murder of a woman known by the pseudonym Nirbhaya. This incident prompted international feminist protests, public discussion, and a flurry of safety app development across the world. The high-profile rape case coincided with the first generation of self-management and productivity apps that centered on digitally tracking, quantifying, and improving the user across different areas of life: sleep, reproduction, dieting, exercise, finances, etc. The first personal security apps emerged out of this cultural moment and were fairly simple in their features, most designed around either reporting gender violence in urban space or inviting loved ones to follow the user on their journey home. Over time they became more elaborate. In their contemporary form, safety app marketing embeds a greater awareness around a diversity of gender publics that experience vulnerability and harm. There are also increased pairings of apps with wearable smart devices (Wilson-Barnao et al., 2021). They are developed predominantly for North America or South-East Asia. I acknowledge cultural differences across these countries; however, the architectural similarities of the apps in terms of their features suggest a broader techno-cultural imaginary of technological solutionism and gender violence that this article explores.
Safety app literature accounts for the functional failings of the safety app. Many safety apps have numerous bugs that include downloading and registration issues (Maxwell et al., 2020; Ford et al., 2022). Despite claims to make safety more accessible, many apps restrict features behind paywalls (Bivens and Hasinoff, 2018). The majority of safety apps focus on the victim and are developed by men working in the private sector (Wood et al., 2022). Most significantly, there is no evidence that safety apps actually improve safety (Maxwell et al., 2020; Ford et al., 2022; Wood et al., 2022). In effect, they are launched, malfunction, and then disappear. The short life cycle of safety apps is in part due to their proclivity to be absorbed into larger infrastructural projects, one example being Safetipin. According to the Web site, the creators of Safetipin want women to feel safer in public space. The app asks users to log aspects of the urban environment that affect their feelings of safety like lighting, visibility, the presence of others, and the state of the sidewalk. In 2016, Bogota used the app to evaluate the city infrastructure of bike paths for a sustainability project. Nairobi coopted the app as a platform for citizens to communicate their grievances to the government. A darker form of cooptation occurs when abusers use the apps as extensions of power and control to track their partners (Kalms, 2017; Das, 2021). From this history, safety apps exist as coding and data farming experiments that are folded into other interests, whether that be municipal projects or abusive partners. Safety apps are essentially a discursive exercise that this article places at the center of study. While safety apps are widely viewed by scholarship as ineffective, they continue to attract academic and public discussion. I would argue this is due to the persistence of the stranger rape narrative in popular imagination. 
In other words, they persist because of the symbolic function they perform. In addition, the prospective state of feeling safe that they promise is deeply entangled with other techno-affective neoliberal practices of self-management already ubiquitous in digital culture.
As well as addressing the operational and more pragmatic shortcomings of the apps, some scholarship discusses their ideological problems, in particular their tendency to reproduce gender systems that responsibilize women for sexual safety (McCarthy, Caulfield and O’Mahony, 2016; Bivens and Hasinoff, 2018; Hjorth et al., 2018; White and McMillan, 2020; Das, 2021). In this way, the apps reinforce and extend the safety labor already expected of women. Safety apps belong to a culture of safety work embedded in daily routine that burdens vulnerable publics with the continuous affective labor of self-monitoring and self-protection in public space (Beck, 1992; Stanko, 1992, 1997; Campbell, 2005; Rader, 2008; Kelly, 2013; Vera-Gray, 2018; Lennox, 2019; Vera-Gray and Kelly, 2020). Safety apps formalize and visualize defensive practices already in play for women, what Vera-Gray calls ‘avoidance behaviors’ (Vera-Gray, 2018: 86). One example might be a woman crossing the street to avoid a suspicious-looking stranger. Women are expected to take up less space, avoid public areas that are deemed risky, get off the train or bus to avoid an aggressor, and take the shortest path home if they feel threatened (Leszczynski and Elwood, 2015: 15). As a result, women are less likely to loiter or get lost in the city, which are part of the luxury of getting to know a space and feeling at home in it (Das, 2021: 5). This said, the intersectionality of gender, race, and class complicates this distinction between the privilege of getting lost and the burden of a definitive path. Marginalized publics in general need to perform continuous self-surveillance in public space. But the apps imply that women’s paths, by their very nature, include impediments, interruptions, and redirections, with the ultimate interruption coming in the form of an attack.
These forms of safety labor, digital and analog alike, are part of a broader cultural context. Neoliberalism interpellates an endlessly pliable and willing subject who is responsible for achieving success, happiness, and wellness. These aspirations are shaped around consumption, constant improvement, and adaptation. The pressure of intense and continuous self-management is undoubtedly present in the safety apps. In this context, their offloading of responsibility onto the individual and away from emergency services presents a narrowing definition of citizenship (Ellcessor, 2018: 163). The turn away from the institutional and civic duty to provide safe spaces, and toward safety as an experience controlled by the individual, participates in and reinvigorates the victim-blaming already endemic to gender violence in popular culture. The technological self-responsibilization of women in abusive situations extends to the private sphere. Writing in the context of domestic violence, Woodlock, McKenzie, Western and Harris analyze the significant affective safety labor women are encouraged to perform by police and the legal system in order to manage their digital lives against technologically-enabled forms of coercion and control (Woodlock et al., 2020). Women are expected to alter their participation in social media, various online accounts, and the way they use their phones to circumvent abusers’ digital intrusions. In sum, safety apps overwhelmingly responsibilize women for preventing, managing, documenting, and responding to their own abuse.
Feminist scholarship highlights the expansion of digital tools that coopt sexual safety to justify more intrusive systems of surveillance. Stardust, Gillett, and Albury write on sexual regulation in Tinder, arguing that, ‘As Tinder and other dating apps turn to employ third party services, and establish law enforcement partnerships, there is a good argument that these initiatives weaponize women’s safety to justify police and state surveillance tactics, expansion of police power and, inevitably, more punitive carceral measures’ (Stardust et al., 2022). Other media scholars point to the more redeeming capacities of digital self-tracking technologies. Schüll distinguishes between the data we collect about ourselves versus the data about us that is taken without permission. Self-datafication that is initiated and controlled by the individual can offer opportunities for self-reflection and self-narration in agential and fruitful ways (Schüll, 2019). Levy writes about the data-tracking technologies that focus on our intimate behaviors, from sexual performance to fertility windows. This ‘intimate surveillance’ is far from neutral, and many of these technologies construct women as ‘monitored subjects’ (Levy, 2014: 688). That said, Levy suggests that intimate surveillance technologies offer a sense of control over a seemingly uncontrollable area of life. Neff and Nafus hold a similar stance on digital cultures of self-tracking, pointing to their capacity to serve as a political tool as well as the need to establish new regulations around privacy rights, the right to access personal data, and how to counterbalance uneven access to these technologies (Neff and Nafus, 2016). While these studies do not pertain to safety apps specifically, they speak to the expanding theorization of personal safety, self-datafication, and the cultures and technologies of tracking. Sim offers a theory of ‘technologies of sexual governance’ which mediate sexual conduct in different contexts.
She studies workplace misconduct software to show how such technologies are so strongly guided by a dominant social imaginary of rape and sexual victimhood that they can preclude other forms of sexual violence as well as non-legal solutions (Sim, 2021: 2, 8). These programs graft a universalized feminist rhetoric around empowerment onto a corporate, computational logic that forecloses a greater diversity of reports and experiences from being heard (Sim, 2021). In short, not everyone’s feminism is the same, not all complainants are women, and not all misconduct cases are sexual in nature. Despite the drawbacks of intensifying cultures of self-datafication, these studies create room for entertaining the more empowering qualities of safety apps.
Safety apps recommend what action to take when in an unsafe context. In effect, they construct threat through scripting the response to threat. In this way, safety technologies participate in cultural constructions of harm. Critics of the cybersecurity industry take issue with how it externalizes threat. Safety apps are part of a larger trend in cybersecurity where assumptions around gender violence reinforce false binaries of public/private and intimate/stranger (Slupska, 2019). Slupska argues that the oversight of private space is a cause for concern. Even when private space is the focus, cybersecurity delineates harm as external; home security systems assume threat comes from the outside (Slupska, 2019: 94-5). When acknowledged, private sources of violence tend to be minimized. Intimate forms of tech abuse like revenge porn are frequently dismissed as a personal issue by cybersecurity across numerous sectors (Slupska, 2019: 86-7). This is concerning since a growing body of feminist literature on cybersecurity and gendered technologies shows that tech-enabled abuse presents a sizeable and increasing portion of gender violence (Ging et al., 2019; Slupska, 2019; Slupska and Tanczer, 2021). Tech-facilitated abuse can take the form of compromised Internet of Things devices that are redeployed to track intimate partners (Slupska and Tanczer, 2021). Granted, the tech-enabled abuse in these cases is slightly different than the tech-enabled anticipation of gender abuse attempted by safety apps. They both belong, however, to a growing ecosystem of digital technologies that shape and are shaped by social imaginaries of gender violence, individual responsibility, and misleading hierarchies of harm that exist within a broader climate of intensifying self-datafication and self-management.
Methodology
I began with a set of 43 safety apps that I found through online searches, the app store, and those apps already mentioned in existing literature. After eliminating apps affiliated with smart safety accessories like necklaces, those related to domestic violence, and those used for tracking children, I chose 12 that I studied closely in addition to others that I touch on briefly (UrSafe, Safe and the City, Noonlight, Hollaback!, One Scream, bSafe, Circle of 6, Zich, Watch Over Me, Kitestring, I go safely, SafeCity). These apps were developed across the following countries in the 2010s, with many based across multiple countries: the UK, America, India, South Africa, Colombia, Nigeria, Israel, and Malaysia. Eleven were primarily based in the United States. I looked for safety apps that construct gender violence as occurring in public space and perpetrated by strangers. I downloaded the apps, looked at their features, and studied the experience of moving through them. I also analyzed their Web site marketing materials and gathered any affiliated press coverage. I paid attention to app aesthetics, what they ask users to do, and the language around these functions. Coding was not used in this analysis. I do not include consent apps because this group is designed around a different function with regard to addressing rape – the documentation of sexual consent between two users rather than facilitating preemptive and reactive measures to an assault in public space.
Based on their marketing materials, the apps target young, cis women users. I use the terms ‘female’, ‘woman’, and ‘feminine’ as reflective of the simplistic user identity that the apps construct. Marketing materials like websites, ads, and the images in the apps themselves problematically exclude a variety of people who also experience violence and fear in public spaces, including mature women, children, queer people, people with disabilities, racial minorities, and straight cis men. This is a qualitative discourse analysis of the safety apps. I’m interested in the user experience and projected user that the apps construct, which are embedded in a broader discourse on sexual safety and women. This is not a unique approach. Kang and Hudson’s research on the acoustic gunshot detection AI system, ShotSpotter, explores the ‘sociotechnical logics’ that underpin the program by studying its marketing and public statements as well as contextualizing the program within the technologization of sound in police work (Kang and Hudson, 2022). They entertain the cultural value of listening and sound in crime detection and truth making. In a similar way, this study takes the marketing and design of safety apps to make an argument about their gendered techno-affective logics around expectations of safety. This analysis similarly looks at software programs at a discursive level to outline their ideological underpinnings.
The idea of technological affordance is helpful in theorizing the projected app user. Nagy and Neff introduce the idea of ‘imagined affordances’ to better capture the perceptions of users and designers, and the materiality of the technology itself. Imagined affordance is part of the environment and technology at the same time that it describes the prospective experiences of the user and designer. Affordances are shaped by and reinforce power relations: ‘Affordances can and should be defined to include properties of technologies that are “imagined” by users, by their fears, their expectations, and their uses as well as by those of the designers’ (Nagy and Neff, 2015). The imagined affordance of a safety app exists at the intersection of the user, how she is conceived by the designer, the inherent properties of the app which construct a would-be user, and the social imagination of sexual violence that shapes all of the above.
Feeling safe
Safety apps project an imagined user who feels safe through a constant regimen of self-surveillance and digital tracking. ‘Feeling safe’ describes the techno-affective ideological framework of the apps. The branding language of the apps reiterates the state of feeling safe. Sekura’s tagline is, ‘Helps you feel safer where you go. wherever you are’ [sic]. The feeling of safety is mobile and attached to the app user rather than the perpetrator. One Scream lists consumer reviews on its Web site that repeatedly stress feeling safer: ‘makes me feel so much safer’; ‘makes me feel safer when I’m walking home’; ‘I feel much safer!’ Its founders claim, ‘We built One Scream to make you feel safe!’ A similar app, Safe and the City, quotes a consumer who says it, ‘makes London feel safer’. One Scream’s Web site also profiles this testimonial: ‘even if it’s just at the bottom of my purse – I hope I will never need to activate One Scream, but knowing it is on and could save my life makes me feel so much safer’. The last statement admits that the app delivers a sensation of safety, not necessarily the assurance of a safe journey through public space. The focus on feeling safe rather than being safe is a rather morbid admission of the apps’ inefficacy in terms of crime prevention. The apps do not claim to change the status of one’s safety. They claim to assist in the building of an affective experience of safety in public space, even if this means the apps are merely installed on phones and remain dormant at the bottom of purses.
I argue that feeling safe is ultimately an extraction of affective labor associated with a submission to being technologically tracked. Feeling safe is not just affective; it is also deeply technological. Any app user is accustomed to making tradeoffs around privacy in a multiplatform climate where digital citizenship requires that we compromise personal data in order to access various kinds of digital public forums and services. However, in the context of feeling safe against sexual violence, this negotiation can embed problematic messages about the right to safety. To feel safe, one must offer up and carefully manage one’s digital information. In effect, there is no right to data privacy in the business of feeling safe. I’m not merely claiming that the apps demand a form of safety labor. I’m claiming that feeling safe rests on technologically shaped tracking practices that are presented as preemptive, responsible, and proactive. I’m going to talk about these tracking practices and the apps’ embedded expectations of continuous evaluation that ask the user to track herself, coordinate her tracking by others, and avail her body to being machine legible by the apps. All these aspects are embedded in feeling safe, and they are affective procedures. The affective labor that safety apps demand of women is that they generate their own feelings of safety through tracking, decision-making, and submission to surveillance.
The affective labor of feeling safe fits within larger trends in neoliberal culture around risk and threat. In his work on affect, war, and temporality, Massumi argues late capitalism has shifted its conception of threat, a turning point symbolically marked by 9/11 although the transition had already been underway. Responses to threat have shifted away from deterrence and prevention, measures that are built around a threat that is already there. Responses to threat are now more commonly preemptive in that they conceive of and react to a threat before it has taken shape (Massumi, 2015). For example, the United States applied the deterrent tactic of a naval blockade in the Cuban Missile Crisis of 1962 in response to the very real threat of a Soviet attack on the US. In contrast, drone strikes in Afghanistan and Pakistan that began with the US invasion in 2001 try to preempt a terrorist attack before it has happened, and before plans of an attack even materialize, by targeting terrorist training camps, recruitment hot spots, and leaders in the making. The safety apps use a similar logic. In theory, using the app puts safety practices into place that hopefully preempt sexual violence in public areas. This management of risk relies on an affective labor entailing continuous self-evaluation in public space. The apps claim one should feel safe by participating in their preemptive logic, which involves significant work: the work of evaluating risk in one’s space, where to carry one’s phone, which safety protocol to implement in the event of an incident, whom to choose as one’s guardian contacts, and more. Feeling safe is laborious. It’s ironic that safety apps tend to encourage scrutiny of the self in public space rather than of one’s familiars in private ones.
Feeling safe shifts conversation away from the spaces and people that present the most risk and instead burdens users with an endless affective protocol of self-assessment that promises a better experience of public space.
Safety app labor extends the security practices common to women navigating public space, or what Vera-Gray calls ‘safety work’. These include evasive behaviors like taking the long route home to avoid a stalker or assuming a guise of removal or disinterest to discourage interactions with strange men (Vera-Gray, 2018: 13). She argues that women are never allowed to ‘just be in space’, a privilege more often reserved for men (Ibid: 40). While public spaces are relatively safe, Vera-Gray acknowledges that women experience them through exhausting and relentless work that goes unacknowledged. The apps materialize a digital extension of Vera-Gray’s safety work by ushering women into an intensified version of the gendered self-monitoring and self-tracking already endemic to public performances of femininity. While Vera-Gray’s work is based on interviews with women speaking to their lived experience, and the analysis here is based on the apps’ marketing and design, I would argue that safety apps are part of the same safety culture that demands an ongoing state of vigilance. Lennox distinguishes between feeling safe, a genuine experience of physical safety, and staying safe, which encompasses the safety work women perform in public (Lennox, 2022). However, I would argue that the term ‘feeling safe’ is so frequently conflated with staying safe that genuine experiences of security have become precluded from this space. They are instead rewritten and deferred as a state of feeling safe that is aspired to but never attained.
These gendered regimens of continuous self-assessment are familiar. Safety apps belong to a broader cultural context in which women appear trapped between irreconcilable demands of self-monitoring, self-scrutiny, and public performances of confidence. In defense of the apps, part of the project of making women feel safer in public space is aligning feeling with statistical reality; public spaces are factually much safer than private ones for women. In criminology, this disjoint between public perception and crime statistics on public space is known as the fear of crime paradox. In fact, public spaces are far more dangerous for masculine- and queer-presenting bodies (Skogan, 1987; Smith, 1988; Stanko, 1992). Despite this fact, women engage in safety work in public space. Lennox shows that routinized safety measures actually make women feel less safe. Echoing Vera-Gray’s work, Lennox defines ‘virtue maintenance work’ as the ‘performative – though largely invisible and unrecognized – form of identity work through which women signal their sexual virtue to men in public spaces’ (Lennox, 2022). Lennox argues that many of these signals communicate a defenselessness and docility towards men, like wearing headphones or pretending to talk to someone on the phone in order to avoid interactions with strangers. Lennox’s findings reinforce other theorizations of safety labor as relentless and exhausting – hardly descriptive of the kind of belonging and calm one would think to associate with safety (Rader, 2008; Vera-Gray, 2018; Vera-Gray and Kelly, 2020).
The labor of feeling safe participates in the reproduction of the neoliberal citizen who must be, ‘flexible, positive, resilient, and creative’ (Gill and Kanai, 2018). For women and girls in particular, neoliberal pressures demand the performance of confidence or what Gill and Orgad call the ‘confidence cult’ (Orgad and Gill, 2021). This entails the digital curatorship and showcasing of one’s confidence through sharing a continuous process of self-transformation and monitoring (Negra, 2009; Gill and Kanai, 2018). This balancing act describes many online performances of feminine identity which demand both an openness to the continuous process of self-transformation and iteration, and demonstrations of confidence, relatability, and self-assuredness (Dobson, 2015; Orgad and Gill, 2021). Safety apps are an extension of this gender culture because they similarly demand both confidence and preparedness, preemptive self-scrutiny and self-advocacy. Feeling safe entails an impossible double bind of remaining composed yet watchful, a contradiction that parallels the mixed messages that describe neoliberal feminist culture.
The apps suggest that feelings of safety come with engaging in an ongoing process of self-regulation as well as a submission to surveillance from one’s familiars and the app itself. I divide the techno-affective practices of feeling safe into two main categories: cross-tracking and machine legibility. Cross-tracking refers to the apps’ tracking features that allow family and friends to ‘follow’ the user from location to location. Cross-tracking is more prominent than self-tracking in the safety apps, which usually confer more power to the user’s social network than to the user herself. The user’s role is to organize and maintain the cross-tracking network that will oversee her journey. Machine legibility describes the app’s demand for direct access to the user’s body in order to recommend responses to threat. Feeling safe is defined by a set of impossible contradictions that play out through these techno-affective mechanisms, which divert attention away from the patriarchal ideological causes of gender violence. Rather than sending the message that men should behave better or that public safety should be a reality and not a feeling, the apps assert that women need to work on feeling safer.
Cross-tracking and feeling safe
The apps reproduce a familiar script for sexual predation in public spaces (i.e., the long-nailed, salivating, crazed stranger-predator who leaps out of the bushes). Safety apps respond by tracking the user’s body during her journey home. The apps tend to feature the journey home rather than the journey out. According to these familiar cultural scripts, women’s movements in the city are meant to have a planned destination, and that destination is more often home rather than, say, a bar in the city. These gendered spatial dynamics play out in the apps’ construction of the trackable female body, whose violence must be fixed to a time and place. Most safety apps involve live GPS tracking that is shared with the user and a group of elected contacts via text, phone, and social media accounts. Once activated by the user, Abhaya continuously tracks the phone and sends its coordinates to contacts until told to stop (Yarrabothu and Thota, 2015). Cheeka interfaces with Facebook to connect users to their networks and share their location. Cheeka’s ‘Black Cat’ mode sends the location via text to friends at regular time intervals (Kanagaraj et al., 2013). An augmented reality feature that was proposed but never came to fruition for Cheeka overlays visuals of friends on maps showing the user’s location (Ibid: 291). Safetipin’s ‘Stay With Me’ feature notifies friends if the user’s route has changed from the one planned. bSafe offers to ‘follow me with a timer’, notifying friends if the user does not reach their destination on time. Circle of 6 (later rebranded as Circulo) designed an interface where the user’s network of friends can communicate with each other while also tracking the user’s location. The interface has the appearance of a military operation, with a live chat along one panel and the user’s location on a map on the other.
Many apps encourage users to choose a combination of people who are emotionally reliable and/or geographically close in order to maintain a network primed for an emergency. Safety app cross-tracking encourages a reflection on past journeys as well as the strength, reliability, and geographic reach of one’s social circle. Cross-tracking is as much about the auditing and tracking of friends and family as it is about the user. In this way, the GPS tracking features in the safety app offer expressions of technologically-mediated interpersonal connection, networks, and a process of continuous self and cooperative evaluation that cohere around the tracked, female body in public space.
The techno-affective ritual of cross-tracking belongs to a longer history. The desire to make the female body trackable and quantifiable has long historical precedent (Kalms, 2017: 116; Wilson-Barnao et al., 2021). Anxieties around the location and trackability of women’s movements are baked into the history of women in public space and how this relationship was shaped by the rise of the middle class, industrialization, and urbanization in Western countries. Ironically, at the same time as traditional definitions of private and public spheres emerged and women became associated with domestic space, women were also experiencing unprecedented mobility in the public sphere (Friedberg 1993: 35). This increased freedom was met with social anxiety around women’s occupation of and new consumption practices in new public spaces like amusement parks, cinemas, department stores, and museums (Friedberg 1993). The liberated and mobile New Woman emerged alongside a medical and institutional discourse that pathologized and documented her with new technologies of the time like photography and film. The same historical period saw the medical diagnosis and photographic documentation of women with hysteria, a catch-all category for stigmatizing and institutionalizing women who had mental illness or epilepsy, were reproductively challenged, or simply felt unfulfilled at home. The photographs were often sexually charged, capturing women in various states of undress, spasming and experiencing states of euphoria (Didi-Huberman and Hartz 2003). Photography also captured female spiritual mediums making contact in séance sessions (Schoonover 2003). These gendered applications of photography are historical precedents for the use of media technology to document, rationalize, and contain the sexualized female body at moments of significant cultural transformation.
The techno-gendered logic of safety apps echoes this historical preoccupation with measuring and capturing a risky female body. Another historical example is the scientific measurement of women’s bodily response to the first films. William M. Marston, who claimed to have invented the lie detector, attached sensors to women’s pulses to measure arousal during kissing scenes (Olenina 2015: 251). New methods of measurement and quantifiability are historically applied to the mobile and sexual woman. There is a pronounced pattern here of emergent technology producing the sexualized, tracked, and recorded female body. The cases in this article center on the process of tracking and documentation: the methods and technologies used to track women in space. Surveillance technology turns the female body into a site of exploration, discipline, and opportunity. The violated female body has a repeated value in how technological change is greeted and integrated into a social imaginary of sexuality and public space.
Cross-tracking includes the continual auditing and quantification of the user’s space, usually applying categories like lighting, visibility, the state of the footpath, visible security in the area, crowd size, and the gender composition of the area. Safetipin uses Google Maps to crowdsource incidents of gender violence, which are then shared with government bodies to make environmental changes like improvements to lighting. Some apps register the crowdsourced information in order to designate risky zones. Many apps ask users to register the location and time of the incident, generating crowdsourced maps pockmarked with violated bodies (Hollaback!, SafeCity, Watch Over Me, Sekura, Safe and the City). Some apps generate recommended routes for the user that either avoid risky areas or suggest the shortest route to the destination. The surveillant potentials of these features are clear. They encourage the user to surrender a certain degree of privacy to the app and to her contacts, while also yielding up a tremendous amount of personal information for the apps to do with as they will. However, might the crowdsourcing aspects of the apps serve a greater benefit? D’Ignazio and Klein’s Data Feminism points to the feminist capacity of data science to lay bare and visualize gender inequality. While paying attention to the power dynamics of data collection and what it means to be represented by a data set, D’Ignazio and Klein explore the feminist potential of data. They use the example of Maria Salguero’s map of femicides in Mexico, which gained substantial media and political attention in the mid-2000s and raised awareness of the ubiquity and overlooked number of femicides in the country and by region (D’Ignazio and Klein 2020: 37–8). Indeed, these uses of data bypass victim testimony to communicate the extent of gender violence, which is visibly undeniable in their graphics.
While I acknowledge the feminist potential of some of the apps which crowdsource and visualize information on gender violence for users, the tendency to reduce violence against women to geographic locations and violated female bodies can (1) abstract and sanitize violence and (2) remove the perpetrators from the image. Maps of hot spots or quickest routes home abstract acts of violence into trajectories and locations, distancing them from the people involved. While the maps are not entirely destructive, there are other data sets and data visualizations yet to be made that tell another part of the story; for example, there is no data visualization to date that shows male perpetrators and their movements in public space.
Some apps automatically upload pictures of the user’s surroundings to their Facebook feed (Das 2021: 3). RSO SAFE allows users to upload pictures of a person for cross-comparison with registered sex offender databases (Bivens and Hasinoff 2018: 1057). The anticipated acts of gender violence encompass everything from ‘objectionable comments’ to sex trafficking and rape (SafeCity). The apps focus on datafying the mobile female body and documenting the violent event. A limited number of safety apps include gender violence across private and public space (SafeCity). The visual this produces shows a map of the city with colored dots designating different forms of violence. While the visual conveys the ubiquity of gender violence, it remains unclear what a map showing instances of domestic violence and public groping is meant to accomplish in terms of immediate safety work. Such data visualizations translate gender violence into a computational logic guided by data accrual and categorization. Sim makes a similar observation about technologies of sexual governance, noting how misconduct software might be seen to force-fit a range of complex human behavior into a set of categories (Sim 2021: 9). While there are benefits to aggregating a broad data set into a single visual, one must ask how such visuals are meant to be implemented. Regardless of their purpose, they have the ideological effect of collapsing different forms of violence and distancing them from a discussion of their cause.
Realistically, none of the apps’ documentary functions necessarily thwarts an attack, but they are constructed as useful in the aftermath of a crime (Kalms 2017: 114). It remains unclear who receives the data that is captured and what preventative purpose it serves (Weiss, 2016; Maxwell et al., 2020). There is evidence to suggest that the act of sharing personal stories and experiences through the apps offers a form of digital witnessing and community that affirms the experience of the user (Weiss 2016: 4–6). In this sense, the safety app map becomes a visual legitimation of the ubiquity and universality of sexual violence for users. However, the categorization of different forms of sexual violence (which can oftentimes be elusive and slippery) and their suture to a space and time suggest that sexual violence is easily defined, identified, responded to in the moment, and confined to one space and time.
This insistent geolocative framing of gender violence stands in contrast to the statistical reality that the strongest predictor of sexual recidivism remains prior offenses, not spatial opportunism. In fact, research indicates that sex offenders make their decisions based on a set of dynamic variables that demand continuous adjustment and adaptation (Beauregard and Leclerc, 2007). Repeated attacks in the same public or semi-public location happen most often in cases of homicidal sexual offenders who target gay men in known queer areas like bars (Chan, 2015). Arguably, a geographic visualization of gender violence in space would better serve the gay male community than cis women. If any link exists between location and stranger sex crime against women, it is the home because most sexual homicides of women committed by a stranger occur in the victim’s home (Chan, 2015: 47).
Cross-tracking works to reassure us that sexual violence is spatially and temporally bounded, defined, and able to be digitally captured. Alongside the emphasis on live cross-tracking and in-the-moment decision-making, the other app features usually revolve around fixing and recording crime in a specific space and time. The apps associate feeling safe with many features that are retrospective – capturing video of the crime and recording the last known location of the user. The apps associate sex crime with female bodies and public spaces and, as a result, sexual violence is confined to moments in time in specific spaces rather than an omnipresent and ongoing culture of sexism and misogyny that reproduces violence. The violent act becomes spatialized, datafied, and temporalized, fixed as something that happens to the user at a risky time and place. This definition elides the kinds of gender violence that have duration, remain unanchored to a location, or exist in the digital sphere.
Machine legibility and feeling safe
Feeling safe involves geotracking, but it also implements self-monitoring in the form of elaborate decision-making trees for managing fear and responding to escalating levels of threat. Practically speaking, executing the app’s safety protocols might be challenging during the fight, flight, or freeze response elicited by an attack. Some apps require women to take their phones out, unlock them, open the app, and then choose from a list of options that put different digital protocols into motion. This choice is based on the user evaluating the severity of the threat, her safety at that moment, and her judgment of how quickly and in what ways the threat may escalate. The features are tiered according to the severity of the situation. Apps start with lower-risk measures like giving users the option to schedule ‘check-ins’ with friends along their journey home (Kitestring, bSafe, UrSafe). They allow users to request ‘fake calls’ to extract them from an uncomfortable encounter (Circle of 6, Sekura, bSafe, UrSafe). Users can request that their nominated ‘guardians’ or ‘protectors’ virtually follow them home using GPS (Igosafely, LeeLou, Zich, Circle of 6, bSafe, Guardly, Kitestring, RunAngel). Many apps escalate by giving users the option to message their contacts to come and pick them up. Almost all safety apps culminate in a panic button or function that shares the user’s location with emergency services. Some panic buttons activate audio-visual recordings (Nirbhaya, bSafe, One Scream). These decision-making protocols extend to the users’ networks of friends, who are also asked to respond to the information the app feeds them. In much the same way that the app archives and visualizes users’ support circles, it structures and documents the user’s ability to make what it deems the correct decisions in the right contexts.
The more extreme features of safety apps offload the cognitive labor of risk appraisal to the at-risk body itself. Some apps monitor the user’s body to make decisions on its behalf, leading to problematic biologically essentialist constructions of gender. One Scream claims that all women have a biologically identical panic scream that triggers the app to notify emergency services. The idea of a universal woman’s scream is problematic: what about cis women who have deep voices, transgender women, users who are mute, or victims who freeze instead of screaming? Watch Over Me’s name alone is disturbingly supplicant and patriarchal. Watch Over Me and Shake 2 Safety claim that one shake of the phone will turn on the alarm and either start a camera recording or automatically take a photo of the user’s surroundings to send to contacts. VithU’s emergency system can be activated with two clicks of the smart phone’s power button. Circle of 6 (now rebranded as Circulo) allows users to tap its main button twice to notify guardians of trouble. Igosafely’s alarm is activated if the headphones are disconnected. A Taiwanese project called iMACE (Chang et al., 2011) proposes worn wireless sensors that directly relay the user’s information to police and friends if they are triggered. In addition, shaking the phone would trigger the alarm, take photos, and automatically upload them to an iMACE server. Future iterations propose networking the individual’s system with CCTV infrastructure (74). This would make it possible, for example, for certain app protocols to be preemptively triggered in CCTV-serviced areas that are flagged as risky; the app might automatically start recording video if the user moves into threatening territory. Watch Over Me is automated to stop tracking once users reach their designated ‘home’ or ‘office’, thus presuming those locations are safe.
An app called Companion asks if the user is OK by starting a 15-second timer that alerts friends if it is not stopped. This feature is triggered when headphones are removed, the phone is dropped, or the user begins to run. For the apps that monitor the body directly, the user needs to feel safe at both biological and affective levels; in order not to trigger the biometric safety app, the user’s state of bodily safety must register as such in biodata like heart rate and the status of the headphones in relation to the body. Feeling safe therefore also entails subscribing to a machine legible body that can be accessed and monitored through the app’s processes of tracking and documentation. This extends to making a violent event accessible to the app’s logic should something happen.
Setting aside the potential value of a highly sensitive alarm system, the very real possibility of its accidental activation adds yet another layer of self-monitoring. Beyond that, the apps are troubling for their redirection of safety work away from perpetration and patriarchal ideology to the cis, female body in crisis. The most pronounced example of this is a research proposal for a safety app designed by Roy, Sharma, and Bhattachayya, which describes a wristband that measures sweat, heart rate, blood flow, and glucose content. If the app senses a state of duress, it initiates safety measures on the part of the wearer (Roy et al., 2015). In addition to apps that have automated sensory components, wearable safety devices also indicate the desire for a more immediate, direct, and bodily response to a violent incident that would offload individual decision-making. These include smart jewelry that is activated by pressing a button, protective belt buckles, bras that deliver an electric shock, barbed condoms, and pepper spray stilettos (White and McMillan, 2020; Wilson-Barnao et al., 2021).
Cross-tracking and a submission to machine legibility are forms of affective labor. They entail a state of consistent vigilance as well as a wholesale subscription to being read by the app. The forms of cross-tracking and machine legibility discussed earlier contribute to the promise of feeling safe. They also introduce an odd paradox of the both risky and safe-feeling body, which is expected to be legible to digital surveillance as simultaneously calm and hypervigilant. The sensors that deliver some form of biofeedback from the user assume that a woman’s default bodily state should be one of calm alertness and problematically treat any elevated state as an emergency. What if the user is riding a roller coaster? What if they are watching a horror movie? What if they are exercising? What if they are playing loudly with their children? What if they are having loud sex? As part of the virtue maintenance work that Lennox theorizes, women who are truly safe are expected to biologically register as alert and placid. In this sense, the safety apps promise to relieve women’s guardedness in public space only to introduce safety labor in a different form.
The elaborate flow charts of risk assessment and response demand a proactive affective labor to achieve feelings of safety or, at the very least, a responsible preparedness to react appropriately. Safety apps artificially construct gender violence as an isolated and public occurrence that begins with a perception of threat that escalates to an attack. Women are meant to evaluate red-flag indications of violence along the way to either secure help or document an attack. However, given that this type of stranger attack in public space is rare, more likely than not women are simply burdened with continuously evaluating their surroundings under the guise of being proactive. The reduction of gender violence to a single body, time, and space eclipses violent acts that entail duration and calculation, which can involve grooming, silencing, and intimidation. These practices exist within larger systems of power that perpetuate gender violence. In other words, the maps and the tracked female body reinforce a temporality associated with gender violence that allows it to be abstracted, consolidated, dismissed, and distanced from its root causes. In their most extreme forms, the apps treat the cis, female body as a lifeless conduit for information and feedback. The body does not need to make decisions; the app simply records its traversal of and violation in public space. The female user becomes a deadened barometer for trackability, a to-be-violated object whose testimony is bypassed by the haptics of the app and how it reads the body.
Conclusion
Whether the apps are effective or not, studying how automated, digital culture constructs sexual violence is critical to understanding sexual violence as a deeply technologized space. The safety apps have many faults, but there is something to be gleaned from their flaws. The sum of their features and promotional materials reveals a projected, cross-tracked, machine legible woman defined by a set of neoliberal contradictions endemic to contemporary digital gender culture. The apps offer a datafication of movement, social exchanges, and phone activity in a retrospective capacity. However, their marketing material very much frames the apps as preemptive, suggesting that the act of downloading the app or signing up for a premium membership designates the user as more proactive than others. The apps also embody a tension between self-responsibility and automated decision-making. On the one hand, they offer elaborate protocols for assessing risk and reactions, but on the other hand many apps offload decision-making to an automated response to biological or environmental feedback that curiously eclipses the perpetrator’s role in sexual violence. In this scenario, the female user is little more than an object attached to a smart phone whose testimony must be circumvented by the apps’ more ‘reliable’ forms of evidence. To be vindicated after an attack, the user must preemptively offer the app complete access to their body and environment. This body must however maintain a careful balance of calm alertness so as not to falsely activate safety protocols. Cross-tracking and machine legibility are entangled with the notion of feeling safe, a preemptive onus defined by a state of feeling that never arrives.
These essential properties of the safety apps (cross-tracking, machine legibility, and feeling safe) are grounded in neoliberal, digital, automated, gender culture more broadly. In the context of sexual violence discourse, they confine sexual assault to public space and represent it as quantifiable, recordable, and fixed to a time and space when so many acts of gender violence involve duration, uncertain boundaries and communications, unexpected spaces, and bodily and cognitive responses that may surprise us. The apps domesticate, contain, and detach gender violence from its cause: patriarchal ideology and its embeddedness in lived experience and institutions of power. The apps mobilize violence as an opportunity for exploring an expanding security and media technological infrastructure that has little to do with the lived reality of gender violence. In return for this service, the user is offered an abstraction of safety that reduces sexual wellbeing to the absence of sexual harm and the vague promise of a feeling.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
