Abstract
This article illustrates how racial capitalism can enhance understandings of data, capital, and inequality through an in-depth study of digital platforms used for intervening in gender-based violence. Specifically, we examine an emergent sociotechnical strategy that uses software platforms and artificial intelligence (AI) chatbots to offer users emergency assistance, education, and a means to report and build evidence against perpetrators. Our analysis details how two reporting apps construct data to support institutionally legible narratives of violence, highlighting overlooked racialised dimensions of the data capital generated through their use. We draw attention to how they reinforce property relations built on extraction and ownership, capital accumulation that reinforces benefits derived through data property relations and ownership, and the commodification of diversity and inclusion. Recognising that these patterns are not unique to anti-violence apps, we reflect on how this example aids in understanding how racial capitalism becomes a constitutive element of digital platforms, which more generally extract information from users, rely on complex financial partnerships, and often sustain problematic relationships with the criminal legal system. We conclude with a discussion of how racial capitalism can advance scholarship at the intersections of data and power.
This article is part of the special theme on Data, Power and Racial Formations. A full list of articles in this special theme is available at: https://journals.sagepub.com/page/bds/collections/dataandracialformations
Introduction
Scholarship increasingly acknowledges how technology, data, and the internet—far from being post-racial or colourblind (Daniels, 2016; Noble and Roberts, 2019)—enshrine whiteness (Daniels, 2013). Whiteness, the racial grammar that reinforces logics of white privilege and racial hierarchies, maintains power and status associated with racial categories (Bonilla-Silva, 2012). Research has traced economic, political, and social harms of these practices, including how minoritised communities endure heightened forms of surveillance, face challenges obtaining resources, and experience more interactions with criminal legal systems (see Bhatia, 2021; Browne, 2015; Eubanks, 2018; Jefferson, 2018). The disproportionate effects of datafication rely on variegated systems of capital, which are enabled through the interplay of data, labour, knowledge, technical expertise, infrastructure, and noninterventionist regulation. Commodification, extraction, and exploitation—what some refer to as data colonialism (Couldry and Mejias, 2019) or technocolonialism (Madianou, 2019)—are not only central to reaping value from data; they also maintain racialised sources of privilege and status associated with accessing and using data.
While scholars recognise data as a core feature of 21st century capitalism (Sadowski, 2019) and that capitalism perpetuates gendered and racialised oppression (Melamed, 2011), research on big data still reflects little engagement with critical scholarly traditions that explore entanglements of capitalism and racial formation. This article offers an analysis of how racial capitalism, which is central to commodification for the purpose of “deriving social or economic value” (Leong, 2013, 2152), operates in and through data—even when racialised dimensions are not evident on the surface. Our aim is to illustrate how racial capitalism, inclusive of its distinct strands (Bhattacharyya, 2018; Cottom, 2020; Leong, 2013; Ralph and Singhal, 2019; Robinson, 1983; Virdee, 2019), offers an analytic for understanding data practices often misunderstood as neutral or as colourblind. Akin to other analyses of racial capitalism, this article sheds light on how systems of capital accumulation are constitutive of racialised forms of “exploitation, expropriation, and expulsion” (Bhattacharyya, 2018, 37), which include—but can exceed—concerns of whiteness.

Figure 1. Callisto homepage (source: myCallisto.org 2021).
To illustrate, we examine two prominent digital platforms used for intervening in sexual harassment and gender-based violence. They are part of an emergent sociotechnical strategy that aims to offer emergency assistance, education, and a means to report and build evidence against perpetrators. While other studies illustrate gendered processes articulated through these anti-violence technologies (Bivens and Hasinoff, 2018), our analysis points to overlooked racialised contours. We highlight how racial capitalism implicates the design and use of anti-violence apps, particularly the data capital generation and accumulation that they enable. Of course, these patterns are not unique to anti-violence apps: platforms more generally extract information from users, rely on complex financial partnerships, and often sustain problematic relationships with criminal legal systems (Jefferson, 2018; Mason and Magnet, 2012; Srnicek, 2017). Distinct in this example is the accumulation of economic value “directly out of the act of doing social good”—a set of concerns that have not yet received the same level of attention as wealth generation by large for-profit companies (Magalhães and Couldry, 2021, 353).

Figure 2. Callisto “about” page (source: myCallisto.org 2021).
In the pages that follow, we trace arguments that detail how systems of racial capitalism contribute to the commodification of race for value generation, including how whiteness often becomes embedded in these processes. We highlight distinguishing features that play out in relation to digital modes of collecting data and the role of data capital (Sadowski, 2019). We then contextualise the rise of anti-violence apps and explain our methods before discussing thematic concerns that emerge in relation to two exemplars in the field, Callisto, a confidential reporting platform whose matching algorithm connects victims of the same perpetrator to identify repeat offenders, and Spot, the first cognitive interview artificial intelligence (AI) chatbot for reporting sexual and other workplace misconduct. Extending our earlier analyses of the data protection challenges of anti-violence apps (Shelby, 2021; Shelby et al., 2021), this article focuses on how both Callisto and Spot evince three key areas identified in the literature on racial capitalism: racialised property relations built on extraction and ownership, capital accumulation that reinforces benefits derived through property relations and ownership, and the commodification of diversity and inclusion promotion. We conclude with a discussion of how these reporting apps operate in tension with stated concerns of data justice.
Situating data and whiteness within concerns of racial capitalism
Scholars have traced whiteness as a normative logic embedded in and sustained through the technological tendency to default to whiteness—that is, racial privilege remains unacknowledged and taken for granted (Schlesinger et al., 2018). For instance, data-based devices, including Amazon's Alexa, and representations of technology in film, media, and marketing exhibit white attributes ranging from skin colour and facial features to voice, language, and cultural sensibilities (Cave and Dihal, 2020; Phan, 2019). Commonplace characteristics attributed to technology, such as higher levels of professional status and “smartness”, also reflect longstanding hierarchies of racial power: these features have been selectively associated with advantaged social positions along axes of class, gender, race, and ethnicity—namely, middle-class, cisgender, masculine, white groups (Cave and Dihal, 2020).
These attributes are not unique to digital technologies. Earlier Critical Race Theory (CRT) analyses demonstrate how whiteness not only has value as a property (as in possession), but also has constitutively taken on characteristics of property (as power relationships) (Harris, 1993). More recent studies reveal whiteness has become an “entitlement to social goods”, carrying “reputational value” and “the power to exclude” (Bhandar, 2018, 7). Racial capitalism examines these processes of differentiation: it, as outlined by Gargi Bhattacharyya (2018, 103), is not “an account of how capitalism treats different ‘racial groups’”, but an approach that illuminates constitutive relationships between racism and capitalism (see also Cottom, 2020; Robinson, 1983). It enables investigation of how the inequalities observed in relation to property “are not givens or inevitabilities, but rather are conscious selections regarding structuring of social relations” (Harris, 1993, 1730). In sum, racial capitalism captures more than how whiteness may transfer privilege to those who can acquire it; it scrutinises how racism operates through practices of accumulation, simultaneously shaping the distribution of benefits and forms of exclusion.
Digital data bring distinct dimensions of property relations to the fore through cycles of capital accumulation. User activity yields reporting data, metadata, transaction data, technical data, and other types of information. Such data are then used to shape content, develop and enhance products, and expand digital consumer bases for conversion into economic value (Sadowski, 2019). Their use also aims to influence behaviour, purchasing patterns, latent demand, and “the attention of specifically targeted groups” (Fuchs, 2012, 705), which Andrejevic (2012, 76) frames as a form of intentional “control and manipulation”. Alienation, appropriation, and coercion shape users’ contributions to generating value and creating data capital (Fuchs, 2012), as corporate entities often own and define how their contributions are used and enjoyed. As their labour is appropriated, the lack of informed consent over the full range of data usages sustains the coercive nature of these relationships.
Although nuanced, many reflections on data capital often miss how capital is itself racialised. Race can operate as capital, both in a social sense and in the Marxian tradition. In terms of social capital, markets have arisen with value placed on diversity, which as Nancy Leong (2013, 2181) explains, have enabled white people and institutions to engage in ways that “enhance their status by signalling cross-cultural credibility”. In terms of Marxian capital, social processes are also important; they facilitate the conversion of labour into commodities with exchange value, not simply things that are useful. According to Leong (2013), racial identity – regardless of whether it reflects the privileges of whiteness – can become a commodity when it is conceded via exchange. Racial capital becomes pronounced through this process. The commodification of racial identity enables value generation and extraction vis-à-vis markets. As capital is often invested and reinvested, white persons and institutions (the dominant class for Marx) can receive additional and cyclical benefits from surplus value.
These observations about racial capital have direct relevance for analyses of data capital. Cottom (2020) argues the logics of racial capitalism are embedded across the platforms enabling the internet economy, suggesting data capital is similarly shaped by them. As most technologies for collecting and processing data rest with companies, “the forms of ‘knowing’ associated with big data mining are available only to those with access to the machines, the databases, and the algorithms”, which renders them as “advantageously positioned compared to those without such access” (Andrejevic, 2014: 1676). Modes of data appropriation and alienation are notably uneven. Certain sources of data are valued more than others; data are disproportionately solicited and extracted from certain groups, while others tend to be excluded (Sadowski, 2019). Looking at these digitised processes through the lens of racial capitalism sheds light on the racialised contours of the harms caused through extraction and alienation—an observation that our analysis of anti-violence apps illuminates.
Situating the rise of anti-violence apps in response to gender-based violence
Reporting apps rely on data about gender-based violence to assist institutional decision-making and responses. Though functionality varies, they tend to feature algorithms that accumulate data in information escrows and match assault incidents to identify repeat offenders (e.g. Callisto, JDoe) as well as AI chatbots that interview victims and produce incident reports, aggregating data in dashboards to identify organisational patterns of misconduct (e.g. Talk to Spot, #NotMe, Botler AI). Although responding to—and sometimes explicitly appropriating—language from the #MeToo movement, existing analyses of anti-violence apps find they reinforce gendered rape myths (Bivens and Hasinoff, 2018), strengthen surveillance structures (Mason and Magnet, 2012), and support limited legal responses (Sim, 2021).
The broader market for anti-violence apps emerged after 2010, driven by advances in smart technology and a U.S. government call for software innovators to leverage mobile technology to address sexual violence. The Obama administration's 2011 Apps Against Abuse competition sparked venture capitalist interest in the possibilities of anti-violence technology, bolstered by the emerging “smart” personal safety industry and its promotion of machine learning, AI, and big data analysis. The 2017 mainstream #MeToo movement increased scrutiny of the failings of legal institutions and bureaucracies to protect survivors, prompting renewed pleas for innovators to develop accessible technologies to increase reporting (Glaser, 2016). In light of high-profile repeat offender cases, such as Larry Nassar and Harvey Weinstein, apps seemed an effective tool to produce evidence that would counter authorities’ apathy towards survivors (McPhee and Dowden, 2018). Numerous apps have since proliferated, particularly in higher-income countries, with the personal smart safety and security market ballooning to approximately US$2.9 billion in 2019 (Market Research Future, 2020). They are now implemented in multinational corporations, such as Instacart, Kickstarter, and Zillow, and across university campuses in North America.
The embrace of reporting apps offers a site for examining the value they provide organisations in the post-#MeToo economy—a viral landscape that critical observers argue has been steered by white feminism and the desire for state and institutional redress to personal injury (see Phipps, 2019). For example, as part of the 2018 report, Transforming Workplace Culture in the Era of #MeToo, #BlackLivesMatter, and More, Seyfarth, a large corporate law firm, endorses reporting apps, including Callisto and Talk to Spot, as a strategy for employers to “react quickly or bear the brunt of public backlash and shareholder disapproval” (Gesinsky et al., 2018, 2). The #NotMe reporting app promotes its value as helping companies “reverse the trends” of costly employee absences, lawsuits, and turnover. AllVoices, an anonymous reporting platform for businesses, indicates its “goal is to support companies in creating healthy, safe feedback cultures that directly lead to more productive and engaged employees, higher productivity, and less turnover”. These framings underscore how apps are imagined as projects that protect organisations, increase worker productivity, and ensure a competitive edge.
The promise of these apps relies on a presumption that technology can optimise a survivor's report by collecting and transforming personal accounts of violence into institutionally legible data. Reporting, however, cannot be abstracted from material conditions in which legibility is understood. Feminist scholars have traced how racist formations of sexism shape understandings of what gender violence is and who is legible as a survivor (e.g. Combahee River Collective, 1982; Davis, 1981; McGuire, 2010). For much of U.S. history, law has not provided formal protection to all women, excluding women of colour (Sommerville, 2004). Even though whiteness is elastic and shifting, it remains a property of gender violence legibility (Cacho, 2014). Reflecting on the figure of the survivor in the viral #MeToo movement, Alison Phipps (2019, 17) explains that whiteness is still central to shaping narratives of fragility and “woundability”—that is, “the assumed purity and vulnerability of white women”. People of colour, as well as gay, lesbian, bisexual, transgender, and queer (LGBTQ) people, are often not afforded this presumption of innocence. Instead, they are more likely to be blamed for their assaults than their white and heterosexual cisgender counterparts (Esqueda and Harrison, 2005). Anti-violence interventions that rely on hegemonic notions of gender tacitly reify experiences of privileged survivors (Richie, 2012), reinforcing a false assumption there is a universal experience of gender-based violence (Crenshaw, 1991).
Despite these observations, reporting apps respond to the problem of violence through a universalising design that imagines algorithms, machine learning, and data analytics as generating new institutional power relations untethered to the inequitable politics of survivorship. Negating how class, dis/ability, racism, and sexuality particularise reporting experiences,1 they present computational modalities as viable solutions to the systemic inaction of authorities and devaluing of survivors’ accounts. Promises of cryptography-based individual privacy, user-friendly and efficient interfaces, and institutional dashboards render technical responses and big data production as modes of increasing accountability. These design features also subject users to processes of data collection, accumulation, and circulation. In doing so, the belief that digital reporting will present survivors’ accounts in ways that lead to the recognition and support of victims, including the realisation of their right to bodily integrity, encourages participation in what are extractive systems.
In addition to these extractive logics, racialised dynamics that contribute to the inequities among survivors carry over into processes of generating data capital. As Katz (2020, 8) notes, AI is a technology that “works to flexibly serve a social order premised on white supremacy”. Reporting apps do not escape this critique, even when they mobilise diversity and inclusion discourses. They both mobilise and obscure racialised experiences in datafication, mirroring the recognised contradictions of offline racial and gender politics (Ferguson and Hong, 2012). These tools exemplify a longstanding trend: that contemporary capitalism “has depended as much on the production and negotiation of difference as it has through enforcing sameness, standardization, and homogenization” (Hall, 2017: 118–119; quoted in Virdee, 2019, 9). Their contradictions are the outgrowth of constitutive relationships between data, capital, and inequality as well as how racial capitalism underscores the interplay between them.
Methodology
Our argument draws upon a qualitative analysis of two popular reporting apps—Callisto and Talk to Spot. They rely on similar logics: accumulating data for decision-making about gender-based violence and using collected data to support institutional action. For Callisto, the aim is to identify repeat offenders; for Talk to Spot, it is to optimise HR responses. Project Callisto is a 501(c)(3) organisation whose mission is to “create technology to combat sexual assault, empower survivors, and advance justice” (https://www.mycallisto.org/about). Initially formed in 2011, Callisto refined their reporting app based on survivor feedback in 2020. This app allows university users to enter a record into their matching algorithm to identify repeat offenders, review resources and potential actions to consider (e.g. report to police or talk to an attorney), and create a record of the assault using a standardised form.2 In contrast, Talk to Spot is a chatbot meant to encourage employees to speak out about discrimination, harassment, bullying, misconduct, and other inappropriate behaviour. Founded in 2017, Talk to Spot's chatbot feeds into a webapp where managers and HR professionals field reports. The system is designed for organisations and has been adopted by various corporations, including ASOS, Davita Dialysis, and Kickstarter.
While using two examples of anti-violence technologies aids in illustrating how racial capitalist formations operate in diverse ways, this approach did limit the scope of our analysis to primarily U.S. contexts. We felt the benefits outweighed the tradeoffs: the combined analysis of Callisto, which is supported by a non-profit organisation, and Talk to Spot, which has a for-profit model, revealed that both tools generate data capital in ways that embed whiteness and contribute to the commodification of race for value. For each app, we collected textual and qualitative data, including the app websites and advertisements, legal terms of use (e.g. privacy statements), financial and technical reports, functionality assessments, and media coverage since 2011.
We located media and news coverage of Callisto and Talk to Spot through targeted NexisUni and web searches of technology publications and other mixed media, including TED Talks and gender-based violence-themed podcasts. In total, we reviewed 273 articles and 15 videos (including podcasts) published between 2015 and 2021. Where possible, our research team conducted “walkthroughs” of the apps, making step-by-step observations of the apps’ features and flows to understand the expected functionality and cultural messages conveyed in their design (Light et al., 2018). As Talk to Spot is for organisations, we had to review videos produced by the company to document user and employer interfaces and the processes for sample reports. Given the multimedia nature of the data, we open-coded documents in vivo using ATLAS.ti. In carrying out data analysis, we also reflected on insights gleaned through archival work that began in 2017 and autoethnographic insights from being approached by anti-violence technology developers and funding bodies.
The analysis presented here draws on materials that contextualised big data practices in relation to racial capitalism and findings that reflected commonalities and gaps across the texts. We iteratively revisited our analytical themes of commodification, data extraction, ownership, property relations, and instances of racialisation through the lens of gendered racial capitalism to isolate patterns and distinctions in and across the collected data (Timmermans and Tavory, 2012). In line with qualitative analyses of big data cultures and practices, this approach examines “the institutionalised routines, habits, and knowledge practices of the app publishers with respect to data” (Albury et al., 2017, 2). It also enabled us to identify features that aligned with insights about racial capitalism, informing the structure and argument of the article. The inherent opacity that comes with studying proprietary technology, however, prevented us from closely examining algorithms and functionality—aspects often considered trade secrets. We were nonetheless able to observe some operative intricacies of these technologies and their processes of generating and extracting value from data capital. In the next sections, we examine how reporting apps produce data capital that is not simply gendered but also racialised.
Racial capitalism in and through the datafication of gender violence reporting
Reporting apps mediate and control extractive data flows among victims, organisations, and third-party data brokers in ways that obscure and mobilise racial dynamics. While their datafication of violence abstracts information about the assault and identities of survivors, data accumulation by dispossession—a practice of producing privatised data capital (Thatcher et al., 2016, 995)—facilitates the commodification of social hierarchies as reporting data comes to take on value. Our scrutiny of these apps reveals practices of racial capitalism emerging in three ways, which are often overlapping and interrelated: (1) through racialised property relations built on data extraction and ownership; (2) through capital accumulation that reinforces benefits derived through property relations and data ownership; and (3) through the commodification of diversity and inclusion promotion. These techniques of racial capitalism operate as mechanisms of coercive power in the data economy by tapping into survivors’ desire for institutional recognition and action.
Property relations through coercive data extraction and data ownership
Critical analyses of property relations and ownership highlight how law bolsters the generation of racial capital and inequality by securing property rights, enabling the enforcement of contracts, and creating subjects (Bhandar, 2018). In the United States, various legal frameworks articulate the terms of data ownership, generally privileging the rights of data controlling and processing entities. For example, the Privacy Policies and Terms of Use used by Callisto and Spot facilitate data brokers’ property interests by enshrining their right to own, share, and retain personal and technical data through a ‘notice and choice’ framework. While this framework purports “to put individuals in charge of the collection and use of their personal information” (Reidenberg et al., 2014, 489), the consent request facilitates claims to data ownership that ensure use value for companies with access.
With Spot, collected personal information “is controlled by All Turtles Corporation, San Francisco, California” (Spot, 2018); users “agree to permit All Turtles to collect, access, process, and use a variety of information when you use Spot for Teams” (Spot, 2020). To encourage consent, both Callisto and Spot emphasise that their technologies increase disclosure: Callisto (2018) notes survivors are six times more likely to report after visiting the app, and Spot (n.d.) reports a 70–80% follow-up response rate from employees who use the software. Although appealing to the desire for institutional legibility and accountability, neither app makes the processes and outcomes of its services transparent. Obtaining users’ consent becomes explicitly coercive when apps are institutionalised within an organisation. Consider, for instance, when apps become the formal means of reporting harassment and other workplace misconduct to HR. They require survivors to agree to service terms that demand data about themselves and acts of violence to enable the processing and circulation of data. The generation of data is the key source of these apps’ value, yet the extraction of data from users relies on opacity, obfuscation, and information asymmetry. While survivors may manually enter information through the digital interface, the apps can collect much more, such as user analytic data that can be personal and technical in nature, anonymised IP address, device, operating system, and browser information, and statistics regarding how the user engages with the website (e.g. areas of the website visited, number of clicks, time spent on each page).
Callisto encrypts entered information so it cannot be shared with its staff and Legal Options Counsellors3 unless permission is granted. Stored information regarding an alleged offender includes “unique identifiers such as your telephone number, email address, or social media account information” (Callisto, 2020). Spot notes the only mandatory personal data required is one’s email; however, it enables the collection of “optional” information, such as name, demographics (e.g. gender, occupation, but not race), information about the user's employer, name of the alleged offender, and details of the event. In this case, collecting information about gender but not race amalgamates diverse identities, creating consumable data subjects that may align with the privacy norms of the internet economy but not the goal of transformative gender violence justice. By annexing race to the data shadows (Leonelli et al., 2017), these practices whitewash subjects and their experiences of violence, which are inextricably linked to interlocking systems of oppression. In short, reporting apps actively racialise data as if subjects are the same—that is, as normatively white.4
The design of reporting apps, particularly how they collect information from users, ensures they are the primary generators and conduits of data flows. In fact, Callisto users must repeatedly engage with the app to derive direct benefit or opt out—for example, to be matched with another victim of the same perpetrator or to have a legal counsellor advise on the best recourse. If one's information matches with another user's submission in the system, “the entry will not be deleted until the Legal Options Counsellor closes your case”, and survivors must contact their assigned Counsellor if they wish to close their case (Callisto, 2020). As such, this design reflects one mechanism through which reporting apps turn survivors into objects of data collection rather than subjects with agency and power over their data. As an organisational and enterprise technology, Spot encourages in-app communications by “offering multiple channels for HR and employees to continue conversations” (https://talktospot.com/index). Although Spot's storage terms differ slightly, it, like Callisto, relies on the extraction of data, which can be reused and reinvested—even in cases where users do not benefit directly. Further, reporting apps promise data will be anonymised, encrypted, and confidential to encourage users to submit information. While these design features prompt users to engage the apps in pursuit of an algorithmic match or institutional accountability that may never occur, Callisto and Spot benefit from the exchange value of the data capital they circulate in the broader digital economy.
These practices attest that reporting app privacy policies and data governance are less about protecting users and more about enshrining property relations that create more capital using survivors’ data. This distinction is important. The realities of how data become commodified tip their use value in favour of apps, organisations, and data brokers, a central feature of both data capital and systems of racial capitalism (Leong, 2013; Sadowski, 2019). For instance, in an interview for The New Yorker, Jessica Ladd, Callisto's founder, was asked “Will Callisto retain the reports submitted by victims who may no longer be enrolled in the service?” Ladd replied, “Don’t tell anybody, but yes” (Mead, 2018). Callisto's Privacy Policy suggests retention can be indefinite: “it is therefore important that our users’ entries are maintained and stored over time to enable the matching feature. This is why even after your account is terminated, some or all of your data may still be stored on our platform”.
Extracting data along these terms also ensures surplus value through the sharing of data with third parties, the direct benefits of which are not for users. Mailchimp, Callisto's email service provider, facilitates data sharing with social media platforms: users with “the ‘Social Profiles’ feature” turned on actually enable “additional information” to be exchanged and more widely circulated (Callisto, 2020). Its use of Google Analytics has similar implications: “information collected by Google Analytics about your visits to our site and other sites is governed by the Google Analytics Terms of Use” (Callisto, 2020). While Google Analytics can provide reporting apps with information about how people interact with their products in ways that make them more accessible to survivors, Google Analytics and the other big data technologies they connect to through APIs are fundamentally marketing technologies. Non-profits and apps might use them for “social good”, but the data they circulate flow into the big data economy. These policies thus reflect distributive politics characteristic of Big Tech markets, with information flowing from users to reporting apps to corporate partners who can translate and repurpose data into profit. Callisto and Spot share data with Facebook, Google, Mailchimp, and Twitter; the Spot AI further integrates with business apps, such as BambooHR, Namely, Slack, Workday, and Zenefits. As reporting apps and technology companies can continuously draw out new value from data, these arrangements reflect asymmetrical power relationships of data capital ownership.
Privacy Policies and Terms of Service operate as forms of law that facilitate data brokers’ interests and privileges. For example, they shift liability by detailing how parent companies, despite owning and benefiting from data capital, are not responsible for data leaks, identification or loss, or poor HR responses. In doing so, they elide culpability for harms arising from authorised access to data or institutional failure. In addition, the acknowledged problem of “TL;DR” (too long, didn’t read) in privacy notices (Obar and Oeldorf-Hirsch, 2020) supports these coercive capitalist conditions. The resulting dispersion of data ownership and responsibility puts the onus on the actual producers of data (survivors) to understand, locate, and take control of their reporting information. Scrutinising the distribution of benefits aids in unveiling the racialised dimensions of these dynamics, as discussed in the next sections.
Data capital accumulation as reinforcing benefits derived from data ownership
The advantageous position of reporting apps and their partners within systems generating data capital reflects the overlapping and negotiated nature of ownership. Capital accumulation reinforces benefits for data controlling and owning entities in three interrelated ways: (1) through data capital extracted from users, (2) through exchange and surplus value gained by sharing and investing with other users and consumers (like businesses and universities), and (3) through data reinvestment in developing and improving reporting websites, their services, and business decisions. The racialised contours of these mechanisms are not necessarily evident on the surface, because the naturalisation of ownership masks how property relations operate in ways in which “selected private interests are protected and upheld” (Harris, 1993, 1730).
As reporting apps invest and reinvest the data capital they own, they reap the benefits enabled by the surplus value of those investments—in this case, Callisto, Spot, and their corporate collaborators. In a CNN Money article, Ladd, Callisto's founder, acknowledges seeing “a clear market opportunity — but success won’t be monetary. Profiting off of rape is something that makes people very uncomfortable” (O’Brien, 2017). She describes how she sees a need to shift how organisations investigate wrongdoing, including police brutality and immigrant deportations, envisioning Callisto as a catalyst (McHugh, 2019). To encourage this normative shift, Callisto's code is available on Github for “others to copy the code and implement in their HR systems”. This push, however, negates how human experiences—in this case, those directly associated with harassment and violence—are annexed and reorganised within the broader data economy. In addition to whitewashing data under the pretence of data privacy, Callisto and Spot do not provide support for those who are historically illegible as victims of violence, such as non-white, disabled, and LGBTQ people. In open sourcing their matching algorithm, Callisto is not simply sharing; it is seeking to institutionalise digital reporting systems and create more value from data about users’ experiences of violence and from interactions with these systems.
These practices reflect what Victor Ray (2019, 42) calls racialised decoupling, which enables “organisations to maintain legitimacy and appear race neutral or even progressive while doing little to intervene in pervasive patterns of racial inequality”. As universities and corporations—often unmarked spaces of whiteness (Moore, 2008)—institutionalise reporting apps, racialised decoupling is central to how data capital benefits reporting apps, these predominantly white organisations, and Big Tech entities. While Callisto and Spot promote the sentiment that “your data should work for you” (https://talktospot.com/index), they sustain markets that rely on gender-based violence to enhance data about users by promoting a “better data” paradigm. The business model requires more data to generate better data, incentivising expansion that can amount to a form of predatory inclusion (see Cottom, 2020). That is, survivors of violence may be better positioned to report through apps, but they do so on exploitative terms driven by data dispossession, with no publicly available evidence of effective institutional action based on their use. Revisiting Ladd's vision, these platforms can be used to respond to other abuses, “including police brutality and immigrant deportations”, which increases markets and opportunities for the exchange of data capital (O’Brien, 2017). As people of colour disproportionately experience gender-based violence, police violence, and immigrant deportations, these systems of data extraction often target them but do not challenge white racialised organisations that are the beneficiaries of data's use value.
Consider, for example, how media conglomerate Thomson Reuters has sold access to its private database containing “more than 400 million names, addresses, and service records from more than 80 utility companies covering all the staples of modern life” to U.S. Immigration and Customs Enforcement and other law enforcement agencies (Hartwell, 2021). This example illustrates how data are shared across markets and in ways that can fortify surveillance regimes, even if users had not intended their data to be used for those purposes. In the case of reporting apps, violence enabled through gender inequity, structural racism, and exclusionary citizenship can become a source from which data markets extract value—and to the benefit of institutions, such as law enforcement, that propagate racialised sexual violence (Purvis and Blanco, 2020; Ritchie, 2017). In doing so, the “better data” paradigm does not necessarily support justice and accountability; it does, however, support a broader marketplace, which can reinscribe profoundly racialised configurations of power between users, data, corporate actors, and the state apparatus.
Representatives of these apps have made public statements suggesting that they understand corporate and institutional consumers of these apps might privilege the value of data over direct services for survivors. According to Kate L. Lazarov, Callisto's project officer, “recording information about an assault rather than reporting it could be valuable for colleges” (Fabris, 2015). Value generation relies on the circulation and exchange of standardised information about survivors (and perpetrators if named in the reports) within organisations, across digital survivor communities, and to HR and legal systems. Organisational cultures, however, can isolate employees and foreclose conversation among co-workers about harassment and other workplace misconduct. In the United States, it is already common to “avoid the harasser, deny, or downplay the gravity of the situation, or attempt to ignore, forget, or endure the behavior” (Feldblum and Lipnic, 2016, n.p.), with intersectional analyses demonstrating that Black women are disproportionately represented in sexual harassment charges across industries (e.g. Rossie et al., 2018). A report by the National Women's Law Center indicates that 35.8% of women who filed charges alleging harassment also reported workplace retaliation, in which they risked a pay cut, job loss, damaged future career options, and a reputation as a ‘troublemaker’ (Rossie et al., 2018). The fear of retaliation is a central reason why employees do not report, especially among those working in low-wage jobs. Although apps purport to increase incident reports, they do not offer protection against retaliation. By “black boxing” submissions and making their processing of complaints opaque, they can, in fact, affirm cultures of silence.
In contributing to practices at universities and in businesses, reporting apps can cultivate new digital survivor subjects, producing opportunities for potentially biased decision-making through data analytics. As Crooks (2021: n.p.) notes, “data work, like other bureaucratic processes, is race work . . . by virtue of the interpretive and calculative mechanisms which pre-emptively translate racial projects into questions of computation”. By design, reporting apps elide concerns of racism in gender-based violence. They confer benefits that support the operation of racialised organisations in their current form, not institutional changes that counteract violence. For example, Spot aims to create “workflows for sensitive issues”, which has expanded to include other workplace problems “from Covid-19 concerns to corrective action to bullying”. The company offers a case management platform to “investigate reports and claims” and “get actionable insights on the health of your organization” through reporting on trends. Similarly, Callisto provides its university partners with aggregated data reports twice a year, meant to offer better data for the existing offline reporting infrastructure of survivor advocates, Title IX coordinators, and counsellors (EEOC, n.d.). While they present these practices as modes of catalysing organisational change by providing “better data” to upstream stakeholders, their circulation of whitewashed data reifies modes of racialised decoupling.
There is also evidence that these apps create incentives for predatory practices that sustain corporate power exercised by Big Tech. Both Callisto and Spot use persistent cookies, which remain on the reporter's computer until deleted. As explicitly surveillance-oriented tools, cookies monitor, track, and log user activity, and “in ‘persistent’ form, could enable reidentification of those users when they returned to the site later on” (Cohen, 2019, 54). Data on usage trends are collected, stored, and analysed for several stated reasons, such as “to understand how our users are using our website and platform, and which features or content appear to be more useful and have the most impact” (Callisto Privacy Policy). Spot similarly notes, “We may use or disclose the personal information we collect… For testing, research, analysis, and product development, including to develop and improve our websites and services” (Spot Privacy Policy). These statements reveal how reporting apps continually aim to generate value from data capital through product optimisation, which they translate into “better data” products. In doing so, organisations, including those selling big data analytics such as Google Analytics and Hotjar, rather than victims, stand to benefit most. Anti-violence apps, particularly those designed for corporate entities, enable significant reinvestment—and not in ways that aim to provide direct benefits for survivors. By prioritising commodification and capital accumulation, rather than data justice, the digital infrastructure of racialised organisations may become more robust; however, their discriminatory features can go unchecked.
Racial commodification through the promotion of diversity and inclusion
As other analyses of racial capitalism attest, non-white racial identities can become commodities that predominantly white institutions instrumentalise as a form of social capital (Leong, 2013; Ralph and Singhal, 2019). As public relations are integral to capitalism (Madianou, 2019), promotional materials often showcase representations of racialised persons as “a tool for reaching these groups as well as a sign of their potential, or virtual, market share” (Marino, 2014, 7). Reporting apps are no exception. Although the functional mechanisms of Callisto and Spot may whitewash data, their marketing mobilises “diversity and inclusion” rhetoric.
Callisto has worked to cultivate a reputation as an organisation that upholds and helps others enact diversity and inclusion mandates. As it has recruited a more diverse leadership over time, its visual promotions have evolved to support the perception that their product is race conscious and supports equity. In fact, some universities consider Callisto part of efforts to “expand diversity and inclusion initiatives that help marginalized groups on campus, including victims of sexual assault and violence” (Pollack, 2020).
These visualisations centre representations of non-white people to support advertising efforts, evincing a kind of tokenism that “leverages undervalued identities” and “preserves commodified values of race by parading an exception” (Leong, 2013, 2195). Callisto's abstract illustrations cultivate an image of racial inclusion by emphasising bodily features stereotypically ascribed to Black and Brown people, including curly hair and fuller lips (Figures 1 and 2).
In contrast to this racialised imagery, the design of Callisto does not facilitate modes of attending to how people of colour disproportionately experience violence. Appealing to racial inclusion thus works in the service of marketing the app, not in tailoring its services for survivors. Carefully curated, this aesthetic conveys promotions that may appeal to people of colour while still being palatable for primarily white academic institutions and investors. Its commodification of race is of interest to institutions that want to support an image of a diverse user base, doing so without disrupting the racial status quo. Moreover, as scholars of racial capitalism explain, these kinds of tokenising practices facilitate a framing of white racialised universities as “nonracist and culturally competent actor[s]” (Leong, 2013, 2013).
While Callisto actively employs racial difference to support its market expansion, Spot's advertising targets corporate clients, which are predominantly white organisations that have had problems with harassment (e.g. Ramos Law, 2017). Rather than explicitly confront issues of age, disability, gender, or race, Spot depicts its product through images of gender and racial ambiguity. Its promotional videos utilise abstract animations of humans that are coloured red and blue to signal a bifurcation between women and men. In picking primary colours rather than skin tones to differentiate these figures in their advertisements, Spot promotes a colourblind approach that does not ‘see’ race or ‘choose’ racial groups. It also presents its AI as “completely unbiased” when addressing workplace issues: “because Spot is a bot and not a human, it cannot judge or assess you” (Mercer, 2019). The discourse that the Spot AI “listens without judgment” (Childs, 2019) invites users to produce data in a context somehow disconnected from the gendered, heteronormative, and racialised environments that users navigate. This marketing, of course, is misleading, as digital tools can encode gendered, racial, and other prejudicial ideas and values into the system (Noble, 2018).5
This race-neutral rhetoric carries over into Spot's unmarked AI chatbot. Spot CEO Jessica Collier has stated explicitly that its “genderless, personality-neutral interviewer” is intended to “effectively eliminate reporting bias and the discomfort of telling another person (of another gender, race, or background)” about traumatic experiences (Childs, 2019). Though asserting neutrality, Spot is nonetheless a white racialised AI: its seemingly benevolent design serves to enhance the extraction of data through a performance of an educated, literate, middle-class (white) persona without indication of a lived culture or history (see Cave and Dihal, 2020; Phan, 2019). As such, Spot's proactive framing of race as unmarked “equate[s] the dominant cultural identity group with a universalized vision of humanity” (Marino, 2014, 3). The racial capitalistic elements become clear when considering how these appeals to whiteness aim to enhance capital accumulation and value.
Spot's embrace of normative whiteness extends into its approach to diversity. One of its turnkey business solutions is a one-hour diversity, equity, and inclusion (DEI) training, delivered to employees in “10-min episodes [that] are more like Instagram stories than the cringe-worthy videos we all love to hate”, covering topics primarily related to gender discrimination, sexual orientation, and unwelcome conduct (https://talktospot.com/training). Spot's DEI offering includes an additional one-hour training for supervisors, with a module on how to “debunk false reports”. Such training seems antithetical to the app's mission to facilitate the disclosure of sensitive information because it reinforces the myth that false reporting is rampant by teaching supervisors to recognise “credibility discounts”, which are always already classed, gendered, and racialised (Tuerkheimer, 2017). Reinforcing claims that supervisors need to get at the “truth” of the matter obscures the nature of the systemic problem, which is not false reporting, but poor institutional action and lack of accountability.
These tensions reflect how priorities of data extraction and value generation often undermine the apps’ stated goal of empowering users. Although different in their approaches, both Callisto and Spot commodify race. This commodification supports their shared aim of enhancing their ability to produce data on experiences of harassment and violence in the pursuit of an expanded consumer base and access to new markets. The users of reporting apps are further alienated from the benefits of capital produced from their labour.
The value derived from these apps is not simply economic. Racial commodification generates capital that has social, cultural, and human dimensions. Capital that supports favourable social status and reputation can also attract new economic resources. For example, in 2018, Callisto received the Skoll Award for Social Entrepreneurship, which awards “social entrepreneurs who take the risks to right the most unjust systems” with US$1.25 million in grant funding (Callisto, 2018). While grant funds support Callisto's operation, such awards also provide social capital through reputation enhancement and extended networks. More broadly, prestigious awards, which often come from primarily white foundation funders, can be leveraged in further fundraising, as prior awards help in winning new awards and grants from other primarily white funders.
These practices illuminate additional dimensions of the complex racialised dynamics in which Callisto and Spot are situated, which exceed inequalities observed in relation to reporting. Not only do apps fail to explicitly confront power structures that uphold racial inequalities, but they also maintain—and expand—ties to legal systems that perpetuate harms disproportionately experienced by people of colour. Take, for example, Spot's co-founder, Julia Shaw, a memory expert and psychologist who previously trained police and military personnel to conduct interviews on emotional events (Reynolds, 2019). She has repeatedly emphasised that the AI replicates police interview techniques that use cognitive science to “focus on neutrality and gather factual, detailed evidence from memories” (e.g. Byers, 2019; Reynolds, 2019). In a Digital HR Leaders podcast, Shaw explains: “just like a police investigation, if you have an HR investigation, you want as high-quality information as possible and you want the evidence that you're using whatever it is to be high quality” (Green, 2020). Shaw falsely presents police techniques as neutral and draws upon claims to scientific authority, expertise, and the credibility conferred upon police interviewing to position the AI as offering more reliable evidence (as opposed to forcing or leading responses to questioning). In short, Spot actively brings carceral techniques into corporate HR processes.
Callisto is similarly entangled in civil and criminal legal processes. Although its original purpose was to use its matching algorithm to report, track, and hold accountable repeat offenders through Title IX claims, Callisto has since incorporated Legal Options Counsellors as the key resource for users to navigate modes of recourse. These actors open the door to more legal actions, including “getting a restraining order,” engaging a “criminal justice process or a civil lawsuit,” or legally mediated “restorative justice”. While Callisto suggests twelve possible avenues for survivors, its choice to call the survivor liaison a “Legal Options Counsellor” reinscribes legal forms of recourse as preferred, negating how various legal processes—whether administrative, civil, or criminal—fail to provide adequate support for complainants who are people of colour, disabled, and/or LGBTQ (e.g. Kim, 2014; Spade, 2013). Here again, by prioritising value generation and supporting existing systems, these apps work to uphold, rather than contest, racialised power relations.
Conclusion
Through an analysis of reporting apps, this article illustrates how emergent questions of data capital should be considered in relation to interlocking systems of domination and oppression. It illuminates how an intersectional sensibility might enhance the analysis of technological solutions to criminological concerns specifically and data capital more generally (see Henne and Troshynski, 2019). Our aim, however, is not simply to demonstrate the possibilities of intersectional analyses of data. Rather, it is to provide a grounded example of how racial capitalism persists through a gendered project, sustaining racialised regimes of value extraction and exchange. In this case, as datafication transforms social action, experience, and identity, these apps support modes of accumulating and circulating capital in ways that do not necessarily translate into direct benefit or use value for survivors who are the sources of this valuable information.
Our findings raise serious questions about how reporting platforms comprise a new front-line approach for addressing gender-based violence. While Callisto and Spot are promoted as empowerment tools for survivors, examining them through the lens of racial capitalism aids in understanding how they fail to challenge systems of oppression that contribute to gender-based violence and the inequitable realities of reporting. In this case, apps can be used by organisations and institutions to avoid making meaningful changes to structures that enable harassment and discrimination. This seeming paradox reflects larger societal shifts in which the control of knowledge is fast becoming the foundation upon which economic, legal, political, and social influence is exercised (Haggart et al., 2019).
Having illustrated how racism operates even when something appears race neutral, this case study is instructive for future critical and feminist interventions. Data policies, infrastructures, and practices are critical sites of justice. If conceptualisations of data capital are to contribute to liberatory agendas, they must unveil and confront how racialised property relations are inextricably linked to its generation. Anonymising data does not offer sufficient means of protection (see Shelby et al., 2021), as it does not prevent powerful actors from exchanging, investing, and ultimately benefiting from mined data. In the context of gender-based violence, digital apps may promise better evidence, but they also add new layers to pursuing justice through formal remedies—which are already often opaque and at times invisible to survivors. For such interventions to effectively serve survivors, these data infrastructures must centre the needs and rights of historically excluded users.
Recognising that scholars concerned with big data, platform governance, and surveillance are asking pressing questions about how data extend and challenge understandings of contemporary capitalism, we hope this article serves as a cautionary example of why not to accept the appearance of race neutrality when pursuing this critical line of research. Data capital tends to benefit predominantly white institutions and organisations because it can continually be reinvested and redeployed in the service of actors that have few to no incentives to ensure it has use or exchange value for users. Through its focus on constitutive relationships, racial capitalism offers an analytic that can strengthen and expand analyses of whiteness in data-intensive applications, supporting more robust scholarship at the intersections of data and power. It illuminates how racism is embedded in inequitable data relations and how gendered data projects can advance racist processes—in all, pointing to the need to better adapt intersectional analysis for data studies.
Footnotes
Acknowledgements
The authors would like to thank the anonymous reviewers for their comments and ideas that contributed to this paper's development.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Mellon/American Council of Learned Societies Fellowship and the Australian National University Futures Scheme.
