Abstract
Dating apps can facilitate friendships, relationships, and other connections, but they can also be sites of risk, harm, and violence. While platform policies and associated documents outline duties and protections offered to users, platform governance can embed carceral forms of justice enacted through surveillance and punishment. Through a comparative study of 30 dating apps’ safety materials, we identified three themes reflecting carceral logic in dating app governance: individualization of user safety; policing through peer moderation and reliance on law enforcement; and surveillance-based features encouraging users to exchange their data for perceived protection. These approaches frame harm as an individual failure, ignore systemic violence, promote punitive responses to justice, and risk exacerbating users’ vulnerability to privacy violations. We recommend that dating apps adopt alternative models of justice centered on users’ existing safety strategies and on mutual accountability and responsibility, and that they build in mechanisms affording transparency and user autonomy in choosing safety measures that can be adapted to the differential risks users face.
As dating apps have seen widespread uptake over the past decade, public discourse has often highlighted their risks, with disproportionate emphasis on shocking episodes (Albury et al., 2020). While the greatest attention is often paid to moral panics concerning violent or especially captivating incidents, these technologies have been consistently implicated in risks and potential harms experienced by users, often in the form of bias, discrimination, harassment, and violence linked to systemic inequities and structures of oppression (Gillett, 2018; Williams, 2024). Platforms’ policies and guidelines, which undergird their features and functionalities, reflect promises to users in relation to the services, duties, and protections they offer (Gillespie, 2018). However, platform governance tends to reflect the carceral logic dominant in neoliberal societies (Schoenebeck et al., 2021), which spreads out from the penal system to embed an individualizing and punitive approach to justice throughout institutions. Through a comparative study of 30 dating apps’ safety materials (e.g. terms, guidelines, policies, and tips), this article examines dating app companies’ discursive constructions of safety as it relates to users, governance approaches, and app features and functionalities.
Qualitative coding of safety materials revealed three key themes that reflect an underlying carceral logic concerning approaches to safety in dating app governance. First, dating app companies designated individuals as responsible for their own safety while prescribing how users should protect themselves, rendering them culpable for harm if they failed to follow such guidelines. Second, apps relied heavily on moderation through peer policing, calling on users to report prohibited activity, and many companies instructed users to contact law enforcement, closely intertwining dating apps with formal approaches to policing. Third, peer and formal policing necessitated intensive surveillance, executed through technical features reliant on personal, visual, and biometric user data in exchange for promises of deterrence and identity verification, both of which were conflated with safety. Overall, the carceral logic embedded in dating apps’ approaches to safety frames risks and experiences of harm as individual problems, encouraging the punishment of bad actors rather than addressing the structural nature of these issues. Further, this logic normalizes multiple forms of surveillance—from peers and authorities, and through dataveillance—embedded in technological solutions that promise, but may not deliver, safety for all. We conclude that dating app companies should transition toward alternative models of justice through governance approaches and app designs that are attuned to users’ dating practices and needs, build in mechanisms to foster a sense of shared accountability and responsibility, and provide greater levels of consent, transparency, and multiplicity when offering safety measures. These reflect steps toward recognizing that safety is not experienced universally, and that different features or policies can inequitably distribute risk across diverse user groups.
Dating apps and safety
Dating apps have long been subject to worries about stranger danger and accusations that the relationships they initiate are less authentic than in-person ones (Baym, 2015). These concerns about the veracity of online relationships become linked with worries pertaining to public life and sex (i.e. policing the normativity and visibility of sex in public life) to create a “trifecta of anxieties” (Tiidenberg and van der Nagel, 2020, p. 21), which feeds moral panics and gives rise to public discourses focused on sensational risks and shocking cases of harm (Albury et al., 2020). Since moral panics are commonly gendered, a disproportionate amount of concern surrounds women's use of dating apps, building from gendered double-standards and stigmas surrounding casual sex (Albury, 2018). Gender biases that position dating apps as dangerous for women also responsibilize them for their safety and render them culpable if they experience harm (Farvid and Aisher, 2016). Concerns about financial scams, murder, and sexually transmitted infections are also common in public discourses that present dating apps as a social and public health problem (Albury et al., 2020)—often couched under the umbrella of safety.
These moral panics not only obscure positive outcomes of dating app use, such as connection, friendship, and relationships (Blackwell et al., 2015; Broeker, 2023), but they also detract attention from more common, mundane, and ongoing risks and harms surrounding this technology. Gender-based and sexual violence are frequently facilitated through dating apps (Dietzel, 2021; Filice et al., 2022), and Gillett (2018) observes that intimate intrusions—from sexually aggressive messages to threats of physical assault—have become so common that they often go unaddressed. Racism is also facilitated through dating apps (Carlson, 2020; Dietzel, 2022), often reinforced through filters and algorithmic matching that categorize users according to characteristics with racial or ethnic associations (Williams, 2024). These experiences are intensified at the intersection of identities, as racialized women experience increased harassment and fetishization through dating apps (Banks et al., 2024). Women, especially those with limited digital literacy, are targeted more often than men for financial fraud and romance scams (Bilz et al., 2023). LGBTQ+ dating app users experience rejection, discrimination, and threatening behavior, as well as homophobia, biphobia, and transphobia (Albury et al., 2021). These studies point to ongoing concerns about safety and dating apps, showing that threats to safety can encompass physical, psychological, social, and financial risks and harms.
Even so, the risks and harms encountered through dating apps are not experienced by all users or at the same intensity across demographics. As explained by one scoping review: “The broader evidence base on technology-facilitated sexual violence indicates that certain population subgroups are disproportionately at risk, most notably women and girls, sexual and gender minorities (e.g. gay, lesbian, bisexual, trans, and gender-nonconforming persons), and people of colour” (Filice et al., 2022, p. 12). While harm is often experienced as an interpersonal act, the broader instantiation of these harms—their consistent and continued nature—is collective. Spaces where people engage in sharing, witnessing, and resisting harassment, such as the Instagram account Tinder Nightmares through which women share common and recurrent examples of sexually harassing pick-up lines (Hess and Flores, 2018), testify to the collective nature of these harms. From this vantage point, the connection between individual incidents and broader systemic oppressions, such as racism, cisheterosexism, and misogyny, becomes clear. The evidence of dating apps’ risks and their role in facilitating and reinforcing structural harms (Williams, 2024) raises questions about how dating app companies frame safety issues and how they move beyond moral panics to address the real dangers that users encounter.
Carceral logic and platform governance
Carceral logic reflects the ideological and infrastructural expansion of carceral systems of justice throughout society. Coyle and Nagel (2022) explain, “While some of this carceral logic results in criminalizing that is formalized in our penal institutions (law, police, courts, prisons and others), some of it remains informal or semi-formal,” with less formal instantiations expressed through “ideologies, practices and worldviews” (p. 25). Carceral logic is a political and economic approach to establishing stability within capitalism and a form of social control to reinforce power relations, such as those based on racial oppression (Gilmore, 2007). Forms of discipline and punishment, sometimes administered through prisons and other times through surveillance-based social control, were integral to industrial growth throughout modernity (Foucault, 1977). However, carceral logic that emphasizes disposability, the threat of removal (or imprisonment), and citizenship characterized by scarcity and competition among expanding populations has been conducive to the growth of neoliberal postindustrial societies (Schept, 2015). According to carceral logic, justice is pursued in an authoritative yet distributed manner, as institutions and citizens are enlisted to surveil, police, and constrain actors to align with laws as well as dominant structures, processes, and norms. This logic emphasizes citizen obedience and discipline of perpetrators as the primary solutions to addressing and preventing harm (Coyle and Nagel, 2022; Gilmore, 2007). As such, it departs from alternative forms of justice, such as restorative or transformative justice, which focus on the needs of those harmed, prioritize accountability, aim to address the root causes of harm, and rely on community-based responses (Hasinoff and Schneider, 2022; Schoenebeck et al., 2021).
Rather than understanding violence as systemic or structural, carceral logic focuses on punishment, such as forced removal from communities and surveillance (Coyle and Nagel, 2022; Gilmore, 2007; Kim, 2020). The prominence of surveillance, by authorities and peers, breeds suspicion among citizens and promotes individualism, since every person is responsible for preserving their freedom, a reality exacerbated by the lack of social support systems and expectation of economic independence placed upon citizens in neoliberal societies (Schept, 2015). These qualities are evident in the emergence of “carceral feminism,” in which surveillance leads to frequent invocation of “the carceral system of police, prosecutors, courts, parole, probation, jails, and prisons for the protection of women” (Kim, 2020, p. 310). Carceral feminism is especially linked to concerns over sexual violence and women’s safety, including those that circulate in relation to dating apps, which call for measures that may protect only some women while rendering others more vulnerable (Davis et al., 2022; Kim, 2020). The “carceral regime harms Black and other people of colour and marginalized groups” (Davis et al., 2022, p. 42), as it often targets these groups and leaves them less safe by dissolving local systems of support, economic wellbeing, and care (Gilmore, 2007).
Carceral logic is instantiated within and through digital technologies. Benjamin (2019) explains how technologies, such as those used for predictive policing, reinforce racist forms of social control: “technology is not just a bystander that happens to be at the scene of the crime; it actually aids and abets the process by which carcerality penetrates social life” (p. 2). This occurs, for instance, when police departments adopt big data surveillance systems—often fed by private vendors who can circumvent calls for data transparency—and deploy predictive analytics alongside claims to neutrality that elide the bias built into these technologies and their use (Brayne, 2021). Similarly, “carceral communication” reflects “the interconnection between communication, digital traces, and surveillance” that serves to link individuals and communities to prisons (Lane and Ramirez, 2024, p. 675). Social media platforms are increasingly implicated in the use of digital data for criminal surveillance and complicit with law enforcement’s aims. Many platforms and apps remain headquartered in the United States and, while Section 230 of the U.S. Communications Decency Act has largely provided immunity from liability for user content (Gillespie, 2018), they have predominantly developed content moderation that mirrors principles from the U.S. criminal justice system (Schoenebeck et al., 2021). Platform companies often focus on moral panics and vow to punish “bad actors” while using issues of public concern as rationale for scaling up automated moderation and data-intensive surveillance (Gillett et al., 2022). However, such governance mechanisms tend to disproportionately censor and remove marginalized users, such as racialized users, LGBTQ+ users, and sex workers (Are, 2024; Gray and Stein, 2021). Racialized users are frequently targeted by these mechanisms as “digital platforms have become places that reify institutional practices that police, surveil, and criminalize Black women’s practices” (Gray and Stein, 2021, p. 539). Peer-based moderation saddles users with the duty of surveilling one another, regardless of their values, positionalities, or interpretations of app policies (Crawford and Gillespie, 2016). While deterrence has long been a goal of surveillance, the expansion of surveillance through digital data collection shifts toward scalable, automated interventions that aim to prevent violations (Andrejevic, 2020). As automated governance blurs lines between state and commercial surveillance, platforms are a key component in “the infinite productivity of surveillance capitalism,” predicated on “more sensors, more data, and endless intervention” (Andrejevic, 2020, p. 249). Building from Zuboff’s (2019) notion of surveillance capitalism as the translation of human experience into behavioral data for commercial purposes, Andrejevic (2020) highlights how these processes can also be applied to the governance of citizens.
Dating apps can be understood as a subset of social media, as they often involve similar features and governance approaches. While some operate independently, many are owned by conglomerates, such as Match Group or Bumble Inc., that develop specific app policies as well as governance approaches spanning properties, and apps that connect with infrastructural platforms, such as Facebook or Google, may also be subject to their policies (Duguay, 2017). Research speaks to the insufficiencies of peer moderation on dating apps, as women, LGBTQ+ people, and people of color are often subject to malicious reporting—yet are still called upon to perform the labor of blocking and reporting others (Gillett, 2023; Williams, 2024). Dating apps are also “intense sites of data generation, algorithmic processing, and cross-platform data-sharing” (Albury et al., 2017, p. 1) with user data increasingly feeding automated surveillance tools (Gillett et al., 2022). As for dating app “safety” features that extract volumes of user data for intensive surveillance or integrate law enforcement into dating experiences, Stardust, Gillett and Albury (2023) conclude that “pursuing a techno-carceral approach that equates surveillance with safety, technology with progress and police with justice will only serve to deepen and encode existing inequalities on dating apps” (p. 290). Thus, our comparative analysis of dating apps’ safety materials was initially sensitized to the prevalence of carceral logic in dating app governance, which became a focal lens for understanding its discursive instantiation.
Methods
This study analyzed dating app companies’ policies, guidelines, and expectations for user behavior set out through public-facing safety materials. Since public discourses generated by platform companies and their representatives often shape the relationship between a platform and its users (Hoffmann et al., 2016), such documentation reveals “how platform companies see themselves as ambivalent arbiters of public propriety” (Gillespie, 2018, p. 46) as they—often reluctantly—moderate user behaviors. As Davis (2020) reminds us, “technologies don’t make people do things but instead, push, pull, enable, and constrain” (p. 19) since their features afford certain actions over others. While affordances place users in a relationship with technology, one in which they can use, refuse, or resist features, the technological interfaces provided to users and the rules that go with them are determined by platform developers. How well technologies cater to users’ creativity, autonomy, and differential needs is a design decision (Escobar, 2018), and designs are informed by, and reinforce, policies and associated discursive materials. For dating apps, community guidelines are generally supplemented by more formal terms of service and less formal documents (e.g. FAQs and dating tips), which together articulate developers’ intentions, reflected as an app’s expected and appropriate uses (Duguay et al., 2017). Together, these safety materials comprise the data analyzed in our study, as they relay the ideologies, priorities, and values underlying app companies’ discursive constructions of safety.
To engage with these safety materials, we developed a sample of dating apps available in Canada, including popular apps and those targeting diverse demographic groups. Our focus on Canada is motivated by the relative lack of research on this context compared to the United States (e.g. Vogels and McClain, 2023) and Australia (e.g. Albury et al., 2021), as well as by calls for comparisons across apps (Matassi and Boczkowski, 2021). We began by reviewing the top 200 apps in the “Dating” category of the Canadian Google Play Store (Android) and the “Lifestyle” and “Social Networking” categories of Apple's Canadian App Store (iOS), which does not have a specific category for dating apps. We excluded apps for social networking (e.g. Instagram), messaging (e.g. WhatsApp), friendship (e.g. Meetup), anonymous chat (e.g. Chatroulette), and dating support (e.g. AI assistants). To focus on popular apps while retaining a diverse sample, we removed apps with fewer than five million downloads from the Google Play Store, except those targeting minority user populations. Then we purposively sampled additional apps catering to different facets of identity (e.g. age and religion) and removed a small number of apps pertaining to groups over-represented in the sample (e.g. gay men). This resulted in a final sample of 30 apps.
Table 1 lists each app, its primary user base according to its in-store description, and the number of safety-related documents gathered from its website. We collected a combination of documents, including terms of service, privacy policies, and community guidelines. Most apps included additional information, such as “safety tips” or safety-specific guidelines. Match had the most safety-related materials (n = 43), though other apps with large user bases, such as Badoo, Bumble, Grindr, HER, Hinge, OkCupid, and Tinder, also had many materials. More niche apps, like Christian Mingle, Feeld, Raya, and PURE, often had fewer safety materials. From September to December 2023, copies of 447 safety materials were collected for analysis.
Table 1. Dating apps included in this sample and the number of safety materials gathered from each company.
To understand how dating app companies discursively constructed notions of safety, we implemented a three-step process of qualitative constructivist coding (Saldaña, 2021): close reading of all safety materials; descriptive and topical coding to create initial categorizations; and higher-level axial coding leading to the development of key themes linking categorizations through theory and literature. Throughout, the research team sense-checked codes and reviewed emergent themes, arriving at final themes through consensus (Cascio et al., 2019). As mentioned, the coding process was sensitized by the feminist and social justice lenses mobilized in our literature review to make sense of recurrent topics in the materials (e.g. moderation and in-person dates) and develop analytical themes (Silverman and Marvasti, 2008).
The following sections illustrate that carceral logic often surfaced in these safety materials through discourses instilling: (1) individual responsibility for safety, rendering users culpable for harm experienced within and outside the app if they did not follow particular prescriptions; (2) policing through peer moderation that encourages users to report others’ behavior within the app and to law enforcement; and (3) user- and technology-driven surveillance that encourages users to exchange data for safety, enabling increasingly totalizing and automated forms of surveillance. Dating apps’ materials promote notions of safety and approaches to justice that overlook the systemic nature of harms and the need for alternative solutions. Such governance approaches are more likely to punish, disadvantage, and fail certain populations while only protecting some individuals.
Since our analysis was limited to safety materials, it cannot speak to the myriad ways that users develop their own safety practices (Broeker, 2023; Gillett, 2023), though we contextualize our findings with research about user practices. Policies inform interface design and feature arrangements, which users can creatively resist, ignore, redefine, adapt, or reinvent through third-party apps or unexpected practices (Davis, 2020; Duguay et al., 2017). While this research is part of a larger project that also analyzes app interfaces and user experiences (2024), this article focuses on safety materials as texts that reveal dating app companies’ discursive logic. Although users negotiate dating app risks in ways that enable connection and relationship formation (Broeker, 2023; Ferris and Duguay, 2019), scholars call for the burden of establishing safety to not be placed solely on users (Gillett, 2018; Williams, 2024). As our findings show, dating apps’ approaches even sometimes inhibit users’ safety strategies. Thus, this analysis creates a foundation for understanding the logic behind pivotal decisions that commercial software developers make, enabling calls for governance approaches and attendant designs that take up alternative models of justice.
Safety as an individualized responsibility
As is common with platform policies, all terms of service included statements in which app companies disclaimed liability for user safety, with some emphasizing their neutral role. Seeking asserted that its app “is only a venue […] for individuals to post personal and contact information for purposes of dating” and “anything beyond that is not in our control and is carried out at the Members’ own risk.” Such disclaimers exempt Seeking from allegations that the app’s focus on “sugar dating” encourages monetary exchanges for sexual encounters, which could be perceived as sex work and therefore illegal in many jurisdictions. Nonetheless, across our sample, the user was deemed “solely responsible” for their safety—a phrase present in materials from Ashley Madison, BLK, Feeld, Grindr, Muzz, Plenty of Fish, PURE, Raya, Salams, Scruff, Silver Singles, WooPlus, and Zoosk.
Despite the legal basis for presenting their technologies as neutral, app companies become involved in users’ behavior (Gillespie, 2018), as evidenced by the proliferation of safety-related prescriptions across the apps’ materials. Companies frequently told users to “protect yourself” (Plenty of Fish), “always be cautious” (BLK), and “be web wise” (OkCupid). Safety materials often mixed enjoyable aspects of dating with reminders about responsibility. While Zoosk said, “Online dating can be fun and rewarding, but it’s important that you stay vigilant about your safety,” BLK, Hinge, Match, OkCupid, Plenty of Fish, and Tinder—all owned by Match Group—told users: “Meeting new people is exciting, but you should always be cautious when interacting with someone you don’t know.”
The apps’ safety materials often advised users to employ their judgment, prescribing which behaviors reflected proper judgment. For in-app behavior, much of this advice related to limiting the flow of information. BLK stated, “If you choose to reveal any personal information about yourself to other users, you do so at your own risk” while HER told users to “always think twice before sharing personal or explicit content with anyone, no matter how genuine they seem.” While such warnings are intended to help users protect their information, they ignore that users tend to incrementally disclose personal information to build rapport with potential matches and assess compatibility (Broeker, 2023). Another contradiction was evident in companies’ precautions about location sharing and their apps’ geolocating function, which presents users based on proximity. Grindr’s “Holistic Security Guide” advised users to “Hide distance in your Grindr profile” while Raya stated that users should “only elect to disclose your location after careful consideration.” Since geolocation is a central organizing principle in most apps sampled, this advice counters dominant practices and interface defaults toward disclosing a user’s approximate location. Warnings concerning personal information and location often took an all-or-nothing approach, counseling against sharing rather than providing support for incremental disclosures.
Safety materials also advised against the common practice of interacting across multiple platforms (Broeker, 2023; Ferris and Duguay, 2019). Ashley Madison, eHarmony, Happn, HER, Hornet, Lex, Match, Muzz, OkCupid, Plenty of Fish, Salams, Scruff, Silver Singles, Taimi, and WooPlus encouraged users to only communicate on their platform. For example, Match told its users to “Get to know new matches where you’re most protected: The app” to persuade them to stay on the platform. While users often communicate via dating apps, messaging apps, and social media to gain a more authentic understanding of a potential date (Broeker, 2023), remaining on one app maintains user data and engagement under the umbrella of a single company. Although users often draw on multiple platforms to employ different affordances in the disclosure of intimate information or sexual expression (Tiidenberg and van der Nagel, 2020), such as by sharing a “private” photo album housed on a pseudonymous platform like Tumblr or time-limited messages on Snapchat, these materials missed the opportunity to outline how users could draw on the broader app ecosystem in their safety practices. While dating apps co-constitute this ecosystem through their multiple built-in connections across apps (e.g. cross-platform account authentication and importing data), materials that center dating apps in this ecosystem may discourage users from harnessing other apps’ affordances for safely interacting with others.
Safety materials frequently extended advice to in-person dates, such as by advising users to avoid private transportation: “Don’t get into cars” (Happn). Muzz’s assertion that users should “avoid meeting at private homes or anywhere secluded” echoed common recommendations to meet matches in public spaces. Some apps, like Match, framed this decision as a matter of sexual restraint: “Don’t succumb to the temptation to take first dates to your home (or to go to his/her home).” However, individuals may not always feel comfortable meeting in public, such as if the dater is not publicly out about their sexuality or belongs to a culture in which casual dating is frowned upon. Apps also cautioned against substance use, with Grindr noting that “drinking and using drugs may decrease your ability to identify a situation as potentially dangerous,” and several apps advised users to monitor their drinks, implying or explicitly warning about date rape drugs. Tinder told users, “Keep your phone, purse, wallet, and anything containing personal information on you at all times.” Several apps, including Badoo, BLK, Feeld, Hinge, Grindr, Match, OkCupid, PURE, Scruff, and Tinder, offered safety tips for sexual encounters, often discussing safer sex practices, sexually transmitted infections, and consent.
This advice may be helpful, as it reflects messages often deployed by public authorities, like health services or educational institutions. It also echoes how dating app companies took on an authoritative role in the COVID-19 pandemic as they advised users on distancing, mental health, and wellbeing (Myles et al., 2021). However, unlike governments and public health authorities, commercial dating app companies are primarily experts in their business of data-driven profit generation and software subscriptions rather than in these other domains, and they are not beholden to their usership in the same way that governments are to citizens. Companies were clear that their obligation to protect users did not extend beyond outlining “tips” and “simple steps,” framed as the way to avoid harm: “Pay attention to these dating safety rules and your odds of a bad experience will be vastly reduced” (eHarmony). Such framing ignores the systemic nature of harm, such as the disproportionate targeting of women over men with date rape drugs (Du Mont et al., 2010), which reflects societal problems of misogyny and gender-based violence (Filice et al., 2022). Thus, users are rendered individually responsible for their safety while being told to protect themselves in specific ways, some of which contradict apps’ default settings and common user practices. If users do not follow these instructions and harm occurs, they are rendered culpable, since they were warned. These approaches to safety place the burden on the individual and reinforce victim-blaming narratives, which often situate women, racialized people, and other minorities as responsible for harm inflicted upon them (Farvid and Aisher, 2016), rather than recognizing structural conditions that render these users disproportionately at risk in the first place.
Policing through moderation and law enforcement
Moderation constitutes the main form of policing on dating apps, with threats of reporting and banning as the central forms of punishment in this justice model. Dating apps’ safety materials encouraged users to police each other’s behavior, with some companies’ attention to reporting perhaps attempting to counter the normalization of leaving abusive behavior unaddressed (Gillett, 2023). Bumble explained that its “Block & Report feature is designed to protect you, and you should feel empowered to use it any time that you feel uncomfortable or unsafe—whether it’s about someone’s behavior on the app or in person,” later reminding users they should use the feature “as often as needed.” Peer moderation was framed as a service to others, with Hornet explaining, “the more people report the bad actors, the faster we can get rid of them,” and HER telling users, “You’re playing a crucial role in making online dating safer for everyone.”
In assuming this role, users needed to know what behaviors to police. Dil Mil asked users to report “any instances of misconduct and violations of our policies.” Other apps emphasized policy infringements, specifying that users should report “suspicious, offensive, harassing, threatening, fraudulent, unwanted, or harmful behaviors or if another user requests money or attempts to sell a product or service” (Hily). Lex and PURE asked users to watch for self-harm content, while Ashley Madison, Grindr, Hily, Match, Muzz, Salams, Seeking, Silver Singles, and Tinder encouraged reporting people suspected of engaging in sex work. Many apps, including Ashley Madison, Dil Mil, HER, Hily, Hinge, Match, OkCupid, Plenty of Fish, PURE, Salams, Seeking, Silver Singles, Taimi, and WooPlus, urged reporting of potentially fraudulent activity. Such demands require users not only to be adept in specific app policies but also to be experts who can properly identify issues, such as mental health problems, sex trafficking, interpersonal violence, and legal infractions. Only HER recognized the work of community moderators, enlisting some users into a volunteer moderation role (alongside regular peer reporting) of helping “maintain a safe, inclusive, and nurturing space for all.” They earned a black M on their profile and were eligible to receive HER merchandise for their service (HER, 2016). This exceptional moderation model builds from the potential for users to cultivate mutual support through a shared sense of community, relying on the dedication of HER’s users to a sense of accountability and responsibility stemming from shared identifications under the umbrella of queerness. This approach echoes other community moderation models, such as Reddit’s volunteer moderation that has been employed to bring lesbian-queer communities together across difference (Foeken and Roberts, 2019). However, the labor of moderation—whether formalized through a designation or informally distributed across users—is intensive, benefits the platform by enabling the appearance of governance at scale, and often falls on the shoulders of those at highest risk for harm (Nakamura, 2015). No apps in our sample offered users monetary remuneration for the time or labor necessitated by moderation or reporting.
If detecting specific violations was beyond users’ capability, they could simply rely on feelings. Badoo, Bumble, and Tinder encouraged users to “trust their gut” while Hornet said to “trust your instinct.” Plenty of Fish told users, “You know when someone’s crossed the line.” However, research shows that white daters may feel “uncomfortable” when avoiding or rejecting racialized users (Williams, 2024), indicating how discomfort can stand in for prejudice. Some dating app companies recognized this conflation between bias and harm, and included warnings against targeted reporting. Tinder’s Community Guidelines read: “If you see someone who doesn’t meet your personal criteria, don’t like them or unmatch and move on. Don’t report them unless you think they’ve violated our policies.” Though this statement suggests that users may engage in targeted reporting based on “personal criteria,” Tinder hesitates to make a direct link between preferences and systemic biases when sexual racism is often brushed off as a preference (Carlson, 2020). Other apps explicitly recognized identity-based targeting: “Intentionally reporting another member solely based on a protected attribute will not be tolerated” (Badoo). Bumble explained, “We may take action against a member if we’ve found them to be intentionally creating false or inappropriate reports against other members solely based on their protected attributes,” such as “transgender or nonbinary members for no reason other than their gender identity or expression.” This statement acknowledges the prevalence of anti-trans reporting, which inhibits transgender users’ dating experiences (Griffiths and Armstrong, 2024). The misuse of reporting tools, warranting such guidance, shows how users may differentially interpret and instill their own values and notions of justice into peer moderation processes.
Although companies emphasized the importance of reporting users, it was rarely clear what followed. Responses to users often included vague or open-ended outcomes, like “Depending on the severity and the frequency of the reporting, that member will be warned or banned” (Hinge), or reassurance that “reports of harassment, intimidation and assault are taken very seriously” (Plenty of Fish). Many smaller or niche apps, such as BLK, Christian Mingle, Feeld, Raya, WooPlus, and Zoosk, did not provide details about reporting processes. In contrast, Bumble provided a step-by-step outline of “what happens behind the scenes when you report something,” which included review by their “team of human moderators.” Though many apps, including Badoo, Bumble, BLK, Grindr, Hily, Hinge, Hornet, Match, Muzz, OkCupid, Plenty of Fish, PURE, Seeking, Silver Singles, Taimi, and Tinder, described using a combination of human and automated systems to monitor and review content, it was not always clear which decisions were made by humans and which were automated, as is common with moderation at scale (Gorwa et al., 2020), which can leave users to navigate complicated account restoration and appeals processes (Are, 2024; Gray and Stein, 2021). The variation in available information about moderation processes reflects a common characteristic of carceral logic, wherein ultimate determinations of guilt and punishment are executed with little or no transparency.
Several dating apps encouraged users to contact law enforcement, thereby merging communication technologies with the possibility of carceral punishment (Lane and Ramirez, 2024) and demonstrating companies’ understanding that criminal activity was outside their liability or jurisdiction. For instance, Coffee Meets Bagel, eHarmony, Grindr, HER, Tinder, and WooPlus suggested contacting various agencies, including some tailored to Canada—like the Canadian Centre for Cyber Security—and others defaulting to U.S. authorities, such as the Federal Bureau of Investigation. Hily framed such action as dutiful: “Reporting criminal activity by another user may help prevent a perpetrator of a rape, assault, or financial crime from hurting or continuing to hurt others.” Users were encouraged to determine when to involve authorities, with HER saying, “If the situation escalates to the point where you feel it necessary to involve authorities, you should report it to your local police station.” Both Tinder and Plenty of Fish integrate with Noonlight (available only in the United States), a third-party emergency service app allowing users to “discreetly trigger emergency services if you’re feeling uneasy or in need of help” (Tinder). Like other safety technologies, third-party services place the onus on potential victims to protect themselves (Bivens and Hasinoff, 2018). They also invoke forms of justice akin to carceral feminism, wherein individuals at ease with law enforcement, such as those who are rarely subject to the classist, racist, or transphobic tendencies of policing (Stanley, 2021), feel as though they are doing the right thing by flagging concerns to the police.
Totalizing surveillance
Policing was intertwined with calls for surveillance across safety materials, as users were encouraged to share data in exchange for safety solutions. Beyond encouraging users to upload date information to third-party apps like Noonlight, Tinder and Plenty of Fish offer a “Share My Date” feature that allows users to share the time, location, and their match’s profile with others to “[make] it even easier for you to let your friends or family know when, where, and with whom you’re going on a date” (Plenty of Fish). Taimi asserted that more information was better: “Don’t refrain from being specific about the time and place of your date to someone you trust—if something happens, they’re one text away from rescuing you from a bad situation or from alerting authorities about anything worse.” Although it has been a longstanding practice for online daters to inform others when and where they will meet someone (Couch and Liamputtong, 2007), these features and advice normalize sharing exact data about other users with those outside the app—as well as with app companies—and perhaps more details than they would normally. Again, since some users may require greater privacy than others, the circulation of a user’s face and detailed information could be particularly harmful. Such a seamless feature that demands intensive data exchange may inhibit users’ attempts at managing identifiability, for instance, stymying the practices taken by some teachers, counselors, or others in the public eye to limit their visibility on Grindr and avoid those who may view their roles as incompatible with their sexual expression (Blackwell et al., 2015).
Further, peer moderation necessitates watching others and questioning their motives. Silver Singles emphasized, “it is completely acceptable for you to be skeptical” of other users. Many apps, including Badoo, Bumble, Dil Mil, eHarmony, Grindr, Hily, Lex, Scruff, Seeking, Taimi, and WooPlus, recommended that users investigate their matches via “online vetting” (Bumble), which could include “a reverse-image search of the person’s photo” (Hily, Taimi), “look[ing] up phone numbers” (Scruff), “typing your match’s name into a search engine” (eHarmony), and searching for their social media profiles (eHarmony, Hornet, Scruff, and Seeking). Seeking reassured users, “personal research is not creeping,” instructing: “Google them! The best defense is a good offense.” While online daters already tend to perform background searches on potential dates (Gibbs et al., 2011), these suggestions encourage intensive data gathering as self-protection.
Apps can mandate that users provide certain information for the sake of identity verification, such as their mobile number or social media accounts, which may be used to register or log in. Raya stated that mandatory registration credentials, including a person’s “Instagram handle, contact information, and phone contacts, among other things,” are “used to verify your identity and to evaluate your real-world connection to existing members, among other things.” While these credentials may be justified through Raya’s premise as a members-only app, BLK, Christian Mingle, Hornet, and Match explained that their companies gain access to a range of data when users choose to link their social media accounts to the dating app. Tinder and Muzz encouraged users to upload government-issued identity documents (e.g. driver’s licenses and passports). Seeking offered background checks to international users through the third-party service TC LogiQ, thereby introducing another data intermediary, while Tinder explained that U.S.-based users can access BrightCheck, a third-party service that “offers criminal history, anti-catfish, and social media checks” and “aims to provide essential information, enabling you to make safer and more informed choices for your dating experiences.” Background checks raise concerns over faulty data, the disproportionate rates at which certain populations (e.g. racialized people and transgender people) are targeted by police, and the stigma associated with criminal activity (Corrigan, 2021). Further, their use and the data they draw upon are outside the control of the user being investigated, unlike verification options wherein users choose whether to provide documentation or link accounts.
Increasingly, dating apps prioritize visual forms of identity verification. One common feature, which compares a user’s live selfies to their profile photos and results in a badge on their profile, is known as “photo verification” (BLK, Bumble, Match, and Tinder) or “selfie verification” (Muzz and Salams). Hornet presented this feature as “the best indicator showing that you are talking to a real person” while Bumble told users that it gives potential matches “confidence that you’re the same great person IRL [in real life] as you are in your profile photos.” Bumble also advised, “before you meet, ask your match to get verified.” Associating identity verification with safety ignores evidence that being identifiable online does not preclude users from acting in aggressive or harm-inducing ways (Rösner and Krämer, 2016). In turn, research attests to how anonymity does not necessarily enable harmful behavior and can be a positive and protective affordance for some users, especially when afforded in spaces with a shared sense of community (Kennedy et al., 2016). Although verification often involves facial and biometric recognition technology, which companies often refer to as artificial intelligence (AI), these technologies are often less effective for racialized people and those under-represented among technology developers (Buolamwini, 2023). Certain users may have more difficulty getting verified, depending on their physical characteristics or ability to take selfies—a task that can be difficult for people with disabilities (Cavallero, 2021). While such features are often promoted as differentiating humans from bots, users may have difficulty proving their humanity.
When biometric technologies work for racialized and under-represented individuals, they are often used for policing and surveillance (Benjamin, 2019; Brayne, 2021). Tinder explained that “face geometry data” is used to “keep other members safe” and may be accessed in “preventing, detecting and fighting against violations of our Terms” as well as “fraud and other illegal or unauthorized activities”—and, in these latter cases, data may be shared with law enforcement. Several other apps, including Badoo, Bumble, Hinge, OkCupid, and Plenty of Fish, also noted that data collected during selfie verification may be shared with affiliates, third parties, and law enforcement. Some companies included disclaimers acknowledging that the verification process may yield errors, such as Tinder cautioning, “it’s important to note that this feature doesn’t guarantee … the safety of a particular user.” Even so, verification badges are often incentivized (e.g. rewarded with Tinder’s in-app currency) and viewed as symbols of status and authenticity with potential to sway users’ opinions of others (Caplan, 2024).
In-app video calling is another feature that companies regularly endorse as a safety measure. Introduced to many apps during the COVID-19 pandemic as a means of dating at a distance (Duguay et al., 2022), this feature was now framed as a necessary step in the courtship process: “a video call is a good idea before meeting in real life” (Zoosk). Muzz, which markets this feature as valuable to more traditional dating cultures, offers “Dad verification” through which a match can “directly speak to her parents via video call and get that seal of approval to move things forward,” earning a “Dad Verified badge.” Several companies asserted that users may be at risk if a match does not agree to a video call, including Coffee Meets Bagel: “Be wary of anyone who will not meet in person or talk on a phone/video call—they may not be who they say they are.” This framing ignores the large volume of personal information that can be gleaned from video streams in domestic spaces (Tran, 2024), making this feature a privacy risk for some users. Some individuals may also have reasons to avoid making video calls from their residence, such as if one’s living situation does not enable them to speak privately or freely. Video calls also subject individuals’ homes and material possessions to scrutiny, which may perpetuate stigma relating to socioeconomic status, an issue already faced by online daters who feel pressure to display the same financial status as others (Kozma, 2018).
Automated content filters operate as a form of constant surveillance in users’ in-app chats. Plenty of Fish, OkCupid, and Tinder have “Safe Message Filters” that use “automated tools to scan interactions” among users “to find instances of harmful or illegal behavior,” which can result in “removing content, banning the user and/or notifying the appropriate law enforcement resources.” Beyond automated scans that may be imperceptible to users, some apps present users with moderation-oriented prompts and features, like Tinder’s “Are You Sure?” to encourage reflection before sending a message deemed potentially inappropriate and “Does This Bother You?” to facilitate a recipient’s flagging of problematic content. Some apps filter images, such as Scruff’s “Sensitive Content Filter” that “gives you control over how you view content that is limited or restricted by app stores” and Bumble’s “Private Detector” that blurs “lewd images.” While Scruff’s filter may enable a workaround for app store prohibitions on nudity and Bumble’s is framed as protecting users from nudes, both filters draw attention to sexual imagery over, for instance, violent content and blur it by default, despite nude photos not always being perceived as unsafe or unwanted when exchanged among consenting adults (Paasonen et al., 2019). Badoo boasted that its text filter has been “fine-tuned through over 2 million messages and recognises 100 languages,” pointing to the volume of user data implemented in training these tools. Despite such training, speech detection tools are often inaccurate and unable to gauge complexity, and they may lead people to avoid discussing complex topics (Gorwa et al., 2020). While constant automated surveillance may deter certain behaviors and eliminate policy violations through “endless interventions” (Andrejevic, 2020), it may not aid in assessing safety if it leads to greater self-censorship, obscuring signs of danger that would otherwise surface in chats prior to meeting in person.
Discussion and conclusion
Through an analysis of safety materials, we found that dating app companies tend to frame risks and harms associated with their technologies as individualized problems for which they bear little responsibility. Even so, these companies prescribed how users should preserve their safety, framing experiences of harm as failure to follow specific tips or guidelines. Since threats to safety were constructed as isolated instances perpetrated by bad actors, users were called upon to police one another through peer moderation, with expectations of user acuity as to what constitutes policy violations or when to contact law enforcement. Such policing involves intensive peer surveillance and justifies verification mechanisms that demand volumes of user data, conflating the exchange of visual and biometric data with the designation of being a safe match. While automated filters promise continuous surveillance, such surveillance does not guarantee safety, whether online or in person. By deterring overt policy violations within the app, surveillance may mask problematic behavior, making it more difficult for users to assess whether a match could pose a threat offline. Overall, these approaches to safety are undergirded by carceral logic that individualizes harm, normalizes surveillance, and seeks to inflict punishment through relatively opaque platform governance processes and, in certain cases, intervention by law enforcement.
The risks posed by dating apps reflect broader social, cultural, and structural violences (e.g. misogyny and racism) that are not the fault of these companies. No single safety feature could address these deeper societal problems, nor is it solely the responsibility of these companies to do so. Moreover, it might not be possible for dating app companies to fully mitigate or prevent the various harms that users can experience, given the challenges that come with governance at that scale (Gorwa et al., 2020). However, governance approaches imbued with carceral logic align with its longstanding use as a “tool of white supremacy, colonialism, heterosexism, and the numerous forms of heteropatriarchal capitalist hierarchy” (Coyle and Nagel, 2022, p. 4) as they implement inequitable measures in the name of safety. Recognizing elements of carceral logic in apps’ safety materials provides an opening for considering other approaches that incorporate alternative justice frameworks into platforms, such as transformative and restorative justice approaches (Hasinoff and Schneider, 2022; Schoenebeck et al., 2021).
Transformative justice approaches focus on addressing conditions that lead to harm in order to prevent it (Hasinoff and Schneider, 2022). They are concerned with broader systemic and structural factors that can enable harm and aim to transform the underlying social conditions that produce harm. One of the most evident takeaways from our analysis is that dating apps’ materials do not always recognize existing user safety strategies or prevalent dating practices. Since users are likely to incrementally disclose intimate personal information, safety materials could advise how to do so cautiously and in ways that consider varying identities and contexts across diverse users. Similarly, since users employ multiple apps (e.g. Snapchat and WhatsApp) in this process of developing intimacy, materials could help users assess their options and associated risks. These approaches move away from abstinence-only guidance, like not sharing personal information or not moving off the app, and toward digital literacy and harm reduction approaches related to actual dating practices. Further, with many users needing to manage identifiability to preserve their own safety (e.g. avoiding being fired or being seen by abusive ex-partners), apps’ identity verification policies and mechanisms could allow for more flexibility. This could include, for example, requiring minimal information to verify a user to the app company while allowing pseudonyms or ambiguous self-presentation among users, or blurring a user’s background on a video call by default, thereby promoting privacy-preserving measures as a norm. Such options recognize that the risks of identifiability are not the same for all users and enable individuals to judge how to self-disclose and to what extent.
Underlying these suggestions, however, is the broader indication in our analysis and across scholarship that safety must be a mutual and collective endeavor among users, technology developers, and institutions. For instance, institutional regulation plays an important role since legal, policy, and accountability frameworks can influence how platforms define and operationalize safety (Suzor, 2019). Considering, specifically, interventions at the level of platform governance, platforms that afford anonymity alongside well-functioning community-based moderation tend to foster a shared sense of responsibility and accountability among users (Kennedy et al., 2016). To this end, we look to our sample’s exceptions, such as HER’s call for community moderators who “believe in creating a social space where women can meet and talk to one another openly, positively and in a supportive manner.” This model could be paired with resources supporting moderators’ labor and guidelines ensuring it does not lapse into mere surveillance, such as through restorative justice approaches that engage malicious reporters in accountability processes, including education, so they can understand the harm caused and prevent it in the future (Bailey and Cole, 2021). Badoo’s and Bumble’s attempts to directly address identity-based targeting and harassment are also stronger efforts to bring together a user community—through recognition of the potential for othering across difference—than guidelines that ignore such issues. While community is a fraught and complex notion, safety materials that aim to foster mutual understanding could be more effective than those that pit users against each other by encouraging suspicion and surveillance.
Since dating apps have the challenge of bringing diverse users together across multiple positionalities and experiences, our analysis underscores a need for governance approaches coupled with design decisions that accommodate a pluriverse: “a world where many worlds fit” (Escobar, 2018, p. xvi). Creativity, autonomy, and specificity underpin pluriversal designs, which respond to communities’ needs. Dating app companies can take a step toward alternative models of justice through governance approaches and attendant designs that: are attuned to users’ existing needs and the practices they use to satisfy them; unite users in being accountable to each other and responsible for mutual care rather than self-preserving surveillance; and enable user autonomy through a wider range of safety options that can be chosen rather than imposed upon users unilaterally. To this end, we echo scholars’ calls for greater transparency in how user data is collected and used (Stardust et al., 2023), allowing for more clarity about which data is necessary for safety purposes and which is extractive and reflective of commercial and state overreach.
Undoubtedly, dating apps require safety features, and users should be informed of the risks associated with these technologies as well as the options for addressing them. However, the risks that are focused upon, the features that are developed, and how well these features protect (or compromise) the safety of different users are intertwined with developers’ ideologies, values, priorities, and expectations of users. Restorative and transformative justice frameworks offer alternatives to carceral logic, as they recognize structural conditions that lead to harm while prioritizing accountability and community involvement. While these approaches may force dating app companies to reckon with whether their current operating models are at odds with safety efforts, they also enable the imagining of alternative policies, guidelines, and technology designs that could be taken up by grassroots or community-driven developers. While our study focused on safety materials, it paves the way for future research to explore how the logic undergirding app governance extends into interfaces and how users heed, ignore, or resist that logic.
Acknowledgements
The authors would like to thank the paper's anonymous reviewers as well as those who provided feedback during presentations of this study's preliminary findings, including members of LabCMO and the Association of Internet Researchers (AoIR).
Ethical approval
Ethics approval was not required.
Authorship
Christopher Dietzel and Stefanie Duguay were involved in data collection, primary analysis, and initial paper drafting. All the authors were involved in project and paper conceptualization, secondary data analysis, paper refinement, synthesis, and editing.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by funding from the Social Sciences and Humanities Research Council of Canada (SSHRC ref: 10.13039/100021638) and the Fonds de recherche du Québec – Société et culture (FRQSC ref: B3Z—331853).
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data availability statement
Data from this study is available upon request to the corresponding author.
