Abstract
This article examines how sexual content creators manage their (in)visibility as they navigate the constraints of online hyper(in)visibility. So far, research has focused on how creators more generally attempt to enhance their visibility through social media platforms. Yet, especially for sexual content creators, platform visibility is not straightforward. These creators are hyper(in)visible: facing simultaneous risks of erasure and of public scrutiny, harassment, and stigmatization. Drawing on 27 interviews with creators—online sex workers, LGBTQ+ activists, and sex educators—this research outlines the harms of hyper(in)visibility and creators’ tactics for strategic invisibility. These interviews showcase how hegemonic norms make socially marginalized content both hypervisible and invisible, as well as how these dynamics are reproduced and institutionalized on platforms. Because they transgress hegemonic sexual norms, the interviewees discuss risks of platform surveillance, outing, doxxing, harassment, and capping. Yet, within the confines of platforms, these creators find ways to manage and resist these risks and to engage positively with strategic invisibility. Taken together, the analysis shows the need to complicate the notion of creator visibility and to sensitize research to how creators seek particular types of visibility, as well as strategic invisibility.
Introduction
Research on the creator economy has primarily focused on the benefits of online visibility and the challenges of achieving it (Duffy & Meisner, 2022; Glatt, 2022). Online visibility has the potential to provide representational opportunities (Poell et al., 2021, p. 85). Moreover, it has been widely documented how creators attempt to monetize audience visibility through sponsorship deals, direct monetization, or subscriptions (Bonifacio et al., 2021; García-Rapp, 2017; Glatt, 2022). Monetization involves self-promotion across multiple platforms: for instance, sexual content creators on OnlyFans need to attract patrons via cross-platform posting to be profitable, since the subscription platform does not offer opportunities for discovery (Uttarapong et al., 2022). Yet, being visible on a public platform is both a resource and a risk (Samermit et al., 2023, p. 5632). As platforms structure features like discoverability, creators do not have full control over their visibility. Platforms, through algorithmic curation and content moderation, shape who and what can become prominently visible. Meanwhile, end-users tend to focus collectively on particular viral creators, which can take a dark turn: as we will show, this type of visibility generates vulnerabilities (Duffy et al., 2023). Here, sexual(ity) content creators explain how they experience and deal with these vulnerabilities.
Extensive research has highlighted how platforms institutionalize biases and shape visibility, and how creators of all stripes develop strategies to remain visible. On large commercial platforms such as Instagram, TikTok, and YouTube, there is a constant “threat of invisibility” (Bucher, 2012, p. 1171; Gillespie, 2018). Platforms not only rank content and creators, placing most far below the top of the feed, but also remove, shadowban, and demonetize, often replicating offline inequalities in the visibility afforded to marginalized communities (Cotter, 2019; Haimson et al., 2021; Noble, 2018). In response, creators have to perform considerable additional labor to maximize their visibility within these systems. Through “aspirational labor,” “visibility games,” and “algorithmic gossip,” creators attempt to reap the benefits of online visibility (Bishop, 2019, p. 2590; Bucher, 2017; Cotter, 2019, p. 899; Duffy, 2016, p. 443). When made invisible, content creators face both financial and emotional consequences (Are & Briggs, 2023; Stardust et al., 2023). In those situations, creators develop tactics and insights to avoid detection or to safeguard their income after deletion (Duffy & Meisner, 2022; Hamilton et al., 2022).
Yet, not all creators try to maximize their visibility. This article complicates understandings of online visibility by highlighting the experiences of a diverse set of “sexual(ity)” content creators, for whom visibility comes with substantial risks. We use sexual(ity) as an overarching term to refer to sex workers, nude and sexual content creators, as well as those whose content involves sexual expression and education. The term highlights the heightened challenges these creators face, recognizing that they engage in different forms of labor with varying levels of stigma, without morally separating those who express themselves sexually from those who perform sexual labor. All such creators build sexuality-related brands and personas to connect with follower communities (Johnston, 2017, p. 78). Like other online workers, they lack labor protections and face precarious working conditions. On top of this, they face additional precarity through stigmatization and its consequences, especially if they are sex workers (Easterbrook-Smith, 2022, p. 3). This added stigmatization strongly shapes the online visibility experiences of sexual(ity) content creators, who rely on visibility for representation and remuneration, but who face specific risks when visible. Online publicness can non-consensually out queer users, sometimes with devastating consequences (Cho, 2018, p. 3185). Doxxing (the publishing of personal or identifiable information) and capping (the non-consensual recording and spreading of content), both frequently used to harass sex workers, further heighten the risks of visibility (Jones, 2016). Greater digital exposure is a risk for online sex workers, who may face harassment from customers and over-moderation by platforms (Hamilton et al., 2022, p. 11).
Sexual(ity) creators are also disproportionately affected by the threat of invisibility through platform moderation practices, which have historically focused on sexual content (Gillespie, 2018, p. 26). In general, platforms appear to algorithmically demote nude and sexual content (Blunt & Stardust, 2021). However, especially since the 2018 enactment of FOSTA-SESTA (Fight Online Sex Trafficking Act/Stop Enabling Sex Traffickers Act)—a US anti-online trafficking (and de facto anti-sex work) law—censorship of online sexuality has intensified (Are, 2020, p. 742; Blunt & Wolf, 2020, p. 117). Platforms over-moderate sexual(ity) content as a result. Economically, sexual(ity) creators have particularly borne the brunt of FOSTA-SESTA. Their advertising opportunities have been removed (Blunt & Wolf, 2020), and they face shadowbanning, account and content deletion on major social media platforms (Are, 2021; Blunt & Stardust, 2021; Coombes et al., 2022), as well as the restriction of their sexual expression on adult platforms (Stegeman, 2021), platform loss or de-platforming (Hamilton et al., 2022), and the withdrawal of payment processors (Blunt et al., 2021; Stardust et al., 2023).
Sexual(ity) content creators therefore share many aims and tactics with other creators, but their relationship with online visibility is more fraught. Considering these simultaneous risks of visibility and invisibility, we examine sexual(ity) creators’ visibility management. We focus on their visibility practices across platforms, from social media such as TikTok and Instagram to specific sexual platforms. We explicitly include a variety of sexual(ity) content creators to illustrate shared struggles and grounds for solidarity. We show how online (webcam) sex workers, sex educators, and LGBTQ+ activists face similar issues and develop useful modes of resistance.
To gain insights into the different ways in which this group manages visibility, we draw on 27 interviews with a range of sexual(ity) content creators, who highlight the risks of online hypervisibility and their tactics of strategic invisibility. Together, these risks and modes of resistance illustrate the harms of visibility alongside the risks of invisibility, complicating a binary understanding of the effects of online visibility. As such, the article challenges the touting of visibility “as unilaterally beneficial to marginalized populations” (Smilges, 2022, p. 24). Toward this aim, we build, in the next section, on research on the oppressive logics of hyper(in)visibility as developed in Black, queer, and fat studies (Gailey, 2014; Johnson & Boylorn, 2015). While our interviewees are not all Black, queer, and/or fat, we draw on this literature because it has clearly exposed hegemonic norms of (in)visibility. The work in these fields shows the complex ways in which visibility and marginalization interact. The concept of hyper(in)visibility is especially useful to capture the simultaneous harms of hypervisibility and hyper-invisibility.
Hyper(In)Visibility
Hyper(in)visibility has primarily been theorized in offline contexts. It highlights the ways in which socially marginalized and stigmatized groups are simultaneously hypervisible and hyper-invisible in public spaces. In the context of fat women’s experiences, Jeannine Gailey (2014) makes clear that “hyper(in)visible” “means that a person is sometimes paid exceptional attention and is sometimes exceptionally overlooked, and it can happen simultaneously” (p. 7). Online, hyper(in)visibility also frames fat creators’ experiences of harassment (Cotter, 2022). Discussions of hyper(in)visibility remain relevant in fat studies where, for instance, the invisibility and intolerability of fat bodies in certain spaces (Harjunen, 2019), the interplay of stigma, design, and hyper(in)visibility (Stevens, 2018), or the framing of the very erasure of fat itself as a “solution” (Kyrölä & Harjunen, 2017) are theorized through this concept.
The concept of hyper(in)visibility has an especially long and broad theoretical grounding in Black and queer studies. Hegemonic marginalization interacts with visibility in multiple, dialectical, and (seemingly) contradictory ways. Minoritized groups are generally underrepresented in media and politics, and their experiences are overlooked. Mediated and digital visibility is structured along hegemonic power dynamics (Hobson, 2016). Simultaneously, hegemonic standards are represented as neutral: “Whiteness and heterosexuality seem invisible, transparent [. . .] they are simply norms” (Reddy, 1998, p. 55). Because of the pervasiveness of normative standards, these norms and the structures upholding them become invisible. The “unmarkedness” and “blind spot” of Whiteness represent the unfair distribution of power and privilege under White supremacy (hooks, 1997; Richards, 2017, p. 41). By contrast, deviation from these hegemonic norms is explicitly made visible as difference. As Fanon (1967) argued, a Black person is “overdetermined from without” (p. 116), experiencing constant visibility as a Black person to White gazes. Similar norms determine what is recognized, and hypervisible, as sex work: namely, work “done by queers, people of color and subaltern bodies,” whereas forms of sexual labor performed, for instance, within “heteronormative courting [. . .] are rendered invisible through the privileging of the bodies that perform it” (McNamara et al., 2015, p. 49). Marginalized bodies are hypervisible not in their own right but in their stereotyped form (Johnson, 2019, p. 208). Similarly, Smilges’s (2022) concept of queer silences shows how society over-fixates on queer bodies while underrepresenting their voices. This dynamic describes how deviation from White, cis, heterosexual norms is noted exactly for its difference.
Hyper(in)visibility unifies this simultaneous experience of hyper-invisibility and hypervisibility of marginalization. Historically, minoritized groups are “trapped between regimes of invisibility and spectacular hypervisibility” (Rusert, 2017, p. 98). Online, racist demoting co-occurs with surveillance-based centering (Benjamin, 2019, p. 82). At first glance, this seems contradictory. Yet, these visibility interactions are two sides of the same coin. For instance, “Black women have on one hand always been highly visible, and so, on the other hand, have been rendered invisible through the depersonalization of racism” (Lorde, 1984, p. 42). Oppressive structures depersonalize and over-scrutinize in the same breath. The concept of hyper(in)visibility captures this contradiction, describing “a space where bodies are visible, but in limited ways that tend to mark those bodies even more invisible” (Johnson & Boylorn, 2015, p. 22). Hyper(in)visibility describes how non-normative identities are at once overlooked, ignored, underrepresented, and subject to intense scrutiny, surveillance, and stereotyping. It is maintained on both institutional and interpersonal levels (Gailey, 2014, p. 7). What the concept of hyper(in)visibility primarily highlights is how hyper-invisibility and hypervisibility perpetuate harm at once.
Notably, in Black and fat studies, being “overdetermined from without” (Fanon, 1967, p. 116) and the inability to manage how certain identities are seen are central to the harms of hypervisibility and hyper-invisibility. Black and fat bodies are read in these ways, and there are few options for outwardly presenting otherwise (Steinbugler, 2005, p. 437). Here, our focus is on actors who can engage in visibility management, which entails making strategic decisions about where and how one becomes visible. While participants in this study have varying visible embodiments, they also have the option to share or hide their status as sexual(ity) content creators. To manage risk and representation, it is not always necessary or desirable to make a particular identity visible (Smilges, 2022, p. 22). The status of sexual content creator is, in Goffmanian terms, a “discreditable” stigma rather than a “discredited” one, which is always visible (Goffman, 1986, p. 5). This allows for more individualized modes of risk management. It also means that creators can agentically engage with dominant regimes of visibility.
This type of stigma navigation can be described through the notion of in/visibility management, which, as Ham and Gerard (2014, p. 308) show, is used by sex workers to enhance both income and mobility in and outside sex work. This refers to the “containment of worker visibility” (Ham & Gerard, 2014, p. 302). Other overpoliced populations, such as undocumented migrants, also engage in such strategic invisibility for their own well-being (Villegas, 2010, p. 159). In relation to sex work, this tactic offers a mode of perseverance within vulnerabilizing social and legal contexts (Ham & Gerard, 2014, p. 301). We demonstrate how this concept is also used in a variety of online spaces by socially stigmatized sexual(ity) creators.
Hyper(in)visibility and in/visibility management theories help us demonstrate how common understandings of visibility as necessarily positive and invisibility as necessarily negative unravel online. Groups subjected to hyper(in)visibility experience the pain of invisibility as well as the risks of visibility (Steinbugler, 2005, p. 429). This makes it necessary to engage in a “dialectic of calculated visibility and strategic invisibility” (Rusert, 2017, p. 26). Earlier research on “the burden of representation” (Mercer, 1990) or on “hypersexuality” (Miller-Young, 2010) already highlights how being visible in certain ways and invisible in others can enhance marginalization. This article further challenges the unilateral understanding of visibility on platforms to gain insight into how sexual(ity) content creators exercise their agency by strategically managing visibility. Complicating the idea that visibility is positive and invisibility negative, we highlight two types of visibility alongside the more often discussed experiences of involuntary invisibility (e.g., over-moderation) and voluntary hypervisibility (e.g., promotion practices): experiences of involuntary hypervisibility and strategies for voluntary invisibility. Involuntary hypervisibility refers to experiences in which sexual(ity) creators are scrutinized by platforms, unintended audiences, and hate groups. These harms of hyper(in)visibility online have, for instance, already been noted in the body positivity movement on TikTok (Cotter, 2022). While there is ample research on these forms of harassment (Cotter, 2022; Duffy & Hund, 2019; Perrett, 2021) and moderation (Are, 2020; Coombes et al., 2022), we specifically want to theorize and research them in terms of visibility management. This follows the conceptual move made by Duffy et al. (2023), who highlight the vulnerabilities, such as harassment and surveillance, associated with digital visibility (p. 3).
Like these authors, we combine findings about harmful audience behavior and platform surveillance, usually studied separately, to show how both can cause harm to marginalized groups. This framing allows us not just to discuss sexual(ity) content creators as potential victims of harassment or excessive moderation, but also to examine how they resist platform visibility logics.
This feeds into the last category of visibility, which has received little attention: voluntary invisibility. Sometimes creators make strategic use of invisibility to hide themselves from platform moderation or from hateful and unproductive audiences. Unlike previous work that has focused on the reduction of content, self-censorship, filtering, or the ignoring and blocking of hate (Duffy et al., 2023), we discuss ways in which creators actively try to limit their visibility. Creators have been shown to move smaller, more positive, and more rewarding audiences to paywalled platforms (Glatt, 2023). Some creators in our study also use this tactic to limit their visibility. Moreover, as we will demonstrate, sexual(ity) content creators frequently attempt to limit their platform visibility, not just to minimize risk but also to increase their revenue and create positive online environments.
Methodology
To gain insight into the experiences and tactics of hyper(in)visibility across creators, we draw on two sets of qualitative interviews. The first set focuses on the labor experiences of European webcam performers based in the United Kingdom, the Netherlands, and Romania. The second set discusses sexual content creators’ experiences with online harassment and de-platforming in the United Kingdom, Italy, the United States, Ireland, and Australia. The interviewees in both sets are considered creators to highlight that sexual content is also cultural content, with the aim of destigmatizing this type of work and countering sex exceptionalism (Nayar, 2021, p. 160). Moreover, while these sets of interviews originate from separate research projects, they are related: webcam performers often raised experiences of harassment and de-platforming in descriptions of their work, whereas creators discussing harassment detailed how it shaped their working experiences. Bringing together research on different types of sexual(ity) content creators makes it possible to situate sex worker rights in wider conversations on online labor, counter to the usual sensationalized perspectives (Shaver, 2005, p. 103). The original aims of both research projects centered on identifying potential ways of improving online working conditions for sexual(ity) content creators and communicating these to policymakers, advocates, and platforms. In discussing visibility, we hope to turn participants’ insights about hyper(in)visibility into advocacy goals. We also specifically combine data from two samples to illustrate how struggles experienced by sex workers and other creators are often shared and require collective solutions. As this article highlights the harms of visibility, we want to make clear that the results should not be interpreted as a case for limiting these creators’ visibility altogether.
Censoring sexual content is not helpful, especially for these creators, as we have also argued elsewhere (Are & Briggs, 2023; Stegeman, 2021). Rather, we highlight that creators should have more opportunities to control their visibility online. Some of the necessary solutions, as shown in this article, concern not just the right to be seen, heard, and represented online, but also the right to be forgotten or hidden.
Regarding the first set of interviews, the first author of this article conducted interviews with 67 webcam performers in the Netherlands, Romania, and the United Kingdom from the summer of 2021 until the autumn of 2022. Of these, the five interviews from each country that comment most explicitly on issues of hyper(in)visibility are analyzed here. The entire project received ethical approval from the ethics committee at the University of Amsterdam [2021-AISSR-13723]. Furthermore, the combination of the two sets of interviews in this study was discussed with data protection officers and was in line with prior consent given by all participants. Data from each project were only examined by the author who initially conducted that study.
All interviewees for the first project were (recently) involved in webcamming as work, although many also did other sex work or content creation. Interviewees were recruited through online posts on Twitter, in sex worker networks, on webcamming platforms and forums (always only with the permission of forum owners), and through existing community connections and snowballing. Compensation was offered to include workers who cannot afford time away from work and to show appreciation for interviewees sharing their time and knowledge (Bloomquist, n.d.). These semi-structured interviews focused on labor circumstances and experiences. Yet, as semi-structured interviews allow (Galletta, 2013; Kallio et al., 2016), topics ranging from COVID-19, parenting, and financial discrimination to, crucially, negotiations of visibility were also explored. The guide for these interviews was piloted, tested, and revised with a three-person paid advisory board of experienced webcam performers working on various platforms and in different countries. This advisory board also reviewed the results and their creation and dissemination. In addition, results from this research project, including these findings about visibility, were presented to interested participants in flexible, accessible online presentations. A total of 13 of the 67 participating performers attended such presentations and offered further feedback on methodologies, results, and ways to use them. In anonymous surveys, these participants all indicated that they felt the findings reflected their experiences and seemed important to them, at times offering suggestions for further research areas.
For the second set of interviews, the second author carried out 12 semi-structured, virtual ethnographic interviews via the videoconferencing software Zoom with Instagram and TikTok content creators, who had publicly shared that they believed they had been de-platformed after malicious user reports. As part of the ethnographic process, the second author was deeply immersed within the communities and experiences appearing in this article, meaning that while no co-researching practices were followed as such, the interview guide was grounded in shared issues previously raised in said communities. Ethnographic interviews rely on participants’ description of spaces, actions, or events and on researchers’ ongoing analysis of data through field notes, observation, and participation in research settings (Roulston, 2021). For the second author, this meant observing and recording her own ongoing experiences of censorship, following perceived flagging on Instagram and TikTok (Are, 2021). To take part, participants had to have experienced content or account deletions on Instagram and TikTok and have received negative comments on their posts, mirroring other instances where user reports triggered de-platforming (Perrett, 2021; Silverman & Fortis, 2023).
This project was approved by the ethics board of the second author’s institution, which recognized participants’ wishes either to be credited for their expertise or to be kept anonymous for their safety. This was specified in the application because most participants wanted to be credited and named for sharing their stories, which we respected as a way of crediting who deserves to be seen as an algorithmic expert, in line with research by Bishop (2019). Regardless of their choice, all interviewees were paid £50 for their expertise. For those who asked to remain anonymous, we chose pseudonyms, identifiable in this article through an asterisk (*) following their names. All names with an asterisk, including in the data gathered by the first author, are pseudonyms.
For data protection and anonymity reasons, we each analyzed only our own sample. In both cases, this involved transcribing the interviews and analyzing the transcripts thematically. Thematic analysis is a qualitative method that allows researchers to identify, analyze, and report themes or patterns within data (Braun & Clarke, 2021). The interviews from the two projects were coded through an inductive and generative process, in which themes related to (in)visibility were identified that we could not have foreseen beforehand (Braun & Clarke, 2021, p. 332).
The 27 interviewees lived in the United Kingdom, Ireland, the United States, Australia, Italy, the Netherlands, and Romania. As such, the experiences discussed here specifically concern Global North creators, and our findings do not necessarily apply equally everywhere. The focus of this article is on (in)visibility management by sexual(ity) creators; however, participants’ other identities inevitably also play a role in this. The sample of 27 is primarily White, with one Black and one mixed-race creator. These 27 creators self-described as 20 cis women (4 identifying as queer), 2 trans men, 2 cis men, and 3 non-binary creators. Especially given the intensification of hyper(in)visibility dynamics for Black populations (Johnson & Boylorn, 2015), this research cannot do justice to all of these important differences. As White cis researchers from Global North contexts, we inevitably cannot discuss certain experiences adequately. At times, we do discuss participants’ demographic characteristics, as interviewees themselves describe how these shape their relationship with platform visibility.
There are further limitations to this study. For one, all interviews were conducted in English, Dutch, or Italian. In certain areas, such as Romania, this limited participation to those speaking one of these three languages. Moreover, both researchers were embedded within certain communities in which calls for participants were shared, which has likely influenced the type of creators who joined our study and the platforms they used; the experiences of hyper(in)visibility reported here are thus tied to specific social networks and adult platforms. Finally, since visibility was not the explicit research interest of either study from the outset, but emerged as a significant pattern afterwards, findings came up in varying ways, limiting our ability to ask about exact experiences across the sample.
Corresponding with our aim to complicate the dichotomy between visibility as positive and invisibility as negative, we will focus on analyzing the experiences of involuntary hypervisibility and strategies for voluntary invisibility. These two categories structure the two sections of the analysis (see Figure 1).

Figure 1. Distributed examples of creators’ experiences of involuntary hyper(in)visibility and strategies of voluntary hyper(in)visibility. The two marked areas concern the examples discussed in this article.
Experiences of Involuntary Hypervisibility
While online sex work is often put forward as a “safer” option for work (Jones, 2016), the potential of online hypervisibility presents its own set of risks. As Lucy*, a webcam performer and creator from the United Kingdom, explains: “I actually spoke to quite a few people who said they would rather escort than be online. Because at least when I’m escorting it’s private, whereas as soon as you’re online, people can see you.” It is the “seeing you” that is especially risky for sexual(ity) content creators, who face social stigma in both online and offline environments, especially when these environments become connected. This type of context collapse (Marwick & boyd, 2011) is especially an issue because sexual, non-normative content can and does elicit negative responses: “Many people in my small town in Italy didn’t take it very well” (Ale, non-binary activist and creator). While visibility to respectful, supportive, and paying audience members might be sought after, hypervisibility to unintended audiences and hate groups is not desirable.
However, it is not just hypervisibility to audiences that is problematic; hypervisibility to platforms can be equally damaging. Major platform companies, wary of losing advertisers or mainstream audiences, or of being prosecuted under FOSTA-SESTA, have clamped down on sexual(ity) content (Gillespie, 2018). As previous research shows, this has often resulted in the removal of content and the loss of accounts and subsequent income (Are & Briggs, 2023; Duffy & Meisner, 2022). As Reed (UK-based sex worker, educator, and creator) relates about their experiences on TikTok: “A lot of what I posted that I want to talk about, that I think it’s important to talk about, isn’t allowed on TikTok which is one of the reasons why it’s taking me so long to actually actively get on there, because I’ve had constant posts removed from TikTok, which has sort of pushed me further away from it.”
Similarly, UK-based online sex worker Kate* notes how specific platforms have it out for people creating sexual content: “I don’t use Instagram so much, just because like you know Instagram does not like sex workers at all.” These creators’ experiences show they feel scrutinized and limited by the platforms on which they work.
When sexual(ity) content does remain visible on platforms, it can reach the wrong audience. Here too, governance by platforms plays a role. For instance, Bel (Irish transfeminine “wannabe” creator) calls out TikTok’s algorithm, which Bel sees as designed to spread content beyond its intended audience. This is both “a blessing and a curse.” Bel thinks that “bigots react,” when they are not expecting or wanting to see certain sexual content, but still encounter it. Such platform-induced spreading of content can have deeply harmful consequences. Rachel*, a UK-based webcam performer, describes what happened when the managers at her day job found her webcamming account: “They said either you quit the camming and we’ll just go back to normal . . . or you know you could just resign and don’t have any of this on your employment record.”
Platform exposure also opens creators up to harassment by aggressive audience members. Still, platforms tend to do little to protect sexual(ity) content creators against such aggression (Redman & Waring, 2021). As the Dutch webcam performer Nienke* relates, these threats can transform into physical danger when audience members find out “which supermarket you go to” or “where you live.” Creators also express worries about losing control over their content if audiences decide to record it. This is precisely what prevents Mihaela*, a Romanian webcam performer, from growing her audience too much: “I don’t wish to get really popular, I don’t want people to share my posts too much, or I don’t know if they screenshot, record videos, or whatever, or link to my page.” Again, not just users but platforms play a role here as well. As previous research has demonstrated, some webcam platforms record and resell content for profit (Stuart, 2022).
Another damaging form of involuntary hypervisibility concerns the stereotyping of sexual(ity) content creators. This especially applies to Black or trans creators, who detail how audiences sometimes hyperfixate on one aspect of their identities. Kate* (UK online sex worker) noted how hypervisibility as a mixed-race creator grew her audience, but not in a desirable way: “They like fetishized me and it really weirded me out for a really long time. I think it’s weird, like (laughs) for someone to just be attracted to you because . . . like . . . your ethnicity.” Pete*, a British adult content creator, also describes how race and gender impact online success: “but being male, being Black, those things can affect your success in this business . . . erm . . . no matter how good you are.” As these examples demonstrate, the hyper(in)visibility of sexual creators is intensified when it intersects with other socially marginalized identities. The attention certain audiences pay, in this case, to racialized creators heightened their visibility while limiting it to specific features of their identities. This is precisely the dynamic described by Amber Johnson (2019, p. 208), whereby individuals’ stereotypes become hypervisible while their further complexity is made invisible.
Sometimes, audience harassment takes a coordinated form. Through flagging, conservative platform users can contribute to the over-moderation of sexual accounts (Are, 2023). Reed describes how groups target sex workers: Someone sent me an account via Twitter that was of a Twitter account basically very openly saying, “We are against sex workers being on Instagram, our main objective is to take sex workers off of Instagram.” And on their Twitter feed this person tweeted, “Right, we found this sex worker on Instagram, here is the link, everyone report.”
Lucy* believes a similar campaign led to the deletion of one of her TikTok accounts: “so I’ve had people mass report me, like, I was on some discord server . . . so that led to people like mass reporting, and I’ve had people who’ve sent their followers to report me so my first account got banned.” Such targeting takes an emotional toll, as Elia, an Italian trans activist, describes an experience of being mass-reported: So people were sharing screenshots of my posts in the Telegram group saying: “Guys, go flag this person,” as well as sharing a series of slurs. All the screenshots were of my face, or just me, so they weren’t criticising my work, or the fact that I did sex education, or that I posted sex toys—it was a personal attack.
These risks are not distributed equally. For example, research shows trans individuals are targeted by hate groups, who actively search them out (Perrett, 2021). In this context, it is imperative to attend to the ways in which creators who are already hyper(in)visible along other dimensions experience varying degrees of hyper(in)visibility through posting sexual(ity) content online.
Taken together, the first step to gain insight into the visibility management of creators is to examine how hypervisibility on platforms exposes creators to harm from both audiences—doxxing, harassment, stigmatization—and platforms, in the form of moderation and surveillance. Pursuing such an inquiry, it is vital to remain mindful, as Brooke Erin Duffy and Colten Meisner (2022) have pointed out, that such harms tend to be distributed highly unevenly across creator populations. Clearly, sexual(ity) content creators are very much exposed to such risks, which are further intensified for particular gender and racialized groups.
Strategies for Voluntary Invisibility
To deal with the risks of hypervisibility, creators pursue strategies of invisibility in creative and innovative ways. While we know that sexualized creators are routinely made invisible by platforms against their will (Are, 2020), our interviewees also make use of strategic invisibility. When hypervisible to surveillance, harassment, and moderation, this strategy provides sexual(ity) creators with some cover. Rather than longing for an imagined and nostalgic pre-platform past, they circumvent hostile audiences and technologies (Berg, 2022, p. 57; Hardy & Barbagallo, 2021, p. 543). And vitally, as we will see, for some creators, such as webcam performers, resisting visibility can also function as a monetization strategy. As with hypervisibility, it is important to observe that there is variation in how creators manage their visibility to platforms, unintended audiences, and hate groups.
To deal with platform over-moderation of sexual(ity) content, creators first manage how they present themselves to become less detectable. To start, this can involve managing the visibility of certain body parts. As activist-creator Ale says: “One day, I posted a picture of my story with me naked, but again, not completely naked, cause I’m well aware of what Instagram allows, I’ve never posted anything completely naked.” Due to the “black box gaslighting” platforms engage in, as described by Kelley Cotter (2021, p. 1227), creators resort to speculative tactics such as those described by Ale. These tactics fit with the self-censoring creators engage in to avoid detection by restrictive moderation algorithms (Hamilton et al., 2022, p. 14). Rob* (UK trans activist and creator) also describes negotiating which body parts to post, specifically to avoid detection by social media platforms: “you can do this, but only if the rest of your body is in it too, or if your face is in it too,” he says about choosing to post specific body parts. These creators pay particular attention to what parts of their bodies are visible to whom in order to protect themselves from censorship. Whether on adult or social media platforms, it is rare that creators feel safe to show all of themselves. The management of visibility becomes embodied through creators’ decisions on how to present their bodies.
Similarly, creators are well aware of how to manage their visibility across platforms. Most nude and sexual content creators simultaneously work on multiple platforms, partly to manage risks when they are inevitably over-moderated (Hamilton et al., 2022, p. 11). As such, they have particular understandings of what to make invisible and where. Webcam performer Lucy* explains, for example, why she streams on MyFreeCams rather than Twitch, since “on Twitch if you wear just a bra you get your stream suspended.” Since moderation is opaque (Gillespie, 2018; Poell et al., 2021), creators, such as Bel, need to figure out for themselves what content to post where: “I’ve noticed [on] Instagram, it’s not as pervasive as TikTok, but I tend to be more cautious because Instagram is more conservative.”
Sexual(ity) creators also manage the visibility of their bodies to deal with the risks of being exposed to the “wrong” audience. To reduce the risk of being outed, multiple webcam performers in the Netherlands explain that they do not show their face in their streams. Lina* (Netherlands-based webcam performer) decides which identifiable parts of her body she shows, also based on the visibility of clients: “my pics are anonymous for sure, but if I do end up showing someone my back tattoo, then I first make sure they also have their cam on and I can see their face.” This strategy corresponds with how people more generally express sexuality online, beyond a labor context. Cropping identifiable features (e.g., face) out of nudes is common practice (Tiidenberg & Van Der Nagel, 2020, p. 130). Safety considerations also inform which platform creators choose to be visible on. As Romanian webcam performer Alice* describes, few Romanian people outside the camming industry are on Twitter, which makes it feel relatively safe. So, “as a webcam performer you want to be invisible, in incognito mode, but it’s okay to be on Twitter.”
Many of our interviewees displayed an acute understanding of platform visibility regimes and their own hyper(in)visibility. They certainly do not buy into platform visibility logics. Instead, as Romanian cam performer Eva* describes in relation to the “coveted” highly visible spots on the homepage of a webcam platform: “I didn’t want everyone to see me, I couldn’t control this when I was on the first page.” While previous research has focussed on how platform curation perpetuates inequalities (Jones, 2015; Poell et al., 2018; Van Doorn & Velthuis, 2018), not all creators want that kind of visibility. Dutch performer Lina* also talks about the “first page” on a webcam platform, which, “if you’re on there it creates different expectations.” While the idea persists that creators always aim for the highest visibility and virality, in the case of sexual(ity) creators this does not necessarily reflect reality.
Contrary to commonly held assumptions about the relationship between visibility and income, our interviewees outline how resisting platform visibility logics can actually help them to monetize their labor. Especially for webcam performers, paywalls mediate levels of visibility to audiences. Earlier work has already discussed how performers use “privates” to try to minimize their digital footprint (Stuart, 2022, p. 184). Yet, creators here outline how, besides protection, invisibility might also enhance income. Curly* (Romanian performer) maintains that she only reveals certain things in “private” for a higher price, “if you really want to know more about me.” The invisibility strategy of not showing one’s face on cam can also be used to generate income: “they [the clients] might even find it more exciting that you’re anonymous, because then they want to know for sure, and precisely then you do not show your face” (Karla*, Dutch performer). All this ties into what Eline* (Dutch performer) describes as “not giving away too much.” She consciously chooses not to be online “that much” and not to reveal too much. By creating scarcity, “everything eventually is worth more.” As such, these webcam performers show how precisely not chasing platform visibility can be beneficial. They subvert common understandings of competition and income in the creator industry by benefiting from partial invisibility.
Crucially, this is also a matter of controlling to whom they are visible. The interviewees explain the importance of the right users consuming their content. They seek audience quality over quantity. Kimmy* (UK performer) explains that it’s all about “knowing your niche and trying to attract the right type of clientele for you.” On niche platforms, or within niche categories, webcam performers are able to cater to a smaller but more invested audience (Jones, 2019, p. 287). Often it is the smaller core audience that constitutes a large part of a creator’s income: “half of my income comes from my regulars who love my shows” (Curly*). Webcam performers focus a lot of their work on maintaining these regulars (Van Doorn & Velthuis, 2018, p. 186). Users and audience members that do not belong to this core, quality audience might be purposefully scared off. This fits with how some of these creators portray themselves on platforms: “I feel I constructed a figure of myself on social media, which is very much in charge, and maybe people are intimidated” (Ale, UK, nonbinary activist/creator). Lucy* even more actively tries to reduce her visibility to undesirable audience members: “I won’t be super nice to people [. . .] which does scare some people away as well, but it keeps you with the right audience, I think.” Eline* similarly describes “driving away” audience members she does not like.
Finally, creators construct niche, more hidden communities to only be visible in productive ways, rather than being hypervisible to harm. This is a strategy observed by Zoë Glatt (2023) in her research on marginalized content creators in the United Kingdom and United States. She describes how some of these creators, confronted with harassment on the open web, choose to cultivate a smaller, but highly supportive community within the intimate paywalled settings of the membership platform Patreon. Similar concerns motivate webcam performers to only engage in explicitly sexual acts in private shows. For example, Kimmy*, a UK performer, has never suffered capping: “I think that’s probably because you’re behind a paywall, if someone is camming, you’re chatting, having a one-to-one time with you.” Lucy* makes a similar point “yeah stuff gets stolen all the time . . . mainly from like free shows or like public . . . that’s another reason that a lot of people only do stuff in privates.” Moreover, for less explicit but more community-based conversation, small spaces are beneficial, as they were for Malli (UK mental health and relationships creator), who describes a feeling of safety and comfort in his self-created, smaller online community: “I think I was a bit naïve, had a bit of an innocence that I built this little community where we talk about mental health, or just going live, and if anyone’s got anything to say we just you know say it, things like that.”
Thus, to understand how sexual(ity) content creators practice visibility management, we need to examine the various types of invisibility they mobilize to minimize the risks generated by platform surveillance and moderation, as well as by groups that might harass them. These tactics range from hiding body parts and certain acts on particular platforms to trying to appear “vanilla” to the platform. Examining how visibility management is related to monetization strategies, it becomes clear that commonly held assumptions on the relationship between visibility and income in the creator economy do not necessarily apply to all sexual(ity) creators. Our interviewees point out that in some cases maximizing visibility is not a priority. Instead, it can work better to limit visibility, create scarcity, and to cultivate a small rather than a large audience.
Conclusion
This study has complicated the dominant dichotomy between online visibility as necessarily positive and invisibility as necessarily negative. Consequently, we question whether increasing the visibility of marginalized groups on platforms is the main requirement for social justice. Instead, our interviews demonstrate the harms of being too visible in certain respects and not visible enough in others: the dialectics of hyper(in)visibility in a nutshell (Rusert, 2017).
Our interviewees develop their own methods of resistance, showcasing the ways in which certain types of invisibility can be productive as a risk management technique and a way to cultivate more productive and lucrative audiences. Sexual(ity) content creators demonstrate that platform visibility logics can be bent and redirected. These creators seek out visibility to the right audiences while trying to avoid the harm associated with being a sexual person online. Like other socially marginalized populations (Ham & Gerard, 2014; Villegas, 2010), they assert their agency and engage in visibility management. Of course, it is important to note that such visibility management is not open to all creators in the same way. As Black and fat studies scholars have pointed out, the inability to manage certain identities is central to visibility harms (Steinbugler, 2005).
To broaden the scope of creator studies and include the concerns of marginalized creators, it is vital to develop a more nuanced understanding of the different modes of online visibility—including involuntary hypervisibility and voluntary invisibility—at play in creator work. While maximizing visibility might be the optimal strategy for some creators, it is not necessarily beneficial for sexual(ity) creators all of the time. Instead of providing avenues for good work and self-expression, platforms often become yet another space where marginalized users are over-policed and over-targeted by abusers. For sexual(ity) creators, the same visibility that could turn influencers into overnight stars may also open them up to job loss, stigma, and swathes of harassment. This can have devastating impacts on their work, personal lives, and well-being (Are & Briggs, 2023).
As our interviews show, this does not mean that sexual(ity) creators are merely victims of hyper(in)visibility. Instead, we observed agentic and intricate visibility management techniques that creators tailor to their specific cases, experiences, and needs. Content creators at times seek out visibility and, like the sexual(ity) creators discussed here, at times limit it. Our interviewees learned these techniques the hard way. There is no specific guidebook to teach creators to avoid malicious flagging or “bad” virality, and it is currently up to creators to pick up the pieces if things go wrong. In light of this study, policymakers and advocates should be pushing platforms to allow creators much more control over their own online visibility.
Acknowledgements
This article was inspired, among other things, by our panel on ‘Platforms And The Precarity Of Creator (In)visibility’ presented at AoIR 2022. We thank the contributors to this panel, especially Brooke Erin Duffy, for their insightful and nuanced discussions of creator visibility. Furthermore, the authors would like to thank Olav Velthuis and the members of the Culture Club (Department of Sociology, University of Amsterdam), as well as the two anonymous reviewers for their encouragement and valuable feedback.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The authors received funding from the Dutch Research Council (NWO) [grant number 406.DI.19.035] and the Engineering and Physical Sciences Research Council [grant number EP/T022582/1].
