Abstract
This study investigates how social media creators navigate and respond to severe cases of ongoing, networked harassment. Drawing on 19 in-depth interviews with Twitch streamers who experienced a form of networked harassment known as “hate raids” on the platform, including three creators who built and shared tools to combat these attacks, this analysis pays particular attention to the nature and coordination of responses to networked harassment and the extent to which creators’ responses are also networked. This study’s findings suggest that, in the absence of communication and technical support from Twitch, creators started ad hoc networks for sharing technical tools, offering strategies for managing audiences during attacks, and providing emotional support for peer creators. Yet these networks of support were unevenly accessed across the streamers interviewed in this study and often existed only temporarily. This study’s findings furthermore indicate that communities on Twitch form primarily around individual streamers, which fosters supportive connections within that streamer’s audience community but limits opportunities for solidarities to form across streamers as a class of creative workers. I conclude by considering more broadly how platform infrastructures can facilitate or constrain different forms of community building among creators and their audiences.
Introduction
On 1 September 2021, streamers and users of the popular live streaming platform Twitch banded together for a boycott of the site because of its failure to address rampant harassment against streamers of color and other marginalized communities. Over 500,000 Twitch viewers and 5,000 streamers supported the effort, which organizers called “A Day Off Twitch” (Andrew, 2021). For months preceding the boycott, Twitch users publicly called on platform leadership to address community safety and provide solutions to widespread racist, homophobic, and transphobic harassment, with many streamers sharing their experiences through the hashtag campaign #TwitchDoBetter on Twitter. The most common manifestation of streamer harassment on Twitch comes in the form of “hate raids,” or the automated spamming of a live stream’s chat with hate speech, threats, and other abusive messages with the intent of overwhelming streamers and shutting down live streams. Hate raids dominated Twitch streams for months—particularly for Black, LGBTQ+, and women streamers—with little communication from the company about how to navigate or resolve harassment during live streams (Parrish, 2021).
A Day Off Twitch prompted extensive media coverage about hate raids on Twitch and shed light on the complex nature of sociotechnical harassment on social media platforms. Twitch responded with a series of statements and new anti-harassment tools—though by that point streamers had already produced many of these same technical solutions for themselves. The platform’s resource page for combating targeted attacks concedes as much: “We know many Creators are already using these strategies, and have been sharing tactics and tools with each other, and we want to ensure this information is readily available for others who may need it” (Twitch, 2022). While hate raids on Twitch have continued, albeit less frequently, creators continue to refine their tools for community governance and share updates with each other on Discord and Twitter.
Twitch is hardly an outlier in its challenges combating harassment. According to Pew Research Center, 41% of Americans have personally experienced some form of online harassment, ranging from physical threats and stalking to sexual harassment, name calling, and purposeful embarrassment (Vogels, 2021). In recent years, researchers have rightly described the nature of such coordinated—and technically sophisticated—harassment as networked (Marwick, 2021; Marwick & Caplan, 2018), often entrenched in toxic cultures of misogyny, racism, and marginalization and orchestrated through an array of platforms (Banet-Weiser & Miltner, 2016; Massanari, 2017). Although studies have examined the coordination of harassment campaigns by groups perpetrating attacks (e.g., Lewis et al., 2021), considerably less is known about responses to networked harassment, particularly among social media creators whose livelihoods depend on the very visibility that makes them targets.
This study investigates the nature and coordination of creators’ responses to networked harassment on Twitch—and the extent to which their responses to networked harassment are also networked. Drawing on 19 in-depth interviews with Twitch streamers who experienced networked harassment on the platform, including three creators who built and shared tools to combat hate raids, this study suggests that, in the absence of communication and technical support from Twitch, creators started ad hoc networks for sharing technical tools, offering strategies for managing audiences during attacks, and providing emotional support for each other. Yet these networks of support were unevenly accessed across the streamers interviewed in this study and often existed only temporarily. Rather than identifying one large network of streamers coordinating responses to hate raids as a collective, my findings suggest that communities on Twitch form around individual streamers, providing richly supportive connections within that streamer’s audience community but limiting opportunities for solidarities to form across streamers as a class of creative workers.
The article begins by reviewing the broader literature on online harassment and the vulnerabilities experienced in creator labor before exploring hate raids as a case of sociotechnical harassment against content creators. I then present my findings across three dimensions of streamers’ networked relationships—with audiences, fellow streamers, and Twitch management. I conclude by considering how platform infrastructures can facilitate or constrain different forms of community building at a time when social media creators face ongoing harassment with uneven support from the platform companies that profit from their labor.
Networked Harassment in Online Communities
The phrase “online harassment” refers to a variety of aggressive behaviors mediated through the web and new media technologies, such as physical threats, sexual harassment, or stalking, all of which have intensified in severity in the last decade, particularly among Black, Hispanic, and LGBTQ people (Vogels, 2021). When social media platforms became widely popularized among youth, researchers were initially interested in how face-to-face patterns of harassment (e.g., bullying in schools) translated to online spaces through cyberbullying (Tokunaga, 2010). However, unlike negative face-to-face encounters, online attacks have afterlives through sharing, indexing, and archiving (Citron, 2014), amplifying and prolonging the burden felt by victims who may already be marginalized in their offline lives (Phillips, 2015). Women—especially women of color—are frequently subjected to online harassment about their gender (Banet-Weiser & Miltner, 2016), and platform companies, adopting the rhetoric of free speech and neutrality, offer little solace to victims who are left with the choice of deactivating their accounts or accepting harassment as part and parcel of online life (Lawson, 2018; Vitak et al., 2017).
Harassment has now evolved such that it is often networked: organized and coordinated among a community with connections distributed across platforms (Marwick & Caplan, 2018). Lewis et al. (2021) analyze the case of the “response video” genre on YouTube as a procedural blueprint for networked harassment. These response videos—which “respond” to statements or actions of public figures, online creators, or microcelebrities—identify a target and offer justifications to their audiences for harassing the target. Then audiences motivated to harass an individual can use the affordances of YouTube to collaborate against the target, resulting in a coordinated campaign that is often substantial in size and thus more difficult to manage. Marwick (2021) characterizes this process as morally motivated networked harassment (MMNH): the result of perceived norm violations within online communities that are leveraged as justification for attacks against targets and become amplified by key social media figures. Because the networks originating attacks share ideological frameworks, harassment is reframed as morally justified and serves to reinforce norms within online communities. When a public figure becomes increasingly identifiable as norm-violating from the perspective of these communities, the justification for engaging in harassment—and subsequent reinforcement of their own in-group norms—becomes even simpler for perpetrators.
MMNH also points to a larger challenge: toxicity is as much a cultural problem as it is a social media problem (Vickery et al., 2018). For example, Reddit supports toxic technocultures not only because anti-feminist and racist activist communities form on the site but also through a variety of technical means including the gaming of karma and thread ordering, feed aggregation that grants visibility unevenly, and unaccountable flagging processes (Massanari, 2017).
Much like the reciprocal relationship between the rise of popular feminist movements and an uptick in popular misogyny (Banet-Weiser, 2018), the mere visibility of marginalized people in many digital spaces could become a site of contestation for oppositional networked groups. Complicating matters further for victims of networked harassment, these communities exist within a distributed platform environment, making it difficult for a single platform company like Discord or Reddit to regulate the behavior of a group that strategically disperses its activities (Heslep & Berge, 2021). For social media creators or any public personality online, networked harassment is a “new normal” that individuals are often forced to handle themselves with inadequate assistance from platform companies.
Governing Harm and Toxicity on Social Media
As platforms attempt to effectively manage harassment and toxic cultures in their online communities, one of the biggest challenges they face is imprecision: what counts as harassment in platform governance? The 15 most popular social media platforms in the United States each have different definitions of harassment in their community policies (Pater et al., 2016), reflecting not only a technical problem for platform operations but also a challenge to platforms’ abilities to recognize and repair harm for the diverse communities they aim to serve (Schoenebeck & Blackwell, 2021). Given that all platforms must moderate the glut of content uploaded to their sites every second (Gillespie, 2018; Roberts, 2019), most major platforms have implemented extensive content moderation apparatuses that remove content and accounts as well as algorithmic recommendation systems that render some content and accounts more or less visible (Gillespie, 2022). Content moderation—especially algorithmic content moderation (Gorwa et al., 2020)—has not typically embraced content and cultural expressions deemed non-normative, including those from already marginalized groups (Duffy & Meisner, 2022; Thach et al., 2022). Even when formal community policies are in place to protect users from harassment and discrimination, governance infrastructures, such as user-submitted content flagging, can either be abused against a target or simply ignored by the dominant user culture (Duguay et al., 2020).
On live streaming platforms such as Twitch, community governance is largely decentralized. Given the role of group norms in networked harassment, (volunteer) community moderators play a significant role in monitoring the identity and ideology of their community as reinforced through moderation decisions (Gillett & Suzor, 2022; Matias, 2019). Recent studies of community moderators on Twitch streams reveal the critical, reflective decision-making process that weighs viewer intent and community goals in real time when a violation occurs during a stream (Cai & Wohn, 2021) and emphasize the importance of post hoc assessments of moderation decisions that allow practices to evolve with community culture and norms (Cullen & Kairam, 2022). Although community moderation on Twitch is often guided by inclusion and care, the power of moderators to shape the norms of a community—and thus determine what behavior is permissible or not—is noteworthy. Community moderation practices that facilitate positive norms in one community could just as easily be deployed to reinforce toxic norms in another.
Twitch, like most online communities, hosts a wide range of content: from uplifting and humorous chat-focused streams (Chow, 2016) to personal sharing sessions and mental health discussions (Gandhi et al., 2021), and the aggressive, masculine-coded gameplay infamous in gaming culture (Graham, 2018). The hegemonic gender relations in the content of many video games and in gaming culture more broadly (i.e., who can and cannot be a legitimate gamer) produce a particularly difficult space for women streamers and other marginalized groups to broadcast on the site as evidenced by very public controversies like GamerGate (Gray et al., 2017). This culture also bleeds into platform-wide discourse and community guidelines that fail to fully reject dominant misogynistic ideologies (Zolides, 2021) leading to “disproportionate regulation and subjective perspectives, shaped by explicit or implicit bias” of marginalized streamers (Ruberg, 2020, p. 15).
Decentralized community-led governance plays an important role in shaping the culture, values, and rules that govern conduct across Twitch. But without a deeper commitment from the platform to support marginalized streamers in the face of harassment through responsive technical support and open communication, community governance cannot and should not be expected to defend against the constantly evolving techniques of harassing groups aiming to disrupt streams and target streamers persisting in an environment that already tells them, explicitly or implicitly, that they do not belong.
Labor and Vulnerability in Live Streaming
The principal goal for many social media creators is visibility. For Abidin (2016), visibility labor is “the work individuals do when they self-posture and curate their self-presentations so as to be noticeable and positively prominent” (p. 90). As on other visual media platforms, live streaming emphasizes visibility labor as a means of getting noticed by potential viewers, but streamers must also maintain kindness, humor, and entertainment over the duration of a live stream in the hopes of retaining viewers for sustained engagement (Woodcock & Johnson, 2019). We can thus understand Twitch streamers as a particular class of creators: “commercializing and professionalizing native social media users who generate and circulate original content to incubate, promote, and monetize their own media brand on the major social media platforms as well as offline” (Cunningham & Craig, 2019, p. 70).
Yet creator labor on Twitch has its own particularities compared to the broader creator economy and culture. Game streaming is not merely a labor of gaming and streaming but instead a complex task of “performing play” that requires technical knowledge of playing and producing a game stream on Twitch, building a community around a live stream, and a sense of cultural awareness about the nature and norms of the site (Pellicone & Ahn, 2017). Because of the role of chat windows in live streams, audiences play a significant role in supplying “content” for streams, keeping streamers and audiences alike entertained, and helping establish the brand of a streamer and their channel or account (Meisner & Ledbetter, 2022); streamers often turn to sites like Discord to engage and build loyal audience communities outside of live streams (Johnson, 2021). Creators, and many sophisticated users more generally, engage in visibility labor in part as a response to algorithmic governance and the “threat of invisibility” (Bucher, 2012), or the risk of becoming an irrelevant subject and disappearing in algorithmic recommendations. These fears, among others, have contributed to concerns among Twitch streamers about the sustainability of pursuing a full-time career through streaming (Johnson & Woodcock, 2019).
The visibility compelled by social media labor also produces new vulnerabilities, including the high propensity for receiving hate and harassment as a content creator and often managing and responding to these attacks in a public setting (Thomas et al., 2022). Many live streaming communities have fostered inclusive participation, but as TL Taylor (2018) argues, “For many who are marginalized, it remains a space where meaningful participation, and creative expression are emotionally taxing, contentious, and sometimes dangerous” (p. 106). When streamers are harassed on Twitch, they must not only manage their real-time emotions in front of a live audience but also manage technical operations like moderation amid the chaos of in-stream harassment (Uttarapong et al., 2021).
Marginalized groups in online communities have developed a variety of strategies to shield themselves from the risk of harassment, and they also research and adopt technical solutions to support their ongoing anti-harassment operations. When faced with misogynistic harassment in Xbox Live, women gamers intentionally separated themselves from the wider gaming community through the use of private groups and teams to enjoy their experience in gameplay without being attacked (Gray, 2012). Technical solutions developed on sites like Twitter and Reddit also inform the available strategies for community moderation on networked platforms like Twitch. For instance, shared blocklists are widely used on Twitch to help streamers identify known users that have engaged in problematic or harassing behavior and share that information with peers. In his study of shared blocklists on Twitter, Geiger (2016) understands this as “counterpublic moderation,” or a means by which targeted groups can respond, through a networked moderation mechanism, against harassers.
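To make the mechanics of counterpublic moderation concrete, the following is a minimal sketch (in Python) of how a channel bot might consult a shared blocklist of the kind circulated among streamers. The file format, usernames, and the chat-era /ban command are illustrative assumptions rather than a depiction of any specific tool described by participants.

```python
# Minimal sketch of "counterpublic moderation" (Geiger, 2016) via a shared
# blocklist: streamers pool known harassing accounts in a plain-text file
# (one username per line) circulated through Discord, and each channel's
# moderation bot bans matches on sight. File name and usernames are
# illustrative assumptions.

from pathlib import Path


def load_blocklist(path: str) -> set[str]:
    """Read a shared blocklist file into a set of normalized usernames."""
    return {
        line.strip().lower()
        for line in Path(path).read_text(encoding="utf-8").splitlines()
        if line.strip() and not line.lstrip().startswith("#")  # skip comments
    }


def should_ban(username: str, blocklist: set[str]) -> bool:
    """Flag a chatter whose normalized name appears on the shared list."""
    return username.strip().lower() in blocklist


if __name__ == "__main__":
    blocklist = load_blocklist("shared_blocklist.txt")  # hypothetical file
    for chatter in ("friendly_viewer", "KnownRaider123"):
        if should_ban(chatter, blocklist):
            # The /ban chat command reflects the chat-command era of Twitch
            # moderation; newer tools call Twitch's moderation API instead.
            print(f"/ban {chatter}")
```

The design point is the pooling itself: each channel contributes names it has already vetted, so no single streamer bears the full labor of identifying harassers.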
Responding to “Hate Raids” on Twitch
When creators on Twitch prepare to conclude a live stream, Twitch offers a feature to “raid” another live stream, wherein a streamer sends their viewers to another channel after their stream ends. On their support page explaining the feature, Twitch acknowledges, “While raids are intended to be a positive, collaborative experience, it’s important that broadcasters are able to maintain control over their channel” (Twitch, n.d.). This somewhat casual warning obscures the reality of a dark trend on the platform: what are now known as “hate raids.” Hate raids use bots to automate the “raiding” of channels with fake accounts, often using names with thinly veiled racist, sexist, homophobic, and transphobic language. During a live stream, creators are bombarded with a wave of slurs in the account names joining the stream, and chat windows become overtaken with hate speech, death threats, or, in some cases, doxing (Parrish, 2021). Because hate raids are so overwhelming, many new, less experienced, or less supported streamers (e.g., those without large moderation teams) are forced to shut down broadcasts mid-attack. In their large-scale quantitative analysis of hate raids on Twitch, Han et al. (2023) confirmed public reports that minority groups were disproportionately targeted and that the attacks were particularly entrenched in anti-Black racism and antisemitism. Hate raids aim to disrupt live streams, but they also have severe implications for the labor of live streaming and the personal well-being of content creators and their audience communities. Given the networked and distributed nature of harassment via hate raids, managing them goes far beyond Twitch, as these attacks could easily be reproduced on similar sites across the platform ecology (Grayson, 2021). Hate raids—and other forms of emerging networked harassment—raise a critical question for researchers studying labor vulnerabilities in the creator economy. While we know that harassment is often a highly technical, networked phenomenon (Lewis et al., 2021; Marwick, 2021; Marwick & Caplan, 2018), we know less about the ways creators respond to networked harassment as a similarly networked formation, if at all.
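To illustrate the sociotechnical signature of these attacks, the following is a minimal sketch of the kind of burst-detection heuristic that streamer-built defenses (discussed in the findings below) relied on: a sudden surge of joins is treated as a likely bot raid. The window size, threshold, and lockdown response are illustrative assumptions, not any participant's documented tool.

```python
# Minimal sketch of a raid-detection heuristic: a burst of chat joins far
# exceeding ordinary audience flow is treated as a likely bot raid and
# triggers a chat lockdown. Thresholds and the lockdown callback are
# illustrative assumptions rather than Twitch's own tooling.

import time
from collections import deque


class RaidDetector:
    def __init__(self, window_seconds: float = 10.0, join_threshold: int = 30):
        self.window = window_seconds
        self.threshold = join_threshold
        self.joins = deque()  # timestamps of recent chat joins

    def record_join(self, now: float | None = None) -> bool:
        """Record a join; return True if the recent burst looks like a raid."""
        now = time.monotonic() if now is None else now
        self.joins.append(now)
        # Drop joins that have fallen out of the sliding window.
        while self.joins and now - self.joins[0] > self.window:
            self.joins.popleft()
        return len(self.joins) >= self.threshold


def lock_down_chat() -> None:
    """Placeholder response, e.g. enabling followers-only mode and muting alerts."""
    print("Raid suspected: locking down chat and pausing on-screen alerts.")


detector = RaidDetector()
for _ in range(35):  # simulate a burst of 35 joins in quick succession
    if detector.record_join():
        lock_down_chat()
        break
```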
Researchers have long acknowledged that precarity creates conditions for solidarities among workers in the culture industries (Gill & Pratt, 2008). And while other types of collectives—such as influencer engagement pods (O’Meara, 2019)—demonstrate proactive forms of organizing, studies have not yet examined networked practices among creators as a defense mechanism against ongoing harm and toxicity. Social media creators and the wider public have slowly begun to acknowledge the legitimacy of creator labor. Despite this shifting sentiment, creators have not yet followed other groups of platform workers in forming collectives as a class of workers to demand fairer pay, reform payment and evaluation systems, or, with some exceptions, organize strikes (e.g., food delivery drivers in Brazil; see Strecker et al., 2022). Social media groups are commonly used to organize workers—particularly those geographically isolated from each other—and “enable them to manage the ambiguities of their experiences of marginality” (Soriano & Cabañes, 2020, p. 8). This study examines the circumstances under which such collectives might form in the case of creators’ widespread experiences with networked harassment on Twitch. Thus, this study asks: how are Twitch streamers’ responses to networked harassment negotiated in relation to fellow streamers, audience communities, and Twitch governance?
Method
I conducted 19 in-depth interviews via Zoom with Twitch streamers who have experienced hate raids and other forms of networked harassment, including two founders of A Day Off Twitch and three creators who occasionally stream on Twitch but primarily develop tools for fellow streamers to use on the site. Although most streamers interviewed for this study were based in the United States (n = 14), I also interviewed two streamers based in Australia, two streamers based in the United Kingdom, and one streamer based in Chile. Given that identity-based harassment was rampant on Twitch during the waves of hate raids, all participants recounted experiences with harassment that became entangled with their personal identities—whether as part of the LGBTQ+ community, as members of a historically marginalized ethnic group in their geographic context, or due to the nature of their content being socially stigmatized on Twitch. See Table 1 for more detailed information about all interview participants, including their geographic location, racial or ethnic identity, gender identity, sexual identity, and relationship with Twitch (e.g., Affiliate status or Partner status).
Table 1. Interview Participant Information.
To limit identifiability and protect participants’ privacy, sexual identities in this table reflect only whether an individual identified as part of the LGBTQ+ community at the time of the interview.
Although we primarily discussed their responses to networked harassment in the context of Twitch, most participants were also content creators on other major social media platforms like Instagram, TikTok, and YouTube. To minimize the risk of retraumatizing participants by recounting every detail of their harassment experiences, interviews primarily focused on the responses to networked harassment, such as their approach to coordinating with other streamers for assistance. Two of my participants were widely credited as founders of A Day Off Twitch, and while the boycott was discussed when relevant, interviews encompassed discussions of a much wider range of community-led efforts to respond to hate raids beyond the boycott. Adopting a grounded theory approach (Glaser & Strauss, 1967), I iteratively evolved the interview protocol throughout data collection as interviews progressed and new dimensions of the topic emerged in conversations with streamers. Participants received a US$25 gift card for their time and insights, and interviews were pseudonymized and transcribed by a professional service. First-round coding focused on identifying substantive themes recurring in the data (Maxwell, 2013; e.g., composure for audiences, peer technological support, Twitch criticism, etc.), which were then used to scaffold theoretical categories to build understanding about streamers’ responses to networked harassment across three relational dimensions.
Given the sensitive nature of discussing harassment experiences, I recruited participants who had already publicly disclosed that they had experienced hate raids, either in news articles or on their Twitter accounts, by searching hashtags like #TwitchDoBetter and #ADayOffTwitch. Although this recruitment strategy was guided by an ethic of beneficence, I also recognize that many voices are inevitably excluded from this study. Several interviewees explained that hate raids severely affect new streamers still beginning to gain a following and find community; in many cases, these streamers deactivate their Twitch accounts and keep a minimal web presence to maintain their privacy following harassment incidents. Indeed, leaving Twitch in the face of harassment is also a response to networked harassment. This study’s findings therefore represent the experiences of Twitch streamers who still maintain some presence on the platform, whether infrequently streaming, taking a break, or maintaining a highly active status.
Findings: Fragmented Solidarities in Creative Labor on Twitch
As Twitch streamers described their experiences managing numerous waves of hate raids, their accounts reflected a sense of simultaneous connection and disconnection with community moderators, their audiences, other streamers, and Twitch governance. In brief, streamers responded to hate raids in ways that were sometimes, somewhat networked. I analyze their coordinated (or relatively isolated) responses to networked harassment along three dimensions: relationships with their audiences, other streamers, and platform governance. My findings suggest that, by emphasizing connections with audiences as the primary mode of community building, Twitch streamers, with few exceptions, failed to connect with peer creators as a class of workers, which ultimately fragmented their ability to facilitate widespread solidarity against hate raids. In what follows, I unpack the sources of both connection and disconnection within each dimension of coordinated responses and, in doing so, highlight both the promise and limitations of streamers’ networked responses to hate raids on Twitch.
Coordination with Audiences
For the participants in this study, the word “community” held a great deal of weight, almost always referring to streamers’ audience community and their team of volunteer moderators. Streamers, particularly those from historically marginalized backgrounds, tended to foster extremely close connections with their audiences and thus leaned on that support in response to networked harassment on Twitch. By contrast, some streamers indicated that their role as an entertainer prompted them to minimize the public discussion of hate raids as they occurred to keep their management of the problems in the “backstage” with their moderation team.
Connections
Many streamers recounted feelings of reciprocity in their relationship with their audience communities. During and after hate raids, some streamers felt that their continued presence on Twitch was an important show of strength for their community, too. Vincent, a U.S.-based streamer who identifies with the LGBTQ+ community, said: A lot of people that, even to this day, still come to my stream, we’ve all experienced this common trauma that we all fought so hard to fend off and save my name in a way. We’re all connected on this very deep level. But even more so, a lot of people just come to me with their personal problems, and for me to just disappear and not be a support system for a lot of these people that I call friends, and a lot of people that just depend on me for their own struggles, it was hard for me to look the other way and not let them down.
Many marginalized streamers echoed this sentiment of shared experiences with their audience and felt that, given the targeted nature of hate raids against particular groups, it was important to discuss reactions and responses to hate raids with their communities. Taylor, a U.S.-based streamer and one of the founders of A Day Off Twitch, shared: I am a Black, queer, nonbinary, femme-presenting person. I am like the alt-right nightmare. So, we talk about these [hate raids]. We talk about the politics of who we are because none of us asked for things to be political.
Because hate raids are often identity-based—and often fueled by racism, homophobia, and transphobia—many streamers felt that audiences were also victimized through hate raids because they belong to the communities attacked with hate speech by bots during attacks. Robert, a U.S.-based Twitch Partner who was raided with racist and homophobic language several times, explained, “While they’re not the ones streaming, they’re still in chat. They’re still seeing these words possibly fly up live. Sort of let them know that you may see things that are horrible.”
For other streamers, audiences took a more active role in resisting harassers. Because hate raids are orchestrated through sites external to Twitch, exacting revenge on harassers is not a simple task. Kaia, a U.S.-based Twitch Partner, described both her moderators and her general audience as a community that is “very protective” of her and indicated that they “like to fight sometimes.” Beyond drawing attention to hate raids through affiliated sites like Twitter, many audiences aim to provide emotional support for creators as streamers and moderators take the next steps in reporting hate raids to Twitch and seeking technological assistance, both of which were stressful for many streamers. Alicia, based in the United States, said of her community, “I have an incredibly supportive community. I think that’s a big part of it. I had their support, and I never really felt like I was fighting this thing alone.”
Disconnections
While streamers’ accounts of their connections with audiences were overwhelmingly positive, some streamers felt that their role as an entertainer for their audience limited their ability to discuss heavy challenges, such as hate raids. Preston, a streamer with a day job in the tech industry in the United States, explained his strategy of minimizing the disruption from harassers: “My goal is to ban them so quickly that the audience doesn’t realize it, because it will derail the conversation, and then the troll gets the attention of like, ‘Now he’s talking about it. I made an impact.’” In addition to not feeding the desires of harassers, many streamers were concerned about audience engagement and attendance during and after hate raids. Riley, another U.S.-based co-founder of A Day Off Twitch, said, “I can still entertain, and people not really know what happened or what’s going on. They’re steadily being entertained, and then I’ll just send a message to the mods like, ‘Hey, we’re getting follow botted right now.’” For Riley and others, hate raids were a threat to the entertainment value and financial reward of live streams.
Harmony, a Twitch Partner based in the United States, felt that with so many streams to choose from, audiences would be inclined to join a more positive space, and she opted to avoid discussing her hate raids during her streams: People might understand and feel bad, but it’s also like, if you’re in a bad mood for the rest of your stream, people are going to want to leave and watch other content creators that’s going to bring the mood up. So, you definitely have to have a lot of self-control and be able to manage your emotions really well, because if you don’t, then it’s just going to be detrimental to your content.
Overall, streamers coordinated with their audiences primarily through identity-based connections and emotional support. While a few streamers chose to conceal responses to hate raids from their audiences to protect their stream’s entertainment value, the majority of participants detailed the important role of audiences in providing immediate affective care during live streams while their moderators executed technological solutions to help the stream continue following harassment.
Coordination with Streamers
As Twitch streamers fostered emotional connections with their audiences despite ongoing waves of hate raids, creators tended to be disconnected from each other overall, with some exceptions among closely networked marginalized users who shared technical tools and resources with each other. Streamers reported feelings of disconnection from peer streamers due to a combination of constraints from Twitch, cultural etiquette on the site around self-promotion, and an overload of information being shared among streamers.
Connections
Creators, particularly those with common identities and identifications, reported pre-existing and new connections that helped them navigate hate raids that disproportionately targeted Black, queer, and transgender streamers. When discussing the resources available to him on Twitch, Isaac, a Black queer Twitch Partner based in the United States, said, “A lot of other like marginalized creators ended up being a very good resource for me, sharing resources as well of like how to handle hate raids or how to handle follow-botting situations.” Many groups of streamers had already been established on sites such as Discord, where they could discuss the day-to-day activities of being a streamer, not unlike the workplace conversations you might expect in a traditional brick-and-mortar office. Alicia described the process of forming her group’s Discord server, which was created to support streamers managing hate raids: I got involved with some other streamers since we were all under attack constantly. We actually started a Discord server; there was like a small group. I think we started with maybe three of us, and I think there’s maybe five or six admins now. We basically just started compiling. We had people coming to us and reaching out through Twitter, because it was all over Twitter. People started reaching out that were more tech savvy, the ones that were able to build and code bots that could help protect your channels and perform some of the tasks that Twitch should have built into the back end that weren’t built in.
While these groups are model examples of networked solidarities among Twitch streamers, they were not evenly accessed by participants in this study. Elijah, a Black streamer based in the United States who identifies as part of “Black Twitch,” felt that these networks were incredibly helpful resources—but only for those who knew how to access the groups. He shared that as a Black man, “It makes it a little less daunting to reach out because it’s like reaching out to a buddy, but trying to reach out to someone outside that circle might be a little leery.” These identity-based solidarities within marginalized communities on Twitch could serve as a blueprint for community formations that promote both emotional support and technological resources. However, as I analyze in what follows, many streamers felt that Twitch did not foster the ability for streamers to connect with peers—and in some cases promoted a culture of competition among streamers rather than solidarity.
Disconnections
The infrastructure of Twitch leaves streamers without the ability to communicate effectively with peers, particularly when they are not active in a live stream. Thus, streamers take to sites such as Twitter and Discord to discuss professional challenges, such as hate raids. Andre, a Twitch streamer based in the United States who also creates solutions for managing hate raids, explained why this platform design creates problems in facilitating widespread solidarity among streamers: “But a lot of streamers aren’t on Twitter, so they don’t know what’s going on. They don’t have that kind of exposure. Some people only stay on Twitch where, you know, they’re in their own little bubble.” Camille, a U.S.-based streamer, echoed Andre’s sentiment, in particular noting the problems with the way Twitch as a platform constrains communication across creators: You have to pursue them and have to go out of your way to join their communities, whether they have Discord or you have to go out of the way to follow their socials and make your presence known and comment on their stuff and like their stuff. But yeah, a lot of the ways that you, you know, you would try to connect with other streamers is you, you know, it tends to be off Twitch rather than actually on it.
In addition to the challenges associated with the platform’s design, others felt that the culture of Twitch put streamers in an uncomfortable position regarding self-promotion—or at least the appearance of self-promotion between streamers. Many interviewees expressed apprehension about befriending other streamers out of fear that they would be perceived to be promoting themselves rather than legitimately enjoying another creator’s live stream—a challenge that many in this sample referred to as an “unwritten rule of Twitch.” Christine, an Australian-based Twitch Partner, further elaborated, “There’s also this area of streamer etiquette. It’s a hard place for new streamers because you don’t say that you’re streaming . . . because a lot of streamers get really, really miffed if you do that.” Taken together, both the design and culture of Twitch made it difficult for many participants to feel comfortable reaching out to peer streamers and fostering relationships with them, if they knew how to reach them in the first place.
Finally, some streamers felt that there was an abundance of people and resources available in the later stages of the hate raids. Yet given the trauma of being harassed for months on end, they sought out official channels—such as Twitch governance, which did not act immediately—due to concerns over whom they could trust in responding to harassment. Taylor, whose case of harassment was particularly severe and public, said, “I wish I could say that I reached out to a lot of people. I didn’t because I didn’t know who to trust. There was a lot going on, and I really just shut down.” Another consideration important to streamers was emotional sensitivity among streamers who experienced harassment. Christine explained this tension, “You didn’t want folks retraumatizing those who had experienced it by asking them questions. That’s also the danger as well. So, you’ve got to go and find your way somewhere.” Overall, many streamers reported having support networks among identity-based communities, though many did not have access to such groups, leaving their solidarity to be experienced in fragments rather than widely felt across my sample of interviewees.
Coordination with Twitch
As noted above, many of the challenges that participants experienced during the waves of hate raids were linked to problems with Twitch’s culture and infrastructure. The majority of participants were furthermore disappointed and frustrated by the company’s response to the networked harassment of its streamers. Although the technical solutions eventually rolled out by the company satisfied most of the streamers I interviewed, Twitch fell dramatically short on what streamers wanted most from the company: communication and affective support during hate raids.
Connections
Hate raids continued for months on Twitch with little response from platform administrators. Eventually, technical solutions were provided to streamers who continued to face hate raids, though most of these solutions originated on external sites outside of Twitch’s control. One of the tools Twitch released had already been developed by audience communities and shared across Discord channels for months: shared blocklists. Alicia explained how this tool facilitates the sharing of information across streamers: If you have a suspicious user come into chat, you can flag them as a suspicious user and it flags them, I believe, across the whole site . . . That way, we’re not doing all of the work, and we’re not having to do all the work individually, because that was the big part originally was that we had to do all of the work individually and then we had to compile our resources and kind of share the information like a weird Pony Express of the internet.
Twitch has also released a suite of moderation tools and a page discussing hate raids and harassment on its website (see Twitch, 2022). Many streamers I interviewed integrated these technical solutions through third-party hardware accessories called “stream decks,” which allow streamers to build personalized shortcuts to their frequently used technical tools for quick access during streams. Mila, a queer Twitch Partner based in the United Kingdom, shared the following about her stream deck’s simplified solutions: You can just set-up the button to do whatever you want, and then if you do get a hate raid, for example, where you’re being bot-followed or if you’re being spammed or whatever, then you can just hit that. It’ll put your chat into follower-only mode, and you could set it as like followers of only an hour or more can talk in your chat. You can turn off all alerts. You can do whatever you have to do to minimize the exposure that those trolls are getting.
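As a rough illustration, the sketch below shows what such a one-press “panic button” macro might bundle, along the lines Mila describes. The send_chat function is a hypothetical stand-in for the macro’s transport, and the slash commands reflect the chat-based moderation commands available around the time of the hate raids; newer third-party tools typically call Twitch’s API instead.

```python
# Minimal sketch of a stream deck "panic button" macro: one keypress
# bundles several chat lockdown steps during a suspected hate raid.
# send_chat is a hypothetical stand-in; a real macro would write to the
# Twitch chat connection or call the platform's moderation API.


def send_chat(command: str) -> None:
    """Stand-in transport for delivering a moderation command to chat."""
    print(f"sent: {command}")


def panic_button(follower_age_minutes: int = 60) -> None:
    """Lock the channel down in one keypress during a suspected hate raid."""
    send_chat(f"/followers {follower_age_minutes}m")  # followers-only chat,
    #                                                   minimum follow age
    send_chat("/uniquechat")  # block repeated copy-pasted spam messages
    send_chat("/clear")       # wipe abusive messages already in chat
    # A full macro might also mute on-screen alerts in the streamer's
    # broadcast software, as Mila describes.


panic_button()
```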
Maurice, a Twitch Partner based in Australia, identified the core opinion held by almost every streamer interviewed in this study: “I feel like their communication has been very delayed, but in terms of the actual tools that they give, they’ve been quite good.” Most streamers had more criticism than praise for Twitch’s handling of hate raids—perhaps most evident in the A Day Off Twitch boycott, which ultimately prompted action by the company.
Disconnections
Despite their relative satisfaction with Twitch’s technical tools for managing hate raids, streamers consistently critiqued the company for its lack of communication and empathy during hate raids. Christine recognized the challenge of quickly developing technical solutions but called on Twitch to do something in the meantime: “I think communication was key, even if they didn’t have anything concrete to say.” This desire for Twitch to speak about hate raids on the record was shared by many participants. Quinn, based in the United States, also felt that the platform’s denunciation of hate raids did not properly name them as racist, homophobic, and transphobic attacks on Twitch streamers. She shared: I think that their response could have been one, faster, just in saying anything and two, less tolerating. It seemed to be pretty like open and it was like, “We don’t like bullying,” and it’s like, “Okay, great. Everybody’s going to say that,” but I want it to get to the point where Twitch is saying, “This is not something that we’re going to tolerate,” and I want it to be a little bit more aggressive on that side so that Twitch’s creators can feel a lot more protected. I don’t feel like anybody feels protected with the type of engagement that Twitch ends up saying at the end of the day, which is just like, “Bullying is bad.”
A Twitch Partner in the United States, Reagan, described how queer streamers felt that Twitch has not prioritized their concerns historically. She said, “Even earliest versions of AutoMod were flagging any use of the word queer. And it’s just, it’s offensive to the queer community that you’re going to do that. It feels like we’re getting additionally marginalized by the platform.” Twitch’s delayed acknowledgment of hate raids—and the sluggish rollout of technical support—is a reminder that platform governance is “not so much imposed as it is negotiated” among interactions with cultural producers, advertisers, and other stakeholders (Poell et al., 2021, p. 100).
A key point of contention for Twitch streamers was that—despite the relative success of Twitch’s technical solutions—Twitch captures the lion’s share of the game live streaming market, meaning that successful streamers have no comparable option for producing content elsewhere at the same level of revenue. Harmony said, “We’re only here because we feel like we don’t have any other options at the moment . . . It just kind of feels like you’re a dollar sign.” Twitch produced technical solutions that were praised and continue to be used by members of its community. However, after months of inaction and lack of communication from platform management, most streamers remained overwhelmingly disconnected from Twitch in their response to hate raids, working mostly independently or in small groups to crowdsource solutions and develop strategies to manage harassment on the platform.
Conclusion
While A Day Off Twitch was considered a success due to its role in publicly highlighting hate raids against marginalized creators, it is worth noting that many streamers, including those involved in organizing the boycott, were frustrated that they were left with only this option—a boycott to catch the attention of platform management—in the first place. As Taylor, a founding organizer of the boycott, shared: We were actually going to plan for a different day, but the reason we rushed it up was because people were starting to get doxxed, so people were having their personal information thrown out into the Twitchverse. On Twitter, it was happening. It was happening to me. It was becoming very, very, very dangerous. And so, we were like, “We have to do this now. Like, we have to do this now because I don’t want somebody getting hurt because of this.”
However, despite the turn toward a networked solidarity among Twitch streamers, the coordinated management of hate raids brought on many challenges and exposed vulnerabilities in the networks connecting streamers, audience communities, moderators, and Twitch governance.
This study offers meaningful implications for theorizing networked organizing among social media creators. Building from work on platform labor organizing and entrepreneurial solidarities (O’Meara, 2019; Soriano & Cabañes, 2020; Strecker et al., 2022), I found that some Twitch streamers were ephemerally networked in their responses to months of ongoing networked harassment and, perhaps more importantly, I identified elements that hindered networked responses across the site’s streamers. As harassment and online attacks become more prevalent among creator communities on social media, networks of connection and disconnection form among creators, their audience communities, and governance of and by platform companies. However, this networked coordination across constituent groups opens opportunities for privileging connections in one domain over another. Participants’ accounts of their experiences responding to hate raids on Twitch revealed a privileging of the audience community as the primary source of connection on the site, and thus, when faced with a challenge felt by most of their occupational peers, they did not, with few exceptions such as the boycott, facilitate widespread solidarity. I build from this case to argue more broadly that creators’ orientations to their audience can eclipse meaningful connections with fellow creators in ways that fragment their solidarities as creative workers. Academics, journalists, and the public at large have begun to understand content creation on social media as work—and creators as workers—but we must now theorize how these workers might form collectives to advocate for and protect their interests as a class of creative workers.
This study also builds from Marwick’s (2021) theory of MMNH to understand the defensive coordination, or lack thereof, by creators facing networked harassment. While Marwick’s conceptualization establishes the process by which networked harassment is enacted, this study demonstrates how individuals—in this case, social media creators on Twitch—respond to MMNH in the case of ongoing hate raids. Similar to studies of counterpublics in the marginalized public sphere (e.g., Squires, 2002), my findings indicate that streamers have uneven experiences forming networked responses to networked harassment, meaning that not all victims of networked attacks will respond equally, nor can we discuss networked responses as a uniform strategy adopted across a class of workers. Rather, a networked response takes shape through the unique combination of actors who become involved and the affordances of Twitch and affiliated networked sites like Discord, where networked responses may emerge. By focusing on responses to networked harassment, this study draws attention to the deeply entangled web of platform governance in which creators become implicated when attempting to create sustainable solutions for ongoing harassment. Because platform companies do not govern unilaterally, lasting solutions are negotiated among the various competing interests surrounding the platform (Poell et al., 2021), much to the dismay of creators waiting for harassment campaigns to cease.
Future research should explore the relationship between global worker precarity and the formation of labor collectives in the social media creator economy. This study was limited by its overrepresentation of U.S.-based creators who experienced hate raids on Twitch, which were shaped by the history of marginalization against Black, LGBTQ+, and femme-identifying people in the United States more broadly. Although their experiences still represent those marginalized by wider gaming and platform cultures, future research should examine the presence of creator labor solidarities across global contexts and specifically among creators in the Global South, where other platform labor collectives are currently thriving. Further work can also be done to understand the nature of contemporary networked harassment on Twitch outside of the U.S. context beyond “hate raids,” which have unfortunately dominated community safety discourse about Twitch. For all the promise surrounding the creator economy, it is worthwhile to recall Walker’s (2014) reminder about the labor of streaming: “This play happens under techniques and technologies of control and surveillance. . .and, building on top of a growing collection of stored data, [opens] up new modes of corporate action” (p. 441). As content creators continue to be key players in the business model of many social media platforms, platform companies must consider their infrastructures for connections across creators, not only to foster more supportive communities on their platform but also to help them retain individual creators who leave their platforms when faced with networked harassment without support.
Acknowledgements
The author thanks the research participants who shared their time, experiences, and insights. He also thanks Tarleton Gillespie, TL Taylor, and the Social Media Collective at Microsoft Research New England; the New Media & Society working group at Cornell University; and two anonymous reviewers for their feedback on various iterations of this research.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Social Media Collective at Microsoft Research New England.
