Abstract
On the face of it, contemporary “alt-tech” platforms appear more moderate than legacy hate havens. Yet it's also clear that virulent hate in the form of misogyny, white supremacy, and xenophobia has not disappeared. Probing this tension, this article conceptualizes two forms of hate: Surface “Hate” (moderate content that is highly visible and easily accessible) and Sublevel Hate (explicit content that is more marginal and less discernible). These terms are illustrated by examining several viral videos on Rumble. This twinned mechanism explains how alt-tech platforms can be both accessible and extreme at the same time. Stratified hate is strategic, heightening the appeal and durability of online communities. Recognizing this dangerous dynamic is key for interventions seeking to counter it.
This article is part of a special theme on Mapping the Micropolitics of Online Oppositional Subcultures. To see a full list of all articles in this special theme, please visit: https://journals.sagepub.com/page/bds/collections/micropoliticsonlinesubcultures
Hate without hate?
Digital platforms employed by the radical right have been decried as hate havens, spaces where anti-black, antisemitic, and anti-LGBTQ+ attacks can flourish (Stokel-Walker 2018; Abril 2020; Bomey 2020). Gab, Parler, Rumble, BitChute, and other platforms are typically listed under this banner. These “alt-tech” (Donovan et al. 2019) platforms emulate the affordances of mainstream equivalents like Twitter and Facebook while offering an alternative information ecosystem.
But when we look at featured content and official products on these sites, hate speech in the strict sense is difficult to find. When scanning Rumble's leaderboard or Gab's popular posts, there is nothing overtly racist or misogynistic, and no clear incitement to violence. A search across 40 articles featured on Gab News found no terms from the "hate lexicon" (Mathew et al. 2019) except for queer, which was used in a neutral manner (Munn 2022). This softer approach resonates with other research that documented fewer mentions of "unambiguous" hate speech on some alternative platforms (Rogers 2020: 222) and found that outward-facing channels are used for edgy or fun material that stays within legal thresholds (sCAN 2020). Such restraint is even more striking when alt-tech platforms are set against legacy hate havens like 8chan, where hate is overt and racial slurs and references to rape and genocide are easy to find.
This is not to assert that alt-tech platforms are somehow progressive. A number of Gab News articles, for example, featured xenophobic sentiment (Munn 2022). However, that sentiment was inferred rather than explicitly stated, a subtext diffused across a range of articles on sports, healthcare, national security, labor, and so on. Slurs or statements outrightly denigrating immigrants never appear. Indeed, the ability of these platforms to package edgy or controversial (but not “hateful”) content into a modern interface and user-friendly experience can be considered a key element of their allure.
Instead of the monolithic framing of hate haven, I suggest two new terms. First, there is Surface "Hate," moderate rhetoric that is highly visible and easily accessible for an online community, such as a platform's featured feed. Second, there is Sublevel Hate, explicit rhetoric that is less visible or discernible, such as a status update from a user. These twin terms better capture the strata of content and experiences on radical right platforms and also explain how these platforms respond to the growing pressure on hate speech. To show how these two forms play out, the next section looks at several popular videos on Rumble.
Hate on Rumble
Rumble is a video-sharing platform positioned as a right-leaning alternative to YouTube. Founded in 2013, Rumble has grown rapidly in recent years and now receives 41 million visitors per month (Peters 2022).
First, we can examine “Cops savagely troll Lebron James and break the internet,” a viral video posted by popular Rumbler Dan Bongino. The video shows a cop apparently calling the basketball superstar on the phone and reporting that he is witnessing one man shooting another. The cop is asked what the races of the two men are, and he responds that both are black. The cop is told to do nothing and let them fight it out. The video's point is clear: violence doesn’t matter unless a white person is attacking a black person. In one sense, the clip caricatures a particular understanding of racial justice closely associated with Black Lives Matter. In another sense, it also skewers a basketball megastar and his “woke” beliefs. Both readings slot comfortably into satire, a cutting but acceptable form of free speech and social commentary.
Yet while the video is tempered, the comments below it are anything but. James is a "worthless piece of shit," writes one user. He "hates whites," says a second. He is a "disgusting human being," states a third. "The more this porch monkey runs his mouth, the more we see how ignorant he is," opines another, deploying a deeply racist and highly offensive term (O'Dea and Saucier 2017). Immediately, then, we see how user comments take the basic foundation of the video—a satirical but fairly mild criticism of a prominent figure—and significantly ramp up its derogation. In the view of these users, James is no longer a person with a double-standard of racial justice, but a "disgusting human" or something less than human. Users latch on to key concepts from the original content but extrapolate them into explicitly hateful attacks.
The same pattern occurs in a second Rumble video with thousands of views. It is a reaction video responding to a trans woman who wants to talk to children about gender fluidity. The content creator, who is black and staunchly Christian, distances himself from any accusation of being "transphobic" and actively presents himself as tolerant. "If you want to be a man and identify as a woman, more power to you," he states, "if you want to crossdress, you want to flipflop dress, paint your hair blue, have a beard and a mustache, I don't care… I'm not going to be hateful." And yet, this tolerance reaches a limit when the woman starts "indoctrinating" young children with her beliefs. In his view, she has failed to mind her own business, and he criticizes this outreach. She has the right to her own lifestyle, he argues, but shouldn't push her personal agenda on others. Yet even this criticism is civil, made without ad hominem attacks or any kind of transphobic slurs.
While the video is civil, the comments targeting the woman are explicitly hateful. “It's a subhuman monster,” states one user. This thing is an “abomination,” states another. Such comments are paralleled in a similar video featuring a transgender woman. “I can’t look at that thing” writes one user, while another claims she is possessed by a “demon.” These images of demons, monsters, and disgusting objects fit within a grammar of dehumanization (Smith 2012). Such rhetoric places an out-group in a separate ethical or ontological category from the in-group. They are no longer human, with human rights, but subhuman. In this way, dehumanization can help legitimize violent acts, turning a license-to-hate into a license-to-kill (Kallis 2008) and adding an affective element of righteousness into this form of hate. In this vision, cleaning up society means eliminating corrupting, less-than-human elements, a logic repeatedly seen in genocide (Smith 2021). Given this pattern, it is no surprise to see calls to violence in the comments. “Should be shot immediately,” states one user; “to the dumpster with it” calls another. These explicit death threats seem a world away from the supposed “tolerance” shown by the original content creator. And yet there is also a clear continuum here, an arc that can be traced between the softer original post and these hardcore responses.
Surface versus Sublevel Hate
The Rumble examples above anchor our initial concepts and allow us to develop them further. Surface Hate is moderate content that is highly accessible and visible. This tempered rhetoric is used by popular content creators, personalities, or even platform owners. These figures abstain from using racial slurs. They often employ a "logical" approach to argument interspersed with banter, jokes, and other rhetorical devices designed to win audiences to their side. They adopt a "reasoned" or "common sense" position and decry any implication of being racist, sexist, or in any way bigoted. Indeed, as we saw above, they may even embrace some elements of "progressive" politics, such as tolerance toward people who are different.
Surface Hate is often ambient and hard to pinpoint, a matter of reading between the lines. There is no single word that can establish racial prejudice, no obvious epithet that can prove religious discrimination. Such ambience poses an additional challenge for machine learning models, which already struggle with hate speech as something ambiguous and context-dependent (MacAvaney et al. 2019; Kovács et al. 2021). There is a form of plausible deniability here, where the largest channels and official products of a platform remain just within the bounds of the Overton window. And yet even without slurs, this ambient hate is still toxic—in fact, it is precisely its more acceptable or everyday quality which may enable it to be propagated and absorbed more successfully (Tirrell 2017).
Sublevel Hate, by contrast, is explicit content that is less visible. This material is more marginal and ephemeral, posts and comments that never appear on the homepage of a platform or in the trending section of an app. Such comments are deeper in the information infrastructure and typically seen by users who are logged in to the platform. Search engines do index some of this material, but special flags/operators must be used to return it (Creps 2021). On alt-tech platforms, which attract a particular demographic, users are more likely to sympathize with these ideologies than flag, block, or censor them. This suggests that Sublevel Hate, while potentially powerful in its ability to derogate others and radicalize individuals, largely remains out of sight and under the radar, circulating within a like-minded community.
Sublevel Hate often extrapolates from a more high-profile piece of content, as with the comments discussed above. Criticism in the originating content was strident but civil or at least socially and legally acceptable. The user grasps this originating idea but amplifies it rhetorically, emotionally, or politically. While the content creator spoke in measured tones, user comments lash out with open rage and hostility, including overt incitements to violence. While the content creator admitted that his target was a human who was entitled to basic human rights, the user will make no such concession, framing them as a subhuman. And while the content creator refrained from explicit language, the user lets loose with hateful epithets designed expressly to “other” and annihilate.
The strategy of stratified hate
Surface and Sublevel Hate suggest that hate is stratified into layers of different intensity, moving from moderate, public-facing material to deeper and more virulent vitriol. This stratification appears strategic in three ways.
First, it responds to the intense pressure placed on hate speech. Hate speech has become a major concern for mainstream social media platforms, which have committed enormous resources to policies and moderation designed to counter extremist material (Ganesh and Bright 2020; Borelli 2021). Alongside hiring huge numbers of human moderators (Murphy and Gershgorn 2017), platforms have deployed algorithmic content moderation (Gorwa et al. 2020). This combination of human and technical responses, while by no means perfect, places overtly hateful content at risk of being blocked or deleted altogether. Moreover, online spaces that condone hate speech have come under increased fire. Even 8chan's founder admitted its content was so toxic it should be taken offline (Occeñola 2019). Similar calls for deplatforming occurred with Reddit's most overtly toxic communities, "coontown" and "fatpeoplehate" (Chandrasekharan et al. 2017), with both being removed. In this context, explicit hate is a vulnerability, exposing online communities to a range of sociotechnical pressures. Platforms aiming to build sustainable communities must be strategic, carefully modulating hateful rhetoric (Munn 2022).
Second, stratified hate is strategic in concealing toxic elements and fostering mainstream ambitions. While diverse rhetoric certainly existed on 8chan and Stormfront (Daniels 2009), explicit tropes of upholding white supremacy, advocating race wars, and inciting violence were common threads. Such overt antagonism limited their audience and appeal. Stormfront is regarded as the internet's "first hate site"; 8chan is frequently called "the cesspool of the internet." As a result, they could be dismissed as marginal, niche communities: hateful spaces for a small subculture of hateful people. In contrast, newer radical right platforms reject any implication that they are hateful (Torba 2021). Parler presents itself as "the world's town square" and has added millions to its user-base, briefly occupying the top spot on the App Store (Heilweil 2020). Gab champions itself as "the platform for the people" and has been vocal about attracting a large and diverse demographic from "progressives" to "ethnic minorities" and "millennial women" (Ehrenkranz 2017). While their numbers still pale in comparison to mainstream giants like Facebook, these gains gesture to the success of alt-tech platforms in downplaying hate and framing themselves positively as bastions of free speech, expansive spaces for a far broader demographic of "the people."
Finally, stratification also seems strategic in terms of expanding a platform's audience and increasing its appeal. Surface Hate provides a soft entry point to new users, easing them into alt-tech platforms. This material is edgy and exciting, but never in-your-face in terms of vitriol, resonating with research suggesting users are less likely to recognize intolerance on a website using a soft-sell approach (Valeri and Borgeson 2005). These memes, videos, and posts leverage established social media techniques to attain engagement. Such non-overt content may provide opportunities for not-yet politicized users to develop affinities with far-right causes (Bogerts and Fielitz 2019). While more research is needed into the efficacy of this strategy, the surging user-bases of alt-tech platforms like Gab and Rumble at least suggest its quantitative success. Once acclimated to this rhetoric, users may follow a radicalization pipeline (Munn 2019): navigating off the front page, drilling down into comments and posting responses, or creating their own posts that openly attack minorities, queers, progressives, and other out-groups. In other words, by digging into the platform, users can access Sublevel Hate, gradually increasing the intensity and explicitness of hate they engage with. Stratified hate explains how alt-tech platforms can be both accessible and extreme at the same time—and this is ultimately what makes it more dangerous.
Disclaimer
This article contains racist, sexist, and otherwise hateful language. While quoting such material is necessary to demonstrate the argument, it in no way represents the views of the author, editors, or publisher. Such statements are never formally cited or hyperlinked, avoiding amplifying these ideologies and driving traffic to radical right platforms.
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
