Abstract
This article explores how Alt Tech platforms – Parler, Truth Social, Gab, Rumble, BitChute, Odysee, Gettr and Minds – conceptualise a core social media value: safety. These platforms, often associated with the Alt-Right, challenge mainstream social media by reinterpreting the value of safety. Using a mixed-method approach to study the values promoted in social media policies, this study reveals that Alt Tech platforms reject traditional safety measures, viewing them as censorship. This rejection manifests in three key strategies: advocating for unrestricted freedom of expression; reframing safety-oriented governance as censorship; and promoting an ideal of the digitally self-governing user.
If we were to pinpoint a date that underscores the importance of the value of safety, it would have to be 4 November 2019. This was the day Mark Zuckerberg, CEO of Facebook (now Meta), removed the 2009 version of the Facebook Principles, with all its allusions to the free circulation of content, from the website and instead introduced safety as the company's mission, aiming to keep people safe and protect privacy. This effectively meant that Facebook was temporarily leaving behind the cyberlibertarian mindset.
This shift to foregrounding safety aligns with critiques such as those by Maddox and Malson (2020), who, alluding to the metaphor of the ‘marketplace of ideas’, emphasise how platforms increasingly frame content moderation not merely as a means of enabling free expression but as a necessary safeguard against harm. This framing reflects a broader trend in which platforms leverage values like safety to navigate the challenges of maintaining user trust and meeting regulatory demands. However, foregrounding safety is not universal across platforms, especially amid the current fragmentation of the platform landscape. In this context, Barlow's ideals of a free and unregulated Internet appear to resonate within Alt Tech platforms, not as a straightforward adoption but as deliberate provocations to rekindle the imaginaries of early Internet culture. Regardless of these provocations, however, the increasing external regulation that platforms face curbs Alt Tech's suggestions of returning to those early glory days (Kopps and Katzenbach, 2022). As Gillespie (2018, p. 47) notes, early platform policies engaged in discursive performances through community guidelines and public statements with very little will to enforce them. Today, platforms must respond to formal regulatory mechanisms, which they often do by alluding to their commitment to values like safety, freedom of expression or user wellbeing in their public communications (Scharlach, 2024). As a result, the cohort of policies that platforms are required to apply due to external regulation is, for the most part, framed as a commitment to safety.
Alt Tech platforms have become popular since the widespread bans of Far-Right accounts from 2018 onwards (Siapera, 2023). Several months before the 2020 US presidential election, conservative politicians, pundits and self-described patriots alleged that their speech was being censored by the ‘Tyranny of Big Tech’ (Hawley, 2021). Backed by US politicians who had been deplatformed, these Alt Tech platforms have attempted to market themselves as unfettered, unmoderated spaces that prioritise unlimited free speech. This, they claim, is in contrast to ‘Big Tech’ platforms, which they portray as censorious (Buckley and Schafer, 2022). However, it is noteworthy that these bans or deplatforming actions were largely justified by mainstream platform companies on the precise grounds of safety (Rogers, 2020). A prominent example is the case of influencer Andrew Tate, whose official accounts were removed from Instagram and Facebook by Meta in 2022. The company told the media that this action was taken for violating its policies on ‘dangerous organisations and individuals’ (The Guardian, 2022). Specifically, Meta's ‘dangerous organisations and individuals’ policy rationale states: ‘In an effort to prevent and disrupt real-world harm, we do not allow organisations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Meta’ (Meta, 2024), suggesting that this policy is aimed at preventing harm and thereby facilitating safety. Given that the enforcement of safety by mainstream platforms has been pivotal to the creation of Alt Tech platforms, we ask: how do Alt Tech platforms, often championed as bastions of freedom of expression, articulate and approach safety?
We explore this question through the lens of Platform Governance and Media Studies. Building on the findings discussed above, our starting point is that mainstream platforms’ adoption of safety as a core value is primarily a rhetorical strategy to provide a hostility-free environment that favours content production (Viejo Otero, 2025). In contrast, our analysis shows that Alt Tech platforms challenge this notion, aiming to reshape the tech landscape and, consequently, opening new possibilities for studying Alt Tech platforms within Platform Governance Studies.
Empirically, this article critically examines the rise of Alt Tech platforms and their approach to safety by comprehensively analysing the Community Guidelines and Mission Statements of eight prominent platforms: Parler, Truth Social, Gab, Rumble, BitChute, Odysee, Gettr and Minds. Our mixed-method approach combines Document Location Analysis (Prior, 2008), Word Frequency Analysis, and Thematic Analysis (Braun and Clarke, 2019). Our analysis reveals that Alt Tech platforms introduce or refer to safety primarily as a legal obligation rather than a core foundational principle, while simultaneously critiquing, deconstructing and even mocking the very notion of safety as mandated by external regulations. The word frequency analysis indicates that references to safety emerge largely in response to legal requirements, whereas the thematic analysis uncovers a deeper contradiction: these platforms subtly critique or implicitly reject the legal and normative frameworks tied to safety by reframing moderation as censorship, and by championing the idea of a free space for digitally self-governing users.
Literature review
The question of safety
The concept of safety has become increasingly central to platforms’ operations in recent years. Safety has emerged as one of the core values of social media companies, especially promoted in policies (Scharlach et al., 2024) and public communication (Gillett et al., 2022), to justify moderation practices (Viejo Otero, 2025). This emphasis on safety underpins the knowledge frameworks and intentions driving mainstream platforms’ operational techniques to regulate hate speech and harmful content (see Miller and Rose, 2008 study on governance). Indeed, this prioritisation of safety represents a significant evolution from the platforms’ initial ethos.
In their early stages, mainstream social media platforms were primarily designed as digital ecosystems promoting unfettered expression and the free flow of information, and references to safety in their initial Community Standards were minimal and lacked systematic enforcement (see Katzenbach et al., 2023). The subsequent elevation of safety marks a significant departure from this early ethos.
An important observation is that the adoption of safety does not abandon the principle of freedom entirely. Indeed, at the core of this safety framework for content moderation lies the principle of freedom of expression (Viejo Otero, 2025) – users should be free to express themselves and able to upload any form of content in a hostility-free environment. Consequently, when platforms speak of ‘balancing freedom with safety’, or when in the literature we find the idea of ‘trading off freedom for safety’ (Hildebrandt, 2013), it means that users’ unfettered freedom of expression ends where platforms’ freedom to control the flow of information starts (Viejo Otero, 2025).
Recognising the paramount importance platforms claim to place on safety, Gillett et al. (2022) conducted an extensive analysis of five prominent platforms’ blogs to elucidate how these companies interpret and enforce this value. Their findings reveal that platforms often adopt approaches reminiscent of criminal justice systems, focusing on ‘bad actors’ to justify their content moderation efforts. However, the precise definition of safety and harmful content remains elusive and subject to interpretation. Scharlach et al. (2024) further explore the concept of safety in their study of platform values promoted in platform policies, defining safety as ‘freedom from danger or risk’. Importantly, they note that the meaning of safety is often left ambiguous, echoing the observations of DeCook et al. (2022), who argue that platforms treat harm as a flexible concept, adapting their definitions to what they deem relevant offences. These offences, or ‘trigger events’ (Siapera et al., 2018, p. 39), are typically non-specific occurrences tied to political, social and cultural issues that can lead to a surge in negative comments.
To combat harmful content, platforms have gradually implemented various strategies, including real-name policies (Gillett et al., 2022) and sophisticated content moderation techniques (Gillespie, 2018). These measures, often justified as safety initiatives, enable platforms to adjust their moderation approaches in response to shifts in public opinion, while also maintaining an environment conducive to content production (Viejo Otero, 2025). In this context, safety is not merely a benevolent value aimed at protecting users, but rather a strategic instrument of platform governance.
However, social media platforms associated with the Alt-Right seem to disregard the benefits of such a controlled environment. Whether for differentiation in the digital market or for ideological reasons, Alt Tech platforms often position themselves as bastions of unrestricted free speech and the free flow of content, triggering a transformation, or change of meaning, in the value of safety promoted by mainstream platform companies. It therefore becomes increasingly important to critically examine the policies of these platforms and, more specifically, to analyse the ongoing tension between mainstream and Alt Tech platforms as both a discursive battle and a subject of platform governance studies.
Alt Tech's challenge to the value of safety
Alt Tech's challenge to the value of safety is multifaceted. First, Alt Tech platforms prioritise free speech over safety. Even as Alt Tech platforms place freedom of expression at their centre, they pay close attention to the notion of safety and actively reject it. For instance, messenger apps like Telegram appear to deliberately prioritise free speech over safety concerns. As Watkin and Conway (2022) observe, ‘these platforms [such as Telegram] have the tendency to believe that free speech is the ultimate safety’ (p. 14). This ethos is reminiscent of the 1990s US campus hate speech debates, where advocates argued for the right to ‘speak back’ or ‘counteract’ as a form of self-defence against hateful and harmful utterances (Shiell, 2009). Second, Alt Tech platforms emerge as a reaction to mainstream moderation practices: the opposition to safety can also be found in arguments around deplatforming. Deplatforming is a moderation practice that involves banning groups or individuals deemed a threat, often for security reasons rather than solely political motivations.
For instance, Rogers (2020) notes that the reasons cited by Facebook and Instagram for banning figures such as Yiannopoulos, Jones, Laura Loomer and Paul Joseph Watson encompassed accusations of organised crime, inciting hate, or orchestrating violence – all results of ideas about safety and security. Moreover, prominent individuals and groups deplatformed from mainstream platforms have responded by creating or hosting their own alternative platforms. While these new platforms may not explicitly align with Far-Right ideologies, they nonetheless position themselves as champions of free speech. Consequently, they often host a wide array of content associated with the Alt-Right movement, including nativism, anti-woke arguments and negationism (Donovan et al., 2019; Siapera, 2023).
Third, Alt Tech platforms conceptualise safety differently from mainstream platforms: crucially, these platforms have emerged not merely as alternatives to mainstream governance models but as direct reactions against them. A prime example of this reactionary emergence is Truth Social, founded by recently re-elected US President Donald Trump. Following Trump's suspension from Twitter in January 2021 for violating the platform's ‘Glorification of Violence’ policy (Twitter Inc., 2021), Truth Social was launched in February 2022. It explicitly positions itself as a bulwark against the ‘tyranny of Big Tech’ (Hawley, 2021), emphasising minimal content moderation and unrestricted speech (Trump Media & Technology Group, 2022), which aligns with the cyberlibertarian ideals of early Internet culture (Barlow, 1996). As Jasser et al. (2023) argue, these platforms tend to promote cultures of anonymity and advocate for minimal investment in content moderation, in stark contrast to their mainstream counterparts. This divergence raises critical questions about how Alt Tech platforms conceptualise and implement user safety.
In light of the above, our study aims to address these concerns by systematically and deeply examining the discourse that Alt Tech platforms have constructed around safety. We pursue this goal through the research question stated above.
Approach and methods
This article analyses eight alternative technology platforms’ community guidelines and mission statements: Parler, Truth Social, Gab, Rumble, BitChute, Odysee, Gettr and Minds. These platforms were selected based on multiple criteria. Firstly, we considered their prominence in scholarly literature on the Alt-Right, drawing from works such as Siapera (2023). Secondly, we referenced rankings of monthly active users (Statista, 2024) to ensure we included platforms with significant user bases. A crucial selection criterion was the platforms’ relatively recent establishment: the platforms in our study were founded between 2013 and 2022 (Truth Social, 2024; BitChute, 2024; Gettr, 2024; Odysee, 2024; Gab, 2024; Parler, 2018, 2024; Rumble, 2024; Minds, 2024). This inception time frame is significant, as it indicates that these platforms were created with the benefit of observing and learning from the experiences and challenges faced by mainstream social media platforms, as well as the mainstream ethos and modes of governing users. Finally, information about these Alt Tech platforms is often fragmented and inconsistent (Buckley and Schafer, 2022). To address this limitation, we supplemented our primary sources with systematic explorations of these platforms, building on methodologies from both current and previous analyses (see Light et al., 2018).
In summary, selecting these specific platforms enables us to analyse how alternative social media entrants have positioned themselves regarding content moderation and user safety. Their relatively recent establishment allows them to learn from the successes and failures of mainstream platforms, potentially shaping their approach to free speech, content moderation and user safety.
Data collection
We adopted an inductive methodological approach that considers the location of documents to be as informative as the analysis of their content. Drawing on Prior's (2008) work, we hold that a document should be examined both for its substantive information and for the context of its discovery. For instance, a document stored in a file-room drawer holds a different symbolic value compared to a widely accessible newspaper article. Prior's method emphasises the importance of understanding the symbolic location of documents in relation to the broader context of their site and surrounding elements, as these factors provide insight into a document's significance (Prior, 2008). Accordingly, during data collection, we captured screenshots and made detailed annotations of the documents to reflect their symbolic value. Our Document Location Analysis, informed by previous research such as van Dijck (2013) and Gillespie (2018), demonstrated the importance that Alt Tech platforms give to their Community Guidelines or sets of enforcements. Previous research by van Dijck (2013) highlighted that the control of user behaviours could often be found within the Terms of Service. However, as platforms evolved and their governance mechanisms became more sophisticated, research often focused on the emergence of a specific governing technique: Community Guidelines. Gillespie (2018) described these Community Guidelines as central to mainstream platform governance, often prominently displayed, written in a friendly and approachable tone, and easy to access. However, the experience of locating our data for analysis showed us that Alt Tech platforms do not align with Gillespie's observations about the visibility and accessibility of mainstream Community Guidelines.
Instead, we found that Alt Tech platforms tend to bury their behavioural policies within the labyrinth of their Terms of Service, more closely mirroring the patterns observed by van Dijck (2013) than the centrality that Gillespie (2018) observed for Community Guidelines. Indeed, this variation in the location of Community Guidelines, or behavioural policies, is informative in itself. Alt Tech platforms may assign a different symbolic value to these policies, particularly concerning the prioritisation of user safety. This disparity in approach underscores the need to closely examine how Alt Tech platforms conceptualise and operationalise user safety within their governance frameworks.
The present study has also been complemented with mission statements when available (Katzenbach et al., 2023). According to Katzenbach and colleagues, the elusive nature of a platform's Community Guidelines raises the question of whether these policies are solely based on company definitions or also encompass rules found elsewhere on the site; their research concludes that help pages and other auxiliary sections are integral to the Community Guidelines. Our investigation, therefore, extends beyond Community Guidelines to include mission statements, offering insight into each platform's unique approach to safety.
Retrieval of community guidelines
Our systematic investigation of Alt Tech platforms’ public-facing policies employed a consistent methodological approach across all examined platforms. We utilised private browsing modes, primarily employing Firefox, with occasional use of Chrome, to ensure uniformity in data collection and minimise potential biases from personalised content.
We followed a standardised protocol, incorporating elements of the walkthrough method (Light et al., 2018) and document location analysis (Prior, 2008), which involved: initial platform identification via search engine queries; systematic navigation of platform websites, following a step-by-step walkthrough; identification and description of the location of relevant policy documents within each site; documentation of document locations through screenshots, adhering to location analysis principles; and, finally, manual collection and verbatim extraction of policy text for analysis. Together, this methodology revealed significant variations in the accessibility and presentation of community guidelines and content policies across platforms. For instance, Parler's guidelines were available in search results and prominently linked on its landing page, whereas Gab's content standards were embedded within its Terms of Service, requiring more extensive navigation.
Notably, we observed a spectrum of approaches to the location of Community Standards: direct accessibility (e.g. Parler, BitChute), nested in the Help Center (e.g. Truth Social), nested in the Terms of Service (e.g. Gettr), and integrated within the wording of the Terms of Service (e.g. Rumble, Gab). The nomenclature for these policies also varied, with terms such as ‘Community Guidelines’, ‘Content Standards’, ‘Content Policies’ (e.g. Minds), or even ‘Declaration of Indifference: Community Guidelines’ (Odysee, 2024) used across different platforms.
Data analysis
In terms of data analysis, we followed Scharlach et al.’s (2024) approach to analysing values in platform policies, combining a top-down approach (frequency analysis) with a bottom-up reflective thematic analysis (Braun and Clarke, 2019). The aim was twofold: first, to explore how this cohort of platforms presents the notion of safety; second, to determine what readings we can extract from that presentation.
Frequency analysis
We employed a multi-stage analytical process that combined automated tools with human expertise to understand how the selected Alt Tech platforms present themselves concerning safety. Our approach began with a frequency analysis using WordStat to quantify the occurrence of safety-related terms across the platforms’ policy documents and guidelines. This initial step provided a robust dataset of relevant terms and their prevalence. To refine our analysis, we then organised a dedicated workshop at our institution, bringing together our research team and a group of research assistants. During this collaborative session, we used WordStat to perform the frequency analysis and a manual run-through to combine words with the same roots, based on consensus coding (see Scharlach et al., 2024). This step was crucial in consolidating our data and ensuring we captured the full semantic range of safety-related concepts, even when expressed through different word forms or closely related terms.
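For readers unfamiliar with this kind of procedure, the core operation – counting safety-related terms and collapsing different word forms onto shared roots – can be sketched in a few lines of code. This is an illustrative sketch only, not the study's actual pipeline: the study used WordStat and manual consensus coding, and the term list, root groupings and sample text below are hypothetical.

```python
# Illustrative sketch of a frequency analysis with root consolidation.
# The root groupings and sample policy text are hypothetical examples,
# standing in for the study's WordStat output and consensus coding.
import re
from collections import Counter

# Surface forms collapsed onto a shared root, mirroring the manual step
# of combining words with the same roots.
ROOT_GROUPS = {
    "safe": ["safe", "safety", "safely"],
    "harm": ["harm", "harms", "harmful"],
    "unlawful": ["unlawful", "illegal"],
    "moderate": ["moderation", "moderate", "moderated"],
}

def count_roots(text: str) -> Counter:
    """Count occurrences of each root group in a policy text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for root, forms in ROOT_GROUPS.items():
        counts[root] = sum(tokens.count(form) for form in forms)
    return counts

# Toy text standing in for a platform's community guidelines.
sample_policy = (
    "Unlawful content is prohibited. Moderation exists to prevent harm; "
    "harmful and illegal material will be removed to keep users safe."
)
print(count_roots(sample_policy))
```

Running this over each platform's policy documents yields per-root counts comparable across platforms; in the study, the equivalent consolidation step was performed manually, with groupings agreed by consensus rather than fixed in advance.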
Qualitative data analysis
Following the automated analysis, we proceeded to our study's most nuanced and interpretative phase: manually clustering terms connected to safety. This process involved intense discussion and deliberation among our research team and assistants. We meticulously examined each term in context, debating its relevance and categorisation. This human-driven approach allowed us to capture subtle nuances and implicit references to safety that automated processes alone cannot detect.
The resulting clusters gave us a rich, contextually grounded understanding of how these Alt Tech platforms both present and understand safety and the societal implications of their approach to safety. Finally, to further deepen our understanding and interpret these findings, the authors applied Reflective Thematic Analysis as outlined by Braun and Clarke (2019). This methodological approach enabled us to move beyond the surface-level insights from frequency analysis and delve into our data's underlying meanings and implications. We followed the six steps recommended by the authors, with particular emphasis on familiarising ourselves with the data and engaging in reflective analysis by constantly asking why the selected cohort of platforms referred to safety or the lack of it in some particular ways. In other words, we took a step back from the quantitative data to reinterpret our findings, aiming to uncover what safety is for Alt Tech platforms.
Results
Conceptualising safety by what is not allowed
Our study conducted a comprehensive analysis of the community guidelines of Alt Tech platforms, with a particular focus on the terminology related to safety and user protection. Through frequency analysis, we identified patterns in how these platforms address safety concerns explicitly and implicitly. The findings reveal a distinct approach to safety on Alt Tech platforms compared to mainstream platforms (Scharlach et al., 2024; Gillett et al., 2022). Specifically, Alt Tech platforms tend to conceptualise ‘safety’ by emphasising legal requirements and delineating prohibited actions (see Table 1).
The community guidelines of the eight platforms are notably concise, with the most prominent words connected to potentially harmful actions. ‘Safety’ appears 17 times across the analysed documents, while ‘harm’ appears 14 times. Additionally, related words such as ‘unlawful’ (13 occurrences), ‘abuse’ (17 occurrences) and ‘violence’ (27 occurrences) delineate what users must not do on the platforms. While not directly using the word ‘safe’, these terms relate to user safety and platform integrity. Furthermore, terms like ‘illegal’ (26 occurrences), ‘prohibited’ (23 occurrences) and ‘moderation’ (15 occurrences) suggest that safety concerns are often addressed through the lens of content moderation and policy enforcement.
This indicates a tendency among Alt Tech platforms to frame safety in terms of what is not allowed, grounded in the legal frameworks that apply to them and enforced through mechanisms such as self-governance, flagging and reporting of illegal content and, in the specific case of Minds, an external jury of peer users whose decisions follow the Santa Clara Principles. Our frequency analysis reveals a constellation of related terms that collectively paint a more comprehensive picture of how these platforms approach user protection. The only other prominent words in this list are ‘sexual’, concerning forbidden content, and ‘engagement’ (32 occurrences), both mainly emphasised by the platform Gettr. Our word frequency analysis reveals that safety is not a value that these platforms adopt as a mission, as occurred with mainstream platforms. Instead, safety appears under the rubric of legal obligation.
To explore how Alt Tech platforms understand safety beyond legal jurisdictions, we conducted an in-depth analysis of the documents through Reflective Thematic Analysis (Braun and Clarke, 2019). This process revealed three fundamental themes through which Alt Tech platforms conceptualise safety, which we delve into below: (1) by opposition – you are free, and what you do is not our responsibility; (2) by reconstructing safety on mainstream platforms as censorship, and therefore offering a censorship-free rather than hostility-free environment; and (3) by self-governance, understood as self-defence. Fundamentally, these three themes suggest that the cohort of platforms we have analysed does not aim to improve safety practices, but rather to reject them.
Opposing safety: You are free; and what you do is not our responsibility
A key tenet underlying the Alt Tech ecosystem is a staunch commitment to freedom of expression absolutism. This principle is rooted in the First Amendment of the US Constitution and in the belief that the right to freedom of expression is absolute and encompasses all freedoms. According to White, the First Amendment is for the individual and only the individual, as it holds that singular beings are ‘capable of giving individual meaning to their life experiences. It meant that humans, individually, had the potential - the freedom to alter those experiences’ (1996, p. 301). Indeed, as the following lines illustrate, the Community Guidelines that we have analysed consistently invoke the marketplace of ideas, appealing to individualism, the First Amendment and freedom of expression above all. Truth Social, for example, states: ‘We do not marshal your feed; we do not pretend to be qualified to do so. Our team believes only you are qualified to monitor your feed. Our objective has always been to provide a social platform in the spirit of the First Amendment to the United States Constitution, for free thinking and the ability to share ideas freely’. (Truth Social, 2024)
BitChute takes a similar stance, asserting that ‘As an individual, you are responsible for your own actions, and you will be held accountable for any content that you add to the BitChute platform. You own and are legally responsible for all the content you add to the BitChute platform’. (BitChute, 2024). Gettr echoes this commitment, declaring that it is ‘the platform for free speech, and we've grown rapidly as a community. We're committed to remaining viewpoint-neutral and bias-free’ (Gettr, 2024). Odysee takes an even more hands-off approach, stating plainly: ‘We don't care about what you publish, livestream, comment, or include in channel descriptions for the most part’ (Odysee, 2024). Gab frames its position within the context of the First Amendment, stating that ‘As a general rule, written expression that is protected political, religious, symbolic or commercial speech under the First Amendment of the US Constitution will be allowed on the Website’. The company further asserts that: ‘The foundation of a free society requires people to peacefully settle their differences through dialogue and debate. Gab exists to promote the free flow of information online. We will not be tracking hate speech providing any mechanism to report it specifically, nor responding to anyone who reports it, because the Constitution and Federal law protect our right to do so and forbid New York from legislating otherwise’. (Gab, 2024)
In contrast to previous research on platform values (Scharlach et al., 2024), some platforms, such as Gab, do not just put the responsibility for the promotion of values on individual users but explicitly on governmental legislation. Parler also follows this position, stating that it ‘honor[s] the ability of all users to freely express themselves without interference from oppressive censorship or manipulation’ (Parler, 2024). The platform further commits to ‘remain[ing] viewpoint neutral and objective, providing tools for users to manage content exposure’. Minds, one of the largest decentralised social media networks, declares that ‘We believe freedom of speech and open dialogue are the solution to these polarized times’ (Minds, 2024).
Alt Tech platforms’ commitment to freedom of expression absolutism reflects a profound adherence to individualism and the First Amendment of the US Constitution. Platforms like Parler consistently emphasise that users are solely responsible for monitoring and managing their content. This ethos highlights personal accountability and minimal interference. For example, Odysee (2024) titles its Community Guidelines ‘Declaration of Indifference’, underlining the belief that unrestricted freedom of expression, with minimal to no platform intervention in content, is essential for society. By prioritising minimal intervention, Alt Tech platforms position themselves as bastions of free speech, challenging the more regulated approaches of mainstream social media. They conceptualise safety through the lens of individual freedom, aiming to foster a digital environment where user autonomy is paramount.
Safety as censorship: Censorship-free environment
A prominent theme that emerges from our analysis of Alt Tech platforms is their conceptualisation of traditional content moderation practices as a form of ‘censorship’. These platforms explicitly reject the notion of user safety as a primary concern, instead framing it as an infringement on the principles of free expression. Truth Social, for instance, states that it will not ‘determine what content will be removed or filtered, or whose account will be eliminated, based on the point of view shared within the content at issue’ (2024). The platform emphasises that its policies are ‘viewpoint-neutral and fully inclusive,’ suggesting that any content removal would violate free speech. Similarly, Gettr (2024) declares it is ‘committed to remaining viewpoint-neutral and bias-free, and we all want to express our views without fear of being “cancelled”’. This statement directly equates content moderation with suppressing certain viewpoints, which the platform rejects. Gab takes an even more explicit stance: ‘Since our inception in 2016, we have been committed to creating spaces where communication thrives, unrestricted by the pro-First Amendment Terms of Service. As larger platforms succumb to political and international pressures, narrowing the scope of American free speech, we persist in our dedication to freedom of expression’. (Gab, 2024)
The platform further asserts that it ‘will not be tracking “hate speech”, providing any mechanism to report it specifically, nor responding to anyone who reports it, because the Constitution and Federal law protect our right to do so’. Parler (2024) also frames its ‘Sensitivity Filter’ as a means of ‘upholding the tenets of free speech while also protecting users from encountering content that may be inappropriate or distressing’. The implication is that the content moderation practised by commercial social media platforms, and required by governmental bodies to protect users, is itself a violation of free expression. Rumble, meanwhile, maintains that it has ‘the sole discretion to decide whether Content or material is permitted on the Rumble Service’ (Rumble, 2024), but does not appear to have clear guidelines or mechanisms for enforcing this discretion, likely to avoid the perception of censorship.
By constructing safety as an infringement on free speech, the Alt Tech platforms position themselves as proponents of a censorship-free alternative to mainstream social media.
Self-governance as self-defence
Given this context of freedom of expression absolutism and the framing of safety as censorship, what strategies do these platforms employ, and how do they address safety? Our analysis reveals that their Community Guidelines express a preference against top-down moderation, viewing it as counterproductive. Indeed, the Alt Tech ecosystem presents a distinctly different approach to user safety. Rejecting the notion of safety as a primary concern, these platforms claim to empower users to govern their own digital experiences, openly positioning self-management as a core tenet of their governance model. This emphasis on self-governance is evident in the platforms’ community guidelines and mission statements.
Truth Social, for instance, provides users with a ‘variety of features and functions – including the ability to mute or block other users, or to mute or block all comments containing terms of your choice – and we encourage you to use these tools whenever you wish to control your content’. The platform states that its ‘preference is to leave choices regarding what is seen and who is heard to each individual person’, avoiding centralised content removal or account elimination (see Truth Social, 2024). BitChute (2024) takes a similar stance, placing the onus of responsibility on content creators, who are ‘obligated to ensure that they do not add content which violates the Prohibited Content or Platform Misuse guidelines’. The platform further encourages its ‘community to have a responsibility to each other to follow these guidelines’, rather than relying on top-down moderation. Odysee (2024) echoes this approach, stating that it ‘encourage[s] our users to take advantage of additional moderation tools available to them and shape their experience on Odysee in a way that aligns with their personal values’. The platform acknowledges that ‘there's no such thing as a one size fits all approach to moderation’, empowering users to curate their own digital environments. Parler, meanwhile, implements a ‘Sensitivity Filter’ on potentially offensive content, allowing users to ‘choose to view the content anyway or continue scrolling past the post in their feed’ (Parler, 2024). This strategy positions the user, rather than the platform enforcing a universal set of rules, as the primary arbiter of what they are willing to engage with.
Discussion
Performing and rejecting safety
In this study, we have conducted a comprehensive empirical examination of the rise of Alt Tech platforms and their unique approach to safety. By analysing the Community Guidelines and Missions of eight key platforms – Parler, Truth Social, Gab, Rumble, BitChute, Odysee, Gettr, and Minds – using a mixed-method approach that includes Document Location Analysis (Prior, 2008), Word Frequency Analysis and Thematic Analysis (Braun and Clarke, 2019), we have explored how these platforms conceptualise and present the value of safety. Our frequency analysis also shows that Alt Tech platforms
Moreover, the concept of ‘rejection’ emerges as the defining characteristic of how Alt Tech platforms conceptualise safety. Our thematic analysis reveals a deliberate and systematic deconstruction of mainstream social media practices. These newer, alternative platforms position themselves as bastions of free speech in direct opposition to mainstream platforms’ moderation policies, offering a so-called ‘censorship-free’ alternative.
Upon closer inspection, at the heart of this approach lies a reimagining of the ideal user, which aligns closely with the ‘Alpha user’ archetype described in manosphere literature (Ging, 2019). This self-reliant individual, capable of navigating the digital realm without platform intervention and without the need to curate their feed, stands in stark contrast to the more protected user envisioned by, for instance, European regulatory bodies.
Consequently, when ‘safety’ is constructed within the Community Guidelines of Alt Tech platforms, it is done so in opposition to the ‘pernicious censorship’ (Gab, 2024) that mainstream platforms have implemented. From this perspective, the moderation systems enacted by major social media sites, whose rationale is rooted in user safety, are reframed by Alt Tech as unacceptable restrictions on free speech. By constructing safety as a form of censorship, this cohort of Alt Tech platforms appears to be revitalising the ‘glory’ of the early social media days (Kopps and Katzenbach, 2022), reimagining safety as an individualistic self-governance strategy for digitally self-reliant users.
Ultimately, this approach represents not just a different business model, but an active reaction and rejection of mainstream governing practices and an example of how the meaning of the value of safety is contested territory in the platform landscape.
Conclusion
We concluded this article on 8 January 2025, a day after Mark Zuckerberg announced the suspension of Meta's Fact-Checking Programme, a decision that will have long-lasting repercussions for the study of platform governance. Ironically (or rather, unfortunately), the findings of our study, especially the construction of safety-oriented moderation as censorship, are now incorporated into the principles of Meta (see Zuckerberg, January, 2025).
Our study illuminates how Alt Tech platforms present and understand safety. The mixed-methods analysis reveals that these platforms are not merely offering an alternative to mainstream social media; they fundamentally challenge established platform governance and user protection norms. This approach manifests through three key strategies: first, advocating for unrestricted freedom of expression, positioning it as paramount to user safety; second, reframing safety-oriented governance as censorship, thereby rejecting mainstream moderation practices; and third, promoting an ideal of the digitally self-reliant user.
While this study provides critical insights into the evolving landscape of platforms and their rejection of mainstream safety norms, several limitations should be acknowledged. First, our analysis is confined to the US versions of the Community Guidelines and Missions of eight platforms, which do not capture the full diversity of the Alt Tech ecosystem. Future research should examine non-US policies of Alt Tech platforms to identify potential differences and commonalities. It could also explore how these platforms articulate safety in relation to their political ideologies and values, especially in light of US political developments, as well as their critiques of mainstream social media content moderation. Additionally, expanding the scope to include more platforms, or adopting a comparative approach that analyses Alt Tech and mainstream platforms side by side, could offer deeper insights into the evolving concept of safety within digital ecosystems, as well as into the framing of moderation as censorship. Considering the drastic changes in content moderation announced by Zuckerberg on 7 January 2025, platform governance research must examine how the free speech ideals and censorship-free environments of the early Internet relate to our present, ever-evolving Internet culture. Researchers could also investigate the impact of these platforms’ safety approaches on different user demographics and explore the broader implications for online discourse and public safety. By addressing these limitations, future research could deepen our understanding of how Alt Tech platforms shape the digital public sphere and contribute to the ongoing debates around free speech, censorship and user protection.
Notwithstanding these limitations, our findings address previous calls for future policy research on alternative platforms (Buckley and Schafer, 2022; Gillett et al., 2022) and have significant implications for the future of online content moderation, user protection and the broader digital public sphere. By rejecting the safety-first approach of mainstream platforms, Alt Tech sites are effectively creating parallel online ecosystems with markedly different norms and expectations. This development raises critical questions about the fragmentation of online discourse, the potential for increased exposure to harmful content, and the long-term societal impacts of divergent approaches to platform governance.
Footnotes
Acknowledgments
The authors wish to express their sincere gratitude to the Platform Governance, Media, and Technology (ZeMKI) student assistants, Alessa Eggeling and Andrea Stefania Roca Rubio, for their invaluable contributions and dedicated support in the timely and meticulous data collection for this study.
Data availability statement
The data that support the findings of this study are openly available
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Ethical approval and informed consent statements
This study was conducted in accordance with the ethical standards of our institution.
Notes
Appendix
Frequency analysis.
| Word* | PARLER | BitChute | GAB | Gettr | MINDS | Odysee | RUMBLE | Truth Social | Sum (all) |
|---|---|---|---|---|---|---|---|---|---|
|  | 1 | 0 | 0 | 30 | 0 | 0 | 1 | 0 | 32 |
|  | 4 | 4 | 4 | 10 | 3 | 0 | 2 | 3 | 30 |
|  | 1 | 6 | 0 | 8 | 2 | 5 | 2 | 3 | 27 |
|  | 3 | 1 | 1 | 10 | 1 | 3 | 1 | 6 | 26 |
|  | 3 | 1 | 0 | 3 | 10 | 4 | 0 | 4 | 25 |
|  | 12 | 6 | 0 | 3 | 1 | 0 | 1 | 0 | 23 |
|  | 1 | 12 | 0 | 3 | 1 | 1 | 0 | 1 | 19 |
|  | 7 | 0 | 10 | 1 | 0 | 0 | 0 | 0 | 18 |
|  | 0 | 3 | 1 | 11 | 0 | 1 | 0 | 1 | 17 |
|  | 0 | 3 | 4 | 2 | 1 | 0 | 2 | 4 | 16 |
|  | 2 | 0 | 2 | 4 | 3 | 3 | 1 | 0 | 15 |
|  | 1 | 0 | 1 | 6 | 0 | 4 | 0 | 2 | 14 |
|  | 1 | 2 | 0 | 8 | 0 | 1 | 2 | 0 | 14 |
|  | 0 | 6 | 0 | 0 | 8 | 0 | 0 | 0 | 14 |
|  | 2 | 12 | 0 | 0 | 0 | 0 | 0 | 0 | 14 |
|  | 1 | 1 | 5 | 1 | 2 | 0 | 2 | 1 | 13 |
|  | 0 | 2 | 1 | 1 | 2 | 2 | 3 | 0 | 11 |
|  | 2 | 4 | 0 | 5 | 0 | 0 | 0 | 0 | 11 |
|  | 0 | 1 | 0 | 3 | 0 | 1 | 6 | 0 | 11 |
|  | 1 | 4 | 2 | 3 | 0 | 0 | 1 | 0 | 11 |
|  | 0 | 1 | 3 | 5 | 1 | 0 | 1 | 0 | 11 |
Note: Only safety-related words that appeared more than 10 times were included. Ambiguous words such as ‘content’ and ‘community’ were removed from the list (see also Scharlach et al., 2024).
* Words from the same root were combined in these entries (e.g. safe and safety).
