Abstract
Platforms are increasingly part of everyday life, but they remain opaque and impenetrable spaces for most users. To manage life on platforms, users thus need to engage with sense-making practices that help them understand and navigate online spaces. This paper studies how far-right populist activists interpret and navigate their presence on Twitter, using the concept of platform folklore, that is, unofficial and collective narratives aimed at relieving feelings of uncertainty associated with the opacity of platforms. The data consists of 20 life-history interviews with Swedish and American Twitter activists from right-wing populist communities, who all participated in populist Twitter debates. The analysis shows how platform folklore is constructed not only based on observations of content moderation, as emphasized in previous research, but also in correspondence with the political ideology and identities of the activists. This led them to interpret Twitter as a leftist organization that disfavored them, which stopped them from developing strategies to reach the goals they held as central to their Twitter activism. The paper concludes by discussing the role of emotions and ideology in platform folklore within political communities and suggesting directions for future research.
Introduction
Inhabiting digital spaces means our lives are, to a growing extent, mediated through platforms (Bucher, 2017; DeCook and Forestal, 2023; Lomborg and Kapsch, 2020: 74). Platforms have become increasingly important for political participation, and in recent years, growing academic and political concern has emphasized the role of platforms in facilitating radicalization, polarization, and other harmful behavior (Are, 2023; Kalsnes and Ihlebæk, 2021; Krämer and Otto, 2024; Mahl et al., 2023; Van Schenck, 2023). Platforms are framed as breeding grounds for undemocratic attitudes, facilitated by “large-scale social media infrastructures that organize and steer people toward certain epistemologies by way of their affective responses” (DeCook and Forestal, 2023: 635). To counteract these harmful tendencies, and for commercial reasons, platforms employ various content moderation practices, both automated and human-operated, through which hate speech or graphic violent or sexual content is flagged or removed (Gorwa et al., 2020: 3). Content moderation was brought into public awareness after Twitter users claimed the platform was targeting and silencing conservative voices (Gorwa et al., 2020; Savolainen, 2022). Indeed, previous research suggests that conservative users are restricted to a larger extent than other political groups; however, they also break community guidelines more often (Moran et al., 2022). Additionally, there are several alternative platforms where far-right ideas are allowed to flourish (Dowling, 2023).
As political participation becomes increasingly digitalized, activists must learn to adapt to both explicit and implicit rules of platforms. Political speech is often targeted and flagged by automated moderation systems, making political participation online a precarious endeavor (Gorwa et al., 2020). Users thus have to navigate and adapt to these moderation practices to remain on the platforms. However, digital platforms are mysterious spaces that few users have the resources to fully understand, owing to the opaque nature of platform infrastructure and regulation practices (Are, 2022, 2024; Gorwa et al., 2020; Lomborg and Kapsch, 2020; Moran et al., 2022). Because of the opacity of platform regulations, platforms have epistemic authority over how they and their algorithms work. When platforms deny the exercise of content moderation (Savolainen, 2022), this creates a clash between the official knowledge communicated by the platform and the lived experience of its users. To relieve this feeling of uncertainty, people and communities engage in different kinds of sense-making practices to “gain a full or substantive understanding of the online spaces they inhabit” (Moran et al., 2022). In this paper, I will use the concept of platform folklore to explore how far-right populist activists interpret their experiences on platforms, focusing particularly on how their understanding of platforms and their regulations relates to their political beliefs. Platform folklore refers to sense-making practices about how platforms work, consisting of non-authoritative, non-professional knowledge, spread through communities by (digital) word of mouth, helping people make sense of experiences on platforms.
By using platform folklore as a theoretical lens to understand how people interpret and navigate life on platforms, this paper aligns with previous research focusing not on how platforms work, but on how they are experienced (Bucher, 2017; Cotter, 2023; DeVito et al., 2018; Huang et al., 2022; Moran et al., 2022; Savolainen, 2022). However, most previous research has focused on ordinary internet users (Bucher, 2017; Lomborg and Kapsch, 2020) or influencers (Bishop, 2019; Savolainen, 2022) rather than political communities, which has implications for how these experiences have been theorized. Interview data from political communities is also relatively uncommon in the field. This paper addresses this gap in the literature by analyzing interviews with far-right populist Twitter activists in Sweden and the United States. The definition of populism used here emphasizes the tendency to view society as “ultimately separated into two homogenous and antagonistic groups, ‘the pure people’ versus ‘the corrupt elite’” (Mudde, 2004: 543), adding the right-wing orientation toward anti-immigrant sentiments on nativist grounds (Mudde, 2014). Populism is often framed as capitalizing on emotions of resentment, anger, or frustration (Freistein et al., 2022; Hochschild, 2016; Leser and Spissinger, 2020). Populism is also often characterized by anti-pluralist and anti-expertise sentiments, but as Marwick and Partin (2022) argue, populism is not necessarily anti-expertise. Rather, it questions whose expertise counts, and favors alternative ways of constructing expert knowledge (Kattumana, 2023; Marwick and Partin, 2022). The study also contributes to the field by drawing upon unique data collected through interviews with people active in far-right populist communities on Twitter.
This allows a deeper investigation into how activists make sense of their experiences of online activism, and how they place their interpretations within larger systems of political beliefs, which in turn contributes to our understanding of how platform folklore works in political communities. The research question this paper addresses is: What is the role of platform folklore in reinforcing or challenging political ideology and political identity in populist Twitter communities?
This paper proceeds with four main sections. First, the conceptual and theoretical framework is outlined, drawing upon previous literature on platforms, platform regulations, and users’ efforts to navigate them. Thereafter, the methodological approach of the project is presented. In the following section, I use the lens of platform folklore to analyze the ways far-right populist activists interpret their experiences on Twitter, focusing in particular on how their interpretations relate to identity and ideology, and how they guide strategies and action. I conclude by discussing implications for future research.
Living with platforms: Navigating uncertainty through folklore
As a response to the uncertainties faced by users while navigating platforms, various sense-making practices arise. In the literature, this kind of sense-making is often referred to as folk theories (DeVito et al., 2018; Huang et al., 2022; Moran et al., 2022), algorithmic imaginaries (Bucher, 2017), algorithmic folklore (de Seta, 2024; Savolainen, 2022) or algorithmic gossip (Bishop, 2019). In this paper, I will use the term platform folklore, for a few reasons. First, “theorizing” implies some degree of systematic observation not present for the activists in this study. Folklore, instead, emphasizes the non-professional and non-authoritative way in which this kind of knowledge is created and allows for a greater emphasis on cultural dimensions. Second, the participants in this study used the terms algorithms and platforms interchangeably. Platform, I argue, is a more inclusive concept than the narrower, and technologically more complicated, algorithm, and I therefore use the term platform in a way that includes algorithms as part of platforms. Third, platforms are talked about as social actors with interests by the participants of this study, in a way that algorithms are not; therefore, the term platform contains political dimensions of relevance to this study (see Gillespie, 2010 for a discussion about the politics of the term platform).
Traditionally, folklore has been described as “myths, legends, folktales [. . .] whose medium is the spoken word” (Definitions of Folklore, 1996: 256), characterized by its “alive and shifting” nature and its position of opposition against “serial number, stamped product, and the patented standard” (Definitions of Folklore, 1996: 255). Folklore reflects its time and place, is rooted in group culture and experiences (Definitions of Folklore, 1996: 257–258), and has historically sometimes been framed as belonging to “uncivilized,” “illiterate,” and “primitive” communities (Definitions of Folklore, 1996: 257). However, with digitalization, folklore scholars have adapted to a new reality, now emphasizing particular socioeconomic or national groups less, and the participatory form of the digital more (Blank, 2014; Bronner, 2009; de Seta, 2020). There are different strands of digital folkloristics, focusing on either folklore in the digital, or folklore about the digital. This study, focusing on the sense-making practices employed by online activists to understand and navigate their online lives, falls into the latter. Just like folklore traditionally has touched upon the “wonders of the invisible world” (Definitions of Folklore, 1996: 257), folklore about the digital also functions as managing ambiguity and anxiety (Bronner, 2009; de Seta, 2024; Yogarajah, 2022).
I use the concept of platform folklore, which refers to a body of non-authoritative and non-professional knowledge, beliefs, or myths (de Seta, 2024) spread between individuals by (digital) word of mouth, in contrast to canonized and legitimized knowledge (Savolainen, 2022), aiming specifically at explaining how platforms work. Platform folklore is dynamic, collective, and situated in political and cultural contexts. It serves as socially constructed “meaning systems and guiding framework for interpretation and prediction” (Huang et al., 2022: 2) of otherwise inexplicable experiences, helping users manage emotions and uncertainty (de Seta, 2024; Moran et al., 2022; Yogarajah, 2022). For example, previous research points to how users, when observing changes in reach and interaction (getting fewer followers, views, likes, or comments), cannot know why this is happening, but they can use platform folklore to try to understand what behavior leads to such restrictions (Bucher, 2017; Cotter, 2023; Moran et al., 2022). This is described by Huang and colleagues, who in their study on folk theories about dating-app algorithms argue that “people’s beliefs about the algorithm may influence [. . .] behaviors and outcomes more so than the actual algorithms themselves” (Huang et al., 2022: 2). Folklore is described as dynamic and flexible in nature (DeVito et al., 2018; Savolainen, 2022), and this dynamic nature is reinforced by the way platforms work: as Moran and colleagues argue, “the speed, creativity, and flexibility of folk theorization around social media match that of algorithmic change, meaning that additional content moderation measures spur new theories and tactics for circumvention” (Moran et al., 2022: 9).
Similarly, the opportunities for collecting, archiving and sharing data within political communities also shape how context-specific knowledge is formed: for example, in their study on QAnon activists, Marwick and Partin (2022) show how people use technological affordances of archival infrastructures to gather evidence of conspiracy and, subsequently, construct their populist expertise.
Platform affordances and action
In the field of platform studies, there has been what Plantin and Punathambekar (2019) call an infrastructural turn emphasizing the technological underpinnings and structures of platforms, that is, their affordances. Affordances theory is also common in folklore studies, highlighting “interactions between humans and their environments, how humans are shaped by their surroundings, and how they in turn seek to shape their surroundings” (Flinterud, 2023: 444). The concept of platform affordances, described by Caliandro and Anselmi (2021: 3) as a “set of contextual constraints and props that shape the usage of technology,” is central to understanding how platform infrastructure shapes user behavior (Are, 2023; boyd, 2011; Kakavand, 2023; Kalsnes and Ihlebæk, 2021). For example, the affordances of TikTok promote imitation and repetition, as much content is produced through the usage of pre-made sound templates. Other platforms, like Twitter, promote reactive behavior as a form of participation in public debates, as the easiest way to use the platform is to retweet, like, or quote tweet popular tweets posted by more followed users (Caliandro and Anselmi, 2021). Affordances also structure how users can form into communities, with Facebook organizing communities in Facebook groups, and Twitter through hashtags or direct-messaging-groups, while other platforms are organized around individual users following each other rather than communities interacting with each other. The concept of platform affordances includes not only the infrastructure of platforms, but also the “users’ perception of their utility” (Caliandro and Anselmi, 2021: 3). This means that users are also somewhat free to creatively use, challenge, or circumvent the affordances (Duffy and Meisner, 2023; Lomborg and Kapsch, 2020). 
For example, activists on Instagram use Instagram-specific technological features, like covering specific words with stickers and emojis, or mixing political content with lifestyle content to create a visually ambiguous profile which they believe will be less detectable for automated moderation systems. Other strategies include using lexical variations or coded language (Åkerlund, 2022; Steen et al., 2023). This shows the dialectical relation between platforms and users, where platform affordances produce strategies of resistance among users.
Another aspect structuring online action is content moderation. Platforms have explicit community guidelines stating approved or prohibited behavior in the form of content moderation practices. For commercial reasons, platform companies have an interest in keeping their platforms free from hate speech, sexual content, and violence to attract advertisers (Gorwa et al., 2020). Therefore, platforms have developed content moderation practices that, through human or machine actors, “identify, match, predict, or classify some piece of content (e.g. text, audio, image, or video) on the basis of its exact properties or general features” (Gorwa et al., 2020: 3). For example, offensive language or violent graphic content can, through these technologies, be identified and removed. Both human-operated and automated content moderation practices have been criticized by scholars and civil society actors: human content moderation because of the straining work environment for moderators, often contracted from the global south; automated content moderation because of its opacity, insensitivity to social justice concerns, and its depoliticization of platform regulations (Gorwa et al., 2020). Automated systems embed a power imbalance between the platform and its users, as they are intentionally opaque and impenetrable (Bucher, 2017; Cotter, 2023; Creech, 2020; Gorwa et al., 2020; Huang et al., 2022; Savolainen, 2022). Additionally, platforms have guidelines that restrict not only prohibited but also merely inappropriate behavior, which users often must decode themselves. This means that users must adapt their behavior to not only explicitly stated, but also imagined, moderation principles to make sure they do not lose their platforms (Cotter, 2023; Duffy and Meisner, 2023).
In this way, both platform affordances and algorithms, as Bucher (2017) argues, “are not just abstract computational processes; they also have the power to enact material realities by shaping social life to various degrees” (p. 40).
The concept of platform affordances leaves room for the creativity and freedom of users but underemphasizes the role of meaning, identity, and emotion. Liao (2023) offers one of the few uses of the affordance perspective where politics and values matter, in her study on how the Chinese platform Weibo spreads and reinforces misogyny through the intersection of platform affordances and what she calls state ideology. Similarly, this paper argues that while materiality matters for how people move in (digital) spaces, we should emphasize how beliefs and meaning shape action in digital settings, which can be done using the term platform folklore. In the case of political communities, especially ones that mobilize around emotion and identity, this is particularly important. When right-wing populist activists engage with platform folklore to understand and navigate their online existence, their folklore is not only reinforced by the technological affordances of “speed, creativity and flexibility” (Moran et al., 2022: 9), but is also constructed in dialogue with their political values. For example, within both the social media economy and populist communities, decentralized and unauthorized sources of information are promoted over official, mainstream ones. Similarly, individuals who are charismatic storytellers are given authority in both contexts (Cocker and Cronin, 2017; Lewis, 2020; Mahl et al., 2023). As Savolainen argues, in conservative communities, the experience of perceived restriction on platforms is interpreted as expressions of something “undisclosed, ‘shady’, and potentially illegitimate,” and it functions as a “discursive gathering point for the articulation of multiple experiences and beliefs of platform governance, united by the feelings of uncertainty and not knowing” (Savolainen, 2022: 1094). 
Thus, platform folklore within right-wing populist communities aims to relieve feelings of uncertainty, similar to how conspiracy theories tune into people’s feelings of fear, uncertainty, and resentment (Ekman, 2022), feelings that are often labeled as negative and separate from rationality (Leser and Spissinger, 2020; Morgan, 2022). Additionally, Moran and colleagues find in their study on vaccine-opposed influencers that they construct narratives about politically motivated big-tech censorship, which allows them to claim and reinforce positions of marginalization and martyrdom (Moran et al., 2022). This points to another function for populist platform folklore, namely to reinforce the “outsider” position that, in populist narratives, grants the messenger credibility and legitimacy and can be weaponized by populist activists or influencers to strengthen their arguments about ruling elites and platforms, as well as signal group belonging (Krämer and Otto, 2024; Moran et al., 2022). Drawing upon the insights from Moran et al. (2022) and Savolainen (2022), this study aims to analyze how platform folklore and political ideas relate to, reinforce, or challenge each other, as well as how platform folklore affects actions, emphasizing platform folklore as politically encoded (Yogarajah, 2022).
Methods and data
This paper builds upon interview data collected within a project exploring the role of emotion, identity, and motivation in far-right populist participation online in Sweden and the United States. To sample participants, three hashtags were identified from which data was scraped, aiming to capture ongoing debates where populist ideas were prominent. The hashtags were #stopthesteal for the US context, and two Swedish hashtags about leaving the EU (#Swexit) and freedom of speech (#yttrandefrihet). The choice of hashtags was motivated by the purpose of the project, aiming to capture debates that contained critique or resentment toward the current political system and its representatives. Despite collecting data from two countries, this paper makes no comparative analysis, as not enough differences were identified in the data. The hashtags were scrutinized to make sure they represented debates relevant to the project. From the Twitter data, 496 profiles were selected based on criteria such as number of followers, time on the platform, and how well their political profile or activities matched the purpose of our project. Participants were contacted via Twitter, both by direct messaging and by responding to tweets. Ultimately, 20 interviews were conducted (11 Swedes and 9 Americans; 3 women and 17 men of different ages), representing a response rate of 4%. The interviews lasted on average 2 hours and 36 minutes. Due to the COVID-19 pandemic, all interviews were conducted on Zoom. The interviews were semi-structured, focusing on people’s political background (visualized through life-history diagrams, see Söderström, 2020), motivations, identity, and emotions. The data was analyzed iteratively by reading each transcript in full and collecting initial reflections and questions.
After finding a pattern of people talking in particular ways about platforms and their experiences of online activism, the following step included a more targeted reading of the transcripts where these themes were in focus. The data does not capture the collective aspect of folklore but instead represents the individualized attempts to navigate the digital world using narratives shared by many.
The low response rate illustrates the recruitment difficulties in this project, which likely relate both to how we contacted people and to their populist ideology, which frames academia and other elite institutions as untrustworthy and biased. Contact attempts were often met with skepticism, with some people writing tweets warning others against participating in the project, accusing it and the university of targeting people with unaccepted views (see Söderström et al., 2024). Many of those we contacted were anonymous on Twitter, potentially making them more reluctant to participate. Those who participated all showed their names and/or faces, although some showed only their first name. Ultimately, these aspects may have produced a biased sample, excluding more radical or skeptical people. However, the recruitment difficulties also point to the importance of the project, as it taps into a resource that is often difficult to obtain in research, namely the inside perspective of people who do not usually want to talk to academics. The data thus offer valuable insights into narratives that are otherwise accessed by researchers through digital data, such as tweets or forum posts. The people who did participate were not particularly moderate in their opinions but instead expressed radical views on gender and sexuality, immigration and ethnicity, and COVID-related themes such as vaccines, lockdowns, and mask mandates. It is important to note, however, that the degree of radicality varied significantly in the group, with some people being more centrist and some being very far-right. They also varied in their political background: some were active only on Twitter; some had been involved in local politics (campaigning or running for positions in local government) or were passive members of parties; and others had only recently become politically interested.
Some participants described their orientation as populist, far-right, nationalist, America-first conservative, or similar, while others did not identify strongly with any political label. The participants were selected not based on identification or self-ascribed labels but on their participation in what we defined as right-wing populist debates.
Experiences of online activism
Most of our participants considered Twitter important for their political participation, since many of them were limited not only by the COVID-19 pandemic but also by health, finances, or family circumstances. Twitter was also considered the most efficient medium for political participation and news consumption, based on ideas about biased mainstream media, the toxic political climate in real-life discussions, or closeness to politicians and others in power. Social media as central to democratic participation and freedom of speech was highlighted by S4, amongst others: “Today all discussion, all political education and involvement takes place on social media, it’s not like you’ll walk around town and yell out your messages.” However, just like the vaccine-opposed influencers in Moran et al.’s (2022) study, they held conflicting views on Twitter: while depending on it for their activism, they were also critical or even hostile toward the platform.
The most salient source of skepticism toward Twitter was the perceived bias of the platform. The idea that Twitter was a “leftist organization” (U2) was present in several interviews, and most participants entertained narratives of Twitter marginalizing right-wing voices and promoting leftist or liberal ones. This caused fear of being banned or restricted for posting “unaccepted” opinions. For example, U1 described this anticipation saying “every time I post something now I’m afraid that that’s gonna be my last post.” Some were surprised it had not happened yet: as U8 expresses it, “I’m AMAZED Twitter hasn’t blocked me, hasn’t restricted me yet.” Some had personal experiences of temporary bans and anticipated a permanent ban, but their fear was also based on what they had seen or heard happen to others: for example, U1 claimed she could not name many “that survived” Twitter’s targeting of right-wing users. The anticipation of restriction represents the uncertainty described by scholars as central to life on platforms: the power to determine individual users’ future access lies in the hands of platforms, and information about how to avoid this is restricted from the public (Cotter, 2023; Van Schenck, 2023). It also reflects a central property of platform folklore, namely its function as a “guiding framework for interpretation and prediction” (Huang et al., 2022: 2): their anticipations were partly based on their own experience and partly on collective stories. Some had developed strategies that they believed would reduce the risk of being banned, including self-censoring and using lexical variations to avoid terms believed to be targeted by content moderation systems. For example, U4 avoided using the word QAnon and instead wrote “seventeen anonymous patriots,” based on the letter Q being the 17th letter in the English alphabet. 
According to him, “Twitter never figured it out, but everyone knew what I was talking about,” pointing to how these strategies tap into collective knowledge. Others who anticipated being restricted or banned from the platform developed external documentation systems where they saved the content they produced, for example through screenshotting posts. These documentation strategies were also a way to keep track of how political narratives develop over time, similar to the archival strategies of QAnon researchers (Marwick and Partin, 2022).
Apart from anticipating losing one’s profile, participants also thought of restrictions in terms of reach, for example through observing changes in the number of likes, followers, or retweets, similar to findings in previous research (Bucher, 2017; Cotter, 2023; Moran et al., 2022). Because of the opacity of platforms, people had to use personal observations to analyze patterns in content moderation. For example, U4 recollected the slow beginning of his Twitter activities, attributing it to the algorithm:
Turns out that the Twitter algorithm is that nobody sees it until, anything you put up, just on your own, ‘til you have five hundred followers. And then there’s not much until you have eleven hundred. So with four [followers], nobody saw me.
Similarly, S4 says he believes that the platform has limited his reach, which negatively affects his motivation to participate. When using the platform as a tool for political participation, limited reach means you have less possibility to positively affect “the other side” and to become a “political influencer,” something held as a goal by several participants. Some people had explicit strategies to handle limited reach, such as S6’s “follow-back policy,” which aimed to ensure that his account would not become “small and irrelevant,” but mostly it was met with frustration and resignation.
Although anxious anticipation of losing their profiles was common among our participants, it is also important to note how being suspended or restricted was framed not only as a defeat, but also as a source of pride. As part of the populist ideology, being dismissed by the “powerful elite,” in this case represented by big tech companies such as Twitter, can be worn as a badge of honor as it allows the “weaponization of marginalization and ‘censorship’ as a route to achieving message credibility” (Moran et al., 2022: 9). Having one’s content removed was seen as verifying its truth-bearing and subversive nature: a sign of being onto something the “elite” wants to hide. For example, as U2 says about being accused of being a Russian bot:
I consider it a badge of honor to be considered such a threat to Hillary Clinton, that they wanted to shut down my Twitter account! [. . .] I’m forever proud of that! Me and only 199 other people get to say that! [laughs].
Content moderation efforts to keep platforms clean from conspiratorial or populist ideas may thus, paradoxically, contribute to the reinforcement of these ideas within populist communities (Mahl et al., 2023; Van Schenck, 2023).
Emotionality, rationality, and political identity
Considering the degree of skepticism the participants held toward Twitter, and knowing there are several alternative platforms where far-right ideas are allowed to flourish, their determination to stay on Twitter was surprising. While it may be explained by path dependency, the most commonly expressed motivation for remaining on Twitter was related to how they perceived their political identity. Many argued that rationality was a central part of their political identity, and explicitly distinguished themselves from what was referred to by many as “the emotional left.” For example, U7 said, “Everything I do in my life is conducted through reason and logic” while talking about debating with his leftist brother, whose arguments he saw as emotional. U2 similarly said that “I back up what I say with facts, not emotion.” By contrast, anti-racist political involvement was seen as based on “emotions rather than logic” (U3), whereas both S1 and S11 claimed their anti-immigrant sentiments were “not some emotional thing” (S11). Instead, they claimed to value the consumption of various sources of information, forming opinions based on rational assessment of facts rather than emotional convictions. For example, S3 described herself as “super-open to all information, I don’t want to limit information, I’ll decide by myself, in my head, if it seems right or if it’s logical.” Similarly, U6 argued that he would “seek out the truth for myself, synthesize that, produce my own conclusions,” pointing to the construction of non-professional, non-authoritative knowledge typical of platform folklore. Because of this, they could not leave Twitter in favor of alternative far-right platforms: they had to remain where they could access content from political opponents to form thought-through, rational opinions.
Right-wing or conservative identity was thus framed as being open-minded, consuming content and perspectives from political opponents, and being able to discuss with opponents without resorting to the infantilism or name-calling they believed to be prominent within the left. This distinction between emotionality and rationality represents a general pattern in how they relate emotion to politics, discussed at length in another publication (Söderström et al., Unpublished). However, sociologists of emotion generally recognize that rationality and emotionality are not conflicting, or even separate, but that emotion is foundational to social action (Barbalet, 2001; Leser and Spissinger, 2020). Moreover, emotion pervaded the interviews: several people who were critical or resentful of the perceived emotionality of the left themselves cried, raised their voices, expressed fear and worry about different political issues, and showed clear resentment toward individual politicians. For example, U4 cried while talking about wanting to make God proud with his activism; S1 said he felt like throwing up when thinking about biased mainstream media; S10 expressed strong worry about the term “Swede” losing its meaning with mass immigration; and U8 cried as he shared the feeling that politicians no longer listen to the people. Emotions were central to their experiences of politics, but so were their efforts to deflect the label of emotionality.
Twitter as a leftist space
Previous studies on how people experience platforms often assume that users’ interpretations and folklore about how platforms work are based on observations of platform affordances, and are thus specific to particular platforms. For example, Bucher’s (2017) analysis of how people navigate popularity on Facebook frames ideas about the algorithm as limited to the platform itself, meaning people did not generally think of strategies for cultivating online and offline popularity in similar ways. Similarly, Huang and colleagues argue, based on their study of online dating, that scholars must “delineate folk theories of technologically mediated social processes from the ones that people hold about the algorithms that underlie or facilitate those processes” (Huang et al., 2022: 10). In the case of online dating, people had developed separate folk theories about online dating as a social phenomenon, and about algorithms in online dating.
In the study of political communities, however, I argue that such a distinction may be unproductive. As Moran et al. (2022) demonstrate in their study of vaccine-opposed influencers, users’ interpretations of platform affordances must instead be situated within larger political belief systems. The participants of this study, for example, did not attribute restrictions of reach to the algorithm per se, but saw them as part of something bigger.
It’s harder to get followers now, it was a lot easier to get followers years and years ago than it is now, it’s like really strange how much more difficult it is to get followers [. . .] It’s a bit fishy, to say the least. (U8)
Attributing disappointing growth rates to something “fishy” exemplifies how their platform folklore aligned with their political ideology. Many participants believed that right-wing voices were silenced, which they attributed to Twitter being a “leftist organization” (U2). Previous research suggests that the more frequent restriction of right-wing users is a response to their excessive breaching of community guidelines (DeCook and Forestal, 2023), but the participants saw it as a consequence of unfair treatment. Personal experiences were framed as politically relevant:
I see how unfairly I’m treated, you know. [. . .] I see people with nothing following that sometimes have, who do not have the kind of political views I have, meaning conservative views, and they can get thousands and sometimes tens of thousands of retweets and go viral – I NEVER had a tweet that has gone quote unquote viral! Never. Because Twitter won’t allow it! (U7)
The idea that Twitter moderates based on political interests was also linked to the government; U3 argued that “we’re heading to a dark place, where people can’t even speak the truth. Because it’s not approved by your corporate fascist government!” The way big-tech companies moderate their users’ behavior was repeatedly linked to the political interests of the ruling elite, the government, or the left, rather than to the algorithm itself. Thus, the platform folklore constructed within this community relied more heavily on political ideology than on experiences of platform affordances, a dynamic sometimes overlooked in previous studies drawing on the concept of affordances.
A central part of the concept of platform folklore is the relation between ideas and action. In previous research, scholars point to how people’s observations and interpretations of algorithms and platforms shape how they choose to use the platform to reach specific goals, often relating to popularity and reach. However, when platform folklore draws upon populist narratives, the relationship between ideas and strategic action changes:
The algorithm works against us, they don’t work against them. So . . . that doesn’t bubble me, that bubbles them. They don’t see a lot of our thoughts coming through on their timeline, I still get bombarded by their manure on a daily basis.
Ok, so you don’t really have to like actively go find opposing views?
Right, it still comes into my Twitter all the time.
As described above, the main reason for staying on Twitter, and a central part of how they perceived themselves as political beings, was being open-minded and consuming content from their political opponents as a way to form rational opinions. However, their belief that Twitter would automatically push leftist content into their feeds stopped them from developing independent strategies to reach the other side. With few exceptions, no explicit strategies were mentioned despite probing for them in the interviews. For example, when asked how he interacts with political opponents, S8 responded “It’s easier to talk to the right, no doubt,” and proceeded to discuss something else. Similarly, S1 said that he wanted to interact with the left to “see what arguments they present,” but that “there is no point” because they would block him; likewise, S9 argued that “many people on the other side are not open to discussion.” It was in fact more common to hear explicit strategies for staying within their political community, such as participating in private chats and DM groups where coordinated efforts to target leftist content or promote populist content were initiated. Similarly, people developed offline relations with like-minded Twitter peers and organized social and political events. When discussions with opponents did take place, some participants described them as accidental rather than intentional. Even people who believed that “the algorithms make it so that you end up with your own people” (S7) or that Twitter “tries to silo you” (U3) had few or no strategies to counteract this tendency. In this way, beliefs about how the platform works shaped behavior in a way that was, first, based on politically motivated platform folklore and, second, led to outcomes that contradicted their claimed goals.
While they were suspicious and critical of the algorithm, ascribing it leftist agency, they simultaneously left the fate of their access to information in the hands of that same algorithm. As previous literature suggests, platforms have been shown to amplify extremist content, favoring the participants of this study (DeCook and Forestal, 2023): regardless, it was their beliefs about the platform, rather than the platform affordances or algorithms themselves, that guided their behavior and its outcomes. This also strengthens DeCook and Forestal’s (2023) argument that algorithms and platforms produce a form of undemocratic cognition that is both antagonistic and skeptical (toward others) and passive (in terms of seeking out information).
The emotional function of platform folklore
Drawing upon the concept of platform folklore as a meaning system that guides action and aims to relieve feelings of uncertainty, we can understand why people engage in strategies that are counterproductive to their stated goals. Of course, these stated goals could be mere positionings, a way to legitimize one’s opinions by ascribing them rationality. It could also be about reinforcing identity, indicated by the fact that in most cases, people described themselves as rational without being able to specify how they engaged in rational action.
The emotional role of folklore is described in previous literature as relieving feelings of uncertainty by offering explanations for otherwise inexplicable experiences. But when populist ideas of ruling elites, conspiracy, and attacks on democratic rights come into play, the character of the emotional relief offered by platform folklore is affected. Many participants harbored difficult emotions related to politics: for example, some mentioned becoming depressed after the January 6th storming of the Capitol, as it revealed what they saw as corruption: “I knew there was political corruption in this country, I did NOT know how deep and how [inaudible] it was” (U4). The election fraud conspiracy was also a source of hopelessness, anger, or resentment, even for Swedish participants. Some believed that they, as right-wing individuals, were comparable to Jews in 1930s Germany.
Reading the history of the Nazis taking over, I always think about like, why didn’t the Jews leave, why didn’t they get out of there, couldn’t they see what was happening? And . . . and then I see how things have gone downhill so fast and not just in the US but other Western countries, and [. . .] I think I’m getting the feeling of what it was like. (U8)
In general, people held a belief about themselves as politically marginalized and victimized by a political elite that ruled not only governments but also platforms. The emotional function of platform folklore for this political community thus seemed to have an agitating effect, not managing but amplifying emotions of uncertainty, anger, and frustration, allowing them to linger, similar to how Yogarajah (2022) argues that cryptocurrency communities reinforce uncertainty as a form of emotion management. It became a way for these activists to make sense of platforms as placed within a larger system of beliefs about the world. The participants linked the unfair treatment they felt exposed to with the same elite they believed was behind the import of a new electorate as part of “the Great Replacement” (U3), the pushing of trans ideology as a form of birth control, or the perpetration of a climate hoax. This linking of events, central to conspiratorial thinking (Ekman, 2022; Mahl et al., 2023), may offer a sense of coherence and simplicity that relieves feelings of uncertainty. Linking content moderation experiences with larger political, conspiratorial ideas about a powerful group ruling the world also allows people to cling to their identities as the martyrs of contemporary online politics (Moran et al., 2022).
Conclusion
This paper analyzes how Swedish and American far-right populist activists interpret and navigate platforms and platform affordances, using the concept of platform folklore. It aimed to answer the research question: What is the role of platform folklore in reinforcing or challenging political ideology and political identity in populist Twitter communities? In previous literature, platform folklore is framed as particular to specific platforms and as based on observations of, for example, content moderation or algorithmic recommendation systems. Users are then assumed to act, guided by their platform folklore, to reach specific goals. The emotional function of platform folklore is linked to the relief of uncertainty through the offering of answers, regardless of what these answers are. My analysis, however, points to the importance of understanding folklore as formed with, and reinforcing, both political ideology and identity. The intensity of emotional relief platform folklore can offer does not simply reside in the fact that it makes experiences comprehensible, but in how they are made comprehensible. In this case, the participants’ experiences of Twitter are interpreted in dialogue with their overall political ideology, contributing to a coherent story about the world. Of course, learning that one’s experiences on Twitter are governed by political interests wanting to silence you is not particularly emotionally soothing – but it might become so when placed within a larger narrative. Additionally, platform folklore relates to political identity.
Rationality was framed as a central part of right-wing identity, and this included engaging with, and critically evaluating, standpoints from one’s political opponents – this was the main reason people stayed on Twitter, despite the platform’s “bias.” However, the idea of Twitter inherently disfavoring right-wing perspectives and promoting leftist content meant people did not develop strategies to engage with political opponents. Instead, they relied on the algorithm to do it for them. Thus, by understanding Twitter through platform folklore deeply intertwined with their political ideology, positioning themselves as victims of platform governance, their perception of themselves as rational and open-minded could be reinforced without them actively having to seek out political opponents. The only explicit content-consumption strategies mentioned were ones that reinforced their position within their communities, not outside of them. The analysis shows how platform folklore, political identity, and ideology come together to manage emotions, but perhaps not in the ways one would expect.
This study draws upon data that reflects beliefs, not how platforms work. But as some scholars argue, beliefs are central to guiding action on platforms, and they have real implications (Bucher, 2017; Huang et al., 2022). In the case of populism, being restricted can grant legitimacy to the message and messenger, strengthening their populist expertise (Marwick and Partin, 2022), as populism tends to value alternative sources of knowledge (Mudde, 2004). Thus, the populist worldview is not necessarily challenged by restricting users’ expression of such opinions, but may instead be reinforced. Restricting users who break community guidelines for political speech may also have the unintended effect of pushing users toward less regulated and researched platforms (Mahl et al., 2023).
The results of this study are particular to the right-wing populist community that has been studied, and further research is needed to understand how political activists of other ideological orientations interpret and navigate platforms, and what the relation between their political beliefs and their experiences of life on platforms looks like. A more thorough comparison between the two countries would also be an important contribution. The symbolic role of Twitter may also have changed since this study was carried out, with Elon Musk promising radical free-speech policies perhaps altering these activists’ experiences of Twitter as a leftist space.
Acknowledgements
The author wants to thank the anonymous reviewers whose generous comments made this manuscript significantly better. Also, the author wants to thank Johanna Söderström and Markus Holdo for their invitation to work on their project The Politics of Resentment in Sweden and the United States, as well as the Cultural Matters Group at the Department of Sociology, Uppsala University for helpful feedback.
Correction (November 2024):
Article updated to include the two references in the reference section and minor textual changes.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This project was generously supported by FORTE, project number 2018-00583.
Ethics review
The Swedish Ethical Review Authority reviewed and approved the project (nr 2019-03899).
