Abstract
How and why have the social limits of racist speech become obscure and ‘outdated’ for the YouTube star PewDiePie and his over 100 million fans? How have the policies of YouTube affected the general understanding of the limits of racist discourse in the digital media context? In this article, I argue that the case of PewDiePie shows how YouTube exercises a neoliberalist understanding of freedom of speech. In my analysis, I contextualise PewDiePie’s own comments and YouTube’s publications within the history of Internet culture and trace the development of YouTube into a neoliberalist sphere. I illustrate how neoliberal ideology is now implemented on three levels on YouTube: through creating an illusion of intimacy between a creator and his/her fans, through the promise of equal opportunity on YouTube and through a neoliberalist interpretation of the marketplace-of-ideas principle. The analysis reveals how YouTube’s policies and practices, as ideological choices, contribute to the normalisation of racism on social media.
Introduction
In February 2017, the Wall Street Journal (WSJ) published a news story about PewDiePie, the most subscribed YouTuber, claiming that he was supporting anti-Semitic ideology in his videos. The most infamous incident involved PewDiePie testing the Fiverr service: on 11 January 2017, he paid five dollars to get Indian freelance actors to laugh and hold up a sign saying ‘Death to all Jews’. As a result of the WSJ’s story on 14 February 2017, PewDiePie lost his financial agreements with his creator network, Maker Studios, and the Google Preferred advertising programme. PewDiePie (aka Felix Kjellberg, a then 28-year-old Swede) was quick to deny that his videos were racist or offensive. Instead, he blamed the WSJ for not understanding the ironic nature of his videos, saying that they were taking his jokes out of context. He also claimed that the WSJ had a limited understanding of Internet culture and that they envied him for his popularity.
PewDiePie’s case underlines the importance of understanding the political values of YouTube and their entanglement with ‘platformed racism’ (Matamoros-Fernandez, 2017). It shows how racist discourse in the digital media context is often veiled by irony, and therefore accepted by the platforms, which makes it difficult to counter racism. Even though PewDiePie claims that he has no connection to the alt-right movement, he has been celebrated among alt-right supporters since the controversy. As Paul Gilroy (2019: 3) has argued, fascism has been successfully rebranded, not as evil and destructive, but as ‘daring, transgressive, comic, ironic and futuristic’, and PewDiePie’s content seems to fit that purpose well. The alt-right movement has been very skilful in exploiting both the networking capacities of online media and the reluctance of social media platforms to set restrictions on ‘free speech’ (Jakubowicz, 2019; Lewis, 2018; Matamoros-Fernandez, 2017; Topinka, 2018).
In this article, I ask how and why the social limits of racist speech have become obscure and ‘outdated’ for a YouTube star and his over 100 million fans. How have the policies of YouTube as a media company affected the general understanding of the limits of racist discourse in the digital media context? This discussion is situated mainly in the American context; although PewDiePie is originally Swedish, he is now a global star who works on an American-based platform, and he is part of an Internet culture that has been historically shaped by American political thought (see Marwick, 2013; Streeter, 2011).
This article argues that YouTube exercises a neoliberalist understanding of freedom of speech. Through analysing PewDiePie’s videos together with YouTube’s promotional and user guidance material, I will demonstrate how the libertarian heritage of Internet culture affects how media companies like YouTube understand their role as a ‘platform’ or ‘technology company’ and how they, as a company, formulate their policies towards creators, advertisers and lawmakers. In my analysis, I place PewDiePie’s own comments and YouTube’s publications into the historically formed, neoliberal political context of Internet culture. My analysis reveals how YouTube’s policies and practices as ideological choices contribute to the normalisation of racism on social media. I introduce the development of YouTube into a neoliberalist sphere and contextualise it as a part of a specific Internet culture. Then, I demonstrate how neoliberal ideology is implemented on three levels on YouTube: through creating an illusion of intimacy between a creator and his/her fans, through the promise of equal opportunity and through a neoliberalist interpretation of the marketplace-of-ideas principle.
YouTube as neoliberalist sphere
When YouTube was launched in 2005, the company’s self-definition emphasised the user’s ability to share self-made videos with their personal network (Burgess and Green, 2009: 1–4). In line with that, many journalists and some early media researchers celebrated YouTube as the epitome of the new participatory and DIY culture, along with other social media applications that promised to democratise digital technology – and consequently, the whole society (Marwick, 2013: 21–25). Yet, the operational logic of YouTube, which was bought by Google in October 2006, has been modified several times over the years, directing it towards increasing commerciality (Lobato, 2016; Snickars and Vonderau, 2009). In 2007, YouTube launched their Partner Program, which gave creators an opportunity to monetise their video content through Google AdSense, splitting ad revenue roughly in half (55% to the creator, 45% to YouTube). In 2008, the company launched the Click-to-Buy e-commerce function, the possibility to promote videos and pre-roll ads, and the YouTube Insight analytics tool, which gave creators the possibility of following their users’ data. In 2009, they started the Individual Video Partnership, in which the creator earns from clicks and views of ads connected to their videos, and the Video Targeting tool, through which advertisers can target their ads through keywords, viewer demographics, interest-based categories and so on, and even overlay ads on individual videos (YouTube5Year/History of Monetization at YouTube).
Merchandising, channel subscription payments and different kinds of advertising options, including product placement and sponsored content, have been introduced on YouTube videos, and step by step, every surface of YouTube, from comments to pop-ups, has been opened to business. The growing set of monetising measures has managed to entice traditional media companies to upload professionally produced content onto YouTube and has made the site more attractive for advertisers (Gerhards, 2017; Lobato, 2016: 349; Raun, 2018).
All in all, the technological affordances of YouTube have been refined to enable effective profit-making for its owners (Postigo, 2016). This has also had a strong impact on the content of videos. In 2011, YouTube acquired Next New Networks and their Next New Creator Program, which fostered upcoming YouTubers in turning themselves from amateurs into professionals by providing them with the know-how to grow their audiences and income (Cohen, 2011; Cunningham et al., 2016). The Next New Creator Program was then transformed into YouTube’s Creator Academy (Lorenz, 2019). The Creator Academy (and the YouTube Creators channel from 2018 onwards) provides YouTubers with free online lessons on how to grow and sustain a loyal audience, how to boost their channel’s search and discovery, how to use Google Analytics, how to promote videos and how to make money (Learn with the YouTube Creator Academy).
YouTube encourages creators to compete for attention and rewards them economically for promoting themselves. As a result, self-commodification and self-branding have become an integral part of working as a creator. The self is an enterprise; the self is presented, branded and developed through social media applications and through strategically creating an audience-targeted identity (Marwick, 2013: 191–194). Self-commodification and self-branding are not only related to or encouraged by YouTube; they have become increasingly dominant on all social media platforms as the Internet has turned more explicitly into a marketplace marked by online advertising and e-commerce (Senft, 2013: 348). Currently, self-branding has become platform-specific as producers try to balance their brand according to their imaginations of platform affordances, audiences and their own self-concept, and they continuously have to rework their digital ‘self’ (Scolere et al., 2018).
On YouTube, marketing and human interaction are fused together as creators engage with their fans on their channels and on other social media platforms, both to create a strong social bond and ‘community’ with them and to gain visibility so that they can lure new followers and advertisers. The fusion of promotion and social relations is also evident in the cooperation between YouTube creators. While creators such as PewDiePie and Jacksepticeye present themselves as close friends, recommend each other and appear in each other’s videos, the creators, their sponsors and advertisers, and YouTube all benefit economically when these ‘friendly’ actions increase the number of followers. YouTube helps creators in self-commodification through their tutorial materials and by providing them with analytics regarding the viewings of their videos.
Most of the popular YouTubers now have a contract with multichannel networks (MCNs), like PewDiePie had with Maker Studios. MCNs offer ‘advertising, production, audience, talent and brand management’ and promise to boost advertising revenue through brand integrations and multiplatform ad sales with the help of data analysis (Cunningham et al., 2016: 10).
Algorithms, too, have their place in the commercialisation of YouTube. The platform has a machine-learning-based algorithmic system that sorts, categorises and recommends video content based on scanned images, keywords, titles, descriptions, watermarks and other metadata (Kumar, 2019). YouTube’s categorisation, search and recommendation algorithms favour certain kinds of keywords and titles, which suggests that the video content will easily match with certain kinds of consumer groups, profiled, for example, by gender, age and location. Since creators are able to follow their user data and know that algorithmic processes have a strong impact on visibility and possible popularity, and therefore on advertising revenues, creators learn to self-optimise and to adapt their keywords, titles and descriptions in such a way that their videos will be favoured by YouTube’s algorithms (Bishop, 2018). Even though YouTube has announced that it favours watch time in its recommendation algorithm to increase the quality of videos, an empirical study (Rieder et al., 2018) suggests that YouTube’s search algorithm – which may help in gaining new followers – often emphasises controversial videos, as they can quickly gain attention through actively commenting audiences. In addition, it seems that channels that constantly upload new videos are favoured. This obviously contributes to the pressure that creators like PewDiePie experience while trying to continuously renew their brand and publish content that attracts a lot of attention among users. Furthermore, there is a strong, evidence-based suspicion that search and recommendation algorithms grant far more visibility to monetised videos than to channels and videos without advertisers and sponsors, even though YouTube officially denies this link (Kumar, 2019).
Clearly, YouTube has come a long way from the ideals of DIY and participatory culture. However, Internet culture has always embodied the tension between beliefs in open sharing and market-led capitalism. In terms of ideology, this tension has its roots in core liberalist values: protecting property and civil liberties, promoting individual autonomy and tolerance, securing a free press, ruling through limited government and universal law and preserving a commitment to equal opportunity and meritocracy (Coleman and Golub, 2008).
Historically speaking, the tension between the open-sharing Web culture and the Internet’s market-led capitalism dates back to the dawn of Internet culture in the 1980s and to the age of microcomputers and hackers. For the hacker culture of the 1980s, it was essential that people have unlimited access to computers, that information be free and that authorities not be trusted. Yet, at the very same time, the mainstream press hailed Steve Jobs and Steve Wozniak as heroic, independent entrepreneurs, and the microcomputer market was cited as proof of free-market capitalism optimally serving people’s needs by giving them the opportunity to buy themselves unlimited access to ‘information’. Thus, the way Internet culture developed was closely intertwined with the rise of an ‘advanced’ form of liberalism, that is, neoliberalism (Streeter, 2011: 69–92; Turner, 2006).
Later, the ‘Web 2.0 discourse’ inherited this fusion of the hacker and open-source movements with the free-market ideology of entrepreneurial capitalism. According to this line of thinking, a free market will best guarantee free choice, creativity is dominant and successful entrepreneurs are treated as mythic heroes (Marwick, 2013: 21–29). Google has acted as a prime example of this ideological fusion and contradiction. While the company’s founders have proclaimed philanthropic ideals and anti-authoritarianism, embraced nerd culture and the free flow of information and touted technology as a solution to the world’s problems, the company has, at the same time, developed the most advanced online advertising system, in which personal user data are effectively sold to advertisers, has been willing to obey China’s censorship rules to enter its market and has secured its ruling position in the online market through purchasing key competitors, such as YouTube (Hillis et al., 2013: 30–52). In Web 2.0 culture, capitalism is primarily embraced as an agent of future social change and not questioned. Already in 1984, Steven Levy’s book Hackers: Heroes of the Computer Revolution declared in a meritocratic spirit that hacking skills matter more than ‘bogus’ real-world criteria like race, gender or education. Web 2.0 culture has not been able or willing to recognise the hindrances of this meritocracy and the historically formed social and political hierarchies that neoliberalist free-market capitalism only reinforces (Coleman and Golub, 2008; Marwick, 2013: 21–29).
Neoliberalism is a wide and contested concept that can be understood in three ways: as an ideology, as a mode of regulation and as a market-oriented governmentality (Byrne, 2017). The government’s role is not to keep a safe distance from markets, as in classical liberalism, but to ensure through regulation that competitive mechanisms rule in every aspect of society, including practices of the self (Byrne, 2017; Rose, 1999). Thus, neoliberalist governmentality encourages people to regulate their own behaviour along business ideals, making self-promotion and self-commodification almost indispensable practices (Marwick, 2013: 12).
Following the idea of governmentality, this article sees privatisation, commodification, self-branding and constant competition between individuals as essential symptoms of understanding the market as the governing principle of society. In what follows, I demonstrate how YouTube’s user guidance and tutorial materials spur creators, in the neoliberalist spirit, to promote themselves to their own targeted audiences and to understand their content not as public communication but as a private commodity that is exchanged only between the creator and his/her fans. Furthermore, I explain how the principles of meritocracy and equal opportunity, which Internet culture has embraced since the 1980s, are now mostly interpreted in a neoliberalist way. In this kind of thinking, everyone, in principle, is able to stand up for himself/herself and compete as an individual on equal terms, which makes political correctness unnecessary and even harmful. YouTube backs up this line of thought through their Community Guidelines, and I show how PewDiePie, either knowingly or unknowingly, appeals to this neoliberalist interpretation of meritocracy and equal opportunity in his videos. I will demonstrate how YouTube’s Community Guidelines and the company’s actual practices related to PewDiePie’s infamous video show that the company is exercising a neoliberalist interpretation of the marketplace-of-ideas principle. That is, when media content is understood as a commodity that competes for popularity in the marketplace, markets may determine the line of acceptable content, even when the content is racist.
Method and data
The empirical data of this study consist of two kinds of material. First, I have analysed five videos in which PewDiePie comments on the accusations of racism. The first one, ‘I’m Racist’ (PewDiePie, 2016b), is related to his video published in December 2016 in which he said that YouTube would rather see ‘someone else on the top’ than him because he is White (‘Deleting My Channel at 50 Million’), a claim which gained public attention in the British media (e.g. Griffin, 2016). In January 2017, PewDiePie released the infamous video in which he commissioned two Indian actors to hold up a sign that said ‘Death to all Jews’. This video is no longer available on PewDiePie’s own channel but has been reuploaded many times by other users. In this article, I have analysed four videos that comment on the public reaction to that video, as those videos are intriguing as part of the discussion on the controversy. These include ‘In my defense’ (PewDiePie, 2017c), ‘I’M BANNED’ (PewDiePie, 2017b), ‘My Response’ (PewDiePie, 2017d) and ‘How About That…’ (PewDiePie, 2017a). I worked from transcriptions of all five videos.
Second, my research material includes different kinds of publicly available user guidance and promotional materials from YouTube. Thus, I am analysing YouTube’s presentation of themselves and their mission’s four essential freedoms (Mission). Essential for this study are also the general Community Guidelines (CG), and especially, their section on Hate Speech (HS). In addition, I scrutinise the YouTube Partner Program Overview (PP), Advertiser-friendly Content Guidelines (AF), and YouTube Creators Academy Main Site (CA). These texts are key for understanding how the company wants to represent their actions and policies towards both advertisers and creators.
The five videos and the promotional materials are analysed in relation to each other using critical discourse analysis. The main principle is to attend to the facets of discourse, cognition and society in the analysis process. Discourse, here, is understood in a broad sense as ‘communicative events’, including conversation, associated gestures and facial expressions, written text, images and other dimensions of signification. Cognition refers to beliefs, goals, evaluations and emotions on both a personal and a social level. Society includes local microstructures as well as global, societal and political structures, including such things as groups, group relations, organisations and political systems (Van Dijk, 2009). I analyse the discourses of two main materials: PewDiePie’s talk (and to some extent also his gestures and other acts) in his videos and the written discourse of YouTube’s promotional and user guidance material. These discourses are understood to represent certain social beliefs and ideological evaluations. In my analysis, I show how these beliefs, such as the belief in the Internet as an ultimately democratic place, are historically constructed and were produced in certain American political and social situations at the time Western Internet culture was taking shape. Finally, I link the discursive and cognitive aspects of my analysis to the politically and socially formed structures of social media, that is, the legislative and economic factors that YouTube as a company utilises. In the following sections, I demonstrate how neoliberalist ideology is implemented on three levels: through creating an illusion of intimacy between the creator and his/her fans, as a promise of equal opportunity that abolishes the need for political correctness and through a neoliberalist interpretation of the marketplace-of-ideas principle.
Behind the veil of intimacy between a YouTuber and his/her target audience: Racist jokes just between 50 million of us
A number of researchers have demonstrated how intimacy has become a central mode of expression on YouTube. These analyses show how, in lifestyle and beauty vlogging, intimacy is an integral feature both on the level of content (issues of the body, everyday happenings, etc.) and on the level of form, as vloggers use self-representational techniques that create an illusion of a mutual and close relationship between the vlogger and his/her followers (Abidin, 2015; Hou, 2018; Jerslev, 2016; Raun, 2018). However, I claim that intimacy is not only a matter of content or self-representation techniques; instead, it is a foundational feature of YouTube as a platform.
In their ‘academy’ for creators, YouTube stresses an understanding that creators possess their own ‘community’ of loyal viewers and strongly presumes that the creator’s aim is to ‘strengthen [their] relationship with [their] audience’. In lesson 1, creators learn that ‘when creators take the time to interact authentically with their loyal community, it can encourage audience participation and ultimately result in a larger fanbase’ and that they should be ‘savvy and responsible leaders’ of their community (CA). Thus, the discourse of the ‘educational’ academy pages for creators creates an illusion of a closed, intimate community between the individual creator and his/her fans. The company guides creators to understand their content not as public and open for general discussion but as something that is semi-private and shared only between the creator and his/her followers. This applies not only to lifestyle vloggers whose content is related to intimate topics but to all kinds of channels.
When PewDiePie was accused of racism, one of his main counterarguments was that his content was only meant for his fans. He claimed that all his fans understood that his content was meant as a joke, and he appeared to be offended and surprised that newspapers were interested in it. When the Daily Stormer, the well-known American neo-Nazi and White supremacist website, announced itself as ‘the world’s #1 PewDiePie fansite’ right after the controversy, PewDiePie responded: I have a big problem with the whole thing in general because this whole thing wasn’t a problem until it was brought up cause my audience understands they were jokes. They weren’t actually doing anyone any harm. But the way, the Wall Street Journal defended this was that Neo-Nazi websites were supporting me. Calling it the number one PewDiePie fansite, which obviously, I had no idea about. Believe it or not, I don’t go to Neo-Nazi websites [gives a laugh]. (‘How About That…’; PewDiePie, 2017a)
The logic of action of YouTube, like that of other social media platforms, differs dramatically from that of journalism. YouTube’s tutorial material urges creators to believe that published content will not be judged according to any moral criteria beyond the fans’ acceptance. Because of that, creators may have difficulty understanding that their publicly available videos could also be watched and evaluated by criteria other than channel statistics and fan comments. In fact, in one of his videos related to the controversy, PewDiePie underlined the fact that among his fans, the reaction to his video was mainly positive. Furthermore, he seemed to justify the release of the video by a number of fan comments that praised the video as being ‘very funny’: The response in the video, initially, was really great! I knew people would be offended, and I knew people wouldn’t like it. But I also knew people would see the joke in it, and would find it funny. And, honestly, the comments when the video came out were like ‘this is the funniest video I’ve ever seen’. And I’m not just strawpicking…people loved the video, they thought it was very funny. (‘I’M BANNED’; PewDiePie, 2017b)
Making racist jokes among an intimate audience obviously has historical roots that predate YouTube culture. As Jane Hill (2009: 109) explains: ‘Light talking and joking are prototypically private, associated with spaces of intimacy where interpersonal solidarity is more important than strict adherence to truth’. Hill (2009: 96), following Joe R Feagin and Hernán Vera (Feagin, 2006: 27), uses the term ‘social alexithymia’ to describe the White cultural pattern in which the feelings of the racially insulted are totally bypassed in judging whether or not racism was involved. This cultural model has provided White people with a licence to use racist language ‘among friends’ with a shared understanding that ‘in reality’ they are not racists. This understanding rests on personalist ideology, which holds that a person’s intentions and ‘true’ beliefs are what matter, not the words used, which may sometimes ‘slip’. According to Hill (2009), under personalist ideology, censuring ‘light’ offensive talk is considered an attack not on a person’s interests but on his/her character or judgement.
In PewDiePie’s case, he, too, appealed to the fact that he was telling jokes for his fans, an audience that ‘would understand’ and would find them funny, and he was offended by the accusations of racism. Whatever one thinks of the humorousness and possibly ironic nature of PewDiePie’s pranks, it is quite clear that the jokes PewDiePie chose to showcase the ‘craziness of the web’ underline that his presumed audience consists of young, White males who share his sense of humour and the power relations reflected in it. By using low-paid Indian actors to make an ‘ironic’ anti-Semitic video, he creates a cultural boundary between himself and his fans on the one side and the Indian actors and the Jews on the other – between those who are laughed at and those who can laugh at their ‘craziness’ and ridiculousness from the outside and above. While the phrase ‘Death to all Jews’ links the prank with the long line of online anti-Semitic jokes that find amusement in the Holocaust (Weaver, 2013), the fact that the Indian actors are fooled into doing it for five dollars underlines PewDiePie’s and his fans’ superior cultural and economic position. Overall, PewDiePie’s pranks and jokes about the Holocaust and Nazism, Leslie Jones and female freelancers create and sustain a discursive community (Hutcheon, 1994: 45–49) in which the massacre of Jewish people, insinuations of Black people as an inferior race and attempts to ridicule young women are primarily material for entertainment. Characteristically for racist online communities (Nikunen, 2015), irony works here as a guiding sensibility of discourse and, therefore, as an effective tool that binds the community together. Sadly, judging from PewDiePie’s rising popularity, his new style was well received among his target audience.
The example of PewDiePie demonstrates how YouTube’s neoliberalist policies and practices lead to a situation in which the limits of acceptable content eventually become a private matter, a question of an individual’s capacity for ‘understanding satire’ and his/her willingness to be part of a (default White) ‘community’, rather than an area for political discussion or shared social responsibility. Racism is successfully commodified and ‘privatised’ to an audience that works at the same time as a thoroughly analysed consumer group and as an ‘intimate’ community of like-minded people around the creator.
Political correctness, equality of opportunity and the right to ‘ironic racism’ in the name of free speech
In ‘In my defense’ (PewDiePie, 2017c), PewDiePie said: I think being a political correctness police is essentially just going to fuck us all over. And this year 2017, I decided, I’m taking a stance back, I’m gonna be true to myself, I wanna do the sense of humour that I enjoy. And this is the price for it. I’m fine with that. This video really has no point. And you guys…And I’m kind of preaching to the choir right now. I just wanna say thanks for seeing through the bullshit. I see the comments, I see that you guys see the bullshit and criticize, just like I’m doing right now. And I think that’s…I think that’s pretty cool. Thank you, I appreciate that [blows a kiss].
The Internet and discussions concerning it have acted as an important venue for the political battle over freedom of speech and political correctness. As an essential part of Internet culture, hackers have based their ethics on the liberalist values of freedom, free speech, privacy, the individual and meritocracy. Furthermore, hacker culture has entailed the idea that technology is meant to be played with and should not be restricted in any way. In practice, this has also meant transgressive pranks (Coleman and Golub, 2008; Phillips, 2015: 121–122). Hacker culture’s ideas of unrestricted individual expression and some patterns of its activities have also served as seeds for trolling. In the same vein as hackers, trolls consider any form of online censorship, including moderation, a violation of freedom of speech. For trolls, offending people verbally to win a debate, to gain ‘lulz’ and to earn admiration from the community is something that should have no moral or legal rules whatsoever (Phillips, 2015: 27–36). Thus, trolls hate ‘political correctness’ and see it as an attempt to restrict their right to say whatever they want, offensive or not (Higgin, 2013).
Opposition to political correctness has neoliberal roots in the sense that it entails a belief in the fairness of the market and competition. In neoliberalist thinking, the state should only ensure that there are no legal obstacles to fair competition; after that, whether an individual succeeds in competition depends only on his/her merits and capability. Thus, according to neoliberalist ideology, it would not be fair to take into account the historically formed structural hierarchies that affect an individual’s success, since that would ruin ‘equality of opportunity’ as fair competition among individuals (Amable, 2011). In this discourse, racist humour is acceptable as long as similar jokes are targeted at ‘everyone’. Racist humour is presented as an expression of free speech and liberalism, while those who do not accept it are pronounced ‘illiberal’. Equality of opportunity is claimed to make political correctness unnecessary, as everyone can speak for himself/herself. In this kind of thinking, taking offence only reflects the victim’s inability to take a joke (Perez, 2017: 964–965; Phillips, 2015: 24–25).
Online gaming culture, especially, is known for severe trolling. Many male players consider racialised and gendered insults and exclusion a ‘normal’ part of playing that one has to endure as part of the game (Ortiz, 2019; Salter, 2018). Offensive humour is used as a means to keep out those who would not ‘fit’ spaces that have formerly been coded as ‘white’, ‘straight’ and ‘masculine’. This behaviour is part of a larger movement to preserve the Internet as a place ‘free of politics’, relating to an illusory freedom from past violence, market borders or cultural politics. In practice, this means that certain online spaces have been preserved free from challenges to White, masculine, heterosexual hegemony (Higgin, 2013). ‘Gamergate’ was an escalation of this kind of exclusion and harassment, and even though it made the seriousness of the problem better known outside the gaming community, it also gave force to the reactionary core narrative of a culture war in which ‘progressives’ are trying to destroy Internet culture as an area of freedom and White ‘geek’ masculinity. Unfortunately, Gamergate helped the alt-right movement to increase its popularity when some of the alt-right’s main figures, such as Milo Yiannopoulos, encouraged harassers through Twitter and YouTube (Bezio, 2018; Salter, 2018).
YouTube’s Community Guidelines forbid hate speech but state that the company will ‘try to defend your right to express unpopular points of view’ (CG). In their Advertiser-friendly Content Guidelines (AF), YouTube gives more specific instructions about the use of satire. When writing about ‘hateful content’, the guidelines state that ‘content that is satire or comedy may be exempt; however, simply stating your comedic intent is not sufficient and that content may still not be suitable for advertising’. However, it is also promised that their ‘intention is to treat each video based on context, including content that is clearly comedic, educational, or satirical in nature’ (AF).
In his comments, PewDiePie appeals strongly to YouTube’s instructions on the importance of context: What I just think, and I believe strongly, is that, it’s 2017 now. We’re gonna have to start separating what is a joke and what is actually problematic…or what actually fulfils the label of whatever you’re calling it. Is a joke actually pure racism? Is this something that would actually be called homophobic or anti-Semitic and all these things? Context fucking matters. (‘In my defense’; PewDiePie, 2017c)
As Matamoros-Fernandez (2017) has previously suggested, platforms contribute to racist dynamics through their affordances, policies, algorithms and corporate decisions, thus constituting a new kind of ‘platformed racism’. Unclear moderation rules, platforms’ protection of humour and their recommendation systems enable the vast circulation of racist material. As she states (p. 933), while platforms perform a rhetoric of neutrality, in practice their policies sustain the default whiteness of the Internet. Although PewDiePie’s personal responsibility should not be underestimated, I suggest that his content should be seen as both a cause and a symptom of the normalisation of racism in the online environment, which platforms’ policies have made possible and even advanced. This has political consequences, even though PewDiePie and YouTube may insist that he is only creating entertainment.
YouTube as the marketplace-of-ideas
When presenting the key aims of the company (YouTube’s Mission), YouTube introduces the following slogan: ‘Our mission is to give everyone a voice and show them the world’. This mission is based on ‘four essential freedoms that define who we are’. These four freedoms are freedom of expression, freedom of information, freedom of opportunity and freedom to belong. According to YouTube, freedom of expression means that ‘we believe people should be able to speak freely, share opinions, foster open dialogue, and that creative freedom leads to new voices, formats and possibilities’. As for freedom of information, YouTube says: ‘We believe everyone should have easy, open access to information and that video is a powerful force for education, building understanding, and documenting world events, big and small’. About the freedom to belong, they say: ‘We believe everyone should be able to find communities of support, break down barriers, transcend borders and come together around shared interests and passions’. And finally, freedom of opportunity is explained thus: ‘We believe everyone should have a chance to be discovered, build a business and succeed on their own terms, and that people – not gatekeepers – decide what’s popular’.
The four freedoms that constitute YouTube’s policy emphasise the individual’s right to state his/her opinion and create content without any borders or ‘gatekeepers’. YouTube’s manifesto of the four freedoms is clearly based on the classical libertarian understanding that freedom of speech should not be restricted in any way by the government (Slagle, 2009). In addition, the US Telecommunications Law guarantees social media platforms the right to operate without any obligation to restrict in advance the content that users publish (Gillespie, 2017).
The fact that American legislation has, in practice, left platforms as a sphere for unregulated content production has been an implementation of the neoliberal theory according to which speech rights and opportunities should be determined by ‘neutral’ market mechanisms rather than by government policymakers (see Stein, 2006: 106–107). Additionally, the marketplace-of-ideas approach to free speech, in both its classical libertarian and neoliberal modes, has been dominant within Internet culture since the 1980s (Marwick, 2013: 11–15; Streeter, 2011: 69–82). While the classical libertarian marketplace-of-ideas theory emphasises citizen knowledge, informed decision-making and effective self-governance, the neoliberalist model emphasises consumer satisfaction, efficiency and competition. This model focuses on the effective exchange of goods and services with no acknowledgement of a broader democratic function. Thus, the marketplace-of-ideas is a market in which information and entertainment are sold as commodities under the assumption that any content will and can be supplied as long as there are enough consumers to make its provision profitable (Napoli, 1999).
However, as Tarleton Gillespie (2017) has pointed out, under the US Telecommunications Law, platforms as enterprises have a legal right to monitor their content, and they have used this right in numerous ways. For example, YouTube’s Community Guidelines steer creators’ publishing. The need for guidelines is rationalised for creators and users by appealing to everyone’s enjoyment: ‘Millions of users respect that trust and we trust you to be responsible too. Following the guidelines below helps to keep YouTube fun and enjoyable for everyone’ (HS).
The Community Guidelines document lists several kinds of content that are ‘not ok’. These are nudity or sexual content, harmful or dangerous content, hateful content, violent or graphic content, harassment or cyberbullying, spam, misleading data and scams, threats, copyright infringement, infringement of personal privacy, impersonation and child endangerment (CG). When anyone creates a YouTube account, he/she must accept the ‘terms of service’ that oblige him/her to comply with the guidelines.
In addition to the general rules, YouTube has published even stricter instructions for the members of its Partner Program. Joining the program is a precondition for a creator to sell their content for advertising. In December 2017, YouTube tightened the eligibility requirements for accessing the program, and at the moment, joining is possible when a creator’s videos have gained 4000 watch hours in the previous 12 months and the channel has at least 1000 subscribers (PP).
If creators want to monetise their videos, they are strongly advised to follow the new Advertising-friendly Content Guidelines (AF, 2018), which are stricter than the general Community Guidelines. The instructions in the Advertising-friendly Content Guidelines make it clear that videos that contain controversial issues or sensitive events, drugs or other dangerous substances, harmful or dangerous acts, inappropriate language, inappropriate use of family entertainment characters, or incendiary, demeaning, sexually suggestive, violent or hateful content are ‘not suitable for most advertisers’. Serious or repeated violations of these rules may lead to ads being disabled on the channel or to suspension from the Partner Program. Hateful content means, for example, content that promotes discrimination against, or humiliates, an individual or group of people based on race (YouTube’s Advertiser-Friendly Content Guidelines (AF) (2018); YouTube Creators’ Academy, Lesson: Making Advertiser Friendly Content). Although YouTube, like other social media platform enterprises, has positioned itself as the flagship of unlimited freedom of speech, it increasingly monitors content for fear of reactions from advertisers (Gillespie, 2017). In 2017, YouTube hired 10,000 additional human moderators, but first and foremost, YouTube is developing its machine-learning moderation system, which automatically either demonetises videos or flags them so that human moderators can find them and remove them if they violate the Community Guidelines. Channels are shut down by moderators after three videos have been removed. However, in practice, the moderation policy has been very inconsistent: some videos with hateful content have been removed, while others with similar content have not (Levin, 2017; Matsakis, 2018).
Even the libertarian model of freedom of speech originally maintained certain ethical grounds for unlimited freedom of speech, for example, the quest for truth and the best possible knowledge that would eventually benefit society (Slagle, 2009: 239–240). As YouTube insists that it is only a platform for various kinds of user-generated content, and not a medium, it has never grounded its operations on such things as seeking truth or supporting democracy. When looking at YouTube’s Mission with its four freedoms together with the Advertising-friendly Content Guidelines, it is clear that instead of a libertarian agenda, YouTube promotes a neoliberalist understanding of the freedom of speech. The moral law of neoliberalist theory values the individual’s ability to contribute, not to society, but to the production of surplus value and the accumulation of capital (Clarke, 2005: 55). In neoliberalist ideology, a human being is not a member of society but a member of a firm, or himself/herself a firm, whose only purpose is to gain profit (Brown, 2016). Everything can be commodified, and the market is presumed to work as an appropriate ethic for all human action (Harvey, 2005: 165). In a similar vein, YouTube’s allegedly unlimited freedom of speech is justified not by seeking truth or the best possible knowledge but by the individual’s opportunity to seek popularity and economic success. Therefore, freedom of speech can also be limited by YouTube on the same basis – a quest for the best possible advertising income.
Thus, the ethics of freedom of speech, and therefore the limits of acceptable content, are driven by the markets. Managing hate speech and racism becomes part of business strategy; individual provocations can be seen as careful balancing acts between pleasing sensation-hungry audiences and conforming to established rules. The boundaries of appropriate behaviour are constantly tested and trade-offs are made; losing individual advertisers may not be a major problem if increasing popularity compensates for the lost revenue in the long run.
The new Advertising-friendly Content Guidelines were released in June 2017 – 4 months after the public controversy regarding PewDiePie’s videos started. However, YouTube never really distanced itself from PewDiePie even when his former network, Maker Studios, did. According to several articles, the video ‘Death to all Jews’ was never removed by YouTube but by PewDiePie himself, and the company did not classify the video as hate speech but considered it satirical (‘How About That…’1; PewDiePie, 2017a; Solomon, 2017; Unknown writer/Irish Times, 2017). PewDiePie’s channel was not removed or suspended from YouTube, and he has all this time been a member of the Partner Program, which has provided him with significant revenue. Although PewDiePie probably faced a short-term economic backlash, as the second season of his YouTube series Scare PewDiePie was cancelled from the YouTube Red subscription service and his channel was removed from the Google Preferred advertising program, his subscription numbers kept rising, and he has made several new sponsorship agreements since the incident. With his over 100 million subscribers and other fans, he remains the most influential and highest-earning individual creator.
Conclusion
Since the controversy analysed in this article, PewDiePie has been involved in several other incidents that link him, although loosely, with the alt-right movement. In February 2018, he praised a book by the Canadian psychologist and Internet celebrity Jordan B Peterson, who opposes ‘political correctness’, denies the existence of White privilege and considers patriarchy natural (‘BOOK REVIEW’; PewDiePie, 2018b). In November 2018, he invited the American reactionary right-wing politician Ben Shapiro to analyse memes on his channel (‘Okay, this is epic’). In December 2018, PewDiePie made a video in which he recommended new promising YouTube creators, including one called E;R, whose videos contain racist, sexist and homophobic slurs. After criticism, PewDiePie removed the recommendation and claimed that he had had no knowledge of that content, but he also considered the public outcry unfair (Alexander, 2018). However, these incidents and the contents of these videos did not bother most of his subscribers, whose number kept rising.
Earlier, in October 2018, PewDiePie had announced a rivalry between himself and T-Series, an Indian music record label and film production company, over which of them would be the first to reach 100 million subscribers, as he had noticed that T-Series was catching up with him as the most subscribed YouTube channel. The announcement took the form of a diss track, a song that mocked T-Series, contrasting the company’s mass production with his own entrepreneurialism (‘bitch lasagna’; PewDiePie, 2018a). Later, in March 2019, he made another offensive song related to T-Series that included racist remarks about Indians (‘Congratulations’; PewDiePie, 2019a). Both songs were condemned as abusive and racist by the Delhi High Court in India (Sekhose, 2019).
His fans and many other creators were caught up in his campaign, since he had managed to brand himself as an independent creator working in the original DIY spirit of YouTube against a multiplatform corporation. His fans launched a campaign, ‘Subscribe to PewDiePie’, which spread throughout social media, on the streets and at real-life events. The slogan was also used by the Christchurch terrorist, who live-streamed his attacks on two mosques in March 2019. PewDiePie condemned the shooter’s use of his name on Twitter. Yet, he did not put an end to the use of the ‘Subscribe to PewDiePie’ slogan until the end of April 2019. In the video ending the campaign, he explained that it had started as ‘fun and positive’, not linked with racism or hate. He also mentioned his diss tracks, which were not meant ‘to be taken seriously’ but were ironic jests, and denied being racist or supporting such ideology (‘Ending to Subscribe to PewDiePie Meme’; PewDiePie, 2019b).
After the fierce competition, T-Series was the first to reach the mark of 100 million subscribers, in May 2019, but PewDiePie followed with the same achievement in August 2019. At the time of finalising this article, in March 2020, PewDiePie has over 103 million subscribers. As part of celebrating this milestone, he announced that he, as a responsible creator, would donate US$50,000 to the Anti-Defamation League; however, he withdrew the pledge after his fans criticised the decision (Paul, 2019). YouTube congratulated PewDiePie on reaching 100 million subscribers by releasing a video on Twitter showing a timeline of his progress as a creator (‘YouTube Congratulates PewDiePie for 100 Million Subscribers’) and gave him a ‘Red Diamond Play Button Award’ (‘Unboxing 100 MIL YouTube AWARD!!’; PewDiePie, 2019c).
Obviously, YouTube is not concerned about PewDiePie’s constant ‘slipping’ of racist content, nor about the fact that, judging by his latest channel content, he is coming closer to the inner circle of the alt-right movement. This is, unfortunately, consistent with my claim that YouTube is, through its actions, exercising a neoliberalist interpretation of freedom of speech. Managing hate speech and racism has become part of a business strategy, and losing individual advertisers or sponsors is not a major problem when increasing popularity eventually compensates for the lost revenue. As the withdrawal of his donation to the Anti-Defamation League after fans’ criticism shows, brand building requires careful balancing; fans may not be happy with too many counteractions and apologies for racism if they admire him as a warrior against ‘political correctness’.
The original controversy and the later events show how YouTube’s policies lead to a situation in which racist slurs are not condemnable if they are made within the ‘intimate’ community of a YouTube creator and his millions of fans. Although PewDiePie did face consequences from his sponsors, YouTube never regarded the ‘Death to all Jews’ video as a violation of its Community Guidelines but deemed it satirical. In addition, even though his diss tracks against T-Series were condemned as racist by the Indian court, YouTube did not take any action to remove the videos, except on YouTube in India, and, at present, they are accessible on PewDiePie’s channel. YouTube’s reluctance to react both to the ‘Death to all Jews’ video and to the inflammatory songs on the Indian T-Series reveals how the company’s policies repeat the White cultural pattern in which the feelings of the racially insulted are totally bypassed in judging whether or not racism was involved. The company’s guidance material actually encourages creators and their fans to adopt this kind of cultural pattern, as YouTube spurs creators to understand their channel as a place for virtual interpersonal relationships between the creator and his/her audience that make up an intimate ‘community’ regardless of the size of this audience. This blurs the understanding of YouTube videos as public presentations that are available for everyone to see all around the world. The example of PewDiePie demonstrates how this, in turn, seems to obscure the limits of socially acceptable content, as both the audiences and the creators learn to evaluate the content only according to its popularity among the fan ‘community’. Yet, racist jokes may have unpredictably serious political consequences, as the later incidents show.
During the controversy, PewDiePie attacked the idea of ‘political correctness’. His video can be seen as part of the rise of ‘equal opportunities’ rhetoric in which racist humour is considered acceptable since its targets have, to the jokers’ mind, an equal chance to stand up for themselves. This rhetoric has historical roots in the hacker and trolling cultures, which still have an effect on the policies of the platforms. YouTube’s understanding and tolerance of ‘unpopular views’ and the satire related to them have made possible the vast spread of racist humour on the platform. Consequently, YouTube and other social media platforms have provided a seedbed for the alt-right movement, which has skilfully used this potential for its cause.
At the moment, PewDiePie continues to make content for his over 103 million subscribers and other followers, and YouTube, PewDiePie and advertisers continue to earn massive income from his videos. The original case, and the events that followed, indicate that YouTube has enabled flirting with racism and alt-right ideology to become a successful business strategy. As long as content does not exceed sponsors’ and advertisers’ threshold for reputational harm and matches subscribers’ and followers’ tastes, YouTube is not interested in restricting the free speech of creators, even though their ideological message may be harmful to society. Thus, in practice, YouTube works as a marketplace-of-ideas in its neoliberal form, and racist content can be used as entertainment as long as there are markets for that kind of media commodity.
