Abstract
Rising media and academic concerns for the social implications of online trolling require that scholars understand how everyday users conceptualize trolling. The validity of survey instruments may be at risk if online users and academics, who subsequently advocate for interventions, have competing understandings of trolling. Surveying 120 US-based online users, I find that the spectrum of behaviors classified in the literature as “trolling” does not resonate with respondents. Instead, respondents report that harassment on the basis of race and gender is indicative of trolling. These results suggest a disconnect between academic definitions of trolling that focus on prosocial or humor-based forms of trolling and lay definitions which foreground identity-based harassment and harm. Current instruments that assess trolling behaviors and experiences do not attend to this definition of trolling as identity-based harassment, calling into question construct validity. Furthermore, sociologists identify identity-based harassment as a form of discrimination in other domains of social life, but present measures of discrimination do not consider the online domain. This suggests that studies of everyday discrimination and the implications of mistreatment should extend to online spaces, where racism and sexism are reproduced.
Introduction
Since the 1980s, Internet users able to communicate via email, chat relays, and forums have been confronted with particular behaviors understood to be deviant, deceptive, aggressive, or antagonistic (Donath, 1999; Kim & Raja, 1991). Recognizing this “trolling” behavior required an insider knowledge of the norms of these particular websites in order to distinguish between sincere/insincere or genuine/deceptive interactions (Herring, 1999). Today, it is not only those who are deeply embedded in these specific websites, or those who spend the majority of their time in specific online spaces, who have an intimate knowledge and conception of what trolling is. Russian trolls have been exposed for meddling in the presidential election, exacerbating racial tensions around #BlackLivesMatter, and engaging over 600,000 users on Twitter (Rosenberg, 2018). The president of the United States, Donald Trump, is described as trolling political elites and everyday people (Applebaum, 2017). Academics are harassed on their social media accounts by right-wing trolls (Ferber, 2018) and the White supremacist organizers in Charlottesville were linked to online trolling communities (Walters, 2017). Trolling has thus made a shift from the margins of the web to the forefront of academic and media discourses about free speech, harassment, racism, and politics.
Many scholars argue that trolling is essentially an umbrella term for a spectrum of multidimensional, antagonistic, antisocial, or deviant behaviors and motivations online (Buckels et al., 2014; Hardaker, 2010; Phillips, 2015; Sanfilippo et al., 2018). However, feminist and antiracist scholars argue that trolling can often be a form of identity-based harassment, typically in the contexts of trash talking during online gaming, Twitter tweets, or personal emails sent to a target (Gray, 2012; Gray et al., 2017; Vera-Gray, 2017). Yet the trolling literature has not explored how trolling is generally perceived, let alone as identity-based harassment, across online contexts. Part of what informs the debate over definitions of trolling is that academic conceptualizations of trolls’ intentions and motivations, and the impact of that trolling, are informed by content analyses of trolling discourses (Hardaker, 2010; Herring, 1999), not how people who experience trolling conceptualize it. Without an agreed upon definition, comparison between studies is difficult. Moreover, without knowing how everyday users may conceptualize a term, survey instruments’ content and construct validity are at risk. The question of whether, or when, trolling is synonymous with identity-based harassment is also sociologically relevant, as identity-based harassment has serious implications in other domains of social life (Williams et al., 1997).
In order to understand how users conceptualize trolling, and explore the extent to which trolling is perceived as identity-based harassment, I conducted an anonymous, open-ended survey on the online marketplace Amazon Mechanical Turk. One hundred and twenty US-based adults with various social identities, experiences with trolling (including trolling others themselves), and different platform participation discussed their general understanding of trolling, provided examples from their own experiences, and offered explanations for trolling.
I find that for respondents, trolling is a collective form of harassment perceived as having the malicious intent to provoke another user. In fact, for many respondents who experience overt racist and sexist speech, or attacks on their identity, trolling is racism and sexism. With this definition, I am not suggesting there has merely been a semantic shift over time, wherein new or different behaviors are now defined as trolling, while academic definitions have been, or should be, tossed aside. Instead, I argue that scholars should take seriously the possibility that lay definitions compete with academic definitions of trolling in important ways, which would put at risk the construct validity of survey measures and the relevance of our theoretical lenses. Indeed, I find that by overlooking how everyday users experience and make meaning of trolling, many academic definitions operationalize trolling as non-malicious and unintentional, which competes with lay interpretations of trolling as racist and sexist. Consequently, our measures and interventions cannot account for the ways everyday users experience trolling as racism and sexism. Everyday racism and sexism have serious consequences related to negative health outcomes, but more broadly, they also work to bolster systemic racism and sexism in activating underlying power structures to normalize dehumanization (Essed, 1991). Thus, we might expand how we understand trolling to consider the spectrum of lay experiences, which align with how feminist and antiracist scholars have argued we frame trolling.
Definitions of Trolling
Scholars do not agree on how to define or operationalize trolling, despite decades of research that seeks to develop typologies, illuminate trolls’ motivations and intentions, and theorize the broader implications of the phenomenon. Such a vast array of theoretical frameworks, contexts, and sampling frames also makes it difficult to compare studies of trolling. Indeed, scholars argue that trolling is contextually bound and subjective; what users consider trolling in one space such as Wikipedia may not operate as trolling in another site, such as Instagram (Hardaker, 2010). In addition, what one user may interpret as trolling, another user may not even notice if it were aimed at them (Sanfilippo et al., 2017). One connecting thread within this literature is the emphasis on trolling as a behavior intended to provoke a reaction in another user. Trolling can be meant to provoke a disagreement or conflict (Beyer, 2014; Coles & West, 2016; Crystal, 2001) or more generally provoke an interaction that the troll ultimately finds amusing (Buckels et al., 2014). The troll can be humorous, manipulative, deceptive, aggressive, shocking, and/or impolite as a means to provoke (Bishop, 2014; Buckels et al., 2014; Hardaker, 2010, 2013; Hong & Cheng, 2018; Phillips, 2015).
Much of the trolling literature consists of scholars debating what we should consider trolling. As Jane (2015) points out, some scholars argue that there has been a conceptual inflation of trolling, implying that researchers consider every single deviant behavior within an anonymous online context a form of trolling, regardless of what the troll was intending (e.g., O’Sullivan & Flanagan, 2003). These scholars argue such an inflation of trolling is problematic because it obscures what “actual” trolling is, and it groups “humorous” trolls with “malicious” ones. This then “overemphasizes the negative aspects of trolling” and “overlooks the aggregate impact of trolls” (Sanfilippo et al., 2018), unjustly attributes immorality to “harmless” trolls, and works to erase the “prosocial” implications of trolling. These scholars also argue that while we should have discussions about what constitutes “real” trolling, we cannot privilege the target of trolling or their perceptions because we run the risk of miscoding data. Following the lead of O’Sullivan and Flanagan (2003), these scholars try to “objectively” categorize the troll, target, and observers in a neat timeline in order to correct what they see as incorrect analyses that contribute to the moral panic surrounding trolling (Jane, 2015). The reaction(s) the troll intends to provoke, and whether the intention to provoke is ultimately relevant at all, are up for debate.
In direct contrast, feminist and antiracist scholars argue that definitions of trolling are conceptually deflated, in that they do not foreground the malicious, antagonistic forms of harassment that people experience on the basis of race and gender (Gray, 2014; Jane, 2014). To be sure, some scholars may include those types of trolling in their definition or analysis, but harassment is often understood to be one of many behaviors for a troll, and not the core strategy (e.g., Shaw, 2013; Thacker & Griffiths, 2012). Feminist scholars argue that these scholarly definitions give too much agency to the intent of the troll. That is, if the troll was intending to be funny, then that often takes precedence in analyses over how it was perceived by the target. These scholars suggest that we have to examine how trolling operates as a form of discipline toward marginalized groups online, how it encourages the systemic silencing of people online, and how it functions as a form of symbolic violence (Cole, 2015; Herring, 2002; Jane, 2015; Lumsden & Morgan, 2017; Ortiz, 2019a; Sobieraj, 2018; Vera-Gray, 2017). Based on the literature on harassment in other social domains (Almeida et al., 2009; Essed, 1991; Lewis et al., 2013; Oh et al., 2014; Rodriguez, 2008), this feminist and antiracist argument seems reasonable. However, we do not know under what conditions people consider trolling to be harassment. This study will assess how closely everyday definitions of trolling align with the feminist and antiracist conceptualizations of trolling, as forms of identity-based harassment and everyday racism and sexism in the United States (Essed, 1991).
Broadly speaking, the definitions of trolling that most scholars rely on were developed from content analyses of trolling incidents on specific sites (Hardaker, 2010; Herring, 1999). These definitions have not been systematically compared to how people across online contexts understand trolling in their own terms. While the literature demonstrates that trolling can be a range of behaviors, from hacking, to releasing private information, posting satirical comments, or posting redundant information in order to disrupt a conversation, to hate speech, no study has inductively examined how everyday people define trolling across these online contexts. Sanfilippo et al. (2018) agree that the mismatch among academic, media, and lay conceptualizations is problematic, and they utilize focus groups and interviews to understand how 10 college students’ understandings of trolling align with the literature. Part of that methodology, however, included providing pre-selected, “exemplary” trolling cases to respondents. Cook et al. (2017) conducted interviews with 22 self-identified online gaming trolls to examine how they understood their behaviors, goals, and motivations, but we do not know how the potential targets of this trolling understand their own experiences, and how those perceptions vary across other online spaces. These two studies are no doubt important interventions in clarifying how trolling is conceptualized. However, debates over interventions and implications of trolling are potentially disconnected from how everyday people construct meanings and understandings of their own experiences. Meaning construction is not only significant for its role in cultural construction (Lamont, 2000), but the meanings people construct of events also shape their coping mechanisms and behaviors following those incidents (Lazarus & Folkman, 1984).
Such a project has been critiqued by scholars who acknowledge the definitional problems of studying trolling, but who nevertheless argue that, ultimately, “the traditional sense of the polysemous label trolling should be sustained in the scholarly literature, regardless of the contemporary lay language users’ inconsistent parlance . . .” (Dynel, 2016). The “technical understanding” of trolling may indeed be different from lay persons’ definitions, but this difference should not be taken as self-evident. Furthermore, that some scholars dismiss lay users’ definitions, which are not unanimous themselves, sparks interesting questions. What are lay definitions of trolling? How might we gain insights into the experiences and implications of trolling by examining these definitions?
Data and Methods
In order to explore everyday meanings of trolling, I recruited 120 respondents from Amazon Mechanical Turk (MTurk) to complete an open-ended survey. Qualitative data provide a useful starting point for exploring how people make sense of and construct meanings of their subjective experiences (Miles et al., 2014). MTurk is an online marketplace where employers can create “human intelligence tasks” that a worker can complete for a wage set by the employer. These tasks range from comparing images and transcribing podcasts to copying information into a database and answering survey questions. MTurk workers generally perform the same on surveys as participants recruited from face-to-face techniques and via social media (Casler et al., 2013), but I also chose MTurk because I needed total anonymity in order for respondents to feel comfortable answering questions regarding trolling. I also wanted a sampling frame of users whose online participation was diverse, as I was interested in trolling across online contexts, as opposed to bound to experiences on any one specific site. MTurk workers are not linked to any one particular context or website, which may have potentially been the case with users recruited from specific social media sites or forums.
I set the recruitment parameters for my task such that my recruitment materials could only be accessed by adults residing in the United States, as discourses of race and gender are bound to national contexts (Omi & Winant, 2014) and my use of an everyday discrimination (Essed, 1991) framework is concerned with particular US-based experiences of racial and gendered inequality. Before a respondent accepted my specific task they were able to read the informed consent form, wherein I explained that I was interested in “exploring how people define and understand trolling.” This study was completely anonymous as I did not collect or have access to names, emails, or IP addresses. I also did not have access to their financial information, as they were paid the US$2 incentive through the MTurk system. Beyond the financial aspect, there is also an incentive within MTurk to take tasks seriously, as employers not only choose whether to pay a worker based on their performance but also rate that performance, which may affect the worker’s chances of being selected for subsequent tasks. Employer ratings then become associated with a worker, and future employers can limit their recruitment to workers with a rating that denotes reliable and focused performance.
This sample was composed of 14 women of color, 12 men of color, 32 White women, and 62 White men. Respondents’ ages ranged from 21 to 47, with a mean age of 36. Following Sanfilippo et al.’s (2018) interview protocol, the open-ended survey (Appendix) asked respondents to define how they understand the term trolling, explain why they believe people troll, distinguish between trolls and other online deviants, and describe their own experiences being trolled. If applicable, respondents were also asked to describe their own trolling behaviors. Question stems did not include terms such as ideology, power, harassment, discrimination, race, gender, or politics, which might have shaped how respondents subsequently framed trolling. I also did not provide respondents with any definitions of trolling to assess. One hundred respondents took around 1 hr to complete the entire survey, while the remaining 20 respondents took at least 30 min. Answer lengths per open-ended item ranged from four sentences to three paragraphs.
To code the qualitative data, I utilized the thematic analysis approach (Braun & Clarke, 2006). Where a narrative analysis and a discourse analysis may look at text as an object itself and seek to maintain continuity and trace contradictions across individual cases, a thematic analysis looks at text as a proxy for human experience (Braun & Clarke, 2006). Thematic analysis is an inductive approach, whereby the researcher identifies themes in terms of both prevalence and significance; these themes tell us something important about the data in relation to the research question. Prevalence was determined not only by how many respondents shared a common idea or experience; themes as a whole also had to contribute to piecing together respondents’ stories comprehensively, in order to understand the complexities of how trolling was understood.
Findings
According to respondents, trolling is a collective form of harassment perceived as having the malicious intent to provoke another user. Trolling is understood to be collective, in that despite being undertaken by individuals, the strategies used by trolls are shared beyond that individual person or one interaction. Respondents also suggest that trolls activate group position and “exercise power” in their efforts to target women, feminists, people of color, and disabled persons. Trolling is perceived as being tied to the group positions of male, White, and conservative. I found this even among the White conservative men who admitted to trolling others. By harassment, I mean that respondents describe sustained or repeated attempts at emotional and psychological harm and physical threats over time and across contexts. When based on social identities, this harassment activates discriminatory practices in the differential treatment of some groups over others (Essed, 1991; Tynes et al., 2008; Williams et al., 1997).
Respondents’ description of trolling also aligns with discrimination in the form of disparate impact. This is when a discriminating act or process is not exclusively about race or gender, but the overarching rules nevertheless affect specific groups over others. Respondents explain that a norm across social media sites is that trolling is normal, frequent, can be aimed toward anyone, and is to be expected. But trolling affects groups differently, as those who occupy marginalized identities can also be trolled on the basis of those identities. Conceptualizing trolling as discrimination in the form of disparate impact allows us to see how, despite its multidimensionality (it is not always racist or sexist; Sanfilippo et al., 2018), it nevertheless reinforces and reproduces racial and gendered disadvantage (Pager & Shepherd, 2008).
The final piece, that trolling involves a malicious intent to provoke another user, is also important, as it speaks to respondents’ general understanding that trolling is not about trying to make a target laugh or distract a target from their conversation about trivial matters. Instead, trolling is understood to be a tool to distract targets from discussions “that matter,” by provoking them to feel intense anger, hurt, or annoyance, or to ultimately force targets into silence or retreat into their private accounts and disconnect from public discourses. Respondents mention having been trolled, and trolling others, during instances where religion, race, free speech, gun rights, feminism, or other inherently political topics were at stake. Respondents focused the majority of their answers on describing trolling as a specific subset of behaviors: harassment. To most scholars and trolls themselves, trolling encompasses a spectrum of behaviors undertaken by individuals (Hopkinson, 2013; Phillips, 2015; Sanfilippo et al., 2018). The most vile and graphic trolling may be sub-categorized as “hate trolling” or “flaming” within the literature, but respondents do not see this form of trolling as distinct; rather, racist and sexist harassment constitutes trolling itself.
One hundred and three respondents focused exclusively on trolls’ goals to elicit negative responses (such as “anger,” “sadness,” and “destruction”) through “harmful,” “bigoted,” “sexist,” or “racist” means. These respondents also all used the word “harassment” at some point throughout the survey when describing trolling. While the trolling literature argues that there is a large umbrella of behaviors and motivations subsumed under “trolling,” these respondents’ definitions do not reflect such a multidimensional conceptualization. There was some variation, in that the remaining 17 respondents suggested that trolls are not always negative in trying to get attention online or bait people; trolls can merely disrupt a conversation by posting something pointless or they can post an “unpopular opinion about a politician” or a “polarizing comment about a sensitive issue like gun control” in order to “spark debate.” While academic definitions regard a much wider range of behaviors as trolling, this variety is not reflected in users’ accounts of trolling. I find that lay definitions more often focus on direct attacks, as opposed to more harmless, humor-based trolling such as “Rickrolling.”
Of the 103 respondents who described trolling as harassment, 91 of them described their own experiences of trolling using terms that focus on explicit insults and harassment on the basis of social identities. These experiences of trolling were consistent with the literature on everyday discrimination in terms of the repeated pattern of harassment, which set in motion changes in respondents’ behaviors and reactions (Essed, 1991). Respondents noted significant negative feelings about their experiences and the anticipation of subsequent abuse. After defining trolling as “harassment to make someone feel bad about themselves,” this 24-year-old White woman explains her experience of trolling on the image sharing platform Instagram:
. . . I posted a selfie once on my instagram when [my profile] was public. I tagged it #bopo and #selfcare because I was trying to feel more comfortable with my weight and my body, and that community is the place to find others in the movement . . . I had men tagging each other in the comments section of the picture, calling me a fat cunt and describing how they would have sex with me if I lost weight . . . I felt disgusting. . . I made my account private. That makes it hard to connect to the #bopo community.
Instagram has been noted for its prosocial potential in terms of helping to build community (Blight et al., 2017) which this respondent intentionally sought out when she initially posted pictures of herself publicly. What the respondent encountered in response was a group effort by men to harass her through misogynist speech and comments on her body and sexual value to them. The example she provided was one that was incredibly negative for her, as she notes feeling personally disgusting. In anticipation of more harassment, she made her account private despite wanting to engage with an online community. This second example, from a 26-year-old White man, demonstrates the negative emotional implications of trolling, but also the everydayness of the experience.
Most recently I was playing a game online and my stutter came out because I was excited. The other guys called me fucking retarded, and they were just laughing on and off the rest of the match. One of them asked me if I was an asian who just learned english. It happens a lot to me, but I was still hurt and insulted.
This respondent defined trolling as “bordering harassment in most cases . . . trolls love to use people’s personal traits against them.” To be sure, this respondent wants to clarify that such an experience is common for him. Ability, race, and language are mobilized as insults he was subjected to on this particular occasion, which is consistent with the work of scholars who study trolling in gaming (Gray, 2012, 2017; Ortiz, 2019a, 2019b). The respondent finds these experiences hurtful despite the frequency with which they occur. Experiencing trolling in these forms can also come to shift one’s broader behavior online, as explained by a 36-year-old Black woman:
I’m a nonwhite woman, so anytime I go against my better judgement and post anything online about race or feminism, I get a storm of racist trolling comments. People will insult my nose, my hair, my skin, and my breasts . . . When I refuse to engage them, they’ll say I can’t handle a debate or I can’t defend my opinions . . . I hate being trolled, so I tend to avoid posting on threads if it involves race and feminist politics.
This respondent defined trolling as “pure, unfiltered, perverse racism.” While these interactions are often framed by trolls as “debate,” respondents conclude that these discussions are meant to harm them and “suck [their] energy into a meaningless back and forth,” an explanation corroborated by the 19 respondents who also identified as trolls. As a 21-year-old White male self-identified troll stated, “With trolling, you gain a sense of power because you are able to distract someone from the message they were sending, and now they are drawn into your world and away from their cause.” Despite concerns over labeling trolls’ intentions as political (Phillips, 2015), the respondents who explain their justification for trolling confirm suspicions that trolling is politically motivated. Nineteen respondents, all of whom identify as “conservative,” openly admit to trolling others as a “strike to political enemies,” seeking to “smash precious, sensitive bullshit worldviews” or “humiliate liberals, feminists and minorities, who just never stop shoving their views down people’s throats.” It is plausible that women, people of color, and liberals harass others online, but may not consider it trolling. Evidence of this was not present in the data, and thus, I cannot speak to how trolling in those forms is conceptualized.
I also found that perceiving trolling as identity-based harm, and experiencing this harassment consistently, then changes respondents’ interactions. Many noted losing the desire to post or share their opinions on issues they find meaningful or important and expressed a perception of their online spaces as hostile. Every respondent who experienced trolling as harassment on the basis of social identities described changing their interactions in anticipation of victimization, in addition to feelings of emotional pain and exhaustion. Such responses are consistent with everyday discrimination in face-to-face domains of social life and future studies should examine that relationship more closely (Essed, 1991; Williams et al., 1997).
When asked to define trolling and provide examples from their own lives, respondents in an open-ended and anonymous online survey share narratives not represented in the majority of the trolling literature, which highlights insincere, disruptive, “joker” behavior as opposed to malicious attacks. To be sure, Buckels et al. (2014) argue that trolls are characterized by sadistic tendencies, but respondents add additional insights by suggesting that harassment on the basis of race, gender, political affiliation, and disability is what provides trolls pleasure. As one 26-year-old self-identified troll shared after describing how they troll #BlackLivesMatter posts: “It feels good to see them suffer. I know I’m getting at their psyche deeper than if I was there holding a sign at a protest.” A sense of anger at the contemporary demographic, economic, socio-cultural, and political changes, and a desire for revenge against those they feel have wronged them, are also pleasurable for trolls. According to this 32-year-old self-identified troll:
There is so much focus on immigrants, and black people, and women, and all of it is so polarized. The media is liberal and it prioritizes their needs over conservatives. Populations are increasing in those groups, so I understand the need to take needs into account. I get tired of being ignored. I speak up against the liberal agenda and bias, and demand to be heard . . . If it hurts their feelings so be it.
Thirty-three respondents also suggest a collective emotionality that binds trolls together, as one Asian American woman respondent explained:
Misery loves company, but so do anger and resentment. It’s not lost on me that those are bedfellows of racism. Combining those three online, you get a group of white men who feel lost and angry. They see themselves in each other, and they find others like them through trolling . . . They punish us how they can’t to our faces.
This collectivity functions as an effective recruiting mechanism for others and also further motivates the behavior. Both trolls and non-trolls explain that the sheer frequency and intensity of trolling operate in ways that normalize the very act. That most respondents’ definitions focused on harmful, normalized direct attacks, and implied a collective targeting of marginalized communities may be in part due to the political climate of the present era, marked by White male rage and unprecedented political polarization. Respondents are navigating that climate both online and offline, which may render online interactions that reproduce racism and sexism more salient to their ordinary lives. This could be why trolling is so often synonymous with racism and sexism among respondents. However, the literature lacks data on lay definitions of trolling from previous political eras, so I cannot argue that these definitions are novel. It is also possible that marginalized groups and politically motivated trolls have framed trolling as racism and sexism for much longer than the Trump era, but again, we lack everyday users’ lay definitions.
Conclusion
I have provided a brief overview of how trolling, an increasing concern in academia and the media, is understood by everyday social actors. I found that despite the vast diversity of trolling definitions propagated by scholars, users generally described trolling as a form of harassment with the malicious intent to provoke another user. Respondents who note experiencing this form of trolling describe a process consistent with conceptualizations of everyday discrimination. That is, respondents perceive their mistreatment on the basis of social identities, note feeling sad, exhausted, and hurt in response, and come to anticipate additional harassment in their subsequent reactions. The culmination of these frequent experiences leads respondents to change their patterns of interactions in order to avoid this harassment, which includes withdrawing from personally meaningful conversations on social media sites. The experience of trolling in the form of identity-based harassment becomes embedded in respondents’ expectations for what participating online entails such that witnessing and experiencing trolling is then mundane. This study thus corroborates feminist and antiracist scholars’ arguments regarding the oppressive nature of trolling. Survey measures, guided by academic definitions that imply or assert a colorblind and postfeminist assessment of trolling, overlook the racist and sexist nature and impact of trolling, which lay definitions prioritize. This methodological oversight leaves the lived experiences of racist and sexist trolling unattended to.
Trolling is not only a personality disorder (Buckels et al., 2014) or performed “for the lulz” (Phillips, 2015). Respondent narratives suggest that trolling is a mechanism through which White men, especially politically conservative men, collectively target others with their rage, disgust, and discontent. While it is clear from the literature that not all trolls conceptualize their actions as such, and not every target/victim conceptualizes trolling as harassment, this study shows an important way trolls and targets understand trolling. Trolling may be a strategy people use to virtually respond to real and perceived demographic, political, and economic changes that mobilize and outrage White conservatives in the contemporary era (Hochschild, 2018; Norris & Inglehart, 2019). Trolling can also function as a cultural mechanism through which groups construct, impose, and defend group boundaries in virtual spaces, which operates as a negotiation of real or perceived social changes. As Jakubowicz (2017) argues, “. . . the internet has become not merely a battleground but indeed a real weapon in the conflicts over resources, power and life choices.” As these results suggest, virtual spaces should be viewed as micropolitical sites where macrosocial transformations reverberate through the lives of everyday people.
For the study of racism and sexism specifically, this article suggests that trolling, as a specific interaction within a specific social domain, must be examined by scholars interested in the reproduction of racial and gendered logics. As Cresswell et al. (2014) argue, racism is a construction that gains its meanings from the social context in which it is used among social actors. The social meanings of trolling and racism/sexism as often synonymous should direct scholars’ attention to online spaces, where racism and sexism are experienced as an everyday reality. Scholars of systemic racism and sexism should also pay attention to how trolling normalizes overt forms of racism and sexism, which challenge postfeminist and colorblind theories. Discrimination measures, such as the Everyday Discrimination Scale (Williams et al., 1997), should be modified to take seriously the role of inequality experienced online.
Phillips (2015) argues that trolling should be understood through a digestion metaphor: examining the feces of a bear teaches you a lot about what the bear has been eating. Likewise, if trolling is a form of society’s feces, studying the content trolls adopt, the jokes trolls make, and the groups trolls most frequently target provides insights into the cultural environment of society (Phillips, 2015, p. 143). Trolling as feces, as a byproduct of society’s cultural production, frames trolling as an outcome. But my results suggest that trolling also plays a more active role in shaping society. Trolling facilitates the reproduction of racism and sexism, which then inform broader meaning-making processes among everyday people who must navigate hostile online and offline environments. Trolling, then, is not only the byproduct of an unequal society; it is also a mechanism through which everyday users experience inequality.
While trolling is not always perceived as identity-based harassment, scholars should be careful in framing trolling as apolitical, or as disconnected from the racism, homophobia, and sexism from which it springs and which it simultaneously reproduces (Ortiz, 2019a). Doing so positions these acts as merely an unfortunate and fixed reality of online life. This sentiment normalizes an experience that targets identify as traumatic, exhausting, and sad, and overlooks the fact that trolling in the form presented by respondents, like everyday discrimination in general (Essed, 1991), is not randomly distributed but socially patterned. Future studies should compare the experiences and implications of discrimination between online and offline domains, and examine how users’ conceptualizations of trolling are shaped by online and offline cultural and political contexts. This article contributes to debates over how best to conceptualize trolling, as concerted efforts on the part of academics across disciplines to address trolling have yet to consider how users themselves understand their experiences.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
