Abstract
Over the past decade, the rise of political extremism and its associated linguistic expression has led communication companies to restrict hate speech and, in many cases, to ban speech emanating from specific users. Before we attempt to regulate expression per se—whether through “cancelling” expression, “deplatforming” speakers through suspensions or platform restrictions, rewriting social media terms of service, or criminalizing harmful speech—we should seek a clearer understanding of how hate appeals are used to accomplish particular communication purposes. In this analysis, we examine hate speech as a stratagem—an artifice or trick of war—used with great effect during the 2020 election. Our concern is how this tactic is used to harm the body politic, reducing citizens’ ability to engage with divergent publics and points of view, and threatening democratic rule. Critically, we must understand how communication on social media platforms is being used to destabilize the communication environment and prevent the robust discussion of ideas in a public forum, a prerequisite for democratic governance.
Hate itself is not a crime. In fact, hatred of morally repugnant acts may be seen as a moral good, such as hatred toward child abuse or injustice. Yet there are certain kinds of hatred that have become matters for legal intervention, such as when “the victim was engaged in a federally protected activity” like voting (FBI, 2021). Though hate crimes have been investigated in the United States since World War I, it was not until the Civil Rights Act of 1964 that federal intervention was given a mandate. The first major case was MIBURN, the code name for the Mississippi Burning investigation, which investigated, arrested, tried, and convicted seven men who conspired to “violate the constitutional rights of the slain civil rights leaders” Michael Schwerner, Andrew Goodman, and James Chaney (FBI, 2021). Since then, new federal mandates expanded the role of the federal government into hate crimes, defined as a “criminal offense against a person or property motivated in whole or in part by an offender’s bias against a race, religion, disability, sexual orientation, ethnicity, gender, or gender identity” (FBI, 2021).
Going back to the 1990s, hate crimes have spiked nationally before and after presidential elections (Davis, 2020). After reviewing decades of data, the Center for the Study of Hate and Extremism (2018) observed that hate crimes “precipitously spiked after instances of political invective, terror attacks and elections” (p. 31). In 2008, for example, hate crimes against Black people rose when Barack Obama was on track to be the first African American president. Similarly, in the wake of Donald Trump’s election in November 2016, the nation saw the worst month in more than a decade for hate crimes, with over 750 reported incidents. More recently, violence against members of the Asian American Pacific Islander community spiked in 2020, apparently in response to the novel coronavirus and rhetoric referring to COVID-19 as the “China virus” (Chen, 2021).
Communication plays a central role in the prosecution of hate crimes, for it is the expression of hatred that is at issue. While in the United States there is no legal definition of hate speech, courts have ruled that such speech is protected by the First Amendment. Even so, hate crime litigation depends on the expression of hate, in essence criminalizing it when it incites imminent lawless action or consists of specific threats of violence targeted at a protected person or group.
In the 2020 election, we witnessed the tension created as expressive rights guaranteed under the First Amendment came into conflict with efforts to deny citizens the franchise guaranteed by the 14th and 15th Amendments to the Constitution. Far-right groups—including the White nationalist Proud Boys, supporters of the QAnon conspiracy, and militia groups—attempted to stop the election’s certification at President Trump’s behest (Biesecker et al., 2021). As the 2020 election ended and the recount battle ensued, a portion of the electorate became violent, culminating in what The Trump Impeachment Resolution described as “seditious acts” during the siege of the Capitol on January 6, 2021, while the Congress sat to count the votes of the electoral college.
Since the siege, a variety of responses emerged in attempts to cool the temperature of such hostilities: the federal government filed numerous charges against those who entered the Capitol; Internet Service Providers limited access to platforms for those espousing threats; and apps removed access to communication of those determined to advance the “Glorification of Violence” (see Twitter, 2021). Nearly every tech platform took action against President Trump and his supporters following the Capitol riot (Fischer & Gold, 2021): Twitter, Facebook, and Instagram banned him; Snapchat and Twitch disabled his accounts; Shopify removed two online stores associated with Trump; and TikTok redirected hashtags like #stormthecapitol to its Community Guidelines to reduce discoverability. Calls for the regulation of social media became pronounced, and First Amendment scholars are beginning to wonder if an absolutist position on the First Amendment can be maintained in the era of social media.
Clearly, lies and misinformation quickly spread through our media ecosystem and tech platforms are under pressure to stop the spread. Indeed, fictitious stories told for political purposes have become commonplace. One study found that the top 100 fake news stories on Facebook in 2019 were viewed over 150 million times (Gilbert, 2019). Yet in our haste to remedy social injustices, we are, perhaps, looking at the wrong thing. As Shannon McGregor and Daniel Kreiss (2020) argued, “Journalists and voters should pay more attention to the motivations, content, and drivers of mis- and disinformation” (para. 10). That is what this article will do.
Before we attempt to regulate expression per se—whether through “cancelling” or censoring expression, “deplatforming” speakers through suspensions or platform restrictions, rewriting the rules of social media, or criminalizing harmful speech—we should seek a clearer understanding of how hate appeals can be used to accomplish particular communication purposes. Critically, we must understand how our social media destabilizes the environment and prevents the robust discussion of ideas in a public forum, a prerequisite for democratic governance. Therefore, in this analysis, we examine hate speech as a stratagem—an artifice or trick of war—used with great effect during the 2020 election. Our concern is how this tactic is used to harm the body politic, reducing citizens’ ability to engage with divergent publics and points of view, and threatening democratic rule. Once understood, we can better evaluate a variety of methods to ensure greater stability in the communication environment.
The Changing Media Environment: Threats to Democratic Discussion
The conventional, folk-theory of democracy is based on the core principle that people should be persuaded by and make decisions according to coherent premises using the available evidence, and pursuant to logically consistent goals (see McGann, 2016). The idea is that democracy requires citizens to hold their leaders to account and, at least nominally, to vote for parties and politicians who will perform well in office. Accordingly, governments should be responsive to the public will and develop policy through a deliberative, marketplace-of-ideas framework.
American government is a long way from this normative ideal. As practiced in 2020, American politics is better understood as a clash between identity groups and political parties, not the rational preferences of individual voters (Achen & Bartels, 2016; Klein, 2020; Mason, 2018). In this sense, American democracy is less a contest between ideas and more of a contest between identities, with both parties trying to galvanize collections of voter identities to support their political tribe. Today, parties diverge both ideologically and culturally, and digital media and other communicative channels have confirmed, amplified, and exploited those identities. Rod Hart (2020) reminds us that Donald Trump is a product of the country’s declining political trust, “structural racism, institutional sexism, and a carnivorous right-wing media” (p. 18), adding that “Trump is one of us and ought not be dismissed as a cultural alien” (p. 12).
In this environment, elites and opinion leaders are incentivized to stoke polarization among the citizenry, while partisan media sources play to their base audiences by portraying the other side as caricatured radicals. On social and digital media, algorithms reinforce and amplify outrage and extremity. Legacy media, along with digital and social media, have mastered the art of monetizing anger, paranoia, and distrust (Taibbi, 2019). Digital media companies give us more of what we want and “we are particularly susceptible to glimmers of novelty, messages of affirmation and belonging, and messages of outrage toward perceived enemies” (Tufekci, 2018). As Thompson and Warzel (2021) encapsulate: “Facebook rewards exaggerations and lies,” adding, “Facebook groups for like-minded people are where lies begin to snowball, building momentum, gaining backers and becoming lore.” These are sporting events with distinct winners and losers; they are not the environment for making good policy decisions for the country.
The apex for these trends was the January 6, 2021, storming of the United States Capitol, where social media was foundational before, during, and after the attack. Scholar Kate Starbird called the riot “hashtags come to life”—a real-world manifestation of the outrage campaigns popular on right-wing social media (quoted in Nguyen & Scott, 2021). Experts said the insurrection was the “inevitable culmination of years of rhetoric among militias and far-right groups that had openly fantasized about overthrowing the government” (Nguyen & Scott, 2021). Such rhetoric, which used to be seen as deviant or radical (see Hallin, 1986), is now pulled into the political mainstream and seen as a topic of legitimate debate. We cannot escape commenting on the fact that our nation’s most powerful leader participated in the mainstreaming of such ideas. The power of the bully pulpit has been on full display. In both his social media posts and in speeches during rallies, President Trump played an important agenda-setting role, both online and offline. As Kyle Pope (2021) noted:

Trump’s Twitter account became a disinformation drip mainlined by newsrooms across the country; cable networks broke in with chyrons quoting even his most nonsensical claims. Outlets aired his rallies, unedited, as he spouted racist vitriol. Newspapers sent reporters to hear from Trump’s fans, who had internalized his distorted picture of the world, and repeated it back. (para. 4)
The media ecosystem nourished these mediated narratives as protesters contributed user-generated content from the scene. Even during the storming of the Capitol, numerous protestors live-streamed the event to friendly audiences on YouTube, Twitch, Facebook, and other platforms using the hashtags #StoptheSteal and #PatriotCapitol. Conservative media and various political campaigns encouraged viewers to post comments and donate money through Patreon and GoFundMe (Alexander et al., 2021). After the attack, images and videos were quickly turned into memes and gifs to advance falsehoods about the violence, such as the fiction that those who attacked the Capitol were Antifa, not a mob of Trump supporters. Violence in word and deed became the titillating force behind the evening news.
The entire media landscape is now in frenzied motion with little communication flowing between groups. This is critically important since social media networks now dominate our political talk. People “consume 1 billion hours of video content every day,” just over half the world’s population is now on social media, and they spend “144 minutes on social media sites every day” including “38 minutes on Facebook alone” (Henderson, 2020). While users believe they are in control, the flow of the world’s attention is structured by a handful of digital platforms (Tufekci, 2018). Just over half (53%) of U.S. adults say they often or sometimes get news from social media (Shearer & Mitchell, 2021). Facebook is the most popular social media site for news with about a third of Americans (36%) going to the network regularly for news. Considerable numbers of citizens also routinely get news from YouTube (23%), Twitter (15%), and Instagram (11%).
Importantly, these platforms are not integrated, and a common “public sphere” where citizens meet to talk no longer exists. In the simpler, more traditional days of media news, Walter Cronkite, the anchor for CBS News, was found to be the most trusted man in America. No such trusted source serves to unite the country any longer. “We the people” have become clusters of narrowly defined groups, and most online communities interact with themselves rather than with other groups (Freelon & Lokot, 2020), creating an echo chamber that merely affirms their worldview. Noting that we all experience the world independently on our phones, Zeynep Tufekci (2018) argued that political participation feels like private conversations: “Today’s phantom public sphere has been fragmented and submerged into billions of individual capillaries.” Reflecting on the “profound unreality” of our media ecosystem, Peggy Noonan (2021) asserted that we are losing our sense of reality: “We are removing ourselves from ourselves. It’s all the image before your eyes and what you feel. There is no emphasis on thought, on reflection, on the meaning of things.”
This new media environment is important to understand as it has fundamentally changed the way receivers view their world. The distinction between news and entertainment is blurred beyond previous limits. Faux news of a sort has been readily available throughout the last century: gossip rags and tabloids like the National Enquirer could be found at most supermarket check-out stands, and in their heyday, millions of subscriptions were purchased each year. In one sense, then, the presence of titillating and overtly fictitious stories is nothing new. What is distinct is that these past examples of print could be easily identified as fictional, whereas in today’s social media environment, a majority of Americans (59%) find it difficult to separate truth from lies (Santhanam, 2020).
Today, many voters do not trust their government and patently do not believe in the veracity of the news they receive. The 2020 election provides ample evidence of that. According to one research report, “false beliefs about the election are not merely a fringe phenomenon” (Pennycook & Rand, 2021, p. 1). Their study showed that a majority of Trump voters “particularly those who were more politically knowledgeable and more closely following election news . . . falsely believe that election fraud was widespread and that Trump won the election” (p. 1). Indeed, as we transition to a new administration in Washington, we live in a time when at least a third of Americans on either side of the political spectrum are entrenched in alternate factual universes.
These different universes are evidenced in differing perceptions of the January 6th attack. While 62% of voters saw the storming of the Capitol building as a threat to democracy, Democrats and Republicans perceived the insurrection differently (Smith et al., 2021). Specifically, 93% of Democrats considered the actions a threat to democracy. However, only 27% of Republicans saw it this way, with 68% saying it was not a democratic threat. In fact, a slim plurality of Republicans (45%) actively supported the riot, while nearly the same amount (43%) expressed their opposition. When asked which labels best describe those who stormed the legislative complex, Democrats most commonly described the crowd as “extremists,” “domestic terrorists,” “criminals,” or “antidemocratic,” while Republicans most commonly said “protestors” or “patriots.” One attitude Americans of both parties held was that America was crumbling. An Axios-Ipsos (2021) poll found that four fifths of Americans—83% of Republicans and 78% of Democrats—agreed that “America is falling apart.”
At this point, evidence suggests that social media has the power to manipulate the “shadows on the wall” that constitute our perceptions of reality. Importantly, as shown above, this is not a liberal or conservative issue. Both sides are susceptible to disinformation and skewed messaging (Freelon & Lokot, 2020). This is troubling because, as Darren Kew (2021) observed, “without some basic agreement on what happens, political compromise and respectful coexistence become difficult or impossible, and democracy requires these to survive” (quoted in Rayasam & Ward, 2021). As the Declaration of Independence famously declared, governments derive their power from the consent of the governed, yet Joe Biden lacked this symbolic authority as 40% of Americans and 80% of Trump voters said they believed Biden was not the legitimate winner of the 2020 election (Salmon, 2021). In this context, we next turn to hate as a heuristic in this contested environment.
Hate as a Political Stratagem
Communication strategies are employed in a variety of contexts during an election season and, as we might expect, are becoming increasingly sophisticated. That is certainly the case with hate speech. Writing in the wake of a particularly bitter Alabama governor’s race, Whillock (1995) identified the employment of hate as a communication stratagem—that is, as an artifice or trick of war. While noting that her initial analysis examined hate speech appeals designed to drive action at the ballot box, she observed that the stratagem “could just as easily be used to create civil unrest or perpetuate violent acts in other contexts” (p. 29). Since then, numerous studies have examined hate speech (e.g., Ahmed, 2014; Caponetto, 2018; Davison, 2007; Del Vigna et al., 2017; Ikeanyibe et al., 2018; Kopytowska, 2017; Malmasi & Zampieri, 2017; Waltman, 2003, 2018; Waltman & Haas, 2011).
The events surrounding the 2020 election revived the analysis of hate speech as a stratagem for achieving political ends. Building on the four appeals Whillock identified, this analysis evaluates the increased psychological, technological, and communicative aspects of how hate has become weaponized in the age of social media.
The First Appeal Is One of Attraction: To Consciously “Inflame the Emotions of Followers” Through Whatever Means Necessary
In the miasma of messages surrounding us in this social media age, gaining attention—or “eyeballs” as social media firms often articulate it—is critical for any type of success. Indeed, we live in an “attention economy” (Goldhaber, 1997). In previous generations, speeches and news publications were the primary modes of communication, followed by the introduction of radio, film, and television, each with their own attendant techniques and embedded value systems. Today, interest can be easily determined and quantified by mining the available data to determine both what attracts and what sustains viewers. The field developed a sub-genre of “computational propaganda,” an area of study that assesses how data are collected and used to create and manipulate publics. Programmers use a series of algorithms to scrape information on social media platforms and devices, then analyze that data to uncover trends that can be used to influence opinions and, hopefully, behavior.
Previously driven by human actors, today’s interest appeals are finely honed mathematical formulas. Once the characteristics of these appeals are determined, mathematical formulas and machine learning are applied to bots to do the work of keeping people engaged. In essence, these algorithms lead viewers through personalized playlists. “YouTube says these recommendations drive more than 70% of its viewing time, making the algorithm among the single biggest deciders of what people watch” (Nicas, 2018). Human interaction does not fully capture the spread of information; rather, machines searching and prioritizing the content of other machines stoke the spread. “The tendency of humans to use automation (i.e., automated decision support aids) as a heuristic replacement for vigilant information seeking, cross-checking, and adequate processing supervision, is known as ‘automation bias’” (Johnson, 2020, p. 2).
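The engagement-maximizing dynamic described above can be sketched as a toy simulation. Everything here is invented for exposition—the engagement model, the catalog, and the numbers are hypothetical—and it is not any platform’s actual algorithm. The sketch simply shows how a recommender that always serves the item with the highest predicted engagement can walk a viewer toward ever more intense content when engagement is assumed to peak just beyond what the viewer last consumed:

```python
# Toy sketch of an engagement-maximizing recommender (illustrative only;
# not any real platform's system). Items are rated 0.0 (mild) to 1.0
# (extreme). The hypothetical model assumes viewers engage most with
# content slightly more intense than what they last consumed.

def predicted_engagement(item_intensity: float, last_seen: float) -> float:
    # Peak engagement sits just beyond the viewer's current baseline.
    target = last_seen + 0.1
    return -(item_intensity - target) ** 2

def recommend(catalog: list[float], last_seen: float) -> float:
    # Serve whichever item the model predicts will hold attention best.
    return max(catalog, key=lambda item: predicted_engagement(item, last_seen))

catalog = [round(i * 0.05, 2) for i in range(21)]  # 0.0, 0.05, ..., 1.0

baseline = 0.0  # the viewer starts with mild content
history = []
for _ in range(8):
    choice = recommend(catalog, baseline)
    history.append(choice)
    baseline = choice  # each view resets the baseline upward

print(history)  # intensity ratchets up one step per recommendation
```

Under these assumptions, the viewer never requests extreme content; the drift toward it is a property of the objective (maximize predicted engagement), not of any individual choice.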
On the face of it, there would be little objection to this application of technology, but Woolley and Howard (2016) find a more sinister use when they explore the use of political bots for social control. They define political bots as “algorithms that operate over social media, written to learn from and mimic real people so as to manipulate public opinion across a diverse range of social media and device networks” (p. 4885). Moreover, the trend for holding viewers’ attention skews toward extremism as platforms escalate their message intensity to attract and maintain attention. The Wall Street Journal commissioned a study that found political bias in the algorithm itself, stating: “YouTube typically recommends videos that echo those biases, often with more-extreme viewpoints” (Nicas, 2018). So not only do viewers inadvertently believe they are interacting with people like themselves; they may easily fall prey to bot-led strings of ever more extreme messages. Interestingly, these are frequently embedded within strings of suggested videos, often with teaser headlines and images (i.e., “click bait”) that induce curiosity. While some media organizations are supposedly working to change the algorithm to favor “more authoritative” sources, there is clear manipulation in the current environment to hold the attention of viewers.
Why does that matter? Most members of online communities will not meet one another in the initial phases of organization. That means that the use of algorithms permits organizers to find the various strands of audiences who might be motivated to associate with a particular worldview. These “big tents” give the illusion of uniting people despite their tenuous but common threads. Such networked coalitions may be enhanced, in part, by acknowledgement in the news media and by their identification as “groups that matter” in trending conversations. Seeing your opinion and content supported through likes, comments, and other engagement similarly provides validation and opinion reinforcement. The circular frame is complete as more people learn about and join the conversation, often repeating hashtags that brand them with associated ideas. Thus, this crowdsourced viral messaging can spread across platforms to develop the appearance of a larger, more sustained movement. In these online communities, content creators can develop a degree of celebrity and become the object of attention themselves, empowering the user with the sense that their opinions matter. This may be especially impactful when someone previously felt their voice had been ignored.
Importantly, while people are led to believe they are finding meaningful connections, often they are talking to machines that mirror back their hurt and then position them to express that grief in unhealthy ways. Berghel (2018) suggests that these tactics resemble phishing:

they involve technical subterfuge (antisocial use of networking technology), perception management (manipulation of the public by getting them to think that they don’t see something they do, or do see something they don’t) and social engineering (motivating people to do something that they probably wouldn’t have done otherwise, such as subscribe to a controversial blog). (p. 72)
Used together, these tactics lead disparate people to coalesce into a public of people and machines that seemingly share much in common while projecting strength and unity.
But it does not stop there. As groups coalesce into change organizations, they provide affirmations for public action. Initially, that action may take the form of sharing what you have learned with others in your social network, thus lending source credibility to the message being disseminated. A slightly more robust action is for a viewer to post a message affirming the group ideals themselves. Once “liked” by others in the group—and more devastatingly by bots posing as humans—the affirmations open the door to further, more complex interactions. Some participants will become organizational heroes or influencers, while others may drive actions such as in-person gatherings including marches, protests, and political rallies. Thus empowered as a coalition, such groups establish norms for member behavior, such as increased voter participation or participation in protests that demonstrate strength. For individuals, the objective is not merely to find voice, but to find a voice that matters—that brings about results. The refrain “no one in Washington represents people like me” should be a wakeup call for democracies everywhere; those are the people who seek affirmation the most.
Awakening emotions is a prime means by which propaganda works. The use of emotion to sway crowds has long been acknowledged. For example, Cicero argued that a rhetor may secure the good will of hearers by attacking the opposition and arousing hatred (odium), prejudice (invidia), and contempt (contemptio). In Mein Kampf, Adolf Hitler (1939) explained the importance of emotional appeals in his rise to power:

all great movements are movements of the people, are volcanic eruptions of human passions and spiritual sensations, stirred either by the cruel Goddess of Misery or by the torch of the word thrown into the masses, and are not the lemonade-like outpourings of aestheticizing literati and drawing-room heroes. Only a storm of burning passion can turn people’s destinies, but only he who harbors passion in himself can arouse passion. (pp. 136-137)
These emotional appeals frequently take the form of group hatred, collective status threat, resentment, anger, and a sense of victimhood. Today, the arousal of emotion still works, but the passion is driven often through memes and soundbites rather than speakers.
This is not to suggest that every person will succumb to emotional entreaties. Yet for some it spurs action before thoughtful reflection. As Johnson (2020) argues,

A central risk posed by AI may not be the generation of bias, or decisions based on AI fuzzy logic, but rather the temptation to act with confidence and certainty in response in situations that would be better managed with caution and prudence. (p. 1)
We would add another, more insidious risk: though aligned along central themes, membership in formal and easily identified hate or antigovernment groups is no longer required. One does not need to attend a meeting or wear a white hood; extremist messages may be delivered in one’s social media feed or “recommended content” alongside news articles and personal updates from friends. Bot-organized “spider web” organizations demonstrate the fluidity of organizational dynamics that permit a morphing of ideas and structures over time. For example, Richard Barnett, the man who sat at a staffer’s desk inside Nancy Pelosi’s office with his feet on the desk, was not affiliated with any tracked hate group. In fact, he engaged with various groups under a pseudonym that led him to learn about the January 6th rally in D.C. (Miller & Gais, 2021). Ordinary participants once feared that joining groups like the Ku Klux Klan would make them more easily tracked by authorities, so even while they might identify with the cause, they often preferred to be lone actors with only loose affiliates. The availability of extremist speech online makes independent lone actors more likely. These lone actors are not only almost impossible to identify but also pose a grave threat to national security.
At some point, participants are sucked in and become engaged in a web of lies, some of their own making. As an example, U.S. Representative Marjorie Taylor Greene (GA), who was stripped of committee assignments by her Congressional colleagues for her incendiary comments, noted in her speech to Congress just before the vote on her future: “I was allowed to believe things that weren’t true and I would ask questions about them and talk about them, and that is absolutely what I regret.” Among the lies she admitted to making were comments on social media that some school mass shootings were staged in order to gain public support for gun control, that the 9/11 attacks were staged by the U.S. government, and that Jewish forces “sparked a deadly wildfire with a space beam” (DeBonis, 2021). In her speech, Representative Greene portrayed herself as a victim of misinformation, not someone promoting obvious lies, despite the fact that she authored numerous articles for conspiracy websites.
Like it or not, even lies have meaning. It’s the story they sell; the identity and authority assumed by the teller. Despite Representative Greene’s partial repudiation of previous comments, she used her new-found media focus to raise “$1.6 million amid the media’s coverage of her controversial comments” espousing her fears of the “deep state” (Evers-Hillstrom, 2021). Notably, most of the money—some $950,000—spent on her election campaign was from her own pocket. Yet in the wake of media coverage of Congressional action against her, money in the form of small donations under $200 began pouring in. Again, the point here is that once she was elected and media scrutiny of her comments made her a public figure, people who sympathized with her over her censure became outraged. People, many outside her Congressional district, responded to her appeals. Representative Greene, seemingly willing to renounce some previously held views, did not back down from the tenets that threaded her supporters together: QAnon conspiracy theories, the presence of a secret deep state and media agenda that are controlling our lives, and that Trump won the 2020 election. Greene attempted to consubstantiate her followers through shared grievance, tweeting, “It’s not just me they want to cancel. They want to cancel every Republican. Don’t let the mob win.”
So, will fact-checking not work to stop the spread of outright falsehoods? It is particularly useful for specific lies, especially those with visual evidence. For example, Trump’s claim about the size of the crowd at the 2017 Inauguration could easily be checked against both official reports and photographs of the crowds.
Yet fact-checking is of limited value for alt-facts, post-truths, and big lies. That is, once people begin to believe a statement, they find ample reason to support it despite evidence to the contrary. As Niccolo Machiavelli declared in the 16th century: “one who deceives will always find those who allow themselves to be deceived.” Small inconsistencies between perceived events are taken as irrelevant since in propaganda lies are often blended with facts and polarizing opinions (Freelon & Lokot, 2020; Howard et al., 2019), making it difficult to distinguish truth from falsehood. Fact-checking issues around which judgments are formed is nearly impossible. For example, Berghel (2017) notes that fact-checking a leader’s “performance is akin to shoveling smoke” (p. 113).
Bigger and cruder lies are more frequently believed and followed than small lies (Higgens, 2021). Hitler, who coined the term “big lie” in 1925 and relied on the lie that Jews were responsible for Germany’s ills, wrote in Mein Kampf that ordinary people “more readily fall victims to the big lie than the small lie, since they themselves often tell small lies in little matters but would be ashamed to resort to large-scale falsehoods” and “they would not believe that others could have the impudence to distort the truth so infamously.” In 2020, after spending months claiming the election would be rigged and that he would not accept the results if they did not favor him, Donald Trump wrongly claimed that he had won on Election Day and later that his victory had been a landslide and a sophisticated conspiracy had stolen the election (Snyder, 2021). Yale professor Timothy Snyder (2021) summed up the challenge of countering “Trump’s big lie”: “It takes a tremendous amount of work to educate citizens to resist the powerful pull of believing what they already believe, or what others around them believe, or what would make sense of their own previous choices” (para. 3).
This is not an attack on the uninformed. Notably, many followers have spent hours digging through internet leads to try to learn more about a topic. “Citizens, especially those who are knowledgeable and care the most about politics, are motivated to defend their beliefs and attitudes in the face of discrepant information” (Strickland et al., 2011, p. 935). In essence, we are all prone to defend our beliefs.
The Second Appeal Used Is Designed to Denigrate the Outclass
Humans are hard-wired to protect their own group against competing groups (Mason, 2018). As social psychologist Jonathan Haidt (2008) puts it, our biology is designed “to unite us into teams, to divide us against other teams and then to blind us to the truth.” Americans of all political stripes can find some group to dislike, thus enhancing the wedge between groups even among those whose level of tolerance for opposing opinions is high. Conservatives are encouraged to vilify immigrants and “own” the liberals; liberals are pushed to vilify rural communities and evangelical Christians; and they are both urged to hate each other. This hatred can also be supported by outside actors. For instance, a study assessing the impact of the 2016 Russian Twitter disinformation campaign concluded that “racist stereotyping, racial grievances, the scapegoating of political opponents, and outright false statements were four of the most common appeals” found in Twitter responses (Freelon & Lokot, 2020, p. 1). Thus, hate tactics serve to build walls between citizens and exacerbate difference.
Hate is a unique emotional state. To truly feel the depths of hatred, something you love must have been violated. Identifying the source of that violation is critical to understanding how hate plays out. From a single incident, people find unity with others who have faced similar heartbreaks. Perhaps without even realizing it, voters may come to see stereotyped classes of people as the cause of their dissatisfaction. By playing on emotions, these appeals solidify group identity and, eventually, enhance divisiveness (Freelon & Lokot, 2020, p. 2).
The stories of injury are inherently emotional, ranging from anger to grief. These are stories of personal loss. Yet when hate stratagems are employed, these stories become rhetorically connected to the loss experienced by others. As Whillock (1995) noted, "the use of synecdoche—linking smaller events to larger ones then arguing that the part stands for the larger whole—invites the audience to believe that the examples the rhetor offers are not atypical. Isolated events, then, take on greater significance because they are understood in light of a larger scheme or plan" (pp. 28-29).
Finding such support groups for grief is a normative process. In Strangers in Their Own Land, for instance, Arlie Hochschild (2016) chronicled the “deep stories” crafted by followers of the Tea Party to capture and explain their hopes, fears, disappointments, anxieties, and resentments. Yet in the current social media era, the ways these connections are organized and exacerbated bear further scrutiny.
One way outgroups are denigrated online is through entertainment, humor, mockery, and caricature (Godey et al., 2016). Additionally, humor is one of the ways that falsity becomes mistaken for fact. Consider, for instance, how many Americans believe Republican Vice-Presidential nominee Sarah Palin said, "I can see Russia from my house," even though comedian Tina Fey invented the phrase when impersonating Palin on Saturday Night Live. More recently, the U.S. Department of Homeland Security (2020) noted in their October Homeland Threat Assessment report that Iranian propagandists spread COVID-19 disinformation through videos and cartoons from state media outlets on popular social media platforms to appeal to Western audiences. Cartoons and viral videos do not purport to be factual, nor do they offer source citations, but they can have an impact on audience perceptions.
We know from a variety of studies that humor increases attention (e.g., Baum, 2003; Cao, 2010). Yet humor also aligns people and encourages in-group coding. Getting an in-joke signals to yourself and others that you are in the group. The cartoon character Pepe the Frog, for example, was used by the Alt-Right in frequently racist memes. Pepe began as a popular reaction meme but became appropriated as a way of spreading hate messages online. For example, the frog was sometimes depicted sporting a Hitler mustache, and his signature was changed from "feels good, man" to "Kill Jews Man." He was also seen in a KKK hood and robe. Pepe was labeled a hate symbol by the Anti-Defamation League in 2016, and the organization joined with Pepe's creator Matt Furie to form a #SavePepe campaign. Clearly, not everyone who viewed or shared Pepe cartoons was racist or perpetuating hate, but using the cartoon as a conveyance for hate got eyes on the message. Humor is a lure to magnify the message: "humor has frequently been listed as media entertainment's most attractive characteristic and correspondingly, to greatly increase audiences' selective exposure" (van der Wal et al., 2020, p. 478).
Humor is a particularly effective tool for hate messages because it targets an outclass for disdain and subordinates them to the in-class. In fact, "superiority (also called disparagement) theory sees aggression as the fundamental component of humor" (see Zillmann, 1983, p. 87). Not only does humor denigrate the outclass; it also protects the sender from accountability. After all, when challenged on statements, the response, "I was just joking," removes blameworthiness for spreading the hate.
Last, humor is processed differently because its arguments face less scrutiny. Though humor is often seen to be disparaging, condescending, patronizing, or demeaning, laughter and ridicule still receive much protection in social circles. For many, it is just fun to share. That makes humor an even more treacherous tactic of hate.
Beyond humor itself, trendiness is a characteristic of social media. Viewers often assess the power of ideas by how popular the messages are on the social media landscape. Among the recent trends are conspiracy theories. Uscinski et al. (2020) discovered four recent trends: (1) much of our lives are being controlled by plots hatched in secret places; (2) even though we live in a democracy, a few people will always run things anyway; (3) the people who really “run” the country are not known to the voters; and (4) big events like wars, the current recession, and the outcomes of elections are controlled by small groups of people who are working in secret against the rest of us (p. 9). Conspiracy theories paradoxically give believers a sense of control and insider knowledge because a grand scheme may be more palatable than the randomness, anxiety, and uncertainty of modern life.
Conspiracies feed on themselves and are self-perpetuating. Accepting one conspiracy theory makes it much easier to believe in other theories (Goertzel, 1994). Initially intriguing, the more time people spend on social media, the more evidence they find to support their conclusions. This pattern is a perception error known as apophenia. “Apophenia is the tendency to perceive illusory patterns in random and unconnected events or stimuli” (Ellerby & Tunney, 2017, p. 281, see also Ayton & Fischer, 2004; Falk & Konold, 1997; Gilovich et al., 1985). Apophenia, we argue, is central to the propagation of conspiracy theories and helps account for how some people continue to believe in lies and distortions even in the face of overwhelming evidence to the contrary. Conspiracy theories also play on a lack of information, distrust of politics and institutions, and the intellectual vanity of followers.
Such theories also gain trendiness by coopting opposition narratives to mock the very principles the opposition holds. For example, “White Lives Matter” and “All Lives Matter” played off the “Black Lives Matter” slogan in an effort to fight the perceived exclusionary bias of the movement. Relatedly, people protesting COVID-19 mask-wearing mandates used the Pro-Choice argument “My Body, My Choice” to counter liberal claims that argued for the mask mandates. The “Don’t Tread on Me” banner once used in the Colonial and Revolutionary periods as an argument for the presence of a powerful central government has been coopted first by the Tea Party and later by larger groups arguing for the need to fight back against a strong central government. Once invoked, clashes between groups are triggered thus elevating the trending messages while simultaneously enhancing the fissures of society.
Finally, the outclass is vilified by word-of-mouth (WOM). From a communicative perspective, there are three main uses of WOM: "opinion seeking, opinion giving, and opinion passing" (Chu & Kim, 2011, p. 50). Each of these centers on a subjective assessment by the receiver regarding their trust in the communicator(s) sending the message and their trust of the medium through which it is sent. In most digital WOM platforms, sources of information are not cited (Zhang et al., 2019, p. 48), thus forcing recipients to make assessments based on the media source (see, e.g., Metzger et al., 2003), compatible political alignment, and their assessment of the prevalence of attitudes through measures such as "likes."
Why do share counts matter? The primary answer is that "exposures from multiple sources impacts the probability of spreading a given piece of information" (Mønsted et al., 2017, p. 10). That is, the more times a person is exposed to a message, the higher the probability they will share it. Even when messages contain misinformation, the social reinforcement of the message lends it credibility. Moreover, rather than a single source of persuasion, messages from numerous sources over an extended period of time serve as almost incontrovertible evidence that the statement bears some truth. As one researcher noted, "We've now entered the era of 'lock-on' news feeds that nourish the addiction to misinformation. Instead of looking for counterexamples to our worldview, we allow others to file them, thereby ensuring the growth of collective ignorance and prejudice" (Berghel, 2018, p. 72).
The Third Appeal Is Designed to Inflict Permanent and Irreparable Harm on the Opposition
When we refer to the harms of hate, we are concerned with more than hurt feelings. In his book, The Harm in Hate Speech, Jeremy Waldron (2012) argued that there is a "permanent visible framework of society" that ensures us that people "can know when they leave home in the morning [that] they can count on not being discriminated against or humiliated or terrorized" (p. 84). Damaging that framework harms each citizen of a democracy, for those injured no longer speak from reason but from pain. His argument is that hate acts as a "slow-acting poison" (p. 4): once invoked, we cannot recall the hate, nor do we know when there are sufficient toxins released by it to produce irreparable harms.
As an example, many people—especially noncitizens and people from racial and ethnic minority groups—have not enjoyed the “freedom from fear” that U.S. President Franklin D. Roosevelt articulated in his Four Freedoms speech. As Princeton’s Eddie Glaude described it, “I have had the privilege of growing up in a tradition that didn’t believe in the myths and the legends because we had to bear the brunt of them” (quoted in Scott, 2019). Violence and terrorism against Black people, indigenous people, and others have long belied popular ideas of America being the finest democracy in the world. Quoting historian Ibram Kendi (2021): “White terror is as American as the Stars and Stripes.”
Societies require mutual trust to function. Trust is easily destroyed but slow to build. Yet we are dependent on trust to live in a society that believes in democratic principles. Rainie and Perrin (2019) document the current trust gap noting chasms between people in terms of our treatment of others, respecting the rights of dissimilar others, obeying the law, and accepting election results. Repairing trust between groups is not a trivial matter. Hate stratagems build walls. Increasingly, groups are insulated from one another, neither hearing counterarguments or claims nor believing in the opposition’s relevance.
Similarly, hate stratagem speech can be designed to attack public institutions, including the Fourth Estate of the news media. Politicians have long "worked the refs" in an attempt to receive more favorable coverage; however, these attacks have escalated to the point where journalists are called "fake" and labeled as "the enemy of the American people." Though far from perfect, democracies require a free and open press to share information and opinion, test ideas, and inform the citizenry and their leaders. Not only is "the media" not trusted, neither is expert opinion. That is why the current trend toward distrust of institutional sources such as experts, the news media, and elected government officials is so harmful.
At the same time, news organizations must earn the public trust. Nearly 30 years ago, noted journalist Carl Bernstein (1992) wrote, "We need to start asking the same fundamental questions about the press that we do of the other powerful institutions in this society—about who is served, about standards, about self-interest and its eclipse of the public interest and the interest of truth. For the reality is that the media are probably the most powerful of all our institutions today; and they are squandering their power and ignoring their obligation. They—or more precisely, we—have abdicated our responsibility, and the consequence of our abdication is the spectacle, and the triumph, of the idiot culture."
These are wise words because there are consequences when the news media cannot be trusted. Without such trust, people are driven to pay attention to alternative sources, many of which are established solely to feed particular biases and worldviews (Guess et al., 2020). Such sources have no interest in giving even lip service to alternative viewpoints. They not only fail to acknowledge opposing views, they are, in fact, one of the most persuasive mediums for the perpetuation of falsehoods. One study (Guess et al., 2020), for example, suggested that information perpetuated on such fake news websites was clearly linked to beliefs in false claims. As Berghel (2018) notes, we must “recognize that objectivity is an enemy to the tribe!!” (p. 73).
Given the chasm between groups' perceptions of institutions, logic and reason are no longer effective responses to opposing claims. As Carlin et al. (2005) found, "online discussions have limited potential to encourage public talk that honors diversity and equality of ideas" (p. 633). Instead, social media "flame wars" that depend on name-calling and emotional reactions to people, institutions, or events serve to draw attention but do nothing to increase political knowledge or understanding. Repeatedly, President Trump tweeted the words "Sad!" "Weak!" "Dumb!" and "Loser!" to describe those who opposed him. Those who opposed Trump were quick to retort with name-calling of their own. The argumentative style of online talk is centered on adversariness, self-interest, flippancy, and hostility toward difference (Carlin et al., 2005). Despite optimism about the mobilizing power of digital and social media, online political talk as currently practiced falls well short of the "Digital Agora" normative ideal (see Kirk & Schill, 2011).
In its more extreme form, distrust breeds denialism (see, e.g., Bardon, 2019). Denialism is “the employment of rhetorical arguments to give the appearance of legitimate debate where there is none, an approach that has the ultimate goal of rejecting a proposition on which a scientific consensus exists” (Cook, 2010). Denialism starts with doubt, which is a useful tool, but then moves quickly to question both the motives and ethics of the source. Disagreements with other experts in the field are amplified (and sometimes “experts” are people elevated in social media to appear as experts but who lack foundation), leaving room for doubt. Harms of the original premise are then presented with appeals to personal freedoms of choice and a rejection of any counter information, arguing that such information is just wrong and cannot be trusted. Relatedly, those who believe in conspiracy theories are more likely to distrust experts and authorities like scientists (Uscinski et al., 2020, p. 2). Denialism is a direct attack on scientific discovery and expertise.
The immediate harms are clear. While attempting to replicate the power of scholarship, misinformation or faux scholarship is spread to make claims that evidence cannot support. Denialism "has persuaded people to turn down life-saving HIV/AIDS treatments or preventive measures such as vaccinations, leading to distorted attitudes and years of severe illness and death" (Schmid & Betsch, 2019, p. 931). But the ultimate harms are even more pervasive: we lose the ability to envision a shared future. Philosopher Stanley Cavell (2015) explained, "We learn and teach words in certain contexts, and then we are expected, and expect others, to be able to project them into further contexts. Nothing ensures that this projection will take place (in particular not the grasping of universals nor the grasping of books of rules), just as nothing ensures that we make, and understand, the same projections. That on the whole we do is a matter of our sharing routes of interest and feeling, modes of response, sense of humour, and of significance and of fulfilment" (p. 52).
When language becomes decoupled from meaning and action, it leads to instability in society and inhibits our ability to understand the world and each other—an epistemological and ontological unmooring.
Finally, the Goal of the Stratagem Is Ultimately to Conquer
The most critical destruction imposed by hate stratagems is the radical and forceful overthrow of one worldview for another. No doubt, worldviews change over time and within various cultures. However, hate speech does so without the normative protections of democratic self-rule, relying instead on an authoritarian pose that declares a framework that patently rejects any oppositional dialogue. Kahn-Harris (2018) gave a specific example when he argued, “Forms of genocide denialism are not just attempts to overthrow irrefutable historical facts; they are an assault on those who survive genocide, and their descendants.” He further argued that in the process of doing so, Jews were branded as dangerous liars, while the reputation of the Nazis was rehabilitated. And though they may “yearn” for the day when “the story of how the Jews hoaxed the world will be in every history book,” every day history is questioned, and counterfactual theories are espoused as a step toward that goal. Indeed, antisemitism reverberates in contemporary conspiracy theories.
Perhaps no clearer example can be given than the rise of White supremacist accelerationism. When applied to politics, accelerationism holds that government is irredeemable and argues that White supremacists should work to create civil disorder and accelerate chaos (often in the form of a race war) in order to destroy "The System." Rooted in the notion that current government and social policies of the left will result in the genocide of the White race, the call to violence is perpetuated as the only solution. Moreover, rather than an all-out war in the traditional sense, accelerationists believe that events like the insurrection at the Capitol on January 6th will result in a government crackdown that will spark others to join the movement and accelerate their perceived final solution. White supremacists view limits on gun rights and restrictions on social media sites as proof that they must act imminently.
In this social media era, the other facet of accelerationism is that media events can rapidly affect judgements. Such social acceleration means that we connect more quickly, make snap judgments, share those with others, and move to quick decisions. In essence, our systems are not equipped to handle the speed of a rapidly changing political environment. The result is an unpredictable environment for decision making.
The resulting chaotic media environment is transformed into an event-based decision-making forum, where the public cocreates events on equal footing with experts and traditional news media. Suddenly, there are no gatekeepers, decisions are crisis-driven, story narratives are difficult to determine, and "truth" is no longer a desired or universal standard. Historically, authoritarians have seized power in these contexts by promising control and renewed nationalism. Relatedly, as Corman et al. (2008) noted, "Once a system—a social reality—is created, it has a tendency to sustain itself even in the face of contradictory information and persuasive campaigns. Members of the system, routinely and often unconsciously, work to preserve the existing framework of meaning. To accomplish this they interpret messages in ways that 'fit' the existing scheme, rather than in ways that senders may intend. There is no 'magic bullet'—no single message, however well crafted—that can be delivered within the existing system that is likely to change it" (p. 156).
Frighteningly, as Bardon (2019) claims in his book The Truth About Denial, biased thinking can become ideological denialism. Nothing can overcome such positions, for even in the face of overwhelming evidence to the contrary, denialists hold themselves up as intellectually courageous for opposing the dominant ways of thinking.
The Way Forward
The hate stratagem we discuss is not limited to hate speech, although that is certainly a part of it. The stratagem manages the communication environment with the specific intent of influencing future actions and decisions. Moving beyond persuasion where the receiver has some sort of choice in belief or conduct, hate stratagems are authoritarian tactics that set out to control choice by whatever means necessary to win. Voltaire famously described the connection between misinformation, authoritarianism, and carnage in Questions Sur Les Miracles, writing: “whoever can make you believe absurdities can make you commit atrocities” (Olson, 2020).
There is an oppositional view of hate speech, argued forcefully by scholars like Sunstein (1993), who contend that expressions of hate can be useful. Hate speech may be taken to function like a steam valve, merely releasing pressure. In a normative marketplace of ideas, perhaps that is true. We ask the reader to consider what happens when there is no marketplace, when no oppositional expression is permitted. This is the goal of those who use this stratagem.
Importantly, we do not propose that viewpoints be censored or suppressed. Censorship is not the answer, nor is refutation by counterevidence. Instead, we advocate for a greater awareness of the tactics and techniques used by those who wish only to destroy the very communication climate that gives their views voice. Once these tactics are exposed, the communication environment can regain its equilibrium and once again become a deliberative space.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
