Abstract
This article examines the Facebook group #jagärhär, a Sweden-based collective of thousands of people who have made a regular practice of responding en masse to what they regard as hateful comments online. #jagärhär is one of the largest and best-organized collective efforts to respond directly to hatred online anywhere in the world. Drawing on data collected through ethnographic observation and interviews, the article explores two primary research questions: (1) how do the external counterspeech actions of group members work to counter hatred (and, sometimes, misinformation)? and (2) how do the internal practices of the group keep members engaged? I argue that instead of focusing their work on preventing future hateful speech (presumably by changing the minds or incentives of those who post it), #jagärhär members fight against its effects—attempting to lessen the impact of the hateful speech by hiding it in the comment threads, speaking to the “movable middle” rather than those posting hatred, and encouraging more counterspeech against it.
Introduction
Do you think about what you ought to say, or what you wish others would say when you read the comment fields in social media? Would you dare to share your opinion more frequently if you knew that others in the same thread supported you? Are you already taking action to disrupt those who are spreading hate on the Internet, and do you wish you had the support of others who are doing the same? 1
These are the words that greet visitors to the Facebook group #jagärhär, a Sweden-based collective of over 70,000 people who have made a regular practice of responding en masse to what they regard as hateful comments online. Members of #jagärhär (which means, “I am here”) seek out hatred in the comment threads of newspaper articles posted on Facebook and then respond together, following a strict set of rules which includes keeping a respectful and non-condescending tone and never spreading prejudice or rumors.
Counterspeech groups are very unusual, and #jagärhär’s model seems to be unique in that it has been replicated in many other countries—16 at this writing. All the groups (in Sweden, Australia, Bulgaria, Canada, Czech Republic, France, Finland, Germany, India, Italy, Norway, Poland, Slovakia, Spain, the United Kingdom, and the United States) are named “I am here” in the relevant languages. Together, they make up the #iamhere International Network.
This article is a detailed account of #jagärhär’s efforts, the first qualitative study of such a group. Counterspeech, defined here as “any direct response to hateful or harmful speech which seeks to undermine it” (Dangerous Speech Project, 2020), 2 has been touted by internet platforms (Yadron, 2016) and civil society (Anti-Defamation League, 2016) as a possible answer to hatred and extremism, but there is only limited empirical evidence of its success (Buerger & Wright, 2019) and even less research on the individuals who produce counterspeech and what they have learned from their work. This article begins to fill that void. By making the counterspeaker the point of inquiry, this article offers a new perspective on the goals of those attempting to counter hatred online, the challenges they face, and the strategies they use to overcome them. These lessons provide valuable insight and replicable practices for others seeking to develop user-led interventions to counter hatred online.
Drawing on ethnographic observation and semi-structured interviews, the article explores two primary research questions: 1) How do the external practices of #jagärhär members work to counter online hatred (and, sometimes, misinformation)? and 2) What impact do the internal practices of the group have on member engagement?
I argue that instead of focusing their work on preventing future hateful speech (presumably by changing the minds or incentives of those who post it), #jagärhär members fight against its effects—attempting to lessen the impact of the hateful speech by hiding it in the comment threads, speaking to the “movable middle” rather than those posting hatred, and encouraging more counterspeech against it. There is some evidence that this is working. Members flood comment sections with messages that challenge racist, xenophobic, or other hateful narratives. The findings of the study also point to aspects of the effort that have helped the groups grow and remain sustainable over time. These include a feeling among members that counterspeaking as a group provides protection from online attacks as well as weekly rituals that have contributed to a sense of community within the groups.
Literature Review
Every day, internet users from around the world engage in counterspeech, responding to hateful and dangerous speech in order to refute or undermine it. Many of those who have taken on this effort go about it alone, while others form groups to coordinate responses and support each other. Although some executives of social media platforms have touted counterspeech as a method of reducing online hate, 3 others argue that there is little evidence of its effectiveness.
This raises another question, namely what it means for counterspeech to be effective. An obvious answer is that it changes the beliefs or behavior of the person to whom it responds, persuading them to apologize or stop posting harmful messages. That is very difficult to achieve. In one study, drawing on over 7,500 Facebook comments, researchers tested the effectiveness of counterspeech in responding to negative portrayals of the Roma in Slovakia between April 2016 and January 2017 within a select number of comment fields (Miškolci et al., 2018). The study found that counterspeech was not a particularly effective method of changing the behavior of the user who posted anti-Roma comments. It was, however, followed by an increase in the number of pro-Roma comments within a particular comment thread. Schieb and Preuss (2016) concluded that counterspeech can influence the original speaker, although the effectiveness of a counterspeech interaction depends on the proportionate size of the group of hateful speakers within a particular online space. In their study, a message was more effective when counterspeakers greatly outnumbered those sharing hateful messages. They also found that a small group of counterspeakers could still be effective, as long as the other users within an online space held relatively moderate (rather than extreme) views.
Studies have shown that other factors are important to the impact of counterspeech as well, such as whether people are counterspeaking as part of a group (Friess et al., 2020; Garland et al., 2020), the tone used by a counterspeaker (Bartlett & Krasodomski-Jones, 2015; Frenett & Dow, 2015), or even specific characteristics of the people doing the counterspeaking—such as their perceived popularity (Seering et al., 2017). Given how many of these conditions must align, success is still rare.
Changing the mind or behavior of someone who has posted hateful speech is not the only way counterspeech can be effective, however (Benesch et al., 2016). In fact, most counterspeakers I have interviewed say it is not their primary goal. Far more often, counterspeakers try to influence the audience—the hundreds or thousands of people who witness the exchanges. This goal makes sense when looking at the large body of scholarship on misinformation, which is relevant to our discussion because online hatred and dangerous speech often contain false statements that target specific groups of people.
Scholars have identified three primary elements of misinformation: the agent, the message, and the interpreter (Wardle & Derakhshan, 2017). Members of #jagärhär, and many of the other counterspeakers with whom I have spoken, craft their responses so as to influence the “interpreters”—those who see a message and then decide whether or not to further spread it. As Karlova and Fisher (2013) note, receivers of misinformation and disinformation make judgments about the credibility of the message and then make decisions about whether to share or discard it. Interpreters judge the message itself, but also the credibility of the person who originally shared the information (Sundar, 2008; Swire et al., 2017), and the comments surrounding it (Petit et al., 2021). Many counterspeakers target the receivers of hateful speech and misinformation, hoping to persuade them against believing or spreading the harmful messages. Intervening at this stage is especially important, as research has shown that “hateful content diffuse farther, wider and faster” than non-hateful content (Mathew et al., 2019, p. 173).
By targeting the interpreters rather than those initially creating the hateful speech, counterspeakers also avoid directly engaging with so-called “trolls,” who may be spreading hateful or harassing speech. As Whitney Phillips (2015) has documented, many “trolls” spread offensive content simply to disrupt the conversation and gain attention. Not all hateful speech is spread by trolls, but for the speech that is, refusing to directly engage with it denies the poster the attention that they seek.
There are also other reasons for counterspeakers to target the audience rather than those posting hatred. One of these is the growing body of evidence showing that discourse norms spread online. Han and Brazeal (2015) found that people exposed to civil comments were more likely to write a civil comment themselves, but they did not find that exposure to incivility increased uncivil expressions (overall expressions of incivility were low in their study). Conversely, other studies (Cheng et al., 2017; Kwon & Gruzd, 2017) found that exposure to antisocial or negative comments did make a person more likely to post an antisocial comment. Two studies (Han et al., 2018; Molina & Jennings, 2018) found that metacommunication comments (those that address the tone of a comment rather than its content, such as when a user scolds incivility rather than commenting on the opinions being expressed) don’t necessarily increase civility but do engender additional metacommunication comments.
These findings have important ramifications for counterspeakers, as they demonstrate that the style and tone of responses can improve the quality of a discussion, and thus improve the likelihood of influencing the behavior of others. And because some research has found that antisocial behavior is also contagious, reducing exposure to hateful comments could limit the spread of similar behavior. Findings such as these support the model of counterspeech employed by #jagärhär and the rest of the #iamhere network.
A recent study of #ichbinhier (the German group) provides further evidence that internet users take discourse cues from others. Researchers from the Heinrich Heine University Düsseldorf (Friess et al., 2020) used a data set of comment threads in which group members engaged between 1 November 2017 and 31 January 2018 to answer two questions: whether comments made by #ichbinhier members were more “deliberative” than those posted by nonmembers (researchers coded for rationality, constructiveness, politeness, civility, and reciprocity), and whether deliberative top-level comments were associated with more deliberative second-level comments. Their study found the answer to both questions to be “yes,” suggesting that discourse norms established or reaffirmed by groups in the #iamhere network can have an impact on the quality of online discourse (Friess et al., 2020, p. 15). The study was somewhat limited in that it only investigated the relationship between top-level comments and direct replies to them (Friess et al., 2020, p. 17)—but it is an important step in evaluating the effects of the #jagärhär method.
Methodology
This study draws on data collected through semi-structured ethnographic interviews to examine the external and internal group practices of #jagärhär members. 4 How do decisions made by individual counterspeakers (e.g., whom to target, how to publicize their group membership) impact the way that their model brings about change? Internally, how do day-to-day operations within the Facebook group impact the engagement of members? In total, 25 group members participated in the study. To select them, I made a sampling frame by assembling a list of every member of #jagärhär who had participated (commented or “liked” a post) on the group’s Facebook page over a 2-week period (N = 5,580). I drew a random sample from that list and invited those individuals to be interviewed. For those who agreed to participate, I conducted verbal interviews in English, over Skype or Facebook Messenger. Interviews were conducted until saturation was reached (the point where no new themes emerged from new interviews). Interviews were done between July and December 2019 and during July and August 2020. Primary analysis took place between January and March 2020.
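The sampling procedure described above can be sketched in a few lines. The article does not report any code or the exact number of invitations sent, so the member IDs and the invitation count below are hypothetical; the sketch only illustrates drawing a simple random sample from a sampling frame of the kind the study describes.

```python
import random

def draw_interview_sample(frame, k, seed=2019):
    """Draw a simple random sample of members to invite for interviews.

    `frame` is the sampling frame: every member who commented on or
    "liked" a post during the two-week observation window. A fixed
    seed makes the draw reproducible.
    """
    rng = random.Random(seed)
    return rng.sample(frame, k)  # sampling without replacement

# Hypothetical member IDs standing in for the N = 5,580 members
# observed in the study; k = 40 invitations is also an assumption.
frame = [f"member_{i}" for i in range(5580)]
invitees = draw_interview_sample(frame, k=40)

assert len(invitees) == 40
assert len(set(invitees)) == 40  # no member invited twice
```

In practice, interviewing would then continue down such a list until saturation is reached, as the study describes.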
I also used ethnographic observation to better understand the “culture” of the #iamhere network. I joined the #iamhere Facebook groups and regularly visited their pages, reading updates and observing the rhythms of the groups—for example, when moderators posted each day and how many times. Through this observation, I was able to learn what kind of content was shared and how members responded to it and to each other.
Digital ethnography is a relatively new research technique (Wilson, 2019), although there is a growing body of literature on the methodology (Boellstorff, 2012; Pink, 2016; Varis, 2016). As with offline ethnographies, ethnographic observation helps researchers develop a tacit understanding of the culture that they observe, knowledge that then serves as the basis for a more grounded interpretation of data collected through other methods, such as interviews (Dewalt & Dewalt, 2002). In this way, ethnographic observation contributes to both data collection and data analysis.
#Jagärhär: Collective Counterspeech
In 2016, a Swedish woman named Mina Dennert had an idea. Dennert had noticed an alarming surge in hateful and xenophobic content online after more than a million Syrians, Afghans, and Iraqis sought refuge in Europe, mainly in 2015. At the time, Sweden had some of the world’s most generous immigration policies, significantly exceeding European Union minimum standards for immigration and asylum (Emilsson, 2020, p. 99).
Public support for these policies decreased throughout 2015, however. By the end of the year, there was a growing concern that Sweden would not be able to sustain its social service offerings if the same number of asylum applicants entered in the following year (Johnson & Sennero, 2018). In public discourse, Sweden started describing the migration situation as a “crisis” (Skodo, 2018).
Dennert emigrated from Iran as a young child, growing up in a small town in southern Sweden. Although she had experienced discrimination and harassment throughout her life, what she saw in 2015 and 2016 felt different. She had become used to seeing racist and anti-immigrant messages posted by people she describes as “the usual subjects”—extreme right-wing social media users who commented frequently on social issues. But in spring 2016, she started seeing the same hateful narratives being repeated by people she knew—people she thought of as “good people.” She urgently wanted to find a way to counter the spread of such ideas, and she wondered: Could a group of people working together to respond to hatred online shift the discourse in online spaces, and perhaps therefore in other contexts, toward more civil and fact-based speech?
Dennert began responding to people on Facebook who were posting what she saw as hatred and related misinformation. The work was difficult, and too much for her alone, so she recruited 20 friends to help. They set up a Facebook group to organize their activity, calling the group “#jagärhär,” Swedish for “I am here.” Dennert says the name has two different meanings, one practical and one normative. “It has the meaning of ‘this is where I am, and I need help here in this comment field’, since we call for each other to help out. It also has the meaning: ‘I am present. I can see what you are doing here, and I don’t agree. I am here too.’” 5
Much of #jagärhär’s early work focused on countering racist and xenophobic speech. A prime example occurred in the weeks before St. Lucia’s Day in 2016. St. Lucia’s Day is a festival of light celebrated throughout Scandinavia. Traditionally, young girls dress in long white gowns and wear a crown of candles as they carry a tray of cookies and saffron buns to their families. In 2016, Swedish department store Åhléns posted an ad celebrating the holiday with a photo of a gender nondescript dark-skinned child dressed in the traditional St. Lucia’s Day costume. The store’s social media pages were soon flooded with racist and sexist comments (Keating, 2016). Many characterized the ad as anti-Swedish. “You are provocative and you are against Swedish culture. You are advocating the death of Swedish culture and complaining about the folks who don’t like it” (“Department Store Pulls Festive Ad,” 2016) wrote one user. “Looks more like a gingerbread man!” wrote another (Backlund, 2018).
Members of #jagärhär tackled many comments posted below news articles describing the incident. They also directly addressed comments on the Åhléns Facebook page where the ad was posted. “Good picture! Good progress! Love to all and zero tolerance for racism. #jagärhär,” wrote one member. Although there were around 200 hateful comments about the ad on the Åhléns Facebook page, there were more than 20,000 “likes” or “loves” on the post, and numerous counterspeech comments, many tagged with “#jagärhär” (Keating, 2016). Even though Åhléns eventually pulled the ad at the request of the child’s parents, it marked a major turning point for Dennert and her group. In the week following the action, #jagärhär received some press attention, and the group grew from around 14,000 members to 25,722. 6
Four years later, what began as Dennert’s small group of friends had ballooned to about 74,000 members—a substantial number in a country of only 10 million. Of those, about 2,000 to 3,000 are active at least once a week. They all work as volunteers, even though some of them devote more than 10 hours a week to the project. 7 The group has a team of 15 to 20 politically diverse moderators 8 who organize counterspeech actions and help manage the day-to-day operations on the group’s Facebook page (each day, there are two moderators on duty), and six administrators (including Dennert) who handle the larger workings of the group. About 70% of members are women, and the majority of them are between 35 and 40 years old. 9 Most members live in Sweden, but some do live in other countries. In the beginning, members were highly concentrated in urban areas. Although there are now more members from other areas, urban dwellers continue to predominate.
The group continues to focus their work on countering speech that promotes disdain for members of a specific population group. As a guide for deciding which speech to counter, the group uses the Swedish legal concept of “agitation against a population group,” defined as “a statement or other communication that is disseminated” that “threatens or expresses contempt for a population group by allusion to race, colour, national or ethnic origin, religious belief, sexual orientation or transgender identity or expression [. . .]” (Swedish Criminal Code, 1999). Although this type of speech is outlawed in Sweden, several #jagärhär members told me that the law is rarely enforced.
Whether the speech to which #jagärhär responds always meets this definition is, of course, a question. Although the members with whom I spoke generally said that they respond to “hate” or “hatred,” many would argue that at least some of the speech that they counter is better defined as uncivil or misinformation that targets specific groups. Members may also respond to a variety of comments within a thread (praising those that contain factual arguments, offering support for the targets of hatred, and countering related misinformation), even if they were originally alerted to the thread because it contained several comments that group administrators deemed to meet the definition of “agitation against a population group.”
In total, 146,000 people have joined one of the #iamhere Facebook groups. 10 The Swedish group is the largest, followed by the German group (#ichbinhier), which has over 40,000 members. The other groups vary widely in size, with the smallest (Poland) having just 161 members at the time of writing. Generally, a small proportion of members participate in the groups’ activities; for example, over one 2-week period, 5,580 out of the nearly 74,000 members of the Swedish group engaged with a post in the #jagärhär Facebook group.
Research Findings
The findings from this study are divided into two primary areas: (1) how the external practices of #jagärhär members work to counter online hatred and (2) how the internal practices of the group keep members engaged, thereby increasing the sustainability of the effort.
External Actions: How Do They Work?
Members of #jagärhär and other groups in the #iamhere network seek to counter hatred online and increase what they consider to be civil, productive discussion through a variety of practices. These include taking advantage of how Facebook rewards user engagement and addressing their comments to the “silent readers” (rather than those posting hatred) in hopes that the readers will either be convinced by their counterspeech arguments or be emboldened to become counterspeakers themselves. Each of these practices is discussed in detail below.
Using Facebook’s Platform Architecture
The #iamhere groups from around the world operate primarily on Facebook, 11 and they do so because their method of responding to hatred has developed around the platform’s architecture. Their counterspeech strategy uses the Facebook commenting algorithm as a way to amplify their own comments, while burying comments that contain speech that they regard as hateful.
In addition to (or sometimes in place of) commenting, group members “like” each other’s comments as well as other constructive, fact-based comments written by nonmembers. This trick leverages Facebook’s comment algorithm, which rewards engagement. According to Facebook, on pages “with a large number of followers,” comments are automatically set to sort by “relevance”—a ranking determined at least in part by the number of “likes” or replies that each comment elicits (Facebook Help Center, 2020). This design makes some content more visible, while other content remains on the margins. Users, in turn, are more likely to read and engage with the easily visible content than they are with comments ranked lower. As danah boyd (2011) has noted, the practices that occur on social media are informed by the affordances of the platform architecture: “affordances do not dictate participants’ behavior, but they do configure the environment in a way that shapes participants’ engagement” (p. 39). As Jenny Davis (2017) has noted, Facebook’s affordances “reinforce status quo ideas and popular people while maintaining an ancillary status for those on the margins.” She continues, “while everyone is allowed to post on Facebook, rewards distribute in a way that encourages the popular kids and keeps the shy ones quiet.” That is, except when the shy kids band together. Working collectively, #jagärhär members leverage this design feature to amplify their counterspeech while driving attention away from the hateful content that once predominated in these spaces.
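The dynamic the group exploits can be shown with a toy model. Facebook’s actual “relevance” ranking is proprietary, so the scoring function below (likes plus replies) is an assumption for illustration only; it nevertheless captures the logic the members describe: replying to a hateful comment raises that comment’s rank, whereas “liking” a top-level counterspeech comment raises only the counterspeech.

```python
def rank_by_engagement(comments):
    """Sort comments by a simple engagement score (likes + replies).

    A toy stand-in for "relevance" ranking, NOT Facebook's actual
    (undisclosed) algorithm.
    """
    return sorted(comments, key=lambda c: c["likes"] + c["replies"], reverse=True)

thread = [
    {"text": "hateful comment", "likes": 12, "replies": 30},
    {"text": "counterspeech comment", "likes": 3, "replies": 0},
]

# Before the group coordinates, the hateful comment ranks first:
# replies to it (even critical ones) have inflated its score.
assert rank_by_engagement(thread)[0]["text"] == "hateful comment"

# Forty members "like" the counterspeech comment instead of replying
# to the hateful one, so only the counterspeech comment gains score.
thread[1]["likes"] += 40
assert rank_by_engagement(thread)[0]["text"] == "counterspeech comment"
```

The second half of the sketch also shows why the group’s rules discourage direct replies to hateful comments: under any engagement-weighted ranking, a critical reply and a supportive reply raise a comment’s score by the same amount.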
Each day, members of #jagärhär learn where to counterspeak through what they call “actions.” Group members search on Facebook for hateful comments on news articles and other public pages and send those to administrators, who confirm that the post’s comments meet “action” requirements. A comment thread does not meet the criteria for an action if it contains only a few hateful comments, if it is on a person’s private Facebook page or in a closed group, or if its comments contain disagreement rather than “agitation against a population group” or slander.
After receiving action post suggestions, the moderators choose a few (usually 2–4 per day) to share with the whole group, directing members to post and “like” each other’s comments in the relevant comment thread. They try to spread their attention around so as not to be constantly counterspeaking on the same newspaper or group pages.
When writing counterspeech comments, members follow a set of rules developed by Dennert around the time she founded the group. Dennert said that she decided to write them because “even anti-racists, even people who meant well, were making things worse. It could be very condescending. It was very much like, ‘you’re wrong, you’re right’. And I just thought, this isn’t getting us anywhere.” Today, the full list of rules appears at the end of each action post.
The rules are as follows: 12
Like, react, and write supportive responses to good comments to lift them up in the fields and poke down hateful comments. All efforts are important!
Avoid reacting and writing many answers to hateful comments, as this lifts them higher in the fields. Rather like and react to already existing good answers.
Write what you think, and think for yourself. But keep in mind that as members of #iamhere, we never spread hate, prejudice, slander, gossip, or rumors. We also do not comment on other people’s spelling or writing methods. We always stay factual.
Keep a good tone! We never express ourselves using condescending, despicable, or scornful language or by insulting other people. Instead, with our choice of words, we show that we stand for transparency, respect, and good conversation. This applies both in the comment fields we link to and here in the group.
Debate and discussion on issues of fact should not take place in the group, but in the comments we link to. It is outside the group where we should work to make change and make a difference. It’s out there that we’re going to stop the hate and nuance the debate. #iamhere is an action group—not a debate group.
Small talk about the action, such as encouragement, support, tips, and advice, however, is OK here in the group. Please also message here if you have commented (K), liked (G) or reacted (R), and in which comments.
If you want to link here directly to a comment you have written in any field, you can click or right click on the “timestamp” under your comment and copy the link address.
The group’s moderators are responsible for ensuring that members follow the rules. If violations occur inside the Facebook group, moderators remove the comment and write to the person explaining why they took it down. If it happens outside of #jagärhär’s Facebook page, during an action, for example, moderators take a screenshot of the comment and send it to the person who posted it, with an explanation of how it violated the rules. Sometimes, when the infractions are serious enough (e.g., if a member posted a racist or xenophobic comment) or when they continue over time, moderators remove the member from the group. Some members choose to leave the group soon after joining because they do not agree with the rules. According to Dennert, some people tell her, “what are you doing? You’re not making any difference” or “I need to tell people when they are stupid.”
Most members with whom I spoke reported being strongly in support of the rules, saying that they believed that following them was the most effective way to change discourse within comment sections. There were a few, however, who admitted that they did not always follow the rules. As one woman laughingly said,
The most important is to not get too emotional and attack. It’s very easy to do that. I do that and then I erase that comment, and then when I’ve rephrased it enough, I post it. But no, you can’t always follow [the list of rules]. You just get too emotional, too upset, too angry. But I don’t ever want to be mean. The most important thing to me is that every person deserves respect. If I do post something nasty, I just don’t use the hashtag. 13
When she does tag her comments with “#jagärhär,” she is sure to follow the group’s rules.
Not all #jagärhär comments are alike, and they represent a variety of viewpoints and approaches to counterspeaking. In general, however, comments posted by members take one of four forms:
Comments that seek to correct misinformation that is targeting a population group or present facts to counter a hateful narrative,
Comments that criticize the tone of hateful comments,
Comments supporting the person or behavior being attacked by the hateful speech, and
Comments written in support of other counterspeech comments.
Take, for example, an action from July 2020 in which members of the group responded to comments on an article reporting that there had been several confirmed cases of bubonic plague in China. When #jagärhär posted this action, the comment thread was already filled with remarks such as those shown in Image 1.

Image 1. An example of some of the comments that triggered #jagärhär’s engagement with a story about bubonic plague in China.
The examples in Image 1 are illustrative of the way that misinformation becomes linked with hateful speech, where false notions are used to describe and demean a whole group of human beings. In response, #jagärhär members wrote comments challenging the idea that the diets of Chinese people are uniquely dangerous, correcting misinformation about the plague, and calling many of the comments in the thread racist. They also wrote comments in support of others who were counterspeaking in the thread—both members and nonmembers. Image 2 shows a few examples, including one replying to another counterspeech comment.

Image 2. An example of the responses #jagärhär members wrote to counter comments like those shown in Image 1.
Members cite many reasons for choosing to write one type of comment or another. Many members said that they were more likely to write comments directly challenging a hateful statement (especially one containing misinformation) if they felt they had some expertise in the topic being discussed. Expertise also informs how some members construct their counterspeech. For example, Fredrik, a 49-year-old academic who lives in Göteborg, Sweden’s second largest city, says that he often refers to scientific articles in his counterspeech comments: “I am an academic. I work at the university. So for me, it’s very important to follow standards in argument. I see myself as a knowledge producing institution representative.”
Most members with whom I spoke reported both “liking” comments and writing comments of their own on actions throughout the week. What they choose to do on each action relates to factors such as their level of expertise in the subject being discussed in the article and comments, as well as their daily schedule and energy level. Group administrators note that early in the morning, commuting hours, and late evening are particularly busy times for the group. People tend to “like” comments early in the day, while riding the bus to work for example, and then engage more deeply by writing and responding to comments later in the evening.
For Elin, a 47-year-old woman who lives just outside of Stockholm, counterspeaking sometimes begins as soon as she wakes up. From bed, she turns on her phone and reads recent comments, adding her “likes” to ones with which she agrees. In her spare moments throughout the day, or in the evening after everything has quieted down, she again returns to Facebook to check on the actions that were posted during the day, writing comments and adding more “likes.” 14
When #jagärhär members “like” each other’s comments, it moves them up in the ranking. Ideally, they hope to push their comments into the top section that is visible when one scrolls through one’s news feed. By pushing their own comments up, #jagärhär members try to make their comments the first thing (or perhaps the only thing) that people read in reaction to an article. Many studies have documented that the tone of social media comments at the beginning of a comment thread has an impact on the tone of the future discourse within that thread. 15 For example, if a user encounters so-called “civil” comments, they are more likely to post similarly civil comments (Han & Brazeal, 2015). Likewise, several studies have found that exposure to antisocial or uncivil comments makes a person more likely to post an antisocial comment (Cheng et al., 2017). If #jagärhär members are successful in pushing their civil comments to the top and the hateful ones to the bottom, they may well influence at least some of the other users who comment within the same field.
For their efforts to be successful, #jagärhär members must be careful not to amplify the content to which they respond. On Facebook, responding directly to a comment can boost that comment higher in the thread, even if the response is critical. The group’s rules therefore state that members should not respond directly to a person or a specific comment. Even if they are writing in response to a specific comment, members write so-called top-level comments 16 (clicking “reply” on the original post rather than on another comment on that post). By writing their counterspeech as a new comment, rather than as a reply to another comment, others can elevate the counterspeech (by “liking” it), without also amplifying a hateful comment as they might have done by replying directly to (“engaging with”) it. In other words, as one member said, she often doesn’t reply directly
because I don’t want to give the original post—because of the way the interaction algorithm works—I don’t want to give it more views or more power . . . I don’t want to reply to a person, because I don’t want to give that person more space. 17
Addressing the “Movable Middle”
Pushing their comments to the top of comment threads has other potential impacts as well. Members of #jagärhär said they try to amplify comments that are logically argued, well-written, and fact-based, whether they are written by #jagärhär members or not, because they may be able to reach the larger reading audience—those scrolling through their Facebook feeds who might encounter the article. Some of those people, group members posit, will not have made up their mind yet about the topic being discussed, and therefore could be potentially swayed in different directions by the speech in the comments.
Those working to shift policy or public opinion on social issues such as abortion (Lane, 2019), migration (International Center for Policy Advocacy, n.d.), or LGBTQ+ issues (GLAAD and Them, 2019) often call this audience the “movable middle,” people who do not currently hold strong opinions about a particular topic and are therefore able to be swayed toward one side or the other. Activists generally see the movable middle as the ideal target of messaging, as they are more willing to listen sincerely to an argument than those who openly oppose it. There is support for this strategy in the literature. For example, researchers have found that even a small group of counterspeakers can influence the discourse within an online space if the audience that they are speaking to holds relatively moderate views (Schieb & Preuss, 2016).
Rather than attempting to change the minds or behavior of those posting hatred, members use strategies that they believe can reach, and hopefully persuade, those in the movable middle: providing factual information and documenting dissent. Although research indicates that fact-checking is not very effective for changing someone’s mind (Kolbert, 2017), #jagärhär members understand that their comments may reach audience members with a wide variety of opinions on a subject. Since many people in the imagined audience for #jagärhär counterspeech are “silent readers” (those not actively participating in the thread), members do not identify specific people to target with their comments. Rather, they seek to reach a broad spectrum of people who hold relatively moderate views and who could either be persuaded to become counterspeakers themselves or inoculated against believing—or sharing—hateful comments. Sharing links to articles that counter hateful misinformation may persuade someone with little knowledge of the topic, or a person who is fairly sure the misinformation is false but not confident enough to say so, even if it is not an effective way to change the mind of someone holding more extreme views.
In order to help members counter misinformation that targets specific population groups, some groups have assembled factsheets that contain useful links and statistics. For example, when the US military was leaving Afghanistan in 2021, and Afghan refugees were being evacuated to countries around the world, many of the groups found themselves countering comments that contained both hateful speech directed at refugees and misinformation about the relocation. The UK group, for example, assembled a factsheet with information about which countries were taking in the most Afghan refugees, the specifics of the United Kingdom’s Afghan Relocation and Assistance Policy, and many other links where members could read more about the crisis, refugee rights, and the Taliban. As previously mentioned, members reported being more likely to write a comment of their own (rather than just “liking” comments written by others) if they felt they had some level of knowledge or expertise on the topic. Factsheets such as this one allow members to educate themselves about rapidly developing situations and feel confident in their responses.
Members said dispelling myths and making accurate information easily visible would allow the “silent readers” to “make up their own minds.” Noteworthy in these statements is the underlying assumption that, if presented with both accurate and inaccurate information, readers would likely be convinced by #jagärhär’s arguments (which many members described as typically “logical” and “well-formulated”), an assumption that presents an inherently optimistic (although possibly misguided) view of the average reader.
A second way in which #jagärhär members try to reach their audience, one that many believe is powerful, is to document dissent. One member said he doesn’t counterspeak to make people see that they are wrong, but to show that there are different views. “These comment fields can make the impression that most people are hateful; they’re not,” he stated. 18
Another member shared a similar viewpoint:
Even if you write an answer for that side [those posting hatred], everyone else can read it too. If you go into a place where a lot of bad things are written, then people say, “oh, God! That is what everyone thinks!” But this is not what everyone thinks. A lot of people think differently; and that’s important. 19
In her article “Blocking as Counter-Speech,” philosopher Rae Langton describes how readers can “block” the impact of hateful speech by mentally resisting what it is trying to say. 20 The impact of xenophobic speech could be lessened, for example, if people who read the hateful speech understood it to be untrue, and therefore remained unmoved by its message. For this internal blocking to be successful, however, Langton argues that the reader must feel confident that the speech is wrong and must overcome “the fear of being an epistemic outlier—the odd one out, who disagrees not only with the speaker, but also with what everyone supposedly takes for granted” (Langton, 2019). This is exactly what members of #jagärhär are trying to correct. They want to ensure that when readers encounter hatred online, they see that they are not the only ones who disagree. As one member said, “You are trying to reach every reader to make the reader understand that it seems like everyone in Sweden is against immigration, but that’s not true.”
Activating New Counterspeakers
Convincing others that they are not alone in disagreeing with a hateful comment so that they can more easily “block” (Langton, 2019) the message is important, but this study also identified another mechanism through which #jagärhär’s counterspeech fights hatred: drawing new counterspeakers into conversations.
During interviews, many members of the group told stories of their own experiences joining #jagärhär—how they had felt alone and hesitant to speak against the hatred they were seeing online. Many also said that they did not counterspeak before joining the group. They were disgusted by the comments that they were reading, but they felt too afraid to say anything.
Mattias has been a member of the group for about 3 years and a member of the moderator team for over a year. Like many members with whom I spoke, Mattias was not counterspeaking before joining #jagärhär. He was active in environmental justice work but had avoided taking part in online conversations with people he did not know because the commenters were so aggressive, hateful, and quick to spread misinformation. “It was bad to the point where I had decided not to click any comment sections because it led me to so many bad emotions.” Mattias is a graduate student studying communication and, during our conversation, he ruminated about how this training has led him to think critically about why he and others choose to participate or not in various types of conversation:
I’ve always been really into communication, so my frustration was even bigger [before joining #jagärhär] because I knew how you can communicate about issues, even emotionally loaded issues, in a way that doesn’t devolve into vitriol basically. So it was a period in my life where I said “I’m just not going to participate in social media at all, it’s all so toxic.” 21
Like many members, Mattias first discovered the group by seeing a comment tagged with #jagärhär. Even though he joined, he was still hesitant to get involved and become an active counterspeaker himself:
I wasn’t really active for a while—I just joined to show my support. Then more and more I started to get involved. So it was more me showing my support in the beginning, then I realized that it actually had an effect and the comments became better and better—and now it’s easier for me to have a better conversation. Now more and more the people are commenting in an effort to have a better conversation.
For Mattias, seeing the discourse improve within comment sections made it easier for him to participate.
One of the most common remarks from those interviewed for this study was that the group made them feel braver. As previously discussed, many members stated that they did not feel comfortable entering the comment sections before joining #jagärhär, describing the comments that they used to encounter as predominantly “toxic,” “aggressive,” and “hateful.” A solitary dissenting voice would draw attention and potentially garner attacks. But with the #jagärhär model, members counterspeak as a group, leaving the individual less exposed within the comment thread. Members said this left them “feeling safer” or “more protected.”
This is an important finding, as it adds explanatory value to studies like the research completed by Miškolci et al. (2018) on countering anti-Roma speech in Slovakia. Their study found that counterspeech was able to draw out more pro-Roma comments from other users. When combined with the findings from this study, one might posit that counterspeech is able to draw out more counterspeech because each additional counterspeaker is less exposed to potential abuse. As the number of counterspeakers increases within a comment field, the level of risk and related emotional cost decreases for those joining the conversation.
For comment sections identified by group moderators as official action items, members of #jagärhär can confidently counterspeak knowing that they will not be alone in doing so. For an average action post, about 50 members usually write comments and many more “like” various comments. This number fluctuates, depending, at least in part, on the speech to which they are responding. According to Dennert and several group members, when there is a clear “bad guy” and a sympathetic victim in the content of the article, the actions generally draw more counterspeakers. For example, a post describing a hate crime against a child would likely draw a large number of #jagärhär members.
Many members mentioned that the group’s size provides a sense of support that extends beyond simply feeling protected by a herd. Some called it “support,” others “trust” or “camaraderie,” but most mentioned some manner in which being a member of the group allowed them to move more bravely through the online world. “#jagärhär helps in many ways. The effort of many is directed and collected; you can get support, even knowing there are people figuratively behind your cyber back is a comfort,” stated one. 22 Another said, “You feel that if I get jumped at, I will have my friends in back to cover me. That is very important.” 23 “If you come attack me, there are 10 people who will come to support me. That is incredibly important,” echoed a third. 24
One mechanism through which #jagärhär members actively encourage new counterspeakers is by supporting counterspeech comments posted by nonmembers. Members will find such comments and “like” them, sometimes directly responding with comments saying something like “Well said!” along with the group’s hashtag. Monica, a #jagärhär moderator, was counterspeaking on her own before she joined the group back in December 2016, only a few months after it began. Before she joined, she remembers one day responding to a hateful comment about asylum seekers when some of the commenters began attacking her. “I don’t remember exactly what they said, but I remember it was aggressive, and that I didn’t know exactly what I should do. I thought, should I keep responding? Should I just keep quiet?” 25 Before she had made up her mind, she noticed that others had joined her. People started “liking” her comment and others began citing statistics about immigration and trying to refute claims that refugees were a danger to Sweden. Their comments included a hashtag: “#jagärhär.” “I looked it up, and I decided to join. It was just in time. I had started losing some faith that responding was worth it. To see so much hate. That can eat you up at times,” she said. But after finding the group, she felt more hopeful: “I thought that I could make a difference with other people. We could do this together.” 26
By including the hashtag in their comments, #jagärhär members demonstrate the fact that they have the group’s support and they say that makes them feel braver. When the group first formed, members would tag all of their comments with this hashtag so that other members could easily find and “like” them. It also helped recruitment, as noted by several members including Monica.
But as the group grew, they changed their strategy. Elin, the member who begins her counterspeech first thing in the morning, said that at some point she began to feel that by using the hashtag, the group was bullying the people to whom they were responding. Elin herself had been bullied as a child, and she was easily able to empathize with readers who disliked seeing hundreds of comments tagged with “#jagärhär” that were written in response to a far smaller number of hateful comments. “I actually wrote to Mina when I was sort of six months in,” Elin said:
I said to her that I don’t think that it’s really clever that everyone use the hashtag because we are too strong. You can’t have one person writing something stupid and then have 300 persons just sort of picking at them. Then it turns the other way. Then you have the good guys turning into bad guys because there are too many.
Some far-right voices say that the #iamhere groups are de facto censors who muscle their own opinions to the forefront while silencing others. 27
As time passed, the group norm changed from everyone tagging their comments with #jagärhär to only a few doing so on any given post. Ideally, the message communicated is that many different individuals with unique viewpoints are counterspeaking rather than there being an outpouring of criticism from one unified source. “I was not the only one thinking and talking to Mina I think. I think there were many more than me,” said Elin. 28
In my interviews, most members said they generally only reveal their association with the group during their counterspeech in specific circumstances. For example, one member said,
I would use it [the hashtag] in specifically infected commentary fields—if there are a lot of really mean comments and people are attacking each other, then I would put on the hashtag, like armor. It adds a level of protection.
Others, however, felt that the hashtag sometimes made them more vulnerable because it brought them unwanted attention from the aggressive Facebook users whose hateful comments were prompting their counterspeech. Take Monica, for example, who joined #jagärhär at the end of 2016 and became a moderator a year or two later. Monica said that, as a moderator, she uses the hashtag in order to show other members that she is there to help if they need it. She is more likely to use the hashtag in the beginning of an action to show group members that they are “safe,” and not when there are already many counterspeech comments. However, the fact that she is so frequently visible in the comment threads as a member of #jagärhär also means that she has faced more harassment than many other non-moderators. On one occasion, a piece of propaganda for an extremist group was delivered to her home. Even though there was no personal message, and she could not prove that it was targeted, she believes that it was sent to her intentionally to show her that members of the extremist group knew where she lived.
In these examples, members strategically deploy the hashtag, or refrain from using it, because the group’s reputation and/or size can offer either benefits or vulnerabilities depending on the circumstances. But considerations about reputation and association with the group can also flow in the other direction. Individual members who have online reputations of their own sometimes avoid using the #jagärhär hashtag to keep their own reputations from tarnishing the group. In 2018, the political editor of a self-declared “independently liberal” Swedish newspaper (Göteborgsposten) wrote an editorial disagreeing with Swedish Holocaust survivors who compared the ideology and policy priorities of the present-day Sweden Democrats to those held by the Nazis in Germany in the 1930s. In response, Fredrik, a #jagärhär member and university professor, wrote a petition calling for her removal from the paper. His petition received over 800 signatures within the first 24 hours. It also brought a wave of online attacks—social media posts calling him a communist and a traitor, emails to his employer demanding his removal, and even death threats—from those associated with the radical right in Sweden. “It lasted for a week,” he said. “I’m pretty used to being in the spotlight, but this was really too much, so since then I’ve been more cautious about what I post.” Fredrik says that although he used to use the #jagärhär hashtag, today he does not:
I had a discussion with a friend who said “your online reputation is already ruined, so they [those posting hatred online] would say that this crazy communist from the university is trying to silence people.” If I use [the hashtag], the discussion isn’t about the topic anymore, it’s simply about me. And I understand what my friend is talking about. Unless I can counter that image of what is around about me, [people believing he is a communist or a traitor] I understand why it could do more harm than good. So if I would tag my comments, I would provide people ammunition to use my support as an example that the whole initiative is corrupt. 29
Both Fredrik and Monica became visible as individuals, distinct from the protective mass of the group—Monica through her work as a moderator (being an early and frequent poster on action threads) and Fredrik by publicly campaigning against a prominent political editor. This visibility brought on attacks that could not be fended off by the support of other group members “liking” comments or writing their own comments in solidarity with the attacked member. Although the online attacks on Fredrik did not emerge from his work with the group, he now wonders if he should have reached out to Dennert or the other moderators for help after the attacks started. The group’s admin team has developed a process for taking reports of online harassment from members and helping them contact the police, if necessary. In the months leading up to the 2018 Swedish general election, as more group members reported harassment, the group even developed a separate task force to take reports. These days, Dennert says that regular members (those not on the team of moderators and administrators) are harassed only occasionally.
The possibility of online attacks does not seem to have dampened the confidence of #jagärhär members though. Notably, several group members said that their increased willingness to participate in counterspeech extended beyond #jagärhär group actions, stating that they had changed as individuals, becoming more confident in the value of their own opinions. One said, for example, “It has made me stronger, I think. I know that there are a lot of people just like me. I feel stronger, and I think I dare to speak my mind more.” 30
Another noted as follows:
I speak up more often now online in places where #jagärhär is not involved. I also think it feels a little easier to give my opinion in different situations offline since I became active in the group. It’s a good school. You get a lot of practice in patience and methods of dealing with different kind of conversations. 31
Comments like this reveal that #jagärhär’s counterspeech not only activates new counterspeakers to take part in actions or join the group but also encourages existing members to expand the scope of their own work responding to hatred. Elin also described how her participation in the group had changed and empowered her:
I have had the words and an interest in writing since I was young, but when I was young, I met a lot of adults who said “you don’t have the language, you don’t write well,” sort of pushing me down. So this [joining #jagärhär] was sort of regaining myself saying “sorry, you’re actually wrong. I can use my language. My language is not wrong. I just have to know how to use it, because I know I can touch people by my words.” So I sort of reclaimed myself. 32
Internal Practices: Keeping Members Engaged
The literature on activism notes that burnout is one of the primary challenges to sustaining participation. The term “burnout,” coined by psychologist Herbert Freudenberger in 1974, describes what happens when people lose their original drive for doing their jobs and become physically and mentally exhausted by prolonged stress associated with their work. In research with social justice activists, this loss of drive is most frequently attributed to the toll taken by the intense emotional labor often associated with social justice campaigns (Goodwin & Pfaff, 2001; Maslach & Gomes, 2006). Researchers have also documented the relationship between burnout and a “culture of selflessness” among social justice activists who may feel that, in the context of the huge societal-level challenges they are trying to overcome, taking care of their own mental health would be selfish (Rodgers, 2010).
These problems can also exist for online activists. As many studies have demonstrated, offline activists who are able to avoid burnout and remain engaged with causes over many years often do so by developing strong social ties with others in the movement (Gladwell, 2010). But strong social ties are not always easy to form in online activism campaigns. In interviews, many #jagärhär members spoke of the emotional energy required to counterspeak and said the work—which they all do as volunteers—can be exhausting. One said, for example, that before deciding whether or not to participate in an action, she asks herself, “How many comments do I have the energy to do?” Another woman described in more detail how the emotional demands of the work can make it hard to continue:
It’s very wearing—one reason why I’m not all that active now. At first, you had to gather all of your courage. Then you feel the support and feel that you are part of the group and it’s pretty easy. Then as time passes, it gets harder. You get tired of it. You meet the strangest opinions. There are some [members of #jagärhär] who join other, right-wing groups to start discussions there. 33 I don’t do that because I like my peace of mind. You get really tired. 34
Similarly, other group members noted how taxing it can be to continually attempt to counter hateful speech.
Despite the emotional toll of the work, most #jagärhär members have managed to avoid burnout and have continued counterspeaking for years, building up experience and helping the group to remain sustainable. There are several reasons for #jagärhär’s relative longevity, they said. One is the salutary effects of working together in a large group. Research has shown that activists—even those working offline—often face a feeling of isolation stemming from the fact that they deal directly with societal problems that others in their communities seem “unable or unwilling to face” (Maslach & Gomes, 2006, p. 43). By coming together as a group and participating jointly in actions, #jagärhär members stave off feelings of isolation. This was evident in the fact that 73% of members interviewed for this study spontaneously stated that joining #jagärhär has made them feel less alone. 35
But the impact of the group on well-being extends beyond the sheer presence of others. Members described a more existential notion of not being alone in their fight against hatred—they were part of a community with a shared vision for the world. “You don’t feel that you are the only one who thinks some way,” said moderator Mattias:
When it comes to a certain topic, you can feel that the only way that people react is with hate. But you can always bring it up in #jagärhär and get a totally different reaction—it’s much closer to what you want to see. It’s a group with similar values. It’s very powerful. It makes me more secure in my own values.
Before joining #jagärhär in 2017, Lena had long been active in online forums, even helping to moderate one for a group of athletes. But online discourse seemed to her to be deteriorating. She saw comments asserting that immigrants posed a serious threat to Swedish identity and culture, and even some suggesting that more immigration could lead to a “white genocide.” Troubled by what she saw, Lena at first thought that it was just a few extreme voices. When it seemed that such discourse was taking over many of the comment threads she was reading on Facebook, she wondered if more people agreed with these ideas than she imagined.
Then, one day she noticed the hashtag “#jagärhär” in a comment thread. She searched for it on Facebook, found the group, and after reading the page’s greeting and learning more about the group, decided to participate. “It was a big comfort to find them because it is really horrible out there in the comment threads,” she said. “It’s almost that I was thinking, ‘is this the new way?’ But then I found them (#jagärhär), and I thought, ok, I can feel safe. It’s not the new way. I was thinking in the right direction,” she said.
The development of a shared moral vision also helped members feel rejuvenated and contributed to a sense of belonging within the group. One member stated, “#iamhere is somewhere where you can charge your batteries somehow.” 36 She later said that it felt like “being with friends, even though you don’t know them. But you know they want the same things, so they feel like friends.” Other members said similar things: that despite not actually meeting other members in person, or even really forming strong individual connections online, there was a general feeling of friendship and familiarity. Members felt that they understood and could anticipate how other members would respond to their comments within the group and that they had a shared value system.
The various #iamhere groups also have their own collective rituals, developed to bolster the mental and emotional well-being of their members and to fend off burnout by sharing encouraging stories. The Canadian group, for example, does something called “well-wishing Wednesdays” where group members celebrate individuals 37 who go out of their way to help people. Each week, group administrators select one person, post a bit about their story, and then encourage members to send messages of support to the person being honored. #iamhereCanada supports a wide range of people through their well-wishing Wednesdays; recipients have included Autumn Peltier, a teenager from the Wiikwemkoong Unceded Territory who advocates for clean drinking water for First Nation communities, and Toronto Raptors basketball player Serge Ibaka whose foundation has provided thousands of meals to those in need (Ricci, 2019). These well-wishing posts often draw even more engagement than action posts. “Some people have told me that this [fighting against hatred] is just too hard. They need to see that they aren’t struggling by themselves,” said one of the group’s administrators. The Swedish group does something similar, calling its practice “love bombing.” As with the Canadian group, these posts are quite popular, with one—a tribute to a Swedish man who sewed over 6,000 face masks in his home to give away during the COVID-19 pandemic—receiving over 1,500 “likes.”
#Ichbinhier, the German group, has created perhaps the most extensive set of practices to take care of members. Every evening, a group administrator posts an “Absacker” (“nightcap”), a post inviting discussion around a topic usually unrelated to counterspeech, like being stuck at home during the COVID-19 lockdown or favorite first sentences from books. These posts give members a break from the emotional labor of fighting hatred, while providing a space for group discussion and bonding. Several members of #ichbinhier also created a sister Facebook group called “Happy Place für #ichbinhier,” where members post a steady stream of light-hearted video clips and memes, animal pictures, and feel-good stories. The group is open to all members of the larger #ichbinhier group and has 819 members at the time of writing. 38
The specific practices and structures designed to promote self-care, together with the general feeling of belonging that members largely attribute to sharing a common moral vision and goal, help explain why members view the #jagärhär Facebook group as a rejuvenating place. They are surely part of the reason why thousands of people have continued to do the unpaid work of responding to online hatred, week in and week out, for years.
Impact
It is clear that #jagärhär has affected the mental states and behavior of its members. There is also evidence that the group has had a wider impact, affecting the way that individuals outside of the group behave. This can be observed on multiple levels. The first is the effect that the group has on speech in particular comment threads. As discussed above, pushing their comments to the top of comment sections allows #jagärhär to influence the tone of comments that are posted by nonmembers who may read the thread and write comments. It also provides a pathway to reach those in the “movable middle” and convince them that the hateful comments they may have seen are not the predominant views in Sweden.
On a second level, #jagärhär may have helped make discourse norms on the Facebook pages of newspapers and public groups in Sweden more civil and less xenophobic. There seems little doubt that there has been a notable shift, observed by #jagärhär members and others such as journalists, in the years since the large influx of migrants to Sweden and other European countries. The shift has coincided with #jagärhär’s 4 years of efforts, though we (so far) lack evidence of a causal relationship between the two.
There is anecdotal evidence, however, including from people outside #jagärhär. The editor-in-chief for a network of self-identified “liberal” regional newspapers in Sweden told me that from her observations, she believes #jagärhär’s counterspeech has had a large impact on the discourse within Swedish comment threads. She said that several years ago, there was not much discussion on the Facebook pages of her network’s newspapers—and the comments that appeared did not seem to provide a representative sampling of readers’ opinions. “It was kind of weird. It didn’t matter what kind of story it was, all of the comments were hatred and right-wing comments. It didn’t matter what we were writing about.” 39 Now, the comments are more balanced. She believes #jagärhär made it feel safe for the paper’s readers (who are not members of the group) to comment on articles. The group’s actions seem to have diluted hateful comments significantly, though they have not eliminated and may not even have decreased them.
Many members with whom I spoke mentioned that they felt that the discourse in the comment threads had improved over the time that they had been members of #jagärhär. “Let’s dial back five years,” one member said. “Whenever there was a debate about immigration, there was an absolute majority of people throwing up hate, just drowning everything, and [there was] very little counterspeech because there were so few people doing it, you got attacked.” He continued,
Five years ago, the amount of personal attacks you got was enough to deter quite a few, I’m sure. But with #jagärhär, it wasn’t just me standing up. They got so many others on the bandwagon that they [those posting hatred] started to disappear in the threads. From being 90% of threads, they went to 20% of the threads. 40
Graduate student Mattias shared a similar opinion when I asked whether he felt the group had an effect:
I don’t really understand why or how it works, but I definitely notice that it does work. I didn’t even feel like I could, I mean, I never made comments on public Facebook pages a few years ago. Almost every comment was toxic. What #jagärhär has done, somehow, I don’t know how, people can now make comments expressing their opinions and they don’t have to be toxic. I mean sure, there still are toxic people, but there is always someone there to back you up.
I asked Mattias if he felt the change had come from there being fewer people posting hatred online. “No,” he said, stopping to think for a moment. “It’s just that there are more reasonable people. Now the status quo is more balanced, so people who go along with status quo are less toxic.” The hateful commenters did not decrease in number, but because the number of counterspeakers has increased dramatically, the proportion of hatred has changed. This means that those who encounter a comment thread are less likely to conclude that the opinions expressed in the hateful comments are the prevailing view in Sweden. They are also less likely to refrain from adding their own counterspeech. The model of counterspeech used by #jagärhär documents dissent to hatred and supports other counterspeech comments, two actions that make online spaces in which #jagärhär members are present feel less risky for others who may be contemplating adding their own counterspeech comments. Thus, although the actual hatred may not have changed, the impact of that speech likely has.
The perceived changes in discourse norms also motivate #jagärhär group members to comment more since they feel they are having a real impact. As one member said,
The thing is that since the group has grown so much, you actually find that the tone at least in normal media has changed a lot. There are many more people who are contradicting racist things, so it’s easier. You don’t have to go by #iamhere, you can just go and start commenting, and you will always have people supporting you. 41
Another member agreed:
People aren’t as afraid to give their opinions. For me personally, it means I am more prone to comment now than ever before when I just avoided any commenting at all. I realize that if I set the tone with the first comment when my local paper publishes something it makes a difference. 42
The growth and influence of #iamhere groups has brought them criticism as well. For example, some people have come to see them as a coordinated effort to silence voices with which they disagree. One #jagärhär member told me, “I have seen that quite a lot of people believe it’s like a sect. So if you have 10 comments, and they all have the hashtag, then one guy will comment, ‘oh the sect is here!’” 43 Others (generally those promoting far-right political ideas) have described the group as censors. The German group has faced similar criticism: critics have called them the “Stasi 2.0,” and a Facebook user once called the group “opinion gorillas” in a comment thread. 44 As discussed above, criticisms such as these have led some members to feel that using the group’s hashtag can cause more harm than good within a comment thread. Members worry that when it is apparent that they are highly coordinated, their counterspeech will not be trusted or deemed “authentic.” As the groups around the world continue to grow and become better known, this criticism is likely to continue.
Conclusion
This study suggests that there is value in collective action against hateful speech online. Although previous research has documented that counterspeaking as a group may have an impact on discourse in certain cases (Friess et al., 2020; Garland et al., 2020), this study is the first to consider how both the internal and external practices of counterspeech groups contribute to the effectiveness of the effort. It is also the first to consider the various consequences of collective counterspeech for individual members of the group. Group members report feeling braver and more willing to enter difficult conversations. In addition, they spoke of many aspects of the #jagärhär model that may prevent burnout, a major obstacle to sustainability for many social change initiatives.
Are they succeeding? The findings from this research suggest that they likely are. “Success” for #jagärhär members is not measured by how many hateful comments exist in a conversation, but by how much space has been created for alternative viewpoints. As one member said,
In the end, it’s about democracy, it’s about debate, it’s about freedom of speech that people will have the courage to say what they think. If you have lots of hate comments, maybe you are afraid, and you don’t want to say what you think. But if we are 10-20 people arguing against the hate then I imagine that others will also want to do so, so that not only the people screaming the highest can say their opinion. 45
This new way of conceptualizing effectiveness poses some challenges for measurement and calls for further inquiry. But the findings of this study call attention to the different possible audiences of counterspeech and the multiple pathways through which it can bring about changes in online discourse. In doing so, the study helps researchers begin to design studies to measure the various impacts of counterspeech.
Acknowledgements
The author would like to thank Susan Benesch and Tonei Glavinic for their support and guidance throughout the project; Richard Wilson, Joshua Garland, and the two anonymous reviewers for their insightful comments on earlier drafts of this article; and Mina Dennert and all the members of the #iamhere network, especially those who so generously shared their time and experiences.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The Dangerous Speech Project’s work is supported by the John D. and Catherine T. MacArthur Foundation, the Ford Foundation, and the Open Society Foundations.
