Abstract
In recent years, a growing body of research has documented how both local and national politicians experience insults, sexism, and even threats on social media platforms. Studies show that such online abuse can have serious democratic consequences and negatively impact politicians’ mental health and wellbeing. We investigate whether online abuse directed at politicians can be countered by providing citizens with information regarding such abuse. Specifically, we examine whether information about how abusive comments (1) hurt individual politicians, (2) violate social norms, and (3) undermine democratic quality can influence citizens’ abuse tolerance and their willingness to engage in behaviours that counter online abuse on social media. We test our preregistered hypotheses in a survey experiment using a sample of 2000 Danish citizens. Our results show that information about consequences and norms regarding online abuse does not significantly affect citizens’ tolerance of abusive online behaviour. However, it can increase citizens’ willingness to engage in behaviours that counter instances of online abuse, such as writing a comment to support the targeted politician or reporting the abusive comment to the social media site.
Introduction
Politicians are frequently targeted with online abuse on social media platforms. Studies across multiple countries show that politicians at both national and local levels routinely receive insults, sexist remarks, racial slurs, and even threats of physical violence (Pedersen et al., 2025; Petersen et al., 2024; Southern and Harmer, 2021; Ward and McLoughlin, 2020). Such online abuse may have detrimental consequences for the mental health and wellbeing of the targeted politicians. For instance, politicians report having experienced stress, insomnia, and anxiety due to online abuse (Collignon and Rüdig, 2020; James et al., 2016). Further, the prevalence of online abuse has serious democratic consequences: some politicians avoid sharing their beliefs on controversial topics and others even leave politics altogether due to the burden of online abuse (Erikson et al., 2023; Pedersen et al., 2025; Petersen et al., 2024).
Politicians are, however, not the only group that experiences online abuse, and studies on online abuse and hate speech directed at other groups, for example, racial or ethnic minorities, suggest that at least part of the solution to online abuse may be social media users themselves. Abusive online behaviour is primarily driven by a small subset of individuals (Bor and Petersen, 2022), and the vast majority of people generally oppose online abuse (Pedersen et al., 2025; Petersen et al., 2024). This means that ordinary social media users can play a vital role by reporting abusive messages when they encounter them and responding directly to the few users who write these abusive messages. Importantly, such counter speech from bystanders seems to have a tempering effect on the subsequent activity of the abusers (Crawford and Gillespie, 2016; Elsayed and Hollingshead, 2022; Garland et al., 2022; Hangartner et al., 2021). This suggests that ordinary social media users can be an effective weapon in the fight against online abuse of politicians – if they can be mobilized to take action. The question, however, is how to achieve such public engagement in counter speech, especially when it may expose citizens to the risk of becoming targets of online abuse themselves.
In this study, we investigate whether ordinary people can be motivated to take action against online abuse of politicians if they are given information about its negative consequences or social norms regarding such abuse. Drawing on research on empathy, social norms, and civic engagement, we examine how information stating that abusive comments (1) hurt individual politicians, (2) violate social norms, and (3) undermine democratic quality influences citizens’ tolerance of abusive comments and their willingness to engage in counter speech or report the comments. We do this in a survey experiment with a sample of 2000 Danish citizens, in which participants are randomly assigned to one of four conditions: three treatment conditions containing information about the negative implications of online abuse and a control condition where these implications are not highlighted. All participants are shown several real-world examples of abusive social media comments targeting a politician, after which we measure their abuse tolerance, willingness to engage in counter speech, and reporting intentions.
Our results show that while information about online abuse of politicians does not seem to affect citizens’ abuse tolerance, all three experimental treatments affect citizens’ willingness to engage in counter speech. Additionally, information about how online abuse of politicians violates social norms and undermines democracy also affects citizens’ willingness to report abusive behaviour.
Online abuse on social media and citizen reactions
At the most general level, online abuse can be characterised as online content created with the intention of distressing or insulting a specific individual (Ward and McLoughlin, 2020). In practice, online abuse can take many forms, for example, insults and slurs, sexist and racist remarks, or outright threats (Pedersen et al., 2025; Petersen et al., 2024). While social media sites continually remove abusive messages, such top-down content moderation is insufficient. Researchers and experts have therefore suggested activating social media users to more effectively counter online abuse (Garland et al., 2022; Munger, 2017). Indeed, political theorists even argue that the individual bystander has a moral duty to condemn and fight abusive online language when it is encountered (Howard, 2021). There is both a perceptual and behavioural aspect to this bottom-up activation of social media users. First, people need to be convinced or (if they already feel that way) reminded that abuse of politicians is undesirable. Second, people need to be willing to do something about the issue. One low-cost action is to report abusive behaviour to the social media platform. Another action, which is more personally costly but also potentially more effective, is to speak against abuse, for example, by posting a comment that offers support for the targeted politician.
A key question is how this kind of perceptual and behavioural activation of citizens can be achieved. Although many actors, like the UN (2020), aim to encourage citizen action against online abuse, little is currently known about which arguments should be advanced to maximize effects, for example, in information campaigns. This article focuses on three pieces of information that emphasize concerns about abuse at different levels: At the personal level, we target people’s empathy and explain how abuse hurts politicians’ well-being. At the cultural level, we target norm-abiding behaviour and remind people that most fellow citizens are against political abuse. Finally, at the institutional level, we tap into the almost universal support for democracy and explain how extensive abuse has side effects that can undermine democratic quality. The following sections develop our hypotheses on these informational effects in greater detail.
Personal consequences of abuse
The first type of information that might affect citizens’ attitudes and behaviour towards online abuse of politicians is information regarding the personal consequences of such abuse. A growing body of literature indicates that online abuse of politicians can negatively affect politicians’ mental well-being (Akhtar and Morrison, 2019; Collignon and Rüdig, 2020; James et al., 2016). We contend that highlighting such personal consequences may evoke citizens’ feelings of empathy towards politicians. When people are exposed to others experiencing harm, this exposure may trigger moral reactions such as ‘empathic anger’, which can motivate people to ‘protect the victim’s interests by undoing the harm, compensating the victim, and punishing harm-doers’ (Hechler and Kessler, 2018, 271). Several studies indicate that appealing to empathy may affect social media behaviour. Munger (2017) found that empathic appeals decreased the use of racial slurs, and Hangartner et al. (2021) similarly found empathy-based counter speech to be efficacious against xenophobic hate speech. Based on this research, we propose the following hypotheses:
H1a: Exposure to information that highlights the personal consequences of abuse of politicians makes citizens less tolerant of such abuse.
H1b: Exposure to information that highlights the personal consequences of abuse of politicians makes citizens more likely to take action against such abuse.
Social norm violation
A second type of information that may affect attitudes and behaviours towards online abuse of politicians is information regarding the norms about such abuse among fellow citizens. Specifically, informing participants about predominant norms, in this case that most citizens are clearly against the abuse of politicians, might make them more likely to act in accordance with this norm. Such information is also factually correct, as studies have shown that people are generally strongly opposed to online abuse of politicians (Pedersen et al., 2025; Petersen et al., 2024). Theoretically, this intervention is based on the principle of social proof (Cialdini and Goldstein, 2004). According to this principle, individuals often determine appropriate behaviour by observing and then mimicking others. Studies across multiple domains have shown that the perceived beliefs and norms of other people can have substantial effects on individuals (Garland et al., 2022). In political science, a large literature on voting behaviour has, for example, found strong ‘bandwagon effects’, where voters tend to vote for parties popular among other voters (Morton et al., 2015). Overall, this leads us to the following hypotheses:
H2a: Exposure to information that highlights societal norms about abuse of politicians makes citizens less tolerant of such abuse.
H2b: Exposure to information that highlights societal norms about abuse of politicians makes citizens more likely to take action against such abuse.
Democratic consequences
The third type of information tested in our study is information regarding the democratic consequences of online abuse directed at politicians. Online abuse may harm democracy and democratic deliberation in several ways. Politicians may engage in self-censoring by refraining from speaking out and engaging in certain political topics or discussions out of fear of repercussions, and online abuse may ultimately make some politicians consider leaving politics altogether (Erikson et al., 2023; Pedersen et al., 2025; Petersen et al., 2024). Although people may sometimes be ready to accept undemocratic behaviour if the undemocratic behaviour aligns with their own policy goals (Krishnarajan, 2023), support for democracy as a system of governance is generally still high in established democracies (Wuttke et al., 2020; 2022). Thus, we propose the following hypotheses:
H3a: Exposure to information that highlights the democratic consequences of abuse of politicians makes citizens less tolerant of such abuse.
H3b: Exposure to information that highlights the democratic consequences of abuse of politicians makes citizens more likely to take action against such abuse.
Research design
To examine our hypotheses, we conducted a survey experiment among 2000 citizens in Denmark. Denmark constitutes a relevant test case, as Danish politicians are often exposed to online abuse. For instance, a report from 2016 showed that 56% of Danish mayors said they had experienced online abuse on social media (Bhatti et al., 2016). More recently, a study among local, regional, and national politicians found that a large share reported having experienced abuse online (Pedersen et al., 2025; Petersen et al., 2024). Of particular concern, about 30% reported receiving threats through online channels. At the same time, the vast majority of citizens strongly disapprove of antidemocratic behaviours (Pedersen et al., 2022), which makes Denmark particularly suitable for testing hypotheses 3a and 3b.
Survey experiment
We fielded the survey among a nationally representative panel at Voxmeter, a commercial polling company. Invitations were sent by email, and for their participation panel members received points that they could use for lotteries, goods, and charitable donations. As stated in the preregistration available on the Open Science Framework, we set the sample size to 2000 participants, based on the intention to maximize statistical power under budgetary constraints.
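As a purely illustrative sketch of what such a power consideration implies (our own back-of-the-envelope assumptions, not a calculation from the preregistration), the minimum detectable effect for a single treatment-versus-control contrast with roughly 500 participants per arm can be approximated as follows:

```python
from scipy.stats import norm

def minimum_detectable_effect(n1: int, n2: int, alpha: float = 0.05,
                              power: float = 0.80) -> float:
    """Approximate minimum detectable effect (in SD units) for a
    two-sample comparison of means, using the normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value, two-sided test
    z_power = norm.ppf(power)          # quantile for the desired power
    return (z_alpha + z_power) * (1 / n1 + 1 / n2) ** 0.5

# 2000 participants split evenly across four conditions gives
# roughly 500 per arm for each pairwise contrast.
print(f"MDE = {minimum_detectable_effect(500, 500):.2f} SD")
```

Under these assumptions, a design of this size can detect effects of roughly 0.18 standard deviations with 80% power.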
Experimental groups and treatments
We sought to make the three treatments as similar as possible in terms of their length and how they were presented. Thus, all three treatments emphasize that the provided information is based on research and end by giving an example of what that research shows. On the same page where the treatment information was presented, we asked participants: ‘To what extent are you surprised about this information regarding abuse of politicians?’ As stated in the preregistration, we only used this question to ensure that participants read the treatment information.
Following the experimental treatments, we provided the participants with one of six different examples of real abusive comments that citizens have sent to politicians in response to a Facebook post. For example, a participant might see the following comment: ‘Hope you get a visit tonight and get completely beat-up’ (all comments are found in Appendix A as Table A1). As stated in the preregistration, we aggregate across the comments in our analysis and do not hypothesize about the effect of specific comments. The primary reason for sampling stimuli from six different comments was to rule out the possibility that any single comment drives the results.
Dependent variables
After showing the abusive comment to participants, we measured our outcome variables. Specifically, we asked participants to what extent they agreed with the following three statements: 1) Politicians should be able to tolerate such comments (abuse tolerance), 2) I would consider writing a comment to show my support for the politician (counter speech), and 3) I would report this comment to Facebook (reporting). Online Appendix B provides descriptive statistics and additional details on the dependent variables. As stated in the preregistration, we tested the effect on each of the three measures independently in our analysis. In support of this approach, an exploratory analysis revealed that a cumulative index consisting of the three measures had poor internal reliability (Cronbach’s alpha = 0.44). Before the experiment, we measured several other variables (age, gender, education, political interest, conflict avoidance, and political trust), which are included as covariates in our analysis.
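For readers unfamiliar with the reliability statistic mentioned above, Cronbach’s alpha can be computed directly from the item variances and the variance of the scale total. The sketch below uses simulated seven-point responses, not the study’s data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Three essentially unrelated seven-point items yield a low alpha,
# analogous to the poor internal reliability reported for the
# three outcome measures (alpha = 0.44 in the study).
unrelated = rng.integers(1, 8, size=(500, 3)).astype(float)
print(round(cronbach_alpha(unrelated), 2))
```

A low alpha like this supports analysing the three outcomes separately rather than as a single index.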
Results
In line with our preregistration, we used OLS regressions to estimate treatment effects. Below, we use figures to show the marginal effects of our experimental treatments, while the full model results are in Appendix C. This appendix also shows results from an exploratory model without the pre-registered covariates and the predicted means for both preregistered and exploratory models.
First, as the leftmost panel of Figure 1 shows, we do not find consistent evidence that citizens’ abuse tolerance is affected by any of our three treatments. Thus, there is no support for Hypotheses 1a, 2a, and 3a. One explanation may be that our measure is not sufficiently sensitive to changes in citizens’ abuse tolerance or that the effects are simply trivial. It is worth noting that the variation on the abuse tolerance measure is relatively limited, as the vast majority of the participants did not believe that politicians should tolerate abusive comments. Hence, we cannot rule out that our null findings reflect a floor effect rather than a genuine absence of effects on tolerance. There may simply not be enough abuse tolerance to temper.
Figure 1. Main treatment effects (preregistered). Note: Estimates with 95% confidence intervals. Based on Models 1-3 in Appendix C.
In contrast, the middle panel of Figure 1 shows positive, significant effects of all three information treatments on participants’ inclination to write a comment in support of the targeted politician. Specifically, all three treatments increase the mean response on the seven-point scale by 0.3 to 0.5 points. Measured in standard deviations of the dependent variable, these effects correspond to 0.20 SD for the treatment emphasizing personal consequences, 0.16 SD for the treatment emphasizing societal norms, and 0.27 SD for the treatment emphasizing democratic consequences.
In addition, as shown in the rightmost panel of Figure 1, emphasizing social norms against abuse and the negative democratic consequences of abuse also significantly affects people’s inclination to report abusive comments. For the reporting variable, the treatments emphasizing societal norms and democratic consequences show effects of 0.17 SD and 0.19 SD, respectively. However, we see no significant effect on reporting intentions of the treatment emphasizing the personal consequences of abuse. Our findings therefore support H2b and H3b and partially support H1b.
Conclusion and discussion
The results of our experiment clearly indicate that relatively simple and factually correct information regarding online abuse of politicians can have effects on ordinary citizens. Although the information did not affect stated tolerance towards abuse, it did influence citizens’ willingness to write comments in support of abused politicians, regardless of whether the information focused on personal consequences of online abuse, societal norms regarding abuse, or democratic consequences of online abuse. Furthermore, information regarding societal norms and democratic consequences also significantly increased citizens’ willingness to report an abusive comment. Contrary to expectations, there was no significant effect of information regarding personal consequences on citizens’ willingness to report. There may be two explanations for this. First, citizens may be well aware of the personal consequences that politicians suffer from online abuse. For example, it is an issue regularly discussed in Danish news media. If so, the information that we provide to them is not ‘new’ and does not affect their intended behaviour. Second, the treatment may not evoke sympathy among citizens. Politicians are part of an elite group in society – in terms of power and status – and citizens may therefore not be inclined to feel as bad on their behalf as they would if someone from, for example, a marginalized group were being targeted. A related limitation here is that we do not know whether the effect of personal consequences more generally depends on citizens’ co-partisanship with politicians. According to social identity and partisanship theories, citizens may be more likely to feel sympathetic towards politicians who share their political beliefs.
However, while we cannot rule out that this is the case, recent research in Denmark and the USA suggests that partisanship does not influence citizens’ perceptions of online abuse (Eady and Rasmussen, 2024; Pedersen et al., 2025; Petersen et al., 2024).
The main implication of these results is that a way forward in the fight against online abuse of politicians is to design information campaigns that explain to the public how such abusive behaviour is largely undesirable as it hurts politicians’ well-being, breaks with widely held societal norms, and undermines democratic quality. In some Nordic countries, authorities have already experimented with such campaigns (Medietilsynet, 2023), although their effects were not evaluated. We encourage future research to test these information frames in real world settings to validate whether the information treatments can mobilize citizens on social media to act against abuse. Importantly, such research may also examine whether repeated or prolonged exposure to informational treatments would help solidify citizens’ perceptions of the negative consequences and norms of online abuse and increase their willingness to engage in counter speech. That said, we wish to highlight three caveats to our own findings.
First, we note that our experiment documented effects on intended behaviour, not actual behaviour on social media. While these results provide a proof-of-concept in a survey setting, we strongly encourage future research to employ different methods that may better capture citizens’ actual behaviour when provided with information about the negative consequences and societal norms of online abuse against politicians. Furthermore, we cannot categorically rule out that the stated intentions (writing a comment and reporting), to some degree, are driven by social desirability or demand effects. However, recent work suggests that the risk of demand effects in survey experiments may be overblown (Mummolo and Peterson, 2019) and that self-reported intended behaviour on social media correlates fairly well with actual social media behaviour (Mosleh et al., 2020). Importantly, one may even argue that social desirability bias is an important mechanism activated when citizens are provided with information about the social norms and the desirability of adhering to these norms to avoid negative outcomes (e.g., condemnation).
Second, our study was conducted in Denmark, and the effect of our treatments may be somewhat context dependent. For example, emphasizing how online abuse has personal consequences for politicians may be less effective in countries where politicians are viewed with more disdain than in Denmark, where trust in politicians is relatively high (Nielsen and Lindvall, 2021). Still, our information treatments were to a large degree based on general principles, for example, support for democratic institutions, and it is reasonable to expect that these treatments will work outside the Danish context. We encourage researchers to conduct similar studies in other countries to examine the generalizability of our findings.
Supplemental Material
Supplemental material for ‘Countering online abuse of politicians through information about consequences and norms’ by Niels Bjørn Grund Petersen, Rasmus Tue Pedersen, and Mads Thau is available in Research & Politics.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was funded by Trygfonden (Grant ID: 153435).
Carnegie Corporation of New York Grant
This publication was made possible (in part) by a grant from the Carnegie Corporation of New York. The statements made and views expressed are solely the responsibility of the author.