Abstract
Social media offer opportunities for companies to promote their image, but companies online also risk being denounced if their actions do not align with their words. The rise of social media bots amplifies this risk, as it becomes possible to automate efforts to highlight corporate hypocrisy. Our experimental survey demonstrated that bots and human actors who confront a corporation touting its commitment to equality by calling out organizational pay gaps damage perceptions of the corporation, heighten anger toward it, and ultimately can elicit boycott intentions. These hypocrisy challenges are equally effective whether they come from bots or from human user accounts. Challenges to hypocritical behavior on social media are consequential and require further exploration.
In March of 2021, a bot account on Twitter (now X) 1 was created by Francesca Lawson and Ali Fensome to challenge corporations tweeting about their support for International Women’s Day. This account, @PayGapApp, automatically responds to companies listed on the UK government’s Gender Pay Gap Service website with their actual median gender pay differences, as all companies with 250 employees or more in the UK are mandated to report their gender pay gap (Gender Pay Gap Bot—About, n.d.). By the time of this study in March of 2023, this bot account had amassed over 247,000 followers, posted over 11,400 times on Twitter, and had a website dedicated to explaining the process behind its messages. Companies have responded to the bot in various ways, including blocking the account, removing their tweet from the feed, or deleting their initial tweet from public view (Breen, 2022).
The question of how bots are affecting the social media landscape is of great importance. It may be common to assume that the social media landscape is composed entirely of human users, but in the past decade, there has been an uptick in bot accounts across platforms (Ferrara et al., 2016; Hagen et al., 2022). These bots are defined as “automatic or semi-automatic computer programs that mimic humans and/or human behavior” (Wagner et al., 2012) and can fulfill various purposes, from the innocent, like posting weather reports (U.S. Department of Commerce, n.d.), to the nefarious, like spreading misinformation (Himelein-Wachowiak et al., 2021) or sowing discord (Broniatowski et al., 2018).
In this study, we consider the effects of a gender pay bot, calling attention to gaps between what companies profess to value on social media (i.e., support for International Women’s Day) and their actual practices (i.e., a sustained gender wage gap). We compare the ability of a bot versus a human actor to call out corporate hypocrisy. In doing so, we first consider how companies use social media as a site for reputation management and describe the literature about how people respond to corporate hypocrisy. We then elaborate on how bot accounts are changing social media, and whether the source of information—specifically, whether a challenge to corporate hypocrisy comes from a bot versus a human user—may influence people’s responses to such efforts. Our experimental survey study suggests that bots and human accounts are equally effective in challenging corporate hypocrisy, damaging corporate reputation, provoking anger towards the corporation, and increasing people’s boycott intentions. This highlights the potential for bots to reshape the social media landscape as a source of interaction and confrontation, rather than a space for broadcasting corporate good deeds uncontested.
Social Media as Space for Corporate Reputation Management
Social media, as a platform for an organization to communicate with its stakeholders, plays a key role in engaging with them and managing its reputation (Briones et al., 2011). We define social media as an interactive, internet-based channel of mass personal communication that allows for two-way interaction and derives its value primarily from user-generated content (Carr & Hayes, 2014; Kent, 2010). Social media platforms enable organizations to bypass traditional communication channels and institutional media to interact directly with the public (Entman & Usher, 2018). This “flattening” of hierarchies of information control makes it possible for organizations to reach key stakeholders—but also for stakeholders to reach them.
Social media presents a potent channel for an organization’s reputation management efforts, but managing and controlling the message can be difficult (Macnamara & Zerfass, 2012). Corporations can burnish their reputation on social media by enhancing their ability to collect information, strengthen corporate identity, monitor public opinion, and engage with key publics (Y. Wang, 2015). They can also use social media to publicize their corporate social responsibility (CSR) practices, broadly defined as a corporation’s self-governing investment and involvement in its resources to support societal goals (Frederick, 1994).
But user-centered social media platforms are unlike traditional corporate-controlled media in that individual users become media gatekeepers and content-creators who decide how organization-related content is used and shared. This arrangement transfers “the power to define corporate images from corporate communicators to stakeholders’ online networks” (Y. Wang, 2015, p. 9). Therefore, the same tools corporations use on social media are also available to empower activist groups in commanding an organization’s attention (Coombs, 1998).
Research in crisis communication shows that information about an organization disseminated by a third party on social media activates publics’ emotions such as anger, contempt, and disgust (Jin et al., 2014), and that these emotions are contagious (Kowalski, 1996). A single social media user who is dissatisfied with an organization or challenges it publicly can therefore set off a firestorm or pile-on (Einwiller & Steilen, 2015). While this can constitute a viable threat for the organization that is charged with irresponsible or unethical behavior (Coombs & Holladay, 2012), it also offers an opportunity for social media users to have a real impact in altering the organization’s actions. We examine this specifically in the context of an account calling attention to potential hypocrisy between a corporation’s public message (supporting women) and its practices (gender pay inequality).
Targeting Hypocrisy on Social Media
Corporate hypocrisy is defined as “the belief that a firm claims to be something that it is not” (Wagner et al., 2009), which should apply to corporations claiming to support gender equality while also systematically paying women less than men. The mere use of CSR as a branding and marketing tool can be viewed as self-serving, potentially inducing the perception of corporate hypocrisy and backfiring on a firm’s reputation if the firm does not live up to its claims or makes empty promises (Bae & Cameron, 2006; Wagner et al., 2009; Yoon et al., 2006). As such, CSR efforts are particularly susceptible to social media attacks designed to engender hypocrisy perceptions. Past research showed that perceptions of hypocritical action from corporations produce lower perceptions of trust and credibility (Bhatti et al., 2013; Cooper et al., 2019; von Sikorski & Herbst, 2020), stronger negative emotions (Simonovits et al., 2022; von Sikorski & Herbst, 2020), and stronger boycott intentions among consumers (Wagner et al., 2009). Because social reinforcement acts as the engine powering social media (Aral, 2020; Singh & Singh, 2021) and boycott intentions are often powerfully driven by social norms (Delistavrou et al., 2020), posts that stand to damage a firm’s reputation can be critical to our understanding of how consumers are motivated to boycott—and how they motivate others to boycott—in an online environment.
Building from past research, we explore the mechanisms by which social media challenges designed to elicit perceptions of corporate hypocrisy affect corporate credibility and boycott intentions. This question has practical and theoretical value, given the ways in which algorithmically-reinforced social pressures on social media may heighten corporations’ vulnerability to reputation attacks, also referred to as paracrises in social-mediated crisis communication (Coombs & Holladay, 2012). We consider two potential pathways by which challenges may impact corporations. First, we test whether perceptions of corporate hypocrisy explain the harms to corporate credibility and boycott intentions that previous research has uncovered (Klein et al., 2004). We explicitly contrast this pathway with an alternative explanation: that challenges cause moral anger toward the company, and it is this anger (in addition to, or in place of, hypocrisy perceptions) that explains attitudes and behaviors.
Moral anger is characterized as an emotional response arising from the perceived violation of a moral norm (Lindebaum & Geddes, 2016). Those who feel angry about the target company have lower evaluations of the company (Grappi et al., 2013; Kim & Cameron, 2011; Xie & Bagozzi, 2019) and perceive the spokespeople as less trustworthy and less favorable (Clementson & Xie, 2020). Importantly, moral anger can trigger behaviors aimed at correcting the situation, even when these involve personal behavior such as boycott behaviors (Braunsberger & Buckler, 2011; Hino, 2023; Klein et al., 2002). In addition, anger could serve as a mediator between blame attribution and boycott intentions (Shim et al., 2021). Therefore, we compare these two potential mechanisms—perceptions of corporate hypocrisy and moral anger toward the company—for explaining assessments of corporate credibility and boycott intentions.
Bots as Emerging Actors on Social Media
Recently, more scholarly and public attention has been focused on the growing role bots play on social media platforms (Assenmacher et al., 2020; Gorwa & Guilbeault, 2020). Bots are automated accounts that use algorithms to generate content and interact with other users (Howard & Kollanyi, 2016). Although bots attempt to mimic human users (Oberer et al., 2019), bots’ interactions on social media, such as sharing and responding, often lack the responsiveness and variability that are inherent in human interactions, and their activities are more predictable (Cai et al., 2022; Chu et al., 2012). In addition, compared with human users, content generated by bots is less likely to be aligned with the overall mood of an event (e.g., bots share negative posts for a positive event) (Kusen & Strembeck, 2019).
The presence and activity of bots on social media can have significant societal implications. Bots have been criticized for manipulating public opinion (Weng & Lin, 2022), for serving as a weapon for hate speech and mis- and disinformation (Hameleers et al., 2022; Shao et al., 2018; Uyheng et al., 2022; Vosoughi et al., 2018), and for influencing the public agenda (Zhang et al., 2024). Even when bots represent only a relatively small percentage of discussion participants, they can activate a spiral of silence (Cheng et al., 2020; Ross et al., 2019), wherein people avoid speaking out of fear of being isolated when they perceive themselves to be in the minority (Noelle-Neumann, 1974).
Although bots are often perceived negatively for their roles in spreading misinformation and manipulating public discourse, they can also have positive impacts. For example, they can be used as tools for good, such as supporting online activism (Chen et al., 2021; Savage et al., 2016), responding to reduce racial harassment (Munger, 2017), or combatting problematic information on Wikipedia (Jiang & Vetter, 2020; Zheng et al., 2019). In this study, the bot we examined has this salutary intention: to broadcast the disconnect between a corporation’s public words (declaring support on International Women’s Day) and behaviors (pay inequality).
Bots Versus Humans as Sources
Given the increasingly important role bots play on social media (Cheng et al., 2020; Ross et al., 2019; Zhang et al., 2024), we need to further understand how people respond to algorithm-based bots and human users as information sources. Existing literature presents mixed evidence of how people perceive bots and human actors, with most existing research focusing on the perceived credibility of algorithmic versus human-created news (Graefe & Bohlken, 2020; Jia & Liu, 2021; Liu & Wei, 2019; Tandoc et al., 2020; Waddell, 2018; Wölker & Powell, 2021). Some researchers found that there was no significant difference in perceived credibility between a bot and a human user for both Twitter pages (Edwards et al., 2014) and news sources (Tandoc et al., 2020; Wölker & Powell, 2021). In contrast, other researchers found people rated content attributed to automated algorithms as either more objective (Liu & Wei, 2019) or less credible (Jia & Liu, 2021; Waddell, 2018) than human authors. One meta-analysis found that participants perceived news purportedly written by a human source as more credible than news written by an algorithm (Graefe & Bohlken, 2020). However, this question of bot versus human sources has not been studied in the context of challenges to corporate hypocrisy in CSR efforts online.
Bots, due to their algorithm-based machine nature, might be perceived as less human-like, potentially more impartial, and could elicit less emotional engagement than human authors (Liu & Wei, 2019; Wischnewski et al., 2022). However, it is also possible that no significant difference exists between social media bots and human authors, as research also indicates that bots were perceived as equally credible, competent, fair, and objective compared with human authors, and that interactional intentions did not differ for content attributed to humans versus bots (Edwards et al., 2014). Therefore, we contribute to the literature comparing bots and human sources by extending it to a new space: their ability to challenge corporate hypocrisy on social media.
Research Questions and Hypotheses
This study tests several specific hypotheses and research questions based on the existing literature. Building from previous research on corporate hypocrisy (Klein et al., 2004), we propose our first hypothesis:
Hypothesis 1 (H1). A corporation that is challenged for hypocrisy will have (a) higher perceived corporate hypocrisy, (b) lower corporate credibility ratings, (c) higher anger toward the company, and (d) higher boycott intentions than in the control condition.
We go beyond existing research in our next hypotheses to examine how these challenges to corporate hypocrisy are uniquely responsive to the affordances of social media. In particular, we ask whether challenges from an (unknown) user versus a clearly labeled bot will elicit different public responses to corporate hypocrisy. Given the contradictory findings in existing literature in terms of perceptions of bots versus human actors on social media (Jia & Liu, 2021; Liu & Wei, 2019; Waddell, 2018), we explore whether these two actors differ in their effects on responses toward the corporation (RQ1) and in evaluations of the challenger itself (RQ2):
Research Question 1 (RQ1). Will a bot versus a user challenging the corporation for hypocrisy differ in terms of (a) perceived corporate hypocrisy, (b) corporate credibility ratings, (c) anger toward the company, and (d) boycott intentions?
Research Question 2 (RQ2). Will a bot versus a user challenging a corporation for hypocrisy differ in terms of the challenger credibility ratings?
Finally, we offer a theoretical model by which social media challenges to corporate hypocrisy from bots versus humans affect perceptions of corporate credibility and boycott intentions. Specifically, we follow Shim and Yang (2016) to propose that perceptions of corporate hypocrisy will mediate the effects of the challenge on corporate credibility and boycott intentions. Likewise, we expect that moral anger elicited in response to the challenge of corporate hypocrisy (e.g., gender discrimination) will mediate the effects of the challenge on consumers’ attitudes and behaviors toward the target company (Krishna et al., 2021; Z. Wang et al., 2020; Z. Wang & Zhu, 2020). We add to this literature by exploring which of these processes serve as a better explanation for attitudinal and behavioral outcomes (RQ3) as well as whether these mechanisms for explaining possible effects differ depending on whether the challenge comes from a bot versus a human actor (RQ4):
Hypothesis 2 (H2). The effects of the public challenge on (a) corporate credibility and (b) boycott intentions will be mediated by heightened anger toward the corporation.
Hypothesis 3 (H3). The effects of the public challenge on (a) corporate credibility and (b) boycott intentions will be mediated by heightened perceived hypocrisy of the corporation.
Research Question 3 (RQ3). Will anger or perceived corporate hypocrisy serve as a better explanation for the mediation effects of the hypocrisy challenge on (a) corporate credibility or (b) boycott intentions?
Research Question 4 (RQ4). Will the bot versus user challenger differ in terms of the mediating pathways predicting (a) corporate credibility or (b) boycott intentions?
Methods
To test these research questions and hypotheses, we used a pre-registered experimental design 2 to precisely manipulate the source (bot versus human) of a challenge to corporate hypocrisy to explore its effects on our outcomes of interest. We used Prolific to recruit 600 participants from the United Kingdom in March of 2023, paying them £1.25 for taking our 7-minute survey. Our participants skewed female (65.5%), educated (54% had a Bachelor’s degree or higher), White (87.2%), and younger (M = 37.87, SD = 12.58) than the UK population.
After a short pre-test questionnaire, participants were shown a simulated Twitter feed. In our control condition, they saw four filler tweets unrelated to the topic of the study. In our two experimental conditions, one of the filler tweets was replaced with a tweet from Procter & Gamble (P&G) promoting their efforts to support equality in honor of International Women’s Day, shown alongside the remaining three filler tweets. In both experimental conditions, this P&G tweet appeared as a quote tweet, with the quoting message emphasizing that P&G paid women 21% less than men (see Figure 1). The challenging messages were identical except for the source of the message: either a human user (e.g., Taylor Jacobsen) or a bot account (e.g., the Gender Pay Gap Bot). Therefore, we are able to precisely isolate the effect of this source cue on the ability of human versus bot accounts to challenge corporate hypocrisy on social media.

Figure 1. Example stimuli.
After seeing the simulated social media feed, participants first answered a series of manipulation checks about the tweets they saw on the feed (all), what the quoted tweet said about the pay gap at P&G, and the source of the quoted tweet (experimental conditions only). Then, they rated their agreement with statements to measure the five key outcomes: perceptions of corporate hypocrisy, corporate credibility, anger toward the corporation, bot/user credibility, and boycott intentions. The participants who failed the attention check (n = 3) were excluded from the analysis. All analyses match the pre-registration unless otherwise noted.
Measures
Corporate Hypocrisy
Perceptions of corporate hypocrisy are measured by asking the participants to rate the following statements for P&G on a 7-point scale from “Strongly Agree” to “Strongly Disagree”: “P&G acts hypocritically,” “What P&G says and does are two different things,” “P&G pretends to be something it is not,” “P&G does exactly what it says” (reversed), “P&G keeps its promises” (reversed), and “P&G puts its words into action” (reversed). This scale was adapted from the study by Wagner et al. (2009). Factor analysis confirmed a single-factor structure (α = .94, M = 4.40, SD = 0.97).
Moral Anger
Moral anger toward the corporation is measured by asking the participants to rate the following: “Please indicate your response using the scale provided. While you read the tweet about P&G, to what extent did you experience these emotions toward P&G?” (on a 7-point scale from “Not at all” to “Very much”). This measure is adapted from the scale developed by Harmon-Jones et al. (2016). Differing from our pre-registration, we only include three items (anger, rage, and disgust) in our analysis, as recent research indicates that moral emotions, such as moral anger and disgust, are elicited by manipulations of hypocrisy (Laurent et al., 2014; Shim et al., 2021; Z. Wang et al., 2020; Z. Wang & Zhu, 2020) (α = .93, M = 2.47, SD = 1.55). 3
Corporate Credibility
Corporate credibility is measured by asking the participants to rate eight statements for P&G on a 7-point scale from “Strongly Agree” to “Strongly Disagree.” Example statements range from “P&G has a great amount of experience” to “I do not believe what P&G is telling me” (reversed) (Newell & Goldsmith, 2001) (α = .85, M = 4.27, SD = .79).
Boycott Intentions
Boycott intentions are measured by asking the participants to rate the three statements for P&G: Statements range from “I would recommend others to avoid P&G products” to “I will purchase products made by P&G” (reversed) to “I would feel guilty if I bought a P&G product” on a 7-point scale from “Strongly Agree” to “Strongly Disagree.” This measure is adapted from the work by Shim et al. (2021) (α = .73, M = 3.49, SD = 1.07).
Challenger Credibility
Bot/User credibility is measured by asking the participants to rate the following to evaluate the account that quote retweeted the P&G tweet on a 7-point scale: the account is “Informed/Uninformed” (reversed), “Incompetent/Competent,” “Inexpert/Expert,” “Cares about me/Doesn’t care about me” (reversed), “Concerned with me/Unconcerned with me” (reversed), “Has my interests at heart/Doesn’t have my interests at heart” (reversed), “Untrustworthy/Trustworthy,” “Honest/Dishonest” (reversed), “Unethical/Ethical,” and “Moral/Immoral” (reversed). This measure is adapted from the study by McCroskey and Teven (1999) (α = .88, M = 4.11, SD = .75).
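The scale construction described above (reverse-coding the negatively worded items, then checking internal consistency with Cronbach’s alpha) can be sketched as follows. This is an illustrative implementation with made-up responses, not the authors’ analysis code; the simulated data are only meant to show how reverse-worded items are folded into a single scale.

```python
import numpy as np

def reverse_code(responses, scale_max=7):
    """Reverse-score a 1..scale_max Likert item (1 -> 7, 7 -> 1)."""
    return scale_max + 1 - np.asarray(responses)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of scale totals
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical responses to six hypocrisy items: three positively worded,
# three reverse-worded (e.g., "P&G keeps its promises").
rng = np.random.default_rng(0)
latent = rng.integers(2, 7, size=200)                         # underlying attitude
noisy = lambda x: np.clip(x + rng.integers(-1, 2, size=200), 1, 7)
positive = np.stack([noisy(latent) for _ in range(3)], axis=1)
negative = np.stack([noisy(8 - latent) for _ in range(3)], axis=1)

# Reverse-code the negatively worded items before averaging into one scale
scored = np.hstack([positive, reverse_code(negative)])
alpha = cronbach_alpha(scored)   # high, since all items track one factor
```

The same scoring pattern applies to each scale in this section; only the items and the number of reversed statements change.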
Results
To test H1, we used a series of analyses of variance (ANOVAs) with the two experimental conditions combined to compare against the control condition, run separately for each dependent variable as pre-registered. H1 is supported across three of the four dependent variables (see Table 1), as exposure to a challenge produced significantly higher perceptions of corporate hypocrisy, higher anger toward the company, and lower credibility ratings for the company—but did not significantly affect boycott intentions.
Table 1. Experimental Effects on Dependent Variables Comparing Experimental Conditions to Control.
***p < .001, **p < .01, *p < .05; Different subscripts indicate significant differences between conditions for that DV, p < .05.
RQ1 asked whether there would be differences in the effects of the hypocrisy challenge depending on whether the source was a bot versus an anonymous social media user. To test this question, we again ran a series of ANOVAs, with the three experimental conditions as the independent variable (see Table 2). The results reinforce H1: both the user and bot challenges influenced corporate hypocrisy, anger, and credibility, but not boycott intentions. In no case did the bot versus the user challenges differ in these outcomes, even using the more permissive Least Significant Difference test. In addition, the bot and user challengers did not significantly differ in their perceived credibility, per RQ2.
Table 2. Experimental Effects on Dependent Variables Among Full Sample.
***p < .001, **p < .01, *p < .05; Different subscripts indicate significant differences between conditions for that DV, p < .05.
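The two ANOVA setups reported above (the challenge conditions pooled against the control for H1, and the three conditions entered separately for RQ1) can be illustrated with simulated data. The means and sample sizes below are invented to loosely echo the reported pattern (challenge conditions above control on perceived hypocrisy), and `scipy.stats.f_oneway` stands in for the pre-registered analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated 7-point corporate-hypocrisy ratings, ~200 per condition;
# the effect sizes here are illustrative, not the study's data.
control = np.clip(rng.normal(3.9, 0.95, size=200), 1, 7)
bot     = np.clip(rng.normal(4.6, 0.95, size=200), 1, 7)
user    = np.clip(rng.normal(4.6, 0.95, size=200), 1, 7)

# H1: both challenge conditions pooled against the control condition
f_h1, p_h1 = stats.f_oneway(np.concatenate([bot, user]), control)

# RQ1: all three conditions entered separately
f_rq1, p_rq1 = stats.f_oneway(control, bot, user)
```

A follow-up pairwise comparison (such as the Least Significant Difference test mentioned above) would then check whether the bot and user conditions differ from each other.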
Our next set of hypotheses proposed that perceptions of corporate hypocrisy and anger toward the corporation would mediate the effects of the challenge on perceptions of corporate credibility and boycott intentions. To test these hypotheses, we used the PROCESS macro version 3.3 (Hayes, 2017), Model 4, with a heteroskedasticity-consistent estimator. We find support for H2: the effects of the challenge (either from the bot or a Twitter user, per RQ4) on perceptions of corporate credibility are fully mediated by both perceptions of corporate hypocrisy and anger toward the corporation (see Figure 2, Table 3), with the direct pathway between the challenge and perceptions of corporate credibility reduced to non-significance. For corporate credibility, perceptions of corporate hypocrisy appear to be a stronger pathway, as indicated by the lack of overlap between the confidence intervals for the indirect pathways for anger versus perceptions of corporate hypocrisy (per RQ3).

Figure 2. Mediation effects on corporate credibility.
Table 3. Indirect Effects on Corporate Credibility.
Bolded rows signal significant indirect effects, as indicated by the confidence interval that does not include 0. SE = standard error; CI = confidence interval.
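The indirect-effect logic that PROCESS Model 4 implements (the a-path from challenge to mediator multiplied by the b-path from mediator to outcome, with a percentile bootstrap confidence interval) can be sketched in plain NumPy. The data below are simulated to mirror only the direction of the reported effect (the challenge raises perceived hypocrisy, which lowers credibility); all coefficients are invented, and this sketch omits the heteroskedasticity-consistent standard errors the actual analysis used.

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b: slope of m ~ x times the m-coefficient in y ~ x + m."""
    a = np.polyfit(x, m, 1)[0]                            # a-path
    design = np.column_stack([np.ones_like(x), x, m])
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return a * coefs[2]                                   # a * b

def bootstrap_ci(x, m, y, n_boot=1000, seed=0):
    """Percentile bootstrap 95% CI for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    draws = [indirect_effect(x[i], m[i], y[i])
             for i in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(draws, [2.5, 97.5])

# Simulated data: challenge (0/1) raises hypocrisy, which lowers credibility
rng = np.random.default_rng(1)
n = 400
challenge = rng.integers(0, 2, n).astype(float)
hypocrisy = 4.0 + 0.6 * challenge + rng.normal(0, 1, n)
credibility = 6.0 - 0.5 * hypocrisy + rng.normal(0, 1, n)

lo, hi = bootstrap_ci(challenge, hypocrisy, credibility)
# A confidence interval that excludes zero indicates a significant indirect effect
```

This is only meant to show why a bootstrap interval excluding zero, as in the bolded rows of Table 3, supports mediation even when the direct pathway is non-significant.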
Likewise, although we did not find a main effect of our challenge on boycott intention toward the corporation, our mediation model provides some evidence for why this occurred (see Figure 3, Table 4). There is a significant indirect effect of the bot and user challenge (per RQ4) on boycott intentions via both perceptions of corporate hypocrisy and anger toward the corporation. These effects are roughly equivalent in size, as indicated by the overlapping confidence intervals for each indirect pathway (per RQ3). However, these positive indirect pathways on increasing boycott intentions are offset by the negative direct pathway between the challenge and boycott intentions that remains, an unanticipated result.

Figure 3. Mediation effects on boycott intentions.
Table 4. Indirect Effects on Corporate Boycott Intentions.
Bolded rows signal significant indirect effects, as indicated by the confidence interval that does not include 0. SE = standard error; CI = confidence interval.
Additional Analysis: Recall
One difficulty with social media manipulations, however, is that attention and recall for social media stimuli tend to be quite low (Ellison et al., 2011; Lang, 2000; Smith & Duggan, 2016; Vraga et al., 2016). This is also the case in our study. In our two experimental conditions, only two-thirds (65.0% bot, 66.3% user) correctly identified that the tweet claimed P&G pays men more than women. Even fewer recognized the source of the challenge: only 27.4% recognized that it came from a bot in the bot condition; 48.2% reported it originated from a user in the user condition.
Therefore, as indicated in our pre-registration, we replicate our analyses among those who recalled the position of the challenge tweet (that P&G pays men more than women). This strengthened but did not change our findings reported above (see Supplemental Table A1). Likewise, looking only at those who correctly recalled the content and source of the challenge largely produced the same results: either challenge was (equally) effective in changing perceptions of corporate hypocrisy and anger toward the corporation (see Supplemental Table A2); both the bot and the user challenge were seen as equally credible. While the bot appeared somewhat more successful in lowering corporate credibility and increasing boycott intentions as compared to the user, these results must be interpreted with caution given the small sample size and the likely differences between those who can recall the challenger’s position and source and those who cannot.
Discussion
This article addresses the question of whether challenges to corporate hypocrisy on social media are effective in changing attitudes and behavioral intentions toward the company. We pay special attention to the source of such challenges, exploring the question of whether bots are equally effective as human actors in challenging corporate hypocrisy. Our experiment found that bots and human actors largely function similarly: when a bot or a human challenges a corporation for failing to pay men and women equitably, participants not only view the corporation as hypocritical, but they also experience a sense of moral anger toward the corporation and view the corporation as less credible. Importantly, we also provide a mechanism for downstream effects: Participants who expressed moral anger and viewed the corporation as hypocritical saw the corporation as less credible and were more likely to say they would boycott.
These findings add to our understanding of how bots and human-presenting accounts are interpreted in social media settings. This is an active area of inquiry; it remains unclear how people rate the credibility, neutrality, and authority of bot versus human actors (e.g., Graefe & Bohlken, 2020; Jia & Liu, 2021; Liu & Wei, 2019; Waddell, 2018). Here, we find that bots serve as equally effective challengers to corporate hypocrisy. Theoretically, this suggests that work remains in considering how people respond to bot activity on social media, and this work is urgent beyond examining their effects on news consumption behaviors (Graefe & Bohlken, 2020; Jia & Liu, 2021; Liu & Wei, 2019; Tandoc et al., 2020; Waddell, 2018; Wölker & Powell, 2021). For corporate communication strategies, our current results suggest that advocates and advocacy organizations can utilize bot technology to challenge the hypocritical behavior of corporations, rather than devoting manpower to this work. This produces a more scalable method to draw public attention to the misdeeds of corporations.
Of course, the social media environment is constantly changing. Among other major shifts that made global headlines, in February 2023, Twitter announced that it would begin charging third-party developers, including those who run bots, for access to its API, a service that was previously free. Access to the API enables programmatic access to Twitter’s data, making it possible for bots to autonomously post and respond to tweets (Barnes, 2023). While the crackdown was intended to reduce bots, this policy change was later partially reversed, with the announcement that a “light” version would still be available for free (Binder, 2023). More recent research suggests that bots are an increasing problem on Twitter as the company shrinks its content moderation efforts (Henriksen & Wang, 2022; Taylor, 2023; Yang & Menczer, 2023). Bots that serve as “watchdog” accounts could be shut down at any moment, as Twitter CEO Elon Musk (2023) tweeted that Twitter will allow bots continued free access to the API provided they post “good content.” What Twitter and Musk deem “good content” remains an open question—and a potential threat to said watchdog accounts. The instability and uncertainty surrounding Twitter’s bot policies highlight the importance of scholarly focus not only on malicious bots but also on bots designed to bring attention to social issues and the societal implications of attempting to categorize the two.
At a more theoretical level, we also must recognize not just the growth of bot accounts but also accelerating interest in artificial intelligence (AI) in general. The development of AI has important implications for bots on social media. AI can strengthen the automated communication done by bots (Hepp, 2020). As AI makes bots more sophisticated (e.g., more responsive and human-like) (Chang & Ferrara, 2022), it will be harder to distinguish between bots and human users on social media (Ferrara, 2023). In addition, with further advancement of Artificial General Intelligence (AGI), a hypothesized future form of AI that surpasses human intelligence (McLean et al., 2023), bots could evolve to become the most active and influential actors on social media. Bots’ growing sophistication has raised many concerns about their negative outcomes (Hameleers et al., 2022; Shao et al., 2018; Uyheng et al., 2022; Vosoughi et al., 2018; Weng & Lin, 2022), but these AI-driven tools may also help the public to counter bad social bots (e.g., Botometer) (Yang et al., 2019), enhance the detection of malicious social bots (Zago et al., 2019), and advance the quality of interactions and user experience on social platforms (Hepp, 2020). They may also enable users to challenge hypocritical corporations more efficiently, producing the kinds of consequences our study describes.
We should also consider the ethical implications of using bots, even for goodwill, such as deploying them to challenge corporate wrongdoing. For example, human users might not be aware that they are interacting with bots (Marechal, 2016), even when the bots are clearly labeled, because these automation tools can garner “unearned social capital” and can be perceived as disrespectful by other human users on social media (Coleman, 2018, p. 124). Moreover, bots more generally can hijack social media hashtags, disrupt online conversations (Marechal, 2016), manipulate user behavior (Guilbeault, 2016), and affect public opinion (Bastos & Mercea, 2018).
Given this evolving landscape and these ethical concerns, it is important to recognize that users can continue to perform the work of challenging corporate hypocrisy themselves across many social media platforms, including those whose affordances do not allow for bots (or can change rapidly, as in the case of Twitter); doing so can also minimize some of the ethical questions raised by using bots. However, such challenges still require access to high-quality information, in this case from the UK government.
This research also extends previous work on corporate hypocrisy into a new space: social media. Much as in other spaces (Bhatti et al., 2013; Cooper et al., 2019; Simonovits et al., 2022; von Sikorski & Herbst, 2020), once a corporation was challenged on Twitter for hypocrisy, people perceived it as more hypocritical and less credible and experienced more intense moral anger toward the company. Corporations using social media to promote their reputations must be aware of this risk and ensure that what they say they value matches how they behave to avoid unintended negative consequences (Rim et al., 2020; Xu & Chang, 2023).
However, our results also suggest that these challenges on social media may go unnoticed by many. The low recall for what the bot said (65.0% bot, 66.0% user) still likely overestimates real-world attention; while individuals can gain information through incidental exposure in an online media environment, they also engage in information filtering (Prior, 2007). Our effects were even stronger among those who recalled what the bot said, signaling that understanding what makes some content more easily recalled than others is an important ancillary aspect of this research. Regardless of the reason for the low recall, it presents an additional difficulty for the visibility of social media challenges: they can be effective, but only if they are seen.
We also do not know whether users who follow these kinds of accounts in the real world differ from the participants in our study. It is possible that users motivated to follow these types of watchdog or activist accounts already hold higher levels of moral anger toward corporations that such challenges can activate. If so, we might expect challenges to have even stronger effects among these populations primed for anger, given anger’s role as a mechanism affecting perceptions of credibility and boycott intentions toward the company. Future research should replicate this experimental design using other methods and approaches to more closely approximate real-world experiences with social media platforms.
In addition, we studied this question in the context of a corporation not upholding its stated ideals of gender equality. It remains an open question how challenges to hypocritical behavior on social media may function for other issues or when targeting prominent actors other than corporations. The fact that our sample was not representative, particularly in over-representing women, may thus have strengthened our findings regarding responses to corporate hypocrisy in this context. Moreover, in our case, the bot produced accurate information about corporate hypocrisy, but that need not be the case. Future research should continue to explore how challenges function on social media in contexts beyond corporate hypocrisy, how corporations (or others) can respond if unfairly criticized, and how this process unfolds on platforms other than Twitter.
Ultimately, this study offers important theoretical and practical insights into how bots and humans challenging corporate hypocrisy on Twitter are perceived by individuals. On Twitter, an emotional reaction can be triggered by a challenge to corporate hypocrisy from either a human or a bot, which can translate into perceptions of the company overall and intentions to boycott its products. This insight provides a window into how social media firestorms and pile-ons, particularly those aimed at corporations, can gain speed and momentum in practice, as a single user or bot can activate an emotional response among viewers of their challenging tweet. This returns us to the central question of our study: Who can challenge corporations on social media? For now, the answer seems to be anyone.
Supplemental Material
Supplemental material, sj-docx-1-sms-10.1177_20563051241292578, for “Bot Versus Humans: Who Can Challenge Corporate Hypocrisy on Social Media?” by Serena Armstrong, Caitlin Neal, Rongwei Tang, Hyejoon Rim and Emily K. Vraga in Social Media + Society.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
Funding for this article was provided by the Don and Carole Larson Endowed Professorship through the University of Minnesota.
