Abstract
Coordinated inauthentic behaviour online is becoming an increasingly serious problem throughout the world. One common type of manipulative behaviour is astroturfing: an entity artificially creates an impression of widespread support for a product, policy, or concept when in reality only limited support exists. Online astroturfing is often treated as just another coordinated inauthentic behaviour, with considerable discussion focusing on how it aggravates the spread of fake news and disinformation. This paper shows that astroturfing creates additional problems for social media platforms and the online environment in general. The practice of astroturfing exploits our natural tendency to conform to what the crowd does, and because of the importance of conformity in our decision-making process, the negative consequences brought about by astroturfing can be far more far-reaching and alarming than the spread of disinformation alone.
Keywords
1. Introduction
Recent events have highlighted just how influential social media can be, both in a national context and internationally. To list a few examples: platforms like Twitter and Facebook played a prominent role in the events surrounding the recent US presidential elections; social media and messaging platforms made possible the many decentralized mass protests that have popped up around the globe, from the pro-democracy movements in Hong Kong, Thailand and Belarus to the Black Lives Matter protests in the United States; and of course, the whole of the internet, for better or worse, played a role in shaping how the world responded to the COVID-19 pandemic. But with great power comes great potential for manipulation and misuse. Internet trolls, fake accounts and the spread of disinformation are becoming serious problems, so much so that four years ago Facebook developed a team to deal with ‘coordinated inauthentic behaviour’ (CIB), a term coined by Facebook itself (Gleicher 2018). Relatively new, CIB is not a well-defined concept (Douek 2020), but it generally refers to activities undertaken by groups of people working together to mislead others about who they are or what they are doing. One common type of such manipulative behaviour is astroturfing. This happens when an entity artificially creates an impression that there is widespread sentiment in favour of or against a product, policy, or concept, when in actuality no such sentiment exists, at least not to the extent imputed (Schill 2014). The focus of ‘Big Tech’ companies on cybersecurity seldom includes consideration of astroturfing in isolation, but only as one of many inauthentic behaviours in the online environment that must be prevented. Academic commentators who do single out the problem of astroturfing in their discussions usually focus on how it spreads fake news or disinformation. 
Astroturfing is indeed closely connected to the problem of disinformation, but it would be wrong to treat it only as such. The purpose of this paper is first to properly define and distinguish astroturfing from other CIBs, and then to show how astroturfing can be problematic beyond the spread of false information. My goal is to show that it is a distinct problem that calls for special attention and will require its own remedy.
In Section 2, I will define and distinguish astroturfing from other CIBs, focusing on the way astroturfing hinges upon our tendency to follow the crowd. To understand the effects of astroturfing, one must first understand how crowd action and sentiment affect us. This will be addressed in Section 3, where I will analyse the crowd’s effects on our decision-making process. I will show that it influences us in two distinct ways: it changes what we believe, and it creates value for us when we conform. Understanding these influences will lay the groundwork for the next step in the analysis, which considers the consequences one suffers when this ‘crowd’ turns out to be inauthentic. This will be done in Section 4, where I will discuss how the consequences of astroturfing are similar to those of typical false marketing practices. This parallel should be quite straightforward, since astroturfing involves, first and foremost, some kind of deception. The key argument of this paper is set out in Section 5, where I will attempt to show how the problem of astroturfing extends far beyond the issues of disinformation and false marketing. Here, I will bring out the more alarming effects of astroturfing: its potential to spread and stabilize suboptimal norms within society and to negatively impact the development of crowd wisdom, an epistemic tool that can be highly valuable to us.
2. Online astroturfing
Astroturfing is not exclusive to the online environment. It also happens in real life when actors are paid to attend protests, sign petitions or crash town hall meetings. Classic astroturfing, as the name suggests 2 , refers to fake grassroots activism. It involves corporations or political parties using hired agents to create the impression that a view they wish to promote has widespread public support (Henrie and Gilde 2019). Genuine grassroots movements start from the ground up, from ordinary members of the public (Zickiene 2019). At the start of the movement, there is often no charismatic leader, no fancy marketing campaigns and no financial incentives on offer; the only way for the movement to attract participation is through the merit of its objective and ideology. A decentralized large-scale grassroots movement is therefore particularly powerful because it shows that numerous otherwise unconnected and independent individuals have all come together to support a common cause. This effect is precisely what astroturfing seeks to emulate. It wants to create the impression that a certain opinion or message is highly credible, by pretending that it comes from a large number of unconnected independent individuals, when in reality it is all the result of a coordinated effort brought about by a centralized source.
With the rise of the internet, astroturfing practices increased exponentially. The internet provides a low-cost, efficient and anonymised way to disseminate a huge amount of information within a short period of time. Instead of having to pay hundreds of actors to attend a protest, a company can just hire a single individual with basic programming skills to create millions of fake social media accounts (often referred to as ‘sock puppets’). These fake personas can be as diverse as one might like, with various distributions of age, gender, ethnicity, average income, geographical location, political profiles, etc. These accounts are sometimes run by actual agents who determine what to post and what actions to carry out; but they can also be run by automated algorithms, such as ‘social bots’ 3 , which try to mimic actual human users. Some of these bots are more basic and may only carry out standard tasks such as ‘liking’ or ‘retweeting’ from specific users or posting predetermined messages. But the latest social bots are becoming more advanced and can even interact naturally and smoothly with other real users on social media. All of this means that, with just a few clicks, a business can now easily make it seem as though hundreds of independent customers have ‘liked’ a photo of its new product; or a political party can use thousands of seemingly authentic accounts to ‘share’ disparaging news of its opponents. For the sake of clarity, in this paper, I use the term ‘astroturfer’ to refer to the mastermind behind the scenes organizing the campaign (the corporation, political party, public relations firm, etc.), and the term ‘astroposter’ to refer to the frontline staff carrying out the instructions and actually creating content on the internet, whether manually or via sock puppets or bots.
The issue of astroturfing has received quite a bit of attention in the social, legal and political literature, as well as in information technology research. 4 These accounts define astroturfing slightly differently and focus on various characteristics of the strategy. A common thread running through all of them is ‘deception’. For example, Zhang, Carpenter and Ko (2013, 2561) define online astroturfing as ‘the dissemination of deceptive opinions by imposters posing as autonomous individuals on the Internet with the intent of promoting a specific agenda’ (my emphasis). The idea of ‘deception’ here echoes the concept of ‘inauthenticity’ in other CIBs found on the internet. But what exactly does this ‘deception’ involve? Nathaniel Gleicher, Head of Cybersecurity Policy for Facebook, explains that many posts removed for CIB are not flagged because of the content they share; the posts themselves may not carry a false statement or a piece of fake news. In the same vein, Zhang, Carpenter and Ko (2013, 2561) note that astroturfing messages can be either false or genuine. What is worth highlighting is that even though CIB strategies have often been used to spread misleading information or fake news, 5 astroturfing can in fact involve statements and actions that go beyond what can be categorized as true or false. Many of the activities carried out by astroposters do not include substantive statements of any kind. Sometimes they may just involve ‘following’ a Twitter account or ‘liking’ certain tweets posted by a political candidate. 6 Other times, the CIB could be ‘cheerleading’ for a particular entity and ‘booing’ its competitors, without engaging in any meaningful online discourse. 7 These are not strictly propositions that can be judged ‘true’ or ‘false’ in the traditional sense. 
The kind of ‘deception’ involved in astroturfing therefore does not directly stem from astroposters posting incorrect information or ‘unfacts’ on the internet (although this is a common occurrence). Instead, the root of the deception lies in the source and validation of what is posted. 8 This distinction will become relevant when we turn to look at the problems created by astroturfing in Section 4 and Section 5.
Different CIBs mislead their audiences about the source of their content in different ways. Sometimes an account pretends to be operating from a particular country or region; other times a post pretends to express the opinions of a panel of experts; in some cases, internet influencers endorse a product without disclosing the sponsorship backing the activity. All of these strategies aim at manipulating the audience’s perception of the content’s credibility. Astroturfing is a particular type of CIB: it pretends that the information, beliefs and opinions posted are supported by an organic and natural accumulation of unconnected regular individuals, when in fact they are the product of an orchestrated effort by a centralized entity. Thus, the deception here is rooted in the inauthenticity of the apparent crowd sentiment and action. What the astroposters have posted may technically not be (or cannot be categorized as) ‘false’, but the audience is still deceived because the way the apparent crowd action comes about is inauthentic.
In light of this, I will define online astroturfing as follows:

Online astroturfing: a practice where a centralized source disseminates colluded information on the internet pretending that such information comes from a large number of unconnected individuals.
The key thing to note from this definition is that astroturfing by its nature involves a pretense. As such, it is helpful to analyse its effects in two separate layers. In the first layer, I will look at how majorities and the crowd 10 typically affect our decision-making processes. Astroturfing pretends to create genuine crowd sentiment; assuming it succeeds in hiding its true nature, it will affect an agent’s decision-making process just like any crowd action would. In the following section, I will first look at how typical crowd action and sentiment can change our beliefs and behaviours. Then I will turn to what happens when the crowd turns out to be fake: what are the consequences of having acted upon something false? This is the second layer of the analysis. The two layers are closely related but distinct, and so must be kept separate in the analysis.
3. Analysing our tendency to conform
There is an abundance of studies and experiments showing that human beings have a tendency to conform their actions to those of the majority. 11 This is often considered a problematic trait or a shortcoming in human nature. Social psychologist Solomon Asch conducted a famous experiment showcasing the effects of the majority on an individual’s actions. In his comments on the experiment, he noted: ‘That we have found the tendency to conformity in our society so strong that reasonably intelligent and well-meaning young people are willing to call white black is a matter of concern. It raises questions about our ways of education and about the values that guide our conduct’ (Asch 1955, 34). In his book on conformity, Cass Sunstein (2019, chap. 2) discusses how groups consisting of conforming individuals are often vulnerable to information cascades, whereby a large number of people end up thinking or doing the same thing as a result of the beliefs or actions of just a few early movers (Sunstein 2019, 35). He considers information cascades the main cause of what are known as ‘bandwagon diseases’: a phenomenon in which doctors adopt problematic treatments that have not been sufficiently tested because of their tendency to follow what the rest of their colleagues are doing (Sunstein 2019, 90). More recently, the term ‘groupthink’ has gained prominence in popular discussion. The theory was proposed by psychologist Irving Janis back in the 1970s to investigate the antecedent conditions that give rise to poor decision-making within a group. The term has since been adopted by other disciplines and the general public to criticize defective decision-making due to conformist pressures within a group (Pratkanis and Turner 2017).
The practice of astroturfing targets this natural tendency of ours to follow the crowd. If I am inclined to do what the crowd does, pretending that a policy or opinion has the crowd’s support is a way to manipulate my decisions on the issue. To avoid such risks arising from our tendency to conform, many people suggest individualism as the solution. This is not new. Philosophers have long celebrated the uniqueness of the individual and the courage to be different. Kant’s famous motto “Sapere aude! Have the courage to make use of your own intellect!” (Kant 1784/2008, 17) highlights the role of each individual in the enlightenment. In the 19th century, John Stuart Mill praised and celebrated innovation and individuality while arguing that conformity hinders happiness and social progress (Mill 1859/2016, 52). If we adopt this line of thought, astroturfing would not be a difficult problem to solve; at least, it would not be a severe enough problem to warrant regulatory intervention. Consider for example the Federal Trade Commission (FTC) in the US, which is responsible for protecting consumers against deceptive and unfair business practices. The way the FTC achieves this is not, however, to enact detailed rules and enforce them against every instance of deceptive or unfair behaviour. This would be neither practical nor ideal. Instead, most of the time, the FTC adopts a hands-off approach and prefers self-regulation. 12 More specifically, the FTC often only steps in if ‘injury to the consumer cannot reasonably be avoided’ (Leiser 2016, 8; Scott 2019, 448). If individualism were the answer, then one can imagine how the potential harm from astroturfing could easily and reasonably be avoided: consumers would simply have to follow Kant and Mill’s teachings, and reasonable consumers would exercise independent critical thinking and conduct their own due diligence instead of blindly following the crowd (whether a real crowd or a crowd of astroposters). 
If this is what a reasonable consumer does (or should do), then there would be no reason for the FTC to step in and prohibit astroturfing practices. 13 Unfortunately, the problem is not so simple. What makes astroturfing a unique and complex problem is that, contrary to popular criticism, our tendency to conform is not always problematic, and individualism is not the ultimate solution. In fact, conforming to what the majority or the general crowd does is often rational, reasonable, or otherwise justified. And if this is the case, there is no simple formula for avoiding astroturfing. The purpose of this section is to explain the rationality underlying our tendency to conform, which will lay the groundwork for understanding the true damage that can be brought about by astroturfing practices.
Social psychology typically understands conformity in two distinct ways – informational and normative. 14 Different social psychologists use these or similar terms to mean slightly different things. In general, informational conformity happens when an individual imitates the action of the crowd based on the belief that the crowd has the best and most accurate information on how to behave in the given physical environment. On the other hand, normative conformity happens when an individual imitates the action of the crowd to facilitate social interaction, such as gaining acceptance within a group or reinforcing her group identity. Let us look at these two types of conformity in turn.
Informational conformity
With informational conformity, the crowd affects what the agent believes about the real world. The rationality underlying this effect finds support in evolutionary biology, the social sciences and epistemology. Using statistical decision theory, evolutionary biologists Richerson and Boyd looked into the effectiveness of social learning in a variable environment. They found that natural selection favours individuals who imitate the more common behaviour within the group over those who imitate randomly chosen behaviours, because behaviours favoured by selection in a particular environment tend to be more common in that environment (Boyd and Richerson 2005, chap. 1). To illustrate, imagine an agent, Robinson, arriving at a strange new island after a shipwreck and needing something to eat. There are two types of fruit available in a nearby forest, both unfamiliar to him. He has no information whatsoever about these fruits, but he observes that a large group of locals is picking and eating the red fruit and only a few isolated individuals are picking the green one. What would be the reasonable choice of action for Robinson? Having no other source of information, it could hardly be considered irrational for him to follow the crowd and pick the red fruit over the green one. Richerson and Boyd (2005) help explain why Robinson’s decision to conform to the crowd here would be rational: “ ‘When in Rome, do as the Romans do.’ This strategy makes good evolutionary sense under a broad range of conditions. A number of processes, including guided variation, content bias, and natural selection, all tend to cause the adaptive behavior to become more common than maladaptive behavior. Thus, all other things being equal, imitating the most common behavior in the population is better than imitating at random.” (Richerson and Boyd 2005, 120)
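Richerson and Boyd’s point can be illustrated with a toy Monte Carlo sketch (my own illustration, not their actual model). Suppose the adaptive behaviour, like picking the red fruit, is held by 70% of the population. An agent who copies the majority of an observed sample of locals ends up with the adaptive behaviour far more often than one who copies a single randomly chosen individual:

```python
import random

def simulate(p_adaptive=0.7, group_size=25, trials=10_000, seed=0):
    """Compare two imitation rules in a population where a fraction
    p_adaptive of individuals exhibit the adaptive behaviour."""
    rng = random.Random(seed)
    majority_hits = random_hits = 0
    for _ in range(trials):
        # Behaviours of an observed group of locals (True = adaptive).
        group = [rng.random() < p_adaptive for _ in range(group_size)]
        # Conformist rule: copy the majority behaviour in the sample.
        if sum(group) * 2 > group_size:
            majority_hits += 1
        # Random rule: copy one randomly chosen individual.
        if rng.choice(group):
            random_hits += 1
    return majority_hits / trials, random_hits / trials

conformist, random_copier = simulate()
print(f"copy the majority: adaptive choice rate = {conformist:.2f}")
print(f"copy at random:    adaptive choice rate = {random_copier:.2f}")
```

The random imitator matches the population base rate (about 0.7 here), while the conformist imitator does markedly better, which is the sense in which conformist imitation ‘makes good evolutionary sense’.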
In a similar vein, Henrich (2015, chap. 3) examines various cases of 19th Century European explorers stranded on strange and unfamiliar lands. Having no or very limited information on the new land, all cases of successful survival involved the explorers imitating what the locals did or having been in some way accepted by the locals into their community.
Findings from laboratory experiments have also shown that people rely more heavily on evidence from other members of their group than on their own observations in ambiguous and uncertain situations (Jacobs and Campbell 1961). What the crowd does is valuable because learning is costly: easily observable crowd actions provide the agent with low-cost information that helps her understand the world and form true beliefs about her environment. Or more precisely, crowd action changes her degrees of belief regarding the physical world. Based on the actions of the locals, it is more likely that the red fruit is edible and carries higher nutritional value, or that it will not cause an allergic reaction in Robinson. 15 A conformity bias in social learning therefore gives us an epistemic advantage (Muthukrishna, Morgan, and Henrich 2016). Psychologists sometimes refer to this as our conformist heuristic: what the crowd or majority believes is a shortcut for us to determine, consciously or subconsciously, what information is reliable. And in a world full of uncertainties, this heuristic, together with others, is an efficient cognitive process that forms an invaluable part of our ‘adaptive toolbox’ (Gigerenzer and Brighton 2011). The conformist heuristic, in particular, is considered useful and rational when the environment is relatively stable and information search is costly or time-consuming (Gigerenzer and Brighton 2011, 17).
While conforming to the crowd is often a subconscious or heuristic process, the underlying rationality has also found support in various mathematical models of belief formation in the field of epistemology. Zollman (2010) and Mohseni and Williams (2019), employing different models, both showed that conformity can increase the reliability of an agent’s beliefs in certain contexts. The details of the various models and calculations are too technical for the scope of this paper, but a common idea runs through their arguments: other people’s actions, statements and opinions are a kind of evidence of certain facts about the world. This means that when a large number of independent individuals hold the same view, it is equivalent to a large amount of independent evidence supporting that belief. Thus, taking other people’s beliefs and opinions into account in our decision-making process is what a reasonable, open-minded agent would normally do. After all, when faced with the same situation, how likely is it that hundreds or thousands of people got it wrong and I alone figured out the right answer? It is not impossible, but very unlikely.
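The ‘independent evidence’ idea can be made vivid with a minimal Bayesian sketch (the reliability figure is a hypothetical number of my own). Each independent individual who endorses hypothesis H is only weak evidence on her own, but independent endorsements compound, so a large independent crowd becomes very strong evidence. A colluded, astroturfed crowd, by contrast, should count as a single piece of evidence however many accounts repeat it:

```python
def posterior(prior, n_endorsers, reliability=0.6):
    """Posterior probability of hypothesis H after n INDEPENDENT
    endorsements, each 'reliability' likely to occur when H is true
    and (1 - reliability) likely when H is false (Bayes' rule)."""
    likelihood_true = reliability ** n_endorsers
    likelihood_false = (1 - reliability) ** n_endorsers
    numerator = prior * likelihood_true
    return numerator / (numerator + (1 - prior) * likelihood_false)

for n in (1, 5, 20):
    print(f"{n:>2} independent endorsers -> posterior {posterior(0.5, n):.4f}")
```

Starting from a 0.5 prior, one mildly reliable endorser moves me only to 0.6, but twenty independent endorsers push the posterior above 0.99. This is why treating a centrally orchestrated crowd as if it were independent so badly distorts rational belief-updating.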
All this is to say that informational conformity can be grounded in a rational and reasonable process of belief-updating. The tendency to conform is not by its very nature a bad thing. And it is not practical to suggest that each individual should always ‘think for herself’ and not let what other people say or do influence her beliefs. Other people’s actions, thoughts and experiences mean something to us, and following the crowd is an extremely valuable epistemic tool that we all rely on when forming beliefs about the physical world.
Normative conformity
The actions of the crowd also have a normative effect on us. Or, more precisely, conforming our actions to other members of our group in itself gives us a value, a utility. Unlike informational conformity, normative conformity is not about what the agent believes about the physical world. Here, the agent cares about crowd actions not because they serve as a kind of evidence in her belief formation, but because fitting in with people around her is important if she is to navigate her social environment successfully, or so I shall argue.
Human beings are social animals, and the smooth and successful running of society depends on its members coordinating and cooperating with one another. Conformity is crucial to both. The difference between the two is that coordination does not involve a conflict of interest. An obvious example is language rules. It does not really matter to me what words or utterances people use to denote certain items, as long as everyone uses the same words to mean the same things and we are able to communicate effectively. Conformity is clearly the rational response here, since the agent maximizes her own benefits when she matches her behaviour with other members of her community and coordination succeeds. Cooperation is trickier because it involves a conflict between personal interests and the overall interests of the group. Consider the prisoner’s dilemma; such scenarios can be found everywhere in human society. A typical case is working on a group project with a few others. Here are the common scenarios that may result, where ‘C’ represents cooperation and ‘D’ represents defection:

CC: Everyone does their share of work, and everyone gets a good grade. Let us assume each member of the group gets a payoff of 4 for the good grade. If there are three members, the total payoff for the group is 12.

DC: I fail to do my share and the others have to do extra work to cover for me; the group ends up with only a passing grade. Here, let us assume I get a payoff of 4 for getting a passing grade despite being lazy. The others get a payoff of 0 each, because even though they get a passing grade, they had to work extra hard for it.

CD: This reverses the roles in DC: the others slack off but I do the extra work. They get a payoff of 4 each, while I get a payoff of 0.

DD: No one does their share of work, and the group fails the project. Even though the group gets no reward from the project, let us assume each member gets a payoff of 1 for slacking off. The total payoff for the group is 3.
It is obvious from the above options that I maximize my own interest by defecting (i.e. not doing any work in the group project) regardless of whether the others cooperate or defect. But if every member of the group thinks in the same way and seeks to maximize their own interest, the group will end up at DD, which is the worst-case scenario for the group. Instead, the group as a whole performs best at CC when everyone cooperates. Bicchieri (2005b) has argued that for norm-followers, the existence of a cooperation norm would turn a prisoner’s dilemma scenario into a simple case of coordination. The concepts of norm and norm psychology are closely related to the current topic of conformity. But I think there is no need to invoke the more complicated concept of norms here in order to understand such a transformation – as long as there is a natural tendency to conform to the behaviour of others, the prisoner’s dilemma will become a case of simple coordination. If I prefer matching my actions to the others in my community, I will prefer to cooperate when others cooperate and to defect when others defect. Thus, CC and DD would be preferable to CD and DC. And since CC pays more, agents would converge on CC, thereby also maximizing the group’s overall payoff. 16 Looking at it this way, it might be said that the tendency to conform is not rational, in the sense that it pushes the agent towards the option with a lower personal payoff. The agent would in fact fail to maximize her own self-interest if she cooperates when the others also cooperate. But this could not be the reason why our tendency to conform has received so much criticism in popular discussion. After all, this innate tendency to conform is crucial for solving various types of collective action problems that have plagued human society. 17 Individuals with a tendency to conform to other people’s actions and expectations are more likely to give up a personal benefit for the sake of the group. 
From a community perspective, this is clearly something that is desirable and should be encouraged. Even though normative conformity may not be grounded in the maximization of self-interest, it can certainly be justified in terms of the greater good.
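The transformation of the dilemma into a coordination problem can be made concrete with a small sketch of the group-project game above. The size of the conformity bonus is a hypothetical parameter of my own, added to an agent’s payoff whenever her move matches the rest of the group:

```python
def payoff(my_move, others_move, conformity_bonus=0.0):
    """My payoff in the group-project game from the text, plus an
    optional bonus for matching the group (normative conformity)."""
    base = {
        ("C", "C"): 4,  # everyone works, good grade
        ("D", "C"): 4,  # I slack, others cover for me
        ("C", "D"): 0,  # I do the extra work, others slack
        ("D", "D"): 1,  # everyone slacks, project fails
    }[(my_move, others_move)]
    if my_move == others_move:
        base += conformity_bonus
    return base

def best_response(others_move, conformity_bonus=0.0):
    return max(("C", "D"),
               key=lambda m: payoff(m, others_move, conformity_bonus))

# Without a taste for conformity, defecting never pays less and
# strictly pays more against defectors:
print(best_response("D"))  # -> D
# A conformist agent instead matches the group either way,
# making both CC and DD stable, with CC paying more:
for others in ("C", "D"):
    print(others, "->", best_response(others, conformity_bonus=1.0))
```

With the bonus, the agent’s best response against cooperators is to cooperate and against defectors is to defect, which is exactly the structure of a coordination game; since CC pays everyone more, the group can converge there.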
The above argument is not to suggest that conformity is always the correct response. There are of course problematic cases where conforming is irrational and unjustified. Children are taught not to yield to peer pressure when their friends offer them a cigarette. Some authors have also shown that the tendency to conform can hinder the acquisition of truth, especially within the scientific community (Mohseni and Williams 2019; Weatherall and O’Connor 2020). But saying conformity could be problematic is very different from saying that conformity is always problematic. This point is particularly crucial when dealing with astroturfing. It is not possible to avoid the consequences of astroturfing simply by celebrating the individual and refusing to follow the crowd, because most of the time doing what the crowd does is in fact the rational, justified decision. This is what makes astroturfing such a serious and tricky problem to solve: anyone can fall victim to it even while being perfectly rational and reasonable.
Combining the two types of conformity
Before I conclude the first layer of my analysis of astroturfing (i.e. concerning the effects of the crowd on the agent’s decision-making process), there are some important points to note about the two kinds of conformity. As the above discussion implies, the two kinds of conformity are not mutually exclusive; in fact, they often go hand in hand. In analysing how crowd action affects the agent’s overall decision-making process, one should consider how the two operate together. Subjective expected utility theory (EU theory) calculates the total payoff an agent can expect to receive for a particular action f, and it can be adopted here to show how the effects of the two kinds of conformity come together. 18 Under EU theory, the agent assigns utility values u(x) to a list of possible outcomes, and probability values p(H) to the different possible states of the world in which those outcomes occur. The utility of each possible outcome is then multiplied by the probability that it will occur to get an ‘expected utility’, and the total expected utility of action f is the aggregate of all these expected utilities.
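In symbols, the verbal description above amounts to summing probability-weighted utilities over the possible states of the world:

```latex
EU(f) \;=\; \sum_{i=1}^{n} p(H_i)\,u(x_i)
```

where H_1, …, H_n are the possible states of the world and x_i is the outcome of action f in state H_i.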
What informational conformity changes is the probability function p(H): the crowd’s behaviour serves as evidence that shifts the agent’s degrees of belief about the states of the world. What normative conformity changes is the utility function u(x): conforming in itself gives the agent value, raising the utility of outcomes in which she matches the crowd.
This concludes the first layer analysis of astroturfing. I have examined the rationality underlying our tendency to conform, and thus how individuals could fall victim to astroturfing even though they may have acted in a perfectly rational way. What happens if the crowd action the agent based her decisions on turns out to be fake? I will now try to answer this in the second layer analysis.
4. Astroturfing – The problem of false marketing
When COVID-19 hit the world in March 2020, it brought unprecedented changes to our lives. Interestingly, the pandemic also changed the behaviour of social bots on Twitter. Typically, social bots account for 10 to 20% of activity on social media, but one study found that the proportion rose to 45 to 60% among Twitter accounts engaged in COVID-19 discourse (Hao 2020). Many of these bots promoted fake news and disinformation; they also pushed for ending the lockdown and ‘reopening America’. Let us first consider how these play into the decision-making process of a real Twitter user. Suppose I do not want to get sick, but I equally want to go to parties and have fun with my friends. Astroposters spread fake information online, such as conspiracy theories claiming that there is actually no virus going around. Suppose I consider the probability of ‘I will stay safe if I go partying’ (call this hypothesis ‘H’) to be low. I am not very certain about this, especially at the start of the pandemic when information was scarce and uncertain. Given the initial uncertainty, I could quite easily be swayed by opposing evidence. Even if I am in general quite doubtful of information found on Twitter, seeing a large number of people endorse a conspiracy theory could constitute enough evidence to change my degree of belief in H. This is the effect of informational conformity: I thought going out to parties might be risky, but if the virus is just a conspiracy, I may come to think it is actually quite safe. And of course, there is also the normative side: seeing ‘#endthelockdown’ trending on Twitter suggests that a large portion of my community is not staying home, or at least wishes not to. Because of my desire to conform, any action that involves going out sees an increase in value. 
Applying EU theory, this means that the COVID-19 discourse I read on Twitter pushes both u(x) and p(H) upwards with regard to going out rather than staying home. The utility of partying increases, and the probability of being safe while I am out also increases, based on my updated subjective degrees of belief. The change in total expected utility can potentially flip my decision from staying home to going out partying. In this way, what the astroturfers spread on the internet can make a difference to my actions. Yet I would have based my decision on something false. The effects of conformity came from a crowd that is inauthentic. And this inauthenticity is similar to what one would typically encounter in false marketing. I will explain how.
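This shift can be made concrete with a small sketch. All numbers below (the utilities, the probabilities, the function name expected_utility) are hypothetical illustrations, not figures from the paper or any study; the point is only that modest upward shifts in both u(x) and p(H) can flip the expected-utility comparison.

```python
# A toy expected-utility (EU) comparison; all numbers are hypothetical.
def expected_utility(p_safe, u_safe, u_sick):
    """EU(go partying) = p(H) * u(safe night out) + (1 - p(H)) * u(getting sick)."""
    return p_safe * u_safe + (1 - p_safe) * u_sick

u_stay_home = 2  # a modest but certain utility of staying in

# Before the astroturfed discourse: partying seems risky and only mildly fun.
eu_party_before = expected_utility(p_safe=0.3, u_safe=10, u_sick=-50)

# After: informational conformity raises p(H) (the conspiracy theory makes the
# virus seem unreal), and normative conformity (the trending hashtag) raises
# the utility of a night out.
eu_party_after = expected_utility(p_safe=0.9, u_safe=14, u_sick=-50)

print(eu_party_before)           # -32.0: staying home wins
print(round(eu_party_after, 1))  # 7.6: the decision flips to going out
```

Neither shift alone need be decisive; it is their combination that carries the expected utility of going out past that of staying home.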
It is actually somewhat difficult to pin down the exact nature of the falsity propagated through astroturfing. Most of the current literature on astroturfing looks at how it spreads fake news and criticizes astroturfing for creating uncertainty in its audience (Cho et al. 2011; Howard, Woolley, and Calo 2018; Leiser 2016; Zerback, Töpfl, and Knöpfle 2020). Yet, as I defined it at the start of this paper, astroturfing does not necessarily involve fake news. In fact, creating uncertainty cannot be a fault in itself. Any information that contradicts an existing opinion will create uncertainty; this in itself is neutral (and can sometimes even be a good thing). Everyone agrees that spreading fake news and disinformation is bad. But what is so distinctive about astroturfing that it deserves extra attention? To answer this, consider a group of coordinated astroposters who are paid simply to post opinions and genuine news. Does this still involve a kind of deception? Yes, because the illusion of the crowd gives the information a credence it does not deserve. A belief held by a large number of people has a much greater influence than one held by a single individual; and independent actions from each member of the community warrant much more credence than colluded actions stemming from a centralized source. Thus, pretending that many people retweeted a piece of news (whether fake or not) gives it credence it would not otherwise have. This is analogous to an advertisement that includes fake expert advice – I buy the new toothpaste because the dentist in the advertisement says it improves gum health, but the dentist turns out to be just an actor in a lab coat. As a result, I attached more weight to the advice than I otherwise should have, regardless of whether the underlying beliefs and opinions turn out to be correct.
What about astroposters simply tweeting and retweeting ‘#endthelockdown’? This is not really a piece of ‘information’; it is not even a statement of fact, but simply a demand, a request, or the expression of a sentiment. The problem here is that the crowd sentiment it conveys is inauthentic. Astroturfers know that fitting in and conforming to the crowd is valuable to many people. They therefore use it as a decoy to lure real Twitter users in. Assuming there is no other false information at play, the only reason I might support ending the lockdown is that a large enough proportion of my community shares the same sentiment. If the rest of my community thinks that it is worth the risk of the virus spreading in order to keep unemployment rates low and stock prices up, my support for ending the lockdown would have a kind of democratic legitimacy. My preference for ending the lockdown will increase (i.e. my utility function u(lockdown ends) shifts upward), since I would like to fit in with the crowd and I also desire, to a certain extent, that other members of my community get what they want. But if it turns out that beyond the Twitterverse the majority of my community in fact supports staying home, my support for the cause would have been misdirected. When what I saw on Twitter is simply the result of astroturfing, ‘#endthelockdown’ will in fact not be a cause that most of my peers find valuable; instead, it serves only the benefit of a few powerful astroturfers who have little respect for public health. This is analogous to situations where consumers buy a product due to misleading advertising – I pay extra for a new phone because of the advertised camera function (something valuable to me), and it turns out that the camera does not work. Here, I thought ‘#endthelockdown’ had the support of a large portion of my community; it turns out it does not. That is a case of misrepresentation and a clear manipulation of normative conformity.
Understanding astroturfing as analogous to false marketing means that the relevant loss real Twitter users suffer is similar to that of purchasing the toothpaste based on the ‘dentist’s’ advice or the phone based on the advertised camera function. Astroturfers trick individuals into putting their trust in fake and unreliable information. But this is not unique to astroturfing: most CIBs are in one way or another trying to make their target audience give credence where it is not due. What deserves special attention is how astroturfing also creates problems beyond those arising from false marketing practices.
5. Astroturfing – The problem beyond false marketing
I have examined what ‘falsity’ the astroturfer perpetrates and how it affects the agent’s decision-making process. But astroturfing has a unique characteristic: if it is successful and widespread enough, it can turn such falsity into truth. For example, if enough people see ‘#endthelockdown’ on Twitter and want to fit in by supporting it, the fake crowd sentiment will create a real, substantial group of people who support ending the lockdown. Or suppose enough people read a piece of fake news and believe it is true: although this will not make the fake news true, the added credence attached to the information will become ‘proper’, in that a large number of people will come to believe, somewhat independently, that it is true. In this way, astroturfing works like a self-fulfilling prophecy – the lie itself is able to bring about its own truth. No doubt, these are rather unlikely scenarios, but they could result from more systematic astroturfing campaigns, such as the ones carried out by the Chinese Communist Party. Astroturfing in China is extremely large-scale – it extends throughout the whole country and even into overseas Chinese communities. It is also very well-organized, with the authorities providing financial support and maintaining tight control over all major aspects of the campaign. It has been estimated that there are at least 280,000 paid astroposters working for the regime, 19 with detailed instructions on when, what and where to post on various social media outlets (Anderson 2010; King, Pan, and Roberts 2017). One study estimated that these astroposters wrote approximately 448 million social media posts in 2013 alone (King, Pan, and Roberts 2017). Of course, in addition to the extensive astroturfing efforts, other social and political rewards and sanctions are in place to discourage dissent.
But let us assume here that astroturfing plays a significant role in swaying public opinion towards support for the Communist Party. What happens in such situations is very different from false marketing.
Firstly, in relation to normative conformity: agents who conform for the sake of fitting in will be able to actually fit in. Although the crowd sentiment started out fake, successful astroturfing efforts will turn it into a real crowd sentiment. An agent in China may thus find that expressing support for the government is indeed what a large proportion of the population actually does, both inside and outside of social media. Unlike the real Twitter user who was astroturfed by ‘#endthelockdown’, the agent in China will in fact be given what was promised: the crowd sentiment is actually there. Unfortunately, this could potentially be worse for the agent in the long run. The outcome produced is similar to what social psychologists refer to as ‘pluralistic ignorance’, a phenomenon that often leads to the spread of undesirable norms.
Pluralistic ignorance happens when agents mistakenly judge that their own private thoughts, attitudes and feelings are different from those of other members of the group, even though everyone’s public behaviour is identical (Bicchieri 2005a, 186). Francisco Gil-White (2005) illustrates this using the example of excessive drinking among college students. Imagine a freshman attending her first college party. She observes that everyone else drinks heavily; even though she herself does not enjoy drinking, she decides to do so in order to fit in. Notice the inference at play in her decision to drink. Observing that everyone else is drinking heavily, she infers that this is what everyone else truly enjoys doing and that the group endorses such behaviour among its members. Yet this is not a valid inference to make. Publicly observable behaviour often does not reflect underlying attitudes and preferences. After all, the freshman herself could also end up drinking heavily at the party. She knows well that her actions are based purely on social motives and are not indicative of her intrinsic desire to drink. Why then would she believe that the public actions of everyone else are based on their respective intrinsic desires? This is the systematic mistake that arises in pluralistic ignorance. And if enough people make the same mistake as the freshman, the false beliefs will result in widespread public conformity to actions that have very little private support (Gil-White 2005). Examples of pluralistic ignorance are not difficult to find in real life. In one study, for example, juvenile gang members were found to be privately uncomfortable with the gang’s anti-social behaviour, but endorsed violence in public because they thought their peers were fully committed to such behaviours (Bicchieri 2005a, 184).
A set of studies from the 1960s and 70s found that white Americans often overestimated how much private support there was for forced racial segregation among their fellow white citizens (Kuran 1997, 78). All these situations suffer from what Timur Kuran (1997) calls the ‘persistence of unwanted social choices’ (Kuran 1997, 106). When the freshman partakes in the norm of heavy drinking for the sake of fitting in, she sends a signal to others that she endorses the prevailing norm; her action causes other members of the group to further overestimate the private desire for heavy drinking. The pluralistic ignorance thus reinforces itself, and in turn helps the unwanted social choice persist. Even though people are able to successfully fit into the group, the situation is unsatisfactory because everyone is stuck at a suboptimal outcome. Privately, most individuals in the group actually prefer something else. That is, if we look at the utility of the action per se (i.e. not taking into account any utility from successfully fitting in), there exists a better option than the one that was chosen, at least for the majority of the group. If not for pluralistic ignorance, everyone could have successfully fit into the group whilst choosing the better option.
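The self-reinforcing loop of pluralistic ignorance can be rendered as a toy feedback model. Everything here is invented for illustration (the reluctance threshold, the averaging weights, the starting belief); the sketch only shows how mistaking public conformity for private support drives the belief in that support toward certainty.

```python
# Toy model of self-reinforcing pluralistic ignorance; all parameters invented.
def observed_conformity(believed_private_support, reluctance=0.4):
    """Everyone conforms if belief in others' private support outweighs reluctance."""
    return 1.0 if believed_private_support > reluctance else 0.0

belief = 0.6  # initial (false) belief that most others privately support the norm
for _ in range(3):
    conformity = observed_conformity(belief)
    # Observers mistake public conformity for private support:
    belief = 0.5 * belief + 0.5 * conformity

print(round(belief, 3))  # 0.95: belief climbs toward 1 despite zero private support
```

The model assumes no one ever reveals a private preference, which is exactly the condition pluralistic ignorance (and astroturfing) sustains.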
Often, pluralistic ignorance happens unintentionally, as when an unwanted social norm has been passed down from a generation long gone and no one dares to speak up against the well-established tradition. Astroturfing, by contrast, is an intentional effort to spread undesirable norms (or, more specifically, norms that are desired by the astroturfer but not necessarily preferred by the public at large). Whereas pluralistic ignorance happens because agents make a wrong inference that leads to false beliefs about other people’s private preferences, astroturfers wilfully imbue the online environment with these false beliefs. Agents are fed false evidence about the inner attitudes, motives and desires of other members of their group, creating conformist pressures from public sentiments that are not really there. This is exactly what astroturfing does in China. King, Pan, and Roberts (2017) find that about 80% of the Chinese astroposters’ activities consist of some sort of cheerleading for the government: spreading patriotism, singing praises and generally expressing positive sentiment towards the regime. These activities do not necessarily reflect the inner attitudes and opinions of the ordinary Chinese citizen. Yet individuals who are unaware of the deceptive nature of these actions and have a strong desire to fit in will choose to conform: they will likewise express their support for the government, online and perhaps in real life. This could in turn bring the forces of pluralistic ignorance into play, whereby other citizens who observe the conforming behaviour will (wrongly) infer that it reflects private support, unaware that it is merely a result of astroturfing. More conformity occurs, all of which goes on to reinforce the false belief about private support for the regime. The result is a spread of undesirable norms and unwanted social choices within the community.
Distraught citizens who privately wish to raise concerns or express dissent against the government will likely refrain from doing so due to the enormous conformist pressure. A leaked government document noted that the intended purpose of the Chinese astroturfing activities is to ‘promote unity and stability’ (King, Pan, and Roberts 2017, 489). From what is observable, it seems that, to date, the Communist Party has been more or less successful in achieving the desired stability within China. But it remains highly doubtful how much of it comes from authentic private support and how much of it is based on false beliefs about private and public sentiment generated by the astroturfing campaign. 20
Recall how the justification of normative conformity is grounded in social relations and the greater good. When undesirable norms persist as a result of astroturfing and pluralistic ignorance, social relations are maintained because people accept and enforce the undesirable norm. There is thus no problem with fitting in or with general coordination. Yet individual agents here are acting on a false belief. They pay a personal cost and conform to a behaviour they privately find undesirable only because they are under the misconception that this is necessary to fit in, or that it will bring about a greater good for society as a whole. The truth, however, is that everyone is sacrificing their personal preference for the benefit of a few rich and powerful astroturfers.
Informational conformity operates differently. When we use the crowd as evidence to update our beliefs about the world, it is less likely that astroturfing can get a whole community stuck in a false belief. This is because, unlike in the case of pluralistic ignorance, when individuals act on a false belief, things will typically not work out as they expect, and this will prompt them to adjust their beliefs in light of the new evidence. But this creates another problem. Consider Yelp, the well-known online review website. It is no secret that it contains numerous fake reviews (Loten 2014). Suppose I visit a restaurant based on the large number of five-star reviews on Yelp. If my experience turns out to fall far below my expectations, I suffer for this particular decision, and I use it to update my relevant beliefs. One belief that has to be updated is the credibility of Yelp – it is not as useful as I expected it to be. Next time, I will attach less weight to information found on Yelp. Any further negative experience will reduce my trust even more, until I no longer consider any information found on Yelp useful. This outcome is what is postulated by the Technology Acceptance Model (TAM) in information systems theory. TAM posits that perceived usefulness, amongst other things, is a key variable in anticipating the acceptance and usage of information technology. More specifically, a study by Salehi-Esfahani and Kang (2019) finds that perceived usefulness is the most important predictor of consumers’ attitudes toward restaurant review websites (such as Yelp), which in turn affect the consumers’ intention to actually use the websites. If Yelp is polluted by fake reviews, it will no longer be able to provide me with useful information on restaurant quality. And repeated negative dining experiences resulting from the unhelpful reviews will discourage consumers from using the platform at all in the future.
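This gradual erosion of trust can be sketched with a simple update rule. The rule, the halving rate and the starting value are arbitrary choices for illustration and are not a formal statement of TAM; the sketch only pictures perceived usefulness decaying under repeated disconfirming experiences.

```python
# Toy trust-decay sketch; the rate and initial value are arbitrary.
def update_trust(trust, experience_matched, rate=0.5):
    """Move trust toward 1 after a confirming experience, toward 0 otherwise."""
    target = 1.0 if experience_matched else 0.0
    return trust + rate * (target - trust)

trust = 0.8  # initial weight placed on Yelp's reviews as evidence
for _ in range(4):  # four disappointing meals in a row
    trust = update_trust(trust, experience_matched=False)

print(round(trust, 3))  # 0.05: Yelp's reviews are now almost entirely discounted
```

A handful of bad experiences suffices: because each disconfirmation cuts the remaining trust proportionally, the platform's evidential value collapses quickly once fake reviews dominate.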
This may not seem to be a matter of real concern if it is just about Yelp and choosing restaurants. But consider the increasingly important role social media has played in recent civil rights movements, including the Arab Spring 2.0, the 2019 Iranian protests, the Black Lives Matter movement and the 2019 Hong Kong protests (Hu 2020). These are all decentralized movements in which people relied heavily on social media and online forums for information and organization. Reliable information and popular opinions on these platforms can easily be tracked by the number of ‘likes’ or ‘upvotes’, or by looking at the ‘trending’ content. A successful civil movement requires participants to share information, reach agreements and converge on a common cause. This is not easy to achieve, especially in a decentralized movement. What astroturfers do is sow distrust and instigate arguments on such online social platforms, creating uncertainty and disrupting genuine communication (Keller et al. 2019). In this way, protestors will find it very difficult to coordinate and to agree on the action plan that has the most support. More importantly, it diminishes users’ trust in, and perceived usefulness of, such platforms. Moving to another platform does not help – astroturfers can always go where the crowd goes. Astroturfing may thus disrupt and suppress such civil movements, especially in their early stages, when the direction is unclear and a trusted platform for information and discussion is crucial. And this is precisely what some governments have done to stifle dissent. In the summer of 2019, Twitter and Facebook had to suspend hundreds of thousands of Chinese accounts after investigations showed that they were part of coordinated state-backed operations sowing political discord about the Hong Kong protests (Brown 2019). Keller et al. (2019) also found that astroturfing campaigns based in Egypt and the United Arab Emirates used fake accounts to support authoritarian regimes across nearly a dozen Middle East and African countries.
A point that may be obvious but is worth noting is that alerting users to such astroturfing practices would often only aggravate the problem. Sometimes knowing about the problem can be helpful: a Yelp user who knows that there are false reviews may be able to avoid a bad restaurant; a protestor may avoid using the wrong slogan in the next demonstration. But these merely address the ‘false marketing’ issue. Being aware that astroturfing practices exist may in fact accelerate a bigger problem – it damages the general perceived usefulness of the relevant social platform. Individuals no longer trust that it properly reflects people’s independent beliefs and opinions; it will no longer serve as a reliable resource for obtaining information and gauging public sentiment. If users have only a general awareness that astroturfing exists, but no way to distinguish real from fake content, any truthful information and honest opinions expressed on the platforms will also be doubted and lose their credibility. If no other recourse is provided, this will just end up taking away an important resource for which no replacement seems to be available. A more alarming consequence is that a government that wants to shut down civil movements may not even need to actually plant astroposters on the social platforms – simply by suggesting that astroturfing practices are common and widespread online, it may already be stifling any potential protests.
The rationality of informational conformity and the justification for normative conformity both stem from the authenticity of crowd actions. In the same way that numerous unconnected individuals coming together make grassroots movements a powerful force, a large number of independently held beliefs and opinions carries special merit. Crowd wisdom is a topic that has received much attention recently, especially since Surowiecki’s (2005) popular book on the topic. An example he gives at the opening of his book involves 787 villagers guessing the weight of an ox at a country fair. While none of them got the right answer, the average of all their guesses was off by only 0.08%, a result better than any individual villager’s best guess. Looking at collective intelligence on the world wide web, Servan-Schreiber (2012) discusses the success of prediction markets – online platforms where the public can place bets on the outcomes of future events. He finds empirical data showing that the prices in such markets, when translated into probabilities, were closely correlated with observed event frequencies. But of course, there is also an abundance of counterexamples that demonstrate flawed crowd reasoning and defective group decisions. The US stock market and housing crash of 2008 comes to mind. Markets are often vulnerable to ‘bubbles’, where the crowd makes inaccurate judgments about stocks or future events (Sunstein 2006, 137). Caplan (2009) also gives several examples of how the ‘miracle of aggregation’ can fail, particularly in areas of public policy and economic issues. Given the many conflicting examples, it is no surprise that the debate is still ongoing as to what conditions are conducive to crowd wisdom. It would not be possible to go into all the details in this paper, but it will be helpful for our current discussion to look at a few key conditions that have been more widely discussed and adopted.
An early argument for how crowd wisdom emerges came from Condorcet in 1785. He found that when jurors have to decide by a majority vote between two options, then, given certain conditions, a larger group of jurors is more likely to reach the correct decision than a smaller group. This is a case of crowd wisdom, where the group as a whole outperforms the individual; and the larger the group, the better its performance. Condorcet noted that for this to work, a few conditions have to be satisfied. These include: (i) the jurors are more likely than not to identify the correct decision; (ii) their votes are statistically independent; and (iii) they vote sincerely for the decision they believe to be correct (Goodin and Spiekermann 2018). 21 Given humans’ natural tendency to conform, some of these conditions may be difficult to achieve. But this is exactly why platforms like Yelp and Twitter are (ideally) such a valuable and promising resource for crowd wisdom to emerge: they can potentially provide easy access to a large quantity of independent data. If managed properly, these kinds of platforms could be a reliable source of crowd wisdom, providing us with a valuable epistemic tool. Yet the practice of astroturfing poses an obvious obstacle to this. Instead of encouraging the sharing of independent, diverse and truthful information within the group, astroturfing is, by its very nature, a coordinated effort to spread falsity and inauthentic opinions from a single source. What astroturfing does, therefore, is turn any chance we might have at crowd wisdom into crowd folly instead.
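Condorcet's result is easy to check numerically. The sketch below simulates majority votes among independent jurors, each correct with probability 0.6; the competence value, jury sizes and trial count are arbitrary choices for illustration.

```python
import random

def majority_correct_rate(n_jurors, p_correct, trials=20000, seed=0):
    """Fraction of trials in which a majority of independent jurors,
    each correct with probability p_correct, reaches the right verdict."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        votes = sum(rng.random() < p_correct for _ in range(n_jurors))
        if votes > n_jurors / 2:
            wins += 1
    return wins / trials

# With p > 0.5 and independent votes, larger juries do better:
print(majority_correct_rate(1, 0.6))    # ~0.60
print(majority_correct_rate(101, 0.6))  # ~0.98
```

Note that the simulation assumes condition (ii) above: each juror's vote is drawn independently. Colluded or copied votes, which is what astroturfing produces, shrink the effective jury size and erase this advantage.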
6. Conclusion
What I have tried to highlight in this paper is that astroturfing is not just one of the many CIBs that currently plague the internet. The problem of astroturfing is unique because of how it connects to our tendency to conform. Exploiting this tendency brings about two distinctive and far-reaching consequences. First, crowd sentiments are self-reinforcing and difficult to verify. Consider typical CIBs, those that involve false marketing and the spread of fake news – as soon as the audience is aware of the problem, they can often find another way to check the credibility of the information. If I am not sure whether a famous YouTuber is expressing her own genuine opinion or merely advertising a new cosmetic treatment, I can ask my family doctor for a more reliable expert opinion on the treatment. If I suspect that a news report I read online is fake, I can check it against a few other, more reliable news outlets. The problem of astroturfing is unique because even if I am aware of the problem, there is no quick and easy way to verify whether a crowd sentiment is authentic, especially when pluralistic ignorance is at play. In this way, online astroturfing is much more dangerous than other forms of CIB in that it has the potential to spread and stabilize sub-optimal norms quickly and efficiently. Second, even though the exact conditions under which crowd wisdom may emerge remain uncertain, there have been cases where it proved to be an extremely valuable tool that helps us navigate the physical and social world. With the right conditions and platform, a group of people together can outperform its most intelligent member, achieving a whole that is greater than the sum of its parts. Yet, by undermining trust and tampering with the independence found among users on social media, online astroturfing ruins our chance of developing a crucial and valuable epistemic and social tool.
In this way, the problems that online astroturfing creates go far beyond disinformation, false marketing and any other typical deceptive behaviour on the internet. Of course, the uniqueness of the problem also means that it will require a unique solution. But this will have to be a topic for another paper.
ORCID iD
Jovy Chan https://orcid.org/0000-0002-9163-0328
Footnotes
Acknowledgements
In writing this paper, I am very grateful to Professor Joseph Heath for his invaluable input and guidance throughout the writing process. I would also like to thank the anonymous reviewer and Professors Sergio Tenenbaum and Brendan de Kenessey for their helpful comments on earlier versions of this paper.
