Abstract
Social media engagements, such as likes and follows, have become crucial for driving algorithmic recommendations and underpinning platform economies. This has given rise to disinformation industries that focus on the production and sale of engagements, including Instagram followers—a phenomenon we term the “engagement as a service” market. However, this market poses significant challenges for empirical research as its operations remain obscured from the scrutiny of platforms, their users, and researchers alike. In this article, we propose a mixed-methods approach to make visible the relationship between the engagement market and platform governance, the latter of which increasingly aims to moderate account behavior in terms of authenticity and inauthenticity—what we refer to as “authenticity governance.” By developing this approach, we explore the relationship between the engagement market and platform ecosystems through three case studies: (1) engagement market responses to platform governance; (2) the evolution of engagement as a service; and (3) testing the quality of engagement as a service on Instagram. These investigations allow us to comprehend disinformation as an ongoing negotiation between the engagement market and authenticity governance. Overall, our three integrated approaches can help researchers move forward with the empirical study of disinformation markets that operate at the periphery of platform ecosystems. In short, this article presents a methodological outlook for analyzing (in)authentic engagements as a form of disinformation.
The Rise of the Social Media Engagement Market
Social buttons such as likes, shares, and follows are typically designed and encountered as interface affordances that shape how people interact on social media platforms. At the same time, these interactions are measured as metrics that form the basis for how visibility and value are achieved on each platform (Bucher, 2012; Gerlitz & Helmond, 2013). An important development has been the increased visibility of key metrics for users themselves, notably likes and followers, as real-time algorithmic feedback systems create an ongoing concern with measurements and rankings (Petre et al., 2019). Engagement metrics have come to fuel algorithmic recommendations and platform economies (Cotter, 2019), especially through social media influencers who are paid for posting for brands (at least in part) based on their number of Instagram followers (Frier, 2020; Khamis et al., 2017).
The metrification and monetization of social media have given rise to a market for the production and sale of disinformation, namely social media engagements described along the lines of “fake followers” (e.g., Confessore et al., 2018). Moving beyond a priori dichotomies such as “real” and “fake,” we adopt the term “engagement as a service market,” or simply the “engagement market.” This term underscores a marketplace geared toward social media marketing where engagements transform into a series of services offering access to accounts and their activities (cf. Kaldrack & Leeker, 2015). Engagement as a service is a business model in which social media users are sold bundles of Instagram followers, likes, or any number of comparable engagements in the thousands or even millions, rather than struggling to develop their own methods of amplification. This should be considered in relation to the claim that social media engagement, like participation more generally, has increasingly become a “formatted procedure” driven by the ongoing demand to contribute (Kelty, 2019, pp. 17–18).
This article develops an approach to mapping and studying the relationships between the market for engagements and the governance efforts of social media platforms. The market for engagements poses significant methodological challenges for empirical research as its operations are obscured from the scrutiny of platforms, their users, and researchers alike. In similar terms, platforms are highly restrictive in revealing their own security protocols and governance of user accounts and activities. In response, we take as our starting point an essential practice of social media platforms, the moderation of engagements (Gillespie, 2018; Roberts, 2021). Moderation makes visible how platforms shape interactions, which are increasingly based on the regulation of the boundary between “authentic” and “inauthentic” account behavior. In order for social media accounts and engagements to become established as a form of disinformation, they must be “authenticated” by platforms, foregrounding performative and relational processes distinct from the construction of facts. We refer to this particular subset of platform moderation as “authenticity governance.”
More specifically, in response to calls for more “methodological rigor” in the study of disinformation (Camargo & Simon, 2022), we shift the focus toward methodological visibility. We explore how certain methodological “entry points” (Dieter et al., 2019) can render aspects of this engagement market visible. To render the relationship between the engagement market and social media platform moderation analyzable, we present case studies covering (1) engagement market responses to platform governance, (2) the evolution of engagement as a service, and (3) the testing of the quality of engagement as a service on Instagram. Our mixed methods approach offers researchers a “travel guide” (Latour, 2007, p. 17) for describing, analyzing, visualizing, and interpreting disinformation operating on the fringes of platform ecosystems. It renders analyzable relationships that form between and around engagements and the governance efforts of the different platforms to which they are linked. Importantly, this relational methodology focuses on processes and boundaries rather than stable categories (cf. Desmond, 2014). As the boundary between authenticity and inauthenticity continually shifts, disinformation is best treated as a moving target.
We begin by discussing the shift toward authenticity in social media and platform governance approaches. Historically associated with Euro-American individualism (Handler, 1986; Trilling, 1972), authenticity has become pervasive and a key trope across the social media landscape. “Authenticity governance” conceptualizes how the boundary between authenticity and inauthenticity has become critical to how platforms communicate and regulate accounts and engagements. Following this, we outline the mixed-methods approach that allows us to make the authenticity focus in governance visible. Next, we turn to our case studies, which function as methodological entry points for understanding the shaping of engagements as disinformation. With regard to the theme of this special issue, our main contribution is not the description and analysis of the organization and identities of disinformation-for-hire actors (see Lindquist, 2022), but rather taking these actors as a starting point and, indeed, as key (albeit often suspicious) interlocutors in investigating the relationship between platform practices of authenticity governance and the production and distribution of disinformation.
Finally, our article raises questions about how governance by infrastructural platforms like Meta shapes the larger platform ecosystem (Plantin et al., 2018; van der Vlist, 2022; van Dijck et al., 2018), even those parts that are deemed illicit. More specifically, the three case studies help us make visible the many obfuscated layers and interconnected components of the platform ecosystem used to govern and reduce inauthentic engagements, including community policies, manual research, automated security updates, and shadow banning (Gillespie, 2022).
The Authenticity Turn in Platform Governance
The emergence of disinformation studies, which incorporates various methods from a range of disciplines, has coincided with the predominance of mis- and disinformation in political and public life over the past 10 years (Camargo & Simon, 2022). While the center of gravity of this field has been “fake news” in the era of social media (Egelhofer & Lecheler, 2019), it has also addressed issues such as the veracity of content (Gelfert, 2018), the growing importance and sophistication of social bots (Gorwa & Guilbeault, 2020), the structure of circulation (Bounegru et al., 2018), the politics of manipulation (Rogers & Niederer, 2020), the moderation of fake views (Castaldo et al., 2022), coordinated “astroturfing” campaigns (Keller et al., 2020), and the actors who produce and distribute disinformation (Bradshaw & Howard, 2017).
This final theme, the focus of this special issue, draws attention to the increasingly professionalized and diversified global industry for the production of disinformation that has emerged in response to the metrification, monetization, and political influence of social media. Many of these businesses, such as “dark PR” firms (Silverman et al., 2020; Verwey & Muir, 2019), “propaganda secretary” offices (Hassan & Hitchen, 2019), data analytics firms (Briant, 2021), and social media marketing agencies (Mellet, 2017), have front- and back-end operations that straddle the boundary between the licit and illicit. Investigative journalists and international organizations that study the engagement market, the focus of this article, have used industrial metaphors such as “click farms” (Clark, 2015) and “follower factories” (Confessore et al., 2018) to describe the vilified “black market for social media manipulation” (Bay & Reynolds, 2018).
In response to these forms of proliferating disinformation, key social media platforms have initiated a turn toward authenticity governance, refocusing their platform policies and moderation to govern undesirable behaviors and identities through processes of authentication. In 2018, Meta launched its “coordinated inauthentic behavior” policies, defined as “coordinated efforts to manipulate public debate for a strategic goal where fake accounts are central to the operation” (Meta, 2022a). Similarly, in 2020, Twitter created a new set of rules and policies focused on “platform integrity and authenticity,” defined as “policies that promote the health of the public conversation by investigating and mitigating material related to spam, platform manipulation, API abuse, and information operations” (Twitter, 2022a). Rather than addressing false information and truth, Meta and Twitter have identified and deleted inauthentic behavior based on their Community Standards and The Twitter Rules, respectively.
For platforms, authenticity has a double-sided function. On one hand, an often vague definition of authenticity—already a “slippery” term (Marwick, 2013, p. 121)—is used in platform policy communication and serves as the basis for creating a “community.” For example, Meta’s use of coordinated inauthentic behavior has been criticized for its lack of clarity as a discursive tool (Douek, 2020), as both “coordinated” and “inauthentic” are value-laden and poorly defined, in practice obscuring platform governance. At the same time, platforms develop methodical and often automated means of moderating inauthentic accounts or their engagements in the back-end. This echoes Chun’s term “algorithmic authenticity,” which not only highlights the methodological nature of authenticity but also “reveals the ways in which users are validated and authenticated by network algorithms” (Chun, 2021, p. 144; also see Burton et al., 2023). Authenticity governance is therefore both a deliberately vague discursive tool for policy communication and a practical and concrete methodological tool for platform governance.
Authenticity is not only used by social media platforms but is also adopted and internalized into the practices of social media marketers selling “earned media” (Serazio & Duffy, 2018) and content creators around the world who engage in commercially-oriented online status-building (e.g., Arriagada & Bishop, 2021; Cunningham & Craig, 2017; Marwick, 2013). As a result, there is a growing scholarship on the tension between account users who develop “authentic” tactics to increase metrics and visibility (O’Meara, 2019; Petre et al., 2019, p. 5) and social media platforms that regulate or ban accounts that engage in “inauthentic” practices (Gorwa, 2019; Poell et al., 2021). Social media users’ attempts to test the limits of platform authenticity have led scholars to develop concepts such as “algorithmic imaginary” (Bucher, 2018), “algorithmic gossip” (Bishop, 2019), and “influencer imaginary” (Arriagada & Bishop, 2021) to study how content creators and social media users perceive what recommendation algorithms are, how they function, and how they can contribute to financial consistency and visibility. These concepts are also helpful for approaching the engagement market, as many engagement market actors develop comparable forms of communication and experimentation in the face of obfuscated platform policies and moderation methods.
Authenticity thus allows us to analyze the dynamic and obfuscated relationships between social media platforms and the engagement market through the moderation and production of inflated account metrics. More generally, the turn to authenticity governance leads us to consider how disinformation takes shape relationally as a site of negotiation between different actors and interests (cf. Banet-Weiser, 2012, see also Desmond, 2014). Operationalizing this approach, however, requires methodological innovations, which we describe in the next section.
A Mixed-Methods Approach to Disinformation at the Fringes of Platform Ecosystems
The key methodological challenge this article addresses is identifying entry points to study the social media engagement market as the production of engagements is dispersed to the fringes of platform ecosystems. Although easy to find for customers, market operations are hidden. To make the market visible for analysis, we conducted ethnographic research among disinformation-for-hire actors in Indonesia, a world leader in mobile internet and social media engagement (Google and Temasek, 2017; Lim, 2018) (Case 1). We identified and contacted 130 sellers through internet searches, of whom 30, most based in the greater Jakarta region, agreed to meet in person, around a quarter of them repeatedly over a two-year period between 2018 and 2019.
The ethnographic data led us to develop research questions about the market’s transnational organization, which required the use of explicitly digital rather than ethnographic methods. The data collected in this process allowed us to identify both networked relations between market actors and a key reseller website for social media engagements, through which we were able to track available services across platforms over time (Case 2). This research also produced contact information for around 1,200 key market actors, of whom we were able to interview 10, several repeatedly, allowing us to substantiate and add to earlier ethnographic research. They were based in Australia, Bangladesh, India, Serbia, Taiwan, the United States, Morocco, Egypt, Russia, and Turkey. Finally, we set up Instagram accounts and payment methods, purchased Instagram followers through the reseller website in Case 2, and analyzed the behavior and stability of the followers over time. The activities and account details of the purchased followers were collected and analyzed to gain an understanding of the quality and behavior of the accounts (Case 3).
Our approach follows work in infrastructure and app studies (e.g., Dieter et al., 2019; Larkin, 2013) that addresses the methodological difficulties of studying complex systems or digital objects that are not easily situated or localized. One of the authors of this article has co-developed a multi-sited approach to apps (Dieter et al., 2019) that introduces the methodological “entry point,” which “simultaneously deploys and makes visible different infrastructural settings.” This article uses different entry points: disinformation-for-hire actors, engagement services on a central reseller panel in the market, the Internet Archive, and publicly available “platform boundary resources” (Helmond & van der Vlist, 2019), such as platform policies, blog posts, and research reports, to engage with the production of disinformation. This relational methodology (cf. Desmond, 2014) reveals disinformation in processual terms as constantly changing in relation to platform moderation practices.
Our mixed-methods approach combines ethnography with digital methods (Rogers, 2013). This is a process of mutual engagement, as the methodologies lead to data that generates new research questions, which demand alternative methodologies that in turn create new data as the project progresses in the face of platform and market opacity. Through this research and in relation to platform boundary resources, we have developed “authenticity governance” as our main focus of attention. This allows us to draw together the three different case studies, which were limited in themselves but together offer a methodological approach to understanding the formation of disinformation over time at the interface between corporate platforms and the disinformation-for-hire market (cf. Camargo & Simon, 2022).
Three Approaches to Making Inauthentic Engagements Visible
Case 1: Engagement Market Responses to Authenticity Governance
To move beyond an a priori villainization of disinformation-for-hire actors and gain access to a highly obfuscated market (Bay & Reynolds, 2018), scholars have increasingly utilized ethnographic methods, most often focusing on digital labor in the Global South, to develop a more in-depth behind-the-scenes understanding of the industry for the production of disinformation (e.g., Grohmann, Aquino, et al., 2022; Grohmann, Pereira, et al., 2022; Lindquist, 2021, 2022; Ong & Cabañes, 2018; Ong & Tapsell, 2022). In line with this, ethnographic research and interviews with disinformation-for-hire actors in Indonesia and Turkey, two major countries in the engagement market, form the key entry point for this case study. Ethnographic research in Indonesia in the two years leading up to the COVID-19 pandemic and interviews with actors in other countries in the Global South revealed a shifting playing field across a transnational landscape—a wide range of practices among disinformation-for-hire actors in an obfuscated engagement market characterized by intensifying competition and a diversity of methods. Instagram was by far the dominant platform in this market (see Case 2). Interlocutors generally worked in small groups of young men (from a few up to 10), most in their 20s, often friends, relatives, or neighbors who identified as self-taught and had learned to make money through the internet. Although some had offices, most worked discreetly out of their homes in cottage industries (Lindquist, 2022; see also Grohmann, Aquino, et al., 2022). Interlocutors revealed that engagements were primarily purchased from sellers located outside of the country and resold to domestic buyers.
These interviews, observations, and additional materials offered us entry points into the engagement market. Terms such as algorithmic gossip (Bishop, 2019) and imaginaries (Bucher, 2018), initially developed to describe practices of social media users, also resonated with disinformation-for-hire actors. Both types of actors are engaged in forms of experimentation and collaboration to interpret platform algorithms in the face of intense competition. While social media influencers attempt to understand platform algorithm effects to improve their platform visibility through rankings and recommendations, disinformation-for-hire actors are primarily concerned with developing engagement metrics that are identified as authentic by moderation algorithms and thus remain in play on the platform.
The overarching problem that our interlocutors faced was engaging with authenticity governance, which led social media platforms to close down existing accounts, impede the creation of new ones, and slow or prevent the transfer of services. For instance, this could mean limiting the number of comments allowed and the number of followers that could be transferred during a period of time, thus reducing activities rather than deleting accounts altogether (cf. Gillespie, 2022). As an example, one interlocutor in Morocco said it was possible to transfer between 150,000 and 200,000 Instagram followers per day to an account until a major security update in June 2019 prevented this. 1 Major moderation updates had significant effects, often bringing international markets to a standstill for days at a time. For most market actors, who were primarily resellers and lacked technical skills, this meant waiting—and handling customer complaints—while providers and programmers were engaged in innovating by identifying and responding to specific problems.
Methods for “authentic-enough” (Lindquist, 2021) engagements take shape through individual innovation and broader collaboration. Most interlocutors described ongoing tinkering and experimentation in tandem with discussions on wide-ranging networks, Facebook groups, and internet forums, organized both on a domestic and transnational scale, that were used to innovate collaboratively in relation to changing moderation updates. With regard to the innovation of accounts, for instance, one of our interlocutors pointed us toward a significant repository on GitHub focusing on Instagram. 2 Most market actors agreed that Instagram’s increasing updates were in fact improving the quality of bot-generated accounts, becoming, as one of our Indonesian interlocutors put it, “as human as possible.” As a Turkish interlocutor put it, “high-quality followers are very important. If your account doesn’t have some profile photos and posts, it is worthless. If you create profile pictures, names, and at least three posts that match, it is like they are real.” 3
Significantly, these engagement services are developed not strictly to convince customers who want to improve their social media metrics but also Instagram’s moderation algorithms. For instance, our Turkish interlocutor had developed a system of automatic direct messaging between his bot-generated accounts. The aim was thus for the accounts to reveal themselves to Instagram’s moderation efforts as a form of authentic engagement rather than to the users or to the accounts that customers were hoping to engage with. As he put it, “we need to show Instagram a real user” while “giving our customers what they want.” More generally, the engagement market works at an interface that aims to create accounts that are recognized as authentic or “real” by users in terms of identity and in relation to Instagram in terms of technical sophistication.
These processes take shape both with regard to account creation and behavior. In early 2020, members of the Black Hat World forum, a key site for algorithmic gossip, discussed how Instagram had developed a range of “silent” punishments rather than taking down “spammy” accounts. When inauthentic or spammy behavior was noticed, the account was silently flagged and monitored, and activity limits were imposed, such as restrictions on following, commenting, or liking. Accounts with a shadow ban no longer showed up in hashtag and location searches, and comments could be marked as hidden so that they did not appear in news feeds (see Cotter, 2019; Gillespie, 2022). Forum members reported different thresholds for activity bans, which supported the hypothesis that Instagram flags and classifies suspicious accounts. Discussing and identifying thresholds allowed market actors to collaboratively understand the limits Instagram’s algorithms set, thus reflecting algorithmic gossip in practice. These methods were continuously evolving and practice-based because the precise workings of platform methods were largely unknown and continuously updated.
Finally, it is common practice to tinker and experiment with engagement services. For sellers, finding the best and cheapest services is at the center of this wide-ranging and cut-throat market. Interlocutors would rely on recommendations but always test the engagements they were reselling to understand to what degree they could be considered authentic-enough and thus reliably sold to customers, who could easily move to other sellers if disappointed with the purchased services. Most notably, the drop rates of Instagram followers were carefully measured, as it was critical that the majority of followers purchased remained connected to the customer’s account.
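The drop-rate testing that sellers describe can be expressed as a simple retention calculation. The sketch below is our own hypothetical illustration of that logic, not any seller's actual tooling; the figures are invented for the example.

```python
def drop_rate(baseline, delivered_peak, current):
    """Share of purchased followers lost since delivery.

    baseline: follower count before the purchase
    delivered_peak: follower count once the full bundle was delivered
    current: follower count at the time of measurement
    """
    purchased = delivered_peak - baseline
    retained = current - baseline
    return 1 - retained / purchased

# Hypothetical example: an account with 50 followers buys a bundle of
# 1,000; some days later only 880 of the purchased followers remain.
rate = drop_rate(baseline=50, delivered_peak=1050, current=930)  # ~0.12
```

A seller tracking this value across competing services can judge which bundles are "authentic-enough" to resell without triggering customer complaints.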
The ongoing experimentation with accounts and services utilizes algorithmic imaginaries (Bucher, 2018) and gossip (Bishop, 2019), as well as forms of tinkering and experimenting comparable to those of social media influencers (O’Meara, 2019). The engagement market, however, makes visible how Instagram’s authenticity governance is evolving over time and, more importantly, how this is shaping innovation in the market for these particular forms of disinformation. Focusing on the intersection between authenticity and inauthenticity and the diverse socio-technical relations that develop in this space illuminates how disinformation should be understood as a temporary accomplishment.
Case 2: Measuring the Effectiveness of Authenticity Governance
Our second approach tests the effectiveness of platform governance. Authenticity governance serves both as a communication tool and an analytical lens for investigating “influence operations” (Meta, 2021, 2022b). Nathaniel Gleicher, Meta’s Head of Cybersecurity Policy, has claimed that manually searching for inauthentic behavior has proven to be the most effective way to identify innovative, sophisticated influence operations (Jacoby, 2018). The insights from these investigations are then used to improve automated detection and enforcement at scale. Facebook, Instagram, and other platforms, however, generally decline demands for greater transparency about their methods since they claim this would make it easier for actors to “game the system” (cf. Petre et al., 2019). More specifically, they do not share the data required to determine if the line between authentic and inauthentic behavior has been crossed, except by stating that fake accounts are crucial to making that evaluation.
Measuring the effectiveness of inauthentic engagement moderation is thus a methodological challenge. Our approach measures how effective platform moderation methods are in regulating engagement services available on the market. The reseller website Just Another Panel 4 serves as our key entry point to collect listings of popular engagement services over time. Our interlocutors in Case 1 identified Just Another Panel as the key reseller on the global market with the largest offering of services. Its popularity is supported by the web traffic to the site, which is much higher than that of any other reseller website we could find. Justanotherpanel.com was registered in September 2016, quickly establishing itself at the core of the market. The website, with the tagline “Resellers’ #1 Destination for SMM Service,” boasts that an order is made every 0.14 seconds and that around 340 million orders have been completed in total (as of September 6, 2022).
By collecting lists of engagement services in yearly snapshots using the Internet Archive’s Wayback Machine, we were able to capture and map the evolution of engagement services on Just Another Panel. In March 2021, Just Another Panel listed over 2,500 different services for over 40 platforms. Typically, there are many versions of engagement services, but each unique service—such as an Instagram-like or a Spotify play—has an identification number that makes it possible to trace it over time. Figure 1 visualizes the number of engagement services on offer per platform through yearly snapshots.

Figure 1. Evolution of services offered per platform by Just Another Panel over time. Visualization by Carlo De Gaetano.
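The yearly snapshot collection described above can be sketched in a few lines. The Wayback Machine exposes its capture index through the CDX API (e.g., `https://web.archive.org/cdx/search/cdx?url=justanotherpanel.com&output=json`), which returns one timestamped row per capture; the selection logic below, and the timestamps in the example, are our own hypothetical illustration.

```python
def first_capture_per_year(timestamps):
    """Pick the earliest Wayback capture (YYYYMMDDhhmmss strings)
    in each calendar year to build a series of yearly snapshots."""
    per_year = {}
    for ts in sorted(timestamps):
        # setdefault keeps the first (earliest) capture seen for each year
        per_year.setdefault(ts[:4], ts)
    return per_year

# Hypothetical capture timestamps returned by the CDX API
captures = ["20170901120000", "20170312081500", "20180105093000"]
snapshots = first_capture_per_year(captures)
# -> {"2017": "20170312081500", "2018": "20180105093000"}
```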
To measure the effectiveness of platform-specific variations of authenticity governance, we counted the number of new, deleted, and persistent services over time per platform. Persisting services indicate few restrictions on the platform side for detecting and reducing inauthentic behavior, while a high turnover of engagements suggests significant platform efforts to detect and remove inauthentic behavior or limit the productivity of engagements per account. The edges that connect specific platforms over time (Figure 1) indicate the relative persistence of services per platform. The height of the bar indicates how many services are offered per platform, and the height of the connecting edge between the bars indicates how many of the services persist over time.
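The counting of new, deleted, and persistent services between consecutive snapshots amounts to set operations on the service identification numbers. A minimal sketch, with invented service IDs for illustration:

```python
def service_turnover(prev_ids, curr_ids):
    """Compare the service-ID sets of one platform's listings
    across two consecutive yearly snapshots."""
    prev, curr = set(prev_ids), set(curr_ids)
    return {
        "persistent": len(prev & curr),  # services still on offer
        "new": len(curr - prev),         # services added since the last snapshot
        "deleted": len(prev - curr),     # services no longer listed
    }

# Hypothetical Instagram service IDs in two yearly snapshots
stats = service_turnover({101, 102, 103}, {102, 103, 204, 205})
# -> {"persistent": 2, "new": 2, "deleted": 1}
```

A high "deleted" count relative to "persistent" would correspond, in our reading, to more aggressive authenticity governance on that platform.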
A first glance suggests rapid growth in a short period of time. The volume of discontinued services also indicates high volatility among those on offer. The resellers we talked to confirmed that these shifts are, to a large extent, driven by improved platform moderation methods. For example, Twitter renewed its crackdown on detecting inauthentic engagements as it revamped its policies in 2019 (Harvey, 2019) and 2020 (Roth & Achuthan, 2020), 5 which is reflected in a significant impact on the market in Figure 1. Similarly, there has been considerable discussion about Instagram’s evolving moderation methods since Meta introduced the term “coordinated inauthentic behavior” at the end of 2018 (Gleicher, 2018; Jacoby, 2018), along with the rolling out of inauthentic behavior policies from 2019 onwards (Meta, 2022a). One of our interlocutors in Morocco claimed, with regard to Instagram, that “we have never seen anything like this before.” 6 More than 150 influence operations—from states, commercial firms, and unattributed groups—have been reported in transparency reports with details on each network takedown (Gleicher, 2018). The new policies, accompanied by the development of new methods to detect inauthentic behavior (Jacoby, 2018), are evident in the dramatic decrease in services offered in 2020 (Figure 1). In contrast, Spotify’s services remained active, suggesting that moderation, at least until 2020, was nonexistent. Spotify’s artist documentation (Spotify, 2022), which issued a warning about increased moderation and punishments starting as late as 2021, supports this finding.
Figure 1 also makes visible a hierarchy of social media platforms that are the target of engagement service providers. In 2016, most of the services on offer focused on Instagram, followed by Google (with “Traffic” as the core service), Twitter, YouTube, Facebook, Pinterest, SoundCloud, and Spotify. The graph shows increased specialization and diversification over time in terms of the targeted platforms. In addition, the figure shows shifts in popular targets for engagements. In 2021, Instagram continued to be the most popular platform for engagement services, followed by YouTube, while Facebook reclaimed the third place that Spotify had taken over in 2020.
Our approach allows us to make three significant observations. First, it identifies which social media platforms are popular targets in the engagement market. While Instagram has been the central focus over time, the market adapts to platforms that grow in popularity. For example, there has been recent growth in TikTok services. Second, this approach makes it possible to measure the impact of authenticity governance on the engagement services market. Unlike most platforms, which only report on the success of their own efforts, our approach allows us to compare the effect of authenticity governance across platforms and against a baseline of popular engagement services over time. Finally, the findings suggest a vibrant and active market of developers and resellers that play a crucial role in engaging with emerging platforms, key platform metrics, and new forms of social media marketing, as well as creating and adopting methods to circumvent engagement moderation methods.
Case 3: Testing the Quality of Engagements as a Service
Our final approach tests the quality and behavior of engagement services to make visible the relationships with the platforms they are created for. While social media platforms find the detection of inauthentic behavior difficult, it is considerably more so for researchers who lack access to crucial account information. Many of the ambiguities in the detection of inauthentic behavior stem from the “ground truth” problem (Gorwa & Guilbeault, 2020), namely that detection methods are based on human coders who can never be certain that they are correctly identifying social bots or inauthentic accounts (Subrahmanian et al., 2016). This presents a problem for machine learning models that rely on human-labeled training data (Ferrara et al., 2016). In contrast, our method starts with engagements that we have purchased and are able to directly observe rather than with suspicious accounts that may or may not be bots, thus approaching inauthentic behavior from a different angle, one that, beyond the immediate concerns of this article, can also supplement studies of social bots.
Once again, Just Another Panel was the methodological entry point for data collection. We focused on Instagram, both an understudied platform in social media bot research due to a lack of publicly available application programming interfaces (APIs) (notably in comparison to Twitter) (Gorwa & Guilbeault, 2020) and the most popular platform in the engagement market. For this methodological experiment, we created or repurposed 15 Instagram research accounts, or “research personas” (Bounegru et al., 2022). 7 After registering and transferring funds to Just Another Panel, we conducted an inventory of the approximately 100 different types of Instagram followers available.
We purchased 100 followers from each of 15 different services, aiming for a diverse selection of engagement services and choosing five variations in each of three categories that we observed as dominant in the engagement market. The first category references degrees of realness (e.g., “100% real,” “real & engaging,” “real looking”), the second the nationality of the account (e.g., Indonesia, Iran, Russia, Brazil), and the third, temporality, the pace or ephemerality of the service (e.g., “non drop,” “prank followers”) (Table 1).
Table 1. Selection of Purchased Instagram Followers.
For the data collection on Instagram, we used two different tools: a script based on Instaloader 8 and the commercial tool Phantombuster. 9 Finally, once the data was collected, we developed four analytical tests to establish the behavior and quality of the engagements on Instagram: (1) follower growth, (2) stability of the service, (3) account activity, and (4) profile imagery of the purchased followers.
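The stability test (2) can be sketched in code. The following is a minimal, hypothetical sketch, not the authors’ actual pipeline: it assumes follower-count snapshots have already been collected per research account (e.g., with an Instaloader-based script run at regular intervals) and computes what share of a purchased batch is still in place at each snapshot.

```python
# Hypothetical sketch of the stability test: snapshot format, function
# name, and example figures are illustrative assumptions, not the
# authors' actual data or tooling.
from datetime import date


def retention_rate(snapshots: dict[date, int], baseline: int) -> dict[date, float]:
    """Share of a purchased follower batch still following at each snapshot.

    `snapshots` maps a collection date to the count of purchased followers
    remaining; `baseline` is the count right after delivery of the batch.
    """
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    return {day: count / baseline for day, count in sorted(snapshots.items())}


# Example: 100 purchased followers tracked over three weekly snapshots.
snaps = {date(2020, 7, 1): 100, date(2020, 7, 8): 72, date(2020, 7, 22): 65}
rates = retention_rate(snaps, baseline=100)
```

A batch whose retention drops steeply within days would correspond to the low-quality and “prank” services described below, while a partial, gradual decline matches the pattern observed for most services.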
Follower Growth
The first test focuses on the pace and size of delivery of the purchased followers to the respective research accounts, grouped by the three categories of the 15 selected engagement services (realness, nationality, and temporality). The beeswarm timeline in Figure 2 shows the increase in the number of followers after they were purchased at 18:00, where each dot represents one delivered follower. (Note that some research accounts already had followers, as indicated in gray.) Depending on the quality of the purchased followers, the visualization shows that some followers were delivered more quickly than others. Some categories seemed to over-deliver, which is indicated by the darker color of the follower dot. Overall and across the three categories, the engagement services with “real” in the title and indications of “low” or “non drop” delivered the promised 100 followers. One order—[BOTS WITH PP] [HIGH DROP %]—was canceled, and the Indonesian order was not delivered within two days. This graph also shows how some followers, especially the low-quality “prank” and “bot” followers, no longer followed the research account after a short time.

Figure 2. Follower growth: the pace and size of delivered followers. Visualization by Alessandra Facchin and Giovanni Lombardi.
Stability
In the second test, we collected data on the number of followers per research account over a three-week period. We found that some batches of purchased followers stopped following or were deleted by Instagram very soon after purchase, which is to be expected for the low-quality and prank accounts. However, the [Low Low Drop][Real Looking] followers also dropped significantly within a week. In some cases, we saw indications that followers were being refilled, such as with the late-arriving Indonesian and Iranian followers. Overall, the decline in followers over three weeks was significant but partial, with most services’ follower counts steadily declining over time. This echoes the findings by Castaldo et al. (2022), which suggest that platforms detect inauthentic engagements after the targeted content has gained algorithmic visibility and that a significant number of inauthentic engagements are not detected at all.
Profile Imagery
The next two analyses are in response to our interlocutors’ claims (see Case 1) that account behavior needs to be sufficiently “human-like” to convince the platforms’ moderation methods as well as Instagram users. According to several of our interlocutors, at a minimum, this requires a profile image, a few posts, and some followers. The followers that were still in place the day after the purchase had a profile image. (We were not able to capture the data for the lowest quality followers, such as the [Prank] and [R7] low-quality services, because they dropped too quickly after purchase.) These were not the default empty profile pictures, however, but instead showed a wide variety of people, logos, and objects. This supports the assumption that methods in the engagement market are continuously evolving, as default profile imagery is no longer an indication of a bot or new account. It is crucial for social bot and disinformation studies to develop methods that are able to adapt to these ongoing transformations.
Account Activity
To appear authentic enough to the platform and its users, it is equally important that the purchased followers show account activity. Extending the previous test, we analyze the qualitative properties of the followers, focusing on the followers-following count and the posting count. With a few exceptions, the purchased followers have only made a few posts. Unsurprisingly, the results show that the overall following-follower ratio tends toward excessive following, with purchased followers generally following 10 times more accounts than follow them back. The skew toward following among the purchased followers that are marketed as real (i.e., “Exclusive 100% Real,” “Real & Engaging,” and “Super VIP”) is notable, as is the low score in both followers and following for the temporality-centered “Non Drop” services.
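The ratio test above can be sketched as a simple screening step. The helper below is a minimal, hypothetical illustration (the function names, field names, and sample figures are assumptions, not the authors’ data): it flags accounts whose following/followers ratio meets the 10x skew reported above.

```python
# Hypothetical sketch of the account-activity test: flag purchased
# followers whose following/followers ratio suggests excessive following.
# The 10x threshold mirrors the ratio reported in the text; account data
# below is invented for illustration.

def following_ratio(followers: int, following: int) -> float:
    """Ratio of accounts followed to followers received (safe for zero followers)."""
    return following / followers if followers else float("inf")


def flag_excessive_following(accounts: list[dict], threshold: float = 10.0) -> list[str]:
    """Return usernames whose following/followers ratio meets or exceeds the threshold."""
    return [
        a["username"]
        for a in accounts
        if following_ratio(a["followers"], a["following"]) >= threshold
    ]


sample = [
    {"username": "acct_a", "followers": 12, "following": 340},   # ratio ~28
    {"username": "acct_b", "followers": 450, "following": 510},  # ratio ~1.1
]
flagged = flag_excessive_following(sample)  # only acct_a exceeds the threshold
```

In practice, such a ratio would be one signal among several (posting count, profile imagery, account age), since, as noted above, the engagement market continuously adapts to any single detection heuristic.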
The findings suggest, within the limited time of our study, that the methods for constructing authentic-enough Instagram followers are well-established across the engagements purchased for this experiment. The overall shared quality of the purchased followers is significant, thus allowing us to hypothesize that there is a community of practice that quickly comes to shape stable forms of disinformation in the face of Instagram’s authenticity governance. The results from our previous two cases support the idea that there is a constant back-and-forth between social media platforms and the engagement market.
A limitation of our approach is that we cannot focus the collection of data around an issue or event, such as an election. Our approach could complement social bot studies, however, since collecting data from purchased engagements offers a “ground truth” to inform further machine learning models (Gorwa & Guilbeault, 2020). In addition, while moderation methods change over time, the engagement market appears able to respond quickly to the authenticity governance of social media platforms. To grapple with this volatile, rapidly evolving market, engagement services can serve as an entry point to collect training sets for machine learning models to detect inauthentic behavior.
Conclusion: The Shifting Field of Authenticity and Disinformation
The article provides a methodological outlook to help researchers explore the social and economic significance of engagement services in relation to platform ecosystems. The relational approach makes visible both the parts of the engagement market that are obfuscated by platform architecture and the platform moderation practices that are concealed from users and the public by design. These forms of methodological visibility should be distinguished from the widely discussed forms of algorithmic and platform visibility that social media users vie for. Foregrounding relations between the engagement market and platform moderation enables a shift of perspective that offers significant empirical and conceptual contributions to the field of disinformation studies. More specifically, it highlights how engagements as disinformation should be conceptualized in processual terms, as unstable entities that are negotiated and take shape at the interface between the engagement market and platform moderation.
With regard to the theme of this special issue, in this article disinformation-for-hire actors function as a methodological entry point, and indeed a starting point for making visible authenticity governance and the process of producing and distributing social media engagements. Importantly, through their everyday work, engagement market actors become experts in interpreting and developing practices in response to authenticity governance. Through experimentation, tinkering, gossip, collaboration, and competition across uneven transnational networks, these diverse actors are our critical interlocutors, helping us develop an understanding not only of how disinformation is shaped in practice and how social media platforms regulate it, but also of further research questions that demand methodological innovation. While there is more to write about the identities, work relations, and ethical perspectives of disinformation-for-hire actors (see Lindquist, 2022), their practices and insights offer guidance for understanding how disinformation is shaped in practice.
In this process, we have developed three case studies that offer complementary research opportunities. The first, engagement market responses to authenticity governance, allows for the analysis of relationships between social media engagement providers, resellers, and platforms through ethnographic interviews and observations. Importantly, this moves beyond earlier research that has focused attention on either social media platforms or content creators or the relationships between them (e.g., Bishop, 2019; O’Meara, 2019; Petre et al., 2019). While this research has shown how social media users interpret platform algorithms to improve visibility, our focus on the relationship between disinformation-for-hire actors and platforms allows us to describe the interface between the production, distribution, and regulation of disinformation. The second approach describes the evolution of engagement services in relation to particular social media platforms; these service evolutions provide an entry point for studying the effectiveness of authenticity governance. The third approach, testing the quality of engagement as a service on Instagram, analyzes the quality and behavior of engagement services in the platform environment. Together, the three approaches and entry points offer a mixed-methods approach to the study of social media engagements as a form of disinformation.
These three approaches reveal that taking authenticity as a starting point for understanding disinformation is valuable for at least four reasons. First, it draws attention to the production of engagement metrics that are typically invisible to social media platform users and stakeholders. More specifically, it allows us to describe how engagements take shape over time. As we have shown, this form of disinformation is generally a moving target and is produced through indirect negotiations between engagement market actors and social media platforms. Importantly, the quality of these engagements and the accounts associated with them appear to be improving over time in response to authenticity governance. This perspective, which highlights the importance of focusing on the production of disinformation as a process, a site of negotiation, and technical innovation between diverse actors over time, provides a useful methodological outlook for the study of the production (Ong, 2020), circulation (Bounegru et al., 2018), and political economy of disinformation (Hirst, 2017; Ong, 2022), and, more generally, makes a significant contribution to disinformation studies.
Second, these approaches help explain how the engagement market creates value within platform economies and user visibility within algorithmic platforms (Bucher, 2012; Gerlitz & Lury, 2014; Petre et al., 2019). There are real incentives for inflating social media metrics. Football superstar Cristiano Ronaldo, with nearly 600 million Instagram followers, makes up to one million USD per advertised Instagram post (Leighton, 2020). 10 The field of influence goes far beyond major celebrities, however, as around 200 million Instagram users reportedly have more than 50,000 followers, the level at which it is considered possible to make a living through posting for brands (Frier, 2020, pp. xvii–xviii). The three cases illustrate the key objects around which platforms build their economies in specific ways (e.g., follows, likes, and views), as well as which platforms are the most lucrative for inflating metrics (i.e., Instagram, followed by YouTube, Facebook, and Spotify), thus highlighting the importance of being attentive to how value and user visibility are shaped in diverse ways across platforms. While our finding that the engagement market is widespread is not surprising, the diversity, the targeted nature (in terms of nationality, for instance), and the (“human-like”) quality of engagement services suggest a sophisticated disinformation market that may, like manipulative social bots, be difficult to quell (Gorwa & Guilbeault, 2020).
Third, our focus on authenticity governance reveals the emerging layered governance relationships that manifest themselves in platform ecosystems. These relationships can be analyzed to determine how the governance of infrastructural platforms like Meta shapes the larger platform ecosystem (Plantin et al., 2018; van der Vlist, 2022; van Dijck et al., 2018), including hidden parts on the fringes. Through our mixed-methods approach, we were able to make visible the resources and entry points to study activity on the fringes of the platform ecosystem. We identified entry points to this layered governance, which consists of a series of connected authenticity policies, community guidelines, and (gossip about) moderation methods. The three case studies and approaches help in dealing with the many hidden layers and interrelated components of the contemporary platform ecosystem and how governance manifests itself in this ecosystem (van der Vlist, 2022; van Dijck et al., 2018).
Finally, the article contributes to efforts to critically engage with moderation methods that are anchored in slippery authenticity policies and enhances our understanding of the practical implications of authenticity governance. Scholars often inherit vocabulary and concepts from social media platforms that impact how phenomena are perceived, recognized, and investigated. In line with policy communication requirements, these terms are frequently created by platforms themselves and can thus come to shape research questions and agendas (cf. Gillespie, 2010). Authenticity, for instance, has a significant impact on how disinformation is understood. Focusing on the authenticity of behavior rather than the veracity of information is arguably less contentious politically and more effective in communicating with users and the general public. For scholars, this has methodological implications, drawing attention to the actions of accounts in contrast to determining the truthfulness of content, which necessitates a shift in techniques from fact-checking and source evaluation to account metrics and performance evaluation. Such critical engagement with platform moderation methods is all the more pressing in a post-truth era (cf. Egelhofer & Lecheler, 2019). Researchers, investigative journalists, and government officials are increasingly concerned with the various challenges posed by disinformation as well as the role and accountability of platforms in this space. To advance independent research, we need to engage critically with platforms and develop concepts and methods that are open to scrutiny, properly described, and an inspiration to other researchers.
Footnotes
Acknowledgements
We would like to thank the editors of this special issue, Rafael Grohmann and Jonathan Ong, as well as the anonymous reviewers for their useful feedback and suggestions. We thank Yantri Dewi for her research assistance in Indonesia. We would also like to thank all participants and designers of the “good enough followers” project at the Digital Methods Summer School 2020, which forms the basis for Case 3: Alexander Roidl, Casey Boyle, Minttu Tikka, Janna Joceli Omena, Elena Pilipets, Scott Springfeldt, Alessandra Facchin, and Giovanni Lombardi. Finally, we would like to thank Carlo De Gaetano for the design of Figure 1 and Alessandra Facchin and Giovanni Lombardi for the design of Figure 2.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article. For Weltevrede: Nederlandse Organisatie voor Wetenschappelijk Onderzoek VI. Veni.191 C.048. For Lindquist: Vetenskapsrådet 2017-02937.
