Abstract
Some of the most important impacts of social media on social movement organizations come not through the new forms of digital speech that social media enables, but through new forms of digital listening: the large-scale collection and analysis of digital trace data, which is reshaping how these organizations learn, strategize, and make difficult choices.
Introduction
Social media is often treated as a replacement for, or a competitor to, well-organized social movement organizations (SMOs; Bennett & Segerberg, 2013; Earl & Kimport, 2011; Margetts, John, Hale, & Yasseri, 2015). The lowered transaction costs and ubiquitous connectivity of online communications tools create the potential for any tweet, post, video, or digital petition to “go viral” and become the rallying cry of a digitally enabled social movement. Yet, this focus on new forms of digital speech has obscured a range of equally important developments for social movement activity: the emergence of new forms of digital listening.
SMOs can listen online in ways that networks, crowds, masses, and everyday citizens cannot. A hashtag can turn viral and help ignite a movement. But a hashtag cannot run an experiment or mobilize its supporters over the course of months and years. Crowds can take concerted, collective action online. But they cannot deliberate, make hard choices, or evaluate and learn from their failures. Organizations, by contrast, can incorporate digital trace data into their strategic and tactical deliberations, leading to novel strategies that help them adapt to the affordances of what Chadwick (2013) has termed the “hybrid media system.” Organizations can learn and strategize; crowds and masses cannot. So the new data environment offers a particular set of implications for how SMOs learn, adapt, and make difficult choices in the digital age.
By “digital listening,” I am referring, in particular, to the collection and analysis of online behavioral data. This includes both indirect listening, in which SMOs rely on third-party social media analytics to identify potential supporters or track emerging issue publics, and direct listening, in which SMOs collect data on their own interactions with online supporters and use these data to improve tactics and decision-making. Direct forms of digital listening allow for experimentation: an organization can test its tactics against the observed behavior of its own supporters and refine them over time.
This article defines and describes analytic activism, discussing its central features while also highlighting the boundaries and constraints that define its current limits. To be clear, I do not make the claim that analytic activism is a feature of all instances of online collective action. It does not even describe all forms of organizationally enabled online activism. Instead, it describes a particular constellation of online activist practices, most frequently associated with large-scale “netroots” advocacy associations that tend to prioritize progressive, reformist policy objectives (Karpf, 2013). It primarily signals a change in how SMOs deliberate, learn, and strategize. These analytic activist organizations are worthy of analysis due to their size, prominence, and relative political effectiveness. They are treated as innovators within networks of SMO professionals, and thus, there is good reason to expect their organizational practices to be adopted over time both by established and by newly formed SMOs. The article thus also elaborates two key boundaries, which I term the “analytics floor” and the “analytics frontier,” which I expect will define the limitations of this growing style of digitally informed activism. The analytics floor limits the applicability of analytic activism to small-scale organizations and issue areas. The analytics frontier constrains digital listening to the set of digital traces that can be reliably linked to key organizational goals and, theoretically, limits how analytic activists define, measure, and build their own power. Digital listening does not only change how organizations listen to their supporters and reach hard decisions, but it also creates new path dependencies that lock digital SMOs into particular ways of viewing the political systems they are seeking to change.
The article proceeds through three sections. It begins by defining analytic activism and contrasting it to the existing literature on online collective action. It then turns to the analytics floor and analytics frontier, devoting a section to each boundary condition and offering a critical case example to animate the discussion of each. The article concludes by highlighting the features, events, and challenges that researchers can access by focusing on digital listening that remain obscured by the existing focus on digital speech.
What Is Analytic Activism?
Analytic activism converts digital trace data (analytics) into strategic objects: inputs that inform organizational deliberation, tactical experimentation, and campaign decision-making. Three features distinguish this style of activism.
First, analytic activism embraces a culture of testing. Rather than relying solely on intuition or established practice, analytic activist organizations routinely run experiments to evaluate which tactics and messages perform best with their supporters.
Second, analytic activism prioritizes digital listening. The behavioral data generated by supporters serve as a form of passive feedback, helping organizations gauge member sentiment and priorities.
It is worth noting that the use of digital listening can carry two competing connotations. It can suggest responsive governance (“your concerns are being heard”) or it can imply a foreboding, panoptic system of control (“They can hear everything you do”).
Third, analytic activism requires organization. Testing and listening at scale depend on staff capacity, technical infrastructure, and sustained decision-making routines that loose networks and crowds cannot maintain.
One implication of these three features of analytic activism is that it is distinct from the crowd-based “connective action” that Bennett and Segerberg (2013) describe. Networked crowds can speak at tremendous scale, but they lack the durable routines needed to test, listen, and learn over time.
Similarly, analytic activism is distinct from radical “horizontalist” activist movements and from “hacktivist” networks. The horizontalist tradition, which has experienced a resurgence in recent years through Occupy Wall Street (OWS) and worldwide “movements of the squares,” has a deep pedigree that predates the rise of digital communication networks. Both radical anarchist activists and the Yippies of the 1960s have long preached and (to at least some degree) practiced horizontalism. As Paolo Gerbaudo (2012) has demonstrated, leadership in these (primarily offline) horizontalist movements takes the form of “activist choreography,” wherein key actors within activist networks employ social media to softly help coordinate activist activity and influence the direction of their radical movements. This is an important development in digital activism, but it is distinct from what I would term analytic activism. OWS and similar activist movements lack the culture of testing that we see in organizations like SumOfUs.org, Change.org, and MoveOn.org. Their lack of vertical leadership structure alters how they strategize, how they listen, and how they learn.
The hacktivists of Anonymous and LulzSec, meanwhile, are (obviously) deeply engaged with digital media and technology. But they are not focused on converting data and analytics into outputs that help them craft media interventions to move forward a specific political agenda. They are instead focused on directly exploiting vulnerabilities in software and hardware to create power outside the traditional boundaries of politics (Coleman, 2014). Hacktivism is a distinct phenomenon, which attracts different players with different skill sets, norms, beliefs, and goals. It deserves (and receives) detailed treatment and attention in its own right.
Analytic activism produces new repertoires of activist practice, and it changes the process through which those repertoires develop.
The theory of analytic activism thus embraces the broader shift in the political communication literature toward actor–network theory (Anderson, 2013; Baldwin-Philippi, 2015; Kreiss, 2012, 2016). While the existing literature on repertoire development has emphasized broad concepts like “scale shift” and “innovation and counter-innovation” to help us understand how social movement practices evolve (Tarrow, 2011), actor–network theory pushes scholars to consider how actors, actants, and networks shape and are shaped by one another over time. Analytics-based listening and experimentation produce different feedback loops and resultant organizational behavior, which in turn results in new activist strategies that further empower the actors who are situated to make use of data and analytics.
Two Illustrative Examples
As an example of how analytics affect the strategic choices of activist organizations, consider MoveOn.org’s 1 February 2008 endorsement of presidential candidate Barack Obama. This endorsement occurred well before it was clear who the Democratic nominee would be and well before most progressive political organizations made an endorsement (Karpf, 2009). MoveOn’s Field Director at that time told me in a phone interview that the endorsement decision was a “very scary moment” for the organization, adding, “No one thought Obama was going to win.” 1 The timing of the endorsement was based on MoveOn’s digital listening routines. MoveOn conducts weekly online surveys of random samples of its membership. In 2007 and 2008, the survey included a question about whether MoveOn should make a presidential endorsement. When John Edwards dropped out after the New Hampshire primary, campaigners noticed a major shift in member response to this question. They interpreted this as a strong signal that the membership wanted to make an endorsement prior to the “Super Tuesday” wave of primaries. Since these weekly surveys are a strategic object in MoveOn’s staff meetings, they were already scheduled as part of the next conference call on strategic direction-setting. The staff decided to put the matter to a membership-wide vote. On 30 January, MoveOn asked its membership to vote on whether it should make an endorsement in the primary. Within 24 hr, they had their result. A supermajority of 70.4% said that MoveOn should endorse and the endorsement should go to Barack Obama. MoveOn announced its endorsement on 1 February 2008.
By comparison, the Sierra Club waited until mid-June 2008 to make its presidential endorsement, long after it had become clear that Barack Obama would be the nominee. I asked Sierra Club officials during their 20-23 February Board meeting, 3 weeks after MoveOn had endorsed, whether they were considering a primary endorsement. Both senior staff and Board members offered multiple reasons why the organization simply could not consider an endorsement yet. The most prominent was that the organization’s established procedures for consulting its membership and volunteer leadership could not be completed on such a short timeline.
Both the Sierra Club and MoveOn value member input when making important strategic decisions. Both organizations had established processes for gathering this input and incorporating it into the strategic debate. The two organizations have similar values but very different listening infrastructures: MoveOn’s digital listening routines allowed it to register a shift in member sentiment, put the question to a vote, and act within days, while the Sierra Club’s slower, offline consultation processes could not match that pace.
Analytic activism also produces marked improvements at the tactical, rather than strategic, level. A specific story about the “culture of testing” has repeatedly been told at activist workshops and trainings for the past several years. The story begins with a fundraising experiment. MoveOn staff wondered whether their members would respond to zip-code-based donation targets. Rather than sending national email blasts that include a text box saying (for example), “We need to raise $250,000 in the next 24 hours,” MoveOn’s analytics director wondered whether their members would be more motivated by a text box saying, “We need to raise $2,500 from [your city].” MoveOn tested the zip-code-based donation frame and, indeed, found a statistically significant increase in donation rates. The group did not know why the zip-code frame performed better; it only knew, with statistical confidence, that it did.
Within a few months, many of MoveOn’s peer organizations had caught on to this new wrinkle and adopted the same language. This led the data team at MoveOn to wonder whether the spread of this language had altered its impact. So, they re-ran the experiment to see whether the results held up. They did not. The zip-code-based fundraising goals turned out to have had a short-run novelty value, and nothing more.
For analytic activist organizations, the null result from this retest is a small triumph, rather than a disappointment. The individual outcomes of A/B tests and analytics reports are far less important than the data-driven learning routines that analytic activists use to determine how they can operate most effectively. The results of a single experiment are unlikely to revolutionize an activist campaign. The habit of testing, by contrast, allows the organization to ask new questions, challenge existing assumptions about tactical effectiveness, and confront hard questions about its own underlying theory-of-change.
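The statistical logic behind this kind of email experiment can be sketched briefly. The figures below are hypothetical (MoveOn’s actual data and tooling are not public), and the two-proportion z-test shown here is one standard way such A/B results are evaluated, not necessarily the exact method any given organization uses:

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical test: national frame vs. zip-code frame, 20,000 recipients each,
# converting at 2.0% and 2.5% respectively. The half-point lift is detectable.
z, p = two_proportion_z_test(400, 20_000, 500, 20_000)

# The same underlying rates on a 500-person list (250 per condition)
# produce a result that is statistically indistinguishable from chance.
z_small, p_small = two_proportion_z_test(5, 250, 6, 250)
```

The second call illustrates why list size matters so much for the culture of testing: the identical tactical difference simply cannot be detected on a small list.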
The data-driven turn among analytic activist organizations is accompanied by a new set of limitations. Practitioners of this new data-informed activism frequently invoke a phrase coined by behavioral economist Dan Ariely (2010), “You are what you measure.” By this, they mean that the data an organization chooses to collect come to define what it treats as success: the metrics shape the priorities, and what is easily measured can crowd out what actually matters.
The Analytics Floor
The analytics floor is the threshold below which an SMO cannot practically make use of digital trace data to inform and enhance its strategic work. The value of analytics increases alongside the scale of an activist organization. Large SMOs—in particular the “Internet-mediated issue generalists” that typify progressive netroots SMOs in the United States, Australia, and much of Europe (Karpf, 2013)—are better situated to make use of internal analytics than smaller SMOs that specialize in a single issue niche. 2 Conceptually, the analytics floor limits the spread of analytic activism to (1) SMOs based in countries with large national populations and (2) SMOs with massive membership lists.
Consider: if you have an email list of 5 million people or a website that receives 500,000 visits per day, you can run routine experiments (generally called A/B tests) on random subsets of your list and then apply those lessons to the rest of the supporter base. If you have an email list of 500 or a website that receives 50 visits per day, then A/B testing will not provide statistically significant results within a useful time frame. The presence of the analytics floor creates a drive toward “growthiness” (defined as a focus on tactics that help to expand the organization’s total list size), which in turn can exert pressure that steers activist organizations away from their mission.
Three variables determine the exact dimensions of the analytics floor: (1) baseline conversion rate, (2) list/audience size, and (3) minimum detectable effect (MDE).
For the purpose of illumination, imagine an advocacy group seeking to raise money from small donors. Baseline conversion rate is the rate at which members currently respond to an average fundraising email. List/audience size is the total population that you can sample from. MDE is the smallest improvement over the baseline rate that the group’s test could reliably detect and, practically speaking, the smallest improvement the group would bother acting upon.
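These three variables combine in a conventional power calculation. The sketch below uses the standard approximation for a two-proportion test at a two-sided alpha of .05 with 80% power; the example figures are hypothetical:

```python
import math

def required_sample_per_arm(baseline, mde, z_alpha=1.96, z_beta=0.84):
    """Approximate recipients needed in each test condition to detect `mde`
    (an absolute lift over `baseline`) at alpha=0.05 two-sided, 80% power."""
    p1, p2 = baseline, baseline + mde
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)

# A group with a 2% baseline donation rate that wants to detect a
# half-percentage-point lift needs roughly 14,000 recipients per condition.
n = required_sample_per_arm(0.02, 0.005)
```

A 500-member list cannot come close to running this test; a 5-million-member list can run it on a small random subset and apply the lesson to everyone else. This arithmetic is the analytics floor in miniature.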
As Kevin Collins, Research Director of the Analyst Institute, explains it in the context of electoral campaigning,
For optimization to be actionable, you not only have to have results with sufficiently small standard errors, you have to have a large population into which you’re drawing a sample. If my 10,000 voter contacts represent the most responsive half of my district, they will be less effective on average than if they represent the most responsive 10% of my district. This is true for both experimental analytics and targeting with observational models, and is probably the harder constraint on the analytics floor.
His key point here is that there is a population constraint as well as a statistical one: an organization needs not only enough supporters to produce significant test results but also a large enough remaining audience to make acting on those results worthwhile.
There are two broad implications of the analytics floor. The first implication is that analytic activism is of limited value to organizations whose maximum size is naturally limited. The country of New Zealand, for instance, has a netroots organization called ActionStation. ActionStation collaborates with similar netroots organizations like MoveOn, Britain’s 38 Degrees, Germany’s Campact, and Australia’s GetUp through an international networked convening organization called OPEN (Online Progressive Engagement Network; Karpf, 2013). Most of the organizations in the OPEN network have a membership exceeding 1% of the national populace. For a country like the United States, that threshold produces a multi-million person member list. For ActionStation, given New Zealand’s national population of only 4.75 million, the 1% threshold would be exceeded by a 50,000-person member list. From a cross-national perspective, analytic activism is less valuable in small countries like New Zealand than in large countries like the United States. Similarly, regional, state, and local organizations within a single country cannot simply replicate the analytic activist work routines found in national and international organizations.
The second implication is that organizations that practice analytic activism face a structural incentive toward growthiness: campaign topics and tactics come to be judged partly by their capacity to attract new list members, rather than solely by their strategic value.
Growthy petitions tend to resonate with the current news cycle. In the aftermath of a mass shooting or an incident of police brutality, condemnations of these acts tend to perform better than they would otherwise. In the midst of viral online moments like the Ice Bucket Challenge, advocacy organizations commonly link their communications to that viral moment in order to increase their potential growthiness. But it is also the case that some topics and frames simply have more growth potential than others (Berger & Milkman, 2012; Critchfield, 2013). Campaign finance reform tends not to be a big growth engine, nor do stories of corporate malfeasance. Charismatic megafauna (big, cool animals like polar bears and wolves) tend to perform well, while agriculture policy does not. 6
Growthiness can be a deceptive metric, though. The problem is that not all potential members have equal value. If your organization works on tax policy, but you run a petition about the new Star Wars movie trailer, then the new members who join through that petition are likely to delete, spam-filter, or unsubscribe from your detailed reports about the estate tax. If your organization is looking for members who will walk door-to-door in Minnesota in January, then signing up new members through an offer of free bumper stickers and visors will be a weak starting point.
Different activist organizations subscribe to different membership philosophies. Some activist groups primarily focus on aggressive, staff-driven campaigning through creative media and elite pressure tactics. These groups primarily look to their membership for donations. Others primarily mobilize large numbers of supporters to take part in low-bar actions. Still others try to develop volunteer leadership capacity and seek to build power through distributed, volunteer-led activities. Advocacy organizations do not simply want massive lists of one-time supporters who never take another action. They want to develop larger membership bases that deepen over time, converting one-time petition signers into repeat donors, volunteers, and local leaders.
One issue with undirected list growth is that it can hollow out response rates. A second issue with acquiring a mass of new members through an issue unrelated to the organization’s core mission is that it can then skew the analytics reports that the organization relies upon for passive membership feedback. This problem is discussed among netroots practitioners through an intriguing metaphor: “The shrimp that we eat.” The phrase comes from the fact that pink flamingos are not born with their colorful plumage; their feathers take on their distinctive color because of the food supply—the particular shrimp that they eat. For digital advocacy organizations, “the shrimp we eat” is a reference to the issue topics that helped build their membership rolls. If a group initially built its list around a forestry campaign, then it can expect forest-related actions to perform particularly well with the list and (for instance) civil rights–related actions to fare less well. If we conceptualize an SMO’s internal analytics as a type of listening, then the list-building that occurred through past campaigns behaves like an acoustic chamber, naturally amplifying some sounds and dampening others. It follows that, as SMOs embrace digital listening, both the organization’s history with activist campaigns and the types of data that organization has chosen to track take on new importance in determining how it will respond to future problems and opportunities.
So in response to the analytics floor, growth is both an imperative for organizations that wish to take advantage of analytic activist techniques and a risk that can steer an activist organization away from its core mission. Intentional list growth can yield increasing value from digital listening, but runaway growthiness can produce an unresponsive list that further weakens digital activist campaign techniques. The challenges posed by the analytics floor cannot simply be solved through mass email acquisition. And organizations that manage to build their way above the analytics floor still confront the second boundary condition: the analytics frontier.
The Analytics Frontier
If the analytics floor defines a lower boundary for the use of analytics in political campaigning, then we can also conceive of the outer boundary as comprising an analytics frontier: the limit beyond which digital trace data cannot yet be reliably linked to the outcomes that an organization ultimately cares about.
The analytics frontier is rooted in the electoral origins of most politically focused analytics programs. Practices such as A/B testing and “computational management” played a prominent role in the 2008 Obama campaign (Kreiss, 2012), and the success of that campaign led to the birth of several prominent political technology organizations that offer analytics and testing services to electoral and advocacy campaigns (Kreiss & Jasinski, 2016). As a result, the most sophisticated systems for gathering and analyzing digital trace data have been focused on the types of outcomes that electoral campaigns rely upon. And this is a problem because electoral campaigns are far less complex than activist campaigns.
Electoral campaigns have a clearly specified victory condition (in the United States, the winner receives a plurality of the vote; in parliamentary democracies, the winner is the coalition of parties that combine to hold a majority of seats) and a set end date when the election will take place. US electoral campaigns seek to raise funds from supporters (a mix of small donors and large donors) and use those funds to support a massive campaign “assemblage” (Nielsen, 2012). While we can identify meaningful differences in how campaigns use information technology to achieve these goals (Hersh, 2015; Issenberg, 2012; Kreiss, 2012; Stromer-Galley, 2014), in how they engage supporters (Alexander, 2010), in where their funding comes from (La Raja, 2013), and in what policies they pursue, the fixed endpoints and rhythms of the electoral campaign environment enforce a good deal of similarity among all campaigns. Campaigns knock on doors and purchase media advertisements. They do not hold sit-ins or compose poetry. We know when an electoral campaign has ended, and we can evaluate it according to well-established metrics to determine how it did. This is all a blessing for electoral analytics and testing because campaigners know which variables to track and when.
Activism, by comparison, is hobbled by the undefined qualities of its underlying mission. How do we know when activism has succeeded? How does an advocacy campaign know that it has reached its end goal?
Consider, for instance, OWS. Was OWS a success or a failure? The answer depends entirely on the victory condition one applies. As an effort to change the national conversation around economic inequality, it plainly succeeded; as an effort to win concrete policy changes or build a durable organization, it plainly did not. No metric can settle the question, because the movement itself never agreed on one.
As another example, consider the work of global climate activists. What constitutes winning for climate activism? Is it passage of an international climate treaty? National climate legislation? Shutting down a local coal-fired power plant? Passing a campus or town ordinance that promotes clean energy? Undoing the worst excesses of global capitalism by replacing the current energy extraction industry with one that promotes green jobs and climate justice? Some of the toughest strategic debates within the climate movement revolve around these very questions (Hadden, 2015). There is no fixed endpoint at which the climate movement has either won or lost. There is no simple victory condition like 50.1% of the electorate. Even for legislative victories and climate treaties, some climate activists would firmly argue that the proposals pushed forward because they are politically feasible are not nearly strong enough to achieve the carbon reductions that climate scientists tell us are ultimately necessary (Skocpol, 2013). In the multi-year activist campaign to defeat the Keystone XL pipeline, there was even a long-running strategic debate over whether the pipeline was the right target for climate activism (Roberts, 2015).
Without a clear endpoint and a clear victory condition, the analytics that have been developed for electoral campaigns are of limited utility to social movements and activist organizations. An activist organization can use analytics to measure fundraising, or membership growth, or calls to Congress, or media coverage. But none of these metrics is clearly linked to winning or success because the very definition of victory is itself subject to intense debate. As with the problem of growthiness discussed in relation to the analytics floor, the electoral origin of most analytics programs raises a specific danger: An over-reliance on analytics can potentially lead to a bloodless style of activism that generates large-but-hollow numbers. By focusing on the types of analytics that have been developed for electoral campaigns, activist organizations run the risk of prioritizing the outcomes that are easily measurable over the outcomes that align with their strategic mission and vision. The converse of the well-worn motto “you are what you measure” is that you probably are not what you fail to measure.
The simplest online interactions tend to be the ones that are most amenable to analytics. Tracking clicks and shares is easy. Tracking conversations is a bit trickier. Tracking online-to-offline participation is still quite hard. Tracking impacts on elite decision-makers is nearly impossible. The more complex the task, the fewer people will engage in it and the more variables you need to simultaneously account for. Think of the analytics frontier like an old map from a bygone era: It can be extended further with time and effort. But until then, it defines the limits beyond which we can only scrawl “here there be dragons.”
Two conceptual distinctions further complicate the analytics frontier. First is the difference between mobilizing and organizing. Mobilizing asks supporters to take part in actions that an organization has already designed, such as signing a petition, making a donation, or placing a phone call. Organizing, by contrast, invests in developing supporters’ own capacity to lead, deliberate, and strategize. The analytics toolkit available for mobilizing is far more developed than the one available for organizing.
Micah Sifry (2013) argues that this represents a major problem, limiting the long-term effectiveness of digital activist groups. In a 2013 article titled “You Can’t A/B Test Your Response to Syria,” Sifry writes,
It’s really striking that a decade into the emergence of online political organizing, there is still no commonly accepted and easy-to-use tool that would enable groups to conduct large-scale debate and deliberation aimed at producing a common pro-active policy on anything—despite the fact that collectively these groups have millions of email addresses and, at least in theory, the resources to put towards the problem. (It’s not for nothing, after all, that the Internet is much better at saying “stop” than it is at saying “go.”)
Sifry is not arguing that analytics and listening are anathema to strong digital activism. He is instead suggesting that the analytics developed for mobilization are more robust and well developed than the analytics for organizing. There is a real danger that in attempting to “listen to the data,” the current wave of analytic activist organizations will become overly fixated on the (mobilization) data that speak the loudest and clearest. The analytics frontier is defined by efforts to expand analytic activism into these more-challenging realms of organizing, mass conversation, and deliberation.
The second key distinction is between organizing and campaigning. As Taren Stinebrickner-Kauffman, Executive Director of SumOfUs.org, puts it,
Campaigners are different from organizers. The fundamental mission of an organizer is to empower other people to create change. The fundamental mission of a campaigner, though, is to set their sights on a particular change they want to create in the world, and then go out and make it happen, whatever it takes. If that happens to involve empowering people along the way, then that’s great. But if you can make that change by having drinks with the nephew of a Senator, so be it. (Stinebrickner-Kauffman, 2013)
Netroots political organizations are mostly constructed for campaigning and mobilizing, not organizing. The bias toward campaigning is not an immutable element of analytic activism, but it does help to define the shape of the current analytics frontier.
The measures of effective campaigning are less firm and precise than the measures of effective mobilizing. We can directly measure whether an SMO’s actions created a lot of noise, generated a lot of signatures, or raised a lot of money. But we cannot directly observe what a targeted decision-maker would do in the absence of an activist campaign (or, more importantly, in reaction to a different series of strategic interventions from an activist campaign). So campaigners are forced to rely on secondary indicators of campaign influence. The analytics frontier forces us to focus attention on key concepts like movement power and strategy. As Stinebrickner-Kauffman puts it, “Who cares if you can get more people to make phone calls by picking the best subject line in your email if you don’t even know if the phone calls have an impact?”
Measuring organizing is even more complicated. How large and committed is the core leadership team? How well have they learned critical organizational skills? Are people becoming more deeply committed to the organization, and are they drawing upon their creative and organizational resources to participate in strategic deliberations? Organizing is a craft; it is more art than science. As a result, the measures of successful organizing are more elusive than the measures of successful campaigning or successful mobilizing. As organizations commit themselves to “listening to the data,” they face the inevitably hard problem that there are far more plentiful data related to mobilizing than there are to campaigning or organizing.
For both the leading academics and the leadership of political associations, the analytics frontier raises questions of depth, power, and effectiveness that represent a vexing challenge, the tough puzzle that they continually attempt to solve. Analytic activist organizations are constantly working to expand into the analytics frontier, developing new metrics of influence and asking hard questions about whether they are gathering the right digital traces, and using them in the right way. Academic researchers, likewise, must treat the ongoing development of new metrics for power and influence as a topic worthy of analysis in its own right. Rather than establishing ex ante our own models of power and success based either on past literature or on the accessibility of large data sets amenable to statistical analysis, researchers would be better off investigating how SMOs are defining and measuring their outcomes, and assessing the powerful, complex, and limiting data analysis practices that are being fashioned toward this end.
Conclusion
The purpose of this article has neither been to decisively prove the value of analytic activism nor to level a broadside critique of its limitations. My intention instead has been to illustrate a few of the important ways that digital listening, as represented by an organizational focus on analytics, changes the conduct of SMOs in the present digital moment. Digital listening is something less than the robust online conversations that the Habermasians among us likely hope for. But it is also something much greater than the staid interest group practices that have been a feature of the advocacy landscape for the past several decades. And, most importantly, it is something far different from the disorganized digital speech that has been the near-unanimous focus of academic research on Internet activism.
Both the analytics floor and analytics frontier are theoretical constructs, rather than specific empirical point predictions. The floor and the frontier define the current limitations of analytic activism. The leading analytic activist organizations, however, are actively working to find routes around the analytics floor and paths that traverse new terrain in the analytics frontier. Analytic activism is still in the process of being created. The goal of this article, consistent with the theme of “Social Media and Organizations,” has been to bring these subjects to the attention of the research community in the hopes of attracting further scrutiny and analysis. Activist organizations make use of social media both for digital speech and for digital listening. While speech can be analyzed through tools like content analysis and computational social science, listening requires more time-worn techniques like qualitative field research and elite interviews. Digital listening is a critical component of large-scale 21st-century SMOs. But the influence of digital listening is easy to miss unless we take the time to look for it.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
