Abstract
Despite the increasing sums devoted to online political advertising, our understanding of the persuasive effects of such advertising is limited. We report the results of a ZIP code level randomized field experiment conducted over Facebook and Instagram during the 2018 U.S. midterm elections in Florida. The ads, produced by a Democratic-leaning political action committee, were designed to spur Democratic vote share and were seen more than 1.1 million times with over 100,000 full views. This wide saturation notwithstanding, we find that these advertisements had very small estimated effects on Democratic vote share at the precinct level (−0.04 percentage points, SE: 0.85 points). Our results underline the challenges of political persuasion via digital advertisements, particularly in competitive electoral contexts.
While the study of political advertising has been largely focused on televised advertising, television is not the only means by which campaigns deliver messages. During the 2020 election, nearly a quarter of all political advertising spending was devoted to digital advertising (Media Project, 2021). At one point during the 2020 Democratic primaries, the leading candidates were spending more on Facebook ads than television ads (Goldmacher and Bui, 2019). Indeed, the amount spent on digital advertising appears to be increasing with each passing election cycle (Homonoff, 2020).
The amount spent on digital advertising raises an obvious question: Do social media advertisements affect vote choice? Campaigns—particularly winning ones—seem to think so. Brad Parscale, Donald Trump’s 2016 campaign manager, credited Facebook advertisements for Trump’s victory (Beckett, 2017). From a campaign perspective, the allure of social media ads is considerable. As they allow for more fine-grained targeting and do not require large start-up costs, Facebook ads are used by a broader set of campaigns than traditional ads and appear especially appealing to down-ballot candidates (Fowler et al., 2021).
Direct evidence attesting to the influence of digital advertisement on election outcomes, however, is hard to come by. A recent study conducted in Germany suggests that such ads can motivate vote choice in the intended direction (Hager, 2019), though the estimates in that study are not statistically distinguishable from zero at conventional levels. In the U.S., previous work has relied on randomized exposure to ads, with effects on candidate preferences measured via follow-up phone surveys ostensibly unrelated to the ads (Broockman and Green, 2014; Turitto et al., 2014). Other studies have examined the effects of geographically targeted Facebook advertisements on voter turnout in 2012 and 2013, obtaining weakly negative and insignificant point estimates (Collins et al., 2014).
In this paper, we add to this small body of existing evidence a study that evaluates the effects of persuasive ads deployed via Facebook and Instagram on vote choice in the United States. In the weeks immediately prior to the 2018 elections, in partnership with a Democratic-leaning political action committee, we randomized exposure to advertisements on Facebook and Instagram at the ZIP code level in Florida. While the partner organization’s advertisements were meant to help elect Democrats, neither of the ads explicitly mentioned a specific candidate or campaign.
Instead, both ads focused on the virtues of the Democratic Party and the deficiencies of the Republican Party. In this way, they reflect broader trends in Facebook ads, which tend to be more partisan than television ads (Fowler et al., 2021). The tested ads were also quite similar to the non-candidate specific, issue-focused ads that have recently grown in prominence because of changes to campaign finance regulations (Persily et al., 2018). Together, the tested ads accumulated more than 1.1 million impressions. Following a pre-registered, randomized field experimental design similar in spirit to Arceneaux (2005) and Hager (2019), we measured the impact of the ads on Democratic vote share at the precinct level.
Our findings echo the emerging consensus that persuasive political messaging often has limited effects in general elections (Coppock et al., 2020; Kalla and Broockman, 2018) and that any such effects dissipate rapidly over time (Gerber et al., 2011; Hill et al., 2013). There are of course exceptions (e.g., Spenkuch et al., 2018), but ours is not one of them. Our estimate of the effect of the ads on vote share is substantively small and estimated with reasonably high precision. Our results underscore the challenge faced by groups trying to influence vote choice online. Political persuasion is hard, and social media advertisements do not necessarily make it any easier.
The paper is structured as follows. We begin by reviewing the available evidence on the effectiveness of political messaging and clarify our contribution. We then turn to describing the experimental design, as well as the context in which the experiment was administered. As we discuss, our design allows us to estimate a crucial quantity: the effect of the ads not on feelings or intentions, but on vote choice itself. We present our results and contextualize our estimates through a meta-analysis of previous studies.
The challenge of political persuasion
Our goal is to understand the extent to which digital political advertising affects vote choice. On the one hand, campaigns and outside groups spend enormous sums of money to deliver advertisements to citizens as they use their digital devices. In 2018, the year this study was conducted, Facebook reported that $400 million was spent on political advertising on its platform (Fowler et al., 2018). Across platforms, the amount spent on digital political advertising continues to grow (Homonoff, 2020). On the other hand, the academic literature abounds with skepticism about the persuasive effects of political advertisements in general, regardless of format. The experimental evidence assembled by Kalla and Broockman (2018) shows that, across multiple channels of communication, the persuasive effects of political messaging are close to zero in the context of general elections. Kalla and Broockman’s assessment is stark: “When we focus on the choices voters actually make on election day in a general election, we find that any early persuasion has decayed and that any persuasion near election day fails reliably.” This summary conclusion is echoed by Coppock et al. (2020), who measure the (evidently small) effects of dozens of 2016 presidential television ads using survey experiments conducted over the course of the election.
All studies of the effects of advertising on vote choice face the fundamental data challenge that individual-level vote choice is not observable. Scholars have often addressed this difficulty by substituting survey measures for vote choice. For example, Gerber et al. (2011) examined the effects of television ads on favorability ratings of the advertising candidate and self-reported vote intention. Broockman and Green (2014) collaborated with political campaigns to evaluate the effectiveness of Facebook ads on attitudes toward the candidates. The treatments were deployed on Facebook, with outcomes measured by a polling company shortly thereafter. In two experiments, they find that ads for a Republican state legislative candidate have no discernible effects on voters’ attitudes toward the candidate or vote intention (estimate: 1.6 points, SE: 1.4 points). Because these survey measures are likely correlated with actual vote choice, they come close to measuring the main object of interest—the effect of ads on vote choice—but do so only for the segment of voters who are willing to respond to surveys. Similarly, Turitto et al. (2014) use survey outcomes to measure the persuasive effects of a digital advertising campaign, cluster-assigned at the municipality level. Owing to the relatively small number of clusters, the point estimate (1.1 points, SE: 2.1 points) cannot be distinguished from zero.
As noted by Arceneaux (2005), since precincts are the lowest level of aggregation at which vote choice is observed, treatments must be assigned at the precinct level or higher to study the effects of treatments on vote choice measured at the ballot box. Arceneaux points out that some of the power loss due to cluster random assignment can be offset by adjusting estimates using the detailed pretreatment covariate information available about each precinct from historical election returns. In that study, voters in randomly selected precincts in Kansas City, Missouri were cluster-assigned to receive visits from door-to-door canvassers urging support for a ballot initiative. Hager (2019) used an analogous precinct-randomized design to study the electoral effects of Christian Democratic Union (CDU) Facebook advertisements during Germany’s 2016 election. Treatment localities were assigned to either emotional or fact-based ads, while the control localities were assigned to no ad at all. Pooling over the treatment arms, the average treatment effect estimate on CDU vote share was 1.7 percentage points (SE: 1.2 points). Covariate adjustment increases the precision of the estimate, but the estimate diminishes in size and remains statistically insignificant at 0.9 points (SE: 0.6 points).
Like the Broockman and Green (2014), Turitto et al. (2014), and Hager (2019) studies, the present study also examines the effects of political advertisements delivered via social media. Other studies conducted on Facebook and Twitter explicitly leverage the network structure of these platforms to understand social interaction. For example, Bond et al. (2012) randomly assigned Facebook users to receive either informational or social messages regarding election turnout; Eckles et al. (2016) administered an encouragement design to uncover the role that peer effects play in Facebook behavior; Haenschen (2016) recruited Facebook users to participate in experiments on the capacity of social pressure on the platform to increase voter turnout; Coppock et al. (2016) randomly encouraged petition signers to tweet a petition endorsement to their followers. By contrast, our study leaves the social aspect of these platforms entirely to the side. We test the effect of ads delivered via Facebook and Instagram because many Americans spend large portions of their days on these platforms—indeed, the high volume of traffic on these sites is presumably why they are such attractive targets for political advertising in the first place.
Field experiment: Florida advertisements
We administered a randomized field experiment just prior to the November 2018 general elections to measure the effect of online pre-roll ads designed to encourage voting for Democratic candidates in four Florida congressional districts. We registered our pre-analysis plan at https://egap.org/registration/5312 and include it in the supplemental materials for reference.
Our experimental units are the 210 ZIP codes associated with Florida congressional districts 15, 16, 26, and 27. ZIP codes are typically much larger than voting precincts, with the average ZIP code containing approximately 10–20 voting precincts. We associate precincts with ZIP codes using common membership on the voter file. This process creates many false negatives because voters’ addresses on the file can be out of date relative to their precinct assignment; we drop precinct-ZIP code pairs that account for less than 10% of voters putatively residing within a given ZIP code. The matching process is further complicated by the fact that precincts do not nest perfectly within ZIP codes.
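The matching rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's replication code; the column names and toy voter counts are hypothetical.

```python
import pandas as pd

# Toy voter file: one row per voter, with the precinct and ZIP code on file.
# (Column names and counts are illustrative, not the Florida voter-file schema.)
voters = pd.DataFrame({
    "zip":      ["33101"] * 11 + ["33102"] * 5,
    "precinct": ["P1"] * 10 + ["P9"] + ["P2"] * 5,
})

# Count voters in each precinct-ZIP pair and compute each pair's share of its ZIP.
pairs = voters.groupby(["zip", "precinct"]).size().rename("n").reset_index()
pairs["zip_share"] = pairs["n"] / pairs.groupby("zip")["n"].transform("sum")

# Drop pairs accounting for <10% of a ZIP's voters (likely stale addresses).
matched = pairs[pairs["zip_share"] >= 0.10]
```

Here the lone `P9` voter in ZIP 33101 falls below the 10% threshold and is treated as a false match.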
Sample description at the ZIP code and voting precinct levels.
Substantively, the two treatment ads were very similar. Both featured pro-gun control messages that criticized the Republican Party for its support of anti-gun control policies. The ads attempted to persuade viewers to vote for Democrats in the 2018 midterm elections because of their position on guns. In one of the ads, guns conspicuously dangle from teachers’ hips as Trump is heard to proclaim “the end of gun-free school zones.” In the other ad, a student texts his mother while his school is locked down during an active shooter incident. Neither of the ads references specific campaigns or candidates; instead, they direct viewers’ attention to the partisan differences on gun control policy.
Treatment advertisements.
Stills were captured 3 seconds into the advertisements.
Treatment exposure information.
Source: Ad vendor.
Our primary outcome measure is the precinct two-party vote share for Democrats in the 2018 congressional elections, obtained from the Florida Secretary of State. Following our pre-registered analysis plan, we include the two-party vote share in the 2012, 2014, and 2016 general elections as covariates in order to increase the precision of our estimates. We cluster our standard errors at the unit of assignment, which is the ZIP code. Again following our analysis plan, we conduct one-tailed hypothesis tests using a randomization inference procedure under the sharp null hypothesis of no effect.
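The randomization inference procedure can be sketched as follows. This is a simplified illustration with simulated data and a difference-in-means estimator, not the covariate-adjusted estimator or the data used in the paper; under the sharp null of no effect for any precinct, outcomes are fixed, so re-randomizing the ZIP-level assignment traces out the null distribution of the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: precinct Democratic vote shares, with treatment assigned
# at the ZIP code level (numbers are illustrative, not the study's data).
n_zips = 20
zip_treat = rng.permutation(np.repeat([0, 1], 10))   # observed assignment
precinct_zip = rng.integers(0, n_zips, 200)          # each precinct's ZIP
y = rng.normal(0.5, 0.05, 200)                       # two-party Dem vote share

def ate_hat(assign):
    """Difference in mean precinct vote share, treated minus control."""
    t = assign[precinct_zip].astype(bool)
    return y[t].mean() - y[~t].mean()

obs = ate_hat(zip_treat)

# Re-randomize the ZIP-level assignment under the sharp null and recompute.
sims = np.array([ate_hat(rng.permutation(zip_treat)) for _ in range(2000)])

# One-tailed p-value: share of simulated estimates at least as large as observed.
p_one_tailed = (sims >= obs).mean()
```

Because assignment is permuted at the ZIP level, the procedure automatically respects the clustered design when computing the p-value.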
Effects on vote share.
a p < 0.05. CR2 cluster-robust standard errors are in parentheses.
In the supplementary materials, we report a series of additional analyses for the interested reader. In particular, we report the estimated effects on vote margin and voter turnout. These estimates are substantively small and statistically nonsignificant. We also include an alternative estimation approach in which we aggregate precinct-level outcomes to the ZIP code level, then compare treated ZIP codes to untreated ZIP codes. This approach has the advantage that we can include all 210 randomized ZIP codes in the analysis, but relies on a proportional allocation rule for handling precincts that span ZIP code boundaries. The substantive results in terms of magnitude and significance on vote share, vote margin, and turnout are very similar to those reported here in the main text.
We close our empirical section with a formal integration of our findings with the existing research literature on the effects of digital advertising on vote choice. To do so, we adopt a Bayesian framework in which diffuse priors are updated in the wake of each experiment. For ease of exposition, we assume that prior beliefs about the size of the average treatment effect are distributed normally, and we update these priors assuming that each experiment’s results have a normal sampling distribution. The normality assumption allows us to apply Bayes’ Rule by weighting the priors and each experimental result by the inverse of its squared standard error.
We begin with a diffuse prior centered on zero with a standard deviation of 5 percentage points. This prior distribution is depicted in the leftmost graph of Figure 1 and represents a considerable amount of prior uncertainty about the plausible effects of digital advertisements on vote choice. The three field experiments to date on this topic are Broockman and Green (2014), Turitto et al. (2014), and Hager (2019). Figure 1 shows the evolution of posteriors since 2014, as each study updates the priors formed by the studies that preceded it. After these first three studies, the posterior (panel 4) has a mean of 1.0 percentage points with a standard deviation of 0.5 percentage points.
Bayesian integration of the research literature on the effects of digital ads on vote choice.
How does the present study build upon the previous literature? Panel 4 shows the state of the literature before our study, while panel 5 shows how the posterior distribution looks after our study is included. Contributing an estimate centered precisely on zero, the Florida study shrinks the posterior to a mean of 0.7 points with a standard deviation of 0.4 points. Despite the fact that each of the four studies returned statistically insignificant results, the accumulation of evidence over time yields a relatively sharp picture of small positive effects.
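The updating sequence above reduces to a few lines of precision-weighted arithmetic. The sketch below uses the estimates and standard errors as reported in the text (taking the covariate-adjusted Hager estimate of 0.9 points, SE 0.6); it is an illustrative reconstruction, not the authors' replication code.

```python
def update(prior_mean, prior_sd, est, se):
    """Normal-normal conjugate update: precision = 1 / SE^2."""
    w_prior, w_est = 1 / prior_sd**2, 1 / se**2
    post_mean = (w_prior * prior_mean + w_est * est) / (w_prior + w_est)
    post_sd = (w_prior + w_est) ** -0.5
    return post_mean, post_sd

mean, sd = 0.0, 5.0          # diffuse prior, in percentage points
studies = [(1.6, 1.4),       # Broockman and Green (2014)
           (1.1, 2.1),       # Turitto et al. (2014)
           (0.9, 0.6),       # Hager (2019), covariate-adjusted
           (-0.04, 0.85)]    # present study
for est, se in studies:
    mean, sd = update(mean, sd, est, se)
# mean ≈ 0.71, sd ≈ 0.45, matching the posterior reported in panel 5.
```

Because the normal family is conjugate, processing the studies sequentially gives the same posterior as pooling all four estimates at once.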
In the appendix, we conduct a design diagnosis (Blair et al., 2019) that shows how, on its own, our experiment is relatively underpowered for the 1.0 percentage point average effect implied by the previous literature (power: 21%). However, if we conceive of the design as first obtaining an experimental estimate, then combining it with prior evidence to produce a Bayesian posterior, the power is much stronger (89%). As this exercise demonstrates, in research settings with small effects, individual experiments may be underpowered, but placing them within a cumulative research program can still advance knowledge incrementally.
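The frequentist half of this diagnosis can be approximated by simulation. The sketch below is a simplified normal-approximation stand-in for the appendix's formal design diagnosis: it assumes the true effect equals the 1.0-point prior mean and that the design delivers estimates with SE 0.85, as in our study, then counts the share of simulated experiments that reject zero at the two-tailed 5% level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed design parameters (from the text; the simulation itself is a
# stand-in for the formal diagnosis, not the authors' appendix code).
true_effect, se = 1.0, 0.85

# Simulate many experiments and compute the rejection rate at |z| > 1.96.
est = rng.normal(true_effect, se, 100_000)
power = (np.abs(est / se) > 1.96).mean()
# power comes out near the ~21% figure reported for the stand-alone design.
```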
Discussion
To our knowledge, the present study is among the first to test the effects of digital advertising on precinct-level vote choice in the U.S. Given the growing enthusiasm for digital advertising on large social media platforms, the fact that the digital ads we tested had no apparent effect on Democratic vote share is bracing. In light of prior experimental evidence, it appears that the average effect of digital advertising is weakly positive. We consider four possible explanations that may guide future research.
First is the possibility that digital advertising generally has little effect on voting behavior. Although the number of publicly accessible studies remains small, it is noteworthy that neither studies of vote choice (Broockman and Green, 2014; Turitto et al., 2014; Hager, 2019) nor studies of voter turnout (Collins et al., 2014) lend clear support to the thesis that digital advertising persuades or motivates. That said, if one were to take the mean of our final Bayesian posterior (0.7 percentage points) at face value, it would imply a cost per vote of $8.68, a figure that compares favorably to other campaign tactics (Green and Gerber, 2019). Bear in mind, however, that the point estimate from our single study is very slightly negative, which would imply an infinite cost per vote.
Second, it may be that digital ads work when voters pay attention to them, but voters disregard them en route to the content that attracts them to social media sites. It is telling that, for every voter who watched our ads all the way through, roughly 10 viewers skipped our ads after 3 seconds. This brings us to the third explanation, which is that our ads were deficient. Although one can never rule out the possibility that other ads would have performed better, it should be noted that the advertising campaign was one that was actually deployed by an organization seeking to persuade the electorate; the ads were very much the kind of issue-based messaging that has grown in prominence in the wake of Citizens United and other Court decisions. However, we concede that future work should investigate whether the failure to directly mention and endorse candidates undercuts the influence of political advertising.
Finally, it may be that advertising’s influence on vote choice is attenuated during the final days of a closely contested general election. In part, as Kalla and Broockman (2018) contend, persuasion is harder during a general election, when party labels overwhelm other considerations. Weak effects may also reflect the sheer volume of competing campaign messages, drowning out the experimental ad’s message. Of course, this explanation raises the question of why digital advertising is relied on so heavily during the waning days of a general election, when the marginal returns may be attenuated and the marginal costs of advertising are at their maximum.
Each of these explanations lends itself to further research along the lines of our experiment. To ascertain whether digital advertising is capable of swaying votes, randomized trials involving a broad range of advertising content and volume are necessary. Of particular interest in the context of American campaign finance regulations are head-to-head comparisons between candidate-focused ads and issue-focused ads. To assess whether advertising effectiveness is contingent on context, it would be instructive to compare the effectiveness of presidential ads deployed in battleground and non-battleground states, perhaps leveraging adjacent media markets that attract markedly different numbers of ads.
Supplemental Material
Supplemental material for “Does digital advertising affect vote choice? Evidence from a randomized field experiment” by Alexander Coppock, Donald P. Green and Ethan Porter in Research & Politics is available online.
Acknowledgments
We thank Frank Chi, Jesse Ferguson, Conor Gaughan and Tara McGowan for facilitation.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Correction (April 2025):
Ethical approval
This research was reviewed and approved by the Institutional Review Board of George Washington University (IRB#NCR202186).