Abstract
What are the political consequences of democratization assistance to regimes transitioning from authoritarian rule? By exploiting the downstream effects of a field experiment designed to encourage citizen monitoring of Georgia’s 2008 parliamentary elections, we evaluate the political consequences of one type of democracy promotion aid. The intervention increased citizen activism, but it also had the unanticipated effect of suppressing overall voter turnout by approximately 5%. We hypothesize that the civic education campaign was interpreted as a sign of increased political attention to a selected voting precinct, which suppressed opposition turnout. Two additional experiments provide further evidence for this hypothesis.
Introduction
The delegitimization of the public sphere implies that decent people should avoid politics, or enter them only grudgingly, on the grounds that “someone has to do the dirty work.” (Nodia)
With the goal of unpacking the consequences of foreign-led empowerment of civil society, a pilot program was conducted through an international democracy promotion NGO during the 2008 Georgian parliamentary elections. The intent of the program was to increase civic engagement and deter voter fraud by providing Georgian citizens with information on (a) how to file a formal complaint and (b) how an individual could monitor and change the voter registry. Programs were implemented in randomly selected voting precincts, which allowed for the gold standard in program evaluation methodology: a randomized controlled experiment. The pilot program increased citizen activism, evidenced by a 12% rise in registered complaints and a moderate increase in voter registration.
While the intervention worked as intended, examination of the information campaign’s effects on electoral outcomes revealed an unexpected result: precincts randomly selected for the “complaints” intervention had 5% lower voter turnout than those that received no treatment. For every person contacted, at least one person stayed home on election day. In two separate replications of this result, precincts that were randomly selected for a 2008 or 2010 pre-election panel survey—even if they received no additional contact with the NGO—had a voter turnout about 3% lower than precincts where no data were gathered. In all three experiments, relatively innocuous interventions induced voter abstention. Given that these interventions are representative of standard methodologies used in democracy promotion activities and social scientific inquiry more generally, an in-depth exploration of these unexpected findings may shed light on important and under-reported political processes unfolding in the shadow of democracy promotion aid.
An individual’s decision to turn out to vote on election day is typically modeled as a minimally costly action with few direct benefits. 1 Since rationalist approaches to turnout typically depend on non-material inducements (i.e. “expressive” benefits) to explain aggregate electoral participation, it is plausible that in the Georgian case a small increase in negative emotions—fear or disgust—on the part of study respondents might have tipped the scales against participation among some voters on election day. In semi-authoritarian contexts, some citizens believe that regimes can monitor individual voting decisions and use the results to tailor retaliations. 2 As a result, many post-Soviet elections resemble “rituals of consent” (“voting in order to be seen voting by anyone who might be interested in keeping track of who is voting”) rather than exercises in preference aggregation. 3 These fears are not subject to easy falsification. They are akin to a conspiracy theory, which can vary in potency across different electoral districts within the same country. We hypothesize that whether a Georgian voter turned out to participate in the 2008 or 2010 election hinged on his or her self-assessment of two factors: (a) whether the regime was capable of monitoring his or her vote choice, and (b) the potential personal costs of retaliation for voting for the opposition. Our suspicion is that our intervention triggered risk-averse behavior: staying home on election day.
An alternative hypothesis to regime surveillance is disenchantment with politics in general, as suggested by the epigraph to this paper. Randomization of the content of a 2010 pre-election survey found that refusal to voluntarily supply information was more common the more “political” the survey instrument was. This finding was especially acute in precincts rife with mistrust of regime intentions. Perhaps citizens feared that the data being collected would be somehow used against them. Or perhaps respondents see politics as intrinsically “dirty,” and viewed the activities of the foreign-assisted youth who arrived from the capital with some mix of pity and disgust. Regardless of the precise mechanism, these findings draw attention to a possible source of bias in much of the survey data collected in semi-authoritarian regimes on politically sensitive subjects.
Local context: The Georgian case
Georgia is a post-Soviet state that enjoys an unusually strong strategic relationship with the United States. It receives more Western aid per capita than the rest of the states in the region combined. 4 In 2003, peaceful protests swept Eduard Shevardnadze and his regime from power when exit polls did not match official vote tallies. 5 In the wake of the “Rose Revolution,” the next president, Mikheil Saakashvili, allowed the Soros Foundation, NDI, IRI, Transparency International, and various other democracy promotion organizations freedom of action.
At the time of the study Saakashvili’s party, the United National Movement (UNM), dominated the political landscape by using formal institutions—notably the judiciary, the Ministry of Interior, and Parliament—to discourage electoral alternatives. 6 As president, Saakashvili enjoyed favorable media coverage and free time on state television networks. In a pre-election survey conducted in the lead-up to the 2008 parliamentary election, 32% of voters reported that people had to “vote a certain way to keep their jobs.” An additional 34% of voters admitted that this happens “sometimes.” 7 For all the disappointments of the post-Rose Revolution era, however, Saakashvili’s decision to concede electoral defeat to Bidzina Ivanishvili’s Georgian Dream party in 2012 represents the clearest example of a party in post-Soviet Eurasia losing an election and voluntarily stepping down from power. 8
Two facets of the 2008 Parliamentary election deserve special note. First, the election was conducted in an atmosphere of paranoia and conspiracy. Many voters believed the integrity of the secret ballot was compromised. It was taken for granted by Georgian media outlets that authorities could figure out how constituents voted, and tailor rewards or punishments accordingly. 9 This is borne out in survey data. According to a nationally representative household survey conducted in the spring of 2009, only 50.1% of voters believed that their vote was secret in the last Georgian election—a far lower figure than was reported for the same question in either Armenia (72.8%) or Azerbaijan (62.8%). 10 Second, little distinguished the various opposition parties in terms of ideology or policy proposals. Voters generally went to the polls to either affirm or reject the status quo—to vote for or against Saakashvili’s UNM party.
Program evaluation: Expected and unexpected findings
In the weeks leading up to the 2008 parliamentary elections, the authors worked with an international democracy promotion NGO to randomize the precincts that would receive a voter information program. The goal of this pilot program was to educate Georgian citizens about common technologies of electoral fraud and—it was hoped—to deter voter fraud by disseminating accurate information about how to file an anonymous complaint. With the aid of the implementing organization, we tailored the content of the message to address types of fraud regularly reported to be associated with Georgia’s electoral environment. The program was implemented by the Caucasus Research Resource Centers (CRRC). The total program budget was just under $10,000. Eligible precincts were those previously selected as primary sampling units in a national panel survey on the 2008 parliamentary elections. Canvassers delivered the message several weeks after the first wave of the survey. To increase efficiency, precincts were grouped in blocks of four based on region, urban or rural status, the governing party’s 2006 vote share, and the percentage of ballots cast between 5:00 p.m. and 8:00 p.m. in the previous election, which election observers use as a rough proxy for voter fraud (since ballot stuffing tended to occur at the end of the day). Within each block of four, two precincts were randomly selected to receive the intervention.
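The blocked assignment just described can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the precinct identifiers, block ordering, and seed are hypothetical, and it assumes precincts have already been sorted into blocks of four matched on the covariates above.

```python
import random

def assign_treatment(precincts, block_size=4, n_treated=2, seed=2008):
    """Randomly pick `n_treated` precincts per consecutive block of `block_size`.

    `precincts` is assumed to be pre-sorted so that each run of four
    precincts forms a matched block (region, urban/rural, 2006 vote
    share, late-day ballot share).
    """
    rng = random.Random(seed)
    assignment = {}
    for start in range(0, len(precincts), block_size):
        block = precincts[start:start + block_size]
        treated = set(rng.sample(block, n_treated))  # 2 of 4 treated
        for p in block:
            assignment[p] = 1 if p in treated else 0
    return assignment

# Hypothetical example: 16 precincts, i.e. 4 matched blocks of 4.
precincts = [f"precinct_{i:03d}" for i in range(16)]
assignment = assign_treatment(precincts)
```

Randomizing within matched blocks, rather than across the full sample, guarantees that treated and control precincts are balanced on the blocking covariates by construction.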
Description of intervention
The pilot program studied an intervention designed to encourage voters to report any observed problems on election day to either local electoral officials or to well-known NGO partners (with experience in election monitoring). 11 The messages were delivered through a combination of pamphleting and canvassing. In every treatment precinct, approximately 30 households were visited by canvassers and given a face-to-face oral message and a flier. Canvassers also distributed fliers to roughly 80 additional households in the precinct (2–3 neighboring households per visited home). Given that the average precinct size in the experimental samples is 1151 voters and assuming an average of 2 voting-age adults in each household, 12 we would have directly exposed about 20% of households to the message. The canvassers were college students recruited from Tbilisi universities. Most enumerators had substantial survey administration experience and training through the CRRC. The text of the messages is in the appendix.
Expected results: Enhanced civic activism against electoral malfeasance
The intent of the “Complaints” intervention was to encourage voters to monitor local elections themselves, filing a grievance if election regulations were violated. To check if our intervention had this effect, we compiled data on complaints of electoral malfeasance reported by prominent election monitoring organizations. We use two dependent variables: (a) a dummy variable for at least one complaint filed by a voter in that precinct and (b) the raw number of complaints filed. Results are presented in Table 1. We find strong evidence that our intervention increased the propensity of voters to challenge voting procedures on election day. The mean difference across treatment and control for the complaints dummy implies a 12% increase in the probability of a complaint being filed in treated districts. For the models using the raw number of complaints, the point estimate is 0.26—which is roughly twice the mean number of complaints filed in all precincts in the sample.
The average treatment effect of the complaints treatment on whether or not any complaint was made and the number of complaints in the 2008 elections. For covariate adjusted results, the adjustment variables are the number of registered voters, 2006 turnout, and 2006 vote share of the ruling party. Standard errors are heteroskedasticity consistent and account for blocking.
Robust standard errors in parentheses.
Significant at p<.05.
Significant at p<.01.
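As a minimal sketch of the estimand behind Table 1, the unadjusted average treatment effect can be computed as a within-block difference in means, averaged across blocks. The function below is an illustration only: it does not reproduce the paper's covariate-adjusted models or heteroskedasticity-consistent standard errors, and all variable names and data are hypothetical.

```python
def block_ate(outcomes, treatments, blocks):
    """Average within-block difference in means between treated and control.

    `outcomes` is the precinct-level outcome (e.g. a complaint dummy),
    `treatments` is the 0/1 assignment, and `blocks` labels the matched
    block each precinct belongs to.
    """
    diffs = []
    for b in sorted(set(blocks)):
        t = [y for y, d, g in zip(outcomes, treatments, blocks) if g == b and d == 1]
        c = [y for y, d, g in zip(outcomes, treatments, blocks) if g == b and d == 0]
        diffs.append(sum(t) / len(t) - sum(c) / len(c))
    # With equal-sized blocks, the simple average of block-level
    # differences is the blocked ATE estimate.
    return sum(diffs) / len(diffs)
```

In practice one would also compute block-aware standard errors; the sketch is limited to the point estimate.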
Unexpected downstream results: Turnout suppression
Subsequent investigation of the data revealed another finding: a substantial number of Georgians did not cast their votes on election day in districts selected for the “Complaints” treatment, as seen in Table 2. The average effect of the complaints treatment (unadjusted specification) on voter turnout is about −5.4%. We find a positive, but small, difference in the ruling party’s vote share (models 3 and 4), though the null hypothesis of no difference could not be rejected at conventional levels. 13 This unexpected finding prompted additional investigation.
The average treatment effect of the complaints treatment on turnout and the vote share of the ruling party. For covariate adjusted results, the adjustment variables are number of registered voters, 2006 turnout, and 2006 vote share of the ruling party. Standard errors are heteroskedasticity consistent and account for blocking.
Robust standard errors in parentheses.
Significant at p<.05.
Turnout suppression effects of two pre-election surveys
Given that the turnout dependent variable was not pre-specified prior to the experiment’s implementation, skeptics could reasonably dismiss the turnout suppression finding as a statistical fluke. This section presents two partial replications of the result, wherein two pre-election surveys are treated as randomized interventions. Subcontracted Georgian canvassers from the capital city visited randomly selected precincts, asking standard questions about political beliefs a few weeks before two separate elections. The 2008 and 2010 CRRC surveys had a multi-stage sampling structure, with precincts as the primary sampling units. In 2008, prior to the parliamentary elections, 85 precincts were selected out of a total of 3019 precincts using sampling probabilities πi proportional to the number of registered voters. Using a virtually identical sampling procedure in the weeks prior to the 2010 local elections, 110 precincts were selected as primary sampling units for a similar political survey. The unit of analysis for this “experiment” is the precinct and the treatment is selection into the sample. To recover the average treatment effect of being included in the survey sample on turnout and regime vote share, we fit a weighted linear model of each dependent variable on a treatment indicator variable and the selection variable, the number of registered voters. The weights are inverse probability weights, where the weight for each treatment unit is equal to 1/πi and the weight for each control unit is equal to 1/(1 − πi).
The results are reported in Table 3. In both the 2008 parliamentary election and the 2010 local elections, being randomly selected for a pre-election survey had a statistically significant voter suppression effect. The suppression effect was slightly more acute in 2008 than in 2010 (a lower turnout of approximately 3% compared to a lower turnout of approximately 2.5%). The results are consistent with the effects of the complaints intervention reported in Table 2.
“Survey as an Experiment.”
This table shows the estimated effects of a precinct being included in two different political surveys. The first survey was conducted immediately before the 2008 parliamentary election. The second survey was carried out before the 2010 local elections. Treatment effect estimates account for unequal probability of selection and control for precinct size, the variable used to sample precincts into the “treatment”.
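The inverse-probability-weighted comparison behind Table 3 can be illustrated with a small sketch. Assuming, as described above, that treated precincts (those sampled into the survey) receive weight 1/πi and control precincts weight 1/(1 − πi), a weighted difference in mean turnout approximates the average treatment effect. The paper's actual specification is a weighted linear model that also adjusts for the number of registered voters; that adjustment is omitted here, and all inputs are hypothetical.

```python
def ipw_ate(turnout, treated, pi):
    """Inverse-probability-weighted difference in mean turnout.

    `pi` holds each precinct's probability of selection into the
    survey sample (proportional to registered voters); `treated` is 1
    for sampled precincts and 0 otherwise.
    """
    weights = [1 / p if d == 1 else 1 / (1 - p) for d, p in zip(treated, pi)]
    # Weighted mean among sampled ("treated") precincts.
    wt = sum(w * y for w, y, d in zip(weights, turnout, treated) if d == 1)
    nt = sum(w for w, d in zip(weights, treated) if d == 1)
    # Weighted mean among unsampled ("control") precincts.
    wc = sum(w * y for w, y, d in zip(weights, turnout, treated) if d == 0)
    nc = sum(w for w, d in zip(weights, treated) if d == 0)
    return wt / nt - wc / nc
```

When every precinct has the same selection probability, the estimator collapses to a simple difference in means, which is a useful sanity check.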
Potential mechanisms
Why did the three interventions lower turnout? The interventions could have lowered turnout either by changing the behavior of Georgian voters or the functioning of the elections themselves. In the latter scenario, exposure to the information or survey intervention could have deterred would-be ballot stuffers, thus reducing the number of manufactured voters and consequently reducing measured turnout. Ballot-stuffing and tabulation manipulation are prevalent in the South Caucasus. 15 In the 2008 election, however, European election monitors observed only one instance of ballot stuffing and the ballot counting procedures were generally assessed “positively” (OSCE, 2008: 7). Under this scenario, political operatives—typically allied with the ruling regime—would interpret the interventions as increasing the probability of being caught and punished for breaking the law, thus deterring electoral manipulation. According to this hypothesis, one would expect the vote share for the president’s party to decrease in treatment precincts, as ballot stuffers would no longer generate the same number of votes for the ruling party. The evidence for this implication of the fraud dampening mechanism is equivocal. The point estimates of the effect of the complaints intervention (Table 2) on the vote share of the presidential party is positive (albeit not statistically significant), while the effects of the survey (Table 3) are indeed negative (again not significant). Furthermore, it is not clear why a precinct being selected as a primary sampling unit for a survey would reduce ballot stuffing, since enumerators were deployed several weeks before the election and did not present themselves as election observers. Given the weak theoretical rationale for attributing reduced ballot stuffing to the “survey as an experiment” intervention and the null results on regime party vote share, the case for this mechanism is tenuous.
Our favored explanation for the findings is that the interventions lowered the propensity of some Georgian citizens to vote. Being surveyed has been shown to affect consumer behavior (Zwane et al., 2011) in a variety of contexts and several cognitive mechanisms such as self-prediction and related priming effects have been proposed as explanations (Rogers et al., 2013). The particular characteristics of the Georgian political environment, however, suggest an alternative mechanism for our findings on voter behavior: increased suspicion of regime surveillance, induced by the unexpected arrival of survey enumerators, reduced political participation. This mechanism is plausible given the widespread disbelief in the secrecy of the ballot and the common view that the government punishes political opponents. Citizens in post-Soviet Georgia—many of whom came of age in a totalitarian state—may misconstrue enumerators from the capital asking about politically sensitive issues as agents of the state seeking to gather information about their political sympathies. These types of fears are frequently expressed by Georgian opposition leaders. For example, in 2009, a Georgian opposition legislator claimed that
telephone calls have been made … to people from a number which is registered to a division of the Ministry of Internal Affairs … An unidentified individual was conducting a survey from this number, asking whether or not they were planning to attend the 9 April rally, whether they stood in solidarity with [participants of] the rally and whether they would change their mind if the opposition used weapons [at the rally].
If visits by enumerators prime these sorts of fears, it is plausible that even a small increase in the perceived risk to the voter or her family could tip the scales against participation on election day.
Some evidence for this mechanism can be observed by examining how treatment effects in the original “complaints” experiment vary by perceptions of the prevalence of electoral coercion, as well as the geographic location of the precinct. We examined three main interactions: urban precincts versus rural precincts, prevalence of employer pressure to vote, and the degree to which employers pressured employees to campaign. 16 If the intervention was indeed being interpreted as political monitoring, then we would expect that the effect would be largest in rural areas where machine politics is more entrenched. The qualitative literature suggests that employment, particularly in the public sector, is often used to mobilize supporters. To that end, we examined whether, as expected, the monitoring effect was largest in areas where survey respondents reported that employer political pressure was highest. We also check whether effect estimates are larger in areas where employers are perceived to place undue pressure on workers to participate in a campaign. 17
The results can be found in Figure 1. The treatment effects all vary in the expected directions, though only one interaction—perhaps not surprisingly, given the small sample size—is statistically significant. The intervention had a larger suppression effect in rural areas, as expected. But particularly striking is the “employer pressure to vote” interaction, which is statistically significant at the 5% level. In areas where relatively few survey respondents believed that employer pressure to vote for particular parties was widespread, the estimated effect on turnout is close to 0. In areas with a high number of people citing employer pressure as widespread, the effect estimate is −12.7%. Cumulatively, these patterns in the data provide suggestive evidence that the intervention was interpreted by opposition voters as a signal that there would be increased regime attention to their voting behavior on election day.

Heterogeneous treatment effects: Heterogeneity in the effect of the complaints treatment on voter turnout by three variables. The ATE in each covariate stratum is from a linear model with block fixed effects, adjusted for covariates. Standard errors are heteroskedasticity consistent.
Politicized data collection and non-response in 2010
In order to probe the mechanism further, we implemented a survey experiment in the 2010 pre-election survey. Our new question: Is the behavioral response of Georgian citizens to a survey instrument different if the survey is clearly associated with politics? Procedurally, we front-loaded half of the surveys with the “politics module” (treatment) and placed the same political questions at the end of the survey for the other half of respondents (control). 18 Every respondent was eventually asked political questions, so if the hypothesized mechanism is correct, the survey itself heightened concerns over regime surveillance in both the treatment and control group. The advantage of randomizing module order is that one can observe whether Georgian respondents behave differently on surveys they know to be collecting political behavior data compared to surveys where this is suspected but not yet known.
The dependent variable in the analysis that follows is the number of questions that the respondent answered “Don’t Know” or refused to answer. 19 The average treatment effect for the full sample can be seen in Table 4. Asking political questions in the first part of the survey increased the number of “Don’t Know” or “Refuse To Answer” responses by an average of 1.7, which does not reach standard levels of statistical significance.
The average effect of question order on non-response. Treatment respondents were asked political questions at the beginning of the survey.
Given that a variety of mechanisms could explain how priming politics could induce higher rates of non-response, we are particularly interested in heterogeneous treatment effects: is the non-response attributable to treatment higher in precincts with greater distrust of Georgian political institutions and the ruling party? A complication is that it is not possible to use individuals’ responses to a question as a covariate: all answers are “post-treatment.” Since the thrust of our argument is that respondents are reluctant to provide data and answer questions strategically, examining heterogeneity by individual response risks conflating treatment with baseline characteristics. Ecological inference provides a partial solution to this problem: covariates can be constructed at the precinct level from aggregated survey responses, dividing precincts in a transparent way. 20 It becomes possible to make generalizations about which communities are essentially UNM strongholds and which communities do not believe that Georgia is a democracy. 21 Table 5 splits the sample by the median value of the community-level covariates and then displays the average treatment effect within each stratum.
Heterogeneity in the effect of question order.
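The median-split analysis in Table 5 can be sketched as follows: respondents are divided at the median of a precinct-level covariate built from aggregated survey responses, and a simple difference in mean non-response between treatment and control is computed within each stratum. This is an illustration with hypothetical field names, not the authors' estimation code.

```python
import statistics

def median_split_ate(rows, covariate, outcome="nonresponse", treat="treated"):
    """Difference in mean non-response, within strata split at the
    median of a precinct-level covariate.

    `rows` is a list of dicts, one per respondent, carrying the
    respondent's non-response count, 0/1 question-order treatment,
    and the precinct-level covariate value (field names hypothetical).
    """
    cutoff = statistics.median(r[covariate] for r in rows)
    effects = {}
    for label, stratum in (("low", [r for r in rows if r[covariate] <= cutoff]),
                           ("high", [r for r in rows if r[covariate] > cutoff])):
        t = [r[outcome] for r in stratum if r[treat] == 1]
        c = [r[outcome] for r in stratum if r[treat] == 0]
        effects[label] = sum(t) / len(t) - sum(c) / len(c)
    return effects
```

If the surveillance-fear mechanism is at work, the treatment effect should be concentrated in the "high"-distrust stratum, which is the pattern the paper reports.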
Citizens were less likely to give answers to questions on the politics-loaded version of the survey (treatment) in precincts where many citizens also report that they are not living in a democracy, that they do not trust the electoral process, and that they are mistrustful of the UNM. 22 The specific questions with the largest difference in non-response rates across the treatment and control conditions also support the favored hypothesis. 23 The five questions with the largest gap in response rates when the politics module was put at the front of the survey are as follows, in descending order:
On a scale of 1 to 5 how do you feel about the following parties with “1” meaning “Very Negatively,” and “5” being “Very Positively”: [Answer: The Alliance for Georgia Opposition Party]
It has been proposed that closed circuit TV cameras be installed outside the polling stations to count the number of voters. Do you support this proposal?
On a scale from 1 to 5 where “1” means “Totally Falsified” and “5” means “Conducted Very Well” how do you think the local elections will be conducted?
On a scale of 1 to 5 how do you feel about the following parties with “1” meaning “Very Negatively” and “5” being “Very Positively”: [Answer: The We Ourselves Opposition Party]
On a scale from 1 to 5 where “1” means “Do Not Trust At All” and “5” means “Trust Completely,” how would you assess your trust towards domestic election observers?
In line with our theoretical expectations, front loading the survey with political questions most powerfully affected non-response rates on questions related to monitoring of electoral practices, the credibility of the electoral process, and feelings about opposition parties. The first and fourth largest gap in response rates were questions that asked respondents to rate particular opposition parties (Alliance for Georgia was a coalition of opposition parties backing the main opposition candidate in the Tbilisi 2010 mayoral race; We Ourselves was a single-issue opposition party focusing on the Abkhazia dispute). The three other questions with large gaps refer to the credibility of the electoral process—with particular reluctance about monitoring, either via cameras and digital recording devices or domestic election observers. We interpret this pattern as additional circumstantial evidence that the more the survey veered into questions of politics, the less willing Georgian citizens were to reveal their opinions.
All of this may, in part, explain why neither area experts nor Georgians themselves predicted that Saakashvili’s UNM party would be outmaneuvered in the 2012 election by Bidzina Ivanishvili’s Georgian Dream. Mullen (2012) reports that 46% of people contacted for an NDI-funded pre-election survey in the immediate lead-up to the 2012 election refused to tell CRRC survey enumerators who they were planning to vote for. He speculates: “[T]hat number was based on fear; and fear had become an important part of life under the UNM for many people. It was a fear that would lead people to avoid taking risks that could have put them on the wrong side of the authorities.” But it is important to remember that it is also plausible that respondents simply did not see themselves in solidarity with foreign-funded democracy professionals doing surveys. While fear is the mechanism that we find most intuitive—and a mechanism that fits neatly into both the rationalist voting framework and anti-regime polemics—other psychological mechanisms are plausible. Simple disgust is difficult to falsify, and broadly consistent with the heterogeneous treatment effects reported above. As Nodia’s epigraph to this paper makes clear, politics in Georgia are widely viewed as dirty work. 24 The tendency to perceive politicians as corrupt, and to respond negatively and emotionally to misused political power, is ubiquitous in democracies around the globe. The desire of Georgian respondents to avoid dirtying themselves with politics could explain the behaviors reported in this study.
Conclusion
In recent years a number of studies have used natural and field experiments to demonstrate that the presence of international actors—as election monitors, scientific observers, or activists—can have a causal influence on local electoral outcomes. Unfortunately, as this study emphasizes, it is likely that well-meaning interventions are occasionally misinterpreted. And while we are certain that our intervention interacted in an unexpected way with prevalent beliefs in Georgian society, we emphasize that our study does not allow us to distinguish precisely which inherited beliefs interacted with the interventions to reduce turnout. Did the intervention accidentally prime fear of regime monitoring? Or did it accidentally activate ambivalence, reminding respondents of some other distasteful aspect of Georgian political life? Future research may be able to discover the mechanism at work.
The Rose Revolution is an exemplary case of civil society actors nonviolently seizing control of the state apparatus. One of the unforeseen consequences was a blurring of the boundary of where the regime ends and civil society begins. Well-meaning efforts by foreigners to strengthen civil society, in this context, were likely interpreted by many Georgians as strengthening the state. While this study provides limited grounds for generalization beyond the Georgian case, or even beyond the Saakashvili era, inherited beliefs from the Soviet legacy—a culture of bureaucratic compliance that is exploited by incumbent parties, expectations that private behaviors can and will be monitored and reported to authorities, and a sense that the political sphere is “dirty”—are ubiquitous in the region. 25 Residual fear of state surveillance is a common phenomenon in post-authoritarian societies. This study should be seen as an invitation for further research on how authoritarian cultural legacies interact with the standard social science toolkit.
Unexpected results are not unusual in the nascent field experimental literature on democracy promotion and election observation. Hyde (2010), for example, reports the results of a randomized evaluation of election observers in Indonesia, where she found that precinct visits by international observers increased the vote share of the incumbent presidential candidate by 32%. Hyde (2010: 521) proposes several possible explanations, but underscores that the specific result is “somewhat idiosyncratic” and unanticipated. In a voter mobilization experiment in Uganda, Ferree et al. (2011) unexpectedly found that increasing the salience of the visibility of voting suppressed turnout, particularly among women. These results highlight one of the strengths of field experiments: they enable researchers to uncover unanticipated effects of interventions. But they also point to our incomplete knowledge about how interventions common in democracy promotion activities interact with local political practices and cultural beliefs.
This study has policy implications. Investments in improving the secrecy of the ballot—and then funding a public information campaign to publicize this fact—would probably indirectly empower opposition parties by reducing the perceived risks from regime surveillance. But primarily these findings represent a sobering reminder that U.S. policy ambitions outstrip the current state of knowledge on how foreign assistance can alter the relationship between the state and civil society. A healthy dose of humility is justified. While aggregate analyses of democratization aid have found positive effects on democracy indices (Finkel et al., 2008), our results suggest that the consequences of any particular program can be highly heterogeneous and dependent on hard-to-measure local factors, particularly in countries just emerging from long bouts of authoritarian rule. There is much that we do not know about the technologies used by electoral authoritarian regimes to conduct elections in an environment that keeps citizens insecure, atomized, and dependent on the party in power.
Footnotes
Appendix
Acknowledgements
We would like to thank John Bullock, Donald Green, Hans Gutbrod, Nahomi Ichino, Charles King, Jennifer London, Jas Sekhon, and the participants of the HALBI working group at the University of California at San Diego and the CRITICS working group at Georgetown for helpful comments that contributed to this paper in its current form. Most of all, we would like to acknowledge the assistance of the Caucasus Research Resource Centers, particularly Aaron Erlich, Nana Papiashvili, Arpine Porsughyan, and Koba Turmanidze, who generously provided logistical support for the implementation of the experiment itself.
Declaration of conflicting interest
The author declares that there is no conflict of interest.
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
