Abstract
Citizen monitoring of government performance is often ineffective at improving performance, perhaps because information from monitoring does not travel far enough up the bureaucratic chain to reach the officials with authority to punish public mismanagement. In a field experiment, we test whether the delivery of public projects improved when the high-level bureaucrats charged with overseeing them received regular, officially certified reports, derived from citizen monitoring, that described specific implementation problems. We do not find evidence that this treatment improved the delivery of public projects. Follow-up interviews revealed that the targeted officials seemed to avoid knowledge of the monitoring, perhaps to avoid the responsibility that would come with such knowledge. However, the treatment also provided information to citizens about what they should expect from local governments, which instigated several direct complaints that the targeted officials did not ignore. Based on this alternative channel, which we did not anticipate, we conclude that citizen monitoring must be deployed in ways that make knowledge of problems undeniable for the authorities responsible for addressing them.
Introduction and theory
In a field experiment, we test whether providing senior bureaucrats with regular reports about implementation problems in public projects, derived from citizen monitoring, and delivered by a high-level government partner, improved the implementation of projects. We sought to understand how citizen monitoring can contribute to public accountability when monitored officials are not bound by social ties to the citizens who provide monitoring. In such settings, citizen monitoring might serve as a useful input to top-down methods of managing government performance, rather than enable citizens to punish or reward performance themselves.
Our study contributes to the growing literature about strategies to encourage bottom-up accountability from public officials and is unique because the monitoring intervention combines features of both bottom-up and top-down accountability. Bottom-up accountability, which involves citizens seeking better government performance themselves, often has advantages related to information because citizens experience poor performance directly. Top-down accountability, which involves establishing public institutions to manage the performance of public officials, often has advantages related to credible sanctioning and rewarding of performance.
While there is some evidence that citizen monitoring can improve the performance of public officials who are bound by social ties to monitors, even absent a link to top-down institutions (Björkman and Svensson, 2009), a number of studies have found more limited results when citizens provide monitoring directly to low-level officials with whom they do not share social ties (Banerjee et al., 2010; Buntaine et al., 2020; Grossman et al., 2018; Olken, 2007). Even the most promising bottom-up accountability intervention (Björkman and Svensson, 2009) produced less impressive outcomes in a scaled-up replication (Raffler et al., 2018).
The intervention that we study had several innovative features designed to link bottom-up and top-down methods of promoting accountability from governments: (a) citizen observations were delivered to a high-ranking bureaucrat with responsibility and authority to address the mismanagement of public programs; (b) the reports provided specific information about why problems had emerged; and (c) the reports were certified and delivered by another high-level government official, creating common knowledge among authorities about the problems. We expected that high-level officials with specific oversight responsibilities would face penalties—in terms of prestige, employment and promotion—for failing to address specific problems made known to them through monitoring, since knowledge of problems would trigger a responsibility to respond.
Specifically, we study the delivery of village-level projects chosen by residents, funded by a national park revenue-sharing program, and implemented by district and subcounty governments in Uganda. The experimental treatment involved informing residents that local governments had received a specific amount of funding to implement their chosen project and then collecting reports about the status of implementation over several months using a voice-response platform. Our research team aggregated monitoring from residents in treatment villages into district-level reports that flagged all villages where more than half of residents observed problems. The reports detailed the reasons for the problems. The Chief Warden of the national park personally certified the reports and delivered them to the chief administrator of the district government.
Counter to our expectations, we do not find evidence that the intervention improved the implementation of village-level projects. Projects were not finished or delivered more completely in treatment villages than in control villages, based on field audits. Nor do we find evidence that the intervention increased residents’ satisfaction with projects, which might be expected given the lack of a positive main effect. In follow-up interviews with local officials, however, we uncovered three projects where the provision of information to residents about approved funding amounts through the treatment instigated direct complaints that resulted in officials being fired, transferred, or disciplined. While the treatment reports also contained consistent and negative information about these three projects, the high-level officials targeted by the intervention only got involved when their knowledge about mismanagement became unavoidable because of collective and non-anonymous complaints.
Our study contributes to the growing literature on bottom-up accountability by showing that high-level government officials may adopt strategies to avoid knowledge of problems when such knowledge would activate a responsibility to respond. Disseminating citizen monitoring in ways that are both hard for officials to ignore and that credibly signal a threat of escalation is a promising direction. While there have been a number of successful citizen monitoring programs that focus on individual officials (Callen et al., 2018; Muralidharan et al., 2019), the lack of impact in our study, interpreted in light of similar results in other settings (Banerjee et al., 2010; Buntaine et al., 2020; Grossman et al., 2018; Olken, 2007; Raffler et al., 2018), suggests that future strategies for bottom-up accountability related to complex governance outcomes should focus on disseminating monitoring in ways that officials cannot plausibly ignore given their formal responsibilities.
Research design
Setting and problem
Our study sites are the 91 villages that share a boundary with Bwindi National Park in western Uganda (Figure 1). The average household income in the area is less than US$300 per year and most families engage in subsistence farming. Outside of the towns that are the seats of district and subcounty governments, there is little evidence of formal state presence. The most significant evidence of state presence aside from the national park is the occasional grading of dirt roads. While no two points around Bwindi National Park are more than 25 miles apart, it takes several hours by vehicle to travel between more distant villages and it is not possible to travel around the park in a single day. The three district governments in the study area have limited capacity to monitor public projects.

Figure 1. Map of the villages surrounding Bwindi National Park.
Bwindi National Park is a World Heritage Site that attracts approximately 20,000 foreign visitors each year, most of whom come to see endangered mountain gorillas. Historically, the exclusion of local people from using park resources has created tensions with the park’s management (Tumusiime and Sjaastad, 2014).
More than a decade ago, Bwindi National Park established a revenue-sharing program that funds village-level development projects with a portion of the gate fees that tourists pay. Revenue sharing is intended to deliver approximately US$1300 each year to every village that shares a boundary with the park, roughly equal to the annual incomes of four to five households in villages that average approximately 200 households. The specific amount shared is determined by a formula that accounts for population and the length of the shared boundary.
Residents in each village elect a committee that decides what project or projects should be done with the funds after holding a community meeting. Residents have broad discretion to choose projects. Previous projects have included everything from animal husbandry to water supply tanks. At the end of the selection process, residents send a proposal outlining their chosen project to the Uganda Wildlife Authority (UWA) for approval.
Upon reviewing and approving the proposed projects, UWA passes funds to the district government, which transfers funds to the subcounty government, which pays contractors selected by village- or parish-level committees to implement the villages’ projects (Figure 2). By law, all public spending for local projects must be handled by district governments. UWA has no formal responsibility for the implementation of projects, other than an ability to monitor implementation and provide information to district governments.

Figure 2. Flows of funds and information within the Bwindi National Park revenue-sharing process.
This long chain of administration often results in funds being mismanaged (Adams et al., 2004; Archabald and Naughton-Treves, 2001; Buntaine et al., 2018; Laudati, 2010; Tumusiime and Vedeld, 2012). UWA officials have previously estimated that up to 80% of revenue-sharing funds were diverted from their intended purpose. The revenue-sharing program makes up only a small share of total district spending, and the overhead allocated to districts to supervise projects is only 5% of total revenue-sharing funding. Responsibility for planning and implementation is delegated to subcounties, which helps district-level officials avoid blame for problems. For their part, chief administrators at the subcounty level often decry the technical ineptitude of projects or note that they rely on village-level management and procurement committees to advise them on releasing funds to contractors. These committee members are reportedly bribed by contractors who wish to be paid without delivering agreed outputs. At the root of many of these problems is the inability of district governments to provide effective oversight.
Atop this administrative system is the Chief Administrative Officer (CAO) of the district government, who oversees all public spending by district and subcounty governments. This official must account for all spending to the Ministry of Local Government at the national level and is rated annually on performance. The CAO has authority to approve spending and to discipline any bureaucrats in the district and subcounty governments. Good performance in managing public funds is associated with favorable postings during staff rotations, promotion to the central ministry, and professional recognition in annual rankings of districts, while poor performance often results in dismissal (interview V). In practice, CAOs have limited ability to oversee revenue-sharing projects because of a large workload and a lack of information about implementation.
Treatment
We collected the mobile phone numbers of 4119 local residents over several years in all 91 villages that share a boundary with Bwindi National Park and enrolled them in a voice-response platform co-developed with park staff. Recruitment to the Bwindi Information Network was communicated as an opportunity to receive information about park management and provide input on ongoing decisions at the park. Reception of the program has been enthusiastic: recruitment drives signed up nearly every individual encountered who had access to a mobile phone.
The treatment for this experiment involved exchanging information with subscribers in treatment villages and passing along citizen monitoring to CAOs in four monthly reports. In terms of outgoing information, subscribers in treatment villages received bi-weekly voice messages, delivered by phone call, confirming the project or projects that had been approved and the amount of funds allocated. This information was not readily available to residents from other sources. The platform also prompted residents in treatment villages five times to respond to multiple-choice questions using their dial pads and to provide voice reports about their village’s revenue-sharing project. Subscribers in control villages received public health messages from a local hospital to hold contact rates constant across experimental conditions. To assign treatment, we used complete randomization of villages within subcounty blocks. An overview of the experimental design is displayed in Figure 3.

Figure 3. CONSORT diagram tracking the study design.
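As a concrete illustration, the assignment procedure (complete randomization of villages within subcounty blocks) can be sketched as follows. This is a minimal sketch, not the study's actual code; the village and subcounty names are hypothetical.

```python
import random

def assign_treatment(blocks, seed=None):
    """Complete randomization of villages to treatment within subcounty blocks.

    `blocks` maps each subcounty (block) to its list of village names.
    Within each block, half the villages (rounded down) are treated.
    Returns a dict mapping village name -> 1 (treatment) or 0 (control).
    """
    rng = random.Random(seed)
    assignment = {}
    for block, names in blocks.items():
        shuffled = list(names)
        rng.shuffle(shuffled)                # random order within the block
        n_treat = len(shuffled) // 2         # fixed number treated per block
        for i, village in enumerate(shuffled):
            assignment[village] = 1 if i < n_treat else 0
    return assignment

# Hypothetical blocks for illustration
blocks = {
    "Subcounty A": ["V1", "V2", "V3", "V4"],
    "Subcounty B": ["V5", "V6", "V7"],
}
assignment = assign_treatment(blocks, seed=42)
```

Because the number treated per block is fixed, the design guarantees balance of treatment within each subcounty regardless of the random draw.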
After receiving responses, our research team compiled the information into monthly reports at the district level four times. These reports broke down the responses of the residents and visually highlighted instances where a majority of reports indicated a problem with implementation (see Online Appendix C). The reports contained information on the number and proportion of residents who: (a) believed the approved project had been completed, (b) reported different reasons for the project not being completed, and (c) felt satisfied with the implementation of the project.
The Chief Warden of the park certified the reports in a cover letter and had his team physically deliver them each month to the CAO of each of the three districts in the study. UWA managers informed the CAOs about the monitoring program upon delivery of the first report but did not communicate steps that would be taken to follow up on problems identified in reports. The treatment was intended to (a) lower search costs for problems, and (b) create common knowledge about problems, which might activate responsibility to respond and raise the risks of not responding for the CAOs. Figure 2 earlier displays the basic administrative setup of the program. During the implementation, UWA and our research team conducted a joint audit of the quality of citizen reporting in 10 randomly selected villages, which found that delays in the delivery and implementation of projects were correctly noted by citizens. Further program details and a timeline are available in Online Appendix B.
Partnership
The downside of our partnership with UWA is that we did not have precise control of implementation. UWA insisted that our research team have no direct contact with district or subcounty officials, either for pretesting or for the delivery of reports. They wanted to take responsibility for the entire interaction to shield us from what they expected to be the risks associated with angering local officials.
Despite less control over implementation, the partnership provided at least three benefits. First, with UWA’s cooperation, we had an opportunity to test whether creating a common knowledge of problems among high-level government officials could work to activate responsibility. Second, the design and formatting of reports was based on UWA’s local knowledge about what might best spur action. Third, because UWA shaped and delivered the intervention, we avoided the frequently voiced concern about field experiments that implementation was not representative of real-world conditions (e.g., Berge et al., 2012).
Outcome measurement
We conducted independent audits of revenue-sharing projects, which involved photographing and describing all work completed in all 91 revenue-sharing villages. We entered every village and asked the village chair or a designated substitute to guide our enumerators in documenting the revenue-sharing project or projects. We described and photographed all projects shown to us by the local guide, including whether funds had been spent on the approved project. We also recorded and photographed any evidence of labeling for revenue-sharing projects, which is required by guidelines. These audits provide our primary measures of the delivery of projects.
We also completed a survey with a representative sample of 20 residents in each village using a random walk to assess attitudes and opinions about the revenue-sharing program. As part of surveys, we asked residents to show us physical evidence of revenue-sharing projects. We used this physical evidence as a check on the audit results, particularly for items that were dispersed throughout villages. Descriptive statistics about survey respondents are displayed in Online Appendix Table A1.
Analytical strategy
We analyze average treatment effects for each audit and survey outcome using two strategies, as outlined in our pre-analysis plan. First, we compute simple differences-in-means between experimental conditions. Second, to increase precision, we specify an OLS model for each outcome that includes the treatment indicator, block fixed effects, and, for individual-level analyses, the following covariates: gender, age, income, and literacy. For both types of estimates, we report standard errors and p-values for the sharp null hypothesis following the exact blocking (subcounty) and clustering (village) approach used to assign treatment. To compute these values, we exactly replicate the random assignment procedure 10,000 times assuming no treatment effect for any unit (i.e., the sharp null hypothesis) and record the variance in the parameter estimates that results from the randomization design.
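The randomization-inference procedure described above can be sketched as follows for a difference in means. This is a minimal illustration with hypothetical villages, blocks, and outcomes; the actual analysis replicates the exact assignment procedure 10,000 times and applies the same logic to OLS estimates.

```python
import random

def ri_p_value(outcomes, blocks, observed_assignment, n_reps=10000, seed=0):
    """One-sided randomization-inference p-value for a difference in means.

    `outcomes` maps village -> outcome; `blocks` maps block -> village list;
    `observed_assignment` maps village -> 0/1. The observed difference in
    means is compared against the distribution of differences obtained by
    re-running the blocked assignment under the sharp null of no treatment
    effect for any unit.
    """
    def diff_in_means(assn):
        treat = [outcomes[v] for v in outcomes if assn[v] == 1]
        ctrl = [outcomes[v] for v in outcomes if assn[v] == 0]
        return sum(treat) / len(treat) - sum(ctrl) / len(ctrl)

    def draw(rng):
        # Replicate the assignment: same number treated per block as observed
        assn = {}
        for names in blocks.values():
            shuffled = list(names)
            rng.shuffle(shuffled)
            n_treat = sum(observed_assignment[v] for v in names)
            for i, v in enumerate(shuffled):
                assn[v] = 1 if i < n_treat else 0
        return assn

    rng = random.Random(seed)
    observed = diff_in_means(observed_assignment)
    hits = sum(diff_in_means(draw(rng)) >= observed for _ in range(n_reps))
    return hits / n_reps

# Hypothetical data for illustration
outcomes = {"V1": 1, "V2": 0, "V3": 1, "V4": 0, "V5": 1, "V6": 0}
blocks = {"A": ["V1", "V2", "V3"], "B": ["V4", "V5", "V6"]}
observed = {"V1": 1, "V2": 0, "V3": 0, "V4": 1, "V5": 1, "V6": 0}
p = ri_p_value(outcomes, blocks, observed, n_reps=2000, seed=1)
```

Because the p-value is computed by replaying the assignment mechanism itself, it respects the blocking structure exactly rather than relying on asymptotic approximations.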
Results
Physical and resident audits
Based on data from physical and resident audits, we observe no differences in project delivery between treatment and control villages that are inconsistent with the null hypothesis (Table 1). Across a host of outcomes measured in physical audits, including whether the project implemented was the project approved by UWA, whether the project was completed, the number of dispersed items that could be located by village guides, and whether project components were labeled, we find that treatment villages did not do better than control villages. Likewise, when we asked residents to report whether an approved project was implemented and to show evidence of delivery, treatment villages did not do better than control villages. As displayed in SI Table G2, we do not detect spillover from contiguous villages.
Table 1. Results of revenue-sharing implementation from physical and resident audits.
Variables: Approved project implemented measures the proportion of implemented or partially implemented projects that were approved by UWA. Complete is a binary indicator of whether non-dispersed projects were implemented completely or somewhat completely as revealed by field audits. Pictures is a numeric variable from 0 to 10 of the number of pictures of dispersed items captured during audits. Fully labeled is a binary indicator of whether project components were labeled according to guidelines. Partially labeled is a binary indicator of whether project components had some labeling, even if not fully in line with guidelines. Standard errors are computed by cluster-wise bootstrapping at the village level for the descriptive treatment and control conditions. Standard errors for the difference-in-means and OLS fixed-effects models are the standard deviation of the randomization distribution of the assignment with village clustering assuming the sharp null hypothesis. p-values are one-sided tests based on randomization inference. FE OLS includes block fixed effects. For the non-dispersed Complete outcome, not enough observations are available to estimate the pre-specified fixed-effects model.
Resident surveys
Consistent with the audits, we find no evidence that the treatment changed attitudes among residents. In Table 2, we show that there is no evidence that residents in treatment villages reported greater satisfaction with the implementation of revenue sharing, satisfaction with the management of Bwindi National Park, satisfaction with revenue sharing generally, perceived importance of protecting Bwindi National Park, or perceived value of revenue-sharing projects, as compared to residents in control villages. We find no evidence of heterogeneous effects based on whether survey respondents were subscribers to the Bwindi Information Network (SI Table H1).
Table 2. Attitudes about revenue sharing from resident surveys.
Variables: See Online Appendix C for exact survey items. Satisfied RS implementation is satisfaction with implementation of revenue sharing (0, very dissatisfied; 3, very satisfied). Satisfied park management is satisfaction with overall park management (0, very dissatisfied; 4, very satisfied). Satisfied revenue sharing is satisfaction with revenue sharing (0, very dissatisfied; 4, very satisfied). Importance conservation is agreement with the importance of protecting Bwindi (0, not very important; 2, very important). RS benefits valuable is perception of whether benefits from revenue sharing are valuable (0, not at all; 3, very valuable). Standard errors are computed by cluster-wise bootstrapping at the village level for the descriptive treatment and control conditions. Standard errors for the difference-in-means and OLS fixed-effects models are the standard deviation of the randomization distribution of the assignment with village clustering assuming the sharp null hypothesis. p-values are one-sided tests based on randomization inference. FE OLS includes block fixed effects.
Follow-up interviews
Perplexed by these results, we conducted interviews with each of the three CAOs in the relevant districts. We also interviewed elected and appointed subcounty officials, members of village- and parish-level project procurement committees that select contractors for projects, members of village project management committees, and elected village chairpersons. The list of interviews conducted is available in Online Appendix E.
All three CAOs in office during the study claimed that they had not seen the reports that were hand delivered to their district offices by a UWA ranger or warden. UWA confirmed that it delivered a total of four reports, held phone calls and in-person meetings to explain the reports to each of the CAOs, and received acknowledgement directly from the CAOs that the reports had been received. Interviewee A was careful to note that while he had not seen the reports, it was possible that another staff member in his office had seen them. Interviewee B also noted the potential importance of the reports to his office, even though he had not seen them. Interviewee C indicated that his assistant had informed him about the receipt of the reports from UWA, but that he had not personally looked them over. Interviewee C stated, “I was told about these reports from Bwindi by my assistant, who told me there was nothing big to attend to.” He also complained about the burden of reading a report and noted “when someone is busy, they become lazy to read these small fonts.” It is possible that CAOs avoided acknowledging the reports during our interviews to avoid acknowledging responsibility for the lack of a response to their contents.
While the citizen monitoring intervention did not impact the delivery of projects, interviews with other local officials revealed that the treatment had positive impacts beyond those that we measured in audits and surveys. Our interviews (E–U) revealed three instances where the information about the approved project and funding amounts sent to residents as part of treatment ultimately encouraged residents to complain to CAOs collectively and non-anonymously, leading to important follow-up actions that could not have been detected using audits or surveys. The reports delivered as part of treatment conveyed similarly negative information about these projects (Table D1 online) but did not instigate any follow-up action by the CAOs.
Cash instead of goats
Villagers in Kashekyera received messages informing them that they would receive goats for an animal husbandry project and the amount of funds allocated. One evening, the subcounty chief came to Kashekyera and tried to get villagers to accept cash rather than goats. He offered beneficiaries less money than had been allocated to pay for the goats. Some beneficiaries took the money and some did not. He told those who refused that if they refused the money, they would not receive anything as part of revenue sharing.
A subset of villagers, drawing on the information they had learned from the treatment messages, identified the gap between the funds allocated and the amount offered by the subcounty chief. They contacted UWA staff to complain and explained that (a) they had received messages, (b) the messages said to expect goats and that a certain amount of money would be spent, and (c) instead the subcounty chief had tried to get them to accept a lesser amount of cash. UWA told the villagers to contact the CAO and to copy UWA on the complaint. After receiving the complaint, the CAO asked UWA leaders, the subcounty chief, and the subcounty chairperson to meet. The subcounty chief denied the story. The CAO insisted that the whole group visit the beneficiaries to find out what had happened. When they arrived, residents pointed at the subcounty chief and explained how they were told to take money or they would get nothing. Based on this information, the CAO took the subcounty chief to the disciplinary committee and attempted to fire him. Ultimately, the subcounty chief challenged this decision, and his punishment was settled at a several-month suspension and a transfer.
Shorting a village’s revenue-sharing funds
In Kahurire village, UWA had allocated 23,000,000 UGX for revenue-sharing projects, and the village had this confirmed in treatment messages. The subcounty chief told Kahurire village to expect only 15,000,000 UGX. Apparently, the subcounty chief had planned to send the remaining 8,000,000 UGX to a different village in the subcounty to help fund a new school building. Locals suspected that the subcounty chief would somehow benefit.
Because the amount and project differed from what the treatment messages had told them to expect, the villagers called UWA. The subcounty chief’s actions were brought to the district’s attention. Based on this interaction, the district CAO fired the subcounty chief. Funding was ultimately made available for Kahurire’s revenue-sharing project.
Advance payment to missing contractor
Five villages in Buremba Parish requested to pool their revenue-sharing funds to build a tourism center. The tourism center would be built near a health center that a contractor had started but not finished. The subcounty chief opted to use the same contractor who had started the health center to build the tourism center. The contractor decided he would finish the health center instead of building the tourism center.
The treatment messages told the villagers that revenue-sharing funds were allocated for a visitor center along with the amount of funds allocated. Responding to the treatment messages, the villagers complained to the subcounty that they should get a visitor center, not a health center.
The subcounty chief insisted that the health center building would have to suffice and authorized advance payment to the contractor. This upset residents, who complained to UWA. UWA told the villagers to complain to the district CAO as well. Based on the complaint, the district auditor initiated a review. The contractor fled the subcounty and the work remains undone. The CAO has attempted to recover the funds from the salary of the subcounty chief.
Discussion
We expected that citizen monitoring that revealed specific problems and was directed to high-level officials would activate responsibility to correct known mismanagement of public funds. In the experiment, the CAOs who received the reports stated that they had not paid attention to them. Apparently, the anonymous and aggregate nature of the reports was not sufficiently threatening, even when delivered by another high-ranking official, and CAOs could claim they had been lost among other paperwork. In contrast, when citizens complained directly and non-anonymously about problems also indicated in the treatment reports, the CAOs acted. It is possible that direct citizen complaints made the threat of escalation more credible and diminished the ability of the CAOs to claim ignorance of problems.
In contrast to the results of this experiment, monitoring individual officials, rather than complex governance failures, seems more promising. Building expectations among officials that citizen monitoring will be used for their evaluation directly and shared with supervisors has had a modest impact on service provision (Muralidharan et al., 2019). Monitoring specific personnel on basic duties like attending work has achieved reductions in absenteeism (Callen et al., 2018).
For complex governance challenges, there may be no shortcut to disseminating monitoring in ways that officials cannot ignore, which seems most likely when citizens complain about government performance in vocal and public ways (Fiala and Premand, 2018). While the treatment we studied attempted to remove the risks of petitioning governments, the resulting crowd-sourced information contained less specific information and was delivered in a format that was too easy to ignore. Another possibility for why CAOs responded to direct complaints from citizens is that they activated social expectations or emotional responses that anonymous reports did not.
As might be expected given the limited responsiveness of the CAOs to reports from the platform, residents in treated villages did not report increased satisfaction with revenue sharing. This result is likely due either to residents failing to see any results of reporting or to their learning that additional mobilization was required to generate responsiveness. Satisfaction with a government program is unlikely to be positively related to the need to make costly complaints about it. More practically, this result confirms that reporting platforms must credibly signal responsiveness from government to affect citizens’ attitudes or behavior (Buntaine et al., 2019).
Our experiment was designed to test whether features of both bottom-up and top-down accountability could be combined to improve the management of public funds for community-driven development projects. While this combination might work in other contexts where there is a strong motivation by high-authority officials to respond to problems (Anderson et al., 2019), this study suggests that monitoring must be deployed in ways that make the threat of escalation credible and make it impossible for responsible officials to claim ignorance of problems.
Supplemental Material
BuntaineDaniels_RandP_SI – Supplemental material for “Combining bottom-up monitoring and top-down accountability: A field experiment on managing corruption in Uganda” by Mark T. Buntaine and Brigham Daniels, Research & Politics.
Acknowledgements
We are grateful to Jeremiah Nahamya for his contributions to the design and implementation of this research as the project manager based in Uganda. Tanner Bangerter provided excellent research assistance. This project was conducted in partnership with the Uganda Wildlife Authority and we gratefully acknowledge the guidance and engagement of Pontius Enzuma, Raymond Kato, Aulea Tumwebaze, and Joseph Arinaitwe. We are grateful to Lisa Dellmuth, Aseem Prakash, and audiences at the University of Illinois and the 4th Annual Conference on Environmental Politics and Governance, the Working Group in African Political Economy, and anonymous reviewers for helpful comments on earlier drafts. This project was reviewed and approved by the UCSB Human Subjects Committee (UCSB protocols #4-18-0013 & #14-18-0385), the Uganda Mildmay Research Ethics Committee (protocol 0703-2015), the Uganda National Council for Science and Technology (protocol IS 111), and the Uganda Office of the President (ref: ADM 154/212/03). Author contributions: MB and BD designed and implemented the research. MB analyzed the data and wrote the paper. MB and BD edited the paper. An earlier version of this paper was circulated with the title “Diffuse Responsibility Undermines Public Oversight: A Field Experiment at Bwindi National Park, Uganda.”
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This project was funded by the US National Science Foundation (collaborative grants # 1655459/1655513).
Carnegie Corporation of New York Grant
This publication was made possible (in part) by a grant from the Carnegie Corporation of New York. The statements made and views expressed are solely the responsibility of the author.