Abstract
This article reflects on how the Independent Evaluation Group (IEG) of the World Bank Group grapples with generalizability in conducting large-scale, complex development evaluations. It discusses a practical framework IEG uses to meet three needs: methodological, institutional, and didactic. IEG evaluations aim to inform significant organizational or strategic decisions; they encompass broad scopes, assessing extensive portfolios of the organization's interventions across diverse contexts and multiple years. These evaluations are inherently multimethod and must bridge various logics of generalization. They seek to influence decision-makers, such as boards of directors or senior management teams, by guiding pivotal moments in the organization's trajectory; ensuring accountability for learning, results, or budget expenditures; and synthesizing substantial evidence to distill key success or failure factors for future strategic planning. The defensibility of the methodological scaffolding is paramount to the evaluations' credibility. The article discusses the challenges inherent in such evaluations, including the need to generate findings that are valid at multiple levels of analysis and the reliance on multitiered mixed-methods approaches. It examines how a practical framework bridges methodological principles and the real-world challenges of evaluation to inform the design and implementation of large-scale evaluations. The framework is illustrated with examples from IEG's evaluations, and the article explores how practitioners and researchers can apply it in other settings to enhance the generalizability of their findings.
