Abstract
A central concern of impact evaluation is whether a program has made a difference (Owen 2006). In addressing this question, an evaluation seeks to establish causality—the causal link between the social program and its outcomes (Mohr 1999; Rossi & Freeman 1993). However, attributing causality to social programs is difficult because of their inherent complexity and the many and varied factors at play (House 2001; Mayne 2001; Pawson & Tilley 1998; White 2010).
Evaluators can choose from a number of theories and methods to help address causality (Davidson 2000). The counterfactual—what would have occurred in the absence of the intervention—has been at the heart of traditional impact evaluations, which estimate impact as the difference between actual outcomes and this counterfactual. While the counterfactual has traditionally been measured using experimental and quasi-experimental methods, it does not always require a comparison group (White 2010) and can be constructed qualitatively (Cummings 2006).
With these considerations in mind, this article explores the usefulness of additionality—a mixed-methods framework developed by Buisseret et al. (cited in Georghiou 2002)—as a means of evaluative comparison against the counterfactual.
