Abstract
Performance-based accountability, along with budget tightening, has increased pressure on publicly funded organizations to develop and deliver programs that produce meaningful social benefits. As a result, there is an increasing need to undertake formative evaluations that estimate preliminary program outcomes and identify promising program components based on their effectiveness during implementation. By combining longitudinal administrative data, multiple comparison group designs, and a progressive series of analyses that test rival explanations, evaluators can strengthen causal arguments and provide actionable information that key stakeholders can use to improve program outcomes. In this article, we illustrate the application of rigorous methods to estimate preliminary program effects and to rule out alternative explanations for those effects, including site selection bias, individual selection bias, and resentful demoralization, through the evaluation of the Collaborative Project, a North Carolina educational improvement project that incorporated multiple components aimed at boosting student achievement.