Abstract
The evaluation of Reading First, the U.S. Department of Education’s multibillion-dollar K–3 initiative, though flawed, offers instructive guidance for gauging the impact of future initiatives. After providing an overview of the program, its evaluation, and the historical context of federal initiatives, the authors outline the limitations of applying scientific principles at scale. They argue for more nuanced approaches, including meta-analyses across projects, improved statistical methods, and the incorporation of formative designs. They conclude with four recommendations for evaluating future initiatives: such evaluations should (a) account for fidelity of implementation systematically, (b) include outcome measures that gauge school climate and administrative support, (c) employ multiple designs and aggregate the results, and (d) account for the length of implementation.