Abstract
It is a healthy exercise to debate the merits of using effect-size benchmarks to interpret research findings. However, these debates obscure a more central insight that emerges from empirical distributions of effect-size estimates in the literature: Efforts to improve education often fail to move the needle. I find that 36% of effect sizes from randomized controlled trials of education interventions with standardized achievement outcomes are less than 0.05 SD. Publication bias surely masks many more failed efforts from our view. Recognizing the frequency of these failures should be at the core of any approach to interpreting the policy relevance of effect sizes. We can aim high without dismissing as trivial those effect sizes that represent more incremental improvement.
