Abstract
When a researcher tests an educational program, product, or policy in a randomized controlled trial and detects a significant effect on an outcome, the intervention is usually classified as something that “works.” When expected effects are not found, there is seldom an orderly and transparent analysis of plausible reasons why. Accumulating and learning from possible failure mechanisms is not standard practice in education research, and it is not common to design interventions with causes of failure in mind. This chapter develops Boruch and Ruby’s proposition that the education sciences would benefit from a systematic approach to the study of failure. We review and taxonomize recent reports of large-scale randomized controlled trials in K–12 schooling that yielded at least one null or negative major outcome, including the nature of the event and reasons (if provided) for why it occurred. Our purpose is to introduce a broad framework for thinking about educational interventions that do not produce expected effects and seed a cumulative knowledge base on when, how, and why interventions do not reach expectations. The reasons why an individual intervention fails to elicit an outcome are not straightforward, but themes emerge when researchers’ reports are synthesized.