Abstract
Educators who are urged to use evidence-based practices to improve instruction often end up disappointed when the results fall short of those touted in the research and by the What Works Clearinghouse. Stanley Pogrow explains how common strategies researchers use to demonstrate evidence of success, such as statistical significance or effect size, can make results seem more dramatic than they are. Such measures are more valuable to researchers than to educators who are looking for interventions that will make a difference in their classrooms or schools. Pogrow offers recommendations for both educators and the research community for improving the state of education research.