Bloom, H. S. 1995. “Minimum Detectable Effects: A Simple Way to Report the Statistical Power of Experimental Designs.” Evaluation Review 19:547–56.
Bloom, H. S. 2005. “Randomizing Groups to Evaluate Place-based Programs.” In Learning More from Social Experiments: Evolving Analytic Approaches, edited by H. S. Bloom, 115–72. New York: Russell Sage.
Bloom, H. S., J. M. Bos, and S. W. Lee. 1999. “Using Cluster Random Assignment to Measure Program Impacts.” Evaluation Review 23:445–69.
Bloom, H. S., L. Richburg-Hayes, and A. R. Black. 2007. “Using Covariates to Improve Precision for Studies that Randomize Schools to Evaluate Educational Interventions.” Educational Evaluation and Policy Analysis 29:30–59.
Boruch, R. F., and E. Foley. 2000. “The Honestly Experimental Society.” In Validity and Social Experiments: Donald Campbell’s Legacy, edited by L. Bickman, 193–239. Thousand Oaks, CA: Sage.
Brandon, P. R., G. M. Harrison, and B. E. Lawton. 2013. “SAS Code for Calculating Intraclass Correlation Coefficients and Effect Size Benchmarks for Site-randomized Education Experiments.” American Journal of Evaluation 34:85–90.
Cook, T. D. 2005. “Emergent Principles for the Design, Implementation, and Analysis of Cluster-based Experiments in Social Science.” The Annals of the American Academy of Political and Social Science 599:176–98.
Dong, N., and R. Maynard. 2013. “PowerUp!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-experimental Design Studies.” Journal of Research on Educational Effectiveness 6:24–67.
Hedges, L. V., and E. C. Hedberg. 2007. “Intraclass Correlation Values for Planning Group-randomized Trials in Education.” Educational Evaluation and Policy Analysis 29:60–87.
Hedges, L. V., and C. Rhoads. 2009. Statistical Power Analysis in Education Research (NCSER 2010-3006). Washington, DC: National Center for Special Education Research, Institute of Education Sciences, US Department of Education.
Jacob, R., P. Zhu, and H. S. Bloom. 2010. “New Empirical Evidence for the Design of Group Randomized Trials in Education.” Journal of Research on Educational Effectiveness 3:157–98.
Kelcey, B., and G. Phelps. 2013. “Considerations for Designing Group Randomized Trials of Professional Development with Teacher Knowledge Outcomes.” Educational Evaluation and Policy Analysis 35:370–90.
Konstantopoulos, S. 2008a. “The Power of the Test for Treatment Effects in Three-level Cluster Randomized Designs.” Journal of Research on Educational Effectiveness 1:66–88.
Konstantopoulos, S. 2008b. “The Power of the Test for Treatment Effects in Three-level Block Randomized Designs.” Journal of Research on Educational Effectiveness 1:265–88.
Konstantopoulos, S. 2009. “Incorporating Cost in Power Analysis for Three-level Cluster-randomized Designs.” Evaluation Review 33:335–57.
Raudenbush, S. W. 1997. “Statistical Analysis and Optimal Design for Cluster Randomized Trials.” Psychological Methods 2:173–85.
Raudenbush, S. W., and X. Liu. 2000. “Statistical Power and Optimal Design for Multisite Randomized Trials.” Psychological Methods 5:199–213.
Raudenbush, S. W., and X. Liu. 2001. “Effects of Study Duration, Frequency of Observation, and Sample Size on Power in Studies of Group Differences in Polynomial Change.” Psychological Methods 6:387–401.
Raudenbush, S. W., A. Martinez, and J. Spybrook. 2007. “Strategies for Improving Precision in Group-randomized Experiments.” Educational Evaluation and Policy Analysis 29:5–29.
Raudenbush, S. W., J. Spybrook, R. Congdon, X. Liu, A. Martinez, H. Bloom, and C. Hill. 2011. Optimal Design Plus Empirical Evidence (Version 3.0). http://www.wtgrantfoundation.org (accessed January 7, 2014).
Schochet, P. Z. 2008. “Statistical Power for Random Assignment Evaluations of Education Programs.” Journal of Educational and Behavioral Statistics 33:62–87.
Spybrook, J., and S. W. Raudenbush. 2009. “An Examination of the Precision and Technical Accuracy of the First Wave of Group Randomized Trials Funded by the Institute of Education Sciences.” Educational Evaluation and Policy Analysis 31:298–318.
Xu, Z., and A. Nichols. 2010. New Estimates of Design Parameters for Clustered Randomization Studies: Findings from North Carolina and Florida. Washington, DC: The Urban Institute, National Center for Analysis of Longitudinal Data in Education Research.
Zhu, P., R. Jacob, H. Bloom, and Z. Xu. 2012. “Designing and Analyzing Studies that Randomize Schools to Estimate Intervention Effects on Student Academic Outcomes without Classroom-level Information.” Educational Evaluation and Policy Analysis 34:45–68.