Bloom, H. S. (2005). Randomizing groups to evaluate place-based programs. In H. S. Bloom (Ed.), Learning more from social experiments: Evolving analytic approaches (pp. 115–172). New York, NY: Russell Sage Foundation.
Bloom, H. S., Richburg-Hayes, L., & Black, A. R. (2007). Using covariates to improve precision for studies that randomize schools to evaluate educational interventions. Educational Evaluation and Policy Analysis, 29, 30–59.
Dong, N., & Maynard, R. (2013). PowerUp!: A tool for calculating minimum detectable effect sizes and minimum required sample sizes for experimental and quasi-experimental design studies. Journal of Research on Educational Effectiveness, 6, 24–67.
Donner, A., & Klar, N. (2000). Design and analysis of cluster randomization trials in health research. London, England: Arnold.
Flay, B. R., & Collins, L. M. (2005). Historical review of school-based randomized trials for evaluating problem behavior prevention programs. The Annals of the American Academy of Political and Social Science, 599, 115–146.
Hedges, L. V., & Hedberg, E. C. (2007). Intraclass correlation values for planning group-randomized trials in education. Educational Evaluation and Policy Analysis, 29, 60–87.
Hedges, L. V., & Hedberg, E. C. (2013). Intraclass correlations and covariate outcome correlations for planning two- and three-level cluster-randomized experiments in education. Evaluation Review, 37, 445–489.
Hedges, L. V., & Rhoads, C. (2010). Statistical power analysis in education research (NCSER 2010–3006). Washington, DC: National Center for Special Education Research, Institute of Education Sciences, U.S. Department of Education.
Institute of Education Sciences. (2016). Research grants request for applications for awards beginning in fiscal year 2017: CFDA Number 84.305A. Washington, DC: U.S. Department of Education.
Jacob, R., Zhu, P., & Bloom, H. (2010). New empirical evidence for the design of group randomized trials in education. Journal of Research on Educational Effectiveness, 3, 157–198.
Kelcey, B., & Phelps, G. (2013). Considerations for designing group randomized trials of professional development with teacher knowledge outcomes. Educational Evaluation and Policy Analysis, 35, 370–390.
Konstantopoulos, S. (2008). The power of the test for treatment effects in three-level cluster randomized designs. Journal of Research on Educational Effectiveness, 1, 66–88.
Murray, D. M. (1998). Design and analysis of group-randomized trials. New York, NY: Oxford University Press.
Murray, D. M., & Blitstein, J. L. (2003). Methods to reduce the impact of intraclass correlation in group-randomized trials. Evaluation Review, 27, 79–103.
Murray, D. M., & Short, B. (1995). Intraclass correlation among measures related to alcohol use by young adults: Estimates, correlates and applications in intervention studies. Journal of Studies on Alcohol, 56, 681.
Raudenbush, S. W. (1997). Statistical analysis and optimal design for cluster randomized trials. Psychological Methods, 2, 173–185.
Raudenbush, S. W., & Liu, X. (2000). Statistical power and optimal design for multisite randomized trials. Psychological Methods, 5, 199–213.
Raudenbush, S. W., Martinez, A., & Spybrook, J. (2007). Strategies for improving precision in group-randomized experiments. Educational Evaluation and Policy Analysis, 29, 5–29.
Raudenbush, S. W., Spybrook, J., Congdon, R., Liu, X., Martinez, A., & Bloom, H. (2011). Optimal Design software for multi-level and longitudinal research (Version 3.01) [Software]. Retrieved from www.wtgrantfoundation.org
Schochet, P. Z. (2008). Statistical power for random assignment evaluations of education programs. Journal of Educational and Behavioral Statistics, 33, 62–87.
Schwartz, K., Iqbal, Y., & Aber, J. L. (2016). What we have learned, what we have asked: Evaluating effectiveness in educational interventions in low- and middle-income countries. Vancouver, BC: Comparative and International Education Society.
Siddiqui, O., Hedeker, D., Flay, B. R., & Hu, F. B. (1996). Intraclass correlation estimates in a school-based smoking prevention study: Outcome and mediating variables, by sex and ethnicity. American Journal of Epidemiology, 144, 425–433.
Spybrook, J. (2013). Introduction to special issue on design parameters for cluster randomized trials in education. Evaluation Review, 37, 435–444.
Spybrook, J., Kelcey, B., & Dong, N. (2016). Power for detecting treatment by moderator effects in two- and three-level cluster randomized trials. Journal of Educational and Behavioral Statistics, 41(6), 605–627.
Spybrook, J., Shi, R., & Kelcey, B. (2016). Progress in the past decade: An examination of the precision of cluster randomized trials funded by the U.S. Institute of Education Sciences. International Journal of Research & Method in Education, 39, 255–267.
Spybrook, J., Westine, C. D., & Taylor, J. A. (2016). Design parameters for impact research in science education. AERA Open, 2, 1–15.
Ukoumunne, O. C., Gulliford, M. C., Chinn, S., Sterne, A. C., & Burney, P. F. (1999). Methods for evaluating area-wide and organization-based interventions in health and health care: A systematic review. Health Technology Assessment, 3, 1–99.
Westine, C. D., Spybrook, J., & Taylor, J. A. (2013). An empirical investigation of variance design parameters for planning cluster-randomized trials of science achievement. Evaluation Review, 37, 490–519.