In this response, we first show that Simpson’s proposed analysis answers a different, and less interesting, question than ours. We then justify the choice of prior for our Bayes factor calculations, and we demonstrate that the substantive conclusions of our article are robust to varying this choice.
1. Baguley, T. (2012). Serious stats: A guide to advanced statistics for the behavioral sciences. Basingstoke, UK: Palgrave Macmillan.
2. Berger, J. (2006). The case for objective Bayesian analysis. Bayesian Analysis, 1(3), 385–402.
3. Cheung, A., & Slavin, R. E. (2016). How methodological features of research studies affect effect sizes. Educational Researcher, 45(5), 283–292.
4. de Vries, R. M., & Morey, R. D. (2013). Bayesian hypothesis testing for single-subject designs. Psychological Methods, 18(2), 165–185.
5. Hattie, J. A. C. (2009). Visible learning: A synthesis of 800+ meta-analyses on achievement. Oxford, UK: Routledge.
6. Morey, R. D., Wagenmakers, E.-J., & Rouder, J. N. (2016). Calibrated Bayes factors should not be used: A reply to Hoijtink, van Kooten, and Hulsker. Multivariate Behavioral Research, 51(1), 11–19.
7. Rouder, J. N., Morey, R. D., Speckman, P. L., & Province, J. M. (2012). Default Bayes factors for ANOVA designs. Journal of Mathematical Psychology, 56(5), 356–374.
8. Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16(2), 225–237.
9. Simpson, A. (2019). Whose prior is it anyway? A note on “Rigorous large-scale educational RCTs are often uninformative.” Educational Researcher, 48(6), 382–384.