Abstract
Although many researchers have discussed replication as a means of facilitating self-correcting science, in this article we identify meta-analysis and the evaluation of the validity of correlational and causal inferences as additional processes crucial to self-correction. We argue that researchers have a duty to describe the sampling decisions they make; without such descriptions, self-correction becomes difficult, if not impossible. We developed the Replicability and Meta-Analytic Suitability Inventory (RAMSI) to evaluate the descriptive adequacy of a sample of studies taken from the current psychological literature. Authors described only about 30% of the sampling decisions necessary for self-correcting science. We suggest that a modified RAMSI can be used by authors to guide their written reports and by reviewers to inform editorial recommendations. Finally, we claim that when researchers do not describe their sampling decisions, both readers and reviewers may assume that those decisions do not matter to the outcome of the study, do not affect inferences made from the research findings, do not inhibit inclusion in meta-analyses, and do not inhibit replicability of the study. If these assumptions are in error, as they often are, and the neglected decisions are relevant, then the neglect may create a good deal of mischief in the field.
