Abstract
Most of the debates around statistical testing suffer from a failure to identify clearly the features specific to the theories invented by Fisher and by Neyman and Pearson. These features are outlined. The hybrids of Fisher’s and Neyman–Pearson’s theory are briefly addressed. The lack of random sampling and its consequences for statistical inference are also highlighted, leading to the recommendation to dispense with inferences and perform approximate randomization tests instead. A possible scheme for the appraisal of substantive hypotheses is offered, the corroboration of which is a necessary prerequisite for scientific explanations and predictions. The scheme is partly based on the Neyman–Pearson theory. This theory, though not perfect, is superior to its competitors, especially when examining substantive hypotheses. The many statistical and extra-statistical decisions prior to experimentation and the inevitable subjectivity of our research endeavors are emphasized. If feasible, statistical problems should be discussed from an extra-statistical methodological/epistemological viewpoint.
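The abstract recommends dispensing with classical inferences in favor of approximate randomization tests when random sampling is absent. As a minimal sketch of that idea (not the article's own procedure; all names and parameters are illustrative), the following Monte Carlo permutation test estimates a p-value for a difference in group means by repeatedly reshuffling the pooled observations:

```python
import random

def approx_randomization_test(a, b, n_resamples=9999, seed=0):
    """Approximate (Monte Carlo) randomization test for a difference in means.

    Repeatedly reassigns the pooled observations to two groups at random and
    counts how often the absolute mean difference is at least as large as the
    observed one. Illustrative sketch only; names are not from the article.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    n_a = len(a)
    count = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            count += 1
    # add-one adjustment so the estimated p-value is never exactly zero
    return (count + 1) / (n_resamples + 1)
```

Because only a random subset of all possible reassignments is examined, the result is an approximation to the exact randomization p-value; it converges as `n_resamples` grows.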