Abstract
When several statistical hypotheses are tested in a study to answer a single research question or to test a single scientific hypothesis, interpreting the results may be difficult because Type 1 error probabilities accumulate. Researchers often try to solve this problem by reducing the significance level for each test or by applying a multiple-comparison procedure for means. For a fixed number of observations, however, both strategies lower the power of each test or comparison of interest. Two well-known psychological experiments are reanalyzed from this perspective. It is shown that low Type 1 error probabilities should be given higher priority only if the scientific hypothesis under scrutiny implies that all null hypotheses of the significance tests are valid. More often, however, the research hypothesis is supported completely if all alternative hypotheses are accepted. In this case, high power for each single test is more important than a low significance level.
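The tradeoff summarized above can be illustrated numerically. The following sketch (not taken from the article; the effect size and test count are illustrative assumptions) shows how the familywise Type 1 error rate grows with the number of independent tests, and how a Bonferroni-style reduction of the per-test significance level lowers the power of each individual z-test:

```python
import math


def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def norm_ppf(p):
    """Inverse standard normal CDF by bisection (illustrative helper)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0


def familywise_alpha(alpha, k):
    """P(at least one Type 1 error) across k independent tests,
    each conducted at significance level alpha."""
    return 1.0 - (1.0 - alpha) ** k


def z_test_power(alpha, delta):
    """Approximate power of a two-sided z-test at level alpha,
    where delta is the standardized noncentrality (effect * sqrt(n))."""
    z = norm_ppf(1.0 - alpha / 2.0)
    return 1.0 - norm_cdf(z - delta) + norm_cdf(-z - delta)


k = 5            # assumed number of tests in the study
alpha = 0.05
delta = 2.8      # assumed effect; gives roughly 80% power at alpha = .05

print(familywise_alpha(alpha, k))      # inflated familywise error, ~0.23
print(z_test_power(alpha, delta))      # per-test power, uncorrected, ~0.80
print(z_test_power(alpha / k, delta))  # per-test power after Bonferroni, ~0.59
```

With five tests, the uncorrected familywise error rate rises to about .23, while dividing the significance level by five drops each test's power from roughly .80 to roughly .59, which is the cost the abstract argues may be unwarranted when the research hypothesis predicts that all alternative hypotheses hold.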
