Abstract
It is often interesting to compare the size of treatment effects in analysis of variance designs. Many researchers, however, draw the conclusion that one independent variable has more impact than another without testing the null hypothesis that their impact is equal. Most often, investigators compute the proportion of variance each factor accounts for and infer population characteristics from these values. Because such analyses are based on descriptive rather than inferential statistics, they never justify the conclusion that one factor has more impact than the other. This paper presents a novel technique for testing the relative magnitude of effects. It is recommended that researchers interested in comparing effect sizes apply this technique rather than basing their conclusions solely on descriptive statistics.
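The descriptive practice the abstract critiques, computing the proportion of variance (η²) each factor accounts for and then informally comparing those values, can be sketched as follows. This is a minimal illustration with a hypothetical balanced 2×2 design and made-up effect sizes, not the paper's proposed inferential technique.

```python
import numpy as np

# Hypothetical balanced 2x2 design: factors A and B, n observations per cell.
# Effect sizes and noise level below are illustrative assumptions.
rng = np.random.default_rng(0)
n = 30
a_effect = {0: -1.0, 1: 1.0}   # assumed main effect of factor A
b_effect = {0: -0.5, 1: 0.5}   # assumed (smaller) main effect of factor B

data = {}
for a in (0, 1):
    for b in (0, 1):
        data[(a, b)] = a_effect[a] + b_effect[b] + rng.normal(0.0, 2.0, n)

all_y = np.concatenate(list(data.values()))
grand = all_y.mean()
ss_total = ((all_y - grand) ** 2).sum()

# Main-effect sums of squares for a balanced design:
# each factor level pools 2n observations across the other factor.
ss_a = sum(
    2 * n * (np.concatenate([data[(a, 0)], data[(a, 1)]]).mean() - grand) ** 2
    for a in (0, 1)
)
ss_b = sum(
    2 * n * (np.concatenate([data[(0, b)], data[(1, b)]]).mean() - grand) ** 2
    for b in (0, 1)
)

# Eta-squared: descriptive proportion of total variance per factor.
eta_sq_a = ss_a / ss_total
eta_sq_b = ss_b / ss_total
print(f"eta^2 for A: {eta_sq_a:.3f}, eta^2 for B: {eta_sq_b:.3f}")
```

Observing that one η² exceeds the other in a sample is exactly the descriptive comparison the abstract warns about: it does not by itself test the null hypothesis that the two factors' population effects are equal.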
