Abstract
Determining whether a test violates the assumption of unidimensionality is an important precursor to item response theory (IRT) analysis. However, a test's unidimensionality or nonunidimensionality may be a matter of degree, and the implications of the degree of nonunidimensionality may depend on how the test is analyzed and how the results are to be used. This study examined the dimensionality of a high-stakes graduate training selection test and the implications of that dimensionality for the IRT calibration and scoring of each section of the test. The dimensionality analyses suggested that, although the items within each section were not completely homogeneous, neither were they clearly measuring distinct constructs corresponding to the content disciplines. Student scores based on item parameters estimated separately within each discipline and then formed into weighted composites correlated above .99 with scores based on item parameters estimated across disciplines within each section.
