Abstract
Interest in on-demand noncognitive assessment has flourished due to advances in computer technology and studies demonstrating noteworthy predictive validities for organizational outcomes. Computerized adaptive testing (CAT) based on the Zinnes-Griggs (ZG) ideal point item response theory (IRT) model may hold promise for organizational settings, because a large pool of items can be created from a modest number of stimuli, and the items have been shown to be resistant to some types of rater bias. However, sample sizes needed for marginal maximum likelihood (MML) estimation of statement parameters are quite large and could thus limit usefulness in practice. This article addresses that concern and its ramifications for CAT. Specifically, we conducted empirical and simulation studies to examine whether subject matter expert (SME) ratings of statement extremity (location) can be substituted for MML estimates to streamline test development and launch. Results showed that error in SME-based location estimates had little detrimental effect on score accuracy or validity, regardless of whether measures were constructed adaptively or nonadaptively. Implications for research involving small samples and CAT in field settings are discussed.
