Abstract
Inattentive responses threaten measurement quality, yet they are common in rating-scale and Likert-scale data. In this study, we proposed a new mixture item response theory model that distinguishes inattentive responses from normal responses so that test validity can be ascertained. Simulation studies demonstrated that the parameters of the new model were recovered fairly well using Bayesian methods implemented in the freeware WinBUGS, and that fitting the new model to data lacking inattentive responses did not yield severely biased parameter estimates. In contrast, ignoring inattentive responses by fitting standard item response theory models to data containing them produced seriously biased parameter estimates and a failure to distinguish inattentive participants from normal participants; the person-fit statistic
