Abstract
The authors illustrate a problem with confirmatory factor analysis (CFA)-based strategies for modeling disaggregated multitrait-multirater (MTMR) data: the potential to find markedly different results with the same sample of ratees simply as a result of how one selects and identifies raters within the data set gathered for analysis. Using performance ratings collected as part of a large criterion-related validation study, the authors show how such differences manifest themselves in several ways, including variation in (a) the covariance matrices that serve as input for the modeling effort, (b) model convergence, (c) admissibility of solutions, (d) overall model fit, (e) model parameter estimates, and (f) model selection. Implications of this study for past research and recommendations for future CFA-based MTMR modeling efforts are discussed.