Abstract
Coefficient α, an estimate of the classical reliability coefficient, was evaluated under violations of two classical test theory assumptions: essential τ-equivalence and uncorrelated errors. The interactive effects of both violations were explored using computer-simulated true and error scores with known properties. As correlations among true scores decreased from 1, i.e., as essential τ-equivalence was systematically violated, α progressively underestimated the classical reliability coefficient. Simultaneously, as error score correlations increased from 0, the underestimation was attenuated and α became an inflated overestimate of the classical reliability coefficient. Although it is generally accepted that true score assumptions can be tested using confirmatory factor analysis (CFA), the research literature indicates that it is impossible, or at least extremely difficult, to empirically assess uncorrelated errors with cross-sectional (as compared to longitudinal) data. Nevertheless, it is shown here that CFA error covariance estimates can be subtracted from α to substantially reduce, if not completely eliminate, the inflation bias that results from positively correlated errors.
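The two opposing biases described above can be illustrated with a minimal simulation. The sketch below is not the paper's actual design; the sample size, factor loadings, and error correlation are illustrative assumptions. Items are generated from a single common factor, so unequal loadings violate essential τ-equivalence, and a shared error factor induces positively correlated errors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200_000, 4  # illustrative sample size and number of items

def cronbach_alpha(X):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

f = rng.normal(size=n)        # common true score, variance 1
u = rng.normal(size=(n, k))   # unique error components, variance 1

# Case 1: equal loadings, uncorrelated errors -> alpha ~ reliability (0.80 here)
X1 = f[:, None] * 1.0 + u
a1 = cronbach_alpha(X1)

# Case 2: unequal loadings (essential tau-equivalence violated)
#         -> alpha underestimates the true reliability
lam = np.array([0.4, 0.7, 1.0, 1.6])            # illustrative loadings
X2 = f[:, None] * lam + u
rel2 = lam.sum()**2 / (lam.sum()**2 + k)        # true reliability ~ 0.774
a2 = cronbach_alpha(X2)                         # ~ 0.715, an underestimate

# Case 3: same unequal loadings plus positively correlated errors
#         -> the bias flips and alpha overestimates
c = 0.3                                         # common error correlation
g = rng.normal(size=n)                          # shared error factor
e = np.sqrt(c) * g[:, None] + np.sqrt(1 - c) * u
X3 = f[:, None] * lam + e
rel3 = lam.sum()**2 / (lam.sum()**2 + k + k * (k - 1) * c)  # ~ 0.643
a3 = cronbach_alpha(X3)                         # ~ 0.819, an overestimate
```

Subtracting an estimate of the error covariance from the total-score variance, as the abstract proposes via CFA, would remove the `k*(k-1)*c` inflation term in Case 3.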