Abstract
The purposes of this study were: (a) to compare the reliability, concurrent validity, and efficiency of scores derived from computerized adaptive and fixed-item versions of the vocabulary subtest from the Iowa Tests of Educational Development (ITED), and (b) to evaluate examinees' attitudes toward the tests. One hundred sixty-five college students were randomly assigned to take either an adaptive (n = 82) or fixed-item (n = 83) vocabulary test. Self-reported college grade point average, verbal self-concept scores, and vocabulary self-efficacy scores served as criterion variables in the concurrent validity analysis. Results indicated that the adaptive test, at 13 items, provided higher levels of reliability and concurrent validity than the fixed-item test at 40 items. Examinees expressed favorable attitudes about the computerized tests' visual displays, directions, and practice items but unfavorable attitudes about the exclusion of review and skip options. They also preferred computerized over paper-and-pencil vocabulary testing formats.