Abstract
In a previous study, factor analyses of TOEFL item phi-coefficients yielded a single factor for each section. This result conforms to the assumption that, within TOEFL sections, the item response curves are proportional. This assumption, called PIRC for "proportional item response curve," could serve as the basis for simpler application methods than those now in use. Adopting the assumption entails estimating fewer parameters and performing simpler computations than current methods require. Other anticipated benefits are a reduced chance of error in calibrating items and smaller equating samples, which would in turn allow pretest evaluation of substantially more items for possible operational use.
To evaluate the model further, a cross-validation study was undertaken in which PIRC was used to predict the item scores of selected examinees on selected items. The three-parameter logistic model and a modified Rasch (one-parameter) model were used as well. The study also compared predicted half-test scores with actual scores for each method. These comparisons were made using varying sample sizes to calibrate the items and calculate scores for the examinees. Surprisingly, the models' accuracies of prediction were approximately the same, and the estimation sample size appeared to make little difference.
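For readers unfamiliar with the two named comparison models, the following is a minimal sketch of their item response functions. The abstract does not give the PIRC functional form or the specific modification to the Rasch model, so only the standard three-parameter logistic (3PL) and one-parameter (Rasch) curves are illustrated here; the scaling constant 1.7 and all parameter values are conventional illustrative choices, not taken from the study.

```python
import math

def p_3pl(theta, a, b, c):
    """Three-parameter logistic (3PL) item response function:
    probability of a correct response at ability theta, given item
    discrimination a, difficulty b, and lower asymptote (guessing) c."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

def p_rasch(theta, b):
    """One-parameter (Rasch) item response function: items differ
    only in difficulty b, with no guessing parameter."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Parameter counts per item: 3PL estimates three (a, b, c), Rasch one (b).
# A proportionality assumption such as PIRC would reduce the per-item
# burden further, which is the motivation for the simpler methods above.
```

The 3PL curve rises from c toward 1 as ability increases; at theta equal to the item difficulty b it passes through the midpoint c + (1 - c)/2. The Rasch curve is the special case with a common discrimination and no lower asymptote.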
