Abstract
Continuously administered examination programs, particularly credentialing programs that require graduation from an educational program, often experience seasonality, in which the distribution of examinee ability differs over time. Such seasonality may affect the quality of important statistical processes, such as item response theory (IRT) item calibration and equating. The lead time required for producing pre-equated test forms in a continuous testing framework further complicates these issues. This study examines the effect of seasonality in test data on Rasch IRT item parameter estimates. Data came from four credentialing examination programs, representing programs both with and without seasonality as well as medium and low examinee volume. Results showed that calibrating items during certain time periods can lead to quite poor item parameter estimates. While certain programs could conduct IRT calibrations without waiting for the full examination cycle to be completed, other types of programs should wait as long as possible before calibrating items.
