Two methods of 'equating' tests are compared, one using true scores, the other using equipercentile equating of observed scores. The theory of equating is discussed. For the data studied, the two methods yield almost indistinguishable results.
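As a rough illustration of the second method, equipercentile equating maps each score on form X to the form-Y score having the same percentile rank. The sketch below is a minimal implementation under simple assumptions (integer number-correct scores, the midpoint percentile-rank convention, and linear interpolation); the function names are invented for this example and are not from the article.

```python
import numpy as np

def percentile_ranks(scores, max_score):
    """Percentile rank of each integer score 0..max_score (midpoint convention)."""
    scores = np.asarray(scores)
    n = len(scores)
    pr = np.empty(max_score + 1)
    for x in range(max_score + 1):
        below = np.sum(scores < x)      # examinees strictly below x
        at = np.sum(scores == x)        # examinees exactly at x
        pr[x] = (below + 0.5 * at) / n  # midpoint convention
    return pr

def equipercentile_equate(x_scores, y_scores, max_score):
    """Map each form-X score to the form-Y score with the same percentile rank."""
    pr_x = percentile_ranks(x_scores, max_score)
    pr_y = percentile_ranks(y_scores, max_score)
    # Invert form Y's percentile-rank function by linear interpolation
    return np.interp(pr_x, pr_y, np.arange(max_score + 1))
```

Equating a form to itself should return (approximately) the identity transformation, which makes a convenient sanity check. The true-score method compared in the article instead equates via estimated true scores from an item response theory model, which this sketch does not attempt to reproduce.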