The tests we use to evaluate student achievement may well be sound measures of what students know, but they are faulty indicators, at best, of how well those students have been taught. A remedy to the practice of judging teachers by their students' performance on high-stakes tests may already be in hand: we should look to the methods that were successfully used to eliminate testing bias when it was first discovered to affect gender and racial groups.