Our aim in this article is to draw attention to some underappreciated problems in the design and implementation of evaluation systems that incorporate value-added measures. We focus on four: (1) taking into account measurement error in teacher assessments, (2) revising teachers’ scores as more information becomes available about their students, (3) minimizing opportunistic behavior by teachers during roster verification, and (4) minimizing such behavior during the supervision of exams.