Abstract
Effective talent-identification procedures minimize the proportion of students whose subsequent performance indicates that they were mistakenly included in or excluded from the program. Classification errors occur when students who were predicted to excel subsequently do not excel or when students who were not predicted to excel do. Using a longitudinal sample, we assessed the accuracy of measures of verbal reasoning, quantitative reasoning, nonverbal reasoning, and current achievement for predicting later achievement. We found that seemingly small differences in predictive validity substantially changed the number of students erroneously included in or excluded from the program. Surprisingly, nonverbal tests not only led to more classification errors but also failed to identify more English language learners and minority students. To increase equity and maintain fairness, practitioners should carefully evaluate claims that scores from alternative assessments are as valid as scores from conventional ability tests and verify that the use of these tests results in greater diversity.