Abstract
Evaluation of special education effectiveness has stirred considerable controversy about the quality of services delivered to mildly handicapped students. However, the reliability of these evaluations has been questioned due to numerous methodological problems. Foremost among the concerns is the choice of metric to represent academic growth. The present study examines the hypothesis that different evaluative interpretations may be a function of the manner in which data are summarized and reported. Using a norm-referenced evaluation design, multiple metrics of reading and spelling achievement were obtained from mildly handicapped students in grades 1 through 6. Four metrics were compared: raw score (level of correct performance), grade-equivalent score, z-score, and discrepancy index. The findings indicate that different interpretations may be warranted depending on the metric employed. Criteria for selecting metrics for program evaluation are considered with respect to statistical adequacy and the objectives of special education programs.