Abstract
Despite evidence that the choice of dependent measures can significantly influence design sensitivity, many evaluators default to traditional measures that may be insensitive to intervention effects. This paper describes an innovative set of test development guidelines designed to select items and create aggregate scales that are better able to detect program effects. The application of these Intervention Item Selection Rules (IISRs) is illustrated during the initial development of an outcome measure, completed by teachers, for elementary-age children receiving psychosocial services from community mental health agencies. The major scale formed with these change-sensitive items displayed a larger effect size than the traditional measures and an adequate reliability estimate.