Abstract
Defining practical significance in program evaluations is a difficult measurement problem, one that can only be solved through an intimate familiarity with the measures upon which effects are estimated and their substantive relationship to the goals of the program being evaluated. Past attempts to describe the “size of effect” of instructional programs have characteristically relied on statistical indices that can be estimated and reported without any knowledge of what was measured. This practice is shown to be misdirected. Instead, what is called for is a procedure whereby the substantive instructional intentions of the program, the substantive characteristics of a test, and the interrelationship between the two are made explicit.