Abstract
Evaluations of educational programs often benefit from feedback supplied by the employers of students who have completed the program, but attempts to contact employers may be marred by coverage error. Moreover, formulating the goals against which a program's success is judged can itself be problematic. This paper examines these problems through an evaluation of an initiative to enhance information technology instruction and capabilities. Experience with the project led to two conclusions: plans for a census or probability-based sample sometimes must be revised in the course of an ongoing study, and official goals must be developed with caution. Given such difficulties, designing studies in which errors are nonproblematic, or can be measured with precision, may be unattainable. Even so, studies that make constructive contributions to ongoing decision-making may still be viewed as qualified but meaningful successes.