Abstract
Professional training has long been a target of evaluation. However, most evaluation of training is confined to simple end-of-course data collection and superficial analysis. We argue that evaluations relying heavily on end-of-course data collection can nonetheless provide useful findings if they are well designed. This involves thinking clearly about the variables to be measured, translating them into measures, and ‘making sense’ of the findings within a framework set by the phenomenon under review. A course that provided training in evaluation to graduate students and participants in New Zealand was chosen as a case to exemplify the application of these design principles.
