Abstract
Meta-evaluations reported in the literature, although rare, have often focused on the retrospective assessment of completed evaluations. Conducting a meta-evaluation concurrently with the evaluation modifies this approach: it gives the meta-evaluators the opportunity to advise the evaluators while also providing the basis for a summative judgment about the quality of the evaluation. The authors conducted a concurrent meta-evaluation of a new evaluation technique being developed by a federal government agency; the technique was expected to be highly visible and widely applied. What distinguished the concurrent meta-evaluation from other meta-evaluations was continuous involvement, attendance at data collection events, and external verification of the evaluation data. This critique describes the authors' experience conducting the concurrent meta-evaluation and discusses its challenges. The authors conclude that concurrent meta-evaluation holds promise for improving the practice of both evaluation and meta-evaluation.
