Abstract
Background:
When evaluations are broadly disseminated, the public can use them to support a program or to advocate for change.
Methods:
To explore how evaluations are perceived and used by the public, a sample of 425 individuals in the United States was recruited through Mechanical Turk (www.mturk.com), an online crowdsourcing service. Participants were randomly assigned to receive one of several versions of a press release describing a summative evaluation of a program. Each condition contained a unique combination of methods (e.g., randomized controlled design) and findings (positive or negative) used to describe the evaluation and its results. Participants in each condition responded to questions about their trust in the content of the evaluation findings and their attitudes toward the program.
Results:
Results indicated that both the type of evaluation methods and the direction of the findings influenced the credibility of the findings, and that credibility moderated the relationship between the direction of the evaluation findings and attitudes toward the evaluated program. Additional evaluation factors to explore in future research with the public are recommended.