Abstract
Data from psychological experiments pose a causal generalization paradox. Unless the experimental results have some generality, they contribute little to scientific knowledge. Yet, because most experiments use convenience samples rather than probability-based samples, there is almost never a formal justification, or set of rigorous guidelines, for generalizing the study's findings to other populations. This article discusses the causal generalization paradox in the context of outcome findings from experimental evaluations of psychological treatment programs and services. In grappling with the generalization paradox, researchers often make misleading (or at least oversimplified) assumptions. The article analyzes 10 such assumptions, including the belief that a significant experimental treatment effect is likely to be causally generalizable and the belief that the magnitude of a significant experimental effect provides a sound effect size estimate for causal generalization. The article then outlines 10 constructive strategies for assessing and enhancing causal generality. They include strategies involving the scaling level of outcome measures, variable treatment dosages, effectiveness designs, multiple measures, corroboration from observational designs, and the synthesis of multiple studies. Finally, the article's discussion section reviews the conditions under which causal generalizations are justified.
