Abstract
Factorial surveys use a population of vignettes to elicit respondents’ attitudes or beliefs about different hypothetical scenarios. However, the vignette population is frequently too large to be assessed by each respondent. Experimental designs such as randomized block confounded factorial (RBCF) designs, D-optimal designs, or random sampling designs can be used to construct small subsets of vignettes. In a simulation study, we compare the three vignette designs with respect to their biases in effect estimates and show how the biases arise from the designs’ confounding structure, nonorthogonality, and unbalancedness. We particularly focus on the designs’ sensitivity to context effects and misspecifications of the analytic model. We argue that RBCF designs and D-optimal designs are preferable to random sampling designs because they offer a stronger protection against undesirable confounding, context effects, and model misspecifications. We also discuss strategies for dealing with context and order effects since none of the basic vignette designs can satisfactorily handle them.
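To make the setting concrete, here is a minimal sketch of a random sampling vignette design: the full vignette population is the Cartesian product of all dimension levels, and each respondent is assigned an independent random subset. The dimensions and levels below are illustrative assumptions, not taken from the article.

```python
import itertools
import random

# Hypothetical factorial survey with three vignette dimensions
# (dimension names and levels are assumptions for illustration).
dimensions = {
    "income": ["low", "medium", "high"],
    "age": ["young", "old"],
    "gender": ["female", "male"],
}

# Full vignette population: every combination of levels (3 * 2 * 2 = 12).
population = [
    dict(zip(dimensions, levels))
    for levels in itertools.product(*dimensions.values())
]

def random_sampling_design(population, n_respondents, set_size, seed=0):
    """Draw an independent random subset ("deck") of vignettes per respondent."""
    rng = random.Random(seed)
    return [rng.sample(population, set_size) for _ in range(n_respondents)]

decks = random_sampling_design(population, n_respondents=5, set_size=4)
```

Because each deck is drawn independently, nothing guarantees orthogonality or balance across dimension levels within a deck; RBCF and D-optimal designs instead choose subsets deliberately to control which effects are confounded.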