Abstract
The difference in differences design is widely used to assess treatment effects in natural experiments and other situations where random assignment cannot be, or is not, used (see, e.g., Angrist & Pischke, 2009). The researcher must make important decisions about which comparisons to make, which measurements to take, and perhaps how many individuals' data to include at each timepoint. In addition, the interpretation of any statistical results, particularly null results, is improved by understanding the sensitivity of the design. This paper describes methods for computing the statistical power of tests of treatment effects in the difference in differences design. We describe alternative approaches to the analysis of the design, show which are equivalent, and provide expressions for computing statistical power and determining minimum detectable effect sizes. We then discuss how these methods can be generalized to unbalanced designs, designs with covariates, and designs with more than two timepoints, including difference in difference in differences designs.
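To make the kind of power computation the abstract describes concrete, the sketch below gives a standard large-sample approximation for the two-sided z-test of the treatment-by-time interaction in a balanced 2×2 difference in differences design with four independent cross-sectional cells. The function name `did_power` and the simplifying assumptions (a common residual standard deviation in all cells, equal cell sizes, and the normal approximation) are illustrative choices, not the paper's own derivation, which may differ in detail.

```python
from math import sqrt
from statistics import NormalDist

def did_power(delta, sigma, n_per_cell, alpha=0.05):
    """Approximate power of the two-sided z-test for the
    difference in differences (treatment-by-time interaction)
    in a balanced 2x2 design with independent samples.

    delta      : true difference in differences effect
    sigma      : residual standard deviation, assumed common to all cells
    n_per_cell : number of observations in each group-by-period cell
    alpha      : two-sided significance level
    """
    # The DiD estimate is a contrast of four independent cell means,
    # each with variance sigma^2 / n_per_cell, so its standard error is:
    se = sigma * sqrt(4.0 / n_per_cell)

    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    ncp = delta / se                    # standardized effect (noncentrality)

    # Probability of rejecting in either tail under the alternative.
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)
```

For example, with an effect of 0.5 residual standard deviations and 100 observations per cell, the approximation returns power of roughly 0.70; setting `delta=0` recovers the nominal alpha level, which is a useful sanity check on any such formula.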
