In the statistical literature, ‘effect size’ is typically defined as the estimate of a fixed or random effect in a linear or other statistical model. Here, a more explicit measure of effect size is studied, defined generically as $\hat{\theta} = \bar{X}/S$, where $\bar{X}$ and $S$ are the sample mean and sample standard deviation of a random sample of observations $X_1, X_2, \ldots, X_n$ from a distribution with finite mean $\mu$, variance $\sigma^2$, and finite moments of order 3 and 4, so that the skewness and kurtosis are well defined (see Section 2 for explicit formulae for the skewness and kurtosis of a univariate real-valued random variable). For example, in a paired-comparison experiment, the $X_i$ would be the differences between treatment and control measurements. In signal processing, the effect size has also been called the ‘signal-to-noise ratio,’ as it measures how easily a signal ($\mu$) can be detected in the presence of noise quantified by $\sigma$: the larger the effect size, the higher the confidence that a signal is present. Accordingly, it is of interest to find interval estimates of the population effect size $\mu/\sigma$. In this note, the asymptotic (normal) distribution of $\sqrt{n}\,(\hat{\theta} - \mu/\sigma)$ is determined, and an explicit variance-stabilizing transformation $g$ is found such that $\sqrt{n}\,\bigl(g(\hat{\theta}) - g(\mu/\sigma)\bigr)$ is asymptotically standard normal, making confidence intervals for the effect size relatively easy to compute numerically. This explicit formula for a variance-stabilizing transformation for the effect size appears to be a new result in the mathematical statistics literature, and there is strong evidence that confidence intervals formed after applying a variance-stabilizing transformation outperform central-limit-theorem-based confidence intervals. Therefore, for a random sample from any completely known univariate distribution, this result yields a relatively efficient confidence interval for the effect size.
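The two interval constructions contrasted above can be sketched numerically. The snippet below is an illustrative sketch only, under the simplifying assumption of normally distributed data: in that case the asymptotic variance of $\sqrt{n}\,(\hat{\theta} - \theta)$ is the classical $1 + \theta^2/2$, for which the variance-stabilizing transformation is $g(t) = \sqrt{2}\,\operatorname{arcsinh}(t/\sqrt{2})$. The paper's general transformation, which also involves the skewness and kurtosis terms of Section 2, is not reproduced here; the function name `effect_size_cis` is ours.

```python
import numpy as np

def effect_size_cis(x, z=1.96):
    """Delta-method and VST confidence intervals for the effect size mu/sigma.

    Sketch assuming NORMAL data: the asymptotic variance of
    sqrt(n)*(theta_hat - theta) is then 1 + theta^2/2, whose
    variance-stabilizing transformation is g(t) = sqrt(2)*arcsinh(t/sqrt(2)).
    The paper's general g additionally involves skewness and kurtosis.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    theta_hat = x.mean() / x.std(ddof=1)  # sample effect size X-bar / S

    # CLT/delta-method interval: theta_hat +/- z * sqrt((1 + theta^2/2)/n)
    se = np.sqrt((1.0 + theta_hat**2 / 2.0) / n)
    clt_ci = (theta_hat - z * se, theta_hat + z * se)

    # VST interval: move +/- z/sqrt(n) on the transformed scale, then invert g
    g = lambda t: np.sqrt(2.0) * np.arcsinh(t / np.sqrt(2.0))
    g_inv = lambda y: np.sqrt(2.0) * np.sinh(y / np.sqrt(2.0))
    vst_ci = (g_inv(g(theta_hat) - z / np.sqrt(n)),
              g_inv(g(theta_hat) + z / np.sqrt(n)))
    return theta_hat, clt_ci, vst_ci

# Example: 500 draws with mu = 1, sigma = 2, so the true effect size is 0.5
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=500)
theta_hat, clt_ci, vst_ci = effect_size_cis(x)
```

On the transformed scale the interval has constant width $2z/\sqrt{n}$; mapping it back through $g^{-1}$ produces the asymmetry that, per the abstract, tends to give better coverage than the symmetric CLT interval.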
AMS Subject Classification: 62-01