Abstract
The increasing number of self-report surveys collected by computer has produced a body of literature comparing response rates for computerized surveys with those for the more traditional paper-and-pencil method. However, results from individual studies have been inconsistent, and the meta-analyses available on this topic covered a restricted range of years and did not use proper statistical procedures for examining comparability. Consequently, we conducted a meta-analysis with 96 independent effect sizes spanning over two decades of studies; we also assessed potential moderators. Comparability was determined using confidence interval equivalence testing procedures. The meta-analysis indicated nonequivalence, with those in the paper-and-pencil condition being almost twice as likely to return surveys as those in the computer condition. There was large heterogeneity of variance, and 11 of the 18 potential moderators were significant. Two meta-regressions yielded only two significant unique moderators: population and type of measure. Results highlighted issues within the response rate literature that can be addressed in future studies and provided an example of using equivalence testing in meta-analyses.
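The confidence interval equivalence testing procedure mentioned above can be sketched briefly. In the common two one-sided tests (TOST) formulation, two conditions are declared equivalent only if the 90% confidence interval for the effect (here, a log odds ratio of response rates) falls entirely inside pre-specified equivalence bounds. The sketch below uses illustrative numbers only, not the article's data; the equivalence bounds of ±log(1.25) and the standard error are assumptions for demonstration.

```python
import math

def ci_equivalence_test(log_or, se, bounds=(math.log(0.8), math.log(1.25)), alpha=0.05):
    """CI-based equivalence test (equivalent to TOST at level alpha):
    declare equivalence only if the (1 - 2*alpha) CI for the log odds
    ratio lies entirely within the pre-set equivalence bounds."""
    z = 1.6449  # z for a 90% CI, matching two one-sided tests at alpha = .05
    lo, hi = log_or - z * se, log_or + z * se
    equivalent = bounds[0] < lo and hi < bounds[1]
    return lo, hi, equivalent

# Illustrative values only (not from the article): a pooled log odds
# ratio near log(2) -- paper respondents about twice as likely to
# return surveys -- falls outside the bounds, so equivalence is rejected.
lo, hi, eq = ci_equivalence_test(log_or=math.log(2.0), se=0.10)
print(round(math.exp(lo), 2), round(math.exp(hi), 2), eq)
```

Note the asymmetry with ordinary significance testing: a nonsignificant difference does not establish equivalence, whereas this procedure requires the whole interval to sit inside the bounds before equivalence is claimed.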
