This paper presents new evidence on what happens when questions from major social surveys are asked of online survey panellists. It shows how difficult it is to control for ‘panellist bias’ and produce unbiased population estimates, but also that, for some statistics, panel data can provide a surprisingly close match to the gold-standard surveys of government.