Aust, F., Diedenhofen, B., Ullrich, S., & Musch, J. (2013). Seriousness checks are useful to improve data validity in online research. Behavior Research Methods, 45(2), pp. 527-535.
Bansal, H. S., Eldridge, J., Halder, A., Knowles, R., Murray, M., Sehmer, L., & Turner, D. (2017). Shorter interviews, longer surveys: Optimising the survey participant experience while accommodating ever expanding client demands. International Journal of Market Research, 59(2), pp. 221-238.
Barnette, J. J. (1999). Nonattending respondent effects on internal consistency of self-administered surveys: A Monte Carlo simulation study. Educational and Psychological Measurement, 59(1), pp. 38-46.
Bollen, K. A., & Arminger, G. (1991). Observational residuals in factor analysis and structural equation models. Sociological Methodology, 21, pp. 235-262.
Casler, K., Bickel, L., & Hackett, E. (2013). Separate but equal? A comparison of participants and data gathered via Amazon’s MTurk, social media, and face-to-face behavioral testing. Computers in Human Behavior, 29(6), pp. 2156-2160.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). New York, NY: Wiley.
Dodou, D., & de Winter, J. C. (2014). Social desirability is the same in offline, online, and paper surveys: A meta-analysis. Computers in Human Behavior, 36, pp. 487-495.
Dolnicar, S., Grün, B., & Yanamandram, V. (2013). Dynamic, interactive survey questions can increase survey data quality. Journal of Travel & Tourism Marketing, 30(7), pp. 690-699.
Fleischer, A., Mead, A. D., & Huang, J. (2015). Inattentive responding in MTurk and other online samples. Industrial and Organizational Psychology, 8(2), pp. 196-202.
Guin, T. D. L., Baker, R., Mechling, J., & Ruyle, E. (2012). Myths and realities of respondent engagement in online surveys. International Journal of Market Research, 54(5), pp. 613-633.
Goodman, J. K., Cryder, C. E., & Cheema, A. (2013). Data collection in a flat world: The strengths and weaknesses of Mechanical Turk samples. Journal of Behavioral Decision Making, 26(3), pp. 213-224. doi:10.1002/bdm.1753
Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation: Description and an illustration. The Public Opinion Quarterly, 64(3), pp. 299-308.
Hyman, M. R., & Sierra, J. J. (2012). Adjusting self-reported attitudinal data for mischievous respondents. International Journal of Market Research, 54(1), pp. 129-145.
Kostyk, A., Zhou, W., & Hyman, M. R. (2019). Using surveytainment to counter declining survey data quality. Journal of Business Research, 95, pp. 211-219.
Kropf, M. E., & Blair, J. (2005). Eliciting survey cooperation: Incentives, self-interest, and norms of cooperation. Evaluation Review, 29(6), pp. 559-575.
Lind, J. C., & Zumbo, B. D. (1993). The continuity principle in psychological research: An introduction to robust statistics. Canadian Psychology, 34(4), pp. 407-414.
Liu, Y., & Zumbo, B. D. (2007). The impact of outliers on Cronbach’s coefficient alpha estimate of reliability: Visual analogue scales. Educational and Psychological Measurement, 67(4), pp. 620-634.
Paas, L. J., Dolnicar, S., & Karlsson, L. (2018). Instructional manipulation checks: A longitudinal analysis with implications for MTurk. International Journal of Research in Marketing, 35(2), pp. 258-269.
Paas, L. J., & Morren, M. (2018). Please do not answer if you are reading this: Respondent attention in online panels. Marketing Letters, 29(1), pp. 13-21.
Payne, S. L. (1951). The art of asking questions. Princeton, NJ: Princeton University Press.
Peterson, R. A. (2001). On the use of college students in social science research: Insights from a second-order meta-analysis. Journal of Consumer Research, 28(3), pp. 450-461.
Peterson, G., Griffin, J., LaFrance, J., & Li, J. (2017). Smartphone participation in web surveys. In P. P. Biemer et al. (Eds.), Total survey error in practice (pp. 203-233). New York, NY: Wiley.
Sears, D. O. (1986). College sophomores in the laboratory: Influences of a narrow database on social psychology’s view of human nature. Journal of Personality and Social Psychology, 51(3), pp. 515-530.
Singer, E., & Couper, M. P. (2008). Do incentives exert undue influence on survey participation? Experimental evidence. Journal of Empirical Research on Human Research Ethics, 3(3), pp. 49-56.
Smith, S. M., Roster, C. A., Golden, L. L., & Albaum, G. S. (2016). A multi-group analysis of online survey respondent data quality: Comparing a regular USA consumer panel to MTurk samples. Journal of Business Research, 69(8), pp. 3139-3148.
Van Herk, H., Poortinga, Y. H., & Verhallen, T. M. (2004). Response styles in rating scales: Evidence of method bias in data from six EU countries. Journal of Cross-Cultural Psychology, 35(3), pp. 346-360.
Zhang, C., & Conrad, F. (2014). Speeding in web surveys: The tendency to answer very fast and its association with straightlining. Survey Research Methods, 8(2), pp. 127-135.