Abbey, J. D., & Meloy, M. G. (2017). Attention by design: Using attention checks to detect inattentive respondents and improve data quality. Journal of Operations Management, 53–56(1), 63–70. doi: 10.1016/j.jom.2017.06.001.
Aust, F., Diedenhofen, B., Ullrich, S., & Musch, J. (2013). Seriousness checks are useful to improve data validity in online research. Behavior Research Methods, 45(2), 527–535.
Bansal, H. S., Eldridge, J., Halder, A., Knowles, R., Murray, M., Sehmer, L., & Turner, D. (2017). Shorter interviews, longer surveys: Optimising the survey participant experience while accommodating ever expanding client demands. International Journal of Market Research, 59(2), 221–238. doi: 10.1177/1470785319870622a.
Barnette, J. J. (1999). Nonattending respondent effects on internal consistency of self-administered surveys: A Monte Carlo simulation study. Educational and Psychological Measurement, 59(1), 38–46. doi: 10.1177/0013164499591003.
Bollen, K. A., & Arminger, G. (1991). Observational residuals in factor analysis and structural equation models. Sociological Methodology, 21, 235–262.
Brosnan, K., Babakhani, N., & Dolnicar, S. (2019). “I know what you’re going to ask me”: Why respondents don’t read survey questions. International Journal of Market Research, 61(4), 366–379. doi: 10.1177/1470785318821025.
Casler, K., Bickel, L., & Hackett, E. (2013). Separate but equal? A comparison of participants and data gathered via Amazon’s MTurk, social media, and face-to-face behavioral testing. Computers in Human Behavior, 29(6), 2156–2160. doi: 10.1016/j.chb.2013.05.009.
Couper, M. P. (2013). Is the sky falling? New technology, changing media, and the future of surveys. Survey Research Methods, 7(3), 145–156. doi: 10.18148/srm/2013.v7i3.5751.
Dennis, S. A., Goodson, B. M., & Pearson, C. A. (2020). Online worker fraud and evolving threats to the integrity of MTurk data: A discussion of virtual private servers and the limitations of IP-based screening procedures. Behavioral Research in Accounting, 32(1), 119–134. doi: 10.2308/bria-18-044.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Wiley.
Dodou, D., & de Winter, J. C. F. (2014). Social desirability is the same in offline, online, and paper surveys: A meta-analysis. Computers in Human Behavior, 36, 487–495. doi: 10.1016/j.chb.2014.04.005.
Dolnicar, S., Grün, B., & Yanamandram, V. (2013). Dynamic, interactive survey questions can increase survey data quality. Journal of Travel & Tourism Marketing, 30(7), 690–699. doi: 10.1080/10548408.2013.827546.
Goodman, J. K., Cryder, C. E., & Cheema, A. (2013). Data collection in a flat world: The strengths and weaknesses of Mechanical Turk samples. Journal of Behavioral Decision Making, 26(3), 213–224. doi: 10.1002/bdm.1753.
Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation: Description and an illustration. The Public Opinion Quarterly, 64(3), 299–308. doi: 10.1086/317990.
Downes-Le Guin, T., Baker, R., Mechling, J., & Ruyle, E. (2012). Myths and realities of respondent engagement in online surveys. International Journal of Market Research, 54(5), 613–633. doi: 10.2501/IJMR-54-5-613-633.
Hyman, M. R., & Sierra, J. J. (2012). Adjusting self-reported attitudinal data for mischievous respondents. International Journal of Market Research, 54(1), 129–145. doi: 10.2501/IJMR-54-1-129-145.
Kostyk, A., Leonhardt, J. M., & Niculescu, M. (2021). Processing fluency scale development for consumer research. International Journal of Market Research, 63(3), 353–367.
Kostyk, A., Zhou, W., & Hyman, M. R. (2019). Using surveytainment to counter declining survey data quality. Journal of Business Research, 95, 211–219. doi: 10.1016/j.jbusres.2018.10.024.
Kropf, M. E., & Blair, J. (2005). Eliciting survey cooperation: Incentives, self-interest, and norms of cooperation. Evaluation Review, 29(6), 559–575.
Lind, J. C., & Zumbo, B. D. (1993). The continuity principle in psychological research: An introduction to robust statistics. Canadian Psychology, 34(4), 407–414. doi: 10.1037/h0078861.
Liu, Y., & Zumbo, B. D. (2007). The impact of outliers on Cronbach’s coefficient alpha estimate of reliability: Visual analogue scales. Educational and Psychological Measurement, 67(4), 620–634. doi: 10.1177/0013164406296976.
Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17(3), 437–455. doi: 10.1037/a0028085.
Miller, P. V. (2017). Is there a future for surveys? Public Opinion Quarterly, 81(S1), 205–212. doi: 10.1093/poq/nfx008.
Oppenheimer, D. M., Meyvis, T., & Davidenko, N. (2009). Instructional manipulation checks: Detecting satisficing to increase statistical power. Journal of Experimental Social Psychology, 45(4), 867–872. doi: 10.1016/j.jesp.2009.03.009.
Paas, L. J., Dolnicar, S., & Karlsson, L. (2018). Instructional manipulation checks: A longitudinal analysis with implications for MTurk. International Journal of Research in Marketing, 35(2), 258–269. doi: 10.1016/j.ijresmar.2018.01.003.
Paas, L. J., & Morren, M. (2018). Please do not answer if you are reading this: Respondent attention in online panels. Marketing Letters, 29(1), 13–21. doi: 10.1007/s11002-018-9448-7.
Payne, S. L. (1951). The art of asking questions. Princeton University Press.
Perkel, J. M. (2020). Mischief-making bots attacked my scientific survey. Nature, 579(7798), 461. doi: 10.1038/d41586-020-00768-0.
Peterson, R. A. (2001). On the use of college students in social science research: Insights from a second-order meta-analysis. Journal of Consumer Research, 28(3), 450–461. doi: 10.1086/323732.
Peterson, G., Griffin, J., LaFrance, J., & Li, J. (2017). Smartphone participation in web surveys. In Biemer, P. P., et al. (Eds.), Total survey error in practice (pp. 203–233). Wiley.
Sears, D. O. (1986). College sophomores in the laboratory: Influences of a narrow database on social psychology’s view of human nature. Journal of Personality and Social Psychology, 51(3), 515–530. doi: 10.1037/0022-3514.51.3.515.
Singer, E., & Couper, M. P. (2008). Do incentives exert undue influence on survey participation? Experimental evidence. Journal of Empirical Research on Human Research Ethics, 3(3), 49–56. doi: 10.1525/jer.2008.3.3.49.
Smith, S. M., Roster, C. A., Golden, L. L., & Albaum, G. S. (2016). A multi-group analysis of online survey respondent data quality: Comparing a regular USA consumer panel to MTurk samples. Journal of Business Research, 69(8), 3139–3148. doi: 10.1016/j.jbusres.2015.12.002.
Sturgis, P., & Luff, R. (2020). The demise of the survey? A research note on trends in the use of survey data in the social sciences, 1939 to 2015. International Journal of Social Research Methodology, 1–6. doi: 10.1080/13645579.2020.1844896.
Zhang, C., & Conrad, F. (2014). Speeding in web surveys: The tendency to answer very fast and its association with straightlining. Survey Research Methods, 8(2), 127–135. doi: 10.18148/srm/2014.v8i2.5453.