This research analyzes the effectiveness of the list experiment and the crosswise model in measuring self-plagiarism and data manipulation. Both methods were implemented in a large-scale survey of academics on social norms and academic misconduct. Because the results lend little confidence in the effectiveness of either method, researchers are best advised to avoid them or, at the very least, to handle them with care.