Particularly at this time in the history of “mischievous responders” in LGBTQ (lesbian, gay, bisexual, transgender, queer or questioning) youth research, a registered report seemed like a great fit. The idea of mischievous responders has now percolated across the field, as well as into other minority and youth research, but some researchers understandably remain skeptical about how much to make of the possibility of mischievous responders producing biased estimates. Thus, a replication with a widely available data set, based on a preregistration written before the data became available, seemed like the perfect opportunity to gain more traction on this topic and to do so in a way that could test whether the patterns are robust or an artifact of some idiosyncratic combination of screener items chosen post hoc during data analysis.
All that is to say that preregistration and registered reports can be appropriate and beneficial in many contexts, and this was a situation that seemed particularly well suited to these features of open science. Moreover, all of the work we have done on mischievous responders has centered on transparency in the assumptions regarding data validity, and this special collection presented a welcome opportunity to make clearer the connections between transparency/open science and mischievous responders.
Registered reports may also be one of the best things for research where mischievous responders are likely present in the data, because of what removing potentially mischievous responders can do to the results: decisions about which respondents to screen out can meaningfully change the estimates, which invites post hoc, results-driven analytic choices.
With registered reports, however, researchers could specify in advance how they would identify and remove suspected mischievous responders, and the research design could be accepted for in-principle publication regardless of whether the removal of suspected mischievous responders ultimately moved the estimates in one direction or the other.
As for our specific case, conducting the research was fairly straightforward after the in-principle acceptance. The one complication we encountered came when we were conducting our “Study 3” (comparison of methods). We had anticipated using data from all sites in the Youth Risk Behavior Survey (YRBS), but the YRBS is actually a collection of different surveys gathered across sites that are given flexibility in selecting the items they include in their own survey; that is, some sites may ask all the screener questions and others may ask only a small subset of them. When all the screener questions are asked, we have more information with which to identify potentially mischievous responders, but the signal is weakened when a site asks only a couple of the questions in our screener. Because our Study 3 was designed as a methodological study, we thought it would be best to retain only the sites that asked all the screener questions, so that we could compare the methods without introducing another variable (i.e., variation across sites in the items asked). Then, for consistency, we used this smaller (but fuller-information) sample for the entire main analysis. (We also ran everything again with the complete sample and put that in the online appendix, so that people can see the results with all the data, since we did not think about this issue in advance. The results are similar.) That was the only really interesting challenge as we conducted the research; everything else went as anticipated.
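To make those two decisions concrete, here is a minimal sketch in Python/pandas. It is an illustration, not our actual analysis pipeline: the column names (`site`, `screen_1` through `screen_5`) and the flagging threshold are hypothetical. The sketch first restricts the sample to sites that administered every screener item, then counts extreme screener responses per respondent.

```python
import pandas as pd

# Hypothetical screener columns: 1 = low-frequency/extreme response, 0 = not,
# NaN = item not administered at that respondent's site.
SCREENER_ITEMS = [f"screen_{i}" for i in range(1, 6)]

def restrict_to_full_screener_sites(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only sites that administered every screener item, so the
    mischievous-responder signal is comparable across sites."""
    asked_all = df.groupby("site")[SCREENER_ITEMS].apply(
        lambda g: g.notna().any().all()  # every item answered by someone at the site
    )
    full_sites = asked_all[asked_all].index
    return df[df["site"].isin(full_sites)].copy()

def flag_potentially_mischievous(df: pd.DataFrame, threshold: int = 2) -> pd.Series:
    """Flag respondents whose count of extreme screener responses meets an
    illustrative threshold; the more items administered, the stronger the signal."""
    return df[SCREENER_ITEMS].fillna(0).sum(axis=1) >= threshold

# Illustrative usage with a toy data frame:
raw = pd.DataFrame({
    "site": ["A", "A", "B", "B"],
    "screen_1": [0, 1, 0, None],
    "screen_2": [1, 1, None, None],  # site B never asked screen_2
    "screen_3": [0, 0, 0, 0],
    "screen_4": [0, 1, 0, 0],
    "screen_5": [0, 1, 0, 0],
})
analytic = restrict_to_full_screener_sites(raw)  # keeps site A only
analytic["suspect"] = flag_potentially_mischievous(analytic)
```

The design choice the sketch illustrates is the one described above: dropping partial-screener sites trades sample size for a screener signal that is measured the same way everywhere, so that method comparisons are not confounded with cross-site variation in which items were asked.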
Authors
JOSEPH R. CIMPIAN is an associate professor of economics and education policy at the New York University Steinhardt School of Culture, Education, and Human Development, Kimball Hall, 2nd floor, New York, NY 10003.
JENNIFER D. TIMMER is an Institute of Education Sciences postdoctoral research fellow in the Department of Leadership, Policy, and Organizations at Vanderbilt University, Peabody College, PMB 414, 230 Appleton Place, Nashville, TN 37203.
