Abstract
Psychological scientists are increasingly using preregistration as a tool to increase the credibility of research findings. Many of the benefits of preregistration rest on the assumption that preregistered plans are followed perfectly. However, research suggests that this is the exception rather than the norm, and there are many reasons why researchers may deviate from their preregistered plans. Preregistration can still be a valuable tool, even in the presence of deviations, as long as those deviations are well documented and transparently reported. Unfortunately, most preregistration deviations in psychology go unreported or are reported in unsystematic ways. In the current article, we offer a solution to this problem by providing a framework for transparent and standardized reporting of preregistration deviations, which was developed by drawing on our own experiences with preregistration, existing unpublished templates, feedback from colleagues and reviewers, and the results of a survey of 34 psychology-journal editors. This framework provides a clear template for what to do when things do not go as planned. We conclude by encouraging researchers to adopt this framework in their own preregistered research and by suggesting that journals implement structural policies around the transparent reporting of preregistration deviations.
Over the past decade, new research practices have been proposed to increase the replicability and credibility of psychological research (Vazire, 2018), including preregistration (Moore, 2016; Simmons et al., 2011; van’t Veer & Giner-Sorolla, 2016). Preregistration involves specifying research questions, hypotheses, methods, analytic approaches, and/or inferential criteria before collecting or analyzing data (Nosek et al., 2018; van’t Veer & Giner-Sorolla, 2016; Weston et al., 2019). Before 2011, preregistration was extremely rare in psychological science (Simmons et al., 2021), and although preregistration is far from common practice in the field (Hardwicke et al., 2022), usage is increasing (Nosek & Lindsay, 2018). When preregistered plans are followed perfectly, preregistration offers several benefits that decrease risk of bias and the likelihood of false-positive results and increase the credibility of the scientific literature (Nosek et al., 2018). However, preregistrations are rarely followed perfectly (Claesen et al., 2021; Heirene et al., 2021; Ofosu & Posner, 2021), and there are many legitimate reasons that a researcher may deviate from their preregistered plans. We define a deviation as any discrepancy between what the authors said they would do in the preregistration and what the authors actually did in the final article. Although preregistration allows readers and reviewers to distinguish between what was planned and unplanned, this benefit can be realized only if deviations are transparently reported.
In this article, we begin by explaining why a lack of transparency in reporting preregistration deviations is a problem and propose a standardized template to report deviations as one potential solution to this problem. Then, we describe our process for developing a standardized preregistration-deviations template. As part of this process, we conducted a survey of 34 psychology-journal editors to understand current editorial practices and their perceptions of preregistration deviations and to gather feedback on an initial draft of our template. After reporting the results of this survey, we present our template and make specific recommendations for how researchers and journals can adopt this framework to systematize the reporting of preregistration deviations. Finally, we conclude by addressing other questions that readers might have about preregistration deviations, such as how to prevent deviations, when to deviate, when it may be better to update an existing preregistration or create a new preregistration, and what to do when there are numerous deviations. The ultimate goal of the framework presented in this article is to help researchers fine-tune their preregistration skills to reduce deviations while also providing a simple and transparent way to report deviations when they do occur.
The Problem and Opportunity
Despite most scientists’ best efforts and intentions, it is sometimes necessary or preferable to deviate from preregistered plans, whether because of innovations, data-collection errors, reviewer requests, or the need to correct meaningful typos. We are not the first to document this observation or the first to call for preregistration deviations to be transparently reported (e.g., Brodeur et al., 2022; Campbell et al., 2023; Claesen et al., 2021; Nosek et al., 2018, 2019; Simmons et al., 2021). However, there is currently no standardized way to report deviations from preregistrations or to evaluate the impact of those deviations on study results. Researchers tend to adopt different methods for reporting deviations, which we believe hinders transparency because there is no systematic way for reviewers, editors, and/or readers to identify and evaluate deviations. To illustrate, we share some approaches that we have used in our own research, such as uploading separate preregistration-deviation lists to OSF (e.g., Atherton et al., 2022, 2023), weaving deviations throughout the article (e.g., Willroth et al., 2021), creating tables that describe deviations by research question (e.g., Willroth et al., 2020; Willroth, Hill, et al., 2023), and reporting deviations in footnotes (e.g., Willroth, Luo, et al., 2023). In addition, many researchers do not report deviations at all (e.g., see Brodeur et al., 2022; Claesen et al., 2021). In fact, of 27 preregistered studies published in
The lack of standardized reporting of preregistration deviations is a problem for authors, reviewers, editors, and scientific progress. Some of the main reasons authors report being reluctant to adopt preregistration include beliefs that deviations are simply not allowed, misperceptions that deviating from the preregistration defeats its purpose, or a lack of resources and clear guidelines to support preregistration (Simmons et al., 2021; Washburn et al., 2018). Likewise, many authors may not know how to weigh the costs and benefits of deviating or how to communicate those costs and benefits to others. We therefore argue that a standardized approach would make open-science practices easier to adopt for preregistration veterans and newcomers alike because it would normalize the occurrence and reporting of deviations and make those deviations transparent for readers.
The lack of standardized reporting of preregistration deviations also poses notable problems for peer reviewers and editors who are already overburdened by the system. Presently, it is up to reviewers and editors to locate and review any preregistration(s) on their own and then decipher whether and in what ways the manuscript differs from the preregistration. This is inefficient and costly for reviewers and editors, who often do not have hours to spend identifying the differences across multiple documents. Standardized reporting of preregistration deviations would help reviewers and editors evaluate work more easily. Finally, a lack of standardized reporting is a problem for scientific progress in the field. Preregistrations are more valuable when they are specified in detail and followed perfectly and/or when deviations are transparently reported. If readers are not able to easily discern adherence or nonadherence to the preregistration, this system may erode trust among members of the field and undermine the credibility of psychological science. In sum, developing a tool for standardized reporting of preregistration deviations has the potential to make open science easier to adopt, implement, and evaluate.
Developing a Tool for Standardized Reporting of Preregistration Deviations
The motivation for developing a standardized mechanism of reporting preregistration deviations stemmed from our personal experiences with preregistration. We found that it was difficult to identify the most transparent way to report deviations in our own work and to identify what, where, and when preregistration deviations occurred in the articles for which we served as peer reviewers. Thus, to develop a tool for standardized reporting of preregistration deviations, we drew on our own experiences with preregistration and existing unpublished templates (Brodeur et al., 2022; Campbell et al., 2023; Claesen et al., 2021; Nosek et al., 2018, 2019; Simmons et al., 2021), which led us to create a “Preregistration Deviations Table” template. On the basis of reviewer feedback on this article, we surveyed editors of psychology journals to understand current editorial perceptions and practices concerning preregistration deviations and to gather feedback on an initial draft of our template (the initial draft is available on our OSF page at https://osf.io/yhzdc). We then refined our template based on reviewer, colleague, and participant feedback. In the following sections, we describe the method and results of our editorial survey and present the final template for reporting preregistration deviations.
Method
This study was approved by the Washington University in St. Louis Institutional Review Board (project title: “Preregistration Survey”; IRB 20230922). We conducted a preregistered survey of head editors and action editors from 16 psychology journals. We report how we determined our sample size, all data exclusions, and all measures in the study. The preregistration, survey items, data, and analytic code are available on OSF (https://osf.io/et6km/). We have no deviations from the preregistration to report. The survey asked editors about their perceptions and editorial practices concerning preregistration deviations (including the prevalence of deviations, formal evaluation mechanisms, and the justifiability of deviations), as well as about their own experiences with preregistration and their demographic characteristics. The items and corresponding response options are described before each set of results in the Results section below.
Our primary sampling goals were to (a) obtain data from editors and associate editors at general psychology journals and subdiscipline-specific journals and (b) obtain data from journals that vary in the extent to which they adopt Transparency and Openness Promotion (TOP) guidelines/preregistration. To achieve these goals, we identified six primary subfields of psychology (cognitive psychology and neuroscience, clinical, developmental, health, industrial-organizational, and social-personality). For general psychology, we chose two journals that were associated with the Association for Psychological Science and two journals that have high TOP scores based on the following website: https://topfactor.org/journals?disciplines=Psychology. This led us to target editors and associate editors at
We recruited participants via direct emails with requests for survey participation. In total, we emailed 170 potential participants. The survey was completed in Qualtrics. Participation was completely voluntary, and no compensation was provided. We preregistered that we would keep data collection open for 1 week after the initial emails were sent, and if the total
Results
Participants
The sample included 32% cognitive psychologists/neuroscientists, 18% clinical psychologists, 15% developmental psychologists, 6% health psychologists, 3% industrial-organizational psychologists, 41% social-personality psychologists, 9% who identified with a different psychological subdiscipline such as methodology/metapsychology, and 4% who did not report their subdiscipline. Participants were allowed to select multiple subfields. Three percent of participants were postdoctoral researchers, 3% were assistant professors, 26% were associate professors, 53% were full professors, 3% held a position other than those listed here, and 12% did not report their academic position. Six percent of participants had been a journal editor for less than 1 year, 21% had been a journal editor for 1 to 2 years, 35% had been a journal editor for 2 to 5 years, 18% had been a journal editor for 5 to 10 years, 9% had been a journal editor for 10+ years, and 12% did not report the length of their editorship. In their role as editor, participants reported that, on average, 44% (
Editorial perceptions
On average, editors estimated that 82% (
We instructed editors to “imagine that the authors of a preregistered study deviated from their preregistered plan in one or more ways. The authors report the deviation(s) along with an explanation for why they deviated in the manuscript.” On a scale from 1 (
Next, we asked editors to consider the same scenario except the authors did not report the deviation(s) and that instead, the deviation(s) was identified during the peer-review process. On a scale from 1 (
We then asked participants to rank-order several factors that may affect their editorial decision on a manuscript that contains preregistration deviations. For a plot of the mean ranked values, see Figure 1. Editors were also asked to rate the extent to which several specific types of deviations and reasons for deviating were justifiable on a scale from 1 (

Mean rankings of the extent to which each factor would affect editorial decisions on a manuscript containing preregistration deviations. Error bars depict 1

Mean justifiability ratings for different types of preregistration deviations. Error bars depict 1

Mean justifiability ratings for different reasons for preregistration deviations. Error bars depict 1
Editorial practices
In their role as editors, participants indicated that they personally evaluated the extent to which authors followed their preregistered plan 65% of the time, on average (
Next, we asked editors whether their journal had any formal mechanisms for evaluating the extent to which authors followed their preregistered plan. Twenty-four percent reported that there are instructions on how to report deviations in the author guidelines. Three percent reported that authors are prompted to report preregistration deviations in the submission portal. Nine percent reported that reviewers are instructed to check for preregistration deviations. In addition, 6% of participants reported other formal mechanisms, such as including a statistical reproducibility check for undisclosed deviations as a norm during the initial manuscript-triage process. Thirty-two percent reported that they did not know if there were any formal mechanisms at their journal, and 35% reported that there are no formal mechanisms at their journal.
At the end of the survey, we shared an initial draft of our preregistration-deviations template with participants and asked for their feedback. All anonymous open-ended responses containing feedback are included on the OSF project page. In response to participant feedback, we modified the original template in the following ways: (a) added a section for authors to describe “unregistered steps” to outline aspects of the preregistration that were ambiguous and necessitated a decision that was not specified in advance; (b) added a sentence to the table note indicating that if more than one type or reason applies to the same deviation, then authors can replace the drop-down menu with a narrative account; and (c) revised some minor wording choices (e.g., difference between typo vs. oversight vs. error). Finally, we asked whether they would support a policy at their journal that would require authors to transparently report any deviations from their preregistered plans using this template. On a scale from 1 (
A Template for Reporting Preregistration Deviations
To ensure that deviations are reported in a transparent and standardized way, we created a Preregistration Deviations Table that authors can complete and include in their manuscripts. For a completed example based on deviations from our own preregistrations, see Table 1. A blank version of the table is available in Table S1 in the Supplemental Material available online and as a downloadable Google doc at https://docs.google.com/document/d/1m7k53z38w18AJe56ucftunnHuFM7wDlMFjpoGenwN6k/edit?usp=sharing. The table template specifies each deviation in a separate row, providing a straightforward way for reviewers, editors, and readers to quantify the number of deviations. Then, authors select from drop-down menus to aid readers in understanding how the deviation could be characterized in terms of type, reason, and timing. Finally, there are columns for authors to include the original text of the preregistration, a description of the deviation, and a reader-impact statement. In addition to deviations, we also included a place in the table to report unregistered steps (or unspecified decision points) that may result from a vague or underspecified preregistration.
Example Preregistration Deviations Table Template
Note: Choose one characteristic for each drop-down menu. If more than one type or reason applies to the same deviation, then replace the drop-down menu with details. Provide one deviation per row and add or delete rows as needed.
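To make the structure concrete, a filled-in row might look like the following sketch. The study details shown are hypothetical illustrations invented for this example, not entries from Table 1:

```
Deviation 1
  Type of deviation:     Analytic approach                      [drop-down]
  Reason for deviation:  Characteristics of the data            [drop-down]
  Timing of deviation:   After data were accessed but before
                         results of the preregistered approach
                         were known                             [drop-down]
  Original preregistration text:
                         "We will fit a latent growth curve
                         model to the three waves of data."
  Description of deviation:
                         The latent growth curve model did not
                         converge, so a multilevel growth model
                         was fit instead.
  Reader impact:         The change was data-dependent but was
                         made before the preregistered results
                         were known; substantive conclusions
                         were unchanged in sensitivity analyses.

Unregistered steps
  Step 1: [Describe any decision point the preregistration left
          unspecified and how it was resolved, e.g., how missing
          item-level data were handled when computing sum scores.]
```

Each deviation occupies its own numbered entry, and unregistered steps are listed separately at the end, mirroring the downloadable template.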
To ensure that all deviations are identified and reported, researchers should document deviations throughout the research process and carefully review their preregistration at the end of the research process. Researchers should disclose all deviations and unregistered steps in the Preregistration Deviations Table so that reviewers and readers do not need to compare documents or search for this information. When page and table limits permit, we encourage researchers to include the Preregistration Deviations Table in the main body of their manuscript or appendix. When it is not possible to include the Preregistration Deviations Table in the main manuscript (e.g., because of journal restrictions), researchers should provide the table in supplemental online material and reference it in the main text of the manuscript with a summary of deviations.
In addition to transparently reporting preregistration deviations, whenever possible, researchers should also report the results of the preregistered analyses alongside the results of the deviated analyses. In some cases, it may be appropriate to include these results in the supplemental online material and to summarize them in text. This will help researchers and readers evaluate how the deviations may have affected conclusions. In the section that follows, we describe each of these aspects of reporting in more detail and provide examples.
Type of deviation
There are a number of different types of deviations that could occur throughout the research process, including changes to the study design, inclusion/exclusion criteria, research questions, hypotheses, sample size, data preparation, variable operationalizations or computations, analytic approach, covariates, or inferential criteria. Table 2 provides examples of each of these types of deviations.
Types of Deviations
Reason for deviation
Deviations from preregistrations can occur for many reasons: typos or oversights in the preregistration; data-collection or documentation errors; a preregistered plan that was not possible or was inappropriate given characteristics of the data; new knowledge gained by the researchers or the field; suggestions from peer reviewers or editors; or miscommunications among the coauthor team at the time of preregistration. Table 3 provides examples of each of these reasons for deviations.
Reasons for Deviations
Timing of deviation
The timing of when the deviation occurred can affect the diagnosticity and credibility of the presented results (e.g., Hardwicke & Wagenmakers, 2023; Nosek et al., 2018). Deviations can occur at any point in the research process, including before data were collected, during data collection, after data collection but before the data were accessed, after the data were accessed but before the results of the preregistered approach were known, and after the results of the preregistered approach were known. It is important to transparently report the timing of when the deviation was recognized or occurred because data-dependent deviations increase risk of bias (Hardwicke & Wagenmakers, 2023).
Reader impact
The extent to which deviations from the preregistration affected study results (e.g., magnitude and direction of effect sizes) and whether and how they may have affected the diagnosticity of study results (e.g., control over researcher degrees of freedom and false positives) may affect how readers interpret the findings of preregistered research that includes deviations. For example, when numerous substantive deviations occur, readers may choose to evaluate the evidence in a similar manner to how they would evaluate exploratory or unregistered research.
Unregistered steps
In addition to deviations, there may be instances in which the preregistration did not consider a particular step/decision point or lacked sufficient detail (e.g., indicated parent education level would be used but did not specify how mother and father education level would be combined). These unregistered steps should be reported in a separate section after deviations in the Preregistration Deviations Table.
Other Considerations
What is considered a deviation?
The line between what is and is not considered a deviation is debatable. Is a typo in the preregistration that is corrected in the final manuscript a deviation? Is a lack of specificity in the preregistration that requires the researcher to make an unregistered analytic decision a deviation? As Figures 2 and 3 show, even psychology-journal editors have different opinions about what counts as a deviation. Given the inherent subjectivity of this determination, we recommend that authors err on the side of caution and transparently report anything that could be perceived as a deviation from the original plan. For example, a typo may not be a deviation in the same way that a change to the study protocol or a variable operationalization is, but typos that change the meaning of a statement are still worth reporting so that readers can be assured that these oversights in the preregistration were identified by the study team. As others have discussed (e.g., Nosek et al., 2018), although a lack of specificity resulting in an unregistered analytic decision may not be a deviation per se, it should still be transparently reported. There are certainly instances in which researchers could not possibly anticipate all forking paths that could occur in the research process, and in these cases, we encourage researchers to make their preregistrations and backup plans as detailed as they possibly can to avoid unregistered steps. For example, if researchers did not specify how they were going to treat missing item-level data when computing a sum score, then this should be transparently reported as an unregistered step because it introduces researcher degrees of freedom that occurred during the analytic process and could have been specified in advance. A lack of specificity might also arise when a researcher does not create backup plans for the several problems that can occur when using structural equation modeling.
If variables are not distributed appropriately, latent variable models do not fit, or model assumptions are violated, a researcher may need to change course in a way that was not anticipated. Future research should investigate which aspects of preregistration researchers have the most trouble specifying in detail and should develop tools, beyond templates, that facilitate specificity in preregistrations. Accordingly, we find it important to highlight that some discrepancies are clear deviations (e.g., a changed analytic model), whereas others are unregistered steps (e.g., a lack of specificity in the preregistration), but both are important to report in a standardized way so that readers can evaluate them. In addition to deviations and unregistered steps, researchers may choose to include additional research questions or analyses beyond those that they preregistered. These should be clearly labeled as unregistered when they are first presented in text.
Preventing deviations and unregistered steps
Although deviations are sometimes necessary and unregistered steps may happen, it is important to acknowledge that there are several steps researchers can take to reduce the need for deviations and unregistered steps in the first place. First, a certain degree of prep work may be necessary to write a preregistration that is unlikely to be deviated from. For example, researchers should review the relevant literature and familiarize themselves with the details of their planned analytic approach before preregistering. Note that this prep work will save time during the analysis and writing phases, so it should not add to the total amount of time needed to complete a project. Second, preregistrations should be sufficiently detailed to reduce the likelihood that researchers will face analytic choices that were not addressed in the preregistration (Claesen et al., 2021). This can be accomplished by using detailed preregistration templates and by supplementing those templates with knowledge of one’s specific research area and typical research process. Third, researchers may consider documenting and registering standard operating procedures for their lab or for a given data set (Lin & Green, 2016; Nosek et al., 2018; Tackett et al., 2020). This can prevent unregistered steps that arise from failures to document analytic steps that may be considered routine to some researchers, such as treatment of duplicate respondents in survey research (e.g., Claesen et al., 2021). Fourth, researchers should consider potential forking paths in which analytic decisions depend on currently unknown features of the data and preregister decision trees that outline how those analytic decisions will be made (Nosek et al., 2018). For example, use of measures may depend on their reliabilities, or the selection of statistical tests may depend on whether model assumptions are met or violated. 
Fifth, researchers can consider drafting the analytic code in advance and including it as a supplemental file with the preregistration because doing so can make some unregistered steps of the preregistration more apparent (e.g., Graham et al., 2022; Willroth et al., 2022). In addition, when skill sets allow, it may be helpful to use the drafted analytic code to conduct simulations because simulations can aid in visualizing potential data patterns that affect preregistered plans. Finally, collaborators should read and provide feedback on preregistrations the same way that they would for a final manuscript. Because researchers differ in their familiarity with and use of preregistration, it may be beneficial for research teams to discuss the importance of preregistration and their plans for how to handle preregistration deviations before finalizing the time-stamped document.
In addition to considering strategies that researchers can engage in to prevent deviations and unregistered steps, it may also be useful to consider contextual factors that may affect the likelihood of deviations. For example, Claesen et al. (2021) identified a handful of studies in which preregistrations were followed exactly (Study 4, Hawkins et al., 2015; Pilot + Study 1, Pittarello et al., 2015). What makes these preregistrations different from preregistrations in which deviations occurred? The defining features of these examples may be that they were extensions of prior studies and/or that the investigations were fairly simple in study design and analytic approach (e.g., analysis of variance). This points to the possibility that study or analytic complexity may be one reason for preregistration deviations. Future research should empirically examine how studies with no deviations differ from studies with deviations and should develop solutions so that simple and complex projects alike can adhere to preregistrations with ease.
When should researchers deviate?
As can be seen from editors’ perceptions of the justifiability of different types of (and reasons for) deviations in Figures 2 and 3, the question of when to deviate is not straightforward to answer. In this case, it is useful to consider a spectrum of deviations, from strictly necessary deviations to completely arbitrary deviations, and the timing of the deviation. Risk of bias increases depending on when the deviation occurs (e.g., after the researcher has already seen the data; Hardwicke & Wagenmakers, 2023). At one extreme, some deviations may be necessary to carry out any analyses at all. For example, a researcher may plan to use a specific variable and later find out that the variable was not collected because of an error in data collection or data documentation. Other deviations may not be strictly necessary but may unequivocally improve the quality of the research. For example, researchers may learn that their planned analysis does not match their research question, or the data may violate important model assumptions. In both cases, deviations are likely to be justifiable and should be reported transparently. Moreover, depending on the type and timing of the deviation, there may be ways for researchers to safeguard against risk of bias from deviations. For example, researchers could consider having an independent statistician read their preregistration and provide feedback on the analytic plan to reduce risk of bias (e.g., Hardwicke & Wagenmakers, 2023). Next, one can consider deviations that are not strictly necessary and that do not unequivocally improve the quality of the research but may be preferred by some. For example, a peer reviewer may recommend that a researcher use the reviewer’s preferred statistical approach or variable operationalization. Often, this situation can be handled by following the preregistered plan in primary analyses and then including an additional supplemental or sensitivity analysis that follows the alternative plan. 
Of course, this may not be possible in all cases (e.g., a smaller sample size than expected). But regardless of whether the alternative choice is testable, it is necessary to provide a narrative account of the meaning of the deviation (and the overall risk of bias) in the reader-impact column of the Preregistration Deviations Table. Finally, some deviations are arbitrary, increase risk of bias, and are likely difficult to justify; an example would be changing the alpha level or the sidedness of a preregistered analysis (e.g., from a two-tailed to a one-tailed test). In instances such as this, deviations should be avoided. In sum, researchers can take several steps to reduce the occurrence of deviations, but deviations may still be necessary, and unregistered steps may need to be reported. When deviations are not strictly necessary, researchers should consider whether the deviation unequivocally improves the quality of the research. If either criterion is met, the deviation is likely justifiable and should be reported transparently. If neither criterion is met, it may be better to stick to the preregistered plan and include the alternative analysis as a supplemental or sensitivity analysis, if applicable. In all cases, unregistered steps, no matter how trivial they seem, should be reported, too.
When to deviate versus update versus create a new preregistration?
Some preregistration repositories provide the option to “update” a preregistration. For example, at the time of this writing, OSF allows researchers to create a time-stamped update to their preregistration, including a description of the change, the rationale for the change, and the impact of the change on the study. The OSF support page instructs researchers to use this feature for “events outside your control” and “unexpected anomalies.” This feature may also be useful to correct accidental omissions from the preregistration before data collection or data analysis begin. Once results are known, preregistrations should not be updated. Instead, researchers should report their deviations or unregistered steps using the Preregistration Deviations Table in the resulting article. This ensures that results-dependent changes to the preregistered plan are transparently documented in the final publication. In the event of major changes to a preregistration, researchers may also consider registering a supplemental registration or coregistration that outlines the planned deviations, including when during the research process these changes were implemented (Benning et al., 2019; Kirtley et al., 2021). Finally, consider a scenario in which a reviewer suggests that the researcher add another study to their manuscript. Because the researcher has not conducted the study yet, it would be appropriate to create a new preregistration for the new study and to document that the latter study was preregistered after results from the first study were known.
When are there too many deviations?
There is no clear-cut answer to this question, but a good rule of thumb is to overreport rather than underreport. The utility of a preregistration lies in its transparency for readers, and risk of bias arises when there are deviations from the preregistration (Hardwicke & Wagenmakers, 2023), particularly when deviations occur after looking at the data. We encourage researchers to use the Preregistration Deviations Table to report all deviations, no matter the number. We also encourage researchers to always disclose that there was a preregistration, even if they feel that there were too many deviations for the research to be considered preregistered. Although it might be misleading to apply for a preregistration badge when the number of deviations and data-dependent decisions renders risk of bias high, it would also be misleading not to report that there was a preregistration in the first place. At the end of the day, preregistration is a skill, and preregistrations are likely to be messy while that skill is being honed (Kirtley et al., 2021). We hope that researchers will also learn from filling out the Preregistration Deviations Table, recognizing where they need to be more specific in future preregistrations or how they can better anticipate deviations from their analytic plans. Preregistration is still a valuable tool for distinguishing between planned and unplanned steps in the research process, even when many deviations are made. As Nosek and colleagues (2019) put it: “Having some plans is better than having no plans, and sharing those plans in advance is better than not sharing them” (p. 817).
Concluding Remarks
To maximize the benefits of preregistration, researchers should take steps to follow their preregistered plan closely and to reduce the need to deviate from that plan. However, sometimes even the best-laid plans do not work out. Preregistration can still be a valuable tool for increasing the credibility of scientific findings so long as preregistration deviations are transparently reported. In the current article, we offer recommendations to help researchers determine when a deviation from preregistered plans is necessary or potentially justifiable and provide a framework to standardize the transparent reporting of preregistration deviations and unregistered steps. We encourage researchers to adopt this framework in their own preregistered research. To support transparent reporting of preregistration deviations, we also urge reviewers and editors not to penalize authors simply for reporting preregistration deviations. As the results of our editorial survey showed, editors already hold, on average, neutral to slightly positive perceptions of disclosed deviations, compared with markedly negative perceptions of undisclosed deviations. Likewise, among seven different factors, including the number of deviations and the extent to which the deviation affected substantive conclusions, editors ranked “the extent to which authors were transparent” as the top factor influencing their editorial decisions on average. Thus, reporting all deviations is beneficial to authors, editors, readers, and the field alike. Finally, we call on journals to implement structural-level policies that encourage transparent reporting of preregistration deviations. The adoption of this framework will provide researchers with a clear template for what to do when things do not go as planned, alleviate burden on reviewers and editors, and increase the transparency and credibility of preregistered research.
Supplemental Material
Supplemental material, sj-docx-1-amp-10.1177_25152459231213802, for Best Laid Plans: A Guide to Reporting Preregistration Deviations by Emily C. Willroth and Olivia E. Atherton in Advances in Methods and Practices in Psychological Science
Footnotes
Transparency
Both authors contributed equally and share first authorship. The order in which the authors are presented was decided by a coin flip.
