
On February 14, 2018, Daniel Simons, the founding editor of Advances in Methods and Practices in Psychological Science (AMPPS), introduced the journal—the history of its emergence, its scope, and its vision—in an editorial in which he wrote:

In launching AMPPS, APS [Association for Psychological Science] hopes to reach a broad audience, consolidating in a single outlet a range of novel approaches to experimentation (e.g., the Registered Replication Reports), articles on metascience and best practices, and tutorials on research methods and practices. As do all the APS journals, AMPPS emphasizes both innovation and accessible communication, and has a mandate to help researchers from across psychological science to improve the quality of their research and the rigor of our discipline. (Simons, 2018, p. 3)
More than 7 years later, my time as the second editor of AMPPS is coming to a close, and I am here to report on our many successes. We are achieving the vision outlined in this opening editorial, and we are reaching the end of the beginning: After nearly a decade of high-quality, innovative, and exciting articles, AMPPS has established itself as an important outlet in the field. We have realized many of our initial, lofty goals while growing in unique and organic ways. In this editorial, I review some of these successes and then shine a light on the path forward. Where will the journal go, and what can it achieve in the next decade?
First, though, I would like to take a bit of a personal digression. Perhaps self-consciously, perhaps because none of us are immune to the relentlessness of imposter syndrome, I have often wondered why I was chosen to be editor of AMPPS. I think the easiest way for me to digest the fact that I was selected for this position is to view myself as a reflection of the AMPPS target readership. I am an applied researcher, a curious methodology seeker. I have always wanted to know about the best methodological practices and statistical approaches in the field, and there is no question in my mind that this motivation has its origin in my first weeks of graduate school, when I was introduced to Cohen’s (1994) seminal article, “The Earth Is Round (p < .05),” in the American Psychologist. In the piece, Cohen famously said that bad statistical practices “continue to continue” (p. 997). Cohen passed away more than a quarter century ago, but he might not be surprised to learn that even today, poor statistical practices still continue to continue. (Maybe we have turned the corner a little, but let me return to this assessment in a minute.) When I first read Cohen and other methodologists of the time, the world split in two. There was an old way and a new way, a less good way and a better way. There were pitfalls and promises, all of which are inherent in doing science.
As I progressed in my career, as much as I tried, I never fully left the old world behind to enter the new one. I have always existed, and for the most part still do, between wanting to practice scientific tradecraft at the highest level and ending up somewhere below my best intentions. If anything, perhaps it is this earnestness—and the desire to be better and to do better—that led me to the editorship of AMPPS. I believe I share this with our readers. We are hungry for the curtain to be pulled back on the methods. We want to see and study the code even if we cannot write it all ourselves; we want all the gory details and imperfections of a study out on the table, open and transparent for deep consideration. It is with and through this transparency that we step from the world of less good practices to the world of better ones.
Want to know exactly how to power your two-way interaction? We have multiple articles on that. How to interpret effect sizes? How to do attention checks? How to do research with the reluctant? We have articles on these and many other topics, all providing deep and thorough opportunities to engage with new ideas and best practices in the field. As comprehensive as our articles are, though, AMPPS also provides a philosophical invitation: join us in the world of better practice. The articles in AMPPS offer, month in and month out, the invitation to practice aligning your values, intentions, and actions to do less scientific harm and reach for the methodological ceiling. In January 2026, my term as editor will end, but the opportunity to grapple with the topics covered in AMPPS will continue for me and for all of our readers, of course.
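As a small taste of the kind of guidance these tutorials offer, here is a minimal simulation-based power sketch for a 2 × 2 between-subjects interaction, written in Python. The crossover effect size, cell size, and unit within-cell SD are illustrative assumptions of mine, not values drawn from any AMPPS article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2026)

def interaction_power(n_per_cell=50, d_interaction=0.4,
                      n_sims=5000, alpha=0.05):
    """Simulated power for the interaction in a 2 x 2 between-subjects
    design with equal cell sizes and unit within-cell SD.

    The four cell means encode a pure crossover interaction of
    `d_interaction` standard deviations; both main effects are zero.
    """
    h = d_interaction / 2.0
    cell_means = np.array([h, -h, -h, h])  # A1B1, A1B2, A2B1, A2B2
    df = 4 * (n_per_cell - 1)
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    hits = 0
    for _ in range(n_sims):
        cells = rng.normal(cell_means[:, None], 1.0, (4, n_per_cell))
        m = cells.mean(axis=1)
        s2 = cells.var(axis=1, ddof=1).mean()  # pooled within-cell variance
        # t test of the interaction contrast (1, -1, -1, 1) on cell means
        t = (m[0] - m[1] - m[2] + m[3]) / np.sqrt(s2 * 4 / n_per_cell)
        hits += abs(t) > t_crit
    return hits / n_sims

print(interaction_power())  # roughly .80 under these assumed settings
```

The appeal of the simulation approach is that the assumptions are laid bare in the code, and the sketch extends naturally to unequal cells, non-normal errors, or covariates in ways that closed-form formulas do not.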
What Have We Done?
In terms of articles, we have covered much new ground in the last 4 years. Citing specific examples of the excellent work in AMPPS feels a bit like playing favorites among children—we love them all equally—but touching on a few in particular does add texture to the whole. We have published articles on many metascientific topics, including a number of reports on changes in methodological practices since the early days of the credibility crisis (e.g., Bogdan, 2025); a comprehensive review of open science in the developing world (Chuan-Peng et al., 2025); an article on reporting preregistration deviations (Willroth & Atherton, 2024); articles that demarcate the landscape of questionable research practices (e.g., Nagy et al., 2025); a number of articles providing useful tools for power analyses and effect-size calculation (e.g., Lengersdorff & Lamm, 2025; Sommet et al., 2023), including for complex statistical models (Cole et al., 2025); an article on promoting findable, accessible, interoperable, and reusable (FAIR) data sets (van Ravenzwaaij et al., 2025); several articles on experience-sampling methods (e.g., Fritz et al., 2024; Stadel et al., 2024); and tutorials on large language models (LLMs; Brickman et al., 2025), machine learning (Pargent et al., 2023), and causal-inference methods (Lawes et al., 2025). We have also published articles on mixed-methods research (Syed & Westberg, 2025), open science for qualitative research (Campbell et al., 2023), and sampling strategies for working with underrepresented participants (Emery et al., 2023).
AMPPS also continues to be an outlet for Registered Replication Reports (RRRs) of content-oriented findings in psychological science. Our Registered Reports (RRs) are methodological and/or metascientific in scope, but we consider RRRs covering any topic in the field. (RRRs are a special form of RRs; for a terrific visualization of the conceptual space linking preregistration, RRs, and RRRs, see Chambers & Tzavella, 2022, Fig. 1b.) The RRR articles we publish usually focus on important findings in the field and involve “Many Labs”–style collaborations (see Klein et al., 2018). Most of the RRRs we have published were initiated by Dr. Simons’s team, a testament to the work involved—for the authors and for the journal—in bringing these large collaborations together. In the past year, we have published important articles on cognitive dissonance (Vaidis et al., 2024) and terror-management theory (Rife et al., 2025), and we are reviewing articles on construal-level theory, thought suppression, and stereotype threat. We have also taken on new topics, including articles on publication bias in clinical psychology (Schiekiera et al., 2025) and work in progress on musical practice and cognitive performance, and on harm and purity transgressions in an African sample. With all our articles, we offer the opportunity for commentaries, which operate as a form of postpublication peer review. Following the publication of the cognitive-dissonance RRR article, for example, we received a number of commentaries critical of the experimental manipulations (e.g., Harmon-Jones & Harmon-Jones, 2024; Lishner, 2024). Our commentaries and rejoinders from authors of the target articles expand the discourse around the RRRs.
The pursuit and publication of RRRs is a major contribution of AMPPS. In my opinion, no single article can answer whether an effect exists or does not exist, nor can RRRs prove or disprove the legitimacy of a research program or even a research tradition. Nevertheless, this work is critically important in defining boundary conditions and altering posterior probabilities on the likelihood of replicating work in a given area. In observing many RRR workflows and the final articles, often completed after years of research through large-scale collaborations, it occurs to me that the content in the discussion sections of the original empirical articles represents one of the most critical elements in the pursuit of scientific credibility. In most content-related articles from around the field, authors go well beyond their data in conveying a sense of generalizability. In this respect, RRRs often represent an important corrective and humbling reminder that our effects may not be nearly as strong and as far-reaching as we have led ourselves and others to believe.
Beyond our excellent articles, I believe our crowning achievement over the last 4 years rests in working with APS leadership to help form the Editorial Fellows Program (EFP). With editors from the other APS journals, we are in our second year of the EFP, and our primary goal is to diversify the ranks of journal editors through a mentored program that provides scaffolding and training on becoming an editor. If we take seriously the problem of a lack of representativeness on editorial boards (cf. Roberts et al., 2020), then the only real way to address it is through the pipeline of incoming editors, helping to train people from all backgrounds to step into editorial roles. Participating in the origins of this program was a major achievement, and AMPPS has benefited enormously from the direct contributions of four editorial fellows in the last 2 years.
We have also implemented a number of other procedural and policy changes. AMPPS is entirely online; we have no print issues, and we have organized a system of Collections for tagging individual articles and combining them in interesting ways. We encourage you to check out the Collections page and to explore different combinations. Whether you are interested in causal inference, simulation studies, replications, or other topics, there are Collections for many interests, and you can combine those tags—and many others—in the Collections interface. In addition, we added transparent peer reviews to all our articles, which allow scientists to see the underlying editorial discussions, feedback, and responses associated with our published articles. We also established a collaboration with the Peer Community In Registered Reports (PCI RR), becoming a “PCI RR-friendly” journal. This status means that we directly accept any Stage 2 RR from PCI RR that is within scope for AMPPS, without further peer review. We recently published our first article through this route, which was covered in the APS Observer, and we expect this to be a highly beneficial relationship over time. We have established a similar relationship with MetaResearch Open Review (MetaROR), a platform for open peer review of metascientific research.
What Can AMPPS Do Better?
I am sure there are a variety of ways AMPPS can and will improve in the coming years, and I have no doubt the next editorial team will drive the journal in all kinds of exciting directions. As I think back on our term, though, I see two areas I wish we could have improved more quickly; both will need deeper consideration in the coming years. As noted, our Commentary-style articles are designed as a form of postpublication peer review in which we invite authors to comment on articles that have appeared in AMPPS. Following acceptance of our RRRs, we often make a call for Commentary articles and then let the RRR authors contribute a rejoinder (i.e., a reply to the commentaries). Because commentaries often refute, debate, or call into question the findings or conclusions of a target article, peer review of these contributions is often quite difficult. Our general policy is to advance a Commentary article if it is civil in tone and offers specific comments on an article published in AMPPS. (You may find it surprising that some commentary submissions are not civil in tone. Honestly, this is one of the most challenging parts of working on these types of articles; the tone of many first-iteration Commentary articles is quite testy.) In addition, what happens when a Commentary is submitted after the authors of a target article have already replied with a rejoinder to another set of commentaries? I would love for AMPPS to move toward an online discussion forum that could follow each article—where anyone could comment, authors could respond, and the conversation could continue. The journal could moderate the discussion for tone, akin to a Reddit forum or a comparable website. Commentaries are quite useful and fun to read, but they take a lot of the journal’s energy, especially when we set up an entire target article–commentary–rejoinder series. There are probably more innovative ideas for making this work well, but it is a problem in need of a (better) solution.
Another issue for future consideration is our Many Labs–style RRRs. (We require that at least three independent sites be involved in any RRR effort that we pursue.) How should we decide which RRRs to pursue? When AMPPS was one of the only outlets in the field championing RRRs, it was reasonable to pursue content-focused replications of major findings, and we should certainly continue to do so. But what constitutes a major finding in the field? Most RRRs are massive undertakings for both the authors and the journal. How do we choose our content? I am not sure I know the best way, but it seems we should develop a better framework for doing so.
It’s an LLM World, and We’re Just Living in It
At this point, I estimate that every third or so article submitted to AMPPS represents a cutting-edge application of LLMs or the generative artificial intelligence (AI) built on top of them. Many of these articles focus on advances in natural language processing (NLP), and in time, I suspect that many of our articles will include LLM-, NLP-, or AI-related research. One of the chief lessons we have learned from handling these early submissions is the need to carefully assess whether unique and innovative applications of the latest technological breakthroughs are methodologically sound. Presently, a large proportion of the submissions we receive involving these tools are variants of a “look at the cool thing this tool can do” manuscript. Granted, the work is often innovative, amazingly so. But is it psychometrically sound? Psychometric evaluation of all research tools remains a critical priority for methodological research, regardless of whether we are discussing the latest machine-learning methods or AI-based personae research (in which AI agents take on the role of human participants—yes, this is happening, and yes, this will be a common mode of research within 5 years; see Lin, 2025). In short, all that glitters is not gold, and subjecting novel developments to rigorous psychometric testing is critical for examining boundary conditions and identifying where and when a method may falter (see Iverson et al., 2009). This position may sound too conservative for a journal seeking to push the applied methodological envelope, but advances in these areas need to run up against the methodological firewalls that have safeguarded psychological science for more than a century. We need to ensure that the methods and tools we disseminate remain reliable and valid, and we have 100+ years of measurement research that can underpin evaluations of these advances. The AI revolution has arrived in psychological science, but as we launch into the stratosphere of new possibilities, the only way forward is to keep our methodological feet firmly on the ground.
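To make the psychometric point concrete, here is a minimal, hypothetical sketch of two such checks for an imagined LLM-based sentiment annotator: internal consistency across repeated runs, and convergence with human codes. The data are simulated and the setup is an assumption of mine, not an established evaluation protocol or any method published in AMPPS.

```python
import numpy as np

rng = np.random.default_rng(7)

def cronbach_alpha(scores):
    """Cronbach's alpha, treating each column as one 'rater'
    (here, one LLM run) and each row as one text."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical data: sentiment scores for 200 texts from five
# independent runs of the same (imagined) LLM annotator.
true_sentiment = rng.normal(0.0, 1.0, 200)
llm_runs = true_sentiment[:, None] + rng.normal(0.0, 0.5, (200, 5))

# Reliability: is the tool consistent with itself across runs?
print(f"alpha across runs: {cronbach_alpha(llm_runs):.2f}")

# Convergent validity: do the scores track (simulated) human codes?
human_codes = true_sentiment + rng.normal(0.0, 0.5, 200)
r = np.corrcoef(llm_runs.mean(axis=1), human_codes)[0, 1]
print(f"correlation with human codes: {r:.2f}")
```

A real evaluation would go much further, varying prompts, texts, and populations to probe the boundary conditions described above, but even this bare-bones check separates a tool that behaves like a measure from one that merely looks impressive.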
A Note of Thanks
I am not capable of expressing my full gratitude to the team of editors at AMPPS. Miraculously, we have gone start to finish without losing a single member of our team, including APS President-Elect Pamela Davis-Kean (Deputy Editor) and our four Associate Editors: Katie Corker, Jessica Flake, Rogier Kievit, and Yasemin Kisbu. I have long maintained that I love the scientific life because of the people, and this has proven true time and again with this team at AMPPS. I have learned a tremendous amount from all of our editors, and we have worked together better than I could have imagined to bring you the best possible content. I am in awe of their generosity, commitment, knowledge, and skill. I also wish to thank our four editorial fellows, Kongmeng Liew, Aishwarya Rejesh, Karoline Huth, and Patrick Manapat, all of whom jumped right in and joined us in trying to find and facilitate the best content. A note of gratitude, too, to the APS staff for their tremendous support, especially Becca White, Janine Chiappa McKenna, and (former APS director of publications) Amy Drew. Without the tireless work of our founding Editor in Chief, Daniel Simons, none of what we have today would be possible. Dan helped jump-start my editorship by handing off a ship that was sailing quite well. Finally, I wish to thank senior leadership at APS, including Robert Gropp and Aime Ballard-Wood, for supporting me in this position and, more importantly, for championing AMPPS, the growth of all the APS journals, and psychological science more generally.
