Abstract
This manuscript provides critical reflections on using reporting guidelines in qualitative research and examines the tensions that arise when universal checklists are applied across diverse methodological traditions. While transparency and rigor are essential, we argue that widely adopted tools such as COREQ-32 insufficiently capture the epistemological and procedural features of certain methodologies, particularly Constructivist Grounded Theory. Drawing on existing critiques and emerging methodology-specific frameworks, we contend that rigid, sometimes unvalidated criteria can impede methodological congruence and constrain the reporting of Constructivist Grounded Theory research. This reflection contributes to methodological scholarship by advocating for reporting guidance that aligns with the philosophical and methodological stance of each qualitative approach. We therefore call for nuanced, method-congruent standards that enhance transparency while preserving the richness, reflexivity, and flexibility that underpin excellence in qualitative inquiry.
Introduction
In recent years, as qualitative inquiry expands its reach and influence within health and nursing research, there has been a growing call for enhanced methodological rigor, transparency, and epistemic congruence in reporting qualitative studies. High-quality qualitative research requires methodological depth and reflexive engagement, as well as a systematic and coherent analytic process. These attributes should be clearly described in manuscripts to demonstrate the rigor through which findings emerged, thereby supporting credibility, transferability, and scholarly impact.
Transparent and rigorous reporting is a cornerstone of academic integrity, and publishing in journals that explicitly prioritize these attributes amplifies the contribution of qualitative work to nursing practice and theory development. Some journals resist the use of checklists for editorial or review purposes (Morse, 2021), whereas others strongly recommend adherence to standardized reporting guidelines.
Transparency Guidelines and Quality Criteria in Qualitative Inquiry
Reporting guidelines are viewed as offering a common understanding of rigor in qualitative research (Hannes et al., 2015). Adherence to established reporting guidelines, such as those promoted by the EQUATOR Network, is widely regarded as essential for ensuring transparency and enabling editors, reviewers, and readers to rigorously appraise qualitative findings. For example, the COnsolidated criteria for REporting Qualitative Research (COREQ) have been widely utilized to structure and report qualitative inquiries (Tong et al., 2007; Walsh et al., 2020). However, the COREQ-32 guidelines have been subject to substantial critique. Concerns have been raised regarding the replicability of the literature search used to develop these guidelines, as well as the lack of a clear rationale for selecting specific reporting criteria (Buus & Perron, 2020). Reliance on non-validated criteria therefore raises concerns about the consistency and overall quality of reporting in qualitative inquiry (Walsh et al., 2020), potentially leading to latent inconsistencies when using or promoting such guidance.
While tools like COREQ-32 advance procedural transparency, they risk privileging formal uniformity over epistemological and methodological reflexivity. Lincoln and Guba’s (1985) concept of “trustworthiness” established a foundational framework for evaluating qualitative inquiry grounded in the naturalistic paradigm. These criteria encompass credibility, dependability, confirmability, and transferability, which extend beyond procedural checklists, offering a holistic evaluative framework for qualitative inquiry. Their later addition of the criterion of “authenticity” (Lincoln & Guba, 1986) underscores the importance of representing multiple realities within research contexts, an aspect often underemphasized in standardized reporting guidelines. The concept of “trustworthiness” highlights the limitations of using a universal checklist without critical adaptation to specific methodologies. More specifically, quality criteria can vary according to the research methodology. For example, Grounded Theory has evolved into three main approaches reflecting epistemological and ontological differences (Bryant, 2017; Rieger, 2019; Singh & Estefan, 2018). Within this landscape, Constructivist Grounded Theory, as articulated by Charmaz (2014), recognizes that theory is co-constructed through interaction between researcher and participants, and that multiple realities are situated within specific social and contextual conditions (Bryant, 2017; Charmaz, 2014). Constructivist Grounded Theory’s epistemological foundations emphasize reflexivity, the co-construction of meaning, and theoretical sensitivity. These principles underpin inherent procedures for rigor, including iterative coding, memo writing, the constant comparative method, and theoretical sampling, all of which support the progressive abstraction of data toward the development of a co-constructed grounded theory (Charmaz, 2014; Charmaz & Thornberg, 2020; Tarozzi, 2020).
Typically, in Constructivist Grounded Theory, data generation occurs through interviews, an approach that remains flexible and responsive to emerging insights (Charmaz, 2014).
Charmaz (2014) proposed four criteria, tailored to its philosophical underpinnings, for assessing a Constructivist Grounded Theory study. Credibility refers to the logical and conceptual grounding of categories in the data, ensured through systematic coding and an iterative analytic process. Originality highlights the significance of the theoretical contribution and its potential to offer new insights. Resonance captures the extent to which the analysis meaningfully reflects participants’ experiences and perspectives and speaks to wider audiences. Finally, usefulness emphasizes the contribution of the theory to knowledge development and its practical application in research and practice (Charmaz, 2014). These criteria, aligned with a specific methodology, present some discrepancies with the COREQ-32 guidelines.
Tensions Between Constructivist Grounded Theory Method and COREQ-32
This tension becomes particularly evident in Constructivist Grounded Theory, where theoretical co-construction through active interaction between researchers and participants is a defining principle (Charmaz, 2014). The constructivist stance challenges post-positivist assumptions embedded within the COREQ-32 reporting guidelines (Braun & Clarke, 2024), such as participant validation, which may not align with the recognition of multiple “truths.” While Constructivist Grounded Theory emphasizes the authentic representation of participants’ voices, it acknowledges that findings emerge through interaction between researchers and participants (Charmaz, 2014). This distinction underscores the importance of adapting traditional evaluative criteria to align with epistemological stances.
Likewise, Grounded Theory Method principles such as theoretical sampling, memo writing, and iterative coding (Bryant, 2019; Charmaz, 2014; Tarozzi, 2020) are insufficiently addressed within COREQ-32, suggesting a fundamental misalignment between standardized guidelines and specific methodological needs (Bobbink et al., 2024). More precisely, the requirement for “pilot testing” an interview guide risks compromising the emergent and iterative nature of “theoretical sampling” and the openness central to interviewing in Constructivist Grounded Theory. Pilot testing presumes a fixed set of interview questions, which may obscure early insights, impose premature assumptions about what is “relevant,” and contradict the Grounded Theory principle that initial interviews should guide subsequent data collection to support theory construction. This practice challenges the Grounded Theory Method principle of treating “all is data” (Glaser & Strauss, 1967) and may overlook the open, iterative character of the analytic process (Bryant, 2019). These gaps highlight the limitations of imposing generic, non-method-specific checklists and underscore the need for reporting guidelines that align with each qualitative methodology’s philosophical foundations and procedural particularities.
Enhancing Transparency Through Tailored Guidance
Walsh et al. (2020) reject the view that a single reporting standard constrains qualitative inquiry, emphasizing instead the necessity of transparent reporting to support informed evaluation for health policy and practice. Nevertheless, Charmaz and Thornberg (2020) highlight the difficulty of applying universal criteria given the variety of qualitative methods and methodologies. Similarly, Tarozzi (2020) argues that the diversity of research traditions necessitates distinct evaluative and reporting frameworks. Likewise, Birks and Mills (2023) contend that textual data require flexible evaluative criteria, often tailored to the chosen methodology. As another example, the Standards for Reporting Qualitative Research (SRQR) (O’Brien et al., 2014), while broader in scope and less frequently applied than the COREQ guidelines (Walsh et al., 2020), still lacks the methodological specificity needed by both novice and experienced researchers. For instance, the SRQR guidance overlooks key features of the Grounded Theory Method, such as memoing, theoretical sampling, constant comparison, concurrent data collection and analysis, and the articulation of relationships between categories, all of which are central to a “theory” as the outcome of the research process.
A Delphi study (Hannes et al., 2015) underscored the importance of reporting guidelines while also revealing divergences among researchers regarding their optimal content. By highlighting these divergent viewpoints among research experts, the study offered a reflection on the need to develop specific reporting guidelines. As an example, the Guideline for Reporting and Evaluating Grounded Theory Research Studies (GUREGT) (Bøttcher Berthelsen et al., 2018) was developed through a literature review, seminal works, and consultation with an expert panel. By emphasizing unique methodological aspects of the Grounded Theory Method, such as theory development, the theoretical sampling process, the iterative principles of coding and memo writing, and engagement with the literature, it offers tailored guidance that responds to the unique demands of this method. More precisely, this reporting guideline provides detailed reporting and methodological guidance on the three main approaches to Grounded Theory: those of Glaser, Strauss and Corbin, and Charmaz. Despite its publication in 2018, the GUREGT guideline still lacks empirical evidence demonstrating its clear benefits. The GUREGT guidelines were used to critically assess the quality of Grounded Theory studies in a recent systematic review (Ali et al., 2024), which highlighted substantial variation in how methodological elements were reported. In particular, the review found that the specific Grounded Theory approach used was often not described, and that the resulting “theory” as a methodological outcome was frequently overlooked (Ali et al., 2024).
Authors employing other qualitative methodologies, such as phenomenology (Shorey & Ng, 2022) or thematic analysis (Braun & Clarke, 2021, 2025), have similarly emphasized the value of methodology-specific guidance. Nevertheless, such guidance should be applied with caution, as “guidelines are not prescriptive rules” (Braun & Clarke, 2025) and cannot substitute for a thorough and critical engagement with the chosen methodology.
Conclusion
Given the diversity of qualitative methodologies, we advocate for the EQUATOR Network to prioritize the dissemination of existing qualitative reporting guidelines tailored to the philosophical and methodological foundations of each specific qualitative approach. Rather than promoting a one-size-fits-all guideline, we suggest promoting and developing guidance that acknowledges rigor, transparency, and methodological coherence within each qualitative research methodology. Importantly, this approach acknowledges epistemological distinctions and provides a nuanced basis for evaluating the diverse contributions of qualitative inquiry to health research. At the same time, the advantages of strengthened reporting practices warrant further evaluation and critical reflection to prevent the over-standardization of reporting guidelines within a constructivist worldview.
Footnotes
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This manuscript is related to a research project funded by the Swiss National Science Foundation (10531C_185332) and supported by the HES-SO University of Applied Sciences and Arts, Geneva, Switzerland.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Declaration of Generative AI-Assisted Technologies in the Writing Process
During the preparation of this manuscript, the first author used ChatGPT to assist with grammar. Following the use of this tool, all the authors critically reviewed and edited the content as needed and took responsibility for the content of the manuscript.
