Abstract
The Consolidated Criteria for Reporting Qualitative Research (COREQ) checklist was designed to enhance the quality of reporting in interview and focus group studies, and it is widely endorsed by journals and publishers. However, it has also been heavily critiqued within qualitative health research communities for its design and application. In this article, we conduct detailed critical text analyses of eight articles and their accompanying self-reported COREQ responses and discuss the performative force of the checklist on the appearance of research quality. Our analyses of authors’ rhetorical strategies in articles and checklist responses indicated that authors sometimes provide misleading, inconsistent, or excessive information, prioritizing checklist completion over substantive engagement with quality principles. While intended to standardize reporting, COREQ’s rigid structure often led to overcompliance or inappropriate responses from authors, who strove to meet its criteria even when they were irrelevant or unsuitable. This “overobedience” reflects a desire to maintain credibility and avoid scrutiny, yet it undermines the depth and rigor of qualitative research. COREQ is an epistemic device, shaping researcher practices and identities beyond its stated purpose, and while it aims to enhance accountability, it perpetuates epistemic dominance, eroding authenticity and critical reflection in qualitative research and ultimately exacerbating the very problems it seeks to solve.
Introduction
The Consolidated Criteria for Reporting Qualitative Research (COREQ) (Tong et al., 2007) is an influential checklist for reporting qualitative interview and focus group research. It is, for instance, one of two qualitative health research checklists promoted by the influential EQUATOR Network (Enhancing the QUAlity and Transparency Of health Research) and is endorsed or required by an increasing number of journals. COREQ aims “to promote complete and transparent reporting among researchers and indirectly improve the rigor, comprehensiveness and credibility of interview and focus-group studies” (Tong et al., 2007, p. 350). It consists of 32 items with associated descriptors and is organized into three domains: “Research team and reflexivity,” “Study design,” and “Analysis and findings” (Tong et al., 2007). However, COREQ’s design and items have numerous shortcomings highlighted in several articles (Braun & Clarke, 2024; Buus & Agdal, 2013; King, 2021), and a particularly critical article by Buus and Perron (2020) questioned the validity of the development of the checklist as it was first reported by Tong et al. (2007).
The study by Buus and Perron (2020) was a replication study that retraced the stepped development of COREQ and concluded that the report of the instrument development was fundamentally flawed. COREQ may be reliable, but it is not valid for the reasons promoted by Tong et al. (2007). The Buus and Perron study has attracted some attention in the qualitative research community. As of May 3rd 2024, it had been cited 69 times in Scopus. We identified 65 of these referencing articles in languages we could read and found that only 18 articles (28%) included evidence that the Buus and Perron (2020) article had been read beyond the COREQ acronym in the title. For instance, “The reporting of this study adhered to the Consolidated Criteria for Reporting Qualitative Research (COREQ) checklist” (Buus & Perron, 2020) is a generic, simple declaration of procedural certainty and accountability that does not indicate that the authors have actually read any part of the article and its critique. A closer read revealed that most of the articles with indications of having read Buus and Perron (2020) were methodological discussions and critiques, which skewed our impression of the literature. Working with this sample of 65 articles, and considering the aim of COREQ evaluations, we retrieved all 43 interview and/or focus group studies (43/65 = 66%), removed the two studies authored by Buus and Perron and their research groups, and identified only two studies (2/41 = 5%) that positively indicated that the authors had read the Buus and Perron article (Berthe-Kone et al., 2021; Porter et al., 2022). Despite the relatively large number of references to Buus and Perron (2020), most referencing authors miscite the article and ignore its critique of qualitative research publication practices.
Compared to the rhetorical certainty expressed in the articles that miscite the Buus and Perron (2020) article (as illustrated above), the references to COREQ in Porter et al. (2022) and Berthe-Kone et al. (2021) are almost apologetic and hint at a respectful awareness of the trouble associated with using the instrument: Although we recognize the limitations of a tool initially aimed at biomedicine and health services (Buus & Perron, 2020), and the risk of formulaic interpretation of qualitative data (…), nevertheless the Consolidated criteria for reporting qualitative research (COREQ) served as a valuable guideline for including relevant methodological detail in reporting findings (Tong et al., 2007). (Porter et al., 2022, p. 6).
Porter et al. (2022) follow up their “although limitations—nevertheless valuable” construction by referring to an appendix with their own self-completed COREQ questionnaire. The examples above exemplify that researchers’ awareness and use of COREQ is varied and oftentimes based on references they may not have read.
Tong et al. (2007) did not suggest scores nor define summarizing cutoffs that could be used to quantify an article’s level of reporting quality. Arguably, this would be a challenging task as the significance of the COREQ items is not evenly distributed. For instance, reporting the number of participants in a study is more important than reporting the credentials of the researcher, and such differences would need to be reflected in weighted scoring and cutoffs. Despite this, some researchers have suggested arbitrary cutoffs for various levels of adherence to the COREQ guideline. Al-Moghrabi et al. (2019) identified a purposive sample of 100 qualitative studies in dentistry using interviews and focus groups and explored to what extent they had reported COREQ items in a binary evaluation (Yes/No). Without explanation, they defined “good reporting” as articles reporting 25–32 COREQ items, “moderate reporting” as 17–24 reported items, “poor reporting” as 9–16 reported items, and “very poor reporting” as ≤ 8 reported items. The mean score was 17 reported items out of 32, ranging from 2 to 27. In a similar study of a purposive sample of 197 studies from nursing journals, Walsh et al. (2020) also used a binary approach (Yes/No), adopted Al-Moghrabi et al.’s cutoff scores, and identified a mean score of 17 out of 32, ranging from 3 to 28. Furthermore, they argued that journal endorsement of COREQ or an author statement that an article adhered to COREQ was associated with “better reporting” (p. 4).
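The cutoff scheme described above can be expressed as a simple classifier. The following sketch is our own hypothetical illustration: the function name is invented, and only the cutoff values are taken from Al-Moghrabi et al. (2019).

```python
# Hypothetical sketch of the cutoffs Al-Moghrabi et al. (2019) applied to
# binary (Yes/No) counts of reported COREQ items. The function name is ours;
# only the cutoff values come from the published study.

def reporting_category(items_reported: int) -> str:
    """Map a count of reported COREQ items (0-32) to a reporting label."""
    if not 0 <= items_reported <= 32:
        raise ValueError("COREQ has 32 items")
    if items_reported >= 25:
        return "good reporting"
    if items_reported >= 17:
        return "moderate reporting"
    if items_reported >= 9:
        return "poor reporting"
    return "very poor reporting"

print(reporting_category(17))  # -> "moderate reporting" (the mean in both studies)
```

Note that the mean score of 17 in both adherence studies falls exactly on the lower boundary of the “moderate reporting” band, which illustrates how much hinges on these unexplained cutoffs.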
We challenge two underlying assumptions of these two adherence studies: that articles that report more COREQ items have increased reporting quality and that it is possible to validly and reliably classify COREQ responses. First, some COREQ items are redundant or “not applicable” in the context of a concrete article, and missing reports on items are not necessarily indicative of poor reporting. This was, for instance, noticed by Al-Moghrabi et al. (2019) and mentioned in a footnote: reporting “others present during interview” (COREQ item #15) is “not applicable” if you are conducting telephone interviews. Therefore, some articles will have lower total COREQ scores merely through the irrelevance of COREQ items to a given study. Conversely, authors can increase their adherence to COREQ by including information about what they did not do, for example, “transcripts were not returned to participants” or “software was not used” (COREQ items #23 and #27), which is superfluous and therefore in effect decreases the quality of the reported study even while addressing the COREQ item. Adherence rates may therefore be reliable but not valid because the wrong denominators are used, with actual adherence rates being higher than first calculated (for instance, if there were 18 positive responses to COREQ, the compliance score would be 18/32 = 56%, but if three items were irrelevant, the score would be 18/29 = 62%). Second, the two adherence studies (Al-Moghrabi et al., 2019; Walsh et al., 2020) made use of multiple coders to reliably classify whether COREQ items were reported in their samples of articles, but they did not explicate how comprehensively a COREQ item should be reported before it would qualify as being present.
For instance, a poor, de-contextualized description of “saturation” would count as “Yes, reported” just as readily as a comprehensive and methodologically contextualized description of “saturation” would; arguably, only the latter should count as adhering to the COREQ item. Adherence evaluators must therefore nuance their future analyses by distinguishing between items that are not reported and items that are not relevant in the context of a given article and by implementing reliable criteria for what counts as a good enough response to a COREQ item.
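The denominator problem can be illustrated with a small, hypothetical calculation. The function below is our own construction; the item counts (18 positive responses, 32 items, 3 irrelevant items) mirror the worked example in the text.

```python
# Hypothetical illustration of the denominator problem: the raw adherence
# rate divides by all 32 COREQ items, while the adjusted rate divides only
# by the items that were actually applicable to the study.

def compliance_rate(reported: int, irrelevant: int = 0, total_items: int = 32) -> float:
    """Share of applicable COREQ items that were reported."""
    applicable = total_items - irrelevant
    return reported / applicable

raw = compliance_rate(18)                     # 18/32 = 0.5625
adjusted = compliance_rate(18, irrelevant=3)  # 18/29 ≈ 0.6207

print(f"raw: {raw:.0%}, adjusted: {adjusted:.0%}")  # raw: 56%, adjusted: 62%
```

The same count of positive responses thus yields a six-percentage-point difference depending solely on whether irrelevant items are removed from the denominator, which is why unadjusted adherence rates understate actual compliance.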
The assumed linear and positive relationship between adherence to COREQ and reporting quality is not demonstrated in the current literature, and we believe that it is critical to examine how qualitative health researchers respond to an increased pressure to make use of COREQ and similar checklists, for example, through journal endorsement. Checklists, such as COREQ, should not be viewed as “inconvenient-but-innocent” publication rituals, but must also be viewed as epistemic devices that redefine foundational qualitative research processes (Morse, 2021). From a performativity perspective (Austin, 1962; Butler, 1997), COREQ is performative in that it constitutes a certain kind of research conduct. COREQ has an illocutionary force to constitute “quality qualitative research reporting” and perlocutionary effects on social relationships and researcher identities.
In this article, we explore the performativity of the COREQ checklist by examining how qualitative researchers complete COREQ checklists and how they address COREQ items in the related publications. Unlike the traditional deficit-oriented adherence studies, the aims of the article are (1) to conduct critical text analyses of both articles and their linked self-reported COREQ responses published as supplementary online materials and (2) to critically discuss the underlying performative forces that produce such practices.
Methods
We did not evaluate the published articles in terms of the quality of the research reporting but in terms of how authors use COREQ to make claims about the quality of their reporting. We assume that an ideal response to a COREQ item includes a clear, fitted, and adequate response to the item, which refers, without contradiction, to a section of the article where the item is described or discussed. If any part of this response structure is poorly described, contradictory, or missing, the trustworthiness of the response is reduced or lost. Therefore, our analysis focused on the content and rhetorical features of COREQ responses and how they refer to the published articles.
We drew a convenience sample from the 65 articles referencing the Buus and Perron article by identifying all the articles that (1) reported either an interview study or a focus group study, (2) referenced Buus and Perron’s article (2020), and (3) published an associated completed COREQ checklist (N = 8). The strength of this criteria-based sampling method was its feasibility in an exploratory research context where we did not have any strong hypothesis regarding factors that might skew the sample’s characteristics.
The first step of the analysis was to familiarize ourselves with the articles and the completed checklists. The second step was to tabulate how each COREQ item was addressed in both the COREQ checklist responses and in the published articles. This was initially done independently by three analysts (NB, BO, and AJ) who then compared findings by discussing similarities and differences and compared COREQ responses across articles. This included examinations of the textual features of the checklist responses (grammar, vocabulary) (Fairclough, 1992) and the illocutionary and perlocutionary forces (Austin, 1962; Butler, 1997) of COREQ. The third step was to classify and describe key features of how COREQ items were responded to.
Findings From Analyzing COREQ Checklist Responses and Their Associated Articles
Current peer-review processes are most often based on the electronic exchange of documents, and when journals direct prospective authors to submit a completed checklist from a relevant reporting guideline, it adds documents to the process. Not all peer-review documents are meant for publication, but some journals choose to publish the checklist documents. The directive’s illocutionary force must be understood in this context, where journal editors have the authority to reject manuscripts and academic authors are under pressure to publish to advance their careers, funding, and recognition. Furthermore, in order for the directive to be “felicitous” (Austin, 1962), editors must be sincere (be ready to sanction non-compliant authors) and submission procedures must follow established conventions. The eight articles were published in seven different health-related journals: “BMC Psychiatry” × 2, “Dementia,” “Family Relations,” “Frontiers in Psychology,” “Journal of Adolescent Health,” “Nursing & Health Sciences,” and “PLoS One.” Three of these journals explicitly recommend the use of COREQ with reference to the EQUATOR network, and a fourth is a Wiley journal (Wiley endorses COREQ and SRQR (Standards for Reporting Qualitative Research) for reporting qualitative research). We were not able to identify policies/recommendations for the remaining three journals, and we do not know if it was obligatory to provide a completed COREQ checklist as part of these journals’ submission processes. Sometimes requests for checklists are not revealed to prospective authors before they are in the process of submitting a manuscript on a journal/publisher website. We do not know who the receivers of the completed COREQ checklists were (editors, reviewers, or others) because responses could be interpreted as addressing different audiences.
We also do not know how COREQ was read by the authors, for example, if they regarded closed response categories in COREQ as invitations to unfold their responses, or if they ignored responding to guiding/double questions in COREQ.
Five of the articles reported interview studies, and three reported studies with both interview and focus group data. Two of the articles reported interviews conducted in relation to translations of research instruments and did not have interviews and focus groups in the foreground, but the COREQ checklist was still used. These latter two studies shared four authors, and large parts of their COREQ responses overlapped.
Prospective authors are challenged by an absence of established conventions regarding COREQ. The original developers of COREQ (Tong et al., 2007) did not instruct users in how to use the checklist, for example, how to respond to an item and its descriptors, and therefore, prospective authors must develop their own interpretation of an appropriate response. We identified two overall strategies for completing COREQ checklists. One was an “elaborating” strategy, which included writing a unique, detailed response to each item (5 of the 8 COREQ responses used this strategy). While many responses were clear and adequate, this strategy would often complicate or contradict the content of the article. The other was an “indicating” strategy, where a response consisted of a reference to sections or pages in the article/manuscript (3 out of the 8 COREQ responses used this strategy). While this strategy never contradicted the article’s content, complications arose when it was impossible to identify the nominated sections of the article.
In the following two parts, we will first examine the content of the COREQ checklist responses as provided in each article’s appendix or supplementary material, and then examine how these responses related to COREQ items reported in the published articles.
Part A. Content of the Responses to the COREQ Items
We identified five different types of responses to the items in the COREQ checklist.
A. 1. Clear and Adequate Responses
Many checklists included clear and adequate responses to the COREQ items. Such responses were succinct, straightforward, and fitted to the content and purpose of the COREQ item. For example, in response to COREQ item #29 “Quotations presented. Were participant quotations presented to illustrate the themes/findings? Was each quotation identified? e.g. participant number,” both Carvajal-Velez et al. (2023) and Nyongesa et al. (2022) wrote (Quote 1), “Yes, quotations were presented to illustrate the themes/findings, and each quotation was identified with an anonymous participant code,” which was fitted both in terms of answering the double question and in terms of the content (wording) that was copied from the item as a way of emphasizing the exactness of the response.
More than half of COREQ’s items are closed questions (yes/no or limited options), which invite a minimal response. Thus, responding (Quote 2) “no” (Xu et al., 2022) to item #15 “Presence of non-participants. Was anyone else present besides the participants and the researchers?” is a minimal but clear and adequate response.
A. 2. Unclear Responses
The meaning of several responses was less clear in terms of fit and content. For instance, in response to COREQ item #7 “Participant knowledge of the interviewer. What did the participants know about the researcher? e.g. personal goals, reasons for doing the research,” Xu et al. (2022) wrote (Quote 3), “Reasons for doing the research” [sic.]. They utilized the same copying approach as in Quote 1 above, which supports a literal reading, namely, that participants knew the researcher’s “reasons for doing the research.” The fit is less appropriate, the meaning of “reasons for doing the research” is not obvious, and the authors’ minimal response therefore does little to clarify their own understanding of the item. Moreover, the item itself is ambiguous, as it is not clear whether it is focused on the researcher’s personal reasons for conducting the research or on more abstract reasons for conducting it.
A. 3. Unsolicited Responses
Many responses included information that was not directly relevant to the COREQ item. This unsolicited information reflected authors’ attempts to demonstrate a partial adherence to a given COREQ item even though they could not provide a positive response to that item. For instance, in response to COREQ item #28 “Participant checking. Did participants provide feedback on the findings?”, Milton et al. (2022) responded with a “no, but” construction (Quote 4): Participant checking did not take place. Instead the lived experience researcher were [sic.] involved in the coding and theme identification process to enhance validity of the interpretation. Further, outside of the lay summary of the findings being returned, there was no formal opportunity for participants to feedback on the findings and recommendations other than contacting the researchers directly.
The first sentence was a clear and adequate response to the item: “Participant checking did not take place.” However, it was followed by a sentence about the work done by a lived experience researcher that “enhanced the validity of the interpretation,” but which had nothing to do with the item. This was followed by a sentence where “outside” suggests that there was an opportunity to give feedback on the findings when the respondents received a summary of the findings; however, this did require participants to be proactive in contacting the researchers. The additional unsolicited information and use of “outside” thus hinted that some form of participant checking took place or at least was not actively resisted by the researchers. Such responses indicate a preference for describing positive responses to COREQ even though nominated practices had not taken place.
COREQ item #25 “Description of the coding tree. Did authors provide a description of the coding tree?” was notoriously difficult to respond to in a clear manner despite the closed yes/no question. This was probably because the item implies that in qualitative research there will always be coding that can be organized in a tree. We identified no descriptions of any coding trees in the COREQ responses or the articles. The responses to COREQ item #25 typically had the “no, but” structure with references to a codebook or similar but without descriptions of the content of the codes. In this type of response, authors compensated for their inability to satisfy the criteria of a COREQ item by providing unsolicited information about a related type of practice that could be considered an appropriate proxy for a coding tree and therefore consistent with good research.
A. 4. N/A Responses
There were a number of “N/A” responses where authors indicated that a COREQ item was not applicable to their report/study. These responses seemed to take two forms. First, there were instances where the COREQ item was clearly not applicable to the study. For example, COREQ item #27 is worded: “What software, if applicable, was used to manage the data?” For this item, “N/A” is clearly a fitted and appropriate response for studies that did not use software.
However, in other instances, “not applicable” was used to avoid an outright “no.” This was likely because a “no” response implies that the item was not appropriately reported by the authors, while “not applicable” suggests that the item was not relevant to the study, and therefore its absence does not imply failure on the part of the authors. For example, Oerther and Papachrisanthou (2024) responded to item #7 “What did the participants know about the researcher? e.g. personal goals, reasons for doing the research” with “N/A.” It is not possible for this to be “not applicable,” as the participants will either know about the researcher’s personal goals and reasons for doing the research or not. Such uses of “N/A” are thus unfitted to the COREQ item and serve to minimize the authors’ departure from COREQ ideals.
A. 5. Misunderstandings and Missing Responses
There was a small number of responses where authors apparently had misunderstood the item. For example, in response to COREQ item #11 “Method of approach. How were participants approached? e.g. face-to face, telephone, mail, email,” Carvajal-Velez et al. (2023) and Nyongesa et al. (2022) wrote (Quote 6), “Face-to-face discussions and interviews were conducted.” They mistook the approach to recruitment for modes of data collection. Also, there were three missing responses, all in Xu et al. (2022). The corresponding items were adequately addressed in the article, which could indicate that Xu et al. regarded them as redundant in the checklist. While misunderstood and missing responses were not frequent, some articles included a high proportion of these responses, which we interpret as a sign of having rushed the completion of the checklist.
Part B. How Checklist Responses Relate to What Is Reported in the Articles
We identified four different types of relationships between checklist responses and manuscripts.
B. 1. Aligning References
The largest proportion of responses explicitly or implicitly referred to the manuscript by providing the same information or by citing page numbers or specific sections of the article. Some references were unproblematic because the information in the COREQ response table corresponded with information in the article. For example, in response to COREQ item #21 “Duration. What was the duration of the interviews or focus groups?”, Porter et al. (2022) wrote (Quote 7) “11,” which referred to page 11 in the manuscript where it said, “Interviews lasted between 35 min and 1 h 55 min, with a median length of 1 h.”
Other simple references refer to larger sections of the article. COREQ items #30–32 inquired into “consistency” and “clarity” in the findings, which were very challenging to summarize, and all eight checklist responses simply referred readers to the manuscript’s findings sections. For instance, in response to COREQ item #30 “Data and findings consistent. Was there consistency between the data presented and the findings?”, Sun et al. (2024) wrote (Quote 8), “The presented data and findings are consistent from our point of view.” Sun et al. (2024) repeated the wording of the closed yes/no question and added a personal claim, “from our point of view,” which underscores the subjective/interpretive nature of some COREQ items. In effect, though, readers of items #30–32 were forced to review the full findings in the article themselves and decide on the demonstrated levels of consistency and clarity.
Finally, the common rhetorical structure, “X was done, to achieve Y,” seemed to be used to reassure readers of the quality of the research and intent of the research reporting, for instance, “Interpretive sessions with two colleagues were held to verify the data saturation and to achieve consensus on the final codes” (Oerther & Papachrisanthou, 2024, p. 179). Also, it was common to rhetorically upgrade the adherence to COREQ, for instance, themes emerge “clearly,” discussions are “rigorous,” and field notes contain “high-quality” descriptive information.
B. 2. COREQ Responses Are More Detailed Than the Sections They Refer to
Occasionally, authors included more information in the COREQ checklists than what was reported in the articles themselves. For instance, in response to the closed question in COREQ item #6 “Relationship established. Was a relationship established prior to study commencement?”, Sun et al. (2024) wrote (Quote 9), “There was no relationship between the research team and the participants. The research team is not involved in taking care of these participations [sic.] to avoid ethical issues.” We noted that the second sentence contained unsolicited information, and that no parts of this response were reported in the manuscript. In total, we identified seven instances like this in Sun et al.’s (2024) COREQ checklist, which could indicate that the response to COREQ was made after the completion of the manuscript, and that the authors did not deem it necessary to integrate this new/additional information in the article manuscript. However, in response to COREQ item #18 “Repeat interviews. Were repeat interviews carried out? If yes, how many?”, Sun et al. (2024, p. 3) wrote (Quote 10), “No repeat interviews were conducted,” in the manuscript. Arguably, this information about something that was not done would only be added to the manuscript with COREQ in mind. That information about actions not taken was prioritized, while actions that were taken went undescribed, suggested that the completion of COREQ was not consistently used to modify or improve the manuscript.
B. 3. Contradicting References
Clear and adequate responses were no guarantee of alignment, transparency, and reliability. Comparisons between COREQ responses and the content of the articles often indicated inconsistencies and contradictions. An example was Sun et al.’s (2024) response to COREQ item #1 “Interviewer/ facilitator. Which author/s conducted the interview or focus groups?”. In the COREQ response, they stated, “Chun-Ting HSIAO and Jia-Jing SUN conducted the interviews”; however, in the manuscript, they stated (Quote 11), “All interviews were conducted by the author, who has a master’s degree, experience working in a dedicated COVID-19 ward, and training in qualitative research methods” (Sun et al., 2024, p. 2) and “Subsequently, all interviews were conducted by the researcher (JJS)” (Sun et al., 2024, p. 3). We do not know which is correct, but the information from the two sources cannot both be correct at the same time.
B. 4. Misleading References
Several COREQ items include normative assumptions about qualitative research practices, which in some situations had to be actively managed by respondents. For instance, COREQ assumes that coding always takes place as part of an analysis and should preferably be done by more than one researcher. Tong et al. (2007) stated, “Specifying the use of multiple coders or other methods of researcher triangulation can indicate a broader and more complex understanding of the phenomenon” and that coding is “selecting significant sections from participant statements” (p. 356). In line with this, COREQ item #24 is: “Number of data coders. How many data coders coded the data?”, and Klinner et al. (2023) responded to this item by referring to their article on page “6” (Quote 12): In the process of reviewing the interview transcripts and listening to the audio files, one qualitative researcher developed an extensive analytical memo. This memo initially contained rich descriptions of preliminary themes identified inductively as important relative to the research question, interspersed reflexive comments on their potential meaning and relationships to each other (…). This expansive phase of memo-writing followed a process of structuring and abstraction of the data in which themes and sub-themes were consolidated in team discussions, using a critical realist approach (…) (Klinner et al., 2023, p. 3).
The “6” response suggested that the item would be addressed in the manuscript, but this was misleading as the item was not directly addressed. No coding process was described, and while Klinner et al. (2023) described an alternative approach to analysis based on memo-writing, they still prioritized mentioning that it was ultimately a team effort; maybe as a way to address the item’s emphasis on the value of multiple coders.
There were several instances of more serious misdirection when the relationship between a COREQ response and an article was examined. Above, we characterized Quote 1 (“Yes, quotations were presented to illustrate the themes/findings, and each quotation was identified with an anonymous participant code”), as reported by Carvajal-Velez et al. (2023) and Nyongesa et al. (2022), as a clear and fitted response. However, as there were no actual quotes presented in the manuscripts, we interpreted it as very misleading. Often, brief responses (a yes/no answer and a reference to the manuscript) would be misleading. For instance, in response to COREQ item #3 “Occupation. What was their occupation at the time of the study?”, Porter et al. (2022) indicated that a response to this was on page “9.” However, there was no information about the interviewer’s occupation in the article. Similarly, in response to COREQ item #13 “Non-participation. How many people refused to participate or dropped out? Reasons?”, Oerther and Papachrisanthou (2024) referred to their “participants and setting” section, which did not include any information about “non-participation.” So, while some sloppy responses dealt with the spirit of a given item, others came across as misleading about the content of a given manuscript.
We are aware of the possibility that discrepancies and inaccuracies between COREQ checklists and published manuscripts could arise when manuscripts are revised as part of the peer-review process and some information is prioritized over other information. However, we do not have the impression that checklists are updated as part of re-submission.
Discussion
Our analyses make clear that using the COREQ checklist is far from being a simple, straightforward exercise, and they reveal the numerous and varied ways in which authors completed the checklist to convey a sense of compliance with the criteria. Capturing various elements at play in the management of COREQ responses requires a lens that helps us contrast what seems visible and what remains out of frame both in spite of and because of the completion of the checklist. Below, we revisit ideas put forth by Austin (1962) and Butler (1997), and further mobilize the works of Foucault, Haraway, and ignorance scholars to critically examine some implications of our analyses.
Our analyses showed that transparency in research reporting was not a given outcome of COREQ completion. There were instances in the dataset where authors addressed reporting items appropriately, provided information over and beyond what was required, or provided misleading, inaccurate, and inconsistent information. Each of these response types hinged on games of (in)visibility that both reveal and conceal particular concerns about expectations surrounding quality and how various actors are positioned therein. For example, several responses reflected authors avoiding certain COREQ items when they could not decisively report that those items had been met. “No, but” formulations were common, as was the inclusion of additional information, which sometimes resulted in more detail in the checklist responses than in the manuscript itself. While this may be assumed to enhance authors’ accountability and transparency, the information was often unsolicited and irrelevant in the particular COREQ context. We question authors’ apparent reluctance to simply dismiss inappropriate COREQ items, and we ponder the risks of doing so. Authors seemed keen to convey a sense of responsibility consistent with COREQ requirements, resulting in a form of overcompliance with the checklist, including its unsuitable items. We read this as a strategy to dispel potential doubts about the quality of one’s work, confirm the legitimacy of the reported research, and enhance authors’ credibility as researchers. Given the illocutionary force of a journal’s directive to complete COREQ and the authoritative claims about COREQ’s role in quality reporting, authors may come to understand that the relevance or applicability of COREQ items should not be questioned, at the risk of raising suspicions about their lack of rigor.
Through overcompliance, authors therefore uphold their position as obedient subjects who ostensibly adhere to the “quality project” that COREQ is thought to represent. Such behavior echoes the work of Gros (2020) who problematizes overobedience to dominant discourses and normative assumptions even in the face of inconsistencies and flawed logic. “Overobedience” provides a means to present oneself in a favorable (compliant) light while maintaining the appearance of a working discursive system. In the context of interest here, we suggest overobedience to flawed or inappropriate COREQ items reflects similar intersecting forms of identity and epistemic work wherein the gaze of those assessing the quality of one’s article can be alleviated, and departures from expected COREQ responses may be more forgivable if one displays efforts to engage with the spirit of all items, however inapplicable.
In other instances, authors included misleading or otherwise problematic information in their COREQ forms relative to the content of their article, or justified their use of COREQ by citing articles they had clearly not read. Such information provided no useful insight into authors’ engagement with the reporting requirements defined by the checklist (and, indirectly, the endorsing journals) and, in fact, called into question the very commitments to quality it calls for. Although such misleading information was easily identifiable, it did not impede the acceptance of the articles. We suggest that the successful publication of such articles relies on peer reviewers’, editors’, and readers’ excessive trust in COREQ and their disposition to readily accept authors’ responses to its criteria. Epistemic work is involved here too, wherein the mere completion of the checklist, coupled with collective blindness to poor reporting practices, serves as a proxy for quality. This epistemic work is key to maintaining the appearance of quality as well as committed narratives about its attainment.
Our analyses suggested that ritualized, irrelevant, and inaccurate reporting practices muddied notions of quality and rigor while also pointing to authors’ strategic management of their visibility and positionality to appear responsive and compliant. In Austin’s (1962) terminology, the editorial directive to complete the COREQ checklist was largely “infelicitous,” with our analysis indicating that neither authors nor editors consistently adhered to intended procedures around the completion and submission of the checklist. Such practices also raise questions about peer reviewers, editors, and publishers as guardians of quality research and quality reporting who nonetheless fail to catch problematic responses during the review and editing process. Problematic completion of the checklist, as well as failure to identify poor reporting practices (despite these being made evident in the COREQ forms), suggests that various actors involved in the writing, reviewing, and publishing enterprise may be far less committed to the checklist and its reporting requirements than appears at first glance. This raises questions about the meaning of “quality” in this context, as well as the usefulness of checklists such as COREQ as a tool for, and measure of, quality reporting, with inconsistencies, inaccuracies, and errors hidden in plain sight. Despite promises of transparency, there is substantial production of blindness and ignorance about individual and collective understandings of, and engagement toward, “quality,” with peer reviewers, editors, and publishers as active participants, alongside authors, in such regimes of ignorance (Gaudet, 2014).
The performative force of COREQ itself is something that remains conveniently out of view in current discussions about quality and the strategies to achieve it. As mentioned previously, fundamental flaws in the development of COREQ have been methodically highlighted by others (see, for instance, Buus & Perron, 2020) and, concerningly, these critiques have still not motivated a response from Tong et al. or the EQUATOR Network. The accidental or willful overlooking of critiques of COREQ runs counter to the very rigor to which the research community claims it aspires. It also contributes to a form of pluralistic ignorance (Knudsen et al., 2023) and a collective obscuring of mechanisms of “seeing” and “unseeing.” This allows a collective of social actors (e.g., researchers/academics, peer reviewers, and journal editors) that hold specific ideas about truth, quality, and rigor to generate artefacts (e.g., checklists and submission guidelines) that sustain such ideas and create a sense of consensus around them, and to produce normative subject positions within the resulting dominant narratives (Gross & McGoey, 2015). In order to remain authoritative, however, checklist performances of quality must exclude conflicting ideas and provide a sense that debates about them have been settled or have become irrelevant to the research community, despite contradictory evidence (Morse, 2020, 2021).
At the core of these games of (in)visibility and ignorance lie, on one hand, longstanding anxieties around the epistemic, scientific, and social worth of qualitative research in a world long dominated by positivistic inquiry. On the other hand lie parallel abdications to pressures that discipline many qualitative researchers, who seek recognition and acceptance into the dominant view by making their work fit criteria of positivistic science (e.g., validity and generalizability) as closely as possible, however inappropriate. In such a context, some may see checklists such as COREQ as a way to free the qualitative research community from continued unjustified attacks on its integrity. Presented as keys to epistemic legitimacy in the world of technoscience, these technologies discipline authors into using them, in hopes of finally putting epistemic tensions to rest. In other words, the use of checklists such as COREQ provides a veneer of epistemic mastery that, as our analyses showed, does not withstand scrutiny.
The problem that COREQ proposes to address is a perceived lack of rigor in qualitative research reporting and methodology. Within the academic enterprise, the rigor of knowledge production is tightly controlled by the dominant paradigm of positivist and post-positivist science and its quantitative measures and methods. Despite the narrowness of the positivist paradigm and its limitations within health research, its dominance determines what counts as knowledge and which questions are worthy of research, while its researchers control and organize resources, including funding, access to publication, and dissemination. Quantitative methods are discursively constituted as “hard science” leading to important truth claims, while qualitative methods are constituted as “soft science” and sidelined as window dressing: nice perhaps, but relatively non-essential and insubstantial. Technocratization produces items ill-suited to the epistemological concerns of qualitative studies (e.g., “saturation”), eclipsing the key aspects of what makes a qualitative study methodologically rigorous. In attempts to publish, qualitative researchers operate defensively, avoiding confrontation with the logics and power of the dominant order that undermine their work. Instead, they comply with COREQ, resigning themselves to its promise of legitimacy as a way to speak quantitatively about qualitative research. However, because COREQ is a poor fit for measuring quality, high levels of compliance (or designing a study with it in mind) erode the quality of the work, while low levels of compliance leave authors open to future accusations of poor or misleading scholarship. Within this tautology, the relation of technobiopower comes into view: COREQ can be understood as productive of poor-quality qualitative research, further entrenching its subjugation within the hierarchy of knowledge.
Qualitative researchers are sidelined in the ever-accelerating world of scientific and academic production, where fast outputs of short articles are valued over longer ones that dive deeply into complex ontological and epistemological questions. In such a context, checklists such as COREQ constitute a convenient workaround, allowing various social actors to perform quality as ascribed by dominant quantitative methodology, without the necessary philosophical engagement and required contextualized discussions of quality and rigor.
The current analyses were only possible because of relatively recent changes to publication practices whereby authors’ interactions with checklists are made publicly available. This change is part of a larger push to ensure transparency in research publishing where, for instance, researchers are routinely asked to also submit their datasets, and reviews and reviewers’ names are revealed as part of publication. This goal is consistent with the principles of the growing open science movement. The publication of COREQ checklists alongside a manuscript both intensifies accountability and blurs traditional boundaries between “producing research” and “published research.” Drawing on Foucault’s analyses of disciplinary surveillance technologies (Foucault, 1977), we view checklists such as COREQ as epistemic devices with perlocutionary force, controlling and regulating research communities and shaping researcher subjectivities. Foucault (1977) analyzed the panopticon as a surveillance technology in which subjects may be watched at any time, which leads them to regulate their own behavior as the gaze is internalized. We suggest that the publication of COREQ responses can be seen as part of a digital “research publication panopticon,” where authors, reviewers, and editors must perform with an awareness that their research, review, and publication practices could be scrutinized by anyone at any time. Of course, like all forms of surveillance, this matters little if no one bothers to scrutinize and no one cares.
Extending Foucauldian ideas, Haraway (1997) describes how technology intensifies biopolitical relations. Ever-expanding opportunities for surveillance, both temporally and spatially, change the nature and effects of power and the regulation of the most detailed and intimate aspects of our lives and bodies. Haraway (1997) explains that this move to “technobiopower” has “more the temporality of the science-fictional wormhole, that spatial anomaly that cast travellers into unexpected regions of space, than of the birth passages of the biopolitical body” (p. 12). Our analyses revealed that checklist responses had to be composed by authors themselves, but we can imagine further intensifications of this documentation practice (steps into Haraway’s wormhole) in which journals/publishers develop online checklist templates requiring authors to electronically link checklist responses and manuscripts before submission (and re-submission) and as part of publication.
When authors are directed to complete COREQ as the basis for quality in research reporting, they are to some extent required to conform to a set of inherent normative assumptions external to the methodological traditions of their work. Authors mitigated the impact of the perlocutionary force of COREQ by providing non-conforming responses characterized as unclear, deflecting, sloppy, or misleading. One interpretation of this behavior is that authors respond to COREQ as an annoying, repetitive ritual, an additional task that must be completed to submit an article. Deflecting, misleading, and deceiving are regarded as legitimate responses because responding to COREQ per se is more important than the associated research and publication practices, and because nobody (publishers, editors, reviewers, peer researchers) reads or cares anyway. An additional interpretation is that the non-conforming responses to COREQ can be characterized as acts of resistance to “the research publication panopticon,” including the quality of the COREQ criteria and the norms for qualitative research that COREQ promotes. In Gros’s words, they might be seen as “obeying as badly as possible” (Gros, 2020).
Conclusion
Checklists are epistemic technologies in the sense that they stand in for (represent) “quality” while reducing it through standardization. They create a façade that, if they are followed, a quality outcome will ensue; accordingly, authors were required to produce “quality” aligned with the logics of the checklist. Our analysis showed that authors’ engagement with COREQ was more nuanced and complex than first appears, and suggested additional layers of concerns around, and enactments of, quality in reporting. The varied uses of the COREQ checklist identified in our analyses reveal how many aspects surrounding the checklist and its assumptions about quality remain in the shadows, unspoken and unaddressed.
COREQ is an epistemic device in itself, but it has very limited power without the push by publishers/journals to endorse it unquestioningly despite its shortcomings. Authors “playing the game” of quality reporting are directed by publishers/journals, who set the rules about how quality will be assessed (before the peer-review process). Abiding by dominant, rational, and technocratic rules shrouds the complex (and sometimes messy) issues of quality, keeping them out of view and giving the appearance that they have been put to rest. Rather than alleviating issues of quality or quality reporting, our analyses suggest that COREQ multiplies the pathways of researchers’ entanglement in fundamentally unresolved debates about quality in research. It reproduces the dominance of knowledge produced by quantitative approaches within positivist science by undermining desperately needed contributions of knowledge arising within qualitative methodologies aligned with epistemologies from other perspectives and paradigms. COREQ remains a poor-quality and untrustworthy tool that does not address quality struggles and that critically shapes authors’ reporting behaviors in problematic, misleading, and at times dishonest ways. In other words, checklists such as COREQ may be creating, at least partly, the very problem they claim to fix.
To be clear, we are not implying that quality does not exist in qualitative research or that the work of qualitative researchers, authors, reviewers, and journal editors is of poor quality. We do believe, however, that blind endorsement of checklists like COREQ functions to erode the quality of qualitative research by encouraging sloppy engagement with deep, fundamental questions around quality and rigor in research. Such reporting tools provide a convenient workaround wherein research stakeholders can uphold an appearance of concern for these issues, regardless of their actual engagement and ability to reflect on matters of epistemology, truth, and philosophy, which lie at the heart of the purported “quality problem” in qualitative research.
Footnotes
Author Contributions
All authors collaboratively designed the study; N.B., B.O., and A.J. performed the analyses; all authors collaboratively wrote the article.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
