Abstract
Artificial intelligence (AI) is now routinely deployed in qualitative health research. Comparative evaluations indicate that these systems can reproduce routine coding methods but falter on culturally nuanced or emotionally complex material. Conventional reflexivity guidelines focus on investigator positionality and provide limited guidance for assessing algorithmic influence at early stages of the analysis process. We introduce the AI-Reflexivity Checklist (ARC), a pre-analysis, evidence-informed checkpoint that sets the appropriate human-in-the-loop (HITL) posture—delegate, assist/augment, or human-led—for LLM-assisted qualitative coding of textual data. Literature from science and technology studies, empirical studies of AI-assisted qualitative analysis, and pragmatic workflow models informed the identification of five decision domains: descriptive scope, contextual variation, experiential depth, ethical exposure, and output reversibility. These domains are operationalized as five sequential prompts completed before AI is introduced. If the planned task is purely descriptive, meanings are stable across contexts, experiential nuance is minimal, ethical risk is low, and outputs can be fully revised or reversed, then automation is permitted with routine human verification. Elevated ratings on experiential or ethical domains point to an assist or human-led posture unless pilot evidence meets pre-specified acceptance criteria; lack of reversibility remains a blocker because it precludes audit and repair. ARC extends existing reflexivity practice to encompass algorithmic actors, offers a brief record suitable for review, and mitigates early path-dependency toward indiscriminate automation.
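The posture-setting logic summarized above can be sketched in code. The field names, boolean rating scheme, and `pilot_criteria_met` flag below are illustrative assumptions, not specified by the checklist itself:

```python
from dataclasses import dataclass

@dataclass
class ARCRatings:
    """Hypothetical pre-analysis ratings for the five ARC decision domains.

    Each flag is True when the domain favors automation; the names and
    the boolean simplification are assumptions for illustration only.
    """
    purely_descriptive: bool           # descriptive scope
    stable_meanings: bool              # contextual variation is low
    minimal_experiential_nuance: bool  # experiential depth is low
    low_ethical_risk: bool             # ethical exposure is low
    fully_reversible: bool             # outputs can be revised or reversed

def arc_posture(r: ARCRatings, pilot_criteria_met: bool = False) -> str:
    """Map domain ratings to a HITL posture: delegate, assist, or human-led."""
    # Lack of reversibility is a blocker: it precludes audit and repair.
    if not r.fully_reversible:
        return "human-led"
    # Elevated experiential or ethical ratings point away from delegation
    # unless pilot evidence meets pre-specified acceptance criteria.
    if not (r.minimal_experiential_nuance and r.low_ethical_risk):
        return "assist" if pilot_criteria_met else "human-led"
    # All conditions met: automation with routine human verification.
    if r.purely_descriptive and r.stable_meanings:
        return "delegate"
    return "assist"
```

For example, a purely descriptive task with stable meanings and fully reversible outputs would return `"delegate"`, while the same task with elevated ethical exposure and no pilot evidence would return `"human-led"`.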