Abstract
Objectives
This study presents a protocol that integrates conversational artificial intelligence into qualitative data analysis to support rapid, decision-oriented descriptive analysis in public health settings. The protocol was developed during an applied project with Hamilton County Public Health that analyzed interviews with the next of kin of overdose decedents to inform local strategies. The objective is to describe the protocol, its safeguards for data familiarization and human verification, and its practical application in a real-world case.
Methods
Evaluators designed and tested manual coding, intentional artificial intelligence-assisted coding, and conversational artificial intelligence within ATLAS.ti, selecting the conversational approach for the protocol. The protocol requires a mandatory pre-analysis familiarization phase that includes reading a stratified subset of transcripts and drafting immersion memos. Analysts then pose structured natural language queries tied to prespecified research questions. All outputs are treated as proposals and undergo required human verification, including confirmation of quoted evidence and contextual review. Theme-level benchmarking compared independent human synthesis with conversational artificial intelligence outputs.
Results
Conversational artificial intelligence produced rapid descriptive findings anchored to verifiable text, enabling efficient auditing through embedded links. Theme-level comparison showed conceptual overlap between human and artificial intelligence outputs, with transparent documentation of areas of divergence. The protocol supported rapid training of local personnel and sustained in-house analysis capacity.
Conclusion
The protocol formalizes a pragmatic workflow for question-led, top-down descriptive analysis using conversational artificial intelligence with mandatory human oversight. It is not intended to replace interpretive or theory-generating approaches but offers a transparent and scalable option for time-sensitive, decision-focused qualitative work.
Introduction
Qualitative data analysis (QDA) plays a crucial role in public health research, offering deep insights into complex human behaviors, experiences, and social contexts. Traditional QDA methods, while thorough, often require significant expertise and time investment,1,2 posing challenges for staffing, rapid data analysis, and decision-making for local-level organizations. The majority of research systems that local governments rely upon are the result of community or academic partnerships3 and operate outside local authority. Moreover, local-level public health organizations widely express concerns regarding the difficulties associated with generating and applying research to guide their actions.4,5 While embedding researchers in public health settings to translate research evidence into practice is increasingly common,6 there are limitations to this approach including turnover, capacity, and potential misalignment of what constitutes research.7 The Hamilton County Public Health (HCPH) Comprehensive Opioid, Stimulant, and Substance Use Program (COSSUP) project exemplifies these challenges, as it seeks to analyze qualitative data from interviews with the next of kin (NOK) of overdose decedents to inform public health strategies.
Traditional QDA methods, rooted in diverse ontological and epistemological traditions,8 are designed to collect and analyze data from various sources and contexts, thereby informing the adaptable nature of QDA. QDA is flexible and allows for modifications to, or the combination of, approaches to best fit a given context (e.g. sample, research question, phenomenon, theoretical paradigm).9 Considering this, effective qualitative research necessitates clear purpose, specialized skill, and focused concentration. QDA methods excel at understanding phenomena in deep detail from the perspective of participants. On the other hand, QDA is constrained by inherent linguistic ambiguities, and the findings from corpus analysis cannot be generalized to broader populations with the same level of certainty as quantitative methods.10 In an effort to accelerate QDA learning and standardize review processes, researchers have developed methodological templates that provide structured, streamlined, and reproducible procedures for data collection, analysis, and interpretation.11 Nevertheless, employing a template-like approach frequently proves inadequate for addressing the specific research question posed by investigators, unless modifications are made to accommodate the particular sample, setting, or epistemological considerations.12
In response to these challenges, evaluators for the HCPH COSSUP project developed the accelerated qualitative understanding and analysis through intelligent computing (AQUATIC) protocol: a novel approach to rapidly implement QDA capacity at the local level. The approach aims to rapidly build local-level capacity for top-down descriptive analysis by integrating artificial intelligence (AI) into the analysis protocols. The AQUATIC protocol is designed for question-led, rule-bound engagement with a defined dataset in order to generate descriptive findings that are explicitly tied to prespecified research questions and substantiated with verified exemplar quotations. AQUATIC is not intended to replace interpretive or theory-generating traditions (e.g. grounded theory, phenomenology) that rely on iterative coding cycles, memoing, and researcher reflexivity. Rather, the protocol provides a transparent and efficient pathway to produce decision-oriented descriptions when stakeholders require timely answers to known questions. This framing aligns with the protocol's central features: prompt-driven queries anchored to research questions, emphasis on traceable evidence within the corpus, and mandatory human verification of outputs.
AI-powered tools hold vast potential for QDA; currently, as an augmentative tool as opposed to a human replacement.13 In their assessment of traditional QDA processes in relation to AI-supported analysis, Bennis and Mouwafaq14 conclude that appropriately implemented AI models offer notable gains in efficiency and analytic precision, indicating that future directions in qualitative methodology should prioritize structured integration of AI tools with human interpretive judgment. The AQUATIC protocol follows this advice by utilizing human-guided conversational AI coupled with human-guided result verification. Implementing the AQUATIC protocol enables organizations to quickly develop in-house top-down descriptive analysis capacity, while simultaneously mitigating the need for ongoing collaboration with external research organizations that may influence research objectives and activities. In this sense, the AQUATIC protocol does not prescribe a static analytical approach; rather, it offers a set of procedures outlining how the analytical process should be conducted and encourages organizations to modify components of the analysis to better target the context in which it is applied. Keeping the QDA process internal positions the organization's AQUATIC analysts as internal change agents15 and provides opportunities for cross-departmental collaboration within the organization. The protocol is more than a sophisticated search function: the system conducts analysis and synthesis of the uploaded dataset guided by structured prompts. This article highlights the development and application of the AQUATIC protocol, with particular attention placed upon the rapid training of personnel and the implementation of AI-driven QDA methods. Furthermore, this article presents an example of how the AQUATIC protocol can be used effectively and demonstrates how AI can take an analyst beyond the basics and provide analytical insights.
By leveraging AI, the protocol seeks to enhance the efficiency and conceptual coherence of data analysis, as well as augment local capacity to undertake and continue this work, ultimately contributing to more timely and informed public health interventions.
The AQUATIC protocol was developed for use with ATLAS.ti. ATLAS.ti is powerful computer-assisted QDA software designed to facilitate the analysis of complex qualitative data. Working with computer-assisted QDA software, sans AI tools, generally saves analysts 20–30% of the time they would spend managing, manipulating, and retrieving data.16 With the inclusion of AI-powered tools, analyses can be done “in minutes instead of weeks”.17 As with other QDA software (e.g. NVivo, MAXQDA), ATLAS.ti offers a range of tools that enable users to code data as well as identify patterns and themes. ATLAS.ti, however, also offers a number of analytical tools powered by AI. All of ATLAS.ti's AI tools are powered by generative pre-trained transformer models from OpenAI. Since HCPH had existing licenses for ATLAS.ti, the evaluation team utilized this QDA software as the basis for protocol development. ATLAS.ti is fundamental to the implementation of the AQUATIC protocol, as the application relies on ATLAS.ti's conversational AI tool. In theory, the protocol is potentially adaptable to other computer-assisted QDA software with integrated conversational AI; however, replicability and contextual testing of the tool as well as the underlying language model is necessary for full adoption. The software's conversational AI tool is built upon generative AI and uses semantic search to distill query intent, enabling rapid and contextually relevant insights. Since the tool operates with semantic search, users are able to interact with the dataset through natural language queries. The use of conversational AI substantially enhances the efficiency of the data analysis process by reducing the time required for thematic identification and interpretation while maintaining a high level of conceptual coherence. Semantic search also contributes to bridging the gap between experts and non-experts in analysis.
Experts, leveraging their extensive knowledge, can formulate precise queries, whereas non-experts often struggle with specific terms and concepts, resulting in less precise queries.18 Furthermore, ATLAS.ti's user-friendly interface and the capability for the conversational AI tool to comprehend natural language interactions facilitates the rapid training of HCPH personnel and ensures that they will be able to continue this work independently once the COSSUP grant concludes.
The purpose of this article is to present the top-down descriptive analysis AI-powered AQUATIC protocol developed for the HCPH COSSUP project and to discuss its broader applicability in various research and organizational contexts. By sharing the insights gained from this innovative approach, we aim to encourage other organizations to consider adopting AI-driven methods for QDA, thereby improving the speed and quality of their research outcomes.
Protocol development
Evaluators for the HCPH COSSUP project developed and tested protocols for three AI-powered QDA approaches in ATLAS.ti to determine the most effective tool for the AQUATIC protocol: manual coding, intentional AI coding, and conversational AI. Manual coding, the traditional approach for QDA, involves human coders meticulously reviewing and categorizing data, which, while thorough, is time-consuming, resource-intensive, and less scalable as the project size increases. Manual coding does, however, offer the greatest methodological flexibility insofar as it allows for various coding approaches (e.g. emergent, thematic).
Each tested approach has its pros and cons. Manual coding offers high procedural defensibility, depth of understanding, and methodological flexibility but is labor-intensive. Intentional AI coding strikes a balance between efficiency and procedural defensibility, though it requires initial setup, substantial human training, continuous monitoring, and benefits from a secondary AI tool for efficiency. Conversational AI excels in speed and ease of use, making it ideal for rapid analysis, but it does not code data in a traditional sense. Importantly, the analysis and synthesis produced by the conversational AI aligns with the results generated by traditional human coding. The alignment of these results indicates that the conversational AI system is conducting sophisticated analysis, beyond simply searching a dataset for key words, at a level similar to humans with advanced QDA expertise. As with other AI-powered tools, the outputs must be carefully reviewed by a human to limit potentially biased and/or conceptually incoherent responses resulting from OpenAI's training data. Selecting the appropriate method depends on situational- or organizational-specific needs and constraints of the project.
The development of the AQUATIC protocol involved a meticulous process of integrating conversational AI into the QDA framework. Initially, the evaluators tested the conceptual replicability between the locally installed desktop version of ATLAS.ti and the web-based version. The testing was conducted by two evaluators using different computers, in different geographic regions, on different dates within the same week. While both versions produced conceptual replicability, the desktop version offered a clearer user interface and fuller access to tools. As such, the AQUATIC protocol was developed for use with the desktop version of ATLAS.ti. The evaluators recognized the potential of conversational AI to transform the way qualitative data is analyzed, particularly in the context of local-level public health research.
Simultaneously with the version testing, the evaluators conducted a theme-level comparative analysis examining the five primary themes that emerged from traditional human coding and from the conversational AI. The testing parameters aligned with the version testing above. In this case, the two evaluators manually coded the data directed toward a prescribed research question (R3 in Section 4) and identified the five major themes that surfaced. The evaluators then compared the thematic results against each other for alignment, and then against the conversational AI output. The theme-level comparison matrix for R3 is displayed in Table 1. The thematic benchmarking demonstrates conceptual concordance where present and transparent divergence where AQUATIC or human analysis surfaced unique or differently scoped findings. Since the conversational AI tool does not code data, a code-by-code comparison is neither possible nor aligned with the protocol's analytic logic. A theme-level comparison more appropriately assesses whether the AI-assisted, human-verified descriptive synthesis converges conceptually with independent human judgment at the level that AQUATIC actually produces and that stakeholders consume: themes with evidence. Because every AQUATIC theme is accompanied by verified quotations and hyperlinks, analysts can audit the evidentiary basis of the AI-assisted results directly in the corpus; an advantage for traceability and decision-fitness in descriptive use cases. Future work will extend this by incorporating parallel human/AI trials optimized for code-level agreement (where the AI pipeline outputs codes) and repeated AI runs to quantify stochastic variance. Until then, theme-level benchmarking remains the appropriate and methodologically faithful test for AQUATIC's intended function.
R3 theme-level comparison matrix. a
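The theme-level benchmarking described above can be sketched as a simple set comparison that classifies each AI-derived theme as a match, near-match, or unique relative to the human synthesis. This is a minimal illustration; the theme labels, token-overlap similarity rule, and thresholds are hypothetical and are not drawn from the project's actual matrix.

```python
# Illustrative sketch of theme-level benchmarking. Theme labels and
# similarity thresholds are hypothetical, not the project's actual data.

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two theme labels."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def compare_themes(human, ai, match=0.6, near=0.3):
    """Classify AI themes against human themes; flag uniques on both sides."""
    result = {"match": [], "near_match": [], "unique_ai": []}
    for theme in ai:
        best = max(human, key=lambda h: jaccard(theme, h))
        score = jaccard(theme, best)
        if score >= match:
            result["match"].append((theme, best))
        elif score >= near:
            result["near_match"].append((theme, best))
        else:
            result["unique_ai"].append(theme)
    result["unique_human"] = [
        h for h in human if max(jaccard(h, t) for t in ai) < near
    ]
    return result

human_themes = ["access to long-term treatment",
                "mental health integration",
                "family education"]
ai_themes = ["long-term treatment access",
             "peer support networks",
             "mental health integration"]
print(compare_themes(human_themes, ai_themes))
```

In this toy run, two themes match conceptually, one AI theme is unique, and one human theme is unique, mirroring the matches/near-matches/uniques reporting structure used for Table 1.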
Prior to any AI-assisted querying, analysts complete a structured familiarization phase to ensure deep engagement with the corpus independent of AI summaries. The familiarization phase includes: (a) reading a minimum of 25% to 30% of transcripts in full, sampled across key document groups (e.g. year, kin type, age band); (b) reviewing interview notes or debrief memos, where available; and (c) producing a short immersion memo (≤500 words per research question) summarizing salient features, tentative hunches, emergent outliers, and language cues (e.g. irony, sarcasm, metaphor) that may require special attention during analysis. When feasible, the interviewer or a team member who conducted/observed the interviews serves as the lead analyst, leveraging preexisting contextual knowledge to anticipate nuance that AI might miss. This memo is archived in the audit trail and consulted during prompt design, negative-case exploration, and final synthesis. Familiarization ensures analysts enter AI-assisted querying with an independent mental model of the corpus, including anticipated outliers, contradictions, and symbolic cues (e.g. irony/metaphor).
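The stratified sampling rule for familiarization (reading at least 25–30% of transcripts, drawn across document groups) could be operationalized as in the sketch below. The group names, transcript identifiers, and fixed random seed are hypothetical illustrations.

```python
# Hypothetical sketch of the familiarization sampling rule: draw at
# least 30% of transcripts from each document group (e.g. year, kin
# type). Group names and transcript IDs are illustrative only.
import math
import random

def familiarization_sample(groups, fraction=0.30, seed=42):
    """Return transcript IDs to read in full, sampled per stratum."""
    rng = random.Random(seed)
    selected = []
    for name, transcripts in groups.items():
        k = max(1, math.ceil(fraction * len(transcripts)))
        selected.extend(rng.sample(transcripts, k))
    return selected

groups = {
    "year_one": [f"Y1-{i:02d}" for i in range(1, 21)],  # 20 transcripts
    "year_two": [f"Y2-{i:02d}" for i in range(1, 11)],  # 10 transcripts
}
sample = familiarization_sample(groups)
print(len(sample), sorted(sample))  # 6 from year one + 3 from year two
```

Sampling within each stratum, rather than from the pooled corpus, guarantees that smaller document groups (e.g. a less common kin type) are represented in the familiarization pass.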
After ensuring that the technical requirements were met (e.g. necessary software and hardware in place, active internet connection, clean transcripts), the evaluators created a fully built project file which included all deidentified NOK interview transcripts. Optionally at this stage, the interview transcripts may be assigned to one or more document groups (e.g. Year One Transcripts, male/female, age ranges) to facilitate criteria-specific analyses.
Once the project file was built, the evaluators focused on the process of using conversational AI for QDA. This process was designed to be user-friendly and scalable, allowing researchers to interact with the dataset through natural language queries, without the need for specialized QDA jargon. In this manner, the conversational AI tool within ATLAS.ti functions as an intelligent research assistant, capable of answering questions, clarifying nuances, retrieving exemplar quotations, and identifying themes. This approach differs from traditional coding methods, as it does not involve coding data in the conventional sense. Instead, it allows for dynamic and contextually relevant interactions with the dataset.
To ensure the effectiveness of the conversational AI tool, the evaluators implemented prompt engineering techniques. These techniques were designed to create thorough and specific questions that would elicit the best possible responses from the AI. The evaluators tested multiple generations of prompts, each more detailed than the previous, to determine the optimal approach for generating decision-fit and insightful answers. The final prompts were carefully crafted to maximize the AI's ability to provide relevant and actionable insights. The multi-generational approach to, and explicit instruction on, prompt engineering allows HCPH the flexibility to modify prompts to target current and future research questions.
The evaluators also conducted a preliminary analysis using the conversational AI tool to demonstrate its capabilities. This analysis included querying the dataset with specific questions and reviewing the AI-generated responses for procedural defensibility, applicability, and conceptual replicability. The results of this preliminary analysis were promising insofar as the analyses were consistent in all three areas, providing a clear example of the AI's potential to deliver insightful and actionable recommendations.
In addition to its technical aspects, the AQUATIC protocol emphasizes the importance of training and capacity building. The evaluators developed comprehensive training and deployment protocols to facilitate the rapid training of personnel in the use of conversational AI for QDA. The local-level capacity building includes detailed training materials and resources, hands-on practice sessions, and continuous support and feedback mechanisms. By providing structured and accessible training, the AQUATIC protocol ensures that local-level personnel can quickly and effectively utilize AI-powered tools, enhancing the overall efficiency and impact of the QDA process.
The AQUATIC protocol also includes guidelines for reviewing and validating the AI-generated findings. Given the potential for biased responses resulting from the large language model's training data or agent-induced limitations,19,20 it is crucial that human reviewers carefully examine the results (e.g. themes, exemplar quotations) to ensure their validity and relevance. All AI outputs in AQUATIC are treated as proposals subject to human analyst approval. For each research question or prompt, evaluators verify 100% of quoted evidence (verbatim text and hyperlink accuracy) and sample ≥30% of AI-generated answers for full contextual review (±5 sentence window and interview-level sentiment scan). If >10% of sampled answers fail any check, then the analyst escalates to 100% answer verification for that research question or prompt. Answers must be supported by ≥2 distinct interviews or quotations (depending on dataset size); we record a coverage rate and label answers as prevalent when ≥15% of interviews provide support.
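The verification thresholds above (≥30% contextual sampling, escalation when more than 10% of the sample fails a check, and a prevalence label at ≥15% interview coverage) can be expressed as simple decision rules. The sketch below is an illustration of that logic, not part of ATLAS.ti or the published protocol materials; the example counts are hypothetical.

```python
# Illustrative decision rules for AQUATIC-style answer verification.
# Thresholds follow the text; the example counts are hypothetical.
import math

def review_plan(n_answers, n_failed_in_sample,
                sample_fraction=0.30, escalate_above=0.10):
    """Decide whether the sampled review must escalate to 100%."""
    sample_size = math.ceil(sample_fraction * n_answers)
    failure_rate = n_failed_in_sample / sample_size
    return {
        "sample_size": sample_size,
        "failure_rate": failure_rate,
        "escalate_to_full_review": failure_rate > escalate_above,
    }

def prevalence_label(supporting_interviews, total_interviews,
                     threshold=0.15):
    """Label an answer 'prevalent' at >=15% interview coverage."""
    coverage = supporting_interviews / total_interviews
    return "prevalent" if coverage >= threshold else "present"

plan = review_plan(n_answers=40, n_failed_in_sample=2)
print(plan)                      # 2 of 12 sampled answers fail -> escalate
print(prevalence_label(6, 30))   # 6 of 30 interviews support the answer
```

With 40 answers, the 30% rule yields a sample of 12; two failures is a 16.7% failure rate, which exceeds the 10% trigger and forces full verification for that prompt.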
To mitigate the risk that AI returns “clean” answers that overlook minority or contradictory perspectives, the protocol mandates a negative-case review step during human verification in which analysts actively seek and document counter-examples using targeted prompts. During this phase, it is also suggested that the analyst create an outlier log capturing any low-prevalence but high-salience patterns identified during familiarization or verification, with a decision note on inclusion/exclusion and rationale.
AI capabilities and human expertise must collaborate synergistically to achieve optimal qualitative research outcomes. The synergy should be thoroughly investigated and overseen by human qualitative experts for definitive validation.14 The human validation step is essential to mitigate the risk of bias and to maintain the integrity of the data analysis process.
AQUATIC operationalizes top-down descriptive analysis by organizing the analytic process around prespecified research questions and rule-based prompts. Analysts prepare the data, load materials into ATLAS.ti, and then pose structured natural language queries that require the system to return concise descriptive conclusions with supporting quotations and links to source locations for verification. Human reviewers then validate each conclusion and quotation, correct mismatches, and synthesize a descriptive narrative organized by research question. This workflow prioritizes speed, evidentiary transparency, and reproducible retrieval of supporting text. It does not require traditional code assignment or iterative theory development, and it is not claimed as a substitute for interpretive methods.
Overall, the development of the AQUATIC protocol represents a structured articulation of an AI-assisted qualitative analysis workflow, rather than a novel qualitative methodology. By integrating conversational AI into the QDA framework, the protocol offers a scalable, efficient, and conceptually coherent method for analyzing qualitative data. This innovative approach holds the potential to transform public health research by enabling more timely and informed interventions that could save lives and improve community health outcomes. A key benefit of the protocol is that it was designed to be employed by staff or organizations without specialized training or skills in QDA. Furthermore, this protocol offers clear opportunities for transferability, facilitating broader use in diverse global contexts. The following section presents the protocol, and the section thereafter describes an example of its use in the field.
AQUATIC protocol
The AQUATIC protocol is designed to streamline QDA using ATLAS.ti's conversational AI tool. Below are the steps for implementing the protocol:
Define research objectives
Identify the general insights or questions you wish to explore with the dataset. This step ensures that the analysis is focused and relevant to the research goals.
Data familiarization [mandatory pre-analysis immersion]
Read a stratified sample (≥25–30%) of transcripts in full across document groups; review any interview notes/debrief memos; draft a brief immersion memo per research question (≤500 words) highlighting salient patterns, suspected outliers/contradictions, and cues for irony or metaphor.
Prepare data
Import all deidentified transcripts into the ATLAS.ti project file. Confirm that data are clean and organized for efficient processing. Optionally, you may add transcripts to document groups at this stage.
Access conversational AI tool
Open the conversational AI tool by selecting the “Search & Code” tab from the toolbar and then “Conversational AI” on the ribbon.
Select documents for analysis
Choose all transcripts or document groups that you wish to analyze.
AI document review
Allow the AI to read the selected documents. The AI will review the documents and indicate when it is “Ready” to respond to queries. Note that the time required for this step can vary based upon the number and size of documents as well as server speeds.
Query the AI
Use natural language queries, guided by CAREful construction,21 to ask your questions or discussion prompts. The design of research questions or discussion prompts is a critical step in the protocol. Greater specificity and articulation of rules will improve the quality of the query results. Paste your inquiry into the chat using the keyboard command “Ctrl + v”. The AI will process your request and generate a response. Note that drafting the prompts in advance and storing them in an auxiliary document for easy modification further streamlines this process.
Review AI responses
Carefully review the AI-generated findings to ensure that they align with your queries. Verify that 100% of exemplar quotations are applicable, correctly represent the data, and exist. Sample ≥30% of AI-generated answers for full contextual review. If >10% of sampled answers fail any check, then escalate to 100% answer verification for that research question or prompt. Note that this step must be completed by a human. To validate findings and view hyperlinked exemplar quotations, a reviewer must have ATLAS.ti installed locally, a copy of the project file imported into ATLAS.ti, the analysis file with hyperlinks, and access to the internet.
Export results
Save or export the discussion results for further analysis or reporting. This step ensures that your findings are documented and accessible for future reference.
As AQUATIC is intended to support capacity building and applied analytic work, Figure 1 illustrates the key procedural steps of the protocol in a flowchart format to support transparent and accessible implementation.

AQUATIC protocol flowchart. * CAREful prompt construction emphasizes context, the ask, rules, and examples.
Implementation process: HCPH as a case study
The implementation of AQUATIC in the HCPH COSSUP project involved several key steps. First, the analysis framework was established: defining the objectives, scope, and criteria for data analysis. Next, the data preparation phase involved deidentifying and importing NOK interview transcripts into ATLAS.ti, ensuring data integrity and confidentiality. The analysis was then conducted using conversational AI, with personnel querying the dataset to extract relevant insights and exemplar quotations.
The initial phase of implementation focused on establishing a robust analysis framework. The framework was designed to align with overarching goals of the HCPH COSSUP project, which aimed to understand the life experiences of overdose decedents and identify opportunities for community-based prevention and intervention strategies through interviews with NOK of overdose decedents. The framework outlined specific objectives, such as identifying common themes, patterns, and trends across the NOK interviews, and establishing criteria for evaluating the relevance and applicability of the findings.
After establishment of the analysis framework, the data familiarization phase began. The primary analyst collaborated with the interviewer and a second analyst to conduct a data familiarization pass which consisted of reading a stratified subset of transcripts in full, reviewing interview notes, and drafting immersion memos for each research question. These memos informed prompt wording (including dissent-seeking variants), guided negative-case exploration, and served as a reference during verification to ensure that outlier experiences and contradictory accounts were not lost in a “clean” summarization.
Subsequently, the preparation phase commenced. Data preparation is critical to ensure the integrity and confidentiality of the NOK interview transcripts. The evaluators meticulously cleaned and deidentified the transcripts, removing any personally identifiable information to protect the privacy of the interviewees. Once cleaned and deidentified, the transcripts were imported as separate Microsoft Word files into ATLAS.ti, the chosen QDA software for the AQUATIC protocol. During this step, the evaluators organized the dataset into a structured format to facilitate efficient querying and analysis.
With the dataset prepared and imported, the analysis phase began. The evaluators utilized the conversational AI tool within ATLAS.ti to interact with the dataset. This tool allowed personnel to query the dataset using natural language, enabling them to extract relevant insights and exemplar quotations with ease. In alignment with HCPH's COSSUP goals, three research questions were identified for investigation. The three research questions were:
R1. What public health policies in Hamilton County, Ohio would help prevent overdose deaths?
R2. What resources would have been beneficial to the NOK of overdose decedents to deal with grief around the death of their loved one?
R3. What resources do the interviewees identify that would have helped prevent the overdose deaths of their loved ones?
Prompt development for each of these research questions followed CAREful construction.21 The CARE elements emphasize specific attention to be placed upon context, the ask, rules, and examples. With prompts intended for conversational AI use, additional attention was given to the specificity of the prompt. The evaluation team tested three prompts to the same research question, each with increasing levels of specificity and adherence to CAREful construction. Using R1 as an example, the tested prompts, presented in order of increasing specificity, were:
Prompt 1
What policies would have been beneficial to prevent overdose deaths in Hamilton County?
Prompt 2
Ask: What public health policies in Hamilton County, Ohio would help prevent overdose deaths? Rule: 1) you must use exemplar quotations to support conclusions.
Prompt 3
Context: you are analyzing transcripts from interviews with the next of kin of overdose decedents in Hamilton County, Ohio. Ask: according to the transcripts, what public health policies in Hamilton County, Ohio would help prevent overdose deaths? Rules: 1) each conclusion that you present must have data references (links) to the transcripts, 2) each conclusion must have at least two exemplar quotations used to illustrate your conclusion, 3) do not fabricate any data.
The results of each successive prompt increased in specificity, applicability, and clarity. The three sample prompts above illustrate the importance of constructing specific and detailed prompts that are explicitly tied to a given dataset.
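For illustration, a CAREful prompt in the style of Prompt 3 can be assembled programmatically, which also makes it easy to store and modify prompts in an auxiliary document as the protocol suggests. The helper function below is a hypothetical sketch, not an ATLAS.ti feature.

```python
# Hypothetical helper assembling a CAREful prompt (Context, Ask,
# Rules, Examples). Mirrors Prompt 3 in the text; not an ATLAS.ti API.

def careful_prompt(context, ask, rules, examples=None):
    """Compose a prompt string from the four CARE elements."""
    parts = [f"Context: {context}", f"Ask: {ask}"]
    parts.append("Rules: " + ", ".join(
        f"{i}) {r}" for i, r in enumerate(rules, start=1)))
    if examples:
        parts.append("Examples: " + "; ".join(examples))
    return " ".join(parts)

prompt = careful_prompt(
    context=("you are analyzing transcripts from interviews with the "
             "next of kin of overdose decedents in Hamilton County, Ohio"),
    ask=("according to the transcripts, what public health policies in "
         "Hamilton County, Ohio would help prevent overdose deaths?"),
    rules=["each conclusion that you present must have data references "
           "(links) to the transcripts",
           "each conclusion must have at least two exemplar quotations",
           "do not fabricate any data"],
)
print(prompt)
```

Keeping the CARE elements as separate parameters encourages analysts to state each element explicitly, which is what drove the quality gains from Prompt 1 to Prompt 3.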
In addition to prompts for the three research questions, the evaluators also developed a prompt to generate a narrative summary of the NOK interviews. The summary includes exemplar quotations to substantiate the summative conclusions as well as a separate section with recommendations for community-based intervention or prevention programming based on data within the transcripts. As with the other uses of conversational AI, the results must receive human verification. The use of conversational AI with properly developed prompts enhanced the efficiency of the analysis process, allowing for rapid identification of key themes and patterns supported by exemplar quotations.
Conversational AI output example
Below is an output generated by ATLAS.ti's conversational AI when subject to the AQUATIC protocol. The purpose of including this is to clearly demonstrate what the conversational AI produces when the AQUATIC protocol is used and how it meets the criteria of top-down descriptive analysis. The following output resulted from querying the dataset with the most advanced prompt, presented above, to investigate R1.
Chat Answer:
Based on the transcripts from interviews with the next of kin of overdose decedents in Hamilton County, Ohio, several public health policies could be implemented to help prevent overdose deaths. Here are some conclusions supported by direct quotations from the transcripts:
Enhanced access to long-term treatment programs
· “I think programs need to be way longer… a lot of kids that he met, or young adults, it was 90 days, and they were gone”[1][2].
· “He was in rehab from August 5 to… like I said, the end of February”[3][4].
Integration of mental health services in substance use treatment
· “I just feel like partial, some kind of, you know, mental health or just some kind of recovery, that he would be okay”[5][6].
· “He knew he should, but he didn't think there was… he always said there was nothing for him to see that we were great”[7][8].
Development of peer support networks
· “You would be a great resource, you know, a calm mind helps people a lot”[13][14].
· “So, we're trying to figure out how we can help people, to get to them, before it gets to this point”[15][16].
Family support and education programs
· “I think if he had never met [girlfriend], he'd still be alive today”[17][18].
Community awareness campaigns
· “He was a good person… he just had issues like we all do”[11][12].
Output and implementation discussion
It is worth emphasizing, once again, that the conversational AI is more than a flashy search function. For example, interviewees were not explicitly asked “what policies would have been beneficial to prevent overdose deaths,” so the conversational AI is not simply combing through the transcripts for explicit policy recommendations. Instead, participants reflected on their loved ones' lives, how substance use or addiction affected the interviewees and the NOK, and what they believe could have helped the decedent. The conversational AI, like a human coder, examines the transcripts for patterns, actions, sentiments, assumptions, and emotions, which it then uses to identify themes. While one is unable to see into the “black box” that is the large language model's “reasoning,” the thematic outputs and associated exemplar quotations align with the analysis and synthesis resulting from human coding. The similarities in thematic conclusions suggest that the AI conducts sophisticated analytical processes akin to those of human QDA specialists. The theme-level comparison demonstrates that the analyst-approved AQUATIC themes exhibit substantial conceptual overlap with the independent human synthesis, while also surfacing a small number of unique themes in each direction. Because AQUATIC does not assign codes, results are reported conceptually (matches, near-matches, and unique themes), with verified quotations and links provided for AQUATIC themes to support transparency and reinspection. However, given the limited transparency into the “mind” of the large language model at this time, the emphasis on human verification within the AQUATIC protocol cannot be overstated.
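The theme-level comparison described above can be sketched as a simple classification of each AQUATIC theme against the human synthesis. The sketch below is a rough approximation only: in the project, matches and near-matches were judged conceptually by the analysts, not by the word-overlap heuristic used here, and the theme labels are illustrative rather than project data.

```python
# Hypothetical sketch of theme-level benchmarking: classify each
# analyst-approved AQUATIC theme as a match, near-match, or unique
# theme relative to an independent human synthesis. Word-overlap
# (Jaccard) similarity stands in for the analysts' conceptual judgment.

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two theme labels."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def classify_themes(ai_themes, human_themes, match=0.6, near=0.3):
    """Bucket AI themes by their best similarity to any human theme."""
    result = {"match": [], "near_match": [], "unique": []}
    for theme in ai_themes:
        best = max((jaccard(theme, h) for h in human_themes), default=0.0)
        if best >= match:
            result["match"].append(theme)
        elif best >= near:
            result["near_match"].append(theme)
        else:
            result["unique"].append(theme)
    return result

# Illustrative labels, not the project's actual theme lists.
ai = ["extended treatment programs",
      "peer support networks",
      "community awareness campaigns"]
human = ["need for longer treatment programs",
         "peer support and mentorship networks"]
print(classify_themes(ai, human))
```

In practice, every theme placed in any bucket would still carry its verified quotations and transcript links so that the comparison remains open to reinspection.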
Throughout the implementation process, the evaluators emphasized the importance of human oversight and review. While the conversational AI tool provided valuable insights, it is essential for the human analyst to ensure the conceptual coherence and relevance of the findings. For example, the conversational AI tool may extract quotations from a different part of the transcript than it identifies or may present an exemplar quote out of context. The embedded hyperlinks, which direct the analyst to transcripts and sections from which the quotations were retrieved, are a substantial timesaving tool and vital to the human verification process. Personnel carefully reviewed the AI-generated responses, validating the conclusions and exemplar quotations to mitigate the risk of biased or untraceable results. Human review is crucial to maintaining the integrity of the analysis and ensuring that the recommendations are based on reliable and valid data.
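One mechanical part of this verification, confirming that each AI-quoted excerpt actually appears in a transcript, can be sketched as follows. This is a hypothetical illustration, not project tooling: in the HCPH work, verification was done by following the embedded hyperlinks and reading in context. Because quotations may be elided with “…”, each ellipsis-separated fragment is checked independently; the transcripts and quotes below are invented examples.

```python
# Hypothetical sketch of quote traceability checking: flag any
# AI-supplied quotation whose text cannot be found verbatim in the
# transcripts. Transcript texts and quotes are illustrative only.
import re

def normalize(text: str) -> str:
    """Collapse whitespace and unify curly quotes/apostrophes for matching."""
    text = text.replace("\u2019", "'").replace("\u201c", '"').replace("\u201d", '"')
    return re.sub(r"\s+", " ", text).strip()

def quote_is_traceable(quote: str, transcripts: dict[str, str]) -> list[str]:
    """Return IDs of transcripts containing every fragment of the quote."""
    fragments = [normalize(f) for f in quote.split("\u2026") if f.strip()]
    hits = []
    for tid, text in transcripts.items():
        body = normalize(text)
        if all(frag in body for frag in fragments):
            hits.append(tid)
    return hits

transcripts = {
    "NOK-01": "He was doing well for a while. I think programs need to be "
              "way longer than they are now.",
    "NOK-02": "She never got the chance to try a longer program.",
}
print(quote_is_traceable("I think programs need to be way longer", transcripts))
```

A quotation that matches no transcript would be flagged for analyst review; a literal match alone is not sufficient, since the analyst must still read the surrounding context to confirm the quote is not presented misleadingly.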
Implementation of the AQUATIC protocol also involved comprehensive training and capacity building efforts. The evaluators developed detailed training materials and resources to support personnel using ATLAS.ti, and the conversational AI tool in particular, effectively. The evaluation team conducted hands-on practice sessions to familiarize personnel with the tool and its functionalities, and established continuous support and feedback mechanisms to address any challenges or questions that arose during the analysis process. All of these efforts were aimed at enhancing local-level capacity to undertake and continue the work, ensuring that the benefits of the AQUATIC protocol could be sustained beyond the COSSUP project's funding.
As a case study, applying the AQUATIC protocol to the HCPH COSSUP project generated descriptive insights from the qualitative data collected through NOK interviews. Firstly, the AQUATIC protocol highlighted the critical need for extended treatment programs. Many NOK expressed that short-term rehabilitation programs were insufficient for lasting recovery, emphasizing the necessity for longer and more comprehensive treatment options. This insight suggests that health policies should prioritize the development and funding of extended rehabilitation programs to better support individuals struggling with substance use disorders.
Secondly, the integration of mental health services into substance use treatment programs emerged as a vital component of effective intervention strategies. The analysis revealed that numerous overdose decedents had underlying mental health issues that were not adequately addressed during their recovery. This finding underscores the importance of incorporating mental health support into addiction treatment programs to address the root causes of substance use and improve overall outcomes.
Finally, the analysis identified the impactful role of peer support networks in the recovery process. Establishing peer support networks can provide individuals in recovery with the community and understanding they need to navigate their challenges, reducing feelings of isolation and promoting accountability. This result suggests that health initiatives should focus on developing and strengthening peer support systems to enhance recovery efforts.
While the qualitative findings are based upon a limited dataset and not intended to be generalizable, they are included here to demonstrate the potential of the AQUATIC protocol to transform the QDA process in public health research. By leveraging conversational AI, the project developed local-level capacity to analyze NOK interview data more efficiently, providing timely and actionable insights that may inform public health strategies. The successful implementation highlights the value of integrating AI-powered tools into QDA frameworks, paving the way for more effective and responsive public health interventions.
The implementation of the AQUATIC protocol in the HCPH COSSUP project involved a series of carefully planned steps, from establishing the analytical framework to conducting the analysis using conversational AI. The project showcases the potential of AI-powered tools to augment the QDA process, thereby contributing to accelerated public health interventions. The comprehensive training and capacity building efforts ensured that local-level personnel could effectively utilize the conversational AI tool, supporting the sustainability of the protocol and its benefits. As a case study, the HCPH COSSUP project serves as a model for other public health initiatives seeking to leverage AI for QDA, demonstrating the transformative impact of the AQUATIC protocol on public health research and practice.
Discussion
Examining the influence of the AQUATIC protocol on public health strategies and interventions, this section underscores the effectiveness of AI in top-down descriptive analysis by showcasing the enhanced efficiency, conceptual coherence, and depth of insights offered by tools like ATLAS.ti conversational AI. The discussion then turns to the broader applicability of the AQUATIC protocol across various fields, such as education, social science, and market research. Implications for future research and practice are then considered, including the benefits of using AI for QDA in diverse organizational contexts. The section concludes with limitations and considerations for future research.
We explicitly delimit AQUATIC to top-down descriptive analysis. The protocol excels when stakeholders need timely, traceable descriptions that answer prespecified questions, because it combines prompt-driven synthesis with verifiable quotations and mandated human review. This scope yields clear strengths in speed, scalability, and evidentiary transparency for decision support. At the same time, AQUATIC is not positioned to replace interpretive or grounded approaches that require iterative coding, reflexive memoing, and theory building, as such aims demand more intensive human control than our protocol prescribes. Future work may formalize a deductive configuration with a stable coding frame and adjudication procedures, but we present AQUATIC here as a descriptive, question-led protocol.
The application of AI-powered methods in QDA has demonstrated significant advantages in terms of efficiency, conceptual coherence, and depth of insights. The AQUATIC protocol, implemented through the ATLAS.ti conversational AI tool, exemplifies these benefits by transforming the analysis of qualitative data from interviews with the NOK of overdose decedents.
One of the primary advantages of the AQUATIC protocol is the substantial increase in efficiency. Traditional qualitative analysis methods often involve labor-intensive processes of manual coding and thematic analysis, which can be time-consuming and prone to human error. In contrast, the ATLAS.ti conversational AI tool can process large datasets rapidly, identifying patterns and themes with a level of consistency and speed that is difficult to achieve manually. This efficiency allows researchers to analyze more data in less time, thereby accelerating the research process and enabling more timely interventions.
Conceptual coherence is another benefit of AI-powered QDA. The ATLAS.ti conversational AI tool leverages advanced natural language processing algorithms to understand and interpret complex qualitative data. This capability reduces the likelihood of human bias and error, ensuring that the analysis is both reliable and replicable. Furthermore, the AI's ability to identify relevant themes with conceptual replicability enhances the overall quality of the analysis, providing a robust foundation for drawing meaningful conclusions.
The depth of insights gained through the AQUATIC protocol is also notable. The conversational AI tool can engage in sophisticated dialogue with the dataset, allowing researchers to explore nuanced questions and uncover deeper layers of meaning. For example, the AI can identify subtle connections between different themes, such as the interplay between mental health issues and substance use, which might be overlooked in manual analysis. This depth of analysis provides a more comprehensive understanding of the data, informing more effective public health strategies and interventions.
The versatility of the AQUATIC protocol extends beyond the specific context of the HCPH COSSUP project, offering valuable applications in various fields such as education, social science, and market research. The principles and methods outlined in the protocol can be adapted to different types of qualitative data, making it a powerful tool for a wide range of research and practice settings.
In the field of education, the AQUATIC protocol could be used to analyze student feedback, identify areas for improvement, and develop targeted interventions to enhance learning outcomes. For instance, analyzing qualitative data from student surveys may reveal insights into the effectiveness or contextual suitability of teaching methods, classroom dynamics, and the needs of diverse student populations. Results could inform the development of evidence-based educational strategies and policies, thereby improving the quality of education.
Social science research could also benefit broadly from the application of the AQUATIC protocol. As an example, researchers studying complex social phenomena, such as social inequality or community resilience, could use AI-powered tools to analyze interview data, focus group discussions, fieldnotes, and other qualitative data sources. The ability to process large volumes of data efficiently may support conceptually aligned, question-driven analyses, enabling researchers to explore patterns and themes relevant to the experiences and perspectives of marginalized communities. Greater understanding of various social phenomena can inform the development of interventions and policies aimed at addressing social disparities and promoting social justice.
In the realm of market research, the AQUATIC protocol could be used to analyze consumer feedback, social media interactions, and other qualitative data sources to understand customer preferences and behaviors. This information could guide the development of marketing strategies, customer personas, product design, and customer service initiatives, ultimately enhancing business outcomes. For example, analyzing qualitative data from customer reviews may reveal insights into product strengths and weaknesses, as well as customer purchasing habits, thus informing product development and marketing efforts.
Furthermore, integrating the AQUATIC protocol into organizational practice may improve decision-making processes by providing timely insights. For example, public health organizations could use the AQUATIC protocol to analyze qualitative data from community surveys, focus groups, and other sources to inform the development of targeted interventions and policies.
The use of AQUATIC can enhance the capacity of organizations to independently undertake and continue qualitative research. By reducing the time and effort required for data analysis, AI tools can enable organizations to conduct ongoing qualitative research, continuously gathering and analyzing data to inform their strategies and practices. The capacity-building aspect of AQUATIC is particularly important for organizations with limited resources, as it allows them to leverage AI technology to enhance their research capabilities and achieve their goals.
Limitations and future work
There are some important limitations to consider in relation to this work. Firstly, AQUATIC necessitates human oversight and verification. While noted multiple times above, it bears repeating that human validation constitutes a critical methodological safeguard for mitigating bias and maintaining the integrity of the analytic process. Relatedly, another limitation is the potential for bias arising from the large language model's training data, compounded by the inability to see into the model's “reasoning.” Consistent with other AI-assisted analytic tools, AQUATIC outputs must undergo human review to mitigate potential bias and ensure conceptual coherence; while this is not an unusual step when working with such tools, it underscores the importance of human involvement.
Since ATLAS.ti's conversational AI in AQUATIC does not produce code assignments, conventional agreement metrics (e.g., Cohen's kappa) cannot be computed; instead, agreement between the AI outputs and the independent human synthesis is reported conceptually at the theme level, as matches, near-matches, and unique themes.
A further limitation concerns symbolic interpretation. Large language model-assisted querying excels at retrieving semantically similar text segments responsive to a question but may underperform on irony, sarcasm, metaphor, tacit meaning, or significant silences. Our protocol mitigates this risk by requiring human analysts to verify every quoted evidence item in context (± five sentences), conduct interview-level sentiment scans, and run negative-case prompts to surface counter-evidence before confirming any theme. Even so, these safeguards do not transform AQUATIC into a method for deep interpretive or theory-generating analysis; rather, they ensure that descriptive claims remain anchored to verifiable text and transparently qualified where ambiguity persists.
Another consideration is the application of AQUATIC to other QDA tools. While the AQUATIC protocol is, in principle, adaptable to other computer-assisted QDA platforms with conversational AI capabilities, replicability has not been established outside the ATLAS.ti environment. Contextual testing of alternative tools and language models would be necessary to ensure comparable performance and procedural consistency.
Beyond the specific context of the HCPH COSSUP project, the versatility of the AQUATIC protocol extends outside public health research, offering broad benefits across various fields including education, social science, and market research. The principles and methods outlined in the protocol can be adapted to different contexts and types of qualitative data, making it a powerful tool for a wide range of research and practice settings. For example, in education, the protocol can be used to analyze student feedback and develop targeted interventions to enhance learning outcomes. Social science researchers can leverage the protocol to examine complex social phenomena, providing deeper insights into issues like social inequality, group dynamics, and community resilience. Market researchers may use AI-driven tools to understand consumer preferences and behaviors, guiding the development of marketing strategies and product design. Adapting the AQUATIC protocol to different contexts may improve the efficiency of organizations' qualitative research, leading to more informed decision-making and more impactful outcomes.
A key opportunity for future research is the potential for AI-powered QDA to enhance the scalability of qualitative studies. Through automation of the analysis process, researchers can handle larger datasets and conduct more comprehensive studies without the limitations of manual analysis. The increased efficiency could lead to more robust and generalizable findings, ultimately contributing to the advancement of knowledge in various fields.
The implications for future research and practice are meaningful. As AI technology continues to evolve, its applications in qualitative research are likely to expand rapidly, offering novel opportunities for innovation and impact. The AQUATIC protocol paves the way for more effective and responsive research practices, benefiting diverse organizational contexts and improving community health outcomes. By integrating AI-powered tools into QDA frameworks, researchers and practitioners can enhance their capacity to conduct ongoing qualitative research, continuously gathering and analyzing data to inform their strategies and practices. This advancement in QDA methodology holds the potential to transform various fields, ultimately contributing to the advancement of knowledge and the betterment of society.
Conclusion
The development and implementation of the AQUATIC protocol for the HCPH COSSUP project represent a practical formalization of existing informal QDA practices using AI-assisted tools. Through the integration of AI-powered tools, specifically the ATLAS.ti conversational AI, the protocol has demonstrated the potential to enhance the efficiency of top-down descriptive analysis.
The AQUATIC protocol addresses several key challenges associated with alternative QDA methods, including the labor-intensive nature of manual coding and the limitations of intentional AI coding. Through the use of conversational AI, the protocol offers a scalable and user-friendly approach that allows researchers to interact with the dataset using natural language queries. The AQUATIC protocol not only accelerates the analysis process but also promotes a high degree of decision-fit and depth of insights, evidenced by the findings from the NOK interviews.
One of the most notable outcomes of the AQUATIC protocol is its ability to rapidly build local-level capacity for QDA. The comprehensive training and deployment protocols developed by the evaluators have enabled HCPH personnel to effectively utilize the conversational AI tool, ensuring that they can continue this work independently once the COSSUP grant concludes. The capacity-building aspect is particularly important for public health organizations with limited resources, as it allows them to leverage AI technology to enhance their research capabilities and achieve their goals.
Insights gained from the AI-powered analysis of NOK interviews provided valuable recommendations for public health strategies and interventions. The recommendations underscore the benefits derived from the AQUATIC protocol and illustrate the protocol's multi/transdisciplinary applicability within public health.
The AQUATIC protocol represents a practical advancement in QDA, offering a scalable and efficient method for conducting top-down descriptive analysis. Its application in the HCPH COSSUP project has demonstrated the protocol's potential to inform public health strategies and interventions, and its versatility suggests broader applicability in various fields. By leveraging AI-powered tools, the protocol enhances local capacity to undertake and continue qualitative research, ultimately contributing to more timely and informed public health interventions. As AI technology continues to evolve, the AQUATIC protocol paves the way for more effective and responsive research practices, benefiting diverse organizational contexts and improving community health outcomes.
Acknowledgments
The authors would like to thank Hamilton County Public Health and the Hamilton County Heroin Coalition and Taskforce for their collaboration and commitment toward the goal of better understanding addiction and developing more effective prevention and intervention efforts.
Ethical approval
This study was approved by the Ohio University Institutional Review Board (no. 42988).
Contributorship
S.B. led protocol conceptualization and development, the analysis, and writing of the paper. J.W. oversaw the project progress, led funding acquisition for the project, and contributed to protocol validation. C.G.B. contributed to protocol validation. E.C. contributed to protocol validation and testing, data curation, and the analysis. All authors contributed to the editing and final approval of the manuscript.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by a Comprehensive Opioid, Stimulant, and Substance Use Program (COSSUP) grant awarded to Hamilton County Public Health by the U.S. Department of Justice, Office of Justice Programs, Bureau of Justice Assistance. The contents of this report are solely the responsibility of the authors and do not necessarily represent the official views of the U.S. Department of Justice. The authors also acknowledge the dedication of project personnel and community partners who supported data collection and implementation activities integral to this effort. This project was supported by Grant No. 15PBJA-23-GG-02335-COAP awarded by the Bureau of Justice Assistance, Office of Justice Programs, U.S. Department of Justice. Points of view or opinions in this document are those of the author and do not necessarily represent the official position or policies of the U.S. Department of Justice.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
