Abstract
Expedited implementation of evidence into practice and policymaking is critical to ensure the delivery of effective care and improve health-care outcomes. Implementation science concerns the design and selection of methods and strategies for increasing and facilitating the uptake of evidence into practice and policymaking. However, designing and selecting methods and strategies for implementing evidence is complicated by the complexity of the health-care settings where implementation occurs. Artificial intelligence (AI) has revolutionized a range of fields, including genomics, education, drug trials, research, and health care. This commentary discusses how AI can be leveraged to expedite implementation science efforts for transforming health-care practice. Four key aspects of AI use in implementation science are highlighted: (a) AI for implementation planning (e.g., needs assessment, predictive analytics, and data management), (b) AI for developing implementation tools and guidelines, (c) AI for designing and applying implementation strategies, and (d) AI for monitoring and evaluating implementation outcomes. Use of AI along the implementation continuum from planning to delivery and evaluation can enable more precise and accurate implementation of evidence into practice.
Implications for Practice and Research
Using AI methods can enable implementation scientists to integrate relevant individual-, organizational-, and community-level data from a range of datasets and databases on a variety of outcomes and interventions. AI-based tools and guidelines can enable practitioners, clinicians, and implementation scientists to work with real-time data and information on implementation interventions, strategies, processes, and challenges. Using AI methods to design and execute implementation strategies has the potential to optimize their effect by allowing tailoring of strategies to organizational and individual needs with more precision.
Artificial intelligence (AI) has emerged to play an increasingly critical role in research, development, and testing within science, technology, engineering, and mathematics (STEM) and non-STEM disciplines (Xu et al., 2021). While there is no universally accepted definition of AI, it broadly refers to “systems that display intelligent behavior by analyzing their environment and taking actions—with some degree of autonomy—to achieve specific goals” (Sheikh et al., 2023, p. 20). AI has revolutionized a range of fields such as genomics (Dias & Torkamani, 2019; Quazi, 2022), education (Alqahtani et al., 2023), and drug and health-care research (Alowais et al., 2023; Chakraborty et al., 2023; Davenport & Kalakota, 2019). The types of AI commonly used in health care include machine learning (ML), natural language processing, data science, rule-based expert systems, robots, and robotic process automation (Davenport & Kalakota, 2019). Emerging AI technologies and techniques are potentially valuable for implementation research and offer both opportunities and challenges for translating research evidence into practice. The purpose of this commentary is to discuss how AI can be leveraged to expedite implementation science efforts to transform health-care practice, highlighting four key aspects of AI use in implementation science.
Artificial Intelligence for Implementation Planning
Implementation planning is imperative for sustainable uptake of research evidence into practice. Effective planning plays an instrumental role in optimizing the short- and long-term impacts of evidence uptake on health-care outcomes (Smith et al., 2022). Implementation planning often involves needs assessment; contextual adaptation of interventions and implementation strategies; examining prior data on intervention development, assessment, and evaluation; data management; and cost/benefit analysis of evidence-based intervention and implementation strategies (Eisman et al., 2020; Gagliardi et al., 2015; Smith et al., 2022). Before implementation, researchers and implementation practitioners often collect a wide range of data about evidence-based intervention(s), implementation settings, context, individual and organizational barriers, and readiness for uptake; selecting and incorporating relevant, meaningful, and accurate data requires effective data management for designing implementation plans (Pearson et al., 2020).
AI can be potentially useful for efficient organization and management of the large datasets and databases on which implementation planning and needs assessment are usually based. Using data science methods can enable implementation scientists and implementation support practitioners (i.e., nonacademic implementation experts supporting clinicians in implementation research) to integrate relevant individual-, organizational-, and community-level data from a range of datasets and databases on a variety of outcomes and interventions (Pearson et al., 2020; Secinaro et al., 2021; Yu et al., 2018). One concrete example of using AI in implementation planning is predictive analytics, a process of applying advanced statistical algorithms, software tools, and techniques to examine and interpret complex data in order to forecast trends, predict data patterns, or anticipate process behavior within or beyond the observed datasets (Dinov, 2018). Studies have demonstrated the usefulness of predictive analytics in implementation planning (Amarasingham et al., 2014; Moussa et al., 2021; Ng & Tan, 2021; Sendak et al., 2020; Sharma et al., 2022). For example, Moussa et al. (2021) explored the barriers to the implementation of professional services in community pharmacies and predicted the effectiveness of facilitation strategies for overcoming implementation barriers using ML techniques. In this 2-year change program, they identified 1,131 barriers and facilitation strategies and demonstrated the use of data-driven approaches to predict, with 96.9% accuracy, which tailored facilitation strategies would be effective during intervention implementation. Predictive analytics can enable implementation scientists and practitioners to generate more accurate and reliable predictions about the outcomes of particular interventions and implementation strategies under particular contexts, settings, or conditions (Shaw et al., 2019).
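To make the predictive-analytics idea concrete, the sketch below trains a simple logistic-regression classifier to predict whether a facilitation strategy will overcome an implementation barrier, in the spirit of the data-driven approach reported by Moussa et al. (2021). The features, data, and model are invented for illustration and are not the classifiers used in that study.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Plain gradient-descent logistic regression (no external libraries)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # prediction error drives the weight update
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return int(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5)

# Invented toy features per barrier:
# [barrier_severity, staff_readiness, prior_success_rate]
X = [[0.9, 0.2, 0.1], [0.8, 0.3, 0.2], [0.2, 0.9, 0.8],
     [0.3, 0.8, 0.9], [0.7, 0.4, 0.3], [0.1, 0.7, 0.9]]
y = [0, 0, 1, 1, 0, 1]  # 1 = strategy expected to overcome the barrier

w, b = train_logistic(X, y)
preds = [predict(w, b, xi) for xi in X]
accuracy = sum(p == t for p, t in zip(preds, y)) / len(y)
print(accuracy)
```

In practice, such a model would be trained on historical barrier-strategy outcome data and validated on held-out cases before informing planning decisions.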
Artificial Intelligence for Developing Implementation Tools and Guidelines
Implementation science entails developing and using a range of tools and guidelines for researchers, implementation support practitioners, and clinicians (Macdermid et al., 2013; Moore et al., 2017; Straus et al., 2013). Commonly used tools include quick reference guides for practitioners and clinicians, educational materials, and indicators of performance measurement and/or evaluation (Brownson et al., 2018; Gagliardi et al., 2011; Liang et al., 2017). The application of AI for the development of tools and guidelines for research, clinical decision making, and implementation has been emerging (Basu et al., 2020; Oliveira et al., 2014). Findings suggest that AI can facilitate clinical documentation, patient outreach, medical device automation, and patient monitoring (Basu et al., 2020; Bohr & Memarzadeh, 2020). Since implementation science often focuses on introducing innovative tools and guidelines for transferring evidence into practice, implementation scientists can use AI methods for designing assessment, implementation, evaluation, and monitoring tools and guidelines. Two scoping reviews outlined the application of AI for developing tools for improving the delivery of clinical care (Ng et al., 2022; von Gerich et al., 2022). von Gerich et al. (2022) identified 55 studies that focused on AI-applied techniques that could be used to establish implementation tools and guidelines, such as speech recognition, scheduling, documentation, care planning, outcome prediction, risk identification, health assessment, and education. Similarly, Ng and colleagues (2022) reviewed 37 studies, finding that AI could be used to improve implementation of evidence in clinical nursing through documentation, nursing care plans, and outcome prediction. 
Using AI-developed tools and guidelines can enable implementation scientists, implementation support practitioners, and clinicians to work with real-time data on implementation strategies, processes, and challenges (King & Kahn, 2023; Pearson et al., 2020).
Another example of AI for developing implementation tools is Opal, a specialized anesthesia information management system-based ML system (Bishara et al., 2022). This tool was designed for clinical and research uses, allowing health-care providers and researchers to access data from electronic health records. Opal offers extraction of large amounts of specific data, modifiable queries, and a comprehensive dashboard for data visualization and implementation of ML algorithms, informing clinical decision-making and research implementation and evaluation (Bishara et al., 2022).
Artificial Intelligence for Designing and Applying Implementation Strategies
For implementing research evidence into practice, implementation scientists, practitioners, and clinicians can collaborate in using a wide range of strategies including checklists, audit and feedback (i.e., an implementation strategy to encourage behavior change among clinicians), educational tools, outreach visits, reminders, and champions (Bauer & Kirchner, 2020; Powell et al., 2015; Reynolds & Granger, 2023). Two systematic reviews demonstrated that implementation strategies are useful in enhancing the uptake of interventions in practice, but the effect is only small to moderate (Goorts et al., 2021; Kovacs et al., 2018). For example, Kovacs et al. (2018) reviewed 36 studies to evaluate the effectiveness of a range of implementation strategies for the uptake of non-communicable disease guidelines in primary health care, including distribution of materials, audit and feedback, motivational interviews, reminders, and patient-mediated strategies. These authors categorized the implementation strategies into six schemes (single, multifaceted, patient-mediated, educational, audit, and outreach). The analysis showed an overall moderate effect size of 0.22 (effect size is a quantitative measure of the magnitude of a phenomenon, commonly interpreted using Cohen's d: 0.2 = small, 0.5 = medium, and 0.8 = large; Cohen, 1992) for all implementation strategies. The effectiveness of implementation strategies may vary across contexts and settings and may also be contingent on the characteristics of clinicians, patients, and practitioners.
Using AI methods to design and execute implementation strategies has the potential to optimize their effect by allowing tailoring of strategies based on organizational and individuals' needs with more precision (Bohr & Memarzadeh, 2020; King & Kahn, 2023; Yu et al., 2018). King and Kahn (2023) demonstrated the usefulness of data science methods for designing and applying six implementation strategies: (a) dynamic checklists, (b) customized audit and feedback, (c) intelligent prompts for evidence-based practice, (d) context-adaptive recommender systems, (e) just-in-time evidence, and (f) intelligent workflow design for informing clinical care. For example, audit and feedback strategies offer health-care professionals data on their past performance, enabling them to reflect on their performance and improve care for future patients. Feedback is often provided after the moment of need, which limits its timeliness and practical value. However, using AI technologies, customized audit and feedback could be computerized, and possibly gamified, based on electronic health record data and records of past activities. This could offer more consistent, real-time feedback, allowing health professionals to learn from their prior actions and change behaviors accordingly. Feedback can also be personalized, with professionals who need improvement in a specific area receiving more frequent feedback than others. In this way, performance can be compared among professionals at different points in time with minimal effort and more precision.
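The personalization logic described above can be reduced to a simple rule: clinicians whose adherence in a focus area falls below a target receive feedback on a shorter cycle. The function, thresholds, and intervals below are invented for illustration, not taken from King and Kahn (2023).

```python
def feedback_interval_days(adherence_rate, target=0.8,
                           frequent=7, routine=30):
    """Shorter audit-and-feedback cycle for clinicians below the target.

    adherence_rate: fraction of opportunities (from EHR data) on which
    the clinician followed the evidence-based practice in question.
    """
    return frequent if adherence_rate < target else routine

print(feedback_interval_days(0.65))  # below target -> weekly feedback
print(feedback_interval_days(0.92))  # at/above target -> monthly feedback
```

A production system would replace this single threshold with a learned model over longitudinal performance data, but the tailoring principle is the same.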
Example: Facilitating Evidence-Based Rounding in an Intensive Care Unit
In an article in the Journal of Biomedical Informatics, King et al. (2023) described the use of a voice-based digital assistant to prompt intensive care unit (ICU) rounding teams to use evidence-based practices based on analysis of their real-time discussions. First, they used both audio and video to record ICU rounds for 90 days, capturing 743 patient discussions. They also utilized expert observers to shadow the rounds and document adherence to a reference standard for each component of the ABCDEF bundle addressed during each patient discussion. This bundle consists of: (A) Assess, prevent, and manage pain; (B) Both spontaneous awakening trials and spontaneous breathing trials; (C) Choice of analgesia and sedation; (D) Delirium: assess, prevent, and manage; (E) Early mobility and exercise; and (F) Family engagement and empowerment (Marra et al., 2017). A designated team captured the audio recordings for each patient's round using an omnidirectional microphone and uploaded them to cloud storage.
Two expert nurse observers in real time documented adherence to the reference standard on a subset of the data collection days. These nurses watched and listened to the care team and evaluated each patient discussion without looking into patient rooms, so that they had access to only the same information as the intelligent prompting system. The data gathered by the expert observers was considered a testing set and the data recorded via microphone was considered a training set. The transcription of the training set data was completed using Amazon Transcribe Medical and the performance of this system was assessed by comparing automatically generated machine transcripts to the corresponding manually corrected and de-identified human transcripts.
Data transcription was followed by fact extraction (i.e., automatically assigning each spoken sentence to one of the reference standards for patient outcomes, such as “patient is receiving mechanical ventilation” or “patient is on continuous sedation”) and adaptation of clinical checklists. To evaluate the system's output, they compared the adapted checklists with a standard static paper checklist that was the same for each patient.
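A minimal way to picture fact extraction is pattern matching from transcribed sentences to bundle-relevant facts. The sketch below uses keyword rules invented for illustration; King et al. (2023) used a more sophisticated pipeline, and these fact names and patterns are assumptions.

```python
import re

# Hypothetical fact patterns mapping transcript sentences to
# ABCDEF-bundle-relevant facts (illustrative only).
FACT_PATTERNS = {
    "patient_on_mechanical_ventilation": re.compile(r"\b(ventilat|intubat)", re.I),
    "patient_on_continuous_sedation": re.compile(r"\b(sedat|propofol)", re.I),
    "family_engaged": re.compile(r"\bfamily\b", re.I),
}

def extract_facts(sentence):
    """Return the bundle facts whose pattern matches the sentence."""
    return [fact for fact, pat in FACT_PATTERNS.items() if pat.search(sentence)]

print(extract_facts("She remains intubated on a propofol drip."))
```

Facts extracted this way can then drive checklist adaptation: items already addressed in the discussion are checked off, and unaddressed items become candidate prompts.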
The performance of the voice-based digital assistant for prompting evidence-based practices was compared to that of an expert nurse. The digital assistant generated more prompts (n = 280) across various domains of the ABCDEF bundle compared to the nurse (n = 186), but the frequency of each prompt was the same. Compared to the static paper-based checklist (6 prompts per bed), the digital assistant (2.6 prompts per bed) generated 56% fewer prompts per bed, with a positive predictive value of 0.45, a negative predictive value of 0.83, a true positive rate of 0.68, a true negative rate of 0.66, and 50% greater precision. This study offered evidence that AI-based implementation strategies (i.e., a digital assistant) are more scalable than dedicated human-delivered implementation (i.e., human checklist prompters). In other words, a voice-based digital assistant is useful in reducing prompts per patient compared to traditional approaches for increasing uptake of research evidence in ICU rounds. Therefore, incorporating AI in designing and executing implementation strategies can be promising for reducing human effort, achieving greater precision, and expediting the uptake of evidence in practice.
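The four rates reported above all derive from a standard confusion matrix. The sketch below shows the formulas; the underlying counts are invented to approximate the reported rates, since King et al. (2023) are cited here only for the rates themselves.

```python
def prompt_metrics(tp, fp, tn, fn):
    """Confusion-matrix metrics for prompt correctness.

    ppv = positive predictive value (precision)
    npv = negative predictive value
    tpr = true positive rate (sensitivity)
    tnr = true negative rate (specificity)
    """
    return {
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "tpr": tp / (tp + fn),
        "tnr": tn / (tn + fp),
    }

# Invented counts chosen only to approximate the reported rates.
m = prompt_metrics(tp=45, fp=55, tn=103, fn=21)
print({k: round(v, 2) for k, v in m.items()})
```

Read against the study, the modest PPV (0.45) means many prompts flagged items the team had in fact addressed, while the higher NPV (0.83) means the assistant rarely stayed silent about a genuinely missed bundle element.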
Artificial Intelligence for Monitoring and Evaluating Implementation Outcomes
Proctor and colleagues (2011) identified outcomes for implementation science research, including acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration, and sustainability. Different from intervention outcomes, these research outcomes help implementation science researchers and practitioners understand the effectiveness of the implementation process itself, enhancing efficiency for future implementation research (e.g., understanding how or if the implementation strategies or processes worked) (Proctor et al., 2011). There is not yet published evidence on how AI technologies could be used to monitor and evaluate implementation outcomes; however, the possibilities are evident.
Graili et al. (2021) evaluated the use of AI in outcomes research in a systematic review of 370 studies. They noted that AI was used to predict and determine prognoses for frequently reported mortality and morbidity outcomes, and to evaluate the efficacy and effectiveness of preventive, diagnostic, and therapeutic interventions on morbidity outcomes. Similar use of AI in monitoring and evaluating implementation research is possible. For example, AI may be able to collect and monitor data during and at the completion of an implementation science study to provide insight into the success of individual strategies and processes (e.g., suggesting which strategies were most adopted and/or most feasible). AI can be applied to predict the number and type of implementation outcomes for specific interventions and associated implementation strategies. This information in turn would assist implementation science researchers and practitioners in designing future studies focusing on those strategies and processes that were most impactful, thereby reducing waste of time, money, and other resources.
Implications for Research and Practice
This commentary brings to attention the importance of AI for improving implementation science efforts for providing evidence-based care to patients. There are limited examples of AI specifically in implementation science, and most of the efforts have been devoted to translating AI into health-care settings using implementation science methods. However, as Hogg et al. (2023) articulated, there is currently a gap, referred to as “AI chasm” (p. 2), that limits how AI technologies are used in the field of implementation science. The term AI chasm, originally coined by Keane and Topol (2018), refers to “the gulf between developing a scientifically sound algorithm and its use in any meaningful real-world applications” (p. 1). Ongoing research can help generate more insights on how AI can add value to implementation planning, delivery, and evaluation.
Conclusions
Timely, efficient, and sustainable implementation of research evidence into practice is of utmost importance for ensuring the delivery of effective, high-quality health care. Use of AI along the implementation continuum from planning to delivery and evaluation can enable more precise and accurate translation of evidence into practice. Leveraging AI methods and techniques to expedite implementation efforts needs the collective efforts of implementation scientists, clinicians, organizations, and implementation support practitioners.
Footnotes
Disclosure
The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Ahtisham Younas is Editor in Chief for Creative Nursing.
Funding
The author(s) received no financial support for the research, authorship and/or publication of this article.