Abstract
The prevalence of artificial intelligence (AI) is rapidly growing across industries, including health care. AI has the potential to improve patient safety (e.g., by reducing diagnostic error), reduce clinician workload (e.g., documentation burden), and lower healthcare costs. Yet many questions remain about how clinicians will interact with and use AI to support their work, and how these technologies will impact clinician workflow, decision-making, and teamwork. It is also uncertain how patients will interact with AI, with a recent report suggesting that 60 percent of US adults are uncomfortable with their health care providers using AI. In this panel, we will discuss AI applications across differing health care contexts and describe how AI influences clinician (and patient) workflows. We will outline considerations for the design and implementation of AI-based technologies in health care and identify needed areas of future research.
Summary
The use of artificial intelligence and machine learning (AI/ML) in healthcare to inform patient care is on the rise. The potential of this technology to improve patient safety, reduce clinician workload and subsequent burnout, and minimize healthcare costs has yet to be fully realized. Currently, there is limited information about how, when, and in what context AI/ML-generated information should be factored into a patient’s optimal care trajectory. A recent study suggests that only 40% of Americans would be comfortable with their provider relying on AI/ML for their own health care (Pew Research Center, 2023). New models for the development and implementation of AI/ML in healthcare exist that can address both implementation and equity issues (Chen, Clayton, Novak, Anders, & Malin, 2023).
The panelists will address current issues surrounding how the development, use, and implementation of AI/ML may influence patient care and communication across the spectrum of healthcare services. The panelists will be asked to answer questions such as:
What best practices are emerging around supporting clinician action based on AI/ML results?
What principles can support shared decision making between providers and patients when AI/ML is involved?
What factors should be considered when implementing AI/ML outside the traditional clinical setting?
Panel Presentations
Communicating Risk As Part Of An AI-Generated Alert
Laura G. Militello is co-founder and CEO at Applied Decision Science, LLC, a research and development company that studies decision making in complex environments. She also co-founded Unveil, LLC, a company that delivers recognition skills training to combat medics, emergency responders, and others. She has extensive experience studying the impact of health information technology on work and designing clinical decision support. Her research interests include the design and use of artificial intelligence and advanced automation in military and healthcare domains. She recently co-authored the Handbook of Augmented Reality Training Design (Cambridge University Press, 2023), a book that describes 11 evidence-based design principles for recognition skills training leveraging augmented reality technology.
I will share my experiences as part of the design team for the Developing and Evaluating a Machine-Learning Opioid Prediction & Risk-Stratification E-Platform (DEMONSTRATE) project led by Jenny Lo Ciganic at the University of Florida. Dr. Lo Ciganic has developed an AI algorithm that identifies patients at risk of opioid overdose. I will highlight insights from the design of an alert intended to support primary care clinicians in understanding which patients are at greatest risk for opioid overdose and in taking appropriate actions. One insight relates to the difficulty of communicating risk effectively. The fact that even highly educated physicians sometimes have difficulty interpreting probability and risk statements is well-documented (Gigerenzer, 2015). In the context of artificial intelligence, one important strategy for reducing bias is to make the probabilities of false positives and false negatives visible (Kearns & Roth, 2019). Our design iterations revealed which approaches were more and less effective at communicating risk and at nudging clinicians toward recommended actions.
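One risk-communication strategy from this literature is to restate probabilities as natural frequencies, which readers tend to interpret more accurately than percentages (Gigerenzer, 2015). A minimal sketch of the idea, using a hypothetical risk value and function name rather than anything from the DEMONSTRATE alert itself:

```python
# Illustrative only: the function name and the 0.008 risk value are
# hypothetical, not taken from the DEMONSTRATE project.
def as_natural_frequency(probability: float, reference_class: int = 1000) -> str:
    """Re-express a risk probability as a natural frequency statement."""
    count = round(probability * reference_class)
    return f"{count} out of {reference_class} similar patients"

# A model output of 0.008 becomes a concrete statement
# instead of the harder-to-interpret "0.8% risk":
statement = as_natural_frequency(0.008)
```

The same transformation can be applied to false positive and false negative rates, making the alert's error profile visible in the same concrete terms.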
Emergency Vehicle Dispatchers’ Use Of An Automatic Assignment System
Yuval Bitan is a human factors engineer and research scientist who studies cognitive systems engineering at Ben-Gurion University of the Negev. His research examines the cognitive strategies human operators apply when handling incomplete and conflicting information, and their impact on decision-making processes.
Dr. Bitan was an Assistant Professor (status only) at the Department of Mechanical & Industrial Engineering at the University of Toronto, a Research Associate at the Cognitive Technologies Laboratory (University of Chicago, Illinois) and at HumanEra (University Health Network, Toronto, Canada). He holds a Ph.D. in Human Factors Engineering from Ben-Gurion University of the Negev, Be’er Sheva, Israel, where he is a faculty member at the Department of Health Policy and Management, and serves as the director of SimReC, the research center for simulation in healthcare.
Decision support systems are created and deployed to aid operators in their work. Advances in computational power and techniques are enhancing the capabilities of these computer-based systems, yet successful implementation still requires that the human operator use the system and follow its suggestions. Our study aimed to understand how operators engage with automation in the demanding and intricate work environment of emergency vehicle dispatchers.
Our analysis focused on a dataset of emergency vehicle assignments to medical emergencies. The data was collected from a computerized system that offered dispatchers recommendations on the most suitable emergency vehicle for each emergency call. The dataset tracked whether dispatchers followed the computerized system's suggestion or manually assigned a vehicle. Surprisingly, we discovered that dispatchers relied on automation for less complex emergencies and during less stressful periods, while manually assigning emergency vehicles for more complex medical emergencies. Understanding the factors that influence operators' readiness to follow the recommendations of decision support systems would aid in designing more effective automation.
Communicating, Coordinating, And Cooperating: A Predictive Model’s Impact On Cancer Teamwork
Megan E. Salwei is a Research Assistant Professor in the Center for Research and Innovation in Systems Safety (CRISS) in the Departments of Anesthesiology and Biomedical Informatics at Vanderbilt University Medical Center. She received her PhD in Industrial and Systems Engineering from the University of Wisconsin-Madison, working with Dr. Pascale Carayon. Her research focuses on the design and implementation of health IT to support clinician workflow and improve patient safety. She is interested in the design of health IT to support teamwork in healthcare, not just between clinicians, but between the entire healthcare team - clinicians, patients, and their family caregivers.
In this panel, I will discuss a research project funded by the Agency for Healthcare Research and Quality (AHRQ) and led by Drs. France and Weinger, in which we used machine learning to predict cancer patients’ risk of unplanned treatment events (e.g., emergency department visit) within 7 days. Cancer patients are at high risk for adverse events such as unplanned hospitalization and emergency department visits due to the complexity and toxicity of their treatment (Institute of Medicine, 2013). A recent study found that 1 in 4 readmissions in cancer patients were preventable (Meisenberg et al., 2016). Identifying patients at risk for avoidable clinical deterioration is a challenge, as clinicians are often unaware of patient complications between visits and are therefore unable to intervene in a timely manner. With the growing use of health IT and artificial intelligence-based predictive models, there are opportunities to leverage these technologies to identify clinical deterioration in cancer patients before an adverse event occurs. However, it is unclear how the implementation of these technologies will impact cancer teams. The objective of this study was to understand clinician perceptions of a clinical deterioration risk prediction system and its potential impact on cancer teamwork, specifically communication, coordination, and cooperation.
Using FitBit, geolocation, electronic health record, and patient-reported survey data, we created a predictive model of clinical deterioration for head and neck, lung, and gastrointestinal cancer patients. Concurrently, we followed a human-centered design process to develop a risk communication system to deliver patient risk scores to the clinical teams. We conducted formative usability testing to gather clinician feedback on the risk communication system prior to implementation. In this panel, we will describe clinician perceptions of how the risk prediction system would support and hinder cancer teamwork. We will discuss lessons learned and future directions for the design and implementation of AI-based risk scores into team workflows.
An Algorithmic Approach To Improving Patient Safety Event Report Analysis
Raj Ratwani, PhD is an Associate Professor at the Georgetown University School of Medicine and the Director of the MedStar Health National Center for Human Factors in Healthcare. He has spent over 10 years conducting research in healthcare human factors and applying human factors principles to improve care quality, safety, and efficiency. His research has focused on electronic health record usability and safety, safe use of digital health tools, and patient safety analytics. His work has been funded by AHRQ, ONC, NIH, NSF, and the Pew Charitable Trusts, and his research has been published in top-tier journals.
In this panel, I will discuss the challenges associated with the analysis of patient safety event reports and how computational algorithms can be designed, developed, and implemented to improve the identification of critical safety trends and patterns from these data. Patient safety event reports are collected by nearly every hospital in the United States and describe near misses (i.e., a patient was almost harmed) as well as harm events. These reports contain structured and free-text data and, if analyzed appropriately, can serve to identify critical safety hazards that should be mitigated to prevent patient harm. The challenge is that many hospitals collect tens of thousands of reports and relying on human review of each report to identify safety critical information is infeasible.
To address this challenge, we have focused on developing machine learning algorithms to identify safety critical trends more automatically from these safety reports based on the free-text descriptions. Algorithms have been developed to semi-automatically classify the topics of the safety event report, identify specific types of medication errors described in the report, and identify the involvement of health information technology as a contributing factor to the safety issue. These algorithms were developed from a database of hundreds of thousands of reports sourced from multiple hospitals. Reports were reviewed by clinical experts for algorithm development and validation.
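The production algorithms described above were developed on hundreds of thousands of expert-reviewed reports; purely as a toy illustration of the general shape of free-text classification (with hypothetical labels and example reports, not the authors' models or data), a minimal naive Bayes topic classifier might look like:

```python
from collections import Counter, defaultdict
import math

# Hypothetical mini-corpus for illustration only.
TRAIN = [
    ("patient given wrong dose of heparin", "medication"),
    ("order entry screen froze during med pass", "health_it"),
    ("patient fell while walking to bathroom", "falls"),
]

def train(examples):
    """Count per-label word frequencies from (text, label) pairs."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest smoothed log-likelihood."""
    vocab = {w for counts in word_counts.values() for w in counts}
    best_label, best_score = None, -math.inf
    for label, n in label_counts.items():
        total = sum(word_counts[label].values())
        score = math.log(n / sum(label_counts.values()))  # prior
        for w in text.lower().split():
            # Laplace smoothing so unseen words do not zero out a label.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(TRAIN)
predicted = classify("wrong dose administered", word_counts, label_counts)
```

Real report classifiers add richer features and expert-validated labels, but the pipeline shape (train on labeled free text, then score new reports) is the same.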
We have developed prototype software with these algorithms embedded and are pilot testing the software at multiple healthcare facilities. The prototype has been shaped by an iterative design and development process with patient safety subject matter experts. Key challenges we are working through in our design and pilot are the effective display of algorithm results, including algorithm uncertainty, and workflow integration.
Supporting Patient Work Of Post-Discharge Surgery Recovery: Opportunity For AI
Elizabeth Lerner Papautsky is an Assistant Professor in the Department of Biomedical and Health Information Sciences at University of Illinois Chicago. She received her PhD in Human Factors Psychology from Wright State University in Dayton, OH. Her research focuses on characterizing decision making of patients, particularly in cancer. Her perspective is that patients not only seek information, but as a function of continuity, possess clinically relevant information that may play a role in safety and outcomes.
To date, we have yet to recognize that patients and caregivers are consistently put in positions to make complex decisions with no medical training or experience. These decisions go far beyond selecting among treatment options. One area of patient work that includes the perceptually complex task of interpreting visual cues is post-discharge infection surveillance after surgery. Surgical site infections (SSIs) cost the US healthcare system up to $3.3B annually (National Healthcare Safety Network, 2023) due to increased utilization of costly follow-up care, emergency room visits, readmissions, and even disease progression due to delays in care. What has received little attention is the patient perspective.
Literature examining patient experience, let alone patient work, in this space is limited, if not entirely lacking. A 2014 interview study highlighted patient challenges in post-discharge surgery recovery, including lack of knowledge and self-efficacy and lack of accessible communication with the care team regarding their concerns (Sanger et al., 2014). The need for patient and caregiver education and support in infection surveillance cannot be overstated. This presents a significant opportunity for the application of AI as part of mobile health technologies to facilitate surveillance and identification of early signs of infection (Lavallee et al., 2019).
I will provide an overview of the problem space, including a characterization of the patient and caregiver work of infection surveillance. I will further highlight opportunities for applying AI to support this work. Of note is the need to engage patients and caregivers in the co-design of any such interventions.
Acknowledgements
This research was made possible by funding from the Agency for Healthcare Research and Quality (AHRQ), Grant Numbers: K12HS026395, K01HS029042, and R18HS026616, and through the National Library of Medicine Institutional Training Program in Biomedical Informatics and Data Science through the NIH, grant: T15LM007450-19. The content is solely the responsibility of the authors and does not necessarily represent the official views of the AHRQ or NLM.
