Abstract
Artificial intelligence (AI) systems offer immense potential to improve care processes, such as providing diagnostic support and automating clinical documentation. Before these benefits can be realized, however, such systems must first be trusted by healthcare professionals. With an increasing number of studies reporting on how trust in AI is formed, there is a need to summarize this literature. To address this gap, we conducted a review of existing empirical literature to identify factors associated with purpose-, process-, and performance-related trust in AI tools among healthcare professionals. Based on 75 studies, we found that most studies examined process- and performance-related trust. Overall, some factors influenced both process- and performance-related trust, such as endorsement by individuals within the healthcare organization and explainable AI. Other factors appeared to influence a specific level of trust (e.g., higher accuracy of the AI model and performance-related trust). Broadly, further studies are still needed to examine purpose-related trust, develop validated measurement tools, explore differences between types of healthcare professionals, and understand how trust changes longitudinally.