Abstract
Generative artificial intelligence (GenAI) systems may be valuable in improving care delivery. GenAI tools include chatbots (e.g., ChatGPT) and may be particularly attractive to patients and caregivers for their wide accessibility and potential to improve access to information. However, since these tools’ quality parameters (e.g., accuracy) have been shown to be problematic, a deeper understanding of what contributes to patients’ and caregivers’ trust in GenAI tools is needed. Thus, we conducted a review of the empirical literature to identify correlates of trust in GenAI tools among patients and caregivers. Based on 24 studies, there is emerging evidence of a moderate level of trust in GenAI among patients. No studies examined caregivers. We inductively identified five groupings of factors that contribute to patients’ trust in GenAI: individuals (e.g., health literacy, trait trust), tasks (e.g., administrative vs. diagnosis), agent design (e.g., personalization), agent implementation (e.g., government oversight), and agent performance (e.g., reliability). This review suggests that multi-level factors influence patients’ trust in GenAI agents. Additional research is needed on caregivers, on publicly accessible GenAI tools (e.g., ChatGPT), and on developing and validating instruments to measure trust in GenAI.