Abstract
Background:
Artificial intelligence (AI), particularly large language models (LLMs), has shown potential in health care, including diagnostic assistance and patient education. However, concerns about accuracy, biases, and the loss of human interaction, especially in oncology care, warrant investigation into patient perceptions of AI tools.
Methods:
A survey was conducted among 276 oncology outpatients at Thomas Jefferson University to assess comfort, trust, and familiarity with AI chatbots in three clinical scenarios: medication refills, lab result reviews, and preoperative instructions. Participants rated comfort levels using a 5-point Likert scale, and qualitative responses were analyzed. Demographic data were collected to examine subgroup differences. Statistical analyses included Wilcoxon tests with Bonferroni corrections.
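The pairwise comparisons described above can be sketched as follows. This is an illustrative sketch only: the Likert ratings below are fabricated placeholder data, not the study's responses, and it assumes paired Wilcoxon signed-rank tests with a simple Bonferroni multiplier for the three scenario comparisons.

```python
# Hypothetical sketch of the analysis; data are invented for illustration.
from scipy.stats import wilcoxon

# Paired 5-point Likert comfort ratings for two scenarios (placeholder values).
refills = [5, 4, 4, 3, 5, 4, 2, 5, 4, 3]
labs    = [3, 4, 2, 3, 4, 3, 2, 4, 3, 2]

# Wilcoxon signed-rank test on the paired ratings.
stat, p = wilcoxon(refills, labs)

# Bonferroni correction for the three pairwise scenario comparisons,
# capped at 1.0 as adjusted p-values cannot exceed 1.
n_comparisons = 3
p_adjusted = min(p * n_comparisons, 1.0)
print(p_adjusted)
```

A full analysis would run this for each of the three scenario pairs (refills vs. lab reviews, refills vs. preoperative instructions, lab reviews vs. preoperative instructions) and compare each adjusted p-value against 0.05.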
Results:
Patients were most comfortable using AI for routine tasks such as medication refills, compared with lab result reviews (p = 0.0003) and preoperative instructions (p = 0.003). White and Asian patients reported the highest comfort levels, while African American/Black patients expressed significantly less comfort in some contexts, such as preoperative instructions (p = 0.04). Trust in AI was generally higher among male, older, and more educated patients, although familiarity with LLMs did not significantly influence comfort. Fewer than 10% of participants were highly comfortable using AI alone, citing concerns about losing the human connection in care. Respondents emphasized transparency and the option to interact with a human as critical for building trust.
Conclusions:
This study highlights the importance of patient-centered approaches to integrating AI in oncology care. Tailored strategies addressing trust, transparency, and cultural sensitivity are essential for equitable AI adoption. Future research should explore ways to enhance patient acceptance and mitigate disparities in AI use to improve health care delivery while preserving human interaction.