Abstract
Research on trust in healthcare AI has grown significantly over the last five years, underscoring its vital role in AI adoption within healthcare services. While the multi-dimensional nature of trust in AI is well documented, the literature lacks an integrative framework to fully understand its dynamics. This study explores clinicians’ perceptions of using AI in breast screening, focusing on the evolving nature of trust in AI within a complex clinical environment. Through thematic analysis of focus groups and interviews with 27 clinicians from the population-based BreastScreen program in Victoria, Australia, we highlight that trust in healthcare AI is fluid and multi-layered. Clinicians considered the broader care context when evaluating the potential of AI in their clinical practice. Their conflicting views coexisted: seeing “AI as an opportunity” to improve service delivery and client experiences while recognizing “uncertainties” surrounding its use. Optimism about AI, framed as opportunity, was tempered by skepticism stemming from factors such as distrust in AI’s performance, uncertainty regarding its role in their clinical practice, personal experiences with AI, and organizational barriers. Ethical, legal, and regulatory considerations also significantly influenced trust. We draw on the