Abstract
The utilization of artificial intelligence (AI) in clinical practice has increased, contributing to improved diagnostic accuracy, optimized treatment planning, and better patient outcomes. The rapid evolution of AI, especially generative AI and large language models (LLMs), has reignited discussion about its potential impact on the healthcare industry, particularly regarding the role of healthcare providers. Questions such as "can AI replace doctors?" and "will doctors who use AI replace those who do not?" have echoed through the field. To shed light on this debate, this article emphasizes the augmentative role of AI in healthcare, underlining that AI is intended to complement, rather than replace, doctors and healthcare providers. The fundamental solution is human–AI collaboration, which combines the cognitive strengths of healthcare providers with the analytical capabilities of AI. A human-in-the-loop (HITL) approach ensures that AI systems are guided, communicated with, and supervised by human expertise, thereby maintaining safety and quality in healthcare services. Finally, adoption can be further advanced by organizational processes informed by the HITL approach that bring multidisciplinary teams into the loop. AI can create a paradigm shift in healthcare by complementing and enhancing the skills of healthcare providers, ultimately leading to improved service quality, better patient outcomes, and a more efficient healthcare system.
Introduction
The advancements in artificial intelligence (AI) have provided a wealth of opportunities for clinical practice and healthcare. Large language models (LLMs), such as BERT, GPT, and LaMDA, have experienced exponential growth, with some now containing over a trillion parameters.1 This growth in AI capabilities allows for seamless integration between different types of data and has led to multimodal applications in various domains, including medicine.2 Evidence shows that AI has the potential to improve healthcare delivery by enhancing diagnostic accuracy, optimizing treatment planning, and improving patient outcomes.3–5 With the recent developments in AI, specifically LLMs and generative AI (e.g. DALL-E, GPT-4 via ChatGPT), we reassess the benefits and opportunities presented by AI as it moves one step closer to artificial general intelligence (AGI, AI with human cognitive abilities).6,7 Current evidence demonstrates LLM capabilities in medical knowledge and clinical support. The University of Florida's GatorTron, an 8.9 billion parameter LLM, is one of the first medical foundation models developed by an academic health system using clinical data.8 It is designed to improve five clinical natural language processing tasks, such as medical question answering and medical relation extraction. LLMs have further demonstrated command of medical knowledge: an AI model achieved a 79.5% accuracy rate on the U.K. Royal College of Radiologists examination, compared to 84.8% for human radiologists.9 Recently, LLMs (PaLM, GPT) demonstrated their capabilities on the United States Medical Licensing Examination and several other medical question-answering tasks, showcasing the potential of AI in medicine.10,11
The increased capabilities have also spread concern. Discussions include AI and the alignment problem,12 and ethical and unbiased implementation.13 Recently, there has been a movement urging a "pause" in AI development14 to address these concerns, as well as to investigate societal implications and to build robust frameworks, governance, and control mechanisms. Among these, the notion of "AI taking over human jobs," as it achieves highly accurate results and performance in completing human tasks,15 has been one of the emerging concerns. In line with that, a question has echoed in the healthcare domain:
AI to replace doctors?
Even though the idea is intriguing, AI is fundamentally not meant (designed and developed) to replace doctors, but rather to repurpose roles and improve efficiency, as demonstrated by LLM-powered digital scribes and conversation summarization tools.18 If we step back and look at current applications in clinical practice, AI has already become an integral part of health services without replacing doctors. For example, AI-aided decision support systems integrated with ultrasound or MRI machines assist diagnosis,19 and improved voice recognition in dictation devices supports radiology note-taking.20 However, recent developments in AI are highly complex, rapidly evolving, and overwhelmingly impressive, as seen in the increased accuracy of LLMs in completing tasks, their high language comprehension, and their human-like conversational responses, leading us to question their value and contribution to practice.
AI to collaborate with doctors
By repurposing (not replacing) the roles, AI can contribute to a more efficient and streamlined healthcare system. Does that mean doctors are removed from the process? On the contrary, a human-in-the-loop (HITL) approach keeps human expertise at the center of AI-supported care.
In healthcare, HITL can be implemented by having trained doctors collaborate with AI: monitoring, validating, and guiding the process, interpreting AI outputs, and providing feedback to improve the capability and accuracy of AI. A recent study showed that AI could enhance the accuracy of diagnosis and clinical decisions when combined with expert human evaluation, emphasizing the collaborative nature of AI and doctors.25 This collaboration can further contribute to an often overlooked value proposition: addressing disparities. AI has higher value as a complementary tool, or knowledge augmentation mechanism, to fill gaps, particularly in low-resource settings such as rural areas or underdeveloped countries, by improving diagnosis, patient communication and education, and reducing language barriers.26
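The HITL workflow described above, in which a clinician reviews, validates, or corrects every AI suggestion and the feedback is retained for model improvement, can be sketched as a minimal control loop. This is an illustrative sketch only: the names (`AISuggestion`, `HumanInTheLoop`, `clinician`) and the confidence-based override rule are hypothetical, not from any real clinical system.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AISuggestion:
    diagnosis: str
    confidence: float  # model's self-reported confidence, 0.0-1.0

@dataclass
class ReviewRecord:
    suggestion: AISuggestion
    accepted: bool
    correction: str  # clinician's correction; empty if accepted

@dataclass
class HumanInTheLoop:
    """Routes every AI suggestion through a clinician before it is acted on."""
    review_fn: Callable[[AISuggestion], ReviewRecord]  # the human expert
    feedback_log: List[ReviewRecord] = field(default_factory=list)

    def decide(self, suggestion: AISuggestion) -> str:
        # The human always reviews; the AI output never reaches the patient
        # record unvetted.
        record = self.review_fn(suggestion)
        # Feedback is retained so it can later be used to improve the model.
        self.feedback_log.append(record)
        return suggestion.diagnosis if record.accepted else record.correction

# Hypothetical clinician policy: override low-confidence suggestions.
def clinician(s: AISuggestion) -> ReviewRecord:
    if s.confidence >= 0.8:
        return ReviewRecord(s, accepted=True, correction="")
    return ReviewRecord(s, accepted=False, correction="refer for specialist review")

loop = HumanInTheLoop(review_fn=clinician)
print(loop.decide(AISuggestion("benign nevus", 0.93)))  # -> benign nevus
print(loop.decide(AISuggestion("melanoma", 0.55)))      # -> refer for specialist review
```

The key design point is that the AI never acts autonomously: every output passes through `review_fn`, and the accumulated `feedback_log` is the mechanism by which human expertise feeds back into model improvement.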
However, such AI collaboration and decision support mechanisms become available through organizational adoption rather than the personal choices of individual clinicians in healthcare settings.
AI to be adopted by healthcare organizations
The adoption of AI is driven by organizational decisions, necessity, and readiness.27 Therefore, the ultimate question is not whether AI will replace doctors, but how healthcare organizations can adopt it responsibly.
Healthcare organizations (e.g., hospitals and clinics) are responsible for providing AI tools that have undergone rigorous evaluation and validation to ensure safety and effectiveness in clinical practice (e.g., FDA clearance and FTC guidelines).28–30 In addition, legal, infrastructure, privacy, and security teams need to revisit organizational policies and protocols to ensure compliance with state and federal laws and regulations, with a specific focus on personal health information exchange protocols, accountability, liability, service reimbursement, and clinical workflows.31,32 In parallel, there is a need to develop curricula and educational methods to train doctors on the fundamentals of AI, its effective use in practice, and AI-supported healthcare delivery.33
The organizational process can be informed by the HITL approach to bring a multidisciplinary team into the loop.34 Enabling human–AI collaboration and including human feedback and control can forge the partnership, diminishing the false perception of "AI as replacement" (Figure 1). The specific considerations for collaborative AI adoption in a healthcare organization are summarized in Textbox 1.

Figure 1. AI adoption to enable doctor–AI collaboration and considerations.
Textbox 1. Considerations for AI adoption in a healthcare organization.
Conclusion
The advancements in AI are reassuring, showing promise in creating a paradigm shift in healthcare by complementing and enhancing the skills of doctors and healthcare providers rather than replacing them. To successfully harness the power of AI, healthcare organizations must be proactive, especially now, when generative AI and LLMs are highly accessible but still in need of control and guidance. As AI becomes an essential component of modern healthcare, it is vital for organizations to invest in the necessary infrastructure, training, resources, and partnerships to support its successful adoption and ensure equitable access for all.
Acknowledgements
Figure 1 was created with BioRender.com.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
