Abstract
This article aimed to examine the philosophy of Artificial Intelligence (AI) in healthcare and to present a novel framework bridging philosophy, ethics, and leadership to promote the responsible, human-centered integration of AI. Moving beyond efficiency and innovation, it explored the deeper philosophical, moral, and human dimensions of AI's evolving role in care delivery. The proposed framework incorporated teleology, ontology, epistemology, axiology, and ethics to provide a structured foundation for guiding AI development, implementation, and governance through purpose, knowledge, values, and moral action. Grounded in these principles, it highlighted leadership approaches that foster accountability, organizational readiness, and ethical stewardship in AI adoption. These insights informed the development of a framework designed to align AI with human values and to promote compassionate, ethical, and sustainable applications that enhance healthcare outcomes while preserving the essence of human care.