Letter to the Editor
The article “A Training Needs Analysis for AI and Generative AI in Medical Education” by McCoy et al. 1 makes an important contribution to understanding AI readiness and training needs among medical faculty and students. This contribution aligns with the broader, gradual scientific process in medical education, a field that progresses carefully through collaborative inquiry. This letter is therefore not intended as a rejection of their approach, but as a complementary reflection that may enrich ongoing conversations in the discipline; nevertheless, the study leaves room for critical reflection on its methodology, depth of analysis, and pedagogical proposals. We aim to offer a focused scholarly critique supported by empirical and theoretical literature, proposing avenues for enriching AI education in medical curricula. Beyond commenting on McCoy et al.'s study, 1 this letter also highlights broader methodological and pedagogical considerations that can inform future research and curriculum development in medical education.
Overreliance on Survey Approaches and Assumptions of Technological Readiness
The study relied on a survey as the sole data collection method, providing a broad quantitative overview but lacking qualitative insight into how participants actually adopt and use AI. The distinction between “familiarity” with and “mastery” of AI was only briefly noted, yet it is crucial: familiarity does not equal functional or ethical literacy. In clinical or educational contexts, simply knowing that ChatGPT exists does not guarantee its proper use or evaluation. Previous research has shown that mixed-methods approaches better capture complex educational phenomena, such as technology adoption in healthcare education.2–4
Moreover, the study treats AI integration mainly as technical training, overlooking its epistemological dimensions. AI adoption affects doctor-patient relationships, clinical ethics, medical epistemology, and the redistribution of power in knowledge production. The absence of this philosophical analysis can limit engagement and emphasize technical skills at the expense of deeper critical reflection. Without addressing these layers, AI training risks remaining superficial. As prior work emphasizes, 5 AI literacy should include critical engagement with algorithmic bias and its social implications.
Challenges to Generational Narratives and Digital Power Inequality
The authors attribute differences in AI adoption to generational divides—Gen Z students being digitally native and more familiar with AI, while Baby Boomer and Generation X faculty are less prepared. This oversimplifies deeper issues such as limited access, institutional structures, faculty workloads, and conservative academic cultures. Focusing on generation alone neglects structural barriers. For example, funding limitations, institutional policies, and uneven access to AI resources also serve as critical barriers to adoption. 6 Similarly, faculty workload and a lack of institutional support can significantly hamper adoption irrespective of age. 7
Additionally, McCoy et al. propose parallel training between students and faculty. While seemingly inclusive, this approach may risk overlooking power imbalances within classrooms and academic structures. For example, will this joint training open space for critical dialogue, or will it reinforce existing hierarchies of knowledge? Without a dialogic and reflective pedagogical design, such training could perpetuate traditional top-down learning dynamics. Freire's critical pedagogy suggests that transformative learning requires questioning power dynamics rather than replicating them, a consideration missing from the current proposal. 8
Integrating Critical and Contextual Approaches
This study also misses an opportunity to explore alternative pedagogical approaches, such as critical pedagogy, reflective education, or problem-based learning models that contextually integrate AI into real-world clinical scenarios. AI training should not be merely a series of technical modules; rather, it should develop critical thinking skills toward automated systems in diagnosis, risk algorithms, and clinical data presentation. This perspective resonates with the natural progression of medical education research, where methodological innovations are gradually integrated while respecting established pedagogical practices. In this sense, our proposals aim to accompany, rather than replace, the steady advances already underway in the field.
To bridge theory and practice, actionable steps include designing curriculum modules that incorporate case studies analyzing AI biases, fostering collaborative workshops that encourage faculty-student dialogue on AI ethics, and embedding reflective journaling to develop critical awareness among learners. 9 Such systematic strategies reflect scientifically and pedagogically preferable approaches, moving this letter beyond critique toward constructive guidance for future research and practice.
Rather than simply measuring training preferences such as “online training” or “self-paced learning,” further studies should explore the extent to which participants understand the epistemic limitations of AI, such as algorithmic bias, the opacity of large models (the black-box problem), and the potential for biased diagnoses arising from non-inclusive training data. True AI literacy involves critiquing technology, not just operating it.
From Surveys to Observations and Ethnographies
This research could be strengthened by in-depth qualitative approaches such as narrative interviews, focus groups, or even ethnographies of digital classrooms. This is essential to understanding how AI is used in teaching and learning contexts: Does it replace clinical discussions? Do students genuinely question AI outputs? To what extent do lecturers maintain pedagogical autonomy when competing with AI outputs? Without such contextual understanding, AI training risks being pedagogically irrelevant. Ethnographic studies in related educational contexts have revealed nuanced dynamics of technology adoption and critical engagement. 10
Provoking Further Exploration
How can we create AI training that builds skills while also shaping critical thinking about the technology itself? What kinds of dialogue might emerge if students and faculty worked together to analyze bias in medical AI output? Such questions invite deep, collaborative, cross-disciplinary exploration. The future of medical education is not just about integrating new technologies but also about shaping new paradigms of thinking in the digital age. Positioning AI literacy within this broader evolution underscores that reflections such as ours are meant to supplement the field's existing strengths and gradual trajectory, offering alternative angles without undermining ongoing scholarly efforts. By situating our commentary within this broader pedagogical evolution, this letter aims to contribute meaningfully beyond a single case study, thereby enhancing its relevance for scholars and educators working on similar challenges.
Acknowledgments
The authors acknowledge Laksmi Evasufi Widi Fajari for assistance with reference source collection.
Ethical Considerations
This article is a scholarly commentary in the form of a Letter to the Editor and does not involve human participants, patient data, or experimental procedures. Therefore, ethical approval was not required.
Consent to Participate
As this work does not include data derived from individual participants, the requirement for informed consent is not applicable.
Author Contributions
All authors contributed substantially to the conception and drafting of this letter. Moh Salimi initiated the idea and prepared the first draft. Ganes Gunansyah refined the conceptual framing and contributed critical revisions to strengthen the argument. Arie Rakhmat Riyadi reviewed relevant literature and provided substantial intellectual input during the revision process. All authors approved the final version of the manuscript and agree to be accountable for all aspects of the work.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
