Abstract
Study purpose
This study aims to investigate the extent to which ChatGPT enhances medical educators' pedagogical practices, fosters reflective teaching, and contributes to their ongoing learning and growth.
Methods
This cross-sectional survey study assessed ChatGPT's efficacy in contributing to professional development among medical educators. Using purposive and convenience sampling, a questionnaire was administered to 309 medical educators. Statistical analyses, including t-tests and ANOVA, were conducted to evaluate perceptions of ChatGPT's effectiveness across demographic factors.
Results
Resource recommendations received the highest mean score (4.08), while continuing education received the lowest (3.62). T-tests showed no significant gender differences (p > .05), while ANOVA indicated significant differences across job roles (p < .0001) and age groups (p < .0001). Lecturers consistently rated ChatGPT's support highest, followed by assistant professors and associate professors, with professors giving the lowest ratings.
Conclusion
The findings indicate ChatGPT's efficacy in providing personalized feedback, resource recommendations, pedagogical guidance, and other forms of support. However, addressing challenges such as ethical considerations and ensuring accuracy remains imperative for its effective integration into educational contexts.