Abstract
Background
The integration of artificial intelligence (AI) into clinical practice is gaining momentum globally, yet specialty-specific perspectives remain underexplored. This study aimed to assess the awareness, knowledge, attitudes, expectations, and concerns of infectious diseases and clinical microbiology (IDCM) physicians regarding AI applications in their field.
Methods
A cross-sectional, online survey was distributed between May and June 2025 to IDCM physicians across Türkiye. The questionnaire included multiple-choice, Likert-type, and open-ended items assessing sociodemographic characteristics, AI familiarity, clinical use, and perceptions. Descriptive and inferential statistics, along with thematic analysis of qualitative responses, were employed.
Results
In total, 387 IDCM physicians completed the survey. While 0.5% (n = 2) reported prior long-term/extensive AI training, 88.9% (n = 344) agreed that IDCM physicians should be actively involved in AI system development. Notably, 23.0% (n = 89) had already used AI tools, primarily ChatGPT (n = 69, 77.5%). Regarding accountability, 68.2% (n = 264) assigned responsibility for erroneous AI-generated decisions to physicians. Familiarity with AI was significantly associated with academic title (p < .001). Total knowledge scores were significantly higher among university hospital physicians (p < .001), whereas total attitude scores differed across age (p = .003), academic title (p = .001), and years of experience (p = .006). Thematic analysis of 97 open-ended responses revealed high expectations for AI in enhancing decision support, timeliness, and operational efficiency. However, major concerns included ethical risks, algorithmic bias, data reliability, and potential erosion of clinical autonomy.
Conclusions
This study provides comprehensive insights into IDCM physicians’ perspectives on AI. Findings highlight strong interest but limited preparedness, underscoring the need for targeted education, ethical safeguards, and inclusive policy frameworks to ensure responsible AI integration.
Background
The rapid evolution of artificial intelligence (AI) technologies is transforming the landscape of modern healthcare, offering innovative tools for diagnosis, treatment optimization, risk stratification, and clinical decision support. 1 In the field of infectious diseases and clinical microbiology (IDCM), AI holds particular promise for addressing longstanding challenges such as antimicrobial stewardship, early outbreak detection, infection control surveillance, and laboratory data interpretation. 2 These technologies can facilitate real-time analytics, automate routine processes, and support more personalized and timely interventions. 2 However, the meaningful integration of AI into clinical workflows requires more than technological advancement. It depends fundamentally on the preparedness, awareness, and engagement of end-users, particularly frontline physicians who bear the responsibility of clinical decision-making in uncertain and high-pressure contexts. 3 Several additional concerns continue to challenge the integration of AI into everyday clinical practice. These include issues related to data privacy, algorithmic transparency, potential bias in AI-generated outputs, limited clinician training, and the absence of clear ethical and legal accountability frameworks. 4 Addressing these barriers is essential to ensure that AI adoption remains both safe and clinically meaningful.
Understanding how IDCM physicians perceive and engage with AI is essential for guiding responsible implementation. Existing literature has primarily centered on technical feasibility or general practitioner attitudes, often overlooking the specialized demands and decision-making environments of IDCM physicians. 5 As a specialty that operates at the intersection of individualized patient care, public health response, and microbiological diagnostics, IDCM presents distinct opportunities and challenges for AI integration. 6 Therefore, assessing the familiarity, attitudes, and expectations of IDCM physicians is critical not only for identifying practical and acceptable use cases but also for informing the development of ethical, context-sensitive, and clinically impactful AI tools.
To address this gap, the present study aimed to evaluate the current landscape of AI-related awareness, knowledge, and attitudes among IDCM physicians in Türkiye. By surveying a diverse cohort of specialists and residents across institutional settings, the study sought to identify patterns of experience, perceived barriers, and readiness for future AI integration. The findings are intended to inform both local and global strategies for facilitating clinician-centered AI adoption in infectious disease practice.
Methods
This descriptive, cross-sectional study aimed to evaluate the awareness, knowledge, attitudes, expectations, and concerns of IDCM specialists and residents in Türkiye. The study was conducted between May 26 and June 15, 2025, using an anonymous, voluntary, online questionnaire distributed digitally via professional networks, institutional mailing lists, and relevant society platforms. Participation was open to all IDCM specialists and residents in Türkiye who were currently working in inpatient healthcare settings. Participants were recruited using a non-probability convenience sampling method. Respondents were required to provide electronic informed consent prior to survey initiation, and incomplete responses were excluded from the final analysis. The minimum required sample size was calculated as 384 using Cochran's formula assuming a large (effectively infinite) population, with a 95% confidence level and a 5% margin of error. Data collection was terminated upon reaching this threshold. At the time of closure, a total of 387 complete responses had been recorded and included in the analysis. According to national data, approximately 4000 IDCM physicians are currently practicing in Türkiye. Thus, the achieved sample of 387 participants corresponds to roughly 10% of this professional population. This study was conducted and reported in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines for cross-sectional studies.
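For transparency, the sample-size arithmetic described above can be reproduced in a few lines of Python (an illustrative sketch, not part of the study workflow; z = 1.96 for 95% confidence, p = 0.5 for maximum variability, and e = 0.05 for the margin of error):

```python
def cochran_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> float:
    """Cochran's formula for an effectively infinite population:
    n0 = z^2 * p * (1 - p) / e^2
    """
    return (z ** 2) * p * (1 - p) / (e ** 2)

n0 = cochran_sample_size()
print(round(n0, 2))  # 384.16, conventionally reported as a minimum of 384
```

The 387 complete responses therefore exceed the computed minimum of about 384.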
The survey instrument was developed based on a comprehensive review of the literature and iterative feedback from professionals in IDCM. Three IDCM specialists, one medical microbiologist, and one biostatistician with expertise in survey methodology reviewed the draft questionnaire to ensure content validity and methodological robustness. Prior to distribution, the questionnaire was piloted among twelve IDCM physicians (residents and specialists) to evaluate item clarity, wording, and overall structure. Minor revisions were made based on their feedback. Internal consistency was assessed using Cronbach's alpha (α = 0.84 for the attitude domain and α = 0.81 for the knowledge domain). The final questionnaire comprised five main sections: (a) sociodemographic characteristics and professional background (e.g., age, gender, academic title, institutional setting, and years of experience), (b) awareness and knowledge of AI, (c) attitudes toward AI applications in IDCM, (d) institutional practices and access to AI tools, and (e) expectations and concerns regarding future AI integration in the field. The questionnaire included multiple-choice items, 5-point Likert-type scales, and open-ended questions. Attitudinal items were rated on a five-point Likert scale, where 1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, and 5 = strongly agree. Items assessing self-reported knowledge were rated on a five-point scale, where 1 = no knowledge, 2 = limited knowledge, 3 = moderate knowledge, 4 = good knowledge, and 5 = very good knowledge.
Items related to perceived knowledge were designed to evaluate self-reported familiarity with AI applications in domains such as infection diagnosis, antibiotic stewardship, hand hygiene surveillance, isolation practices, outbreak monitoring, and clinical decision support, whereas attitudinal items assessed physicians’ perceptions of AI's role in clinical decision-making, ethical implications, patient privacy, professional autonomy, and regulatory governance. Cumulative scores were calculated to assess participants’ overall knowledge and attitudes regarding AI. The total knowledge score was derived from 12 items (each rated on a 5-point Likert scale), resulting in a total possible score range of 12–60. The total attitude score was calculated from 11 items (each rated on a 5-point Likert scale), yielding a total possible range of 11–55. Higher scores in both domains indicated greater knowledge and more favorable attitudes toward AI. The full content of the survey instrument is available in Supplemental Material 1.
Descriptive statistics were used to summarize categorical variables as frequencies and percentages, and continuous or ordinal variables as means with standard deviations (SD) or medians with minimum and maximum values. Group differences were analyzed using the chi-square test for categorical variables and the Kruskal–Wallis test for continuous or ordinal variables. When expected cell counts were less than five in contingency tables, the Fisher–Freeman–Halton exact test was used instead of the chi-square test. Correlations between continuous variables were assessed using Spearman's rank correlation. Open-ended responses concerning participants’ expectations and concerns were analyzed thematically. Thematic analysis was conducted manually by the research team through an inductive approach. The analysis was independently performed by two researchers, and discrepancies in theme identification were resolved through discussion to reach consensus. Recurrent themes were identified, categorized, and exemplified using representative participant quotes. To control for Type I error inflation across multiple statistical tests, corrections for multiple comparisons were applied using the Benjamini–Hochberg False Discovery Rate method. Adjusted p-values < .05 were considered statistically significant. Effect sizes were also calculated to complement p-values and indicate the magnitude of associations or group differences. H(df) and χ²(df) denote the Kruskal–Wallis and chi-square test statistics with corresponding degrees of freedom, respectively.
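To illustrate the Benjamini–Hochberg step-up procedure named above, the following minimal pure-Python sketch (not the SPSS routine actually used in the study) adjusts a vector of raw p-values so they can be compared against the .05 threshold:

```python
def benjamini_hochberg(p_values):
    """Return Benjamini-Hochberg adjusted p-values (false discovery rate control).

    Each raw p-value at ascending rank i of m is scaled by m / i; a cumulative
    minimum taken from the largest rank downward enforces monotonicity, and
    adjusted values are capped at 1.
    """
    m = len(p_values)
    # Indices sorted by ascending raw p-value
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest rank down, taking cumulative minima
    for rank in range(m, 0, -1):
        idx = order[rank - 1]
        value = min(1.0, p_values[idx] * m / rank)
        running_min = min(running_min, value)
        adjusted[idx] = running_min
    return adjusted

# Hypothetical raw p-values from a family of six tests
raw = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060]
adj = benjamini_hochberg(raw)  # [0.006, 0.024, 0.0504, 0.0504, 0.0504, 0.060]
```

Note how the three raw p-values near .04, each nominally significant, share an adjusted value just above .05 after correction, which is exactly the inflation the procedure guards against.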
Specifically, Cramer's V was computed for the chi-square and Fisher–Freeman–Halton exact tests, epsilon squared (ε²) values were calculated for the Kruskal–Wallis analyses, and Spearman's correlation coefficient (r) itself was interpreted as the effect size for correlation analyses. Effect sizes were interpreted according to established conventions: for Cramer's V, values of 0.10, 0.30, and 0.50 were considered to represent small, moderate, and large effects, respectively, particularly for 2 × 2 contingency tables. For ε², values of approximately 0.01, 0.06, and 0.14 were interpreted as small, moderate, and large effect sizes, respectively, and for r, values of 0.10, 0.30, and 0.50 indicated small, moderate, and large correlations, respectively. Effects were described as trivial when p-values exceeded .05, regardless of the effect size magnitude. All statistical analyses were performed using IBM SPSS Statistics v26.0.
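The effect-size measures above follow standard closed-form definitions. The sketch below shows the common textbook formulas (the study's exact computational conventions are not specified, and published ε² variants differ, so the epsilon-squared value here is purely illustrative):

```python
import math

def cramers_v(chi2: float, n: int, min_dim: int) -> float:
    """Cramer's V from a chi-square statistic.
    min_dim = min(rows - 1, cols - 1) of the contingency table.
    """
    return math.sqrt(chi2 / (n * min_dim))

def epsilon_squared(h: float, n: int) -> float:
    """One common epsilon-squared form for a Kruskal-Wallis H statistic:
    H / ((n^2 - 1) / (n + 1)), which simplifies to H / (n - 1).
    Other conventions exist and yield different values.
    """
    return h / ((n ** 2 - 1) / (n + 1))

# Illustrative values only; min_dim = 2 assumes a table with three rows or columns
v = cramers_v(chi2=21.53, n=387, min_dim=2)   # ~0.17, a small effect
e2 = epsilon_squared(h=10.0, n=387)           # ~0.026, small under this formula
```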
Results
Demographic characteristics
In total, 387 IDCM physicians participated in the survey. Participants aged 24–35 years constituted 50.6% (n = 196) of the sample, and 71.3% (n = 276) identified as female. University hospitals accounted for 49.6% (n = 192) of institutional affiliations. Among the participants, 166 (42.9%) were residents and 102 (26.3%) were specialists, representing the two largest academic title groups in the study. In terms of professional experience, 43.7% (n = 169) had ≤5 years, and 31.3% (n = 121) had ≥16 years. Demographic characteristics of the participants are presented in Table 1.
Demographic characteristics of the participants (na = 387).
The number of participants.
General exposure to and familiarity with artificial intelligence
The initial exposure to AI among participants predominantly occurred via social media platforms (n = 292, 75.4%). While 52.5% (n = 203) of the respondents reported basic familiarity with AI, 90.7% (n = 351) had not received any formal AI training. Nonetheless, 79.6% (n = 308) indicated that they would be willing to use AI tools if made accessible by their institutions. Table 2 presents detailed data on participants’ first exposure to AI, their familiarity levels, and previous training and usage patterns.
General knowledge and exposure to artificial intelligence (na = 387).
The number of participants.
Artificial intelligence.
Eighty-nine respondents (23.0%) reported prior use of AI tools in clinical or professional contexts, with ChatGPT (OpenAI, USA) being the most frequently referenced platform, acknowledged by 69 (77.5%) participants. Reported applications of ChatGPT spanned a wide range of tasks. In clinical settings, it was used for differential diagnosis, antibiotic selection and dosing, treatment duration decisions, and dermatological lesion identification. In academic and research domains, participants utilized the tool for literature review, data analysis, study design and interpretation, summarization of scientific articles and guidelines, and the construction of tables and decision algorithms. Additionally, ChatGPT was frequently employed in educational and administrative tasks such as preparing presentations, creating visual materials (e.g., posters and invitations), translation, and language editing. Other platforms mentioned by participants included Google Gemini (Google LLC, USA), Perplexity (Perplexity AI, USA), Abacus/DeepAgent (Abacus.AI, USA), Grok (xAI, USA), DeepSeek (DeepSeek AI, China), Genspark (Genspark AI, USA), Notebook LM (Google Research, USA), Napkin.ai (Napkin, Inc., USA), and Claude (Anthropic, USA).
Uncertainty regarding the presence of AI-based systems within institutions was reported by 103 participants (26.6%). Among the 16 participants (4.1%) who confirmed institutional AI availability, reported applications included radiological diagnosis, cancer screening, antibiotic prescribing in sepsis and pneumonia, diagnostic algorithms for various clinical scenarios, electrocardiogram interpretation, pathological diagnosis, and educational purposes.
Attitudes toward AI implementation and governance
Substantial proportions of respondents agreed or strongly agreed that AI could enhance clinical decision-making (n = 163, 42.1%) and contribute to antimicrobial stewardship efforts (n = 189, 48.8%). Frequently reported concerns were ethical implications (n = 197, 51.0%), patient privacy (n = 182, 47.1%), and the potential erosion of professional autonomy (n = 168, 43.4%). Participants strongly emphasized the importance of adapting AI systems to patient-specific data (n = 244, 63.1%) and maintaining the role of AI as a support tool rather than a sole decision-maker (n = 293, 75.7%). Additionally, 59.9% (n = 232) supported integrating AI education into specialty training, while 75.4% (n = 292) endorsed regulation within a legal and ethical framework. The median total attitude score among all participants was 24 (possible range: 11–55). Likert-scale responses reflecting physicians’ attitudes toward AI implementation and governance in IDCM are presented in Table 3.
Likert-scale responsesa reflecting physicians’ attitudes toward artificial intelligence implementation and governance in infectious diseases and clinical microbiology (nb = 387).
Responses were measured on a 5-point Likert scale: 1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, 5 = strongly agree.
The number of participants.
Artificial intelligence.
Self-perceived knowledge and application areas
Participants reported low to moderate self-perceived knowledge regarding AI applications across both clinical and operational domains. The highest levels of perceived competence, defined as moderate to very good knowledge, were observed for infectious disease diagnosis (n = 159, 41.2%) and medical imaging analysis (n = 158, 40.9%). In contrast, knowledge levels were particularly low in domains such as pandemic modeling, hand hygiene monitoring, and monitoring of isolation practices, where 47.5% (n = 184), 47.3% (n = 183), and 46.0% (n = 178) of the respondents indicated no knowledge, respectively. Among all participants, 87 (22.5%) reported the lowest possible knowledge score (1 = no knowledge) across all AI domains, whereas only five individuals (1.3%) reported the highest score (5 = very good knowledge) in all assessed areas. The median total knowledge score among all participants was 24 (12–60). Physicians’ perceived knowledge levels regarding the use of AI in different clinical and operational contexts are summarized in Table 4.
Physicians’ perceived knowledge levelsa regarding the use of artificial intelligence in different clinical and operational contexts (nb = 387).
Knowledge levels were self-assessed by participants on a 5-point Likert scale, where 1 = no knowledge, 2 = limited knowledge, 3 = moderate knowledge, 4 = good knowledge, and 5 = very good knowledge.
The number of participants.
Participants were also asked to identify up to three domains in which AI could provide the greatest benefit in IDCM. Diagnosis was most frequently selected (n = 254, 65.6%), followed by antibiotic consumption analysis (n = 185, 47.8%), monitoring of guideline adherence (n = 160, 41.3%), treatment optimization (n = 154, 39.8%), and infection control (n = 138, 35.6%). Additional areas included educational support (n = 103, 26.6%), infection prevention (n = 63, 16.3%), triage and patient classification (n = 60, 15.5%), and drug interaction detection (n = 1, 0.3%).
Perspectives on responsibility and professional roles
A total of 344 participants (88.9%) agreed that IDCM physicians should be involved in the development of AI systems. Regarding responsibility in the case of erroneous AI-generated decisions, 264 respondents (68.2%) held physicians accountable, 233 (60.2%) cited the healthcare institution, and 100 (25.8%) attributed responsibility to the system developers. In relation to the future role of AI in clinical decision-making, 192 (49.6%) of participants stated that clinical decisions should remain exclusively within the physician's authority. A further 178 (46.0%) respondents supported the use of AI as a consultant, while only 17 participants (4.4%) indicated that AI could assume responsibility for certain clinical decisions.
Subgroup analyses based on age, academic title, years of professional experience, and institution type
Subgroup analyses were conducted to assess differences in AI-related familiarity, prior training, clinical/professional use, and total knowledge and attitude scores across different groups. AI familiarity was significantly associated with academic title (H = 21.76, p < .001, ε² = 0.046, small effect) but not with age (H = 2.40, p = .600, ε² = 0.000, trivial), professional experience (H = 1.56, p = .748, ε² = 0.000, trivial), or institution type (H = 5.99, p = .197, ε² = 0.007, trivial). Willingness to pursue further training in AI was associated only with academic title (H = 11.24, p = .024, ε² = 0.019, small effect). Previous AI training differed significantly by age (χ² = 21.53, p = .001, Cramer's V = 0.166, small), academic title (χ² = 22.95, p < .001, Cramer's V = 0.172, small), and years of experience (χ² = 22.52, p = .015, Cramer's V = 0.170, small), but not by institution type (χ² = 4.59, p = .761, Cramer's V = 0.072, trivial). Prior use of AI in clinical or professional contexts did not differ significantly across any of the subgroup variables (all p > .05; Cramer's V = 0.017–0.125, trivial).
Correlation and group comparison analyses were conducted to explore associations between total knowledge and attitude scores and various demographic and professional characteristics. Spearman's rank-order correlation revealed no statistically significant relationship between total knowledge scores and age (r = –.070, p = .168, trivial), academic title (r = .018, p = .726, trivial), or years of professional experience (r = –.043, p = .396, trivial). In contrast, total attitude scores showed small but statistically significant positive correlations with age (r = .153, p = .003, small effect), academic title (r = .163, p = .001, small effect), and professional experience (r = .141, p = .006, small effect), suggesting that older and more experienced participants, as well as those with higher academic ranks, tended to report slightly more favorable attitudes toward AI applications in clinical practice. To assess differences across institutional settings, Kruskal–Wallis tests were performed. The results indicated that total knowledge scores differed significantly by institution type (H = 16.489, p < .001, ε² = 0.035, small effect), whereas total attitude scores did not show a statistically significant difference across institutions (H = 7.643, p = .054, ε² = 0.028, trivial). Together, these findings indicate that while most observed associations were of small magnitude, they nonetheless highlight consistent patterns suggesting that both individual and institutional factors modestly influence familiarity, attitudes, and training exposure related to AI in IDCM. Table 5 summarizes the subgroup analyses, and detailed results are provided in Supplemental Tables S1 to S4.
Summary of subgroup analyses of artificial intelligence–related knowledge and attitudes among infectious diseases and clinical microbiology physicians (na = 387).
The number of participants.
Artificial intelligence.
Statistical tests, the Benjamini–Hochberg correction for multiple comparisons, and effect-size calculation and interpretation conventions are as described in the Methods section.
Expectations and concerns
A total of 97 open-ended responses regarding the expectations (n = 38, 39.2%) and concerns (n = 59, 60.8%) for AI use in IDCM were thematically analyzed. The most frequently mentioned expectations included AI's role as a clinical decision support tool (n = 24, 63.2%), improving time efficiency (n = 5, 13.2%), and enabling faster diagnosis and treatment (n = 5, 13.2%). Less frequently, participants noted its potential utility in education (n = 2, 5.2%), standardization of clinical guidelines (n = 1, 2.6%), and automation of routine tasks (n = 1, 2.6%). The most prominent concerns related to AI integration included ethical risks and algorithmic bias (n = 25, 42.4%) and issues of data quality and reliability (n = 25, 42.4%). Additional issues raised were the loss of clinical autonomy (n = 7, 11.8%), limited generalizability of AI outputs (n = 1, 1.7%), and the risk of overreliance on AI systems (n = 1, 1.7%). These themes, together with representative quotes and respondents’ professional backgrounds, are summarized in Table 6.
Thematic summary of expectations and concerns regarding artificial intelligence in infectious diseases and clinical microbiology (na = 387).
The number of participants.
Artificial intelligence.
Discussion
This study adds to the limited body of literature by providing a comprehensive, multi-dimensional assessment of IDCM physicians’ perspectives on AI. As a nationwide survey conducted in Türkiye, it offers valuable insight into the awareness, knowledge, attitudes, expectations, and concerns of IDCM physicians regarding AI integration into clinical practice. In Türkiye, IDCM services operate within a highly centralized and predominantly public healthcare system. National guidelines, infection control committees, and surveillance networks play a decisive role in shaping clinical practice and institutional priorities. This structure creates both advantages and barriers for AI adoption. On one hand, Türkiye's extensive digital health infrastructure—exemplified by e-Nabız, the national electronic health record system that integrates patient data across all public and private healthcare institutions, as well as nationwide electronic prescription and laboratory reporting systems—provides a unique foundation for data-driven innovation. On the other hand, variability in institutional resources, regional disparities in access to technology, and limited structured AI training may hinder consistent implementation across settings. Moreover, the hierarchical organization of clinical decision-making in public hospitals could influence how physicians perceive the autonomy, accountability, and practicality of AI-assisted care. Contextualizing the findings within this framework underscores that successful AI integration in Türkiye will depend on aligning technological development with institutional capacity, regulatory oversight, and frontline clinical realities.
In our study, despite the predominance of basic AI familiarity and limited formal training, the vast majority expressed openness to AI implementation and institutional adoption. This is consistent with previous international studies indicating a high level of interest in AI among healthcare professionals, even in the absence of formal education or hands-on experience. 7 For instance, He et al. 8 and Alnomasy et al. 9 both reported that while most healthcare workers perceive AI as promising, structured training remains scarce. In Germany, 87.8% of anesthesiologists agreed that AI should be integrated into their field, yet only 17% were familiar with its specific applications. 10 Comparable findings have been reported among radiologists, pathologists, and general practitioners in the United States and Europe, indicating a global readiness gap.11,12 These parallels suggest that IDCM physicians in Türkiye share certain similarities with broader international trends, yet the present results specifically reflect their local experiences and professional context. The self-initiated use of ChatGPT and other AI platforms by our study participants underscores the adaptability and proactive stance of IDCM physicians. This pattern of self-directed adoption mirrors trends observed internationally among clinicians experimenting with generative AI tools for similar academic and clinical purposes, but the extent and motivations of such use should be interpreted within the national and specialty-specific scope of this study. 13
Attitudinal responses further revealed a duality of enthusiasm and apprehension. While participants largely recognized the potential of AI in IDCM practice, they also voiced significant concerns. These findings are consistent with prior qualitative investigations in oncology and pathology, where physicians highlighted fears related to algorithmic opacity, medicolegal responsibility, and disruption of the physician-patient relationship.14,15 Notably, our respondents emphasized ethical risks and bias as major barriers to AI integration, reflecting similar sentiments found in surveys of emergency medicine and intensive care clinicians.16,17 This alignment with international concerns indicates that Turkish IDCM physicians share the same core apprehensions as their global peers. The prevalent view that AI should serve as an adjunct rather than a decision-maker aligns with ethical guidelines from organizations such as the American Medical Association, which advocate for “augmented intelligence” that complements clinical expertise rather than substituting it. 18
Our subgroup analyses contribute further depth by illuminating how demographic and professional variables shape AI perceptions. Although the detected associations were generally of small magnitude, these consistent trends highlight meaningful variation in AI familiarity and attitudes across experience and institutional contexts. Even modest effects are relevant in this setting, as they may signal early indicators of emerging digital literacy gaps. Differences across experience levels and institutional types suggest that readiness for AI integration is uneven. These findings parallel international data showing that senior academic clinicians tend to be more informed about AI and exhibit greater openness to its integration in clinical practice, possibly due to increased involvement in research and institutional decision-making. 7 In our study, physicians in university hospitals scored higher on knowledge assessments. This observation reflects broader global trends, as reported in multicountry reviews, where clinicians in tertiary care settings exhibit greater digital readiness owing to their exposure to interdisciplinary innovation ecosystems. 19 However, the lack of preparedness among early-career or non-academic clinicians in our study highlights the need for more inclusive and scalable educational initiatives. This educational gap is similarly emphasized in low- and middle-income countries, suggesting that improving AI literacy is a universal rather than context-limited need. 20
Finally, the thematic analysis of open-ended responses enriched the dataset with nuanced perspectives. Participants envisioned AI as a transformative asset. For example, one participant noted, “AI could help standardize antibiotic use and alert clinicians when treatment durations are unnecessarily prolonged.” Another respondent emphasized its potential to improve efficiency: “It may support faster diagnosis and treatment decisions, saving valuable time in critical cases.” These expectations align with real-world implementations in fields such as infectious disease surveillance and radiology, where AI has been successfully deployed for early outbreak detection and image-based diagnostics.21,22 Yet these aspirations were tempered by critical concerns. Several participants highlighted ethical and professional concerns, such as “AI should not replace the physician's judgment; it must remain a supportive tool,” and “Without proper ethical and legal frameworks, there is a risk that AI will undermine clinical responsibility and patient trust.” Others stressed the need for structured training and human-centered design, stating, “Training is essential—many physicians use AI tools without fully understanding their limitations,” and “AI can never replace the intuition and empathy of a clinician, but it can enhance decision-making when used responsibly.” Notably, participants stressed the lack of transparency and explainability in current AI models—a barrier also documented in surgical and psychiatric domains.23,24 The apprehension surrounding generalizability across diverse patient populations and contexts further emphasizes the need for context-specific, evidence-based development and validation processes. Taken together, these patterns demonstrate that the perspectives of Turkish IDCM physicians are not isolated but part of a broader global narrative on responsible AI integration. 
Importantly, our findings echo the recommendations of leading frameworks, including the World Health Organization's Guidance on Ethics and Governance of Artificial Intelligence for Health, which calls for stakeholder engagement, transparency, and human oversight as prerequisites for responsible AI integration.25
Relevance for clinical practice
As AI transitions from concept to clinical reality in IDCM, institutions should not only establish structured AI literacy and ethics training but also create pilot integration programs and interdisciplinary oversight mechanisms to ensure safe, transparent, and equitable adoption. Embedding clinician feedback in AI design and evaluation processes will help maintain clinical relevance, usability, and trust. The implications of this study are limited to the Turkish IDCM context; however, the observed trends may offer useful insights for specialties with similar digital-readiness profiles. Moreover, these insights can inform policy makers and healthcare institutions in other low- and middle-income countries facing similar infectious disease burdens, guiding the development of sustainable and context-appropriate digital health ecosystems that strengthen global health equity.
Strengths and limitations
This study presents several notable strengths. First and foremost, to our knowledge, this is the first systematic survey specifically targeting IDCM physicians to explore their awareness, knowledge, attitudes, expectations, and concerns regarding AI applications. While previous international studies have assessed AI perceptions among general practitioners, anesthesiologists, radiologists, and other specialties, no prior research has comprehensively examined the perspectives of IDCM physicians—a group operating at the unique intersection of individualized patient care, infection control, antimicrobial stewardship, and microbiological diagnostics. By addressing this critical gap, our study provides original and specialty-specific insight that contributes meaningfully to the international digital health literature.
Second, the study employed a comprehensive, multidimensional questionnaire informed by existing frameworks and expert consensus, allowing for robust evaluation across both clinical and operational AI domains. Third, inclusion of both residents and specialists from a wide range of institutional settings across Türkiye enhances the representativeness of the sample within the specialty. Fourth, the incorporation of qualitative thematic analysis provided rich, contextualized insights into participants’ expectations and concerns, complementing the quantitative findings.
Several limitations should be considered. First, the cross-sectional design precludes causal inferences and captures physician perceptions at a single point in time; these perceptions may shift as AI technologies and regulatory frameworks continue to evolve rapidly. Second, the voluntary and self-selected nature of survey participation may have introduced selection bias, potentially overrepresenting individuals with existing interest in or familiarity with AI. Third, because the study employed a non-probability convenience sampling approach, full representativeness of the national IDCM physician population cannot be guaranteed; the achieved sample of 387 participants corresponds to approximately 10% of the estimated 4000 IDCM specialists and residents in Türkiye. Fourth, the self-reported data are subject to recall and social desirability bias, possibly affecting the accuracy of reported knowledge or experiences. Additionally, inter-rater agreement (e.g., Cohen's kappa) was not calculated during thematic coding, which may limit the quantification of reliability in the qualitative analysis. Lastly, as the study was conducted solely among IDCM physicians in Türkiye, generalizability may be limited not only across other medical specialties or countries with differing healthcare systems and AI adoption rates, but also across rural versus urban regions and between public and private institutions.
Conclusion
This study highlights both enthusiasm and ambivalence among IDCM physicians in Türkiye regarding the integration of AI into clinical workflows. While familiarity and formal training remain limited, there is substantial openness to AI adoption, particularly if supported by institutional frameworks and education. Our findings underscore the importance of engaging IDCM physicians in AI development and governance processes, not only as users but also as contributors and evaluators. Future efforts should prioritize targeted training programs, ethical guidelines, and cross-disciplinary collaborations to ensure that AI technologies in IDCM are safe, effective, and aligned with clinical realities. By addressing identified gaps at the national and specialty level, AI can be leveraged to advance infection diagnosis, infection control, and antimicrobial stewardship in a manner consistent with local healthcare structures and resources.
Supplemental Material
Supplemental material (sj-docx-1-dhj-10.1177_20552076251404507, sj-docx-2-dhj-10.1177_20552076251404507, and sj-docx-3-dhj-10.1177_20552076251404507) for “From awareness to action: A nationwide survey of infectious diseases and clinical microbiology physicians’ perspectives on artificial intelligence in clinical practice” by Ezgi Gülten, Okan Derin, Eyüp Arslan, Ceren Atasoy Tahtasakal, Fatih Temoçin, Funda Memişoğlu and Nazlım Aktuğ Demir in DIGITAL HEALTH.
Acknowledgments
The authors extend their sincere appreciation to the physicians, institutional stakeholders, and AI experts who contributed their time and perspectives throughout the study. We are especially grateful to the IDCM physicians who participated in the survey and shared thoughtful insights into the current and future integration of AI in their clinical practice. We also acknowledge the efforts of the academic collaborators and institutional leaders who supported the dissemination of the survey and facilitated access to key participants. To the clinicians and researchers committed to advancing ethical, equitable, and effective applications of AI in medicine: your continued efforts inspire us to explore, question, and co-create technologies that truly serve patients and providers alike.
Ethical approval
The study was approved by the Ankara University Faculty of Medicine Human Research Ethics Committee (decision number: I05-401-25; date of approval: May 26, 2025) and conducted in accordance with the Declaration of Helsinki.
Informed consent
All participants provided written informed consent via an online consent form prior to participating.
Author contributions
Conceptualization: Ezgi Gülten, Okan Derin, Eyüp Arslan, Ceren Atasoy Tahtasakal, Fatih Temoçin, Funda Memişoğlu, and Nazlım Aktuğ Demir; data curation: Ezgi Gülten; formal analysis: Ezgi Gülten and Okan Derin; investigation: Ezgi Gülten, Okan Derin, Eyüp Arslan, Ceren Atasoy Tahtasakal, Fatih Temoçin, Funda Memişoğlu, and Nazlım Aktuğ Demir; methodology: Ezgi Gülten, Okan Derin, Eyüp Arslan, Ceren Atasoy Tahtasakal, Fatih Temoçin, Funda Memişoğlu, and Nazlım Aktuğ Demir; supervision: Funda Memişoğlu and Nazlım Aktuğ Demir; writing-original draft: Ezgi Gülten, Okan Derin, Eyüp Arslan, Ceren Atasoy Tahtasakal, Fatih Temoçin; writing-review and editing: Funda Memişoğlu and Nazlım Aktuğ Demir. All authors have read and approved the final version of the article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data availability
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
Supplemental material
Supplemental material for this article is available online.
References
