Abstract
Objective
Over the past decade, the German tele-emergency medical system (tele-EMS) has undergone continuous expansion. This growth has introduced a range of innovations that have transformed the daily work of tele-EMS physicians. At the same time, it has also brought new challenges, including parallel rescue operations, supra-regional deployments, and an increasing number of patient cases. To address these issues, the utilisation of an artificial intelligence (AI) system developed specifically for tele-EMS physicians was investigated.
Methods
As part of a qualitative study, 11 tele-EMS physicians were interviewed to understand their perspective on the implementation of AI in the field of tele-emergency medicine. The interview questionnaire covered a range of topics, including the requirements and concerns of tele-EMS physicians regarding the use of the specific AI system, as well as their willingness to work with this system in the future.
Results
The results of the study reveal that, despite certain concerns and fears, tele-EMS physicians are generally positive about the implementation of AI technology in prehospital tele-emergency medicine. When designed effectively, the system is considered potentially suitable for reducing the workload of tele-EMS physicians and improving the quality of patient care.
Conclusions
This study addresses a significant gap in the field of telemedicine research by examining perceptions of tele-EMS physicians regarding the implementation of AI in prehospital tele-emergency medicine, while also outlining critical ethical considerations related to AI integration in tele-emergency care. Furthermore, it provides a set of items for a qualitative interview study that can be easily adapted for use with other medical technologies.
Introduction
The tele-emergency medical system (tele-EMS) has been a key component of prehospital medicine in Aachen, Germany, since 2014. 1 Since then, this system has been further integrated into various rescue services throughout Germany, 1 leading to positive changes in terms of operational time, the efficiency of rescue operations and the use of resources. 2 Tele-EMS enables communication between paramedics treating emergency patients on site and a tele-EMS physician working in a tele-EMS centre. 2 It allows paramedics to contact the tele-EMS physician via headset during a rescue operation and enables real-time streaming of vital data and video from inside and outside the ambulance, as well as image sharing 2 ; the system thus facilitates remote physician assistance. In addition to their primary tasks of providing medical support to paramedics and emergency physicians during rescue operations, tele-EMS physicians are also responsible for secondary tasks such as coordinating the transfer of patients between hospitals and providing medical consultations to the dispatch centre when required. 1 Along with the advantages that tele-EMS brings to prehospital emergency medicine, such as the provision of immediate physician expertise for paramedics and patients, as well as fewer rescue operations for emergency physicians on site, 3 tele-EMS physicians also face various challenges in their day-to-day work. These challenges mainly result from the increasing number of patient cases, parallel rescue operations, and supra-regional deployments. 2 The German KIT2 project (“KI-unterstützter Telenotarzt (KIT2)”, eng.: AI-supported tele-emergency physician), funded by the German Federal Ministry of Research, Technology and Space (grant number: 13N16401), addresses these challenges by attempting to overcome them through the use of artificial intelligence (AI) as an additional, supportive resource at the tele-EMS physician work centre.
The project aligns with Germany's national digital health strategy, which promotes the integration of AI into healthcare to improve efficiency and patient outcomes, 4 and contributes to broader European initiatives such as the EU's AI Act, which emphasises the safe, transparent and trustworthy use of AI in high-risk sectors, including healthcare. 5
As part of the KIT2 project, this study examines the opinions of tele-EMS physicians on the use of AI in tele-emergency medicine. Among other things, their requirements and concerns regarding the use of a specific AI system developed as part of the project will be analysed. Recent studies6–10 emphasise that the integration of AI in clinical settings raises important ethical considerations, including transparency and explainability of AI recommendations, preservation of physician autonomy, data privacy, potential biases in patient care, and issues of accountability and liability. These factors are particularly relevant in high-pressure, time-sensitive settings such as tele-emergency medicine.
Methods
To gather the perceptions of tele-EMS physicians on the implementation of AI in prehospital tele-emergency medicine, we developed a semi-structured interview guideline, consisting of seven sections: (1) demographics and background information, (2) working profile as a tele-EMS physician, (3) knowledge about AI, (4) experience of using AI, (5) description of the AI that is under development within the KIT2 project (KIT2-AI), (6) opinion on the KIT2-AI, and (7) closing questions. The interview guideline was developed in German, and the interviews were conducted in German as well. The items and example quotes presented in this article are translations of the original material. A detailed description of the interview section items relevant to this article is provided below, followed by information about the interview process and evaluation. The study was approved by the Ethics Commission of the Faculty of Medicine of the RWTH Aachen University (EK 23–256) and follows the Consolidated criteria for Reporting Qualitative research (COREQ). 11
Section 1: Demographics and background information
The demographics and background information section consists of questions about the age, gender and professional experience of the participants as physicians, emergency physicians and tele-EMS physicians. The basic requirements for admission to specialist training as a tele-EMS physician in Germany include recognised specialist status in a field closely related to clinical and rescue service emergency and intensive care medicine, along with additional training in emergency medicine. 12 Candidates must also provide proof of at least 2 years of regular and continuous activity as an emergency physician, including at least 500 independently completed emergency physician missions. 12 These requirements ensure that tele-EMS physicians also have experience working as on-site emergency physicians. The questionnaire therefore included questions to determine the number of years of experience the participants had in the relevant areas.
Section 2: Working profile as a tele-EMS physician
The second part of the interview focused on the routine tasks and responsibilities of the tele-EMS physicians. Specific items were used to gather information on their professional activities, as well as the positive and negative aspects of their work. Participants were also asked whether they had access to contacts in the workplace who could provide professional advice if required, and to specify the nature of these contacts. The objective of this section was to provide a basic understanding of the role of tele-EMS physicians within emergency medical services, including the challenges faced by practitioners. Furthermore, it sought to identify areas for improvement and to gain insight into how practitioners seek and access professional support.
Section 3: Knowledge about AI
To ensure that all participants possessed at least a basic understanding of AI, a brief definition of the term was provided at the beginning of the interview: “AI is the term used to describe technologies that imitate the intelligent behaviour of humans. Similar to a human, an AI should learn with the help of data and experience to independently find solutions to problems in order to support its users as well as possible.” 13 After receiving this definition, participants had the opportunity to ask clarifying questions, which were addressed by the interviewer. Subsequently, they were asked to name AI systems they were familiar with, in order to facilitate access to the topic. If respondents were unable to name any systems, the interviewer provided concrete examples (e.g. voice assistants such as Alexa and Google Assistant, route suggestions such as Google Maps, or chatbots such as ChatGPT).
Section 4: Experience with AI
Participants’ prior experience with AI, both in their private and professional contexts, was also documented. They were asked to share their experiences with AI systems they had previously encountered, along with the aspects they found positive and negative. These questions served to assess the participants’ familiarity with AI and to identify their preferences and attitudes towards its use.
Section 5: Description of the KIT2-AI
The fifth section of the questionnaire included a description of the KIT2-AI system: “The aim of this system is to support tele-EMS physicians in their work by providing them with suggestions for suspected diagnoses, therapeutic measures, target clinics and other resources, such as emergency physicians, carrying assistance, police, firefighters, etc… The tele-EMS physician enters the available patient and medical history data into the system. The system makes suggestions based on this data, information from past emergency operations and local procedural instructions. The system's suggestions are sorted by probability. The tele-EMS physician has the option of approving or rejecting the suggestions made by the decision support system. Accordingly, the tele-EMS physician is responsible for the treatment and the role of the AI is exclusively supportive.” After the description of the system by the interviewer, participants were given the opportunity to ask clarifying questions to ensure their understanding of the system's purpose and functionality.
Section 6: Opinion on the KIT2-AI
Section 6 constituted the main part of the semi-structured interview and aimed to capture the opinions, needs and concerns of tele-EMS physicians regarding the previously described AI system. Participants were encouraged to spontaneously name advantages and disadvantages they associated with the system. Subsequently, they were asked to provide specific feedback on the individual functions of the AI, or the individual tasks for which the AI provides support—such as suggestions for suspected diagnoses, therapeutic measures, target clinics and other resources (Figure 1).

Interview questions to gather the opinions of tele-EMS physicians on the individual functions of the KIT2-AI system, exemplified by the AI feature to suggest suspected diagnoses. Questions marked with an asterisk (*) could be omitted if the necessary information is already available. AI: artificial intelligence; EMS: emergency medical system.
In addition, more general questions were posed regarding the utilisation and implications of AI. The corresponding items, which do not inquire about particular system requirements or concerns related to its use, were based on the Model for Ethical Evaluation of Socio-Technical Arrangements (MEESTAR)14,15 (Figure 2). Five MEESTAR dimensions for ethical evaluation—safety, autonomy, justice, participation and self-conception—were operationalised in questions about the KIT2-AI system (Figures 1 and 2).

Interview questions to gather the opinions of tele-EMS physicians on the use and impact of AI. Questions marked with an asterisk (*) could be omitted if the necessary information is already available. Prompt questions in blue brackets were employed when responses to the preceding questions lacked sufficient detail. AI: artificial intelligence; EMS: emergency medical system.
The dimension of “safety” was operationalised in the questionnaire as the competence of the AI system—specifically, its ability to make correct predictions and to support physicians in delivering high-quality patient care. 15 The dimension of “autonomy” was conceptualised as the freedom of decision of the individual 16 or, more precisely, the freedom of decision of tele-EMS physicians during a rescue operation. The concept of “justice” was understood as fairness and the absence of disadvantage to the physician. 17 “Participation” was defined as the involvement of patients in decisions that affect their health. 18 The dimension of “self-conception” was interpreted as the perception of potential changes, including those in professional fields due to technological progress 19 or the implementation of AI.
Section 7: Closing questions
The final questions “Is there anything that has not come up in the interview so far that you would still like to share with me?” and “Now that we have discussed all these aspects, how do you feel about the AI? Could you imagine working with it?” provided participants with the opportunity to address additional topics they considered important in the context of AI in tele-emergency medicine. The question of whether they could imagine working with the KIT2-AI system enabled a concluding reflection on the potential use of AI in practice.
Interview process and evaluation
The qualitative interview study was conducted between October 2023 and February 2024 at the Fire and Rescue Service Department in Aachen, Germany, where the tele-EMS centre of Aachen is located. Fourteen tele-EMS physicians from Aachen were recruited via email by HS, a tele-EMS physician and medical director of the EMS of the city of Aachen. All of the recruited physicians agreed to participate in the interview study. Due to sickness and shift changes, however, 11 interviews were conducted. The final sample size was deemed sufficient once data saturation was reached, that is, when consecutive interviews no longer provided new information relevant to the research question. 20 After the 11 planned interviews had been conducted and yielded similar responses as well as recurring themes, it was determined that additional data would not contribute further insights. 20 Participant recruitment was therefore concluded at the point of thematic redundancy. All participants were interviewed during their working hours, while a tele-EMS physician involved in the project provided coverage at the tele-EMS centre. The interviews were conducted face-to-face in a quiet room by ND, a psychologist with experience of conducting interviews. She is employed as a research assistant at the RWTH Aachen University Hospital and was not acquainted with the participants. Prior to the commencement of an interview, the interviewer introduced herself as a research assistant on the KIT2 project at the Institute for History, Theory and Ethics of Medicine at Aachen University Hospital and provided a brief overview of the study, which aimed to elicit the participants’ personal opinions. No additional characteristics of the interviewer were mentioned. The mean duration of the interviews was 42 minutes (range: 28:42–54:59 minutes). Prior to the interview, each participant provided written consent for both the interview and its audio recording. Superiors or colleagues were not present during the interviews.
A pilot interview with a tele-EMS physician (MI), who did not participate in further interviews, was conducted in order to assure the relevance and quality of the questionnaire items.21,22
After conducting the interviews, ND transcribed them verbatim, and anonymised them; the audio files were then deleted. A qualitative analysis according to Mayring 23 was conducted. An initial inductive categorisation was carried out in QCAmap 24 by ND. A secondary inductive categorisation was conducted with the help of ChatGPT 4.0 and specially developed prompts (Figure 3) based on extant research.25–27 Both categorisations were then subjected to a comparative analysis, followed by a summary by ND. Discrepancies were resolved through discussion and consensus with SW, a PhD in bioethics experienced in qualitative research in the field of medicine, employed as a scientist at the RWTH Aachen University Hospital, thereby ensuring coding reliability and enhancing the credibility of the findings. 28 The final categorisations were subsequently analysed and interpreted by ND and SW.

Prompts used in ChatGPT to support the inductive categorisation of the interview transcripts, exemplified by the research question “What do tele-EMS physicians like about their job?”. EMS: emergency medical system.
Results
Demographic characteristics of the sample
A total of 11 tele-EMS physicians, working in the city of Aachen, Germany, were interviewed. Five of them identified as male and six as female. Ten of those surveyed were also working as on-site emergency physicians at the time of the interview, eight of them additionally as anaesthetists and two in an intensive care unit (Table 1). One participant was working solely as a tele-EMS physician at the time of the interview due to parental leave.
Sample demographics (N = 11).
EMS: emergency medical system.
Working as tele-EMS physician—advantages, disadvantages and contacts
In response to the question “What do you like about your job as a tele-EMS physician?” the study participants expressed their appreciation for the efficiency of the tele-EMS. They highlighted that this system enables them to treat emergency patients more quickly and to save resources by avoiding the need for an emergency physician to visit all patients in person. They also enjoyed the tele-medical teamwork with the paramedics and the emergency physicians, who treat the patients on site, as well as the interaction with the control centre. The variety of emergency situations handled by tele-EMS physicians, the different rescue regions in which they operate, the opportunity to develop skills in prioritising tasks (especially in situations involving the simultaneous treatment of multiple patients) and effective communication in the tele-medical context were also identified as positive attributes of the role.
On the other hand, technical and organisational issues, particularly during inter-hospital patient transfers, were perceived as negative aspects of the tele-EMS physician role. Furthermore, one interviewee reported that in cases where a patient's condition is so critical that tele-medical treatment is insufficient and an emergency physician is needed on site, it can be very unsatisfying to have to hand over the patient and not be able to treat them personally. The office environment was perceived as both a benefit and a drawback. On the one hand, it was considered advantageous in the sense of a safe, warm and dry working environment. On the other hand, it was viewed as disadvantageous, as one typically works alone in the office and remains in the same environment throughout the entire day.
When asked whether they have any contacts they could consult for professional advice during a rescue operation, participants gave mixed responses. Tele-EMS physicians approach various contact persons, with management staff generally available to answer questions at any time by telephone or smartphone using instant messaging and voice-over services (such as WhatsApp Messenger). It should be noted, however, that these contact options are not structurally embedded. Managers and colleagues are contacted infrequently, and primarily in cases of uncertainty regarding inter-hospital patient transfers. However, as tele-EMS physicians gain experience, they tend to require less support with logistical issues. During shift handovers, there is the opportunity to discuss previous cases with colleagues. External contacts—such as clinics, the poison control centre or technical support—are rarely used and usually only for specific issues. Participants stated that contacting someone during a rescue operation is not a practicable option for them, as emergency medicine requires time-critical action, and external support structures are therefore not provided for.
Private and professional experience with AI
Eight out of 11 interviewees stated that they use AI systems in their personal lives; two revealed that they had only experimented with AI, and one admitted to never having used AI. In the context of their professional lives as physicians, six participants indicated that they currently use or have previously used AI, while five reported never having done so. However, the interviews revealed that participants had divergent conceptions of what AI is. Some associated AI specifically with chatbots or voice assistants that enable human-like communication, while others linked it more closely to features such as fingerprint sensors or facial recognition software, automatically activated on smartphones, for example. Moreover, participants were unaware whether the systems available to them in their workplace are AI-based, making it difficult to determine the extent of their experience with medical AI. Accordingly, the results mirror the general beliefs of tele-EMS physicians regarding the use of AI in their professional field rather than their actual use of and experience with this technology. In both their personal and professional lives, interviewees particularly valued the time savings offered by AI, its ability to take on tasks and simplify workflows, the extensive knowledge base of AI systems, and the fact that in some cases AI systems can respond more effectively and quickly than humans. However, concerns were raised when AI systems generate false or discriminatory information, and when users lack insight into how these systems function or what happens to their own or patients’ data. Participants were also concerned about the absence of clear regulations and guidelines for AI usage, along with the potential for external manipulation of AI systems.
What advantages and disadvantages do tele-EMS physicians believe the use of the KIT2-AI could have for their work?
Based on the description of the KIT2-AI, participants believed that it could be beneficial for them in various ways. For example, the system may help avoid errors in diagnosis, medication, and treatment while also increasing efficiency. With the support of its database, physicians could identify a suitable target hospital more quickly—especially in unfamiliar regions—and receive assistance in managing rare or unusual cases. Another perceived advantage is that the AI system performs consistently, regardless of the user's mood, stress level or physical condition, and can also help maintain an overview during simultaneous rescue operations (Table 2).
Advantages and disadvantages of the KIT2-AI, identified during interviews with tele-EMS physicians from Aachen, Germany (October 2023–February 2024).
AI: artificial intelligence; EMS: emergency medical system.
At the same time, participants expressed that the AI system could generate inaccurate or discriminatory recommendations, potentially endangering patients if these are not carefully verified. If the system tends to offer too many suggestions (e.g. multiple suspected diagnoses), it could negatively impact physicians’ work or the rescue operation. Participants also indicated that an overload of information from the AI may distract them and cause treatment delays. Additionally, interviewees were concerned that the use of AI in tele-EMS could negatively affect the image of tele-EMS physicians and diminish the perceived value of their work and expertise (Table 2).
How do tele-EMS physicians rate the competence of the KIT2-AI?
In general, participants believed that the KIT2-AI would be capable of making accurate suggestions regarding suspected diagnoses, treatments and medication. Most participants also indicated that the AI would likely propose suitable target clinics; however, one interviewee pointed out challenges in this regard, noting that clinics and hospitals do not always offer all the treatments they should in theory. Opinions also differed concerning the AI's ability to recommend additional resources during a rescue operation, such as an on-site emergency physician, police or additional carrying assistance. Some of the tele-EMS physicians believed that, with proper training, the AI would be able to correctly identify such needs. Others, however, were more sceptical, emphasising that the decision to request additional resources during a rescue operation is typically individual and sometimes subjective (Table 3).
Perceived competence of the KIT2-AI, identified during interviews with tele-EMS physicians from Aachen, Germany (October 2023–February 2024).
AI: artificial intelligence; EMS: emergency medical system.
Possible impact on physicians’ working habits and medical knowledge
Participants believed that the KIT2-AI would positively impact their working habits, particularly in making diagnoses, choosing treatment procedures, and identifying the most appropriate target clinic for the patient. They anticipated that using the AI would encourage them to engage in more reflective thinking when considering its suggestions and would also help them avoid overlooking steps, for example when following procedural instructions or conducting specific diagnostics. The tele-EMS physicians, however, did not believe that the AI would influence the way they order additional resources, such as an on-site emergency physician or police, as these decisions are complex and usually made in consultation with paramedics on site. One interviewee stated that the use of the KIT2-AI could reduce the perceived effort in performing their tasks and contribute to a sense of devaluation of their profession (Table 4).
Possible impact of the KIT2-AI on users’ working habits and medical knowledge, identified during interviews with tele-EMS physicians from Aachen, Germany (October 2023–February 2024).
AI: artificial intelligence; EMS: emergency medical system.
Regarding the influence of the KIT2-AI use on their medical knowledge, participants expressed mixed opinions. Some believed that the AI would have a positive impact, enabling them to learn new things from its suggestions. Others were of the opposite opinion, expressing concern that physicians might lose knowledge if they rely too much on the suggestions provided by the KIT2-AI and follow them without making efforts to verify their accuracy. Other participants believed that the AI would not have any impact on their medical knowledge (Table 4).
Informing emergency patients about the use of an AI system
The most common opinion regarding whether patients should be informed about the use of AI was negative. Physicians provided various reasons for this—ranging from a lack of time during emergency situations and the practical impossibility of providing information about AI during rescue operations, to the belief that patients would not notice the AI usage and therefore would not be affected by whether the tele-EMS physician works with or without AI. Some also argued that informing patients is irrelevant as long as they receive appropriate care. Several participants expressed concerns that patients might not understand information regarding the physician's use of AI, or that such information could confuse or frighten them. As long as the AI suggests appropriate treatment and the tele-EMS physician makes the final decision, participants saw no need to inform patients about the AI. While some believed that informing patients about the use of AI was only a data protection issue, others were convinced that from a legal perspective, patients should be informed that the physician is supported by AI (Table 5).
Opinions on the need to inform patients about the use of an AI system, identified during interviews with tele-EMS physicians from Aachen, Germany (October 2023–February 2024).
AI: artificial intelligence; EMS: emergency medical system.
Willingness and requirements to use the KIT2-AI
Five of the 11 interviewed tele-EMS physicians could imagine working with the KIT2-AI in the future, while another five would prefer to trial the system first to evaluate its functionality before deciding whether to utilise it, depending on their experiences. Only one participant could not envision working with the system, despite acknowledging its potential benefits: “In fact, I can’t imagine that [working with the KIT2-AI]. (…) I’m sceptical about it, but I know that it can certainly provide valuable support” (P548).
For the interviewees, it was crucial that the diagnoses proposed by the AI are logical and applicable to the specific patient. The same applied to the treatment suggestions—they should be appropriate and consistent with the suspected diagnosis. It was considered important that the recommended treatment measures be adapted to the respective emergency medical service district: proposed medications must be available, treatment procedures must be authorised for paramedics, and paramedics must actually be capable of performing them. The recommended target hospital should be open and have available capacity in a department suitable for the patient's needs. Any additional resources suggested by the AI should be reasonable and agreed upon by the paramedics on site.
Discussion
Our interview study revealed that, despite certain concerns and reservations, tele-EMS physicians are generally open to the integration of an AI system in prehospital tele-emergency medicine. Participants emphasised the importance of first testing the system and evaluating its capabilities through personal experience before its adoption in practice. If proven effective, the AI system was perceived as having the potential to reduce their workload and enhance the quality of patient care.
To contextualise the study results within existing knowledge and to address core ethical issues related to the implementation of AI in prehospital tele-EMS, the following section discusses the five MEESTAR dimensions for ethical evaluation (safety, autonomy, justice, participation and self-conception)14,15 as drawn from the main part of the questionnaire (Figure 4).

Key requirements for the implementation of the KIT2-AI, organised into MEESTAR categories, identified during interviews with tele-EMS physicians from Aachen, Germany (October 2023–February 2024). AI: artificial intelligence; EMS: emergency medical system.
Safety
The dimension of safety—understood as patient safety during a rescue operation—is linked to the ethical principle of non-maleficence, which emphasises avoiding harm to patients during medical interventions, 29 or in the case of the KIT2-AI, during a rescue operation. As most interviewees believed that the AI system would be capable of making accurate suggestions regarding diagnosis, treatment and target clinic, its competence was perceived as high and therefore beneficial for emergency patients’ safety. The perception that suggestions made by the KIT2-AI could reduce errors, such as fixation errors or incorrect medication dosage, is a significant aspect of the principle of non-maleficence. Furthermore, both perceived safety and non-maleficence of the system—based on participants’ perspectives, regardless of whether such safety objectively exists—are essential for its acceptance and successful integration into medical practice. 15 Our findings on safety align with existing research, in which physicians consider AI beneficial for diagnostics,30–34 medication dosage32,35,36 and patient care34,37 with the potential to reduce treatment errors.36,37
Autonomy
In the context of this study, autonomy is understood as the freedom of decision of the individual, 16 or more precisely of tele-EMS physicians and their right to make responsible decisions regarding their work. 38 Participants had different perceptions of how the KIT2-AI might influence their working habits and medical knowledge. While most expressed that using the system could have a positive or no impact on their working habits and knowledge, some were concerned that overreliance on the AI might lead to a loss of skills and expertise. In the literature, this phenomenon is often referred to as “deskilling” 39 and has been recognised by physicians in other studies as a significant issue in implementing medical AI.30,32 Loss of practical experience, coupled with a decline in active participation in decision-making processes, can gradually erode existing competencies, which in turn undermines meaningful professional autonomy. 40 To prevent dependence on the AI system, and an inability to act during a potential system breakdown as a result of deskilling, new technologies should only be employed if it is ensured that users continue to regularly practise their skills. 39 In the case of the KIT2-AI, this could mean that tele-EMS physicians should continue to work as emergency physicians on site.
Justice
In the context of this study, the dimension of justice is also related to the physicians, suggesting that the implemented AI system should be fair to them by easing their workload rather than adding to it.17,41 Participants believed that, if well-designed, the AI system has the potential to enhance their work efficiency, save time, and support them during simultaneous rescue operations or in unfamiliar rescue areas. However, concerns were raised that the system might produce misleading or excessive suggestions, potentially leading to confusion and delays. Similar perceptions have been reported in other studies, with physicians recognising that medical AI could ease their work 30 and improve time32,42 and cost 32 efficiency, while also expressing concerns that AI implementation in medicine may increase the workload of its users. 43 To ensure AI fairness towards physicians, it is crucial to focus on user-friendly design,31,44 compatibility with existing systems, 31 and to ensure that AI adds value to their work 36 by involving them early in the development process.39,44,45
Participation
The dimension of participation is important to patients, as it involves their engagement in their own health and the decisions about their treatment. 18 However, most participants felt that patients should not be informed about AI use in tele-EMS, arguing that emergency situations leave little time for providing information about AI and that saving a person's life should be the highest priority regardless of AI involvement. Yet the deployment of AI in decision-making complicates this position. If patients are unaware of the contribution of an algorithm to the decision-making process, they are unable to meaningfully engage with or contest that process. 46 This conflicts with patients’ right to receive all essential information about their treatment and to make an informed decision regarding consent. 47 Emergency medicine, on the other hand, often involves decisions limited by time and available information, 48 with emergency physicians frequently acting according to the patient's presumed will, for example, when the patient is unconscious or non-responsive. 49 In most cases, this presumed will is likely aligned with saving the patient's life or performing medically indicated treatments at that moment. 50 This creates an ethical tension in acute settings, as the urgency of treatment can limit opportunities for full informed consent. 51
Although it can be challenging to inform patients about the use of AI during an emergency operation, transparency regarding its use remains important, 39 because it fosters accountability, enables patients to better understand the decision-making process, and builds trust in AI-supported medical care. 52 The same applies to efforts to keep the public adequately informed and to provide easy access to information for those who are interested.
Self-conception
Self-conception in the context of this study relates to the perception of potential changes in the field of the prehospital tele-EMS 19 and the willingness of the participants to incorporate AI into their work. 29 Despite certain concerns, the interviewees were generally receptive to implementing AI in tele-EMS, believing that if integrated effectively, it could enhance patient treatment and ease their work. This aligns with other research in which physicians recognise both benefits and risks associated with AI use,33,36,43,53 yet maintain a generally optimistic outlook,43,45 viewing AI as a potentially useful clinical tool 53 with the potential to positively transform (emergency) medicine.44,54 Both our study participants and existing research stress that AI should not replace physicians but rather support them.30,34 AI is anticipated to change the way physicians work, 44 necessitating education among medical professionals on its utilisation.34,37,44,45 The “future of medicine will depend on the knowledge and the use of tools based on AI,” 54 underscoring the importance of managing it consciously and responsibly.
These findings imply that medical decision support systems must emphasise decision support features rather than fully automated recommendations, thereby preserving physician agency and supporting integration into existing workflows. 55 Developers should prioritise transparency in algorithmic reasoning and enable physicians to customise or override suggestions, ensuring that the system complements rather than supplants their judgement. 7
For ethical education among tele-EMS physicians in this context, curricula should address the shifting role of AI in decision-making, emphasise physicians’ responsibility for interpreting and validating AI output, and build competence in digital literacy and bias mitigation. 56
Future research directions and limitations
The interviews were conducted with tele-EMS physicians from Aachen, Germany, reflecting their perspectives on AI in the tele-EMS and the specific regional context. Participants from other regions, who may use different software, could hold diverging opinions, as familiarity with particular technology can shape expectations for AI integration. 57 A further study with physicians from other regions, working with different systems, would help distinguish perspectives shaped by the specific regional context of this study from more broadly shared views.
Furthermore, the participants received only a brief description of the KIT2-AI system and had no opportunity to see or test it prior to the interviews, as the AI was still in development at that time. Therefore, it is possible that they imagined the AI differently based on previous experiences with AI technologies, potentially influencing their opinions. A subsequent study, conducted after the development of the system and allowing participants to interact with and evaluate it directly, would provide insights into participants’ perceptions independent of any prior assumptions about its design. At this stage, a further quantitative investigation could also be conducted to objectively assess the AI's performance and the accuracy of its recommendations.
Conclusion
Our study fills a gap in the research on AI in prehospital emergency medicine by revealing tele-EMS physicians’ opinions on the implementation of an AI system in their workplace. The interviewed physicians were generally curious about how the AI system would function and were willing to try it once developed, with their decision to continue using it depending on its quality and the benefits it offers both them and their patients. This study not only presents tele-EMS physicians’ opinions on the implementation of AI in their workplace, but also provides items for qualitative interview research that can be easily adapted for other medical technologies, and offers an overview of important ethical aspects related to the integration of AI in the emergency medical context.
Supplemental Material
sj-docx-1-dhj-10.1177_20552076251411230 - Supplemental material for AI support in prehospital telemedicine: Perspectives of tele-emergency physicians and ethical considerations
Supplemental material, sj-docx-1-dhj-10.1177_20552076251411230 for AI support in prehospital telemedicine: Perspectives of tele-emergency physicians and ethical considerations by Nadezhda Durdova, Dominik Groß, Mathias Schmidt, Hanna Schröder, Marc Felzen, Matthias Irrgang and Saskia Wilhelmy in DIGITAL HEALTH
sj-docx-2-dhj-10.1177_20552076251411230 - Supplemental material for AI support in prehospital telemedicine: Perspectives of tele-emergency physicians and ethical considerations
Supplemental material, sj-docx-2-dhj-10.1177_20552076251411230 for AI support in prehospital telemedicine: Perspectives of tele-emergency physicians and ethical considerations by Nadezhda Durdova, Dominik Groß, Mathias Schmidt, Hanna Schröder, Marc Felzen, Matthias Irrgang and Saskia Wilhelmy in DIGITAL HEALTH
Footnotes
Ethics approval and consent to participate
This study was approved by the Ethics Commission of the Faculty of Medicine of the RWTH Aachen University (EK 23–256). Prior to the interview, each participant received information about the nature and purpose of the study, was informed that participation was voluntary and anonymous, and that they could withdraw from the survey at any time without any consequences. All participants provided written consent for both the interview and its audio recording. All audio files were deleted after transcription and anonymisation.
Contributorship
ND drafted the questionnaire, conducted and transcribed the interviews, and drafted the manuscript. ND and SW analysed and interpreted the results. HS recruited the participants. Funding was acquired by DG. All authors critically revised the questionnaire and the manuscript. All authors have read and approved the final manuscript.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was conducted within the work package “Ongoing Developmental Analysis of Ethical, Legal, and Social Norms & Risks” (grant number 13N16401) as part of the project “KI-unterstützter Telenotarzt” [AI-supported tele-emergency physician; KIT2], funded by the German Federal Ministry of Research, Technology and Space in the context of the announcement “Künstliche Intelligenz in der zivilen Sicherheitsforschung II” [Artificial Intelligence in Civil Security Research II].
Declaration of conflicting interests
The authors declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: MI has been employed as a staff tele-emergency physician at “umlaut – Part of Accenture” since March 2025. Umlaut operates the tele-emergency system currently in use in Aachen, Germany, and is a project partner in the KIT² project. All other authors declare no conflicts of interest.
Guarantor
ND.
Data availability
The data that support the findings of this study are stored at the Institute for History, Theory & Ethics of Medicine, University Hospital, RWTH Aachen University, and are available from the corresponding author, ND, upon reasonable request.
References
