Abstract
Objectives
The call to scale up telemedicine services globally as part of the digital health transformation lacks an agreed-upon set of constructs to guide the implementation process. A lack of guidance hinders the development, consolidation, sustainability and optimisation of telemedicine services. The study aims to reach consensus among telemedicine experts on a set of implementation constructs to be developed into an evidence-based support tool.
Methods
A modified Delphi study was conducted to evaluate a set of evidence-informed telemedicine implementation constructs comprising cores, domains and items. The study evaluated the constructs consisting of five cores: Assessment of the Current Situation, Development of a Telemedicine Strategy, Development of Organisational Changes, Development of a Telemedicine Service, and Monitoring, Evaluation and Optimisation of Telemedicine Implementation; seven domains: Individual Readiness, Organisational Readiness, Clinical, Economic, Technological and Infrastructure, Regulation, and Monitoring, Evaluation and Optimisation; divided into 53 items. Global telemedicine specialists (n = 247) were invited to participate and evaluate 58 questions. Consensus was set at ≥70%.
Results
Forty-five experts completed the survey. Consensus was reached on 78% of the constructs evaluated. Regarding the core constructs, Monitoring, Evaluation and Optimisation of Telemedicine Implementation was determined to be the most important one, and Development of a Telemedicine Strategy the least. As for the domains, the Clinical one had the highest level of consensus, and the Economic one had the lowest.
Conclusions
This research advances the field of telemedicine, providing expert consensus on a set of implementation constructs. The findings also highlight considerable divergence in expert opinion on the constructs of reimbursement and incentive mechanisms, resistance to change, and telemedicine champions. The lack of agreement on these constructs warrants attention and may partly explain the barriers that telemedicine services continue to face in the implementation process.
Introduction
The rapid adoption of digital telemedicine solutions played a critical role in responding to the enormous pressure experienced by healthcare services during the COVID-19 pandemic.1–3 Since then, there have been increasing demands to strengthen, consolidate and scale up telemedicine services globally.4–8 This is recognised as a complex task that, besides technological deployment, requires an understanding of need, a strategic vision, skilful management of organisational change, well-considered integration into health service planning, and robust monitoring and evaluation mechanisms. However, despite the available literature on the added value that telemedicine brings to health service delivery,1,9–15 post-COVID health systems are struggling – due to implementation challenges – to integrate and sustain telemedicine services.12,16–19 There is now a real risk that the promise of sustainable, equitable, high-quality telemedicine services at scale will not be realised.
A major obstacle facing those developing telemedicine services is understanding the essential components of implementing an integrated telemedicine service as part of a broader digital health transformation agenda.20,21 Although numerous resources and tools are available to assist telemedicine implementers, a gap has been identified for an easy-to-use, comprehensive, evidence-based tool that delineates universally acknowledged essential requirements for the successful design, deployment and optimisation of telemedicine services.22,23 The lack of an agreed-upon definition of telemedicine adds to the complexity.24,25 This study adopts the widely used World Health Organization definition, which identifies telemedicine as part of digital health, emphasising remote clinical synchronous or asynchronous communication between those separated by distance, either client-to-provider or provider-to-provider. 26
To address this gap in implementation resources, the Support tool to strengthen telemedicine: guidance for telemedicine assessment and strategy development (referred to as the ‘support tool’) is under development at the request of the World Health Organization Regional Office for Europe (WHO/Europe). The support tool is being developed in the spirit of knowledge mobilisation (the process of moving knowledge into action), 27 drawing on complex adaptive systems thinking (systems that can adapt and self-organise in response to changes), through a robust multi-phase, mixed-method implementation science study comprising three phases. First, a literature review to identify evidence-based essential building blocks of telemedicine deployment.12,28,29 Second, a consensus-building study to enlist agreement among telemedicine experts on the dimensions of telemedicine implementation. And third, field testing the support tool with telemedicine implementers on the ground. The findings of each study phase feed into the subsequent one and help to further refine and strengthen the support tool constructs and their use.
This article focuses on the second phase of the study, that is, to reach consensus among telemedicine experts on a set of constructs to assess the maturity of telemedicine services and guide the implementation process.
Methods
Study design
A modified Delphi process based on an online round-less real-time format 30 was adopted to evaluate the telemedicine implementation constructs that were identified in phase 1. This study approach best suited the needs of the research, which calls for expert judgement where there is a lack of consensus, and a consolidation of geographically dispersed expert opinion.31,32 The modified Delphi process is characterised by online rounds of systematic, iterative, controlled feedback, statistical aggregation and anonymised expert opinion. The Delphi method has been used widely in relevant public health research – including digital health research – to reach consensus among experts when there is a lack of agreement and to assist with the development of knowledge tools.32–44 The study received ethical approval from the Research Ethics Committee, Universitat Oberta de Catalunya (CE22-AA60, CE23-TE12).
Participant recruitment
An expert panel was recruited by purposive/convenience sampling using a multipronged selection strategy incorporating the snowball technique. This non-probability sampling method is conducive to a consensus-forming qualitative Delphi methodology, which does not require statistical significance sampling. 45 The primary criteria for selection were that candidates should be recognised, knowledgeable and experienced telemedicine implementation specialists. Diversity of experience was sought, including a variety of health professional backgrounds, comprising various levels of the health system, including public and private sector representation. Other considerations included geographic diversity, gender balance and ability to participate in the study in the English language. All participants were over 18 years of age. A list of potential experts was identified through the telemedicine global specialist network of WHO/Europe, other telemedicine professional networks, published relevant literature and their reference lists, telemedicine conference presenters, and recommendations from expert peers and the research team. The list of potential specialists was assessed by the research team, ranked according to the selection criteria and finalised. Based on an expected participation rate of 10% to 15% of those asked to take part, 32 a total of 247 participants who met the criteria were invited to do so. On 1 May 2023, a letter introducing the voluntary study was sent by the Unit Head, Data and Digital Health Division of Country Health Policies and Systems WHO/Europe, to those selected. A reminder was sent on 24 May 2023.
Dimensions and criteria
The telemedicine implementation constructs identified in the first phase (literature review) comprised three implementation levels: cores, domains and items. First, five cores considered to be the basis, or pillars, of a telemedicine service: Core 1 – Assessment of the Current Situation (C1); Core 2 – Development of a Telemedicine Strategy (C2); Core 3 – Development of Organisational Changes (C3); Core 4 – Development of a Telemedicine Service (C4); and Core 5 – Monitoring, Evaluation and Optimisation of Telemedicine Implementation (C5). Second, seven domains, which were defined by thematically grouping the barriers and facilitators identified in the literature:12,18,20,28,46–53 Domain 1 – Individual Readiness (D1); Domain 2 – Organisational Readiness (D2); Domain 3 – Clinical (D3); Domain 4 – Economic (D4); Domain 5 – Technological and Infrastructure (D5); Domain 6 – Regulation (D6); and Domain 7 – Monitoring, Evaluation and Optimisation (D7). The literature review identified that the domains were relational across the cores, and cross-cutting at the micro, meso and macro levels of the health system (see Table 1 for a summary of the distribution of the domains across the cores). The monitoring, evaluation and optimisation construct was found to be both a core and a transversal domain with its own characteristics. And third, 53 items, which were defined by further articulating the barriers and facilitators within each core and domain. Each item was then formulated into an item question exploring the requirements of the item, which collectively form the determinants of the domain. For each item question, a more detailed description of the construct to be assessed was established.
Summary of the number of questions for consensus in the modified Delphi by core and domain classification.
The support tool constructs were assessed by the telemedicine experts in the modified Delphi study at core and item levels. A total of 58 questions were developed, 5 questions asked respondents about the core constructs, and 53 questions evaluated item constructs. See Multimedia Appendix S1 for the list of Delphi questions.
Delphi software selection and questionnaire design
The Smart Delphi platform by Onsanity (www.smartdelphi.com) was chosen to carry out the modified Delphi process, which, in comparison to other platforms assessed using tested criteria, 54 provided: flexibility, strong data management, security, anonymity, ease of use, intuitive design, technical assistance and cost-effectiveness. The support tool questions were uploaded to the Smart Delphi platform, along with a definitional description for each question to reduce ambiguity and response bias, thereby strengthening reliability and validity. The questions were designed to be clear, to reduce cognitive load and to not overwhelm respondents. Five telemedicine specialists, not selected to participate in the study, pre-tested the survey instrument and platform for clarity, flow, navigational ease, time taken for completion, technical glitches, reliability and validity, along with syntax, semantics and typographical errors. The survey was finalised incorporating the findings of the pilot test.
Delphi process
In the invitation letter, participants received information about the study's purpose, instructions on how to participate, the login link and the access code. Once participants had entered the platform, they were asked to provide informed consent. After consenting, they provided general sociodemographic data (gender, age range, years of experience in telemedicine, professional profile, workplace and health sector). No personal identification information was collected during the study. Each member of the expert panel individually and independently participated in the study by answering the questions. The Smart Delphi platform design displayed one question on screen at a time. On entering each question, one evaluative query was put to the specialists: ‘How important do you think this is for the implementation of telemedicine?’ The evaluative query was the same for all the questions in the survey. Respondents ranked the importance of the construct on a 6-point Likert scale, from not very important (1) to very important (6). Respondents were also encouraged to express their views on the construct by giving narrative comments on each question. See Figure 1 for a screen capture of the survey question.

Screen capture of Smart Delphi survey question.
Immediately after submitting a response to a question, participants were provided with anonymised aggregated data of all the responses at that time. The aggregated results for the question included the number of responses (and non-participants), the median, and the interquartile range (IQR). The data were displayed on the screen indicating an individual expert's response ranked within the total responses (see Figure 2), and these could be further broken down by sociodemographic groupings (see Figure 3). At this point, participants were asked to reflect further on their response considering the aggregated results and, if they wished, they could alter their response on the Likert scale. Respondents could also see the comments of others and were able to reply to an existing comment or add a new comment. The respondents’ comments were displayed verbatim on the screen and not moderated by the research team.

Screen capture of Smart Delphi total aggregated results.

Screen capture of Smart Delphi results disaggregated by profession.
Once participants had finished giving their response to a question, they were invited to move on to the next question and work their way through all the questions, following the same process. The platform allowed respondents to move around the survey back and forth, revising their scores as often as they wished. At any time during the survey, they could also review the summary consensus portal comprising all their responses and comments, compared to the aggregated group results. The survey access link allowed the experts to leave the survey at any time they wished and return to it at their convenience to continue where they had left off. Once respondents had finished the survey, on the closing page they were provided with summary information (including their responses compared to aggregated responses, completion percentage, overall alignment percentage, number of adjustments, time taken for completion, and number of comments and replies), and they were again invited to return to the survey at a later stage to make further adjustments as they wished.
The potential bias associated with the initial condition effect was minimised during the survey by following good practice guidance, 55 with aggregated feedback only uploaded once a small critical mass of five responses had been completed. Throughout the study, the research team monitored the platform for potential technical glitches and configuration issues.
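For illustration, the per-question feedback statistics described above (response count, median and IQR of the 6-point Likert ratings) can be sketched in a few lines of Python. The function below is a hypothetical helper showing how such a summary could be computed; it is not the Smart Delphi platform's actual code.

```python
import statistics

def aggregate_feedback(ratings):
    """Summarise 6-point Likert ratings for one question, as shown
    back to participants: response count, median and interquartile
    range (IQR). Illustrative only, not the platform's source code."""
    q1, _, q3 = statistics.quantiles(ratings, n=4)  # quartile cut points
    return {
        "n": len(ratings),
        "median": statistics.median(ratings),
        "iqr": q3 - q1,
    }

# Hypothetical example: ten expert ratings for one item question
print(aggregate_feedback([6, 5, 5, 4, 6, 5, 3, 6, 5, 4]))
```

A narrow IQR indicates converging expert opinion on a question, while a wide IQR flags constructs still under dispute.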
Analysis
Analysis commenced after the study was closed using the scores of the final round of ratings given by those who had fully completed the study. Descriptive statistical analysis and simplified narrative thematic analysis were performed. Percentages were calculated for sociodemographic data. Analysis included measuring central tendency and level of dispersion using the mean, median, standard deviation and IQR. The data were analysed using STATA 18.0. There is no universally agreed-upon definition of consensus in Delphi studies; however, informed by standard practice, 56 this study calculated consensus for each item as the percentage of respondents who ranked the item as important (5) or very important (6). Three levels of consensus were established: high being above or equal to 80%, medium above or equal to 70% and below 80%, and low below 70%.56–61 It was determined that items on which 70% consensus was not reached should be discarded from the final version of the support tool. Comparative quantitative analysis using percentiles was applied to explore performance across cores and domains, though this should be interpreted with caution given that items are not equally distributed across each dimension of the support tool. A simple thematic analysis was used to analyse the narrative comments and replies, by searching, reviewing, defining and naming themes. 62
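As a worked sketch of the consensus rule described above, the snippet below classifies one item's final-round ratings into the study's three consensus levels. The thresholds follow the paper's definition; the function name and structure are illustrative assumptions, not the study's actual analysis code.

```python
def consensus_level(ratings, high=0.80, medium=0.70):
    """Classify consensus for one item: the share of respondents
    rating it important (5) or very important (6) on the 6-point
    Likert scale, mapped to high / medium / low consensus bands.
    Illustrative sketch of the rule described in the Methods."""
    share = sum(1 for r in ratings if r >= 5) / len(ratings)
    if share >= high:
        return share, "high"
    if share >= medium:
        return share, "medium"
    return share, "low"  # below 70%: item discarded from the tool

# Hypothetical example: 45 completed responses, 36 rated 5 or 6
ratings = [6] * 20 + [5] * 16 + [4] * 6 + [3] * 3
print(consensus_level(ratings))  # 36/45 = 80%, i.e. high consensus
```

Under this rule, an item rated 5 or 6 by exactly 70% of respondents falls in the medium band and is retained, while one rated highly by 69% is dropped.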
Results
The Delphi study was conducted over 5 weeks from 1 May 2023 to 6 June 2023. Of the 247 specialists who had been invited to take part, 63 participated (25.5%); of these, 45 (71.4%) fully completed the study, and the remaining 18 partially completed it (28.6%). Experts posted 3195 ratings over the duration of the study.
Expert panel
Of the 63 experts who had participated in the study, more than two-thirds (71.5%) were over 40 years of age (see Table 2). The gender balance was slightly skewed towards male participation (57.1%). The extensive professional telemedicine expertise that specialists brought to the study was reflected by 84.1% of respondents reporting more than 4 years’ experience and almost half (47.6%) over 10 years’ experience. The most common professional role among respondents was academic/educational (38.1%), followed by medicine (34.9%), information technology (9.5%), policy-making (7.9%) and management (6.4%). Most participants came from the public sector (76.2%), some from the private sector (12.7%) and others from unspecified sectors. Several levels of the health system were represented, including primary care (15.9%), hospitals (12.7%) and public health services (17.5%), though the workplace of over half the experts was unspecified (52.3%).
Expert panel characteristics.
Main findings
Of the 58 evaluative questions asked, comprising 5 core questions and 53 item questions, consensus was reached on 78% (45/58) of the constructs evaluated. For the 5 core questions (see Multimedia Appendix S2), there was strong agreement among experts on all the core constructs of the draft support tool. Monitoring, Evaluation and Optimisation of Telemedicine Implementation (C5) had the highest percentage of agreement and mean score, followed by Development of a Telemedicine Service (C4). The core with the lowest percentage of agreement (79.6%) was Development of a Telemedicine Strategy (C2).
Of the 53 item questions assessed, 40 questions (75.5%) ranked above the consensus threshold. Of these items, high consensus was reached on 21 (52.5%), with 80% or more of respondents rating them highly, and medium consensus (above or equal to 70% but below 80%) was reached on 19 (47.5%). In Table 3, these items are ranked from highest to lowest percentage of consensus by core and domain.
Item questions – consensus ranking by core and domain.
Key
D1: Domain 1 – Individual Readiness
D2: Domain 2 – Organisational Readiness
D3: Domain 3 – Clinical
D4: Domain 4 – Economic
D5: Domain 5 – Technological and Infrastructure
D6: Domain 6 – Regulation
D7: Domain 7 – Monitoring, Evaluation and Optimisation
Analysing the top 10 items with the highest consensus, there was very high agreement on the importance of the constructs of telemedicine technology reliability (95.6%), followed by technical support services (93.5%). High consensus was also reached on items relating to data protection and security measures, intuitiveness of the telemedicine solution, and assessing telemedicine outcomes, all with 91.1%. Slightly lower ranked in importance, though still within the top 10, were the constructs relating to adequate broadband, health provider outcome measures, telemedicine standardised operating procedures, and telemedicine device accessibility and adaptability, all with 88.9%.
Examining the top 10 ranked items by core, most items fall within C4 at 60% (6/10 items), followed by C5 at 30% (3/10) and C3 at 10% (1/10). No items from C1 or C2 were ranked in the top 10; moreover, high consensus was reached on only a few items from these cores. By domain, the top 10 in importance converge in the domains of D5 at 60% (6/10 items), followed by D7 at 30% (3/10) and, lastly, D6 at 10% (1/10). No items in D1, D2, D3 or D4 reached the top 10 in importance, though three items from D3 and one from D1 ranked in the high consensus group (above 80%).
Analysing the proportion of items on which consensus was reached by domain, D3 had the highest consensus at 100% (5/5 items), followed by D7 at 83% (5/6), D5 at 82% (14/17), D2 at 71% (5/7), D1 at 67% (4/6), D6 at 63% (5/8) and, lastly, D4 at 50% (2/4).
The 13 lowest ranked items on which consensus was not reached (below 70%) are somewhat evenly distributed across the cores, with most items found in C1 at 31% (4/13 items), followed by C3 at 23% (3/13), C4 at 23% (3/13), C2 at 15% (2/13) and, lastly, C5 at 8% (1/13). In terms of domains, the highest number of items on which consensus was not reached was found in D5 and D6, each with 23% (3/13 items). This was followed by D1, D2 and D4, each with 15% (2/13 items), and, lastly, D7 with 8% (1/13). D3 had no items that fell below the consensus threshold. Moreover, the transversal core-domain construct C5/D7 was highly rated, with only one item falling below the consensus threshold.
Qualitative analysis
Analysis of the narrative comments showed that, of the 58 questions posed, respondents provided further remarks on 21 questions (2 core questions and 19 item questions). In total, 29 remarks were posted (28 comments and 1 reply), with most of the 21 questions receiving only 1 comment, a few receiving 2, and 1 generating the most discussion, with 3 comments and 1 reply. The engagement in the comments section of the study was somewhat disappointing, though understandable, given the time constraints of the expert panel. Nevertheless, the remarks provided were extremely valuable in further elucidating the perspectives and priorities of respondents.
The questions falling into the medium consensus group received the greatest proportion of remarks (44.8%), followed by those in the lowest consensus group (31%) and, lastly, those in the highest consensus group (24.1%), where the least discussion was found. Analysing the number of remarks by core, C2 (27.6%) generated the most discussion followed by C1 (24.1%), C5 (24.1%), C4 (13.8%) and, lastly, C3 (10.3%). Across the domains, of the 27 remarks posted on item questions, D7 received the most and D3 generated the least discussion, with the other domains generating comparable levels of discussion to one another.
Comments fell into three broad categories: first, those emphasising the importance of the telemedicine implementation construct under consideration, thereby attempting to influence the opinions of others; second, those further defining the item descriptors; and third, one suggesting improvements to the item question. See Multimedia Appendix S3 for a summary of the thematic analysis of the comments.
Reviewing the comments in more detail, remarks relating to the core questions emphasised the importance of integrating telemedicine into the health system, the risks of telemedicine silos, and the need for a supportive organisational culture. Analysing the comments by domain, in D1 they focused on the importance of building public trust, along with end-user (health workforce and patient) engagement, participation and digital literacy. The D2 comments reiterated the integration of telemedicine within the health system, along with a call for comprehensive regulatory systems and competitor analysis. The D3 ones focused on understanding the context and patient needs. The D4 ones stressed the importance of a good fit between available economic resources and telemedicine solutions, along with the need to conduct a comprehensive economic analysis (including cost-benefit, cost-utility, cost-effectiveness and opportunity costs). The D5 ones focused on interoperability and the importance of mitigating technological risks, such as the need for backup storage systems. Here, it was recommended that the interoperability question (item 29) be better defined to include semantic as well as technical interoperability. The D6 ones highlighted the importance of monitoring and evaluation frameworks, and integration was again reiterated, with a focus on care channels and secondary data use. The issue of an adequate level of consent based on emerging evidence was also raised, as was the value of telemedicine in addressing the quadruple aim (enhancing patient experience, improving population health, lowering costs and improving staff experience). And lastly, D7 was the domain in which there was the most debate, where the comments again focused on themes of health system integration, equity of access, and a call for definitional clarity of the meaning of sustainability in telemedicine implementation.
In addition to the construct comments, there were four general feedback responses provided by participants at the end of the survey. These again echoed the importance of embedding telemedicine into the health system and recommended that the support tool be designed to facilitate integration and interoperability, and not allow telemedicine services to be siloed. Another response suggested that despite the evidence base in support of telemedicine, there remains inadequate consolidation of telemedicine in the public sector, even in mature telemedicine post-COVID contexts. Lastly, one respondent challenged the professional community to revisit the term ‘telemedicine’, as it was felt that it did not adequately represent the increasingly diverse range of health professionals (e.g. nurses) providing virtual healthcare.
Discussion
The key finding of the modified Delphi study was that consensus was reached on 78% of the constructs evaluated, thus further validating their importance to the telemedicine implementation ecosystem. Telemedicine experts considered the most important core dimension to be Monitoring, Evaluation and Optimisation of Telemedicine Implementation (C5) and the least important to be Development of a Telemedicine Strategy (C2). Across the domains, the highest level of consensus was reached on the Clinical one (D3) and the least on the Economic one (D4).
To the best of our knowledge, there is no comparable research in the peer-reviewed literature that sets out to reach consensus on the constructs of a telemedicine framework for designing and implementing telemedicine interventions using the Delphi methodology or any other method of group consensus. Hence, this study offers a distinctive contribution to the global knowledge base on telemedicine implementation. There are, however, some systematic reviews that have synthesised common constructs across telemedicine implementation tools and attempted to rank these.49,53,63 Differences in taxonomy and the heterogeneity of these studies make it difficult to compare them with the findings of this modified Delphi study. Nevertheless, the constructs and ranking in these studies are somewhat consistent with the results of our study. Like the findings of our study, all of the aforementioned studies identified technology and infrastructure to be the highest priority construct, with other constructs relating to strategy, change management, service design, individual and organisational readiness, economic and financial factors, and policy and regulatory mechanisms all featuring as important dimensions across the telemedicine implementation frameworks reviewed. However, it is imperative to underscore that this predominant focus on technology may, to some extent, account for the limitations faced by telemedicine in its implementation process. Constructs that were less frequently included were dimensions relating to the public and patient sphere, which, as Mauco et al. 49 emphasise, needs further attention given the rapid growth of telemedicine interventions in diverse cultural and socio-economic contexts. In this respect, in our study, consensus was reached on all the items relating to patient and public dimensions, which may reflect a growing appreciation of the importance of patient-centred perspectives in the implementation process. 
Interestingly, the thematic monitoring, evaluation and optimisation construct ranked highly in our study. However, this construct does not feature explicitly in any of the systematic review findings. The increasing focus on monitoring, evaluation and optimisation may stem from the demand from decision-makers for greater evidence supporting telemedicine services. Finally, all the studies are united in highlighting the weakness of the validity and rigour of existing implementation frameworks, calling on the scientific community to develop more robust evidence-based tools. 29
Comparing our study to the broader field of eHealth, two studies provide findings of interest. Cremers et al. 64 conducted a three-round Delphi study to develop an eHealth implementation guideline for interventions in daily practice. In contrast to our study, which focused on the health system, Cremers et al. only focused on the health facility. Nevertheless, the findings are consistent with those of our study, with consensus being reached on five comparable implementation constructs: technology, acceptance, finance, organisation and legislative and policy. Interestingly, consensus was not reached on the evidence-based medicine construct in that study, whereas in ours it was reached on all clinical dimensions, including evidence-based medicine. Rezai-Rad et al. 33 also conducted a Delphi study aimed at designing an eHealth readiness assessment framework for Iran. The importance of the technology construct was again emphasised in this study, but other dimensions were grounded in different taxonomies, making comparisons with our modified Delphi study meaningless.
Another noteworthy finding from our study is the discrepancy between the results of the evidence review which informed the draft support tool and those of our study in relation to three dimensions of implementation. While consensus was not reached on the constructs of reimbursement and incentive mechanisms (item 32), resistance to change (RTC) (item 27) and telemedicine champions (TCs) (item 25) in our study, they were emphasised as important dimensions in the literature review. The low ranking of the importance of telemedicine reimbursement and incentive mechanisms (TRIMs) in our study was striking given the evidence supporting the fact that TRIMs motivate end-user participation in telemedicine services.1,12,52,65–75 Indeed, some evidence suggests that they constitute the most important dimension of implementation.3,76,77 Several studies have empirically shown a causal relationship between TRIMs and health provider/consumer acceptance of telemedicine 52 and intention to use telemedicine.1,70 Moreover, the strengthening of TRIMs by governments during the COVID-19 pandemic was recognised as contributing to the improved uptake of telemedicine services.3,65,73,75,76 The literature suggests that TRIMs are pivotal to the implementation ecosystem, though the design of TRIMs to fit the heterogeneity of health financing contexts is recognised as being complex.68,72,77,78 Interestingly, in our study, the Economic domain (D4) had the lowest consensus ranking compared to other domains, which further reinforces the lack of attention experts pay to financial aspects of telemedicine implementation. This oversight may be attributed to several factors, including fragmented budget allocations, 79 inadequate financial planning, 80 limited cost-benefit analysis, 81 and competing financial pressures, especially in lower and middle-income nations. 20
Developing strategies to address health workforce RTC in telemedicine interventions is another construct on which consensus was not reached in our study. Again, this is striking given the extensive literature on RTC as a barrier to telemedicine implementation.1,12,51,52,74,82,83 Moreover, the relationship between end-user acceptance of technological innovation as a condition of telemedicine uptake has been theoretically verified.1,84–88 Absent or poorly designed TRIMs are also associated with RTC.51,52,69,83,89 Early identification and amelioration of RTC are recognised as reducing implementation risks and contributing to the success of telemedicine interventions.12,17,51,52,66,68–70,83,87,90
The consensus threshold was not reached on the TCs construct either. The potential catalytic role that TCs play in managing change and mitigating RTC is emphasised across the literature,12,17,82,91–96 and champions have also been identified as a success factor in the diffusion of innovation and the adoption of evidence-based practice in public health.97–101 Nevertheless, the importance of the role that champions play in the telemedicine implementation process remains inconclusive,12 due in part to the diversity of the TC approaches used, which makes comparative analysis difficult,98 and to the challenge of distinguishing TC attributes from other telemedicine implementation strategies.96 Despite growing evidence of the potential role TCs can play in guiding, leading, coordinating, legitimising, educating and communicating the telemedicine vision and supporting stakeholder engagement,17,82,90,94 it is recognised that the causal pathways and mechanisms between TCs and telemedicine implementation outcomes need further empirical testing.91,96,99,100
Strengths and limitations
A notable strength of this study is that it addresses an identified knowledge gap in the telemedicine implementation field by building consensus among telemedicine experts on a set of constructs for telemedicine implementation. Moreover, the constructs evaluated in this study are drawn from a robust evidence review, strengthening their validity and reliability. A further strength is the number of experts who participated in the modified Delphi process, enabled by the online format and the iterative design of the study, although this approach also limited the depth of enquiry. Validity was further enhanced by including several types of profile on the expert panel. Nevertheless, some profiles were underrepresented and more than half of the sample did not provide information about their workplace, meaning that some cores/domains may not have been adequately addressed, potentially compromising the validity and reliability of the process. Another weakness of the study is that the geographic characteristics of respondents were not collected, although the geographic distribution of invitees (n = 247) represented broad global diversity. To mitigate the potential for bias and further strengthen validity and reliability, various strategies were deployed during the study in line with good practice guidance,102 including grounding the study in the literature, integrating qualitative and quantitative measurements, verifying and further validating findings with additional experts, and planning field testing of the support tool. A final limitation is that only one evaluative query was put to respondents; additional queries exploring the difficulty of implementing each construct might have provided deeper insights. However, given the time constraints on the expert panel, breadth was favoured over depth for this study.
To partially offset this, open-ended response fields were included in each section, enabling experts to comment on the included criteria or propose new information for consideration. Nonetheless, not all participants engaged with this aspect of the study, so these comments may not fully represent the sample.
Future actions and conclusions
The study's findings reveal that, at the core level, monitoring, evaluation and optimisation were ranked as the most important aspects, while developing a telemedicine strategy was deemed the least important. Among the domains, the clinical domain exhibited the highest level of consensus, whereas the economic domain showed the lowest. These results are important for telemedicine service decision-makers, practitioners, stakeholders and experts. They address a critical knowledge gap identified in telemedicine interventions, namely the lack of a robust tool to guide telemedicine design, implementation, monitoring and evaluation. The findings provide a strategic direction forward for telemedicine services, offering an evidence-based set of telemedicine implementation constructs that have been agreed upon by a panel of telemedicine implementation experts and developed into an easy-to-use support tool to guide the telemedicine implementation process. Given the rapidly expanding efforts to scale up telemedicine globally, coupled with persistent obstacles to its consolidation, a robust support tool that facilitates knowledge sharing and exchange represents a significant step forward for the field. Importantly, the findings of this study also point to several noteworthy areas of incongruity in the collective understanding of telemedicine implementation, which may contribute to the prevailing barriers experienced in telemedicine execution, namely in the implementation constructs of reimbursement and incentive mechanisms, RTC and TCs. These constructs warrant further attention by the telemedicine community to ascertain their degree of importance in the implementation process.
Finally, the findings of this study suggest additional avenues for future research, including the validation and improvement of the support tool, as well as longitudinal studies to monitor the implementation outcomes of integrated telemedicine interventions within various contextual settings, contributing to the broader digital health agenda.
Supplemental Material
sj-docx-1-dhj-10.1177_20552076241251951, sj-docx-2-dhj-10.1177_20552076241251951 and sj-docx-3-dhj-10.1177_20552076241251951: Supplemental material for "Selection of criteria for a telemedicine framework for designing, implementing, monitoring and evaluating telemedicine interventions: Validation using a modified Delphi process" by Che Katz, Noemí Robles, David Novillo-Ortiz and Francesc Saigí-Rubió in DIGITAL HEALTH.
Footnotes
Acknowledgements
The authors would like to thank all the telemedicine experts who participated fully in the study.
Contributors
All authors contributed equally.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Disclaimer
DNO is a staff member of the World Health Organization. The author alone is responsible for the views expressed in this article, which do not necessarily represent the decisions, policies or views of the World Health Organization.
Ethical approval
The study received ethics approval from the Research Ethics Committee, Universitat Oberta de Catalunya (CE22-AA60, 8 November 2022, and CE23-TE12, 17 April 2023). Informed consent was included in the invitation letter, and participants consented prior to engagement in the study. No individual identification data were collected, thus ensuring the confidentiality of participants.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was funded by the WHO Regional Office for Europe.
Guarantor
The corresponding author, FSR.
Supplemental material
Supplemental material for this article is available online.
References
