Abstract
It is challenging to design a proper eHealth evaluation. In our opinion, the evaluation of eHealth should be a continuous process, wherein increasingly mature versions of the technology are put to the test. In this article, we present a model for continuous eHealth evaluation, geared towards technology maturity. Technology maturity can best be determined via Technology Readiness Levels, of which there are nine, divided into three phases: the research, development, and deployment phases. For each phase, we list and discuss applicable activities and outcomes on the end-user, clinical, and societal front. Instead of focusing on a single perspective, we recommend blending the end-user, health, and societal perspectives. With this article we aim to contribute to the methodological debate on how to create the optimal eHealth evaluation design.
Introduction
The World Health Organization (WHO) stressed, in its Digital Health guidelines, the need for rigorous evaluation of eHealth, in order to generate evidence and to promote the appropriate integration and use of technologies for improving health and reducing health inequalities. 1 In the scientific community that focuses on eHealth evaluation, there is no consensus on how to create the best evaluation design.2,3 According to the standards of evidence-based medicine, large prospective randomized controlled trials (RCTs) are considered the gold standard for evaluating the safety and effectiveness of medical interventions. 4 As the characteristics of an RCT do not match well with the evaluation of eHealth, it is currently acknowledged among experts that there is an urgent need for other evaluation designs.5–7 This makes it challenging to perform a proper eHealth evaluation, which hampers the subsequent implementation of eHealth in daily clinical practice.8,9 In this paper, we define eHealth according to Eysenbach (2001), 10 not just as a technology, but as a concept. Eysenbach’s definition of eHealth is: “An emerging field in the intersection of medical informatics, public health and business, referring to health services and information delivered or enhanced through the Internet and related technologies. In a broader sense, the term characterizes not only a technical development, but also a state-of-mind, a way of thinking, an attitude, and a commitment for networked, global thinking, to improve health care locally, regionally, and worldwide by using information and communication technology.” 10
To streamline the set-up of eHealth evaluations, various frameworks have been developed.3,11 The most widely used eHealth evaluation framework in European eHealth studies is the Model for Assessment of Telemedicine (MAST). 12 This model is based on the principles of Health Technology Assessment (HTA) 13 and is used to assess the effects and costs of eHealth from a multidimensional perspective. The strong points of MAST are the involvement of all the actors and the assessment of outcomes in seven domains: (1) health problem and description of the application; (2) safety; (3) clinical effectiveness; (4) patient perspectives; (5) economic aspects; (6) organizational aspects; and (7) socio-cultural, ethical, and legal aspects. Another commonly used framework is the five-stage model for comprehensive research on telehealth by Fatehi et al. 14 This framework outlines five important stages of an eHealth intervention: concept development, service design, pre-implementation, implementation, and post-implementation. By outlining these stages, this framework addresses the difference between the assessment of prototypes and the evaluation of mature technology. The assessment of prototypes helps to identify the required improvements, while the evaluation of a mature technology aims to measure the overall success factors and performance after implementation.
The endorsement of an iterative approach and the focus on multiple perspectives are strong points of these current frameworks for streamlining the set-up of eHealth evaluations. While these frameworks are useful, we foresee three major limitations. The first limitation is that the current frameworks are only applicable to fully mature technologies and offer no solution for technology that is still in development, which limits their applicability. The second limitation is that these frameworks do not provide a clear method for determining technology maturity. When these frameworks are used to evaluate the value of an eHealth service built on immature technology, the results are likely to be overly negative or, at the very least, biased. The third limitation is the over-representation of the clinical perspective. Most of the articles that report on the use of these frameworks only present the results of a single perspective. 15 In previous eHealth evaluation studies, the clinical perspective is over-represented, while findings related to usability, the user experience, technology acceptance, and costs are rarely addressed. 5 To overcome these limitations, and based on our experience in the field of eHealth evaluation, we present our position towards eHealth evaluations and a model for the continuous evaluation of eHealth, aligned with technology maturity levels and incorporating different evaluation perspectives.
Our position towards eHealth evaluation
Evaluation, defined here as the collection, interpretation, and presentation of information in order to determine the value of a result or process, 16 becomes both possible and necessary as soon as technology development starts. Evaluation should be a continuous process, in which the evaluation setup is geared towards the maturity of the technology. There is no need to wait until the technology is mature; the evaluation of the technology can start from the first concept. In other disciplines, such as software development, where agile Scrum is a common approach, continuous evaluation is standard practice.
Next, we think that, instead of focusing on a single perspective, evaluations should incorporate a set of complementary evaluation perspectives. In our opinion, these perspectives are (1) the end-user, (2) the health, and (3) the societal perspective. The end-user perspective focuses on the task-technology fit (which differs per type of end-user), usability, the user experience (UX), and technology acceptance, to ensure that a technology is suitable for the intended end-users and their context; the health (or clinical) perspective should safeguard the health benefits that one derives from using the technology; the societal perspective should ensure that the technology can be implemented with the support of relevant stakeholders, and is durable.
Technology readiness levels
The maturity of a technology can be determined based on the technology readiness levels (TRLs). TRLs are a widely accepted method to assess the maturity level of a technology, also in the context of eHealth.17–19 These levels (Figure 1) were developed by NASA in the 1970s as a means to determine whether emerging technology is suitable for space exploration. In total, there are nine levels, divided into three phases: the research, development, and deployment phases. With TRLs, we can clearly communicate the level of maturity of a technology and determine whether the technology is ready for tests or evaluations in a real-world setting. When a technology consists of different modules, the weakest (or most immature) module determines the TRL.

Figure 1. Technology readiness level scale.
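The two rules above (nine levels grouped into three phases, with the weakest module determining the system-level TRL) can be sketched as follows. This is a minimal illustration, not part of the model itself; the function names are ours, and the mapping of TRL 1–3 to research, 4–6 to development, and 7–9 to deployment is the conventional grouping we assume here.

```python
# Illustrative sketch of TRL bookkeeping for a multi-module eHealth technology.

RESEARCH = range(1, 4)      # TRL 1-3: research phase
DEVELOPMENT = range(4, 7)   # TRL 4-6: development phase
DEPLOYMENT = range(7, 10)   # TRL 7-9: deployment phase


def system_trl(module_trls):
    """The weakest (most immature) module determines the overall TRL."""
    if not module_trls:
        raise ValueError("at least one module is required")
    if any(t not in range(1, 10) for t in module_trls):
        raise ValueError("TRLs must be integers from 1 to 9")
    return min(module_trls)


def phase(trl):
    """Map a TRL to the evaluation phase it falls in."""
    if trl in RESEARCH:
        return "research"
    if trl in DEVELOPMENT:
        return "development"
    return "deployment"
```

For example, a hypothetical telemonitoring service combining a mature sensor (TRL 8) with a prototype dashboard (TRL 4) would sit at TRL 4 overall, and its evaluation would belong in the development phase.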
A model for continuous eHealth evaluation
Our model for continuous eHealth evaluation addresses both the maturity of the technology as the starting point for eHealth evaluations, as well as the inclusion of the different evaluation perspectives. An overview of the suggested activities for the three perspectives in each phase is provided in Table 1.
Table 1. An overview of the activities on the end-user, health, and societal perspective for the research, development, and deployment phases.
Research phase
During the research phase, the technology is immature and the new concept, often in the form of a low-fidelity prototype, is discussed with potential end-users (end-user perspective) (e.g. as in van Velsen et al. 20 and Jansen-Kosterink et al.). 21 These discussions aim to gauge the end-users’ reactions towards the basic concepts and main functionality of the prototype. The main aim of the continuous evaluations in the research phase is to optimize the new concept and technology. As the technology mainly consists of ideas and simple prototypes at this stage, applying an iterative approach in this phase is crucial. Quick rounds of testing-redesign-testing should ensure the proper focus of the innovation. The work of Schnall and colleagues22–24 on their Health Information Technology Usability Evaluation Scale (Health-ITUES) fits very well with this phase.
Development phase
Within this phase, the technology evolves from a prototype into a more mature application. At this point, end-users can interact with a high-fidelity prototype. Small-scale usability tests and short-term clinical studies in a controlled setting should be conducted to identify usability issues and to assess use, acceptance, and potential health benefits (e.g. as in Olde Keizer et al., 2019). 25 The outcomes of the technology-oriented evaluations (e.g. usability tests) should feed an iterative redesign process, in which the technology is optimized. The outcomes of the short-term clinical studies help to formulate hypotheses concerning health benefits for subsequent evaluations. In parallel with these activities, discussions with relevant stakeholders can be started to draw up a forecast of the financial and extra-financial value.
Deployment phase
At this stage, the technology is almost ready for market launch. No critical usability issues remain, and the next step is a large-scale clinical study combined with a summative usability study in a real-life setting. This clinical study could be an RCT to assess the safety and clinical effectiveness of the technology in comparison to usual care in daily clinical practice (e.g. as in Kosterink et al.). 26 To comply with national or international legislation, the technology needs to be certified based on the outcomes of these studies, for instance with a CE marking in Europe. In addition, based on the outcomes of these studies, the forecast of financial and extra-financial value can be validated and finalized. During the deployment phase, there is little remaining focus on research and development, although it is important to keep monitoring the long-term health benefits and safety of the technology within the broad clinical context, for instance through a large cohort study. During this study, the long-term financial and extra-financial value also needs to be assessed (e.g. as in Talboom et al.), 27 so as to become aware of additional exploitation opportunities.
Discussion
The evaluation of eHealth should be a continuous process, based on the maturity of the technology, and should focus on the end-user perspective, the health perspective, and the societal perspective. The focus of an evaluation should be aligned with the maturity of the technology that is being put to the test. The use of TRLs and their alignment with evaluation perspectives is what mainly distinguishes our model from other evaluation models for eHealth. Those models focus on a single perspective,22–24 are only applicable to mature technology,12,15 or do not specify how to assess the maturity of a technology.14,28
Our model for the continuous evaluation of eHealth is based on our experience in the field of eHealth evaluation and the lessons we have learned during our involvement in various national and international eHealth projects. However, since this model reflects a vision on eHealth evaluation, its validity cannot be proven outright. Case studies should therefore inform us of its worth and of the opportunities for improvement. Additionally, the environment and technical infrastructure in which an eHealth technology is embedded play a role. 29 How do environmental and infrastructural maturity affect the evaluation? While we consider these factors to be aspects of technology maturity, it would be interesting to see studies that aim to distinguish between the different types of maturity. We hope that the research community sees this article as a source of inspiration to combine evaluation approaches with TRLs and will share their experiences with us.
Footnotes
Contributorship
All authors (SJK, MB, and LvV) contributed substantially to this article and all participated in drafting the article and revising it critically for important intellectual content.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
