Abstract
Background
Although digital mental health interventions (DMHIs) offer a potential solution for increasing access to mental health treatment, their integration into real-world settings has been slow. A key reason for this is poor user engagement. A growing number of studies have evaluated strategies for promoting engagement with DMHIs, such that a review of this literature is now warranted. This systematic review is the first to synthesise evidence on technology-supported strategies for promoting engagement with DMHIs.
Methods
MEDLINE, Embase, PsycINFO and PubMed databases were searched from 1 January 1995 to 1 October 2021. Experimental or quasi-experimental studies examining the effect of technology-supported engagement strategies deployed alongside DMHIs were included, as were secondary analyses of such studies. Title and abstract screening, full-text coding and quality assessment were performed independently by two authors. Narrative synthesis was used to summarise findings from the included studies.
Results
24 studies (10,266 participants) were included. Engagement strategies included reminders, coaching, personalised information and peer support. Most strategies were delivered once a week, usually via email or telephone. There was some empirical support for the efficacy of technology-supported strategies in promoting engagement; however, findings were mixed regardless of strategy type or study aim.
Conclusions
Technology-supported strategies may increase engagement with DMHIs, but the evidence for their efficacy remains mixed across strategy types. Future research should involve end-users in the development and evaluation of these strategies, so as to produce a cohesive set of strategies that are acceptable and effective for target audiences, and should explore the mechanism(s) through which such strategies promote engagement.
Introduction
Digital mental health interventions (DMHIs) are increasingly being recognised as effective, scalable solutions that can help to treat a range of mental health issues, including depression, anxiety, substance abuse and suicide ideation, for both youth and adult users.1–4 Given the ubiquity of technology ownership and the increasing rates at which people are turning to digital platforms for mental health support,5,6 DMHIs offer an unprecedented opportunity to extend traditional mental health services to people who may be unable, or unwilling, to access them. Increasing investment in new models of care that harness technological advances reflects a widespread belief that leveraging technology is core to meeting the demand for mental health services into the future. 7
Despite the potential of DMHIs to address gaps in access to services and/or enhance existing treatments, we are yet to see the routine integration of these interventions in health, education and community settings where they can reach individuals in need.7,8 This implementation lag is, in part, because clinicians require stronger evidence that DMHIs work before offering them as part of treatment-as-usual, and in part, because the evidence suggests that user engagement is typically poor.9,10 A recent meta-analysis of 10 randomised controlled trial (RCT) studies found that only approximately 30% of users completed 75% or more of their assigned DMHI. 11 Another meta-analysis that examined intervention attrition rates in 11 RCTs found that participants assigned to the DMHI were more likely to drop out of the intervention than those assigned to a waitlist or attentional control group. 3 Given that engagement is a core component of the effectiveness of DMHIs,12–14 overcoming barriers to engagement is an important focus if we are to see DMHIs integrated routinely into real-world settings. 15
In recognition of issues of low engagement with digital interventions, the number of studies examining the efficacy and acceptability of different engagement strategies has increased over the past 10 years. 16 This research can be broadly classified into two categories. One group of studies has investigated the efficacy of features built into digital interventions themselves that could promote engagement. These features are generally grounded in principles of persuasive system design (PSD) – a framework that has been used to guide the design of technology-based services aimed at changing users’ attitudes or behaviours17,18 – or incorporate game-like strategies aimed at encouraging user interactivity and continued intervention use. 19 The other group of studies has examined the utility of technology-supported strategies that are not part of the digital intervention but are implemented alongside it. These strategies generally include reminders, feedback, coaching and peer support delivered to users via the Internet (e.g. email, web-based software) or telephone (e.g. call, text message), with dissemination usually managed by a healthcare professional or an automated system.16,20 The present review focuses on this latter group of strategies because the former has already received greater empirical attention.17,18
To date, only one systematic review, by Alkhaldi and colleagues, 16 has examined the effectiveness of external technology-supported strategies in promoting user engagement with digital interventions aimed at improving either physical or mental health. They found that technology-supported strategies showed modest effects in promoting engagement compared to when no strategies were employed, as indicated by app-obtained engagement metrics such as the number of completed modules or activities, number of features accessed, number and frequency of log-ins or page views, or time spent. However, as only eight of the 14 digital interventions included in that review specifically targeted mental health, it was difficult to draw conclusions about the role of these strategies in improving engagement with interventions addressing mental health. Users with mental health problems may find it difficult to remain engaged with interventions as a result of mental fatigue and heightened stigma-related concerns. 21 Strategies that work for users of digital interventions targeting physical health might not generalise to users of DMHIs. Evidence of the effectiveness of engagement strategies in the context of DMHIs thus remains inconclusive on the basis of prior reviews, and a review of the current state of evidence is warranted. By focusing on DMHIs only, this study extends prior reviews of this general topic area. The specific aims of this narrative synthesis are to: (i) provide an overview of the types of technology-supported engagement strategies used to promote engagement with DMHIs, and (ii) describe the effectiveness of these strategies in promoting engagement.
Method
This study adheres to the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) reporting guidelines 22 and is registered with PROSPERO (CRD42020209380).
Search strategy
Four databases were searched from 1 January 1995 (when the first journal of internet interventions was published) to 1 October 2021: MEDLINE, Embase, PsycINFO and PubMed. The initial search strategy was built using Medical Subject Heading (MeSH) terms and keywords from a test set of five papers meeting study inclusion criteria that were obtained via manual search. Search terms were centred around four conceptual blocks: (i) engagement/adherence, (ii) digital interventions, (iii) mental health and (iv) study design. The final search strategy (refer Supplementary Material, Appendix 1) was developed and tested in MEDLINE. It achieved 100% sensitivity against the test set and was subsequently adapted for use in the other databases. Articles in languages other than English and grey literature were excluded.
Screening and selection
All citations were downloaded to Endnote X9 (Thomson Reuters) and duplicates were removed. Screening (titles and abstracts, and full-text records) was conducted by two authors (DZQG and LM). Discrepancies were resolved through discussion at each stage and, where necessary, a third author (MT) was consulted. Inter-rater consensus for full-text screening was acceptable (κ = 0·80).
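Agreement statistics such as the κ reported above can be computed directly from two screeners' paired include/exclude decisions. The sketch below is illustrative only – the helper function and the example decisions are hypothetical, not taken from this review – and implements Cohen's kappa, which discounts the agreement expected by chance from each rater's marginal label frequencies:

```python
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label proportions,
    # summed over labels.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)


# Hypothetical full-text screening decisions for 20 records:
# both raters agree on 18, disagree on the last 2.
rater_1 = ["include"] * 5 + ["exclude"] * 13 + ["include", "exclude"]
rater_2 = ["include"] * 5 + ["exclude"] * 13 + ["exclude", "include"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # → 0.76
```

Values of κ at or above 0.8 are conventionally read as strong agreement, which is consistent with the "acceptable" characterisation above.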
Inclusion criteria
Experimental (e.g. RCT, micro-randomised trial) or quasi-experimental studies which compared levels of engagement (operationalised as: the percentage of participants attaining a certain level of completion specified by the study authors; number of activities/modules completed; number and frequency of log-ins/site visits/page views; or time spent logged on) between DMHI users who received an engagement strategy and another group of users who did not receive that strategy were included. Studies that tested variations of a particular strategy (e.g. different types of reminder messaging), or explored the cumulative effect of multiple strategies, were also included. Studies with three or more treatment arms were eligible if they compared the effectiveness of (i) multiple types of engagement strategies, or (ii) alternative variations of one specific engagement strategy (e.g. types of text messaging content).
Exclusion criteria
To improve comparability, we excluded studies examining interventions that (i) were aimed at helping caregivers or health professionals care for individuals with mental health difficulties, (ii) consisted only of information or self-management activities (e.g. journal, mood tracking) without any therapeutic content, or (iii) were delivered via other digital media (e.g. fully text-based interventions, pre-recorded videos, DVD).
Studies were also not eligible if engagement strategies were examined in relation to non-digital interventions, if digital interventions did not primarily target mental health outcomes, or if engagement with DMHIs was measured using subjective methods (e.g. self-reported use) rather than objective methods (e.g. usage-related information that is digitally recorded on an app or digital device at the source of its origin).
Participants
There were no restrictions on characteristics pertaining to participants (e.g. age, race, gender), population (e.g. adult, youth) or setting (e.g. clinical, community).
Interventions
DMHIs had to deliver manualised therapeutic content via a technology-based medium such as the Internet or smartphone. There were no restrictions on mental health condition(s) targeted by DMHIs. Restrictions on the characteristics of DMHIs have been covered in the exclusion criteria.
Comparators
Comparator groups for eligible studies were those in which participants received one of the following: (i) little or no engagement support, (ii) engagement strategies that were not delivered using technology (e.g. face-to-face human support, physically mailed reminders) or (iii) alternative variations of a specific technology-supported engagement strategy.
Outcomes
The main outcome of interest was engagement with the DMHI, measured objectively as the number of features accessed, number of sessions/activities completed, number and frequency of log-ins/visits, time spent on the DMHI, completion/non-completion rates and rates of adherence to prescribed levels of use. Reports of one or more of these metrics were considered collectively as engagement.
Data extraction
Data from the included studies were extracted and recorded in a custom spreadsheet created using methods described in the Cochrane Handbook for Systematic Reviews of Interventions. 23 Key study information extracted included the following: (i) study design, (ii) characteristics of the study sample, (iii) characteristics of the DMHI, (iv) characteristics of the engagement strategy, (v) how engagement was measured and (vi) main study findings. One author (DZQG) extracted and recorded data for all the included papers. This was subsequently verified by another author (LM). Any disagreements were resolved through discussion, and if no consensus was achieved, a third author (MT) was consulted. Corresponding authors of studies were contacted by email for any clarification or missing information.
Risk of bias
The Cochrane Collaboration's Risk of Bias 2 (RoB2) 24 and Risk Of Bias in Non-randomized Studies – of Interventions (ROBINS-I) 25 tools were used to assess the risk of bias for RCT and non-RCT studies, respectively. Each of the five risk domains of the RoB2 was scored against a three-point rating scale corresponding to a low, unclear or high risk of bias. Each of the seven risk domains of the ROBINS-I was scored against a four-point rating scale corresponding to a low, moderate, serious or critical risk of bias. Risk of bias ratings were conducted independently by two reviewers (DZQG and LM). Discrepancies were resolved through discussion.
Results
Figure 1 summarises the systematic search process. The search yielded a total of 11,066 papers. After the removal of duplicates, the titles and abstracts of 6248 papers were screened for inclusion. Of these, 55 papers were identified as potentially eligible and underwent full-text screening. A total of 24 papers were eligible for inclusion; the remaining 31 papers were excluded at the full-text screening stage for the following reasons: 22 papers did not empirically evaluate the effect of an engagement strategy; five papers tested engagement strategies for non-DMHI interventions; one study did not contain data on the post-intervention effects of the engagement strategy; one study used subjective self-report engagement measures; one study did not measure engagement; and one study was a protocol.

PRISMA flow diagram of the study selection process.
Included studies
Characteristics of the included studies are shown in Table 1. The 24 included papers contained data from 23 unique studies.
Characteristics of included studies.
17 studies evaluated the effectiveness of a singular engagement strategy relative to either a non-technological engagement strategy 34 or no engagement strategy.27–29,32–36,39–41,43,44,46–49 Seven studies explored the incremental effect of an additional engagement strategy on top of an existing engagement strategy.26,27,40,42,43,45,48 Six studies investigated characteristics of engagement strategies potentially associated with their effectiveness. These included opt-in flexibility (allowing clients to opt in or out of receiving the engagement strategy),29,37 feedback response rate, 38 time of delivery, 31 type of content 30 and mode of delivery. 34
Risk of bias assessments
Risk of bias ratings for each study are provided in the Supplementary Material (Appendix 2). Among the 20 RCTs, 13 studies (65.0%) had a low risk of bias relating to randomisation sequence and allocation concealment. Eight studies (40.0%) were rated above low risk of bias for deviations from the intended intervention. 14 studies (70.0%) had a low risk of bias pertaining to incomplete outcome data. All studies were rated at low risk of bias from outcome measurement issues. Notably, only one study was rated to have a low risk of bias for selective reporting because the other studies did not have a published trial registration or protocol with an a priori statistical analysis plan. The four non-RCT studies were each assessed to have an overall risk of bias rating of “moderate” using the ROBINS-I tool, indicating that they were methodologically sound despite not having the same level of rigour as a high-quality RCT.
Characteristics of DMHIs
DMHIs targeted a range of mental health symptoms. The primary treatment targets of DMHIs in most studies (19 studies; 79.2%) were symptoms of depression (13 studies; 54.2%), anxiety (7 studies; 29.2%) and general psychological distress (6 studies; 25.0%). In the remaining five studies (20.8%), DMHIs targeted specific mental health conditions including social phobia (two studies), bipolar disorder (two studies) and bulimia nervosa (one study). The average length of the intervention was 10.9.
Characteristics of technology-supported engagement strategies
Effectiveness of strategies towards promoting engagement
Findings pertaining to the effectiveness of the engagement strategies for each study can be found in Table 2.
Study findings split by strategy type.
Discussion
This systematic review is the first to describe the characteristics of technology-supported strategies used to promote engagement with DMHIs and to summarise findings pertaining to their efficacy. Engagement strategies generally involved coaching, reminders, personalised information and peer support. Most strategies engaged users throughout the intervention period via email, online forums, mobile apps or phone calls at regular intervals – usually at least once a week. Characteristics of the strategies reviewed in this paper were similar to those in the studies reviewed by Alkhaldi and colleagues. 16 Overall, findings from the narrative synthesis indicated that evidence for the efficacy of technology-supported strategies in promoting engagement with DMHIs was inconclusive. Our findings also do not support the view that the use of multiple strategies has an additive effect on engagement. In addition, the results reported for each group of similar strategies were mixed. Thus, there is no evidence that a specific strategy or group of strategies is relatively more efficacious in promoting engagement than others. This contradicts previous research suggesting that human-centred strategies delivered by trained healthcare professionals, such as coaching or telephone calls, improve user engagement. 51
Studies that did not find support for the efficacy of their engagement strategy furnished several explanations for their findings. Suboptimal matching of strategies to client needs or preferences consistently emerged as a possible factor that may have hindered their effectiveness. For example, users may not have accessed strategies as intended if these were not delivered via users’ preferred medium.40,41,43 Users might also be less likely to connect with strategies pitched at a level that was not commensurate with their mental health needs. 39 In particular, reminder-based strategies may have been frowned upon by some users who perceived them as controlling 35 or lacking emotional validation. 46 Apart from one study, 26 there was no information on whether prospective users of the digital interventions were involved in developing these strategies. This absence of end-user perspectives in the design process may have had a negative impact on the potential efficacy of the strategies examined.
The studies reviewed identified a number of contextual factors that could have influenced the observed effects of engagement strategies. Although intended users of a given DMHI may have similar mental health needs, it is expected that they will differ in their preferences for the type of strategy received. 31 As a result, it is difficult to control the myriad of possible ways in which users could respond to the same strategy. For instance, engagement strategies may be successful for increasing help-seeking motivation; however, motivation may not necessarily translate into engagement with the digital intervention.41,43 Some strategies, such as those delivered via telephone call, may inadvertently provide users with convenient opportunities to discontinue with the DMHI or the research study. 35 Finally, levels of DMHI engagement in the intervention and control conditions were high in some studies,29,43 and this may have created a ceiling effect which made it difficult to detect meaningful gains in engagement that can be attributed to the strategy.
There were several variables not discussed by the studies reviewed that could have influenced engagement instead of the strategies evaluated. First, it is plausible that interactive features built directly into DMHIs for the purpose of encouraging user interaction might contribute more to engagement than the strategies discussed. 17 However, a recent meta-analytic review found that the number of in-built design elements in mHealth interventions for depression and anxiety was not positively associated with engagement. 18 The studies also did not discuss the extent to which user satisfaction with the DMHIs may have influenced engagement findings. This is important because user satisfaction with the content and design features of a DMHI may be positively associated with engagement. 52 Thus, it would not be possible to establish the effectiveness of a technology-supported engagement strategy if used in conjunction with a DMHI with poor user acceptability ratings. Following up with users who drop out of these interventions is needed to better understand the factors which might predict loss of engagement among the intended users for specific DMHIs. These insights might then be subsequently incorporated in the development of engagement strategies.
Implications
Several implications can be drawn from the findings of this review. As is the case with mental health interventions more broadly, the results suggest that there is no one-size-fits-all approach to promoting engagement with DMHIs. Tailoring strategy characteristics to match users’ needs, preferences and profiles is critical to designing engagement strategies that are acceptable and effective. Two recent research strands that have emerged in the digital mental health space are focused on identifying user-specific characteristics 53 and intervention-specific features 18 associated with engagement. Results from such studies can be applied to the context of engagement strategies, so as to guide the design and development of strategies that are acceptable and effective for specific groups of users.
This review also highlighted two research gaps that future studies on engagement with DMHIs are recommended to address. First, research aimed at elucidating the mechanism(s) through which engagement strategies increase use of DMHIs is scarce. This was noted by some studies in this review.33,49 A recent review of barriers and facilitators of engagement with DMHIs, which included 208 studies, highlighted that user-specific factors – such as having a sense of self-efficacy, insight into one’s own condition, feeling that the intervention is a good fit for one’s needs and being able to connect with others – are key factors that nudge users towards engaging with these interventions. 52 Thus, subsequent evaluations of engagement strategies should collect data assessing one or more of these potential facilitators. Second, future studies should strive to measure the extent to which a strategy was accessed by users. This is necessary as a manipulation check for treatment group(s) receiving the strategy and facilitates identification of potential dose–response associations between a given strategy and user engagement.
The existing literature is replete with examples of poorly conceptualised health research yielding outcomes of little use in meeting user needs.54,55 Collaboration with end-users should therefore constitute an essential part of the design, development and evaluation of engagement strategies for DMHIs. In the past few years, there has been growing recognition of the value of user input in the development of mental health technologies. In particular, principles of co-design – a participatory design approach whereby health professionals and end-users collaborate as equal partners – are increasingly being adopted in the design and development of digital mental health technologies, especially for younger users.56,57 There is some evidence that employing co-design approaches in the design and development of digital technologies may improve engagement by intended users. A pilot evaluation of iBobbly – an mHealth intervention that helps indigenous young people in Australia manage suicidal thoughts – found that 85% (34/40) of study participants completed all of the app’s learning modules. 58 Another example of an app co-developed with young people with lived experience of mental health difficulties is SPARX (Smart, Positive, Active, Realistic, X-factor thoughts). 59 Evaluation of SPARX revealed decent levels of engagement – close to 90% of participants completed over half of the modules, and 60% completed all modules. 59 Notwithstanding, the use of co-design methods may not always have a positive impact on user engagement. For instance, one study in this review 48 developed its strategy using a rigorous co-design process but found no effect on any measure of engagement relative to a no-strategy comparison group; however, the low overall engagement rates for the DMHI in that study suggest that engagement may be difficult to change if users do not find the intervention sufficiently engaging to begin with.
In summary, the role of co-design in developing digital technologies is an emerging field with promising areas for further investigation. 56 Evaluating the impact of co-design processes on the efficacy of engagement strategies for DMHIs will be essential for establishing their value in this context.
Limitations
Our study had several limitations. The studies reviewed differed in many ways, such as the type of strategy used, the type of DMHI that strategies were used in conjunction with, the type of engagement metrics used and the research methodologies employed to assess efficacy. The strategies reviewed also differed from one another in terms of their characteristics. Due to these sources of between-study variation, exploring aggregated levels of efficacy using meta-analysis was not feasible. In addition, the number of studies that explored the effect of multiple strategies or specific characteristics within individual strategies was small, thereby preventing conclusions from being drawn. Furthermore, the systematic search did not include non-English language papers and grey literature; thus, the risk of publication bias was not minimised. Notwithstanding, the earlier review by Alkhaldi and colleagues 16 noted that engagement with DMHIs is an emerging field of research where both positive and null findings are equally valued.
The lack of conclusiveness in the study findings may partly be due to the broader challenges associated with operationalising engagement with digital interventions. As a result, there was considerable variation in the types of indicators used to measure engagement. The studies reviewed mostly employed system usage data (e.g. modules completed, logins, time spent in app) as indicators of engagement. These indicators are objective and tangible measures of digital technology use, and analysis of usage patterns may reveal relationships between engagement and outcomes. 60 However, they may not account for user-specific factors – such as motivation to change, actual interaction with content and offline application of content in everyday life – which may be predictive of client outcomes rather than technology usage per se. 61 As a result, it has been proposed that valid measures of engagement incorporate objective usage combined with subjective user experience elements. 61 Establishing the construct validity of engagement is a critical area of research requiring further investigation in order to elucidate the role of engagement in digital interventions. 62 Achieving a clear operational definition of engagement is an essential precursor for the standardisation of engagement-related outcome measures. In addition, future studies should also report on follow-up time points. Collectively, these steps should increase the pool of data for future meta-analyses that will help to establish the effectiveness of engagement approaches.
It was not possible to discuss the efficacy of engagement strategies in relation to differences in health condition. The mental health conditions of participants in nearly all studies were not established using formal diagnostic criteria, and there was no way to validate differences in the severity of conditions. Future studies should endeavour to account for these differences in developing engagement strategies. Specifically, users with more complex or severe psychopathology might require simpler tasks/content that minimise cognitive burden, while users with milder symptoms may have the capacity to participate in a wider range of interactive content or activities.
Finally, it is noteworthy that all of the studies reviewed were conducted prior to the coronavirus (COVID-19) pandemic. It is possible that the increased mental health burden brought about by COVID-19 may have affected how people engage with DMHIs, given the greater need for mental health support that has stemmed from this global health crisis. A further review of engagement with DMHIs will be warranted in future when more studies undertaken following the pandemic are published.
Conclusions
Strategies to improve engagement with DMHIs are undoubtedly needed if their benefits are to be fully realised. Our findings show that technology-supported strategies deployed alongside DMHIs could be effective in promoting engagement with the latter. However, more evaluation studies are needed before definitive conclusions can be drawn. Given that the lack of user input may account for the mixed findings observed across studies, it is recommended that future efforts to improve engagement with DMHIs involve end-users in the development of engagement strategies. Other areas for future research include refining the construct validity of engagement, measuring the extent to which strategies are accessed by users, exploring possible mechanisms through which the strategies of interest bring about engagement and evaluating the impact of co-design methods in developing effective engagement strategies for DMHIs.
Supplemental Material
Supplemental material for “Technology-supported strategies for promoting user engagement with digital mental health interventions: A systematic review” by Daniel Z Q Gan, Lauren McGillivray, Mark E Larsen, Helen Christensen and Michelle Torok, published in Digital Health, is available online (sj-docx-1-dhj-10.1177_20552076221098268).
Footnotes
Acknowledgements
DZQG is supported by an Australian Government Research Training Program Scholarship and a Centre of Research Excellence in Suicide Prevention (CRESP) PhD Scholarship. HC is supported by a NHMRC Elizabeth Blackburn Fellowship. MT is supported by a NHMRC Early Career Fellowship.
Conflict of interest
The author(s) declared the following potential conflicts of interest with respect to the research, authorship and/or publication of this article: Lauren McGillivray, Mark E Larsen, Helen Christensen and Michelle Torok are employed by the Black Dog Institute (University of New South Wales, Sydney, NSW, Australia), a not-for-profit research institute that develops and tests digital interventions for mental health.
Contributorship
DZQG, LM, MEL, HC and MT designed the study. DZQG extracted and analysed the data, with assistance from LM and MT. DZQG and LM assessed study eligibility and quality. DZQG wrote the first draft of the manuscript. All authors contributed to the interpretation of results, revised the initial draft critically for important intellectual content and approved the final version of the manuscript.
Ethical approval
Not applicable, because this article does not contain any studies with human or animal subjects.
Funding
The author(s) received no financial support for the research, authorship and/or publication of this article.
Guarantor
MT
Supplemental material
Supplemental material for this article is available online.
References
