Abstract
This editorial introduces the multiphase optimization strategy (MOST), a principled framework for the development, optimization, and evaluation of multicomponent interventions, to the field of implementation science. We suggest that MOST may be integrated with implementation science to advance the field, moving closer towards the ultimate goal of disseminating effective interventions to those in need. We offer three potential ways MOST may advance implementation science: (1) development of an effective and immediately scalable intervention; (2) adaptation of interventions to local contexts; and (3) optimization of the implementation of an intervention itself. Our goal is to inspire the integration of MOST with implementation science across a number of public health contexts.
Behavioral interventions are met with implementation challenges practically from inception. A researcher must create an intervention that not only is efficacious or effective (simplified to effective herein for readability), but also can readily be adopted, disseminated, and implemented with fidelity in applied settings. In fact, the ubiquitous goal of intervention scientists is to develop an intervention that is both effective and scalable.
Traditionally, intervention scientists have employed what we will call the classical treatment package approach. In this approach, the first step is to identify an effective intervention using rigorous and resource-intensive experimental designs, such as the randomized controlled trial (RCT), in which any number of components are combined into a package and tested against a suitable comparator (e.g., control or standard of care). To achieve the desired outcome of an effective intervention, the researcher may be tempted to include a large number of components in the hope that each one will incrementally improve the outcome of interest. However, in most cases there is a direct tradeoff between effectiveness and implementability. Each added component increases the complexity of the intervention and its consumption of resources (e.g., the intervention's cost, the burden on participants, and/or the provider time required to deliver the intervention). It follows that with each added component, the adoption, dissemination, and implementation of the intervention may become more difficult, more tenuous, or even less likely. Moreover, the experimentation done to evaluate the intervention typically does not investigate which components are actually contributing to the outcome, the magnitude of their contributions, or how the components may work together.
This approach of adding components without regard to the resource demands they make and not investigating their individual contributions can create several types of problems for implementation science. First, it is likely that one or more of the many components included in the intervention are not having a detectable effect on the outcome and thus are extraneous. In our experience, the intervention science field has to date paid relatively little attention to whether or not an intervention contains extraneous components. Yet the inclusion of even one extraneous component may have an immediate deleterious impact on implementation, because that component is using resources without returning a benefit to participants and/or stakeholders.
Second, if the level of resources required to deliver the intervention as designed exceeds the resources available in a particular setting, it may be necessary to sacrifice one or more of the components to make the intervention affordable or to respond to other local constraints. It is this process that poses the greatest threat to the implementation fidelity of an evidence-based intervention (Carroll et al., 2007). When the performance of individual components is unknown, it is unclear which can be removed with the least impact on effectiveness. The components selected for removal may be largely responsible for any effect observed in the RCT, and as a result the revised intervention is less effective or even ineffective.
Third, if the individual and combined contributions of components are never estimated, there is little opportunity to grow understanding about what does and does not work, and about what variables are mediating and moderating component effects. This makes it difficult for the field to develop sophisticated theory and establish a coherent base of scientific knowledge, both of which are essential for successful adaptation of interventions to local circumstances and incremental improvement of interventions over time.
What if instead researchers routinely assessed the effectiveness of individual intervention components? What if the researcher could use this information to explicitly manage the tradeoff between effectiveness and the need to work within implementation constraints? What if the design of interventions could be responsive in a principled manner to constraints on local resources? The multiphase optimization strategy (MOST) is a principled framework for the development, optimization, and evaluation of multicomponent interventions. Using MOST, a researcher is able to strategically balance effectiveness with implementation constraints such as affordability, scalability, and efficiency. The purpose of this editorial is to introduce MOST to the field of implementation science and to inspire the integration of MOST and implementation science across a number of public health topics, including mental health, substance use, and other addictive behaviors.
A brief introduction to MOST
Informed by principles from the fields of engineering, behavioral science, and health economics, MOST is a framework for not only the development and evaluation of multicomponent behavioral, biobehavioral, and biomedical interventions, but also optimization of these interventions (Collins, 2018). The process of optimization is designed to achieve an intervention that strategically balances effectiveness against constraints such as affordability, scalability, and efficiency. MOST comprises three phases: preparation, optimization, and evaluation.
The preparation phase lays the groundwork for optimization. The researcher uses theory and empirical literature to identify candidate components hypothesized to have a desirable effect on the outcome of interest, typically through a mediator or chain of mediators. The conceptual model expresses the way in which each intervention component is hypothesized to affect behavior leading to the desired outcome. To learn more about the development of an empirically and theoretically derived conceptual model, we refer readers to two applied examples in the fields of HIV/AIDS prevention (Collins et al., 2016) and STI prevention (Kugler et al., 2018). At this point, the researcher may consider conducting pilot studies to ascertain the acceptability or feasibility of components prior to experimentation in a larger trial. Lastly, the researcher will specify the optimization objective: the operational definition of what optimization means for the intervention at hand, such as the best expected outcome obtainable without exceeding a specified limit on cost or staff time.
Next, in the optimization phase, the intervention components are subjected to a rigorous randomized experiment (i.e., the optimization trial), frequently a highly efficient factorial design, to estimate the individual and combined effects of the components on the outcome. Guided by these estimates and the optimization objective, the researcher selects the combination of components and component levels that constitutes the optimized intervention, which can then be evaluated against a suitable comparator in the evaluation phase.
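To illustrate how a factorial optimization trial yields component-level effect estimates, the sketch below simulates a noise-free 2^3 factorial design with three hypothetical components and recovers each component's main effect by least squares. All effect sizes and the outcome model are invented for illustration; real data would of course include error and standard errors.

```python
# Sketch: effect estimation in a hypothetical 2^3 factorial optimization trial.
import itertools
import numpy as np

# Effect-coded design matrix: -1 = component off, +1 = component on.
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

# Hypothetical truth: baseline outcome of 50; the three components contribute
# 3, 0, and 2 points respectively (the second component is extraneous).
true_effects = np.array([3.0, 0.0, 2.0])
outcomes = 50.0 + design @ true_effects  # noise-free for a clean illustration

# Estimate the intercept and main effects via ordinary least squares.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, outcomes, rcond=None)

print(coef)  # ≈ [50., 3., 0., 2.]: the extraneous component is exposed
```

The point of the sketch is that the factorial structure lets a single trial estimate every component's contribution, including a contribution of zero, which is exactly the information the classical treatment package approach never produces.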
MOST and the goals of implementation science
We offer three ways MOST could potentially advance implementation science. First, as described above, MOST can be applied with the objective of producing interventions that are already implementable and, thus, scalable when they are brought to evaluation in an RCT. MOST enables the researcher to optimize an intervention so that it is as efficacious as possible within the constraints of an implementation setting. For example, consider the development and optimization of a smoking cessation intervention for delivery in emergency departments. The researcher could begin by interviewing emergency department staff to determine what are the important constraints on implementation. Suppose staff say the main concern is constraints on staff time in busy emergency department settings, and the upper limit on available time averages about ten minutes. Then the optimization objective could be to identify which subset of components being examined produces the best expected outcome without exceeding an upper limit of ten minutes of staff time required. Of course, the success of such an approach hinges on a realistic assessment of the constraints that define scalability. If the main consideration is in fact staff time, and ten minutes is an accurate upper limit, then the intervention produced using this approach would be immediately implementable and scalable. It is possible that there is no combination of the components being examined that can produce a detectable effect without requiring more than ten minutes of staff time. If this turns out to be the case, the researchers will need to return to the preparation phase of MOST to reconsider the conceptual model and identify some new intervention components. Fortunately, this is not a return to “square one,” because they will know from the results of the optimization trial whether any of the components will be worth retaining, and which ones should be revised or possibly discarded.
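The selection step in the emergency department example amounts to a small constrained optimization: choose the subset of components with the best estimated outcome whose total delivery time stays within the staff-time budget. A minimal brute-force sketch follows; the component names, effect estimates, and per-component minutes are entirely hypothetical, and summing effects assumes interactions are negligible.

```python
# Hypothetical per-component effect estimates (e.g., from an optimization
# trial) and staff minutes required for delivery; all values are illustrative.
import itertools

components = {
    # name: (estimated effect on cessation, staff minutes required)
    "brief_advice":   (2.0, 3),
    "nrt_starter":    (3.5, 4),
    "quitline_refer": (1.5, 2),
    "follow_up_call": (2.5, 6),
}

def best_subset(budget_minutes):
    """Enumerate all subsets; return the one with the largest summed
    effect whose total staff time does not exceed the budget."""
    best, best_effect = (), 0.0
    for r in range(1, len(components) + 1):
        for subset in itertools.combinations(components, r):
            effect = sum(components[c][0] for c in subset)
            minutes = sum(components[c][1] for c in subset)
            if minutes <= budget_minutes and effect > best_effect:
                best, best_effect = subset, effect
    return best, best_effect

print(best_subset(10))  # the optimized intervention under a 10-minute cap
```

With four components the exhaustive search is trivial; with many components or interaction effects included, the same optimization objective can be pursued with more sophisticated search methods, but the logic is unchanged.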
Second, MOST can be applied to help with some aspects of adapting interventions to local contexts. Suppose the emergency department-based smoking cessation intervention is to be implemented widely across the nation. Emergency departments may differ considerably in the number of staff, how busy they are, and how much time they can spare to deliver the intervention; or they may differ in the levels of other resources. This kind of heterogeneity may suggest that the optimization objective will be different in different hospitals. If it can be assumed that the results of the optimization trial generalize across all of these hospitals (the same assumption that would be considered with respect to the results of an RCT), then the same data can be used to optimize interventions according to the optimization objectives most relevant to each setting. For example, perhaps a particularly well-staffed emergency department can spare as much as 15 minutes of staff time for intervention delivery. As another example, perhaps for a different emergency department staff time is ample but funds to support delivery of certain components, e.g., nicotine replacement therapy, are limited. It is straightforward to show that this kind of adaptation to local resource levels is not simply a matter of identifying a "Cadillac" intervention and removing one component here and two more there (Collins, 2018). Rather, different locations may be best served by interventions made up of partially non-overlapping subsets of components.
Third, MOST can be used to optimize implementation (or dissemination) of an intervention itself. This is likely to be most useful if implementation can be considered a kind of wrap-around intervention, distinct from the core intervention, with its own components and component levels. An optimization trial can be undertaken to assess the performance of individual components of implementation, and then to identify the optimized implementation procedure based on the optimization objective and the results of the trial. Like all intervention optimization, this approach requires a detailed conceptual model at the outset, in this case a model of the factors resulting in effective implementation. In some cases, important outcomes will be defined at an aggregate level, such as at the hospital, church, medical practice, physician, or classroom teacher level. This may require the use of multilevel optimization trial designs (Nahum-Shani et al., 2018). As is common in all research on aggregate units, achieving the desired level of statistical power may be challenging, but the efficiency of the factorial experimental design or variations thereof (e.g., Sequential Multiple Assignment Randomized Trial; SMART) may offset this to an extent. Interested readers are referred to applied examples in optimizing implementation activities in a smoking cessation program (Fernandez et al., 2020) as well as mental health programs implemented in schools (Kilbourne et al., 2018), the Veteran's Health Administration (Kilbourne et al., 2013), and community-based outpatient clinics (Kilbourne et al., 2014).
The future of implementation science and MOST
The field of implementation science has made a concerted effort in the past few decades to improve upon the traditional approach to intervention development. It is increasingly acknowledged that researchers should consider implementability as they develop an intervention. First, because the RCT setting often does not reflect the applied setting, implementation science has encouraged the use of hybrid effectiveness-implementation experimental designs (Landes et al., 2020). Hybrid designs adopt a dual focus on clinical effectiveness and implementation with the goal of accelerating the translational process from research to practice (Curran et al., 2012). Second, the field has developed a plethora of frameworks, including but not limited to the Consolidated Framework for Implementation Research (Damschroder et al., 2009), RE-AIM (Glasgow et al., 2019), and EPIS (Aarons et al., 2011), which seek to better understand the contexts in which interventions are implemented or mechanisms by which an intervention may be implemented. Though these frameworks certainly address some of the complexities of the goals of implementation science and increase the potential for translation, the need to empirically improve the effectiveness of an intervention while balancing implementation constraints remains.
In this editorial, we have proposed that the integration of MOST with implementation science has the potential to considerably advance the field by providing a framework for achieving the desired balance of effectiveness and implementability. We view MOST as complementary to hybrid effectiveness-implementation designs and implementation frameworks. We look forward to seeing the ways in which MOST may be integrated with these (and other) aspects of implementation science. We offered three concrete ways in which we believe MOST can be used to advance implementation science; more may emerge in the future because this remains an open area of research. MOST has been, and continues to be, applied across a number of public health priorities of interest to the implementation science community.
We encourage others to share their ideas and strategies for integrating optimization and implementation science through this call for papers: https://journals.sagepub.com/page/irp/collections/intervention-optimisation-cfp.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
This work was supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (HD089922) and the National Institute on Drug Abuse (DA049699). The content of this work is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
