Abstract
Background:
Limited empirical evidence exists on the impact of adaptations that occur in implementing evidence-based practices (EBPs) in real-world practice settings. The purpose of this study was to measure and evaluate adaptations to an EBP (InSHAPE) for obesity in persons with serious mental illness in a national implementation in mental health care settings.
Methods:
We conducted telephone interviews with InSHAPE provider teams at 37 (95%) of 39 study sites during 24-month follow-up of a cluster randomized trial of implementation strategies for InSHAPE at behavioral health organizations. Our team rated adaptations as fidelity-consistent or fidelity-inconsistent. Multilevel regression models were used to estimate the relationship between adaptations and implementation and participant outcomes.
Results:
Of 37 sites interviewed, 28 (76%) made adaptations to InSHAPE (M = 2.1, SD = 1.3). Sixteen sites (43%) made fidelity-consistent adaptations, while 22 (60%) made fidelity-inconsistent adaptations. The number of fidelity-inconsistent adaptations was negatively associated with InSHAPE fidelity scores (β = −4.29; p < .05). A greater number of adaptations overall was associated with significantly higher odds of participant-level cardiovascular risk reduction (odds ratio [OR] = 1.40; 95% confidence interval [CI] = [1.08, 1.80]; p < .05). With respect to the type of adaptation, we found a significant positive association between the number of fidelity-inconsistent adaptations and cardiovascular risk reduction (OR = 1.59; 95% CI = [1.01, 2.51]; p < .05). This was largely explained by the fidelity-inconsistent adaptation of holding exercise sessions at the mental health agency rather than at a fitness facility in the community (a core form of InSHAPE) (OR = 2.52; 95% CI = [1.11, 5.70]; p < .05).
Conclusions:
This research suggests that adaptations to an evidence-based lifestyle program were common during implementation in real-world mental health practice settings even when fidelity was monitored and reinforced through implementation interventions. Results suggest that adaptations, including those that are fidelity-inconsistent, can be positively associated with improved participant outcomes when they provide a potential practical advantage while maintaining the core function of the intervention.
Plain language abstract:
Treatments that have been proven to work in research studies are not always one-size-fits-all. In real-world clinical settings where people receive mental health care, sometimes there are good reasons to change certain things about a treatment. For example, a particular treatment might not fit well in a specific clinic or cultural context, or it might not meet the needs of specific patient groups. We studied adaptations to an evidence-based practice (InSHAPE) targeting obesity in persons with serious mental illness made by teams implementing the program in routine mental health care settings. We learned that adaptations to InSHAPE were common, and that an adaptation that model experts initially viewed as inconsistent with fidelity to the model turned out to have a positive impact on participant health outcomes. The results of this study may encourage researchers and model experts to work collaboratively with mental health agencies and clinicians implementing evidence-based practices to consider allowing for and guiding adaptations that provide a potential practical advantage while maintaining the core purpose of the intervention.
Despite imperatives to maintain implementation fidelity when delivering an evidence-based practice (EBP) in routine care (Breitenstein et al., 2010), EBPs are often adapted from their original design when they are implemented in real-world settings to fit provider characteristics, organizational contexts, and service settings (Aarons et al., 2017; Chambers & Norton, 2016). A dynamic tension has developed in implementation science between two viewpoints: (a) emphasizing intervention fidelity—the delivery of an EBP as intended by model developers (Carroll et al., 2007) and (b) accommodating program adaptations—deliberate alterations to the design or delivery of an EBP, with the goal of improving its fit, effectiveness, and sustainability in a given context (Stirman et al., 2015, 2017). Some researchers have argued that too much deviation or “drift” from manualized protocols in real-world delivery of interventions may decrease the effectiveness of the intervention (Breitenstein et al., 2010). Bolstering this viewpoint is research documenting better outcomes for programs achieving higher fidelity to a program model (Bond & Drake, 2020; Durlak & DuPre, 2008). In contrast, others have emphasized that an over-reliance on quality assurance to prevent “program drift” can place extensive pressure on real-world practices to adhere to intervention protocols despite mismatches between intervention components and the population and context in which the intervention is implemented (Chambers et al., 2013).
Recently, there have been calls to balance the importance of implementation fidelity with the need for adaptation to accommodate differences across heterogeneous implementation settings, populations, and contexts (Allen et al., 2018; Pérez et al., 2016). From this perspective, fidelity and adaptation are intertwined, and some implementation frameworks accommodate adaptation to enhance program fit, for example, by customizing the intervention to local context while maximizing fidelity to core intervention components (Pérez et al., 2016). Chambers et al. (2013) argue that quality improvement processes focused on optimizing interventions through adaptation, rather than strictly trying to minimize deviations from the original model (i.e., fidelity), are ultimately more likely to achieve sustainment of an EBP.
Organizations and service providers sometimes prospectively plan an adaptation prior to implementing an EBP (Aarons et al., 2012; McGurk et al., 2015), making modifications before implementation because of local circumstances and/or clients’ needs and preferences. For example, Witheridge and colleagues (1982) adapted the Assertive Community Treatment (ACT) model (Stein & Test, 1980) by narrowing the admission criteria to clients who were frequent users of psychiatric hospitals and greatly curtailing the time commitment of psychiatrists specified in the original ACT model. Aarons et al. (2012) developed the Dynamic Adaptation Process to support adaptations to core forms (or activities) of an EBP in a proactive process that is data informed; involves fidelity guidance, monitoring, and support; and has stakeholders working together with model experts who can carefully guide the adaptations. More common are program adaptations that are unplanned and made in response to unanticipated obstacles. Such reactive adaptations may not be consistent with an intervention’s goals, research base, or theory and may or may not be aligned with the functions of an intervention that make it effective.
Depending on the nature of adaptations made to an EBP, they could have either a positive or negative impact on implementation and participant-level outcomes. Implementation science has been challenged to develop rigorous methods to specify and evaluate the impact of adaptations on desired outcomes. Stirman et al. (2013) developed a system for classifying the types of modifications made to interventions and programs implemented in routine care settings. Their coding scheme categorizes modifications as fidelity-consistent or fidelity-inconsistent (Stirman et al., 2015). Fidelity-consistent modifications do not alter core functions of the EBP and do not reduce the ability to differentiate between practices that adhere to the EBP and those that do not. In contrast, fidelity-inconsistent modifications reduce or preclude the delivery of core functions and/or decrease the ability to differentiate between treatments. According to this conceptualization of adaptation and modification (Stirman et al., 2015), fidelity-inconsistent modifications are less likely to be planned adaptations than are fidelity-consistent modifications (Aarons et al., 2012; Lee et al., 2008).
Despite the emergence of implementation frameworks that support the adaptation of EBPs (Chambers et al., 2013) and methodological advances in studying modifications and adaptations to interventions and programs (Stirman et al., 2013, 2015, 2019), there is limited empirical evidence regarding the impact of adaptations that occur in routine care. The purpose of this study was to identify and evaluate adaptations made to an evidence-based lifestyle program (InSHAPE) for adults with serious mental illness (SMI) by providers in real-world mental health care settings. The primary function or purpose of InSHAPE is to target changes in diet and physical activity through supported exercise and health coaching sessions that reduce cardiovascular risk in persons with SMI through weight loss and improved cardiorespiratory fitness.
The core forms of the InSHAPE model as specified by its purveyors include (a) a full-time health mentor, a certified personal fitness trainer employed by the mental health agency, who provides instruction and support for both exercise and healthy eating; (b) weekly 45–60 min in-person health mentoring sessions held in an integrated community setting open to the public to promote social inclusion, such as a local community gym, for participants enrolled less than 9 months (thereafter, the participant directs meeting frequency); (c) fitness facility memberships for all enrolled participants that are free or financially reasonable; (d) group celebrations at least three times per year to provide group reinforcement and encouragement; and (e) weekly 30 min supervision for the health mentor in delivering program components. In addition, mental health centers that implement InSHAPE are expected to develop community-based partnerships with providers of fitness programming and with other community resources, such as parks and recreation departments, university-sponsored cooperative extension services, and grocery stores.
Evidence supporting the multicomponent InSHAPE program as an EBP includes two randomized controlled trials (RCTs) and a statewide implementation study (Bartels et al., 2013, 2015, 2018). In the first RCT of the program (N = 133), 49% of program participants achieved clinically significant cardiovascular risk reduction (defined as ⩾5% weight loss or improved fitness) (Bartels et al., 2013). These findings were replicated in a second RCT (N = 210) conducted in two community mental health organizations in a large urban area serving an ethnically diverse population that also demonstrated that half of InSHAPE participants (51%) achieved clinically significant cardiovascular risk reduction (Bartels et al., 2015). Findings from a subsequent statewide implementation of InSHAPE in four community mental health centers resulted in reductions in cardiovascular disease risk similar to the previous RCTs of the InSHAPE intervention (Bartels et al., 2018).
In the current study, we identified adaptations made by InSHAPE teams participating in a national implementation study of the InSHAPE program, rated the adaptations as fidelity-consistent or fidelity-inconsistent according to Wiltsey Stirman’s framework (Stirman et al., 2015), and evaluated the relationships between fidelity-consistent and fidelity-inconsistent adaptations and implementation (i.e., fidelity) and participant-level (i.e., cardiovascular risk reduction) outcomes.
Method
This study of adaptations took advantage of a Hybrid Type 3 trial funded by the National Institute of Mental Health (NIMH) (R01 MH102325) that evaluated the effectiveness of a Virtual Learning Collaborative (VLC) compared with Technical Assistance (TA) on implementation and participant-level outcomes during the initial implementation of InSHAPE. In total, 55 mental health provider organizations were randomized to the implementation interventions and 39 sites were assessed at the 24-month follow-up period. Details about the study design for the implementation study have been published elsewhere (Aschbrenner et al., 2019). Briefly, following an in-person, group-based agency training on the manualized InSHAPE intervention, organizations were randomized to either 18 months of web-delivered monthly learning collaborative sessions or telephone-delivered individual technical assistance consisting of four scheduled conference calls over 18 months plus as-needed follow-up. Fidelity was emphasized in both implementation interventions through initial training, supervision, and ongoing discussion of core functions and forms of the InSHAPE lifestyle program. At the conclusion of the 24-month study, we invited each of the 39 remaining InSHAPE teams to participate in a 1-hr telephone interview to learn about their InSHAPE programs and any changes they made to the program since initial implementation.
Participants
Participants interviewed in the present study were supervisor-health mentor dyads staffing InSHAPE program teams at behavioral health organizations across the United States participating in the implementation study. InSHAPE teams were offered a US$50 stipend for participating in the interview that was audio-recorded with their verbal consent. Thirty-seven (95%) of the 39 study sites that were actively providing InSHAPE services at 24 months agreed to participate in the adaptation interviews, including 20 VLC sites and 17 TA sites that comprised our study sample at the organizational level. The InSHAPE participant sample included individuals enrolled in InSHAPE at the participating agencies who had met with a health mentor for 9 months or longer. The sample size at the InSHAPE participant level was N = 533 (3 months), N = 509 (6 months), N = 467 (9 months), and N = 358 (12 months). The Dartmouth College Committee for the Protection of Human Subjects approved the study procedures.
Materials
An Organizational Characteristics Survey assessed organization size (total operating budget, volume of patients served annually, and volume of SMI patients served annually; each measure was operationalized as a categorical variable with three levels—low, medium, and high—based on terciles); rural–urban classification (urban, suburban, rural, or mixed); payer mix (proportion of patients with mental illness who paid for services with Medicaid); use of a Medicaid Waiver Program to support the InSHAPE program (yes/no); use of funds from a federal block grant program to support the InSHAPE program (yes/no); and existence of a Primary and Behavioral Health Care Integration (PBHCI) program (yes/no).
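The tercile-based operationalization of organization size described above can be sketched as follows. This is an illustrative reconstruction rather than the study's analysis code, and the budget values are hypothetical.

```python
# Illustrative sketch (not the study's code): converting a continuous
# organizational measure (e.g., annual operating budget) into a
# three-level categorical variable (low/medium/high) based on terciles.
import pandas as pd

# Hypothetical operating budgets in US$ millions for nine organizations
budgets = pd.Series([5.7, 8.0, 12.0, 20.0, 35.0, 50.0, 64.0, 78.0, 90.0])

# pd.qcut splits the observed distribution into equal-sized groups
size_category = pd.qcut(budgets, q=3, labels=["low", "medium", "high"])
print(size_category.tolist())
```

With nine organizations, each tercile receives three sites; in the actual survey, the same cut applies to patient-volume measures as well.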
InSHAPE adaptations
Our strategy for identifying naturally occurring adaptations to the evidence-based InSHAPE model was based on a framework developed by Pérez et al. (2016). This framework identifies adaptation in the context of fidelity and is centered on the idea that fidelity and adaptation co-exist and that adaptations can affect an intervention’s effectiveness either positively or negatively. Within our modified framework, adaptations can be planned or unplanned and include (a) addition of new components and (b) modifications to an existing intervention component. We identified adaptations based on descriptors of fidelity to the core forms of the InSHAPE model specified by the model purveyors in the treatment manual and fidelity scale.
We held joint interviews with the InSHAPE supervisor and health mentors at each agency where we asked them to describe how they currently delivered each of the InSHAPE program components at their agency and whether this differed from the original model as described in the treatment manual that was provided to them at the initial InSHAPE training. Follow-up questions explored how and why the team made adaptations to InSHAPE. We asked additional questions about any enhancements or changes to the model (e.g., use of technology, integration of the program within other services at the agency).
We used a three-step process to identify a final list of InSHAPE adaptations: (1) the two researchers who conducted the telephone-based interviews at each site generated a preliminary list of adaptations after reviewing the interview transcripts, using the InSHAPE treatment manual and fidelity scale to help determine whether the changes described rose to the level of adaptation to the program; (2) three InSHAPE model experts who led the InSHAPE training and supervision then reviewed the preliminary list of adaptations generated by the researchers; and (3) the researcher-model expert team then discussed the preliminary list to identify a final list of program adaptations, again using the InSHAPE treatment manual and fidelity scale as a reference to settle any disagreements. Disagreements about the classification of adaptations were few and were resolved through consensus among the three model experts and the two researchers.
Once a final list of adaptations was identified, the model experts rated them by applying Wiltsey Stirman’s framework and coding system for classifying adaptations to EBPs as fidelity-consistent or fidelity-inconsistent (Stirman et al., 2013). According to the Wiltsey Stirman framework, fidelity-consistent adaptations do not alter core components of the EBP; we identified fidelity-consistent modifications as those that did not alter core forms (or activities) of the InSHAPE model. In contrast, fidelity-inconsistent adaptations reduce or preclude the delivery of core components; we identified fidelity-inconsistent adaptations as those that changed core forms (or activities) of InSHAPE.
InSHAPE Fidelity Scale
The 22-item InSHAPE Fidelity Scale assesses core forms of program implementation in the domains of staffing, organization, and services. The scale has seven sections: (1) Staffing and Organization; (2) Initial Health and Fitness Assessments; (3) Creation of InSHAPE Plan; (4) Health Mentor Meetings with Participants; (5) Ongoing Health and Fitness Assessments; (6) Group Celebrations; and (7) Community Integration. Items are rated on a 5-point behaviorally anchored scale ranging from 1 (not implemented) to 5 (fully implemented); for example, weekly review of exercise objectives is scored 5 if 90%–100% of required exercise logs are completed, whereas a score of 1 represents 0%–24%. A rating of 4 or higher is considered good fidelity, between 3 and 4 fair fidelity, and below 3 poor fidelity (Bond & Drake, 2020). The 22 items yield a total score ranging from 22 to 110, with higher scores indicating better fidelity. According to fidelity scoring conventions (Bond & Drake, 2020), to achieve good fidelity an InSHAPE program must receive a total score of at least 88 (80% of the maximum score, equal to a mean rating of 4 across the 22 items). We assessed program fidelity at 6 and 12 months during the implementation period. The information used to assess fidelity was submitted through a combination of de-identified paperwork emailed or faxed to study staff and survey data entered remotely by health mentors at each agency using a web-based data collection program, equipped with a service activity log and participant outcome measures, installed on a secure, password-protected tablet.
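The scoring conventions described above reduce to simple arithmetic over the 22 item ratings. The following sketch is a hypothetical helper, not the study's software; the item ratings in the example are invented.

```python
# Hedged sketch of the InSHAPE fidelity scoring arithmetic: 22 items rated
# 1-5, total score 22-110, with a total of at least 88 (mean rating of 4)
# counting as good fidelity, mean 3 to <4 as fair, and below 3 as poor.

def classify_fidelity(item_ratings):
    """Return (total_score, fidelity_level) for 22 item ratings on a 1-5 scale."""
    assert len(item_ratings) == 22 and all(1 <= r <= 5 for r in item_ratings)
    total = sum(item_ratings)
    mean_rating = total / 22
    if mean_rating >= 4:
        level = "good"
    elif mean_rating >= 3:
        level = "fair"
    else:
        level = "poor"
    return total, level

# A program rated 4 on every item exactly meets the good-fidelity benchmark:
print(classify_fidelity([4] * 22))  # -> (88, 'good')
```

The study's observed mean fidelity of 86.8 sits just under this 88-point threshold, which the Results section notes as slightly below the good-fidelity benchmark.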
Participant-level outcome
Cardiovascular risk reduction
Cardiovascular risk reduction was defined as achieving clinically significant weight loss of ⩾5% body weight or clinically significant increase in cardiorespiratory fitness of >50 m on the 6-Min Walk Test (6-MWT) since baseline (Bartels et al., 2013, 2015). Health mentors assessed InSHAPE participants’ weight and submitted the data to the research team on a weekly basis. Health mentors also administered the 6-MWT to participants enrolled in the InSHAPE program and submitted these data to the research team quarterly. The 6-MWT measures the distance an individual can walk in 6 min. In obese adults, the 6-MWT is a reliable and valid measure of cardiovascular fitness with favorable test–retest and discriminant validity (Beriault et al., 2009; Larsson & Reynisdottir, 2008). An increase in distance of >50 m on the 6-MWT is associated with clinically significant reduction in risk for cardiovascular disease (Rasekaba et al., 2009). We assessed cardiovascular risk reduction during the implementation period (i.e., 3, 6, 9, and 12 months).
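The binary outcome defined above can be expressed as a small decision rule. This is an illustrative helper (the function name and inputs are ours, not the study's), directly encoding the two clinical criteria.

```python
# Illustrative sketch of the cardiovascular risk reduction outcome:
# clinically significant weight loss (>= 5% of baseline body weight) OR a
# clinically significant gain on the 6-Minute Walk Test (> 50 m over baseline).

def achieved_risk_reduction(baseline_kg, current_kg,
                            baseline_6mwt_m, current_6mwt_m):
    """True if either clinical criterion for risk reduction is met."""
    weight_loss_pct = (baseline_kg - current_kg) / baseline_kg * 100
    walk_gain_m = current_6mwt_m - baseline_6mwt_m
    return weight_loss_pct >= 5 or walk_gain_m > 50

# A participant losing only 4% of body weight but gaining 60 m on the
# 6-MWT still meets the fitness criterion:
print(achieved_risk_reduction(100.0, 96.0, 400.0, 460.0))  # -> True
```

Because the criteria are joined by OR, a participant qualifies through either weight loss or fitness gain alone.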
Statistical design
Summary statistics were presented for the organizational characteristics—mean, SD, median, and interquartile range for continuous variables and proportions for categorical variables. The total instances and distribution across sites for adaptations were reported, overall and by type of adaptation (i.e., fidelity-consistent or fidelity-inconsistent). At the organizational level, the association between adaptations and fidelity score over time (i.e., 6 and 12 months) was estimated using a two-level mixed effects linear regression model with time at Level 1 and sites at Level 2. The randomization group (VLC vs. TA) and time were included as fixed effects, and a random effect at the site level was used to account for the correlated observations over time due to the repeated measures. To assess the relationship between adaptations and cardiovascular risk reduction over time (i.e., 3, 6, 9, and 12 months), a three-level mixed effects logistic regression model was estimated, with time at Level 1, clients at Level 2, and sites at Level 3. Randomization group (i.e., VLC vs. TA), time, and the logarithm of the baseline values of body weight and cardiorespiratory distance on the 6-MWT were included as fixed effects; a random intercept was included at the client level to account for the within-client correlation over time due to the repeated measures, and another random intercept was used for sites to adjust for clustering of clients within sites. Moreover, heteroskedasticity-robust standard errors clustered at the site level were reported to account for the nesting of clients within sites.
For each study outcome, we estimated four different models, each including a different independent variable in terms of the adaptations: (a) total number of adaptations (Model 1); (b) total number of fidelity-consistent adaptations (Model 2); (c) total number of fidelity-inconsistent adaptations (Model 3); and (d) distinct adaptations that were inconsistent with the core InSHAPE model (i.e., exercise at mental health agency, hybrid group and one-to-one health mentoring sessions, less frequent celebrations, and health mentor not a certified personal trainer) (Model 4). A maximum likelihood estimation approach was used to address missing data (Enders, 2010; Jennrich & Schluchter, 1986). All analyses were performed using Stata 15.1 (StataCorp, 2017).
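The structure of the two-level fidelity model (Models 1-3 above) can be sketched in code. The study itself used Stata 15.1; the sketch below instead uses Python's statsmodels MixedLM on synthetic data, so all variable names, effect sizes, and values are hypothetical stand-ins for illustration only.

```python
# Hedged sketch of a two-level mixed effects linear regression: fidelity
# score regressed on the number of adaptations, with randomization group
# and time as fixed effects and a random intercept per site. Synthetic
# data; the planted slope of -4 loosely mirrors the reported association.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for site in range(30):
    n_adapt = int(rng.integers(0, 5))      # site-level count of adaptations
    group = int(rng.integers(0, 2))        # 0 = TA, 1 = VLC (hypothetical)
    site_effect = rng.normal(0, 2)         # random site intercept
    for month in (6, 12):                  # repeated fidelity assessments
        fidelity = 92 - 4 * n_adapt + site_effect + rng.normal(0, 1)
        rows.append(dict(site=site, month=month, group=group,
                         n_adapt=n_adapt, fidelity=fidelity))
df = pd.DataFrame(rows)

# Fixed effects for adaptations, group, and time; random intercept by site
model = smf.mixedlm("fidelity ~ n_adapt + group + month", df,
                    groups=df["site"])
result = model.fit()
print(result.params["n_adapt"])  # expected to be negative, near the planted -4
```

The three-level logistic model for the participant outcome adds a client-level random intercept nested within sites; statsmodels has weaker support for multilevel logistic models, which is one reason a package such as Stata (as used in the study) or R's lme4 is a common choice there.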
Results
The characteristics of the behavioral health organizations interviewed at the conclusion of the 24-month randomized trial are presented in Table 1. A plurality of organizations were located in urban areas (48.7%) and over one-fifth in rural areas. The mean operating budget among organizations with high financial resources was US$78 million compared with US$5.7 million among the low-resource organizations. High-volume organizations served an average of 23,713 patients compared with 6,434 patients in medium-volume and 2,271 patients in low-volume organizations. Medicaid was the primary payer for patients with mental illness (64.3%).
Organizational characteristics of study sites.
SMI: serious mental illness.
The implementation study team identified adaptations across 28 of 37 sites interviewed (76%) with a total of 58 instances of adaptations to InSHAPE (M = 2.1, SD = 1.3); 24 instances of adaptations were rated as fidelity-consistent across 16 sites (43%), and 34 instances were rated as fidelity-inconsistent across 22 sites (60%). Using technology to enhance and support InSHAPE, which model experts judged to be fidelity-consistent, was the most common type of adaptation (12 instances across eight sites), followed by offering participants individual and group health mentoring sessions (5 instances across five sites), and holding health mentoring/exercise sessions at the mental health center instead of a fitness facility located in the community (5 instances across five sites), both of which were judged as fidelity-inconsistent by the model experts (see Figure 1).

Distribution of Adaptations to the InSHAPE Program.
With respect to the primary implementation outcome, the InSHAPE fidelity score averaged over the 6- and 12-month time points during the implementation phase was 86.8 (SD = 11.7), which is slightly below the benchmark for good fidelity (fidelity = 88). Table 2 reports the results from the two-level mixed effects linear regression models examining the association between adaptations and InSHAPE fidelity scores, with each column presenting results from a separate model. A greater number of adaptations overall was associated with significantly lower fidelity scores (β = −2.26; p < .05). In particular, more fidelity-inconsistent adaptations were associated with significantly lower fidelity scores (β = −4.29; p < .05) during the InSHAPE implementation phase.
Results of the multivariate regression of adaptations on InSHAPE fidelity score.
Note. Beta coefficient and standard errors (in parentheses) are reported; each column represents a separate multilevel linear regression model. Fidelity score is based on 22 items and ranges from 22 to 110. VLC = Virtual Learning Collaborative; TA = Technical Assistance.
*p < .05. **p < .01. ***p < .001.
Examining the association between distinct adaptations that were inconsistent with the core forms of the InSHAPE model and fidelity scores, we found that fewer InSHAPE celebrations and having a health mentor who was not a certified personal trainer (a requirement for the health mentor position) were associated with significantly lower mean fidelity scores (β = −5.39; p < .001 and β = −7.78; p < .05, respectively).
Regarding the primary participant-level outcome, the proportion of clients who achieved cardiovascular risk reduction averaged over the 3-, 6-, 9-, and 12-month time points was 50.6%. The results from the multilevel logistic regression models estimating the association between adaptations and the participant-level outcome of cardiovascular risk reduction are presented in Table 3, with each column representing a separate model. We found that a greater number of adaptations overall was associated with significantly higher odds of cardiovascular risk reduction (odds ratio [OR] = 1.40; confidence interval [CI] = [1.08, 1.80]; p < .05). With respect to the relationship between the type of adaptation and cardiovascular risk reduction, fidelity-consistent adaptations were not significantly associated with cardiovascular risk reduction. However, we observed a significant positive association between the number of fidelity-inconsistent adaptations and cardiovascular risk reduction (OR = 1.59; CI = [1.01, 2.51]; p < .05). Exploring potential mechanisms underlying the association between fidelity-inconsistent adaptations and cardiovascular risk reduction, we found that holding health mentor exercise sessions at the mental health facility rather than at the community fitness center was associated with a more than 2.5-fold increase in the odds of cardiovascular risk reduction (OR = 2.52; CI = [1.11, 5.70]; p < .05).
Results of the multilevel logistic regression of adaptations on cardiovascular risk reduction.
Note. Odds ratio and confidence interval (CI; in brackets) are reported; each column represents a separate multilevel logistic regression model; baseline weight and 6-min walk distance in feet were included as covariates in each model. Cardiovascular risk reduction was defined as achieving clinically significant weight loss of ⩾5% body weight or clinically significant increase in cardiorespiratory fitness of >50 m on the 6-Min Walk Test (6-MWT) since baseline. Our initial client sample was N = 1,867. We dropped a total of seven observations (N = 2 at 3 months; N = 1 at 6 months; N = 1 at 9 months; N = 3 at 12 months) since the baseline walk distance was missing for these observations; the final sample size for the multilevel regression models was 1,860. VLC = Virtual Learning Collaborative; TA = Technical Assistance.
*p < .05. ***p < .001.
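To give a sense of scale for the largest effect reported above, an odds ratio can be translated into probabilities. The worked example below is our illustrative arithmetic, not study code; it applies the reported OR of 2.52 to the overall 50.6% rate of cardiovascular risk reduction.

```python
# Illustrative arithmetic: converting an odds ratio into the implied
# probability, starting from a baseline probability. Values taken from
# the Results above (baseline rate 50.6%, OR = 2.52 for on-site sessions).

def apply_odds_ratio(p_baseline, odds_ratio):
    """Probability implied by multiplying baseline odds by an odds ratio."""
    odds = p_baseline / (1 - p_baseline)        # convert probability to odds
    new_odds = odds * odds_ratio                # scale odds by the OR
    return new_odds / (1 + new_odds)            # convert back to probability

p = apply_odds_ratio(0.506, 2.52)
print(round(p, 3))  # -> 0.721
```

Under this rough reading, holding exercise sessions on-site corresponds to raising the chance of risk reduction from about 51% to about 72%, though the model-based estimate adjusts for covariates and clustering that this back-of-envelope calculation ignores.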
Discussion
The current study leveraged a national implementation study of the evidence-based InSHAPE lifestyle program for adults with SMI to investigate adaptations made to the EBP by InSHAPE teams in real-world mental health agencies and to evaluate their relationship with implementation and participant-level outcomes. EBPs are often adapted from their original design when they are implemented in real-world settings to fit provider characteristics, organizational contexts, and service settings (Aarons et al., 2017; Chambers & Norton, 2016). However, limited empirical evidence exists on the relationship between adaptations and implementation and participant outcomes in real-world practice settings. The results of this study showed that adaptations to InSHAPE were common during implementation; 28 of 37 sites (76%) interviewed for this study had made at least one adaptation to InSHAPE. Fidelity-consistent adaptations were not associated with implementation or participant-level outcomes. However, fidelity-inconsistent adaptations were significantly associated with lower fidelity scores and significantly higher odds of achieving cardiovascular risk reduction. The positive association between fidelity-inconsistent adaptations and cardiovascular risk reduction was largely explained by the fidelity-inconsistent adaptation of holding exercise sessions at the mental health agency rather than at a fitness facility in the community (a core form of the InSHAPE model). While this adaptation altered the delivery of InSHAPE, it did not change the primary function of InSHAPE, which is to help individuals with SMI lose weight and improve fitness through exercise and dietary change supported by a health mentor.
The current study also advances measurement in the area of adaptation in implementation science with the development and pilot testing of a method for identifying and rating adaptations made by organizations and providers in routine care that engages model experts and researchers in a collaborative measurement process. Existing implementation science frameworks are silent on who determines whether an adaptation was made and how to rate it. However, rigorous and relevant approaches to measuring adaptation may need to involve model experts who have experiential, real-world knowledge of an EBP working together with researchers who can guide the team to follow protocols for rating adaptations based on established intervention manuals and fidelity measures.
Agencies and providers made adaptations to InSHAPE despite access to guidance, instruction, and support for implementing an EBP with high fidelity during a Hybrid Type 3 trial. Prior research by Stirman et al. (2015) investigated adaptations to cognitive behavioral therapy (CBT) by interviewing clinicians 2 years after they received training and consultation to implement CBT with high fidelity. Clinicians made, on average, 1.1 fidelity-consistent adaptations and 3.3 fidelity-inconsistent adaptations to CBT when implementing the model in real-world practice. Clinicians who met baseline CBT training success criterion reported more fidelity-inconsistent adaptations by the 2-year follow-up. The researchers speculated that clinicians who attained the competencies necessary to implement CBT were better able to recognize when they were departing from the protocol and thus accurately reported these changes to the study team.
Despite participating in implementation training and coaching designed to optimize fidelity, a majority (60%) of InSHAPE teams made fidelity-inconsistent adaptations without adversely affecting participant-level outcomes. This finding raises the possibility that agencies and providers may deliberately make fidelity-inconsistent adaptations that change the form of an EBP (but not the function) because they believe doing so may improve their ability to deliver the intervention and potentially result in a positive impact on participant outcomes. Although our study supports the potential of providers to make fidelity-inconsistent adaptations that do not diminish participant outcomes, it does not suggest that changes can be made to the core function of an EBP. Unfortunately, the effectiveness of an EBP implemented in real-world practice settings can be compromised when agencies and providers make inappropriate adaptations to EBPs that go unreported to research teams or model experts and result in a loss of downstream participant benefits. This underscores the need for guided adaptation processes, such as the Dynamic Adaptation Process (Aarons et al., 2012), where model experts and agency stakeholders work together toward a data-informed collaborative approach to making adaptations that maintain the core functions of the model while meeting local needs.
Through multivariate modeling, we explored potential mechanisms underlying the positive association between fidelity-inconsistent adaptations and cardiovascular risk reduction among InSHAPE participants. We found that the fidelity-inconsistent adaptation of holding exercise sessions at the mental health agency was significantly associated with higher odds of achieving cardiovascular risk reduction. Achieving strict adherence to the "form" of an intervention (e.g., requiring that weekly health mentoring sessions take place at a fitness facility open to the public, such as a local community gym, at every site) may be counterproductive in cases where tailoring intervention strategies to context might be more effective (e.g., allowing sites to adapt where the health mentor sessions are held to fit available resources within the local setting while maintaining the core function of supported exercise to improve fitness) (Perez Jolles et al., 2019). InSHAPE model experts rated the adaptation of holding exercise sessions at the mental health agency as fidelity-inconsistent because the InSHAPE intervention protocol and the underlying philosophy of the program embrace social inclusion through integrated exercise activities outside a mental health agency as key to initiating and sustaining lifestyle change in this group. The seemingly paradoxical finding of a relationship between fidelity-inconsistent adaptations and positive health outcomes for participants in this study highlights the need for future research to measure and evaluate intervention strategies that maintain fidelity to the function or purpose of EBPs while adapting the form of highly specified core intervention components.
Study limitations
A limitation of this study was that the period for observing adaptations coincided with agency participation in a larger implementation study testing two strategies for implementing InSHAPE with high fidelity. Agency teams participating in the study were implementing InSHAPE under the close supervision of the original developers of the model and implementation interventionists who continuously monitored and reinforced fidelity to the InSHAPE model. Agency teams may have followed the InSHAPE model closely because of their participation in the study and the good-faith intention of helping the research team conduct the study as planned. The adaptation interviews took place with InSHAPE teams within 6 months of finishing participation in the implementation interventions. Thus, it is possible that adaptations, in particular fidelity-inconsistent adaptations, were minimized in this context. A future study with a longer follow-up period after the implementation intervention phase may be more revealing of InSHAPE adaptations.
Another limitation of the present study is that we were not able to identify the timing of the InSHAPE adaptations during the course of the study. We asked InSHAPE teams to recall any adaptations they made to the intervention during the course of the 2-year implementation study when we interviewed them 24 months after they began implementing the InSHAPE program. InSHAPE participants were able to enroll in InSHAPE anytime during the 18 months the study sites were receiving the implementation interventions. In this study, we examined the relationship between adaptations and participant outcome data collected through the first 12 months of their participation in the InSHAPE program. It is possible that sites made InSHAPE adaptations after participant outcome data were collected. The study also lacked objective data on exercise or physical activity at the participant level that could have been used to further explore the association between the fidelity-inconsistent adaptation of holding exercise at the mental health agency and cardiovascular risk reduction. Therefore, the results of this study should be interpreted with caution. Other limitations include the lack of a validated fidelity scale and possible selection bias in the participants included in the outcome analysis.
Conclusions
This research lays a foundation for future work examining the relationship between the type of adaptation to EBPs (e.g., fidelity-consistent vs. fidelity-inconsistent) and outcomes by generating a framework and method for exploring these relationships in the context of implementation in routine care. Considering the nuances of adaptations to the function or form of an EBP may advance our understanding of the potential positive impact of fidelity-inconsistent adaptations on outcomes. Future longitudinal research should investigate adaptations made to EBPs at different phases of the implementation process, from pre-implementation to sustainment, to determine when adaptations are most likely to occur, whether they are planned or unplanned, and how these factors affect implementation and participant outcomes. In addition, future research could advance our understanding of factors that influence the sustainability of EBPs by applying the Dynamic Sustainability Framework proposed by Chambers et al. (2013) to examine whether adaptations (by type) have an impact on program sustainability.
Authors’ contributions
KAA, GRB, and SJB conceptualized and designed the adaptation study. SIP was a major contributor in detailing the role of the model experts in rating adaptations to the InSHAPE lifestyle program. KJ and GW played major roles in the data coding process. SB assembled the data and performed all statistical analyses. All authors read and approved the final manuscript.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was supported by an NIMH-funded administrative supplement to R01 MH102325.
Ethical approval and consent to participate
The Dartmouth College Committee for the Protection of Human Subjects approved the protocol for this project. Participants were asked to provide verbal consent to participate in the telephone interviews after reviewing an information sheet detailing the study procedures.
Availability of data and material
Data sharing is not applicable to this article as no new outcome data were created or analyzed in this study.
