Abstract
Background
The implementation strategies used to support the delivery of interventions during efficacy and effectiveness studies are rarely reported. Tracking and reporting implementation strategies during these phases has the potential to improve future research studies and real-world implementation. We present an exemplar of how this might be executed by specifying and reporting the implementation strategies that were used during a school-based efficacy trial, Project POWER, which tested a trauma-informed prevention program delivered by a university research team, community members, and school staff facilitators in 29 schools.
Methods
Following the conclusion of the 4-year trial, core Project POWER research team members identified the implementation strategies that supported intervention delivery during the trial using an established taxonomy of school-based implementation strategies. The actors, actions, action targets, temporality, dose, and implementation outcomes were specified using established implementation strategies reporting guidelines.
Results
The research team identified 37 implementation strategies that were used during the Project POWER trial. Most strategies fell within the categories of Train and Educate Stakeholders, Use Evaluative and Iterative Strategies, and Develop Stakeholder Interrelationships. Actors included members of the research team and partner schools. Strategies were used multiple times during the preparation and implementation phases. Action targets were most often characteristics of individuals, implementation process, and characteristics of the inner setting. Strategies predominantly targeted the implementation outcomes of fidelity, acceptability, feasibility, and adoption.
Conclusions
This study provided evidence that implementation strategies are used and can be identified in efficacy research using a retrospective approach. Identifying and specifying implementation strategies used during the initial phases of the translational research pipeline can inform the implementation strategies that are carried forward, adapted, or discontinued in future trials and routine practice to improve implementation and effectiveness outcomes.
Plain Language Abstract
Intervention development and testing often occur separately from implementation planning. However, evaluating an intervention without considering how it will subsequently be used in real-world settings is a major factor contributing to the research-to-practice gap. During the rigorous testing of interventions, research teams invest significant effort and resources to ensure their program is delivered as intended so that its benefits can be assessed. However, the methods or techniques used to support implementation (i.e., implementation strategies) are often not measured, specified for use and evaluation in later research, or included with the intervention materials distributed to stakeholders; this is a missed opportunity. This study identifies and describes the implementation strategies used during a large school-based research trial of a universal trauma-informed prevention program delivered by a university research team, community members, and school staff. In collaboration with the trial’s research team, we identified 37 implementation strategies that were used during the trial and defined how each strategy was used, including the actions (i.e., things done), the people who carried out the strategies, the targets of the actions, when and how often during the implementation process the strategies were used, and which implementation outcome(s) each strategy was expected to impact. Explicating implementation strategies during early phases of intervention research in schools can inform which implementation supports to carry forward, adapt, or discontinue in future studies and routine practice.
Introduction
In response to the established gap between intervention development and uptake into practice across clinical and public health settings (Ennett et al., 2003; Gottfredson & Gottfredson, 2002; Hicks et al., 2014; Shelton et al., 2018), there has been an increased focus on identifying implementation strategies to support adoption, implementation, and sustainment of evidence-based practices (EBPs) in real-world settings (Powell et al., 2019a). Implementation strategies are the methods or techniques used to enhance the adoption, implementation, and sustainability of a given intervention (Proctor et al., 2013). They include discrete (i.e., single component) and multifaceted strategies targeting implementation factors at multiple levels (Powell et al., 2019a). Existing taxonomies describe over 70 implementation strategies for use in health care (i.e., Expert Recommendations for Implementing Change [ERIC]; Powell et al., 2015; Waltz et al., 2015) and educational settings (i.e., School Implementation Strategies, Translating ERIC Resources [SISTER]; Cook et al., 2019). Established reporting guidelines further inform the operationalization of implementation strategies to support replication (Proctor et al., 2013).
Within the traditional translational research pipeline (efficacy, effectiveness, dissemination and implementation), implementation strategies are emphasized and investigated as part of implementation studies after an intervention has demonstrated efficacy and effectiveness (Brown et al., 2017; Lane-Fall et al., 2019). Existing work that has applied the criteria for specifying and reporting implementation strategies has been exclusively situated within the implementation phase of translational research (e.g., Boyd et al., 2018; Bunger et al., 2017; Huynh et al., 2018; Perry et al., 2019; Rogal et al., 2017). However, limiting implementation strategies research to the latter phases of the translational research pipeline may contribute to delays in interventions achieving public health impact (Rudd et al., 2020). Rigid adherence to the traditional translational research process may perpetuate the research-to-practice gap, particularly when efficacious interventions are later found to be incompatible with real-world service delivery. Scholars increasingly urge that interventions be designed and tested with future dissemination and implementation in mind (Lane-Fall et al., 2019).
Interventions that are developed and tested with later implementation as a priority may be more readily adopted and implemented to scale (Rudd et al., 2020). The goal of clinical efficacy and effectiveness research is to produce an evidence-based intervention that is successfully disseminated to, adopted by, and implemented by stakeholders in real-world settings. However, few published clinical research studies sufficiently report the information needed to subsequently implement interventions (Premachandra & Lewis, 2021); this information is typically not prioritized until implementation trials or hybrid effectiveness-implementation studies (Curran et al., 2012). Identifying implementation outcomes and the implementation strategies used to achieve these outcomes earlier in the translational research pipeline (i.e., during the efficacy phase) is aligned with calls to design interventions for future implementation (Lane-Fall et al., 2019) and may be valuable for enhancing real-world implementation (Arnold et al., 2020; Rudd et al., 2020). Despite their underreporting in the literature, implementation strategies are still often used during efficacy and effectiveness studies to achieve implementation, service, and health outcomes (Curran, 2020; Stevens et al., 2020). For example, it is common practice to track and evaluate the fidelity of program implementation during intervention trials to ensure interventions have been delivered as intended (e.g., Gould et al., 2014). Yet, the extensive resources (i.e., implementation strategies) directed toward achieving intervention fidelity are not explicated as part of standard research trials or in the resulting literature; this is a missed opportunity. Investigating implementation strategies during the efficacy phase of intervention research has potential to improve future effectiveness, hybrid effectiveness-implementation, and implementation trials, especially in educational settings in which school partners may be involved in the implementation process during early phases of research.
Schools are recognized as a prominent mental healthcare system for children in the United States (Duong et al., 2020; Jacob & Coustasse, 2008), particularly children of color and children from families with low income who often have less access to mental health services (Alegria et al., 2010). Young people spend most of their time in school, and mental health interventions can be integrated within school curricula or made available to identified students with particular challenges (Atkins et al., 2010; Domitrovich et al., 2010; Masten, 2003). Educational settings—characterized by principals’ organizational leadership, professionals in varied roles working to achieve common or related goals, and a unique calendar that influences all aspects of service delivery—are rich contexts in which to conduct implementation research (Owens et al., 2014). Fidelity is the most commonly assessed and reported aspect of implementation in school-based mental health research (Rojas-Andrade & Bahamondes, 2019). Some researchers have evaluated school-based mental health intervention adoption (Arnold et al., 2020), teacher-based program delivery (Franklin et al., 2012; Han & Weiss, 2005), and sustainability (Arnold et al., 2021; Herlitz et al., 2020), as well as characterized facilitators and barriers to successful implementation (Beidas et al., 2012; Eiraldi et al., 2015; Locke et al., 2017; Powell et al., 2019b).
Implementation scientists have advocated for an increased focus on identifying and testing implementation strategies in settings where mental health services are delivered to children (Novins et al., 2013; Powell et al., 2014). Across mental health service contexts, however, few studies have assessed or reported the extent to which they employ the range of implementation strategies identified in common implementation taxonomies, such as ERIC and SISTER, and none of this work has been conducted within the context of an efficacy study or in school settings (e.g., Boyd et al., 2018; Bunger et al., 2017). Identifying the implementation strategies used within school-based intervention research is therefore imperative for realizing recommendations to tailor implementation strategies to their intended contexts to advance implementation, service, and child outcomes (Powell et al., 2017; Boyd et al., 2018).
Current study
Recognizing the opportunities present in educational settings to implement EBPs that prevent and address youth mental health needs (Lyon & Bruns, 2019) and to model designing for implementation during early phases of the translational research pipeline (Lane-Fall et al., 2019), this study specifies and reports the implementation strategies used during the testing of a universal trauma-informed prevention program for middle school youth. This study employed the SISTER implementation strategy taxonomy (Cook et al., 2019) and Proctor and colleagues’ (2013) reporting guidelines to identify, describe, and operationalize the implementation strategies used during the school-based Project POWER efficacy trial (Mendelson et al., 2020). This study provides a unique examination of the use of implementation strategies during school-based efficacy research, with the goal of informing recommendations for investigating and reporting implementation strategies during the initial phases of the translational research process.
Method
Study context
Project POWER (Mendelson et al., 2020) was a 4-year efficacy trial that tested RAP Club, a universal trauma-informed prevention program for middle school youth, against a comparison program, Healthy Topics, in 29 Baltimore City public schools.
RAP Club was adapted as a school-based prevention program from Structured Psychotherapy for Adolescents Responding to Chronic Stress (SPARCS; DeRosa et al., 2006; DeRosa & Pelcovitz, 2009), an evidence-based group trauma treatment. The core components of SPARCS and RAP Club are evidence-based mindfulness and cognitive behavioral therapy strategies, augmented by psychoeducation about the effects of stress and trauma. RAP Club was adapted to have a prevention rather than treatment focus and included young adult community members as program cofacilitators (i.e., “mentors”) to enhance trust and buy-in from participants. Pilot research conducted in two Baltimore City Public Schools supported RAP Club’s feasibility, acceptability, and potential benefits (Mendelson et al., 2015).
Although the research team delivered both programs, a unique feature of the Project POWER trial was that the team engaged school stakeholders in training, program delivery, and supervision to build the school’s capacity to continue delivering the programs after study participation ended. The research team partnered with each participating school for one year and worked with 7–9 schools each year for four academic years (2016–2017 to 2019–2020). School mental health personnel (e.g., psychologists, social workers, or counselors) and/or teachers who were selected by the principal received training in the RAP Club curriculum immediately prior to the start of the school year. They attended and assisted with RAP Club sessions at their school and were invited to join weekly phone supervision sessions with the group leaders and project staff. Teachers with interest or expertise in health (e.g., health or physical education teachers) received training in the Healthy Topics curriculum and were engaged in the same manner with program delivery and supervision as the RAP Club trainees. The role of school staff during the intervention trial was to observe the modeling of program delivery by research staff and participate in weekly supervision. Throughout this article we refer to partnering school staff as “cofacilitators” in recognition of Project POWER’s goal to equip these stakeholders with knowledge and skills to support their continued use of the RAP Club and Healthy Topics interventions following the trial; the amount of cofacilitation varied across school staff members.
Data collection procedures
The Project POWER trial was approved by the Institutional Review Board (IRB) at Johns Hopkins University. Procedures for this study, which involved discussions with team members and review of study documents, were executed within the parent IRB as part of ongoing research team operations. Data for this study were obtained through meetings with 10 Project POWER trial research team members, each with varied years of experience with the trial. Four team members were involved in the trial from project initiation (2016–2020), three for the final two years of the trial (2018–2020), and three for the final year of the trial (2019–2020). Team members represented multiple roles, including principal investigator, project scientist, project coordinator, research assistant or associate, intervention group leader or mentor, and data manager. The number of team members present at these meetings fluctuated between four and ten, depending on their availability and expertise regarding implementation across school sites. The first authors (SM, KA) of this study participated in the Project POWER trial as a Healthy Topics intervention group leader (SM) and a RAP Club mentor (KA) for two years and one year, respectively. Our knowledge of the trial and each intervention’s delivery enhanced our understanding and coding of the research team’s data.
We used a group consensus building process following the conclusion of the Project POWER trial. The final cohort of Project POWER schools completed the implementation of RAP Club and Healthy Topics in November 2019. In December 2019, two weeks prior to the first meeting with the trial’s research team, the first authors distributed the SISTER taxonomy (Cook et al., 2019) to senior research team members (n = 6) along with the definition of each school implementation strategy and ERIC ancillary material to reference (Powell et al., 2015). Each team member was asked to record the implementation strategies that they thought were used during the trial and to come to the team meeting prepared to discuss the strategies with other team members.
From mid-December 2019 to mid-February 2020, the first authors met five times with the Project POWER research team members to name, define, and operationalize the implementation strategies that were used during the Project POWER trial. The first two sessions focused on naming and defining the implementation strategies that were used based on the SISTER taxonomy. The name and definition of each of the 75 strategies from the SISTER taxonomy were presented individually to team members. Individuals endorsed whether the strategy was used during the trial, and group consensus regarding strategy use was reached through moderated discussion facilitated by the first authors. Team members discussed activities congruent with the target strategy; once a majority of team members agreed that activities consistent with the target strategy’s definition had been performed, the strategy was recorded as used during the trial.
The three subsequent meetings with the trial’s research team were focused on operationalizing the identified strategies using the implementation strategy reporting guidelines developed by Proctor and colleagues (2013). The first authors recorded notes from each meeting into an Excel spreadsheet containing the name and definition of each identified SISTER strategy and columns for the seven implementation strategy reporting domains (i.e., actor, action, action target, temporality, implementation outcome, dose, justification; Proctor et al., 2013). The database was populated during group discussions and displayed in real time for team members to view and correct for accuracy.
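As an illustration of this structure, the sketch below models one row of such a specification database in code. It is a minimal example, assuming hypothetical field names that mirror the seven Proctor et al. (2013) reporting domains recorded in the spreadsheet; the example values are illustrative and are not the research team’s actual entries.

```python
# Minimal sketch of one strategy record, mirroring the spreadsheet columns
# described above. Field names and example values are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class StrategySpecification:
    name: str                           # SISTER strategy name
    definition: str                     # SISTER strategy definition
    actors: List[str]                   # who enacted the strategy
    actions: List[str]                  # what was done
    action_targets: List[str]           # conceptual targets (later coded to CFIR)
    temporality: List[str]              # implementation phase(s) when used
    dose: str                           # e.g., "once" or "weekly"
    implementation_outcomes: List[str]  # outcomes the strategy was expected to affect
    justification: str = ""             # rationale; not recoverable retrospectively here

# Illustrative entry only (paraphrased definition, assumed values):
example = StrategySpecification(
    name="Provide ongoing consultation/coaching",
    definition="Provide ongoing coaching support for implementers",
    actors=["Intervention group leaders", "Project staff"],
    actions=["Weekly phone supervision sessions with school cofacilitators"],
    action_targets=["Knowledge and beliefs about the intervention"],
    temporality=["Implementation"],
    dose="Weekly",
    implementation_outcomes=["Fidelity", "Acceptability"],
)
```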
Each meeting lasted for 1 h, except for one meeting that lasted for 2 h (6 h total). The resulting implementation strategies and operational definitions were reviewed by all ten research team members prior to data analysis. Finally, we reviewed the Project POWER grant proposal to determine whether any of the implementation strategies that were identified by the research team were also outlined in the project proposal.
Data analysis
The first authors individually cleaned and summarized the data collected from the trial’s research team for each identified implementation strategy and determined a coding scheme for the action target, temporality, and implementation outcome domains. We coded each strategy’s conceptual action target using constructs from the Consolidated Framework for Implementation Research (CFIR; Damschroder et al., 2009). The first authors’ training in implementation determinant frameworks and experience with Project POWER implementation informed the process of linking each strategy to conceptual targets within the five CFIR domains. We coded temporality using the four established stages of implementation (i.e., exploration, preparation, implementation, sustainment) and developed final codes for each domain via consensus. Data were recoded as needed throughout the iterative coding process. The resulting data file, including all codes, was sent to all research team members to review and verify for accuracy and completeness before any further analysis; no team members disputed the accuracy of the data or codes. We summarized each strategy’s actions but could not assess the frequency with which each occurred. Descriptive statistics were used to explore and describe the identified SISTER strategies’ actors, action targets, temporality, implementation outcomes, and dose. Identified strategies were also sorted into one of four categories based on the importance (i.e., the impact of the strategy and how critical it is for implementation) and feasibility (i.e., how practical the strategy is for supporting implementation) ratings reported by school implementation leaders in Lyon et al.’s (2019) study: both important and feasible, important but not feasible, feasible but not important, and neither feasible nor important.
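To make the analytic step concrete, the sketch below shows how the descriptive summaries and the Lyon et al. (2019)-style importance/feasibility categorization could be computed from coded strategy records. It is a minimal example under assumed data structures; the record fields, function names, and example values are hypothetical and do not reproduce the study’s actual data.

```python
# Illustrative tallying of coded domains across strategy records.
from collections import Counter

def summarize(strategies):
    """Percent of strategies endorsing each implementation outcome and phase.
    Each strategy is a dict with 'implementation_outcomes' and 'temporality' lists."""
    n = len(strategies)
    outcomes = Counter(o for s in strategies for o in s["implementation_outcomes"])
    phases = Counter(p for s in strategies for p in s["temporality"])
    return ({k: round(100 * v / n, 1) for k, v in outcomes.items()},
            {k: round(100 * v / n, 1) for k, v in phases.items()})

def categorize(important, feasible):
    """Assign one of the four importance/feasibility categories."""
    if important and feasible:
        return "both important and feasible"
    if important:
        return "important but not feasible"
    if feasible:
        return "feasible but not important"
    return "neither feasible nor important"

# Two illustrative records (not the study's actual data):
records = [
    {"implementation_outcomes": ["Fidelity", "Acceptability"],
     "temporality": ["Preparation", "Implementation"]},
    {"implementation_outcomes": ["Fidelity"],
     "temporality": ["Implementation"]},
]
print(summarize(records))                          # Fidelity: 100.0%, etc.
print(categorize(important=True, feasible=False))  # "important but not feasible"
```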
Results
Congruent with Hooley et al.’s (2020) recommendations, Supplemental File 1 provides an implementation strategy description table that includes each implementation strategy identified in this study and its operational definition, as specified using the Proctor et al. (2013) categories. Data and codes generated from this table were used in this study’s analyses.
Summary of school implementation strategies used during the Project POWER trial
The Project POWER research team reported that 37 of the 75 SISTER strategies were used during the four-year trial (Table 1). Most of the employed strategies were in the SISTER categories of Train and Educate Stakeholders (n = 7), Use Evaluative and Iterative Strategies (n = 7), and Develop Stakeholder Interrelationships (n = 5). Four strategies each were within the Adapt and Tailor to Context and Support Educators categories. The categories of Change Infrastructure, Engage Consumers, and Use Financial Strategies each had three strategies, and only one strategy from the Provide Interactive Assistance category was identified.
Table 1. Implementation strategies used during the Project POWER efficacy trial.
Note. IES = Institute of Education Sciences; NICHD = National Institute of Child Health and Human Development; SISTER = School Implementation Strategies, Translating ERIC Resources. Table footnotes flag whether an implementation strategy was included in the Project POWER grant proposal, changed during the Project POWER trial, or had actor(s) that included school partners.
Of these 37 implementation strategies, 27 (73%) were referenced in the trial’s funded grant proposal. Eleven (29.7%) strategies’ actions were reported to have changed during the trial (see Table 1). The nature of these changes was largely to improve implementation strategy delivery (e.g., moved from group-based to individualized consultation to accommodate school partner schedules). In describing why implementation strategies were changed during the trial, most team members noted reasons consistent with increasing stakeholder engagement in either the implementation strategy itself (e.g., ongoing consultation) or in the intervention delivery (e.g., provided smaller incentives more frequently, rather than a large incentive at the end of the intervention, to increase demand and expectations for implementation). Some modifications to implementation strategy use were related to preserving the fidelity of intervention delivery (e.g., obtained formal commitments from school staff in year 4 of the trial using a contract) or integrity of fidelity data needed to evaluate the intervention at the conclusion of the trial (e.g., changed fidelity logs to capture cofacilitator attendance and participation in intervention sessions).
When comparing the strategies that were used during the Project POWER efficacy trial to school implementation leaders’ ratings of SISTER strategies’ feasibility and importance, most (57%) strategies used during the present trial were within the important and feasible category in the Lyon et al. (2019) study. However, several strategies used during our trial were in the categories of important but not feasible (19%; e.g., access new funding, alter student or school personnel obligations, improve implementer’s buy-in, precorrection prior to implementation) or neither important nor feasible (19%; e.g., alter and provide individual- and system-level incentives, change record systems, obtain formal commitments, test-drive and select practices). Few strategies in the feasible but not important category (5%; i.e., remind school personnel, tailor strategies) were used during the trial.
Actors
Implementation strategy actors during the Project POWER trial included the research team, principal investigator, coinvestigators, community partners, intervention developers/trainers, project coordinator, data manager, intervention group leaders, community mentors, school principals, and school cofacilitators (see Table 2). Across employed implementation strategies, 90% of reported actors were members of the research team. Trial-employed intervention group leaders or a collective of research team members were the most frequent types of actors.
Table 2. Actors from the research team and partner schools.
Of the 37 identified implementation strategies, nine included school partner actors: two strategies (i.e., mandate for change and alter student or school personnel obligations) were enacted only by school partners, and seven (19%) involved actors from both the research team and partner schools. These implementation strategies spanned six of the nine SISTER categories (i.e., Adapt and Tailor to Context, Change Infrastructure, Develop Stakeholder Interrelationships, Engage Consumers, Train and Educate Stakeholders, Use Financial Strategies; see Table 1). School partner actors included school administrators and school staff cofacilitators.
Actions
Actions comprising each implementation strategy used during the trial are thoroughly described in Supplementary File 1 and summarized in Table 1. Most strategies were composed of several actions (see Table 1). For example, increasing demand and expectations for implementation involved: (a) informing school principals, school staff cofacilitators, and research team-employed group leaders during recruitment and training about the interventions’ structure and potential benefits for their students (based on data from pilot trials); (b) reviewing implementation plans for each lesson with school staff cofacilitators during intervention implementation and identifying concrete portions of the lesson that each cofacilitator would lead; and (c) developing written expectations for participation in intervention sessions and tying participation to financial incentives during the final year of the trial to increase school staff cofacilitator engagement in leading intervention groups.
Action target and dose
An average of 2.81 conceptual action targets was identified per implementation strategy. The most common action targets were in the CFIR determinant domains of characteristics of individuals (38.5%; e.g., knowledge and beliefs about the intervention), followed by implementation process (36.5%; e.g., engaging key stakeholders and formally appointed implementation leaders), and characteristics of the inner setting (21.2%; e.g., available resources). Only 3.8% of strategies targeted characteristics of the intervention (e.g., adaptability), and none targeted the outer setting (see Supplementary Tables 1 and 2).
To specify dose, we examined whether each implementation strategy was used once or multiple times during each year of intervention implementation during the trial. Most implementation strategies were used multiple times (n = 24), with 13 strategies used only once (see Supplementary File 1). Of those strategies used multiple times, four were used during each intervention group session (i.e., capture and share local knowledge, monitor the progress of the implementation effort, precorrection prior to implementation, and shadow other experts), and three were used weekly (i.e., audit and provide feedback, capture and share local knowledge, provide ongoing consultation/coaching). We were unable to more precisely quantify the frequency of strategies used more than once. Refer to Supplementary File 1 and Supplementary Tables 1 and 2 for more detailed results for these domains.
Implementation outcome
The research team reported that the implementation strategies used during the Project POWER trial were most likely to impact the implementation outcomes of fidelity (64.9%), acceptability (54.1%), feasibility (29.7%), and adoption (27.0%). Fewer implementation strategies were reported as likely to impact sustainability (16.2%), penetration (5.4%), or cost (2.7%), and none were indicated as likely to impact appropriateness. Overall, implementation strategies were reported as likely to affect an average of two implementation outcomes.
Temporality
Figure 1 summarizes implementation strategies used by stage of implementation (i.e., exploration, preparation, implementation, sustainment). Most implementation strategies were used during the implementation (83.7%) and preparation (54.1%) phases. Fourteen (37.8%) strategies were used during both the preparation and implementation phases, 6 (16.2%) only during the preparation phase, and 17 (45.9%) during only the implementation phase. Strategies were not identified for the exploration or sustainment phases.

Figure 1. Implementation strategies, by School Implementation Strategies, Translating ERIC Resources (SISTER) name, used during each stage of implementation of the Project POWER trial.
Discussion
Implementation strategies are traditionally emphasized in the latter phases of translational research (Brown et al., 2017) and are rarely measured or reported by clinical (i.e., efficacy and effectiveness) researchers (Rudd et al., 2020). This study illustrated that implementation strategies are used and can be identified in efficacy research. We found that 37 implementation strategies were used during the Project POWER efficacy trial of a universal, trauma-informed prevention program for middle school youth. Implementation strategies were identified using the school-adapted SISTER implementation strategy taxonomy (Cook et al., 2019) and operationalized using Proctor et al.’s (2013) implementation strategies reporting guidelines; we reported findings for eight of the nine reporting categories (all except justification). The retrospective method we used provides an exemplar for specifying and reporting implementation strategies used during school-based efficacy research.
A range of implementation strategies, spanning multiple actors, outcomes, and implementation phases, were used during the Project POWER trial. Nearly half (37) of the 75 SISTER implementation strategies were used during the trial; most strategies (27) were described, although not labeled, in the trial’s grant proposal. This is noteworthy given that implementation strategies are rarely measured or reported in efficacy research. The number of strategies is consistent with research specifying and reporting implementation strategies used during implementation studies (i.e., 11–45; Boyd et al., 2018; Bunger et al., 2017; Huynh et al., 2018; Rogal et al., 2017) and with a recent illustration within an effectiveness study (20; Rudd et al., 2020). We extended this research by examining whether implementation strategies were adapted over the course of the four-year Project POWER trial. Eleven strategies were changed, suggesting that implementation in efficacy trials is dynamic and iterative in support of these studies’ goals to achieve internal validity via appropriate implementation.
Congruent with the efficacy context of this study, implementation strategies were reported as most likely to impact the implementation outcomes of fidelity, acceptability, feasibility, and adoption and were used only during the preparation and implementation phases. Although no strategies targeted appropriateness, it is important to assess the intervention’s fit for the intended population and context in which it will be delivered or sustained. Interestingly, approximately 16% of strategies targeted sustainability as an implementation outcome, which is typically not a priority of efficacy trials. Nevertheless, it remains important to plan and build the capacity for the sustainment of promising interventions beyond the research context to maximize public health impact and promote health equity (Arnold et al., 2021). These findings underscore the need for collaboration among clinical and implementation researchers to understand how implementation strategies are employed during the early phases of translational research (Rudd et al., 2020).
A unique aspect of the Project POWER trial was the involvement of school stakeholders in intervention delivery. Most strategies used during the trial were consistent with the types of implementation strategies that are well established in educational settings (i.e., SISTER categories of Train and Educate Stakeholders, Use Evaluative and Iterative Strategies, and Develop Stakeholder Interrelationships; e.g., Lyon & Bruns, 2019). As expected in the context of an efficacy trial, implementation strategies were primarily enacted by members of the research team and used the trial’s financial and personnel resources; only 10% of the implementation strategy actors were school partners. In the future real-world implementation of Project POWER’s interventions, responsibility for supporting implementation would fall upon school stakeholders, who are less likely to have equivalent resources. Whereas most strategies used in this trial may be considered important and feasible, others (43%) may have limited feasibility and/or importance for school stakeholders (Lyon et al., 2019). Future implementation research and practice that builds on this efficacy trial should investigate how implementation strategies affect implementation and child outcomes. Strategies that both improve outcomes and are feasible and important for school contexts should be prioritized.
The approach employed in this study enabled identification of implementation strategies and explication of their key features, including primary actors and their actions, when each strategy was used during implementation, and focal implementation outcomes. If applied to other efficacy research, this approach could facilitate a clearer understanding of how implementation strategies are used during early translational studies in school settings. However, we encountered challenges in retrospectively using Proctor et al.’s (2013) reporting guidelines. First, it was not always possible to identify discrete actions for each strategy, and some strategies had multiple discrete actions, making them difficult to operationalize. Adopting a more action-oriented approach, such as tracking implementation activities, may be preferable; however, matching discrete actions with strategies will likely continue to present challenges (Bunger et al., 2017). The retrospective approach also limited our ability to fully quantify the magnitude or frequency of strategy use (as team members were often engaged in similar implementation activities across multiple sites simultaneously) and to report the empirical, theoretical, or pragmatic justification for selecting each strategy. Our approach was also time-intensive. Instead of tracking and reporting strategies as part of the implementation process, we held several meetings with the research team to identify which strategies were used and to operationalize them using the reporting guidelines, then analyzed the results and verified them with the team. In contrast, activity logs completed retrospectively or in real time during the study period may be more efficient and may yield more precise estimates of strategies’ actions, dose, and temporal sequence (Boyd et al., 2018; Bunger et al., 2017).
The independent identification of implementation strategies and their conceptual action targets by research team members—particularly those who were unfamiliar with implementation science, the SISTER taxonomy, or the CFIR—was also a challenge. This project benefitted from having trained implementation researchers facilitate strategy specification. It may be difficult for researchers and school stakeholders who are less steeped in implementation to articulate which implementation strategies are used and how; thus, intervention developers should consider including implementation experts on their research teams to support strategy specification (Rudd et al., 2020; Tabak et al., 2021).
Finally, it was difficult during data collection to capture the full extent of informal activities that occurred during the trial. For example, one research team member mentioned that informal communications occurred that were not a required part of routine job responsibilities and/or did not fall squarely into the SISTER strategy definitions. Current implementation strategies, taxonomies, and activity logs might not adequately capture informal communications and other activities (Boyd et al., 2018). Due to inadequate data and the retrospective nature of this study, we were unable to determine which informal activities should be elevated to an implementation strategy; this is an area that warrants future research.
Limitations
This study demonstrated that the SISTER taxonomy and Proctor et al.’s (2013) reporting guidelines can be used to specify and report implementation strategies used during a school-based efficacy trial. We acknowledge several limitations of this research. Research team members were the sole informants in this study, given their primary role in selecting and employing implementation strategies; including school stakeholders as informants may have yielded important information relevant for continued use of the identified strategies in schools. As data were collected retrospectively via research team members’ self-reported use of implementation strategies in a group setting, there is potential for reporting and recall biases. Although most research team members who explicated the strategies were involved in the trial from the beginning, their reports of strategies used across the 4-year trial may be imprecise because strategies were not tracked in real time. Additionally, both first authors were part of the Project POWER research team before this study was conducted and served as the data collectors and analysts for this study, potentially contributing to researcher bias. To reduce this bias, we used member checking with research team members in real time during the implementation strategy specification meetings and electronically during the data analysis phase. The retrospective nature of this study also limited our ability to measure the impact of the strategies used on implementation outcomes or to determine which strategies were most critical for this type of intervention trial.
Implications and future directions
This study illustrates the breadth and depth of information that can be gleaned when implementation strategies are retrospectively identified and operationalized during efficacy research. Our findings underscore the need for comprehensive specification and reporting of implementation strategies during the early phases of translational research. This recommendation is aligned with calls for implementation to be considered from the start of intervention testing to maximize an intervention’s potential feasibility, acceptability, and scalability in the real world and to reduce delays and roadblocks along the research-to-practice pipeline (Lane-Fall et al., 2019; Lyon & Bruns, 2019).
Whereas the retrospective approach used in this study may be most practical for research studies that have concluded or are currently underway, prospective implementation strategy tracking, for example, via activity logs or other means of documenting activities systematically in real time over the study period (e.g., Boyd et al., 2018; Bunger et al., 2017), may lead to a more thorough characterization of implementation strategy use. Prospective tracking can facilitate comparably more accurate descriptions of an implementation strategy’s dose and temporal sequence (Huynh et al., 2018) and can be less time-intensive when incorporated in study data collection protocols. Regardless of whether a prospective or retrospective approach is used, adaptations made to implementation strategies and associated outcomes of these adaptations should be tracked in future studies. Explicating when and in what sequence implementation strategies are used, and when and how strategies are altered throughout the implementation process, is essential for speeding real-world implementation (Powell et al., 2019a).
When intervention efficacy or effectiveness studies have already been published, recent research has analyzed the published manuscripts to identify implementation strategies (Hooley et al., 2020; Premachandra & Lewis, 2021). Our data indicate it is also beneficial to review funded grant proposals for a more comprehensive understanding of strategy use. However, asking clinical researchers to complete a comprehensive implementation strategy specification and reporting process, using an established strategy taxonomy and Proctor and colleagues’ (2013) reporting guidelines, may yield greater breadth and depth of implementation information (Rudd et al., 2020).
Identification and monitoring of implementation strategies throughout the intervention development process has implications for school practices and policies. Researchers who carefully monitor and evaluate implementation strategies during efficacy trials may be well positioned to provide school stakeholders with detailed information on strategies to facilitate adoption, implementation, sustainability, and scale-up of the intervention upon the conclusion of research support. In addition to tracking the school actors who are involved in implementing strategies during the trial, researchers should study which implementation strategies are related to successful implementation during the research study and recommend strategies that could be carried forward by school partners in future implementation. This information could be disseminated to schools in the form of user-friendly toolkits that accompany intervention materials and specified in publications from efficacy and effectiveness studies. Making detailed information about implementation available in a variety of outlets enables school implementation leaders to more effectively plan for implementation and to provide guidance for educators and school mental health professionals responsible for intervention delivery. To support planning for school stakeholder-led implementation, these materials should clearly define the roles and responsibilities of school actors in delivering the intervention and enacting implementation strategies.
Conclusion
This study provides evidence that implementation strategies are indeed used, and may change, during efficacy research and highlights the importance of examining implementation strategies during earlier phases of research. We further illustrated a retrospective approach to specifying and reporting implementation strategies that may be leveraged within efficacy research in educational settings. The unacknowledged and overlooked implementation supports that are built into school-based efficacy and effectiveness studies are often relevant to improving implementation (e.g., fidelity, acceptability), health (e.g., reduced symptoms of anxiety and depression), and academic (e.g., grades) outcomes. We urge that implementation strategies be strategically selected prior to implementation, clearly explained to research team members and school partners involved in implementation, tracked during the trial (including adaptations), and reported in the literature using an implementation strategy taxonomy and Proctor et al.’s (2013) reporting guidelines.
Acknowledgements
The authors would like to thank the schools in the Baltimore City Public Schools District that participated in the Project POWER trial, and the following research team members who generously contributed their time to this project (names listed alphabetically): Laura Clary, Christine Crimmins, Rachel Dows, Karen Edwards, Jeffery Krick, Marcus Nole, Steven Sheridan, Violet Odom, and Alexander Welna. The authors would also like to thank our colleagues for their feedback on this project: Gazi Azad, Courtney Benjamin Wolk, Molly Davis, David Mandell, and Brittany Rudd.
Declaration of conflicting interests
The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Rinad S. Beidas receives royalties from Oxford University Press, served as a consultant to Camden Coalition of Healthcare Providers, currently consults for United Behavioral Health, and receives an honorarium for serving on the Optum Behavioral Health Clinical Scientific Advisory Council. Dr. Beidas is an Associate Editor of Implementation Research and Practice; all decisions on this paper were made by another editor.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Institute of Mental Health (grant numbers T32MH109433-03, T32MH109436) and the Institute of Education Sciences (grant number R305A160082). The opinions expressed are those of the authors and do not represent the views of the National Institutes of Health or the US Department of Education.
Supplemental material
Supplemental material for this article is available online.
References