Implementation facilitation is a multifaceted process that incorporates a bundle of implementation strategies (Ritchie et al., 2020a), and is used by a growing number of efforts to implement innovations into practice (Carter et al., 2021; Ruud et al., 2021). Implementation facilitation consists of various forms of support (e.g., task management, accountability checks, process monitoring, and relationship building) that are provided to personnel at a site implementing an evidence-based practice (Kitson et al., 1998; Olmos Ochoa et al., 2021). A common feature of implementation facilitation is the use of an external facilitator (EF), who may bring a range of expertise, such as content and process improvement expertise, to an implementation site (Sullivan et al., 2021). Recent years have seen a noticeable increase in the use of external facilitation in implementation efforts. For instance, as of mid-October 2021, searching the online database PubMed for publications on implementation-related external facilitation yielded 5 articles prior to 2010, 14 articles from 2010 to 2015, and 58 articles from 2016 to 2021. Common implementation facilitation activities include implementation planning, engaging stakeholders, monitoring progress, and problem-solving (Smith et al., 2019). How much time EFs spend on implementation facilitation activities is of great interest to entities planning or funding implementation efforts, as this information is critical for allocating appropriate EF resources to a site.
Measuring who spends how much time, on what activities and when, can be accomplished through a time–motion study. Time–motion studies conduct such measurement by documenting both the time spent on a work task (e.g., in minutes) and what the task entailed (e.g., the task’s date, purpose, and individuals involved) (Bradley et al., 2021; Lim et al., 2022). Time–motion studies are widely used in fields such as industrial engineering to measure the amount of time and movements that employees require to complete work tasks (Burri Jr. & Helander, 1991; Magu et al., 2015). Time–motion studies are increasingly used in health care settings to assess and improve clinic workflow (Kato et al., 2021; Solomon et al., 2021). Examples include comparing nurses’ time spent in face-to-face versus care management activities in the Collaborative Chronic Care Model (CCM) (Glick et al., 2004), comparing physicians’ time spent working in the electronic health record versus face-to-face time with patients in clinic visits (Young et al., 2018), and improving patient wait times by assessing and restructuring clinical practice patterns to eliminate inefficiencies (Racine & Davidson, 2002).
Time–motion methodology is beginning to be more widely used in implementation science. In particular, as economic evaluation of implementation receives increased attention within the field (since implementation cost can influence whether and how to implement an evidence-based intervention) (Eisman et al., 2020; Yoon, 2020), time–motion methodology is being embraced as an approach for collecting and analyzing data that inform implementation cost (Matthieu et al., 2020; Lamper et al., 2020; Pittman et al., 2021). Notably, Ritchie et al. (2020b) conducted a week-by-week time–motion study of implementation facilitation activities for the EFs, internal facilitators, and participating staff across all levels of the health care systems in which implementation of integrated mental health services within primary care settings was being carried out. They used a structured spreadsheet to document time and activities, collecting these data weekly from the facilitators throughout the 28-month study. In reporting their study, they noted the challenges of collecting these data, especially the burden of weekly documentation.
Building on the work of Ritchie et al. (2020b), we adapted their structured spreadsheet to conduct a time–motion analysis of EF efforts in a randomized trial. To minimize the burden of time–motion data documentation, we used a less intensive data collection approach, namely, collection over a 2-week period (henceforth, “a two-week interval”) near each of the beginning, middle, and end of the active implementation phase. Our primary objective was thus to understand how much EF time is spent, and how it is spent, in a multi-site randomized implementation trial. Our secondary objective was to examine the extent to which this less intensive data collection approach could substitute for data collection throughout the entire active implementation phase.
Methods
Time–motion data for this study were from a multi-site randomized stepped wedge implementation trial conducted from February 2016 to February 2018 at the U.S. Department of Veterans Affairs (VA), in close partnership with VA’s national Office of Mental Health and Suicide Prevention (OMHSP) (Bauer et al., 2019). As a hybrid type II trial (Curran et al., 2012), it examined the effectiveness of implementation facilitation in establishing the evidence-based CCM to structure the clinical care delivered by interdisciplinary outpatient mental health provider teams. The CCM organizes health care delivery to be anticipatory, coordinated, and patient-centered (Von Korff et al., 1997; Wagner et al., 1996; Coleman et al., 2009). It comprises six core elements: work role redesign, patient self-management support, provider decision support, clinical information systems, linkages to community resources, and organizational/leadership support. The CCM has demonstrated effectiveness for improving outcomes in medical conditions (Bodenheimer et al., 2002a; Bodenheimer et al., 2002b) and mental health conditions (Badamgarav et al., 2003; Woltmann et al., 2012; Miller et al., 2013). The trial also assessed the impact of CCM implementation on health outcomes of team-treated individuals. Published articles provide additional information regarding the stepped wedge design used (Bauer et al., 2016; Bauer et al., 2019), the implemented evidence-based CCM (Miller et al., 2019; Sullivan et al., 2021), and the EFs who worked with the sites (Connolly et al., 2020). Research procedures were approved by the VA Central Institutional Review Board (IRB). Implementation efforts and related measures were exempt from IRB review, considered to be program evaluation in support of OMHSP’s endeavors to enhance VA’s mental health care delivery.
Implementation Facilitation
For each site, EF activities consisted of approximately 4–6 weeks (5.3 ± 0.8 weeks across six sites) of a pre-implementation phase and a kick-off site visit followed by 12 months of an active implementation phase. During the pre-implementation phase, the EF held semi-structured conversations with site personnel to begin to understand the context under which the CCM implementation was to be carried out (henceforth referred to as the “pre-site visit assessment”). In addition to the time spent on these conversations with site personnel, the pre-implementation phase’s data also included time spent on scheduling the conversations, preparing for the site visit, and beginning to correspond with the internal facilitator to collaboratively prepare for implementation. A 1.5-day on-site visit by the EF continued these conversations, informed stakeholders about the implementation that was about to commence, and served as the point in time after which the active implementation phase began. During the active implementation phase, the EF monitored and guided the site’s implementation progress, regularly coordinating with an on-site internal facilitator and joining the site’s implementation team meetings over video or by phone.
Time–Motion Tracking
For the six implementation sites involved in this study, we used an adaptation of the structured spreadsheet of Ritchie et al. (2020b) to collect the following information on EF time and activities:
Facilitation activity (e.g., stakeholder engagement, process monitoring).
Date of the activity.
Type of activity (e.g., preparatory time for an implementation task, one-on-one or group-facing).
Mode of communication used for the activity (e.g., email, phone).
Interaction target—that is, with whom the activity involved interacting (e.g., internal facilitator, clinic leadership).
Duration of the activity (in increments of 15 min).
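As an illustration only, the fields above could be captured as a structured record. The class and field names below are our own, not the actual spreadsheet column headers, and the 15-minute increment check mirrors the logging convention described above.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FacilitationLogEntry:
    """One illustrative row of a time-motion log (field names are hypothetical)."""
    activity: str            # e.g., "stakeholder engagement", "process monitoring"
    activity_date: date      # date of the activity
    activity_type: str       # e.g., "preparatory", "one-on-one", "group-facing"
    mode: str                # e.g., "email", "phone", "video"
    interaction_target: str  # e.g., "internal facilitator", "clinic leadership"
    duration_min: int        # duration, logged in 15-minute increments

    def __post_init__(self):
        # Enforce the 15-minute logging increment described in the text
        if self.duration_min % 15 != 0:
            raise ValueError("duration must be logged in 15-minute increments")

entry = FacilitationLogEntry(
    activity="process monitoring",
    activity_date=date(2016, 5, 2),
    activity_type="one-on-one",
    mode="phone",
    interaction_target="internal facilitator",
    duration_min=30,
)
```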
The study had three EFs, each facilitating implementation at two of the six sites. Each of the three EFs was allocated one site to work with per wave of the stepped wedge trial. The data for each site were logged on the spreadsheet by the EF working with the site. Whenever an EF came across a question regarding the logging procedure (e.g., whether to log a facilitation activity as being primarily stakeholder engagement or education), ongoing weekly meetings among the EFs were used to discuss the procedure in question and reach a consensus on how to log the data.
The implementation facilitation approach used by the trial (Bauer et al., 2016), as shown in Figure 1, had the EF gradually transition implementation expertise to the site over the course of the active implementation phase. We thus expected the amount of EF time spent on implementation facilitation activities to decrease correspondingly between the beginning and the end of the 12-month active implementation phase. Accordingly, after collecting continuous time–motion data through the 4–6-week pre-implementation phase (i.e., for the pre-site visit assessment and the site visit), we selected one 2-week interval (deemed typical by the EF—e.g., not overlapping with holidays or irregular work schedules) from each of months 1–4 (weeks 1–17), months 5–8 (weeks 18–34), and months 9–12 (weeks 35–51) of the active implementation phase over which to collect time–motion data. For two of these sites (which worked with the same EF, were located in different regions, and fell one in each of the two stepped wedge waves), we also collected continuous data throughout the active implementation phase. We then assessed the extent to which the selected interval data could be used to estimate the weekly EF time spent on implementation facilitation activities.
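A minimal sketch of the interval-based extrapolation implied by this design, assuming each 17-week period is represented by its one 2-week interval (the function names and the example figure are illustrative, not taken from the study data):

```python
WEEKS_PER_PERIOD = 17  # beginning, middle, and end thirds of the 12-month phase
INTERVAL_WEEKS = 2     # length of the sampled data collection interval

def weekly_estimate(interval_hours: float) -> float:
    """Average weekly EF hours inferred from one 2-week interval."""
    return interval_hours / INTERVAL_WEEKS

def period_estimate(interval_hours: float) -> float:
    """Extrapolated EF hours for a full 17-week period from its 2-week interval."""
    return weekly_estimate(interval_hours) * WEEKS_PER_PERIOD

# Example: 5.0 h logged in a 2-week interval -> 2.5 h/week, 42.5 h for the period
```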

Implementation facilitation activities, implementation timeline, and time–motion data collection approach.
Analysis Plan
Our analyses proceeded in three stages. First, we assessed how much EF time was spent per site, both overall and for each of (i) the pre-site visit assessment, (ii) the site visit, and (iii) the beginning, middle, and end of the 12-month active implementation phase (i.e., the first, second, and third 17-week periods, respectively, in a year). Second, we examined the activities and interaction targets across which the EF time spent was allocated, and calculated the percent EF time across activities and across interaction targets. Third, we compared the time estimates from 2-week intervals versus continuous data collection, and examined the changes in our results when comparing the 2-week intervals to intervals of different lengths (derived from the continuous data) ranging from 2 to 8 weeks. We used 8 weeks as the longest interval length for comparison, since longer intervals would mean collecting data for more than half of the 17 weeks in each of the beginning, middle, and end of the 12-month active implementation phase.
For each analytical stage, means and SDs were calculated for the time estimates, and means were compared using t-tests or analyses of variance with Tukey’s method for post hoc tests. Specifically, using 2-week interval data, we compared the mean total time spent per site by EF, site (region), and calendar time (wave of stepped wedge). We also compared mean time spent per week with a site between the beginning, middle, and end of the active implementation phase, accounting for the EF, site, and calendar time through stratified analyses. For the two sites for which we had time estimates from both 2-week intervals and continuous data collection, we compared the mean time spent with a site between the interval and continuous data, both total and per week for the beginning, middle, and end of the active implementation phase. We also compared the mean time spent per week with a site during the beginning, middle, and end of the active implementation phase, between the interval lengths of 2 and 8 weeks.
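For readers who want to reproduce this kind of comparison, the sketch below runs an analysis of variance and a pairwise t-test on synthetic weekly-hours data using SciPy. The numbers are made up for illustration, and the study itself used Tukey’s method for post hoc tests (available, e.g., in statsmodels’ pairwise_tukeyhsd) rather than unadjusted t-tests.

```python
import numpy as np
from scipy import stats

# Synthetic weekly EF hours per site for six sites in each of the beginning,
# middle, and end periods (illustrative only; actual study data are in Table 1).
rng = np.random.default_rng(0)
beginning = rng.normal(2.5, 0.8, size=6)
middle = rng.normal(1.4, 0.6, size=6)
end = rng.normal(1.2, 0.6, size=6)

# Omnibus test across the three periods (one-way analysis of variance)
f_stat, p_anova = stats.f_oneway(beginning, middle, end)

# Pairwise comparison of two periods; the study applied Tukey's method
# after a significant omnibus result instead of raw t-tests like this one.
t_stat, p_t = stats.ttest_ind(beginning, end)
```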
Results
The three EFs, facilitating two sites each, were health services researchers (a psychiatrist, a clinical psychologist, and a health systems engineer). The six sites were distributed across the southern, midwestern, and northeastern United States (two sites per region). CCM implementation commenced at these sites in two waves (different calendar times) of the stepped wedge (three sites per wave).
Time Spent on External Facilitation
Table 1 shows our results regarding the time spent on external facilitation. Across both the pre-implementation phase and the active implementation phase, EFs spent 136.2 ± 27.6 (mean ± SD) hours per site over the course of 55–57 weeks, based on 2-week interval data. Figure 2 shows the average number of hours spent per site for each of (i) the pre-site visit assessment, (ii) the site visit, and (iii) the beginning, middle, and end of the active implementation phase.

Average external facilitator (EF) time spent across six implementation sites.
Time spent on external facilitation.
Note: *p < .05, **p < .01.
For the 4–6 weeks of the pre-implementation phase, the EFs spent 21.8 ± 4.5 h per site on the pre-site visit assessment, orienting the site and assessing its contextual factors. The EFs spent 27.5 ± 4.6 h per site on the 1.5-day on-site visit (which was scheduled for one full 8-h work day plus another half work day of 4 h), including travel time to and from the site. For the 12-month active implementation phase, based on 2-week interval data, the EFs spent 2.5 ± 0.8 h per week per site toward the beginning of the active implementation phase, 1.4 ± 0.6 h per week per site around the middle, and 1.2 ± 0.6 h per week per site toward the end, for a total of 86.9 ± 24.1 h per site (an overall average of 1.7 ± 0.5 h per week per site). Time spent did not always decrease monotonically across these periods—for example, for two of the six sites, time spent was higher at the end of the active implementation phase than at the middle. Differences in the mean time spent with a site were significant (p < .05) between the beginning and the middle of the active implementation phase, and also between the beginning and the end. Differences in the mean total time spent per site between the three EFs were not significant (p = .87).
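The reported per-site total can be checked against the sum of its components (mean values as reported above):

```python
# Per-site EF hours (means) reported in the text
pre_site_visit_assessment = 21.8  # pre-implementation phase
site_visit = 27.5                 # 1.5-day on-site visit, incl. travel
active_phase = 86.9               # 12-month active implementation phase

total = pre_site_visit_assessment + site_visit + active_phase
# total is 136.2 h per site, matching the overall figure reported in Table 1
```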
Figure 3 shows a breakdown of the activities and interaction targets across which the EF time was allocated, for each of (i) the pre-site visit assessment, (ii) the site visit, and (iii) the active implementation phase. The most prevalent activities were preparation/planning, ongoing process monitoring, program adaptation, problem identification, and problem-solving. The activities that were found to be prevalent were those that are considered to be core activities for implementation facilitation (Smith et al., 2019), which by definition is a bundle of strategies including implementation planning, engaging stakeholders, monitoring progress, and problem-solving (Ritchie et al., 2020a). Across all of these and other activities, 73.6% of EF time involved communication via email, phone, or video.

Breakdown of the primary activities and interaction targets across which the external facilitator (EF) time was allocated.
Comparison of Time Estimates From 2-week Intervals Versus Continuous Data Collection
Comparing these averages based on 2-week interval data to those based on continuous data collected throughout the active implementation phase for two of the sites (Table 2), we found that the interval-based data underestimated the averages by 1.0, 0.1, and 0.2 h per week per site for the beginning, middle, and end of the active implementation phase, respectively. This represents an 18.1% difference in terms of total time spent on facilitation. To better understand the source of this discrepancy, we manually reviewed logged EF activities for the weeks in which the continuous data measured more time spent than the estimate based on the 2-week interval data. We noticed that the continuous data more consistently accounted for irregular time spent on assessing how implementation was progressing, engaging stakeholders, and continuously developing the network of personnel and resources needed for implementation. Using the continuous data available for two of the sites, we examined increasing the interval length from 2 up to 8 weeks for each of the beginning, middle, and end of the active implementation phase. (Using 8-week intervals would mean collecting data for 8 × 3 = 24 weeks total—i.e., almost half of the 12-month active implementation phase.) There was a small percentage difference in the average estimate of weekly EF time spent between using 2- versus 8-week intervals for each of the beginning, middle, and end of the active implementation phase (3.0 ± 0.3%, 2.3 ± 1.9%, and 6.7 ± 2.8%, respectively; in hours per week, these correspond to 0.05 ± 0.01, 0.02 ± 0.02, and 0.07 ± 0.03, respectively). Thus, there appears to be little advantage to extending data collection intervals from 2 to 8 weeks.
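The percentage differences above can be computed with a simple helper; the example values are illustrative, not the study’s exact figures.

```python
def pct_difference(estimate: float, reference: float) -> float:
    """Absolute percent difference of an interval-based estimate
    relative to a reference (e.g., continuous-data) estimate."""
    return abs(estimate - reference) / reference * 100.0

# Illustrative check: if a period's 2-week-interval estimate is 2.5 h/week
# and the continuous-data average is 3.5 h/week, the interval data
# underestimate that period by 1.0 h/week.
underestimate = 3.5 - 2.5  # h/week
```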
Comparison of time estimates for 2-week intervals, 8-week intervals, and continuous data collection.
Note: *p < .05.
Discussion
This study examined how EF time was spent on a multi-site randomized trial that used implementation facilitation as a strategy to implement the evidence-based CCM in interdisciplinary outpatient mental health provider teams. For each of the six implementation sites, we collected time–motion data continuously over 4–6 weeks of the pre-implementation phase and a kick-off site visit, then over a 2-week interval for each of the beginning, middle, and end of the 12-month active implementation phase. We estimated that EFs spent a total of 136.2 ± 27.6 h per site over the pre-implementation phase, the site visit, and the active implementation phase. EFs spent significantly more time at the beginning of the active implementation phase than at the middle or the end. The most prevalent activities were preparation/planning, ongoing process monitoring, program adaptation, problem identification, and problem-solving. Based on data from two of the sites at which we also collected continuous data throughout the active implementation phase, we found that the interval-based data underestimated the continuous data’s averages by 1.0, 0.1, and 0.2 h per week per site for the beginning, middle, and end of the active implementation phase, respectively, representing an 18.1% difference in terms of total time spent on facilitation.
As implementation facilitation has become increasingly embraced as a strategy for implementing innovations into practice (McCormack et al., 2021; Moore et al., 2020), there is heightened interest in examining the time allocated to implementation facilitation (Edelman et al., 2020). This interest is heavily fueled by growing demands to better estimate the cost of implementation (Yoon, 2020), and to adequately plan for resource allocation that can sufficiently support both the spread and sustainability of implemented practices (Hunter et al., 2020). The burden of measurement and documentation is considerable, however, when it comes to collecting data continuously throughout an implementation effort (Ritchie et al., 2020b), especially when it involves multifaceted strategies such as implementation facilitation. Pragmatic methods for economic evaluations of implementation strategies are beginning to be proposed (Cidav et al., 2020), and as economic analyses of implementation trials (Miller et al., 2020) are actively sought after by the field, this study offers one potentially useful approach to EF time data collection that is less time-intensive than continuous data collection.
Our finding that the EFs spent more time at the beginning of the active implementation phase than at the middle or the end is consistent with the expectation that EFs transition implementation expertise to the site over the course of the implementation phase (Kirchner et al., 2014). We found, however, that the time spent along this transition did not always monotonically decrease—for example, for two of our six sites, time spent was higher at the end of the active implementation phase than at the middle. The trial had deliberately designed the EF’s site-facing efforts to be on an “as needed” basis for the latter months of the active implementation phase, to move away from pre-scheduled weekly meetings with the implementation site while remaining flexible to offer assistance to the site when needed. This flexibility may have contributed to EFs increasing their engagement with sites that underwent contextual changes requiring attention at the end of the implementation phase. To accurately understand reasons for such specific observed patterns in time spent, future studies may consider additionally documenting the reasons for EF activities at the time of data collection, or triangulating with other contemporaneously collected implementation data (e.g., through interviews with EFs [Connolly et al., 2020]).
Close to three-quarters of the EF time spent involved communication via email, phone, or video. EF activities involving email, phone, or video communication are unlikely to be drastically altered even for fully virtual implementation facilitation that involves no in-person contact (Hartmann et al., 2021; Ritchie et al., 2020a). This makes our findings applicable to future implementation efforts that use virtual implementation facilitation, which has always offered potential travel cost savings and is currently being further considered in light of physical distancing guidelines and travel restrictions due to the COVID-19 pandemic (Sacco & De Domenico, 2021). Moreover, the less intense 2-week interval-based data collection approach has the potential to better enable resource-limited implementation efforts to feasibly collect data that they can use for economic evaluation. This can in turn more accurately inform future implementation in such contexts that are often responsible for delivering care to vulnerable populations, and thus help to devise implementation strategies that can serve as countermeasures to existing health disparities (Woodward et al., 2019).
Our study has its limitations. First, the 2-week interval-based data collection approach underestimated the EF time spent on irregular activities. By design, each EF had selected 2-week intervals for their sites that they deemed typical, to avoid random selection of nonrepresentative weeks that overlap with holidays and other irregular work schedules. Losing accurate capture of irregular activities was thus a tradeoff, and our findings point to even 8-week interval-based data not providing large improvements in accuracy. Future studies can potentially mitigate the impact of this limitation by (i) accounting for the underestimation tendency of the interval-based approach in projecting resource allocation needs and (ii) adapting the approach to have the EF document irregular activities that are time-expensive and are not during their selected data collection weeks. Second, our study did not measure implementation time spent by internal facilitators and other site personnel participating in facilitation efforts. This was a design decision that we made both to limit the burden of implementation-related tasks on site personnel and to focus on our operational partner’s priority regarding how much external resources need to be provided to sites to conduct CCM implementation. Future implementation efforts, especially those for which the burden on the implementation site is considered relatively low, can explore the extension of interval-based data collection to site personnel’s implementation activities. Third, the data collection approach has been used only in a CCM implementation trial so far; facilitation of more or less complex interventions may require different amounts of EF effort. 
Further work is needed to examine the approach’s applicability to implementation efforts across different settings and practices, which is also crucial for rigorously assessing the extent to which time estimates based on this approach deviate from those based on continuous data collection across various implementation efforts. Fourth, the two continuous data collection sites had the same EF; this EF had expertise and a strong interest in time–motion analyses, and their activities may have differed from those of the other EFs. To minimize the impact of this limitation, the EFs were trained in facilitation using the same curriculum (Ritchie et al., 2020a), followed a manualized guide to implementing the CCM (available upon request), and held weekly coordination meetings to keep their facilitation activities consistent throughout implementation. Fifth, the number of sites included in this study is small. Especially in light of the aforementioned impact that contextual changes may have had on the results, further work on a larger sample and across multiple projects is needed to enable a more robust examination of external facilitation time-related findings that accounts for potential confounding factors, including EF, site, and calendar time.
Conclusions
EFs spent 136.2 ± 27.6 h per site over the course of 55–57 weeks during a study investigating the implementation of the evidence-based CCM in interdisciplinary outpatient mental health provider teams. Nearly three-quarters of their time was spent on communication with the site via email, phone, or video, and they spent more time at the beginning of the active implementation phase than at the middle or the end. The 2-week interval-based data collection approach that we used may be a practical option for future implementation studies collecting time–motion data. This approach is a timely contribution to the field, especially as many implementation efforts are asked to evaluate their associated cost to inform adequate resource allocation that can support sustained and scaled-up implementation.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was supported by funding from the Department of Veterans Affairs Quality Enhancement Research Initiative (QUERI), grants QUE 15-289 and QUE 20-026. The funding sources had no involvement in the design and conduct of the study; collection, analysis, and interpretation of data; and preparation, review, or approval of the manuscript. The views expressed in this article are those of the authors and do not represent the views of the U.S. Department of Veterans Affairs or the United States Government.
