Abstract
There are few validated contextual measures that predict adoption of evidence-based programs, and variation in context across clinical sites can hamper dissemination. We examined organizational characteristics of Veterans Affairs hospitals implementing STRIDE, a hospital walking program, and those characteristics' influence on program adoption. Using a parallel mixed-method design, we describe context and organizational characteristics by program adoption status. Organizational characteristics included organizational resilience, implementation climate, organizational readiness to implement change, highest complexity sites versus others, material support, adjusted length of stay (LOS) above versus below the national median, and improvement experience. We collected intake forms at hospital launch and conducted qualitative interviews with staff members at 4 hospitals that met the initial adoption benchmark, defined as completing supervised walks with 5 or more unique hospitalized Veterans during months 5 to 6 after launch with low-touch implementation support. Overall, 31% (n = 11 of 35) of hospitals met adoption benchmarks. Seven percent of highest-complexity hospitals adopted compared with 48% of lower-complexity hospitals. Forty-three percent of hospitals that received resources adopted compared with 29% of those without resources. Thirty-six percent of hospitals with above-median LOS adopted compared with 23% with below-median LOS. Thirty-five percent of hospitals with at least some implementation experience adopted compared with 0% with very little to no experience. Adopters reported higher organizational resilience than non-adopters (mean = 23.5 [SD = 2.6] vs 22.7 [SD = 2.6]) and greater organizational readiness to change (mean = 4.2 [SD = 0.5] vs 3.8 [SD = 0.6]). Qualitatively, all sites reported that staff were committed to implementing STRIDE. Participants also reported barriers to adoption, including challenges with staffing and delays associated with hiring staff.
Adopters reported that having adequate staff facilitated implementation. Implementation climate was not associated with meeting STRIDE program adoption benchmarks in this study. Contextual factors that may be simple to assess, such as resource availability, may influence adoption of new programs implemented without intensive implementation support.
Key Points
Organizational characteristics influence implementation outcomes for translating evidence-based practices in health care settings.
This study uses qualitative and quantitative methods to evaluate the national implementation of an evidence-based hospital walking program, STRIDE, and to describe the relationship between contextual factors and program adoption.
Validated measures of contextual factors (eg, organizational readiness, climate) had less impact on adoption status than relatively “crude” but easier to measure factors (eg, resource availability), which could streamline measurement for future projects.
Introduction
A key contributor to hospital-associated disability is inactivity.1 Inactivity during hospitalization has been associated with negative health outcomes including delirium, falls, increased lengths of stay, greater readmission risk, and functional decline, among others.2,3 Health systems may mitigate these risks by implementing mobility interventions in the inpatient setting as part of an Age-Friendly Health System transformation. To facilitate an age-friendly focus, there are evidence-based programs (EBPs) that improve health outcomes by effectively increasing mobility during hospitalization. An EBP is a program or practice with a rigorous body of evidence supporting its impact. One such EBP is STRIDE (AssiSTed EaRly MobIlity for HospitalizeD VEterans).3-5 STRIDE promotes early (within 24 h of admission) and supervised ambulation for patients who are typically 60 years or older and able to ambulate safely. STRIDE was initially developed and tested at the Durham VAMC within the national Veterans Health Administration Health Care System (VA), the largest integrated health care system in the US. STRIDE has been tested in a stepped-wedge randomized controlled trial3,6,7 and shown to be associated with lower rates of discharge to a skilled nursing facility.7 STRIDE consists of a targeted gait and balance assessment conducted by a trained STRIDE team member, followed by daily supervised walks for the duration of the hospital stay. Because of STRIDE's initial success in improving mobility and outcomes for hospitalized Veterans, the VA honored STRIDE with several prestigious awards recognizing its diffusion throughout the VA health care system.8 As a result of continued research and operational investment, STRIDE started as a single-site trial and, as of October 2023, has been successfully implemented at 41 VA facilities with an additional 17 facilities in progress (Figure 1).
Additional information on the status of STRIDE national implementation in the VA is reported elsewhere. 8

Figure 1. Evolution of STRIDE.
STRIDE's trajectory from a research-funded clinical intervention to a health care system- and operations-supported scale-up is atypical. Very few EBPs successfully cross from research to practice.9 Evidence suggests that 30% to 90% of implementation efforts fail, largely depending on the degree of organizational change required for implementation.10 There are a host of reasons for this. One common problem that can impede implementation of age-friendly care initiatives is the absence of early planning for scale-up or of an understanding of the organizational contexts in which a particular evidence-based program can be most successful. In the case of STRIDE, there has been a focus on assessing organizational characteristics. We assessed validated and theoretically sound measures of organizational context that could be used to predict which sites would be most successful in adopting STRIDE. Each site has unique contextual features, including variability in the complexity of the site (ie, patient risk, diversity and volume of services offered), available resources, readiness for STRIDE implementation, and other characteristics. Our goal was to consider these organizational characteristics and how the "fit" of an evidence-based program, like STRIDE, within a specific context might hamper or support dissemination efforts. This work could be applicable to other Age-Friendly Health Systems seeking to identify how best to implement mobility programs in their contexts.
Informed by the Dynamic Sustainability Framework11 and the Consolidated Framework for Implementation Research,12 we selected validated measures to assess contextual factors and their association with sites' subsequent adoption of STRIDE. This work addresses important questions in the field of implementation science about the relationship between organizational contextual factors and implementation outcomes of interest by leveraging data from a hybrid, cluster-randomized trial to assess nationwide program adoption in the nation's largest integrated health system. A better understanding of the organizational characteristics that predict successful adoption with limited external implementation support could inform future efforts to broadly assess and select sites whose context supports the highest likelihood of successful adoption, and guide efficient use of implementation resources. In other words, knowing which contextual factors are most likely to lead to adoption success could inform both which contextual factors are most important to measure and which sites to prioritize for implementation efforts.
Methods
Study Design and Setting
This study was conducted in the VA, the largest integrated health care system in the US, as part of the Optimizing Function and Independence Quality Enhancement Research Initiative (Function QUERI) aimed at enhancing the functional independence of older Veterans. Function QUERI used a type III effectiveness-implementation hybrid design framework to conduct a pragmatic parallel cluster-randomized trial where hospitals were randomly assigned 1:1 to either foundational support, comprised of standard, low-touch activities, or enhanced support, which includes the addition of tailored, high-touch activities if hospitals do not meet STRIDE program benchmarks at 6- and 8-months following start date. 13
As part of the Function QUERI implementation trial, hospitals were eligible for inclusion if they were VA Medical Centers (VAMCs) that had not offered a STRIDE program in the past 5 years and were willing to implement on a general medicine unit. Hospitals were enrolled in 5 separate cohorts at intervals of approximately 3 to 4 months. The start date for a hospital was the launch of its cohort. We informed potentially eligible hospitals of the implementation opportunity through recruitment calls, word of mouth, operational partners, and Microsoft Identity Manager to identify potential points of contact at sites. Participating VA hospitals were located throughout the country. Additional inclusion criteria were (1) facility leadership willing to participate in the study via a signed participation agreement and (2) agreement to attend monthly touchpoints with Function QUERI staff. We enrolled 35 hospitals and used stratified block randomization to assign hospitals 1:1 to either foundational support (low-touch, active comparator) or enhanced support (high-touch, experimental). All hospitals received foundational support in the first 6 months following their start date. Foundational support activities included access to STRIDE toolkits, SharePoint, data dashboards, and a Diffusion Network, as well as technical assistance. Enhanced support added regular, structured "high-touch" facilitation calls for hospitals that did not meet STRIDE activity benchmarks; these featured practice facilitation, a process of interactive problem solving and support that occurs in the context of a supportive interpersonal relationship.
Our current analysis is limited to the first 6 months following start date, thus all hospitals had received the same implementation support regardless of their study arm assignment; hospitals that did not meet STRIDE program benchmarks had not yet had an opportunity to receive the enhanced implementation support. Additional detail describing Function QUERI is published elsewhere. 13 Our Institutional Review Board determined this study exempt.
Data Collection
Data collection occurred from June 2021 through September 2023. Our work was guided by the Dynamic Sustainability Framework and the Consolidated Framework for Implementation Research.11,12 We selected validated measures consistent with the inner and outer contextual domains in these conceptual frameworks, including organizational readiness to change, organizational resilience, and organizational climate; these measures are described in detail below. We collected information about each organizational characteristic from a baseline survey administered in the pre-implementation period (ie, before a hospital officially engaged in local implementation of the STRIDE program). Surveys were administered to leadership and hospital staff identified by the STRIDE delivery team as designated points of contact. We conducted survey assessments at the start date, at 6 months, and at 10 months; however, this analysis reports only on baseline data collected at the start date. Survey and related study data were collected and managed using REDCap electronic data capture tools hosted at the Veterans Health Administration.14
To complement the quantitative surveys, we also collected qualitative data via interviews. The qualitative component shed light on implementation activities and perceptions of intensification. Because of capacity constraints, we sampled approximately half of the hospitals enrolled in the study for qualitative interviews, with the goal of maximizing the diversity of the overall STRIDE sample by geography and implementation experience. We enrolled sites by cohort every 3 to 4 months and sampled per cohort to ensure representation over time. We conducted 30-minute semi-structured interviews with hospital staff to gain detailed insights into the facilitators and barriers affecting the implementation of EBPs. We also interviewed leaders not directly involved in day-to-day operations (eg, chief of staff, facility director) to gather higher-level perspectives on implementation.
Measures
The contextual measures of interest were facility complexity score, hospital quality of care metrics, geographic region, organizational resilience,15,16 organizational readiness,17,18 implementation climate,19 and availability of local resources. The quantitative data stem from 3 sources: administrative data (eg, facility complexity score), an intake form completed by a site's point of contact at the time of enrollment (eg, availability of supplemental resources), and staff surveys containing validated measures (eg, organizational resilience, organizational readiness, implementation climate). These validated measures assess distinct contextual factors that have previously been identified as predictors of implementation success. We also leveraged electronic health record data to assess adoption. Key measures are described below.
Facility complexity
Facility complexity is an organizational measure of hospital-level volume and resources based on the level of services provided at a health care institution, derived from VA administrative data sources.20 The facility complexity score incorporates multiple factors including volume, patient risk, involvement in teaching and research, and the number and breadth of physician specialties. A lower number (eg, 1a) indicates a higher degree of services provided (eg, radiation therapy), whereas a higher number (eg, 3) indicates a lower degree of services provided (eg, primary care). We measured facility complexity because complexity may be associated with organizational demands, such as competing priorities and historical experience implementing other evidence-based programs, that could influence the likelihood of adoption.
Quality of care metrics
Hospital adjusted length of stay (medicine) and patient overall hospital rating (inpatient) were obtained from Strategic Analytics for Improvement and Learning (SAIL), 21 using the measurement from the most recent quarter available relative to hospitals’ start date.
Geographic region
Census Bureau regions (northeast, midwest, south, and west) were used to classify the hospitals’ geographic region.
Organizational resilience
The remaining contextual measures were self-reported by sites on the baseline survey. We considered organizational resilience,22 a site's ability to adapt and change in meaningful ways to overcome a new or unexpected change in inner or outer context, such as the need to implement a new program. The full validated measure comprises 53 items; for this study, we used the 8-item benchmark resilience tool. The organizational resilience measure, as created by Lee et al, views resilience as multidimensional, capturing the different ways organizations respond to uncertainty.22,23
Organizational readiness
Next, we considered organizational readiness, 17 which is a validated measure assessing a site’s commitment to change and efficacy for change. Organizational readiness to change has previously been demonstrated to predict successful implementation in several programs, including some implemented within the VA health care system.24,25 In consultation with the measure developer, we only used selected items (9 out of 12 items) from the Organizational Readiness for Implementing Change (ORIC) measure.
Implementation climate
In addition, we used the Implementation Climate Scale (ICS),19 an 18-item measure assessing a site's focus on the evidence-based practice (EBP), educational support for the EBP, recognition for the EBP, rewards for the EBP, selection for the EBP, and selection for openness. The ICS has previously been used in inpatient settings to assess EBP implementation outcomes.26,27
Additional resources and experience implementing EBPs
We assessed supplemental resources and experience with implementing EBPs with the following questions asked on the intake form completed by the sites’ point of contact: “Has anyone at your facility previously tried to start a STRIDE or other mobility program on a general medical ward in the last 5 years?,” “Have you received any resources or other staffing support from your VISN or a VA program office to start your STRIDE program?,” and “How much experience does your facility have with implementing evidence-informed practices?.”
Outcome: Adoption
Hospitals that met the STRIDE program activity benchmark at 6 months following start date were characterized as Adopters. The STRIDE program activity benchmark was EHR documentation of ≥5 general medicine patients receiving a STRIDE walk during months 5 to 6 after start date. The 5-patient threshold was selected to represent a minimal amount of clinical activity to suggest that the program was underway. An individual patient could have multiple walks but would only be counted once toward a hospital meeting the STRIDE program activity benchmark.
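The benchmark logic above (≥5 unique general medicine patients with at least one documented walk during months 5 to 6, with repeat walks counted once) can be sketched in code. This is an illustrative Python fragment, not the study's actual EHR extraction; the record structure, patient IDs, and dates are invented for demonstration.

```python
from datetime import date

# Hypothetical walk records as (patient_id, walk_date) pairs; values are illustrative.
walks = [
    ("pt01", date(2022, 5, 10)), ("pt01", date(2022, 5, 11)),  # repeat walks count once
    ("pt02", date(2022, 5, 12)), ("pt03", date(2022, 6, 1)),
    ("pt04", date(2022, 6, 2)), ("pt05", date(2022, 6, 20)),
    ("pt06", date(2022, 2, 1)),  # walk outside the months 5-6 window; ignored
]

def meets_adoption_benchmark(walks, window_start, window_end, threshold=5):
    """True if at least `threshold` unique patients had a documented walk in the window."""
    unique_patients = {pid for pid, d in walks if window_start <= d <= window_end}
    return len(unique_patients) >= threshold

# Months 5-6 after a hypothetical 2022-01-01 start date: 5 unique patients -> adopter.
print(meets_adoption_benchmark(walks, date(2022, 5, 1), date(2022, 6, 30)))  # True
```

The set comprehension deduplicates patients before counting, which mirrors the rule that a patient with multiple walks counts only once toward the benchmark.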
Analysis
Quantitative analysis
For each categorical hospital-level characteristic, we calculated the percentage meeting the STRIDE program activity benchmark (adopters); means were reported for continuous characteristics by benchmark status. Hospital-level average scores were generated for each of the validated contextual measures administered to staff members in the baseline survey, and descriptive statistics were calculated for these hospital-level average scores by adoption benchmark status. Overlaid histograms with kernel density curves of selected mean hospital-level scores by adoption benchmark status are also presented. Analyses were performed using SAS 9.4 (SAS Institute, Cary, NC).
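The analyses were run in SAS 9.4; as a rough illustration of the descriptive approach (percentage of adopters within each level of a categorical characteristic, and mean hospital-level survey scores by adoption status), a Python sketch with entirely invented toy data might look like this:

```python
from statistics import mean

# Toy hospital-level records; field names and values are invented for illustration only.
hospitals = [
    {"id": "A", "complexity_1a": True,  "adopted": False, "oric": 3.75},
    {"id": "B", "complexity_1a": False, "adopted": True,  "oric": 4.5},
    {"id": "C", "complexity_1a": False, "adopted": True,  "oric": 3.5},
    {"id": "D", "complexity_1a": False, "adopted": False, "oric": 3.25},
]

def pct_adopters(hospitals, key, value):
    """Percentage of hospitals with characteristic key == value that met the benchmark."""
    group = [h for h in hospitals if h[key] == value]
    return 100 * sum(h["adopted"] for h in group) / len(group)

def mean_score(hospitals, measure, adopted):
    """Mean hospital-level score on a survey measure, by adoption status."""
    return mean(h[measure] for h in hospitals if h["adopted"] == adopted)

print(pct_adopters(hospitals, "complexity_1a", False))  # 2 of 3 lower-complexity sites
print(mean_score(hospitals, "oric", adopted=True))      # 4.0
```

The same two helpers cover both halves of the descriptive table: categorical characteristics feed `pct_adopters`, and continuous survey scores feed `mean_score`.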
Qualitative analysis
To analyze the data, we used directed content analysis.28 We also included data-derived labels to reflect respondents' descriptions of their experiences with barriers and facilitators of implementation. We used qualitative data to capture more detail about the implementation process, including factors that aided or hindered it, as well as staff perspectives on the program support provided (eg, low-touch REP activities). Data were coded using NVivo 12 Plus software (QSR International).
Results
Quantitative Results
A total of 35 hospitals participated in the STRIDE implementation evaluation (Figure 2). We administered 389 surveys and received 163 with complete or partially complete data, an individual response rate of approximately 42%. At 6 months, 11 hospitals (31%) had met the STRIDE adoption benchmark of ≥5 patients on general medicine with at least one STRIDE walk in months 5 to 6 from start date, and 24 hospitals (69%) had not.

Figure 2. Study flow.
Approximately 7% of sites with the highest degree of complexity (complexity score 1a) met the adoption benchmark compared with 48% of sites with lower complexity scores (Table 1). Thirty-six percent of hospitals with LOS above the national VAMC median met the benchmark compared with 23% with below-median LOS. Forty-three percent of hospitals that received resources met the adoption benchmark compared with 29% of those without resources. Thirty-five percent of hospitals with at least some experience implementing evidence-based practices met the adoption benchmark compared with 0% with very little to no experience. Hospitals with previous experience expressed commitment to implementing new programs like STRIDE. Additionally, in contacts with Function QUERI personnel, 8 of the 24 hospitals that did not meet the adoption benchmark reported intentionally delaying launching STRIDE due to staffing issues.
Hospital Characteristics and STRIDE Adoption Benchmark (Adopted/Did Not Adopt).
Note. Percentages may not add to 100% due to rounding.
SD = standard deviation.
Facility complexity level classifies VHA facilities at levels 1a, 1b, 1c, 2, or 3 with level 1a being the most complex and level 3 being the least complex. The model is reviewed and updated with current data every 3 years. The peer grouping system is based on 7 variables relating to patient population, clinical services complexity, and education and research. Data are from fiscal year 2020.
Strategic Analytics for Improvement and Learning (SAIL) measure from the most recent quarter available relative to hospitals’ start date.
SAIL measure from the most recent quarter available relative to hospitals' start date. A facility's score is the percentage of patients rating their hospital stay 9 or 10 on a 0 (worst hospital possible) to 10 (best hospital possible) scale.
From intake form completed by hospitals’ point of contact.
Hospitals that met the adoption benchmark reported slightly higher organizational resilience than those that did not (mean = 23.5 [SD = 2.6] vs 22.7 [SD = 2.6]; Table 2, Figure A1) and greater organizational readiness to change (mean = 4.2 [SD = 0.5] vs 3.8 [SD = 0.6]; Table 2, Figure A2). Whether these differences in organizational resilience and readiness to change were of sufficient magnitude to signal a true difference is unclear. Implementation climate was not consistently associated with successful adoption (Table 2, Figure A3).
Baseline Measures of Organizational Characteristics and STRIDE Adoption Benchmark (Adopted/Did Not Adopt).
Note. The median number of staff respondents for measures in Table 2 per site = 4 (minimum = 2, maximum = 11).
Organizational resilience is scored by summing 8 (1 [strongly disagree] – 4 [strongly agree]) Likert-scale items; minimum and maximum possible scores are 8 and 32, respectively. Higher scores reflect higher organizational resilience.
ORIC is scored by averaging 9 (1 [disagree] – 5 [agree]) Likert-scale items; minimum and maximum possible scores are 1 and 5, respectively. Higher scores reflect greater organizational readiness to implement change.
ICS subscales are each scored by averaging 3 (0 [not at all] – 4 [to a great extent]) Likert-scale items. The ICS total score is scored by averaging the 18 items. Minimum and maximum possible scores for subscales and total score are 0 and 4, respectively. Greater scores reflect higher endorsement of an item (eg, larger number indicates greater educational support, etc.).
SD = standard deviation; ORIC = Organizational Readiness for Implementing Change; ICS = Implementation Climate Scale; EBP = Evidence-based practices.
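The scoring rules in the notes above (sum the 8 resilience items; average the 9 ORIC items; average each ICS 3-item subscale and all 18 items for the total) can be expressed compactly. This Python sketch uses made-up item responses for a single respondent, purely to illustrate the arithmetic:

```python
from statistics import mean

# Made-up single-respondent item responses, for illustration only.
resilience_items = [3, 4, 3, 3, 4, 2, 3, 3]   # 8 items, each rated 1-4
oric_items = [4, 5, 4, 4, 3, 5, 4, 4, 4]      # 9 items, each rated 1-5
ics_items = [3, 2, 3] * 6                     # 18 items, each rated 0-4 (6 subscales x 3)

resilience_score = sum(resilience_items)      # possible range 8-32; higher = more resilient
oric_score = mean(oric_items)                 # possible range 1-5; higher = more ready
ics_subscales = [mean(ics_items[i:i + 3]) for i in range(0, 18, 3)]  # each 0-4
ics_total = mean(ics_items)                   # possible range 0-4

print(resilience_score)       # 25
print(round(oric_score, 2))   # 4.11
```

In the study these item-level scores were then averaged across staff respondents to produce the hospital-level scores reported in Table 2.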
Qualitative Results
We purposively sampled and interviewed points of contact from 4 hospitals at baseline, with an effort to obtain one from each of 4 cohorts, and achieved data saturation. Of the interviewed hospitals that met the adoption benchmark, all reported having some experience implementing other programs at their hospital. Consistent with the organizational readiness to change measure, we also asked participants qualitatively how committed the staff in their facility were to implementing STRIDE. All sites reported that staff were committed to implementing STRIDE. Specifically, we heard:
“We’re all committed. The nurse and nurse assistant have been detailed just to the STRIDE program, so they’re 100% committed. And the chief and myself are still doing other roles but also 100% committed to STRIDE.”
“One hundred percent. Simple. I’m in charge for the length of stay and readmission, so I’m heavily involved in their mortality and hospital-acquired complications. This is actually a very important measure. Early ambulation has a tight correlation with some of the outcome. So, leadership is fully supporting.”
Several hospitals also discussed the challenge of rolling out multiple EBPs at once and that this could create competing priorities. Specifically, we heard:
“So there is a group implementing something like nearly concurrently with STRIDE, and that wasn’t planned. It just happened to be that way.”
“Right now we’re currently working on trying to implement different communication strategies with nurses and utilizing teams for non-urgent messaging to cut down on the burden of pages and those sorts of things, We do a lot of stuff.”
“Well we, we have another, there was another pilot going on, but it was about early discharge, appointments for those like to, to expedite the process of discharge.”
Interview participants also reported other barriers to adoption, most notably challenges with staffing and delays associated with hiring staff.
Discussion
We leveraged data from a type 3 hybrid implementation-effectiveness cluster randomized controlled trial to describe the relationship between organizational characteristics and initial adoption benchmarks for STRIDE, an evidence-based hospital mobility program designed to mitigate functional decline among older Veterans. We found that a greater proportion of VA hospitals with lower complexity scores met these benchmarks compared with hospitals with higher complexity. This may be informative for Age-Friendly Health Systems engaging in transformation to ensure that they successfully implement mobility programs to support patient well-being in the inpatient setting. More specifically, we found that VA facility complexity measures may be a useful proxy for assessing competing priorities, a well-understood barrier to implementation12 that could limit team capacity to introduce a novel program like STRIDE. Although increased facility complexity may be associated with increased resources, this finding supported our expectations because we viewed increased facility complexity as also being potentially associated with competing priorities. Another finding that corroborated previous work is that hospitals reporting having received resources in support of their STRIDE programs were more likely to have met adoption benchmarks. Having supplemental resources available is also a well-understood implementation factor,12 and our evidence suggests it was an important driver of STRIDE adoption, given the disparity in adoption among hospitals that received resources versus those that did not (43% vs 29%). We also observed that hospital experience with implementation may have been an important signal or predictor of subsequent adoption: 35% of hospitals with at least some experience implementing evidence-based practices met adoption benchmarks, compared with none of the hospitals reporting very little to no experience.
Our qualitative findings provide additional context for this phenomenon. Finally, we unexpectedly did not find strong evidence that VA hospitals with higher levels of organizational readiness, resilience, or implementation climate had better adoption outcomes than hospitals with lower levels. We noted that for organizational readiness to implement change, the magnitude of the difference between hospitals meeting and not meeting adoption benchmarks may be meaningful; however, to our knowledge, a clear threshold for what difference makes a meaningful impact on adoption has not been established. This research, coupled with additional recent findings on ORIC's relationship with adoption, suggests that even small differences in reported readiness may contribute to meaningful differences in implementation outcomes of interest.25 Additionally, given limitations of the study design, the insufficient evidence may be an artifact of the small sample size or other sources of bias.
These confirmatory and counter-intuitive findings underscore how difficult it is to measure contextual factors that drive adoption, and other implementation outcomes of interest, despite their importance to predicting implementation success. Thus, there are continued calls across the field of implementation science to understand the causal pathways related to modifiable organizational characteristics (eg, organizational readiness) that drive implementation outcomes. 26
We considered reasons that the expected relationship between hospitals meeting the initial program benchmarks (ie, adoption) and the organizational characteristics was not found in this study. There could be many explanations, including low numbers of respondents, unmeasured sources of bias, and a lack of construct validity for these measures when tested in real-world settings. This last issue, the potential lack of measure applicability in real-world settings, deserves further investigation: the field of implementation science has widely noted that it is blossoming with measures but lacks the real-world validation essential for measuring clinical practice change. Future research should, whenever possible, validate measures in real-world settings to better understand pragmatic measures that could inform decision-making in shorter time frames without significant data collection burden on sites.
Implementation is difficult and few interventions cross the hurdle into practice, and when they do, it often requires great organizational effort. This is consistent across many Age-Friendly Health Systems. We know that context is variable across organizational settings and is often not measured or reported. Context has an impact on “fit” of a program within an organization. Context also has an impact on the likelihood of implementation success. Future efforts could pair intervention development teams with implementation science teams to enhance the potential fit of interventions earlier in the evidence generation pipeline with the settings in which they may be implemented. 29
In an integrated health care system, as in many Age-Friendly Health Systems, we found that successful program adoption following low-touch implementation support varied by hospital complexity, material support, and improvement experience. It is worth noting that this work was conducted in the Veterans Health Administration, a national, integrated health care system that may have unique features supporting successful adoption, such as a shared electronic health record. It also builds on a robust body of scholarly work, including clinical trials, testing the effectiveness of STRIDE.3-7 The strength of the evidence for STRIDE influences its implementation success, as does the support for implementation that STRIDE received from the national health care system.
Limitations and Future Research
Our study had a few limitations. First, we had few staff members reporting per hospital; our response rate was not especially low, but rather there were not many staff members to draw from at any given hospital. Second, our findings are focused on an existing evidence-based program that was developed to be nimble, with relatively few core components and a high degree of flexibility that enabled hospitals to tailor STRIDE delivery to local contextual factors. While this is ideal for successful implementation and supports adoption and scale-up, most complex interventions are not designed with such flexibility; thus, our findings may not generalize to all EBPs. Finally, we conducted this work in an integrated health care system that openly supported STRIDE implementation. However, our sites varied in complexity, geographic area, and availability of supplemental resources, mitigating this potential limitation.
Conclusions
Scaling up effective interventions is important to ensure that patients receive care known to be effective. Doing so requires an awareness of which sites are primed for implementation. Assessing validated contextual measures known to be associated with successful implementation is a crucial way to ensure that investments in implementation are used judiciously, and it can help with implementation resource planning.
Footnotes
Appendix
Author Contributions
Susan Hastings and Leah Zullig had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Hastings, Zullig, Drake.
Acquisition, analysis, or interpretation of data: Hastings, Zullig, Drake, Coffman, Stechuchak.
Drafting of the manuscript: Hastings, Zullig, Drake.
Critical revision of the manuscript for important intellectual content: Zullig, Hastings, Drake, Webster, Tucker, Choate, Stechuchak, Coffman, Kappler, Meyer, Van Houtven, Allen, Hughes, Sperber.
Statistical analysis: Coffman, Stechuchak.
Obtained funding: Hastings, Van Houtven, Allen.
Administrative, technical, or material support: Kappler, Webster.
Supervision: Hastings.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work is funded by the United States (U.S.) Department of Veterans Affairs Quality Enhancement Research Initiative (QUE 20-023). The contents do not represent the views of the US Department of Veterans Affairs or the US Government.
Data Available
Yes
Data Types
Data dictionary
How to Access Data
For requests consistent with Veterans Affairs policy, data may be made available via inquiry to Susan Hastings, MD (
When Available
With publication
Consent Statement
This study was approved as exempt research by the Institutional Review Board of the Durham VA Health Care System (#2334). No patients were enrolled in this study. We obtained survey and interview data from staff members. This study was registered on June 1, 2021 at ClinicalTrials.gov (identifier NCT04868656).
