Abstract
Background
Barriers and facilitators, collectively called determinants, of evidence-based practice implementation are key to identifying the best strategies for promoting implementation. Assessing determinants before implementation can help tailor strategies to those that would be most effective. Current measures of determinants are not comparable across implementation settings, so implementation scientists and practitioners often have to create their own measures. This study was the first step in creating determinant item banks that are usable across settings, focused here on intervention characteristics. We aimed to establish the content validity of the item bank.
Method
This study used a concurrent mixed methods approach. Items for assessing intervention characteristic determinants were first identified through systematic reviews. Implementation scientists then completed a survey where they provided both quantitative and qualitative feedback on the items. Finally, three experts with both clinical and implementation experience provided feedback on redundancy and representativeness.
Results
The systematic reviews identified over 1,959 items, so subsequent steps were limited to intervention characteristic determinants (271 items), such as adaptability of the practice. Based on feedback from 30 implementation scientists, the items were reduced to 92, but an additional 53 were added, most in response to qualitative feedback. Items were also rewritten based on qualitative results. Three experts reviewed the remaining 145 items, and based on their feedback, the number of items was reduced to 109.
Conclusions
Creating a determinants item bank was feasible and the final items had content validity. The next steps include testing reliability and validity in a larger sample of clinicians implementing evidence-based practices.
Plain Language Summary
Barriers prevent or impede an organization from using a new practice or innovation. Facilitators help promote the use of a new practice or innovation within an organization. Assessing barriers and facilitators before starting a new practice can help target barriers and increase the chances of successfully using the practice. This study created new measures of barriers and facilitators of using a new practice or innovation. Previous measures were identified through literature reviews. Implementation scientists provided feedback on the measures through an online survey. Three experts with clinical and implementation experience provided additional feedback. Measures were revised based on the survey and expert feedback. The next steps are to test the measures with clinicians implementing a new practice.
Introduction
One of the first steps in implementing an evidence-based intervention is identifying the barriers and facilitators to implementation. Barriers and facilitators are called determinants because they influence whether the evidence-based intervention gets implemented (Koppelaar et al., 2009). However, determinants are not always identified before deploying implementation strategies (Baker et al., 2010). Current methods for identifying determinants are either very labor intensive (Lawrence et al., 2016) or too general to guide selection of appropriate implementation strategies (Geerligs et al., 2018; Kajermo et al., 2010). Other measures are specific to an intervention or setting, and therefore not usable across studies, or do not map to implementation science frameworks (Martinez et al., 2014). Flexible yet comparable and accessible methods of assessing determinants are needed.
The purpose of the current study was to start addressing this critical barrier to assessing determinants of implementing evidence-based interventions. This study is the first in a series that aims to develop flexible and valid measures of determinants that can be easily used before and during implementation. Given the potential scope of determinants, the project was initially limited to determinants concerning intervention characteristics, which include relative advantage, complexity, trialability, adaptability, evidence strength and quality, cost, compatibility, and observability as defined by the Consolidated Framework for Implementation Research (CFIR) and the Diffusion of Innovation theory (Damschroder et al., 2009; Rogers, 2003). CFIR labels and defines determinants at various levels (provider, intervention, organization, and community). We use the CFIR definition of intervention characteristics, specifically that these are the attributes of a practice that can influence implementation. We chose to start with intervention characteristics because most implementation scientists and practitioners have some control over adaptations to the intervention, and those adaptations could be guided by determinants. The aims were to create an item bank for each intervention characteristic, pick a term for the evidence-based intervention, and decide whether questions should apply before implementation, during implementation, or both. Item banks are often developed using item response theory (Reise & Haviland, 2005), a modern statistical approach that allows items to be used in any configuration. Item response theory scores items such that different studies or projects can select different items based on what is most important to them, while scores remain comparable across studies and to norms or cutpoints.
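The item response theory approach described above can be illustrated with a two-parameter logistic (2PL) model, one common IRT model. This is a minimal sketch only: the parameter values below are illustrative assumptions, not calibrated estimates from this study, and the actual item bank may use a different IRT model (e.g., a graded response model for 4-point scales).

```python
import math

# Sketch of a two-parameter logistic (2PL) IRT model. Parameters a
# (discrimination) and b (difficulty) are illustrative assumptions,
# not values calibrated from this study's item bank.

def p_endorse(theta, a, b):
    """Probability that a respondent with trait level theta endorses the item."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Because every item is calibrated on a common trait scale, any subset of
# items yields scores on that same scale. This is what makes scores
# comparable across studies that select different item configurations.
print(p_endorse(theta=0.0, a=1.5, b=-0.5))  # ≈ 0.68
```

Respondents with higher trait levels have higher endorsement probabilities, so scores estimated from any calibrated subset of items remain on the shared metric.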
Here, we report the initial content validity studies for these determinant item banks, following the procedures outlined by Streiner and Norman (2015): a literature review, expert opinion, and a survey to create a quantitative content validity assessment.
Method
We first compiled items assessing determinants from previous studies into an item bank. We started with items from one general review (Miake-Lye et al., 2020) and then included measures identified in two other reviews of determinant measures, chosen specifically because these reviews identified which measures focused on intervention characteristics (Chaudoir et al., 2013; Lewis et al., 2021). Reviews not focused on intervention characteristics were included to capture measures that contained relevant items but were not specifically intervention characteristic measures. Items were added to a spreadsheet and coded by which CFIR determinant was represented. Over the course of coding, several items were placed in a general "other" category. After reviewing the content of these items, the investigative team decided to add three categories: observability from the Diffusion of Innovation (DOI) framework (Rogers, 2003), risk, and burden. Table 1 summarizes the definitions of each determinant, including the ultimate definition derived from the item bank and the CFIR and DOI frameworks. We then sought copyright approval for items and eliminated those for which approval was not available (N = 100).
Definitions of Intervention Characteristic Determinants
Content Validity Survey
Next, we assessed the content validity of the item bank using a concurrent mixed methods approach. The institutional review board reviewed procedures and determined the content validity survey was exempt. We recruited 30 implementation scientists to complete an online survey and provide feedback on the 271 items in the bank. Participants were recruited through the professional networks of the investigative team and were sent email invitations explaining the study and providing a link to the survey. Inclusion criteria were self-identifying as an implementation scientist and the ability to complete the survey in English. Participants reviewed an informed consent statement before completing the survey. The quantitative portion of the survey used the content validity index (CVI; Polit et al., 2007) and had participants rate each item's relevance to the determinant domain using a 4-point scale (not relevant, somewhat relevant, quite relevant, highly relevant). Items were retained if 78% of participants rated the item relevance as quite or highly relevant to that determinant (Polit et al., 2007). Incentives were not provided for completing the survey.
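The item-level retention rule described above (the content validity index with the 78% threshold) can be sketched as follows. The ratings and threshold handling here are illustrative; the study applied the Polit et al. (2007) criterion, and the example ratings are invented, not study data.

```python
# Sketch of the item-level content validity index (I-CVI) retention rule:
# an item is retained if at least 78% of raters score it "quite relevant" (3)
# or "highly relevant" (4) on the 4-point relevance scale.
# Example ratings below are invented for illustration, not study data.

def item_cvi(ratings):
    """Proportion of raters scoring the item 3 or 4 on the 4-point scale."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

def retain(ratings, threshold=0.78):
    """Apply the CVI retention criterion (Polit et al., 2007)."""
    return item_cvi(ratings) >= threshold

example_ratings = [4, 3, 3, 2, 4, 4, 3, 1, 4, 3]  # 8 of 10 raters gave 3 or 4
print(item_cvi(example_ratings))  # 0.8
print(retain(example_ratings))    # True: 0.8 >= 0.78
```

With 30 raters, an item would need at least 24 quite/highly relevant ratings (24/30 = 0.80) to clear the 0.78 threshold.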
Qualitative feedback was solicited in two ways. First, questions at the beginning of the survey asked for feedback on using two sets of items, one for a planned implementation effort and another for an implementation effort underway (Supplemental Materials). Another open-ended question asked for feedback on terminology for referring to the evidence-based intervention and the larger organizations in which people work. Each question prompt had an open text field in which participants could type responses. Data were coded by two members of the investigative team using a primarily inductive, framework approach (Elo & Kyngas, 2008). The first coder read through the responses and created a preliminary codebook with 19 codes. The second coder read the codebook and responses. The two coders then independently coded each response. Discrepancies in coding were identified and resolved through discussion and revision of the codebook: the definitions of three codes were revised and one code was split into two. The second method of soliciting qualitative feedback was an open text field after each determinant asking about missing or misplaced items or concepts for that domain. The principal investigator drafted new items based on the comments, and the investigative team reviewed the new items for wording and representativeness of the determinant. The quantitative data informed the inclusion of specific items and the qualitative data informed the general structure of the item bank.
Expert Panel Review
Following the content validity survey, additional feedback on the item bank was sought from experts. Experts had to be actively practicing in some form of clinical care (medicine, psychology, social work, physical therapy, and nursing) and be either an implementation scientist or have experience implementing new practices in a clinical setting. The investigative team identified experts through their professional networks by contacting colleagues in social work, nursing, and physical therapy. Experts (n = 3) then reviewed a spreadsheet of the remaining items and provided feedback on redundancy, representativeness, and clarity for each item. Experts received a stipend for completing the review. The principal investigator then reviewed the feedback and eliminated or reworded items based on the expert feedback. Another member of the investigative team then reviewed the principal investigator's decisions and any disagreements were resolved through discussion.
Results
After compiling items from previous measures and obtaining copyright approval, 271 items were identified as potential candidates for the item bank. Through the process of categorizing the items into the CFIR categories, the need for additional intervention characteristic categories became clear, as many items did not fall within the CFIR intervention characteristics based on item content. The DOI category of observability was added, as were categories for risk and burden of the evidence-based practice (Table 1). The risk and burden categories were added based on item content and because these items did not clearly fit the DOI or CFIR definitions of intervention characteristic domains. The principal investigator reviewed and categorized each item that did not fit a CFIR domain based on item content, and another member of the investigative team provided feedback when categorization was unclear. Items were then rewritten into a consistent format so they could apply to multiple settings and implementation phases. For multiple settings, each item used a generic term ("practice"), and respondents were asked to think of the practice being implemented whenever they saw that term. For implementation phases, verbs and verb tenses were chosen that could apply to either a future or a current implementation project.
Content Validity Survey: Quantitative Results
Thirty implementation scientists rated the relevance of the 271 items to the corresponding domain (see Table 2 for sample characteristics). Based on the 23 participants who likely completed the survey in one sitting, we estimate the survey took 45 minutes to complete. Based on the CVI, 179 items were eliminated, leaving 92 items (Supplemental Materials). However, based on suggestions from the participants and analysis of the text responses, an additional 46 items were drafted and added to the item bank. Another 7 items were added when copyright approval was received after the content validity survey was fielded. Retained items were also revised based on feedback from participants. After the content validity survey, 145 items remained in the item bank across 11 intervention characteristic domains.
Sample Description
Content Validity Survey: Qualitative Results
The qualitative analysis of whether to use one set of questions or two sets (one for practices not yet implemented and one for practices being implemented) indicated a split in preferences (Table 3). Support for both options was expressed. A single set of questions would be simpler and easier to use and could track determinants over time; tracking determinants over time could in turn inform whether implementation strategies worked. Support for two sets of questions rested on differences between the two phases: participants stated that planning the implementation and implementing the practice were separate processes, and determinants might differ between them. Some participants reasoned that one set of questions would lose specificity and relevance for assessing determinants, thereby supporting two sets. Preventing mistakes was cited as a potential advantage of both options. Neither option emerged as clearly better.
Qualitative Analysis Results for One Versus Two Sets of Questions
Participants were asked whether they preferred one set of questions for both pre-implementation and during implementation or two sets of questions, one for each phase.
Results on using the terms "practice" and "organization" suggested support for this terminology (Table 4). Most participants supported the term "practice," citing that it was a more general term not associated only with clinical practices. Suggested alternatives included "intervention" instead of "practice" and "setting" instead of "organization." However, others reported that "intervention" implied an experimental intervention and was not as inclusive as "practice." The majority supported using "organization" because it was broad and could be clearly defined to extend beyond medical practices. Participants also stated the need to define these terms clearly, consider non-health settings, and make the questions apply beyond healthcare. For example, the term "practice" is often associated with clinical practice and not necessarily public health. Ultimately, there was support for using the terms "practice" and "organization," accompanied by definitions.
Qualitative Analysis Results for Terminology
Expert Review
Three clinical and implementation experts, representing social work and physical therapy, reviewed the 145 items and provided feedback on whether items were redundant with other items, whether they represented the domain, and on item wording. As the qualitative feedback did not clearly favor one or two sets of items, we decided to create one set of questions that could apply to both the planning and implementing phases; one set allows tracking over time and is easier for scientists and practitioners to use. The terms "practice" and "organization" were retained because the qualitative results supported them. Hence, experts reviewed a single set of items that could apply to both phases and used the "practice" terminology. The principal investigator synthesized the expert feedback and eliminated 36 items: items marked "redundant" or "not representative" by all three experts were eliminated, and items so marked by two experts were reviewed by the principal investigator for deletion. Another member of the investigative team reviewed the inclusion/exclusion decisions, and disagreements were resolved through discussion. The remaining 109 items were rewritten based on expert feedback (see Supplemental Material for the final item bank).
Discussion
The current study reported on the development and content validity of an item bank to assess intervention characteristic determinants of implementing an evidence-based practice. Using previous literature reviews and a mixed methods approach, this study established the content validity of the item bank across 11 determinants and informed the structure and wording of the questionnaire. Developing an item bank to assess intervention characteristic determinants was feasible, supporting the development of other item banks for additional determinants.
Our results suggest several drawbacks of current determinant measures that the item banks could address. Few items substantially overlapped, and implementation experts suggested many items beyond those already compiled, implying that current measures do not adequately capture intervention characteristic determinants. Nearly two-thirds of the items from previous measures were not rated as relevant by implementation experts, and practically all the items needed to be rewritten, possibly because previous measures were specific to a setting or intervention. The determinant item banks will help address these drawbacks by expanding what is measured, eliminating potentially redundant items, and creating comparable measures for use across settings.
Our next steps for developing the item bank include surveying a large sample of clinicians and other healthcare personnel involved in implementing an evidence-based practice. We plan to use item response theory (IRT; Reise & Revicki, 2015) to further establish the reliability of the item bank. Using IRT also means the item bank will be flexible: implementation scientists will be able to build customized assessments of determinants that are tailored to their settings but still comparable across studies. We will also use IRT to create a series of short forms for each determinant that are tailored to specific uses, such as screening for barriers to target with implementation strategies and monitoring barriers over time.
Using the intervention characteristics determinants item bank as a model, we also plan to develop determinant item banks for other domains. For example, item banks to assess provider characteristics (Damschroder et al., 2009) could help target training and support to provider needs. Item banks to assess patient characteristics could inform intervention adaptations based on patient needs. Our plan is to use IRT for future determinants item banks.
The results should be considered within the strengths and limitations of the study. A strength was the use of mixed methods to develop the item banks. While we developed these measures for use in healthcare, most items were phrased so they could be used in other implementation settings such as education or public health; however, additional content validity work may be needed before the items are used outside healthcare. Another limitation is that we could not determine response rates because the recruitment methods did not provide a denominator or sampling frame. The limited diversity of the samples and the reliance on existing professional relationships for recruitment were also limitations. The study showed that it is feasible to create item banks assessing determinants of implementing evidence-based practices. Future studies should assess the psychometric properties of the item banks and develop measures of additional determinants.
Supplemental Material
Supplemental material for this article (sj-docx-1-irp-10.1177_26334895231175527, sj-docx-2-irp-10.1177_26334895231175527, and sj-docx-3-irp-10.1177_26334895231175527) for "Content validity of an item bank to assess intervention characteristic determinants of implementing evidence-based practices" by Salene M.W. Jones, Aditya Shrey and Bryan J. Weiner in Implementation Research and Practice is available online.
Acknowledgments
This work was funded by a supplement to grant P50CA244432 from the National Cancer Institute.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
