Abstract
Introduction
Despite an increased interest in implementation costs, there is little to no practical guidance on how to conduct implementation cost evaluations. Recommendations, tools, and examples are needed to incorporate reliable and feasible costing approaches into implementation studies, along with guidance on when an economist is necessary. To this end, we identified key issues and developed this paper and a guide on pragmatic approaches for assessing and reporting implementation costs.
Method
We assembled a team of implementation scientists and health economists working in various settings to identify central issues related to implementation costing. Our objective was to support costing in implementation studies that is consistent, feasible, and practical to apply. We engaged in a limited, iterative process of developing initial guidelines and soliciting feedback, consistent with principles of USE-EBPI (Usability Evaluation for Evidence-Based Psychosocial Interventions) methodology, to make refinements and enhance broad applicability.
Results
We developed initial recommendations for a limited number of critical issues to advance the application of costing in implementation science and illustrated them using a study example. These issues were: (a) identifying relevant resource costs, (b) capturing resources using activity-based costing (ABC), (c) valuing resource units and summarizing and reporting costs, and (d) estimating replication and sustainment costs. We also emphasize the need to tailor approaches to different contexts and project-specific needs, and we provide guidance on when additional help may be needed.
Conclusions
Key to ensuring any program's successful adoption, implementation, and sustainment is understanding the costs and resources required. Costing implementation in the “real world” is both an art and a science; teams must make decisions about trade-offs between precision and burden on participants and the research team while still producing generalizable estimates. Transdisciplinary costing guidance can address these issues and provide details and resources to help pragmatically cost and report implementation efforts.
Plain Language Summary
1. What is already known about the topic?
Many authoritative, detailed references and guidelines exist for comprehensive economic evaluation. Recent papers have focused on cost and economic issues in dissemination and implementation (D&I) science. However, none of these provide concrete, pragmatic guidance. Recent work has outlined and discussed needs but has not provided explicit “how to” directions.
2. What does this paper add?
This paper adds concrete guidance on identifying a limited number of specific vital issues to assess and report. This paper provides examples of pragmatic costing and describes a guidebook intended to clarify and enhance the accessibility of several critical issues in implementation costing for researchers and practitioners.
3. What are the implications for practice, research, or policy?
These recommendations can help address one of the most important but still least consistently reported outcomes in implementation science.
Keywords
Introduction
Implementation science is a rapidly expanding field that encompasses implementing and evaluating various evidence-based interventions, guidelines, programs, or other “products” (Brownson et al., 2023). Implicit within implementation research studies is the need for robust evaluations of implementation efforts’ cost impacts (Proctor et al., 2011). Cost analysis evaluates the resources required for initiating and applying interventions and their associated implementation strategies. Including multiple partner perspectives, such as decision-makers and patients, is crucial when estimating costs to inform resource allocation effectively. While the need for such analysis is well-recognized (Gold et al., 2022; Powell et al., 2019; Proctor et al., 2011; Saldana et al., 2022; Bowser et al., 2021), there is a lack of consistency in data collection and reporting, which undermines comparability across studies for economic evaluations and decision-making. Authoritative economic analysis guidance exists (e.g., Drummond et al., 2015; Neumann et al., 2017), but specific guidance for costing implementation remains sparse.
This paper aims to bridge a critical gap in health economics within implementation science (Gold et al., 2023). We offer practical, step-by-step guidance on collecting and reporting cost data, supplemented by a detailed guidebook. Consistent with the recommendations for measuring healthcare value put forth by both the first and second Panels on Cost-effectiveness in Health and Medicine (Sanders et al., 2016; Weinstein et al., 1996), our objective is to enhance implementation researchers’ and practitioners’ capacity to conduct pragmatic, replicable cost analyses. Thus, we propose broad guidance intended to be context-flexible, to transcend any single discipline, and to be usable by diverse interested parties. We focus on activity-based costing (ABC), a type of microcosting that measures each input in producing and delivering a good, service, or intervention and then derives its cost (Alves et al., 2018). ABC is beneficial for comparing alternative implementation strategies, including costs relative to a reference case (e.g., standard practice). This paper serves those conducting implementation science research who may be new to cost evaluations and economic concepts, providing initial recommendations for cost analysis. We forgo addressing more complex economic evaluations, including cost-effectiveness and cost-benefit analysis, and methods such as simulation modeling. The guidance presented here follows standard practice in economic evaluation, with a focus on capturing the total costs of implementation (Turner et al., 2023); total costs refer to the total value of resources used, not just the monetary amount paid for them (Drummond et al., 2015). The “cost of a resource” typically refers to its financial cost or market price, whereas opportunity cost considers the potential value of that resource if used differently (Neumann et al., 2017; Turner et al., 2023).
We focus on general implementation costing guidance and include a glossary of economic terms to support readers’ understanding. This guidance is designed to be broad and adaptable to meet various implementation efforts’ unique questions, settings, and decision-maker needs.
Method
Three primary sources inspired this paper and the accompanying guidebook on implementation costing. The first was an informal group of researchers convened by the National Cancer Institute's Division for Cancer Control and Population Sciences and the U.S. Department of Veterans Affairs to explore the intersection of implementation science and economics. The second was the Colorado Implementation Science Center in Cancer Control (COISC3), which utilized the microcosting methods presented here and developed further resources, including tables, templates, and practical examples (Glasgow et al., 2024). Lastly, the National Institute on Drug Abuse (NIDA)-funded CHERISH (Center for Health Economics of Treatment Interventions for Substance Use Disorder, HCV, and HIV) provided foundational tools for assessing cost and budget impacts related to treatments for substance use disorders and associated health conditions (Ryan et al., 2024). This paper aims to consolidate and present this guidance based on our collective experience to bolster practical implementation science.
Among the costing issues identified in the paper collection (Gold et al., 2023) was the absence of guidance for consistently conducting and reporting cost analysis in implementation research (Dopp et al., 2023). Costs remain an essential consideration in the field as both a potential determinant of implementation success and sustainment (Damschroder et al., 2022) and a vital implementation outcome (Proctor et al., 2011). Despite recognizing that costs are essential to decision-making by potential adopting settings, a recent scoping review indicated that in the past decade, only 7.8% of studies examined implementation costs (Proctor et al., 2023), partly due to limited guidance available for implementation costing. Researchers recognize the need for pragmatic costing guides, usable by researchers and community partners, that “unify a common set of costing methodologies suited for D&I; standardize procedures for capturing costs; and are flexible enough to be applicable across varied D&I strategies” (Dopp et al., 2023, p. 243).
Costing Guidance Development Process
The development of the costing guidance shared in this manuscript did not follow a formal research methodology. Our primary objective was not to establish novel methods but to address an urgent need for greater harmonization in costing approaches within implementation research. The guidance shared here was developed through a collaborative and iterative process, drawing upon the authors’ expertise in implementation science and economic evaluation, recent advancements in the field (Bowser et al., 2021; Gold et al., 2022, 2023; other papers in this collection), and established guidelines from health economics (e.g., Neumann et al., 2017).
A team of implementation scientists and economists from COISC3 initially developed the costing guidance. Following the drafting of the initial cost guidebook, the team expanded to include additional collaborators beyond COISC3. This expansion arose from the recognition of potential duplication of efforts and a shared commitment to consolidating collective learnings rather than pursuing siloed approaches. These additional contributors joined after the initial draft was circulated and contributed to its revision and refinement.
Thus, to address the lack of costing resources in implementation science, we developed a pragmatic costing approach based on researcher input, available information, and the field's current needs. The team created a “Costing Guidebook for Implementation Scientists” (referred to as the “Costing Guidebook”), a freely available resource.
We intend the guidance in this paper and the guidebook to be broadly applicable yet focused, practical, and customizable, allowing implementation science teams, program planners, and evaluators to cost implementation efforts. The team sought input from fellow researchers through informal pilot usability testing, with steps aligned with USE-EBPI (Usability Evaluation for Evidence-Based Psychosocial Interventions; Lyon et al., 2020), depicted in Figure 1. We describe the four-step process in the next section.

Iterative feedback steps for the implementation costing guidance and associated implementation costing guidebook, consistent with general steps of USE-EBPI (usability evaluation for evidence-based psychosocial interventions) methodology (Lyon et al., 2020).
User Feedback on the Costing Guidance Steps
Step 1: Identify Users
Rather than employing formal sampling procedures, we recruited seven implementation scientists with expertise in behavioral science, public health, and applied health economics through personal invitations and outreach via professional networks. This targeted approach ensured feedback from individuals with the expertise needed to strengthen the costing guidance and associated steps provided in the guidebook. We collected feedback through anonymous surveys and follow-up interviews.
In addition, we led a half-day workshop at a national implementation science meeting, open to all attendees. This session attracted participants with diverse backgrounds and experiences, providing valuable insights related to the clarity, organization, and practical application of the costing guidance and its accompanying steps. Gathering feedback in this open forum aligns with USE-EBPI guidelines, which emphasize incorporating varied perspectives (Lyon et al., 2020).
Step 2: Define and Prioritize Components
During the initial guidebook/guidance development, the contributors focused on incorporating tangible, digital materials that would aid in completing costing for implementation projects. The expanded content included templates (e.g., Excel spreadsheets), examples, and short videos introducing and discussing specific concepts before their application (consistent with EBPI packaging). We also prioritized specific, foundational steps to conduct a cost analysis that did not require extensive economic expertise (consistent with prioritizing EBPI components). We sought to determine the core components of conducting a cost analysis alongside an implementation trial that would be generalizable across settings and content areas.
Step 3: Plan and Conduct Small Pilot Tests
Guidebook developers created a survey that combined closed- and open-ended questions to share with potential users. The anonymous survey asked participants (the seven implementation scientists with expertise in behavioral science, public health, and applied health economics) open-ended questions about the tools and guidance, whether they would need additional information or tools, their comfort level in using the guide without an economist, and whether they would recommend it to colleagues. Additionally, the survey included a Likert-scale question on the extent to which the guidebook was easy to use and met their costing needs. The team used the collected feedback to inform revisions to the cost guidance and related guidebook components.
Step 4: Organize and Prioritize Revisions
We refined the guidebook based on user feedback, focusing on four primary areas: (1) Terminology—we simplified language to be more accessible to noneconomists (Gold et al., 2022). (2) Process mapping clarity and options—beyond the process map, we included a new section on using the Stages of Implementation Completion® (SIC) and its COINS (Costs of Implementing New Strategies) methodology for activity mapping alternatives (Saldana et al., 2014). (3) Practical Examples—we enriched the guidebook with additional case studies and a video to illustrate the application in clinical and nonclinical settings. (4) Scaffolding—we enhanced user understanding by adjusting explanations, such as streamlining the section on sensitivity analysis to concentrate on sustainability and replication in implementation science. To assist the reader, the Appendix includes a glossary of terms relevant to the guidance presented in this article.
Results
The implementation costing guidance provides an initial foundation for basic implementation costing that can be understood without extensive training. Although focused on introductory costing, this guidance also underscores instances where an economist's expertise is necessary and/or recommended. Covering essential steps for cost analysis, the guidebook and this paper together outline: (a) identifying the resources required (e.g., staff time, materials); (b) mapping the implementation process to define key activities and resources used; (c) measuring the quantity of resources used (i.e., how much of each input is needed), using approaches such as activity-based costing (ABC); (d) assigning monetary values to those resources by applying appropriate unit costs; and (e) reporting total and per-unit costs, including considerations for future replication and long-term sustainability.
Case Example: Brief Background
We use the Project ED Health study to illustrate each of the essential cost analysis steps. Project ED Health examined the impact of implementation facilitation (IF) on Emergency Department (ED)-initiated buprenorphine, along with referrals for community-based medication for opioid use disorder (D’Onofrio et al., 2019, 2023; Ryan et al., 2024; Lu et al., 2025).
Identifying Relevant Resources Required
Determining What Costs to Include
We recommend collecting cost data on all possible aspects of implementation, especially the implementation strategies used to support effective delivery of the intervention (e.g., implementation facilitation, coaching). Prospective microcosting approaches, as in Project ED Health, are often optimal for assessing component/task-level implementation and intervention costs, including those not required for nonresearch implementation. Top-down or gross costing, in contrast, is frequently used to determine total health expenditures and may produce reliable estimates at the group level, but these aggregated costs may not be applicable in many current implementation research projects (Chapko et al., 2009; Olsson, 2011).
When considering whether to include a cost, an important question is, “What activities and resources would be needed for replication in nonresearch settings?” Adoption in nonresearch settings may involve a less explicit protocol for data collection; less frequent and intensive time tracking; and less feedback for, and supervision of, implementation agents. But research settings can provide a valuable estimate of resources required for EBP implementation, especially when other information is not available, provided that research-specific costs, such as time spent preparing an IRB application, are omitted. Implementation resource requirements will differ according to the setting, target population, and site. We recommend a comprehensive approach to costing that includes identifying all the resources required to implement and sustain an intervention, regardless of whether they represent a direct or subsidized cost to a particular site. Doing so accounts for the value of those resources should the organization choose to use them in an alternative manner. Clearly stating these assumptions when estimating costs also enhances transparency and generalizability, given that resource constraints are likely to vary across potential sites (Jones Rhodes et al., 2018; Ritzwoller et al., 2009; Saldana et al., 2022).
Types of Costs: Variable, Fixed Startup, and Time-Dependent Costs
Considering different types of costs helps determine which costs to include. This manuscript and the guidebook focus on activity-based costing (ABC), and we address fixed startup, variable, and time-dependent costs. Fixed startup costs are one-time expenses incurred at the beginning of the prespecified budgeting timeframe, such as equipment and planning (Neumann et al., 2017). Fixed startup costs during the preimplementation phase for the Project ED Health sites included time costs for conducting a two-part readiness assessment, startup partner engagement meetings, and implementation planning for rollout (see Table 1). Time-dependent costs are recurring expenses fixed over a given period within the prespecified budgeting timeframe (e.g., monthly or annually; Neumann et al., 2017). Time-dependent costs for Project ED Health included quarterly meetings with the referral team and annual nursing educational meetings. Variable costs are expenses that fluctuate depending on the extent and scale of the implementation effort, for example, costs associated with each patient's care (Gold et al., 2022; Neumann et al., 2017). Variable costs for the Project ED Health sites included time costs for provider education, program marketing, and identifying patients and linking them to care.
Activities Across Implementation Phases for Project ED Health Guided by i-PARIHS.
Note. i-PARIHS: Integrated promoting action on research implementation in health services (Harvey & Kitson, 2016).
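The three cost types described above combine additively over a budgeting timeframe. The short sketch below illustrates this arithmetic; all figures and activity labels are hypothetical assumptions for illustration, not Project ED Health estimates.

```python
# Hypothetical sketch of combining the three cost types for a first-year budget.
# All dollar figures are illustrative assumptions, not study estimates.

fixed_startup = 4_000.00           # one-time: e.g., readiness assessment, planning
time_dependent_per_month = 250.00  # recurring: e.g., standing team meetings
variable_per_patient = 35.00       # scales with volume: e.g., patient linkage to care

months = 12
patients = 200

total_cost = (
    fixed_startup
    + time_dependent_per_month * months
    + variable_per_patient * patients
)
print(f"Total first-year implementation cost: ${total_cost:,.2f}")
# 4,000 + (250 * 12) + (35 * 200) = $14,000.00
```

The same structure makes it easy to rerun the estimate for a different timeframe or patient volume, which is useful when variable costs dominate.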
Downstream costs include the events, utilization, resources, and costs that may be attributable to the intervention and implementation strategy but take place outside the intervention itself. While included in comprehensive cost-effectiveness and other economic analyses, they are outside the scope of this manuscript. One example of downstream costs is the resources used (or saved) downstream of deploying public health interventions (Roberts et al., 2019). Some consequences may be outside the healthcare (or other) sector. In the Project ED Health example, downstream costs include those associated with community-based opioid use disorder (OUD) care and cost-offsets associated with resources saved by averting more intensive or prolonged medical care related to insufficiently treated OUD (Lu et al., 2025).
Identify Activities: Process Mapping and Process Frameworks
Defining the implementation process is essential to identifying inputs, measuring resources, and assessing costs (Cidav et al., 2020; Keel et al., 2017). This process is consistent with the implementation strategy specification described by Proctor et al. (2013): naming the implementation strategy, defining it, and specifying it (actor, action, action target, temporality, dosage, etc.). Specifying the implementation strategies helps ensure we account for all essential activities. We discuss two approaches to identifying key activities: process maps and frameworks. These approaches can be used individually or together, depending on what suits the project.
Process Maps
Implementation teams and partners should collaboratively construct process maps. We recommend outlining the intervention's delivery process—including the implementation strategies used—and identifying all relevant activities, personnel, and resources involved. This includes activities across interested parties and implementation phases (Roseen et al., 2024). Ideally, implementation teams and researchers coordinate with sites on the workflow depicted in the process map. The outline is mapped visually and iteratively reviewed and revised with implementation partners. Figure 2 displays a process map of the Project ED Health study. It serves to identify relevant activities, the personnel involved, and the resources required. The process map can guide cost-collection protocol development by prompting detailed questions about the identified activities, personnel involved, duration, and other resources used.

Process map of Project ED Health.
Process Frameworks
Implementation science process frameworks can also guide ABC. Process models describe or guide phases of the implementation process, including planning, execution, and sustainment of implementation efforts (Nilsen, 2015). Process model examples include EPIS (Exploration, Preparation, Implementation and Sustainment; Aarons et al., 2011) and SIC (Saldana, 2014). In Project ED Health, we describe the implementation phases consistent with the integrated Promoting Action on Research Implementation in Health Services (i-PARIHS) framework: preimplementation, implementation, and sustainment, to guide activity identification across phases. See Table 1 for example activities in Project ED Health across phases.
Researchers must also consider time horizons when distinguishing between implementation phases. For example, cost-effectiveness analysis compares alternative interventions when both are at a steady state; thus, one should not estimate an intervention's sustainment cost until this is believed to be the case. Implementation project cost information is arguably most useful to routine implementation when evaluated within a timeframe that aligns with strategic planning decisions, typically 3–5 years (Steiner, 2010).
Researchers have developed tools specifically to aid in operationalizing the implementation process and guide assessing associated resource needs; one is the Cost of Implementing New Strategies (COINS; Saldana et al., 2014). Saldana and colleagues developed COINS for use with the SIC to measure the implementation process and milestones (Saldana et al., 2014). Activities on the SIC define key implementation strategies needed throughout the full implementation process from preimplementation (e.g., Stage 1: engagement) to implementation (e.g., Stage 5: fidelity monitoring established) to sustainment (Stage 8).
Analysis Perspectives
The stakeholder perspective is a critical component of a cost analysis, as it dictates which resources should be included and the value that should be associated with them. An extensive discussion of perspectives is beyond the scope of this paper. However, several resources discuss this issue in more detail (e.g., Drummond et al., 2015; Neumann et al., 2017), including implementation science-specific discussions (Eisman et al., 2021, 2023; Gold et al., 2022). We emphasize that the chosen perspectives depend on the overarching focus of the economic evaluation and the target audience of the cost analysis. Among the most common perspectives adopted for microcosting analyses, including Project ED Health, is that of the clinic/health system (Lu et al., 2025; Ryan et al., 2024). The intent of capturing and valuing resources required for intervention implementation and sustainment is to inform the decisions of those planning to manage it; thus, focusing on the health system, payer, organization, or provider is often optimal in implementation research (Dopp et al., 2023; Eisman et al., 2020).
Measuring the Quantity of Resources Used via Activity-Based Costing
Activity-based costing (ABC) refers to a cost accounting method that identifies the cost of each implementation activity (Alves et al., 2018; Cooper & Kaplan, 1991). We focus on ABC methodology because most implementation strategies center on engaging in specific activities (e.g., audit and feedback; facilitation) to support implementation; ABC is widely used in implementation research as it is suitable for estimating costs associated with replicating both the implementation and intervention activities related to a specific project. Determining which resources are relevant may involve additional considerations. For example, debates and confusion have arisen over whether implementation science should cost the intervention, the implementation strategies, or both (Eisman et al., 2020; Gold et al., 2022). The methods described here are for estimating costs related to both the intervention itself and its implementation. The key is to be transparent about the specific activities and items included (and excluded) in the cost estimates (Crowley et al., 2018).
Measuring Resource Quantities: Assessment Methods
We briefly describe five options for assessing time spent (i.e., resource quantity) on activities to estimate their costs. For example, in Project ED Health, semi-structured interviews were conducted with relevant personnel and detailed activity reports were logged. We focus on time because it is consistently a challenging resource to quantify and is often responsible for the majority of costs in implementation science projects (Eisman et al., 2023; Levy et al., 2023). Decisions regarding the best-fit option require balancing precision and burden; generally, the more precise the method, the more burdensome (Wagner et al., 2020). We describe each option briefly in the following sections; see Table 2 for a summary.
Summary of Time-Cost Data Collection Methods With Examples.
Note. Project ED Health utilized primarily activity logs; other methods are provided as possible examples (informed by Chapel & Wang, 2019; Huebschmann et al., 2022; Keel et al., 2017).
Direct observation
Staff members observe and record resources used during implementation. These data are often precise and accurate. However, this method (a) places a high burden on the research team and can require considerable time (and expense) to obtain reliable data, and (b) requires that observations be conducted during a typical activity/workload time that is representative of the general or average time spent on that activity (Frick, 2009). When implementation encounters are recorded, it may be possible to use automated records of time spent.
Activity logs
These logs involve recording the time and resources used by implementing staff. Activity logs are commonly used in implementation research (e.g., Hoeft et al., 2019; Ritchie et al., 2020). They can help attribute time and costs to each discrete activity related to implementation strategies. Still, self-reported logs can have issues with accuracy and completion rates (Wong et al., 2022).
Targeted questionnaires
This method involves surveying delivery staff about the time and resources used for implementation. These questionnaires often ask for cumulative time estimates and can provide insight into personnel time on an implementation project. Though less burdensome than activity logs, they trade off precision and are, by definition, retrospective, making them susceptible to recall issues (Chapel & Wang, 2019).
Key informant interviews
Key informant interviews are another method to collect time and resource data, garnering information from those involved and knowledgeable about the implementation process to estimate costs (Chapel & Wang, 2019). The interview process can shed additional insight into the implementation process. It is often a valuable adjunct to other costing methods as it permits probing for clarifications from different sources (e.g., activity logs). It is sometimes possible to structure interviews around process maps, which can help collect data on adaptations and time.
On-site database approaches
Some implementation projects occur in settings where an on-site database can help to track and log resources, including labor, such as “time stamps” in the electronic health record (Huebschmann et al., 2022). However, there are challenges with missing data and extreme values (e.g., forgetting to close an EHR record). This approach requires infrastructure and programming, but will likely be used more as the digital infrastructure develops.
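Whichever of the five methods is used, time data ultimately reduce to structured records of who spent how long on which activity. The sketch below shows one possible record structure for an activity log, with a simple plausibility check reflecting the accuracy concerns noted above; the field names and entries are hypothetical illustrations, not Project ED Health's actual instruments.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical structure for one activity-log entry; field names are
# illustrative assumptions, not drawn from any specific study's instruments.
@dataclass
class ActivityLogEntry:
    log_date: date
    staff_role: str   # e.g., "RN", "facilitator"
    activity: str     # discrete implementation activity
    minutes: float    # self-reported time spent

entries = [
    ActivityLogEntry(date(2024, 3, 4), "facilitator", "audit and feedback", 45),
    ActivityLogEntry(date(2024, 3, 5), "RN", "provider education", 30),
]

# Basic plausibility check on self-reported times (e.g., nonpositive or
# longer than a full workday), echoing accuracy concerns with self-report.
flagged = [e for e in entries if e.minutes <= 0 or e.minutes > 480]
total_minutes = sum(e.minutes for e in entries)
print(f"Logged {total_minutes} minutes across {len(entries)} entries; {len(flagged)} flagged")
```

Structuring entries this way also eases later aggregation by activity, role, or implementation phase.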
Measuring Resource Quantities: Tailoring Cost-Collection Approaches
Researchers and practitioners should tailor cost-collection methods to the specifics of their implementation projects, existing data collection, and available resources. Recent work by Levy et al. (2023) underscores the challenges of selecting the method that optimizes time-cost data reliability and rigor. They found that methods such as activity logs were challenging to execute; passive data collection, including data from administrative records, was the most straightforward for assessing provider time but missed activities and time spent outside these records (Levy et al., 2023). Tailoring cost data collection should include aligning methods with data collection already in process whenever possible, for example, tracking time when workflows are already being tracked (Levy et al., 2023). While no one method is best for all projects, we underscore the need for transparency in methods reporting, including the frequency and timing of assessments. When possible, we recommend using multiple methods to assess costs, given the strengths and limitations of each method.
Assigning Monetary Values: Calculating, Summarizing, and Reporting Costs
Calculating and Summarizing Costs
Valuing resource units refers to estimating costs for each identified activity and resource, including time. When valuing resource units, we assign a monetary value to time using salary and wage data, including fringe benefit rates. Many potential sources are available for calculating these costs, each with benefits and drawbacks. One widely used source is the Bureau of Labor Statistics (BLS), a publicly available database of wages and salaries based on various factors, including sector, experience/education, and region of the country (U.S. Department of Labor, 2023). BLS is widely accessible and can be helpful, especially when estimating replication costs tailored to a different occupation, education level (e.g., from an MD to an RN), or region. One drawback is that some occupations do not fit neatly into the BLS categories. Another option is to use actual wages. A benefit is that this yields more precise estimates for an implementation project in a specific setting. Drawbacks include limited generalizability to other settings and challenges accessing salary and wage data from organizations that do not wish to disclose this information.
Valuing resource units, specifically labor units, includes multiple components: (a) listing personnel and their occupation/credentials; (b) quantifying time for each perspective and/or activity, depending on the desired information and organization; (c) identifying hourly (or other time-increment) wages plus fringe rate; and (d) summing the total time costs by perspective, phase, activity, or potentially all of the above. Referring to Table 3, we use the example of an implementation-phase activity, local champion training and support: (a) personnel: a physician in the ED as the champion; (b) one hour bimonthly for 12 months; (c) using BLS, hourly rate + fringe: $159; and (d) annual cost: $1,909/site.
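The four labor-valuation steps above can be sketched in code. This is an illustrative example, not a tool from the guidebook: the $159/hr loaded rate follows the Project ED Health example, while the underlying $122.31 base wage, 30% fringe rate, and 12 annual one-hour sessions are assumptions chosen to reproduce the reported total.

```python
# Sketch of valuing labor units: (wage x (1 + fringe)) x hours, summed annually.
# Figures are illustrative; the session count is inferred from the reported total.

def annual_labor_cost(hourly_wage: float, fringe_rate: float,
                      hours_per_session: float, sessions_per_year: int) -> float:
    """Annual time cost for one activity at the fringe-loaded hourly rate."""
    loaded_rate = hourly_wage * (1 + fringe_rate)
    return loaded_rate * hours_per_session * sessions_per_year

# (a) personnel: ED physician champion
# (b) time: one-hour training/support sessions, 12 per year (assumed)
# (c) loaded rate: assumed $122.31 BLS wage + 30% fringe, about $159/hr
cost = annual_labor_cost(hourly_wage=122.31, fringe_rate=0.30,
                         hours_per_session=1, sessions_per_year=12)
print(f"${cost:,.0f}/site/year")  # about $1,908, in line with the ~$1,909 reported
```

The same function can be applied per activity and summed by phase or perspective, mirroring step (d).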
Project ED Health Activity-Based Costing Input Table.
Beyond time (i.e., labor), implementation projects likely also include nonlabor costs such as space, equipment, and materials. There are different approaches to estimating nonlabor costs, each with benefits and drawbacks. These include project-specific estimates (e.g., actual website modification/update costs), which are precise but may lack generalizability, and general estimates from public repositories (e.g., space and overhead costs), which may not be specific to the project.
Reporting Costs
After calculating costs, it is essential to communicate results in ways that are clear and most relevant to your audience and decision-makers. Ideally, conversations occur with decision-makers before implementation to best understand their priorities and the types of costs they are most concerned about. Reports can then highlight these costs and should include reader-friendly tables or graphs. The essential elements include illustrating the following: (a) personnel time, (b) personnel costs, and (c) other costs (e.g., materials, technology). It can often be helpful to present costs across implementation phases, by perspective, and potentially by activity. Table 3 reports results for Project ED Health, including costs by activity, site, and implementation phase. Table 4 includes tips for reporting results to different audiences. Adopting this approach for reporting will accomplish multiple objectives: (a) enhancing transparency in cost reporting, (b) supporting consistency across implementation projects, (c) providing explicit recognition of perspectives included or not included, and (d) informing replication of implementation efforts.
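A summary table of the kind described above can be built by grouping cost line items by phase and category. The line items and dollar figures below are hypothetical placeholders, not Project ED Health data:

```python
# Minimal sketch: summarize hypothetical cost line items by implementation
# phase and category for reader-friendly reporting. All figures are invented.
from collections import defaultdict

line_items = [
    # (phase, category, cost in dollars)
    ("pre-implementation", "personnel",  2400.0),
    ("pre-implementation", "materials",   350.0),
    ("implementation",     "personnel",  1908.0),
    ("implementation",     "technology",  500.0),
    ("sustainment",        "personnel",   950.0),
]

summary = defaultdict(float)
for phase, category, cost in line_items:
    summary[(phase, category)] += cost  # accumulate within each phase/category cell

for (phase, category), total in sorted(summary.items()):
    print(f"{phase:<20} {category:<12} ${total:,.0f}")
```

The same grouping key could be swapped for perspective or activity, matching the presentation choices discussed above.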
Tips for Reporting Results to Different Audiences From the Costing Guidebook.
Considerations for Scale and Replication
Scaling up (to similar systems) and scaling out (to new settings) are critical for advancing implementation cost analysis (Aarons et al., 2017). While project-specific cost data are informative, they reflect a unique context, staffing, participants, and resources that may differ elsewhere. To improve generalizability, we recommend estimating replication costs in likely adopting settings through sensitivity analyses (Jones Rhodes et al., 2018; Ritzwoller et al., 2009). Although a detailed review of sensitivity analysis methods is beyond the scope of this manuscript, scenario analysis represents a practical approach to characterizing uncertainty in implementation cost estimates. This form of “what if” analysis enables decision-makers to explore the impact of plausible variations in key cost drivers, such as personnel salaries, on overall expenditures (Neumann et al., 2017). For instance, analysts may model best-case scenarios using 10th percentile salary estimates and minimal resource use, and worst-case scenarios using 90th percentile salaries and higher-cost assumptions (Simoens, 2009). When extrapolating or scaling cost estimates to other settings, it is critical to consider contextual factors that may influence resource requirements and pricing. These include the number and qualifications of implementation staff, availability of supportive infrastructure (e.g., EHR systems, dashboards), and variation in intervention delivery modality, intensity, and frequency.
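The scenario ("what if") analysis described above can be sketched as a small calculation over best-, base-, and worst-case assumptions. The wage and fringe figures here are hypothetical placeholders; a real analysis would draw the percentile wages from BLS distributions for the relevant occupation and region:

```python
# Hypothetical scenario analysis for one labor cost driver (champion time).
# Percentile wage values below are invented for illustration only.

hours_per_year = 12  # assumed champion time commitment

scenarios = {
    "best case":  {"hourly_rate":  95.0, "fringe": 0.25},  # ~10th pct wage, low fringe
    "base case":  {"hourly_rate": 122.0, "fringe": 0.30},
    "worst case": {"hourly_rate": 160.0, "fringe": 0.35},  # ~90th pct wage, high fringe
}

results = {}
for name, s in scenarios.items():
    cost = s["hourly_rate"] * (1 + s["fringe"]) * hours_per_year
    results[name] = cost
    print(f"{name}: ${cost:,.0f}/site/year")
```

Presenting the resulting range, rather than a single point estimate, gives adopting settings a sense of how sensitive costs are to local wages and staffing assumptions.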
Discussion
While implementation science has made strides in acknowledging the importance of costs, more empirical research is needed to deepen our understanding and enhance practical application in routine settings (Dopp et al., 2023; Gold et al., 2023; Proctor et al., 2023). We urgently need to expand capacity in practical, rigorous cost data collection and analysis methods that implementation researchers without economics training can use when conducting costing research. Simultaneously, it is essential to recognize and most efficiently use the expertise of economists when their specialized knowledge and skills are needed. While continuing to expand the number of economists working in implementation science, we need to build the capacity of implementation scientists and their teams to conduct foundational activities of economic evaluation, starting with implementation cost analysis.
The primary purpose of this paper is to discuss a set of key implementation costing issues and make recommendations for their pragmatic use so that teams can competently conduct such assessments and analyses. Providing concrete guidance addresses a barrier for implementation scientists without sufficient access to economic expertise or for whom it is not affordable. In this paper and the related costing guidebook (Cronin et al., 2023), we provide guidance, a framework for costing, examples, and references to resources that enable noneconomists to assess and report implementation costs. This paper and related guidebook tackle the fundamental challenge of building capacity for implementation science, a central goal of the COISC3 and a need explicitly recognized in publications of the consortium on merging implementation science with health economics (Gold et al., 2023; Oh et al., 2021; Glasgow et al., 2023).
Often, implementation costing is all that is necessary, affordable, and within the scope of implementation research projects. For example, feasibility studies aim to obtain ballpark estimates of the time and cost of one or more implementation strategies, not to evaluate long-term economic impact. Understanding these costing issues and outcomes, and how to address potential adopters' perspectives on cost, is essential to increase the successful dissemination and implementation of evidence-based programs. We hope these recommendations will help standardize and promote transparent reporting of implementation costs to facilitate comparisons across reports. For example, training time and costs, implementer time across settings, and costs across implementation phases are often not reported or, when they are, reported in idiosyncratic ways.
We provide supplemental materials, templates, and resources in the guidebook that are publicly available for download and adaptation (Cronin et al., 2023). These supplemental materials include fillable process maps, outlines, results tables, interview guides, questionnaires, and auto-calculating spreadsheets. While each resource will require adaptation and tailoring for a specific implementation project, the guide should help users develop their own cost-data collection and reporting instruments.
Limitations and Future Directions
Both this paper and the guidebook it describes have limitations. We did not have specific funding resources for guidebook development and consequently were not able to conduct full-scale usability testing. Instead, we focused on a needs-based process of rapid, rigorous, and pragmatic costing guidance development to address this critical gap, recognizing that it is an iterative and ongoing process. Given these constraints and the relatively small number of persons invited to provide feedback, we prioritized including both implementation scientists and economists, and recruiting people with a range of implementation science expertise.
We did not go into detail regarding unique considerations of various settings (e.g., healthcare, education), health topics (e.g., cancer prevention, substance use treatment), or other critical cross-cutting topics (e.g., equity). The authors and the examples are all U.S.-based, and some of the issues of wages, related costs, and perspectives (e.g., of different payers) may not be applicable across countries. Adapting, expanding, and evaluating applications in international settings is an important future direction.
This paper and the accompanying guidebook emphasize a microcosting approach. In some contexts, however, gross-costing may be more feasible. At this stage, it remains unclear when collaboration with an economist, an experienced implementation scientist, or both is essential. Economists can provide valuable support in selecting the appropriate evaluation method, identifying and valuing costs, and interpreting findings. When seeking consultation, teams should first consider: (a) the primary economic question (e.g., program and implementation cost, budget impact, cost-benefit tradeoff), (b) the type and quality of data available, and (c) whether sufficient resources exist to support economist involvement. Barnett et al. (2021) offer more detailed recommendations and comprehensive discussion.
While characterizing uncertainty through sensitivity analyses is important, user feedback suggested minimizing detail in this area. This initial guidance, and the accompanying costing guidebook, will require future refinement and potential expansion to address emerging methods and setting-specific considerations. Realist evaluations (Pawson, 2013) can help determine under what conditions the guidance is most effective and for which types of projects and teams. Future work should also focus on updating best practices for resource measurement, incorporating methods such as budget impact analysis, and providing clearer direction on engaging health economics expertise. This proposed guidance offers a foundational step toward more consistent, pragmatic, and rigorous cost analysis in implementation research. A critical next step is aligning cost reporting with established guidelines, such as CHEERS (Consolidated Health Economic Evaluation Reporting Standards; Husereau et al., 2022) and StaRI (Standards for Reporting Implementation Studies; Pinnock et al., 2017), to support transparency and reproducibility.
Conclusion
Understanding the costs of implementing evidence-based programs is crucial for successful adoption, implementation, and sustainment. Despite its importance, pragmatic guidance for costing implementation efforts in applied settings is lacking. This paper addresses this gap by providing concrete guidance on identifying and reporting vital cost elements, offering practical examples, and describing a user-friendly guide with available supplemental resources to enhance the accessibility of implementation costing for both researchers and practitioners. By promoting transparency and consistency in cost reporting, these recommendations aim to improve the understanding and reporting of this critical aspect of implementation science, ultimately facilitating the translation of research into practice.
Supplemental Material
sj-docx-1-irp-10.1177_26334895261438501 - Supplemental material for Making Implementation Costing More Accessible: Initial Transdisciplinary Guidance for Researchers and Practitioners
Supplemental material, sj-docx-1-irp-10.1177_26334895261438501 for Making Implementation Costing More Accessible: Initial Transdisciplinary Guidance for Researchers and Practitioners by Andria B Eisman, John Cronin, Debra P Ritzwoller, Sean M Murphy, Lisa Saldana and Russell E Glasgow in Implementation Research and Practice
Footnotes
Funding
This research is supported by the National Institute on Drug Abuse, Division of Cancer Prevention, National Cancer Institute (K01DA044279, PI: Eisman; R01DA044745, PI: Saldana; P50CA244688, PI: Glasgow; and P30DA040500, PI: Schackman).
Declaration of Conflicting Interests
The authors have no conflicts of interest to disclose.
Supplemental Material
Supplemental material for this article is available online.
