Abstract
Most program evaluation efforts concentrate on assessments of program implementation and outcomes.
Introduction
An intervention program has three sequential phases: planning, implementation, and outcome. Evaluation of each phase is essential. Since planning is the first phase of the program cycle, plan quality affects program implementation and its overall effectiveness. Plan evaluations refer to the assessments of program plans and planning activities (Alexander, 2016; Khakee, 2008; Lichfield et al., 2016), while assessments of the implementation and outcome phases have commonly been conducted under the umbrella of program evaluation (Alkin & Christie, 2023; Shadish et al., 1991).
Both plan evaluation and program evaluation have similar interests: to provide tangible, trustworthy, and useful information about programs through systematic assessment. However, linkages between plan evaluation and program evaluation are not explicit in the literature. Guyadeen and Seasons (2016, 2018) pointed out that plan evaluation lacks an established theoretical and practical foundation, which it can potentially gain through a stronger connection with program evaluation. Likewise, plan evaluation is likely to expand the scope and enhance the usefulness of program evaluation.
Despite accomplishments, some issues are apparent from the literature regarding plan evaluations. The first is related to the approach to evaluating program plans. Three distinct
The second issue apparent in the plan evaluation literature is a focus on market values. Popular plan evaluation methods (i.e., Plan Balance Sheet Analysis [Alexander, 2016], Goal Achievement Matrix [Bracken, 2013], and cost–benefit analysis [Patassini & Miller, 2017]) are heavily focused on market value. This focus appears justified from the perspective of funding/planning agencies (FPAs), as they are accountable for how they spend appropriated public money. Critics of these methods point out that plan evaluation must expand beyond their limitations and should consider nonmarket values such as social equity, environmental, and participatory values (Englehart, 2007). Clearly, to this end, plan evaluations must engage and reflect the views of diverse stakeholders, not just those of FPAs.
This paper aims to use theories and approaches from the field of program evaluation to outline a participatory approach to evaluating program plans.
Evaluating Program Plans from the Program Evaluation Perspective
The Concept and Principles of Participatory Plan Evaluation
The literature suggests four venues for evaluators to contribute to program planning: (1) provide background information for developing a program plan (i.e., needs assessment or formative research; Chen, 2015), (2) join the planning team as members, to help develop a program plan (Nichols, 2002), (3) facilitate planners and stakeholders to codevelop a program plan (Majid et al., 2018), and (4) assess a program plan for improvement purposes, before its implementation, through a participatory approach. This paper will focus on option 4.
Program plans are blueprints for implementing interventions serving clients in communities. Some program plans are created by FPAs and disseminated along with funding announcements to address public issues. Other program plans are created by community-based organizations (CBOs) and included in their grant applications for funding opportunities advertised by FPAs. In either case, it is unknown whether plans are suitable for implementation in communities, or whether they are likely to produce desired outcomes. Program plan evaluation could provide the necessary evidence before proceeding with implementation.
Program plan evaluation assesses the viability of the program plan in communities, before implementation. Some questions answered by plan evaluations are similar to those answered by viability evaluation: Is the proposed plan addressing the problem for which it was created? Can CBOs implement the plan adequately? Can clients be recruited and retained in the program? Are services attractive to clients? Is the planned intervention likely to achieve outcomes? (Chen, 2010, 2015). Program plan evaluation is a distinct evaluation approach in its own right, just as process evaluation and outcome evaluation are.
Since stakeholders are the ones who must implement the plan, it is essential that they participate in its evaluation. Participatory approaches (Chouinard, 2013; Cousins & Chouinard, 2012; O'Sullivan, 2004; Ray & Miller, 2017; Rodríguez-Campos et al., 2020) are best suited to serve this purpose. Participatory approaches have long been used for program evaluations (implementation and outcomes), centered on principles such as shared control over evaluation processes, jointly setting objectives despite potentially different agendas, working out difficulties as a group, and developing a collective awareness (Crishna, 2007). However, participatory approaches for plan evaluations are difficult to find in the literature. Their value for program plan evaluations will be explained next, in the context of evidence-based interventions (EBIs) and non-EBIs.
In the case of EBIs, since their focus is heavy on rigor or internal validity issues, their transferability or external validity is assumed, but truly unknown (Glasgow et al., 2006). Participatory plan evaluation can improve the external validity or transferability of the proposed intervention since stakeholders have valuable knowledge about organizational capacity and community conditions (i.e., barriers and opportunities for delivering services, feasibility and sustainability issues, target populations, etc.). The participatory approach can help close the gap between recommendations coming from the research community in the form of EBIs (with a solid theoretical foundation) and practice realities encountered by local organizations and practitioners (Cousins & Chouinard, 2012; O'Sullivan, 2004; Ray & Miller, 2017).
In the case of non-EBIs, stakeholders might have little or no input at all. This may be an inherent consequence of the fact that those holding the knowledge and power to solve social or health problems, as well as the resources to do so, are generally the ones planning programs and making most decisions, including evaluation decisions. Engaging stakeholders in planning requires extra time and work, with little to no incentive to do so. However, non-EBI program plans can benefit from stakeholder input. Community engagement strengthens collaboration and promotes a common understanding of issues, solutions, and assessments while improving the feasibility, utility, appropriateness, and sustainability of programs (Roach & Fritz, 2022). The participatory approach also increases community buy-in (Koorts et al., 2018), an important factor for implementation success.
In conducting a participatory plan evaluation, evaluators could follow either a practical or a transformative stream (Cousins, 2005; Cousins & Whitmore, 1998)—each with its overarching rationale for evaluation, type of participants, and evaluation procedure.
Participatory approaches—practical or transformative—are based on democracy (Bergold & Thomas, 2012). Participants must be willing and capable of collaboration and power-sharing (Jagosh et al., 2015). Furthermore, evaluators must work with an FPA to create safe spaces for participants to express their views with confidence, without fear of retribution (Bergold & Thomas, 2012). Although it may sound trivial, ensuring safe spaces is essential to win stakeholders’ trust and cooperation in the participatory process.
The Methodology of Participatory Program Plan Evaluation
Participatory program plan evaluation involves two steps: (1) recognizing and addressing barriers, and (2) using effective methods to facilitate stakeholders’ assessment of the program plan.
Recognizing and addressing barriers to participatory program plan evaluation
Evaluators must be aware of, and prepared to address, the barriers to participatory plan evaluation. Barriers are factors that work against participatory evaluation; some are common across implementation contexts (e.g., organizational hierarchies, power structures), while others hinder participatory evaluation only in particular environments (e.g., organizational culture). We present a few barriers commonly described in the literature, while acknowledging that others may exist.
Barrier 1: Accountability priority
The current governance climate grounds the gold standard of program evaluation in a positivist view meant to generate impartial and objective evidence for accountability purposes (Chouinard, 2013). Under this condition, FPAs might not see the value of participatory evaluation beyond accountability. To address this barrier, evaluators should argue that program evaluation can serve both accountability and improvement purposes, rather than accountability alone (Alkin & Christie, 2023; Shadish et al., 1991). Participatory plan evaluation can prevent wasting money and resources by identifying program plan weaknesses before implementation in communities.
Barrier 2: Skepticism about stakeholders’ contributions
Researchers and decision-makers doubt that stakeholders can meaningfully contribute to developing program plans, given their lack of research training (Ahmed et al., 2004). To address this barrier, evaluators should stress stakeholders’ extensive knowledge, skills, and experience in serving communities. Additionally, there is growing recognition that conventional research alone provides limited community benefits; it is therefore important to embrace community knowledge in solving community problems (Chouinard, 2013).
Barrier 3: Timeline requirement
Participatory plan evaluation will likely add extra time and effort to the planning process. Decision makers might object to it for delaying program completion (Plottu & Plottu, 2011). To defuse this issue, evaluators should propose conducting plan evaluation as soon as the FPA drafts the plan. In the long run, evaluators should advocate for normalizing plan evaluations as a typical component of evaluation efforts, to preemptively strengthen the chances of program success in communities.
Barrier 4: Distrust
Funding/planning agencies and CBOs operate on two different levels, with unequal power and resources, and potentially different missions, priorities, and interests (Mullin & Daley, 2010; Weiss & Hughes, 2005). Since FPAs hold the purse and have more authority than CBOs, they have little incentive to elicit input from CBOs. When they do, CBOs may not provide candid feedback for fear of jeopardizing current or future financing from FPAs (Chen et al., 2019; Norris & Kushner, 2007). This power differential likely creates a distrust barrier to conducting participatory plan evaluation (Roach & Fritz, 2022).
To address this barrier, evaluators must act as mediators, ensuring everyone's interests are equally served. Maintaining a neutral position will support identifying solutions acceptable to both parties. This process—understanding the views and responsibilities of each party and communicating and negotiating on their behalf, while preserving the anonymity of stakeholders’ critiques—can be time-consuming.
Barrier 5: Communication issues
Longest and Rohrer (2005) indicated that communication between government agencies and stakeholders could have interpersonal, organizational, skills/knowledge, and attitude barriers. Particularly relevant to participatory evaluation are the channels used by FPAs to communicate with stakeholders. Announcements, emails, telephone calls, and grantee meetings are commonly used and assumed to offer plenty of opportunities for stakeholders to provide feedback. However, stakeholders perceive these channels as mere top-down, unidirectional ways used by FPAs to tell them what to do (Chen et al., 2019).
To address this barrier, evaluators need to work with FPAs and/or decision-makers to create safe and supportive environments for stakeholders to join the participatory plan evaluation and express their opinions. Funding/planning agencies should formally encourage stakeholders to participate in program plan evaluations and expressly state the willingness to adjust program plans, based on stakeholders’ input and plan evaluation findings.
Applying effective methods for facilitating stakeholders to assess a program plan
Duea et al. (2022) provided an inventory of participatory research methods based on project and partnership goals, organized into five domains. For example, the evaluation domain includes references to conceptual models and a web repository of evaluation tools for partnerships, as well as examples of participatory evaluation frameworks and their real-world applications. Duea and colleagues (2022) argue for the integration of participatory evaluation into projects from the beginning, to engender ownership and relevance of projects for communities. In agreement, we suggest using the participatory approach to evaluation in each of the three program stages: plan/planning, implementation, and outcomes. In this paper, we focus on the participatory approach for evaluating the plan/planning of a program by using the stress test (Chen et al., 2019).
The program stress test is a brainstorming exercise to examine the resilience of the structural and functional integrity of a program as planned, discover potential barriers or weaknesses of the plan, and work out solutions through open dialogue among representative stakeholders (Chen et al., 2021). The literature indicates that people are more creative in groups and generate powerful and innovative ideas through brainstorming exercises (O’Loghlin, 2016; Paulus & Nijstad, 2019). They are more likely to speak when others do as well, and more likely to advance an idea when everyone contributes. The program stress test stimulates participants to (1) identify issues incongruent with the program plan communicated by the FPA, (2) discuss their implications for implementation in local communities, and (3) explore the chain of consequences if issues are not addressed (i.e., how bad can it get, what could be the secondary effects—including positive and negative or unexpected events, etc.; Chen et al., 2021). Once evaluators summarize the information provided by stakeholders and present it to the FPA, a follow-up meeting can be scheduled to agree on feasible solutions for the issues raised by CBOs. Potential solutions identified by CBOs must be openly discussed with FPAs to inform acceptable program plan adjustments. The program stress test was initially proposed for evaluating an implemented program (Chen et al., 2021), but it can also be used to test a program plan before implementation.
Ex-Ante and Ex-Post Participatory Evaluation Approaches to Assess Program Plans
There are two general approaches to participatory evaluation for assessing program plans: ex-ante and ex-post. The ex-ante approach evaluates a program plan before the program is implemented, so that weaknesses can be identified and addressed in advance. By contrast, the ex-post approach evaluates a program plan after the program has been implemented, using implementation experience to reveal flaws in the plan and lessons for future planning.
Because participatory program plan evaluation is relatively new in program evaluation, we had difficulties finding applications in the literature. We also realized that several of our past projects were program plan evaluations. However, at that time we did not recognize these evaluations as plan evaluations, nor did we understand their significance. With the conceptual framework and methodology discussed above, it became obvious to us that those projects included plan evaluations. This section will use two examples to illustrate the application of the ex-ante and ex-post approaches.
Ex-Ante Approach for Evaluating a Program Plan
The following example illustrates the ex-ante approach.
In the late 1990s, the Centers for Disease Control and Prevention (CDC) prepared a plan to establish a national HIV Prevention M&E system for collecting data on funding, implementation, and outcomes to meet US Congress requirements. This was a challenging undertaking affecting many grantees, such as state health departments, local health districts, and community organizations funded by CDC. These organizations had a wide range of capacity, responsibility, and involvement in providing direct services to the population. Furthermore, at that time, there was no precedent to guide the building of such a system. An initial plan for developing the system was proposed. However, internal meetings to discuss the next steps produced mixed feelings about the plan. Some staff suggested inviting stakeholders to meetings to comment on the initial plan of the proposed M&E system. Others raised concerns that such action would prevent the system from being built in time to meet US Congress requirements. In the end, it was decided to get stakeholders’ feedback on the plan. Part of the reason for the decision was that the CDC HIV division had long and productive working relationships with state and local health departments via the National Alliance of State and Territorial AIDS Directors (NASTAD)—which represents and advocates for HIV health workers. The collaboration with NASTAD was expected to address obstacles such as communication and distrust barriers and complete the task before the deadline.
The stream of participatory evaluation used in this project was practical participatory evaluation. The CDC project team worked closely with NASTAD to plan activities and invited representatives from state health departments, local health districts, and CBOs to brainstorm ideas and comment on the M&E plan through focus groups (Chen, 2001). A secure environment was provided to ensure participants spoke candidly. With NASTAD's presence, participants felt comfortable expressing their concerns. Participants agreed on the urgent need for the M&E system. However, they cautioned that the task would be highly complex and difficult. They identified the following major problems with the plan:
1. Reporting to a national M&E system would create a burden on health departments and CBOs
2. Fear of arbitrary use of evaluation results by the CDC
3. Concern about low input from stakeholders in the development of guidance
4. Lack of expertise and capacity in health departments to implement the new requirements
The system required grantees to revise the existing system or develop a new one, compatible with the national system. Trained staff were necessary to implement these changes and ensure the quality of data, which was a burden for agencies without this capacity.
Health departments worried that once the new M&E was set up, the CDC could use the findings as punitive criteria when awarding grants. Those performing poorly (as determined through analyses of these newly available data) feared receiving substantially reduced CDC funding, without regard to the complex factors and situations that affected HIV prevention in their local area, and regardless of how justifiable the reasons for their performance were.
Stakeholders were concerned that CDC could take a top-down approach to developing guidance without consulting them, and simply mandate implementation. They were concerned that despite the CDC's best efforts, the top-down evaluation guidance would be difficult to implement, could provide misleading information, and be useless to grantees.
Stakeholders worried that the proposed M&E would “raise the bar” in terms of expectations for evaluation, meaning health departments had to have evaluators and electronic information experts, as well as financial resources to implement the M&E in their organizations. As such, stakeholders believed that the new system set them up for failure.
The Centers for Disease Control and Prevention took the feedback seriously and used the following strategies to develop the M&E system: (1) made evaluation guidance useful for both program accountability and improvement, (2) agreed to aggregate data at the national level, (3) pilot-tested the guidance, (4) created a phased-in implementation, and (5) provided technical and capacity-building assistance to stakeholders. In the end, the M&E project was successfully implemented and had strong stakeholder buy-in and support.
One important lesson from the project was that participatory evaluation is not only about getting stakeholder feedback but also about engaging in constructive dialogue. Funding/planning agencies must be willing to share power with stakeholders. This implies that stakeholder participation is a give-and-take process: funding/planning agencies cannot get everything they want and must be willing to negotiate. The compromises that CDC evaluators made were related to issues of scope and focus (i.e., which parts of the evaluation could be done immediately and which could wait, which evaluation questions needed to be rephrased to reflect reality, what kind of assistance health departments would need, and how evaluation data should be used). More specifically, the CDC team gave up the individual-level data requirement in this project. On the other hand, there was little dispute over issues related to the integrity of the design and methodology, the areas of most concern to evaluators.
Ex-Post Participatory Approach for Evaluating a Program Plan
The following example illustrates the ex-post approach.
The evaluation team reviewed the initiative guidelines, the data from the M&E system, and the planning documents from health districts, and finally conducted site visits and interviews with health districts and the state office (Chen et al., 2019). The evaluation found that low performance ratings in some cases were due to faults in the M&E system, which did not allow reporting of districts’ successes, while high ratings could have resulted from knowing how to parse the information to report what the M&E system asked for. The common theme in all interviews was a deep-seated distrust of state officials, who systematically planned initiatives without consulting local agencies and without having someone with significant field experience coordinate each initiative.
The initiative required health promotion coordinators (HPCs) to be hired to take a leading role in recruiting community partners and mobilizing community resources to promote community health. The HPCs indicated they did not command such authority and did not have the resources to lead such efforts (i.e., funding barely covered the HPCs’ salaries). Rather than leading, HPCs piggybacked on the partners’ efforts and reported these successes into the M&E system. This issue and others were never discussed during the planning phase because of ineffective communication avenues (top-down, informal), and because the projected power of the state office deterred local agencies from providing candid feedback. For example, the state used multiple communication channels such as announcements, emails, and annual meetings to engage health districts in providing feedback on the plan and the M&E system. However, health districts viewed these channels as avenues for the state to tell them what to do. Although districts were aware that they were not subordinate to the state office, they also acknowledged the subtle efforts of the state office to maintain the illusion of hierarchy through funding mechanisms, coordinating annual events, and asking district employees to cease meeting without a state official being present (Chen et al., 2019). These issues could have been identified through an ex-ante plan evaluation.
Conclusion and Discussion
Program evaluation has traditionally focused on evaluating program implementation and outcome phases, while plan evaluation focuses on program planning. Guyadeen and Seasons (2016, 2018) asserted that plan evaluation lacks an established theoretical and practical foundation and suggested that a stronger connection with program evaluation might solve this issue. In response to that suggestion, we proposed a participatory plan evaluation from the perspective of expanding program evaluation. We presented a conceptual framework discussing the requirements, barriers (as well as the strategies to remove them), and methodology for conducting plan evaluation. Two approaches to program plan evaluation were discussed: the ex-ante and the ex-post approaches.
Another important issue relates to disseminating EBIs. Implementation science literature debates whether EBIs should be implemented with high fidelity or adapted to fit community settings during the implementation stage (Bopp et al., 2013; Quinn & Kim, 2017). This paper suggests expanding the scope of that debate from the implementation stage to the planning stage. Should a proposed EBI be implemented in communities with fidelity in its disseminated form, or should it be examined during planning for necessary adaptations to enhance its viability in the community before implementation? We favor the second option. EBIs use randomized controlled trials to maximize internal validity at the expense of external validity (Glasgow et al., 2006). Efficacious EBIs (documented via randomized controlled trials) could be difficult to implement, or ineffective, in communities where contextual factors are not controlled. By contrast, adaptation has the benefit of weighing the influence of context and other factors during the planning stage rather than during or after implementation. Furthermore, since stakeholders are familiar with their communities, their input in adapting an EBI might alleviate implementation difficulties or even increase its effectiveness. These issues also require further investigation.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
