Abstract
This paper describes a community-based participatory research (CBPR) approach to evaluation used by an academic-practitioner partnership to refine the logic model for a violent crime reduction program and develop associated performance measures. Through qualitative and quantitative data collection, including semi-structured interviews, engagement with program stakeholders (e.g., program leaders and staff, peer researchers, community residents), community trainings, and review of program records, the research team worked through a six-phase participatory model to enhance stakeholders’ capacity for systematic progress assessment and data-driven decision-making. Key methodological contributions include making youth perspectives a central part of the logic model building process through in vivo qualitative data analyses and co-creating rigorous performance measures that holistically capture program outputs and outcomes. Although some challenges were encountered, such as delays in data collection and turnover of peer researchers, the CBPR approach strengthened organizational capacity for internal monitoring and program improvement, and set the stage for context-specific, responsive future impact evaluation.
Introduction
Addressing the needs of young people involved in or exposed to community violence is critical for fostering safer communities, promoting positive outcomes, and ensuring their overall well-being. In many urban areas across the United States (U.S.), community violence and gun injury have escalated to critical levels, posing significant public health risks. In the U.S., firearm homicides spiked by nearly 35% from 2019 to 2021, with urban areas disproportionately affected (Kegler et al., 2023). Aggravated assaults and gun assault rates also increased during and after the pandemic (Rosenfeld et al., 2021).
Although by 2024 many U.S. jurisdictions were experiencing a decline in reported rates of violence, the impact of community violence on public health has been profound, with firearm violence now recognized as one of the leading causes of death for young people in the U.S., particularly in marginalized communities (U.S. Centers for Disease Control, 2024). Globally, interpersonal violence—which includes community violence and gun deaths—is a leading cause of death among adolescents (Cullen et al., 2024). The effects of violence extend beyond fatalities, as survivors of violence face lasting trauma and a wide range of negative short- and long-term health and economic outcomes (Roman et al., 2025). The continuing and costly harms from violence have exposed the limitations of local governments to address the crisis adequately. Many scholars have stressed that addressing community and interpersonal violence requires a holistic approach that acknowledges the systemic drivers of violence, including economic hardship, social dislocation, and limited access to mental health resources (John Jay College Research Advisory Group, 2020; Prevention Institute, 2020).
Too few violence intervention programs today take a comprehensive, individualized approach that addresses both the root causes of violence—such as poverty, inequality, and lack of community resources—and the more immediate, individual-level drivers, such as personal trauma and exposure to violence (Burrell et al., 2021; Sharpe et al., 2022). While many interventions used globally incorporate models like Cure Violence and the Group Violence Intervention (GVI), which focus on interrupting cycles of violence through outreach and community engagement, these models often overlook the importance of tailoring strategies to the specific needs of individual participants (Butts et al., 2015). These models, which have shown results in the U.S. and other countries (e.g., see: Braga et al., 2018; Butts et al., 2015), tend to concentrate on violence at the community level, leaving gaps in addressing personal pathways to violence and the unique challenges faced by individuals at high risk of violence exposure. As a result, innovative programs and strategies that provide more holistic, person-centered interventions—combining economic support, job training, trauma-informed care, and cognitive behavioral skills training—are underutilized (Campie et al., 2024). Such approaches are critical for breaking the cycle of violence at both the broader community scale and the individual level, fostering lasting change by addressing the complex, intertwined factors that drive violent behavior.
In addition, research on the development and implementation of comprehensive, person-centered violence reduction models is limited. There is a pressing need for studies that explore innovative approaches, evaluate their potential effectiveness, and document the processes used to implement them (Nubani et al., 2023; Roman, 2021). A wide range of stakeholders—researchers, practitioners, governmental leaders—can benefit from understanding how these new and innovative models can be adapted to different contexts and which types of methods researchers employ with success to study or evaluate the programs. By expanding research in this area, scholars can build a body of literature that supports researcher-practitioner collaborations focused on creating safer, healthier communities (Bibby et al., 2018).
One reason for the lack of rigorous evaluation studies in this area may be that traditional research and evaluation methods can alienate communities, particularly when data collection is geared to external purposes rather than addressing local needs or involving community voices in solutions (Chicago Beyond, 2019; Griffith et al., 2008). Across the globe, over the last decade there has been a growing call for evaluation research methods to involve those directly impacted by crime and violence to ensure that the research is grounded in lived experience (Kia-Keating et al., 2017; Nubani et al., 2023). This, in turn, can lead to nuanced findings with actionable outcomes that benefit both the program being studied and the surrounding community. Marginalized communities are often over-researched, particularly in ways that focus on their deficits or problems rather than their strengths. This can deepen mistrust of research and academia in general, especially when the research reinforces negative stereotypes or fails to bring about positive change. Communities can feel exploited and excluded from decision-making processes that impact their lives (Chicago Beyond, 2019; Morrel-Samuels et al., 2016). Early research on CBPR in violence prevention shows that these approaches can increase collaboration and consultation with academic researchers and result in better implementation of research practices and consistent measures across programming strategies (Pickens, 2011).
This article summarizes a community-based participatory research (CBPR) approach undertaken to conduct a process evaluation of a violence intervention program and lay the foundation for a rigorous impact evaluation study. The main goals for the first year of the two-year project were to define and document the processes of the program that lead to intended outcomes and carefully define the range of outcomes related to (a) changes in attitudes, (b) changes in actions and behaviors, and (c) positive long-term community-level outcomes. To achieve these goals, the CBPR approach described in this project uses principles and tools from empowerment evaluation to co-develop a detailed logic model and performance measures with key intervention stakeholders. We follow the work of Rowe et al. (1999) and use the framework of empowerment evaluation as both a general philosophy and a set of practices geared toward building organizational and community capacities to deliver and sustain effective programming activity. By detailing the tasks undertaken using CBPR to collaborate and co-produce products with program stakeholders, this article offers insights for researchers and practitioners on how participatory approaches can enhance process evaluation for new programs and contribute to more meaningful, context-specific evaluation measures. Community violence is a complex problem, and innovative programs will benefit from evaluation methods designed to simultaneously raise awareness about underlying social issues and program processes that potentially can bring about lasting change (Fetterman & Wandersman, 2005). Collaborative evaluation research that focuses on carefully articulating program and change processes can support this change.
To date, most published studies that describe empowerment evaluation or collaborative evaluation designs highlight the entire framework, without providing details about techniques and strategies for logic model building and performance measure development—aspects that are central to building evidence-based programming (Kettner et al., 1990).
Specifically, co-production of logic models and performance measures can foster a shared understanding of change mechanisms among key stakeholders, helping to align program activities with desired outcomes and ensuring that evaluation metrics reflect organizational priorities and perspectives. In addition, it can help expand staff interest in critical evaluation tasks and build evaluation skills within the organization, using these skills to actively monitor, assess, and adapt the program over time. The outcome measures resulting from the project equip program leaders with credible, evidence-based metrics that can be effectively communicated to funders and outside stakeholders, enhancing the program’s legitimacy and appeal in funding discussions and outreach efforts. Furthermore, this participatory evaluation approach embraces a systems worldview, recognizing that societal issues and contexts are dynamic, and barriers and obstacles are likely shifting over time. This perspective supports flexibility and adaptability in both program strategy and measurement of outputs and outcomes, as the detailed logic model and performance measures provide a structured yet responsive framework that can be adjusted as community needs and priorities evolve (Neely et al., 2001).
In summary, we see this paper as explicitly addressing gaps in the extant literature by:
• Advancing the application of CBPR and empowerment evaluation specifically in the context of violence reduction, where such approaches remain underutilized. Within the CBPR framework, the study also illustrates how participatory methods can strengthen program evaluations by integrating lived experiences into the design of logic models and performance measures.
• Addressing the methodological gap in existing CBPR and empowerment evaluation studies, which often lack detailed descriptions of the processes and strategies used to co-produce logic models and co-develop performance measures. This study provides a step-by-step framework for these tasks.
• Demonstrating how participatory evaluation enhances organizational capacity, fosters stakeholder alignment, and develops actionable performance metrics that support program sustainability and adaptability over time.
By focusing on detailed processes and context-specific tools, this study contributes to a more nuanced understanding of how collaborative evaluation approaches can help support meaningful reductions in violence.
Background
A local philanthropic organization in Philadelphia, Pennsylvania was interested in supporting an innovative community-based violence reduction strategy to reduce gun violence and youth involvement in the juvenile and criminal legal systems. The funder’s philosophy is to pair independent research and evaluation alongside funding support for non-profit programming, hoping pairings would foster clear understanding of program processes and effectiveness and help inform future funding decisions. Clarity on successful program components and change processes would provide data to the funder on the potential scalability of the program, as a whole or via particular components. The philanthropic organization is known for investing in capacity building for grassroots organizations seeking to better leverage data in both their operations and their storytelling for new funding audiences. The funder facilitated a collaboration between the community-based non-profit organization delivering the intervention and a university-based researcher who would lead the evaluation. This process involved jointly developing a comprehensive research plan that allowed the funder to simultaneously support both the implementation of the violence reduction program and the companion evaluation. The funder specifically emphasized using a participatory process throughout the partnership, with resources readily available to compensate community participants. In addition, the funder encouraged the development of research products that not only highlighted the collaborative nature of the evaluation, but also demonstrated the violence reduction program’s depth in addressing complex, systemic challenges.
The Program: YEAH Inc.’s Violent Crime Initiative
The Violent Crime Initiative (VCI) operates in Philadelphia, Pennsylvania (U.S.), serving young people ages 15–24 who have been arrested for violent offenses, including gun crimes. The VCI provides comprehensive, individualized support to participants from West and Southwest Philadelphia through court advocacy and holistic case management. The program’s primary aims are to reduce recidivism, address the participants’ wide-ranging needs, and improve their overall well-being. VCI participants receive extensive assistance, including financial support, from a dedicated team of case managers, legal experts, and program coordinators who work to ensure these young people have the resources they need to succeed in the community.
The VCI is a core program of YEAH Inc. (Youth Empowerment for Advancement Hangout), a non-profit organization based in West and Southwest Philadelphia. Founded in 2018, YEAH Inc. was created to address the stark lack of safe, culturally supportive, and engaging spaces for Black youth in the city. The organization seeks to tackle the root causes of violence through direct investment in young people and the community, providing spaces that are culturally relevant, empowering, and fun. YEAH Inc. aims to uplift teenagers and young adults by offering support and advocacy tailored to the unique challenges young people face in Philadelphia’s urban environment. The VCI, launched in late 2020, plays a central role in YEAH Inc.’s mission, offering a vital pathway for young people to break the cycle of violence and work toward self-sufficiency.
The Method: A Capacity Building Participatory Research Approach
As stated in the introduction, the CBPR approach for this project utilizes principles of empowerment evaluation that help develop the skills, knowledge, and infrastructure needed within an organization to effectively use techniques and products developed during a process evaluation and outcome evaluation. By focusing on establishing a strong theory of change via careful development of a logic model and defining clear measures of program processes and outcomes, the project helps expand program capacity to systematically assess progress, make data-informed decisions, and improve processes over time. These capacities are critical for continuous programmatic growth and for achieving intended milestones and outcomes (Rowe et al., 1999). The CBPR design, shown in Figure 1, included iterative tasks and training that helped embed evaluation tasks and products into the routine operations of the organization. In general, the techniques designed in this project also fall under the umbrella of participatory evaluation, utilization-focused evaluation, and collaborative evaluation—approaches where stakeholders are actively involved in the evaluation process.

Figure 1. Phased community-based participatory model of evaluation capacity building.
The CBPR method for the VCI process evaluation includes six overarching phases described below. The research team worked with the co-CEOs of the community-based non-profit organization for two months to define the tasks behind the phases. The key stakeholders involved in the process evaluation included youth participants, program leaders and staff, and peer researchers. Peer researchers (PRs), also referred to in academic literature as citizen scientists or community researchers, are individuals from the communities being studied who actively participate in the research process. They bring lived experience and local knowledge, which enhances the relevance and accuracy of the research.
Phase 1: Develop an Understanding of Key Program Inputs, Outputs, and Outcomes
The first step was to develop a logic model grounded in the participants’ perspectives that aligned with the overall vision and goals the co-founders of YEAH and the VCI program sought to achieve. By explicitly articulating a program’s theory of change via a logic model, one can better understand the pathways through which interventions lead to desired outcomes, making it easier to assess effectiveness, adapt programs, and inform future decision-making (Weiss, 1995). Although the co-CEOs and the young people in their programs clearly understood the mission and purpose of their work, they did not yet have a formal logic model. Detailed logic models capture a program’s resources and investments; its components and corresponding activities; and its outputs and short- and long-term outcomes. The logic model would form the basis for defining a practical range of performance measures collected and reported monthly by VCI program staff. While VCI staff had already defined some performance measures, the CBPR process evaluation tasks were co-developed to refine and greatly expand these measures to capture the full range of program outputs and also support a future rigorous evaluation of the VCI’s impact.
Informed consent was obtained from all individual participants involved in project tasks. The university project team first conducted several interviews and meetings to ask YEAH’s staff about the VCI’s key goals, case examples, and their approaches to the VCI work. Key individualized goals for VCI participants included (a) being free from any juvenile or criminal legal system contact; (b) reaching self-sufficiency; and (c) reaching a state of general wellness or wellbeing. The co-CEOs also described the overall macro-level or societal goals of the VCI (and all their programming) as geared toward reducing community violence, systemic racism, and oppression of young Black people; creating a movement of youth self-advocacy; and increasing a sense of cohesion and community.
The university project team also shadowed staff in the community and at court and spent time connecting with young people at the hangout each week and at numerous YEAH programs and events. Then, a semi-structured interview protocol was developed to elicit responses that aligned with the components of a logic model. The protocol included questions for participants about: (a) the environmental and situational contexts that brought them to be part of the VCI; (b) the resources that the VCI offered and services they received; (c) their experiences and perceptions of the processes involved in receiving wraparound services and support at YEAH; (d) how the program helped them to reach their self-defined version of self-sufficiency; (e) how they describe their personal wellbeing or wellness (and how the program supported their wellness); and (f) what they hope for in their future. Because the CEOs articulated that self-sufficiency and wellbeing were key intermediate and longer-term goals (at the individual level), the protocol probed deeply in these areas to elicit participant descriptions of these constructs.
Since the organization does not separate VCI participants and non-participants in its praxis, there was often no distinction between discussion of the organization versus the program. The co-CEOs continually emphasized that this program was not merely a violence reduction program: to reduce violence, there must be corresponding and complementary individualized and community goals outside the domain of criminal justice, such as safe and secure housing, pathways to a living wage, and health and wellness. The draft interview protocol was given to VCI staff to review the questions and provide feedback on content, language, and style.
Phase 2: Hire Peer Researchers and Train Participants, Staff, and Community Residents
The emphasis on the co-production of knowledge and data for the process evaluation included hiring peer researchers who would become part of the research team. Early in year one, the university team and YEAH co-leaders collaborated to develop a job advertisement to select and hire three peer researchers for the research team. The three peer researchers hired for this project were from the local neighborhood where the organization is based.
The peer researchers were trained and integrated into the research team, with initial training including, but not limited to, research ethics and human subjects research certification, an overview of research methods, CBPR approaches, deriving and developing research evidence, logic models, and qualitative research tools and techniques. Within six months, however, two of the peer researchers resigned, leaving one peer researcher working closely with the university team. One PR went on to work for YEAH as a VCI intern, gaining more hands-on experience toward his long-term goal of becoming a defense attorney. The project team decided not to hire new peer researchers given the time and intensity of training required relative to the short duration of the two-year project.
The remaining peer researcher and the academic team continued meeting weekly and conducting monthly research trainings for YEAH participants and community members at the YEAH hangout space. The community-facing trainings were designed to equip attendees with essential knowledge about research methodologies, evidence-based practices, and the effective use of data. The trainings, held in the late afternoon and evenings, were advertised widely via social media, included a paid stipend for attendees, and provided dinner.
Phase 3: Collect Initial Data for Logic Model and Co-Design Additional Evaluation Components
Fifteen current and past VCI participants were selected for in-depth interviews. From this list, the university team conducted ten semi-structured interviews with VCI participants; the co-CEOs helped connect interviewees to the research team. As an informal pilot test, during each of the first two interviews the research staff asked the participants for feedback on the interview questions and flow, and a few minor changes were then made. All participants had been involved with the VCI for at least three months, and all had been incarcerated at some point. The research team provided $50 to each interviewed participant, and the organization matched this incentive with an additional $100. The youngest participant interviewed was 15 years old; the oldest was 25 years old (mean = 19.9). Two young women were interviewed; the remaining participants were young men. Some young people no longer had active legal cases, while others were still working through one or more cases.
Most interviews took place in person at the organization’s headquarters, but some were held at participant homes, and one was conducted over the telephone. The research team was unable to interview any young people while they were in placement or jail: the Philadelphia Department of Human Services, which oversees the city’s only secure juvenile justice detention center, was permitting only family members to visit, and the local adult jails were undergoing a change in leadership, so research projects there had stalled. The ten interviews that did take place occurred over a period of three months.
In addition, the peer researcher worked with the academic team to develop her own research tasks to measure outcomes of the VCI, including a Photovoice project. The project, which began recruitment in June 2024, broadly seeks to understand and document the wide array of opportunities, services, and resources that young people, particularly VCI participants, perceive YEAH Philly to provide. Photovoice, considered a visual research method, creates photographic evidence and symbolic representations that can offer unique insights into social processes (Strack et al., 2004). The project, which furthers data collection efforts through a different medium, was reviewed by the co-CEOs. After revisions were made to recruitment and methods, young people in both the VCI and YEAH Philly uploaded photos via text on cell phones. Young people were also given the opportunity to complete a brief interview in person or on the phone about their photographs.
Separately, since the research team was unable to interview incarcerated participants directly, the peer researcher and university team collaborated closely with the co-CEOs to develop a list of written questions to provide to one incarcerated participant. As this data collection took place after the analysis of the semi-structured interviews, the research team analyzed the written responses separately. The findings from the analysis of the written responses were then incorporated into a book chapter submission (with the VCI young people as co-authors) to amplify the voices of incarcerated participants.
Phase 4: Analyze Data, Gather Feedback, and Finalize Logic Model
The research team conducted a thematic analysis of the qualitative data collected from the semi-structured interviews, using an “in vivo” coding process to prioritize the language used by youth to discuss the resources (or inputs) received by participants and the outcomes they achieved related to (a) learning and attitudes and (b) actions and behaviors. In vivo coding is a qualitative data analysis method in which the researcher uses the exact words or phrases of participants as codes to capture key concepts, ensuring that the analysis remains closely tied to the participants’ own language and meanings (Saldaña, 2021). The initial logic model derived from in vivo coding of the participant interviews is shown in Figure 2. The logic model includes the phrasing the participants used to describe the changes in themselves. Because the participants were not prompted to discuss societal-level outcomes, the last column in Figure 2 is left blank.

Figure 2. Initial in vivo VCI logic model highlighting youth participant voices.
Then, the research team used an iterative process that involved presenting preliminary logic model analyses and findings back to the stakeholders, including both staff and young people, to verify that the research team’s interpretations aligned with the voices of participants while also capturing the array of inputs available to young people and the outcomes desired by program leaders. In addition, the research team asked the co-CEOs to provide detailed feedback on the array of outcomes that should be included in the long-term, aggregate/society-level outcomes (i.e., the last column in the logic model). After back-and-forth discussions and editing with program leaders and staff, the in vivo logic model was developed into the final logic model shown in Figure 3. For organizational and measurement purposes, the final logic model collapses a few outcomes/constructs and adds a few outcomes not directly discussed in the in-depth interviews of VCI participants.

Figure 3. Finalized VCI logic model after incorporating staff feedback and organizing outcomes.
Phase 5: Refine Performance Measures
To develop performance measures, using the logic model as a guide, the research team worked collaboratively with VCI staff and young people to determine metrics that would effectively capture the program outcomes. Deriving performance measures from a detailed logic model involves identifying key elements within the model—such as inputs, activities, outputs, and outcomes—and translating them into specific, measurable indicators that can track progress and impact. Importantly, it is greatly beneficial to have a listing of the full range of program activities so that potential causal mechanisms or processes are depicted via the multi-layered outcomes listed in the logic model. By doing so, activities can be assessed by performance measures that not only capture the frequency and quality of activities within program components, but also by the breadth of activities delivered. This type of detail, when linked to outcomes, moves the project team closer to understanding what is happening inside the “black box.” Evaluation studies in the areas of positive youth development and criminal justice typically do not include measures that fully capture aspects of a program’s theory of change (Crowley et al., 2017; Hausman et al., 2013; Roman, 2021). Without detailing constructs/activities that align closely with specified outcomes, audiences cannot ascertain how change occurs.
Hence, outputs, such as the number of participants reached, and short-, medium-, and long-term outcomes, such as changes in attitudes or behavior, must be quantified to provide a clear theory of change and as a foundation to capture program effectiveness. The in vivo coding and analyses for logic model development directly influenced how program staff refined and expanded their assessment tools to capture personal changes in attitudes and mindset (the first set of outcomes in the logic model). The research team focused on developing new measures to capture frequency of activities and outcomes related to criminal and juvenile legal system status and contact.
Furthermore, throughout data collection tasks it became apparent that a strong feature of the VCI program model included meeting the legal advocacy-related needs of young people regardless of the status of a participant’s court case (or cases). Participants consistently stated that “staff showed up” or “were always there for me.” VCI staff brought court advocacy, support and services to the young person regardless of whether they had been re-arrested, or were in detention or youth placement. VCI staff reported (and research team court observations confirmed) that the court advocacy often resulted in less punitive outcomes for participants. This could mean, for instance, reduced sentencing, release from detention, alternative placements, or access to diversion programs that prioritize rehabilitation over punishment. Court advocacy can also help ensure that youth receive fair treatment and have access to appropriate legal representation and supportive services that address their specific needs.
Importantly, these small wins are all program outcomes. They can include reduced surveillance, reduced incarceration, fewer restrictions on liberty, and similar gains. Yet these outcomes (e.g., home detention vs. placement, reduced probation months or incarceration time) are not typically captured in outcome data in evaluations of violence reduction programs (Roman, 2021). In typical criminal justice program evaluations, the focus is often on easily obtainable measures such as reductions in arrests, recidivism, and incarceration rates. While measures such as arrest are important indicators, they fail to capture subtler but equally significant outcomes, such as the reduction in system and government surveillance that many young people experience through programs like the VCI. These small victories are critical in preventing deeper entanglement with the legal system and enhancing young people’s chances for long-term success.

Through the collaborative and iterative process to refine the logic model and continually update it to reflect program processes, the research team reviewed all qualitative and quantitative data collected and/or provided by program staff. The data included quantitative output from the VCI’s case management database (provided by the co-CEOs) and administrative data on arrest histories for a large cohort of VCI participants provided by the Philadelphia Police Department. The research team produced descriptive statistics for the VCI cohort, as well as descriptive case histories for a sample of VCI participants, and then held in-depth meetings with the co-CEOs to ensure the quantitative and qualitative summaries accurately reflected the case timeline details known to staff for each youth in the data cohort.
From this iterative review of quantitative and qualitative data with the program leaders, the research team identified a need to further delineate the various legal system intercepts where successes are realized for VCI participants. We then developed the diagram shown in Figure 4 as a complement to the logic model. This was an important part of the collaborative theory-of-change building work because it moved the academic-practitioner partnership deeper into measuring the changes the co-CEOs envisioned. Specifically, given the co-CEOs' desire to contribute to larger community- and societal-level changes related to reducing mass incarceration and legal system disparities (and their effects), the "small wins" in court that result in reduced surveillance and system contact remain central to reducing violence in general.

Figure 4. Legal system intercepts highlighting holistic support provided by the Violent Crime Initiative (VCI).
Sample Monthly Performance Measures Related to Legal Status/Outcomes of VCI Participants.
a "Decertification" (sometimes called reverse waiver) can occur when a youth appears as an adult in criminal court and a judge transfers jurisdiction to juvenile court. This occurs before trial and restores the protections and rehabilitative focus of the juvenile justice system.
Phase 6: Co-Design and Disseminate Final Products; Celebrate Successes
During the final phase, the collaborative team co-designed accessible final products, such as a summary report, a published book chapter, and the PhotoVoice presentation. In the near future, the team will gather stakeholders to celebrate the successes and the contributions of everyone involved, which will reinforce relationships and build momentum for sustained engagement and future initiatives, such as a rigorous impact evaluation.
Discussion
The use of CBPR in evaluating the VCI provides critical insights into how academic-practitioner partnerships can evolve to better contribute to knowledge building about evidence-based strategies to serve high-crime communities and justice-involved young people. The capacity-building CBPR approach allowed for deep engagement with staff and program participants, ensuring that the resulting detailed theory of change and corresponding performance measures truly capture the vision of the program developers. Using in vivo coding for the logic model (incorporating the actual phrasing and language used by participants during interviews) also placed high value on the beliefs and perceptions of program participants and on which outcomes mattered to them. Participant opinions about key outcomes may differ from what program leaders initially identified as important. In vivo coding indeed revealed a few gaps and misalignments between participant experiences and the outcomes described by lead staff in initial meetings, enabling more participant-centered program refinement and a logic model that truly reflected the priorities of those being served. For instance, findings from the in vivo logic model process led program staff to hold participant focus groups organized around question prompts related to changes in mindset and new goal setting, to further understand how these immediate and intermediate outcomes can be capitalized on to spur further positive change.
In addition, the deliberate, iterative approach to producing the logic model and performance measures likely helped minimize the effects of potential implicit or explicit biases that the academic team may have had. Researchers who have not lived within the studied community, or who have not personally experienced the complex challenges faced by program participants, may inadvertently overlook the incremental but meaningful changes that serve as foundations for broader, long-term successes. Furthermore, some researchers may prefer the route of traditional evaluations, which are typically less time-consuming. CBPR is a time-intensive approach that requires commitment to deep reflection, continuous learning, and authentic collaboration to ensure that the evaluation design and component tasks and products truly align with the experiences and needs of the community. Through this collaborative effort, the program’s logic model was refined to better capture the complexity of young people’s needs and experiences, emphasizing small but significant victories like reduced surveillance, less time in pre-adjudication detention, fewer incarcerations, and diminished restrictions on liberty.
Similarly, the performance measures were developed to enable systematic tracking of comprehensive, detailed outcome data and to lay the groundwork for a future impact evaluation. The collaborative process also highlighted the need to go beyond traditional metrics like arrest and incarceration rates when evaluating violence reduction programs. Often, evaluations not only fail to unpack the black box of a program's mechanisms but also fail to account for the reduction in state surveillance or the empowerment of young people to navigate the legal system more effectively. Hence, one of the successes of this project in its first year lies in its articulation of a broad range of outcomes and successes that reflect the holistic nature of the violence reduction program. Relatedly, another success was that the resulting data and reporting system was derived from within the program rather than provided by the evaluators; as a result, the program does not have to rely on the evaluators for data or access to data.
Challenges
The participatory process did present some challenges, such as the time it took to build trust with program leaders and with a population that has faced trauma, and the barriers posed by the incarceration of program participants. Program staff emphasized the importance of research staff talking with participants who were incarcerated at the time or had recently been incarcerated. Hence, one of the key lessons learned was the importance of flexibility and adaptability in participatory research. The iterative nature of CBPR requires researchers to be open to unexpected insights and willing to adapt their tasks and techniques based on feedback from participants and community stakeholders. This approach needs ample time for relationship-building, trust, and reflection. Researchers must account for this in their timelines, recognizing that co-developing performance measures, refining logic models, and integrating participant feedback cannot be rushed. Researchers should also explain the need for extended timelines to funders at the beginning of project development and ask for substantial flexibility on deadlines and deliverables. In this project, the nature of some deliverables changed and new deliverables were added as the research team took on additional tasks to support the co-CEOs in their ongoing research activities. For example, the team transcribed focus group recordings conducted by program staff to aid their internal assessments and reporting. These contributions not only shared resources and expertise but also reinforced trust and collaboration between the research and program teams.
Related to funding, these partnerships would benefit greatly if funders supported sufficient timeframes and resources for participatory research to be carried out effectively. Short-term funding cycles often do not accommodate the depth of engagement needed for CBPR, which can result in frustration among stakeholders and missed opportunities to capture meaningful change. To support robust, community-driven, capacity-building evaluations, funders should allow for extended project durations and allocate resources not only for the research itself but also for the time and training needed to enable program stakeholders to participate fully in the process. In addition, holding at least bimonthly conversations between the research team and the funder can increase the likelihood that funders will understand and approve extensions for deliverables or the overall project timeline. This project had the benefit of a funder invested in CBPR methods. Without this flexibility, the potential impact may have been diminished, limiting the likelihood of building evaluation- and data-related capacities across staff and program participants. A related challenge in evaluation research more generally is that research partners may be working without any external funding, motivated primarily by their commitment to the community and/or the possibility of academic publications. Projects that take long periods of time or demand flexibility may scare off potential academic partners. Academic scholars invested in CBPR often report obstacles to career advancement when university practices prioritize the quantity of publications over community impact (Nyden, 2003).
Another challenge was retaining the peer researchers. The two peer researchers who resigned early in their tenure (after roughly 3 months) realized that they were more interested in direct service and advocacy than conducting research. This is not necessarily surprising, but likely implies that the academic research team could have approached the hiring process in a more strategic way. Furthermore, the loss of two peer researchers mid-project likely reduced the breadth of perspectives and insights that could be integrated into the process evaluation, potentially limiting the diversity of interpretations and findings (as well as the diversity of creative perspectives available to suggest relevant final research products for public dissemination).
Suggestions for limiting future turnover of peer researchers include hiring a larger number at the outset or establishing renewable short-term contracts that allow peer researchers to test their fit within the project and the team while giving them the option to extend their involvement if they wish to continue. Alternatively, if funding allows, the research team could offer flexible roles that combine research with advocacy or community service work.
Future Evaluation Capacity Building Research in Violence Reduction and Other Contexts
Researchers working in the field of violence reduction can replicate the methods described herein by leveraging their utility in providing detailed and specific insights into both program processes and outcomes. As programs like the VCI continue to develop, the techniques and tasks described here can serve as a blueprint for other organizations looking to enhance their capacity for evidence-based decision-making and contribute to safe communities and long-term, systemic change. Key elements in spreading or replicating a program, or going to scale within a community, include a deep understanding of how it works, alongside relevant and compelling data that read like a blueprint (Bibby et al., 2018). The participatory nature of the method, combined with its emphasis on integrating stakeholder voices through tools like in vivo coding for logic models and corresponding comprehensive performance measures, offers a clear roadmap for co-developing logic models and performance measures tailored to diverse program contexts. This level of specificity not only enhances a process evaluation's credibility and validity but also equips future researchers and practitioners with actionable guidance for adapting the approach to other programs, regardless of the field. By capturing a full range of participant-driven outcomes across learning, actions and behaviors, and community-level changes, the tasks described in this paper help ensure that evaluation efforts are both meaningful and replicable across settings. In other words, the details in logic models are critically important, not just as an academic exercise but as a highly practical tool for understanding and guiding program implementation, evaluation, and feedback.
A well-crafted, co-produced logic model provides clarity by distinguishing between learning (e.g., changes in attitudes or knowledge), actions and behaviors (e.g., shifts in decision-making), and community-level changes (e.g., reductions in community rates of violence or systemic disparities). The distinctions across types of outcomes are essential for rigorously measuring and tracking progress, identifying potential causal pathways, and ensuring that evaluation efforts add real value from the program leaders’ perspective.
Adaptation of the CBPR approach used in this project likely will take many forms, as programs will vary in their starting points and in the level of trust established among partners. New partnership-based capacity building projects should proceed slowly, meet frequently, and never assume researcher-practitioner alignment on any goals or tasks without verification. Building trust, fostering open communication, and remaining flexible to the program’s unique context are essential for the success of a capacity-building participatory evaluation process.
Conclusion
The experience described herein underscores the value of applying CBPR methods in violence reduction research and other fields involving over-researched populations. Not only do capacity-building participatory evaluation methods foster empowerment and more equitable partnerships between researchers and communities, but they also result in evaluation products that more accurately reflect the realities of marginalized communities and those experiencing high levels of violence (Hausman et al., 2013). This study highlighted the centrality of logic models in evaluation processes and evidence-based program development. By co-producing logic models with stakeholders, we demonstrate how participatory methods can ensure that program strategies are both context-specific and grounded in data and evidence.
Although the original intent of this participatory evaluation was to build the organization's capacity to track and measure outcomes and report on successes, the same capacity-building principles also support broader efforts to sustain and scale the program. By building evaluation skills and embedding them into routine operations, VCI staff can continue to assess program effectiveness and adapt the program as community needs evolve. The participatory process helped establish a strong foundation for future impact evaluations, supporting the data-informed nature of program strategizing and refinement and showing funders and policymakers how the VCI addresses both more proximal (e.g., situational) causes of violence and its underlying drivers. Although the full impact of this capacity-building effort remains to be seen (a limitation of the current paper), this study represents a step forward in understanding how CBPR can be an essential tool in partnership efforts to address the drivers of community violence and to create and sustain effective, community-driven interventions.
Acknowledgements
The authors thank the peer researchers who worked with us for the first part of this project, Rodney Gardner and Duane Price. We also thank the young people who hang out at YEAH and those who are involved in the VCI for their willingness to discuss their lives and share their stories and program perceptions with us. Last, but not least, we are deeply grateful for the support of the Neubauer Family Foundation in bringing this unique partnership together. We also note that the opinions expressed in this paper are those of the authors and do not necessarily reflect the views of the Neubauer Family Foundation.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The study was supported by the Neubauer Family Foundation.
