Abstract
Community-based partnerships are integral to mental health programming and research. However, few published guidelines apply the principles of community-based participatory research (CBPR), especially within the context of supporting vulnerable youth populations. This article demonstrates the application of CBPR principles in cocreating, with community partners, an evaluation approach for a healthy relationships program for vulnerable youths. We present our research procedures and activities and highlight the importance of a trauma-informed lens and flexibility in the research process and outcomes. We conclude the article by sharing our lessons learned and providing recommendations for future CBPR with vulnerable youths.
It can be very challenging for vulnerable youth to access mental health support. We use the term vulnerable to describe individuals, populations, and communities who may be at risk of or are currently experiencing harm or adversities due to environmental or social factors (Tremblay et al., 2018). Issues relating to systemic barriers, structural discrimination, racialized history, and marginalization all contribute to the ongoing challenges that vulnerable youth face in accessing mental health supports (Canadian Mental Health Association Ontario, 2014; Centre for Addiction and Mental Health, 2014; Sakai et al., 2014). Many stakeholders, including university-based researchers and community partners, recommend integrating mental health interventions into settings where youths are already receiving essential services (e.g., schools, child welfare agencies; Andrews et al., 2019).
Mental health interventions brought to communities need to be examined for their fit, flexibility, and effectiveness to promote well-being among participating youth. Interventions may also need to be adapted to align with each community organization’s unique structures and needs. Thus, the appropriateness of these adaptations should be studied (Campano et al., 2015). There are significant gaps in the research on effective and feasible community-based mental health interventions with vulnerable youth (Crooks et al., 2019). This research gap highlights the many challenges of conducting mental health intervention research in the community. These challenges are not surprising given the different priorities and mandates that university-based researchers may have versus community organizations (Crooks et al., 2019). Scientific guidelines often encourage researchers to incorporate a positivist methodology and conduct studies in controlled conditions to seek new knowledge (Fals Borda, 2001).
In contrast, community-based research requires flexibility in design, theoretical orientations, and methodologies to accommodate communities’ unique needs, realities, and resource limitations. Community organizations also have a service mandate (not a research mandate) to support all their youth with innovative programs. As such, controlled intervention study designs with comparison groups or treatment-as-usual conditions are not always feasible for community settings (Buchanan et al., 2007). There can also be additional barriers to conducting community-based research, including navigating issues with confidentiality, ownership of research information, and challenges in fostering openness and trust with community organizations (Kennedy et al., 2009). To address these barriers, there have been numerous calls for researchers to formally incorporate the input of relevant stakeholders when designing studies to evaluate community-based mental health interventions (i.e., community-based participatory research [CBPR]; Andrews et al., 2019; Beatriz et al., 2018; Waechter et al., 2009).
According to the CBPR principles, it is important to establish formalized colearning processes between researchers and community stakeholders from the outset and throughout the entire process to inform change and actions (Israel et al., 1998). The expertise of local stakeholders could guide researchers in how to promote and build upon the existing strengths of the community to address its identified needs and design studies that will have direct benefits and implications for community change and growth (Campano et al., 2015; Green et al., 2001; Johnston & Woody, 2008; Kennedy et al., 2009). However, few studies have published guidelines on how to apply the CBPR framework into tangible actions to cocreate studies to evaluate community-based mental health interventions (Andrews et al., 2019). The absence of established, action-oriented guidelines may further discourage researchers from conducting community-based studies, especially when researchers have been criticized in the past for advancing their personal academic interests while devaluing the needs and priorities of the community (Campano et al., 2015; Stoecker, 2008).
Researchers have published models of CBPR to illustrate how they have successfully partnered with organizations to evaluate community-based mental health programs with vulnerable populations. For example, Waechter and colleagues (2009) reported on how to effectively build and sustain a partnership between researchers and child protection agencies to design an evaluation (e.g., identifying constructs to measure) to investigate the health and mental well-being of child welfare-involved Indigenous youth. Furthermore, other authors have published interdisciplinary CBPR models between researchers and youths participating in a school-based dating violence prevention program (Beatriz et al., 2018) or between researchers and multiple community organizations supporting women who have experienced interpersonal violence (Andrews et al., 2019). Generally, these case models of CBPR emphasize that community expertise strengthens research processes and outcomes and that flexibility in research design is essential to maintaining trusting relationships with community partners and meeting their needs. Furthermore, the application of CBPR principles raises unique research and ethical considerations in larger-scale studies involving several community partners, given each partner’s individualized needs and challenges.
Along with applying a CBPR framework, researchers are encouraged to incorporate trauma-informed approaches to conducting studies to support vulnerable communities (Andrews et al., 2019). Trauma-informed programming involves the understanding that individuals may have complex histories and adversities that can impact their behaviors, reactions, and day-to-day functioning (Brave Heart & DeBruyn, 1998; Savage et al., 2007; Steele & Malchiodi, 2012). In the past decade, there have been increased efforts to develop trauma-informed practices (activities, resources, and tools) to create safe spaces and experiences for people to resist retraumatization while participating in mental health programs (Emerson & Ramaswamy, 2015; Ko et al., 2008; Savage et al., 2007). Trauma-informed programming is particularly important for vulnerable youth, who are more likely than their peers to have experienced trauma and related adversities (Bulanda & Byro Johnson, 2016; Ghafoori et al., 2014). Within the context of CBPR specifically, the research community needs to improve its understanding of how to better integrate trauma-informed approaches into research methodologies. For organizations supporting vulnerable youths, trauma-informed practices need to extend to all individuals involved at all levels of community programming, including program facilitators, administrators, and decision makers. Safety needs to be prioritized and integrated at all steps of research and engagement with community partners (Andrews et al., 2019).
Healthy Relationship Programming With Vulnerable Youth
Supporting the development of healthy relationship skills is integral to reducing relationship violence and promoting mental health among vulnerable youths (Crooks et al., 2019; Lapshina et al., 2018). In this article, we describe how we applied the principles of CBPR to develop and pilot an evaluation strategy for a healthy relationships program, the Healthy Relationships Program-Enhanced (HRP-E; Townsley et al., 2017), in collaboration with community organizations supporting vulnerable youth. The pilot evaluation was part of a larger research project, sponsored by a federal government grant, to address identified gaps in gender-based violence prevention and mental health programming with vulnerable youth populations. Our goal was to develop a pilot evaluation design that would be flexible enough to fit highly diverse settings (e.g., community mental health, child protection, youth justice, and public health) and sufficiently rigorous to build a foundation for a future quasi-experimental study. Before detailing the pilot, we describe the HRP-E and its previous evaluations with single community partners to highlight how those experiences guided the codevelopment of this pilot with multiple partners.
Description and Development of HRP-E Programming
The HRP-E focuses on fostering healthy relationship skills among participating youth as a health promotion strategy for multiple adverse health outcomes such as mental health challenges, substance misuse, and dating violence. It is a small group program consisting of 16 hr of intervention and was developed for youth aged 14–18 in school and community settings. The core components of the HRP-E derive from the evidence-based, Canadian Fourth R program, a classroom-based, universal healthy relationships promotion and dating violence prevention curriculum (Crooks et al., 2011, 2015; Wolfe et al., 2009, 2012).
The Fourth R has been extended to incorporate a trauma-informed framework (Crooks & Wolfe, 2019; Houston, 2020; Kerry, 2019) for programming with vulnerable youth (i.e., HRP-E). Compared to the original Fourth R programs, the HRP-E places an increased emphasis on mental health, teaches harm reduction strategies for safer substance use, and helps youth develop effective and healthy responses to higher-risk scenarios relating to substance use, relationship violence, and mental well-being (Kerry, 2019). The HRP-E offers a selection of alternative activities, avoids extreme imagery, and provides training about trauma-informed care to facilitators implementing the program. Moreover, the program takes a strengths-based approach that promotes youths’ existing assets and aims to enhance their social-emotional competencies through extensive skills training in personal and healthy relationship development (e.g., communication skills, social resistance training; Kerry, 2019).
Previous Evaluations of HRP-E
We have conducted several case study evaluations of HRP-E with vulnerable youth over the past few years. For example, we implemented the program with justice-involved youth living in secure custody facilities or an intensive residential treatment facility (Kerry, 2019). Our findings from focus groups and a repeated measures design with self- and teacher-report surveys suggested that participation in HRP-E promoted the development of social–emotional learning skills (e.g., empathy, problem-solving efficacy) among justice-involved youth. We also evaluated the fit of HRP-E for child welfare-involved youth (Houston, 2020). Youth and program facilitators reported that the healthy relationship skills learned from the program helped the youth better navigate real-life interpersonal and transitional challenges. These evaluation findings generally support the fit and acceptability of HRP-E for youth justice settings and child welfare agencies. Importantly, these community-based HRP-E evaluations highlighted that the fit and implementation of the program largely depend on our partners’ structures and internal resources and the unique needs of the different youth groups (Houston, 2020; Kerry, 2019). Although these small studies are promising, there is a need to continue evaluating the HRP-E in highly diverse settings to further elucidate its fit, feasibility, and effectiveness for other community organizations supporting vulnerable youths.
Evaluation Pilot
As mentioned earlier, with the support of our federal grant, we advanced the evaluation of the HRP-E with vulnerable youth in partnership with multiple community organizations. Before proceeding to a quasi-experimental design, we conducted a pilot evaluation to explore and develop measures and data collection protocols that were relevant, feasible, and appropriate for the youth participants and partnering organizations. We administered pre- and post-program youth outcome surveys, collected youth attendance and engagement data, and had facilitators complete an implementation survey at program completion. At the time of writing, we had partnered with 16 community organizations across two Canadian provinces. Our community partners provide programming and services to support diverse, vulnerable youth populations, including youth involved in child protective services, youth justice and correctional reform settings, and youth with complex substance dependence and mental health challenges. We have worked with some of these partners for many years; others are newer alliances. In some cases, we approached particular partners (such as the local child protection agency) because we wanted to explore programming and research in that context. In other cases, community organizations approached us for programming expertise or training, and the timing serendipitously lent itself to them becoming part of our grant application team. Our partners’ experience and expertise with research varied widely: some partners had almost no previous experience working with a university-based research team, whereas in another case, the executive director of an agency was also a highly experienced researcher.
Current Article
The overarching purpose of this article is to describe how we built our research frameworks, procedures, and strategies for the HRP-E pilot based on three CBPR considerations: (1) ongoing co-creation and consultation with community partners, (2) integration of a trauma-informed research methodology, and (3) flexibility in research design and approaches. We elaborate on our experiences and the procedural outcomes of the HRP-E evaluation to highlight the lessons learned about cocreating an evaluation approach in partnership with highly diverse community partners, based on these CBPR considerations. We use these lessons to develop the article’s final focus: recommendations for CBPR with vulnerable youths in the community. We formulated these recommendations to establish clearer, more action-oriented guidelines for conducting CBPR with vulnerable youth populations.
Co-creation and Consultation With Community Partners
Collaborative, trusting, and respectful relationships with community organizations are essential to developing innovative mental health care services in the community (Kyoon-Achan et al., 2018). Building relationships takes time, and rushing to ask community members to engage in the research process can cause ruptures and resistance (Campano et al., 2015; Crooks et al., 2013; Drahota et al., 2016). These relationships are strengthened when community stakeholders are actively involved in cocreating the research project (Andrews et al., 2019). For example, we learned that dedicating time and establishing colearning opportunities were critical to developing authentic relationships with community stakeholders for evaluating a mental health promotion program with First Nations, Métis, and Inuit (FNMI) youth (Crooks et al., 2013). Specifically, the active participation of community stakeholders in informing the iterative development and evolution of the mental health program and the research processes led to more meaningful outcomes for the participating FNMI youth (e.g., positive cultural identity, intra- and interpersonal growth, better mental health) and the community (Crooks et al., 2016). Therefore, for the HRP-E pilot, we prioritized having colearning procedures with organizational leaders and decision makers from the outset to seek their input in designing the pilot’s activities and procedures.
Building a Strong Partnership
From the beginning of discussions about partnering on our grant application, we met with our partners individually to learn about their existing services, successes, challenges, and the youths’ and service providers’ lived experiences. We needed to better understand how the skills and strategies taught in HRP-E, and their potential outcomes, aligned with the types of mental health supports that the vulnerable youth within our partnering communities needed. Gaining this information and a deep understanding of our partners’ needs, services, and priorities was essential to the partnership process. The individual discussions with the partners highlighted that the outcomes and impacts of HRP-E would not be the same or consistent across the groups of vulnerable youth. The diversity of youth across the settings was reflected in discussions we had with partners that were already running the program when we attempted to identify primary outcome variables. For example, a partner that implemented the program with adolescent mothers felt that the participants’ most significant gains were in the areas of assertive communication skills and a sense of social support in the group. Conversely, partners that implemented the program with male adolescents involved in the criminal justice system felt that the largest gains were in the areas of empathy and self-control. It is not surprising that different partners see the program’s impact differently given the foundational nature of social and emotional skills, the heterogeneity of skill deficits between different youth populations, and the flexibility within the program to focus on particular areas. However, these diverse perspectives on the program’s primary outcome created an evaluation challenge: it was difficult to determine the appropriate constructs to measure in evaluating the program’s effectiveness for the pilot.
In addition to individual meetings, we held biannual meetings with all of our community partners, focusing on research planning. During the meetings, we learned more about our partners’ administrative structures, programming priorities, and service policies to begin to understand their capacity and readiness to engage in research activities in addition to running HRP-E. Discussions included partners with research experience or a designated researcher, as well as those whose expertise is more firmly rooted in clinical practice. We learned from these discussions that the available resources, timelines, organizational structures, and priorities of our partners were diverse across sites. Thus, the research procedures for the pilot needed to be tailored to each site’s context. We also asked for their feedback to identify challenges and barriers that each site might encounter in carrying out our proposed research consent procedures and data collection methods. Following the partners’ meetings, we continued to engage in either in-person or telephone meetings with individual partners to discuss the topics described above.
Partner Consultations to Codesign Measures and Data Collection
We consulted with our community partners to codesign the youth surveys we administered to evaluate the HRP-E. Co-creation of evaluation measures does not mean putting a disproportionate burden on community partners to develop their own data collection measures aligning with their needs and having researchers only review them later. In a CBPR design, researchers need to work with community partners to identify appropriate outcomes that are commensurate with both the evaluation objectives and organizational needs and create preliminary drafts of measures for coreview and follow-up consultations. To create our youth surveys, we first incorporated the HRP-E logic model, what we learned from our initial evaluations of HRP-E, and the literature to identify several outcome variables relevant to our diverse groups of vulnerable youth. We operationalized these outcome variables with measures that were relatively brief and had adequate psychometric properties. Where possible, we selected measures that had been previously used with vulnerable youth, either by our research team or others. Where we could not find appropriate questions, we augmented the survey with items reflecting the program’s logic model.
Seeking and incorporating feedback on youth surveys
Next, we shared the initial draft of the youth survey with our partners for feedback before submitting it for approval from our institutional research ethics board. There is often a hierarchy in academic institutions that determines who the experts are in establishing best practices for evaluation studies (Cacari-Stone et al., 2014). It has been argued that the sole use of institutional knowledge excludes the knowledge and wisdom of the community, leading to further health inequalities and disparities (Allan & Smylie, 2015; Barnabe et al., 2017). Our partners reviewed the initial drafts of the survey to ensure that it captured outcomes that were meaningful to measure for their youth and organizational needs, and they suggested additional items that would also be applicable to the youth they serve. We provided adequate time and an iterative process for this feedback loop to be meaningful. Their feedback was also valuable for improving the wording of some of the survey items and prompts. For example, we received feedback that youth may be less familiar with certain terms or that some of the terms or phrases on the survey might be too formal or academic (e.g., dating partner; cisgender). We revised these items to provide definitions or other commonly used terms (e.g., someone you are in a relationship with; your gender identity matches your sex assigned at birth).
Consultation on measuring risks and adversity
Consultations with our partners also helped us understand how to measure certain variables on the youth survey. Specifically, we consulted extensively about how to effectively measure the types and intensity of risks our youth participants had been exposed to and the adversities they faced, to examine whether those experiences moderated their HRP-E outcomes. Importantly, from an equity lens, collecting information about youths’ exposure to risks and experience of adversities allowed us to monitor whether we were reaching the most vulnerable youth in our communities. Traditional approaches to measuring adverse life events include listing the frequency or type of adverse life events experienced (Purewal et al., 2016; Stevens, 2014). In particular, these approaches capture a dosage measure of trauma and adversity, which has proven to have great utility, for example, in the Adverse Childhood Experiences Study to predict future life outcomes and refer individuals for interventions and supports (Cronholm et al., 2015; Finkelhor, 2018). However, many of the youth at our partnering organizations have experienced significant marginalization and adversities. They might be at increased risk of experiencing discomfort or distress when asked about certain life events, especially events associated with abuse and violence (Priebe et al., 2010).
Furthermore, asking participants to answer multiple questions about specific traumas is both time-consuming and incongruent with our strengths-based approach. We had extensive conversations with our partners about safe and sensitive mechanisms for measuring cumulative risks and adversity with youth and presented an alternative measurement option (i.e., a single-item continuous scale) that we found in the literature (Sullivan et al., 2019). Based on our partners’ input, we ultimately decided to use a single-item continuous scale to measure cumulative risks and adversity on the youth surveys. Specifically, the question asks participants to place a mark on a line to indicate an overall rating of their life challenges, considering both the number and severity of those challenges (Sullivan et al., 2019). It is anticipated that this brief, low-burden estimate will reduce the likelihood of participants experiencing distress (Evans et al., 2013; Kessler et al., 2010).
Cocreating and Implementing Supports for Research
According to CBPR principles, relevant community stakeholders become coresearchers within their organizations (Kyoon-Achan et al., 2018). However, taking on this new research role could overwhelm community coresearchers’ already demanding work schedules, given the multiple responsibilities of their existing roles in their organizations (Tremblay et al., 2018). Community coresearchers need adequate time, tools, and training to familiarize themselves with the research tasks and carry them out effectively within their organizations. For our evaluation pilot, the HRP-E facilitators, and sometimes administrators as well, were our coresearchers. They were involved with participant recruitment, obtaining informed consent from guardians or youth, and helping youth complete pre- and post-pilot surveys. We relied on their experience to determine the most realistic and effective approaches to carrying out the research activities (e.g., how to reach youth and their guardians to share information about the program). During the initial stages of the HRP-E pilot, our community partners indicated that they needed more support in understanding our research design and protocols and training in their research tasks as coresearchers (e.g., obtaining consent from participants). In response, we provided training by videoconference to program facilitators and administrators to review the pilot design, procedures, and their specific tasks (e.g., completing a youth attendance and engagement questionnaire after each session). As part of this research training, we also shared scripts that community administrators or program facilitators could use to obtain informed consent from guardians or, depending on the youth’s age, consent or assent from youth. The video training was followed up with face-to-face meetings to provide additional support as needed. We also provided a one-page flowchart of the research procedures that was simple and easy to follow.
The ongoing communications helped our partners better understand the information that guardians and youth need to make an informed decision about participating in the pilot study. Our communications also highlighted critical processes such as maintaining the confidentiality and privacy of participants’ personally identifiable information and research data. Our research training and meetings with partners were collaborative rather than didactic. That is, when we explained our research procedures, our partners were given opportunities to reflect and decide whether our pilot procedures were feasible for their sites. They also identified possible barriers with our initial proposed research protocol. For example, we heard from some of our partners that it is often not feasible to have in-person contact with guardians to obtain written consent, especially with tight timelines. Hence, we adapted our consent procedure to align with how many of our partner organizations obtain consent for programming, where appropriate, and provided flexibility in how consent could be obtained (e.g., providing additional options for online or verbal consent). At the same time, written consent was maintained for those partner organizations that required it. We also provided implementation supports to our partners during data collection. If needed, our team members visited partner sites to recruit participants, obtain consent or assent from youth, or assist with data collection (e.g., answering questions about the survey). An important lesson we learned was that the research training and implementation supports needed to be coplanned and rolled out well before the planned start of HRP-E implementation.
Trauma-Informed Research Methodology
At its core, a trauma-informed research methodology ensures that research protocols and tasks do not compromise the physical and emotional safety of participants and researchers (Andrews et al., 2019). Retraumatization can occur when an individual is explicitly or implicitly reminded of a past trauma, resulting in their reexperiencing the initial traumatic event (Child Welfare Committee, National Child Traumatic Stress Network, 2013). This reminder may be so vivid that the individual perceives the circumstance to be as threatening as the original traumatic event and is unable to use their existing internal and external resources to cope with it. Specific environments, behaviors, attitudes, and expressions can trigger retraumatization.
Risks of Asking Youth About Their Past Relationship Abuse and Trauma
When evaluating a program such as the HRP-E, a primary evaluation objective is to assess whether the program is effective in reducing relationship abuse and violence among youth. At the same time, there can also be risks in explicitly asking youth about their relationship abuse and trauma, and these risks may be higher for vulnerable youth populations. Marginalized status and social disadvantages associated with their living circumstances and environments may place vulnerable youth at increased risk of more frequent and significant relationship abuse and trauma (Priebe et al., 2010; Sharkey et al., 2017). Questions about relationship abuse and violence might also remind vulnerable youth of traumas triggered by family, community, systemic, or historical violence and oppression, potentially causing psychological harm. Affleck (2017) argues that the risk–benefit calculus indicated by the Common Rule is inadequate for weighing the risks against the benefits of asking research participants about their traumatic experiences. The Common Rule, officially known as the Federal Policy for the Protection of Human Subjects, is a comprehensive framework providing ethical principles to guide research involving human participants (Hudson & Collins, 2015). Cathartic release from reporting or discussing certain experiences may be an anticipated benefit of research participation. However, disclosing traumatic experiences in research contexts may not always be cathartic and may pose significant risks to individuals’ psychological safety if appropriate measures are not in place (Affleck, 2017; O’Mathuna, 2010).
On the other hand, the current literature does not sufficiently capture vulnerable youths’ voices to provide researchers with insights about youths’ interest in, and the impact of, sharing stories relating to possible relationship violence or related traumas (Chu & DePrince, 2013; Jaffe et al., 2015). This limited understanding of the impact of participating in violence- or trauma-related research on vulnerable youth further complicates determining the unique protections that vulnerable youth might need in research settings.
Cocreating Trauma-Informed Data Collection Methods
For the HRP-E evaluation pilot, we consulted with our community partners to modify survey content to reduce risks and possible retraumatization, and we introduced mechanisms to collect youths’ feedback about the survey. We took the time to learn more about the groups of vulnerable youth that our partner organizations served and drew from our partners’ experience to better evaluate the appropriateness of the youth survey content and scenarios from a trauma-informed lens. We also added a prompt at the end of the youth survey asking whether any questions upset the youth and, if so, which ones. Youths’ responses to this prompt will help us assess whether specific topics or scenarios on the current survey are distressing and make modifications accordingly as we progress to the quasi-experimental phase. We also provided distress management resources for youth participants at the end of the survey.
Building capacity of community facilitators
Trauma-informed research methodology also involves integrating specific programming strategies and research protocols that prioritize the safety of youth (Andrews et al., 2019). At the program level, the HRP-E training developed for community facilitators and administrators included a specific focus on trauma-informed care. This training taught community facilitators and administrators to recognize the signs and symptoms of retraumatization and to support youth experiencing potential distress from program content. Beyond building capacity for trauma-informed programming, our intention was that the training would also equip our partners with knowledge and tools to monitor the impact of our research protocols and activities on youth and to help reduce youths’ distress if needed. Furthermore, we worked with our community partners to put explicit mechanisms in place for the timely referral of youth participants to mental health professionals, should participation in any of the research tasks require it (Levine, 2004). Given the nature of the services they provided, many of our partners for the pilot already had such referral mechanisms in place.
Data collection in safe and controlled settings
For research protocols, we recognized the importance of carrying out data collection with our participants in controlled and safe settings (Affleck, 2017). Whenever feasible, we recommended that our community partners have youth complete their surveys on-site, in close contact with trusted adults. We recognized that these trusted adults could also be the HRP-E facilitators, with whom the youths had already formed therapeutic relationships through previous programs or during the implementation of HRP-E. Within clinical intervention methodology, ethical guidelines call for minimal interaction between program facilitators and participants during the research process (Tremblay et al., 2018). These guidelines are intended to minimize perceived coercion to participate and facilitators’ influence on participants’ responses about the program. In this circumstance, however, ensuring participants’ safety and comfort took higher priority than guaranteeing that facilitators were physically absent or unaware of which youth participants completed the research tasks. We also offered and provided resources (e.g., laptops) to help our partners facilitate on-site data collection with youth participants. These additional resources were well received and well utilized by our partners.
Flexibility in Research
The tenets of CBPR involve ensuring that the interests of researchers align with the needs of the community. To ensure this alignment, we prioritized our community partners’ needs, realities, and diversity instead of adhering to strict, standardized protocols. Specifically, we were flexible with our pilot time line, procedures, and activities to accommodate the unique needs of our community partners and the groups of vulnerable youths they supported. We learned that this flexibility was necessary to ensure that the pilot did not place undue burden or distress on our partners or youth participants and to decrease the power imbalance between the research team and the community partners.
Flexibility With Research Protocols and Timelines
We needed to be flexible with our research protocols and time lines for preintervention data collection. There would have been various benefits to having all our youth participants complete the surveys online (e.g., a skip function that lets youth bypass questions that are not relevant based on their initial responses). However, administering surveys online was not realistic for some of our partners. Partner organizations that supported youth in custody or provided residential programming for mental health disorders had privacy and safety policies restricting youth from accessing the internet or visiting nonorganizational websites. In accordance with their legal mandates, correctional facilities also could not share their youths’ names, even for research consent purposes, so we had to delegate the informed consent process to their research departments.
With respect to time lines, many of our partner sites could not always collect preintervention data with youth before starting HRP-E. Many of our HRP-E facilitators did not interact with youth in person until they began the program. Along with delivering all the session activities, facilitators needed sufficient time to explain the pilot to youth so that they could make an informed decision about their research participation and, if required, to obtain guardian consent. Facilitators were also required to allocate extra time before or after HRP-E sessions to administer the surveys with youth. In light of these realities, many of our facilitators ended up administering the preintervention youth surveys between the first and fourth sessions of the 16-session program. We continue to work with our partners to identify strategies to make the pre-HRP-E data truly preintervention. We learned that our partner sites would likely need more support from us to collect pre- and postintervention data. For example, having additional research personnel from our team on-site to administer the youth surveys would have taken some of the task load off our community facilitators. Some partners may also have been interested in adding data collection days before and after HRP-E and receiving financial support to incorporate snacks and recreational activities for participating youth. Finally, we may empirically explore the impact of youth completing their pretests after a few sessions of the intervention in our upcoming quasi-experimental study, potentially by treating pretest timing as a covariate.
Flexibility With Intended Sample Sizes
We also learned that the degree of research participation might change during HRP-E programming, and we had to be flexible with our expectations for the overall sample size. Some of our partners identified additional barriers and challenges to data collection with youth after they began facilitating HRP-E.
Inconsistent attendance
One identified barrier was inconsistent attendance at HRP-E sessions. Given the social disadvantage and unpredictable circumstances (e.g., housing instability, mental health challenges, family conflict) that many vulnerable youth experience, dropout can be common, and attending 16 sessions can be too demanding (Kronsberg & Bettancourt, 2020). Inconsistent attendance and early dropout can be significant barriers in intervention evaluation studies and an important factor for researchers to consider at the development stage. For example, researchers may need to run more groups than the number initially determined through power calculations for the planned analysis. Including midpoint or formative assessments, rather than relying solely on pre- and postevaluations, could also help document the outcomes participants gain at different phases of the intervention, especially in sites with a high risk of dropout.
Length and literacy demands of youth surveys
The second identified barrier to data collection was the length and literacy demands of the youth survey. After pretest administration, a few of our partners reported that their youth found the survey too long or cognitively demanding. Although the youth survey underwent an informal, brief acceptability assessment (e.g., administrators and facilitators reviewing it and giving us feedback), it is also important to collect direct feedback from youth to assess the usability of research materials. Youth may identify unique issues and experiences with research materials and protocols that may not be captured by adults in the same setting (Dare & Nowicki, 2019). Factors such as literacy challenges could weaken the validity of the data collected to examine the benefits of the intervention, so we collaboratively decided with our partners not to administer the postintervention survey in groups experiencing these challenges. We continue to work with our partners to improve the usability of our youth survey for the quasi-experimental phase of the HRP-E evaluation.
Providing an Alternative Evaluation Design
Finally, we are offering flexibility in how our partners can evaluate HRP-E by introducing an alternative evaluation design. While coresearching with our community partners on HRP-E, we recognized that administering pre- and postsurveys to evaluate the program might not always be feasible for all partners. Pre-and-post evaluation designs can require a substantial commitment of time and resources (Young & Kallemeyn, 2019), putting an additional burden on community facilitators who already have numerous other responsibilities, including running the program of interest. Our research team is currently piloting an alternative evaluation design that requires only the administration of a youth survey at the end of the HRP-E program (i.e., a retrospective evaluation). Retrospective evaluations reduce respondents’ time commitment and are arguably more cost-effective for community organizations (Young & Kallemeyn, 2019). A retrospective evaluation also avoids burdening youth before they have engaged in the program, minimizing the likelihood that the research component is aversive enough to change youths’ minds about attending. Aside from the benefits of a one-time survey, retrospective evaluations could also help mitigate response-shift bias. This bias occurs when survey respondents inadvertently under- or overestimate their knowledge or skills at pretest because they have not yet learned about the concepts the survey asks about (Chan et al., 2016). Response-shift bias can weaken the reliability and validity of the data collected to determine intervention effects. We are collecting feedback from research experts, community stakeholders, and youth to inform the development of our retrospective youth survey for HRP-E evaluations (Ibanez, 2020).
Recommendations and Considerations for Future CBPR With Vulnerable Youth
Cocreating the HRP-E evaluation pilot with our community partners has further highlighted the complexities of partnering with multiple community organizations to research mental health-focused programming with vulnerable youth. Based on our experiences and lessons learned, we have identified research recommendations and considerations that may be helpful or transferable to other CBPR with vulnerable youth. Future CBPR researchers could benefit from mapping out a framework and considering how each aspect of CBPR could be responsive to ongoing cocreation with community partners and the integration of trauma-informed and flexible approaches.
Recognize and Honor the Diversity of Community Partners
When conducting CBPR, researchers are reminded to recognize and honor the diversity of community partners and vulnerable youth populations and to consider how partners’ diverse organizational structures and vulnerable youths’ needs may shape research design, procedures, and activities. Each community organization has its own needs, structures, and policies that inform its priorities and its readiness and capacity to serve as coresearchers. There is also diversity within vulnerable youth populations with respect to lived experiences, marginalization status, and supports, which may further inform how interventions are delivered and evaluated. Flexibility is inherent in CBPR because evaluation procedures, activities, and short- and long-term outcomes differ between and across community partners, even when they are implementing the same program. That said, we also acknowledge that flexibility of this magnitude in research design requires flexibility from funders regarding time lines, outcomes, and productivity. Pressure to meet project deadlines and outcomes can be a significant barrier to researchers exercising the flexibility needed to research with communities in support of vulnerable youths. It is imperative to build realistic expectations with funders by acknowledging, in initial funding applications, the amount of time required to cocreate evaluations.
Provide Implementation Supports for Community Partners to Carry out Research
Research teams are encouraged to provide training and ongoing implementation supports and consultations to community partners to carry out their research activities within their respective sites. Training on informed consent, participants’ confidentiality, and trauma-informed data collection methods will help strengthen partners’ readiness and capacity to coresearch with us. Moreover, equipping our partners with tools and resources will increase our partners’ ease and confidence to carry out the research activities. It is important to consider these needs even at the funding proposal stage, as there may be significant resource implications for creating appropriate supports.
Cocreate With Community Partners Throughout the Design and Implementation Process
We encourage researchers to collaborate with their community partners, including youth participants, to develop their procedures and evaluation tools and to incorporate all parties’ input when assessing the relevance and fit of the overall research design and activities for each site. Specifically, factors such as the length and types of research tasks, the wording of materials, and the elimination of potentially triggering scenarios should be considered. Even when trauma-informed practice is incorporated into research approaches, explicit protocols are needed for participants to report distress after completing research tasks. One specific strategy we employed was having youth participate in research tasks in safe and controlled environments, often in the same settings as the programming itself. Researchers can also assist community partners in establishing mechanisms for mental health referrals if needed.
Engage in an Ongoing and Authentic Relationship Over Time
The time line established for the research by institutions and funders may not align with the time needed to engage with community partners and conduct the study effectively. Overall, CBPR needs flexibility with time lines. Along with building relationships, community partners need time to be trained on the research tasks and plan for and implement the program of interest. Notably, external and internal forces within the organization and youth’s circumstances often cause unpredictable shifts or delay to the time line. Rather than only focusing on evaluation outcomes, researchers also need to value studying these community processes in CBPR and broaden our conceptualization of what constitutes outcomes. Broadening the focus of the work to include these processes is best considered at the funding proposal development stage so that adequate time and resources can be allotted to these components.
Conclusion
The research base for effective and sustainable community-based mental health programs with vulnerable youth is limited. The lack of well-defined, action-oriented guidelines for coresearching with vulnerable communities could be a contributing factor, especially because it is challenging to effectively and ethically navigate the tensions between scientific rigor and CBPR principles (Crooks et al., 2013). Moreover, evaluating mental health programs with comparison groups or treatment-as-usual conditions is not always practical in the community, as community organizations are responsible for ensuring that all youth receive all relevant programming and services offered. In this article, we shared our experiences of collaborating with numerous community mental health, child protection, juvenile justice, and public health partners to cocreate an evaluation of the HRP-E with vulnerable youth. Ongoing consultation with community partners, the integration of specific trauma-informed research methods, and overall flexibility with research protocols guided our work. Finally, we drew on our lessons learned to formulate recommendations and considerations for future community-based research with vulnerable youth.
Footnotes
Acknowledgments
The authors would like to express their gratitude to the entire Healthy Relationships Program research team at the Centre for School Mental Health at Western University. Additionally, a special acknowledgment goes to the community partners who collaborated with the research team to cocreate the evaluation approach described in this article.
Authors’ Note
Ethical approval was obtained from the Non-Medical Research Ethics Board of Western University to conduct the study with participants. The study adhered to the guidelines of the Tri-Council Policy Statement (Canada) to ethically conduct research involving humans.
Declaration of Conflicting Interests
The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: The fourth author is one of the developers of the Healthy Relationships Program-Enhanced program, described in this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by a Public Health Agency of Canada grant (C. Crooks; Grant# 1819-HQ-000052). The pilot evaluation is part of a larger research project to address identified gaps in the areas of violence prevention and mental health promotion programming for vulnerable youth populations.
