Abstract
Although psychological research often relies on convenience samples, the most informative participants may be individuals who are reluctant to engage because of vulnerability, mistrust of the research process, and/or disagreement with a study’s goals. This concern is particularly urgent for certain research questions, but it is relevant for all researchers because relying solely on easy-to-reach participants limits a study’s validity and generalizability and may substantially hollow out, or even render unanswerable, some research questions. We review a number of challenges in conducting research with “reluctant but informative” participants, along with strategies for meeting them. We argue that engaging reluctant participants requires attention at every phase of the research process: study design and planning, participant recruitment and testing, data analysis and interpretation, and reporting and broader impacts. We also illustrate with our own recent experiences conducting research on children’s gender concepts in rural, conservative U.S. communities that expressed skepticism about the value of this research. Although every study is different, two needs are notable across all projects: to respect both individual participants and their communities and to balance competing desiderata. Finally, we discuss the importance of transparency around sample composition and constraints on generalizability, as well as the potential utility of collaborative research both across research institutions and between researchers and participant communities.
In psychology, as in all science, some questions offer easier answers than others. There are questions for which accessing the most informative participants may present logistical difficulties, from geographic distance to language barriers to sensory, cognitive, or physical differences that present unique challenges to participation. There are also important scientific questions for which the most informative participants may be the least interested in participating.
In this article, we are concerned with the latter situation: conducting research for which recruitment of appropriate participants presents a challenge because of their reluctance to take part. Studies may seek to recruit participants who are disinclined to engage in scientific research in general or in research focused on a particular topic. Potential participants may have a historical basis for discomfort with being subjects of research because of prior severe and troubling examples of exploitation or pathologization (e.g., the Tuskegee Syphilis Study), as is the case for African Americans (Robinson & Trochim, 2007; Shavers-Hornaday et al., 1997) and other racial or ethnic minority groups (Rowley & Camacho, 2015) and transgender populations (Tebbe & Budge, 2016; Vincent, 2018). They may have concerns about stigma, vulnerability, or negative consequences of participation because of issues such as mental health, substance use, or criminal liability (Mirick, 2016; Oruche et al., 2012; Young et al., 2020). Participant groups of interest may also mistrust researcher goals, intentions, or perspectives related to the research question itself (e.g., in investigations of extreme political views and other radicalized beliefs, Sazak, 2019; Sikkens et al., 2017; vaccine hesitancy, Hilário et al., 2023; Marshall et al., 2002; Wiley et al., 2021; gender diversity, Fine et al., 2024; Gross et al., 2024). These issues are not mutually exclusive; potential participants may experience reluctance from several sources at once.
Although engaging reluctant participants is a more urgent concern for some research questions than others, we maintain that it is an issue for all researchers to carefully consider. Excluding people from participation because engaging them poses difficulties is problematic in principle because doing so limits the validity and generalizability of psychological research, likely in systematic ways (i.e., undersampling on the basis of race, ethnicity, education, or political beliefs). This issue is a broad problem that has received substantial attention in recent years (Henrich et al., 2010; Kidd & Garcia, 2022; Nielsen et al., 2017; Roberts et al., 2020; Roberts & Mortenson, 2023). At times, excluding hard-to-reach populations may substantially hollow out the value of a research question (e.g., studying responses to messaging about climate change in individuals who are already receptive to scientific expertise) or even render it unanswerable (e.g., evaluating an educational program designed for children in poverty with a sample of middle-class children). Furthermore, the issues for which researchers seek reluctant participants are not niche questions given that they include some of the most consequential and timely issues of the day. More positively, taking care to engage reluctant participants, although challenging, permits a more robust, informative, and socially accountable science.
Framing the Issue
There are two points regarding the scope of the issue that provide a framework for understanding the challenges, strategies, and lessons learned.
First, many researchers have grappled with recruitment challenges as a methodological issue that can be addressed with attention to sampling strategies or techniques (e.g., Hilário et al., 2023; Patel et al., 2003; Shaghaghi et al., 2011). Although outreach is certainly important, we view it as just one component of a broader set of considerations that involve every phase of the research process. Accordingly, we address how engaging reluctant participants raises challenges regarding study design, data analysis and interpretation, and postproject impacts—in addition to recruitment per se (see Table 1). This broader framing of the problem raises practical, scientific, and ethical considerations.
Table 1. Challenges and Strategies in Research With Reluctant Participants as a Function of Study Phase
A second overarching issue is who is a reluctant participant in research and why. The primary message here is that the reasons (and potential consequences) are highly varied and need to be understood from the participant’s point of view. Much of the literature focuses on working with populations that are characterized as “hard to reach,” “hidden,” or “vulnerable” (e.g., Ellard-Gray et al., 2015; Hilário et al., 2023; Rockliffe et al., 2018; Shaghaghi et al., 2011)—classifications that focus on the perspective of the researcher more than the participant. To take the participant’s perspective, one must consider the reasons behind these classifications. For example, populations may be “hard to reach” not only because they are physically remote but also because of their social status (e.g., individuals who are unhoused or social elites; Ellard-Gray et al., 2015; Shaghaghi et al., 2011). “Hidden” participants are individuals who cannot easily be tracked because no clear record exists of the characteristics that define them as the population of interest—but for variable reasons (e.g., LGBTQ+ people because insufficient resources are devoted to identifying their needs, women who experience abuse but may choose not to report it out of concern for the consequences, radicalized groups who mistrust anyone deemed an outsider; Ellard-Gray et al., 2015; Sazak, 2019). “Vulnerable” participants are individuals who may perceive (and experience) heightened risk from research participation—but again, the reasons can vary, including potential discrimination, stigma, or legal consequences (e.g., undocumented immigrants, intravenous drug users; Ellard-Gray et al., 2015; Young et al., 2020). Moreover, communities may resent “helicopter” research, in which researchers land in their community only long enough to gather data, offering nothing in return (Rowley & Camacho, 2015).
A further concern is that these classifications (e.g., hard to reach, hidden) may be characterized as features of the individual (Patel et al., 2003; Shaghaghi et al., 2011) rather than acknowledging the complex interaction of structural forces at play (e.g., social positions, life experiences, and systems of power). When framed from participants’ perspectives, some documented reasons for reluctance to engage with research include experience of stigma, suspicion of research, experiences of vulnerability or disenfranchisement, and history of research abuses (see Table 2). Although these sources of reluctance may be associated with the listed groups, they do not necessarily apply to all group members (e.g., not every member of a racial-minority group necessarily experiences themselves as vulnerable or disenfranchised).
Table 2. Some Sources of Participant Reluctance
In the present article, we review the literature on challenges and strategies for research with “reluctant but informative” participants, informed in part by our own recent experiences. From 2022 to 2024, we conducted a study aimed at reducing gender essentialism and prejudice against gender nonconformity with children ages 6 to 10 living in rural, conservative communities in the United States (Gross et al., 2024). We were especially interested in these children given prior research indicating higher levels of gender essentialism and prejudice against gender nonconformity in these communities (Fine et al., 2024). But the same reasons these communities were of interest also meant that many families were reluctant to participate and even suspicious of or hostile toward the project. Consequently, it took us 2 years to recruit our sample (compared with fewer than 2 months to recruit the same number of children in this age range for other research conducted in our lab), and throughout the process, we were required to step back, reconsider, and pivot numerous times. We are immensely grateful to all families who expressed interest in the study, especially families who ultimately chose to participate. At the same time, we also have learned important lessons—including from individuals who chose not to participate.
The rest of the article has four main sections, each focused on one of the four phases of the research process listed in Table 1: (a) study design and planning, (b) recruitment and data collection, (c) data analysis and interpretation, and (d) reporting and broader impacts. For each, we discuss challenges and strategies for meeting these challenges, illustrating with our own case study. We bring together a range of thoughtful considerations and suggestions provided in the literature within and beyond psychology to discuss common themes in participant reluctance, recruitment challenges and strategies, and our own missed opportunities and lessons learned. Finally, we conclude with some general lessons and directions for the future.
Study Design and Planning
In the study-design phase, important tasks and considerations include identifying and gaining familiarity with potentially informative participant populations, which will aid in defining sampling parameters. There are also important ethical considerations at this stage, including awareness of community interests and concerns.
Identifying the population of interest
In the early phase of a research project, researchers need to consider how to balance two interests that may at times conflict: identifying participants who would be most informative in answering the research question and what is feasible in terms of recruitment and outreach. For example, if one is interested in a rural sample, theoretically, the ideal participant would be one living in a county with a Rural-Urban Continuum Code of 9 (the most rural classification), but as of 2022, such counties accounted for only 1.4% of the U.S. population (Purdue Center for Regional Development, 2024). There are many different criteria a team could set to identify any population of interest, and small differences may have a big impact on recruitment. To the extent possible, researchers should maintain flexibility and be willing to revisit specific criteria as they reevaluate the balance between accessing an informative sample and one that is large enough to allow for meaningful analysis.
Understanding the population of interest
Understanding the population of interest includes consideration of a participant’s context, values, and beliefs and is of central importance for two key reasons. First, from the standpoint of a researcher’s ethical responsibilities, understanding the population of interest is necessary to achieve the ethical goals of treating participants with respect, enhancing the benefits that participants can receive from participating in the research, and minimizing potential risks or harms. But second, from the standpoint of achieving one’s research goals, this is the most effective means of identifying why a participant may be reluctant and thereby beginning to work to address their concerns. It also is the basis for a more valid and meaningful program of research (Rowley & Camacho, 2015).
Strategies for understanding the population of interest include conducting focus groups or pilot studies with members of the participant community (Lewis et al., 2020), working with community partners or advisory boards to help build rapport and gain initial insights into participants’ experiences, or including community members on the research team (Ellard-Gray et al., 2015; Rowley & Camacho, 2015). For some projects, it may be feasible to adopt a participatory or community-based research design, involving collaborative discussion of the project with target communities from its inception (Levac et al., 2019; Tebbe & Budge, 2016). Participatory approaches may be particularly helpful in situations in which potential participants are vulnerable or have a historical basis for mistrust of research; these approaches include meeting with community members to listen to their concerns and priorities, involving some members on research teams, and ensuring that the research in some way “gives back” to the community of interest (Levac et al., 2019; Rowley & Camacho, 2015; Tebbe & Budge, 2016; Vincent, 2018). Participatory research requires some degree of access to the community; researchers may begin by identifying and contacting key stakeholders and introducing the research project and the team involved, including at least some information about researcher backgrounds and identities (Tebbe & Budge, 2016; Vincent, 2018). Resource intensiveness is a major limitation of participatory approaches, and thus, they do not lend themselves to every project and are more commonly adopted for qualitative investigations than quantitative investigations (Levac et al., 2019).
In the absence of a full participatory design, holding meetings with members of participant populations (or attending public town halls, school board meetings, or similar events) can allow researchers to gain insight into community priorities and potentially integrate these into the project, adding value for participants and strengthening the scientific work.
Defining the sample
The design phase may also include significant challenges related to defining the sample population, particularly for hidden participants (Ellard-Gray et al., 2015). In addition to considering how eligibility criteria will be set, in studies that allow for participant self-identification, researchers will likely need to incorporate strategies for identifying and excluding fraudulent participants (a problem that is particularly common in research conducted online; Lawlor et al., 2021).
Familiarity with the population and carefully considered sampling parameters can aid in selecting respectful and appropriate language for recruitment and other study materials (and avoiding serious pitfalls). It is important to avoid stigmatizing language in recruitment materials; doing so requires awareness of how stigma is experienced in a given community (Ellard-Gray et al., 2015). As one example, for work in LGBTQ+ populations, familiarity with differing terminology can help in recruiting the desired sample (Ellard-Gray et al., 2015). Whereas some older transgender adults prefer the term “transsexual,” younger trans people may find this outdated or even offensive (Tebbe & Budge, 2016). Likewise, recruitment targeting “homosexuals” may sound overly clinical or even offensive, and advertising to “gay or lesbian men and women” is likely to draw a different sample than a more gender-neutral and less academic-sounding advertisement to “queer folx.” Similarly, Sikkens et al. (2017) found that participants were more open to participation when asked to discuss their “strong ideals,” whereas the word “radicalization” could be a barrier. Even when the terminology used is not overtly stigmatizing or offensive, using wording that is overly clinical, dated, or otherwise “out of step” with the community can indicate that a research team is not sufficiently well informed, heightening issues of mistrust.
Ethical considerations and constraints
A major ethical consideration in the design phase of a project is accounting for participant needs in a responsible manner that minimizes harm to both the individual participants and the community or communities to which they belong (e.g., exploitation or contributing to deficit approaches or disparaging stereotypes; Cheng et al., 2021; Rowley & Camacho, 2015). Researchers’ scientific responsibilities are also important (e.g., to minimize confounding factors that may undermine the validity of the findings and to report all results free from bias even if doing so may be uncomfortable for participants).
An example in which these ethical goals may come into conflict is projects involving either outright deception (see Nicks et al., 1997) or failure to disclose the true aim of a study. These practices may be especially common in the context of reluctant participants when the topics under study are controversial. For example, researchers who are interested in participants’ views on one topic (e.g., vaccination, Horne et al., 2015) may devise a survey including a host of other topics (e.g., abortion, euthanasia) to mask their primary focus. Debriefing participants following deception or incomplete information is standard practice in such cases (Greene & Murphy, 2023; McShane et al., 2015; Nicks et al., 1997), although care needs to be taken that the information is clear, memorable, and actually read by participants.
Our case study
In recruiting participants for our own project, we had to iterate over time to satisfy two criteria that were somewhat opposed: on the one hand, restricting our sample to our theoretically based “ideal” (children living in rural, politically conservative, predominantly White communities) and on the other hand, broadening our sample to feasibly and realistically permit data collection within a promised time frame. On multiple occasions, when data collection stalled, we successively broadened the range of communities that we included, primarily by loosening the inclusion criteria, from definitively “rural” communities to those that are “nonmetropolitan.” The slow pace of data collection meant that we had to ask for a year-long extension from the journal at which we had an approved Phase 1 Registered Report (and they were graciously accommodating). We recruited participants based on characteristics of the communities in which they lived rather than characteristics of the individuals themselves. As a result, the families in our final sample were on the whole less conservative, more educated, and higher income than the communities in which they lived, likely because they were more comfortable with the focus of our study or with participating in psychological research more generally.
We grappled with the ethical issue of how much information to provide to potential participants regarding the nature of our study. On the one hand, we had an ethical obligation to let parents know that the study was focused on how children think about and evaluate boys and girls. We told prospective families that we were trying to learn about how children think about individuals who do not conform to gender stereotypes and how they use these beliefs. We also shared details of the study procedure with those parents who requested more information about the study itself (admittedly a small number). On the other hand, our recruitment materials and consent form deliberately did not include the word “gender” because we wished to avoid triggering a negative response based on terminology. Nor did we discuss the concept of “essentialism” or our goal of increasing children’s tolerance for gender-diverse individuals.
Participant Recruitment
Recruitment challenges have been a primary focus of research on reluctant participants (e.g., Hilário et al., 2023; Patel et al., 2003; Shaghaghi et al., 2011) and include both engaging participants (helping them overcome sources of reluctance, suspicion, or distrust) and reaching them (identifying individuals who match the desired characteristics). A variety of solutions have been developed and discussed, each with a different set of benefits and issues to consider.
As a basic first step, providing clear and complete information about the study aims and value of participation in language that is approachable to participant communities and being available to respond to questions from potential participants can help increase interest and reduce reluctance because of both mistrust and disinterest. For example, in research with immigrant groups, ensuring that study information is available in the language(s) spoken by community members, including study-team members who are fluent in such languages and can answer questions, and checking that the reading level of materials is accessible to participants with less formal education may help reduce reluctance and uncertainty (Ellard-Gray et al., 2015; Rowley & Camacho, 2015). In situations in which participants may be particularly vulnerable (e.g., victims of sexual assault), keeping language in recruitment materials less specific—for instance, discussing unwanted or distressing dating experiences or focusing on resilience—can attract desired participants who can then choose to self-identify later in the research process (Ellard-Gray et al., 2015). In all situations and especially in face-to-face interactions, maintaining respect for participants, including their unique experiences and the time and effort involved in contributing to the research process, is of particular importance.
Snowball sampling (in which initial participants identify others, such as friends or family members) can generate large samples in potentially hard-to-reach populations but may bias samples toward individuals with the largest social networks (Johnston & Sabin, 2010; Shaghaghi et al., 2011). Respondent-driven sampling (RDS), in which researchers purposively identify “seed” participants who are each provided with a limited number of recruitment “coupons” to expand recruitment, has been suggested as a less biased option but is resource-intensive, requiring printing and tracking of individual coupons and recruitment (Johnston & Sabin, 2010). A related method, web-based RDS (webRDS), is identical to traditional RDS except for the provision of free and more easily tracked e-coupons instead of physical printouts (Young et al., 2020).
Young et al. (2020) reported on the efficacy of several targeted and in-person approaches for recruiting in a study of people who use drugs. These approaches included hosting cookouts and walking through neighborhoods with high rates of reported drug overdoses and both traditional RDS and webRDS. They found that simply walking through neighborhoods and posting flyers was the most time-intensive and the least effective strategy and that although the cookouts got the most participants “in the door,” relatively few of these actually led to valid data. By contrast, referrals through webRDS had a high rate of initial response and a much higher rate of conversion to usable data, leading to a higher percentage of includable participants, and was more effective than traditional RDS (Young et al., 2020). However, it is unknown how broadly these relative costs and benefits would extend to other research projects. The strategy of tracking the efficacy of different methods is an excellent way to gauge relative success, allowing research to iterate as needed over the course of data collection.
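Tracking the relative yield of recruitment channels in this way requires little infrastructure. The sketch below (with hypothetical channel names and records, not data from any cited study) tallies contacts and usable datasets per channel so a team can compare conversion rates as data collection proceeds:

```python
# Hypothetical recruitment log: each record notes the channel through which
# a participant first reached the team and whether they yielded usable data.
contacts = [
    {"channel": "flyer",   "usable": False},
    {"channel": "flyer",   "usable": True},
    {"channel": "webRDS",  "usable": True},
    {"channel": "webRDS",  "usable": True},
    {"channel": "cookout", "usable": False},
]

def conversion_by_channel(records):
    """Return {channel: (n_contacts, n_usable, conversion_rate)}."""
    stats = {}
    for r in records:
        n, usable = stats.get(r["channel"], (0, 0))
        stats[r["channel"]] = (n + 1, usable + int(r["usable"]))
    return {ch: (n, u, u / n) for ch, (n, u) in stats.items()}

for channel, (n, usable, rate) in conversion_by_channel(contacts).items():
    print(f"{channel}: {usable}/{n} usable ({rate:.0%})")
```

Reviewing these tallies periodically, rather than only at the end of data collection, is what allows the iteration described above.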
Online surveys offer a different avenue to reach a wide variety of potential participants, and anonymized responses can allow participation with less risk for vulnerable populations. They also come with a heightened risk of fraud, from both bots and human respondents who are not actually eligible for the study (Chandler & Paolacci, 2017; Lawlor et al., 2021). Prescreening surveys, single-use links, “check” questions, and use of CAPTCHA technology can help in reducing these issues, but particularly when financial incentives are involved, scam participants can be persistent. Researchers should consider vulnerabilities in their survey and whether the final data match expectations (e.g., if more than 90% of respondents to a survey exploring the lives of people with disabilities report working 40 or more hours a week, there is a mismatch between the expected and obtained data; Lawlor et al., 2021). Each additional effort to protect a survey against fraud has some trade-offs for either researchers or participants, and moreover, online research will not be effective for populations with limited internet access.
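Several of these screening heuristics can be combined programmatically. The sketch below is a minimal illustration, with hypothetical field names and thresholds, that flags responses failing an attention-check item, completing implausibly fast, or arriving from a duplicate submission source:

```python
def flag_suspect_responses(responses, min_seconds=120):
    """Flag survey responses that look fraudulent.

    Each response is a dict with (hypothetical) keys:
    'id', 'duration_sec', 'attention_check', 'ip'.
    Returns the set of flagged response ids.
    """
    flagged = set()
    seen_ips = set()
    for r in responses:
        if r["duration_sec"] < min_seconds:       # implausibly fast completion
            flagged.add(r["id"])
        if r["attention_check"] != "correct":     # failed "check" question
            flagged.add(r["id"])
        if r["ip"] in seen_ips:                   # duplicate submission source
            flagged.add(r["id"])
        seen_ips.add(r["ip"])
    return flagged

sample = [
    {"id": 1, "duration_sec": 600, "attention_check": "correct", "ip": "a"},
    {"id": 2, "duration_sec": 45,  "attention_check": "correct", "ip": "b"},
    {"id": 3, "duration_sec": 500, "attention_check": "wrong",   "ip": "a"},
]
print(flag_suspect_responses(sample))  # → {2, 3}
```

Flagged responses are best reviewed by hand rather than dropped automatically, given that legitimate participants (e.g., household members sharing a connection) can trip these filters.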
For some studies and populations, it may be necessary to employ more resource-intensive approaches, including targeted and time-and-location sampling strategies, which involve conducting in-person recruitment at specific locations where potential participants gather, such as when recruiting individuals with radicalized political beliefs (Sazak, 2019; Shaghaghi et al., 2011). When target participants tend to use specific services (e.g., HIV-positive individuals who attend clinics treating sexually transmitted infections, families involved in the child-welfare system who are connected to mental-health and substance-abuse programs), researchers can partner directly with the facilities that serve these groups (Mirick, 2016; Shaghaghi et al., 2011). This facility-based sampling has the benefit of connecting researchers with populations that are often hidden and hard to reach but has drawbacks related to gatekeeping by agencies and biasing the sample toward participants who are well engaged with services (Mirick, 2016; Shaghaghi et al., 2011). Another strategy that has been discussed is focused recruitment of specific, individual participants who are identified as eligible through publicly available information (e.g., Facebook profiles; Sikkens et al., 2017). This approach is not widely used because of a combination of resource intensiveness and ethical concerns.
Researching without recruitment
There are at least two avenues for research that do not involve specifically recruiting potentially reluctant participants, instead drawing on anonymized analysis of large data sets. One strategy involves secondary analysis of large-scale studies, such as nationally representative surveys. For example, the Behavioral Risk Factor Surveillance System, conducted by the Centers for Disease Control and Prevention (Centers for Disease Control and Prevention, 2024), allows states to opt in to collecting data on gender identity. Although only a subset of states includes these items and transgender respondents make up only a small fraction of the overall data set, they nonetheless number in the thousands—thus allowing researchers to study health and well-being among transgender populations without having to engage in large-scale recruitment (e.g., Conron et al., 2012; Ferrucci et al., 2021; Narayan et al., 2017).
A second option is to examine information that does not come directly from participants or survey respondents at all, including research analyzing publicly available text or video content (e.g., scraping Twitter posts, analyzing YouTube videos; Grimmer & Stewart, 2013; Miller et al., 2020; Nicolas et al., 2021). These strategies have been useful in seeking to understand the attitudes of populations least likely to choose to participate in any research (e.g., individuals who hold broadly anti-science views; Erviti et al., 2020; Rao et al., 2021). As technological options continue to advance, machine-learning systems are an increasingly useful tool for investigating particularly hard-to-research issues, such as terrorism (Atran et al., 2017) and other similarly challenging social phenomena.
Our case study
In retrospect, we did not give sufficient care and attention to planning participant recruitment in our own project, and we could have engaged in this process more thoughtfully and effectively. One missed opportunity is that we did not engage with the relevant community of interest ahead of time. Doing so could have allowed us to explain the purpose of the research in a direct, face-to-face manner and alerted us to potential concerns raised by community members; ultimately, it may have garnered greater “buy-in” from skeptical families. We also did not include a member of these communities on the leadership of the research team (although some of our undergraduate research assistants were from eligible communities), and thus, we were operating based on our own assumptions or stereotypes about what is important to these families. As recruitment went on, we tried out a range of different strategies (e.g., postings on a university-run recruitment website, Facebook ads, ads in local newspapers, flyers in coffee shops and libraries) but did not track the routes by which participants learned of our study. Had we done so, we could have determined which strategies were most (or least) successful and used that information accordingly.
Data Analysis and Interpretation
Important considerations arise when analyzing and interpreting the data from reluctant populations. Although much of this process takes place after the hard work of data collection is complete, the first such step may occur during initial planning in the form of preregistration. Preregistering one’s study is both valuable and increasingly common (thanks to registries such as OSF and AsPredicted). At the same time, researchers need to be mindful of unanticipated complications that may arise (Nosek et al., 2019), especially those that stem from working with reluctant participants. For studies with many anticipated “unknown unknowns,” researchers can provide broader preregistrations, which allow for more exploratory research and serendipitous discovery while still providing transparency regarding initial plans and hypotheses (Hardwicke & Wagenmakers, 2023). When deviating from preregistered recruitment (as Nosek et al., 2019, noted, “Preregistration is a plan, not a prison,” p. 817), transparency is critical. Although the registered-report format (in which studies are reviewed before data collection) may further limit flexibility, it has not been shown to be less friendly to unexpected findings, and in-principle acceptance ensures that a second round of difficult recruitment will not be required for publication (Briker & Gerpott, 2024; Higgs & Gelman, 2021).
When considering the analytic plan for a study involving understudied populations, it is also important to reflect on assumptions of what constitutes a “neutral” comparison, how to interpret group differences, and whether a comparison group is even needed or appropriate (Roberts & Mortenson, 2023; Sampson, 1993). As one example, the achievement gap between Black and White students in the United States is predicated on measurements that do not value Black children’s particular strengths (e.g., oral-narrative skills; Gardner-Neblett et al., 2023). Examining a participant group on its own merits and exploring the specific and intersectional experiences of group members can provide a more accurate (and more nuanced) picture than a simple comparison, especially one based on unexamined criteria or assumptions that one group’s experiences are normative (Cole, 2009). For qualitative methods, engaging in member checking has the benefit of both sharing findings with participants and ensuring that researcher analyses and interpretations are not out of step with participants’ experiences before submitting results for publication (Tebbe & Budge, 2016).
Even after data analysis is complete, interpreting one’s data can be complicated when conducting research with reluctant-participant populations. One major issue is that study samples may be unrepresentative of the broader target groups of interest. For example, in populations experiencing poverty, individuals who participate may have more stable home environments or more extensive educational backgrounds than individuals who do not. Likewise, for participants with ideological reasons for their reluctance (e.g., vaccine hesitancy, strongly held religious or other values, or studies that examine prejudice), individuals who hold more moderate views may be more willing to enroll in a study. Researchers can gather demographic information not only on the final sample but also on the larger target community to identify differences and openly address limitations to generalizability. A related issue is that, given challenges in recruitment, samples may be smaller than desired—a limitation that needs to be noted (Ellard-Gray et al., 2015; Rowley & Camacho, 2015; Sullivan & Cain, 2004).
Our case study
As noted earlier, we conducted this study as a Registered Report; this was altogether a highly positive experience. Doing so meant that we obtained valuable feedback from reviewers during the design phase of the study (before data collection). It also had the benefit of providing a clear roadmap for our data analysis and did not prevent us from conducting and reporting on exploratory analyses (explicitly flagged as such). We did find that the families in our final sample were less conservative, more educated, and higher income than the communities in which they lived, and we noted this limitation in our write-up.
Reporting and Broader Impacts
The final phase we discuss involves sharing one’s work: with scientists, with participants and their community, and with the public at large. At minimum, researchers have a responsibility to report findings in ways that are respectful of the participants and their community, appropriately convey participants’ perspectives and context (and are mindful of the researchers’ own biases), and avoid portrayals that are sensationalized, simplistic, or overly generalizing (DeJesus et al., 2019; Simons et al., 2017; Sullivan & Cain, 2004). Failing to do so risks harming not only participants and their community (the most pressing concern) but also the scientific enterprise by undermining participants’ trust in the scientific community and willingness to participate in future research (Rowley & Camacho, 2015). It is also important to acknowledge participants for their contributions to the research; in qualitative work that draws heavily on specific experiences, researchers may even highlight individual participants, should they choose to be identified (Ellard-Gray et al., 2015; Sullivan & Cain, 2004).
Researchers have a responsibility to report findings to participants in ways that are accessible and have the potential to improve their experience while being transparent about what can and cannot be guaranteed (e.g., for a study of domestic violence, this could entail ensuring that a final report will be presented to the domestic-violence agency in which recruitment occurred while clarifying that the team does not have a say in how the agency acts—or does not—on this report; Ellard-Gray et al., 2015). Researchers can bring results back to those places where participants were initially contacted—whether this means posting on websites or social media groups accessed for online recruitment or sharing results with support groups or other meetings—and as possible, endeavor to incorporate results into educational resources or trainings that will benefit participant communities (Tebbe & Budge, 2016).
Our case study
We took no special steps in reporting beyond best practices available to all researchers. In addition to doing our best to write up our report clearly and completely, we thanked the families who joined our study in the acknowledgments. We also asked participating families if they would like to receive an update on the outcomes of the study—everyone who expressed interest was provided a copy of the abstract after publication.
Discussion
In this article, we have argued that “reluctant” participants are essential to psychological research: Certain research questions cannot be fully answered without the participation of such individuals, but more broadly, all research would benefit from making serious efforts to expand sampling beyond individuals who are easiest to reach or most willing to engage. Understandably, recruitment has received the most attention in the literature, but we have argued that meaningfully broadening participation requires attention at each phase of the research process—study design, participant recruitment and testing, data analysis and interpretation, and reporting results—with learning from and about participant communities early in research as an especially important step. To address challenges, researchers will face competing demands and must make difficult decisions that at best will satisfice.
To provide a concrete illustration, we return to the example of our own research with a reluctant population, investigating gender essentialism and prejudice against gender nonconformity in a sample of children from rural, conservative communities in the United States (Gross et al., 2024). In this work, we encountered challenges related to eligibility, recruitment strategy, language choice and use of social media, and generalizing the results. We initially set more stringent criteria for what “counted” as a rural community than in prior work, given our goal of focusing on individuals with especially high rates of gender essentialism. Although this approach had the benefit of clarity, we found that it substantially reduced the number of eligible families, leading us to create a more flexible eligibility metric—a compromise that resulted in a sample that was less representative of the ideal (because it was less conservative and less rural). During recruitment, we were transparent about the goals of the study but took care to use language that we hoped would be less politically charged. Nonetheless, our advertisements received many negative comments (e.g., “Sounds like a brainwashing set up” and “Is this about introducing our children to the trans lifestyle?”). Testing over Zoom had the advantage of reaching participants who lived too far away to test in person but also meant an increase in fraudulent sign-ups, in some cases with adults posing as 10-year-old children. Over time, we introduced new strategies for targeting eligible communities, including posting flyers at local libraries and other community spaces and setting up tables at cafés and farmers’ markets within driving distance. We regret not tracking how participants learned of our study, which would have helped us determine which efforts were most effective.
Overall, our sense was that studying children’s concepts of gender in a rural, conservative community has become more difficult over time, corresponding to increases in U.S. legislation and scrutiny around gender diversity and gender identity in young people (Ronan, 2021; Trans Legislation Tracker, 2023). More generally, shifting attitudes may require researchers to be nimble and flexible in their approaches.
Every study is different, and the challenges and solutions involved will vary accordingly. Two general lessons, however, transcend these differences. First is the importance of respect for both individual participants and their communities, including the need to understand concerns and historical contexts to minimize discomfort and maximize positive impact. Second is the need to balance competing desiderata. There is a push and pull to research with participant groups that may be reluctant to engage: between the theoretical best match of participant characteristics to study question and resource constraints, between the goal of deeply understanding communities and time constraints, between priorities of the community and priorities of the researchers, between convenience sampling and bias, between reporting negative results and protecting vulnerable groups, and so forth.
We believe that these tensions are inherent to the research process and, consequently, that it is important (and most constructive) for researchers to acknowledge them openly and consider their implications for the generality of research findings (Simons et al., 2017). Ideally, the challenges will provoke researchers to generate new and increasingly productive solutions. Looking to the future, collaborative research teams may be a productive strategy for engaging diverse and potentially hard-to-reach populations—including collaborations between researchers and community members (Ellard-Gray et al., 2015; Rowley & Camacho, 2015) and collaborations across labs and institutions (Byers-Heinlein et al., 2020). Greater transparency in describing samples can help bring clarity to participant groups (DeJesus et al., 2019) and undermine problematic assumptions (e.g., that certain groups are neutral and can represent the larger population; Roberts & Mortenson, 2023). Thoughtful consideration of these issues holds promise for strengthening the value, reach, and transparency of psychological science.
