Background
The Problem, Condition, or Issue
Communicating research results to study participants has been mandated in ethics policies (Government of Canada, 2023; World Medical Association, 2025; National Health Service Health Research Authority, 2017), and communicating results to others with lived experience relevant to the research (e.g., individuals living with the condition being studied but who did not participate in the study) is encouraged in many research funding agency policies (Canadian Institutes of Health Research, 2005; National Institutes of Health, 2018; National Institute for Health and Care Research, 2019).
Many people participate in research, despite burdens involved (e.g., time commitments, travel, undergoing uncomfortable procedures, managing emotional or physical strains; Cameron et al., 2020; Lingler et al., 2014; Naidoo et al., 2020), because of a desire to help others (Sheridan et al., 2020), and most research participants wish to learn about research results (Baylor et al., 2013; Dixon-Woods et al., 2006; Fernandez et al., 2009; Long et al., 2016; Partridge et al., 2005; Purvis et al., 2017; Shalowitz & Miller, 2008). Communicating research results with study participants in an easy-to-use and understandable format would help them understand how their participation has contributed to science and, thus, benefitted others; could help build trust in research and confidence in using research results; and could increase the likelihood of participation in future research and recommending participation to others (Bruhn et al., 2022; Raza et al., 2020; Rigby & Fernandez, 2005; Willison et al., 2019). Not being informed about research results may exacerbate the experience of research burden by leaving participants feeling undervalued, unacknowledged, or uncertain about the impact of their contribution, and may discourage future participation among some participants (Treweek et al., 2013). Sub-optimal participation in clinical trials and other research threatens our ability to conduct studies needed to provide effective health care and reduces confidence that study results reflect the “real world” (Briel et al., 2016; Houghton et al., 2020; Sheridan et al., 2020; Vist et al., 2005).
The Intervention
Effective communication of research results to study participants and others with relevant lived experience requires communication tools that present the information that people with lived experience want to know, in a way that is understandable, and in an easy-to-use format (Pluye et al., 2014; South et al., 2021). Examples of tools that have been used for this purpose include plain-language summaries, news articles, infographics, comics, podcasts, videos, and study-specific websites (eLife, 2017; Kearns et al., 2022; Mancini et al., 2012; Quaiser, 2021; Racine et al., 2017; South et al., 2021; Trevena et al., 2006; Tuzzio et al., 2024).
How the Intervention Might Work
Several types of tools may be used to communicate complex research results to individuals without research backgrounds, including, for example, plain-language summaries, infographics, websites, and videos or podcasts. Despite differences in format, these tools share the common goal of simplifying and clarifying information while maintaining accuracy and transparency. Tool components that are commonly recommended include (1) audience-centered content that considers the needs, values, and concerns of the intended audience; (2) a clear structure, such as logical organization with headings, summaries, and visual hierarchies; and (3) the use of plain language suited to the audience’s level of comprehension. Tools also often include (4) visual aids to simplify complex data or abstract concepts and (5) engagement strategies, such as storytelling or relatable examples, to sustain attention and enhance understanding (Centers for Disease Control and Prevention, 2019; National Institutes of Health, 2018).
Why It Is Important to Do This Review
Evidence from comparative studies that directly test how effectively different tools engage people with lived experience to learn about research and communicate research in an understandable way has not been systematically assessed. We identified a scoping review of studies that included information on results dissemination (Bruhn et al., 2021). However, this review was limited to dissemination among clinical trial participants in medical research and did not evaluate the effectiveness of different approaches. We identified a protocol for a systematic review of studies that included information on results dissemination (South et al., 2019). However, this protocol similarly did not focus on comparative evidence and only included reports of approaches for communicating clinical study results. Clinical studies were defined as observational or interventional medical research relating to treatment, diagnosis, or disease prevention that has implications for health policy or practice. Trials that have compared the effectiveness of tools to communicate research results to study participants and others with relevant lived experience have been conducted (e.g., Bruhn et al., 2021; Buljan et al., 2018; Racine et al., 2017; South et al., 2021). However, to the best of our knowledge, there is no available evidence synthesis that focuses specifically on the comparative effectiveness of different approaches for communicating health research results.
Objectives
Living systematic reviews are systematic reviews that are updated regularly to incorporate evidence as it becomes available. They ensure timely access to evidence and reduce the costs and delays of having to re-launch the review process from scratch when evidence becomes out of date (Cochrane, n.d.; Elliott et al., 2017). We aim to conduct a living systematic review to assess the comparative effectiveness of different tools for communicating research results to study participants and others with relevant lived experience.
Our aim is to evaluate outcomes relevant to making decisions about using tools. Our primary objectives are to evaluate (1) overall satisfaction with the communication of study results, defined as how well the tool met participants’ expectations and needs; (2) understanding of the study results, assessed through self-reported (e.g., perceived understanding) or objective measures (e.g., multiple-choice questions about study findings); and (3) ease of use, including clarity of language and navigability of the tool. Our secondary objectives are to evaluate (1) additional outcomes that may reflect comparative advantages or disadvantages between tools that communicate research results to study participants and others with relevant lived experience (e.g., trust in results, perceived usefulness, likelihood of participating in similar future studies); and (2) whether the comparative effectiveness of tools is associated with participant characteristics such as medical condition, sex, gender, age, race or ethnicity, education, eHealth literacy, preferred learning style (e.g., reading or listening), or the type of study being communicated.
Methods
The living systematic review was registered in the PROSPERO prospective register of systematic reviews (CRD42024463844). The protocol was developed based on methodological guidance from the Cochrane Handbook for Systematic Reviews of Interventions and Cochrane Guidance for Living Systematic Reviews (Cochrane, n.d.; Elliott et al., 2017). It has been reported according to the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P; Shamseer et al., 2015). Any changes to the review protocol will be added as amendments. Systematic review results will be reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) statement (Liberati et al., 2009).
Criteria for Considering Studies for This Review
Types of Studies
Eligible studies must be randomized controlled trials (RCTs) that compare two or more tools for communicating research results to people who participate in health research studies or other potentially interested people with lived experience. Non-randomized trials will be excluded because of important limitations on the ability to draw conclusions about tool effectiveness, particularly in trials conducted without a control group. For crossover trials in which participants evaluate more than one tool, we will include results only if order effects are controlled; otherwise, only results from the first tool presented to study participants will be eligible, given the high likelihood of carry-over effects when multiple tools are presented to the same participants (Higgins et al., 2024).
Types of Participants
Eligible trial participants will include those enrolled in a health research study who receive the study’s results, as well as others with lived experience related to the study topic being communicated. In addition to study participants, this might include people with the same medical condition as the study participants, or people who might consider an intervention, such as a preventive intervention. Trials conducted among people recruited for their research experience or because they hold positions where they are expected to be familiar with research, such as researchers, healthcare providers, and policymakers, will be excluded. If an RCT includes both eligible participants (e.g., research participants, people with lived experience) and ineligible participants (e.g., researchers, healthcare providers, policymakers), it will be included only if results for eligible participants are reported separately or if the majority of participants are eligible (Li et al., 2024).
Types of Interventions
Eligible interventions will include the offer to receive or receipt of health research study results via a communication tool designed for disseminating research results to study participants or others with relevant lived experience. The disseminated research may include any type of health research study, including but not limited to biomedical research, clinical research, health services research, public health research, and social, cultural, and environmental health research. Eligible communication tools may include, but will not be limited to, plain-language or lay summaries, infographics, visual abstracts, news articles or newsletters, comics, podcasts, study-specific websites, brochures, summary sheets, videos, leaflets, cartoons, and reports. Artificial intelligence-generated tools will be eligible.
Eligible comparators will include (1) communication tools or approaches not specifically designed for study participants or other people with relevant lived experience (e.g., a scientific article or scientific abstract); this represents usual practice for how people access research results in the absence of tools designed for their needs and will allow assessment of the effectiveness of specifically designed tools; or (2) any eligible intervention, regardless of tool format (e.g., print vs. video), designed to disseminate research results to study participants or others with relevant lived experience.
Types of Outcome Measures
Eligible outcomes will include any outcome that reflects some element of user satisfaction with communication of results.
Primary Outcomes
Primary outcomes are based on recommendations from our patient advisory team and will include (1) overall satisfaction and the degree to which the tool (2) provides the information study participants or other people with relevant lived experience want to know about the study, (3) is understandable, and (4) is easy to use.
Secondary Outcomes
Secondary outcomes may include, for example, satisfaction with taking part in the study being disseminated, general preferences for tool format, assessed comprehension of key aspects of the study being disseminated, reaction to results, reported likelihood of enrolling in a future study similar to the study being disseminated, or reported likelihood of recommending research participation to others.
Duration of Follow-Up
There will be no restriction on follow-up duration, which may depend on how long participants have access to a research communication tool and how long they are given to provide outcome ratings. We anticipate that most or all trials will evaluate responses immediately after participants use a tool.
Types of Settings
Eligible settings will be those that support one-way communication methods, such as mail, email, websites, or social media, used to disseminate eligible communication tools. Settings involving two-way interactions, such as meetings or workshops, will be excluded.
Search Methods for Identification of Studies
Our search strategy was peer-reviewed (McGowan et al., 2016). Complete search strategies and results will be available in a publicly accessible data repository (https://doi.org/10.5683/SP3/ZC6BN2). Search strategies are also presented in Appendix 1.
Electronic Searches
Articles for review will be sought by searching the MEDLINE, EMBASE, PsycInfo, CINAHL, and Cochrane Central databases from database inception using a search strategy designed by an experienced health sciences librarian. Searches will not be restricted by language or publication status.
Searching Other Resources
In addition to database searches, we will manually review references from included RCTs and any relevant systematic, scoping, or narrative reviews that we identify, conduct a forward-citation search of included RCTs via Google Scholar, search clinical trial registries and query authors of included RCTs about unpublished trials, and search organizational websites (e.g., Patient-Centered Outcomes Research Institute, James Lind Alliance). We will not search for additional grey literature because it is unlikely that RCTs that meet our eligibility criteria would be disseminated through other types of venues not included in our search strategy. After the initial search, automated searches will be set for monthly updates to facilitate continual review and update.
Data Collection and Analysis
Description of Methods Used in Primary Research
Eligible publications will report findings of a randomized controlled trial.
Selection of Studies
The results of the initial search and subsequent searches will be uploaded into the systematic review software DistillerSR (Evidence Partners, Ottawa, Canada), where duplicate references will be identified and removed. Two investigators will independently review studies for eligibility. Titles and abstracts will be reviewed in random order. If either reviewer deems a study potentially eligible based on title and abstract review, full-text review will be conducted, also independently, by two reviewers. Discrepancies at the full-text level will be resolved through consensus, with a third investigator consulted as necessary. To ensure the accurate identification of eligible studies, a coding manual with inclusion and exclusion criteria was developed and will be pretested. See Appendix 2 for title and abstract and full-text review coding manuals.
Data Extraction and Management
For each included RCT, one investigator will extract data using a pre-specified standardized form, and a second reviewer will validate the extracted data using the DistillerSR Quality Control function. Reviewers will extract (1) study characteristics (e.g., first author last name, publication year, journal, country of corresponding author, trial design, allocation ratio, randomization level); (2) participants’ characteristics and demographics (e.g., main study eligibility criteria, recruitment method, country(ies), total number of participants randomized, sex and gender, age); (3) type of research disseminated (e.g., trials, systematic reviews, cross-sectional studies) and the target audience for dissemination (e.g., study participants, otherwise eligible participants, or members of the public without relevant lived experience related to the disseminated research); (4) intervention components (e.g., descriptions of the intervention and comparator tools, including cost or resource used and tool development process, number of participants randomized to intervention and control groups); (5) outcomes, including number of participants analyzed for each outcome; and (6) funding sources and declarations of interest of included studies. Reviewers will also use an adapted version of the Template for Intervention Description and Replication (TIDieR) checklist (see Appendix 3) to assess the completeness of intervention reporting (Hoffmann et al., 2014). For any reports requiring translation, data will be collected and translated by bilingual reviewers. Disagreements will be resolved by consensus, with a third investigator consulted as necessary.
Assessment of Risk of Bias in Included Studies
Two reviewers will independently assess included studies for risk of bias using the Cochrane Risk of Bias 2 tool (Sterne et al., 2019) with ratings entered into a DistillerSR form.
Measures of Treatment Effect
For dichotomous outcomes, we will report relative risks between groups with 95% confidence intervals (CIs). For continuous outcomes, Hedges’ g will be used to calculate standardized mean differences (Hedges, 1982). We will prioritize intent-to-treat over per-protocol or other complete-case analyses.
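As an illustrative sketch of these effect measures (the function names and example values below are ours, not part of the protocol), Hedges’ g applies a small-sample correction to the standardized mean difference, and relative risks receive confidence intervals computed on the log scale:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    df = n1 + n2 - 2
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / sd_pooled       # Cohen's d
    j = 1 - 3 / (4 * df - 1)        # small-sample correction factor J
    return j * d

def relative_risk_ci(events1, n1, events2, n2, z=1.96):
    """Relative risk with a 95% CI constructed on the log scale."""
    rr = (events1 / n1) / (events2 / n2)
    se_log = math.sqrt(1/events1 - 1/n1 + 1/events2 - 1/n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi
```

In practice these quantities would be computed with established meta-analysis software; the sketch is only to make the planned measures concrete.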
Unit of Analysis Issues
The unit of analysis will be the study. For crossover trials in which participants evaluate more than one tool, only data from the first tool presented to participants will be analyzed given the high likelihood of carry-over effects when multiple tools are presented to the same participants (Higgins et al., 2024).
Criteria for Determination of Independent Findings
For multiple reports of the same study, the most complete data will be used to avoid duplication. When studies report conceptually similar outcomes (e.g., different satisfaction measures), effect sizes will be combined or averaged to ensure each study contributes only one independent result in each relevant outcome domain to the analysis. If multiple outcome measures that assess the same outcome domain are included in a study, effect sizes from all measures will be synthesized before being entered into the meta-analysis.
Dealing With Missing Data
Given the anticipated trial designs, we do not expect that many studies will impute missing data. Missing data, however, can occur in trials where participants access a communication tool but do not complete outcome assessments. In this case, we will prioritize multiple imputation analyses, then last observation carried forward or similar analyses, followed by analysis of all available data, and finally complete case analysis. It is similarly unlikely that we will encounter change scores or baseline-controlled post-intervention comparisons. If we do, however, we will prioritize post-intervention comparisons adjusted for baseline values, then comparisons of change scores, followed by unadjusted comparisons, and then comparisons using inappropriate covariate adjustment (e.g., adjustment for variables unlikely to have been specified a priori, insufficient sample size per adjustment variable). If we are not able to use a study’s data in a meta-analysis because, for instance, only p values are reported, we will query authors to attempt to obtain eligible outcome data. If the full results are not obtained, we will present what was provided in publications in tables.
Assessment of Heterogeneity
The I2 statistic will be used to assess the heterogeneity of included trials (Higgins & Thompson, 2002). If an adequate number of studies are available, prediction intervals will be reported to estimate the expected range of true effect sizes (Deeks et al., 2024).
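For illustration, the I² statistic can be derived from Cochran’s Q, the inverse-variance weighted sum of squared deviations from the pooled estimate. The sketch below is our own, with hypothetical inputs, and is not part of the protocol:

```python
def i_squared(effects, variances):
    """Cochran's Q and the I^2 heterogeneity statistic (Higgins & Thompson, 2002)."""
    w = [1 / v for v in variances]  # inverse-variance weights
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    # I^2 is the proportion of variability beyond chance, floored at 0
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2
```

For example, three equally precise studies with effects spread from 0.2 to 0.8 and within-study variances of 0.04 yield Q = 4.5 on 2 degrees of freedom, corresponding to I² of roughly 56%.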
Assessment of Reporting Biases
To assess selective outcome reporting bias, we will compare what was pre-specified in trial registrations, protocols, and statistical analysis plans, if available, with the published results to identify discrepancies or omissions that suggest selective reporting of outcomes. If these are not available, we will inquire with study authors.
We will assess publication bias using funnel plots if there are at least 10 studies included for a given intervention. If funnel plot asymmetry is detected, we will use Egger’s test for continuous outcomes and regression-based methods (e.g., Harbord’s or Peters’ test) for dichotomous outcomes to explore potential sources. Following Cochrane guidance, publication bias will be considered one of several possible explanations for asymmetry, and we will assess its implications alongside qualitative signals and additional sensitivity analyses (Page et al., 2024).
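As a concrete illustration of one of the named methods, Egger’s regression fits standardized effects against precision; an intercept far from zero suggests funnel plot asymmetry. The helper below is our own minimal sketch (in practice established software would compute the test and its p value):

```python
def egger_intercept(effects, std_errors):
    """Intercept of Egger's regression of standardized effect on precision.
    A non-zero intercept suggests funnel plot asymmetry."""
    y = [e / se for e, se in zip(effects, std_errors)]  # standardized effects
    x = [1 / se for se in std_errors]                   # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx                                   # ordinary least squares
    return my - slope * mx                              # regression intercept
```

With perfectly symmetric data, where every study estimates the same effect regardless of its standard error, the intercept is zero.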
Data Synthesis
Meta-analyses will be considered if at least two eligible RCTs assess the effectiveness of similar tools, report comparable outcomes in similar populations, and if the trials are of sufficiently high quality to draw conclusions based on judgments about risk of bias and sample size. When studies are synthesized meta-analytically, data will be pooled using the DerSimonian Laird random effects model (DerSimonian & Laird, 1986). We do not anticipate data from cluster randomized trials or multiple dependent effect sizes from within a single trial or from multiple dependent samples that test the same intervention combinations. If this occurs, we will amend our analysis plan prior to evaluating outcome data. If meta-analysis cannot be performed, we will describe results from included RCTs qualitatively (Bender et al., 2018).
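The DerSimonian-Laird model cited above estimates the between-study variance (tau²) from Cochran’s Q and re-weights studies accordingly. The following is a minimal sketch with hypothetical data, omitting the refinements of production meta-analysis software:

```python
import math

def dersimonian_laird(effects, variances, z=1.96):
    """DerSimonian-Laird random-effects pooled estimate with a 95% CI."""
    w = [1 / v for v in variances]                 # fixed-effect weights
    sw, sw2 = sum(w), sum(wi**2 for wi in w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sw - sw2 / sw
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_star = [1 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, pooled - z * se, pooled + z * se
```

Note that the random-effects confidence interval is wider than the fixed-effect interval whenever tau² exceeds zero, reflecting the extra between-study heterogeneity.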
Subgroup Analysis and Investigation of Heterogeneity
If sufficient data are available, we will conduct subgroup analyses to examine whether the comparative effectiveness of tools varies by (1) participant characteristics (medical condition, sex or gender, age, race or ethnicity, education level, health literacy level), (2) design of the study being communicated (e.g., RCT, test accuracy, observational, qualitative), and (3) participant type (study participants, others with relevant lived experience, or combined samples). We will also attempt to investigate heterogeneity through meta-regression to explore how study-level characteristics, such as participant demographics, tool types, and methodological quality, influence effect sizes. Meta-regression will be conducted using a random-effects model.
Sensitivity Analysis
We will consider conducting sensitivity analyses that include only trials assessed as not being at high risk of bias.
Treatment of Qualitative Research
We do not plan to include qualitative research.
Summary of Findings and Assessment of the Certainty of the Evidence
We will complete a summary of findings table following guidance from the Cochrane Collaboration (Schünemann et al., 2024). We will assess the certainty of evidence using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach (Guyatt et al., 2008).
Patient Engagement
This review will be conducted by researchers in consultation with a team of 12 people living with the rare autoimmune disease systemic sclerosis (SSc; scleroderma) who are members of the Scleroderma Patient-centered Intervention Network (SPIN; https://www.spinsclero.com/) Patient Engagement Advisory Team. Members of the advisory team were recruited based on their involvement with other SPIN project-specific patient advisory teams and their experience in patient engagement activities.
SPIN is an international collaboration of researchers, clinicians, patient organizations, and people living with SSc that is dedicated to understanding the needs of people living with SSc and developing, testing, and disseminating interventions to support coping and quality of life. This review represents a key component of a program to improve SPIN’s patient engagement, including improving communication of research results. It will also provide critical information to inform trials of communication tools and inform strategies to communicate research results in other areas of health research. Members of the advisory team participated in selecting this review as a priority and reviewing outcomes to be evaluated. They will also participate in interpretation and dissemination of results. They represent a spectrum of people with SSc with respect to age, gender, country or region, and employment status.
Supplemental Material
Supplemental Material for Comparative Effectiveness of Tools for Communicating Health Research Results to Study Participants and Others With Relevant Lived Experience: A Living Systematic Review of Randomized Controlled Trials by Elsa-Lynn Nassar, Claire E. Adams, Danielle B. Rice, Amanda Wurz, Annabelle South, Jill Boruff, Meira Golberg, Marie-Eve Carrier, Susan J. Bartlett, Katie Gillies, Agnes Kocher, Linda Kwakkenbos, Mwidimi Ndosi, Matthew R. Sydes, Andrea Benedetti, Brett D. Thombs, the SPIN Patient Engagement Advisory Team in Campbell Systematic Reviews
Footnotes
Author Contributions
Content: Elsa-Lynn Nassar, Claire Adams, Danielle Rice, Amanda Wurz, Annabelle South, Marie-Eve Carrier, Susan Bartlett, Katie Gillies, Agnes Kocher, Linda Kwakkenbos, Mwidimi Ndosi, Matthew Sydes, Brett Thombs.
Systematic review methods: Annabelle South, Andrea Benedetti, Katie Gillies, Linda Kwakkenbos, Brett Thombs.
Statistical analysis: Andrea Benedetti, Meira Golberg.
Information retrieval: Jill Boruff.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: We have not received financial support for the review. Ms. Nassar was supported by a Fonds de Recherche du Québec – Santé (FRQS) Doctoral Research Award, Dr. Adams by a Canadian Institutes of Health Research (CIHR) Banting Postdoctoral Fellowship, and Dr. Thombs by a Tier 1 Canada Research Chair, all outside of the present work.
Declaration of Conflicting Interest
The authors declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Katie Gillies, Annabelle South, and Matthew Sydes declared a conflict of interest related to authoring one study that will likely be included in the systematic review (South et al., 2021). All other authors declare no personal, political, academic, financial, or other potential conflicts.
Preliminary Timeframe
Approximate date for submission of the systematic review: August 2026.
Plans for Updating This Review
Data and Analytic Code
The data and analytic code will be included as supplementary materials.
