Abstract
The poor quality of online health information remains a central challenge in combating misinformation among patients and the public. However, assessing online health content is difficult for those without medical expertise. This article briefly outlines the development and validation of an evidence-based online health information evaluation tool. A systematic approach with five phases was adopted: (1) synthesizing the current state of the reliability of online health information, (2) conducting content analysis of existing quality assessment tools, (3) drafting a comprehensive list of quality criteria, (4) developing and validating a quality benchmark, and (5) disseminating the results. The quality benchmark was developed and validated with collaborative input from healthcare providers, patients, caregivers, and the public. It consists of 5 quality criteria and 8 accompanying descriptions that define each criterion. A printable version of the benchmark is provided in the article to facilitate easy implementation by both patients and healthcare providers. The benchmark is recommended for use and is intended to empower patients with a skill set to navigate online misinformation, facilitating access to credible health information and promoting improved health outcomes.
Introduction
In the United States, 53.1% of the population seeks health information online, 1 and concurrently, 72% of Americans utilize electronic tools to access information about health-related topics. 2 As the utilization of online health information (OHI) undergoes heightened scrutiny, it is prudent to advocate for cautious consumption. Global organizations, including the World Health Organization, underscore the imperative for preventive measures against the spread of misinformation, often referred to as an “infodemic,” especially in the aftermath of global health crises like the COVID-19 pandemic. 3
In Canada, as in other countries, the impact of misinformation is notable. According to the Council of Canadian Academies, misinformation cost the Canadian healthcare system $300 million during the first 9 months of the COVID-19 pandemic response in 2021. 4 This figure excludes additional expenses such as outpatient medications, physician compensation, and the long-term effects of COVID-19. Furthermore, misinformation contributed to vaccine hesitancy, leading to increased hospitalizations and deaths.4,5 In the United States, as of October 2021, 22% of Americans remained unvaccinated because of COVID-19 misinformation and disinformation (MIDI), which cost between $50 and $300 million every day. 6 The credibility of OHI thus remains a pressing concern owing to the coexistence of trusted and untrusted sources. The distortion of health information on the internet is a pervasive issue that extends beyond this recent health crisis.
Health misinformation can be defined as “a health-related claim that is based on anecdotal evidence, false, or misleading owing to the lack of existing scientific knowledge”. 7 In contrast, health disinformation pertains to “information that is false and spread by someone aware it is false and is intent on misleading people”. 8 This multifaceted misuse of information poses significant challenges to patients’ health and necessitates a concerted effort to promote accurate, evidence-based health communication on the web.
The repercussions of online health MIDI are manifold and significantly impact health outcomes. For patients, these inaccuracies not only amplify panic but also contribute to heightened mental and physical consequences. 9 Furthermore, individuals often grapple with compromised medical decision-making in the face of such misleading information. 10 This includes a tendency to misunderstand symptoms, misinterpret conditions, and engage in imprecise medical reasoning, exacerbating the overall deterioration of a patient's health. Addressing and mitigating the spread of inaccurate health information online are crucial for promoting informed decision-making and safeguarding patient health.
For healthcare professionals, the prevalence of health MIDI erodes trust and reliability among patients. 11 Despite possessing robust health literacy and clinical knowledge, these professionals find it necessary to employ strategies to rectify unreliable information. 12 Beyond the inherent risk of distorting the interpretation of scientific evidence, these inaccuracies can intensify the polarization of opinions. Additionally, the acceptance of MIDI may lead to the misuse of patients’ resources, compounding the negative consequences. 12 Service delivery goals may therefore be better achieved by transferring and interpreting evidence through professionals’ experiential knowledge of services and approaches, helping patients integrate information to transform their lives.13,14 The benefits of OHI are contingent on patients, and on those who facilitate their health, being able to find relevant information and discriminate between high-quality information and MIDI. 14 Extensive access to reliable health information provides the opportunity to inform, teach, and connect professionals and lay people. The widespread use of the internet as a source of health information has thus become a double-edged sword for patients and professionals: it provides easy access to a wealth of information, but it also exposes users to harmful content. Moreover, health MIDI contributes to widespread confusion about critical scientific findings and scientific information at large, thereby undermining patient and public health efforts. 15 Addressing and countering these challenges are vital for maintaining trust in healthcare professionals and ensuring the effective dissemination of accurate health information.
Avoiding online health MIDI is challenging for several reasons. First, although substantial research has emphasized health literacy and patients’ awareness of health misinformation, health misinformation differs from other types of MIDI, such as political misinformation, where checking multiple sources is recommended; instead, it stems largely from a lack of knowledge about medicine and healthcare. 16 Second, although the scientific literature recommends detection methods to catch MIDI, the creators of this kind of information deliberately use ambiguous language and strategies to bypass detection. 17 Third, patients’ inability to judge the credibility of OHI consolidates the diffusion of MIDI. 18
To bridge this gap, we systematically developed a quality benchmark in five phases: (1) determining the current state of the reliability of OHI,19,20 (2) conducting content analysis of existing quality assessment tools identified from Step 1, (3) drafting a comprehensive list of quality criteria (QC), 21 (4) developing and validating a quality benchmark, and (5) disseminating the results. This article briefly outlines step four, the development and validation of an evidence-based OHI evaluation tool, a quality benchmark, shaped by collaborative input from healthcare providers, patients, caregivers, and the public. 22
Methods
Development and Validation of a Quality Benchmark
The detailed protocol encompassing the five aforementioned steps is published elsewhere. 21 A brief description of these steps is provided below:
In Step 1, we performed two comprehensive systematic reviews assessing the current state of the quality (reliability and readability) of OHI targeted at patients and the public.19,20 The studies revealed that understanding OHI typically requires at least a Grade 12 or college-level education, indicating that much of the health information available on websites is not accessible to a significant portion of patients and the public. Additionally, our reviews uncovered widespread deficiencies in the quality of OHI, highlighting a considerable gap in the availability of reliable content. During these reviews, we compiled a list of 17 OHI assessment tools, yet encountered challenges in identifying a validated universal tool for patient use.
Moving to Step 2, we conducted a content analysis of the tools (commonly known as checklists, codes of conduct, and benchmarks) using the Delphi consensus method. Our analysis is detailed in a synthesis article currently under peer review. Accessing these tools can be quite challenging: they often lack user-friendliness and are poorly designed, making them difficult for both patients and the public to navigate. Additionally, the quality of these tools often falls short, and their readability is hindered by demanding reading levels that make them hard to understand. Moreover, there are no universally accepted standards that effectively accommodate the diverse needs of various stakeholders.
Consequently, in Step 3, we drafted a set of QC—principles aimed at ensuring the adequacy of health content on a website, such as authorship, accessibility, readability, and other key factors. These criteria encompassed 9 unique quality principles and 32 corresponding domains, which we called “Descriptions,” elucidating the specific attributes of each principle. This list was crafted by consulting the tools identified through the systematic reviews.
In Step 4, focus groups involving 25 participants were conducted in Quebec, Canada. Participants were presented with the definitions of the QC and a detailed explanation of the “Descriptions.” Through interactive discussions, participants expressed their preferences and voted on the criteria they considered most important when seeking OHI. A detailed methodology of these focus groups has been published in a previous article. 22
Based on the final selection of these QC and accompanying descriptions, we developed a quality benchmark to serve as a reference point for evaluating the quality of OHI. To validate the benchmark, we employed two methods: “Member Checking” and a “Usability Test” involving a representative sample of 12 participants randomly chosen from the focus group pool.
Member Checking: During the “Member Checking” process, participants were provided with the QC discussed in the focus group workshops along with the benchmark developed afterward. They were asked to review the benchmark for accuracy and suggest any modifications. Based on their feedback, the QC and the benchmark were refined and then used in a subsequent “Usability Test.”
Usability Test: During the “Usability Test,” participants were provided with the refined benchmark, links to two websites of varied quality (www.webmd.com and www.reyvow.com), and a feedback form. They were instructed to use the benchmark to assess the content of the websites. Two Likert-scale questions (1—Not Useful to 10—Extremely Useful) were posed: “How useful do you think the Quality Benchmark is?” and “How useful were the quality criteria for evaluating the information found on the websites?” Additionally, three open-ended questions sought detailed feedback on their experience using the benchmark, challenges encountered, intentions to use it, and any recommendations for modifications.
Results
Member Checking: Participants provided feedback suggesting a reduction in the number of QC and improvements to the descriptions and format. Below is a sample of their feedback:

“The revised version seems to have reflected what was discussed in the workshop and has become much more simpler and easier to read/understand. It gets straight to the point and isn’t in complicated language, which allows for everyone to be able to access it regardless of their reading abilities.”

“Yes, I think using this Quality Benchmark when searching for health information is very important, because it provides a guideline on how health information should be organized, distributed and what the audience should expect to look for in terms of reliability/credibility when searching for health information.”

“I can see how something like this might be something to teach in a school setting to students learning about health literacy and evaluating information.”
The participants also suggested modifications to the benchmark, including renaming certain criteria and simplifying the overall structure. For example, the criterion originally labeled “User-friendliness” was revised to “Accessibility.” One key challenge identified was the time required to use the benchmark during website evaluations. To address this, we streamlined the benchmark by reducing the number of QC to five, accompanied by eight concise descriptions. As one participant noted: “If there was a way to make a shortened version of the benchmark, it might be more helpful for consumers….”
We also conducted individual interviews with five randomly selected experts, including a primary care physician, clinician scientists specializing in digital health literacy, a pharmacist, a scientific advisor specializing in knowledge transfer for public health, and an IT architect. They shared valuable insights into the quality benchmark and offered strategies for dissemination and implementation. The findings from these interviews will be featured in an upcoming publication.
The final version of the benchmark is presented below.
Next Steps
The focus group workshops included participants who spoke English and French, prompting us to develop the benchmark in French as well. The French benchmark will be featured in an upcoming publication in a French journal. Additionally, the benchmarks will be validated with an international sample to ensure their relevance to a more diverse audience. This sample will encompass individuals with varying levels of education, age, sex, gender, ethnicity, literacy, and linguistic backgrounds. To enhance accessibility and ensure ease of understanding, we will apply readability formulas (https://readabilityformulas.com/free-readability-calculators/) to make the benchmark comprehensible to laypersons.
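For readers curious how such readability formulas work, the widely used Flesch-Kincaid Grade Level estimates the US school grade needed to understand a text from its average sentence length and average syllables per word. The sketch below is illustrative only and is not the tool used in this study; the function names and the crude vowel-group syllable heuristic are our own simplifications, whereas accurate tools use dictionary-based syllable counts.

```python
import re

def count_syllables(word):
    """Rough syllable estimate: count runs of vowels (heuristic, not dictionary-based)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Short, simple sentences score at an early grade level;
# long words and long sentences push the score up sharply.
easy = flesch_kincaid_grade("The cat sat on the mat.")
hard = flesch_kincaid_grade(
    "Comprehensive systematic evaluation methodologies "
    "necessitate considerable institutional collaboration.")
```

A text aimed at laypersons is commonly targeted at roughly a Grade 6-8 score, which is why health content requiring Grade 12 or college-level reading, as found in Step 1, excludes many readers.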
In Step 5, we will implement a list of dissemination and implementation strategies to distribute the quality benchmark to diverse groups, clinicians, and organizations. This Research Brief is itself part of that dissemination strategy, bringing the benchmark to scientific communities as well as to clinicians and patients.
Limitations
The quality benchmark was validated by a small group of participants, most of whom were aged 45 and older and had higher levels of education. As a result, it may not fully capture the preferences of a more diverse group. This limitation will be addressed in an upcoming international survey designed to ensure the benchmark's generalizability across a broader and more diverse patient population. Another potential limitation is using the benchmark without a supporting guide. To mitigate this, we will create an easy-to-follow guide in different languages that defines each criterion and explains how to use the benchmark when evaluating OHI.
Discussion and Implications
The quality benchmark aims to empower patients to navigate misinformation, facilitating access to reliable health information online and promoting improved health outcomes. Each quality criterion was carefully selected and validated by patients and members of the public, aligning with their needs and preferences and supported by the scientific literature.
The quality benchmark provides a guideline for assessing OHI for both patients and clinicians. It has been suggested that health MIDI should be incorporated into digital education curricula, educating patients on how to find, evaluate, validate, and cross-reference information from sources before adopting recommendations found on social media platforms. While patients can significantly contribute to mitigating the impact of MIDI on epidemic spread, the responsibility to counteract health misinformation should not rest solely on individuals. 23
The Quality Benchmark provided in this article is readily available for integration into patient care, with appropriate copyright considerations. Health professionals play a crucial role in patient education. Encouraging patients to utilize this benchmark is essential for avoiding misinformation and promoting access to credible health information on the web. Organizations are urged to endorse this tool to enhance patient health and well-being. Additionally, health information dissemination channels, including search engines and web content developers, should also incorporate filtering options aligned with this benchmark.
Footnotes
Acknowledgments
The authors are grateful to the study participants for sharing their valuable experiences and knowledge in the study.
Authors’ Contribution
Daraz is the principal investigator who conceptualized the study, developed methods, contributed to data collection, analysis, developed the benchmark, and wrote the manuscript. Dogu contributed to data collection and analysis and partially wrote the manuscript.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Ethical Statement
The study was approved by the University of Montreal's Research Ethics Board, the Comité d’éthique de la recherche en arts et humanités (CERAH), (CERAH-2022-028-D).
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was funded by a Knowledge Development Grant from the Social Sciences and Humanities Research Council (SSHRC), # 430-2021-0056, a Research Support for Emerging Professors Grant from the Fonds de recherche du Québec – Société et culture (FRQSC) # 2023-NP-311649, and a FRQS New Research Centres and Institutes Program from the Centre de recherche en santé publique (CReSP) awarded to Daraz.
