Abstract
Background
In primary care, low health literacy, particularly reading ability, is associated with worse health outcomes. Most physicians do not receive feedback on the reading levels of written communication that they may provide to patients, including result letters.
Objective
Our study compares the readability of result letters written by resident versus attending physicians to patients who screened positive or negative for limited reading ability, as determined by the single-item literacy screener (SILS).
Methods
Result letters to 50 patients at high risk and 50 patients at low risk of low reading ability were randomly selected starting from January 1st, 2020 at Albany Medical Center. Flesch–Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman–Liau Index (CLI), Simple Measure of Gobbledygook (SMOG), and Flesch Reading Ease (FRE) were used to compare the readability of resident versus attending result letters.
Results
For all SILS levels, attending physicians wrote result letters at a lower grade level than resident physicians based on the FKGL, GFI, and SMOG indices. The FKGL, GFI, and SMOG readability scores of result letters written to patients with SILS 3–5 were also lower when written by attending physicians compared to resident physicians.
Conclusions
Result letters written by attending physicians may be easier to read than result letters written by resident physicians, especially for patients with low reading ability. Future electronic health record (EHR) software should give physicians and providers feedback on the reading level of their written communication.
Introduction
Personal health literacy is the degree to which individuals have the capacity to obtain, process, and understand basic health information to make health decisions.1 Low health literacy is associated with decreased access to care, decreased use of preventive healthcare services, increased prevalence of chronic disease, and poorer health outcomes.2–4 The average US citizen reads at an 8th-grade level,5 and reading levels may be lower among individuals of color or of lower socioeconomic status. Educational and community-level interventions such as the Questions Are the Answer and Ask Me 3 campaigns have attempted to address health literacy outside of primary care centers.6–8 However, evidence to support the long-term effectiveness of these interventions is scant.9,10
The single-item literacy screener (SILS) is used by physicians in patient-centered medical homes to identify patients with limited reading ability, one component of health literacy.11,12 Previous studies have not evaluated whether written correspondence to patients, such as result letters discussing findings from radiology or laboratory studies, is written at an appropriate grade-reading level. Result letters written above an individual's reading level may hinder a patient's ability to understand their health information. Furthermore, primary care practices certified as patient-centered medical homes are required to risk stratify patients to determine their risk of increased health care utilization. Previous studies have not compared risk stratification scores (RSSs) with patient reading level. Finally, because ours is an academic practice, the reading levels of result letters sent to patients may be affected by physician experience, that is, whether the author is a resident or an attending. Resident physicians typically overestimate the clarity of their verbal communication.13 The aim of our study is to identify differences in reading level between resident- and attending-written result letters, categorized by patients' SILS scores, in an Upstate New York academic primary care practice.
Materials and Methods
Participants
One hundred patients, ages 18 and above, at high risk (n = 50) and low risk (n = 50) for low reading ability, as identified by the SILS, were randomly selected from the Albany Medical Center Internal Medicine/Pediatrics practice, with service dates on January 1, 2020 or later. This criterion identified patients who were considered active at the time of the study. Providers in the practice were either resident physicians (PGY 1–4) or attending physicians. Patient charts were excluded if they did not include written provider text or if the patient was non-English-speaking.
The SILS score is routinely collected at each visit (including complete physical exams and follow-up visits) via a paper questionnaire and entered into the electronic health record (EHR). SILS scores of 1 and 2 identify patients at low risk for low reading ability; scores of 3, 4, and 5 identify patients at elevated risk. To detect a difference of one grade level between the two groups with an alpha of 0.05 and 80% power, a minimum sample size of 32 per group was needed. Physicians typically send written communication in the form of result letters to their patients after an encounter to discuss results from laboratory or radiology studies. Result letters are mailed via the postal service to the home address and are also immediately available on the patient portal for patients with portal access.
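As an illustration, a per-group sample size of this kind can be approximated with Python's standard library using a two-sided, two-sample normal approximation. The standard deviation of 1.4 grade levels below is a hypothetical value chosen for illustration, not one reported in this study; the approximation yields 31 per group, slightly below the reported 32, which likely reflects a t-distribution correction.

```python
from math import ceil
from statistics import NormalDist

def two_sample_n(delta: float, sd: float,
                 alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group n to detect a mean difference `delta` between two groups,
    two-sided test, normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Detect a one-grade-level difference, assuming an SD of ~1.4 grade levels
# (hypothetical; the study does not report the SD used in its calculation).
n = two_sample_n(delta=1.0, sd=1.4)
```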
Demographic information, including gender, race, ethnicity, age, the type of physician (resident or attending) who authored the result letter, and RSS, was collected. The RSS is a numeric value assigned to patients by their provider to estimate future costs based on current comorbidities.14 A higher RSS indicates multiple chronic or severe acute diagnoses and greater projected healthcare cost. In our practice, a score of 1 to 3 is considered standard risk, whereas a score of 4 is considered high risk. The RSS has not previously been utilized as a variable of interest in studies assessing health literacy, despite being used in patient-centered medical homes to identify medically complex patients.
Readability of Result Letters
The readability of letters was measured using five validated scores: Flesch–Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman–Liau Index (CLI), Simple Measure of Gobbledygook (SMOG), and Flesch Reading Ease (FRE), calculated with readable.io©. Each of these scores applies different weights to sentence length, number of sentences, number of syllables, and character count to calculate readability.15 Multiple measures of grade-reading level were chosen to improve the reliability of the results. In addition, because each score determines readability from different factors, individual scores can suggest ways to improve the reading level of a text. The FRE measures difficulty on a scale from 0 to 100, with higher scores representing less difficulty and scores ≤ 60 considered difficult; it is calculated from the average number of syllables per word and the average number of words per sentence.16 In this study, the most recent result letter written to each patient was analyzed, as this letter represents the most current communication pattern of the patient's primary care physician.
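As a rough illustration of how such indices are computed, the two Flesch formulas reduce to functions of words per sentence and syllables per word. The Python sketch below uses a naive vowel-group syllable counter, a simplification on our part, so its output will differ somewhat from readable.io©'s.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels, dropping a
    # trailing silent 'e'; every word gets at least one syllable.
    word = word.lower()
    if word.endswith("e") and not word.endswith(("le", "ee")):
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def _counts(text: str):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return len(sentences), len(words), syllables

def flesch_kincaid_grade(text: str) -> float:
    # FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    s, w, syl = _counts(text)
    return 0.39 * w / s + 11.8 * syl / w - 15.59

def flesch_reading_ease(text: str) -> float:
    # FRE = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    s, w, syl = _counts(text)
    return 206.835 - 1.015 * w / s - 84.6 * syl / w
```

Shorter sentences and fewer polysyllabic words lower the FKGL and raise the FRE, which is why the editing strategies discussed later in this article improve both.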
Analysis
All lab values, radiology reports, and greetings were omitted from the analysis so that only the authored text was evaluated. Auto-generated text letters were excluded; only free text written by the physician was analyzed, by copying and pasting the authored text into readable.io©. A chi-squared test was used to identify risk factors for low reading ability, and Mann–Whitney U tests were conducted to compare the readability of result letters written by residents versus attendings for all SILS levels. The Mann–Whitney U test was used because the data were not normally distributed. This protocol was approved by the Albany Medical Center Institutional Review Board.
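For reference, the Mann–Whitney U statistic is rank-based and can be sketched in a few lines of Python; in practice one would use scipy.stats.mannwhitneyu, which also returns a P-value.

```python
def mann_whitney_u(x, y):
    """Return (U1, U2) for two samples; ties receive averaged ranks."""
    combined = sorted((v, i) for i, v in enumerate(list(x) + list(y)))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        # Find the run of tied values and assign each the average rank.
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    r1 = sum(ranks[:len(x)])            # rank sum of the first sample
    u1 = r1 - len(x) * (len(x) + 1) / 2
    return u1, len(x) * len(y) - u1
```

For example, with completely separated samples such as [1, 2, 3] versus [4, 5, 6], U1 is 0 and U2 is 9.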
Results
Demographics
Most patients were female (60%), white (80%), non-Hispanic (92%), and seen by attending physicians (79%) (Table 1). The mean age of patients seen by residents was lower than that of patients seen by attendings in both the SILS 1–2 group (47.86 vs. 54.05 years) and the SILS 3–5 group (51.93 vs. 55.80 years). No significant differences were found between the SILS 1–2 and SILS 3–5 groups in sex, race, ethnicity, type of provider, or age of patients seen by a resident or attending (Table 1). High-risk patients, as identified by the RSS, were significantly associated with SILS scores of 3–5 (P = .008) (Table 1).
Demographic Information of Subjects Grouped by SILS Score in an Upstate New York Academic Primary Care Practice.
P-values were calculated using chi-squared tests, except for mean patient age, which was compared using a paired t-test.
Across all SILS levels (1–5), the mean readability of result letters was at a significantly lower grade level for attendings than for residents on the FKGL (6.42 vs. 7.72), GFI (7.87 vs. 9.86), and SMOG (9.34 vs. 10.63) indices. When comparing the mean readability scores of result letters written to patients with SILS 1–2, no significant differences were observed between residents and attendings. However, for patients with SILS 3–5, mean readability scores were significantly lower for attending-written letters than for resident-written letters on the FKGL (6.56 vs. 8.13), GFI (8.03 vs. 10.12), and SMOG (9.54 vs. 10.85) indices (Table 2).
Readability of Result Letter Compared Between Resident Versus Attending Physicians Categorized by SILS Score.
P-values were calculated using Mann–Whitney U tests.
Discussion
The comparison of result letters written by residents and attendings showed a significant difference across all SILS scores, as well as for SILS scores 3–5, on the FKGL, GFI, and SMOG indices. This finding also tracked with patient age, as residents saw younger patients than attending physicians did. Younger patients may be assumed to have higher health literacy and reading ability owing to greater technological exposure when inquiring about illnesses.17 However, younger patients are at the same risk for limited reading ability as older patients. Although younger adults tend to look for health information on the internet at earlier ages than older adults, Subramaniam et al18 found that young adults have difficulty performing effective keyword searches or disregard the credibility of certain online sources. These factors demonstrate how younger individuals may still struggle to comprehend their health information appropriately.
Our study shows that high-risk patients are more likely to have limited reading ability (22% vs. 4%, P = .008). Therefore, the patients who may receive the most communications from their provider may be least likely to understand what is written in the result letter. Previous unpublished data in our practice also showed an association between high-risk patients and SILS scores of 3–5, further illustrating the usefulness of the RSS as a risk factor for low reading ability.19
Comparison of resident versus attending result letters demonstrated better grade-level readability in the letters written by attending physicians on the FKGL, GFI, and SMOG indices, both for all SILS levels and in the SILS 3–5 subgroup analysis (all P's < .05). Although the FKGL, GFI, and SMOG equations vary in the weights they apply to certain syntactical features, significant differences in these indices suggest that attending physicians' letters contain fewer sentences, shorter sentences, and fewer words of three or more syllables.15 The CLI score is distinguished from the other indices by its incorporation of character count.15 Attending- and resident-written result letters did not differ on this index, indicating that their letters have similar character counts and that character count may not significantly affect the readability of result letters. Our results correspond with another single-institution study that investigated the effect of editing glaucoma patient education handouts on grade-level readability.20 Their revisions, which used fewer sentences and more one- to two-syllable words but did not reduce character count, improved the FKGL from 10.0 to 6.4 (P < .001).20 Sheppard et al21 also evaluated methods to improve readability and found that shortening sentences to 15 words or fewer improved the readability of orthopedic patient education materials by an average of 1.41 grade levels. Patients prefer reading only one page, and no more than two or three pages, at a time.20 Result letters with fewer sentences, shorter sentences, and fewer words of three or more syllables may improve readability and encourage patients to read and comprehend their health information.
As access to telemedicine has expanded, along with the accessibility of information in patient portals, result letters should be written at a lower grade level to ensure patient understanding.22 Providing health information at a 5th-grade level is an appropriate goal, since the average Medicare patient reads at a 5th-grade level; this grade level is also recommended by the Joint Commission.23,24 Previous studies show that editing patient education materials can improve grade-level readability by up to 3.6 grade levels.20,21 Encouraging physicians to write with fewer sentences, shorter sentences, and fewer words of three or more syllables may be a practical way to improve the readability of result letters. Furthermore, particular attention to reading level should be paid to patients at high risk of increased medical expenditures, as indicated by a high RSS in our population.
Our study has limitations. First, it utilized a small sample size of 100 with limited ethnic and racial diversity. Because the study focused on English-speaking patients, the findings may have reduced generalizability to practices that serve patients of different racial and ethnic groups; however, restricting the analysis to letters sent to English-speaking patients improved the internal validity of our design. Further research should be done in communities where English is not the primary language. Although we assessed readability, we could not assess reading comprehension, that is, how the patient responds to and understands the information provided.25 However, these readability scores are commonly used and can give physicians feedback on the readability of written material. Lastly, our analysis may only indicate an area for improved training at our specific residency; larger studies in other residency programs are warranted to confirm our findings. Nevertheless, lessons learned in our program can prompt assessment and improvement of readability at other institutions.
Conclusions
There were significant differences in reading level between resident- and attending-written result letters, especially for patients with low reading ability. This may illustrate the lack of practice that trainees have in conveying health information in an easy-to-understand manner compared with attending physicians. Patients with higher RSSs were more likely to have low reading ability. Clinicians should be aware that their high-risk patients may also be at higher risk of limited reading ability. Further research is needed on interventions to assist medically complex patients with low reading ability.
Physicians use result letters to update patients on important laboratory and radiology findings and should be aware of the reading level of these letters. EHR vendors should consider adding functionality, such as readability scoring, to allow providers to assess the reading level of written patient communication. Finally, physicians should pay particular attention to the reading levels of result letters sent to patients with a high RSS, as these patients are at increased risk of limited reading ability.
Footnotes
Acknowledgments
The authors wish to acknowledge Dr Paul Sorum, MD, PhD, for his thoughtful review of the manuscript. They would also like to thank Liz Irish for her assistance in journal review.
Author Disclosures
Dr Wales previously received salary support from Gilead, Inc. for Project FOCUS, a hepatitis C screening program. This relationship ended in January 2021.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
