Abstract
Prevalent user bias, illustrated here in the context of treatment with medication, occurs when a medication-using sample is depleted by the experience of medication use, which separates the original sample into former users and continuing users (“prevalent users”). Recruiting a sample after such a change has occurred results in a biased sample. This is problematic when the biasing influence is relevant to the outcome being studied. Three hypothetical studies are described to illustrate prevalent user bias: a cross-sectional study, a longitudinal observational study, and a randomized controlled trial (RCT). Concepts related to prevalent user bias are discussed. For example, this bias may help explain the well-known obesity paradox; and, performing completer analyses in RCTs is fallacious because such analyses examine outcomes in “prevalent users.” Prevalent user bias can be avoided by recruiting only new users. If a study recruits “prevalent users,” contamination by prevalent user bias should be considered. Finally, in longitudinal studies, reasons for dropout should be ascertained to determine whether those reasons influence outcomes through prevalent user bias.
Categories of bias described in research include selection bias, information bias, confounding, and other forms of bias, along with their subtypes.1 Earlier articles in this column discussed confounding, lead time bias, and immortal time bias.2–4
Prevalent user bias5 is a type of selection bias that overlaps with, or is synonymous with, certain other named biases, such as survivor bias and active user bias. Differences among these terms lie in the context of occurrence, in which terms authors prefer, and in how they use them. Prevalent user bias is important to recognize because it may distort findings and result in faulty conclusions being drawn in cross-sectional studies as well as in longitudinal studies (including randomized controlled trials [RCTs]). The possibility of prevalent user bias should therefore be kept in mind when reading published research as well as when designing one’s own studies.
This article explains prevalent user bias, and why it is important for readers and investigators to understand it in the context of different study designs. Readers are encouraged to examine the hypothetical studies presented in the next three sections and to consider what might go wrong.
Cross-sectional Study
Emotional blunting and anorgasmia are known but uncommonly reported adverse effects of selective serotonin reuptake inhibitors (SSRIs). Estimates of how common these adverse effects are vary widely. I therefore plan a pharmacovigilance study using a structured method of assessment to formally identify the prevalence of these adverse effects in patients assessed cross-sectionally; that is, in a single interview and at a single point in time. In this study, I plan to sample all consenting outpatients who are on active treatment with an SSRI. What could go wrong in this study?
Longitudinal Study
Olanzapine is known to be associated with an increased risk of type 2 diabetes mellitus (T2DM). Because most of the data on the subject have been published in Western studies, because South-Asians are at increased risk of T2DM, and because I work with South Asian patients, I decide to conduct a longitudinal observational study to formally identify the presence and frequency of new-onset T2DM in patients undergoing active treatment with olanzapine, all of whom are free of T2DM at the time of recruitment. What could go wrong in this study?
Randomized Controlled Trial
An RCT is a special type of longitudinal study. Instead of merely observing patients, as described above, I randomize patients receiving olanzapine, whom I find to be nondiabetic at baseline, to either continue on olanzapine or switch to aripiprazole. At 12 months, I examine how many patients in each group develop prediabetes or diabetes. What could go wrong in this study?
Answers
In the SSRI study, I might find that emotional blunting and anorgasmia are both infrequent. One explanation is that this is a true finding in the population being studied. Another explanation is that I underestimated the true prevalence of these adverse effects because I recruited not new users but “prevalent users”; that is, patients who had been receiving SSRIs for an unspecified duration. Thus, my sample might have included patients who had been receiving SSRIs for several months, even several years. Such a sample might have been enriched for freedom from SSRI adverse effects because patients who experienced adverse effects, including emotional blunting and anorgasmia, might have switched to another antidepressant, such as mirtazapine. This would leave me to falsely conclude that emotional blunting and anorgasmia are uncommon adverse effects among the SSRI “prevalent users” whom I assessed.
In the observational study of olanzapine and T2DM, I might find that few patients transition into prediabetes and diabetes at long-term follow-up. This may be a true finding; but it is also possible that, because I recruited “prevalent users” of olanzapine, patients who experienced increased appetite, weight gain, and laboratory indices of metabolic dysregulation had, before my study, selected themselves out of my sample; that is, they might have switched away from olanzapine to antipsychotics with a better metabolic profile, such as aripiprazole and cariprazine. So, I might have unknowingly recruited an olanzapine-treated sample at lower risk of T2DM, and my finding of a low risk of diabetes in these long-term olanzapine “prevalent users” could be a false finding.
In the RCT comparing olanzapine and aripiprazole, as explained above, the sample of olanzapine patients may have been enriched for a lower risk of T2DM. As a result, at the end of the RCT, I may find a less-than-expected metabolic advantage for aripiprazole. So, RCTs are also vulnerable to prevalent user bias.
Prevalent User Bias
In all the examples provided, the common theme is that “prevalent users” were recruited, and that the sample was characterized by risk depletion.
Prevalent user bias occurs when what defines a user is changed by the experience of use, separating the original user sample into former users and continuing users (“prevalent users”). Recruiting a sample after a risk-depleting change has occurred results in a biased sample. This is problematic when the biasing influence is relevant to the outcome being studied, as illustrated in the three hypothetical studies described in this article.
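The mechanism can be illustrated with a minimal simulation sketch. The numbers are purely illustrative assumptions, not estimates from any study: suppose the true rate of an adverse effect among all starters of a drug is 30%, and that 70% of affected patients switch away before a cross-sectional survey. A prevalent-user design, which assesses only continuing users, then substantially underestimates the true rate, whereas a new-user design recovers it.

```python
import random

random.seed(42)

N = 100_000       # simulated patients who start the drug
TRUE_RATE = 0.30  # assumed true probability of the adverse effect (illustrative)
P_SWITCH = 0.70   # assumed probability that an affected patient switches away (illustrative)

cohort = []
for _ in range(N):
    has_effect = random.random() < TRUE_RATE
    # Affected patients often discontinue before the cross-sectional survey;
    # unaffected patients continue treatment.
    still_using = (not has_effect) or (random.random() >= P_SWITCH)
    cohort.append((has_effect, still_using))

# New-user design: everyone who started the drug is assessed.
new_user_rate = sum(e for e, _ in cohort) / N

# Prevalent-user design: only continuing ("prevalent") users are assessed.
prevalent = [(e, u) for e, u in cohort if u]
prevalent_rate = sum(e for e, _ in prevalent) / len(prevalent)

print(f"new-user (true) rate estimate:  {new_user_rate:.3f}")   # close to 0.30
print(f"prevalent-user rate estimate:   {prevalent_rate:.3f}")  # biased downward
```

Under these assumptions, the prevalent-user estimate converges on 0.09/0.79 ≈ 0.11 rather than 0.30, because only 30% of affected starters remain in the sampling frame; this is the risk depletion described above.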
Guidance
The best way to avoid prevalent user bias is to recruit only new users, such as new users of SSRIs or olanzapine. If a study recruits “prevalent users,” the possibility of prevalent user bias contaminating the findings should be considered.
The Obesity Paradox
Obesity is well-known to predispose to a range of adverse medical outcomes. Curiously, in some studies, obese persons display better-than-expected outcomes, even in RCTs. This is known as the obesity paradox.6 Prevalent user bias (here, represented as survivor bias, because obesity is not “use”) is one of many explanations for the obesity paradox; obese individuals who are at risk of adverse outcomes would have already experienced the outcome and would therefore be ineligible for inclusion in observational studies or RCTs that examine those outcomes.
Parting Notes
Readers may now understand why performing completer analyses in RCTs is discouraged: a completer analysis examines outcomes in “prevalent users.”
Finally: in any longitudinal study, reasons for dropout should be ascertained to determine whether they influenced outcomes through prevalent user bias.
Acknowledgements
I acknowledge useful comments and suggestions on a draft of this article, received from Dr Vikas Menon, Professor, Department of Psychiatry, JIPMER, Puducherry, India, and Dr Shahul Ameen, Consultant Psychiatrist, St. Thomas Hospital, Changanassery, Kerala, India.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Declaration Regarding the Use of Generative AI
None used.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
