Background. Observational studies are used to evaluate the effect of breast cancer screening programs, but their validity depends on the study design used. One such design is the evaluation model, which extends follow-up after screening only for women who were diagnosed with breast cancer during the screening program. However, to avoid lead-time bias, the inclusion of risk time should be based on screening invitation, not on breast cancer diagnosis. The aim of this study is to investigate potential bias induced by the evaluation model.
Methods. We used large-scale simulated datasets to investigate the evaluation model. Simulation model parameters for age-dependent breast cancer incidence, survival, breast cancer mortality, and all-cause mortality were obtained from Norwegian registries. Data were restricted to women aged 48 to 90 y and to a period before screening implementation, 1986 to 1995. Simulation parameters were estimated for each of 2 periods (1986–1990 and 1991–1995). In each simulated dataset, 50% of women were randomly assigned to screening and 50% were not. Simulation scenarios varied the magnitude of the screening effect and the level of overdiagnosis. For each scenario, we applied 2 study designs, the evaluation model and ordinary incidence-based mortality, to estimate breast cancer mortality rates for the screening and nonscreening groups. For each design, these rates were compared to assess potential bias.
Results. In scenarios with no screening effect and no overdiagnosis, the evaluation model estimated 6% to 8% reductions in breast cancer mortality due to lead-time bias. Bias increased with overdiagnosis.
Conclusions. The evaluation model was biased by lead time, especially in scenarios with overdiagnosis. Thus, the attempt to capture more of the screening effect using the evaluation model comes at the risk of introducing bias.
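The core of lead-time bias can be illustrated with a minimal simulation: screening advances the date of diagnosis without changing the date of death, so any quantity measured from diagnosis onward improves spuriously. The sketch below is illustrative only and is not the simulation model used in this study; the lead time of 2 years and the survival distribution are arbitrary assumptions chosen for demonstration.

```python
import random

random.seed(0)
LEAD = 2.0   # assumed lead time in years (illustrative, not from the study)
n = 100_000

gain = 0.0
for _ in range(n):
    t_clin = random.uniform(50, 70)                # age at clinical diagnosis
    t_death = t_clin + random.expovariate(1 / 8)   # age at death: unaffected by screening
    t_screen = t_clin - LEAD                       # screening only advances diagnosis
    # Survival measured from diagnosis lengthens by exactly the lead time,
    # even though no death has been postponed.
    gain += (t_death - t_screen) - (t_death - t_clin)

print(gain / n)  # ≈ 2.0: apparent survival gain equals the lead time
```

Because death times are identical in both arms, the apparent survival gain equals the lead time exactly; a study design that lets diagnosis determine inclusion or follow-up can convert this artifact into an apparent mortality reduction.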
Highlights
The validity of observational studies of breast cancer screening programs depends on their study design being able to eliminate lead-time bias.
The evaluation model has been used to evaluate breast cancer screening in recent studies, but it bases the extension of follow-up on breast cancer diagnosis, which may introduce lead-time bias.
We used large-scale simulated datasets to compare study designs used to evaluate screening.
We found that the evaluation model was biased by lead time and estimated reductions in breast cancer mortality in scenarios with no screening effect.