Abstract
Traditional risk-assessment theory assumes the existence of a threshold for non-cancer health effects. However, a recent trend in environmental regulation rejects this assumption in favor of non-threshold linearity for these endpoints. This trend is driven largely by two related concepts: (1) a theoretical assumption of wide-ranging human sensitivity, and (2) an inability to detect thresholds in epidemiologic models. Wide-ranging sensitivity assumes a subpopulation with extreme background vulnerability, so that even trivial environmental exposures are hazardous to someone somewhere. We use examples from the real world of clinical medicine to show that this theoretical assumption is inconsistent with the biology of mammalian systems and the realities of patient care. Using examples from particulate-matter air-pollution research, we further show that failure to reject linearity is usually driven by statistical rather than biological considerations, and that nonlinear/threshold models often have a similar or better fit than their linear counterparts. This evidence suggests the existence of practical, real-world thresholds for most chemical exposures.
BACKGROUND
The traditional assumption in the field of quantitative risk assessment (QRA) has been that non-cancer health effects are associated with thresholds below which chemical exposures cause negligible risk (NRC 1983). This assumption forms the basis for the reference doses/concentrations within the Integrated Risk Information System (IRIS) maintained by USEPA (2011). However, there is an ongoing regulatory trend toward an assumption that exposure concentration-response functions may be linear (through the origin) for non-cancer health effects, including such diverse and complex outcomes as death, heart disease, and reproductive impairment. This trend is being driven largely by two concepts: (1) a theoretical assumption of extreme sensitivity within the diverse human population, and (2) an inability to detect thresholds in epidemiologic models.
In the 2006 proposed rule for particulate matter (PM) regulation, the US Environmental Protection Agency (Federal Register 2006) acknowledged that “it is reasonable to expect that there likely are biologic thresholds for different health effects in individuals or groups of individuals with similar innate characteristics and health status,” but that population thresholds are unlikely due to extreme differences in sensitivity between individuals. The Agency stated the following:
“Individual thresholds would presumably vary substantially from person to person due to individual differences in genetic-level susceptibility and pre-existing disease conditions … Thus, it would be difficult to detect a distinct threshold at the population level, below which no individual would experience a given effect, especially if some members of a population are unusually sensitive” (Federal Register 2006).
This idea is mirrored in the recent National Research Council (NRC 2009) report on advancing risk assessment at EPA, in which the authors state that “noncarcinogens can exhibit low-dose linearity, for example, when there is considerable inter-individual variability in susceptibility and each individual has his or her own threshold.” A similar argument was made by members of a recent EPA workshop on low-dose extrapolation, in which a wide range of “individual thresholds” are assumed to be driven by similarly wide differences in “background risk” (White et al. 2009).
The idea of near-infinite variation in sensitivity echoes arguments posited within the environmental-justice literature, in which hypothesized extreme sensitivity is purportedly due to social/racial stressors from the environmental “riskscape” (IOM 1999; Morello-Frosch and Shenassa 2006). These stressors are said to act directly or indirectly through various vague mechanisms such as “weathering,” allostatic load, chronic stress, and wear and tear (Geronimus 1992, IOM 1999; Morello-Frosch and Shenassa 2006). Researchers also speculate that “the body's defense mechanisms and ability to recover or detoxify have been compromised through prior exposure to harmful agents” (Morello-Frosch and Shenassa 2006).
A related argument suggests that even low-level exposures add to background risks, enhancing ongoing disease-causing processes in a linear manner (Crawford and Wilson 1996; White et al. 2009). For practical purposes this argument can be seen as an extension of the sensitivity argument posited above, in that these hypothetical background risks are thought to enhance individual sensitivity to other environmental exposures.
The other commonly posed argument for non-carcinogenic, low-dose linearity is that thresholds are difficult to detect in epidemiologic studies. The EPA uses this inability of common epidemiologic models to detect low-dose thresholds as support for their linearity arguments, noting that “new epidemiologic studies have used different modeling methods to address this question, and most have been unable to detect threshold levels in the relationship between short-term PM [particulate matter] exposure (generally using PM10) and mortality” (Federal Register 2006). The previously cited workshop panelists repeat this idea, stating that “exposure-response models … [for] environmental toxicants with relatively robust human health effects databases at ambient concentrations (eg, ozone and particulate matter air pollution …) do not exhibit evident thresholds” (White et al. 2009). The NRC (2009) repeats this argument yet again, noting that “There are multiple toxicants (for example, PM and lead) for which low-dose linear concentration-response functions rather than thresholds have been derived for non-cancer end points.”
In a recent paper, Rhomberg et al. (2011) evaluated these arguments and rebutted them on a theoretical basis. These authors noted that the extreme variability and sensitivity posited by proponents of linearity are hypothetical arguments that ignore the reality of biological and disease processes. That is to say, biological systems are highly regulated homeostatic entities, and toxic states generally result when control mechanisms are overwhelmed, leading to a cascade of events that cause disease. Therefore, at low concentrations, the impacts of potentially toxic exposures are both less common and less severe, producing negligible effects at levels well below the toxic/therapeutic range. Rhomberg et al. (2011) further point out that rather than being just theoretical assertions, their arguments are backed by repeated empirical observation.
In the current paper, we attempt to place these arguments in perspective by highlighting the empirical evidence from clinical medicine, and examining the practical limitations to the concept of extreme sensitivity within human populations. This discussion updates and expands upon an earlier paper (Bukowski and Lewis 2004). We also use mortality from PM air pollution as an epidemiologic case study, pointing out examples where thresholds have been detected, and the statistical limitations of assuming linearity in epidemiologic models.
THE CLINICAL REALITIES OF EXTREME SENSITIVITY
The non-threshold assumption is grounded in a belief that sensitivity to chemical exposure lies along an almost infinite distributional range, or at least that this range extends well beyond the 10–100-fold uncertainty factors for inter-individual human variability typically applied in QRA. This assumption implies that even trivial exposures present a risk to some person or group within this highly diverse population.
Extreme sensitivity is said to reside within groups that are not always addressed by traditional QRA, including children, senior citizens, poor/minority populations, and those with preexisting illness or impairment. Factors that promote the enhanced sensitivity of individuals within these groups have been referred to as determinants of “background risk” (White et al. 2009; NRC 2009) or non-chemical “stressors” (Morello-Frosch and Shenassa 2006; NRC 2009). The NRC (2009) calls for a formal “vulnerability assessment that takes into account underlying disease processes in the population.”
Acceptance of the above argument poses several unanswered questions, including: (1) are these groups inherently more sensitive to low-level environmental exposures, (2) does the range of this enhanced sensitivity encompass even trivial chemical exposures, (3) is such extreme sensitivity compatible with life, and (4) are extremely sensitive individuals already protected from environmental exposures? We address these questions below.
Range of Variation
The extreme variation argument suggests that biological variability spans multiple orders of magnitude, so that even trivial perturbations of the homeostatic mechanism, as might occur from a few molecules of pollution exposure, could push someone somewhere over the edge into death or overt illness. Yet, this notion appears inconsistent with what is already known about pathophysiology.
As Rhomberg et al. (2011) note, “we do not see continuous and gradual gradation between people with normally functioning physiology (…adequate to maintain basic functioning) and those who cannot maintain internal states in the face of environmental fluctuation or… cannot carry out basic life processes.” Put another way, mammalian physiology follows truncated rather than open-ended continuous distributions, with extreme variation being incompatible with life. This can be observed by examining basic clinical parameters (Table 1).
Table 1. Range of normal values for selected blood parameters across major systems.a
a Values taken from Physician's Desk Reference (2010).
The normal ranges for clinical blood parameters do not encompass all healthy individuals, but these ranges do suggest limits that cannot be exceeded (Table 1). For example, a small minority of healthy people might have an oxygen partial pressure (pO2) beyond the range of 80–100 mm Hg, but no living person has a value of 8 or 1000, as suggested by the extreme variation/sensitivity argument. Similarly, blood glucose levels of 8 mg/dL, leukocyte counts of 400, and potassium levels of 0.5 (or 50) can be found only among those in danger of imminent death. Even for less critical parameters such as ALT or bilirubin, order-of-magnitude extremes are associated with clinical disease, not relative health. Rather than wide-ranging variation, we see that human physiology functions within relatively narrow ranges, so that extremes of even 10-fold are rare and often incompatible with life.
Extreme Sensitivity of Children
It is often assumed that children are more sensitive to pollutant exposures. In a recent report on cumulative impacts of environmental exposures, the California Environmental Protection Agency (CalEPA 2010) cites potential differences in kinetics and inherent sensitivity as explanations for enhanced risk among children. However, although this is correct in certain cases, it is not clear that it is universally true, or that any heightened risk would be great enough to approach extreme or near-infinite sensitivity.
Several authors have examined differential sensitivity between children and adults and have concluded that although the physiology of children may differ from that of adults, these differences are not uniformly in the direction of increased sensitivity for children (Dourson et al. 2002; ECETOC 2005). In fact, children often have more rapid clearance of chemical exposures, suggesting decreased overall sensitivity (Renwick 1998; Dourson et al. 2002). Using chemicals for which clearance data were available for both children and adults, Dourson et al. (2002) concluded that the average child/adult clearance ratio of 1.8 favored children, suggesting that they were on average less sensitive than adults to these select chemical exposures. Furthermore, for those instances in which children were more sensitive, the 3.16-fold kinetic component of the default intraspecies uncertainty factor covered the variation of 91% of the chemicals tested (Dourson et al. 2002).
Premature infants do appear to have reduced clearance (Renwick and Lazarus 1998), but this is a group that exists in a highly protected hospital environment removed from exposures to airborne pollutants and most chemical contaminants. Furthermore, premature infants would largely lack the enzymes necessary to convert many potentially toxic chemicals into their active forms (Renwick and Lazarus 1998).
Given that most of the chemicals examined by Dourson et al. (2002) were pharmaceutical agents, it is possible to put these data in perspective by comparing the child and adult dosages for these drugs, as well as for others selected at random from the Physician's Desk Reference (2010). These dosages are based on clinical research, and encompass not only the pharmacokinetic data examined by Dourson et al. (2002), but often issues of absorption, distribution, and other pharmacodynamic factors as well. In general, child dosages are not necessarily lower than those given to adults (Table 2), which agrees with the findings of Dourson et al. (2002). Furthermore, child/adult variation rarely differs by more than several-fold, not the 10–100 factor used in QRA. This variation applies to highly toxic cancer chemotherapeutics as well as more benign antibiotics and antihistamines (Table 2).
Table 2. Dosage instructions for common drugs used in children and adults.a
a Data taken from Physician's Desk Reference (2003, 2010) or appropriate package insert. Units converted to mg/kg/d unless otherwise indicated.
On occasion, idiosyncratic effects from drugs like chloramphenicol and tetracycline limit their use in children. However, these mechanisms generally operate at therapeutic doses that are not relevant to the exceptionally low levels of exposure implied by extreme sensitivity. Allergic reactions also limit drug use, but even allergens appear to exhibit thresholds below which trivial exposures fail to elicit a reaction (Taylor et al. 2009).
Extreme Sensitivity of Other Subgroups
Other high-risk subgroups with background vulnerability are said to include senior citizens and those with “underlying disease processes” or poor “health status.” Specific conditions are often not well defined, although respiratory, cardiovascular, and immune disorders have been mentioned as background risks that could greatly enhance sensitivity to environmental exposures (NRC 2009).
Liver and kidney disease
Drug dosage recommendations often include reduced dosages for those with liver or kidney disease (Table 2). However, most suggested reductions range from one half to one quarter of the dose given to healthy adults, rather than the orders-of-magnitude reductions suggested by the extreme vulnerability argument used in QRA. Often, clinicians are admonished only to “use with caution” in those with potential organ impairment.
Senior citizens
Cautionary dosage statements are often applied to the elderly as well, not because of special vulnerability, but because seniors are more likely to have subclinical organ impairment compared to younger patients. As with children, older adults in good health do not appear to be inherently more susceptible to clinical drug effects (Table 2).
Variation in genetic polymorphism
Genetic polymorphism accounts for much of the variable impact in drug therapy, with enzymes such as those in the cytochrome P450 (CP450) family playing a major role in lack of response or adverse reactions (Johansson and Ingelman-Sundberg 2011). Between-person differences in individual CP450 enzymes can span 1–2 orders of magnitude (Zhou et al. 2009).
Although genetic differences influence idiosyncratic drug responses, they do not necessarily translate into wide-ranging differences in chemical toxicity among the general population. For example, recommended dosage ranges for pharmacotherapeutics rarely span orders of magnitude (Table 2). Furthermore, research has not necessarily shown wide variation in toxicity of common xenobiotics. One study of 14 drugs and chemicals that covered a wide range of toxicant classes found little variation in toxicity across 85 different human lymphoblast cell lines, except for two chemicals (perfluorooctanoic acid and phenobarbital) in which variability ranged from approximately 2–5 fold (O'Shea et al. 2011). Similarly, research among workers occupationally exposed to butadiene has shown little genetic variability in metabolism, with most results spanning a factor of two or less (Fustinoni et al. 2002; Albertini et al. 2001, 2003, 2007).
The impact of genetic polymorphism would also be greatly lessened within the context of extremely low doses. The high doses that produce therapeutic or toxic effects can kill cells, alter homeostasis, overwhelm repair mechanisms, saturate enzyme pathways, and deplete co-substrates. Genetic polymorphism forms an important modifying factor for these high-level processes (Slikker et al. 2004) but should have little impact on the orders-of-magnitude lower exposures implicated under the extreme sensitivity argument.
Background disease status and the impact of competing risks
Theoretical arguments in favor of extreme vulnerability suggest that those with existing disease conditions would be at greatest risk from low-level environmental exposures. In the real world of clinical medicine, recommendations for the management and protection of these most sensitive individuals are embodied within the formal practice guidelines and position papers developed by expert committees and professional associations. These cover conditions such as chronic obstructive pulmonary disease (COPD), immune suppression, chronic liver or kidney disease, and cardiovascular disorders (Table 3). Under the extreme sensitivity argument, such individuals would be the ones hypothetically most impacted from ever diminishing pollution exposure.
Table 3. Summary of major practice guidelines for management of selected chronic diseases.
Expert guidelines on the management of important conditions such as cardiovascular disease (and associated risk factors), hepatitis, and immune suppression stress two main needs: proper treatment and modification of personal behaviors (Table 3). Proper treatment includes pharmacotherapy, surgical interventions (eg, procedures to control heart rate in atrial fibrillation), and adjunct therapies (eg, oxygen for COPD). Behavioral modification involves education and support for weight reduction, proper nutrition, increased exercise, smoking cessation, alcohol reduction, and other factors that improve organ function and reduce the risk of further damage.
Practice guidelines do not include any caution about exposure to low levels of pollution, trace chemicals, or disinfection byproducts. In fact, chemical disinfectants are widely used to guard against infection. In cases of serious respiratory diseases such as COPD, avoidance of irritants is recommended, but this refers to combustion products and chemical irritants at levels high enough to cause frank respiratory irritation, not the low-level ambient exposures suggested by extreme sensitivity arguments.
In extreme immune suppression, isolation under conditions of sterilization, HEPA-filtered air, and limited access to outside people may be recommended, but this is to limit transmission of pathogens or opportunistic organisms, not pollutants (Dadd et al. 2003; Sehulster and Chinn 2003). In fact, mortality rates for patients in strict isolation under HEPA filtration and positive pressure are similar to those associated with less stringent approaches that mainly limit visitors and stress hand washing and face masks (Ogden et al. 1990; Dadd et al. 2003). This suggests that completely removing all air pollution exposure is of no apparent benefit, and that most of the risk among highly vulnerable immune-suppressed patients resides with endogenous microorganisms and close contacts, not exogenous exposures.
Guidelines for extreme immune suppression may also restrict food and water, including avoidance of certain foodstuffs. These again are based on concerns for microbiological exposure, not chemical contaminants. In fact, highly processed foods and juices, and highly disinfected municipal drinking water, are all acceptable despite the fact that such consumables contain numerous chemical byproducts. In some instances, water filtration or boiling is suggested, but only to remove the risk of Cryptosporidium infection (Yokoe et al. 2009).
It is tempting to ignore the import of practice guidelines under the notion that healthcare professionals deal with clinical situations and would therefore be unaware of the more subtle effects of environmental exposures. But rather than ascribe ignorance to the practitioner, we need to recognize that doctors and nurses must deal with the realities of patient care, in which important competing risks dwarf the hypothetical impact of trace-level environmental exposures.
Clinicians recognize that immune-suppressed individuals are most likely to succumb to infection or some other aspect of bone-marrow depression (eg, bleeding disorders or anemia), matters unrelated to low-level exposures to chemical pollutants. By the same token, it is much more important for those with COPD to stop smoking, limit exposure to high-level irritants, and guard against weight gain, malnutrition, anemia, heart disease, and other comorbid conditions associated with disease progression (O'Donnell et al. 2008). Similarly, adherence to proper treatment, nutritional management, exercise, control of hypertension, psychosocial support, and many other factors are much more important in the management of chronic conditions such as heart disease and kidney failure (Table 3).
Based on the clinical guidelines, it would seem that low-level environmental exposures have negligible impact on patient health in the face of important known risks for disease progression. Put another way, real-world competing risks tend to trump concerns about hypothetical trace-level exposures to pollutants. That is to say, it is very unlikely that someone would be healthy enough to survive the gauntlet of major competing risks, only to be felled by trace levels of ambient PM2.5 or ozone.
Clinical Protection of those at Highest Risk
Justification for the extreme sensitivity argument requires a population of excessively debilitated individuals who would be most vulnerable to even minor environmental exposures. This would include patients with severe respiratory impairment or immune dysfunction. Yet, best clinical practice tends to incidentally minimize pollution exposure in these very individuals. For example, those with advanced COPD are maintained on oxygen for most of the day, limiting exposure to ambient air (Qaseem et al. 2007; O'Donnell et al. 2008). Of even more relevance, patients at highest risk from immune suppression, such as those undergoing bone marrow transplants, may be kept in protective isolation that includes HEPA filtration, laminar flow, and positive pressure (Dadd et al. 2003; Sehulster and Chinn 2003). Premature infants are similarly isolated. An extreme example would be a so-called “bubble boy” existence, in which an individual with severe immune depression lives in an artificial environment devoid of outside contact (Guerra and Shearer 1986). Such protective practices would remove all or most ambient exposures, so that those at highest hypothetical risk of effect would be at lowest actual risk of exposure.
Even if such protective practices were not in place, it is doubtful that very low-level pollution exposures would negatively affect even the most vulnerable individuals described above. As noted earlier, restrictive isolation using HEPA filtration and positive air pressure essentially removes exposure to all ambient pollution, yet the switch to less stringent “reverse isolation” procedures has produced little difference in patient mortality. This highlights the fact that the main risk to those with severe immune dysfunction is from endogenous organisms and close contacts, not external exposures (Dadd et al. 2003).
STATISTICAL CONSIDERATIONS FOR THRESHOLD DETECTION IN PM STUDIES
Interpretation of the data from risk assessments almost always involves statistical evaluation. In this section we discuss how statistical considerations such as model choice can influence identification of nonlinear relationships, and how these considerations may have combined to obscure thresholds in air pollution studies of acute or chronic mortality. A more in-depth discussion of statistical issues involved in model choice can be found in Nicolich and Gamble (2011).
Model Choice
Regulatory agencies often base safety standards on interpretation of statistical models. However, different models applied to the same data can lead to different conclusions, highlighting the importance of choosing the appropriate model. Reliance on inappropriate models can lead to either overly lax standards that jeopardize health or overly stringent ones that suppress economic benefits by requiring expensive, but needless, remediation and prevention strategies.
Some agencies apply models based on the concept of a Threshold of Toxicological Concern (TTC) which is “a pragmatic risk assessment tool that is based on the principle of establishing a human exposure threshold value for all chemicals, below which there is a very low probability of an appreciable risk to human health” (Kroes et al. 2004). This approach has been adopted (for some applications) by the US Food and Drug Administration (FDA) and several European agencies (Kroes et al. 2000, 2004, 2005; EFSA 2004). Some agencies apply models based on a related concept called the Threshold of Regulation (TOR), which posits that there is a de minimis risk level below which regulation is unwarranted. The EPA has proposed applying the TOR rules to pesticide regulation (Federal Register 2002). Both concepts (TTC and TOR) recognize the practical existence of a threshold without necessarily committing to a functional threshold model.
However, air pollution standards developed by the EPA are generally based on strict linear no-threshold concentration-response (CR) models from epidemiologic studies, rejecting or ignoring thresholds unless the assumed linear relationship can be proven wrong (USEPA 2009; OAQPS 2005). Therefore, one reason that the EPA and its advisers may fail to detect thresholds is that they begin with models that do not address the existence of a threshold.
In essence, the approach used by EPA assumes a linear, no-threshold relationship unless proven otherwise. In this case, the “proof” relies on statistical tests that have weak power to detect nonlinearity in the typical situation with few observations in the low-concentration range of the CR model. This makes it difficult to reject the no-threshold assumption even in the face of apparent nonlinearity (Li and Li 2008). Such an approach also ignores a basic tenet of traditional hypothesis testing, which cautions against accepting the null assumption simply because you failed to reject it (Daniel 1991).
Exposure misclassification, which is a common problem in environmental epidemiology, also tends to obscure thresholds. In air pollution studies, exposure assessment typically relies on area-wide measurements that are not highly correlated with personal exposures. Using time-series data, Brauer et al. (2002) demonstrated that the resulting exposure misclassification tends to obscure the presence of a threshold when it exists.
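This smoothing effect can be sketched with a deliberately simplified numerical illustration (our own toy construction, not the Brauer et al. (2002) simulation): when a true threshold response is observed through symmetric measurement error, the apparent response at measured concentrations below the threshold becomes positive, and the kink at the threshold is smoothed away.

```python
# Toy illustration (ours, not the Brauer et al. 2002 model): how symmetric
# measurement error in exposure smooths away a true threshold.

def true_response(x, threshold=50.0, slope=0.1):
    """True concentration-response: exactly zero below the threshold."""
    return max(0.0, (x - threshold) * slope)

def apparent_response(measured_x, error=20.0):
    """Expected response at a *measured* concentration, assuming the true
    concentration is equally likely to lie `error` units above or below it."""
    below = true_response(measured_x - error)
    above = true_response(measured_x + error)
    return (below + above) / 2.0

# At a measured concentration of 40 (below the true threshold of 50),
# the true response is zero ...
assert true_response(40.0) == 0.0
# ... but the apparent response is positive, because half of the people
# measured at 40 were truly exposed at 60.
print(apparent_response(40.0))  # 0.5 = (0.0 + 1.0) / 2
```

The apparent concentration-response curve therefore rises smoothly from well below the true threshold, exactly the pattern a no-threshold linear model would be fit to.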
The importance of using the correct CR function is widely acknowledged as “a critical component in interpreting health risks associated with ambient PM concentrations” (OAQPS 2003), which is “important from both an etiologic and regulatory perspective” (Abrahamowicz et al. 2003). In fact, the EPA has stated that the “single most important factor influencing the uncertainty associated with the risk estimates was whether or not a threshold concentration exists below which PM-associated health risks are not likely to occur” (OAQPS 2003). Therefore, a search for the correct model should include looking for nonlinearity rather than simply assuming linearity.
Choosing the appropriate model can be difficult given the typical situation in which there are relatively few data points in the low-exposure region wherein thresholds typically lie. Applying a linear model to such data will greatly overestimate risk at low concentrations and underestimate the risk at higher concentrations (Figure 1). Under this typical situation of few data points below the threshold, hypothesis tests will usually fail to reject linearity (Robins and Greenland 1986).
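The distortion can be made concrete with a toy least-squares calculation (hypothetical numbers of our own choosing, not data from any cited study): fitting a no-threshold line through the origin to data generated from a true threshold function yields positive predicted risk at concentrations where the true risk is zero, and understated risk at the top of the range.

```python
# Hypothetical data from a true threshold model: zero response up to a
# concentration of 30, then linear with slope 0.1.
xs = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
ys = [max(0.0, (x - 30) * 0.1) for x in xs]

# No-threshold linear model forced through the origin (y = b * x),
# fitted by ordinary least squares: b = sum(x*y) / sum(x*x).
b = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# The linear fit assigns positive risk at low concentrations where the
# true risk is exactly zero ...
print(round(b * 10, 3))            # predicted risk at x = 10; true risk is 0.0
# ... and understates risk at the top of the range.
print(round(b * 100, 3), ys[-1])   # predicted vs true risk at x = 100
```

A threshold model with a break at 30 would fit these data exactly, yet with only three observations in the flat region a goodness-of-fit test would have little power to prefer it over the line.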
Figure 1. Overestimation of risk when a linear model is applied to non-linear, threshold data.
Thresholds in Acute-Mortality Data
Much of the support for a significant effect from low-level exposure to PM and other pollutants resides within time-series (TS) studies that use complex linear models to link daily fluctuations in pollution to daily mortality. In a number of instances, investigators have reanalyzed these data and shown that nonlinear models produced fits as good as or better than the original linear ones, and that thresholds could often be statistically demonstrated (Table 4).
Table 4. Time-series studies of PM mortality that have been reanalyzed using nonlinear/threshold models.
Mortality displacement
One concern in TS models of acute mortality is so-called harvesting or mortality displacement, wherein exposure hastens the deaths of a frail subset of the population that already has a short life expectancy, independent of exposure to PM. Most studies investigating this concept have concluded that mortality displacement would have only a small impact on the relative effects of PM (Dominici et al. 2003; Murray and Lipfert 2010; Murray and Nelson 2000; Schwartz and Zanobetti 2000; Smith et al. 1999; Zanobetti et al. 2000; Zeger et al. 1999). However, Roberts (2011a, b) showed that in some situations, mortality displacement can alter the shape of the CR curve so as to mask a mortality threshold. Although further work needs to be done, these initial findings suggest that apparently linear, no-threshold effects may in some cases be an artifact of mortality displacement.
Thresholds in Chronic-Mortality Data
The effects of PM have also been explored with cohort studies of chronic-disease mortality. Some of these have found consistent and significant associations between PM and death (Dockery et al. 1993; Pope et al. 2002), whereas others have not (Abbey et al. 1999; Lipfert et al. 2000). However, results from the American Cancer Society (ACS) cohort, the largest and most influential of these studies, suggest significant associations between PM exposure and cardiopulmonary, lung-cancer, and all-cause mortality (Pope et al. 2002).
Cohort analyses typically rely on the Cox proportional hazards (PH) model, which considers time to an event (eg, death) under the influence of an outside exposure concentration (eg, PM). This model is built around the hazard function, which is based on the probability that the event will occur in the next short interval of time t (assuming it has not yet occurred). Mathematically, the time to the event is directly related to the hazard function, so that changes in the hazard directly affect survivorship. The Cox model is robust and effective for this type of analysis because it does not make assumptions about the form of the hazard function; only that exposure has a proportional effect on hazard. This allows considerable flexibility regarding the form of baseline survivorship, and direct estimation of covariate effects (assuming that covariates also have a multiplicative effect on the hazard function) (Harrell 2010).
An unfortunate limitation of the Cox model is that its standard form is not compatible with thresholds, because there is a basic assumption of proportional hazard across exposure concentrations (ie, the hazard function increases at all concentrations above zero) (Harrell 2010). Therefore, it is not surprising that thresholds have not been readily identified in cohort studies, given that a basic assumption of the standard modeling approach does not permit them. Additional research has also raised concerns about the Cox assumption of proportionality over the range of PM concentrations in the ACS study, suggesting the possibility of a threshold (Abrahamowicz et al. 2003).
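The incompatibility follows directly from the model's standard form. In the usual textbook notation (our summary; see, eg, Harrell 2010), with hazard $h$, baseline hazard $h_0$, exposure concentration $x$, and coefficient $\beta$:

```latex
% Standard Cox proportional-hazards model:
h(t \mid x) = h_0(t)\,\exp(\beta x)

% Hazard ratio relative to zero exposure; for \beta > 0 it exceeds 1
% at every concentration x > 0, leaving no room for a threshold:
\mathrm{HR}(x) = \exp(\beta x) > 1 \qquad \text{for all } x > 0

% A threshold at x^{*} would require a non-standard exposure term, eg:
h(t \mid x) = h_0(t)\,\exp\!\bigl(\beta \max(0,\; x - x^{*})\bigr)
```

The last line is only an illustrative modification, not a model used in the ACS analyses; it shows that representing a threshold requires abandoning the standard linear exposure term.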
Graphical evidence
In some instances, the USEPA and study investigators have failed to acknowledge nonlinearity that was statistically insignificant but graphically evident. This highlights a key problem with relying on weak statistical tests as the sole arbiter of linearity/thresholds.
For example, results from the ACS cohort study are used to support the USEPA's stance that PM is linked to mortality in a linear fashion without an apparent threshold. Based on Cox PH results, authors of the ACS study reported significantly increased risks for PM that “were not significantly different from linear.” However, nonparametric smoothed graphs showed apparent departures from linearity, primarily in the region of lowest exposure (Pope et al. 2002). This is mirrored in graphs taken from an earlier iteration of the study (Figure 2), in which the Health Effects Institute noted departures from linearity during flexible analyses (Krewski et al. 2000). In all cases there was evidence of apparent thresholds, given that the plots of the residuals from the linear model are not flat and the plots for all-cause mortality and cardiopulmonary mortality show negative residuals below approximately 15 μg/m3 (Krewski et al. 2000; Pope et al. 2002).

Figure 2. Shape of the CR function based on standardized residuals from the reanalysis of the original ACS cohort data (adapted from Krewski et al. 2000).
CONCLUSIONS
Research in disciplines ranging from business to patient care has addressed the problem of “silos,” whereby professionals in one discipline work within that restricted environment without tapping into the breadth of knowledge and experience in the wider universe of science, medicine, and engineering (Lunn 1997; Conway 1997). Such may be the case for QRA, with some risk assessors so highly focused on the theoretical world of hypothetical risk that they tend to lose sight of the practical realities of the world around them. In the hypothetical world, theories about extreme sensitivity and vulnerability make sense and support models that fail to demonstrate nonlinearity. But these theories break down under the pathophysiologic realities of human biology and medicine.
The extreme sensitivity argument is predicated on a population with extreme background vulnerability, so that even trivial exposures can be viewed as hazardous to their fragile state. In this paper, we offer several scientific arguments that bring this assumption into question:
(1) The tightly regulated homeostasis of mammalian systems suggests that extreme sensitivity/variability is usually incompatible with life.
(2) From a clinical standpoint, groups such as children and seniors are not inherently highly sensitive, and appear to have a range of susceptibility not greatly different from that of middle-aged adults.
(3) Those with the highest background disease vulnerability would likely succumb to real-world competing risks that dwarf any hypothetical hazard from trace-chemical exposures.
(4) The most highly vulnerable are often isolated from pollution, precluding impacts from such exposures.
All this argues for practical, real-world thresholds for most chemical exposures.
The recent regulatory trend has been to assume linearity in concentration-response unless proven otherwise. To this end, linear models are proposed and then rejected only if significant departures from linearity can be demonstrated. This inverts the basic logic of hypothesis testing: failure to reject a model does not prove its appropriateness. Furthermore, the tests used to assess thresholds often have low statistical power and will frequently fail to reject linearity even in the presence of nonlinear trends. Nonlinear/threshold models frequently have a similar or better fit than their linear counterparts, especially in the typical situation in which the majority of data lie at higher exposures, away from the threshold region.
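The power problem can be illustrated with a minimal simulation under stated assumptions: data are generated from a true hockey-stick model, with most observations well above the threshold (as is typical of exposure data), and a lack-of-fit F-test of the linear model against a model augmented with a hinge term is run repeatedly. All parameters are hypothetical, chosen only for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Power sketch: the true concentration-response is a hockey stick with a
# threshold at tau, yet the F-test for the added hinge term rarely reaches
# significance. Parameters are hypothetical illustrations.
tau, slope, sigma, n, n_sims = 15.0, 0.02, 0.5, 100, 500

def fit_rss(X, y):
    """Ordinary least squares; return the residual sum of squares."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return float(resid @ resid)

rejections = 0
for _ in range(n_sims):
    x = rng.uniform(10.0, 40.0, n)   # most observations lie above tau
    y = slope * np.maximum(0.0, x - tau) + rng.normal(0.0, sigma, n)
    ones = np.ones_like(x)
    rss_lin = fit_rss(np.column_stack([ones, x]), y)
    hinge = np.maximum(0.0, x - tau)
    rss_hinge = fit_rss(np.column_stack([ones, x, hinge]), y)
    # F-test for the single added hinge parameter (1 numerator df)
    F = (rss_lin - rss_hinge) / (rss_hinge / (n - 3))
    p = stats.f.sf(F, 1, n - 3)
    rejections += p < 0.05

power = rejections / n_sims
print(f"rejected linearity in {power:.0%} of simulations")
```

With these settings the test rejects linearity only rarely, even though the true concentration-response has a threshold, mirroring the failure-to-reject pattern described above.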
The theoretical world of extreme vulnerability is inconsistent with the biology of mammalian systems and clinical medicine. Therefore, rather than pursuing ever-diminishing exposures in an effort to protect an ever-diminishing hypothetical population, resources would seem better directed toward the competing risks of greater magnitude and proven importance.
ACKNOWLEDGMENTS
Part of the funding for this article was provided by the American Petroleum Institute. The authors are grateful to Ms. Lauren Mackenzie for her technical assistance in preparation of this manuscript.
