Open access | Research article | First published online October 2009
Calculating Hematopoietic-Mode-Lethality Risk Avoidance Associated with Radionuclide Decorporation Countermeasures Related to a Radiological Terrorism Incident
This paper provides theoretical health-risk-assessment tools designed to facilitate planning for and managing radiological terrorism incidents that involve ingestion exposure to bone-seeking radionuclides (e.g., radiostrontium nuclides). The focus is on evaluating the lethality risk avoidance (RAV; i.e., the decrease in risk) associated with radionuclide decorporation countermeasures employed to remove ingested bone-seeking beta- and/or gamma-emitting radionuclides from the body. To illustrate the application of the tools presented, hypothetical radiostrontium decorporation scenarios involving evaluation of the hematopoietic-mode-lethality RAV were considered. For evaluating the efficacy of specific decorporation countermeasures, the lethality risk avoidance proportion (RAP; the RAV divided by the total lethality risk in the absence of protective countermeasures) is introduced. The lethality RAP is expected to be a useful tool for designing optimal radionuclide decorporation schemes and for identifying green, yellow, and red dose-rate zones. For the green zone, essentially all of the lethality risk is expected to be avoided (RAP = 1) as a consequence of the radionuclide decorporation scheme used. For the yellow zone, some but not all of the lethality risk is expected to be avoided. For the red zone, none of the lethality risk (which equals 1) is expected to be avoided.
Recent events throughout the world underscore the growing threat of different forms of terrorism, including radiological terrorism (Durante and Manti 2002; Major 2002; Mettler and Voelz 2002; CRS 2004; Augustine et al. 2005; González 2005; Bunn 2006; Dombroski and Fischbeck 2006; Tofani and Bartolozzi 2008). This has stimulated research on and development of medical countermeasures for protecting humans against radiation harm, including harm resulting from use by terrorists of a radioactivity dispersal device (RDD) (Goan 2001; NCRP 2001; FAO 2002; Ansari 2004; Scott 2005). An RDD is any weapon designed to disperse radioactivity, cause harm to humans, and possibly also have a negative economic impact (e.g., render economically viable areas of a city uninhabitable for extended periods). A dirty bomb is the best-known RDD.
During a radiological terrorism incident, radionuclides could be inhaled or ingested, depending on the type of incident (e.g., dirty bomb detonation; intentional food or beverage contamination with high levels of radionuclides). Inhaling or ingesting large quantities of radionuclides could lead to lives lost among the general public from radiation-induced deterministic effects, as was illustrated by the ingestion-route poisoning of Mr. Alexander Litvinenko with polonium-210 (210Po) in London in November 2006 (Harrison et al. 2007; Scott 2007). Deterministic effects are threshold-type effects that include lethal damage to the bone marrow, lung, and gastrointestinal tract due to massive cell killing. The effective management of such terrorist-invoked incidents requires careful planning that includes considering the impact of countermeasures likely to be employed to reduce the risk of harm to the general public.
Pharmaceutical and other products are being developed (discussed below) for possible employment as decorporating agents to facilitate removal of radionuclides from the body in the event of a radiological terrorism incident (Augustine et al. 2005). In addition, drug efficacy studies using animal models of radionuclide decorporation currently are being planned and carried out. However, essentially no research is being supported that relates to evaluating the expected lethality risk avoidance associated with employment of decorporation protocols. Risk avoidance (RAV) is defined as the amount of risk (on a scale from 0 to 1) that is avoided due to protective measures employed. A related concept, risk avoidance proportion (RAP) is defined as the risk avoidance expressed as a proportion of the total risk that would be expected in the absence of protective countermeasures such as decorporation therapy.
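The two quantities just defined can be illustrated with a brief numeric sketch; the risk values below are hypothetical and chosen only to show the arithmetic.

```python
def risk_avoidance(risk_without, risk_with):
    """RAV: the amount of risk (on a 0-1 scale) avoided due to the countermeasure."""
    return risk_without - risk_with

def risk_avoidance_proportion(risk_without, risk_with):
    """RAP: the RAV expressed as a proportion of the unmitigated total risk."""
    return risk_avoidance(risk_without, risk_with) / risk_without

# Hypothetical example: decorporation therapy lowers the lethality risk
# from 0.8 (no countermeasure) to 0.2.
print(risk_avoidance(0.8, 0.2))             # RAV ~= 0.6
print(risk_avoidance_proportion(0.8, 0.2))  # RAP ~= 0.75
```

A RAP of 1 corresponds to all of the lethality risk being avoided, and a RAP of 0 to none of it.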
Knowledge of lethality RAV associated with specific countermeasure applications facilitates planning for and managing radiological terrorism incidents that involve the intake of radionuclides. This paper introduces a theoretical modeling framework for evaluating lethality RAV for radiological incidents that involve the ingestion intake of bone-seeking radionuclides. The focus here is on low linear-energy-transfer (LET) beta and/or gamma-emitting radionuclides that deposit from the systemic circulation into the skeleton and irradiate the radiosensitive bone marrow. Only ingested forms of radiostrontium that are readily absorbed into the systemic circulation are considered. For insoluble forms that are very poorly absorbed, gastrointestinal tract damage may be the main concern rather than damage to the hematopoietic system.
The objective of the research discussed in this paper was to provide a modeling framework under which one can evaluate the expected impact of radionuclide decorporation therapy on avoiding the lethality risk for the hematopoietic mode of death. Radiostrontium isotopes are considered as representative beta/gamma emitting, bone-seeking radionuclides. We consider the very simple case in which a radiostrontium isotope is deposited from the systemic circulation into the skeleton after ingestion and the dose rate to bone marrow decreases as a single negative-exponential function of time. The single negative-exponential characterization is but a crude approximation that is used here for illustrative purposes to facilitate understanding the RAV/RAP modeling framework presented. The modeling results presented in the main text and in the Appendix can be used along with dosimetry software (to evaluate dose rates and cumulative doses to body organs and tissue of interest) to conduct a formal risk assessment. While the focus is on the hematopoietic mode of death, equations presented also can be applied to other endpoints such as lethality via the pulmonary mode.
The Most Important Radiostrontium Isotopes
There are 16 known radiostrontium isotopes. The most important for the current consideration are strontium-90 (90Sr, physical half-life T1/2 = 29 y), strontium-89 (89Sr, T1/2 = 50.6 d), and strontium-85 (85Sr, T1/2 = 65 d). Strontium-90 is a fission product associated with fallout from a nuclear detonation and is therefore human-made. It decays via beta emission (average energy = 200 keV [maximum 546 keV]) to yttrium-90 (90Y, T1/2 = 64.1 h; average beta energy = 940 keV [maximum 2.28 MeV]). Strontium-89 is also a beta emitter (average energy = 583 keV [maximum 1.49 MeV]). Strontium-85 mainly emits gamma rays (514 keV). The RAV modeling approach presented below applies to each of the indicated radiostrontium isotopes as well as to combinations of them.
Radiostrontium Metabolism in Mammals after Ingestion Exposure
Radiation doses to body organs from radiostrontium isotopes depend on their metabolism. The metabolic behavior of radiostrontium isotopes in humans can be described in general terms as follows (NCRP 1991): After ingestion, a fairly substantial part (averaging between 20% and 30%) is absorbed from the gastrointestinal tract and a portion is excreted unabsorbed in the feces. The absorbed portion is characterized as follows: (1) radiostrontium that deposits in the bone volume; (2) radiostrontium that is distributed in an exchangeable pool, which has been considered to comprise the plasma, extracellular fluid, soft tissue, and bone surfaces; and (3) radiostrontium that is removed from the body by urinary and fecal excretion.
Countermeasures after Radiostrontium Intake
A large number of studies have investigated methods of preventing the absorption in humans of radiostrontium while not adversely impacting calcium (Ca) absorption. Information that follows in this section is based on an Agency for Toxic Substances and Disease Registry report (ATSDR 2004). Methods for reducing the body burden of radiostrontium that have been tested in animals include application of Ca, chelators (with variable results), hemodialysis, and magnesium sulfate. Suggestions for reducing absorption of radiostrontium include ingestion of antacids containing aluminum phosphate. Treatment is most effective when initiated within 2 h of radiostrontium exposure. Other reasonably effective strategies have included applications of alginates and sulfates. Less effective or less practical strategies have included a cold environment, diet, dietary fiber, flavones, and stable Sr.
In this paper, for the purpose of illustrating lethality RAV evaluation methods, a hypothetical novel radiostrontium decorporation procedure is considered that allows significant removal of radiostrontium from both the skeleton and the systemic circulation via the urinary pathway.
LETHALITY RISK AVOIDANCE MODELING
As already mentioned, the lethality risk avoidance modeling presented here focuses on scenarios involving the ingestion intake of highly soluble forms of radiostrontium (85Sr, 89Sr, 90Sr, or combinations of these). After intake, rapid uptake to the systemic circulation is presumed with deposition in the skeleton occurring very quickly (ATSDR 2004). To simplify the examples presented below, single, negative-exponential-decaying dose-rate patterns to bone marrow are assumed here. Theoretical results for addressing more complex, monotonically decreasing dose-rate patterns to bone marrow are presented in the Appendix. A single, negative-exponential-decaying pattern is a reasonable assumption when >90% of the maximum radiation dose is associated with the long-term component of target-organ retention of radioactivity. For the skeleton, it is also a reasonable assumption for short-lived, bone-seeking radionuclides ingested in highly soluble forms because physical half-life would be the major determinant of effective retention half-time in the skeleton.
Exposure Scenarios Considered
Two hypothetical exposure scenarios are considered that involve large low-LET radiation doses to bone marrow from radiostrontium that would be expected to be associated with massive cell killing in the absence of protective radionuclide decorporation procedures (countermeasures) being implemented. The first scenario is a single ingestion intake (via a dietary liquid) of a radiostrontium isotope in a highly soluble form followed shortly thereafter (1 h) by single application of a hypothetical novel radionuclide decorporation therapy. The hypothetical novel therapy involves two drugs: one drug stimulates the partial release of radiostrontium from bone into the systemic circulation; the other drug chelates the systemic radiostrontium causing its excretion via the urinary pathway. The second scenario is also a single ingestion intake (via a dietary liquid) of a radiostrontium isotope followed 24 h later by single application of the hypothetical novel decorporation therapy. The efficacy of the decorporation therapy is presumed to be much less after a 24-h delay than after a 1-h delay because of the larger amount of radiostrontium that would have been incorporated into bone by the 24-h post-intake time point (ATSDR 2004).
Evaluating Lethality Risk
As in our previous modeling of lethality risks for radiation-induced deterministic effects, we rely on the hazard function (HF) model developed at our Institute by this author (Scott 2004, 2005, 2007). The HF model is quite general and can be adapted for application to a variety of radiation exposure scenarios (Scott and Hahn 1989; Scott and Dillehay 1990; Scott et al. 1995; Scott et al. 1998; Scott and Peterson 2003; Scott 2004, 2005). The HF model has undergone extensive peer review and is used internationally for radiological incident risk assessment by different, widely recognized organizations. This includes use by the U.S. Nuclear Regulatory Commission (USNRC 1998), by the Health Protection Agency (Harrison et al. 2007), and by the International Atomic Energy Agency (IAEA 2005).
Formal risk modeling often includes addressing dose, model parameter, and model uncertainties as well as variability between different individuals. Model uncertainty relates to a lack of knowledge about the appropriate model and can be addressed using Bayesian inference methods (Kass and Raftery 1995). However, for the research discussed here, variability and uncertainty characterization was well beyond the scope of the research. Only central estimates of risk (i.e., best estimates) are presented without a discussion of related variability and uncertainty.
Hazard Function Dose-Response Model
For competing modes of death (e.g., hematopoietic, gastrointestinal, pulmonary), the risk function R (i.e., the individual probability of death) is given as a function of the total lethality hazard Htotal (a cumulative hazard function) and the survival probability S by:

R = 1 − S = 1 − exp(−Htotal). (Equation 1)
Htotal represents the sum of lethality-mode-specific hazard functions which are evaluated in a way that accounts for changing dose rate and allows for combinations of low- and high-LET radiations (Scott and Hahn 1989; Scott 2004). Here, only a single mode of death is being assumed (hematopoietic) for beta/gamma-emitting radionuclides that are retained in the skeleton and irradiate the bone marrow. In this case, Htotal is set equal to the hematopoietic mode lethality hazard Hhem.
Evaluating the Lethality Hazard for the Hematopoietic Mode
The hematopoietic-mode lethality hazard Hhem can be evaluated in different ways depending on the type of exposure scenario considered. The deterministic-effects relative-biological-effectiveness-weighted dose has been used when combinations of high- and low-LET radiations are involved (IAEA 2005). However, with beta/gamma-emitting radionuclides dose weighting is unnecessary (i.e., weight = 1) (Scott 2004). The risk for acute lethality can be adequately characterized using the un-weighted, organ-specific absorbed radiation dose rate y(t) to bone marrow at exposure time t. The indicated lethality hazard is given by

Hhem = ln(2) [∫ y(t)/D50(y(t)) dt]^V = ln(2) X^V, (Equation 2)
where the integral is evaluated over the exposure period of interest; V (>0) is the shape parameter that determines the steepness of the dose-response curve; and X is the corresponding normalized dose in units of the lethality-mode-specific, median-lethal absorbed dose D50. While the notation D50 is used here for the median lethal dose (i.e., D50 = LD50), it can also be used for the median effective absorbed dose (ED50) for morbidity endpoints (Scott 2004). Thus, Equations 1 and 2 can also be applied to morbidity risks, with HF model parameters depending on the endpoint considered. The value X = 1 corresponds to the D50, which increases as the dose rate decreases. Thus, for low-dose-rate exposure, a larger absorbed dose D50 is associated with X = 1 than for high-dose-rate exposure. The cumulative absorbed radiation dose is obtained by integrating y(t) over the exposure period of interest. For the hematopoietic mode of death and for low-LET beta and/or gamma radiation, the central estimate of V is 6 for adult humans of both sexes and is based on Chernobyl accident victims (Scott and Hahn 1989). The indicated central estimate has also been applied to all ages after birth (Scott and Hahn 1989; Scott 2004). Uncertainty about the influence of age and other factors on V was addressed by assigning subjective lower and upper bounds of 4 and 8, respectively (Scott and Hahn 1989).
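As a numeric illustration, the sketch below assumes the HF hazard takes the form Hhem = ln(2)·X^V (so that X = 1, the median lethal dose, yields R = 0.5 exactly) with the central-estimate shape parameter V = 6 quoted above.

```python
import math

def lethality_risk(X, V=6.0):
    """Central-estimate lethality risk R = 1 - exp(-H), with H = ln(2) * X**V.

    X is the normalized dose (absorbed dose in units of the dose-rate-
    dependent D50); V = 6 is the central-estimate shape parameter for the
    hematopoietic mode of death in humans.
    """
    return 1.0 - math.exp(-math.log(2.0) * X ** V)

print(lethality_risk(1.0))  # ~0.5: X = 1 corresponds to the D50
print(lethality_risk(0.5))  # ~0.01: very low risk at half the D50
print(lethality_risk(1.5))  # ~1.0: lethal to essentially all exposed persons
```

Note that these example values reproduce the reference points quoted later in the text: X = 0.5 gives a risk near 0.01 and X = 1.5 a risk near 1.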
Usefulness of the Normalized Dose
The normalized dose (i.e., X) was introduced to address influences of a changing dose rate and situations that involve combined exposure to low- and high-LET radiation. The integral in Equation 2 is presented in a number of publications (e.g., Scott et al. 1988, 1998; Scott 1991) and can be explained as follows. The product y(t)dt gives the small absorbed radiation dose increment in the short exposure time interval (t, t+dt). The absorbed dose increment is converted to an increment, dX, in the normalized dose X by dividing the incremental absorbed dose y(t)dt by D50(y(t)). These increments are added (via integration). This normalized-dose-increment approach to evaluating the risk of a specific deterministic effect (e.g., lethality via the hematopoietic or pulmonary mode) has been validated by laboratory animal studies designed specifically for HF model testing (Scott et al. 1987, 1989; Filipy et al. 1989). This includes studies involving inhalation exposure to insoluble aerosols that contained mixtures of alpha- and beta-emitting radionuclides (Scott et al. 1990). The normalized-dose approach was also validated by applying it to data where an initial brief exposure to high-dose-rate low-LET radiation was followed immediately by chronic low-dose-rate exposure (Scott et al. 1998).
The dose-normalization approach was motivated by the observation that when absorbed dose was divided by the median lethal dose for the hematopoietic mode of death, variability related to age, species, dose rate, and photon radiation energy was greatly reduced so that when lethality risk was plotted vs. this normalized dose, data from a number of different animal studies clustered around the same dose-response curve (Figure 3 in Scott et al. 1988).
Computer codes used internationally for radiological risk assessment employ the full HF model and related normalized doses to each critical target organ to assess overall lethality risk (NRC/CEC 1997; Evans et al. 1993). Critical-organ-specific increments ΔX over consecutive time steps Δt (e.g., 1-day steps) are evaluated (based on the absorbed dose increment for each time step divided by the D50 evaluated at the average dose rate for the time step) and added to obtain an estimate of the total normalized dose X. Relative biological effectiveness (RBE) weighted dose rate (based on the RBE for the deterministic effect of interest) is used for evaluating y(t) for combined exposure to low- and high-LET radiations (Scott 2004; IAEA 2005). The unit of radiation dose for the combined exposure is the gray-equivalent (Gy-Eq) (IAEA 2005).
The full HF model implemented based on normalized dose has been used to develop an extended framework for emergency response criteria related to radiological incidents (IAEA 2005).
The normalized dose X is also quite useful for addressing abrupt changes in dose rate (e.g., the dose-rate reduction associated with applying protective decorporation countermeasures). For example, the normalized dose increment that occurs before applying a given radionuclide decorporation procedure can be evaluated and added to the additional normalized dose increment (or increments) that occurs after application of the procedure. Risk is then evaluated based on the total normalized dose (i.e., the sum of the increments). This is illustrated in this paper for hypothetical applications of radionuclide decorporation countermeasures and demonstrates the expected impact of different time delays of radionuclide decorporation therapy on its efficacy. Evaluating the normalized dose X requires a functional relationship between D50 and the radiation absorbed dose rate y (a fixed dose rate).
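The increment bookkeeping can be sketched as follows. The sketch assumes the D50(y) form θ∞ + θ1/y introduced in the next section, with the human central estimates quoted later in the text (θ∞ = 3.0 Gy, θ1 = 0.07 Gy²/h); the initial dose rate, the 24-h treatment delay, and the 80% removal efficacy are hypothetical values chosen only for illustration.

```python
import math

THETA_INF, THETA_1 = 3.0, 0.07  # Gy and Gy^2/h; human central estimates

def d50(y):
    """Dose-rate-dependent median lethal dose (Gy); y in Gy/h."""
    return THETA_INF + THETA_1 / y

def x_increment(A, lam, hours):
    """Normalized-dose increment for y(t) = A*exp(-lam*t) over `hours`,
    accumulated in 1-h steps (midpoint rule)."""
    X = 0.0
    for h in range(int(hours)):
        y = A * math.exp(-lam * (h + 0.5))
        X += y / d50(y)
    return X

# Hypothetical scenario: initial bone-marrow dose rate 0.05 Gy/h, effective
# retention half-time 50 d; decorporation therapy at 24 h removes 80% of the
# retained radiostrontium (an assumed efficacy).
lam = math.log(2) / (50 * 24)                  # effective decay constant, 1/h
X_before = x_increment(0.05, lam, 24)          # increment before therapy
A_after = 0.2 * 0.05 * math.exp(-lam * 24)     # rate drops to 20% at 24 h
X_after = x_increment(A_after, lam, 500 * 24)  # increment after therapy
X_total = X_before + X_after                   # total normalized dose
```

The total normalized dose X_total drives the risk via Equations 1 and 2; comparing the resulting risk with the no-countermeasure value gives the RAV for the scenario.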
Functional Relationship between D50 and the Radiation Absorbed Dose Rate
The notation D50(y) is used here for the dose-rate-dependent D50. The D50(y) for death via the hematopoietic mode has been demonstrated to be adequately characterized using the empirical relationship (Scott et al. 1988; Scott and Hahn 1989; NRPB 1996; USNRC/CEC 1997)

D50(y) = θ∞ + θ1/y, (Equation 3)

which was developed for exposure to low-LET beta and/or gamma radiation. Similar relationships can be applied to other lethality and morbidity risks, with parameter values depending on the endpoint considered (NRPB 1996). Beta and gamma radiations are treated as equally effective for inducing deterministic radiation effects. The parameter θ∞ is the asymptotic value of D50(y) at very high dose rates y. The term θ1/y accounts for the steep rise in the D50 as the dose rate decreases to very low values, allowing for more efficient recovery from radiation damage than after high-rate exposure (Scott et al. 1988; Scott and Hahn 1989).
It is useful to review the development of Equation 3. Early research on the influence of radiation dose rate on D50(y) for death via the hematopoietic mode suggested that D50(y) decreased as a power function of y (e.g., θy⁻ⁿ), where θ and n are fixed positive parameters and y is a fixed value of the dose rate (see, for example, Bateman et al. 1962). According to the indicated power-function relationship, D50(y) progressively decreases towards zero as the dose rate y increases (Jones 1981; Morris and Jones 1988). Thus, with extremely high but constant dose rates, even a very small dose (near zero) could be lethal. However, our early research, which involved examining essentially all of the then-available data on the influence of dose rate on the median lethal dose, led us to conclude that for high-dose-rate exposure there was strong evidence for D50(y) approaching an asymptotic value significantly greater than zero (represented by the parameter θ∞ in Equation 3). Apparently, this phenomenon was first observed by John Ainsworth, who pointed out the observation to me and kindly provided me with a data set on D50(y) that his research group had generated over the years. In an acute lethality study headed by Dr. Ainsworth, no significant difference was found in the median lethal dose when mice received total-body exposure to external gamma rays at 60 Gy/h (body midline dose rate) compared to pulsed exposure (TRIGA Reactor) at rates in the range of 5.4×10⁵ to 2.4×10⁶ Gy/h (Ainsworth et al. 1964). Thus, there was no significant difference in D50(y) when the gamma-ray dose rate was increased more than 8000-fold from the already very high value of 60 Gy/h. The observed value of D50(y) after the ultra-high dose rate (7.82 Gy; 95% confidence interval 7.55–8.18 Gy) was much larger than the expected value of near zero and not significantly different from the value obtained for the 60 Gy/h exposure (7.97 Gy; 95% confidence interval 7.75–8.21 Gy).
Similar results were reported for dogs exposed bilaterally to 1-MeV x-rays (Ainsworth et al. 1964). The observation of D50(y) approaching an asymptotic value >> 0 at high dose rates is here named the Ainsworth Phenomenon in honor of his contributions during his life to the understanding of the influences of radiation dose rate on the median lethal dose. The Ainsworth phenomenon can be explained on the basis of all recovery processes being overwhelmed during the period of irradiation. For exposure at very low rates, recovery can take place during the exposure period, causing an increase in D50(y) (Scott et al. 1988).
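The two behaviors just described, the steep rise at low dose rates and the asymptote at high rates, can be seen numerically in a short sketch of Equation 3, using the human central estimates quoted later in the text (θ∞ = 3.0 Gy, θ1 = 0.07 Gy²/h) as assumed inputs.

```python
def d50(y, theta_inf=3.0, theta_1=0.07):
    """Empirical D50(y) = theta_inf + theta_1/y (Gy); y in Gy/h."""
    return theta_inf + theta_1 / y

for y in (0.01, 0.1, 1.0, 10.0, 1.0e6):
    print(f"y = {y:g} Gy/h -> D50 = {d50(y):.4f} Gy")
# D50 rises steeply at low dose rates (10.0 Gy at 0.01 Gy/h) and flattens
# toward theta_inf = 3.0 Gy at very high rates (the Ainsworth Phenomenon).
```
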
In our initial modeling of D50(y) vs. y, the ratio θ1/y in Equation 3 was replaced by the construct θ1y⁻ⁿ, with n being a free parameter (Scott et al. 1988). The subscript 1 on the parameter θ was used to emphasize that the parameter represents the expected increase in the D50 above its asymptotic value θ∞ when y has unit value (e.g., y = 1 Gy/h). Based on the work of Bateman et al. (1962), it was expected that n would be found to be approximately 1/3. However, central estimates considerably different from n = 1/3 were found in all cases where animal data spanned the dose-rate range from about 10⁻² to 10⁻¹ Gy/h (Scott et al. 1988). Central estimates for n and their associated standard deviations for swine, dogs, goats, and sheep were 0.9 ± 0.1, 0.8 ± 0.1, 1.1 ± 0.3, and 1.0 ± 0.5, respectively (Scott et al. 1988). Data for rats and mice did not span the dose-rate range of interest and were considered unsuitable for estimating n. The average of the central estimates of n for swine, dogs, goats, and sheep was 0.95 ± 0.13, or approximately 1, implying θ1y⁻¹ = θ1/y for use in Equation 3 (Scott et al. 1988). Thus, Equation 3 has been employed since the 1980s and has been used for a variety of deterministic effects (see, for example, NRPB 1996 and IAEA 2005).
Table 1 presents estimates (central and associated standard deviation) of model parameters θ1 and θ∞ derived using available data for uniform gamma or high-energy x-ray exposure of different mammals (rats, mice, swine, dogs, goats, sheep, and humans) and is based on earlier publications (Scott and Hahn 1989; Scott and Dillehay 1990). No standard deviation is provided in the table for the parameter estimates for humans because of the way they were derived. Information on uncertainty for these parameters is provided below. Values in Table 1 apply to circumstances where no supportive medical care is provided. The indicated values for θ1 and θ∞ are highly correlated (R2 = 0.94, p < 0.001).
Estimates of θ1 and θ∞ for the hematopoietic mode of death for different mammal species for low-LET radiation (beta and/or gamma) when no medical support is provided, based on Scott and Dillehay (1990) and Scott and Hahn (1989).
Only data for individuals or group presumed to be of normal susceptibility to irradiation were used to obtain the indicated parameter estimates (Scott and Hahn 1989).
For the hematopoietic mode of death and for low-LET beta and gamma radiation, central estimates for HF model parameters for humans are θ∞ = 3.0 Gy and θ1 = 0.07 Gy²/h, based on data and information related to humans exposed briefly (at high dose rates) or chronically (at low dose rates) to low-LET radiation (Scott and Hahn 1989). The 3 Gy estimate is similar to the two estimates (2.9 Gy and 3.4 Gy) developed by Levin et al. (1992) via fitting two different dose-response models to acute lethality data for human occupants of reinforced concrete structures in Nagasaki, Japan at the time of the atomic bomb detonation.
Subjective lower and upper bounds for θ∞ for the hematopoietic mode of death in humans have been developed and are presumed to account for uncertainty about the influence of age, gender, and other factors. The lower and upper bounds are 2.5 and 3.5 Gy, respectively (Scott and Hahn 1989). The corresponding bounds for θ1 are 0.06 and 0.08 Gy²/h, respectively (Scott and Hahn 1989). The impact of the indicated HF model parameter uncertainty on risk-assessment reliability has been addressed in other publications (USNRC/CEC 1997; Scott and Peterson 2003).
To account for the influence of supportive medical care provided to radiation victims, the model parameters θ1 and θ∞ are increased by a fixed multiplication factor (protection factor [PF]) (Scott 1991, 2005). The central estimate of the PF for the hematopoietic mode of death and for supportive medical treatment that was developed during the 1990s was 1.5 (Scott 1991; Evans et al. 1993). A larger value may apply today given the advancements that have been made in treating radiation injuries.
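A minimal sketch of this adjustment follows; PF = 1.5 is the 1990s central estimate quoted above, and the parameter values are the human central estimates from this section.

```python
def apply_protection_factor(theta_inf, theta_1, pf=1.5):
    """Scale both D50(y) parameters by the protection factor (PF) to reflect
    supportive medical care (larger parameters -> larger D50 -> lower risk)."""
    return theta_inf * pf, theta_1 * pf

# Human central estimates without medical support, adjusted for supportive care:
print(apply_protection_factor(3.0, 0.07))  # -> (4.5, ~0.105)
```
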
Parameter estimates in Table 1 [based on Scott et al. (1988), Scott and Dillehay (1990) and on Scott and Hahn (1989) and Equation 3] were used to compare D50(y) values for the hematopoietic mode of death for different mammals (rats, mice, dogs, swine, goats, sheep, and humans) that were exposed to high-energy ionizing photon radiation at different dose rates. Predicted values for the D50 vs. dose rates of 0.01, 0.1, 1, 10, and 100 Gy/h are presented in Figure 1. Note that the Ainsworth Phenomenon is clearly shown at high dose rates for each of the different species. In each case, the asymptotic value for D50(y) exceeded 1.5 Gy (1,500 mGy) (see values for θ∞ in Table 1). Note also that similar estimates of D50(y) were obtained for dogs, goats, and humans, indicating (as previously recognized [Scott et al. 1988]) that dogs and goats might be suitable surrogates for humans when studying radiation toxicity to the hematopoietic system. Dogs are also considered a good animal model (surrogate for humans) for studying the biokinetics of bone-seeking radionuclides such as 85Sr and 90Sr. Results presented in Figure 1 are expected to also apply to low-LET beta radiation or mixtures of beta and gamma radiations (Scott and Hahn 1989).
Predicted median lethal dose (D50(y)) for the hematopoietic mode of death for different mammals (rats [Sprague-Dawley], mice [white, non-inbred], dogs, swine, goats, sheep, and humans) as a function of the external low-LET radiation dose rate y to bone marrow in Gy/h. For each species, D50(y) is predicted to approach an asymptotic value > 1.5 Gy (1,500 mGy) as dose rate becomes large (Ainsworth Phenomenon). Dose rate is presented as a categorical variable and is not aligned to tick marks.
The HF model has also been applied to lethality risk evaluation for exposure to high-LET alpha radiation or to mixtures of high-LET alpha and low-LET beta and gamma radiations. For combined exposures to low- and high-LET radiations, RBE-weighted dose (e.g., in Gy-Eq) has been used as mentioned above.
For exposure to mainly alpha radiation from 210Po (or other alpha radiation sources), HF model parameters developed for low-LET radiation can be adjusted for direct applications to high-LET radiation (Scott 2007). To make the adjustment, θ∞ should be divided by the RBE for alpha radiation relative to gamma rays and θ1 should be divided by RBE2 (i.e., the square of RBE) (USNRC/CEC 1997; Harrison et al. 2007; Scott 2007).
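A sketch of the high-LET adjustment just described; the RBE value used in the example is illustrative only, not a recommended value.

```python
def adjust_for_high_let(theta_inf, theta_1, rbe):
    """Adjust low-LET HF model parameters for direct application to high-LET
    alpha radiation: theta_inf is divided by the RBE and theta_1 by RBE squared."""
    return theta_inf / rbe, theta_1 / rbe ** 2

# Illustrative only: an assumed RBE of 2 for the deterministic endpoint.
print(adjust_for_high_let(3.0, 0.07, 2.0))  # -> (1.5, 0.0175)
```
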
Analytical Solution for X for Single, Negative-Exponential-Decaying Dose Rate
The following analytical solution for the dose X in Equation 2 was previously published (using somewhat different notation) for a single negative-exponential-decaying dose-rate pattern to bone marrow with initial dose rate A and decay parameter λ (Scott and Dillehay 1990):

X{t, A} = D{t, A}/θ∞ − (θ1/[λθ∞²]) ln[(θ∞A + θ1)/(θ∞A exp(−λt) + θ1)]. (Equation 4)
D{t, A} is the exposure-time-dependent radiation absorbed dose to the target organ at exposure time t and is evaluated as (A/λ)[1 − exp(−λt)] for the dose-rate pattern considered here. Here A is evaluated as the dose rate to bone marrow shortly after uptake of radiostrontium from the systemic circulation. This is a reasonable assumption for intake via ingestion or inhalation when the radionuclide is in a highly soluble form. The parameter λ relates to the effective retention half-time (T1/2) of the radionuclide in bone according to the equation λ = [ln(2)]/T1/2. The parameter λ is the sum of two components (λp for physical decay and λb for biological removal). For short-lived radionuclides that are avidly retained in the skeleton (e.g., 85Sr and 89Sr), λb is much smaller than λp, so that λ ≈ λp. For a population of humans and for bone-seeking radionuclides such as radiostrontium, λb is known to vary among individuals (ATSDR 2004). The variability may in part relate to variations in gene sequences that impact bone mineralization and remodeling processes. Influential polymorphisms in the indicated genes as well as other genes could lead to altered radionuclide biokinetics. There is a need for a new international research initiative related to genetics-based radionuclide (GBR) biokinetics modeling.
The term D{t, A}/θ∞ in Equation 4 can be replaced with Xmax{t, A} (the value of X{t, A} when recovery is overwhelmed during high-rate exposure), and the term following the negative sign can be replaced with Q{t, A}. The function Q{t, A} is called the normalized dose adjustment (NORDA). The NORDA is due to the body's protective measures (e.g., repopulation of lost bone marrow cells and repair of DNA damage) that come into play when the dose rate is low, making the normalized dose shrink from the maximum value Xmax{t, A} that occurs when the dose rate is high. This leads to the following useful equation:

X{t, A} = Xmax{t, A} − Q{t, A}. (Equation 5)
An equation analogous to Equation 5 is utilized in the Appendix to obtain results for application to any type of dose-rate pattern that monotonically decreases with time. For very high dose rates, there is little to no opportunity for recovery from damage during irradiation, so the NORDA (i.e., Q{t, A}) approaches zero as the dose rate becomes very large and the normalized dose essentially equals Xmax{t, A}.
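The closed form can be sketched directly by integrating y/D50(y) from Equation 2, with the D50(y) form of Equation 3, for y(t) = A·exp(−λt); the human central estimates for θ∞ and θ1 are assumed as defaults.

```python
import math

def x_closed_form(t_h, A, lam, theta_inf=3.0, theta_1=0.07):
    """Normalized dose X{t, A} for y(t) = A*exp(-lam*t):
    Xmax{t, A} minus the NORDA Q{t, A}.

    Xmax{t, A} = D{t, A}/theta_inf, with absorbed dose
    D{t, A} = (A/lam)*(1 - exp(-lam*t)); Q{t, A} accounts for
    recovery processes that operate when the dose rate is low.
    """
    D = (A / lam) * (1.0 - math.exp(-lam * t_h))  # absorbed dose, Gy
    x_max = D / theta_inf
    q = (theta_1 / (lam * theta_inf ** 2)) * math.log(
        (theta_inf * A + theta_1)
        / (theta_inf * A * math.exp(-lam * t_h) + theta_1)
    )
    return x_max - q
```

Because Q{t, A} is non-negative, X never exceeds Xmax; and because Q grows only logarithmically with A while Xmax grows linearly, the NORDA becomes negligible relative to Xmax at very high initial dose rates, as described above.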
Asymptotic Solution for Normalized Dose
Over time and for a single, negative-exponential-decaying dose-rate pattern, X{t, A} will approach an asymptotic value X∞{A} (i.e., asymptotic normalized dose) that can be obtained by taking the limit of X{t, A} (based on Equation 5) as t → ∞. This limit is given by the following:
The new terms on the right-hand side of the equations represent the following asymptotic relationships:
and
The absorbed dose D∞{A} represents what historically has been called the infinite dose (the committed dose evaluated over all times). The term Q∞{A} is the maximum value of the NORDA for a single, negative-exponential-decaying dose-rate pattern. The asymptotic normalized dose X∞{A}, when evaluated according to Equation 6, will be linearly related to T1/2, because λ (which is inversely related to T1/2) appears in the denominator of both Equations 7 and 8.
Figure 2 shows the central estimate of X∞{A} for a range of initial dose rates A to bone marrow when T1/2 = 50, 100, or 1000 d. A value of T1/2 = 50 d would be expected to be representative of 89Sr, because it is a short-lived radionuclide with a physical half-life of 50.6 d. For short-lived radionuclides avidly retained in the skeleton, the effective retention half-time should be similar to the physical half-life.
Central estimates of the asymptotic normalized dose X∞{A} for the hematopoietic mode of death in humans for a single, negative-exponential-decaying, dose-rate pattern of beta and/or gamma radiation to bone marrow of humans. Results are presented for different initial dose rates A and for three effective retention half-times: 50 d (diamonds), 100 d (squares), and for 1000 d (triangles).
Determining the Period over Which Radiation Dose Should Be Evaluated
Plotting X{t, A} vs. radiation exposure time t is a useful way of determining the period over which radiation dose should be evaluated when assessing lethality risks for inhaled and ingested radionuclides. This period will depend on the initial dose rate A, the parameter λ (or alternatively T1/2), and the parameters θ∞ and θ1. The value of X{t, A} will increase initially and later approach the asymptotic value X∞{A}. Figure 3 shows the expected temporal pattern of buildup of X{t, A} in humans for the hematopoietic mode of lethality for T1/2 = 50, 100, and 1,000 d when A = 0.01 Gy/h and follow-up extends to 500 d after radiostrontium intake. For points of reference, an estimate of the population threshold for lethality is X∞{A} = 0.5 (R = 0.01 for the hematopoietic mode of death) and an estimate of the dose that would be fatal for all exposed persons is X∞{A} = 1.5 (see Table 2).
Central estimates of the lethality risk for the hematopoietic mode of death in humans as a function of the asymptotic value of the normalized dose.
A value of X∞{A} = 0.5 (lethality risk ≅ 0.01) has been used as an estimate of the threshold exposure in radiological risk assessment because of the steepness of the dose-response curve for the lethality risk (Scott 2004). A value of X∞{A} = 0.4 (lethality risk ≅ 0.003) may be preferred for the threshold estimate for some applications.
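The risk anchors quoted here and in Table 2 are consistent with a Weibull hazard-function dose-response. The specific form R = 1 − exp(−ln(2)·X^V) and the shape parameter V = 6 used in this sketch are assumptions chosen because they reproduce the quoted anchor values (X = 0.5 → R ≈ 0.01; X = 0.4 → R ≈ 0.003; X = 1.5 → R ≈ 1), not formulas taken from this paper's equations.

```python
import math

def lethality_risk(x: float, shape_v: float = 6.0) -> float:
    """Central-estimate lethality risk vs. the asymptotic normalized dose X,
    using an assumed Weibull hazard-function form R = 1 - exp(-ln(2) * X**V).
    V = 6 is an assumption that reproduces the anchor values in the text;
    note X = 1 gives R = 0.5 (the median lethal normalized dose)."""
    if x <= 0.0:
        return 0.0
    return 1.0 - math.exp(-math.log(2) * x ** shape_v)

# Spot checks against the anchor points quoted in the text:
r_threshold = lethality_risk(0.5)  # ~0.011 (population threshold estimate)
r_alt = lethality_risk(0.4)        # ~0.003 (alternative threshold estimate)
r_fatal = lethality_risk(1.5)      # ~1 (fatal for essentially all exposed)
```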
Central estimates of the temporal pattern of build-up of the normalized dose X{t, A} for the hematopoietic mode of death in humans when radiation dose rate to bone marrow decreases as a single, negative-exponential-decaying pattern over time after intake of a bone-seeking radionuclide that emits beta and/or gamma radiation. Results are presented for an initial dose rate of 0.01 Gy/h and for three effective retention half-times: 50 d (diamonds), 100 d (squares), and for 1000 d (triangles).
Special consideration needs to be given to the time period over which radiation dose is evaluated when assessing lethality risks for ingested (or inhaled) radionuclides. Preparing a table of values of X{t, A}/X∞{A} vs. t/T1/2 (normalized time in units of T1/2) is a useful way of determining the period over which radiation dose should be evaluated when assessing lethality risks for single, negative-exponential-decaying dose-rate patterns. Exploratory analyses revealed that, for a given set of HF model parameters, the result one obtains is determined by the value assigned to the normalized time. Thus, altering both t and T1/2 by the same factor (e.g., factor-of-2 increases for both) yields the same result. Because the value of X{t, A} increases with increasing t and eventually approaches the asymptotic value X∞{A}, the ratio X{t, A}/X∞{A} will eventually approach the asymptotic value of 1.0. The lethality risk R will be expected to increase so long as X{t, A}/X∞{A} increases.
Table 3 provides results for the ratio X{t, A}/X∞{A} vs. t/T1/2 for the hematopoietic mode of death for radiostrontium incorporated into the skeleton. Results are specific for single, negative-exponential-decaying dose-rate patterns and for A = 0.001, 0.002, 0.004, 0.006, 0.008, or 0.01 Gy/h. The results are weakly dependent on A and are strongly dependent on normalized time (i.e., t/T1/2). Based on the results presented in Table 3, most of the lethality risk accumulation would be expected to have occurred by the time t for which t/T1/2 = 4. This time is given by 4T1/2, which for example would be 400 d after radionuclide intake when T1/2 = 100 d. Thus, for short-lived radionuclides that are ingested in a highly soluble form, the asymptotic value for the normalized dose will be reached rather quickly. For long-lived radionuclides that are ingested in a highly soluble form, it may take more than a year for the asymptotic value to be achieved.
Ratio X{t, A}/X∞{A} vs. t/(T1/2) for the hematopoietic mode of death for radiostrontium incorporated into the skeleton.a
Results presented are determined by the ratio t/T1/2 and apply to all single, negative-exponential-decaying dose-rate patterns for the range of initial dose rates presented (A values in Gy/h).
For multiple negative-exponential-decaying, dose-rate patterns, the product 4T1/2 can be evaluated based on the long-term retention component. However, the more involved procedure presented in the Appendix is then needed for evaluating the normalized dose.
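The 4T1/2 rule of thumb can be checked directly for the absorbed dose itself (the normalized-dose ratio in Table 3 behaves similarly): for a single negative-exponential dose-rate pattern, the fraction of the committed dose delivered by normalized time n = t/T1/2 is 1 − 2^(−n), independent of A.

```python
def dose_fraction_delivered(n_half_times: float) -> float:
    """Fraction of the committed (infinite) dose D_inf = A/lambda delivered
    by normalized time t/T1/2 = n for a single negative-exponential
    dose-rate pattern: D{t}/D_inf = 1 - exp(-lambda*t) = 1 - 2**(-n)."""
    return 1.0 - 2.0 ** (-n_half_times)

# By t = 4*T1/2 (e.g., 400 d when T1/2 = 100 d), about 94% of the
# committed dose has been delivered, consistent with evaluating
# lethality risk out to 4 effective retention half-times.
frac_4 = dose_fraction_delivered(4.0)  # 0.9375
```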
Evaluating the Normalized Dose Increment that Occurs Before and After Application of Decorporation Countermeasures
The normalized-dose increment X{T, A} that accumulates between radiostrontium intake (e.g., as a result of a radiological terrorism incident) and application of decorporation countermeasures can be evaluated using the following equation, where T is the time at which the single decorporation treatment is applied:
The increment in the normalized dose that occurs after application of protective decorporation countermeasures depends on the scenario considered. It is assumed that the lethality risk associated with applying the countermeasures is negligible; thus, only risk associated with radiation exposure is addressed. As already indicated, we consider application of two hypothetical novel drugs: one that causes release of radiostrontium from bone (drug 1) and one that facilitates its removal from the body via chelation (drug 2). The dose increment X∞{Aalt(T)} that occurs after application of decorporation countermeasures for radiostrontium is evaluated using the asymptotic solution in Equation 6, after calculating the altered dose rate Aalt(T) to bone marrow at time T resulting from having applied the decorporation countermeasures. In this hypothetical example, the early dose rate to bone marrow is on average quickly reduced by a decorporation-time-dependent dose-rate-reduction factor (DRRF) given by DRRF(T), and the pattern of decline in dose rate afterward is presumed still to be a single negative exponential with parameter λ. The altered dose rate just after time t = T is therefore given by the following:
Equation 10 provides the new initial dose rate for the time interval being considered. The variable A in Equation 6 is therefore replaced by Aalt(T) to obtain the dose increment (an asymptotic solution):
or
The total normalized dose long after protective radionuclide decorporation countermeasures are applied is represented by Xpro and is given by the following:
The term X{T, A} in Equation 13 is the increment in the normalized dose that occurs before applying decorporation therapy and the term X∞{Aalt(T)} is the increment that occurs after the therapy is applied. The subscript pro stands for protective radionuclide decorporation. Lethality risk avoidance can now be estimated based on Xpro (Equation 13), X∞{A} (asymptotic normalized dose when no countermeasures are applied), and Equation 1.
Evaluating Lethality Risk Avoidance Due to Protective Decorporation Countermeasures
The total hematopoietic-mode-lethality risk (central estimate) without decorporation countermeasures is indicated as Runpro and is given by the following:
The subscript unpro stands for unprotected. The corresponding risk (central estimate) when protective decorporation countermeasures are applied at time T is indicated as Rpro and is given by the following:
The central estimate of the lethality RAV can therefore be evaluated as follows:
The corresponding equation for the lethality RAP is as follows:
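Equations 10 and 13, together with the Runpro, Rpro, RAV, and RAP definitions above, chain together as in the sketch below. Because the NORDA term Q is not reproduced here, the normalized-dose functions X{t, A} and X∞{A}, and the risk function R, are passed in as caller-supplied callables; all function and argument names are illustrative.

```python
import math

def altered_dose_rate(a: float, lam: float, t_decorp: float, drrf: float) -> float:
    """Equation 10: dose rate to bone marrow just after a single decorporation
    treatment at time T, A_alt(T) = A * exp(-lambda*T) / DRRF(T)."""
    return a * math.exp(-lam * t_decorp) / drrf

def risk_avoidance(a, lam, t_decorp, drrf, x_of, x_inf_of, risk_of):
    """Central estimates of the lethality RAV and RAP.
    x_of(t, A)  -> normalized-dose increment X{t, A} before therapy
    x_inf_of(A) -> asymptotic normalized dose X_inf{A}
    risk_of(X)  -> lethality risk R as a function of normalized dose
    These callables must be supplied because the NORDA term Q is not
    reproduced in this sketch."""
    # Equation 13: total normalized dose with protective decorporation.
    x_pro = x_of(t_decorp, a) + x_inf_of(altered_dose_rate(a, lam, t_decorp, drrf))
    r_unpro = risk_of(x_inf_of(a))  # risk with no countermeasures (Runpro)
    r_pro = risk_of(x_pro)          # risk with decorporation at T (Rpro)
    rav = r_unpro - r_pro           # lethality risk avoidance (RAV)
    rap = rav / r_unpro if r_unpro > 0.0 else 0.0  # avoidance proportion (RAP)
    return rav, rap
```

Note that RAP normalizes the avoided risk by the unprotected risk, which is why it (rather than RAV) indexes countermeasure efficacy in the zone discussion that follows.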
In order to apply the above equations to real-world scenarios, one needs a plausible value for T1/2. The long-term retention of radiostrontium [presumably skeletal retention (ATSDR 2004)] has been studied in humans who were exposed to 90Sr in the Techa River area of Russia after fission products from a plutonium production process were released into the river. Techa River water and foods contaminated with radionuclides from the river were sources of the human exposures. Whole-body retention half-times for radiostrontium, estimated in a study population of 361 males and 356 females, were 28 y (central estimate) in males and 16 y (central estimate) in females (Tolstykh et al. 1997). Most of the difference in the retention half-times estimated for males and females is presumed to have resulted from a pronounced increase in the elimination rate in females after age 50 y (Tolstykh et al. 1997).
For applications of the above equations, a value of T1/2 = 16 y has been used for a hypothetical group of female children under the age of 16 y. This choice is to ensure that the asymptotic solutions derived can justifiably be employed. For T1/2 > about 25 y, the asymptotic solutions may overestimate risk (although not necessarily, depending on the initial dose rate), as a significant part of the calculated radiation dose would occur after the end of the life expectancy. This extra dose, however, may be associated with very low dose rates so that there is no significant contribution to the normalized dose and calculated lethality risk.
For lethality RAV and RAP evaluations for the hematopoietic mode of death after ingesting radiostrontium, two subjective values of DRRF were considered: DRRF = 10 (dramatic protection when the radionuclide decorporation therapy is administered 1 h after radionuclide intake) and DRRF = 2 (modest protection when radionuclide decorporation therapy is applied 24 h after radionuclide intake). The radionuclide decorporation therapy considered here is for a single application. Equations are provided in the Appendix that can be applied to multiple applications over time of radionuclide decorporation therapy.
The results (central estimates) obtained for the hematopoietic-mode lethality RAV are presented in Figure 4. The initial dose rate was allowed to range from 0.0001 to 0.004 Gy/h. Note that calculated results were the same whether radionuclide decorporation therapy was given at 1 h or at 24 h when the initial dose rate was <0.001 Gy/h. However, for initial dose rates >0.001 Gy/h, the lethality RAV progressively decreased toward zero as the initial dose rate increased to just above 0.002 Gy/h for the 24-h decorporation therapy results but not for the 1-h decorporation therapy results.
Central estimate of the hematopoietic mode lethality RAV for young (<16 y) female humans that ingest radiostrontium in a highly soluble form and afterwards receive radionuclide decorporation therapy (hypothetical) at 1 h (diamonds) or 24 h (squares) after intake of the radioactive substance. Evaluations were carried out for DRRF(1 h) = 10 (diamonds) and for DRRF(24 h) = 2 (squares) for a range of initial dose rates to bone marrow when T1/2 = 16 y. The two curves overlap for initial dose rates <0.001 Gy/h.
The corresponding results (central estimates) obtained for the hematopoietic-mode lethality RAP are presented in Figure 5 and are better indicators of the efficacy of radiation countermeasures (related to risk reduction) than are the RAV results. For initial dose rates <0.001 Gy/h, essentially all of the lethality risk was calculated as being avoided (i.e., RAP = 1) by the radionuclide decorporation therapy, whether administered at 1 h or 24 h after intake of radiostrontium. This dose-rate zone is called the green zone (essentially all of the lethality risk is expected to be avoided, i.e., RAP = 1). For initial dose rates >0.001 Gy/h but <0.002 Gy/h, the efficacy of the decorporation therapy (with respect to lethality risk reduction) progressively decreased toward zero (i.e., toward RAP = 0) as the initial dose rate increased toward 0.002 Gy/h for the calculated 24-h decorporation therapy results. For the 24-h post-intake application of countermeasures, this dose-rate zone is considered the yellow zone (the zone for which some but not all of the lethality risk is expected to be avoided, i.e., 0 < RAP < 1). For the 1-h post-intake application of countermeasures, the results indicate expected complete avoidance of the lethality risk (i.e., RAP = 1) over the range of initial dose rates considered; thus, for this decorporation scheme, the green zone extends at least up to an initial dose rate of 0.004 Gy/h. However, for the 24-h post-intake decorporation scheme, initial dose rates >0.002 Gy/h would be expected to be lethal for everyone (no lethality risk avoided, since RAP = 0). This dose-rate zone is called the red zone, as none of the lethality risk (Runpro = 1) is expected to be avoided (Rpro = Runpro = 1).
Central estimates of the hematopoietic mode lethality RAP for young (<16 y) female humans that ingest radiostrontium in a soluble form and afterwards receive radionuclide decorporation therapy (hypothetical) at 1 h (diamonds) or 24 h (squares) after intake of the radioactive substance. Evaluations were carried out for DRRF(1 h) = 10 (diamonds) and for DRRF(24 h) = 2 (squares) for a range of initial dose rates to bone marrow when T1/2 = 16 y. The two curves overlap for initial dose rates <0.001 Gy/h.
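The green/yellow/red zone definitions above amount to a simple classifier keyed on the lethality RAP. The tolerance used below to decide "essentially all" (green) and "none" (red) is an assumption; the text uses the exact values RAP = 1 and RAP = 0.

```python
def dose_rate_zone(rap: float, tol: float = 1e-3) -> str:
    """Classify a dose-rate zone for a given decorporation scheme from the
    lethality risk avoidance proportion (RAP):
      green  -> essentially all lethality risk avoided (RAP ~ 1)
      yellow -> some but not all risk avoided (0 < RAP < 1)
      red    -> no risk avoided (RAP ~ 0, Rpro = Runpro = 1)
    The tolerance 'tol' is an illustrative assumption."""
    if rap >= 1.0 - tol:
        return "green"
    if rap <= tol:
        return "red"
    return "yellow"
```

For example, under the 24-h scheme described in the text, an initial dose rate of 0.0005 Gy/h would yield RAP = 1 (green), while one above 0.002 Gy/h would yield RAP = 0 (red).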
The results presented here relate to a single, negative-exponential-decaying dose-rate pattern. Such a pattern may not apply to some real-world situations (e.g., long-lived bone-seeking radionuclides), as available data (ATSDR 2004) indicate that for 90Sr early clearance is more rapid than later clearance (long-term retention T1/2 > 15 y). For such circumstances, the results presented in the Appendix, which allow evaluating exposure-time-dependent increments X{t, Δt, z(t)} in the normalized dose over small exposure-time increments (t, t + Δt) for any monotonically decreasing dose-rate pattern to bone marrow (or other organs), can be applied. The function z(t) is the radiation absorbed dose rate at time t when the impact of radionuclide decorporation is accounted for. The approach presented in the Appendix is a generalization of what is presented here and could be used to design optimal radionuclide decorporation schemes and to identify dose-rate zones (green, yellow, and red) for a given decorporation schedule. Boundaries for the indicated dose-rate zones will depend on the exposure/countermeasures scenario considered.
Using the indicated computational procedures, along with a reliable computer code (e.g., Eckerman 2006) that evaluates radiation absorbed doses and dose rates to key body organs and tissues, allows for a more formal risk assessment related to evaluating lethality RAV and RAP; however, such an undertaking is beyond the scope of our current research.
The research described here is an extension of much earlier research conducted by the author. Unfortunately, support for continuing the earlier research disappeared some years ago, and interest in supporting the type of theoretical/modeling research presented in this paper has not increased since that time. Unless more appreciation is developed by the scientific community for the important contributions that theoretical/modeling research can make to the advancement of scientific knowledge related to planning for and managing radiological terrorism incidents, the next generation of scientists may be devoid of essential knowledge needed for addressing the types of issues that are addressed in this paper.
CONCLUSIONS
This paper provides a theoretical framework for evaluating hematopoietic-mode lethality RAV and RAP, designed to facilitate planning for and managing radiological terrorism incidents that involve ingestion by humans of large quantities of bone-seeking beta/gamma-emitting radionuclides. The lethality RAP, when evaluated as outlined, could be used in developing optimal radionuclide decorporation schemes and for identifying green, yellow, and red dose-rate zones for a given decorporation scheme. For the green dose-rate zone, essentially all of the lethality risk is expected to be avoided (RAP = 1) as a consequence of the radionuclide decorporation scheme used. For the yellow zone, some of the risk is expected not to be avoided (0 < RAP < 1) for the decorporation scheme used. For the red zone, none of the lethality risk is expected to be avoided (RAP = 0 and Rpro = Runpro = 1) for the decorporation scheme used.
ACKNOWLEDGMENTS
This research was supported by Lovelace Respiratory Research Institute. I am grateful to the reviewers for their very helpful comments and to Vicki Fisher for editorial assistance.
APPENDIX
References
Ainsworth EJ, Leong GF, Kendal K, Alpen EL, and Albright ML. 1964. Pulsed irradiation studies in mice, rats and dogs. In: Biological Effects of Neutron and Proton Irradiation, Vol. II, pp 15–30. IAEA, Vienna
Ansari A. 2004. Dirty bomb pills, shots, weeds, and spells. Health Physics News XXXII(11):1–7
ATSDR (Agency for Toxic Substances and Disease Registry). 2004. Toxicological Profile for Strontium. U.S. Department of Health and Human Services, Agency for Toxic Substances and Disease Registry, April 2004
Augustine AD, Gondré-Lewis T, McBride W, Miller L, Pellmar RC, and Rockwell S. 2005. Animal models for radiation injury, protection and therapy. Radiat Res 164:100–109
Bateman JL, Bond VP, and Robertson JS. 1962. Dose-rate dependence of early radiation effects in small mammals. Radiology 79:1008–1014
Bunn M. 2006. A mathematical model of the risk of nuclear terrorism. Annals AAPSS 607:103–120
CRS (Congressional Research Service, The Library of Congress). 2004. Terrorist "Dirty Bombs": A Brief Primer. CRS Report to Congress, Revised through the CRS Web. Order Code RS21528, Updated April 1, 2004
Dombroski MJ and Fischbeck PS. 2006. An integrated physical dispersion and behavioral response model for risk assessment of radiological dispersion device (RDD) events. Risk Anal 26(2):501–514
Durante M and Manti L. 2002. Estimates of radiological risk from a terrorist attack using plutonium. Radiat Environ Biophys 41:125–130
Evans JS, Abrahamson MA, Bender BB, Boecker BB, Gilbert ES, and Scott BR. 1993. Health Effects Models for Nuclear Power Plant Accident Consequence Analysis. Part I: Introduction, Integration, and Summary. NUREG/CR-4214, ITRI-141, U.S. Nuclear Regulatory Commission, Washington, DC
FAO (Food and Agriculture Organization of the United Nations, International Atomic Energy Agency, International Labor Organization, OECD Nuclear Energy Agency, Pan American Health Organization, United Nations Office for the Coordination of Humanitarian Affairs, World Health Organization). 2002. Preparedness and Response for a Nuclear or Radiological Emergency, Safety Requirements. Safety Standards Series No. GS-R-2. IAEA, Vienna
Filipy RE, Lauhala KE, McGee DR, Cannon WC, Buschbom RL, Decker JR, Kuffel EG, Park JF, Ragan HA, Yaniv SS, and Scott BR. 1989. Inhaled 147Pm and/or Total-Body Gamma Radiation: Early Mortality and Morbidity in Rats. NUREG/CR-5353, PNL-6891, U.S. Nuclear Regulatory Commission, Washington, DC
Goan RE. 2001. Update on the treatment of internal contamination. In: Ricks RC, Berger ME, and O'Hara FM (eds), The Medical Basis for Radiation-Accident Preparedness, pp 201–216. Elsevier, New York
González AJ. 2005. Radiation protection in the aftermath of a terrorist attack involving exposure to ionizing radiation. Health Phys 89(5):418–446
Harrison J, Leggett R, Lloyd D, Phipps A, and Scott B. 2007. Polonium-210 as a poison. J Radiol Prot 27:17–40
IAEA (International Atomic Energy Agency). 2005. Development of an Extended Framework for Emergency Response Criteria. Interim Report for Comments Jointly Sponsored by IAEA and WHO. International Atomic Energy Agency Report IAEA-TECDOC-1432, January 2005, Vienna, Austria
Jones TD. 1981. Hematologic syndrome in man modeled from mammalian lethality. Health Phys 41:83–103
Kass RE and Raftery AE. 1995. Bayes factors. J Am Stat Assoc 90(430):773–795
Levin SG, Young RW, and Stohler RL. 1992. Estimation of median human lethal dose computed from data on occupants of reinforced concrete structures in Nagasaki, Japan. Health Phys 63(5):510–521
Mettler FA and Voelz GL. 2002. Major radiation exposure–what to expect and how to respond. N Engl J Med 346(20):1544–1560
Morris MD and Jones TD. 1988. A comparison of dose-response models for death from hematological depression. Int J Radiat Biol 53:439–456
NCRP (National Council on Radiation Protection and Measurements). 1991. Some Aspects of Strontium Radiobiology. Report No 110, National Council on Radiation Protection and Measurements, Bethesda, MD
NCRP (National Council on Radiation Protection and Measurements). 2001. Management of Terrorist Events Involving Radioactive Material. Report No 138, NCRP, Bethesda, MD
NRC/CEC (National Research Council, Commission of European Communities). 1997. Probabilistic Accident Consequence Uncertainty Analysis, Early Health Effects Uncertainty Assessment. NUREG/CR-6545, EUR 16775, Vol. 2, A Joint Report by U.S. Nuclear Regulatory Commission and Commission of European Communities, Brussels-Luxembourg
NRPB (National Radiological Protection Board). 1996. Risk from Deterministic Effects of Ionizing Radiation. Report 7, No 3, Chilton, Didcot, Oxon OX11 ORD
Scott BR. 1991. Early occurring and continuing effects. In: Health Effects Model for Nuclear Power Plant Accident Consequence Analysis. Modifications of Models Resulting from Recent Reports on Health Effects of Ionizing Radiation. Low LET Radiation. NUREG/CR-4214, LMF-132, Part II, Addendum 1, Chapter 2, pp 3–5, U.S. Nuclear Regulatory Commission, Washington, DC
Scott BR. 2004. Health risks from high-level radiation exposures from radiological weapons. Radiat Prot Manage 21(6):9–25
Scott BR. 2005. Evaluating residual risks for lethality from deterministic effects after application of medical countermeasures against damage from inhaled radioactivity dispersal device released gamma-emitting radionuclides. Radiat Prot Manage 22(3):7–26
Scott BR. 2007. Health risk evaluations for ingestion exposure of humans to polonium-210. Dose-Response 5:94–122
Scott BR, Hahn FF, Newton GJ, Snipes MB, Damon EG, Mauderly JL, Boecker BB, and Gray DH. 1987. Experimental Studies of the Early Effects of Inhaled Beta-Emitting Radionuclides for Nuclear Accident Risk Assessment. Phase II Report. NUREG/CR-5025, LMF-117, U.S. Nuclear Regulatory Commission, Washington, DC
Scott BR, Hahn FF, McClellan RO, and Seiler FA. 1988. Risk estimators for radiation-induced bone marrow syndrome lethality in humans. Risk Anal 8:393–402
Scott BR and Hahn FF. 1989. Chapter 2, Early occurring and continuing effects. In: Health Effects Models for Nuclear Power Plant Accident Consequence Analysis, Low LET Radiation, Rev 1, Part II, NUREG/CR-4214, SAND85-7185, U.S. Nuclear Regulatory Commission, Washington, DC
Scott BR, Filipy RE, and Hahn HF. 1989. Models for Pulmonary Lethality and Morbidity After Irradiation From Internal and External Sources. NUREG/CR-5351, LMF-122 RH, U.S. Nuclear Regulatory Commission, Washington, DC
Scott BR and Dillehay LE. 1990. A model for hematopoietic death in man from irradiation of bone marrow during radioimmunotherapy. Br J Radiol 63:863–870
Scott BR, Hahn FF, Snipes MB, Newton GJ, Eidson AF, Mauderly JL, and Boecker BB. 1990. Predicted and observed early effects of combined α and β lung irradiation. Health Phys 59(6):791–805
Scott BR, Langberg CW, and Hauer-Jensen M. 1995. Models for estimating the risk of ulcers in the small intestine after localized single or fractionated irradiation. Br J Radiol 68:49–57
Scott BR, Lyzlov AF, and Osovets SV. 1998. Evaluating the risk of death via the hematopoietic syndrome mode for prolonged exposure of nuclear workers to radiation delivered at very low rates. Health Phys 74(5):545–553
Scott BR and Peterson VL. 2003. Risk estimates for deterministic health effects of inhaled weapons grade plutonium. Health Phys 85(3):280–293
Tofani A and Bartolozzi M. 2008. Ranking nuclear and radiological terrorism scenarios: The Italian case. Risk Analysis 28(5):1431–1443
Tolstykh EI, Kozheurov VP, Vyushkova OV, and Degteva MO. 1997. Analysis of strontium metabolism in humans on the basis of the Techa river data. Radiat Environ Biophys 36:2
USNRC (United States Nuclear Regulatory Commission). 1998. MELCOR Accident Consequence Code System for the Calculation of the Health and Economic Consequences of Accidental Atmospheric Radiological Releases. User's Guide, MACCS2 Ver. 1.12, Report NUREG/CR-6613, SAND97-0594, Vol. 1, Washington, DC
USNRC/CEC (United States Nuclear Regulatory Commission, Commission of European Communities). 1997. Probabilistic Accident Consequence Uncertainty Analysis, Early Health Effects Uncertainty Assessment, Vol. 2 Appendices. Expert B, Report NUREG/CR-6545, Vol. 2, pp C-31–C-60, Washington, DC