Abstract
Standardized protocols for repeat-dose toxicity studies have many advantages, including experimental redundancy (i.e., the use of more than a single experimental approach in the assessment of a given organ or tissue) and evaluation of numerous tissues to ensure detection of adverse effects, as well as the ability to develop robust historical control databases. However, traditional toxicology study designs may not adequately address questions of a mechanistic nature that might provide insights on whether particular toxicology findings in animals are relevant to humans. Such questions may be more readily answered using mechanism-based technologies such as toxicogenomics, proteomics, or metabonomics. These newer approaches may permit, for example, the detailed assessment of the transcriptional profile differences that distinguish a normal healthy tissue from a diseased or damaged tissue. The resultant information can be used to elucidate the mechanism and accompanying biomarkers for toxicity, as well as to identify potential molecular targets for therapeutic intervention by drugs. Despite their conceptual appeal, the use of emerging technologies in toxicology is accompanied by significant challenges. For example, toxicogenomic assessments entail the generation of large amounts of bioinformatic data that must be interpretable for their full value to be realized. Also, none of these newer approaches has established uniformly acceptable quality standards (e.g., as may be defined in inter-laboratory validation studies) or a track record of achievement in guiding regulatory decisions. As a result, newer techniques, at least for the present, are more likely to be focused on mechanistic questions with compounds of known toxicity (either positive indicator compounds or “failed” pharmaceutical candidates). If the use of a nascent or emerging technology is contemplated for mechanistic studies of pharmaceutical compounds later in preregistration development, it will be crucial for toxicologists to engage their regulatory colleagues in discussions at an early stage to ensure closer alignment in thinking. The successful use of emerging technologies to address toxicology issues will require a close partnership between industry and regulatory agencies.
In recent years, various new and revised regulatory guidelines for toxicity testing have appeared. These include, for example, the Organisation for Economic Co-operation and Development (OECD) testing guidelines, the U.S. Environmental Protection Agency OPPTS Series 870 Health Effects Test Guidelines, as well as guidelines of the International Conference on Harmonisation (ICH 1997). These documents have provided toxicologists with a clear framework for safety assessment studies in non-human test systems. These regulatory guidelines, along with their predecessor guidelines, have contributed to the establishment of standardized toxicology study designs that bring with them several distinct advantages. Standard studies often incorporate experimental redundancy (i.e., use of more than just one method) in assessing a broad range of toxicity endpoints in multiple tissues and organs. For example, liver toxicity may be indicated either by excursions of tissue-specific markers of injury (e.g., alanine aminotransferase [ALT], aspartate aminotransferase [AST], γ-glutamyltransferase [GGT], alkaline phosphatase [ALP], or bilirubin) in a clinical chemistry panel or by histopathologic evidence of injury. The redundancy of evaluations (i.e., use of both clinical chemistry and histologic assessments) and the inclusion of multiple tissues are recommended not only to assure scientific rigor, but also to maximize the likelihood of detecting important target tissue effects. An additional measure of redundancy is gained with the sequential extension of testing duration. As study duration increases from 2–4 weeks to 3 months and then to chronic treatment, toxicologists have the opportunity to confirm previous findings. In some regulatory arenas, redundancy is further reinforced by requirements for testing in at least two different test systems. For example, the nonclinical safety assessment of pharmaceutical products entails studies in both rodents and nonrodents (usually either dogs or monkeys). In the case of oncogenicity evaluations, ICH S1B requires testing in at least two different test systems, with rats and mice being commonly selected. Standardized approaches to toxicity testing have also permitted the development of robust historical control databases for parameters that are measured in various types of toxicology studies (e.g., acute and repeat-dose toxicity studies, reproductive toxicity studies, mutagenicity studies, etc.). These historical control data are often useful, if not essential, for proper interpretation of results from individual studies of new chemical entities. Finally, standardized toxicological testing protocols are amenable to the generation and comparison of data from a wide range of test materials within or across chemical platforms.
Despite their many attractive features, standardized toxicology studies, by design, may not adequately answer some key mechanistic questions. This should not be surprising. The level of observation in these in vivo studies is such that physiological complexities of any given organ system cannot routinely be evaluated at the molecular or mechanistic level. Moreover, because standardized protocols must be broadly applicable for the study of a variety of different test materials, they cannot realistically be expected at the same time to address highly focused mechanistic issues associated with only one or a few compounds. In addition, the emphasis on timelines in drug discovery efforts has dictated that higher-throughput approaches be used in an effort to develop assays with greater predictive value (Todd and Ulrich 1999). Consequently, many toxicologists are relying to an increasing degree on new in vitro and short-term in vivo technologies to answer questions that have not been easily approached previously with standard toxicology protocols. Applications and potential issues with some of these new and emerging technologies were described at a symposium at the 2002 Meeting of the American College of Toxicology (Lawton 2002; Leighton 2002; Rajpal 2002; Reynolds 2002; Watson 2002).
EXAMPLES OF NEW TECHNOLOGIES IN TOXICOLOGY: TOXICOGENOMICS AND TaqMan
Toxicogenomic assessments involve the screening of genes (numbering from one to thousands) for changes in expression patterns in response to treatment with a test material (Petricoin et al. 2002). The resultant gene expression data can then be compared to profiles generated by known toxicants to determine whether a correlative relationship exists that may help guide predictions regarding the potential toxicity of the test material in question. Toxicogenomic evaluations can readily be done in a tissue- or organ-specific manner and can be conducted on several different mammalian test species. Platforms and reagents are commercially available for humans, rats, mice, and, more recently, dogs. Conceptually, toxicogenomic data offer a very appealing way to predict the toxicity of an unknown compound. In particular, toxicogenomic data may be very useful early in development to help guide understanding of structure-activity relationships (SARs) within or between platforms. However, because numerous study design questions and data interpretation issues are yet to be definitively resolved with this technology, many pharmaceutical companies are understandably reluctant to use toxicogenomic methods to study their key compounds, particularly those in later stages of development. A key issue regarding the integration of toxicogenomic data in toxicology assessments is this: Toxicologists evaluate adverse responses that occur in the context of dose- (or exposure-) and time-dependent relationships. These adverse effects may be either exacerbated by secondary events (e.g., potentiation of the toxicity of one chemical by the presence of a second chemical) or attenuated by compensatory responses of the test system (e.g., decreased exposure in repeat-dosing regimens due to induction of drug metabolizing enzymes). Toxicogenomic studies should not be interpreted without appreciating the importance of dose, timing, and the multiplicity of modulating responses provoked in the test system by the toxic insult.
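As an illustration of the comparative step described above, the following minimal sketch (in Python) ranks reference compounds by the similarity of their expression profiles to that of a test compound. The compound names, gene-level fold-change values, and the use of a simple Pearson correlation are illustrative assumptions only, not a prescribed or validated analysis method.

```python
# Minimal sketch: comparing a test compound's expression profile against
# reference profiles of known toxicants. Values are hypothetical log2
# fold-changes (treated vs. control) for a shared set of genes.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical reference profiles for compounds of known toxicity.
reference_profiles = {
    "known_hepatotoxicant_A": np.array([2.1, -1.4, 0.3, 1.8, -0.9]),
    "known_hepatotoxicant_B": np.array([1.9, -1.1, 0.5, 1.5, -0.7]),
    "non_toxic_comparator":   np.array([0.1,  0.2, -0.1, 0.0,  0.3]),
}

# Hypothetical profile for the test compound over the same genes.
test_profile = np.array([1.7, -1.2, 0.4, 1.6, -0.8])

# Rank reference compounds by similarity (Pearson correlation) to the test profile.
for name, profile in reference_profiles.items():
    r, p = pearsonr(test_profile, profile)
    print(f"{name}: r = {r:.2f} (p = {p:.3f})")
```

In practice, of course, such comparisons span thousands of genes and must be interpreted in light of the dose- and time-dependencies discussed above; the correlation step shown here is only the simplest possible stand-in for that analysis.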
Another recent technologic advance is the TaqMan assay. TaqMan is a polymerase chain reaction (PCR)-based method that permits the sensitive and selective quantitative assessment of specific nucleic acid sequences (Gibson, Heid, and Williams 1996; Kalinina et al. 1997). Applications for TaqMan in toxicology include focused examination of nucleic acid–based events in mechanistic investigations. TaqMan can be used to detect genetic polymorphisms, including single-nucleotide polymorphisms, and can also be used for the detection of genotoxic endpoints.
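To illustrate the quantitative nature of such assays, the comparative cycle-threshold (2^-ΔΔCt) calculation is one commonly used way to convert TaqMan Ct values into a relative expression estimate. The sketch below (in Python) uses entirely hypothetical Ct values and a hypothetical housekeeping gene for normalization; it is a worked example of the arithmetic, not a prescribed analysis workflow.

```python
# Minimal sketch of relative quantification from TaqMan cycle-threshold (Ct)
# values using the comparative Ct (2^-ddCt) approach. All Ct values below are
# hypothetical; "target" is the gene of interest and "reference" is a
# housekeeping gene used for normalization.

def relative_expression(ct_target_treated: float,
                        ct_reference_treated: float,
                        ct_target_control: float,
                        ct_reference_control: float) -> float:
    """Return the estimated fold-change of the target gene, treated vs. control."""
    delta_ct_treated = ct_target_treated - ct_reference_treated
    delta_ct_control = ct_target_control - ct_reference_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: after normalization, the target amplifies ~3 cycles earlier in the
# treated sample, corresponding to roughly an 8-fold induction.
fold_change = relative_expression(22.0, 18.0, 25.0, 18.0)
print(f"Estimated fold-change: {fold_change:.1f}")
```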
CHALLENGES FOR TOXICOLOGISTS: DEALING WITH THE INFORMATION EXPLOSION AND GAINING REGULATORY ACCEPTANCE
Inherent in the use of many of the newer experimental approaches in toxicology is the generation of large amounts of data that must be organized and rendered comprehensible before the real challenge of data interpretation can begin. Consider the difficulties in dealing with a toxicogenomic assessment that may include many thousands of genes, each with its own expression pattern varying in response to treatment with a test material. Multiplying this problem by the number of animals to be treated, by the number of tissues and organs to be evaluated, and by the number of time points at which data are to be collected quickly leads one to the conclusion that methods need to be in place to deal with the increasingly vast and complex data sets that are being generated. Today and for the foreseeable future, toxicologists will face major challenges in managing and, more importantly, interpreting their increasing volumes of data. This concern has given rise to the rapidly evolving field of bioinformatics. This field is indispensable to toxicologists as biotechnology moves towards high-throughput approaches for whole-genome molecular analyses. Bioinformatics provides disciplined algorithms and data-visualization techniques that encompass all aspects of biological information acquisition, processing, storage, distribution, analysis, and interpretation. Having said this, it is important to note that the successful bioinformatician will not simply serve as a data processor in vacuo. Instead, a close collaborative effort between the bioinformatician and the toxicologist at all phases of study (including prestudy planning) will be needed. The ideal bioinformatics analysis will provide alternative and efficient means to interpret large amounts of complex data from genome-scale studies, to identify biomarkers for toxicity, to generate new hypotheses, and to guide the design of appropriate experiments.
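To make the scale of this problem concrete, the short sketch below (in Python) shows how quickly the number of individual expression measurements grows for even a modest hypothetical study design, and one conventional way of keeping the resulting matrix organized for downstream queries. The study dimensions, tissue list, and expression values are placeholders chosen for illustration.

```python
# Minimal sketch of how quickly toxicogenomic data sets grow, and one way to
# keep them organized for downstream analysis. All study dimensions below are
# hypothetical.
import numpy as np
import pandas as pd

n_genes = 10_000                 # probes on the expression platform
n_animals = 40                   # e.g., 4 dose groups x 10 animals
tissues = ["liver", "kidney", "heart"]
timepoints = [1, 3, 7, 14]       # study days sampled

n_measurements = n_genes * n_animals * len(tissues) * len(timepoints)
print(f"Individual expression values to manage: {n_measurements:,}")  # 4,800,000

# Organize the data as a genes x samples matrix, where each sample is
# identified by animal, tissue, and time point (random values stand in for
# normalized expression measurements).
samples = pd.MultiIndex.from_product(
    [range(n_animals), tissues, timepoints],
    names=["animal", "tissue", "day"],
)
expression = pd.DataFrame(
    np.random.default_rng(0).normal(size=(n_genes, len(samples))),
    index=[f"gene_{i}" for i in range(n_genes)],
    columns=samples,
)

# A typical first query: mean liver expression of one gene at each time point.
liver_gene0 = expression.xs("liver", axis=1, level="tissue").loc["gene_0"]
print(liver_gene0.groupby(level="day").mean())
```

Even this toy example yields millions of data points from a single study, which is why the prestudy collaboration between bioinformatician and toxicologist described above is so important.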
Despite their obvious conceptual relevance, endpoints arising from “-omics” assessments still require broad experimental validation of their linkages to established toxicology endpoints (e.g., excursions in clinical chemistry analytes, histological changes, inflammatory markers, indicators of cell proliferation). The successful application of new technologies in toxicology will require researchers to face a series of important questions (for examples, see Table 1). For experimental strategies based on technological approaches that are truly in a nascent state of development, the answers to many of these questions may be problematic. It can be reasonably argued that a study protocol using a new method in a focused mechanistic investigation should be developed on a case-by-case basis and, therefore, the lack of favorable responses to the questions in Table 1 should be expected, and even accepted. Indeed, a case-by-case approach to designing studies using newer technologies may be entirely appropriate for screening studies on compounds early in development, where many of the newer approaches are more likely to be used. However, the questions in Table 1 will be more problematic when assessing compounds later in development, particularly after human clinical trials are underway.
Regulatory agencies charged with ensuring public and environmental safety should rightly be cautious when weighing data produced by methods that lack a track record of achievement in reliably guiding safety assessment recommendations and decisions. The burden of proof with safety assessment properly resides with sponsors. Thus, it will be crucial for toxicologists using emerging techniques to engage regulatory agencies at an early stage if the methods will be used to guide decisions regarding the applicability of nonclinical findings to humans. Important topics for discussion include evaluation and comparison of different experimental platforms, data analysis procedures, and agreed-upon standards for data interpretation. Agencies such as the Food and Drug Administration (FDA) clearly recognize the increasing size and complexity of data sets being generated in support of regulatory submissions and are actively developing strategies and guidelines for the handling of these data sets. The successful use of emerging technologies to address toxicology issues will require a close interactive partnership between industry and regulatory agencies (Petricoin et al. 2002).
