Abstract
A number of genetically modified (GM) crops bioengineered to express agronomic traits including herbicide tolerance and insect protection have been commercialized. Safety studies conducted for the whole grains and food and feed fractions obtained from GM crops (i.e., bioengineered foods) bear similarities to, and distinctive differences from, those applied to substances intentionally added to foods (e.g., food ingredients). Similarities are apparent in the common animal models, routes of exposure, durations, and response variables typically assessed in toxicology studies. However, because of differences in the nutritional and physical properties of food ingredients and bioengineered foods, and in the fundamental goals of the overall safety assessment strategies for these different classes of substances, there are recognizable differences in the individual components of the safety assessment process. The fundamental strategic difference is that the process for food ingredients is structured toward quantitative risk assessment, whereas that for bioengineered foods is structured for qualitative risk assessment. The strategy for safety assessment of bioengineered foods focuses on evaluating the safety of the transgenic proteins used to impart the desired trait or traits and on demonstrating compositional similarity between the grains of GM and non-GM comparator crops using analytical chemistry and, in some cases, feeding studies. Despite these differences, the similarities in the design of safety studies conducted with bioengineered foods should be recognized by toxicologists. The current paper reviews the basic principles of safety assessment for bioengineered foods and compares them with the testing strategies applied to typical food ingredients. From this comparison it can be seen that the strategies used to assess the safety of bioengineered foods are at least as robust as those used to assess the safety of typical food ingredients.
In recent years, biotechnology has been used to produce field crops with targeted agronomic traits that are commonly referred to as genetically modified (GM) crops. Although breeding for agronomically desirable traits is by no means new, the methods used to produce GM crops with biotechnology are relatively new. By comparison to traditional breeding techniques, biotechnology presents an attractive alternative because of increased speed and selectivity (Castle, Wu, and McElroy, 2006).
The basic principle of genetic modification of field crops is insertion of an engineered DNA cassette that includes a gene, from start codon to stop codon, encoding a particular protein responsible for imparting specific biological functions (i.e., traits), together with the genetic elements necessary to drive its transcription. The genetic modification refers to the transfer of the cassette into the germline of the targeted crop. For example, a number of plants produced with biotechnology express transgenic Bacillus thuringiensis (Bt) proteins with demonstrated selective toxicity toward insect pests (such as Coleopterans and Lepidopterans) that are responsible for yield loss by parasitizing the stems or roots of field crops. The intended outcome of such modifications is not necessarily the expression of new proteins or other substances in the grains obtained from the GM crop; however, it is the grains and the processed food and feed fractions obtained from them (hereafter referred to as bioengineered foods) that are consumed by humans and livestock animals.
The “newness” of the technology used to produce GM crops has led to the development of specific safety-testing paradigms for bioengineered foods. These paradigms have similarities and differences in design and intent when compared to those applied to assess the safety of food additives and other ingredients intentionally added to foods (hereafter referred to as food ingredients). Similarities are based largely on the fact that exposure to both types of substances is oral, as a component of the diet. Differences in the testing paradigms are attributable to differences in the physical and nutritional properties of these different types of substances. In particular, typical food ingredients are synthetic or organic substances that are added to whole and packaged foods at small concentrations for specific technological purposes and most have no particular nutritional value. In most cases, they are declared on the ingredient labels of packaged foods. Accordingly, the average consumer can readily identify most individual ingredients present in a packaged food. In contrast, bioengineered foods (as defined within the context of this document) are whole grains and food and feed fractions obtained from GM crops. These substances can constitute a substantial proportion of the diets of humans and animals and are consumed for nutritional purposes. Although highly purified fractions obtained from the grains of GM crops such as bleached and deodorized oils can be characterized analytically according to industry or commodity standards, whole grains and less processed fractions (e.g., soybean meal) are complex mixtures that cannot be defined analytically to the same degree of absoluteness.
Guidelines for the safety assessment of GM foods have been published by scientific and regulatory authorities (Codex 2001; FAO 1996; FAO/WHO 2000, 2001). Although certain concepts from these guidelines are discussed in the current paper, it is not intended to review or compare and contrast the different guidelines. Further, this paper is not intended to review all of the laboratory and field studies conducted during the commercial development of a GM crop. Rather, the goal is to review the principles of safety assessment of whole grains and food and feed fractions obtained from GM crops (i.e., bioengineered foods) and compare and contrast them with the strategies applied to assess the safety of typical food ingredients.
WHAT IS A “BIOENGINEERED” FOOD?
For the current review, the term “bioengineered food” refers to whole grains and processed food and feed fractions obtained from GM crops. Although grains from crops (GM or non-GM) are consumed as a component of the human diet, they form a more substantial proportion of livestock diets. The majority of human exposure comes in the form of processed fractions obtained from the grains of field crops such as soybean oil and high fructose corn syrup. These fractions are produced using industrial processes and their composition can be defined using analytical chemistry methodologies. Accordingly, obvious compositional differences between processed fractions obtained from the grains of GM crops and non-GM crops should be readily identified from analytical characterization. However, as will be discussed in subsequent sections of this document, the primary focus of the safety testing of bioengineered foods is to identify unintended changes that could have occurred from the process of bioengineering of the particular crop.
The process of bioengineering crops with specific traits is an integration of activities that often originates with the identification of a particular trait in the form of a protein (or proteins) with specific desirable biological activities. Because commercial GM crops were developed by seed companies, it is not surprising that the traits in a number of these crops are agronomic and include insect control and herbicide tolerance proteins. Subsequent activities focus on construction of a suitable DNA vector and transfer of the DNA intended for insertion, and the corresponding trait, into the germline of the targeted crop so that subsequent generations will also express the protein. At a minimum, the elements included in a vector are the specific DNA sequence or sequences encoding the proteins to be expressed for the desired trait, a promoter, a terminator, and, in many cases, a selectable marker to aid in the selection process.
Some GM crop transformants were produced using particle transfer methods (e.g., gene gun). Although this method of transformation is still used, more recent methods have used Agrobacterium-based T-DNA transfer (reviewed in Tzfira, Lacroix, and Citovsky, 2004). Transfer of the vector into a target crop can involve production of thousands of individual transformants, each of which may be a candidate for the commercial product. Subsequent activities focus on selection of suitable candidates for further development using processes that are typically based on evaluation of the agronomic properties of individual transformants for the desired phenotype.
The grains obtained from GM crops share one obvious and critical property with food ingredients: they are eaten. However, the strategies to assess the safety of food ingredients and bioengineered foods differ in concept and application. Strategies applied to assess the safety of food ingredients are designed with the purpose of quantitative safety assessment (see below). The safety assessment of bioengineered foods is conducted using grains obtained from GM crops because the grain represents the final branch point in the production of bioengineered foods at which it would be possible to detect unintended changes that could have occurred as a result of the process of genetic modification. As noted above, processed fractions from grains can be thoroughly characterized analytically with regard to the identity and purity of the particular fraction. In contrast, detection of unintended changes in grains using analytical chemistry will typically be limited to key nutrients and antinutrient substances such as those identified for soybeans, maize grain, rice, and potatoes (OECD 2001a, 2002a, 2002b, 2004).
The following sections of this paper review some of the fundamental strategies that are applied to the safety assessment of food ingredients. The ensuing sections compare and contrast the principles and practice of those strategies with those applied to the safety assessment of bioengineered foods.
SAFETY TESTING OF FOOD INGREDIENTS
Characterization of Ingredient
Food ingredients are typically discrete organic substances with defined structures that can be both identified and quantified using defined analytical chemistry methodologies. Generally, they are produced using defined industrial processes and added to packaged and, in some cases, whole foods to achieve specific technological functions. Except in the case of ingredients specifically intended for nutritional purposes (e.g., vitamins, minerals), they typically do not have substantial nutritional value.
Contaminants are a category of substances that can be present in foods but obviously differ from food ingredients in that they are not intentionally added to foods. Substances in this category can include natural (e.g., heavy metals) and synthetic (e.g., pesticide and herbicide residues) substances that are incorporated from the environment as well as naturally occurring substances produced by parasitic organisms (e.g., mycotoxins).
Regardless of the nature of the substances identified in the categories above, the safety assessment strategies share some common features owing in part to similar nutritional and physical properties as described in the introductory section. In particular, they are generally present at small quantities, have defined chemical structures, can be identified and quantified using defined methodologies, and typically do not have substantial nutritional value and most do not have effects on satiety. This is an important feature of the category of food ingredients that will be elaborated further in subsequent sections of this paper.
Conceptually, the characterization of any particular food ingredient could be minimized to apply only to the analytical chemistry methodology used to define the test substance. In practice, a more comprehensive characterization will include evidence of prior human exposure to either the substance itself or similar substances from published literature. In some cases, the body of literature on distribution and human consumption of individual food ingredients may be extensive enough to conclude that a food ingredient is safe without conducting any actual laboratory studies. As a practical matter in the cases where laboratory studies are conducted, the analytical methodology necessary to characterize and quantify the specific substance should be conducted prior to initiation of any safety studies.
Toxicology Testing
The basic safety evaluation process applied to food ingredients can include in vitro studies and in vivo laboratory animal studies. The following sections of this paper review some of the fundamental aspects of the safety assessment strategy for food ingredients. In practice, assessment of the safety for any particular food ingredient requires a comprehensive testing strategy that may or may not include the following studies and, in some cases, a substantial battery of additional studies that are not discussed in the current review.
In Vitro Studies
In vitro studies such as mutagenicity assays with bacterial cells (e.g., Ames test) or mammalian cells (e.g., mouse lymphoma thymidine kinase gene mutation assay) are conducted to determine whether exposure to a specified food ingredient is likely to result in an increase in the incidence of heritable genetic defects. These studies are rapid, inexpensive, require relatively small quantities of test substance, and are generally predictive of pathologic outcomes from in vivo studies including carcinogenesis and developmental toxicity (Kirkland et al. 2005; Mohn 1981; Zeiger 1987).
For ingredients added to food intentionally, in vitro toxicology studies are conducted early in the development process. Generally, it is better to know if ingredients are likely to test “positive” in in vitro mutagenicity studies early because results from these studies can provide guidance to longer-term studies that may be necessary for the overall safety assessment strategy for that particular substance. Ingredients testing positive in in vitro mutagenicity assays can be expected to require a more substantial battery of safety studies than those that are not mutagenic.
Within the context of the current review, it should be noted that the outcome of in vitro mutagenicity studies are qualitative. That is, they are conducted to determine whether the test substance is or is not mutagenic, rather than to establish a dose at which the substance causes an increase in heritable defects.
In Vivo Studies
Short-Term Studies
Acute toxicology studies with food ingredients are also typically conducted early in the safety assessment process. Although these studies can produce a numeric indicator of the acute toxicity of the test substance (e.g., an LD50 value), they are typically not conducted for purposes of quantitative risk assessment, except to calculate a margin of exposure (MOE). Rather, they are conducted as a reference against which the acute toxicity values of other substances can be compared.
A variety of exposure routes can be applied, but oral exposure (e.g., gavage) is appropriate for most food ingredients. In more recent studies, the LD50 values have been evaluated using methods (e.g., fixed-dose, up-and-down) that require the use of fewer laboratory animals than historical methods (OECD 2001b). Another method of evaluating the acute toxicity of food ingredients is the concept of limit dose. This method is particularly useful when the substance is not expected to exhibit acute toxicity. In studies of this design, laboratory rodents are administered the test substance via oral gavage at a predetermined high dose (2000 mg/kg [OECD 2001b] or 5000 mg/kg [U.S. FDA 2003]) and observed for mortality, body weight changes, and signs of intoxication daily for 14 days following exposure. In the event that adverse effects are not observed, it is concluded that the test substance is not acutely toxic.
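The limit-dose design described above reduces to simple arithmetic and a pass/fail read-out. The following is an illustrative sketch only; the function names and the string labels are hypothetical and do not appear in any guideline:

```python
# Illustrative sketch of the limit-dose concept (hypothetical helper names).

def gavage_dose_mg(body_weight_kg: float, limit_dose_mg_per_kg: float = 2000.0) -> float:
    """Amount of test substance (mg) for a single oral gavage at the limit dose."""
    return body_weight_kg * limit_dose_mg_per_kg

def limit_test_outcome(deaths: int, adverse_signs: bool) -> str:
    """Qualitative read-out over the 14-day observation period: the test
    substance is judged not acutely toxic only if no deaths and no
    treatment-related adverse signs are observed."""
    if deaths == 0 and not adverse_signs:
        return "not acutely toxic at the limit dose"
    return "further acute testing warranted"

# A 0.025-kg mouse dosed at the 2000 mg/kg limit receives 50 mg.
print(gavage_dose_mg(0.025))            # 50.0
print(limit_test_outcome(0, False))     # not acutely toxic at the limit dose
```

Note that the outcome is deliberately qualitative: the design yields a conclusion about the absence of acute toxicity at the limit dose, not an LD50.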
Repeated-Dose Studies
Depending on the physical properties and nature of a food ingredient, repeated-dose dietary feeding studies are often conducted in laboratory rodents to evaluate the impact of repeated exposure to an analytically defined test substance (e.g., a food ingredient).
The basic design of repeated-dose studies is incorporation of the test substance into the diets of laboratory rodents at multiple doses. Doses typically include one that is within the range of anticipated human exposure and additional doses at higher concentrations. Over the course of the in-life portion of repeated-dose studies, nutritional performance metrics are evaluated, including body weights, weight gain, feed consumption, and feed efficiency ratios. At the conclusion, the animals are subjected to a standard battery of clinical pathology analyses that includes organ weights and histopathology as well as serum chemistry, hematology, and coagulation. The response variables evaluated in these studies are usually predetermined according to published guidelines (e.g., OECD 1995, 1998).
Nutritional performance and clinical pathology response variables in groups exposed to the test substance are compared to the control group to determine if there are statistically significant differences (Waner 1992). However, organ histopathology is typically planned for control and high-dose groups with tissues from other dose groups evaluated per recommendations of the designated study clinical pathologist based on observations in the control and high dose groups.
Historically, the outcome of repeated-dose toxicology studies was expected to demonstrate that adverse effects occur in at least one of the dose groups and that no adverse effects will be observed in at least one other dose group (Zbinden 1989). Accordingly, studies of this design are also useful for identifying the target organ of this effect—the organ that appears to be most sensitive to adverse effects that are attributable to exposure to the test substance.
The underlying premise of repeated-dose toxicology studies with substances including food ingredients is that a threshold of exposure exists below which adverse effects are not likely to occur. Accordingly, repeated-dose studies are designed and conducted with the goal of identifying either the highest dose of the test substance at which adverse effects are not observed (no observed adverse effect level; NOAEL) or the lowest dose at which adverse effects are observed (lowest observed adverse effect level; LOAEL), expressed as mg of intake per kg of body weight per day (mg/kg/day).
Absorption, Distribution, Metabolism, and Excretion (ADME)
Studies evaluating whether food ingredients are absorbed following oral exposure, the organs to which they are distributed, and how they are metabolized and excreted are sometimes conducted in laboratory animals. These types of studies are particularly important in the case of food ingredients because they are usually small-molecular-weight substances and therefore may reach the systemic circulation following oral exposure. In the case of ingredients with large molecular weights such as carbohydrate fractions obtained from field grains, proteins, or other substances degraded by digestive processes, characterization of the ADME profile may not be necessary because they are unlikely to be absorbed intact and possess minimal potential to accumulate in any particular organ system.
Quantitative Safety Assessment
At the most fundamental level, quantitative safety assessment (QSA) is an integrative process that applies results obtained from repeated-dose toxicology studies with the pure (i.e., analytically characterized) food ingredient to determine a dose of the food ingredient that is not likely to result in an increased incidence of adverse effects. Although not reviewed here, an important component of QSA is exposure assessment. In this analysis, intended applications of the food ingredient (i.e., definition of the foods in which it will be used) and the concentrations necessary to achieve the intended effect are defined. Human exposure to specific food ingredients can be estimated in quantitative terms as an estimated daily intake (EDI) and standardized to mg/kg/day.
The fundamental concept of QSA involves use of the NOAEL or LOAEL from repeated-dose toxicology studies to derive an acceptable daily intake (ADI) for humans by application of uncertainty factors. Uncertainty factors applied to NOAELs typically include a factor of 10 for interspecies extrapolation and another factor of 10 to account for possibly sensitive individuals (Moch, Dua, and Hines 1997). In general, if the EDI is less than the ADI, the food ingredient is considered to present a “reasonable certainty of no harm” (Brock et al. 2003). However, if the adverse effect associated with the food ingredient is an increase in the incidence of cancers in experimental laboratory animals, alternative risk assessment modeling is usually applied (see O’Brien et al. 2006).
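The QSA arithmetic described above can be sketched in a few lines. The NOAEL and EDI values below are hypothetical examples, not data for any real ingredient, and the function names are illustrative:

```python
# Hedged numeric sketch of the QSA comparison; all values are hypothetical.

def adi_mg_per_kg_day(noael: float, interspecies_uf: float = 10.0,
                      intraspecies_uf: float = 10.0) -> float:
    """ADI = NOAEL / (product of uncertainty factors), in mg/kg body weight/day."""
    return noael / (interspecies_uf * intraspecies_uf)

def reasonable_certainty_of_no_harm(edi: float, adi: float) -> bool:
    """The ingredient is judged acceptable when the EDI is below the ADI."""
    return edi < adi

noael = 100.0   # hypothetical NOAEL from a repeated-dose rodent study (mg/kg/day)
edi = 0.5       # hypothetical estimated daily intake for humans (mg/kg/day)
adi = adi_mg_per_kg_day(noael)
print(adi)                                        # 1.0
print(reasonable_certainty_of_no_harm(edi, adi))  # True
```

The combined 100-fold factor is the default case; larger or smaller factors may be applied depending on the quality and nature of the underlying toxicology data.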
SAFETY TESTING OF BIOENGINEERED FOODS
Some of the general concepts and even specific safety studies conducted with food ingredients have been applied to the safety testing strategies of bioengineered foods, including characterization of the test substance and toxicology testing. However, there are obvious differences. Although not entirely accurate scientifically, it is perhaps simpler to understand the application of basic concepts by considering that safety testing strategies with bioengineered foods are conducted as if there are two separate components: (i) the transgenic protein expressed in the grains of GM crops and (ii) the grain of the GM crop in which the transgenic protein is expressed.
Differences in the design of safety studies conducted for food ingredients and bioengineered foods are based at least partly on differences in the physical and nutritional properties of the test substances. Further, it is important to distinguish the goal of safety testing strategies between food ingredients and bioengineered foods. As indicated above, safety studies with food ingredients are conducted as a component of QSA. In contrast, the safety assessment studies conducted for bioengineered foods are for qualitative purposes that will be defined in the subsequent sections of this paper.
Oral consumption of proteins as a general class of substances is not considered inherently dangerous, as proteins form a substantial component of the human diet. In most cases, orally consumed proteins are degraded by normal digestive processes into constituent amino acids and small peptides that are absorbed for nutritive purposes. Nevertheless, because two adverse effects have been described for a small number of proteins (allergenicity and acute toxicity [see below]), the safety assessment of transgenic proteins expressed in GM crops includes an assessment of the likelihood that they could be allergenic or acutely toxic.
With regard to evaluating the safety of bioengineered foods, the fundamental concept is comparison to an appropriate non-GM comparator food with a history of safe use. Overall, the consumption of whole foods is considered safe even though some are known to contain natural toxins and antinutritional substances (CODEX 2001; König et al. 2004; WHO/FAO 2000). Assuming the process of genetic modification produces phenotypically acceptable field crops, the GM crop will differ from the non-GM crop only in that it expresses one or more proteins that impart the intended trait. However, additional studies are sometimes conducted with specific GM events to determine whether unintended changes occurred as a result of the process of producing the particular GM crop.
Subsequent sections of this manuscript review the safety testing strategies conducted with bioengineered foods with emphasis on comparing and contrasting them to those conducted for typical dietary ingredients.
Proteins Expressed in GM Crops
As a general class of substances, proteins are not considered toxic. However, two safety concerns that have been reported with a small number of proteins are allergenicity and acute toxicity (FAO 1996; FAO/WHO 2000; Sjoblad, McClintock, and Engler 1992; Taylor and Lehrer 1996). Accordingly, one component of the safety testing strategy of bioengineered foods focuses on evaluating the allergenic and acute toxicity potential for transgenic proteins. Comprehensive recommendations for the safety testing of transgenic proteins in the context of agricultural biotechnology have recently been proposed by the International Life Sciences Institute (Delaney et al. 2007).
Characterization of the Test Substance
As with food ingredients, the characterization of transgenic proteins includes historical information about possible prior human exposure, obtained by reviewing the distribution of the particular protein in nature. Where a transgenic protein is a naturally occurring protein with a documentable history of human exposure (for example, one whose in planta expression profile has been modified in the source crop, or one expressed in a different crop), there may be enough information to establish significant prior exposure and to conclude that it has not resulted in adverse effects.
In the case of proteins that differ in sequence or have specifically been altered from their native state for purposes of increased activity or selectivity through mutagenesis and selection, gene shuffling, or other technologies, there may not be enough historical information to make this conclusion (for examples see Castle et al. 2004; Ott et al. 1996; Sathasivan, Haughn, and Murai 1991; Siehl et al. 2005). It may be informative to document the human history of exposure to proteins that are similar in sequence to those used in bioengineered foods, but laboratory studies such as those described in subsequent sections of this review may be necessary.
Historical information about the transgenic protein also considers adverse effects that may have been described from exposure to the source of the transgenic protein, but that are not necessarily attributable to the protein expressed within that source. That is, if exposure to the source of the transgenic protein is not known to result in allergenicity or acute toxicity, it is less likely that proteins obtained from that source will possess potential for allergenicity or acute toxicity than if they had been obtained from sources with a documented history of these effects.
In the case of insect-specific protein toxins such as the crystalline proteins obtained from different strains of Bacillus thuringiensis bacteria (i.e., Bt toxins), information about the mechanism of action in sensitive species can be used to differentiate sensitivity from nontarget species. For example, the toxicity observed following exposure of sensitive Lepidopteran species to Bt toxins is attributable to interaction between gut-specific receptors and subsequent pore formation and lysis of intestinal epithelial cells that causes cessation of feeding and ultimately mortality in the target species (Betz, Hammond, and Fuchs 2000; IPCS 1999; McClintock, Schaffer, and Sjoblad 1995; Schnepf et al. 1998). Cytotoxicity is not observed in nontarget species because of differences in the physical environment (i.e., pH) of the gut and the absence of receptors for Bt toxins.
Most transgenic proteins are expressed at low concentrations in GM crops and isolating gram quantities is difficult. When laboratory studies are conducted with transgenic proteins, they are often conducted using recombinant transgenic proteins isolated from heterologous bacterial expression systems rather than with transgenic proteins isolated from GM crops.
As with food ingredients, the test substances in these studies are subjected to analytical characterization. Analytical studies to characterize a recombinant transgenic protein are conducted to determine whether it is representative of the protein that will be expressed in planta. The analyses conducted to compare the recombinant transgenic protein from a microbial expression system with the protein expressed in planta may include sodium dodecyl sulfate–polyacrylamide gel electrophoresis (SDS-PAGE), Western blot analysis, amino acid compositional analysis, tryptic peptide fingerprinting, N-terminal sequencing, mass spectrometry, and, where indicated, biological or enzymatic activity (Gao et al. 2004, 2006). However, the actual extent of analytical comparisons is determined on a case-by-case basis.
Allergenicity
A number of strategies have been published by scientific authorities to evaluate the allergenic potential of transgenic proteins (CODEX 2003; FAO/WHO 2001; Metcalfe et al. 1996). No single factor, however, has been recognized as a primary indicator of allergenic potential, and no validated animal model that is predictive of allergenic potential is available. Current methodologies apply a weight-of-evidence approach that includes consideration of the source of the transgenic protein (i.e., was it obtained from a source containing commonly allergenic proteins?) and bioinformatic comparison of the amino acid sequence of the transgenic protein with known allergenic proteins for sequence similarity. In comparison with the safety testing strategies conducted for food ingredients, these analyses are different because they generate essentially “dry” data: the pertinent information about the protein can be obtained in silico and therefore does not require any isolated test substance.
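The bioinformatic comparison described above is commonly performed by sliding an 80-amino-acid window along the transgenic protein and flagging windows that share at least 35% identity with a known allergen, following published guidance. The toy sketch below illustrates the idea with a simple ungapped comparison; real assessments use alignment tools such as FASTA against curated allergen databases, and the function names here are hypothetical:

```python
# Toy sliding-window identity screen (illustrative only; not a regulatory tool).

def best_window_identity(query: str, allergen: str, window: int = 80) -> float:
    """Best fraction of identical residues over any ungapped window pair."""
    best = 0.0
    for i in range(max(1, len(query) - window + 1)):
        q = query[i:i + window]
        for j in range(max(1, len(allergen) - len(q) + 1)):
            a = allergen[j:j + len(q)]
            matches = sum(1 for x, y in zip(q, a) if x == y)
            best = max(best, matches / len(q))
    return best

def flags_allergenicity_concern(query: str, allergen: str,
                                threshold: float = 0.35) -> bool:
    """Flag the protein for follow-up if any window meets the identity threshold."""
    return best_window_identity(query, allergen) >= threshold

# A shared 80-residue stretch yields identity 1.0 and raises a flag.
toy_seq = "MKT" * 30   # 90-residue toy sequence, not a real protein
print(flags_allergenicity_concern(toy_seq, toy_seq))   # True
```

A flag raised by such a screen does not establish allergenicity; it simply directs the weight-of-evidence assessment toward additional scrutiny (e.g., serum screening) for the flagged protein.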
In Vitro Studies
As with food ingredients, in vitro studies are conducted as a component of the safety assessment of transgenic proteins. However, unlike food ingredients, transgenic proteins are typically not evaluated for mutagenicity using bacterial and mammalian tester strains because proteins do not have a documentable history of causing mutagenicity or carcinogenesis (Pariza and Johnson 2001).
In vitro studies with transgenic proteins are conducted to evaluate the susceptibility of the proteins to degradation by digestive processes. It has been reported that allergenic proteins are more resistant to digestive processes than nonallergenic proteins (Astwood, Leach, and Fuchs 1996). Conceptually, the lability of dietary proteins to digestion results in reduced exposure to the intact protein and a decreased likelihood of allergenic sensitization. To assess the stability of transgenic proteins to typical digestive processes, they are evaluated for susceptibility to degradation by digestive enzymes, including pepsin in simulated gastric fluid (SGF) and pancreatin in simulated intestinal fluid (SIF), using a purified recombinant form of the protein as described above (Thomas et al. 2004; US Pharmacopoeia 1995). Transgenic proteins that are rapidly degraded in these in vitro test systems therefore differ in physical properties from allergenic proteins and are considered less likely to present a risk for allergenicity in the weight-of-evidence allergenicity assessment.
Acute Toxicity
The majority of proteins in nature are not toxic; however, it has been stated that those that are toxic act through acute mechanisms of action (FAO 1996; FAO/WHO 2000; Sjoblad, McClintock, and Engler 1992). Consequently, transgenic proteins under consideration for development in bioengineered foods are typically evaluated for evidence of acute toxicity (Harrison et al. 1996; Herouet et al. 2005). During the course of this evaluation, a number of factors are considered, including bioinformatic analysis of the amino acid sequence of the transgenic protein for similarity to known protein toxins and information about the distribution of the protein in nature to assess possible prior human exposure. Laboratory studies may or may not be conducted depending on the nature of the transgenic protein as well as the outcome of the bioinformatic and distribution analyses.
When acute toxicity studies are conducted with transgenic proteins, they differ from those conducted historically with food ingredients. In particular, because most proteins are not toxic following oral exposure, oral LD50 studies are typically not conducted. Numerous guidelines have been published to assess the acute toxicity of substances including proteins, encompassing different methodologies and routes of exposure (OECD 2001b). One concept that has been applied effectively to assess the acute toxicity of transgenic proteins and other substances that are not anticipated to be acutely toxic is that of the limit dose (OECD 2001b). In studies of this design, groups of rodents (usually mice) are dosed one time with a high concentration (2000 mg/kg of body weight) of the recombinant transgenic protein via oral gavage. Over the course of the following 14 days, treated animals are evaluated for the same response variables that would be evaluated with food ingredients, including mortality, body weights, and clinical signs of intoxication. In some cases, acute toxicity studies of recombinant transgenic proteins have been conducted using parenteral routes (intravenous [i.v.], intraperitoneal [i.p.]); however, there are no current limit-dose guidelines for these routes of exposure. If no deaths or indicators of adverse effects are observed at the conclusion of these acute toxicity studies, it can be concluded that the recombinant transgenic protein is not acutely toxic. As noted in prior sections, this is a qualitative (rather than quantitative) indicator of the acute toxicity of a transgenic protein because a definitive indicator of acute toxicity (e.g., LD50) is often not identified.
Laboratory animal studies are not always necessary to evaluate the acute toxicity of the transgenic protein and, in some cases, they may not be possible. If, for example, the transgenic protein is a transmembrane protein, it simply may not be possible to produce the quantity of the transgenic protein necessary to conduct these studies because of physical limitations in the processes used to express and isolate the proteins. In cases such as this, alternative methods of assessing the potential for acute toxicity such as natural distribution and exposure to the specific transgenic protein may be sufficient. Alternative routes of exposure (e.g., parenteral) that require smaller quantities of the purified test substance may also be considered.
Bioengineered Foods
Subsequent sections of this paper describe the similarities and differences between the safety assessment strategies conducted for bioengineered foods and those conducted for food ingredients.
Characterization of Test Substance
As with food ingredients, bioengineered foods are characterized analytically as one component of the safety assessment. In contrast to food ingredients, which are discrete organic entities, bioengineered foods are complex mixtures of nutritional and non-nutritional components; they have nutritional value and, because they are typically whole foods, are also likely to affect satiety. Accordingly, there are considerable differences in the analytical characterization of the test substances between these types of studies. Whereas analytical characterization of a food ingredient can be used to define the substance in terms of absolute purity, the analytical characterization of a bioengineered food is conducted by comparing the concentrations of a number of known nutrient, non-nutrient, secondary metabolite, and antinutrient components of the bioengineered food with those of its closest non-GM isogenic comparator and additional non-GM reference substances obtained from the same field trials (George et al. 2004; Herman et al. 2004, 2006; Novak et al. 2000; Ridley et al. 2002; Sidhu et al. 2000). The ranges of values of nutritional and other components for numerous field crops, including soybean and maize grain, have been published (OECD 2001, 2002a, 2002b, 2004; ILSI 2003). Additionally, the bioengineered food is subjected to a series of studies to characterize the presence of the transgenic insert and the expression of the transgenic protein or proteins.
Molecular Characterization of the Transgene Insert
Grains from GM crops (bioengineered foods) and grains from non-GM crops (control substances) evaluated in repeated-dose feeding studies are typically subjected to either Southern blot or polymerase chain reaction (PCR)-based analysis using event-specific DNA primers. These studies are conducted to demonstrate that the genetically modified event is present in the bioengineered food and to demonstrate that the insert is not present in non-GM control and reference substances prior to production of test and control diets.
It should be noted that in addition to the PCR-based assays conducted on test and control substances for animal feeding studies, more extensive molecular characterization studies are conducted routinely as part of characterizing a genetically modified crop. The molecular characterization of an event is conducted to characterize the inserted DNA sequence or sequences (reviewed by Heck et al. 2005; König et al. 2004; Padgette et al. 1995; Vaughn et al. 2005). Typically, Southern blot analysis is used to (i) determine the copy number and locus or loci of the inserted DNA sequences; (ii) establish an event-specific hybridization pattern that may be used to confirm the stability of an insertion across generations; (iii) construct a physical map of the inserted DNA sequences; and (iv) confirm the absence of inserted vector backbone DNA. In addition, DNA sequencing analyses are conducted on the DNA insertion and the bordering genomic sequences of the genetically modified crop. These analyses assess the integrity of the DNA insertion and determine whether the process of transformation resulted in changes to the DNA sequence intended to be inserted or to the bordering genomic sequences. Furthermore, the sequencing analyses allow evaluation of the junctions with bordering genomic sequences for the potential creation of novel open reading frames (ORFs) that could result in the expression of novel proteins. If an ORF is discovered, Northern blot analysis and/or reverse transcriptase-PCR (RT-PCR)-based techniques are used to confirm the absence of expression of the ORF, thereby eliminating the likelihood of expression of novel proteins.
Protein Concentration and Gene Expression
As with the transgenic DNA insert, the presence of the transgenic protein (or proteins) is usually determined in bioengineered foods and non-GM controls prior to formulation and production of the diets to be evaluated in repeated-dose feeding studies. These studies are typically conducted with enzyme-linked immunosorbent assays (ELISAs) that utilize antibodies developed specifically for the transgenic protein. The ELISA is used to demonstrate that the transgenic protein is expressed in the bioengineered food and to confirm that it is not present in the non-GM control and reference substances, which typically were obtained from the same field trial. In some cases, it may also be used to confirm that the test diets were prepared homogeneously and that the transgenic protein itself is stable within the matrix of the diet as administered in laboratory animal feeding studies.
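A diet-homogeneity check of this kind typically reduces to comparing the variability of replicate ELISA measurements taken from different locations in a diet batch against an acceptance limit. The sketch below is a minimal illustration in Python; the replicate values and the 10% coefficient-of-variation criterion are hypothetical assumptions, not values from any cited study.

```python
from statistics import mean, stdev

def percent_cv(values):
    """Coefficient of variation (%) across replicate measurements."""
    return 100.0 * stdev(values) / mean(values)

def is_homogeneous(values, max_cv=10.0):
    """Hypothetical acceptance criterion: replicate CV at or below max_cv percent."""
    return percent_cv(values) <= max_cv

# ELISA results (µg transgenic protein per g diet) sampled from the top,
# middle, and bottom of a mixed diet batch (hypothetical values):
replicates = [4.8, 5.1, 5.0, 4.9, 5.2]
```

A low CV across sampling locations supports the claim that the transgenic protein is uniformly distributed in the diet as fed.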
Substantial Equivalence
The term “substantial equivalence” as applied to the safety assessment of GM foods was first introduced in a report of the OECD (1993).
Bona fide substantial equivalence testing has translated into a paradigm of comparing the composition and agronomic properties of the bioengineered food or food products with those of traditional nonmodified comparators. The basis of this concept as applied to the safety assessment of bioengineered foods is that the nutrient and non-nutrient components should be comparable to those observed in an appropriate non-GM comparator that is inherently safe (OECD 1993). The molecular and protein characterization studies discussed above are conducted to demonstrate that the only identifiable genetic difference between the bioengineered food and control and reference substances is the presence of the transgenic DNA insert and expression of the corresponding transgenic protein. A further battery of analytical characterization is conducted to compare the concentrations of nutrient and non-nutrient components of bioengineered foods with non-GM comparators.
For many GM foods that have been commercialized, including soybean and corn, the identification of key nutrient ingredients and other components has been published (ILSI 2003; OECD 2001, 2002). Therefore, for studies with bioengineered foods, the test substance is not defined as a discrete organic substance. Rather, it is defined by an extensive series of analyses conducted to determine the concentrations of key nutrient and antinutrient ingredients for the purposes of preparing diets to be evaluated in feeding studies. These analyses are also conducted to compare the concentrations of predefined key ingredients with the concentrations of the same ingredients in non-GM control substances, typically obtained from the same field trial in which the GM ingredients were generated. The concentrations of key nutrient and antinutrient ingredients in grains or processed fractions from control and GM crops are also compared with the published ranges of values for these key ingredients. Substantial equivalence analyses of a number of GM crops have been published, including corn, soybean, and rice (Herman et al. 2006; Oberdoerfer et al. 2005; Padgette et al. 1996).
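In logical terms, the comparison against published ranges amounts to asking, for each predefined analyte, whether the measured value in the GM grain falls within the range observed for conventional varieties. The sketch below illustrates that check in Python; the analyte values and ranges are invented placeholders, not data from the ILSI or OECD compilations.

```python
# Hypothetical proximate values (% dry weight) for a GM grain, together
# with illustrative published reference ranges (placeholder numbers only).
published_ranges = {"protein": (32.0, 43.6), "fat": (15.5, 24.7), "fiber": (4.1, 7.0)}
gm_grain = {"protein": 38.2, "fat": 18.9, "fiber": 5.3}

def outside_reference(measured, ranges):
    """Return the analytes whose measured value falls outside the published range."""
    return [analyte for analyte, value in measured.items()
            if not (ranges[analyte][0] <= value <= ranges[analyte][1])]

flags = outside_reference(gm_grain, published_ranges)  # empty list: all within range
```

Any analyte flagged by such a comparison would warrant follow-up, whereas values within the published ranges support a conclusion of compositional similarity.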
It is important to note that the focus of these studies is not to demonstrate absolute safety of a GM crop. Rather the goal of these studies is to determine whether it is substantially equivalent to the appropriate non-GM comparator.
Toxicology Testing
The comprehensive processes for protein safety testing and compositional analysis of bioengineered foods described in prior sections are often used as a basis to demonstrate that they are as safe as their non-GM counterpart. Nevertheless, repeated-dose subchronic toxicity studies have been included in the safety assessment of some bioengineered foods to determine their potential to cause unintended effects (FAO/WHO 2000). These studies include a number of similarities and differences to subchronic feeding studies conducted with food ingredients.
Subchronic feeding studies with bioengineered foods are usually conducted with rats as the test animal and are approximately 13 weeks in duration. These studies typically include the same nutritional performance and OECD 408 toxicology response variables that are evaluated in studies conducted with food ingredients and other substances (Hammond et al. 2004, 2006a, 2006b; MacKenzie et al. 2006; Malley et al. 2007). In these respects, these studies are similar to repeated-dose feeding studies with food ingredients. However, it would not be correct to classify them as true toxicology studies because of differences in design and the absence of the minimum 100-fold safety factor usually applied for the purposes of QSA for food ingredients. It may be more appropriate to consider them nutritional equivalence studies conducted using toxicology response variables.
Numerous published subchronic rodent studies have reported the results of feeding whole grains or processed feed fractions from GM crops. In these studies, the “test substance” was a component that already constituted a substantial proportion of the typical rodent diet (e.g., maize grain [~30%], soybean meal [~20%]; Hammond et al. 2004, 2006a, 2006b; MacKenzie et al. 2006; Malley et al. 2007). For test substances such as these, it is not possible to produce experimental diets containing multiple dose levels, as described above for repeated-dose studies with food ingredients, without creating diets likely to have nutritional imbalances. Therefore, dose-response exposure to the test substance at concentrations hundreds or thousands of times higher than predicted human exposure is typically not a component of the subchronic feeding studies conducted with GM foods.
The subchronic rodent dietary feeding studies with bioengineered foods cited above were conducted by formulating and producing experimental diets by directly substituting the GM test substance for the non-GM comparator according to the specifications of rodent feed producers. Control diets were produced in a similar manner using non-GM control grains that are, whenever possible, obtained from the same field trial used to produce the GM food. The nutrient composition data necessary to determine how to formulate these diets is obtained from the analytical composition studies described above. In this respect, these studies borrow some of the critical elements of numerous nutritional performance studies of GM foods conducted in livestock animals including broiler chickens (Taylor et al. 2003a, 2003b, 2003c, 2004, 2005a, 2005b).
Differences in the design and conduct of these studies result in differences in the evaluation of the data obtained from them as well. In toxicology studies with food ingredients, the response variable metrics observed in each experimental group are compared individually with those of the control group for statistical significance. Statistical differences observed between the response variables of the control group and any individual experimental group are viewed in the context of the values observed in the other experimental groups to determine whether there is a dose-related trend linking exposure to the particular ingredient and the effect.
Similarly, in feeding studies with GM foods, response variable metrics are compared between rats consuming diets containing the fraction from the GM food and the values obtained from rats consuming control diets produced with the appropriate non-GM comparator. However, because multiple doses of the test substance, at least in the strictest sense, are not included, statistical differences between control and experimental groups can be more problematic to interpret than if they were to occur in a subchronic toxicology study with a food ingredient. The large number of response variables evaluated in subchronic feeding studies, in combination with the limitations of traditional statistical methods, increases the likelihood that statistical differences will be identified. However, these studies are conducted to determine whether long-term exposure to diets formulated with GM foods results in biologically important unintended effects.
To address this issue, subchronic feeding studies with bioengineered foods typically contain multiple additional groups consuming diets formulated with reference control grains. The grains or processed feed fractions for these reference groups should be from commercially available seed stock that is not genetically related to the test substance. Ideally, they should also be obtained from the same field trial used to produce the GM food. The nutritional performance and toxicology response parameters from these groups are included for the purpose of establishing acceptable ranges of within-study control values. Data obtained from rats consuming diets formulated with reference control grains are particularly useful when statistical differences are observed between control and experimental groups in subchronic feeding studies with bioengineered foods (for a good example, see Malley et al. 2007).
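One simple way to use reference-group data in this interpretation is to ask whether the mean of the GM-fed group, even if statistically different from the concurrent control, still falls within the range spanned by the reference groups. The sketch below illustrates that decision rule in Python; the rule itself and all the numeric values are hypothetical, intended only to show the logic, not to reproduce any cited study's analysis.

```python
from statistics import mean

def within_reference_range(test_values, reference_groups):
    """True if the test-group mean lies within the range spanned by the
    means of the reference-control groups (a hypothetical decision rule)."""
    ref_means = [mean(group) for group in reference_groups]
    return min(ref_means) <= mean(test_values) <= max(ref_means)

# Hypothetical per-animal values for one response variable (e.g., ALT, U/L):
gm_group = [41.0, 44.5, 39.8, 43.2]
reference_groups = [[38.0, 40.1, 42.3], [44.0, 46.2, 43.1], [39.5, 41.8, 40.7]]
```

A statistically significant difference from the concurrent control that nevertheless lies within the normal variation exhibited by unrelated conventional varieties is less likely to reflect a biologically important unintended effect.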
The lack of multiple doses of the test substance is the most obvious difference between repeated-dose feeding studies conducted to assess the safety of GM foods and those conducted with typical food ingredients. However, studies of this design are particularly well suited to determining whether the GM food is “as safe as” the non-GM comparator.
Qualitative Safety Assessment
As discussed in the sections above, the studies conducted for any particular bioengineered food, including analytical characterization and in vitro and in vivo studies, are not designed for QSA. Rather, the information they generate is used to determine whether the transgenic protein within the bioengineered food is likely to be acutely toxic or allergenic and whether the GM food expressing the transgenic protein is “as safe as” the appropriate non-GM comparator. Given these goals, the safety studies conducted with GM foods in most cases will not produce numeric data to which safety and uncertainty factors can be applied.
The outcome of the protein characterization and homology comparison studies described above should permit the determination of whether the transgenic protein under consideration either is or is not likely to represent a risk for acute toxicity or as an allergenic protein. Similarly, the compositional and agronomic comparison studies conducted with bioengineered food crops should allow the identification of distinguishable differences between the GM food and the appropriate non-GM comparator.
CONCLUSIONS
The concepts and studies discussed in the current review compare and contrast the safety-testing strategies applied to bioengineered foods with those applied to food ingredients. Although there are a number of similarities in the individual components that comprise typical safety assessments for these two types of substances, particularly with regard to the animal models, there are fundamental philosophical and strategic differences. Strategies for the safety assessment of food ingredients generally focus on obtaining quantitative information that can be used to establish safe conditions of exposure and for comparison with results across other classes of ingredients. In contrast, the basic concepts and data supporting the safety assessment of bioengineered foods are typically not amenable to QSA. Rather, the safety-testing strategies and corresponding studies are conducted to determine whether the transgenic proteins used in bioengineered foods are likely to be allergenic or acutely toxic. Additionally, feeding studies with bioengineered foods have been conducted in rodents and livestock animals, including broiler chickens, to determine whether the process of producing them resulted in unintended changes that could be detected using nutritional performance and health indicators, rather than through compositional testing of the food itself and comparison to appropriate grains from non-GM crops. The outcome of this comparison demonstrates that the overall strategies used to assess the safety of bioengineered foods are at least as robust as those used to assess the safety of typical food ingredients.
Footnotes
The author thanks the following persons for guidance in the preparation of the manuscript: Ian Lamb, Ray Layton, Lee Prochaska, Tom Davis, John Zhang, Greg Ladics, Mary Locke, Natalie Weber, and Jean Schmidt from Pioneer Hi-Bred International, Inc., and Pieter Windels from Pioneer Overseas Corporation.
This article is a summary of a presentation given at the 27th Annual Meeting of the American College of Toxicology, Indian Wells, CA. Symposium XII: Safety Evaluation of Foods and Food Ingredients: Emerging Issues, Paths to Market, and Managing Business Risk. Meeting was held November 5–8, 2006.
