Abstract
Some aspects of nature are better potential models for agriculture than others. Natural ecosystems have not competed against each other the way wild plants have, so individual adaptations have been improved more consistently over time, relative to ecosystem-level patterns and processes. Wild plants have also been improved by competitive natural selection for longer than humans or most ecosystems have existed. Evolution-tested adaptations, like inducible defenses against pests, will often be worth preserving (if inherited by crops from wild ancestors) or copying. However, when there are tradeoffs between individual competitiveness and plant-community performance, as illustrated by solar tracking, reversing effects of past natural selection will often be a better option. Nitrogen-fixing cereals are unlikely to be a viable alternative to fertilizer unless we can copy adaptations that existing nitrogen-fixing plants have evolved to deal with oxygen and with conflicts of interest with symbionts.
Introduction
Natural ecosystems and wild species are a rich source of ideas for improving agriculture, but which nature-inspired ideas are most likely to outperform human-designed approaches? In this first section, I highlight some aspects of nature that may be poor models for sustainable and productive agriculture. These examples show that we should only “copy what works” as argued in the second section. The third section argues that wild species have been “tested by time” in ways that natural ecosystems have not. Although some aspects of natural ecosystems may pass the “copy-what-works” test, the adaptations of wild species are likely to pass this test more consistently. The final two sections contrast cases where reversing past natural selection could be beneficial with those, like the development of nitrogen-fixing cereals, where copying evolution-tested adaptations may greatly speed progress.
Disagreements that arise from selective quotation of complex texts—the Bible, say, or Wealth of Nations—have analogies in disagreements over applying “Nature’s wisdom” to agriculture. Vegan advocates of nature-based solutions, for example, are unlikely to quote Howard’s statement that “Mother earth never attempts to farm without livestock” (Howard, 1940).
High plant diversity in some natural ecosystems inspires advocates of intercropping, but many natural ecosystems have low plant diversity. Wild rice (Zizania aquatica) grows naturally in Minnesota as a near monoculture (Pastor and Walker, 2006) as did wild ancestors of some important crop species (Wood and Lenné, 2001). Do low-diversity natural ecosystems prove the superiority of monoculture in those regions?
Even where natural ecosystems are more diverse, do their spatial patterns show how agriculture should deploy diversity on farms and in regions to maximize its benefits? For example, if a nearby natural ecosystem has many species per hectare, i.e., high alpha diversity (Whittaker et al., 2001), but little difference among hectares, i.e., low beta diversity, can we assume a similar spatial pattern would be best for agriculture? Or might diversity among farms be useful? If plant species composition in a forest changes little from year to year, is that good evidence that crop rotation is harmful? Does uneven natural topography disprove the value of land leveling to manage water in rice fields?
Although I will argue that wild species are a good source of ideas, advocates of agriculture mimicking nature might not want to use leaf-cutter ants as an example. These ants have been practicing agriculture for fifty million years (Mueller et al., 1998), demonstrating sustainability. But rather than eating “low on the food chain” by eating leaves they harvest, they feed leaves to their fungal “crops.” So their fungus “gardens” are more like feedlots: like cattle, fungi convert only a fraction of the nutrients they receive into edible product (Denison, 2012). The ants also practice an extreme form of monoculture: each colony grows a single strain of a single fungal species (Mueller et al., 2005). Finally, from a coevolution perspective, the bacteria-derived toxins ants use to control fungal “weeds” are more analogous to synthetic pesticides than to biological control (Denison, 2012).
“Selective quotation” (choosing aspects of nature that agree with our ideology) is only part of the problem. Advocating monoculture in a region because the natural vegetation there had low plant diversity is only slightly less illogical than advocating intercropping there because other regions have high plant diversity. And ants are not the only fungus-growing insects (Schultz et al., 2022). So, how can we recognize those aspects of nature that agriculture might benefit from copying?
I will discuss two contrasting approaches. A “copy-what-works” approach would assess the current performance of potential model systems and then attempt to copy whichever system performs best. The “tested-by-time” approach acknowledges that natural systems that have persisted for millennia must at least be sustainable. But are they also superior to human-designed systems based on other criteria, such as efficient use of resources, stability over years and ability to feed large numbers of humans? The answer depends on whether alternatives had to compete with each other, as in evolution, or merely to persist.
The two approaches should usually be combined. For example, ant agriculture has persisted for millions of years, but do individual colonies often suffer catastrophic die-offs? If so, then their practices might not be a good model for human agriculture. Mere persistence is no substitute for good performance.
Natural ecosystems often fail the “copy-what-works” test
“Copying nature” could mean “copying the organization of natural ecosystems” but it could also mean “copying adaptations of wild species.” This section focuses on ecosystems.
How do natural ecosystems really compare to agricultural ones, by criteria relevant to agriculture? Agriculture might benefit from copying the organization of a natural ecosystem if that organization enhances productivity, stability, or sustainability. By “organization” I mean things like number of species, relative abundance of legumes versus grasses and spatial patterns, like the extent to which each species is clumped versus dispersed.
Are we trying to copy pattern or process? For example, if a natural grassland with high productivity and stability has 30% legumes and 70% grasses, is maintaining that ratio important? (This could require managed grazing, reseeding, or selective herbicides.) Or are the ecological processes that adjust legume:grass ratio, based on conditions, what we really need? Before copying either pattern or process, however, we would need evidence that the natural ecosystem out-performs well-managed agricultural ones in the same region.
Specifically, can the natural ecosystem feed more people? If not, are the agricultural ecosystems that replaced the natural one(s) unsustainable? What if an agricultural ecosystem is more productive today, but also more reliant on nonrenewable inputs or more destructive of its own internal components (e.g. soil)? Could its long-term ability to feed us be less than a more-natural ecosystem in the same location? A natural ecosystem that under-performs by agricultural criteria might still contain good ideas for improving agriculture, as discussed in the “tested-by-time” section, but blindly copying its organization is not one of those ideas.
In general, natural ecosystems produce much less human food than the agricultural ecosystems that have displaced them. In one study, prairie vegetation at the Cedar Creek research area, here in Minnesota, annually produced <1 kg/ha of “flowers plus fruit”, while its oak forest produced 20 kg/ha of acorns. The equivalent production from a nearby maize plot (cobs including seeds) was 5337 kg/ha (Ovington et al., 1963). Similarly, a tropical forest may produce <2 kg/ha of edible tubers (Bailey and Headland, 1991), while a farm produces >12,000 kg/ha of cassava roots (Suyamto, 1998).
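The contrast in the figures above can be made explicit with a simple calculation. Note that the source values are bounds (“<1”, “>12,000”), so the ratios below are rough order-of-magnitude estimates, not precise measurements:

```python
# Back-of-envelope comparison of edible yield (kg/ha/yr) using the
# figures cited in the text. Values marked "<" or ">" in the sources
# are treated as point estimates here, so ratios are only indicative.
edible_yield = {
    "prairie (flowers + fruit)": 1,     # <1 kg/ha (Ovington et al., 1963)
    "oak forest (acorns)": 20,          # (Ovington et al., 1963)
    "maize (cobs incl. seeds)": 5337,   # (Ovington et al., 1963)
    "tropical forest (tubers)": 2,      # <2 kg/ha (Bailey and Headland, 1991)
    "cassava farm (roots)": 12000,      # >12,000 kg/ha (Suyamto, 1998)
}

maize_vs_forest = (edible_yield["maize (cobs incl. seeds)"]
                   / edible_yield["oak forest (acorns)"])
cassava_vs_forest = (edible_yield["cassava farm (roots)"]
                     / edible_yield["tropical forest (tubers)"])

print(f"maize outyields oak forest roughly {maize_vs_forest:.0f}-fold")
print(f"cassava outyields forest tubers roughly {cassava_vs_forest:.0f}-fold")
```

Even allowing generous error bars on the natural-ecosystem numbers, the gap spans two to three orders of magnitude.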
These extreme differences are partly due to differences in reproductive allocation between plant species, so let us compare a single species on farms versus in natural ecosystems. Yields of wild rice, Zizania aquatica, from wild stands are highly variable, from 40–400 kg/ha (Moyle, 1944). Farmed wild rice yields up to 1800 kg/ha, with much less variability over years (Oelke et al., 1982). My book has more details on these and other comparisons, including at least five-fold greater harvest of meat from herded reindeer, relative to hunting the same species (caribou) in the wild (Denison, 2012).
So, if we only compare production of food for humans, natural ecosystems do not look like a good model for agriculture to emulate. If agriculture copied natural ecosystems closely enough to match their low productivity, we could not feed most of the planet’s human population. This does not exclude the possibility that some aspects of natural ecosystems are worth copying.
What about sustainability? Agriculture is feeding us now, but how long can this continue? First, how long can we continue to provide the inputs that modern agriculture uses? Pesticides could be made from renewable sources, such as soybean oil. Production of nitrogen fertilizer consumes large amounts of natural gas. Much of my research focuses on the alternative of biological nitrogen fixation (Denison, 2014, 2021). However, nitrogen fertilizer could be made by combining the inexhaustible supply of nitrogen gas in the atmosphere with hydrogen from electrolysis, using electricity from windmills or other renewable sources. We may never need to recycle phosphorus from human cadavers (Huxley, 1932), but phosphorus sent to cities in grain, milk, or meat may eventually need to be returned to farms, as mined sources of phosphorus are depleted.
An influential paper by two leading advocates of nature-based solutions endorsed “attempting to model agroecosystems on nature’s standards” including reliance on “locally derived nutrients” (Jackson and Piper, 1989). Nutrient cycling is certainly important in natural ecosystems, but how long could these ecosystems persist without external inputs? Biological nitrogen fixation can meet the nitrogen needs of natural forests in Hawaii, but the older islands increasingly depend on wind-blown dust from Asia for nutrients like phosphorus and calcium (Chadwick et al., 1999). Similarly, traditional agriculture on floodplains benefits from nutrients imported by rivers.
Unlike farms, forests do not need to export large quantities of nutrients to hungry humans in distant cities (Denison and McGuire, 2014; Doré et al., 2011). Internal recycling cannot replace exported nutrients. This difference invalidates any simple comparison of sustainability between natural and agricultural ecosystems.
The availability of nutrients is not the only reason to question the sustainability of some agricultural practices, however. A negative trend in any soil property with demonstrated importance to crop production raises concerns about a farm’s future contribution to food security. Long-term experiments with consistent management over decades are the most-reliable way to see how a given set of practices affects slow-changing properties, like soil organic matter, and how those changes affect crop yields. For example, in long-term experiments at Rothamsted, England, large annual additions of manure doubled soil organic matter, but it took 75 years. Wheat yields in the manure treatment were not consistently higher than in plots receiving only inorganic fertilizers, however, despite much greater total nitrogen inputs from manure than from fertilizer (Jenkinson, 1991). (Once organic nitrogen in the soil reached steady state, this difference resulted in greater nitrogen losses to the environment from the manure system than from fertilizer.) Given the well-documented benefits of soil organic matter (water-holding capacity, etc.), this lack of a yield benefit suggests that less-substantiated “soil health” criteria should be viewed with some skepticism.
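The multi-decade timescale in the Rothamsted result is what a minimal first-order model of soil organic matter would predict. The sketch below uses hypothetical parameter values chosen only for illustration; they are not fitted to the Rothamsted data:

```python
import math

# Minimal first-order soil-organic-matter (SOM) model, dC/dt = I - k*C,
# illustrating why SOM responds to management over decades, not seasons.
# All parameter values are hypothetical, for illustration only.
k = 0.02            # decay rate, 1/yr (hypothetical)
C0 = 30.0           # initial SOM stock, t/ha (hypothetical)
I_old = k * C0      # input rate that holds SOM at its initial steady state
I_new = 2 * I_old   # doubled inputs (e.g., large manure additions)

def som_after(years, C=C0, I=I_new, dt=0.1):
    """Euler integration of dC/dt = I - k*C."""
    for _ in range(int(years / dt)):
        C += (I - k * C) * dt
    return C

# The new steady state is I_new/k = 2*C0, approached with
# half-time ln(2)/k, about 35 years for this k.
print(f"half-time: {math.log(2) / k:.0f} yr")
print(f"SOM after 75 yr: {som_after(75):.1f} t/ha (steady state {I_new / k:.0f})")
```

With a turnover rate of a few percent per year, doubling inputs moves the soil most of the way, but not all the way, to its doubled steady state within 75 years, consistent with the slow trends seen in long-term experiments.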
Other work at Rothamsted, however, has shown that gradual soil-quality trends can eventually have devastating consequences. One experiment had to be terminated after a few decades because acidification of soil, apparently due to nitrogen fertilizer, allowed a serious pathogen to become established (Johnston and Poulton, 2018). My involvement in a single-site, long-term experiment at UC Davis (Denison et al., 2004) and a three-site, long-term experiment at the University of Minnesota (https://ltarn.cfans.umn.edu/) reflects my concern that current agricultural practices may be unsustainable. But that does not automatically imply that attempting to copy natural ecosystems will help.
Wild species have been “tested-by-time” more rigorously than ecosystems
If natural ecosystems like forests compare poorly to well-managed agricultural ecosystems as a source of food for large numbers of humans, why copy them? We still might “imitate their useful traits” some agroecologists argue, because “native ecosystems are time-proven survivors” (Ewel, 1999). But in what sense have natural ecosystems been tested over time?
Much of the Americas was dominated by agriculture before diseases introduced by European explorers killed indigenous people, allowing natural vegetation to recover (Denevan, 1992). Rice cultivation in Asia has persisted for 4000 years (King, 1911), making it about eight times older than many seemingly-natural ecosystems in the Americas.
But suppose we know that some natural ecosystem has persisted for thousands of years. We might be able to show this using pollen data, for example. In what sense is this natural ecosystem “time-proven”? It lasted a long time, but what aspects of the ecosystem’s structure or processes were key to this persistence? Would an ecosystem with more or less plant diversity have been just as persistent, but more productive? Would adding a major herbivore or predator have enhanced year-to-year stability, or undermined it? Persistence alone cannot answer these questions.
We get a different answer if we ask the same kind of question about the adaptations of wild species. Consider the symbiosis of legume plants with nitrogen-fixing rhizobia, which has persisted for tens of millions of years, thousands of times longer than natural ecosystems have had to recover since glaciers retreated at the end of the last ice age. Would plants benefit from making more root nodules, as some nitrogen-fixation researchers assume? Probably not. Nodules have costs as well as benefits (Denison, 2014). With genetic variation in nodulation, plants that make more nodules competed against plants with fewer nodules, repeatedly, over millions of years. Today’s legumes inherited alleles influencing nodule number from the winners of that repeated competition. Current nodule numbers may therefore be close to ideal, at least for past conditions. Consistent with this hypothesis, soybeans that made many more nodules than typical cultivars tended to have lower, not higher, yields (Song et al., 1995).
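The cost/benefit logic above can be sketched as a toy optimization: benefits of nodulation saturate once nitrogen demand is met, while construction and maintenance costs keep rising with nodule number. The curve shapes and every number below are invented purely to illustrate why “more nodules” need not mean higher fitness:

```python
# Toy cost/benefit model for nodule number. Benefit saturates
# (diminishing returns on fixed N once demand is met); cost rises
# roughly linearly with nodule number. All parameters are hypothetical.
def net_benefit(n, b_max=10.0, half_sat=20.0, unit_cost=0.2):
    benefit = b_max * n / (n + half_sat)   # saturating benefit of fixed N
    cost = unit_cost * n                   # linear construction/maintenance cost
    return benefit - cost

# The interior optimum: net benefit peaks at an intermediate nodule number.
best_n = max(range(0, 201), key=net_benefit)
print(f"optimal nodule number in this toy model: {best_n}")
```

Plants above or below the optimum would both have been out-competed, which is why existing nodule numbers may already be near-ideal for past conditions.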
Or consider the timing of flowering. A plant that flowers too early, using resources that might have been used to grow taller, may be shaded by a neighbor that waits a bit. But plants that flower too late may be killed by frost or terminal drought before seeds mature. Over time, natural selection will have balanced these and other factors, such as the availability of pollinators, enhancing seed production in a given environment.
The same logic does not apply to ecosystems (Denison, 2012). We may not know why wild rice grows as a monoculture, but it is not because lakes with a single aquatic plant species out-competed lakes with more species. More-productive terrestrial ecosystems may disperse more species to less-productive ecosystems than vice versa, but traits like number of species or spatial patterns are not inherited with anything like the fidelity of DNA-based inheritance. Evolution has improved the arrangement of leaves on trees, not arrangement of trees in forests.
Elsewhere we have argued that ecological processes like succession or “self-organization” have not consistently improved ecosystem organization either (Denison and McGuire, 2014). There may be specific cases where they have, but we would need specific evidence for each case. This contrasts with the many adaptations that wild plants have evolved that help them address challenges like drought, pests, low-fertility soils and competition with other plants. These adaptations have been competitively tested repeatedly. The adaptations that survived and spread may not be optimal in any absolute sense, but we can be confident that (at least relative to any alternative phenotypes that arose repeatedly via mutation or recombination) they enhanced individual-plant fitness in past environments.
For example, high-elevation populations of teosinte, the wild ancestor of maize, have various adaptations to cold. The genes responsible have already spread to high-elevation populations of maize sympatric with teosinte (Hufford et al., 2013), but they might be useful in breeding cold-tolerant maize for other cold climates. Transferring useful genes from wild plants to crops (through backcross breeding or biotechnology) is an obvious way to copy the adaptations that wild species have evolved, but other approaches are also conceivable. If a crop cannot be crossed with the wild species where the adaptation was discovered, and if we cannot determine which genes to transfer, it might still be possible to select for the phenotypic trait. For example, small hairs on teosinte leaves, key to cold tolerance, could be used as a selection criterion for cold tolerance in any species that has variants making similar hairs.
Time-tested adaptations counter-productive in agriculture
Copying adaptations that won repeated rounds of competition over millions of years has a sounder theoretical basis than copying the structure of ecosystems, which have only persisted for hundreds or thousands of years, without any reliable mechanism for propagating winning “designs.” But the adaptations that helped plants out-compete their neighbors in past environments will not always be optimal in agriculture. Sometimes, it will even be useful to negate adaptations our crops inherited from their wild ancestors. We therefore need to combine the “tested-by-time” approach with a “what-works” approach.
Something that worked well over millions of years may not work well today. Conditions on today’s farms may be very different than where a wild plant’s adaptations evolved. In addition to differences due to crop management by humans (irrigation, fertilization, fences to keep out large herbivores, etc.), simply moving a species to a new location exposes it to new conditions.
Changes over time may also make adaptations that evolved in past environments less useful in agriculture today.
For example, increases in atmospheric CO2 have apparently outpaced evolution’s ability to modify photosynthetic systems. Like the timing of flowering, photosynthesis is subject to tradeoffs. The key photosynthetic enzyme, rubisco, can interact with oxygen rather than CO2, resulting in wasteful photorespiration. Some rubisco variants are better able to distinguish between CO2 and oxygen, reducing photorespiration, but they generally have lower photosynthesis rates per rubisco molecule (Tcherkez et al., 2006). We had predicted this tradeoff (Denison et al., 2003), based on limited published data, in response to a suggestion from molecular biologists that a red-algal rubisco with greater specificity for CO2 would increase crop photosynthesis (Long, 1998). As atmospheric CO2 increases, specificity for CO2 becomes less important, relative to a faster reaction rate. A change along rubisco’s tradeoff curve—in the opposite direction from that advocated twenty years ago (Long, 1998)—might therefore be beneficial today and in the future (Zhu et al., 2004).
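The rubisco tradeoff described above can be illustrated with a toy calculation. Net carboxylation is approximated here as the catalytic rate minus a photorespiratory loss proportional to the oxygenation/carboxylation ratio; the variant parameters and concentrations are hypothetical, loosely scaled to dissolved gas concentrations in a leaf, and ignore Michaelis-Menten kinetics entirely:

```python
# Toy illustration of the rubisco rate/specificity tradeoff: variants with
# higher CO2/O2 specificity (S) tend to have lower catalytic rate (v).
# Net carboxylation is approximated as v * (1 - 0.5 * O / (S * C)), where
# the second term is the photorespiratory loss (about 0.5 CO2 released per
# oxygenation). All numbers are hypothetical, for illustration only.
O = 250  # dissolved O2, uM (roughly air-equilibrated water)

def net_rate(v, S, C):
    return v * (1 - 0.5 * O / (S * C))

# Hypothetical variants on the tradeoff curve: (rate, specificity)
fast_low_S = (3.0, 40)
slow_high_S = (2.4, 120)

for C in (9, 13, 26):  # dissolved CO2, uM: low, current-ish, elevated
    better = ("fast/low-S" if net_rate(*fast_low_S, C) > net_rate(*slow_high_S, C)
              else "slow/high-S")
    print(f"CO2 {C} uM: {better} variant wins")
```

In this cartoon, the high-specificity variant wins at low CO2 but loses as CO2 rises, which is the direction-of-change argument made in the text.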
Solar tracking, where leaves turn to follow the sun, has apparently evolved repeatedly and is found in both wild and cultivated species. It has been suggested that solar avoidance (facing away from the sun) is useful when water is limiting, whereas irrigated agriculture could benefit from increased solar tracking (Ehleringer and Forseth, 1980). By facing the sun, a leaf gets more light, so it could photosynthesize more. But solar-tracking leaves also cast larger shadows, potentially reducing the photosynthesis rate of leaves below them.
To measure the net effect of solar tracking on photosynthesis, we enclosed 1-m2 communities of alfalfa plants in transparent chambers exposed to sunlight and measured their net CO2 exchange. Rotating the whole community disrupted solar tracking long enough to see effects on photosynthesis. When leaf area per plant was small enough that most shadows fell on soil, disrupting solar tracking decreased total canopy photosynthesis by up to about 3%. But as leaf area increased, the net effects of solar tracking on total photosynthesis became slightly negative (Denison et al., 2010).
The break-even leaf area in these experiments was greater than we predicted using a computer model (Denison and Loomis, 1988), so obviously the model was wrong. But which model? The computer model assumed a field large enough to minimize edge effects and predicted that an alfalfa cultivar with less solar tracking would have 5% greater seasonal yield. In the experimental model, light from the sides could have ameliorated the effects on lower-canopy photosynthesis of shading by upper leaves. The experiments may therefore have underestimated the negative effects of solar tracking on canopy photosynthesis.
If the net effects of solar tracking on photosynthesis are negative, why has this trait persisted? If the sun were directly overhead, shadows cast by solar-tracking upper leaves would mostly fall on lower leaves of the same plant. At lower solar elevations, however, shadows could mostly fall on leaves of neighboring plants. Suppressing the growth of competing neighbors would increase a plant’s future access to water, soil nutrients, pollinators and even sunlight. Shading effects would be even greater near the ground: 50% more reduction in sunlight with solar tracking (Denison et al., 2010). A perennial plant like alfalfa might therefore get large future-year benefits from suppressing photosynthesis in seedlings nearby. By suppressing neighbors, a solar-tracking plant might enhance its fitness even if tracking slightly decreased its own photosynthesis. If so, this trait would be maintained by natural selection, despite its negative effects on plant-community productivity.
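The individual-versus-community tradeoff in solar tracking can be sketched as a cartoon model: a tracking leaf intercepts more light itself, but its larger shadow removes light from whatever lies beneath it, bare soil when the canopy is sparse, other leaves when it is dense. All parameter values below are hypothetical; this is not the model of Denison and Loomis (1988):

```python
import math

# Cartoon of the solar-tracking tradeoff. The fraction of a shadow landing
# on leaves rather than soil is approximated as 1 - exp(-k * LAI), a
# Beer's-law-like saturation. Gain and penalty magnitudes are hypothetical.
def net_tracking_effect(lai, gain_self=0.03, shade_penalty=0.04, k=0.6):
    """Net fractional change in canopy photosynthesis from solar tracking."""
    f_on_leaves = 1 - math.exp(-k * lai)       # shadow fraction hitting leaves
    gain = gain_self * (1 - f_on_leaves)       # extra light on sunlit tracking leaves
    loss = shade_penalty * f_on_leaves         # lost photosynthesis under shadows
    return gain - loss

for lai in (0.5, 1, 2, 4, 6):
    print(f"LAI {lai}: net effect {net_tracking_effect(lai):+.3f}")
```

The sign flips at an intermediate leaf area index: tracking helps a sparse canopy but hurts a closed one, matching the break-even behavior seen in the chamber experiments.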
When there are tradeoffs between the individual-plant competitiveness maintained by natural selection and the plant-community performance key to crop yields, it may be relatively easy to increase yields by reversing the effects of past natural selection (Donald, 1968; Weiner, 2019). Consider tassels, the male flowers at the top of maize plants. In older varieties, tassels cast large shadows on leaves and were often much larger than needed to pollinate all the female flowers. Darwin referred to “the astonishing waste of pollen by our fir-trees” (Darwin, 1859). Like over-size antlers on Irish elk bulls (now extinct), over-size tassels increased individual fitness at a cost to the community. Although the photosynthetic cost of tassels shading leaves—again, often leaves of competitors!—was quantified decades ago (Duncan et al., 1967), reducing tassel size has apparently not been an explicit goal in maize breeding. Tassel size has decreased as a side-effect of selection for yield, but it has taken many decades (Duvick and Cassman, 1999).
The best-known example of tradeoffs between individual-plant competitiveness and crop yield involves plant height. Peter Jennings predicted that shorter rice plants would have higher yield (Jennings, 1964) and confirmed this prediction by developing IR8, which had up to twice the yield of existing varieties. He also predicted that rice cultivars whose higher yield depended on short stature would be less competitive and confirmed that prediction as well (Jennings and de Jesus, 1968). Similarly, a plant that sends roots into soil underneath a neighbor may gain additional water or nutrients, but plant-community productivity decreases when plants collectively invest more in roots without increasing their collective uptake of soil resources (Zhang et al., 1999). Additional examples of actual and potential improvements in agriculture through reversing past selection for individual-plant competitiveness have been discussed elsewhere (Anten and Vermeulen, 2016; Denison, 2012, 2015).
Time-tested adaptations that might benefit agriculture
My focus here, however, is on time-tested wild-plant adaptations that agriculture might benefit from copying. For example, defending crops against pests is becoming increasingly challenging as pest populations evolve resistance to some insecticides, while use of others is banned or restricted. What can we learn from the mechanisms that wild plants have evolved to defend themselves? Although many plants rely heavily on toxins for defense, their defense strategies may be more sophisticated than recent biotech approaches. For example, wild-plant chemical defenses are often inducible, in contrast to maize plants engineered to make Bt toxin all the time.
Why might inducible defenses enhance fitness, relative to constitutive ones? An obvious reason is that making chemical defenses consumes resources. But that may not be the most important reason to turn defenses off when they are not needed. In the absence of pests, the fitness costs of cyanide-based chemical defenses were much greater than could be explained by their biosynthesis cost (Kakes, 1989). Maybe cyanide sometimes poisoned the plant itself, or maybe it deterred pollinators. Given the many reasons why inducible defenses may increase fitness (Agrawal and Karban, 1999), converting inducible defenses inherited from wild ancestors into constitutive ones seems misguided.
Wild potatoes offer an interesting case study on the value of inducible defenses. Their leaves have hairs (trichomes) which release a glue that immobilizes insect pests, but only when trichomes are damaged, typically by insects (Gibson and Pickett, 1983). The evolutionary persistence of inducibility suggests that it has advantages over continuous glue production. I am not claiming that evolution has always found the best solution to the challenges wild species have faced. But complex adaptations like inducible defenses must require additional genes, beyond those needed for glue production. Various simple (e.g. knockout) mutations in those additional genes would presumably turn glue production on all the time. So we can assume that potatoes with inducible glue production competed against mutants with continuous glue production, and inducibility won.
But wild potatoes have another chemical defense that appears to be constitutive. They make and release a volatile chemical similar enough to aphid alarm pheromones that it can frighten aphids away (Gibson and Pickett, 1983). When wheat was genetically engineered to release similar chemicals, aphids were repelled in lab experiments. However, there was no decrease in aphid numbers or damage in the field (Bruce et al., 2015). The authors noted that, when aphids are attacked by predators, they release alarm pheromone in a quick burst. So continuous release of the pheromone, either in wheat or in wild potatoes, might not scare aphids as effectively as release triggered by the presence of aphids.
If inducible pheromone production would work better than continuous production, why didn’t inducible production evolve? Inducible defenses are more complex, requiring additional genes beyond those for pheromone synthesis. So maybe an inducible version of this defense never arose in wild potatoes. If wild potatoes with continuous pheromone production never had to compete against mutants with an inducible version, then we cannot say that natural selection favored continuous production over inducible production. This contrasts with the argument above, where evolutionary persistence of more-complex, inducible glue production was considered evidence for its superiority over simpler, continuous glue production.
This logic is not specific to pest-related adaptations. Persistence in the face of competitive testing by natural selection is strong evidence that a trait is (or at least was) beneficial, but only if that competitive testing actually happened. For example, evolutionary persistence of solar tracking (another complex adaptation) in one species is good evidence that it benefited that species. But lack of solar tracking in another species does not prove that it would not have benefited that other species. If solar-tracking mutants never evolved in that species, the trait has never been tested by competitive natural selection.
Complex traits have evolved surprisingly often, however. For example, C4 photosynthesis solves the photorespiration problem mentioned above by compartmentalizing the rubisco enzyme and pumping CO2 into the compartments. This is a complex adaptation, yet it has apparently evolved more than 60 times. What does this tell us about the most-popular current approach to improving crop photosynthesis, namely, to modify rice to use the C4 photosynthesis pathway (Ermakova et al., 2019)? Repeated evolution of the trait suggests that we might have expected it to evolve in rice or its recent ancestors, if it would have benefited them. A closer look at the phylogenetic distribution of C4 photosynthesis might help. Rice is a member of a large group of grasses in which C4 photosynthesis has never evolved, perhaps due to anatomical limitations that would prevent it from evolving easily even if it were beneficial. Yet, within this clade, the anatomy of Oryza coarctata, a rice relative, is apparently in the range that could facilitate evolution of C4 photosynthesis (Christin et al., 2013). A complication is that Oryza coarctata is highly salt tolerant (Mondal et al., 2018) and has evolved under very different conditions than wild rice. So it is not clear what the failure of rice to evolve C4 photosynthesis tells us about whether it would have been beneficial in the past. The longer it takes to develop C4 rice, the more increasing atmospheric CO2 will undermine the benefits of its CO2-concentrating mechanism. But, given the massive efforts underway, we may soon learn whether this trait is generally beneficial in rice and what tradeoffs, if any, are involved.
Genetic engineering of cereal crops to fix their own nitrogen (perhaps via symbiosis with bacteria) is a long-term dream (Lim et al., 1979). Production of nitrogen fertilizer is a significant contributor of greenhouse gases. A significant fraction of fertilizer applied ends up polluting water in rivers or wells. Both problems would be reduced if we relied more on crops whose bacterial symbionts convert atmospheric nitrogen to forms their host plants can use. Among crop plants, legumes can obtain much of the nitrogen they need from symbiosis with nitrogen-fixing, root-nodule bacteria known collectively as rhizobia. Some wild plant species, including trees like alder, form nitrogen-fixing symbioses with a different group of bacteria. But could major nonlegume crops like maize, wheat, or rice be genetically engineered to form effective nitrogen-fixing symbioses? Maybe, but these efforts would benefit from a greater understanding of natural nitrogen-fixing symbioses with plants.
Three approaches are being considered (Beatty and Good, 2011): symbiosis with bacteria in root nodules, some looser symbiosis, or engineering a plant organelle (mitochondrion or chloroplast) to fix nitrogen. Transferring or editing genes poses recognized technical challenges, but rapid progress is being made. So I will focus on three more-fundamental challenges.
First, the metabolic cost of biological nitrogen fixation, even under ideal conditions, may limit crop yields relative to crops supplied with nitrogen fertilizer. A second challenge arises from contrasting requirements for oxygen concentration and supply. Simplistic solutions to this problem are likely to reduce nitrogen-fixation efficiency below its theoretical ideal. Third, conflicts of interest between plant hosts and their symbionts (perhaps including engineered organelles) can further reduce the efficiency of nitrogen fixation. I will discuss each challenge in turn.
The first challenge is the greater metabolic cost of nitrogen fixation to crop plants, relative to nitrogen from fertilizer. In one early review, theoretical estimates of the cost to plants of obtaining nitrogen ranged from 2.9 to 6.1 gC/gN for symbiotic nitrogen fixation versus only 0.8 to 2.4 gC/gN for using soil nitrate (Atkins, 1984). Experimental estimates of the cost of nitrogen fixation have ranged as high as 8.1 gC/gN for alfalfa (Twary and Heichel, 1991) and 5.8 gC/gN in the nonlegume, alder (Lundquist, 2005). Some recent experiments estimated nitrogen fixation’s cost to bean plants at 4 gC/gN (Schilling et al., 2006). Although this was similar to the same group’s estimates of the cost of using nitrate, they noted that the energetically expensive process of nitrate reduction can occur mainly in leaves in some species. In leaves, adenosine triphosphate (ATP) from the light reactions of photosynthesis can be used without exchanging CO2 (and water) with the atmosphere, whereas nitrogen fixation in root nodules cannot avoid this carbon and water cost.
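A rough, back-of-the-envelope sketch may help convey how such cost differences could scale up at the field level. The gC/gN figures below are taken from the upper theoretical estimates above; the seasonal nitrogen demand and biomass carbon content are hypothetical illustrative values, not measurements.

```python
# Illustrative comparison of carbon costs of nitrogen acquisition.
# Cost figures (gC/gN) are from the theoretical estimates cited in the text
# (Atkins, 1984); the nitrogen demand and carbon fraction are hypothetical.

N_DEMAND_KG_PER_HA = 200.0         # hypothetical seasonal N uptake of a cereal crop
COST_FIXATION_GC_PER_GN = 6.1      # upper theoretical estimate, symbiotic fixation
COST_NITRATE_GC_PER_GN = 2.4       # upper theoretical estimate, soil nitrate use
CARBON_FRACTION_OF_BIOMASS = 0.45  # assumed C content of plant dry matter

def carbon_cost_kg_per_ha(cost_gc_per_gn: float) -> float:
    """Carbon allocated to N acquisition, kg C per hectare.
    gC/gN is a dimensionless ratio, so it scales directly to kgC/kgN."""
    return N_DEMAND_KG_PER_HA * cost_gc_per_gn

extra_c = (carbon_cost_kg_per_ha(COST_FIXATION_GC_PER_GN)
           - carbon_cost_kg_per_ha(COST_NITRATE_GC_PER_GN))
biomass_equivalent = extra_c / CARBON_FRACTION_OF_BIOMASS  # kg dry matter per ha

print(f"Extra carbon cost of fixation: {extra_c:.0f} kg C/ha")
print(f"Dry matter potentially foregone: {biomass_equivalent:.0f} kg/ha")
```

Even with these crude assumptions, a cereal paying legume-like fixation costs could forego on the order of a tonne of dry matter per hectare, which is why keeping the cost of fixation low matters so much.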
If nitrogen fixation by cereal crops were only slightly more metabolically expensive than nitrogen fertilizer (and so reduced yields only slightly), then the environmental benefits of decreased nitrogen pollution could perhaps outweigh environmental costs like converting forests and wetlands to agriculture to make up for lost food production. The public might accept this tradeoff as long as food supply, relative to demand, was sufficient to keep food affordable. If food shortages caused prices to surge, however, political restrictions on fertilizer use would probably not survive. Although increases in energy prices would increase fertilizer prices, helping to make nitrogen fixation a viable alternative, alternative sources of energy may be easier to develop than alternative sources of food. So keeping the cost of nitrogen fixation to plants low will be key to the wider use of nitrogen fixation, by either legumes or cereals.
Solving the oxygen problem is key to successful nitrogen fixation in cereals and to keeping its yield-reducing metabolic costs low. Nitrogenase is destroyed by oxygen concentrations well below atmospheric, but nitrogen fixation’s high ATP demand requires a high oxygen flux to the sites of respiration. These two criteria are not necessarily in conflict. High oxygen consumption (e.g. from respiration by nitrogen-fixing bacteria in root nodules) will keep nodule-interior oxygen concentrations low and also generate large amounts of ATP. A barrier to diffusion between the nodule interior and the atmosphere (Tjepkema and Yocum, 1973) is required, however. Engineering cereals to make nodule-like structures with a diffusion barrier might not be too much of a challenge, but a fixed barrier would be sufficient only if the environment were constant. Consider a challenge as simple as diurnal changes in soil temperature. If respiration rate is just sufficient to consume all the oxygen that diffuses into the nodule and then cooler temperatures decrease respiration by 10%, the nodule-interior oxygen concentration will increase to 10% of atmospheric, by Fick’s law of diffusion. That would destroy nitrogenase. This problem could be solved by over-investing in nodule-interior respiration capacity or by over-investing in the nodule diffusion barrier. Either way, the cost of nitrogen fixation, already a potential limitation, would increase relative to its benefits.
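The Fick’s-law argument above can be made concrete with a minimal steady-state sketch. All quantities are normalized (atmospheric O2 = 1) and the fixed permeability is an arbitrary illustrative value; this is a sketch of the reasoning, not a calibrated nodule model.

```python
# Steady-state sketch of the fixed-barrier problem described in the text.
# Fick's law: O2 influx J = P * (C_atm - C_int); at steady state, influx
# equals the respiration rate R. Values are normalized and illustrative.

C_ATM = 1.0  # external O2 concentration, normalized to atmospheric
P = 1.0      # fixed permeability of the diffusion barrier (arbitrary units)

def interior_o2(respiration: float, permeability: float = P) -> float:
    """Interior O2 concentration at which diffusive influx just balances
    respiration, for a fixed (non-adjusting) diffusion barrier."""
    return max(C_ATM - respiration / permeability, 0.0)

r_initial = P * C_ATM       # respiration consumes all incoming O2, so C_int ~ 0
r_cooler = 0.9 * r_initial  # a 10% drop in respiration, e.g. cooler night soil

print(f"interior O2, initially: {interior_o2(r_initial):.1f}")
print(f"interior O2 after 10% respiration drop: "
      f"{interior_o2(r_cooler):.1f} of atmospheric")
```

With a fixed barrier, the 10% respiration shortfall necessarily shows up as an interior oxygen concentration of 10% of atmospheric, enough to destroy nitrogenase; a variable barrier avoids this by reducing permeability instead.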
Legumes evolved an alternative: a variable diffusion barrier (Witty et al., 1987). Permeability of soybean and birdsfoot trefoil nodules to oxygen and other gases (Denison and Layzell, 1991; Weisz and Sinclair, 1988; Witty and Minchin, 1998) adjusts appropriately in response to changes in external oxygen concentration, as might occur with flooding and draining of soils. When grazing of clover reduces the capacity of nodule-interior respiration to consume oxygen, a dramatic decrease in nodule gas permeability protects nitrogenase from oxygen damage (Denison and Okano, 2003; Hartwig et al., 1987). When the availability of soil nitrate offers a cheaper alternative to nitrogen fixation, a decrease in nodule oxygen permeability (limiting consumption of photosynthate by nodule respiration) precedes nodule senescence (Denison and Harter, 1995).
We have only preliminary evidence on the structure and operation of the diffusion barrier, even in well-studied legumes (De Lorenzo et al., 1993; Denison and Kinraide, 1995). In nonlegumes, individual vesicles provide a variable diffusion barrier around the bacteria (Parsons et al., 1987), which may not respond as rapidly as that in legumes (Berry, 1998). Without an appropriate variable diffusion barrier, cereal crops that rely on nitrogen fixation will likely have yields too low for this to be a viable alternative to nitrogen fertilizer.
Cereals relying on nitrogen-fixing symbionts would face a third problem: conflicts of interest between host and symbiont over allocation of resources between nitrogen fixation and symbiont reproduction. In loose associations, like that between nitrogen-fixing bacteria and maize roots (Van Deynze et al., 2018), why should bacteria fix more nitrogen than they need for their own reproduction? Bacteria inside root nodules could perhaps face physical constraints on immediate reproduction, but bacteria hoarding resources to support their own future survival and reproduction in soil can significantly reduce the efficiency of nitrogen fixation (Oono et al., 2020). If mitochondria or chloroplasts were engineered to fix nitrogen, could nonfixing mutant organelles use the resources saved to reproduce more within plant cells, eventually displacing the nitrogen-fixing organelles?
If there were only one symbiont genotype per individual host plant—and if mutation were somehow prevented—then strains that fix little or no nitrogen would have lower fitness, unless their host had good access to soil nitrogen. This is because their host plants would grow less well than host plants supporting better strains. But with realistic numbers of strains per plant, bacterial strains that use host resources only for their own reproduction would have greater fitness (i.e. increase in frequency over years) than strains that fix nitrogen (West et al., 2002). This assumes that fixing nitrogen yields only collective benefits, shared by all strains on a plant. Under this scenario, nitrogen-fixing symbionts on roots or in nodules—perhaps even nitrogen-fixing organelles—would soon be displaced by “free-rider” mutants that divert resources from nitrogen fixation to their own reproduction.
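The logic of this free-rider argument can be sketched with a minimal haploid selection model. The cost and benefit parameters below are hypothetical, chosen only to illustrate the qualitative point that when fixation’s benefit is purely collective, any private cost drives fixers to low frequency.

```python
# Minimal selection sketch of the free-rider argument in the text: nitrogen
# fixation yields only a collective benefit shared by all strains on a host,
# while its cost falls on the fixer alone. Parameters are hypothetical.

def next_frequency(p_fixer: float, cost: float = 0.1, benefit: float = 0.3) -> float:
    """One generation of selection. The collective benefit (proportional to
    the fraction of fixers) accrues to fixers and free-riders alike, so only
    the private cost of fixation differentiates their fitness."""
    shared = benefit * p_fixer        # host vigor depends on fixer frequency
    w_fixer = 1.0 + shared - cost     # fixer pays the private cost of fixation
    w_rider = 1.0 + shared            # free-rider gets the benefit for free
    mean_w = p_fixer * w_fixer + (1.0 - p_fixer) * w_rider
    return p_fixer * w_fixer / mean_w

p = 0.99  # start with fixers nearly fixed in the population
for _ in range(100):
    p = next_frequency(p)
print(f"fixer frequency after 100 generations: {p:.3f}")
```

Because w_fixer is less than w_rider at every frequency, fixers decline toward loss no matter how common they start, exactly the displacement scenario described above; host sanctions, discussed next, change the payoffs so that this is no longer true.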
Fortunately, legumes have evolved solutions to this problem, too. Host-imposed “sanctions” shut down nodules that fail to fix nitrogen (Oono et al., 2011), apparently by reducing nodule oxygen permeability (Kiers et al., 2003). Strains that fix little or no nitrogen could perhaps escape whole-nodule sanctions when they share a nodule with a strain that fixes nitrogen efficiently (Denison, 2000). However, there is some evidence for sanctions against nonfixing strains within mixed nodules of Acmispon strigosus and Lotus japonicus (Regus et al., 2017) and Mimosa pudica (Daubech et al., 2017). Without some host-sanctions mechanism to make fixing nitrogen efficiently a higher-fitness option for bacteria or engineered organelles, nitrogen-fixing cereals may appear on the cover of high-impact journals, but they will never be a practical alternative to nitrogen fertilizer.
How strong would host sanctions need to be to limit the spread of resource-diverting “free-riders” enough to make nitrogen-fixing cereals practical? This depends on the cost of resource diversion to the plant and on its benefit to the rhizobia. When a plant is much more limited by nitrogen than by carbon—this is probably true sometimes, whether or not the carbon demands of nitrogen fixation can stimulate photosynthesis (Kaschuk et al., 2009)—the carbon cost of nitrogen fixation may not matter much. But knocking out one rhizobial resource-diversion mechanism significantly increased nitrogen accumulation in bean (Cevallos et al., 1996). The lifetime carbon cost of nodules that get shut down because they fix too little nitrogen may be small. But nonfixing nodules may have a large opportunity cost if they formed at a time when their carbon requirements competed with other urgent needs. How the fitness benefits to rhizobia from resource diversion compare with their fitness risks from host sanctions is a major focus of current research in my laboratory (Muller and Denison, 2018).
In summary, wild-plant adaptations tested competitively over millions of years may outperform our own ideas, but only when natural selection’s criterion of individual-plant fitness is aligned with our goals for plant-community performance under agricultural conditions. Although natural ecosystems may have demonstrated sustainability with few external inputs, they are not necessarily a good model for agricultural ecosystems because they have not been subject to competitive testing or the massive nutrient removals in harvested products that current human populations require.
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship and/or publication of this article.
