11
FEEDING THE WORLD
[Agriculture’s] principal object consists in the production of nitrogen under any form capable of assimilation.
—JUSTUS VON LIEBIG, 1840
The Machine That Changed the World
Compared with the flight of the Wright brothers’ first plane or the detonation of the first atomic bomb, the appearance of a few drips of colorless liquid at one end of an elaborate apparatus in a laboratory in Karlsruhe, Germany, on a July afternoon in 1909 does not sound very dramatic. But it marked the technological breakthrough that was to have arguably the greatest impact on mankind during the twentieth century. The liquid was ammonia, and the tabletop equipment had synthesized it from its constituent elements, hydrogen and nitrogen. This showed for the first time that the production of ammonia could be performed on a large scale, opening up a valuable and much-needed new source of fertilizer and making possible a vast expansion of the food supply—and, as a consequence, of the human population.
The link between ammonia and human nutrition is nitrogen. A vital building block of all plant and animal tissue, it is the nutrient responsible for vegetative growth and for the protein content of cereal grains, the staple crops on which humanity depends. Of course, plants need many nutrients, but in practice their growth is limited by the availability of the least abundant nutrient. Most of the time this is nitrogen. For cereals, nitrogen deficiency results in stunted growth, yellow leaves, reduced yields, and low protein content. An abundance of available nitrogen, by contrast, promotes growth and increases yield and protein content. Nitrogen compounds (such as proteins, amino acids, and DNA) also play crucial roles in the metabolisms of plants and animals; nitrogen is present in every living cell. Humans depend on the ingestion of ten amino acids, each built around a nitrogen atom, to synthesize the body proteins needed for tissue growth and maintenance. The vast majority of these essential amino acids come from agricultural crops, or from products derived from animals fed on those crops. An inadequate supply of these essential amino acids leads to stunted mental and physical development. Nitrogen, in short, is a limiting factor in the availability of mankind’s staple foods, and in human nutrition overall.
The ability to synthesize ammonia, combined with new “high-yield” seed varieties specifically bred to respond well to chemical fertilizers, removed this constraint and paved the way for an unprecedented expansion in the human population, from 1.6 billion to 6 billion, during the course of the twentieth century. The introduction of chemical fertilizers and high-yield seed varieties into the developing world, starting in the 1960s, is known today as the “green revolution.” Fertilizer helped the food supply grow sevenfold over the century, while the population grew by a factor of 3.7; without it to nourish crops and provide more food, hundreds of millions of people would have faced malnutrition or starvation, and history might have unfolded very differently.
The green revolution has had far-reaching consequences. As well as causing a population boom, it helped to lift hundreds of millions of people out of poverty and underpinned the historic resurgence of the Asian economies and the rapid industrialization of China and India—developments that are transforming geopolitics. But the green revolution’s many other social and environmental side effects have made it hugely controversial. Its critics contend that it has caused massive environmental damage, destroyed traditional farming practices, increased inequality, and made farmers dependent on expensive seeds and chemicals provided by Western companies. Doubts have also been expressed about the long-term sustainability of chemically intensive farming. But for better or worse, there is no question that the green revolution did more than just transform the world’s food supply in the second half of the twentieth century; it transformed the world.
The Mystery of Nitrogen
The origins of the green revolution lie in the nineteenth century, when scientists first came to appreciate the crucial role of nitrogen in plant nutrition. Nitrogen is the main ingredient of air, making up 78 percent of the atmosphere by volume; the rest is mostly oxygen (21 percent), plus small amounts of argon and carbon dioxide. Nitrogen was first identified in the 1770s by scientists investigating the properties of air. They found that nitrogen gas was mostly unreactive and that animals placed in an all-nitrogen atmosphere suffocated. Yet having learned to identify nitrogen, the scientists also discovered that it was abundant in both plants and animals and evidently had an important role in sustaining life. In 1836 Jean-Baptiste Boussingault, a French chemist who took a particular interest in the chemical foundations of agriculture, measured the nitrogen content of dozens of substances, including common food crops, various forms of manure, dried blood, bones, and fish waste. He showed in a series of experiments that the effectiveness of different forms of fertilizer was directly related to their nitrogen content. This was odd, given that atmospheric nitrogen was so unreactive. There had to be some mechanism that transformed nonreactive nitrogen in the atmosphere into a reactive form that could be exploited by plants.
Some scientists suggested that lightning created this reactive nitrogen by breaking apart the stable nitrogen molecules in the air; others speculated that there might be trace quantities of ammonia, the simplest possible compound of nitrogen, in the atmosphere. Still others believed that plants were somehow absorbing nitrogen from the air directly. Boussingault took sterilized sand that contained no nitrogen at all, grew clover in it, and found that nitrogen was then present in the sand. This suggested that legumes such as clover could somehow capture (or “fix”) nitrogen from the atmosphere directly. Further experiments followed, and eventually in 1885 another French chemist, Marcellin Berthelot, demonstrated that uncultivated soil was also capable of fixing nitrogen, but that the soil lost this ability if it was sterilized. This suggested that nitrogen fixation was a property of something in the soil. But if that was the case, why were leguminous plants also capable of fixing nitrogen?
The mystery was solved by two German scientists, Hermann Hellriegel and Hermann Wilfarth, the following year. If nitrogen-fixing was a property of the soil, they reasoned, it should be transferable. They put pea plants (another kind of legume) in sterilized soil, and they added fertile soil to some of the pots. The pea plants in the sterile soil withered, but those to which fertile soil had been added flourished. Cereal crops, however, did not respond to the application of soil in the same way, though they did respond strongly to nitrate compounds. The two Hermanns concluded that the nitrogen-fixing was being done by microbes in the soil and that the lumps, or nodules, that are found on the roots of legumes were sites where some of these microbes took up residence and then fixed nitrogen for use by the plant. In other words, the microbes and the legumes had a cooperative, or symbiotic, relationship. (Since then, scientists have discovered nitrogen-fixing microbes that are symbiotic with freshwater ferns and supply valuable nitrogen in Asian paddy fields; and nitrogen-fixing microbes that live in sugarcane, explaining how it can be harvested for many years from the same plot of land without the use of fertilizer.)
Nitrogen’s crucial role as a plant nutrient had been explained. Plants need nitrogen, and certain microbes in the soil can capture it from the atmosphere and make it available to them. In addition, legumes can draw upon a second source of nitrogen, namely that fixed by microbes accommodated in their root nodules. All this explained how long-established agricultural practices, known to maintain or replenish soil fertility, really worked. Leaving land fallow for a year or two, for example, gives the microbes in the soil a chance to replenish the nitrogen. Farmers can also replenish soil nitrogen by recycling various forms of organic waste (including crop residues, animal manures, canal mud, and human excrement), all of which contain small amounts of reactive nitrogen, or by growing leguminous plants such as peas, beans, lentils, or clover.
These techniques had been independently discovered by farmers all over the world, thousands of years earlier. Peas and lentils were being grown alongside wheat and barley in the Near East almost from the dawn of agriculture. Beans and peas were rotated with wheat, millet, and rice in China. In India, lentils, peas, and chickpeas were rotated with wheat and rice; in the New World, beans were interleaved with maize. Sometimes the leguminous plants were simply plowed back into the soil. Farmers did not know why any of this worked, but they knew that it did. In the third century B.C., Theophrastus, the Greek philosopher and botanist, noted that “the bean best reinvigorates the ground” and that “the people of Macedonia and Thessaly turn over the ground when it is in flower.” Similarly, Cato the Elder, a Roman writer of the second century B.C., was aware of the beneficial effects of leguminous crops on soil fertility, and he advised that they should “be planted not so much for the immediate return as with a view to the year later.” Columella, a Roman writer of the first century A.D., advocated the use of peas, chickpeas, lentils, and other legumes in this way. And the “Chhi Min Yao Shu,” a Chinese work, recommended the cultivation and plowing-in of adzuki beans, in a passage that seems to date from the first century B.C. Farmers did not realize it at the time, but growing legumes is a far more efficient way to enrich the soil than the application of manure, which contains relatively little nitrogen (typically 1 to 2 percent by weight).
The unraveling of the role of nitrogen in plant nutrition coincided with the realization, in the mid-nineteenth century, of the imminent need to improve crop yields. Between 1850 and 1900 the population in western Europe and North America grew from around three hundred million to five hundred million, and to keep pace with this growth, food production was increased by placing more land under cultivation on America’s Great Plains, in Canada, on the Russian steppes, and in Argentina. This raised the output of wheat and maize, but there was a limit to how far the process could go. By the early twentieth century there was little remaining scope for placing more land under cultivation, so to increase the food supply it would be necessary to get more food per unit area—in other words, to increase yields. Given the link between plant growth and the availability of nitrogen, one obvious way to do this was to increase the supply of nitrogen. Producing more manure from animals would not work, because animals need food, which in turn requires land. Sowing leguminous plants to enrich the soil, meanwhile, means that the land cannot be used to grow anything else. So, starting as early as the 1840s, there was growing interest in new, external sources of nitrogen fertilizer.
Solidified bird excrement from tropical islands, known as guano, had been used as fertilizer on the west coast of South America for centuries. Analysis showed that it had a nitrogen content thirty times higher than that of manure. During the 1850s, imports of guano went from zero to two hundred thousand tons a year in Britain, and shipments to the United States averaged seventy-six thousand tons a year. The Guano Islands Act, passed in 1856, allowed American citizens to take possession of any uninhabited islands or rocks containing guano deposits, provided they were not within the jurisdiction of any other government. As guano mania took hold, entrepreneurs scoured the seas looking for new sources of this valuable new material. But by the early 1870s it was clear that the guano supply was being rapidly depleted. (“This material, though once a name to conjure with, has now not much more than an academic interest, owing to the rapid exhaustion of supplies,” observed the Encyclopaedia Britannica in 1911.) Instead, the focus shifted to another source of nitrogen: the huge deposits of sodium nitrate that had been discovered in Chile. Exports boomed, and in 1879 the War of the Pacific broke out between Chile, Peru, and Bolivia over the ownership of a contested nitrate-rich region in the Atacama Desert. (Chile prevailed in 1883, depriving Bolivia of its coastal province, so that it has been a landlocked country ever since.)
Even when the fighting was over, however, concerns remained over the long-term security of supply. One forecast, made in 1903, predicted that nitrate supplies would run out by 1938. It was wrong—there were in fact more than three hundred years of supply, given the consumption rate at the time—but many people believed it. And by this time sodium nitrate was in demand not only as a fertilizer, but also to make explosives, in which reactive nitrogen is a vital ingredient. Countries realized that their ability to wage war, as well as their ability to feed their populations, was becoming dependent on a reliable supply of reactive nitrogen. Most worried of all was Germany. It was the largest importer of Chilean nitrate at the beginning of the twentieth century, and its geography made it vulnerable to a naval blockade. So it was in Germany that the most intensive efforts were made to find new sources of reactive nitrogen.
One approach was to derive it from coal, which contains a small amount of nitrogen left over from the biomass from which it originally formed. Heating coal in the absence of oxygen causes the nitrogen to be released in the form of ammonia. But the amount involved is tiny, and efforts to increase it made little difference. Another approach was to simulate lightning and use high voltages to generate sparks that would turn nitrogen in the air into more reactive nitric oxide. This worked, but it was highly energy-intensive and was therefore dependent on the availability of cheap electricity (such as excess power from hydroelectric dams). So imported Chilean nitrate remained Germany’s main source of nitrogen. Britain was in a similarly difficult situation: like Germany, it was a big importer of nitrates and was doing its best to extract ammonia from coal. Despite efforts to increase agricultural production, both countries relied on imported wheat.
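The arc route, best known from the Birkeland-Eyde process operated in Norway, relied on chemistry that is simple to state but brutally expensive to run. The equation and figures below are textbook values, not data from the plants themselves:

\[
\mathrm{N_2(g) + O_2(g) \;\rightleftharpoons\; 2\,NO(g)}, \qquad \Delta H^{\circ} \approx +180\ \mathrm{kJ\,mol^{-1}}
\]

Because the reaction absorbs heat, appreciable amounts of nitric oxide form only at arc temperatures of around 3,000 degrees Centigrade; the gas was then oxidized to nitrogen dioxide and absorbed in water to yield nitric acid. The huge energy cost per ton of fixed nitrogen explains why the method was viable only alongside cheap hydroelectricity.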
In a speech at the annual conference of the British Association for the Advancement of Science in 1898, William Crookes, an English chemist and the president of the association, highlighted the obvious solution to the problem. A century after Thomas Malthus had made the same point, he warned that “civilised nations stand in deadly peril of not having enough to eat.” With no more land available, and with concern growing over Britain’s dependence on wheat imports, there was no alternative but to find a way to increase yields. “Wheat pre-eminently demands nitrogen,” Crookes observed. But there was no scope to increase the use of manure or leguminous plants; the supply of fertilizer from coal was inadequate; and by relying on Chilean nitrate, he observed, “we are drawing on the Earth’s capital, and our drafts will not perpetually be honoured.” But there was an abundance of nitrogen in the air, he pointed out—if only a way could be found to get at it. “The fixation of nitrogen is vital to the progress of civilised humanity,” he declared. “It is the chemist who must come to the rescue . . . it is through the laboratory that starvation may ultimately be turned into plenty.”
A Productive Dispute
In 1904 Fritz Haber, a thirty-six-year-old experimental chemist at the Technische Hochschule in Karlsruhe, was asked to carry out some research on behalf of a chemical company in Vienna. His task was to determine whether ammonia could be directly synthesized from its constituent elements, hydrogen and nitrogen. The results of previous experiments had been unclear, and many people thought direct synthesis was impossible. Haber himself was skeptical, and he replied that the standard way to make ammonia, from coal, was known to work and was the easiest approach. But he decided to go ahead with the research anyway. His initial experiments showed that nitrogen and hydrogen could indeed be coaxed into forming ammonia at high temperature (around 1,000 degrees Centigrade, or 1,832 degrees Fahrenheit) in the presence of an iron catalyst. But the proportion of the gases that combined was very small: between 0.005 percent and 0.0125 percent. So although Haber had resolved the question of whether direct synthesis was possible, he also seemed to have shown that the answer had no practical use.
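Haber’s discouraging early numbers reflect the underlying equilibrium, which can be stated in modern textbook form (the enthalpy figure below is a standard value, not one of Haber’s own measurements):

\[
\mathrm{N_2(g) + 3\,H_2(g) \;\rightleftharpoons\; 2\,NH_3(g)}, \qquad \Delta H^{\circ} \approx -92\ \mathrm{kJ\,mol^{-1}}
\]

The forward reaction releases heat, so at 1,000 degrees Centigrade the equilibrium lies far toward unreacted nitrogen and hydrogen. The high temperature was needed to make the sluggish reaction proceed at all over the catalyst, but it destroyed almost all of the ammonia being formed, which is why the measured yields were vanishingly small.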
Fritz Haber.
And there things might have rested, had it not been for Walther Hermann Nernst, another German chemist, who was professor of physical chemistry at Göttingen. Although he was only four years older than Haber, Nernst was a more eminent figure who had made contributions in a number of fields. He had invented a new kind of light bulb, based on a ceramic filament, and an electric piano with guitar-style pickups, though neither was a commercial success. Nernst was best known for having proposed a “heat theorem” (now known as the third law of thermodynamics) in 1906 that would win him the Nobel Prize in Chemistry in 1920. This theorem could be used to predict all sorts of results, including the proportion of ammonia that should have been produced by Haber’s experiment. The problem was that Nernst’s prediction was 0.0045 percent, which was below the range of possible values determined by Haber. This was the only anomalous result of any significance that disagreed with Nernst’s theory, so Nernst wrote to Haber to point out the discrepancy. Haber performed his original experiment again, obtaining a more precise answer: This time around the proportion of ammonia produced was 0.0048 percent. Most people would have regarded that as acceptably close to Nernst’s predicted figure, but for some reason Nernst did not. When Haber presented his new results at a conference in Hamburg in 1907, Nernst publicly disputed them, suggested that Haber’s experimental method was flawed, and called upon Haber to withdraw both his old and new results.
Haber was greatly distressed by this public rebuke from a more senior scientist, and he suffered from digestion and skin problems as a result. He decided that the only way to restore his reputation was to perform a new set of experiments to resolve the matter. But during the course of these experiments he and his assistant, Robert Le Rossignol, discovered that the ammonia yield could be dramatically increased by performing the reaction at a higher pressure, but a lower temperature, than they had used in their original experiment. Indeed, they calculated that increasing the pressure to 200 times atmospheric pressure, and dropping the temperature to 600 degrees Centigrade (1,112 degrees Fahrenheit), ought to produce an ammonia yield of 8 percent—which would be commercially useful. The dispute with Nernst seemed trivial by comparison and was swiftly forgotten, and Haber and Le Rossignol began building a new apparatus that would, they hoped, produce useful amounts of ammonia. At its center was a pressurized tube just 75 centimeters tall and 13 centimeters in diameter, surrounded by pumps, pressure gauges, and condensers. Haber refined his apparatus and then invited representatives of BASF, a chemical company that was by this time funding his work, to come and see it in operation.
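Why higher pressure and lower temperature transform the yield can be seen from Le Chatelier’s principle, sketched here in standard equilibrium notation rather than as a reconstruction of Haber’s own calculation. Four molecules of gas combine to form two, so the equilibrium constant

\[
K_p \;=\; \frac{p_{\mathrm{NH_3}}^{2}}{p_{\mathrm{N_2}}\,p_{\mathrm{H_2}}^{3}}
\]

contains two more powers of pressure in the denominator than in the numerator. Compressing the mixture therefore pushes the equilibrium toward ammonia, and because the reaction is exothermic, cooling it does too; the remaining difficulty, a slower reaction at lower temperature, is exactly what the iron catalyst was there to overcome.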
The crucial demonstration took place on July 2, 1909, in the presence of two employees from BASF, Alwin Mittasch and Julius Kranz. During the morning a mishap with one of the bolts of the high-pressure equipment delayed the proceedings for a few hours. But in the late afternoon the apparatus began operating at 200 atmospheres and about 500 degrees Centigrade, and it produced an ammonia yield of 10 percent. Mittasch pressed Haber’s hand in excitement as the colorless drops of liquid ammonia began to flow. By the end of the day the machine had produced 100 cubic centimeters of ammonia. A jubilant Haber wrote to BASF the next day: “Yesterday we began operating the large ammonia apparatus with gas circulation in the presence of Dr. Mittasch and were able to keep its production uninterrupted for about five hours. During this whole time it functioned correctly and it continuously produced liquid ammonia. Because of the lateness of the hour, and as we all were tired, we stopped the production because nothing new could be learned from continuing the experiment.”
Ammonia synthesis on a large scale suddenly seemed feasible. BASF gave the task of converting Haber’s benchtop apparatus into a large-scale, high-pressure industrial process to one of its senior chemists, Carl Bosch. He had to work out how to generate the two feedstock gases (hydrogen and nitrogen) in large quantities and at low cost; to find suitable catalysts; and, most difficult of all, to develop large steel vessels capable of withstanding the enormous pressures required by the reaction. The first two converters built by Bosch, which were around four times the size of Haber’s apparatus, failed when their high-pressure reaction tubes exploded after around eighty hours of operation, despite being encased in reinforced concrete. Bosch realized that the high-pressure hydrogen was weakening the steel tubes by depleting them of the carbon that gives steel its strength and resilience. After much trial and error he redesigned the inside of the tubes to prevent this problem. His team also developed new kinds of safety valves to cope with the high pressures and temperatures; devised clever heat-exchange systems to reduce the energy required by the synthesis process; and built a series of small converters to allow large numbers of different materials to be tested as possible catalysts. Bosch’s converters gradually got bigger during 1910 and 1911, though they were still producing only a few kilograms of ammonia per day. Only in February 1912 did output first exceed one ton in a single day.
Fritz Haber’s experimental apparatus.
By this time Haber and BASF were under attack from rivals who were contesting Haber’s patents on the ammonia-synthesis process. Chief among them was Walther Nernst, whose argument with Haber had prompted Haber to develop the new process in the first place. Some of Haber’s work had built on earlier experiments by Nernst, so BASF offered Nernst an “honorarium” of ten thousand marks a year for five years in recognition of this. In return, Nernst dropped his opposition to Haber’s patents, and all other claims against Haber were subsequently thrown out by the courts.
Meanwhile ever-larger converters, now capable of producing three to five metric tons a day, were entering service at BASF’s new site at Oppau. These combined Haber’s original methods with Bosch’s engineering innovations to produce ammonia—from nitrogen in the air, and hydrogen extracted from coal—using what is now known as the Haber-Bosch process. By 1914 the Oppau plant was capable of producing nearly 20 metric tons of ammonia a day, or 7,200 metric tons a year, which could then be processed into 36,000 metric tons of ammonium sulphate fertilizer. But the outbreak of the First World War in August 1914 meant that much of the ammonia produced by the plant was soon being used to make explosives, rather than fertilizer. (Germany’s supply of nitrate from Chile was cut off after a series of naval battles, in which the British prevailed.)
Carl Bosch.
The war highlighted the way in which chemicals could be used both to sustain life and to destroy it. Germany faced a choice between using its new source of synthetic ammonia to feed its people and using it to supply its army with ammunition. Some historians have suggested that without the Haber-Bosch process, Germany would have run out of nitrates by 1916, and the war would have ended much sooner. German production of ammonia was scaled up dramatically after 1914, but with much of the supply being used to make munitions, maintaining food production proved to be impossible. There were widespread food shortages, contributing to the collapse in morale that preceded Germany’s defeat in 1918. So the synthesis of ammonia prolonged the war, but Germany’s inability to produce enough for both munitions and fertilizer also helped to bring about the war’s end.
Haber himself strikingly embodies the conflict between the constructive and destructive uses of chemistry. During the war he turned his attention to the development of chemical weapons, while Bosch concentrated on scaling up the output of ammonia. Haber oversaw the first successful large-scale use of chemical weapons in April 1915, when Germany used chlorine gas against the French and Canadians at Ypres, causing some five thousand deaths. Haber argued that killing people with chemicals was no worse than killing them with any other weapon; he also believed that their use “would shorten the war.” But his wife, Clara Immerwahr, who was a chemist herself, violently disagreed, and she shot herself using her husband’s gun in May 1915. Scientists of many nationalities protested when Haber was awarded the 1918 Nobel Prize in Chemistry, in recognition of his pioneering work on the synthesis of ammonia and its potential application in agriculture. The Royal Swedish Academy of Sciences, which awarded the prize, commended Haber for having developed “an exceedingly important means of improving the standards of agriculture and the well-being of mankind.” This was a remarkably accurate prediction, given the impact that fertilizers made using Haber’s process were to have in subsequent decades. But the fact remains that the man who made possible a dramatic expansion of the food supply, and of the world population, is also remembered today as one of the fathers of chemical warfare.
When scientists in Britain and other countries had tried to replicate the Haber-Bosch process themselves during the war, they had been unable to do so because crucial technical details had been omitted from the relevant patents. These patents were confiscated after the war, and BASF’s plants were scrutinized by foreign engineers, leading to the construction of similar plants in Britain, France, and the United States. During the 1920s the process was refined so that it could use methane from natural gas, rather than coal, as the source of hydrogen. By the early 1930s the Haber-Bosch process had overtaken Chilean nitrates to become the dominant source of artificial fertilizer, and global consumption of fertilizer tripled between 1910 and 1938. Having relied on soil microbes, legumes, and manure for thousands of years, mankind had decisively taken control of the nitrogen cycle. The outbreak of the Second World War prompted the construction of even more ammonia plants to meet the demand for explosives, which meant that there was even more fertilizer-production capacity available after the war ended in 1945. The stage was set for a further dramatic increase in the use of artificial fertilizer. But if its potential to increase food production was to be exploited to the full, new seed varieties would also be needed.
The Rise of the Dwarfs
The availability of artificial fertilizer allowed farmers to supply much more nitrogen to their crops. For cereals such as wheat, maize, and rice, this produced larger, heavier seed heads, which in turn meant higher yields. But now that they were no longer constrained by the availability of nitrogen, farmers ran into a new problem. As the use of fertilizer increased the size and weight of the seed heads, plants became more likely to topple over (something farmers call “lodging”). Farmers had to strike a balance between applying plenty of fertilizer to boost yield, but not so much that the plants’ long stalks were unable to support the seed heads. The obvious solution was to switch to short-stalked, or “dwarf,” varieties. As well as being able to support heavier seed heads without lodging, dwarf varieties do not waste energy growing a long stalk, so more energy can be diverted to the seed head. They therefore boost yield in two ways: by allowing more fertilizer to be applied, and by turning applied nutrients more efficiently into useful grain, rather than useless stalk.
During the nineteenth century, dwarf varieties of wheat, probably descended from a Korean variety, had been developed in Japan. They greatly impressed Horace Capron, the United States’ commissioner of agriculture, who visited Japan in 1873. “No matter how much manure is used . . . on the richest soils and with the heaviest of yields, the wheat stalks never fall down and lodge,” he noted. In the early twentieth century these Japanese dwarf varieties were crossed with varieties from other countries. One of the resulting strains, Norin 10, was a cross between Japanese wheat and two American varieties. It was developed in Japan, at the Norin breeding station, and was transferred to the United States after the Second World War. Norin 10 had unusually short, strong stems (roughly two feet tall, rather than three feet), and responded well to heavy applications of nitrogen fertilizer. But it was susceptible to disease, so agronomists in different countries began to cross it with local varieties in order to combine Norin 10’s dwarf characteristics with the pest resistance of other varieties. This led to new, high-yielding varieties of wheat suitable for use in particular parts of the world. In industrialized countries where use of nitrogen fertilizer was growing quickly, the new varieties descended from Norin 10 made possible an impressive increase in yield. By this time new, high-yielding varieties of maize had also become widespread, so that during the 1950s the U.S. secretary of agriculture complained that the country was accumulating “burdensome surpluses” of grain that were expensive to store.
When it came to the developing world, one man did more than anyone else to promote the spread of the new dwarf varieties: Norman Borlaug, an American agronomist. He went to Mexico in 1944 at the behest of the Rockefeller Foundation, which had established an agricultural research station there to help to improve poor crop yields. The foundation had concluded that boosting yields was the most effective way to provide agricultural and economic assistance, and reduce Mexico’s dependence on grain imports. Borlaug was put in charge of wheat improvement, and his first task was to develop varieties that were resistant to a disease called stem rust, which was a particular problem in Mexico at the time: It reduced Mexico’s wheat harvest by half between 1939 and 1942. Borlaug created hundreds of crossbreeds of local varieties, looking for strains that demonstrated good resistance to stem rust and also provided strong yields. Within a few years he had produced new, resistant breeds with yields 20 to 40 percent higher than the traditional varieties in use in Mexico.
Mexico was an excellent place to carry out such research, Borlaug realized, because one wheat crop could be grown in the highlands in the summer, and another in the lowland desert in the winter. He developed a new system called “shuttle breeding,” in which he carried the most promising results from one end of the country to the other. This broke the traditional rule that plants should only be bred in the area in which they would subsequently be planted, but it sped up the breeding process, since Borlaug could produce two generations a year rather than one. His rule-bending also had another, unanticipated benefit: In order to thrive as both summer and winter crops, the resulting varieties could not afford to be fussy about the difference in the number of hours of daylight between the two seasons. This meant their offspring could subsequently be cultivated in a wide range of different climates.
Norman Borlaug.
In 1952 Borlaug heard about the work being done with Norin 10, and the following year he received some seeds from America. He began to cross his new Mexican varieties with Norin 10, and with a new variety that had been created by crossing Norin 10 with an American wheat called Brevor. Within a few years he had developed new wheat strains with insensitivity to day length and good disease resistance that could, with the use of nitrogen fertilizer, produce more than twice the yield of traditional Mexican varieties. Borlaug wanted to make further improvements, but curious farmers visiting his research station were taking samples of his new varieties and planting them, and they were spreading fast. So Borlaug released his new seeds in 1962. The following year, 95 percent of Mexico’s wheat was based on one of Borlaug’s new varieties, and the wheat harvest was six times larger than it had been nineteen years earlier when he had first arrived in the country. Instead of importing 200,000 to 300,000 tons of wheat a year, as it had done in the 1940s, Mexico exported 63,000 tons of wheat in 1963.
Following the success of his new high-yielding dwarf wheat varieties in Mexico, Borlaug suggested that they could also be used to improve yields in other developing countries. In particular, he suggested India and Pakistan, which were suffering from poor harvests and food shortages at the time and had become dependent on foreign food aid. Borlaug’s suggestion was controversial, because it would mean encouraging farmers to grow wheat rather than indigenous crops. Borlaug maintained, however, that since wheat produced higher yields and more calories, his new dwarf wheat varieties presented a better way for South Asian farmers to take advantage of the advent of cheap nitrogen fertilizer than trying to increase yields of indigenous crops. Monkombu Sambasivan Swaminathan, an Indian geneticist who was an adviser to the agriculture minister, invited Borlaug to visit India, and Borlaug arrived in March 1963 and began promoting the use of his Mexican wheat. Some small plots were planted, and they produced impressive results at the following year’s wheat harvest: With irrigation and the application of nitrogen fertilizer, the yields were around five times that of local Indian varieties, which typically produced around one ton per hectare. Swaminathan later recalled that “when small farmers, who with the help of scientists organised the National Demonstration Programme, harvested over five tons of wheat per hectare, its impact on the minds of other farmers was electric. The clamour for seeds began.”
Another impressive harvest in early 1965 prompted the Indian government to order 250 tons of seed from Mexico for further trials. But wider adoption of the new seeds was held back by political and bureaucratic objections. A turning point came when the monsoon, which normally occurs between June and September, failed in 1965. This caused grain yields to fall by nearly one fifth and made India even more dependent on foreign food aid. The government sent officials to Mexico to place an order for eighteen thousand tons of the new wheat seeds—enough to sow around 3 percent of India’s wheat-growing areas. As the ship carrying the seeds departed for Bombay, war broke out between India and Pakistan, diverting attention from the food crisis gripping the region. And by the time the seeds were being unloaded in September, it was apparent that the monsoon had failed for a second year.
The combination of political instability, population growth, and drought in South Asia gave rise to a new outbreak of Malthusianism in the late 1960s. Across the developing world, the population was growing twice as fast as the food supply. Pundits predicted imminent disaster. In their 1967 book Famine 1975!, William and Paul Paddock argued that some countries, including India, Egypt, and Haiti, would simply never be able to feed themselves and should be left to starve. That same year, one fifth of the United States’ wheat harvest was shipped to India as emergency food aid. “The battle to feed all of humanity is over,” declared Paul Ehrlich in his 1968 bestseller, The Population Bomb. He predicted that “in the 1970s and 1980s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.” He was particularly gloomy about India, declaring that it “couldn’t possibly feed two hundred million more people by 1980.”
As with Thomas Malthus’s predictions nearly two centuries earlier, the technologies that would disprove these gloomy predictions were already quietly spreading. Following the introduction of Borlaug’s high-yield varieties from Mexico, India’s wheat harvest increased from twelve million tons in 1965 to nearly seventeen million tons in 1968 and twenty million in 1970. The harvest in 1968 was so large that schools had to be closed in some areas so that they could be used for grain storage. India’s grain imports fell almost to zero by 1972, and the country even became an exporter for a while during the 1980s. Further improvements in yields followed in subsequent years as Indian agronomists crossed the Mexican varieties with local strains to improve disease resistance. India’s wheat harvest reached 73.5 million tons in 1999.
Norman Borlaug’s early success with high-yield dwarf varieties of wheat, meanwhile, had inspired researchers to do the same with rice. The International Rice Research Institute (IRRI), based in the Philippines and funded by the Rockefeller and Ford foundations, was established in 1960. Borlaug’s shuttle-breeding approach was adopted to speed up the development of new varieties. As with wheat, researchers took dwarf varieties, many of them developed in Japan, and crossed them with the local varieties planted in other countries. In 1966 researchers at the IRRI created a new variety, called IR8, by crossing a Chinese dwarf variety (itself derived from a Japanese strain) with an Indonesian strain called Peta. At the time, traditional strains of rice produced yields of around one ton per hectare. The new variety produced five tons without fertilizer, and ten tons when fertilizer was applied. It became known as “miracle rice” and was quickly adopted throughout Asia. IR8 was followed by further dwarf strains that were more disease resistant and matured faster, making it possible to grow two crops a year for the first time in many regions.
In a prescient speech in March 1968, William Gaud of the United States Agency for International Development had highlighted the impact that high-yield varieties of wheat were starting to have in Pakistan, India, and Turkey. “Record yields, harvests of unprecedented size and crops now in the ground demonstrate that throughout much of the developing world—and particularly in Asia—we are on the verge of an agricultural revolution,” he said. “It is not a violent red revolution like that of the Soviets, nor is it a white revolution like that of the Shah of Iran. I call it the green revolution. This new revolution can be as significant and as beneficial to mankind as the Industrial Revolution of a century and a half ago.” The term “green revolution” immediately gained widespread currency, and it has remained in use ever since.
The impact of the green revolution was already apparent by 1970, and in that year Norman Borlaug was awarded the Nobel Peace Prize. “More than any other single person of this age, he has helped to provide bread for a hungry world,” the Nobel Committee declared. He had “turned pessimism into optimism in the dramatic race between population explosion and our production of food.” In his acceptance speech, Borlaug pointed out that the increase in yields was due not simply to the development of dwarf varieties, but to the combination of the new varieties with nitrogen fertilizer. “If the high-yielding dwarf wheat and rice varieties are the catalysts that have ignited the green revolution, then chemical fertilizer is the fuel that has powered its forward thrust,” he said.
In the three decades after 1970, the new high-yield dwarf varieties of wheat and rice swiftly displaced traditional varieties across the developing world. By 2000, the new seed varieties accounted for 86 percent of the cultivated area of wheat in Asia, 90 percent in Latin America, and 66 percent in the Middle East and Africa. Similarly, the new varieties of rice accounted for 74 percent of the rice-producing area across Asia in 2000, and 100 percent in China, the world’s largest rice producer. As well as offering increased yields (provided appropriate fertilizers and irrigation were available), they also increased cereal production in other, indirect ways. Farmers switched to wheat and rice from other crops, and farmers who were already growing wheat and rice could, in some cases, grow more than one crop a year by switching to new varieties. All this meant that the food supply grew faster than the population: Asia’s population increased by 60 percent between 1970 and 1995, but cereal production in the region more than doubled over the same period. In the century since Haber’s demonstration in 1909, nitrogen fertilizer has supported around four billion additional people; by 2008 Haber-Bosch nitrogen was feeding 48 percent of the world’s population, more than three billion people, nearly half of humanity. They are the offspring of the green revolution.