The World Without Us - Alan Weisman (2007)

Part II

Chapter 11. The World Without Farms

1. The Woods


WHEN WE THINK civilization, we usually picture a city. Small wonder: we’ve gawked at buildings ever since we started raising towers and temples, like Jericho’s. As architecture soared skyward and marched outward, it was unlike anything the planet had ever known. Only beehives or ant mounds, on a far humbler scale, matched our urban density and complexity. Suddenly, we were no longer nomads cobbling ephemeral nests out of sticks and mud, like birds or beavers. We were building homes to last, which meant we were staying in one place. The word civilization itself derives from the Latin civis, meaning “town dweller.”

Yet it was the farm that begat the city. Our transcendental leap to sowing crops and herding critters—actually controlling other living things—was even more world-shaking than our consummate hunting skill. Instead of simply gathering plants or killing animals just prior to eating them, we now choreographed their existence, coaxing them to grow more reliably and far more abundantly.

Since a few farmers could feed many, and since intensified food production meant intensified people production, suddenly there were a lot of humans free to do things other than gather or grow meals. With the possible exception of Cro-Magnon cave artists, who may have been so esteemed for their talents that they were relieved of other duties, until agriculture arrived, food-finding was the only occupation for humans on this planet.

Agriculture let us settle down, and settlement led to urbanity. Yet, imposing as skylines are, farmlands have much more impact. Nearly 12 percent of the planet’s landmass is cultivated, compared to about 3 percent occupied by towns and cities. When grazing land is included, the amount of Earthly terrain dedicated to human food production is more than one-third of the world’s land surface.

If we suddenly stopped plowing, planting, fertilizing, fumigating, and harvesting; if we ceased fattening goats, sheep, cows, swine, poultry, rabbits, Andean guinea pigs, iguanas, and alligators, would those lands return to their former, pre-agro-pastoral state? Do we even know what that was?

For an idea of how the land on which we’ve toiled might or might not recover from us, we can begin in two Englands—one old, one New.

In any New England woods south of Maine’s boreal wilderness, within five minutes you see it. A forester’s or ecologist’s trained eye notices it just by spotting a stand of big white pine, which only grows in such uniform density in a former cleared field. Or they spot clusters of hardwoods—beech, maples, oaks—of similar age, which sprouted in the shade of a now-missing stand of white pines that were cut or blown away in a hurricane, leaving hardwood seedlings an open sky to fill with their canopies.

But even if you don’t know a birch from a beech, you can’t miss seeing it around knee-height, camouflaged by fallen leaves and lichens, or wrapped in green brambles. Someone has been here. The low stone walls that crisscross the forests of Maine, Vermont, New Hampshire, Massachusetts, Connecticut, and upstate New York reveal that humans once staked boundaries here. An 1871 fencing census, writes Connecticut geologist Robert Thorson, showed at least 240,000 miles of handmade stone walls east of the Hudson River—enough to reach to the moon.

As the last glaciers of the Pleistocene advanced, they ripped stones from granite outcroppings, then dropped them as the ice melted back. Some lay on the surface; some were ground into the subsoil, to be periodically heaved up by frost. All had to be cleared along with trees so that transplanted European farmers could start over in a New World. The stones and boulders they moved marked the borders of their fields and penned their animals.

So far from large markets, raising beef wasn’t practical, but for their own use New England’s farmers kept enough cattle, pigs, and dairy cows that most of their land was pasture and hay fields. The rest was in rye, barley, early wheat, oats, corn, or hops. The trees they downed and the stumps they yanked were of the mixed hardwood, pine, and spruce forests we identify with New England today—and we do, because they’re back.

Unlike almost anywhere else on Earth, New England’s temperate forest is increasing, and now far exceeds what it was when the United States was founded in 1776. Within 50 years of U.S. independence, the Erie Canal was dug across New York State and the Ohio Territory opened—an area whose shorter winters and loamier soils lured away struggling Yankee farmers. Thousands more didn’t bother to return to the soil after the Civil War, but headed instead into factories and mills powered by New England’s rivers—or headed west. As the forests of the Midwest began to come down, the forests of New England began coming back.

The unmortared stone walls built by three centuries of farmers flex as soil swells and shrinks with the seasons. They should be part of the landscape for a few more centuries, until the leaf litter turns to more soil and buries them. But how similar are the forests growing around them to what was here before the Europeans arrived, or the Indians before them? And untouched, what would they become?

In his 1983 book Changes in the Land, historian William Cronon challenged historians who wrote of Europeans encountering an unsullied forest primeval when they first arrived in the New World—a forest supposedly so unbroken that a squirrel might leap treetops from Cape Cod to the Mississippi without ever having to touch ground. Indigenous Americans had been described as primitives who inhabited and fed off the forest, with little more impact on it than the squirrels themselves. To accommodate the Pilgrims’ account of Thanksgiving, it was accepted that American Indians practiced limited, unobtrusive agriculture involving corn, beans, and squash.

We now know that many of the allegedly pristine landscapes of North and South America were actually artifacts, the result of enormous changes wrought by humans that started with the slaughter of megafauna. The first permanent Americans burned underbrush at least twice a year to make hunting easier. Most fires they set were low-intensity, meant to clear brambles and vermin, but they also selectively torched entire stands of trees to shape the forest into traps and funnels to corner wildlife.

The coast-to-Mississippi treetop traverse would have been possible only for birds. Not even flying squirrels could have managed it, because it took wings to cross large swathes where forest had been thinned to parkland or razed completely. By observing what grew after lightning opened clearings, paleo-Indians learned to create berry patches and herb-filled meadows to attract deer, quail, and turkeys. Finally, fire allowed them to do exactly what the Europeans and their descendants later came to do on such a grand scale: They farmed.

Yet there was one exception: New England, one of the first places where colonists arrived to stay, which may partly explain the familiar misconception of an entire virgin continent.

“There’s now an understanding,” says Harvard ecologist David Foster, “that precolonial eastern America had an agriculturally based, maize-dependent large population with permanent villages and cleared fields. True. But that’s not what we had up here.”

It is a delicious September morning in deeply wooded central Massachusetts, just below the New Hampshire border. Foster has paused in a stand of tall white pines, which just a century earlier was a tilled wheat field. In their shady understory, little hardwoods are sprouting—maddening, he says, to timbermen who came after New England farmers had departed for points southwest and who thought they had a ready-made pine plantation.

“They spent decades of frustration trying to get white pine to succeed itself. They didn’t get that when you cut down the forest, you expose a new forest that rooted in its shade. They never read Thoreau.”

This is the Harvard Forest outside the hamlet of Petersham, established as a timber research station in 1907 but now a laboratory for studying what happens to land after humans no longer use it. David Foster, its director, has managed to spend much of his career in nature, not classrooms: at 50, he looks 10 years younger—fit and lean, the hair falling across his forehead still dark. He bounds over a brook that was widened for irrigation by one of the four generations of the family who farmed here. The ash trees along its banks are pioneers of the reborn forest. Like white pine, they don’t regenerate well in their own shade, so in another century the small sugar maples beneath them will replace them. But this is already a forest by any definition: exhilarating smells, mushrooms popping through leaf litter, drops of green-gold sunlight, woodpeckers thrumming.

Even in the most industrialized part of a former farm, a forest resurges quickly here. A mossy millstone near a tumble of rocks that was once a chimney reveals where a farmer once ground hemlock and chestnut bark for tanning cowhides. The mill pond is now filled with dark sediment. Scattered firebricks, bits of metal and glass, are all that remain of the farmhouse. Its exposed cellar hole is a cushion of ferns. The stone walls that once separated open fields now thread between 100-foot conifers.

Over two centuries, European farmers and their descendants laid bare three-quarters of New England’s forests, including this one. Three centuries more, and tree trunks may again be as wide as the monsters that early New Englanders turned into ship beams and churches—oaks 10 feet across, sycamores twice as thick, and 250-foot white pines. The early colonists found untouched, huge trees in New England, says Foster, because, unlike other parts of precolonial North America, this cold corner of the continent was sparsely populated.

“Humans were here. But the evidence shows low-density subsistence hunting and gathering. This isn’t a landscape prone to burning. In all New England, there were maybe 25,000 people, not permanently in any one area. The postholes for structures are just two to four inches across. These hunter-gatherers could tear down and move a village overnight.”

Unlike the center of the continent, says Foster, where large sedentary Native American communities filled the lower Mississippi Valley, New England didn’t have corn until AD 1100. “The total accumulation of maize from New England archeological sites wouldn’t fill up a coffee cup.” Most settlements were in river valleys, where agriculture finally began, and on the coast, where maritime hunter-gatherers were sustained by immense stocks of herring, shad, clams, crabs, lobsters, and cod thick enough to catch by hand. Inland camps were mainly retreats from harsh coastal winters.

“The rest,” says Foster, “was forest.” It was a human-free wilderness, until Europeans named this land after their own ancestral home and proceeded to clear it. The timberlands the Pilgrims found were the ones that emerged in the aftermath of the last glaciers.

“Now we’re getting that vegetation back. All the major tree species are returning.”

So are animals. Some, like moose, have arrived on their own. Others, like beavers, were reintroduced and have taken off. In a world without humans to stop them, New England could return to what North America once looked like from Canada to northern Mexico: beaver dams spaced regularly on every stream, creating wetlands strung like fat pearls along their length, filled with ducks, muskrats, willets, and salamanders. One new addition to the ecosystem would be the coyote, currently trying to fill the empty wolf niche—though a new subspecies may be on the rise.

“The ones we see are substantially larger than western coyotes. Their skulls and jaws are bigger,” says Foster, his long hands describing an impressive canine cranium. “They take larger prey than coyotes in the West, like deer. This probably isn’t sudden adaptation. There’s genetic evidence that western coyotes are migrating through Minnesota and up across Canada, interbreeding with wolves, then roaming here.”

It’s fortunate, he adds, that New England’s farmers left before nonnative plants flooded America. Before exotic trees could spread across the land, native vegetation again had a roothold on their former farmlands. No chemicals had been spaded into their soils; no weeds, insects, or fungi here had ever been poisoned to help other things grow. It’s the nearest thing to a baseline of how nature might reclaim cultivated land—against which to measure, for example, old England.

2. The Farm

Like most British trunk roads, the M1 motorway that runs north from London follows a route first laid by the Romans. In Hertfordshire, a jog at Hempstead leads to St. Albans, once a substantial Roman town, and beyond that, to the village of Harpenden. From Roman times until the 20th century, when they became bedroom communities for commuters to London, 30 miles away, St. Albans was a center for rural commerce, and Harpenden was flat farmland, the conformity of its grain fields disrupted only by hedgerows.

Long before the Romans appeared in the first century AD, the dense forests of the British Isles began coming down. Humans first arrived 700,000 years ago, likely following herds of aurochs, the now-extinct wild Eurasian cattle, during glacial epochs when the English Channel was a land bridge, but their settlements were fleeting. According to the great British forest botanist Oliver Rackham, after the last ice age, southeastern England was dominated by vast stands of lindens mixed with oaks, and by abundant hazels that probably reflect the appetites of Stone Age gatherers.

The landscape changed around 4,500 BC, because whoever crossed the water that by then separated England from the Continent brought crops and domestic animals. These immigrants, Rackham laments, “set about converting Britain and Ireland to an imitation of the dry open steppes of the Near East, in which agriculture had begun.”

Today, less than 1/100 of Britain is original forest, and essentially none of Ireland. Most woodlands are clearly defined tracts, bearing evidence of centuries of careful human extraction by coppicing, which allowed stumps to regenerate for building supplies and fuel. They remained that way after Roman rule gave way to Saxon peasantry and serfdom, and into the Middle Ages.

At Harpenden, near a low stone circle and adjacent stem wall that are the remains of a Roman shrine, an estate was founded in the early 13th century. Rothamsted Manor, built of bricks and timbers and surrounded by a moat and 300 acres, changed hands five times over as many centuries, accruing more rooms until an eight-year-old boy named John Bennet Lawes inherited it in 1814.

Lawes went to Eton and then to Oxford, where he studied geology and chemistry, grew luxuriant muttonchops, but never took a degree. Instead, he returned to Rothamsted to make something of the estate his late father had left to seed. What he did with it ended up changing the course of agriculture and much of the surface of the Earth. How long those changes will persist, even after we’re gone, is much debated by agro-industrialists and environmentalists. But with remarkable foresight, John Bennet Lawes himself has kindly left us many clues.

His story began with bones—although first, some would say, came chalk. Centuries of Hertfordshire farmers had dug the chalky remains of ancient sea creatures that underlie local clays to spread on their furrows, because it helped their turnips and grains. From Oxford lectures, Lawes knew that liming their fields didn’t nourish plants so much as soften the soil’s acidic resistance. But might anything actually feed crops?

A German chemist, Justus von Liebig, had recently noted that powdered bonemeal restored vigor to soil. Soaking it first in dilute sulfuric acid, he wrote, made it even more digestible. Lawes tried it on a turnip field. He was impressed.

Justus von Liebig is remembered as the father of the fertilizer industry, but he probably would have traded that honor for John Bennet Lawes’s enormous success. It hadn’t occurred to von Liebig to patent his process. After realizing what a bother it was for busy farmers to buy bones, boil them, grind them, then transport sulfuric acid from London gasworks to treat the crushed granules, and then mill the hardened result yet again, Lawes did so. Patent in hand, he built the world’s first artificial fertilizer factory at Rothamsted in 1841. Soon he was selling “superphosphate” to all his neighbors.

His manure works—possibly at the insistence of his widowed mother, who still lived in the big brick manor—soon moved to larger quarters near Greenwich on the Thames. As the use of chemical soil additives spread, Lawes’s factories multiplied, and his product line lengthened. It included not just pulverized bone and mineral phosphates, but two nitrogen fertilizers: sodium nitrate and ammonium sulfate (both later replaced by the ammonium nitrate commonly used today). Once again, the hapless von Liebig had identified nitrogen as a key component of amino and nucleic acids vital to plants, yet failed to exploit his discovery. While von Liebig published his findings, Lawes was patenting nitrate mixtures.

To learn which were most effective, in 1843 Lawes began a series of test plots still going today, which makes Rothamsted Research both the world’s oldest agricultural station and the site of the world’s longest-running field experiments. Lawes and John Henry Gilbert, the chemist who became his partner of 60 years, earning the equal loathing of Justus von Liebig, began by planting two fields: one in white turnips, the other in wheat. They divided these into 24 strips, and applied a different treatment to each.

The combinations involved a lot, a little, or no nitrogen fertilizer; raw bonemeal, his patented superphosphates, or no phosphates at all; minerals such as potash, magnesium, potassium, sulfur, sodium; and both raw and cooked farmyard manure. Some strips were dressed with local chalk, some weren’t. In subsequent years, some plots were rotated with barley, beans, oats, red clover, and potatoes. Some strips were periodically fallowed, some continually planted with the same crop. Some served as controls, with nothing added to them whatsoever.

By the 1850s, it was obvious that when both nitrogen and phosphate were applied, yields increased, and that trace minerals helped some crops and slowed others. With his partner, Gilbert, assiduously taking samples and recording results, Lawes was willing to test any theory—scientific, homespun, or wild—of what might help plants grow. According to his biographer, George Vaughn Dyke, these included trying superphosphate made from ivory dust, and slathering crops with honey. One experiment still running today involved no crops at all, only grass. An ancient sheep pasture just below Rothamsted Manor was divided into strips and treated with various inorganic nitrogen compounds and minerals. Later Lawes and Gilbert added fish meal and farm manure from animals fed different diets. In the 20th century, with increasing acid rain, the strips were further divided, with half receiving chalk to test growth under various pH levels.

From this pasture experiment, they noticed that although inorganic nitrogen fertilizer makes hay grow waist-high, biodiversity suffers. While 50 species of grass, weeds, legumes, and herbs might grow on unfertilized strips, adjacent plots dosed with nitrogen hold just two or three species. Since farmers don’t want other seeds competing with the ones they’ve planted, they have no problem with this, but nature might.

Paradoxically, so did Lawes. By the 1870s, now wealthy, he sold his fertilizer businesses but continued his fascinating experiments. Among his concerns was how land could grow exhausted. His biographer quotes him as declaring that any farmer who thought he could “grow as fine crops by the aid of a few pounds of some chemical substances as by the same number of tons of farm-yard dung” was deluded. Lawes advised anyone planting vegetables and garden greens that, if it were him, he would “select a locality where I could obtain a large supply of yard manure at a cheap rate.”

But in a rural landscape rushing to meet the dietary demands of a rapidly growing urban industrial society, farmers no longer had the luxury of raising enough dairy cows and pigs to produce the requisite tons of organic manure. Throughout densely populated late-19th-century Europe, farmers desperately sought food for their grain and vegetables. South Pacific islands were stripped of centuries of accumulated guano; stables were scoured for droppings; and even what was delicately called “night soil” was spread on fields. According to von Liebig, both horse and human bones from the Battle of Waterloo were ground and applied to crops.

As pressures on farmlands escalated in the 20th century, test plots at Rothamsted Research were added for herbicides, pesticides, and municipal sewage sludge. The winding road to the old manor house is now lined with large laboratories for chemical ecology, insect molecular biology, and pesticide chemistry, owned by the agricultural trust that Lawes and Gilbert founded after both were knighted by Queen Victoria. Rothamsted Manor has become a dormitory for visiting researchers from around the world. Yet tucked behind all the gleaming facilities, in a 300-year-old barn with dusty windowpanes, is Rothamsted’s most remarkable legacy.

It is an archive containing more than 160 years of human efforts to harness plants. The specimens, sealed in thousands of five-liter bottles, are of virtually everything. From each experimental strip, Gilbert and Lawes took samples of harvested grains, their stalks and leaves, and the soil where they grew. They saved each year’s fertilizers, including manure. Later, their successors even bottled the municipal sewage sludge spread on Rothamsted test plots.

The bottles, stacked chronologically on 16-foot metal shelves, date back to the first wheat field in 1843. After mold developed in early samples, bottles from 1865 on were stoppered with corks, then paraffin, and finally lead. During war years, when bottle supplies grew scarce, samples were sealed in tins that once held coffee, powdered milk, or syrup.

Thousands of researchers have mounted ladders to peruse the calligraphy on time-yellowed bottle labels—to extract, say, soil collected in Rothamsted’s Geescroft Field at a depth of nine inches in April 1871. Yet many bottles have never been opened: along with organic matter, they preserve the very air of their era. Were we to go suddenly, assuming no unprecedented seismic event dashes thousands of glass vessels to the floor, it’s fair to surmise that this singular heritage would survive intact long beyond us. Within a century, of course, the durable slate-shingled roof would begin to yield to rain and vermin, and the smartest mice might learn that certain jars, when pushed to the concrete and shattered, contain still-edible food.

Supposing, however, that before such entropic vandalism occurs, the collection is discovered by visiting alien scientists who happen upon our now-quiet planet, bereft of voracious, but colorful, human life. Suppose they find the Rothamsted archive, its repository of more than 300,000 specimens still sealed in thick glass and tins. Clever enough to find their way to Earth, they would doubtless soon figure out that the graceful loops and symbols penned on the labels were a numbering system. Recognizing soil and preserved plant matter, they might realize that they had the equivalent of a time-lapse record of the final century-and-a-half of human history.

Rothamsted Research Archive.


If they began in the oldest jars, they would find relatively neutral soils that didn’t stay that way for long as British industry redoubled. They would find the pH dropping farther into the acid end by the early 20th century, as the advent of electricity led to coal-fired power stations, which spread pollution beyond factory cities to the countryside. There would also be steadily increasing nitrogen and sulfur dioxide until the early 1980s, when improved smokestacks cut sulfur emissions so dramatically that the aliens might be puzzled to find samples spiked with powdered sulfur, which farmers had to start adding as fertilizer.

They might not recognize something that first appeared in Rothamsted’s grassland plots in the early 1950s: traces of plutonium, an element that barely occurs in nature, let alone in Hertfordshire. Like grape vintages embodying annual weather, the fallout from tests in the Nevada desert, and later in Russia, marked Rothamsted’s distant soils with its radioactive signature.

Uncorking the late 20th century, they would find that the bottles held other novel substances never before known on Earth (and, if they were lucky, not on their planet, either), such as polychlorinated biphenyls—PCBs—from the manufacture of plastics. To naked human eyes, the samples appear as innocent as comparable handfuls of dirt in specimen bottles from 100 years earlier. Alien vision, however, might discern menaces we only see with devices like gas chromatographs and laser spectrometers.

If so, they might glimpse the sharp fluorescent signature of polyaromatic hydrocarbons (PAHs). They might be astonished at how PAHs and dioxins, two substances emitted naturally by volcanoes and forest fires, suddenly leaped from background levels into center-stage chemical prominence in soil and crops as the decades advanced.

If they were carbon-based life-forms like us, they might leap themselves, or at least back away, because both PAHs and dioxins can be lethal to nervous systems and other organs. PAHs were buoyed into the 20th century aboard clouds of exhaust from automobiles and coal-fired power plants; they’re also in the pungent odor of fresh asphalt. At Rothamsted, as at farms everywhere, they were introduced deliberately, in herbicides and pesticides.

Dioxins, however, were unintended: they’re by-products formed when hydrocarbons combine with chlorine, with tenacious, disastrous results. Besides their role as sex-changing endocrine disruptors, their most infamous application before being banned was in Agent Orange, a defoliant that laid bare entire Vietnamese rain forests so that insurgents would have nowhere to hide. From 1964 to 1971, the United States doused Vietnam with 12 million gallons of Agent Orange. Four decades later, heavily dosed forests still haven’t grown back. In their place is a grass species, cogon, called one of the world’s worst weeds. Burned off constantly, it keeps springing back, overwhelming attempts to supplant it with bamboo, pineapple, bananas, or teak.

Dioxins concentrate in sediments, and thus show up in Rothamsted’s sewage sludge samples. (Municipal sludge, since 1990 deemed too toxic to dump into the North Sea, is instead spread as fertilizer on European farmlands—except in Holland. Since the 1990s, the Netherlands has not only offered incentives that practically equate organic farming with patriotism, but has also struggled to convince its EU partners that everything applied to the land ends up in the sea anyway.)

Will the future visitors who discover Rothamsted’s extraordinary archive wonder if we were trying to kill ourselves? They might find hope in the fact that, beginning in the 1970s, lead deposition in soil waned significantly. But at the same time, the presence of other metals was increasing. Especially in preserved sludge, they would find all the nasty heavies: lead, cadmium, copper, mercury, nickel, cobalt, vanadium, and arsenic, and also lighter ones like zinc and aluminum.

3. The Chemistry

Dr. Steven McGrath hunches over his corner computer, deep-set eyes beneath his gleaming pate crinkling through rectangular reading lenses at a map of Britain and a chart color-coded with things that on an ideal planet—or one that gets the chance to start over—wouldn’t show up in plants that animals like to eat. He points at something yellow.

“This, for instance, is the net accumulation of zinc since 1843. No one else can see these trends because our samples,” he adds, his shirtfront slightly inflating, “are the longest test archive in the world.”

From sealed samples of a winter-wheat field called Broadbalk, one of Rothamsted’s oldest, they know that the original 35 parts per million of zinc present in the soil have nearly doubled. “That’s coming from the atmosphere, because our control plots have nothing added—no fertilizers, no manure or sludge. Yet the concentration is up 25 ppm.”

The test farm plots, however, which also originally had 35 ppm of zinc, now are at 91 ppm. To the 25 ppm from airborne industrial fallout, something is adding another 31 ppm.

“Farmyard manure. Cows and sheep get zinc and copper in their animal feed to keep them healthy. Over 160 years, it’s nearly doubled the zinc in the soil.”

If humans disappeared, so would zinc-laced smoke from factories, and no one would be feeding mineral supplements to livestock. Yet McGrath expects that, even in a world without people, metals we put into the ground will be around a long time. How long before rain leaches them out, returning soils to a preindustrial state, would depend, McGrath says, on their composition.

“Clays will hang onto them up to seven times as long as sandy soils, because they don’t drain as freely.” Peat, also poorly drained, can retain lead, sulfur, and organochlorine pollutants like dioxin even longer than clay. McGrath’s maps show hot clusters on peat-covered hilltops on the English and Scottish moors.

Even sandy soils can bind nasty heavy metals when municipal sludge is mixed into them. In sludged earth, leaching of metals drops as chemical bonds form; extraction is mainly via roots. Using archived samples of Rothamsted carrots, beets, potatoes, leeks, and various grains treated since 1942 with West Middlesex municipal sludge, McGrath has calculated how long metals we’ve added to such soil will stay there—assuming crops are still being harvested.

From a file drawer, he produces a table that gives the bad news. “With no leaching, I figure zinc lasts 3,700 years.”

That’s how long it took humans to get from the Bronze Age to today. Compared to the time other metallic pollutants would linger, that turns out to be short. Cadmium, he says, an impurity in artificial fertilizer, will cling twice as long: 7,500 years, or the same amount of time that’s passed since humans began irrigating Mesopotamia and the Nile Valley.

It gets worse. “Heavier metals like lead and chromium tend not to be taken up as easily by crops, and not to be leached. They simply bind.” Lead, the one with which we’ve most recklessly laced our topsoil, will take nearly 10 times as long as zinc to disappear—the next 35,000 years. Thirty-five thousand years ago was a couple of ice ages back.

For unclear chemical reasons, chromium is the most stubborn of all: McGrath estimates 70,000 years. Toxic to mucous membranes or if swallowed, chromium leaks into our lives mainly from tanning industries. Smaller amounts are chipped from aging chrome-plated sink taps, brake linings, and catalytic converters. But compared to lead, chromium is a minor concern.

Humans discovered lead early, but only recently realized how it afflicts nervous systems, learning development, hearing, and general brain function. It also causes kidney disease and cancer. In Britain, Romans smelted lead from mountain-ore veins to make pipes and chalices—poisonous choices suspected to have left many people dead or demented. The use of lead plumbing continued through the Industrial Revolution—Rothamsted Manor’s historic storm drains bearing ornate family crests are still lead.

But old plumbing and smelting add just a few percentage points of lead to our ecosystem. Will our visitors who arrive sometime in the next 35,000 years deduce that vehicle fuel, industrial exhaust, and coal-fired power plants spewed the lead they detect everywhere? Since no one will harvest whatever grows in metal-saturated fields after we’re gone, McGrath guesses that plants will keep taking it up, then putting it back as they die and decay, in a continuous loop.

Through genetic tinkering, both tobacco and a flower called mouse-ear cress have been modified to suck up and exhale one of the most dreaded heavy-metallic toxins of all, mercury. Unfortunately, plants don’t redeposit metals deep in the Earth where we originally dug them. Breathe away mercury, and it rains down elsewhere. There’s an analogy, Steve McGrath says, to what happens with PCBs—the polychlorinated biphenyls once used in plastics, pesticides, solvents, photocopying paper, and hydraulic fluids. Invented in 1930, they were outlawed in 1977 because they disrupt immune systems, motor skills, and memory, and play roulette with gender.

Initially, banning PCBs seemed to have worked: Rothamsted’s archive clearly shows their presence in soils dropping through the 1980s and 1990s until, by the new millennium, they practically reached preindustrial levels. Unfortunately, it turns out that they merely wafted away from the temperate regions where they were used, then sank like chemical stones when they hit cold air masses in the Arctic and Antarctic.

The result is elevated PCBs in the breast milk of Inuit and Laplander mothers, and in the fat tissues of seals and fish. Along with other pole-bound POPs—“persistent organic pollutants”—such as polybrominated diphenyl flame retardants, or PBDEs, PCBs are the suspected culprits for growing numbers of hermaphroditic polar bears. Neither PCBs nor PBDEs existed until humans conjured them up. They consist of hydrocarbons wedded to highly reactive elements known as halogens, like chlorine or bromine.

The acronym POPs sounds regrettably light-hearted, because these substances are all business, designed to be extremely stable. PCBs were the fluids that kept on lubricating; PBDEs the insulator that kept plastic from melting; DDT the pesticide that kept on killing. As such, they are difficult to destroy; some, like PCBs, show little or no sign of biodegrading.

As the flora of the future keep recycling our metals and POPs for the next several thousand years, some will prove tolerant; some will adapt to a metallic flavor in their soil, as the foliage growing around Yellowstone geysers has done (albeit over a few million years). Others, however—like some of us humans—will die from lead or selenium or mercury poisoning. Some of those that succumb will be weak members of a species that will then grow stronger as it selects for yet a new trait, such as mercury or DDT tolerance. And some species will be selected out entirely, and go extinct.

After we’re gone, the lasting effects of all the fertilizers we’ve spread on furrows since John Lawes began hawking them will vary. Some soils, their pH depressed from years of nitrates diluting to nitric acid, may recover in decades. Others, such as those in which naturally occurring aluminum concentrates to toxic proportions, won’t grow anything until leaf litter and microbes make soil all over again.

The worst impact of phosphates and nitrates, however, isn’t in fields, but where they drain. Even more than a thousand miles downstream, lakes and river deltas suffocate beneath over-fertilized aquatic weeds. Mere pond scum morphs into algae blooms weighing tons, which suck so much oxygen from freshwater that everything swimming in it dies. When the algae collapse, their decay escalates the process. Crystalline lagoons turn to sulfurous mudholes; estuaries of eutrophic rivers balloon into gigantic dead zones. The one spreading into the Gulf of Mexico at the mouth of the Mississippi, charged with fertilizer-soaked sediments all the way from Minnesota, is now bigger than New Jersey.

In a world without humans, a screeching halt to all artificial farmland fertilization would take instant, enormous chemical pressure off the richest biotic zones on Earth—the areas where big rivers bearing huge natural nutrient loads meet the seas. Within a single growing season, lifeless plumes from the Mississippi to the Sacramento Delta, to the Mekong, Yangtze, Orinoco, and the Nile, would begin to shrink. Repeated flushings of a chemical toilet will steadily clarify the waters. A Mississippi Delta fisherman who awakened from the dead after only a decade would be amazed at what he’d find.

4. The Genes

Since the mid-1990s, humans have taken an unprecedented step in Earthly annals by introducing not just exotic flora or fauna from one ecosystem into another, but actually inserting exotic genes into the operating systems of individual plants and animals, where they’re intended to do exactly the same thing: copy themselves, over and over.

Initially, GMOs—genetically modified organisms—were conceived to make crops produce their own insecticides or vaccines, or to make them invulnerable to chemicals designed to kill weeds competing for their furrows, or to make them—and animals as well—more marketable. Such product improvement has extended the shelf life of tomatoes; spliced DNA from Arctic Ocean fish into farmed salmon so that they churn out growth hormones year-round; induced cows to give more milk; beautified the grain in commercial pine; and imbued zebra fish with jellyfish fluorescence to spawn glow-in-the-dark aquarium pets.

Growing more ambitious, we’ve coaxed plants that we feed to animals to also deliver antibiotics. Soybeans, wheat, rice, safflower, canola rapeseed, alfalfa, and sugarcane are being genetically hot-rodded to produce everything from blood thinners to cancer drugs to plastics. We’ve even bio-enhanced health food to produce supplements like beta carotene or ginkgo biloba. We can grow wheat that tolerates salt and timber that resists drought, and we can make various crops either more or less fertile, depending on which is desired.

Appalled critics include the U.S.-based Union of Concerned Scientists, and approximately half of Western Europe’s provinces and counties, including much of the United Kingdom. Among their fears is what we might do to the future, should some new life-form proliferate like kudzu. Crops such as Monsanto’s suite of “Roundup Ready” corn, soy, and canola—molecularly armored to shrug off that company’s flagship herbicide while everything else nearby dies—are doubly dangerous, they insist.

For one, they say, sustained use of Roundup—a trade name for glyphosate—on weeds has simply selected for Roundup-resistant strains of weeds, which then drive farmers to use additional herbicides. Second, many crops broadcast pollen to propagate. Studies in Mexico that show bio-tinkered corn invading neighboring fields and cross-pollinating natural strains have provoked denials and pressure on university researchers by the food industry, which underwrites much of the funding for expensive genetic studies.

Modified genes from commercially bred bentgrass, a turf used on golf courses, have been confirmed in native Oregon grasses, miles from the source. Assurances from the aquaculture industry that genetically supercharged salmon won’t breed with wild North American stock, because they’re raised in cages, are belied by thriving salmon populations in estuaries in Chile—a country that had no salmon until breeders were imported from Norway.

Not even supercomputers can predict how man-made genes already loosed upon the Earth will react in a near infinity of possible eco-niches. Some will be roundly trounced by competition toughened over eons by evolution. It’s a fair bet, though, that others will pounce on an opportunity to adapt, and evolve themselves.

5. Beyond the Farm

Rothamsted research scientist Paul Poulton stands in November drizzle, knee-deep in holly, surrounded by what will be around after human cultivation ceases. Born just a few miles up the road, lanky Paul Poulton is as rooted to this land as any crop. He started working here right out of school, and now his hair has whitened. For more than 30 years, he’s tended experiments that began before he was born. He’d like to think they will continue long after he himself turns to bone dust and compost. But one day, he knows, the wild green profusion beneath his muddy irrigation boots will be the only Rothamsted experiment that still matters.

It is also the only one that has required no management. In 1882, it occurred to Lawes and Gilbert to fence off a half-acre of Broadbalk—the winter-wheat field that had variously received inorganic phosphate, nitrogen, potassium, magnesium, and sodium—and leave the grain unharvested, just to see what would happen. The following year, a new crop of self-seeded wheat appeared. The year after that, the same thing occurred, though by now invading hogweed and creeping woundwort were competing for the soil.

By 1886, only three stunted, barely recognizable wheat stalks germinated. A serious incursion of bentgrass had also appeared, as well as a scattering of yellow wildflowers, including orchid-like meadow peas. The next year, wheat—that robust Middle Eastern cereal grown here even before the Romans arrived—had been entirely vanquished by these returning natives.

Around that time, Lawes and Gilbert abandoned Geescroft, a parcel about half a mile away, consisting of slightly more than three acres. From the 1840s to the 1870s, it had been planted in beans, but after 30 years, it was clear that even with chemical boosts, growing beans continuously without rotation was a failure. For a few seasons, Geescroft was seeded in red clover. Then, like Broadbalk, it was fenced off to fend for itself.

For at least two centuries before Rothamsted’s experiments began, Broadbalk had received dressings of local chalk, but low-lying Geescroft, hard to cultivate without digging drainage, apparently hadn’t. In the decades following abandonment, Geescroft turned increasingly acidic. At Broadbalk, which was buffered by years of heavy liming, pH had barely lowered. Complex plants like chickweed and stinging nettle were showing up there, and within 10 years filbert, hawthorn, ash, and oak seedlings were establishing themselves.

Geescroft, however, remained mainly a prairie of cocksfoot, red and meadow fescue, bentgrass, and tufted hair grass. Thirty years would pass before woody species began shading its open spaces. Broadbalk, meanwhile, grew tall and dense. By 1915 it added 10 more tree types, including field maple and elm, plus thickets of blackberry and a dark green carpet of English ivy.

As the 20th century progressed, the two parcels continued their separate metamorphoses from farmland to woodland, the differences between them amplifying as they matured, echoing their distinct agricultural histories. They became known as the Broadbalk and Geescroft Wildernesses—a seemingly pretentious term for land totaling less than four acres, yet perhaps fitting in a country with less than 1 percent of its original forests remaining.

In 1938, willows sprouted around Broadbalk, but later they were replaced by gooseberry and English yew. “Here in Geescroft,” says Paul Poulton, unsnagging his rain parka from a bush bright with berries, “there was none of this. Suddenly, 40 years ago, holly started coming in. Now we’re overgrown. No idea why.”

Some of the holly bushes are the size of trees. Unlike Broadbalk, where ivy swirls up the trunk of every hawthorn and flows over the forest floor, there is no ground cover, save for brambles. The grasses and herbaceous weeds that first colonized Geescroft’s fallowed field are completely gone, shaded out by oaks, which prefer acidic soil. Due to overplanting of nitrogen-fixing legumes, and also to nitrogen fertilizers and decades of acid rain, Geescroft is a classic example of exhausted soil, acidified and leached, with only a few species predominating.

Even so, a forest of mainly oak, brambles, and holly is not a barren place. It is life that, in time, will beget more.

Broadbalk Wheatfield and “Wilderness.” (Trees, upper left.)


The difference at Broadbalk—which has just one oak—is two centuries of chalk lime, which retains phosphates. “But eventually,” says Poulton, “it will wash out.” When it does, there will be no recovery, because once the calcium buffer is gone, it can’t return naturally unless men with shovels return to spread it. “Someday,” he says, almost in a whisper, his thin face scanning his life’s work, “all this farmland will go back to woody scrub. All the grass will disappear.”

Without us, it will take no more than a century. Rinsed of its lime, Broadbalk Wilderness will be Geescroft revisited. Like arboreal Adams and Eves, their seeds will cross on the winds until these two remnant woodlands merge and spread, taking all the former fields of Rothamsted back to their unfarmed origins.

In the mid-20th century, the length of commercialized wheat stalks shortened nearly by half even as the number of grains they bore multiplied. They were engineered crops, developed during the so-called Green Revolution to eliminate world hunger. Their phenomenal yields fed millions who otherwise might not have eaten, and thus also contributed to expanding the populations of countries like India and Mexico. Designed through forced crossbreeding and random mixes of amino acids—tricks that preceded gene splicing—their success and survival depend on calibrated cocktails of fertilizers, herbicides, and pesticides to protect these laboratory-bred life-forms from perils that lurk outside, in reality.

In a world without people, none will last in the wild even the four years during which wheat hung on in the Broadbalk Wilderness after Lawes and Gilbert abandoned it to the elements. Some are sterile hybrids, or they spawn offspring so defective that farmers must purchase new seed each year—a boon for seed companies. The fields where they are destined to die out, which are now most of the grain fields in the world, will be left deeply soured by nitrogen and sulfur, and will remain badly leached and acidic until new soil is built. That will require decades of acid-tolerant trees rooting and growing, then hundreds of years more of leaf litter and decaying wood broken down and excreted as humus by microbes that can tolerate the thin legacy of industrial agriculture.

Beneath these soils, and periodically disinterred by ambitious root systems, will lie three centuries’ worth of various heavy metals and an alphabet soup of POPs, substances truly new under the sun and soil. Some engineered compounds like PAHs, too heavy to blow away to the Arctic, may end up molecularly bound in soil pores too tiny for digesting microbes to enter, and remain there forever.

IN 1996, LONDON journalist Laura Spinney, writing in New Scientist magazine, envisioned her city abandoned 250 years hence, turned back into the swamp it once was. The liberated Thames wandered among the waterlogged foundations of fallen buildings, Canary Wharf Tower having collapsed under an unbearable tonnage of dripping ivy. The following year, Ronald Wright’s novel A Scientific Romance jumped 250 years more, and imagined the same river lined with palms, flowing transparently past Canvey Island into a sweltering mangrove estuary, where it joined a warm North Sea.

Like the entire Earth, the posthuman fate of Britain teeters on the balance of these two visions: a return to temperate foliage, or a lurch into a tropical, super-heated future—or, ironically, into a semblance of something last seen in England’s southwestern moors, where Conan Doyle’s Baskerville hound once wailed into chill mist.

Dartmoor, the highest point in southern England, resembles a 900-square-mile baldpate with occasional massive chunks of fractured granite poking through, fringed by farms and patches of woods that exploded from old boundary hedgerows. It formed at the end of the Carboniferous Age, when most of Britain lay submerged, with sea creatures dropping shells on what became its buried chalk. Beneath that was granite, which 300 million years ago bulged with underlying magma into a dome-shaped island—which it may be again if seas rise as high as some fear.

Several ice ages froze enough of the planet’s water solid to drop ocean levels and allow today’s world to take shape. The last of these sent a mile-high ice sheet right down the Prime Meridian. Where it stopped is where Dartmoor begins. Atop its granite hilltops, known as tors, are remnants from those times that may be portents of what awaits if yet a third climatic alternative proves to be the British Isles’ destiny.

That fate could occur if meltwater from Greenland’s ice cap shuts down, or actually reverses, the oceanic conveyor atop which rides the Gulf Stream, which currently keeps Britain far warmer than Hudson Bay, at the same latitude. Since that much-debated event would be the direct result of rising global temperatures, probably no ice sheet will form—but permafrost and tundra could.

That happened at Dartmoor 12,700 years ago, the last time the global circulation system nearly slowed to a halt: no ice, but rock-hard ground. What followed is not only instructive, as it shows what the United Kingdom might resemble in coming years, but also hopeful, because these things, too, will pass.

The deep freeze lasted 1,300 years. During that time, water trapped in fissures in Dartmoor’s granite dome bedrock froze, cracking apart huge rocks below the surface. Then the Pleistocene ended. The permafrost thawed; its runoff exposed the shattered granite that became Dartmoor’s tors, and the moor bloomed. Across the land bridge that for another 2,000 years connected England to the rest of Europe, pine moved in, then birch, then oak. Deer, bears, beavers, badgers, horses, rabbits, red squirrels, and aurochs crossed with them. So did a few significant predators: foxes, wolves, and the ancestors of many of today’s Britons.

As in America, and Australia long before, they used fire to clear trees, making it easier to find game. Except for the highest tors, the barren Dartmoor prized by local environmental groups is another human artifact. It is a former forest repeatedly burned, then waterlogged by more than 100 inches of annual rainfall into a blanket of peat where trees no longer grow. Only charcoal remnants in peat cores attest that once they did.

The artifact was shaped further as humans pushed hunks of granite into circles that became foundations for their huts. They spread them into long, low unmortared stone reaves that crossed and hatched the landscape, and remain vivid even today.

The reaves divided the land into pastures for cows, sheep, and Dartmoor’s famous hardy ponies. Recent attempts to emulate Scotland’s picturesque heaths by removing livestock proved futile, as bracken and prickly gorse appeared rather than purple heather. But gorse befits a former tundra, whose frozen surfaces melt to spongy peat familiar to anyone who walks these moors. Tundra this may be again, whether humans are here or not.

Elsewhere on Earth, on former croplands that humans tended for millennia, warming trends will create variations of today’s Amazon. Trees may cover them with vast canopies, but the soils will remember us. In the Amazon itself, charcoal that permeates frequent deposits of rich black soil called terra preta suggests that, thousands of years ago, paleo-humans cultivated wide swathes of what we think of today as jungle primeval. Slowly charring rather than burning trees, they ensured that much of their nourishing carbon was not expelled into the atmosphere but was instead retained, along with nitrogen, phosphorus, calcium, and sulfur nutrients—all packaged in easily digested organic matter.

This process has been described by Johannes Lehmann, the latest of a lineage of Cornell University soil scientists who have studied terra preta nearly as long as the heirs to Rothamsted founder John Lawes have experimented with fertilizer. The charcoal-enriched soil, despite incessant use, never gets depleted. Witness the lush Amazon itself: Lehmann and others believe that it sustained large pre-Columbian populations, until European diseases reduced them to scattered tribes who now live off nut groves planted by their ancestors. The unbroken Amazon we see today, the world’s largest forest, rushed back so quickly across rich terra preta that European colonists never realized it was gone.

“Producing and applying bio-char,” writes Lehmann, “would not only dramatically improve soil and increase crop production, but also could provide a novel approach to establishing a significant, long-term sink for atmospheric carbon dioxide.”

In the 1960s, British atmospheric scientist, chemist, and marine biologist James Lovelock proposed his Gaia hypothesis, which describes the Earth as behaving like a super-organism, its soil, atmosphere, and oceans composing a circulatory system regulated by its resident flora and fauna. He now fears that the living planet is suffering a high fever, and that we are the virus. He suggests we compile a user’s manual of vital human knowledge (on durable paper, he adds) for survivors who may sit out the next millennium huddled in the polar regions, the last habitable places in a super-heated world, until the ocean recycles enough carbon to restore a semblance of equilibrium.

If we do so, the wisdom of those nameless Amazonian farmers should be inscribed and underlined so that we might attempt agriculture a little differently next time around. (There may be a chance: Norway is now archiving examples of the world’s crop seed varieties on an Arctic island, in hopes they may survive untold calamities elsewhere.)

If not, and if no humans return to till the soil or husband the animals, forests will take over. Rangelands that receive good rainfall will welcome new grazers—or old ones, as some new incarnation of Proboscidae and sloths replenish the Earth. Other places, however, less blessed, will have parched into new Saharas. The American Southwest, for instance: waist-high in grasses until 1880, when their cattle population of a half-million suddenly sextupled, New Mexico and Arizona now face unprecedented drought, with much of their water-retaining capacity lost. They may have to wait.

Still, the Sahara itself was once covered with rivers and ponds. With patience—though not, unfortunately, human patience—it will be again.