The World in 2050: Four Forces Shaping Civilization's Northern Future - Laurence C. Smith (2010)
Chapter 1. Martell’s Hairy Prize
“Prediction is very difficult. Especially about the future.”
—Niels Bohr (1885-1962)
“The future is here. It’s just not evenly distributed yet.”
—William Gibson (1948-)
On a cold April day in 2006, Jim Martell, a sixty-five-year-old businessman from Glenns Ferry, Idaho, shot and killed a strange beast. Cradling his rifle, he ran with his guide, Roger Kuptana, to where it lay slumped on the snow. Both men wore thick parkas as protection against the icy wind. They were on Banks Island, high in the Canadian Arctic, some 2,500 miles north of the U.S. border.
Martell was an avid big-game hunter and had paid about forty-five thousand dollars for the right to bag Ursus maritimus, a polar bear, one of the most coveted trophies of his sport. Kuptana was an Inuit tracker and guide who lived in the nearby village of Sachs Harbor. Polar bear hunting is strictly regulated but legal in Canada, and the hefty license and guide fees provide important revenue to Sachs Harbor and other Inuit towns like it. Martell had permission to take down a polar bear and only a polar bear. But that was not what lay bleeding in the snow.
At first glance the creature looked well enough like a polar bear, albeit a small one. It was seven feet long and covered with creamy white fur. However, its back, paws, and nose were mottled with patches of brown. There were dark rings, like a panda’s, around its eyes. The creature’s face was flattened, with a humped back and long claws. In fact, it had many of the features of Ursus arctos horribilis, the North American grizzly bear.
Martell’s bear triggered an international sensation. Canadian wildlife officials seized the body and submitted DNA samples to a genetics lab to find out what it was. Tests confirmed the animal was indeed a half-breed, the product of a grizzly bear father and a polar bear mother.2 It was the first evidence of grizzly/polar bear copulation ever reported in the wild. News outlets announced the arrival of a “Hairy Hybrid”3 and the blogosphere erupted with either wonderment over proposed names—“pizzly?” “grizzlar?” “grolar bear?”—or outrage that the only known specimen had been shot dead. A “Save the Pizzly” Web site hawked T-shirts, coffee mugs, and stuffed dolls. Martell was subjected to angry criticism; in response he pointed out that the world would never have learned of the thing—whatever it was called—if not for his fine shot.
For this bizarre tryst even to have happened required that a grizzly wander far north into polar bear territory, a formerly rare phenomenon that biologists are now beginning to see more often. Journalists were quick to make a climate change connection: Was this, they wondered, a preview of nature’s response to global warming? But scientists like Ian Stirling, a leading polar bear biologist, were rightly skeptical of drawing strong conclusions from what was, after all, an isolated event. That changed in 2010, when a second specimen was shot. Tests confirmed it was the offspring of a hybrid mother; in other words, they are breeding.4 The coming decades will show whether Martell’s bear, now stuffed and snarling in his living room, is just the latest biological indicator among many that something big is going down on our planet.
If you enjoy watching wildlife in your backyard, perhaps you’ve noticed something. All around the world there are animals, plants, fish, and insects creeping to higher latitudes and elevations. From spittlebugs in California to butterflies in Spain and trees in New Zealand, it is a broad pattern that biologists are discovering. By 2003 a global inventory of this phenomenon found that on average, plants and animals are shifting their ranges about six kilometers toward the poles, and six meters higher in elevation, every decade. Over the past thirty years phenological cycles—the annual rhythm of plant flowering, bird migrations, birthing babies, and so on—have shifted earlier in the spring by more than four days per decade.5
If these numbers don’t sound large to you, they should. Imagine your lawn crawling north, away from your house, at a speed of five and one-half feet each day. Or that your birthday arrived ten hours sooner each year. That’s how fast these biological shifts are happening. Life-forms are migrating—and it’s going on right outside your window.
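The two comparisons above are straightforward unit conversions, and a short sketch can verify them (assuming a ten-year decade, 365.25 days per year, and 3.28 feet per meter):

```python
# Back-of-the-envelope check of the two "migration speed" claims above.
# Assumptions: a decade = 10 years, 365.25 days/year, 1 m = 3.28084 ft.

DAYS_PER_YEAR = 365.25
FEET_PER_METER = 3.28084

# Ranges shifting ~6 km poleward per decade -> feet per day
km_per_decade = 6.0
feet_per_day = km_per_decade * 1000 * FEET_PER_METER / (10 * DAYS_PER_YEAR)

# Spring events arriving ~4 days earlier per decade -> hours per year
days_per_decade = 4.0
hours_per_year = days_per_decade * 24 / 10

print(f"Range shift: ~{feet_per_day:.1f} feet per day")      # ~5.4 feet/day
print(f"Phenology:   ~{hours_per_year:.1f} hours per year")  # ~9.6 hours/year
```

The results land right on the figures in the text: about five and one-half feet of range shift per day, and a "birthday" arriving roughly ten hours earlier each year.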
The 2006 pizzly story—like the record-shattering Atlantic hurricane season in 2005, or the strange weather patterns that rained out the Winter Olympic Games in Vancouver while burying the eastern U.S. seaboard with “Snowpocalypse” in 20106—is yet another example of something that might have been triggered by climate change or might not. Such events are eye-catching in the news of the day but not, taken in isolation, conclusive of anything. In contrast, while the painstaking statistical analysis of decades of field research on spittlebugs and trees may not rouse the daily news cycle, it rouses me. It is deeply important, a compelling discovery that provides real understanding about the future. It is a megatrend, and megatrends are what this book is all about.
The Thought Experiment
This is a book about our future. Climate change is but one component of it. We will explore other big trends as well, in things like human population, economic integration, and international law. We will study geography and history to show how their preexisting conditions will leave lasting marks on the future. We will look to sophisticated new computer models to project the futures of gross domestic product, greenhouse gases, and supplies of natural resources. By examining these trends collectively, and identifying convergences and parallels between them, it becomes possible to imagine, with reasonable scientific credibility, what our world might look like in forty years’ time, should things continue on as they are now. This is a thought experiment about our world in 2050.
It can be fun imagining what our world might look like by then. Robots and flying cars? Custom-grown body parts? A hydrogen economy? As any disappointed sci-fi buff will tell you, the pace of reality is usually slower than human imagination. Fans of George Orwell’s book 1984, the television series Lost in Space and Space: 1999, the films 2001: A Space Odyssey, and (it’s looking like) Blade Runner—set in a perpetually raining 2019 Los Angeles—see their landmark years come and go. But outside of the ongoing technical explosions in information and biotechnologies, our lives are considerably less different than the writers of these fictional works imagined they would be.
We’ve discovered quarks and flung people into space, yet still depend on the internal combustion engine. We’ve cracked DNA and grown a human ear on a mouse’s back, yet are still dying of cancer. We’ve created fluorescent green pigs by inserting jellyfish genes into them (Green Eggs and Ham, anyone?), yet still catch wild fish from the sea and use dirt and water to grow our food. Nuclear power is but a pale shadow of what was hoped for it in the 1950s. We still use boats, trucks, and trains to move goods. And even in this unprecedented era of globalization, the fundamental principles of our markets and economies differ surprisingly little from the days of Adam Smith, more than two hundred years ago.
But in other, sneakier ways, things have changed profoundly. Imagine describing to a 1950 California tomato farmer how in the next fifty years he would grow genetically programmed seeds, see the water in his state tilted from one end to the other, and experience a tripling of the state’s population. Imagine explaining he would one day compete with Chinese farmers to sell tomatoes to Italians who would blend them with beans from Mexico to make canned goods for British supermarkets.7
Any of these would blow our yesteryear farmer’s mind. But to us, they are familiar, even boring. They fly below our radar because they have crept upon us, hiding in plain sight over the course of decades. But that doesn’t mean transformations like these aren’t huge, fast, and profound. Big changes often just sort of ease their way in. And quietly change the world.
What will our world look like in 2050? Our distribution of people and power? The state of the natural world? Which countries will lead, and which ones suffer? Where do you think you’ll be in 2050?
The answers to these questions, at least in this book, derive from a core argument: The northern quarter of our planet’s latitudes will undergo tremendous transformation over the course of this century, making it a place of increased human activity, higher strategic value, and greater economic importance than today. I loosely define this “New North” as all land and oceans lying 45° N latitude or higher currently held by the United States, Canada, Iceland, Greenland (Denmark), Norway, Sweden, Finland, and Russia.
These eight countries, which control vast territories and seas extending as far north as the Arctic Ocean, comprise a new “Northern Rim” roughly encircling that ocean. Developments in these Northern Rim countries, here coined the NORC countries, or “NORCs,” are explored in Parts II and III (Chapters 5 through 10). Part I (Chapters 2 through 4) presents powerful worldwide trends in human population, economics, energy and resource demand, climate change, and other factors keenly important to our global civilization and ecosystem. Besides imagining what life might be like for most of us by 2050, these first chapters identify some critical world pressures that are stimulating the New North into existence.
Before beginning our travels around this 2050 world, there are some rules to establish.
Fortunately, we have the tools, the models, and the knowledge to construct an informed thought experiment of what we might expect to see unfold over the next forty years. However, as in any experiment, we must first define the assumptions and ground rules upon which its outcomes are contingent.
1. No Silver Bullets. Incremental advances in technology for the next forty years are assumed. No cold nuclear fusion or diesel-growing fungus8 will suddenly solve all our energy problems; no God-like genetic engineering to grow wheat without water. This is not to say a radical technology breakthrough can’t or won’t happen; only that the possibility will not be entertained here.
2. No World War III. The two “world” wars in the first half of the twentieth century recast the map and forged economic, political, and infrastructural changes that reverberate to this day. A nuclear or major, multicountry, conventional war like World War II would be a game changer and is not imagined here (indeed, empirical evidence suggests that over the long run we may be becoming somewhat less violent9). However, the possibility of lesser armed conflicts, like the ones ongoing today in the Middle East and Africa, is evaluated. Major laws and treaties, once made, are assumed to stay in place.
3. No Hidden Genies. A decades-long global depression, an unstoppable killer disease pandemic, a meteorite impact, or other low-probability, high-impact event is not imagined here. However, this rule is relaxed in Chapter 9 to explore six plausible, if unlikely, outcomes, like an abrupt climate change or the collapse of global trade—both of which have happened before and could happen again.
4. The Models Are Good Enough. Some of the conclusions reached in this book stem from experiments using computer models of complex phenomena, like climate and economies. Models are tools, not oracles. All have their flaws and limitations.10 But for the broad-scale purposes of this book, they are excellent. I will focus on the robust, uncontroversial messages of these models rather than push the limits of their capabilities. As before, this rule is relaxed in Chapter 9 to explore some plausible outcomes lying outside our current modeling capacity.
The purpose of these rules is to introduce conservatism to the thought experiment. By favoring likely, foreseeable trajectories over unlikely, exciting ones, we avoid sacrificing a more probable outcome to a good story. By pursuing multiple lines of argument rather than one grand idea, we avoid the so-called “foxes and hedgehogs” trap, by lessening the likelihood that an important actor will be overlooked.11 By concentrating on the most robust simulations of computer models, we steer the conversation toward the science that is best understood, rather than poorly understood.
Why even try to project forty years into the future anyway? To imagine the world in 2050, we must closely study what is happening today, and why. By forcing our minds to take the long view, we can identify factors that might seem beneficial in the near term, but lead to undesired consequences in the long term, and vice versa. After all, doing good things (or at least, less bad things) for the long term is a worthy goal. I certainly don’t believe the future is predetermined: Much of what does or does not happen forty years from now rests on actions or inactions taken between now and then.
Some of the changes I will present will be perceived as good or bad, depending on the reader’s own perspective. To be sure, some of them, like species extinctions, no one wishes to see. But others, like military spending and energy development, evoke valid, strongly opposed reactions. My goal is not to argue one side or another, but to pull together trends and evidence into a bigger picture as well and as objectively as I can. The reader can take it from there.
But before we can intelligently discuss the future, we must first understand the past. In roughly historical order of their rise in significance, here are four global forces that have been busily shaping our 2050 world for tens to hundreds of years.
FOUR GLOBAL FORCES
The first global force is demography, which essentially means the ups, downs, and movements of different population groups within the human race. Demographic measures include things like birth rates, income, age structure, ethnicity, and migration flows. We shall examine all of these in due course but for now, let us start with the most basic yet profound measure of all: the total number of people living on Earth.
Before the invention of agriculture some twelve thousand years ago, there were perhaps one million persons in the world.12 That is roughly the present-day population of San Jose, California. People foraged and hunted the land, living in small mobile clans. It took twelve thousand years (until about 1800 A.D.) for our numbers to grow to one billion. But then, oh boy, liftoff.
Our second billion arrived in 1930, a mere 130 years later. The global Great Depression was under way. Adolf Hitler led his Nazi Party to stunning victory in Germany’s Reichstag elections. My Italian immigrant grandfather, then living in Philadelphia, was thirty-three years old.
Our third billion came just thirty years later in 1960. John Kennedy beat Richard Nixon in the U.S. presidential race, the first satellites were orbiting the Earth, and I was a scant seven years from being born.
Our fourth billion took just fifteen more years. It was 1975 and I was eight. The U.S. president Gerald Ford escaped two assassination attempts (one by Charles Manson’s murderous henchwoman Lynette “Squeaky” Fromme), the Khmer Rouge had taken over Cambodia, and the movie The Godfather Part II ran away with six Academy Awards, including one to the Italian-American actor Robert De Niro.
Our fifth billion came in 1987, now just twelve years after the fourth. The Dow Jones Industrial Average closed above 2,000 for the first time in history and the Irish rock band U2 released their fifth album, The Joshua Tree. Standing outside Berlin’s Brandenburg Gate, U.S. president Ronald Reagan exhorted Soviet leader Mikhail Gorbachev to “tear down this wall.” The world’s last dusky seaside sparrow died of old age on a tiny island preserve in Florida’s Walt Disney World Resort. A self-absorbed college sophomore at the time, I only noticed The Joshua Tree.
Our sixth billion arrived in 1999. This is now very recent history. The United Nations declared 1999 the International Year of Older Persons. The Dow Jones climbed above 11,000 for the first time in history. Internet hookups ballooned and millions of songs, to the dismay of U2 and the rest of the music industry, were swapped for free on Napster. Hugo Chávez became president of Venezuela, and a huge chunk of northern Canada quietly assumed self-rule as the new territory of Nunavut. By then, I was a young professor at UCLA, working toward tenure and starting to notice things. The world vacillated between nervous fretting about Y2K and excitement over the dawn of a new millennium.
11,800 years . . . 130 years . . . 30 years . . . 15 years . . . 12 years. . . . The length of time we need to add another billion has dwindled to nearly nothing. One billion is more than triple the 2010 population of the United States, the third most populous country on Earth. Imagine a world in which we added one-plus USA, or two Pakistans, or three Mexicos, every four years. . . . Actually, this requires no imagination at all. It is reality. We will add our seventh billion some time in 2011.
This extraordinary acceleration, foreseen over two centuries ago by Thomas Malthus,13 burst into popular culture again in 1968 when Paul Ehrlich, then a young biology professor at Stanford, jolted the world with The Population Bomb, a terrifying book forecasting global famines, “smog deaths,” and massive human die-offs if we didn’t somehow control our numbers.14 He became a frequent guest on The Tonight Show Starring Johnny Carson and his ideas almost certainly helped nudge China toward its “One-Child” population control policy implemented in 1979.
Arguments against Ehrlich’s ecological approach to human beings charged that it underestimated the limits of our technology and ingenuity. So far, these arguments appear to have been correct. Our numbers have surged on and Ehrlich’s scariest predictions have, as yet, failed to materialize. But even so, generations from now, our descendants will marvel at the twentieth century, a time when our numbers shot from 1.6 to 6.1 billion in a mere blink of time.
What triggered this enormous twentieth-century population spurt? Why did it not happen before, and is it likely to continue into the future?
Fast population growth behaves a lot like a personal savings account. Just as its account balance depends on the spread between the rates of deposit versus spending, the balance of people on Earth depends on the rates at which new people are created (the fertility rate) versus how fast existing people disappear (the death rate).15 When the two rates are equal, population holds steady. When they diverge or converge, population rises or falls accordingly. It doesn’t really matter whether birth rates rise or death rates fall; what matters is the spread and whether rate adjustments are staggered in time or happen simultaneously. Most importantly, once a run-up (or decline) has happened, we are stuck with the new population level, even if the gap between fertility and death rates is then closed and population stability is returned.
From our earliest beginnings until the late nineteenth century, our fertility and death rates, on average, were both high. Mothers had more babies than today, but few of them survived to old age. In the preindustrial era, famine, warfare, and poor health kept death rates high, largely offsetting high fertility. The global population of humans trickled higher, but only very slowly.
However, by the late nineteenth century, industrialization had changed everything in Western Europe, North America, and Japan. Mechanized food production and distribution reduced famine deaths. Local warfare disappeared under the rising control of central governments. Death rates dropped as doctors discovered modern medical procedures and drugs. But fertility rates fell more slowly—cultural expectations are slower to change—so the human population took off. By 1950, New York was the first city in the world to break the ten million mark.
Not only did the Industrial Age bring machines and medicine, it also spurred migration from farms to cities. People increasingly bought what they needed rather than growing or making things themselves. The cost of housing rose; the economy grew. More women entered college and the workplace, squeezing down the number of children families wanted or could afford. Fertility rates began to drop and families became smaller. When fertility rates at last fell to match the death rates, population growth halted, and the industrialized societies that had participated in all this were transformed. Instead of being small, poor, prolific, and death-prone they were now large, rich, and long-lived with few children.
This chain of events, in which a population run-up is at first initiated, then later stabilized, by the forces of modernization is called the Demographic Transition and is a bedrock concept in demography.16 The Demographic Transition supposes that modernization tends to reduce death and fertility rates, but not simultaneously. Because people tend to readily adopt technological advances in medicine and food production, death rates fall first and quickly. But fertility reductions—which tend to be driven by increased education and empowerment of women, an urban lifestyle, access to contraception, downsized family expectations, and other cultural changes—take more time. And just like a bank account, when the death (spending) rate falls faster than the birth (savings) rate, the result is a rapid run-up in the sum total. Even if fertility rates later fall to match death rates—thus completing the Demographic Transition and halting further growth—a new, much larger population balance is then carried forward.
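The bank-account logic of the Demographic Transition can be made concrete with a toy simulation. Everything here is illustrative: the rates and timings below are invented round numbers, not demographic data. The point is structural: when the death rate drops decades before the fertility rate follows, the population surges, then holds at a permanently higher level once the two rates re-balance.

```python
# Toy model of the Demographic Transition described above.
# All rates and years are invented for illustration (per person per year).

def simulate(years=300, pop=1.0,
             birth=0.040, death=0.040,   # preindustrial: high, balanced rates
             death_drop_year=50,         # medicine/food arrive: death rate falls fast
             birth_drop_year=120):       # fertility follows, decades later
    history = []
    b, d = birth, death
    for t in range(years):
        if t >= death_drop_year:
            d = 0.015
        if t >= birth_drop_year:
            b = 0.015                    # transition complete: rates re-balance
        pop *= 1 + (b - d)               # the "spread" between rates drives growth
        history.append(pop)
    return history

h = simulate()
print(f"before the transition: {h[49]:.2f}")   # flat while rates are balanced
print(f"after the transition:  {h[-1]:.2f}")   # growth stops, but at a higher level
```

Running it shows the population holding steady, multiplying severalfold during the seventy years the rates are staggered, then flattening again at the new, larger balance carried forward, just as the text describes.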
In the twentieth century, one Demographic Transition concluded and another began. In Europe and North America it took from about 1750 to 1950 to complete, making these places the fastest-growing in the world while most of Asia and Africa grew slowly. This growth then slowed or stopped as industrialized countries completed the Demographic Transition, their fertility rates falling to near or even below the death rate.
But in the developing world, a new Demographic Transition that began in the early twentieth century with the arrival of modern medicine has still not finished. Thanks to the inventions of antibiotics and vaccines, along with insecticides to control diseases like malaria, death rates have plummeted17 but fertility rates, while dropping, have fallen less quickly. In some countries they haven’t fallen at all, defying the classic Demographic Transition notion that all modernized women prefer fewer babies. Such discrepancies underline a known weakness of the Demographic Transition model: Not every culture will necessarily adopt the western ideal of a small nuclear family, even after women’s rights, health, and security conditions improve.
So somewhere around 1950, our fastest population growth rates left the OECD countries18 and went to the developing world. Because the base population levels in the latter are so much larger, the resulting surge in world population has been nothing short of phenomenal. In most developing countries the spread between fertility and death rates, while narrowing, remains substantial. This second Demographic Transition is not yet finished, and unlike before, it involves the vast majority of the human race. Until a few decades after it ends—if it ends—world population will continue to grow.
The second global force, only partly related to the first, is the growing demand that human desires place upon the natural resources, services, and gene pool of our planet. Natural resources means both finite assets like hydrocarbons, minerals, and fossil groundwater; and renewable assets like rivers, arable land, wildlife, and wood. Natural services include life essentials like photosynthesis, absorption of carbon dioxide by oceans, and the labors of bees to pollinate our crops. And by gene pool I mean exactly that—the diversity of genes being carried around by all living organisms still existing on Earth.
It’s difficult to comprehend how fully dependent we are upon these things. Steel machines burn oil to grow and harvest our grains, with fertilizers made from natural gas, generating many times over what a farmer and mules could produce on the same land. From the genetic code of organisms we take the building blocks for our food, biotech, and pharmaceutical industries. We frame our buildings with timber, steel, and cement. We take water from the ground or trap it behind dams to grow alfalfa and cotton in the desert. We need trucks and diesel and giant metal-hulled ships to move ores and fish and manufactured goods from the places that have them to places that want them. The resulting trade flows have grown entire economies and glittering cities, with their music and culture and technology. Coal-fired electricity zaps through billions of miles of metal cable to power buildings, electric cars, cell phones, and the Internet. Airplanes and cars burn the sludge of long-dead things, granting us personal freedom and the chance to see the world.
It’s no secret that our twentieth-century expansions in population, modernization, trade, and technology have escalated demand for all of these. Public concern—both for the stability of raw commodity supplies and for the health of the natural world—has been high since the 1970s, especially after the OPEC oil embargo crisis of ’73-’74 and NASA’s launch of ERTS-1 (later renamed Landsat), the first civilian satellite to disseminate graphic images of clear-cuts gnawing away the vast rain forests of the Amazon basin. Today, news feeds crackle with stories about dwindling oil, fights over water, and soaring food prices. Many plants and animals are disappearing as their habitats are converted to plantations and parking lots. Still others have been harvested into oblivion. Fully four-fifths of the world’s land surface (excluding Antarctica) is now directly influenced by human activities.19 The lingering exceptions to this are those places that are truly remote: the northern forests and tundra, the shrinking rain-forest cores of the Congo and Amazon basins, and certain deserts of Africa and Australia and Tibet.
Perhaps no resource pressure has grown faster than our demand for fossil hydrocarbon fuels. This began in Europe, North America, Australia, and Japan and has now spread to China, India, and other modernizing nations. Because the United States has been (and still is) the largest consumer of these fuels, let’s illustrate the rapacity of this phenomenon as it has unfolded there.
In 1776, when the United States of America declared independence from Great Britain after a little over a year of war, most of the fledgling country’s energy came from wood and muscles. Yes, there were sawmills turning waterwheels to cut logs, and coal was used to make coke for casting iron cannons and tools, but the vast majority of America’s energy came from fuelwood, horses, mules, oxen, and human backs.
By the late 1800s, the Industrial Revolution, steam locomotive, and westward expansion had changed all that. Dirty black coal was the shining new prince—fueling factories, coke ovens, foundries, and trains all across the young nation. Coal consumption grew from 10 million short tons per year in 1850, to 330 million short tons just fifty years later.20 Little mining towns sprang up all over Appalachia, like now-defunct Ramseytown in western Pennsylvania, where my grandmother was later born. Nearby Rossiter produced my grandfather, who worked in the coal mines as a teenager.
But in the twentieth century, coal was surpassed. Oil, first drilled out of a quiet Pennsylvania farm in 1859 to make lamp kerosene, caught on slowly at first. Gasoline was originally a junk by-product that some people dumped into rivers to get rid of. But then someone thought of pouring it into a combustion engine, and gasoline became the fuel of Hercules.
Packed inside a single barrel of oil is about the same amount of energy as would be produced from eight years of day labor by an average-sized man. Seizing oil fields became a prime strategic objective in both world wars. The Baku fields of Azerbaijan were a prime reason that Hitler invaded Russia, and it was their oil supply, gushing north to the Russian army, that stopped him.
By the end of World War II, cars and trucks had outgrown the rail system, locomotives had switched to diesel, and the liquid-fuels market was really taking off. Oil consumption surpassed coal in 1951, though sales of both—along with natural gas—continued to rise strongly. In just one hundred years (1900-2000) Americans ramped up their coal consumption from about 330 million to 1.1 billion short tons per year,21,22 a 230% increase. Oil-burning grew from 39 million to 6.6 billion barrels per year,23 a 16,700% increase. In comparison, that old stalwart fuelwood rose a measly 12%, from 101 million to just 113 million cords per year. 24
Although the U.S. population also rose quickly over this same time period (from 76 to 281 million, or +270%), oil consumption rose far faster on a per capita basis. By the beginning of the twenty-first century the average American was burning through more than twenty-four steel drums of oil every year. In 1900, had my Italian grandfather already emigrated to the United States, he would have used just twenty-two gallons, about one-half of one steel drum.
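The growth figures quoted in the last two paragraphs follow from simple arithmetic on the consumption numbers given in the text (the text rounds its percentages slightly; the steel drum is taken here as the standard 42-gallon oil barrel):

```python
# Reproducing the 1900 -> 2000 U.S. consumption percentages quoted above.
def pct_increase(start, end):
    return (end / start - 1) * 100

coal     = pct_increase(330e6, 1.1e9)   # short tons/year -> ~233% (text: "230%")
oil      = pct_increase(39e6, 6.6e9)    # barrels/year    -> ~16,800% (text: "16,700%")
fuelwood = pct_increase(101e6, 113e6)   # cords/year      -> ~12%
people   = pct_increase(76e6, 281e6)    # U.S. population -> ~270%

# Per capita oil, taking one "steel drum" as a 42-gallon barrel
oil_per_capita_1900 = 39e6 / 76e6       # ~0.5 barrel, i.e. ~22 gallons
oil_per_capita_2000 = 6.6e9 / 281e6     # ~23.5 barrels

print(f"coal +{coal:.0f}%, oil +{oil:.0f}%, "
      f"fuelwood +{fuelwood:.0f}%, population +{people:.0f}%")
```

The per capita comparison makes the point sharply: population grew less than fourfold while each person's oil use grew more than fortyfold, from half a barrel to roughly two dozen barrels a year.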
The twentieth century saw similar extraordinary growth in American consumption of iron, nickel, diamonds, water, softwood, salmon, you name it. To varying degrees, this rapid escalation of resource consumption has either happened or is now happening in the rest of the world.
So we see that resource consumption, much like our global population, grew ridiculously fast in a single century. But while the two certainly feed off one another, rising resource demand has less to do with population growth per se than with modernization. My UCLA colleague Jared Diamond illustrates this by considering an individual’s “consumption factor.”25 For the average person living in North America, Western Europe, Japan, or Australia, his or her consumption factor is 32.
If your consumption factor, like mine, is 32, that means you and I each consume thirty-two times more resources and produce thirty-two times more waste than the average citizen of Kenya, for example, with a consumption factor of 1. Put another way, in under two years we plow through more stuff than the average Kenyan does in his entire life. Of the 6.8+ billion of us alive on Earth now, only about a billion—15%—enjoy this lavish lifestyle. The vast majority of the human race lives in developing countries with consumption factors much lower than 32, mostly down toward 1.
Places with a consumption factor of 1 are among the most impoverished, dangerous, and depressing on Earth. Regardless of what country we live in, we all want to see these conditions improve—for security as well as humanitarian reasons. Many charitable people and organizations are working toward this goal, from central governments and NGOs to the United Nations to local churches and individual donors. Most developing countries, too, are striving mightily to industrialize and improve their lot. Organizations large and small, from the World Bank and International Monetary Fund (IMF), to the Grameen Bank and other microlenders, are providing loans to help. Who among us does not want to see such efforts succeed? Who does not want the world’s lingering poverty, hunger, and disease brought to an end?
But therein lies a dilemma. What if you could play God and do the noble, ethically fair thing by converting the entire developing world’s level of material consumption to that now carried out by North Americans, Western Europeans, Japanese, and Australians today? By merely snapping your fingers you could eliminate this misery. Would you?
I sure hope not. The world you just created would be frightening. Global consumption would rise elevenfold. It would be as if the world’s population suddenly went from under 7 billion today to 72 billion. Where would all that meat, fish, water, energy, plastic, metal, and wood come from?
Now let us suppose that this transformation were to happen not instantly but gradually, over the next forty years. Demographers estimate that total world population might level off at around 9.2 billion by 2050. Therefore, if the end goal is for everyone on Earth to live as Americans, Western Europeans, Japanese, and Australians do today, then the natural world must step up to provide enough stuff to support the equivalent of 105 billion people today.
Viewed in this light, lifestyle is an even more potent multiplier of human pressure on the world resource base than is total population itself. Global modernization and prosperity—an eminently laudable and desirable goal—are thus raising our demands upon the natural world now more than ever.
The third global force is globalization. A big word spanning many things, it most commonly refers to increasingly international trade and capital flows but also has political, cultural, and ideological dimensions.26 Frankly, there are about as many definitions for globalization as there are experts who study it. For our purposes here let us simply think of “globalization” very broadly as a set of economic, social, and technological processes that are making the world more interconnected and interdependent.
Most people were aware of how interconnected the world economy had become long before the 2008-09 global financial crisis laid it bare. In his 2006 book The World Is Flat, the New York Times columnist and author Thomas Friedman famously asked, “Where were you when the world went flat?”27 Flat is Friedman’s simple metaphor for the opening and leveling of a global playing field for trade and commerce, one that in principle maximizes efficiency and profitability for all because the cheapest ore or cheapest labor can be hunted down to the last corners of Earth.
No doubt everyone has a different answer to Friedman’s question. For me, it was in Burbank in 1998, while waiting in a queue at an IKEA home-furnishings store. It struck me that my arms were filled with products designed in Sweden, built in China, shipped to my store in California, and sold to me by a Mexican cashier. From a single store selling pens and seed packets in tiny Älmhult in 1958, IKEA had grown to three hundred franchises in thirty-seven countries by 2010. At €22 billion (USD $33 billion) annually its economy was bigger than that of the country of Jordan, and it was adding twenty-plus new stores worldwide each year.28 Not only is this single company now a planetary economic force, it is globalizing Swedish culture by cultivating a taste for juicy meatballs and clean Scandinavian furniture design from the United States to China to Saudi Arabia.
Globalization kills economies too. After years of slow bleeding, my wife’s hometown in Michigan crashed when Delphi, a major supplier of automotive parts to General Motors Corporation, went bankrupt. Also, globalization’s spread is very uneven: The world is not so much “flat” as it is lumpy. Some countries, like Singapore and Canada, are integrating broadly and rapidly whereas others, like Myanmar and North Korea, are isolated backwaters.
Taking the long view, the world appears to be in the early phase of an economic transformation to something bigger and more integrated than anything ever seen before. It is more far-reaching and sophisticated than any previous alliance in human history. We will all be potential rivals, but also all potential friends. Alongside the demise of entire sectors will be new markets, new trade, and new partnerships. Gone are the days when General Motors could import rubber and steel and export automobiles. The design, raw materials, components, assembly, and marketing of today’s cars might come from fifty different countries around the world.
But what unleashed this new era of global integration upon us? Was it the blazing speed and easy reach of the Internet—or something deeper? I only noticed it in 1998, but might this phenomenon be older than we think?
Like rising world population and natural resource demand, the present global integration lifted off in the middle of the twentieth century. But unlike the first two, it happened deliberately. It all began with a big conference at the Mount Washington Resort near Bretton Woods, New Hampshire, in July 1944. Over seven hundred delegates from forty-four countries—including Britain’s John Maynard Keynes (whose ideas later found new life in the wake of the 2008 global credit meltdown)—were in attendance.
World War II was drawing to a close. Governments were turning their attention to their shattered economies and how to rebuild them after two catastrophic wars, a global depression, a long escalation of protectionist tariffs, and some crazy currency devaluations. Everyone at the conference wanted to figure out how to stabilize currencies, get loans to war-ravaged countries for rebuilding, and get international trade moving again.
The outcome of this conference was something called the Bretton Woods Agreement. Among other things, it stabilized international currencies by pegging them to the value of gold (which lasted until 1971, when President Richard Nixon dropped the U.S. dollar from the gold standard). But its most persistent legacy was the birth of three new international institutions: the International Monetary Fund (IMF) to administer a new monetary system; the International Bank for Reconstruction and Development (IBRD) to provide loans—today, the World Bank; and the General Agreement on Tariffs and Trade (GATT) to fashion and enforce trade agreements—today, the World Trade Organization (WTO). These three institutions guided much of the global reconstruction effort after the war; and during the 1950s their purpose expanded to giving loans to developing countries to help them industrialize. Today these three powerful institutions—the IMF, World Bank, and WTO—are the prime actors making and enforcing the rules of our global economy.
Until its demise in the early 1970s, the Bretton Woods monetary regulatory system presided for three decades over what some have called the “golden age of controlled capitalism.”29 But by the 1980s, “controlled capitalism” had fallen to a revolution of “neoliberalism”—the deregulation and elimination of tariffs and other controls on international trade and capital flows. The neoliberal movement was championed by British prime minister Margaret Thatcher and U.S. president Ronald Reagan, but was rooted in the ideas of Adam Smith.
Throughout the 1980s and 1990s the IMF, WTO, and World Bank aggressively pursued agendas of liberalizing (deregulating) trade markets around the world, vigorously urged on by the United States.30 A common tactic was to require developing countries to accept neoliberal reforms to qualify for IMF or World Bank loans. This practice was exemplified by the “Washington Consensus,” a controversial list of hard-nosed reforms including trade deregulation, opening to direct foreign investment, and privatization of state enterprises.31
In the United States, presidents from both political parties also worked to dismantle international trade barriers. Of particular importance to this book was the North American Free Trade Agreement (NAFTA), proposed in 1991 by President George Herbert Walker Bush to remove trade barriers between the United States, Mexico, and Canada. Two years later, President Bill Clinton made NAFTA the cornerstone of his legacy. In his speech at the signing ceremony Clinton pressed the need “to create a new world economy,” with former presidents Bush, Jimmy Carter, and Gerald Ford nodding in attendance. Clinton’s successor also agreed: Fifteen years later, citing a near-quintupling of U.S. free-trade agreements under his watch, outgoing president George W. Bush stated that global trade expansion had been one of the “highest priorities of his administration.”32
Notice that the origins of today’s great global integration are at odds with one of its most widely promulgated myths: that globalization has emerged organically, born from fast Internet technology and the “invisible hand” of free markets. In truth, this global force owes its existence to a long history of entirely purposeful policy decisions, championed especially by the United States and Britain, dating to the waning days of World War II. Many who write about globalization see it as exploding suddenly in the 1970s or 1980s, thus missing the institutional groundwork laid first under Bretton Woods, pressed upon the developing world by its daughter institutions the IMF, WTO, and World Bank, and subsequently advanced by U.S. presidential administrations of both political parties ever since. Its foundations are now codified into decades of historical precedent and a plethora of free-trade treaties. They are ingrained in generations of politicians and business CEOs, and were reaffirmed even during the turmoil of the 2008-09 global financial crisis.33 This megatrend has roots going back more than sixty years and is now a deep, powerful global force already shaping the twenty-first century economy.
The fourth global force is climate change. Quite simply, it is an observed fact that human industrial activity is changing the chemical composition of the atmosphere such that the planet must, on average, heat up.
The power of greenhouse gases is simply beyond dispute. Their existence was deduced in the 1820s by the French mathematician Joseph Fourier, who noticed that the Earth is far warmer than it ought to be, given its distance from the Sun. Without greenhouse gases our planet would be an icebox, like the Moon and Mars, with temperatures some 60° Fahrenheit colder than today.34 Their magic comes from letting solar radiation easily in but not easily out, roughly analogous to how a closed-up car becomes hotter inside than out from sunlight passing through the window glass.35
The basic physics of this was worked out in the 1890s by the Swedish chemist Svante Arrhenius.36 Like glass, greenhouse gases are transparent to short-wavelength sunlight, allowing it to pass unimpeded through the atmosphere to warm the Earth’s surface (unless blocked by a cloud). But they are opaque to the (invisible) long-wavelength infrared radiation returned from the warmed Earth back to space, instead absorbing it and thus becoming infrared radiators themselves.
Arrhenius was trying to solve the puzzle of ice ages, so was initially interested in global cooling, not warming, but his calculations worked equally well in either direction. He later wondered if humans, by adding carbon dioxide to the air through fossil-fuel burning, could also influence the planet’s climate. He ran the numbers and found that they certainly could, and substantially, too, if the gas’s concentration was raised high enough. His initial estimate of +5°C warming for a doubling of atmospheric CO2, calculated by hand, was remarkably close to the ones generated by far more sophisticated computer models running today. But Arrhenius didn’t think much of this at the time, because he couldn’t imagine humans ever releasing that much carbon dioxide. For humans to double the atmosphere’s CO2, he reasoned, would take at least three thousand years.37
Apparently, the physics of greenhouse gas warming is a lot easier to comprehend than the pace of human industrialization. We’ve already raised the concentration of CO2 in the atmosphere nearly 40%, up from ~280 parts per million by volume (ppmv) in preindustrial times to ~387 ppmv as of 2009. Two-thirds of that rise has been carefully documented since 1958, when the first continuous air sample measurement program was begun by Charles Keeling at Hawaii’s Mauna Loa Observatory as part of the International Geophysical Year. Atmospheric measurements of two other powerful greenhouse gases also released by human activity, methane and nitrous oxide, have followed a similar rising pattern. Depending on the choices we make about carbon emissions, CO2 projections for century’s end range anywhere from 450 to 1,550 ppmv, corresponding to a +0.6 to +4.0°C increase in average global temperature on top of the +0.7°C increase already experienced in the twentieth century.38 Many policy pragmatists now feel a +2°C increase is all but assured, after the 2009 Copenhagen Climate Conference failed to produce anything resembling a binding international agreement to curb carbon emissions.
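The “nearly 40%” figure is a quick calculation from the two concentrations just quoted, both taken directly from the text:

```python
# CO2 concentrations in parts per million by volume (ppmv), as given above.
preindustrial_ppmv = 280
ppmv_2009 = 387

# Percentage increase since preindustrial times.
rise_percent = (ppmv_2009 - preindustrial_ppmv) / preindustrial_ppmv * 100
print(round(rise_percent, 1))  # → 38.2, i.e., "nearly 40%"
```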
These numbers may sound small but they’re not. At the height of the last ice age, when Chicago was buried under a mile-deep sheet of ice, global temperatures averaged just 5°C (9°F) cooler than today. From historical weather-station data the global average temperature is already +0.8°C warmer than in Arrhenius’ time, with most of that rise since the 1970s. An increase of that magnitude is already much larger than the difference between any one year and the next. As expected, this warming trend varies strongly with geography, with even some local cooling in places (the details and reasons for this are known and discussed further in Chapter 5). But the global average is trending upward, along with the steady measured growth of greenhouse gas concentrations in the atmosphere.
Not only are average temperatures rising, the way they are rising is consistent with the greenhouse effect but inconsistent with other natural cycles and processes also known to influence climate. Temperatures are warming more by night than by day; more in winter than in summer; more over oceans than over land; more at high latitudes than in the tropics; and in the troposphere but not the stratosphere. All of these are consistent with greenhouse gas forcing but inconsistent with other known causes, like urban heat islands, changing solar brightness, volcanic eruptions, and astronomical cycles. Those, too, influence climate, but none can fully explain what we are seeing today.
In addition to number-crunching weather data, there is plenty of anecdotal evidence that our climate is beginning to act strangely. A staggering thirty-five thousand people were killed in 2003 when a massive heat wave spilled across Europe. Lesser waves killed hundreds more in Japan, China, India, and the United States in the following summers, when the world suffered eleven of the top twelve hottest years ever recorded. That’s dating all the way back to the first weather stations of 1850, when Zachary Taylor was president of the United States, and Italy wasn’t even a country yet. Hurricane Katrina drowned New Orleans in 2005, a record year for tropical storms. Ironically, many of the displaced moved to Houston, where they got pounded again by Hurricane Ike in 2008. That one killed about two hundred people and put a tree through the roof of my best man’s house, then proceeded to black out nearly a million homes across Ohio, Indiana, and Kentucky.
Like the pizzly bear, no one of these events is conclusive of anything. But after enough of them happen, the private sector gets moving. Goldman Sachs and the Harvard Business Review started writing reports on how to contain risk and maximize profits from climate change.39 Multinational corporations like General Electric, Duke Energy, and Dupont began stumping for green technology and formed the U.S. Climate Action Partnership, calling on the U.S. federal government to “quickly enact strong national legislation to require significant reductions of greenhouse gas emissions.”40 By 2008 its membership included American International Group, Inc. (AIG), Boston Scientific Corporation, Chrysler LLC, ConocoPhillips, Deere & Company, the Dow Chemical Company, Exelon Corporation, Ford Motor Company, General Motors Corporation, Johnson & Johnson, Marsh, Inc., the National Wildlife Federation, the Nature Conservancy, NRG Energy, Inc., PepsiCo, Rio Tinto, Shell, Siemens Corporation, and Xerox Corporation.41 However, by late 2009 the rush of corporations to join the U.S. Climate Action Partnership had slowed, following the failed climate treaty conference at Copenhagen, some dumb e-mails circulated among a clique of climate scientists (the so-called Climategate scandal, a scientifically minor but politically devastating public-relations fiasco), and a moribund cap-and-trade bill in the U.S. Senate. By 2010 ConocoPhillips, BP America, Caterpillar, and Xerox had pulled out.
Gas molecules are impervious to politics, so all of this is really just the beginning. To underscore just how dramatic our run-up of CO2, methane, and nitrous oxide in the atmosphere is, let’s place it within the much longer context of geological time. Greenhouse gases follow both natural cycles—which fall and rise with ice ages and warm interglacial periods, respectively—and human activity, which proceeds much faster. These two actors operate over totally different time scales, with the ice age variations happening over tens of thousands of years but our human excursion unfolding over tens of years. The natural processes that drive greenhouse-gas shifts—rock weathering, astronomical cycles, the spread of forests or wetlands, ocean turnover, and others—take thousands of years, whereas human excavation and burning of old buried carbon—as illustrated earlier from U.S. history—is an action both massive and brief. And because our human-generated carbon burst is perched atop an already large, slow-moving natural interglacial peak, we are taking the atmosphere to a place the Earth has not seen for hundreds of thousands, perhaps millions, of years.42
We know this from the ancient memories of glaciers, deep ocean sediments, tree rings, cave speleothems, and other natural archives. Most spectacular are tiny air bubbles trapped within Greenland and Antarctic ice, each a hermetically sealed air sample from the past. Loose air inside a glacier’s surface snowpack gets closed off into bubbles as the weight of still more snowfall fuses it into ice. Annual layers of these bubbles have been quietly laid down for hundreds of thousands of years, before being drilled from the guts of Greenland and Antarctica by a rare breed of scientist. The gas levels inside them prove we have now elevated the concentrations of greenhouse gases in the Earth’s atmosphere higher than they’ve been for at least eight hundred thousand years.
Eight hundred thousand years. Jesus Christ walked barely two thousand years ago, Egypt’s pharaohs four. Our first agricultural civilizations began ten thousand years ago; twenty thousand years before that, there were still Neanderthals alive. But the world has not seen atmospheric CO2 levels like today’s for eight hundred thousand years—and they are now approaching those of fifteen million years ago in the Miocene Epoch, when the world’s temperatures were 3° to 6°C warmer, its oceans acidic, polar ice caps diminished, and sea levels twenty-five to forty meters higher than today.43
This, too, is a global force to be reckoned with.
These four global forces—demographics, resource demand, globalization, and climate change—will shape our future and are recurring themes throughout this book. As each force comes up, the corresponding icon from the set that headed the four preceding sections will head the discussion. While I have described these forces separately they are, of course, intimately intertwined. Greenhouse gas comes from the exploitation of natural resources, which in turn tracks the global economy, which in turn relates partly to population dynamics, and so on.
A fifth force twining through the first four is technology. Fast global communications facilitate global financial markets and trade. Modern health care and pharmacology are shifting population age structures in the developing world. Advances in biotech, nanotech, and materials science affect demand for different resource stocks. Smart grids, solar panels, and geoengineering might combat climate change, and so on. Under our “No Silver Bullets” rule, technological advances like these are evaluated as enablers of or brake pads on the four global forces, rather than as an independent fifth force in their own right.
The thought experiment is begun. Its assumptions and ground rules are stated, its four overarching themes defined. Let us turn now to the first subject of scrutiny for the year 2050—ourselves.