
The Weather of the Future: Heat Waves, Extreme Storms, and Other Scenes from a Climate-Changed Planet - Heidi Cullen (2010)

Part I. Your Weather is Your Climate

Chapter 2. Seeing Climate Change in Our Past

It is not the strongest nor most intelligent of the species that survive; it is the one most adaptable to change.

—Charles Darwin, On the Origin of Species

Here’s something that most climate scientists won’t tell you about climate change: the Earth is going to be fine. As the history of climate change in this chapter shows, the Earth has gone through periods of warming and cooling in the past, and still it remains here. Unfortunately, you can’t say the same for the species that occupy Earth—including us. The fact that the Earth will be fine doesn’t necessarily mean that the human race will. The last 10,000 to 20,000 years have witnessed a period of dramatic growth in human civilization. Indeed, our growth during this time is unique among all species, but it has been highly dependent on the overall consistency of the climate.

In order to make predictions about the man-made climate change of the future and understand just how high the stakes are, we must first look to the natural climate change of the past. We humans see ourselves as highly adaptable creatures; indeed, whether or not we can endure the coming climate change hinges on our adaptability. However, this is not a given. As a species, we have never been forced to adapt to a global increase in temperature like the one we currently face. If the climate changes as projected, the future for humans will become a lot less certain; we will be much like other animals before us that proved unable to adapt to changing climates.

Take the woolly mammoth, the unofficial mascot of the ice ages. To see a woolly mammoth is to see a climate that no longer exists, and it’s been this way ever since people first started finding mammoth fossils.

Weighing 20,000 pounds, and with tusks 16 feet long, mammoths must have looked quite striking 20,000 years ago, as they strolled around what is now downtown Los Angeles. Obviously, this region was very different 20,000 years ago, and the climate of Earth was very different as well. It’s a time scientists refer to as the Last Glacial Maximum (LGM). Major areas of the Earth were locked in relentless winter, covered in massive sheets of ice that grew in frigid strongholds to the north. Los Angeles was not covered by ice, but it was definitely influenced by the cold elsewhere. Forests, fields, and even mountains were no match for these vast sheets of ice, and they had the battle scars to prove it. The ice was voracious, drinking up the oceans and drawing down the sea level by almost 400 feet. But all this was not a problem for the woolly mammoth—quite the contrary. Woolly mammoths were well adapted to the cold climate of the LGM, with shaggy hair more than 3 feet long to protect them from frigid winds and a 3-inch layer of blubber to keep them warm.

This vast stretch of ice, known as the Laurentide ice sheet, buried what is today Canada, New England, the Midwest, and parts of Washington, Idaho, and Montana under a layer of ice more than 1 mile thick.1 Just south of this vast ice sheet stretched a treeless tundra that was equally expansive. This was the summer home of the woolly mammoth. The mammoths nibbled on the coarse tundra grasses with their perfectly adapted, but probably not pearly white, teeth. Woolly mammoth teeth, in fact, were about 6 inches square, the biggest grinding teeth in the animal kingdom. Those teeth were perfectly suited to the mammoths’ ice age vegetarian diet.

Evidence of mammoths has been found throughout the northern hemisphere. Early humans settling along the North Sea coast, sometime between 6,000 and 8,000 years ago, also encountered skeletal remains of the woolly mammoth. The low sea level at the end of the last ice age provided an exposed shelf between the Netherlands and England that the mammoths roamed freely across. When the sea level rose again, rough surf would have crushed the exposed skeletons of the woolly mammoths, but their teeth were tough as nails, and these would have survived and eventually washed up along the shore. All along the North Sea, storm waves would have tossed woolly mammoth fossils up onto the beach, like seashells, for Vikings to find.

I can’t imagine what it must have been like finding a woolly mammoth tooth along the shore all those years ago. Such a discovery most certainly raised some tricky questions that would have been tough to answer at the time. An animal with teeth 6 inches wide? There was nothing walking around the North Sea coast at that time with teeth 6 inches wide. Had the North Sea settlers been able to ask their Stone Age ancestors who lived through the ice age 10,000 years earlier, these ancestors would have been able to explain everything. They had, in fact, hunted the woolly mammoth. But lacking a time machine, the early Viking settlers had to come up with their own story to explain the existence of these very large teeth.

On the basis of the size of the teeth, the Vikings calculated that the animal must have been more than 70 feet tall. In tribute they named their new home, in what is today Denmark and Germany, “Land of the Giants.” They assumed the woolly mammoths were the children of an enormous ice giant to the north who had once ruled all of Scandinavia. The legend went on to say that when the ice giant was killed, his blood made the sea level rise and drowned all his furry children with the big teeth. This explanation, preserved today in Icelandic sagas, is the earliest recorded notion pointing to the existence of an ice age, and actually it’s not that bad an explanation for what happened.

The early Vikings were some of the first people in recorded history to try to understand how and why climate change occurred. Perhaps one of the greatest misconceptions about climate change is the notion that studying it is something that began in the twentieth century. In fact, many of the first important discoveries about global warming were made during the 1800s.

A lot of people are surprised to learn that scientists have been working on the problem of global warming for well over 100 years. The key difference in the beginning, though, was that the scientists weren’t studying humanity’s role in the process. They were grappling with an idea that, for religious and cultural reasons, was dangerous at the time: perhaps the climate on Earth had not always been the same.

The foundation of climate science today rests on work done by these visionaries of the nineteenth century. What’s so impressive about these pioneers is that they were able to see climate in ways no one had ever seen it before. They were trying to find answers to fundamental questions: Why is the sky blue? How old is our planet? Why were woolly mammoth bones popping up in the La Brea Tar Pits in Los Angeles? And so on. These scientists were starting from scratch in building a body of evidence about the Earth’s climate. They had to frame the questions, devise the equipment, and then perform the experiments to come up with reproducible answers. Getting the planet to share its past is like pulling teeth. But, as it turns out, teeth had a lot to say.

In the 1800s, when scientists once again began finding 6-inch teeth scattered across North America and Europe and into Siberia, they wanted something a little better than a Viking myth about an ice giant. They wanted to use the tools of science to build a rigorous explanation that could stand the test of time. Ironically, they ended up proving that the Vikings weren’t too far off, at least with regard to the giant ice age.

In 1837 Louis Agassiz, a Swiss scientist, stood up before his colleagues at a conference in the Swiss town of Neuchâtel to present a theory suggesting that the Earth had indeed experienced an ice age. Like many others in his day, he had observed the glaciers of his native Switzerland and noticed the marks that these glaciers left behind: rocks with scratches and scars, mounds of debris called moraines that had been pushed up by glaciers, deep valleys, signs that large boulders had been carried long distances. Agassiz came to realize he was seeing classic signs of a process known as glaciation in places where there were no glaciers to be seen.

Agassiz was going up against an explanation that had come from the Bible. At the time, it was widely believed that The Great Flood was the only event with the power to do such heavy lifting.2 The story of Noah’s flood, with just a slight tweak, received almost unanimous support from the scientific community. The tweak had been provided by the great British geologist Charles Lyell, and it was required in order to overcome an inconsistency in the story: Lyell suggested that the big boulders dropped off in strange places had, in fact, been transported by icebergs.

But that still left the issue of the strange scars on the rocks. Interestingly, plenty of local villagers at the time had already come to their own conclusions about these scars. Having grown up among glaciers, they didn’t need a scientist to explain the origins of the strange scars. Throughout the towns and villages of Switzerland, it seems many people had already been convinced that the scratches and scars were the result of a flood of ice, not a flood of water as Lyell had suggested. They, in fact, had already come to accept the theory of a great ice age, just like the Vikings before them.

Despite the lukewarm reception of his presentation at the Swiss Society of Natural Sciences in Neuchâtel in 1837, Agassiz persisted. In 1840, he even published a book called Studies on Glaciers.3 With the help of numerous colleagues who had been convinced by his evidence and by the clarity of his argument, Agassiz fought hard to convince skeptics who clung to the theory of The Great Flood. Eventually, the overwhelming strength of the evidence won out. In the end, Agassiz had proved that there was a period of time when large areas of the Earth had been covered by ice sheets. Like the Swiss villagers, he had come up with the simplest and most consistent explanation. The ice age, he said, reached its maximum about 20,000 years ago, and then gave way to an eventual warming.

Let’s go back to the woolly mammoth for a moment. The rise and fall of the woolly mammoth is linked to the rise and fall of the ice ages. The rise began around 300,000 years ago, as the Earth underwent a transition to a cooler climate. The peak of the last glacial period, the LGM, was about 20,000 years ago. After that, over a span of about 12,000 years, much of the ice melted, the sea level rose almost 400 feet, and the temperature rose about 11°F. Fossil evidence suggests that at the peak of the LGM, woolly mammoths could be found across Europe, Asia, and North America. They were so well adapted to the cold that during the last ice age, parts of Siberia may have had an average population density of about sixty woolly mammoths to every 40 square miles. But then, as the climate changed around them, they simply died out. As scientists processed the significance of this connection, they had to invent a word to explain the phenomenon. The word is extinction.

The woolly mammoth, that icon of the ice age, also became an icon of extinction. Before their extinction was recognized, no one had supposed that a robust species could simply disappear. So Louis Agassiz will always be credited not only with his theory of a “great ice age” but also with discovering extinction.

Over the years, Agassiz’s theory of the ice age needed to be refined. The ice sheets were not as large as he had thought, and the ice age didn’t arrive as suddenly as he had thought. Most important, there wasn’t just one great ice age. In Scotland, plant fragments were found sandwiched between layers of glacial deposits. It became increasingly obvious that there had been not just one ice age but several large glaciations, one following another, separated by warm periods. Scientists came to understand that the Earth actually moved into and out of ice ages. And with that amazing discovery, a new crop of scientists began to work on a new theory they called climate change.

To grow a continental-scale ice sheet you need low temperatures. That much was clear. What wasn’t so clear was how the temperatures had been lowered enough to permit the growth of ice on such a massive scale.

One important hypothesis of how the planet regulated its temperature was put forth by the French mathematician and physicist Joseph Fourier in 1824.4 As a physicist, Fourier was interested in understanding some basic principles about the flow of heat around the planet. Specifically, he wanted to use the principles of physics to understand what sets the average surface temperature of Earth. It made perfect sense that the sun’s rays warmed the surface of the Earth, but this left a nagging question: when light from the sun reaches the surface of the Earth and heats it up, why doesn’t the Earth keep warming up until it’s as hot as the sun? Why is the Earth’s temperature set at roughly 59°F—the average temperature at its surface?

Fourier reasoned that there must be some balance between what the sun sends in and what the Earth sends back out, so he coined the term planetary energy balance, which is simply a way of saying that there is a balance between energy coming in from the sun and energy going back out to space. If the Earth continuously receives heat from the sun yet always has an average temperature hovering around 59°F, then it must be sending an equal amount of heat back to space. Fourier suggested that the Earth’s surface must emit invisible infrared radiation that carries the extra heat back into space. Infrared radiation (IR), like sunlight, is a form of light. But it’s a wavelength that our eyes can’t see.

This was a good idea, but when he actually tried to calculate the planet’s temperature using this effect, he got a temperature well below freezing. So he knew he must be missing something. To arrive at 59°F, the Earth’s average temperature, Fourier realized that he needed the atmosphere to pick up the slack. And he discovered a phenomenon he called the greenhouse effect, a process whereby gases in the Earth’s atmosphere trap certain wavelengths of radiation, not allowing them to escape back out to space. Like the glass in a greenhouse, these greenhouse gases let sunlight through on its way in from space, but intercept infrared light on its way back out.
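In modern notation, the balance Fourier was reaching for can be written down in one line. This is a reconstruction using physics established after his time (the Stefan–Boltzmann law dates from 1879), not his actual arithmetic. Treating the Earth as a sphere that reflects a fraction of incoming sunlight and radiates infrared from its whole surface:

$$
(1-\alpha)\,\frac{S}{4} \;=\; \sigma T^{4}
\qquad\Longrightarrow\qquad
T \;=\; \left[\frac{(1-\alpha)\,S}{4\sigma}\right]^{1/4} \;\approx\; 255\ \mathrm{K} \;\approx\; 0^{\circ}\mathrm{F}
$$

Here $S \approx 1{,}361$ watts per square meter is the sunlight arriving at Earth, $\alpha \approx 0.3$ is the fraction reflected back to space, and $\sigma$ is the Stefan–Boltzmann constant. The answer comes out roughly 59 Fahrenheit degrees colder than the observed average of 59°F; that gap is exactly the warming the greenhouse effect has to supply.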

In 1849, an Irish scientist, John Tyndall, was able to build on this idea. He had become obsessed with the glaciers he climbed while visiting the Alps on vacation. Like many other scientists at the time, he wanted to understand how these massive sheets of ice formed and grew. He brought his personal observations of glaciers into the laboratory in 1859, when, at the age of thirty-nine, he began a series of innovative experiments.

Tyndall was intrigued by the concept of a thermostat. We know thermostats today as devices that regulate the temperature of a room by heating or cooling it. Tyndall devised an experiment to test whether the Earth’s atmosphere might act like a thermostat, helping to control the planet’s temperature. He reasoned that it might help explain how ice ages had blanketed parts of the Earth in the past.

For his experiment, Tyndall built a device, called a spectrophotometer, which he used to measure the amount of radiated heat (like the heat radiated from a stove) that gases such as water vapor, carbon dioxide, or ozone could absorb. His experiment showed that different gases in the atmosphere had different abilities to absorb and transmit heat. Some of the gases in the atmosphere—oxygen, nitrogen, and hydrogen—were essentially transparent to both sunlight and IR, but other gases were in fact opaque: they actually absorbed the IR, as if they were bricks in an oven. Those gases include carbon dioxide (CO2) and also methane, nitrous oxide, and water vapor. These greenhouse gases are very good at absorbing infrared light. They spread heat back to the land and the oceans. They let sunlight through on its way in from space, but intercept IR on its way back out. Tyndall knew he was on to something. The fact that certain gases in the atmosphere could absorb IR implied a very clever natural thermostat, just as he had suspected. His top four candidates for a thermostat were water vapor, without which he said the Earth’s surface would be “held fast in the iron grip of frost”; methane; ozone; and, of course, carbon dioxide.5

Tyndall’s experiments proved that Fourier’s greenhouse effect was real. They proved that nitrogen (78 percent) and oxygen (21 percent), the two main gases in the atmosphere, are not greenhouse gases, because a molecule of each consists of only two identical atoms and so cannot absorb or radiate energy at IR wavelengths. However, water vapor, methane, and carbon dioxide, each of which is a molecule with three or more atoms, are excellent at trapping IR radiation. They absorb about 95 percent of the long-wave or IR radiation emitted from the surface. So, even though there are only trace amounts of these gases in the atmosphere, a little goes a long way toward making it really tough for all the heat to escape back into space. In other words, greenhouse gases in the atmosphere act as a secondary source of heat, in addition to the sun. And the greenhouse gases provide the additional warming that Fourier needed to explain that average temperature of 59°F.

Thanks to Tyndall, it is now accepted that visible light from the sun passes through the Earth’s atmosphere without being blocked by CO2. Only about 50 percent of incoming solar energy reaches the Earth’s surface: about 30 percent is reflected by clouds and the Earth’s surface (especially in icy regions), and about 15 percent is absorbed by water vapor. The sunlight that makes it to the Earth’s surface is absorbed and reemitted at a longer wavelength, IR, that we cannot see, like heat from an oven. Carbon dioxide (like other heat-trapping gases, such as methane and water vapor) absorbs the IR and warms the air, which in turn warms the land and water below it. More carbon dioxide means more warming. This is where the concept of a natural thermostat becomes very powerful—mess with the amount of CO2 in the atmosphere, and you’re resetting the thermostat of the planet.

The idea was good, even profound, but the term greenhouse effect was not entirely accurate. Real greenhouses stay warm without a heater mostly because the sun’s rays shine in and the glass keeps the warmed air from escaping; the atmosphere, by contrast, warms by absorbing and reradiating infrared light, a much more sophisticated process. Fourier had figured out something very important. He had figured out that the sun is not our only source of heat. The atmosphere, in fact, is a very powerful backup generator. This was yet another discovery on the road to understanding the relationship between temperature and carbon dioxide, a relationship that turns out to have profound implications for our climate.

Svante Arrhenius (1859–1927), a Swedish physicist and chemist, was another scientist who was smitten with ice ages. He took Tyndall’s thermostat mechanism and ran with it, exploring whether the amount of CO2 in the atmosphere could be fiddled with by an event such as a volcanic eruption. According to Tyndall’s experiments, the additional carbon dioxide released by the volcano could conceivably raise the Earth’s temperature, and Arrhenius wanted to see if that was actually true.

We refer to events or processes that result in changes to the climate as forcings. A volcanic eruption is an example of a natural forcing. A forcing often comes hand in hand with a feedback: a response in which the initial change triggers an effect that produces more (or less) of that same change. In other words, if a forcing is the event that creates change, then a feedback amplifies or damps that change. But keep in mind that a positive feedback is not positive in the sense of being good. Positive refers specifically to the direction of change, not to the desirability of the outcome. A negative feedback tends to reduce or stabilize a process, whereas a positive feedback tends to increase or magnify it.
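One standard textbook way to formalize the distinction (a modern convention, not the book’s own notation) is to write the final change as the no-feedback response divided by a feedback term:

$$
\Delta T \;=\; \frac{\Delta T_{0}}{1 - f}
$$

where $\Delta T_{0}$ is the temperature change the forcing alone would produce and $f$ is the feedback factor. A positive feedback ($0 < f < 1$) magnifies the response; with $f = 0.5$, for example, the final change is double the initial one. A negative feedback ($f < 0$) shrinks it.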

Maybe, Arrhenius thought, this positive feedback mechanism was responsible for plunging the planet into an ice age. If the atmosphere were to dry out for some reason, the decreasing water vapor would hold less heat and the Earth would cool. Since cooler air holds less water vapor, the atmosphere would tend to dry more, amplifying the cooling. In addition, cooler temperatures would generally lead to increases in snow and ice, and so to yet another positive feedback. When snow and ice cover a region, such as the Arctic or Antarctica, their white, light-reflecting surface tends to bounce sunlight back out to space, helping to further reduce temperature. If regions covered by snow and ice expanded over more of North America and Europe, the climate would cool further while also increasing the ice sheets. Start with a drop in carbon dioxide, continue with a drop in temperature, add some snow and ice, and you’ve made an ice age.

Arrhenius thought his theory was quite solid, but he wanted to prove it mathematically. So he set about a series of grueling calculations that attempted to estimate the temperature response to changing levels of carbon dioxide in the atmosphere. These may have begun as “back of the envelope” calculations, but in 1896 he was confident enough to publish the work for his colleagues to read.6 The end result of all of it was one simple number: 8°F.

That number represented roughly how much Arrhenius thought the Earth’s average temperature would drop if the amount of CO2 in the atmosphere fell by half. Once you factor in the positive feedbacks of water vapor, snow, and ice, an ice age seemed like a reasonable outcome. The only thing Arrhenius still needed was a mechanism for tinkering with atmospheric carbon dioxide, turning down the natural thermostat. And that is what led, in part, to the discovery of the carbon cycle.
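Modern treatments condense Arrhenius’s grueling calculations into a simple logarithmic rule for the extra heating (the radiative forcing) produced by a change in CO2; the coefficient below is a present-day estimate, not his:

$$
\Delta F \;=\; \alpha \,\ln\!\left(\frac{C}{C_{0}}\right), \qquad \alpha \approx 5.35\ \mathrm{W/m^{2}}
$$

where $C_{0}$ and $C$ are the starting and ending CO2 concentrations. Because $\ln(1/2) = -\ln 2$, halving CO2 cools the planet by about as much as doubling it warms the planet, which is why the same 8°F figure can describe both an ice age and a greenhouse future.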

Arrhenius asked a colleague, Arvid Högbom, to help him figure out how much the carbon dioxide level in the atmosphere might actually change. Högbom had compiled estimates of how carbon dioxide flows through various parts of the planet, including emissions from volcanoes, absorption by the oceans, and so forth. This carbon cycle is a fundamentally important concept. If carbon dioxide really was the natural thermostat that scientists had been searching for, then the next crucial step would be to figure out how CO2 cycles into and out of the ocean, the land, the atmosphere, and living matter such as plants and trees.

It turns out that carbon (the C in carbon dioxide) has the ability to cycle among a few different reservoirs. Relatively small amounts of carbon reside in the atmosphere, the ocean surface, and vegetation. A slightly larger amount is held in soils, and a much larger amount resides in the deep ocean. The biggest reservoir can be found in rocks and sediments. Carbon takes different chemical forms in different reservoirs. In the atmosphere, it is the gas carbon dioxide (CO2).

The carbon cycle can be thought of, metaphorically, as a kind of reincarnation. This cycle is the great natural recycler of carbon atoms. The same carbon atoms in your body today have been used in countless other molecules for millions, even billions, of years. The wood burned in a fireplace last winter produced CO2 that found its way into a tomato plant this spring. The borders are wide open, and carbon moves easily from one reservoir to another. The atoms pair up, get into various substances for a while, come out of those, and go somewhere else; it is a continuous, ongoing cycle.

Here’s a carbon cycle scenario. In phase one, volcanoes and hot springs transfer carbon from deep below the Earth’s crust to the atmosphere. In phase two, the carbon dioxide is scrubbed from the atmosphere by a process called chemical weathering. Basically, rainwater combines with CO2 in the atmosphere to form a weak acid, carbonic acid. That weak acid falls as rain and then chemically reacts with rocks, releasing carbon that eventually makes its way into the ocean, where it is locked up in the shells of marine plankton.7 When the plankton die, they sink to the bottom, and their shells eventually turn into rock.
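In simplified textbook form, with calcium silicate standing in for real rocks, the chemistry of that scrubbing runs as follows:

$$
\mathrm{CO_{2} + H_{2}O \;\rightleftharpoons\; H_{2}CO_{3}} \qquad \text{(carbonic acid forms in rainwater)}
$$
$$
\mathrm{CaSiO_{3} + 2\,CO_{2} + H_{2}O \;\longrightarrow\; Ca^{2+} + 2\,HCO_{3}^{-} + SiO_{2}} \qquad \text{(acidified rain weathers rock)}
$$
$$
\mathrm{Ca^{2+} + 2\,HCO_{3}^{-} \;\longrightarrow\; CaCO_{3} + CO_{2} + H_{2}O} \qquad \text{(plankton build shells at sea)}
$$

Adding the steps up, each unit of calcium carbonate buried on the seafloor removes one molecule of CO2 from the atmosphere for the long term.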

Here, the scenario gets really interesting. Experiments show that rates of chemical weathering are influenced by three environmental quantities: temperature, precipitation (rain and snow), and plant matter. Temperature, precipitation, and vegetation all act in a mutually reinforcing way to affect the rate of chemical weathering. The higher the temperature, the faster a rock is broken down by chemical weathering. Higher precipitation raises the level of groundwater held in soils, where it combines with CO2 to form carbonic acid and drives the weathering process more rapidly. Remember that temperature and precipitation are linked; the amount of water vapor that air can hold rises with temperature. Likewise, the amount of vegetation is closely tied to temperature and precipitation. More rainfall means more vegetation, and more vegetation means more carbon stored in the soil.

So, carbon becomes the secret ingredient in adjusting the natural thermostat and changing the Earth’s climate. The beauty of this mechanism is that it’s a big loop. On the one hand, the speed of chemical weathering is tuned to the state of the Earth’s climate. On the other hand, the climate is tuned to the rate at which CO2 is pulled out of the atmosphere by chemical weathering. This is an example of a very sophisticated feedback loop.

Ultimately, chemical weathering is the most likely explanation for Earth’s habitability over most of the 4.6 billion years of its existence. Any factor that heated Earth during any part of its history caused chemical weathering rates to increase. This increase, in turn, drew CO2 out of the atmosphere at faster rates, and eventually resulted in a cooling to offset the warming. On the flip side, any factor that cooled Earth set off the opposite sequence of events. Chemical weathering constantly acts to moderate long-term climate changes by adjusting the CO2 thermostat as needed. If positive feedbacks help push our climate into an ice age, chemical weathering helps to push us out of one.
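The stabilizing character of the loop is easy to see in a toy calculation. The numbers and the response coefficient below are invented purely to illustrate the direction of the dynamics, not to model the real Earth:

```python
# Toy weathering thermostat: warmth speeds weathering, weathering
# draws down CO2, and the CO2 drawdown cools the planet back toward
# equilibrium. Illustrative values only.
T_EQ = 59.0   # equilibrium surface temperature, deg F
k = 0.5       # made-up weathering response per millennium

T = 70.0      # start the planet off too warm
for millennium in range(1, 9):
    cooling = k * (T - T_EQ)   # weathering-driven CO2 drawdown
    T -= cooling
    print(f"millennium {millennium}: T = {T:.1f} F")
# T relaxes toward 59 F: a negative, stabilizing feedback.
```

Start the same loop at 48°F instead and the sign flips: weathering slows, volcanic CO2 accumulates, and the temperature climbs back. Either way, the thermostat pulls toward the middle.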

As a result of chemical weathering, most of Earth’s carbon is tied up below the surface in rocks and in fossil deposits, including coal, oil, and natural gas. But now, of course, we humans are taking the coal, oil, and natural gas out of the ground and burning it, transferring long-stored carbon to the atmosphere. Nature’s history tells us what to expect.

We tend to think of man-made global warming as a modern concept, something that has come into vogue in the last twenty years or so, but in reality this idea is more than 100 years old. As noted above, the notion that the global climate could be affected by human activities was first put forth by Svante Arrhenius in 1896. He based his proposal on his prediction that emissions of carbon dioxide from the burning of fossil fuels (i.e., coal, petroleum, and natural gas) and other combustion processes would alter atmospheric composition in ways that would lead to global warming. Arrhenius calculated how much the temperature of the Earth would drop if the amount of CO2 in the atmosphere was halved; he also calculated the temperature increase to be expected from a doubling of CO2 in the atmosphere—a rise of about 8°F.

More than a century later, estimates from state-of-the-art climate models performing the same calculation, the increase in temperature due to a doubling of the CO2 concentration, show that Arrhenius was in the right ballpark. The Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) synthesized the results from eighteen climate models used by groups around the world to estimate climate sensitivity and its uncertainty. The report estimated that a doubling of CO2 would lead to an increase in global average temperature of about 5.4°F, with an uncertainty spanning the range from about 3.6°F to 8.1°F. It’s amazing that Arrhenius, doing his calculations by hand and with very few data, came so close to the much more detailed calculations that can be done today.

Arrhenius’s calculations, however, did have some shortcomings. For example, in estimating how long it would take for the CO2 concentration in the atmosphere to double, he assumed that it would rise at a constant rate. With about 1.6 billion people on the planet in 1895 and with relatively small use of fossil fuels, Arrhenius predicted that it would take about 3,000 years for the atmospheric CO2 concentration to double. Unfortunately, when scientists today factor in the quadrupling of world population since then and the increasing demand for energy, doubling is now projected before the end of this century unless substantial cutbacks in emissions are adopted by nations around the world. So, technically, Arrhenius was off by about 2,800 years. (Another of his doubtful conclusions was his firm belief that a warmer world would be a good thing.)
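The arithmetic behind that 2,800-year gap is short enough to check in a few lines. The numbers below are rough, illustrative values drawn from this chapter, not precise historical data:

```python
# Back-of-the-envelope check of CO2 doubling times.
preindustrial = 280.0        # ppm, the pre-1750 baseline
doubled = 2 * preindustrial  # 560 ppm

# Arrhenius assumed a constant rise spread over ~3,000 years:
implied_rate = (doubled - preindustrial) / 3000.0
print(f"implied 1890s rate: {implied_rate:.2f} ppm/yr")   # ~0.09 ppm/yr

# Around 2008 the level was ~385 ppm, rising roughly 2 ppm per year:
years_left = (doubled - 385.0) / 2.0
print(f"years from 2008 to doubling: {years_left:.0f}")   # ~88, i.e., before 2100
```

At a steady 2 ppm per year, the doubling arrives around 2096; any acceleration brings it sooner.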

In Arrhenius’s time, the impacts of global warming were mainly left to future investigation—the majority of scientists still needed to be convinced that the concentration of CO2 in the atmosphere could vary, even over very long timescales, and that this variation could affect the climate. Scientists at the time were focused more on trying to understand the gradual shifts that took place over periods a thousand times longer than Arrhenius’s estimate: those that accounted for alternating ice ages and warm periods and, in distant times (more than 65 million years ago), for the presence of dinosaurs. They couldn’t even begin to wrap their minds around climate change on a human timescale of decades or centuries. Nobody thought there was any reason to worry about Arrhenius’s hypothetical future warming, which he suggested would be caused by humans and their burning of fossil fuel. It was an idea that most experts at the time dismissed. Most scientists of the era believed that humanity was simply too small and too insignificant to influence the climate.

Fast-forward to the mid-1950s, and enter Charles David Keeling, a brilliant and passionate scientist who was then beginning his research career at Caltech. Keeling had become obsessed with carbon dioxide and wanted to understand what processes affected fluctuations in the amount of CO2 in the atmosphere. Answering this question required an instrument that didn’t exist, the equivalent of an ultra-accurate “atmospheric Breathalyzer.” So Keeling built his own instrument and then spent months tinkering with it until it came as close to perfect as he could get at measuring the concentration of CO2 in canisters of gas with known concentrations.

Keeling tried his instrument out by measuring CO2 concentrations in various locations around California and then comparing these samples in the lab against calibration gases. He began to notice that the samples he took in very pristine locations (i.e., spots where air came in off the Pacific Ocean) all yielded the same number. He suspected that he had identified the baseline concentration of CO2 in the atmosphere: a clear signal that wasn’t being contaminated by emissions from factories and farms or by the uptake of forests and crops.

With this instrument in hand, Keeling headed to the Scripps Institution of Oceanography to begin what is perhaps the single most important scientific contribution to the discovery of global warming. Keeling was on a mission to find out, once and for all, if CO2 levels in the atmosphere were increasing. He would spend the next fifty years carefully tracking CO2 and building, data point by data point, the finest instrumental record of the CO2 concentration in the atmosphere, generating a time history that is now known by scientists as the Keeling curve.

The Keeling curve is a monthly record of atmospheric carbon dioxide levels that begins in 1958 and continues to today. The instrument Keeling used, an infrared gas analyzer, works by passing infrared (IR) light through a sample of air and measuring the amount of IR absorbed by the air. Because carbon dioxide is a greenhouse gas, Keeling knew that the more IR absorbed by the air, the higher the concentration of CO2 in the air. Because CO2 is present only in trace amounts, the instrument reports its concentration in parts per million (ppm).
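The principle underneath the measurement is the Beer–Lambert law, stated here in its standard textbook form (the real instrument’s calibration against reference gases was considerably more involved):

$$
I \;=\; I_{0}\,e^{-\varepsilon c \ell}
\qquad\Longrightarrow\qquad
c \;=\; \frac{1}{\varepsilon \ell}\,\ln\!\left(\frac{I_{0}}{I}\right)
$$

where $I_{0}$ is the infrared intensity entering the air sample, $I$ is the intensity that makes it through, $\ell$ is the path length, and $\varepsilon$ is the absorption strength of CO2 at the chosen wavelength. Measure how much light goes missing, and the concentration $c$ falls out directly.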

Keeling knew from his travels around California that he needed to make his measurements at a remote location that wouldn’t be contaminated by local pollution. That’s why he settled on Hawaii. Hawaii’s Big Island is the site of the volcano Mauna Loa, and Keeling set up his CO2 instrument near the top of Mauna Loa. Isolated in the middle of the Pacific Ocean and at more than 11,000 feet above sea level, the top of the Mauna Loa volcano is an ideal location for measurements of atmospheric carbon dioxide that reflect global trends rather than local influences such as factories or forests, which may boost or lower the carbon dioxide level in their vicinity. The sensors were positioned so that they sampled the incoming ocean breeze well above the thermal inversion layer; thus the air was not affected by nearby human activities, vegetation, or other factors on the island. Volcanoes, of course, are potentially a big source of CO2, but Keeling took this into account when positioning his instrument, locating it upwind of Mauna Loa’s vent and installing sensors to give alerts if the winds shifted.

What he found was both disturbing and fascinating, creepy and profound. Keeling, using his Mauna Loa measurements, could see that with each passing year CO2 levels were steadily moving upward. As the years passed and the Mauna Loa data accumulated, Keeling’s CO2 record became increasingly impressive, showing levels of carbon dioxide that were noticeably higher year after year after year. The first instrumental measurements indicated a CO2 concentration of 315 ppm in 1958. The slow rise in its concentration over the first several years was enough to prompt a 1965 report to President Johnson from a panel of the President’s Science Advisory Committee, confirming the early prediction that CO2 would build up in the atmosphere and indicating that global warming should indeed be expected to follow. This was the first time a document discussing global warming ended up in front of the president of the United States. It would not be the last.

In 2008, just over fifty years after Keeling started his observations, the concentration at Mauna Loa had reached 385 ppm. Keeling’s measurements thus provided solid evidence that the atmospheric CO2 concentration was increasing. If anything proved that Arrhenius had been on to something, it was these data.

One of the most striking aspects of the Keeling curve is a small CO2 wiggle that takes place every year. For every little jump up, there is a little dip back down, so that the whole curve looks saw-toothed. This wiggle happens like clockwork and is timed with the seasons. In the northern hemisphere during fall and winter, plants and leaves die off and decay, releasing CO2 back into the atmosphere and causing a small spike. And then during the spring and summer, when plants are taking CO2 out of the atmosphere in order to grow, carbon dioxide levels drop. Hawaii, along with most of the planet’s landmass, is situated in the northern hemisphere, so the seasonal trend in the Keeling curve is tracking the seasons in the northern hemisphere. The Keeling curve proved many important things at once. It proved that CO2 levels in the atmosphere can indeed change and that they can change on very short timescales.
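The shape of the curve is easy to make concrete. The sketch below builds a synthetic Keeling-style record from exactly the two ingredients just described: a steady rise fitted to the 315 ppm (1958) and 385 ppm (2008) endpoints mentioned in this chapter, plus a seasonal wiggle. It is an illustration, not Keeling’s data; the real rise actually accelerated over the decades:

```python
# Synthetic Keeling-style curve: long-term rise plus seasonal wiggle.
import numpy as np
import matplotlib.pyplot as plt

years = 1958 + np.arange(50 * 12) / 12.0             # monthly steps, 1958-2008
trend = 315.0 + 1.4 * (years - 1958)                 # ~315 ppm -> ~385 ppm
seasonal = 3.0 * np.cos(2 * np.pi * (years % 1.0))   # ~6 ppm peak-to-trough sawtooth
co2 = trend + seasonal

plt.plot(years, co2, linewidth=0.8)
plt.xlabel("year")
plt.ylabel("CO$_2$ concentration (ppm)")
plt.title("Synthetic Keeling-style curve (illustrative)")
plt.show()
```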

Keeling’s record was the icing on the cake, and he rightly stands with Agassiz, Tyndall, and Arrhenius among the giants of climate science. He helped prove the reality of global warming by providing the data upon which the pioneering theories of Tyndall and Arrhenius could finally rest. As is the case in research science, Keeling’s painstaking measurements have been verified and supplemented by many others. Measurements at about 100 other sites have confirmed the long-term trend shown by the Keeling curve, although no site has a record as long as Mauna Loa’s. Other scientists have also extended the Keeling curve farther back in time, using measurements of CO2 in air trapped in bubbles in polar ice and in mountain glaciers. Ice cores collected from Antarctica and Greenland can be used to reconstruct climate hundreds of thousands of years ago, and they show that the preindustrial amount of CO2 in the atmosphere (the level from A.D. 1000 to 1750) was about 280 ppm, about 105 ppm below today’s value. The record indicates that the concentration of CO2 has increased about 36 percent in the last 150 years, with about half of that increase happening in the last three decades. In fact, the CO2 concentration is now higher than any seen in at least the past 800,000 years, and probably higher than any in the many millions of years before the earliest ice core measurement.

Over the past century, the evidence has piled up in support of Arrhenius’s explanation of global warming. As the evidence accumulates with each passing year, what was once a fringe hypothesis that sprang from the mind of a single scientist in Sweden is now part of the bedrock of scientific accomplishments. Unfortunately, scientific discoveries are not always good news. And there is a nagging fear among scientists that we’ll prove ourselves to be not so different from the woolly mammoth, the symbol of a climate that no longer exists.