The Weather of the Future: Heat Waves, Extreme Storms, and Other Scenes from a Climate-Changed Planet - Heidi Cullen (2010)

Part I. Your Weather is Your Climate

Chapter 3. The Science of Prediction

If I have seen further, it is by standing on the shoulders of giants.

—Sir Isaac Newton

Prediction is an odd thing. Depending on your personality, predictions are a source of either comfort or anxiety. In one broad stroke, they have the power to reassure or destabilize. Predictions often give us an illusion of control in situations that are inherently out of our control. Nothing exemplifies this better than our relationship to weather forecasts. We can’t stop the weather, but we can at least prepare for it. Ultimately, this preparation is what the science of prediction—be it climate or weather prediction—is all about.

A certain pleasure comes from knowing that meteorologists are generally right about the forecast, and a certain disappointment comes from finding out they got it wrong. Over the last fifty years, we have grown accustomed to the idea that the weather can be “predicted,” so it feels like a violation when a forecast turns out to be incorrect.

A big part of believing predictions like those in this book has to do with trusting and understanding the underlying data and models. Model simulations are the closest thing that scientists have to a crystal ball, and as a result data are the lifeblood of every prediction that weather and climate scientists make. At this point, weather prediction is so ingrained in our lives that we’ve stopped being skeptical about it. Even though the sun is shining, our experience tells us that we should trust the man or woman in front of the map who’s gesturing at swirling shades of green behind it. Unfortunately, the same cannot be said for climate prediction; but as we will see here, the two are really not all that different.

Although weather prediction is now embedded in our psyche, the practice as we know it is only about a century old. What we need to understand is that the mechanics of weather prediction are very similar to those of climate prediction. So if we’re comfortable trusting local forecasters’ predictions about the weather, we should probably think about trusting the predictions coming out of the country’s climate laboratories.

The modern-day weather forecast originated on the battlefields of World War I. During that war, a young Quaker ambulance driver, Lewis Fry Richardson, fascinated by the possibility of seeing the weather before it happened, laid the groundwork for the daily weather forecasts that we all live by today.1 Richardson, a true giant in weather forecasting, was also a pioneer in a branch of mathematics called numerical analysis. Numerical analysis looks for ways to find approximate solutions to problems that are too complicated to solve exactly. It also serves as a bridge between people and computers. One of the key differences between people and computers is that computers can do arithmetic lightning fast. Humans, on the other hand, can come up with elegant mathematical equations to represent how the world works. Despite their elegance, those mathematical equations are hard to solve, and that’s where numerical analysis comes in handy. Without it, computer models would not have been possible.

But before there were computers, there was Richardson. He was committed to the idea of generating the very first weather forecast using seven elegant mathematical equations developed by another giant in the field of meteorology, the Norwegian scientist Vilhelm Bjerknes. By the time of World War I, Bjerknes had come up with equations capable of describing the behavior of the atmosphere. The state of the atmosphere at any point could be described by seven values: (1) pressure, (2) temperature, (3) density, (4) water content, and wind—(5) east, (6) north, and (7) up. In essence, Bjerknes presented Richardson with seven complex calculus problems in need of transformation.

Richardson knew that the differential equations could be approximated and simplified using numerical analysis. And once the equations were simplified, he figured that he should be able to generate a weather forecast for central Europe. To do this, he divided the entire atmosphere into discrete columns measuring about 3° east–west and about 125 miles north–south; this division works out to about 12,000 columns on the surface, each sliced into five vertical layers. If he calculated the value of each of the seven variables for each cell in the two columns over central Europe, he figured he’d have the first battlefield weather forecast.
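
The scale of the bookkeeping is easy to appreciate with a quick tally. The Python sketch below is purely illustrative; the grid spacing and the Earth’s pole-to-pole distance are rough figures taken from the description above, not Richardson’s exact scheme:

```python
# Rough tally of the bookkeeping implied by Richardson's grid: columns about
# 3 degrees wide east-west and 125 miles tall north-south, five vertical
# layers, seven variables per cell. All figures are approximate.

EARTH_MERIDIAN_MILES = 12_450                      # pole to pole, roughly

cols_around_a_circle = 360 // 3                    # 3-degree east-west spacing
rows_pole_to_pole = EARTH_MERIDIAN_MILES // 125    # 125-mile north-south spacing
layers = 5                                         # vertical divisions
variables = 7                                      # pressure, temperature, density,
                                                   # water content, 3 wind components

surface_columns = cols_around_a_circle * rows_pole_to_pole
print(f"surface columns: {surface_columns:,}")     # about 12,000, as in the text
print(f"values to track: {surface_columns * layers * variables:,}")  # ~400,000
```

Every one of those several hundred thousand values had to be advanced step by step, which hints at why, as we will see, a six-hour forecast cost Richardson six months of hand calculation.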

Of course, at that time, Richardson did all his work by hand, in “offices” that can most charitably be described as airy—temporary rest camps with a view of the front lines of the fighting. Computers capable of doing the math were still a far-off dream, so with just pencil and paper, this driver with the Friends Ambulance Unit in France tackled the problem of weather prediction. Richardson himself was the computer. His forecast for central Europe was no small undertaking; he later wrote, “The scheme is complicated because the atmosphere is complicated.” Even the simplified procedure required a maddening amount of arithmetic. There was so much arithmetic to be done that calculating a weather forecast just six hours out in time required about six months of work—rather late to be considered an actual forecast.

But Richardson was undaunted, expressing his dream that “someday in the dim future it will be possible to advance the computations faster than the weather advances.” Of course, when he wrote these words, Richardson was imagining people doing the calculating. In the not too distant future, artificial computers would easily outrun time and see the future before it happened.

Unfortunately, the time it took to grind out the forecast calculations wasn’t Richardson’s only problem. The initial conditions he used to start the calculation were both incomplete and imprecise. He just didn’t have all the observational data he needed to fully represent the physical state of the atmosphere. As a result, the first official weather forecast went down in history as a total bust, and with that bust came one of the cardinal rules of weather and climate prediction: your forecast is only as accurate as your data.

Still, though the forecast itself was off, much of what Richardson proposed was right. And luckily for all of us who count on reliable weather forecasts today, Richardson was brave enough to publish his ideas. However, he had to find his manuscript first—he had lost the sole copy during the Battle of Champagne in April 1917. He discovered it months later under a heap of coal. The book, eventually published in 1922, was called Weather Prediction by Numerical Process. And what at first appeared to be nothing more than a failed weather forecast is now widely considered one of the most profound books about meteorology ever written. Richardson had come up with a way to see into the future. But he couldn’t do it alone. He needed computers.

Upon returning from the war, Richardson eventually quit meteorology when he realized that his work was being used for military purposes. A committed pacifist, this gentle giant actually destroyed some of his research to prevent it from being used by the military. He spent much of the rest of his life applying mathematics to the understanding of the causes of war. But as time went on and the field of theoretical meteorology came of age, Richardson’s early vision of a weather forecast was fully realized. Computers were being developed, and in 1950 the first successful numerical weather prediction was performed by a team based at the Institute for Advanced Study in Princeton, New Jersey. By the 1950s routine weather forecasts were being produced; these used very simple models that did not take into account variables such as radiation and so led to some fairly large errors.

Yet in spite of these shortcomings, the computers proved very effective at predicting the weather, especially as more advanced forms of data collection fed more accurate information into the models. Today, the North American Mesoscale (NAM) model developed by the National Weather Service (the model The Weather Channel uses for its forecasts) takes about ninety minutes to ingest all the data (those very important initial conditions), and the actual computer calculations that provide the weather forecast out to eighty-four hours (3.5 days) take less than another ninety minutes. So, give a model three hours, and it’ll give you the weather for the entire country for the next three and a half days.

As weather forecasts became more routine and forecasters’ skill increased, scientists began to look for a new challenge; they began to look farther out in time. The goal was to build a model that represented the climate system. This was no small task. Weather models are concerned only with what’s happening in the atmosphere. The atmosphere has a memory of roughly one week; that’s why your local weather forecast goes out only about a week.

Climate models, however, needed to include much more. Scientists had to connect their mathematical version of the atmosphere to mathematical versions of the oceans, the land surface, and sea ice and biology. This was a vast expansion of weather prediction. And so, in the late 1940s, scientists, many of them meteorologists, set out to derive the mathematical equations that would describe the rest of the planet. They were building a computer model that would serve as a planetary stunt double. It would be an entirely new way of looking even farther into the future.

Under the direction of Joseph Smagorinsky at the U.S. Weather Bureau in Washington, D.C., the work started with basic physics equations of fluids and energy, and then kept building from there. Syukuro Manabe, a Japanese meteorologist, arrived in the United States from the University of Tokyo in 1958 to help Smagorinsky. He began work on an atmospheric model that would include the basics: winds, rain, snow, and sun. He and Smagorinsky also included the greenhouse effect caused by both carbon dioxide and water vapor. This would allow them to eventually test what increased carbon dioxide would do to the climate system.

In the meantime, building this “twin Earth” required understanding the nitty-gritty of how the world works. Manabe found himself in the library researching topics such as how different soils absorb water. By 1965 he and Smagorinsky had developed a three-dimensional model, which solved the basic equations for the atmosphere and was simple enough that the equations could be calculated efficiently. Still, it’s important to keep in mind that this early model, and others, had no geography: no land and no oceans. Everything was averaged over bands of latitude, with continents and oceans mixed together to form a swamp that exchanged moisture with the atmosphere above it but was unable to absorb heat. All in all, the atmosphere generated by these models looked decent. The model output showed a realistic layered atmosphere, as well as a zone of rising air near the equator, and a subtropical band of deserts.

As the power of computers increased, climate modeling groups began popping up around the world. By the mid- to late 1960s, weather prediction models were already quite accurate at forecasting the weather three days in advance, and the field of meteorology was entering a more mature, operational phase. Climate models, in turn, came to stand squarely on the shoulders of weather prediction models. A good weather forecast was of tremendous importance to the economy, and as a result the field of weather prediction began to receive more funding. There was a concerted push to improve the data being used to initialize the models. The use of spy satellites for weather “reconnaissance” had been proposed as early as 1950. By 1960, the Department of Defense had used classified spy satellite technology to launch the first weather satellite. By 1969, the Nimbus-3 satellite was helping to improve weather forecasts: its infrared (IR) detectors could measure the temperature of the atmosphere at various heights all over the world. Ironically, given that Richardson was a pacifist, the science of weather prediction was benefiting from money and technology that originated in the military.

Even with the ongoing improvements in weather data and computer technology, the emerging field of climatology was struggling to avoid the old adage about computers, “Garbage in, garbage out.” When trying to represent global climate, scientists encountered a mind-bogglingly complex system. This was an enormous intellectual challenge. In addition to an atmosphere, climate models include land surfaces, oceans, sea ice, and hydrology, all of which made climatology much more difficult for the primitive computers, and for the scientists crunching the numbers. The climate models also needed to run for much longer: instead of the few days needed for a weather forecast, the scientists were trying to simulate climate over decades, centuries, and in some cases even thousands of years. These scientists were tackling an immense problem. They were, in a sense, building the Earth from scratch. But along the way they were coming to understand important differences between predicting the weather one day ahead and predicting the climate 100 years ahead.

As Richardson learned the hard way, good data are important. And in his case, not having the precise starting point, or initial conditions, of the atmosphere took an otherwise great weather forecast and put it on the road to ruin. This dependence on initial conditions showed just how valuable good data are: they enable useful forecasts to go out a week instead of only a few days. Through advances in technology, scientists were able to enhance their data and thus extend their forecasts, creating predictions that were more accurate and less vulnerable to error than ever before.2

Interestingly, climate models and weather models are often one and the same. But while climate models simulate actual weather, their results are analyzed differently from those of weather models. Climate prediction is not nearly as dependent on initial conditions as weather prediction is. In other words, the climate at the end of this century won’t care very much about the weather at the beginning of this century. Climate is not nearly as chaotic as weather (for example, we can easily predict that July will be hotter than January). Climate model output is often analyzed by studying the season-to-season, year-to-year, and even decade-to-decade evolution of the climate. Unlike weather forecast models, climate models never attempt to predict precisely what a single day will look like. Instead, they look at how the statistics of weather change.

This is a very important distinction between weather and climate models: for climate forecasts, the initial conditions in the atmosphere are not as important as the external forcings that have the ability to alter the character and types of weather (i.e., the statistics or what scientists would call the “distribution” of the weather) that make up the climate.

These forcings include, for instance, the Earth’s distance from the sun; how many trees are growing on the surface of the Earth; and, of course, how much carbon dioxide is in the atmosphere. You can’t use models to simulate changes in the climate unless you know what will happen to the forcings.
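
A toy numerical experiment makes this weather-versus-climate distinction concrete. In the Python sketch below (an invented statistical caricature, not a real model), two “weather” runs share the same slow warming forcing but start from very different initial conditions; their day-to-day values disagree almost immediately, while their decade averages end up nearly identical:

```python
# A toy illustration, not a real climate model: two "weather" runs share the
# same slow warming trend (the forcing) but start from different initial
# conditions. Day-to-day values differ wildly; decade averages agree.
import numpy as np

days = 365 * 30                          # thirty years of daily "weather"
forcing = np.linspace(0.0, 2.0, days)    # slow warming trend, in degrees

def weather_run(seed, start_temp):
    rng = np.random.default_rng(seed)
    temps = [start_temp]
    for f in forcing[1:]:
        # tomorrow = persistence + pull toward the forced state + weather noise
        temps.append(0.8 * temps[-1] + 0.2 * f + rng.normal(0.0, 1.0))
    return np.array(temps)

run_a = weather_run(seed=1, start_temp=5.0)
run_b = weather_run(seed=2, start_temp=-5.0)

print(f"day 1:             {run_a[0]:+.1f} vs {run_b[0]:+.1f}")  # far apart
print(f"final-decade mean: {run_a[-3650:].mean():+.2f} vs "
      f"{run_b[-3650:].mean():+.2f}")                            # nearly equal
```

The initial conditions wash out; the forcing does not. That is the sense in which climate is the statistics of weather.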

And then there are the actual equations that make up the model. Climate models are built from two types of equations. First, there is the physics, which comes in the form of elegant equations such as Newton’s laws of motion and conservation of energy. Second, there are equations, known as parameterizations, that are derived from observations and attempt to represent our current understanding of certain aspects of climate and weather. The physics in these models is universal, whereas parameterizations can vary depending on the team building the model. Parameterizations are a way to estimate all the complicated interactions that have been observed in nature but whose physics can’t be directly represented in models due to limitations in computer resources and speeds. Each model uses different parameterizations to approximate what it cannot represent directly. As a result, different models predict different degrees of warming.
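
To make the idea of a parameterization concrete, here is a deliberately toy example in Python. The functional forms and thresholds are invented for illustration; no real model uses these numbers. The point is simply that two teams fitting the same observations can choose different empirical curves, and those choices ripple into different projections:

```python
# Toy parameterizations of cloud cover as a function of relative humidity.
# Clouds form at scales far smaller than a model grid box, so models estimate
# them from grid-average quantities with empirical fits like these (the
# numbers here are invented, purely for illustration).

def cloud_fraction_team_a(rel_humidity):
    # hypothetical fit: clouds begin at 60% humidity, increasing linearly
    return max(0.0, min(1.0, (rel_humidity - 0.60) / 0.40))

def cloud_fraction_team_b(rel_humidity):
    # hypothetical fit: clouds begin at 70% humidity, increasing more steeply
    return max(0.0, min(1.0, (rel_humidity - 0.70) / 0.30))

for rh in (0.65, 0.75, 0.85, 0.95):
    print(f"humidity {rh:.0%}: team A {cloud_fraction_team_a(rh):.2f}, "
          f"team B {cloud_fraction_team_b(rh):.2f}")
```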

Because parameterizations inevitably introduce uncertainty, climate assessments typically draw on the collective wisdom of about twenty climate model projections, making up an ensemble of model simulations. This ensemble approach gives a better estimate of reality than any one particular model (though some models are better than others). Choosing the ensemble average is a way of drawing on multiple models to reach a consensus, rather than relying on any single model. Weather forecasters do the same. The assumption is that the approximation errors among models tend to cancel each other when we average their projections. As a result, the common, most robust tendencies are captured.
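
The arithmetic behind the ensemble average is simple enough to sketch. Below, each hypothetical model’s projection is treated as an assumed “true” warming plus that model’s own random error; averaging twenty of them cancels much of the error. The caveat, noted in the code, is that real models share assumptions, so their errors are never fully independent:

```python
# Minimal sketch of ensemble averaging: twenty hypothetical model projections,
# each equal to an assumed "true" warming plus an independent error. Real
# model errors are partly shared, so the cancellation is never this clean.
import random

random.seed(0)
true_warming_f = 5.0                                    # assumed truth, degrees F
projections = [true_warming_f + random.gauss(0.0, 1.5)  # each model errs differently
               for _ in range(20)]

ensemble_mean = sum(projections) / len(projections)
worst = max(projections, key=lambda p: abs(p - true_warming_f))
print(f"worst single-model error: {abs(worst - true_warming_f):.2f} F")
print(f"ensemble-mean error:      {abs(ensemble_mean - true_warming_f):.2f} F")
```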

As computational speed and observational data continued to increase and improve, climate model simulations began to look more and more like the real world. It was eventually clear that climate models were ready for prime time; they were good enough to work on the problem of global warming. Those grueling calculations that Arrhenius had labored over could now be done quickly and rather painlessly by computers.

In general, there are two types of climate model runs that test the impact of global warming on the climate system: transient runs and equilibrium runs. In a transient run, greenhouse gases are slowly added to the climate system and the model simulates the impact of the additional CO2 at each time step. In an equilibrium run, the atmospheric CO2 level is instantly doubled, and the model is run with the higher CO2 level until the climate has fully adjusted to the forcings and has reached a new equilibrium. The global average change in surface temperature due to the doubling of CO2 is a number referred to as climate sensitivity.
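
For readers who want the flavor of such an experiment without a supercomputer, the sketch below pairs the widely used logarithmic approximation for CO2 radiative forcing (about 5.35 ln(C/C0) watts per square meter) with an assumed sensitivity parameter. The parameter value is illustrative, not something this chapter specifies; a real equilibrium run derives the answer from physics rather than assuming it:

```python
# Back-of-the-envelope "equilibrium" estimate. The forcing formula
# F = 5.35 * ln(C/C0) W/m^2 is a standard approximation; lambda is an
# assumed, illustrative sensitivity parameter (K of warming per W/m^2).
import math

def co2_forcing_wm2(c_new_ppm, c_old_ppm):
    return 5.35 * math.log(c_new_ppm / c_old_ppm)

forcing = co2_forcing_wm2(600, 300)      # doubling CO2, as in an equilibrium run
lam = 0.8                                # assumed K of warming per (W/m^2)
warming_c = lam * forcing
print(f"forcing: {forcing:.2f} W/m^2")                          # about 3.7
print(f"warming: {warming_c:.1f} C ({warming_c * 1.8:.1f} F)")  # about 3 C / 5.3 F
```

Note that this illustrative answer lands inside the range the models produce, as we will see later in the chapter.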

In 1967, Manabe’s group carried out the first series of climate sensitivity experiments using a very simple equilibrium model that represented the atmosphere averaged over the entire globe. The goal was to estimate what the Earth’s average temperature would be if the level of CO2 in the atmosphere doubled. This was similar to what Arrhenius had done by hand in the 1890s when he estimated that the planet would warm about 8°F. Using his one-dimensional model, Manabe came up with a different number: about 3°F to 4°F. Later, in 1975, Manabe and his collaborator Richard Wetherald published an analysis using a more advanced model that they had designed. This time they came up with roughly 6°F. Though this number was also less than what Arrhenius had come up with, it was taken much more seriously, since it had been derived from methods and a model that were more rigorous than the earlier attempts.

By the end of the 1980s scientists were also working on transient runs of the climate system, testing their climate models with varying levels of CO2 to see what the future might look like. And even when these climate models were still in their infancy, they pointed toward an interesting result. When oceans were included in this model world, they acted to delay the appearance of global warming in the atmosphere for a few decades. They did this by soaking up some of the extra heat. Some people see this time lag as a gift, in the sense that it allows us an opportunity to prepare for and adapt to the coming climate changes. But many see the time lag as a curse, because it gives us reasons to procrastinate.

And it’s tempting to procrastinate if you don’t trust the models. But in the 1980s, climate models were beginning to show a very interesting consistency. You could start twenty different models with twenty different initial conditions, but the runs would all converge when they estimated the change in average annual global temperatures. They would, of course, show random variations in weather patterns for a given region or season, but every single model got steadily warmer over time.

The problem with verification of such results is that it’s not possible to jump to the end of the century to see if a climate model is any good. But scientists can get around this by using their models to simulate events that have already happened. This simulation is called hind-casting, and it’s an efficient way to test whether a climate model is skillful. Successful hind-casting experiments boost our confidence that climate models can capture past events and therefore can serve as a decent guide to the future. By successfully hind-casting a number of past situations (the effects of volcanic eruptions, seasonal variations, etc.), we can increase confidence in model simulations of the future. Basically, we can’t prove that the models are right until the future happens, but we can prove that the models function by using certain rigorous tests.
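
In practice, testing whether a model is “skillful” comes down to scoring the mismatch between what the model hind-casts and what was actually observed. Here is a minimal sketch, with invented placeholder numbers standing in for real records:

```python
# Minimal hind-cast verification: compare simulated temperature anomalies with
# observed ones over a period we already lived through, and score the match.
# All numbers below are invented placeholders, not real data.
observed  = [0.1, -0.3, -0.5, -0.2, 0.0]   # e.g., yearly anomalies after an eruption
simulated = [0.2, -0.4, -0.4, -0.1, 0.1]   # the model's hind-cast of the same years

rmse = (sum((o - s) ** 2 for o, s in zip(observed, simulated))
        / len(observed)) ** 0.5
print(f"hind-cast error (RMSE): {rmse:.2f} degrees")   # smaller = more skillful
```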

Scientists have performed hind-casting studies on several major events in climate history to test how well the models can reproduce the climate at those times. They’ve modeled the height of the last ice age about 20,000 years ago, known as the Last Glacial Maximum (LGM), as well as a regional cooling event in Europe and North America roughly 500 years ago, known as the Little Ice Age.

There are also a few, rare opportunities to run a climate model in forecast mode. In June 1991, the eruption of Mount Pinatubo in the Philippines provided a perfect natural climate experiment. Pinatubo injected about 20 million tons of sulfur dioxide into the stratosphere, where it formed sulfate aerosols, creating the largest cloud of volcanic haze and the largest perturbation to the stratospheric aerosol layer since the eruption of Krakatau in 1883. The haze spread around the Earth in about three weeks and attained global coverage after about one year.

Jim Hansen, a leading climate scientist at NASA’s Goddard Institute for Space Studies (GISS) in New York, recognized this as a great opportunity to perform a real-time experiment: to use a climate model to predict how the real world would respond before it actually responded. In other words, his team would use the model to make a climate forecast that could be proved correct or incorrect in a relatively short time. So Hansen and his team added the Mount Pinatubo eruption as a forcing to the GISS climate model and made a prediction about how much the planet would cool over the coming year: about 1°F globally. They also predicted that the cooling would be concentrated in the northern hemisphere and would last about a year. The test involved waiting to see how skillfully the model had captured the real-world cooling. In 1992 there was a pause in the long-term warming, much to the delight of those who were skeptical about global warming. The average global temperature dropped roughly 0.9°F. Roughly a year later, the cooling began to subside and the steady uptick in global temperature resumed. The results were in, and the climate models were proved correct.

Since Manabe’s first experiment with doubled CO2, equilibrium runs have been performed thousands of times using increasingly sophisticated models. Climate models have reached a level of maturity approaching, if not rivaling, that of weather models. Whereas Manabe’s 1967 model was simply one big grid square meant to cover the entire planet, today’s climate models have more than 1 million grid squares that cover the planet. Each grid square is about 70 miles by 70 miles, with twenty-six vertical layers in the atmosphere. The next generation of models will resolve down to 30 miles by 30 miles. And as computers get faster, the resolution will improve further. It’s not impossible that models will one day be able to predict the climate for every square mile on the planet.

Computers have already become a lot faster. In the 1970s, a century of climate took more than a month to run. In the current version of the National Center for Atmospheric Research (NCAR) T85 model, a century’s worth of climate takes as little as thirteen days. Keep in mind that these new models have not only smaller grid boxes but also much more realism, which requires doing more calculations in less time. Perhaps far more telling, despite the impressive advances in data collection, modeling, and computational strength, climate sensitivity hasn’t changed very much. The climate sensitivity estimated by the top global climate models ranges from 3.6°F to 8.1°F for an atmosphere going from about 300 to 600 parts per million (ppm) of CO2. That is not far from Manabe’s estimate of 6°F in 1975 or Arrhenius’s calculation of 8°F in 1896. It raises the question: how many more times do we have to do this experiment before we believe the answer?

Beyond these specific temperature increases, climate models help us see that global warming isn’t just what’s going on at the poles; these models also reinforce trends that you yourself have probably noticed in your lifetime. In the United States, spring now arrives an average of ten days to two weeks earlier than it did twenty years ago. Many migratory bird species are arriving earlier. For example, a study of northeastern birds that migrate long distances found that birds wintering in the southern United States now arrive back in the Northeast an average of thirteen days earlier than they did during the first half of the last century. Snow cover is melting earlier. Plants are blooming almost two weeks earlier in spring. The ranges of many species in the United States have shifted northward and upward in elevation. For example, a study of Edith’s checkerspot butterfly showed that 40 percent of the populations below 2,400 feet have disappeared, despite the availability of sufficient food and shelter. These are all further reflections of the warming taking place right now.

And then there are the results from climate models. Climate models help us to understand what is happening and why. The experiment itself is fairly straightforward. You take the observed temperature record of the past century and compare it with the temperature simulated by a climate model driven by natural events such as volcanic eruptions and by human activities such as the combustion of coal, oil, and natural gas. Accounting for just the natural factors, the models can simulate the behavior of what is called the undisturbed climate system for periods as long as thousands of years, provided external conditions such as solar radiation remain within their normal bounds for the whole period. In other words, the model simulates the climate of an Earth without us, an Earth undisturbed by the burning of fossil fuels and by deforestation.

When you take us out of the calculations, you take out all the greenhouse gas emissions human activities have caused since the industrial revolution, when fossil fuels began to power our factories and cars and large expanses of forest were cleared for agriculture and development. The rationale is simple. If a climate model run with only natural forcings cannot re-create the strong warming since the 1970s, then the real world is currently doing something Mother Nature cannot do on her own. If you can establish this, then you’ve successfully established that the temperature trend is truly exceptional.


Natural variations in temperature of three different 1,000-year climate model simulations compared with observed data for 1850–2008. SOURCE: ADAPTED FROM STOUFFER ET AL., 1999, BY R. ZIEMLINSKI, CLIMATECENTRAL.

The accompanying figure sums this up handily. It shows the natural variations in temperature of three different 1,000-year climate model simulations.3 The variability in these different models is obvious: some periods are warmer, some cooler. But not one of these simulations, known as control runs, captures any sign of an extended upward temperature trend. There isn’t a single control run that exhibits a trend in global temperature as large or as sustained as the observed temperature record. Hence, there is no way to explain the recent warming in terms of how the natural system has behaved over the last 1,000 years. If the recent warming trend were a result of natural forcings, then, assuming the models are correct, the simulations would capture it and you would see a match between the observed record and the climate model output. No model can produce a trend comparable to what we see in the real world. Houston, we have a problem.
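
The logic of that comparison can be sketched numerically. Below, a toy “control run” produces a century and a half of purely natural wiggles, and we ask how large a century-scale trend such wiggles can fake. The noise process and its numbers are invented for illustration, but the reasoning mirrors the real test:

```python
# Toy version of the control-run comparison: how big a trend can natural
# variability alone produce? The noise process and numbers are invented.
import random

random.seed(42)

def control_run_trend(years=150):
    temps, t = [], 0.0
    for _ in range(years):
        t = 0.7 * t + random.gauss(0.0, 0.1)   # natural year-to-year wiggles only
        temps.append(t)
    n = len(temps)
    x_mean, y_mean = (n - 1) / 2, sum(temps) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in enumerate(temps))
             / sum((x - x_mean) ** 2 for x in range(n)))
    return slope * 100                          # degrees per century

trends = [control_run_trend() for _ in range(1000)]
print(f"largest natural trend: {max(abs(s) for s in trends):.2f} deg/century")
# The observed record shows warming of roughly 0.7 C (about 1.3 F) over the
# last century, well outside what these purely natural runs can produce.
```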

Climate models are not only important for showing us what could happen but are also a valuable tool for showing how much of it is our fault. With hind-casting, scientists can use climate models to isolate the physical fingerprint of human activity and figure out where the heightened levels of carbon in the atmosphere are coming from. Here’s how it works. Different forcings—such as changes in solar radiation, volcano eruptions, or fluctuations in greenhouse gas concentrations—imprint different responses, or fingerprints, on the climate system. In the real world, these forcings are superimposed, one on top of another, making it difficult to assign blame to any single one. Therefore, climate models are used to make sense of the impact of each forcing, estimate the individual contribution of that forcing, and test whether it is responsible for the warming trend.

These forcings can be natural, such as changes in solar radiation and volcanic eruptions, or human-induced, such as rising greenhouse gas concentrations. To repeat: climate models are used to calculate the fingerprint of each individual forcing, and thereby to distinguish how each forcing affects changes in temperature.

Take volcanoes. The idea that volcanoes affect climate has a long history. In 1784, Benjamin Franklin spoke of a constant dry fog all over Europe and North America that prevented the sun from doing its job and kept summer temperatures much chillier than usual. Franklin correctly attributed the dry fog to a large Icelandic volcano, called Laki, that erupted in 1783. In North America, the winter of 1784 was the longest and one of the coldest on record. There was ice-skating in Charleston Harbor; a huge snowstorm hit the South; the Mississippi River froze at New Orleans; and there was ice in the Gulf of Mexico.

Scientists now know that volcanic eruptions, if large enough, can blast gas and dust into the lower stratosphere,4 the layer of the atmosphere that begins about 6 miles above the Earth’s surface. The strong winds at these altitudes, about 10 to 15 miles up, quickly disperse the volcanic material around the globe. The main gas emitted by volcanoes, sulfur dioxide, eventually combines with oxygen and water to form sulfuric acid gas. This gas then condenses into fine droplets, or sulfate aerosols, that form a haze. The volcanic haze scatters some of the incoming sunlight back to space, and as a result the temperature at the surface of the Earth plummets, sometimes quite drastically, for two to three years.

We’ve long known that solar radiation, like volcanic activity, has the ability to affect global temperature, especially because the output of the sun is not constant. The sun has a well-established, roughly eleven-year cycle during which its total irradiance, or brightness, rises and falls. However, satellite measurements of total solar irradiance since 1979 show no increasing trend that could be responsible for global warming. The solar cycle is simply not strong enough to provide the temperature boost we have observed and measured. In addition, the eleven-year cycle is just that, a cycle, not a trend.5 The sun is big and powerful, but its fingerprint simply does not match the observed warming. Its fingerprint is that of slight warming everywhere, including the stratosphere. If changes in solar output had been responsible for the recent climate warming, both the troposphere and the stratosphere would have warmed.6 Instead, the troposphere has warmed while the stratosphere has cooled, the signature of heat trapped by greenhouse gases rather than of a brighter sun.

No one is saying that solar variability and volcanic eruptions aren’t important forms of climate forcing over the Earth’s history. Climate model experiments show that the sun and volcanoes have indeed played an important role in changing temperature at timescales ranging from decades to centuries. In fact, climate model experiments show that prior to the industrial era, much of the variation in average temperatures in the northern hemisphere can be explained either as episodic cooling caused by large volcanic eruptions or as changes in the sun’s output.

The problem is that changes in solar output and new volcanic eruptions simply are not powerful enough to generate the large temperature rise we’re currently witnessing. All of the testing finds that these natural factors cannot explain the warming of recent decades. Climate models can accurately estimate how much warming these natural factors produce, and—to repeat—they do not have the strength to generate the temperature increase we’re seeing.

The only climate models that are able to simulate the changes in temperature we saw in the twentieth century are those that include natural forcings as well as human forcings—such as greenhouse gases.

Hind-casting isn’t the only way to prove that the carbon we’re producing is raising temperatures. Interestingly, scientists have learned that not all carbon in the air is the same; in fact, the carbon that comes from us bears our distinct fingerprint, a chemical smoking gun that shows just how much of this problem really belongs to us.

It turns out that just as humans come into this world with unique sets of fingerprints, so too does carbon. Carbon enters the atmosphere from a lot of different places, and each place stamps the molecules of carbon dioxide with unique fingerprints before sending them off into the atmosphere. Volcanoes emit CO2 into the atmosphere when they erupt; the soil and oceans release CO2 into the atmosphere; and plants and trees give off carbon dioxide when they are cut or burned. Burning coal, oil, and natural gas releases carbon into the atmosphere to form carbon dioxide. When you have the right tools, distinguishing where an individual molecule of CO2 comes from is not hard.

Tracing carbon is a bit like tracing a bullet back to the gun it was shot from, and as with a ballistic test that links bullets to a gun, it helps to understand that not all carbon is the same. Carbon atoms (like many atoms) have variations known as isotopes, and these different isotopes are found in varying amounts around the atmosphere. Some got there from the oceans, others from volcanoes, and others from us. Carbon 12, 13, and 14 are all examples of carbon isotopes that are found in the atmosphere, and each comes from a different combination of sources. Each source, to repeat, has a unique chemical fingerprint. Carbon from the oceans, the atmosphere, and the land contains a healthy mix of carbon 12 and carbon 14. But carbon from fossil fuels has almost no carbon 14 at all.

Scientists use an instrument called a mass spectrometer to measure the amounts of carbon isotopes in the atmosphere and track the origin of the carbon. The mass spectrometer is very precise; it knows exactly which isotope of carbon it is measuring because the different carbon isotopes have different masses. So, the mass spectrometer can distinguish a carbon 12 atom from a carbon 13 atom from a carbon 14 atom. With a spectrometer, scientists can trace where the CO2 in the atmosphere originated by measuring the ratios of the different carbon isotopes. In other words, a spectrometer can say whether a sample of CO2 came from the ocean or from a volcano or from burning a fossil fuel.

According to precise measurements from mass spectrometers at several locations around the globe, the carbon dioxide molecules currently in the atmosphere have very little carbon 13 and carbon 14. Using this chemical signature to trace the carbon back to its source tells us that the increase we’re seeing in atmospheric CO2 did not originate in the oceans. Carbon dioxide that outgases from the oceans is not depleted in carbon 13. This also means that the increase in atmospheric CO2 did not originate from living plants and animals, because CO2 from organisms is not depleted in carbon 14. The chemical fingerprints of the extra carbon dioxide in the atmosphere match only the fingerprints of coal, oil, natural gas, and deforestation because these are the only sources that produce carbon dioxide depleted in carbon 13 and carbon 14.
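
The dilution argument can be put into a two-line mixing calculation. In the sketch below the numbers are illustrative: pre-industrial CO2 is set near its standard value of about 280 ppm, the added fossil carbon near the observed rise, and fossil carbon’s carbon-14 content to zero, since after millions of years underground it has all decayed. The real budget also involves exchange with the oceans and biosphere, which this deliberately ignores:

```python
# Toy two-source mixing calculation (illustrative values) showing why fossil
# carbon leaves a fingerprint: it contains essentially no carbon-14, so adding
# it dilutes the atmosphere's carbon-14 fraction in a predictable way.

background_ppm = 280.0    # pre-industrial CO2, standard estimate
fossil_added_ppm = 110.0  # roughly the observed rise, treated here as all fossil
c14_background = 1.0      # carbon-14 level, normalized so pre-industrial = 1
c14_fossil = 0.0          # fossil carbon is "carbon-14 dead"

total_ppm = background_ppm + fossil_added_ppm
c14_mixed = (background_ppm * c14_background
             + fossil_added_ppm * c14_fossil) / total_ppm
print(f"relative carbon-14 level after mixing: {c14_mixed:.2f}")  # about 0.72
```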

It’s true that most of the carbon dioxide in the atmosphere today comes from natural sources. But most of the additional CO2 that’s been placed in the atmosphere over the last 250 years comes from us. And it’s the additional CO2 that’s raising temperatures. In terms of molecules of carbon dioxide, roughly one out of every four CO2 molecules in the atmosphere today was put there by us.
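
That “one in four” is simple arithmetic, assuming the standard pre-industrial concentration of about 280 parts per million and a level of roughly 387 ppm around the time this book was written:

```python
# The "one in four" figure as arithmetic (ppm values approximate; this treats
# the entire rise since pre-industrial times as human-caused, as the text does).
preindustrial_ppm = 280
current_ppm = 387
human_share = (current_ppm - preindustrial_ppm) / current_ppm
print(f"human share of today's CO2: {human_share:.0%}")   # about 28%, or 1 in 4
```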

All this carbon fingerprinting and the various climate models add up to one inescapable reality: the predictions that scientists have been making for the last twenty years have been getting more accurate. Weather forecasts started out as shaky, debatable calculations but evolved into a system of forecasting that virtually everyone in the world now relies on; similarly, climate prediction has evolved to a point where its results are sounder than ever before. No matter how many different ways the scientists run it, the results come out the same—a warmer planet that’s getting warmer as a result of our carbon emissions.

There’s no realistic way to take comfort in what these numbers are telling us. The forecasts that the models lay out are dire, and even though we don’t see them every night on our local news, we cannot ignore them. So if climate models can show us that the temperature is rising and that it’s our fault, when will our weather start to reflect our predictions about the climate? The short answer is that it already has.