Knocking on Heaven's Door: How Physics and Scientific Thinking Illuminate the Universe and the Modern World - Lisa Randall (2011)

Part III. MACHINERY, MEASUREMENTS, AND PROBABILITY

Chapter 11. RISKY BUSINESS

Nate Silver, the creator of the blog FiveThirtyEight—the most successful predictor of the results of the 2008 presidential election—came to interview me in the fall of 2009 for a book he was writing about forecasting. At that time we faced an economic crisis, an apparently unwinnable war in Afghanistan, escalating health-care costs, potentially irreversible climate change, and other looming threats. I agreed to meet—a bit in the spirit of tit for tat—since I was interested to learn Nate’s views on probability and when and why predictions work.

I was nonetheless somewhat puzzled at being chosen for the interview, since my expertise was in predicting the results of particle collisions, which I doubt people in Vegas, never mind the government, were betting on. I thought perhaps Nate would ask about black holes at the LHC. But despite the by then defunct lawsuit that suggested possible dangers, I really doubted Nate would be asking about that scenario, given the far more genuine threats listed above.

Nate in fact wasn’t interested in this topic. He asked far more measured questions about how particle physicists make speculations and predictions for the LHC and other experiments. He is interested in forecasting, and scientists are in the business of making predictions. He wanted to learn more about how we choose our questions and the methods we use to speculate about what might happen—questions we will soon address more fully.

Nonetheless, before considering LHC experiments and speculations for what we might find, this chapter continues our discussion of risk. The strange attitudes about risks today and the confusions about when and how to anticipate them certainly merit some consideration. The news reports the myriad bad consequences of unanticipated or unmitigated problems on a daily basis. Perhaps thinking about particle physics and separation by scale can shed some light on this complicated subject. The LHC black hole lawsuit was certainly misguided, but both this and the truly pressing issues of the day can’t help but alert us to the importance of addressing the subject of risk.

Making particle physics predictions is very different from evaluating risk in the world, and we can only skim the surface of the realities pertinent to risk evaluation and mitigation in a single chapter. Furthermore, the black hole example won’t readily generalize since the risk is essentially nonexistent. Nonetheless, it does help guide us in identifying some of the relevant issues when considering how to evaluate and account for risks. We’ll see that although black holes at the LHC were never a menace, misguided applications of forecasting often are.

RISK IN THE WORLD

When physicists considered predictions for black holes at the LHC, we extrapolated existing scientific theories to as yet unexplored energy scales. We had precise theoretical considerations and clear experimental evidence that allowed us to conclude that nothing disastrous could happen, even if we didn’t yet know what would appear. After careful investigations, all scientists agreed that the risk of danger from black holes was negligible—with no chance that they could be a problem, even over the lifetime of the universe.

This is quite different from how other potential risks are addressed. I’m still a bit mystified as to how economists and financiers a few years back could fail to anticipate the looming financial crisis—or, even after the crisis had been averted, could possibly set the stage for a new one. Not all economists and financiers shared the prognosis of smooth sailing, yet no one intervened until the economy teetered on collapse.

In the fall of 2008, I participated in a panel at an interdisciplinary conference. Not for the first or last time, I was asked about the danger of black holes. The vice-chairman of Goldman Sachs International, who was seated to my right, joked to me that the real black hole risk everyone was facing was the economy. And the analogy was remarkably apt.

Black holes trap anything nearby and transform it through strong internal forces. Because black holes are characterized entirely by their mass, charge, and a quantity called angular momentum, they don’t keep track of what went in or how it got there—the information that went in appears to be lost. Black holes release that information only slowly, through subtle correlations in the radiation that leaks out. Furthermore, large black holes decay slowly whereas small ones disappear right away. This means that whereas small black holes don’t last very long, large ones are essentially too big to fail. Any of this ring a bell? Information—plus debts and derivatives—that went into banks became trapped and was transformed into indecipherable, complicated assets. And after that, information—and everything else that went in—was only slowly released.

With too many global phenomena today, we really are doing uncontrolled experiments on a grand scale. Once, on the radio show Coast to Coast, I was asked whether I would proceed with an experiment—no matter how potentially interesting—if it had a chance of endangering the entire world. To the chagrin of the mostly conservative radio audience, my response was that we are already doing such an experiment with carbon emissions. Why aren’t more people worried about that?

As with scientific advances, abrupt changes rarely happen without any advance indicators. We don’t know that the climate will change cataclysmically, but we have already seen indications of melting glaciers and changing weather patterns. The economy might have suddenly failed in 2008, but many financiers knew enough to leave the markets in advance of the collapse. New financial instruments and high carbon levels have the potential to precipitate radical changes. In such real-world situations, the question isn’t whether risk exists. It is how to properly account for the possible dangers and decide on an acceptable level of caution.

CALCULATING RISK

Ideally, one of the first steps would be to calculate risks. Sometimes people simply get the probabilities wrong. When John Oliver interviewed Walter Wagner, one of the LHC litigants, about black holes on The Daily Show, Wagner forfeited any credibility he might have had when he said the chance of the LHC destroying the Earth was 50-50, since it either will happen or it won’t. John Oliver incredulously responded that he “wasn’t sure that’s how probability works.” Happily, John Oliver is correct, and we can make better (and less egalitarian) probability estimates.
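
To see how one can do better, here is a minimal sketch, with invented numbers, of a standard statistician’s trick: the “rule of three,” which says that if an event has never occurred in N independent trials, an approximate 95 percent upper confidence bound on its per-trial probability is 3/N. This is not the physicists’ actual safety analysis, and the count of natural trials below is purely illustrative, but it shows why vast numbers of incident-free cosmic-ray collisions pin the probability far below 50-50.

```python
def upper_bound_95(n_trials_no_event):
    """Rule-of-three 95% upper bound on the probability of an event
    never observed in n independent trials."""
    return 3.0 / n_trials_no_event

# Invented, illustrative figure: suppose nature has already run about
# 1e22 cosmic-ray collisions at LHC-like energies without incident.
natural_trials = 1e22
print(f"95% upper bound per collision: {upper_bound_95(natural_trials):.1e}")
# prints 3.0e-22 -- nothing remotely like 50-50
```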

But it’s not always easy. Consider the probability of detrimental climate change—or the probability of a bad situation in the Middle East, or the fate of the economy. These are much more complex situations. It’s not merely that the equations that describe the risks are difficult to solve. It’s that we don’t even necessarily know what the equations are. For climate change, we can do simulations and study the historical record. For the other two, we can try to find analogous historical situations, or make simplified models. But in all three cases, huge uncertainties plague any predictions.

Accurate and trustworthy predictions are difficult. Even when people do their best to model everything relevant, the inputs and assumptions that enter any particular model might significantly affect a conclusion. A prediction of low risk is meaningless if the uncertainties associated with the underlying assumptions are much greater. It’s critical to be thorough and straightforward about uncertainties if a prediction is to have any value.
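
A hedged toy illustration of this point: suppose a headline risk number is the product of an event frequency and a damage estimate, each of which is known only to within a broad uncertainty band. The model and every number below are invented for illustration; the point is only that the spread induced by the assumptions can dwarf the nominal prediction.

```python
import math
import random

random.seed(0)

def sample_annual_loss():
    # Toy model: annual loss = event frequency x cost per event, with each
    # input drawn from a wide lognormal band around its nominal value.
    freq = math.exp(random.gauss(math.log(1e-3), 2.0))  # events per year
    cost = math.exp(random.gauss(math.log(1e6), 2.0))   # cost per event
    return freq * cost

draws = sorted(sample_annual_loss() for _ in range(100_000))
print(f"nominal ('low risk') estimate: {1e-3 * 1e6:>14,.0f}")
print(f"median over assumptions:       {draws[50_000]:>14,.0f}")
print(f"95th percentile:               {draws[95_000]:>14,.0f}")
# The 95th percentile exceeds the nominal number by roughly two orders
# of magnitude: the prediction is dominated by the assumption uncertainty.
```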

Before considering other examples, let me recount a small anecdote that illustrates the problem. Early in my physics career, I observed that the Standard Model allowed for a much wider range of values for a particular quantity of interest than had been previously predicted, due to a quantum mechanical contribution whose size depended on the (then) recently measured and surprisingly large value of the top quark mass. When presenting my result at a conference, I was asked to plot my new prediction as a function of the top quark mass. I refused, knowing there were several different contributions and the remaining uncertainties allowed for too broad a range of possibilities to permit such a simple curve. However, an “expert” colleague underestimated the uncertainties and made such a plot (not unlike many real-world predictions made today), and—for a while—his prediction was widely referenced. Eventually, when the measured quantity didn’t fall within his predicted range, the disagreement was correctly attributed to his overly optimistic uncertainty estimate. Clearly, it’s better to avoid such embarrassments, both in science and in any real-world situation. We want predictions to be meaningful, and they will be only if we are careful about the uncertainties that enter them.

Real-world situations present even more intractable problems, requiring us to be still more careful about uncertainties and unknowns. We have to be cautious about the utility of quantitative predictions that cannot or do not take account of these issues.

One stumbling block is how to properly account for systemic risks, which are almost always difficult to quantify. In any big interconnected system, the large-scale elements, with their multiple failure modes arising from the many interconnections of the smaller pieces, are often the least supervised. Information can be lost in transitions or never attended to in the first place. And such systemic problems can amplify the consequences of any other potential risks.

I saw this kind of structural issue firsthand when I was on a committee addressing NASA safety. To accommodate the necessity of appeasing diverse congressional districts, NASA sites are spread throughout the country. Even if any individual site takes care of its piece of equipment, there is less institutional investment in the connections. This then becomes true for the larger organization as well. Information can easily get lost in reporting between different sublayers. As the NASA and aerospace industry risk analyst Joe Fragola, who ran the study, wrote in an email to me: “My experience indicates that risk analyses performed without the joint activity between the subject matter experts, the system integration team and the risk analysis team are doomed to be inadequate. In particular, so called ‘turn-key’ risk analyses become so much actuarial exercise and are only of academic interest.” Too often there is a trade-off between breadth and detail, but both are essential in the long term.

One dramatic consequence of such a failure (among others) was the BP incident in the Gulf of Mexico. In a talk at Harvard in February 2011, Cherry Murray, a Harvard dean and member of the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling, cited management failure as one major contributor to the BP incident. Richard Sears, the commission’s senior science and engineering adviser and former vice president for Deepwater Services at Shell Oil Co., described how BP management addressed one problem at a time, without ever formulating the big picture in what he called “hyper-linear thinking.”

Although particle physics is a specialized and difficult enterprise, its goal is to isolate its simple underlying elements and make clear predictions based on our hypotheses. The challenge is to access small distances and high energies, not to address complicated interconnections. Even though we don’t necessarily know which underlying model is correct, we can predict—given a particular model—what sorts of events should occur when, for instance, protons collide with each other at the LHC. When small scales get absorbed into larger ones, effective theories appropriate to the larger scales tell us exactly how the smaller scales enter, as well as the errors we can make by ignoring small-scale details.

In most situations, however, this neat separation by scale that we introduced in Chapter 1 doesn’t readily apply. Despite the sometimes shared methods, in the words of more than one New York banker, “Finance is not a branch of physics.” In climate or banking, knowledge of small-scale interactions can often be essential to determining large-scale results.

This lack of scale separation can have disastrous consequences. Take as an example the collapse of Barings Bank. Founded in 1762, Barings was Britain’s oldest merchant bank before its failure in 1995. It had financed the Napoleonic Wars, the Louisiana Purchase, and the Erie Canal. Yet the bad bets made by a sole rogue trader at a small office in Singapore brought it to ruin.

More recently, the machinations of Joseph Cassano at AIG led to its near destruction and the threat of major financial collapse for the world as a whole. Cassano headed a relatively small (400-person) unit within the company called AIG Financial Products, or AIGFP. AIG had made reasonably stable bets until Cassano started employing credit-default swaps (a complex investment vehicle promoted by various banks) to hedge the bets made on collateralized debt obligations.

In what seems in retrospect to be a pyramid scheme of hedging, his group ratcheted up $500 billion in credit-default swaps, more than $60 billion of which were tied to subprime mortgages.41 If subunits had been absorbed into larger systems as they are in physics, the smaller piece would have yielded information or activity at a higher level in a controlled manner that a midlevel supervisor could readily handle. But in an unfortunate and unnecessarily excessive violation of separation of scales, Cassano’s machinations went virtually unsupervised and infiltrated the entire operation. His activities weren’t regulated as securities, they weren’t regulated as gaming, and they weren’t regulated as insurance. The credit-default swaps were distributed all over the globe, and no one had worked through the potential implications. So when the subprime mortgage crisis hit, AIG wasn’t prepared and it imploded with losses. American taxpayers subsequently were left to bail the company out.

Regulators attended to conventional safety issues (to some extent) concerning the soundness of individual institutions, but they didn’t assess the system as a whole, or the interconnected risks built into it. More complex systems with overlapping debts and obligations call for a better understanding of these interconnections and a more comprehensive way of evaluating, comparing, and deciding on risks and the trade-offs for possible benefits.42 This challenge applies to almost any large system—as does the time frame that is deemed relevant.

This brings us to a further factor that makes calculating and dealing with risk difficult: our psyches and our market and political systems apply different logic to long-term risks than to short-term ones—sometimes sensibly so, but often greedily. Most economists and some in the financial markets understood that market bubbles don’t continue indefinitely. The risk wasn’t that the bubble would burst—did anyone really think that housing prices would continue doubling within short time frames forever?—but that the bubble would burst in the imminent future. Riding or inflating a bubble, even one that you know is unsustainable, isn’t necessarily shortsighted if you are prepared at any point to take your profits (or bonuses) and close up shop.
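
A small sketch, with invented parameters, of this incentive arithmetic: a trader banks a bonus in every period the bubble survives and never gives it back, so the trader’s expected take keeps rising even as the chance that the bubble is still standing collapses toward zero.

```python
def expected_trader_take(periods, gain_per_period=0.2, crash_prob=0.1):
    take, survival = 0.0, 1.0
    for _ in range(periods):
        survival *= 1 - crash_prob          # bubble must still be alive
        take += survival * gain_per_period  # bonus banked if it is
    return take, survival

for t in (5, 10, 20, 40):
    take, alive = expected_trader_take(t)
    print(f"{t:2d} periods: expected banked bonuses {take:.2f}, "
          f"P(bubble survives) {alive:.3f}")
# Expected bonuses keep growing with time even as survival goes to zero:
# individual incentive and systemic risk point in opposite directions.
```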

In the case of climate change, we don’t actually know how to assign a number to the melting of the Greenland ice cap. The probabilities are even less certain if we ask for the likelihood that it will begin to melt within a definite time frame—say in the next hundred years. But not knowing the numbers is no reason to bury our head in the ice—or the proto-cold water.

We have trouble finding consensus on the risks from climate change and how and when to avert them when the possible environmental consequences arise relatively slowly. And we don’t know how to estimate the cost of action or inaction. Were there to be a dramatic climate-driven event, we would be much more likely to take action immediately. Of course, no matter how fast we were, at that point it would be too late. This means that non-cataclysmic climate changes are worth attending to as well.

Even when we do know the likelihood of certain outcomes, we tend to apply different standards to low-probability events with catastrophic outcomes than to high-probability events with less dramatic results. We hear a lot more about airplane crashes and terrorist attacks than we do about car accidents, even though car accidents kill far more people every year. People talked about black holes even without understanding probabilities because the consequences of the disaster scenario seemed so dire. On the other hand, many small (and not so small) probabilities are neglected altogether when their low visibility keeps them under the radar. Even offshore drilling was considered completely safe by many until the Gulf of Mexico disaster actually occurred.43

A related problem is that sometimes the greatest benefits or costs arise from the tails of distributions—the events that are the least likely and that we know least well.44 Ideally, we’d like our calculations to be objectively determined by midrange estimates or averages of preexisting related situations. But we don’t have these data if nothing similar ever occurred or if we ignore the possibility altogether. If the costs or benefits are sufficiently high at these tail ends, they dominate the predictions—assuming that you know in advance what they are in the first place. In any case, traditional statistical methods don’t apply when the rates are too low for averages to be meaningful.
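
Here is a deliberately simple, invented-numbers illustration of tail dominance: frequent small losses plus one rare catastrophe. A model calibrated only on a short, calm history effectively truncates the tail and misses most of the true expected loss.

```python
# Two outcomes per year (illustrative figures only):
p_common, loss_common = 0.999, 1.0   # routine small loss
p_tail, loss_tail = 0.001, 10_000.0  # rare catastrophe

with_tail = p_common * loss_common + p_tail * loss_tail
without_tail = p_common * loss_common  # what a calm record suggests

print(f"expected loss including the tail: {with_tail:.2f}")    # ~11.0
print(f"expected loss ignoring the tail:  {without_tail:.2f}")  # ~1.0
# The rare event supplies about 90 percent of the expectation, yet a few
# calm years of data would almost certainly contain no examples of it.
```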

The financial crisis happened because of events that were outside the range of what the experts had taken into account. Lots of people made money based on the predictable aspects, but supposedly unlikely events determined some of the more negative developments. When modeling the reliability of financial instruments, most applied the data for the previous few years without allowing for the possibility that the economy might turn down, or turn down at a far more dramatic rate. Assessments about whether to regulate financial instruments were based on a short time frame during which markets had only increased. Even when the possibility of a market drop was admitted, the assumed values for the drop were too low to accurately predict the true cost of lack of regulation to the economy. Virtually no one paid attention to the “unlikely” events that precipitated the crisis. Risks that might otherwise have been apparent therefore never came up for consideration. But even unlikely events need to be considered when they can have significant enough impact.45

Any risk assessment is plagued by the difficulty of evaluating the risk that the underlying assumptions are incorrect. Without such an evaluation, any estimate is subject to intrinsic prejudices. On top of the calculational problems and hidden prejudices buried in these underlying assumptions, many practical policy decisions involve unknown unknowns—factors that can’t be or haven’t been anticipated. Sometimes we simply can’t foresee the precise unlikely event that will cause trouble. This can make any prediction attempt—which will inevitably fail to factor in these unknowns—completely moot.

MITIGATING RISK

Luckily for our search for understanding, we are extremely certain that the probability of producing dangerous black holes is minuscule. We don’t know the precise numerical probability for a catastrophic outcome, but we don’t need to because it’s so negligible. Any event that won’t happen even once in the lifetime of the universe can be safely ignored.

More generally, however, quantifying an acceptable level of risk is extremely difficult. We clearly want to avoid major risks altogether—anything that endangers life, the planet, or anything we hold dear. With risks we can tolerate, we want a way of evaluating who benefits and who stands to lose, and to have a system that would evaluate and anticipate risks accordingly.

The risk analyst Joe Fragola’s comment to me about climate change, along with other potential dangers he is concerned with, was the following: “The real issue is not if these could happen, nor what their consequences would be, but rather what is their probability of occurrence and the associated uncertainty? And how much of our global resources should we allocate to address such risks based not only on the probability of occurrence but also on the probability that we might do something to mitigate them?”

Regulators often rely on so-called cost-benefit analysis to evaluate risk and determine how to deal with it. On the surface, the idea sounds simple enough. Calculate how much you need to pay versus the benefit and see if the proposed change is worth it. This might even be the best available procedure in many circumstances, but it might also dangerously generate a deceptive patina of mathematical rigor. In practice, cost-benefit analysis can be very hard to do. The problems involve not just measuring cost and benefit, which can be a challenge, but defining what we mean by cost and benefit in the first place. Many hypothetical situations involve too many unknowns to reliably calculate either, or to calculate risk in the first place. We can certainly try, but these uncertainties need to be accounted for—or at least recognized.
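
As a toy example of that last caveat (all figures invented): if the cost and the benefit are each known only to within a range, the net benefit can change sign across those ranges, in which case the point estimate by itself cannot justify a decision either way.

```python
cost_low, cost_high = 40.0, 120.0        # plausible range of costs
benefit_low, benefit_high = 60.0, 100.0  # plausible range of benefits

best_case = benefit_high - cost_low   # +60: clearly worth doing
worst_case = benefit_low - cost_high  # -60: clearly not worth doing
point_estimate = ((benefit_low + benefit_high) - (cost_low + cost_high)) / 2

print(f"net benefit: point estimate {point_estimate:+.0f}, "
      f"range {worst_case:+.0f} to {best_case:+.0f}")
# A single reported number (here 0) hides that the analysis cannot even
# distinguish "worth it" from "not worth it" without the uncertainties.
```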

A sensible system that anticipates costs and risks in the near term and in the future would certainly be useful. But not all trade-offs can even be evaluated solely according to their cost. What if that which is at risk can’t be replaced at all?46 Had the creation of an Earth-eating black hole by the LHC been something that could happen with reasonable probability within our lifetime, or even within a million years, we certainly would have pulled the plug.

And even though we ultimately benefit quite a bit from research in basic science, the economic cost of abandoning a project is rarely calculable either, because the benefits are so difficult to quantify. The goals of the LHC include fundamental knowledge: a better understanding of masses and forces, and possibly even of the nature of space. The benefits also include an educated, motivated, and technically trained populace inspired by big questions and deep ideas about the universe and its composition. On a more practical front, the LHC will follow up on the information advance CERN made with the World Wide Web with the “grid,” which will allow global processing of information, as well as with improvements in magnet technology that will be useful for medical devices such as MRIs. Possible further applications of fundamental science might ultimately be found, but these are almost always impossible to anticipate.

Cost-benefit analyses are difficult to apply to basic science. A lawyer jokingly applied a cost-benefit approach to the LHC, noting that along with the proposed minuscule chance of enormous risk, the LHC also had a minuscule chance of stupendous benefits by solving all the problems of the world. Of course, neither outcome readily fits into a standard cost-benefit calculation, though—incredibly—lawyers have tried.47

At least science benefits from its goals being “eternal” truths. If you find the way the world works, it’s true no matter how quickly or slowly you found it. We certainly don’t want scientific progress to be slow. But the year’s delay after the 2008 accident showed us the danger of turning on the LHC too quickly. In general, scientists try to proceed safely.

Cost-benefit analysis is riddled with difficulty for almost any complex situation—such as climate change policy or banking. Although in principle a cost-benefit analysis makes sense and there may be no fundamental objection, how you apply it makes an enormous difference. Defenders of cost-benefit analysis essentially make a cost-benefit argument to justify the approach when they ask how we could possibly do better—and they might even be right. I’m simply advocating that where we do apply the method, we do it more scientifically. We need to be clear about the uncertainties in any numbers we present. As with any scientific analysis, we need to take errors, assumptions, and biases into account and be open in presenting them.

One factor that matters a great deal for climate change issues is whether the costs or benefits refer to an individual, a nation, or the globe. The potential costs or benefits can also cross these categories, but we don’t always take this into account. One reason that American politicians decided against the Kyoto Protocol was that they concluded the cost would have exceeded the benefit to Americans—American businesses in particular. However, such a calculation didn’t really factor in the long-term costs of instabilities across the globe or the benefits of a regulated environment in which new businesses might prosper. Many economic analyses of the costs of climate change mitigation fail to account for the potential additional benefits to the economy through innovation, or to stability through less reliance on foreign nations. Too many unknowns about how the world will change are involved.

These examples also raise the question of how to evaluate and mitigate risk that crosses national borders. Suppose black holes really had posed a risk to the planet. Could someone in Hawaii constructively sue an experiment planned for Geneva? According to existing laws, the answer is no, but perhaps a successful suit could have interfered with American financial contributions to the experiment.

Nuclear proliferation is another issue where clearly global stability is at stake. Yet we have limited control over the dangers generated in other nations. Both climate change and nuclear proliferation are issues that are managed nationally but whose dangers are not restricted to the institutions or nations creating the menace. The political problem of what to do when risks cross national boundaries or legal jurisdictions is difficult. But it’s clearly an important question.

As an institution that is truly international, CERN’s success hinges on the shared common goals of many nations. One nation can try to minimize its own contribution, but aside from that, no individual interests are at stake. All involved nations work together since the science they value is the same. The host countries, France and Switzerland, might receive slightly greater economic advantages in labor and infrastructure, but on the whole, it’s not a zero-sum game. No one nation benefits at the expense of another.

Another notable feature of the LHC is that CERN and the member states are responsible should any technical or practical problems occur. The damage from the 2008 helium explosion had to be repaired out of CERN’s budget. No one, especially those working at the LHC, benefits from mechanical failure or scientific disasters. Cost-benefit analyses are less useful when applied to situations where costs and benefits aren’t fully aligned and the beneficiaries don’t bear full responsibility for the risk they take on. It is very different from applying this type of reasoning to the kinds of closed systems that science tries to address.

In any situation, we want to avoid moral hazards, in which people’s interests and risks are not aligned, so that they may have an incentive to take on greater risk than they would if no one else were effectively providing insurance. We need to have the right incentive structures.

Consider hedge funds, for example. The general partners get a percentage of profits from their fund each year when they make money, but they don’t forfeit a comparable percentage if their fund faces losses or if they go bankrupt. Individuals keep their gains, while their employers—or taxpayers—share the losses. With these parameters, the most profitable strategy for the employees would encourage large fluctuations and instabilities. An efficient system and effective cost-benefit analysis should take into account such allocation of risks, rewards, and responsibilities. They have to factor in the different categories or scales of the people involved.
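
A minimal simulation of this asymmetry, with made-up parameters: a manager is paid a share of each year’s gains but bears none of the losses. The high-variance strategy pays the manager more even though it is worse for the fund.

```python
import random

random.seed(1)

def manager_pay(returns, share=0.2):
    # A share of each year's gain; losing years cost the manager nothing.
    return sum(share * r for r in returns if r > 0)

def simulate(mu, sigma, years=10, trials=20_000):
    pay_total, fund_total = 0.0, 0.0
    for _ in range(trials):
        rets = [random.gauss(mu, sigma) for _ in range(years)]
        pay_total += manager_pay(rets)
        fund_total += sum(rets)
    return pay_total / trials, fund_total / trials

safe_pay, safe_fund = simulate(mu=0.05, sigma=0.05)    # steady returns
risky_pay, risky_fund = simulate(mu=0.03, sigma=0.40)  # worse mean, big swings

print(f"safe:  manager {safe_pay:.2f}, fund {safe_fund:.2f}")
print(f"risky: manager {risky_pay:.2f}, fund {risky_fund:.2f}")
# The risky strategy pays the manager roughly three times more despite a
# lower expected fund return: when someone else absorbs the losses,
# volatility itself becomes profitable.
```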

Banking, too, has obvious moral hazards where risks and benefits aren’t necessarily aligned. A “too big to fail” policy combined with weak leveraging limits yields a situation in which the people who are accountable for losses (taxpayers) are not the same as those who stand to benefit the most (bankers or insurers). One can debate whether bailouts were essential in 2008, but preventing the situation in the first place by aligning risk with responsibility seems like a good idea.

Furthermore, at the LHC, all data about the experiments and risks are readily available. The safety report is on the web. Anyone can read it. Certainly any institution that would expect a bailout were it to fail, or even one that simply speculates in a potentially unstable fashion, should provide enough data to regulatory institutions so that the relative weight of benefits against risks can potentially be evaluated. Ready access to reliable data should help mortgage experts or regulators or others anticipate financial or other potential disasters in the future.

Though not in itself a solution, another factor that could at least improve or clarify the analyses would again be to take “scale”—in terms of the categories of those subject to benefits and risks, as well as time ranges—into account. The question of scale translates into the issue of who is involved in a calculation: is it an individual, an organization, a government, or the world, and are we interested in a month, a year, or a decade? A policy that is good for Goldman Sachs might not ultimately benefit the economy as a whole—or the individual whose mortgage is currently under water. That means that even if there were perfectly accurate calculations, they would guarantee the right result only if they were applied to the right, carefully formulated question.

When we make policy or evaluate costs versus benefits, we tend to neglect the possible benefits of global stability and helping others—not just in a moral sense, but in the long-term financial sense as well. In part, this is because these gains are difficult to quantify, and in part it is due to the challenge in making evaluations and creating robust regulations in a world that changes quickly. Still, it’s clear that regulations that consider all possible benefits, not just those to an individual or an institution or a state, will be more reliable, and may even lead to a better world.

The time frame can also influence the computed cost or benefit for policy decisions, as can the assumptions the deciding parties make, as we saw with the recent financial crisis. Time scales matter in other ways as well, since acting too hastily can increase risk while rapid transactions can enhance benefits (or profits). But even though fast trades can make pricing more efficient, lightning-fast transactions don’t necessarily benefit the overall economy. An investment banker explained to me how important it was to be able to sell shares at will, but even so he couldn’t explain why they needed to be able to sell them after owning them for a few seconds or less—aside from the fact that he and his bank make more money. Such trades create more profits for bankers and their institutions in the short term, but they aggravate existing weaknesses in the financial sector in the long term. Perhaps even with a short-term competitive disadvantage, a system that inspires more confidence could be more profitable in the long term and therefore prevail. Of course, the banker I mentioned made $2 billion for his institution in a single year, so his employers might not agree on the wisdom of my suggestion. But anyone who ultimately pays for this profit might.

THE ROLE OF EXPERTS

Many people take away the wrong lesson and conclude that the absence of reliable predictions implies an absence of risk. In fact, quite the opposite applies. Until we can definitively rule out particular assumptions or methods, the full range of outcomes they allow remains within the realm of possibility. Despite the uncertainties—or perhaps because of them—with so many models predicting dangerous results, the probability of something very bad happening with the climate or with the economy—or with offshore drilling—is not negligibly small. Perhaps one can argue that the chances are small within a definite time frame. However, in the long run, until we have better information, too many scenarios lead to calamitous results for us to ignore the dangers.

People interested only in the bottom line rail against regulation, while those who are interested in safety and predictability argue for it. It is too easy to be tempted to come down on one side or the other, since figuring out where to draw the line is a daunting—if not impossible—task. As with calculating risk, not knowing the deciding point doesn’t mean there is none, or that we shouldn’t aim for the best approximation. Even without the insights necessary to make detailed predictions, structural problems should be addressed.

This brings us to the last important question: Who decides? What is the role of experts, and who gets to evaluate riskiness?

Given the money and bureaucracy and careful oversight involved in the LHC, we can expect that risks were adequately analyzed. Furthermore, at its energies, we aren’t even really in a new regime where the basic underpinnings of particle physics should fail. Physicists are confident the LHC is safe, and we look forward to the results from particle collisions.

This isn’t to say that scientists don’t have a big responsibility. We always need to ensure that scientists are responsible and attentive to risks. We’d like to be as certain with respect to all scientific enterprises as we were with the LHC. If you are creating matter or microbes or anything else that has not existed before (or drilling deeper or otherwise exploring new frontiers on the Earth, for that matter), you need to be certain of not doing anything dramatically bad. The key is to do this rationally, without unfounded fearmongering that would impede progress and benefits. This is true not just for science but for any potentially risky endeavor. The only answer to imagined unknowns and even to “unknown unknowns” is to heed as many reasonable viewpoints as possible and to have the freedom to intervene if necessary. As anyone in the Gulf of Mexico will attest, you need to be able to turn off the spigots if something goes wrong.

Early in the previous chapter, I summarized some of the objections that bloggers and skeptics made about the methods physicists used for black hole calculations, including relying on quantum mechanics. Hawking did indeed use quantum mechanics to derive black hole decay. Yet despite Feynman’s statement that “no one understands quantum mechanics,” physicists understand its implications, even if we don’t have a deep philosophical insight into why quantum mechanics is true. We believe quantum mechanics because it explains data and solves problems that are impenetrable with classical physics.

When physicists debate quantum mechanics, they don’t dispute its predictions. Its repeated success has forced generations of astonished students and researchers to accept the theory’s legitimacy. Debates today about quantum mechanics concern its philosophical underpinnings. Is there some other theory with more familiar classical premises that nonetheless predicts the bizarre hypotheses of quantum mechanics? Even if people make progress on such issues, it would make no difference to quantum mechanical predictions. Philosophical advances could affect the conceptual framework we use to describe predictions—but not the predictions themselves.

For the record, I find major advances on this front unlikely. Quantum mechanics is probably a fundamental theory. It is richer than classical mechanics. All classical predictions are a limiting case of quantum mechanics, but not vice versa. So it’s hard to believe that we will ultimately interpret quantum mechanics with classical Newtonian logic. Trying to interpret quantum mechanics in terms of classical underpinnings would be like me trying to write this book in Italian. Anything I can say in Italian I can say in English, but because of my limited Italian vocabulary the reverse is far from true.

Still, with or without agreement on philosophical import, all physicists agree on how to apply quantum mechanics. The wacky naysayers are just that. Quantum mechanical predictions are trustworthy and have been tested many times. Even without them, we still have alternative experimental evidence (in the form of the Earth and Sun and neutron stars and white dwarfs) that the LHC is safe.

LHC alarmists also objected to the purported use of string theory. Indeed, using quantum mechanics was just fine but relying on string theory would not have been. But the conclusions about black holes never needed string theory anyway. People do try to use string theory to understand the interior of black holes—the geometry of the apparent singularity at the center where according to general relativity energy becomes infinitely dense. And people have done string-theory-based calculations of black hole evaporation in nonphysical situations that support Hawking’s result. But the computation of black hole decay relies on quantum mechanics and not on a complete theory of quantum gravity. Even without string theory, Hawking could do his calculations. The very questions some bloggers posed reflected the absence of sufficient scientific understanding to weigh the facts.

A more generous interpretation of this objection is as resistance not to the science itself but to scientists with “faith-based” beliefs in their theories. After all, string theory is beyond the experimentally verifiable regime of energies. Yet many physicists think it’s right and continue to work on it. However, the variety of opinions about string theory—even within the scientific community—nicely illustrates just the opposite point. No one would base any safety assessment on string theory. Some physicists support string theory and some do not. Yet everyone knows it is not yet proven or fully fleshed out. Until everyone agrees on string theory’s validity and reliability, trusting it in risky situations would be foolhardy. As concerns our safety, the inaccessibility of string theory’s experimental consequences is not only the reason we don’t yet know whether it’s correct—it’s also the reason it isn’t required to predict most real-world phenomena we will encounter in our lifetimes.

Yet despite my confidence that it was okay to rely on experts when evaluating potential risks from the LHC, I recognize the potential limitations of this strategy and don’t quite know how to address them. After all, “experts” told us that derivatives were a way of minimizing risk, not creating potential crises. “Expert” economists told us that deregulation was essential to the competitiveness of American business, not to the potential downfall of the American economy. And “experts” tell us only those in the banking sector understand their transactions sufficiently well to address its woes. How do we know when experts are thinking broadly enough?

Clearly experts can be shortsighted. And experts can have conflicts of interest. Are there any lessons from science here?

I don’t think it is my bias that leads me to say that in the case of LHC black holes, we examined the full range of potential risks that we could logically envision. We thought about both the theoretical arguments and also the experimental evidence. We thought about situations in the cosmos where the same physical conditions applied, yet did not destroy any nearby structure.

It would be nice to be sanguine that economists make similar comparisons to existing data. But the title of Carmen Reinhart and Kenneth Rogoff’s book This Time Is Different suggests otherwise. Although economic conditions are never identical, some broad measures do indeed repeat themselves in economic bubbles.

The argument made by many today that no one could anticipate the dangers of deregulation also doesn’t stand up. Brooksley Born, the former chairperson of the Commodity Futures Trading Commission, which oversees futures and commodity options markets, did point out the dangers of deregulation—actually she rather reasonably suggested that potential risks be explored—but she was shouted down. There was no solid analysis of whether caution was justified (as it clearly turned out to be) but only a partisan view that moving slowly would be bad for business (as it would have been for Wall Street in the short term).

Economists speaking out about regulation and policy might have a political as well as a financial agenda and that can interfere with doing the right thing. Ideally, scientists pay more attention to the merits of arguments, including those regarding risk, than politics. LHC physicists made serious scientific inquiries to ensure no disasters would occur.

Although perhaps only financial experts understand the details of a particular financial instrument, anyone can consider some basic structural issues. Most people can understand why an overly leveraged economy is unstable, even without predicting or even understanding the precise trigger that might cause a collapse. And most anyone can understand that giving the banks hundreds of billions of dollars with few or no constraints is probably not the best way to spend taxpayers’ money. And even a faucet is built with a reliable means of turning it off—or at least a mop and plan in place to clean up any mess. It’s hard to see why the same shouldn’t apply to deep-sea oil rigs.

Psychological factors enter when we count on experts, as the New York Times economics columnist David Leonhardt explained in 2010 when attributing Mr. Greenspan’s and Mr. Bernanke’s errors to factors that were “more psychological than economic.” He explained, “They got trapped in an echo chamber of conventional wisdom” and “fell victim to the same weakness that bedeviled the engineers of the Challenger space shuttle, the planners of the Vietnam and Iraq wars, and the airline pilots who have made tragic cockpit errors. They didn’t adequately question their own assumptions. It’s an entirely human mistake.”48

The only way to address complicated issues is to listen broadly, even to the outliers. Despite their ability to predict that the economy could collapse into a black hole, self-interested bankers were content to ignore warnings so long as they could. Science is not democratic in the sense that we all get together and vote on the right answer. But if anyone has a valid scientific point, it will ultimately be heard. People will often pay attention to the discoveries and insights from more prominent scientists first. Nonetheless, an unknown who makes a good point will eventually gain an audience.

With the ear of a well-known scientist, an unknown might even be listened to right away. That is how Einstein could present a theory that shook scientific foundations almost immediately. The German physicist Max Planck understood the implications of Einstein’s relativistic insights and was fortuitously in charge of the most important physics journal at the time.

Today we benefit from the rapid spread of ideas over the Internet. Any physicist can write a paper and have it sent out through the physics archive the next day. When Luboš Motl was an undergraduate in the Czech Republic, he solved a scientific problem that a prominent scientist at Rutgers was working on. Tom Banks paid attention to good ideas, even if they came from an institution he had never heard of before. Not everyone is so receptive. But so long as a few people pay attention, an idea, if good and correct, will ultimately enter scientific discourse.

LHC engineers and physicists sacrificed time and money for safety. They wanted to economize as much as possible, but not at the expense of safety or accuracy. Everyone’s interests were aligned. No one benefits from a result that doesn’t stand the test of time.

The currency in science is reputation. There are no golden parachutes.

FORECASTING

I hope we all now agree that we shouldn’t be worrying about black holes—though we do have much else to worry about. In the case of the LHC, we are and should be thinking about all the good things it can do. The particles created there will help us answer deep and fundamental questions about the underlying structure of matter.

To briefly return to my conversation with Nate Silver, I realized how special our situation is. In particle physics, we can restrict ourselves to simple enough systems to exploit the methodical manner in which new results build on old ones. Our predictions sometimes originate in models we know to be correct based on existing evidence. In other cases, we make predictions based on models we have reason to believe might describe nature, and use experiments to winnow down the possibilities. Even then—without yet knowing if these models will prove correct—we can anticipate what the experimental evidence would be, should the idea turn out to be realized in the world.

Particle physicists exploit our ability to separate according to scale. We know small-scale interactions can be very different from those that occur on large scales, but they nonetheless feed into large-scale interactions in a well-defined way, giving consistency with what we already know.

Forecasting is very different in almost all other cases. For complex systems, we often have to simultaneously address a range of scales. That can be true not only for social organizations, such as a bank in which an irresponsible trader could destabilize AIG and the economy, but even in other sciences. Predictions in those cases can have a great deal of variability.

For example, the goals of biology include predicting biological patterns and even animal and human behavior. But we don’t yet fully understand all the basic functional units or the higher-level organization by which elementary elements produce complex effects. We also don’t know all the feedback loops that threaten to make separating interactions by scale impossible. Scientists can make models, but without better understanding the critical underlying elements or how they contribute to emergent behavior, modelers face a quagmire of data and competing possibilities.

A further challenge is that biological models are designed to match preexisting data, but we don’t yet know the rules. We haven’t identified all the simple independent systems, so it is difficult to know which—if any—model is right. When I spoke with my neuroscientist colleagues, they described the same problem. Without qualitatively new measurements, the best that the models can do is to match all existing data. Since all the surviving models must agree with the data, it is difficult to decisively determine which underlying hypothesis is correct.

It was interesting to talk to Nate about the kinds of things he tries to predict. A lot of recent popular books present shaky hypotheses that give predictions that work—except when they don’t. Nate is a lot more scientific. He first became famous for his accurate predictions of baseball games and elections. His analysis was based on careful statistical evaluations of similar situations in the past, in which he included as many variables as he could manage in order to apply historical lessons as precisely as possible.

He now has to choose wisely where to apply his methods. But he realizes that the kinds of correlations he focuses on can be tricky to interpret. You can say an engine on fire caused a plane crash, but it’s not a surprise to find an engine on fire in a plane going down. What really was the initial cause? You have the same issue when you connect a mutated gene to cancer. It doesn’t necessarily cause the disease even if it is correlated with it.

He is aware of other potential pitfalls too. Even with large amounts of data, randomness and noise may enhance or suppress the interesting underlying signals. So Nate won’t work on financial markets or earthquakes or climate. Although in all likelihood he could predict overall trends, the short-term predictions would be inherently uncertain. Nate now studies other areas where his methods shed light, such as how best to distribute music and movies, as well as questions such as the value of NBA superstars. But he acknowledges that only very few systems can be so accurately quantified.

Nonetheless, Nate told me that forecasters do make one other type of prediction. Many of them do metaforecasting—predicting what people will try to predict.