Think Like a Freak - Steven D. Levitt, Stephen J. Dubner (2014)

Chapter 4. Like a Bad Dye Job, the Truth Is in the Roots

It takes a truly original thinker to look at a problem that everyone else has already looked at and find a new avenue of attack.

Why is this so rare? Perhaps because most of us, when trying to figure out a problem, gravitate toward the nearest and most obvious cause. It’s hard to say whether this is learned behavior or if it dates to our distant past.

In the caveman era, it was a matter of life or death to know if the berries on a particular bush were edible. The proximate cause was usually the one that mattered. Even today, the most proximate cause often makes perfect sense. If your three-year-old child is wailing and your five-year-old is standing nearby with a devilish grin and a plastic hammer, it’s a good bet the hammer had something to do with the wailing.

But the big problems that society cares about—crime and disease and political dysfunction, for instance—are more complicated than that. Their root causes are often not so nearby, or obvious, or palatable. So rather than address their root causes, we often spend billions of dollars treating the symptoms and are left to grimace when the problem remains. Thinking like a Freak means you should work terribly hard to identify and attack the root cause of problems.

Of course this is far more easily said than done. Consider poverty and famine: What causes them? A glib answer is the lack of money and food. So theoretically you can fight poverty and famine by airlifting vast amounts of money and food into poor and hungry places.

That is pretty much what governments and aid groups have been doing for many years. So why do the same problems persist in the same places?

Because poverty is a symptom—of the absence of a workable economy built on credible political, social, and legal institutions. It’s hard to fix that even with planeloads of cash. Similarly, the lack of food is usually not the root cause of famine. “Starvation is the characteristic of some people not having enough food to eat,” the economist Amartya Sen wrote in his landmark book Poverty and Famines. “It is not the characteristic of there being not enough food to eat.” In countries whose political and economic institutions are built to serve the appetites of a corrupt few rather than the multitudes, food is routinely withheld from the people who need it most. In the United States, meanwhile, we throw away an astonishing 40 percent of the food we buy.

Alas, fixing corruption is a lot harder than airlifting food. So even when you do get to the root cause of the problem, you may still be stuck. But as we’ll see in this chapter, the stars occasionally align and the payoff can be huge.

In Freakonomics, we examined the causes of the rise and fall in violent crime in the United States. In 1960, crime began a sudden climb. By 1980, the homicide rate had doubled, reaching a historic peak. For several years crime stayed perilously high, but in the early 1990s it began to fall and kept falling.

So what happened?

A great many explanations were put forth, and in our book we put a number of them under empirical scrutiny. Consider two sets of possible explanations. Set A: tougher gun laws and the booming economy of the 1990s. Set B: more cops, more people in prison, and a collapsing crack market. One set had a strong impact on lowering crime and one did not. Can you guess which is which?

Each set is quite plausible, isn’t it? Indeed, until you roll up your sleeves and crunch some data, it is virtually impossible to know the right answer.

So what do the data say?

The A factors, as logical as they may seem, did not contribute to the crime drop. Maybe this surprises you. Gun murders are down? Well, you figure, that must be from all those tough new gun laws—until you examine the data and find that most people who commit crimes with guns are almost entirely unaffected by current gun laws.

You might also think the go-go economy of the 1990s would have helped, but historical data show there is a surprisingly weak relationship between economic cycles and violent crime. Indeed, as the Great Recession took hold in 2007, a chorus of pundits warned that our long, lovely reprieve from violent crime was over. But it wasn’t. Between 2007 and 2010, the worst years of the recession, homicide fell an additional 16 percent. The homicide rate today is, improbably, lower than it was in 1960.

The B factors, meanwhile—more cops, more people in prison, and a collapsing crack market—did contribute to the crime drop. But once we tallied up the cumulative impact of these factors, they still couldn’t account for the entire crime drop. There had to be something else.

Let’s take a closer look at the B factors. Do they address the root causes of crime? Not really. They might more plausibly be called present-tense factors. Sure, hiring more cops and putting more people in prison may shrink the short-term supply of criminals, but what about the long-term supply?

In Freakonomics, we identified one missing factor: the legalization of abortion in the early 1970s. The theory was jarring but simple. A rise in abortion meant that fewer unwanted children were being born, which meant fewer children growing up in the sort of difficult circumstances that increase the likelihood of criminality.

Given the history of abortion in the U.S.—there are few issues as morally and politically fraught—this theory was bound to be discomfiting for abortion opponents and supporters alike. We steeled ourselves for a shouting match.

Interestingly, our argument didn’t generate much hate mail. Why? Our best guess is that readers were smart enough to understand that we had identified abortion as a mechanism for the crime drop but not the actual root cause. So what is the root cause? Simply this: too many children were being brought up in bad environments that led them to crime. As the first post-abortion generation came of age, it included fewer children who’d been raised in such environments.

It can be unsettling, even frightening, to stare a root cause in the eye. Maybe that’s why we so often avoid it. It is a lot easier to argue about cops and prisons and gun laws than the thorny question of what makes a parent fit to raise a child. But if you want to have a worthwhile conversation about crime, it makes sense to start by talking about the benefits of good, loving parents who give their children a chance to lead safe and productive lives.

That may not be a simple conversation. But when you are dealing with root causes, at least you know you are fighting the real problem and not just boxing with shadows.

It may seem daunting to travel backward a generation or two in order to understand the root cause of a problem. But in some cases, a generation is barely the blink of an eye.

Let’s pretend you are a German factory worker. You’re sitting in a beer hall with friends after a shift, demoralized by your financial standing. The national economy is humming along, but it seems as if you and everyone else in town are running in place. The people who live just a few towns over, meanwhile, are doing considerably better. Why?

To find out, we must travel all the way back to the sixteenth century. In 1517, a distraught young German priest named Martin Luther issued a list of ninety-five grievances against the Roman Catholic Church. One practice he found particularly odious was the sale of indulgences—that is, the Church’s practice of raising cash by forgiving the sins of big-ticket donors. (One senses that today Luther would rail against the tax treatment enjoyed by hedge funds and private-equity firms.)

Luther’s bold move launched the Protestant Reformation. Germany at the time was made up of more than one thousand independent territories, each ruled by its own prince or duke. Some of these men followed Luther and embraced Protestantism; others stayed loyal to the Church. This schism would play out for decades all over Europe, often with immense bloodshed. In 1555, a temporary settlement was reached, the Peace of Augsburg, which allowed each German prince to freely select the religion to be practiced in his territory. Moreover, if a Catholic family lived in a territory whose prince chose Protestantism, the Peace allowed them to freely migrate to a Catholic area, and vice versa.

And so it was that Germany became a religious patchwork. Catholicism remained popular in the southeast and northwest while Protestantism took off in the central and northeast regions; other areas were mixed.

Fast-forward some 460 years to today. A young economist named Jörg Spenkuch discovered that if he laid a map of modern Germany over a map of sixteenth-century Germany, he could see that the religious patchwork was largely intact. The old Protestant areas are still largely Protestant while the old Catholic areas are still largely Catholic (except for the former East Germany, which took on a lot of atheism during its Communist period). The choices the princes made centuries ago still hold sway.

Perhaps this isn’t so surprising. Germany, after all, is a nation steeped in tradition. But Spenkuch, while playing around with those maps, found something that did surprise him. The religious patchwork of modern Germany also overlapped with an interesting economic patchwork: the people living in Protestant areas earned more money than those in Catholic areas. Not a great deal more—about 1 percent—but the difference was clear. If the prince in your area had sided with the Catholics, you were likely to be poorer today than if he had followed Martin Luther.

How to explain this income patchwork? There could of course be present-tense reasons. Perhaps the higher earners got more education, or had better marriages, or lived closer to the high-paying jobs found in big cities.

But Spenkuch analyzed the relevant data and found that none of these factors could account for the income gap. Only one factor could: religion itself. He concluded that the people in Protestant areas make more money than the people in Catholic areas simply because they are Protestants!

Why? Was some kind of religious cronyism to blame, with Protestant bosses giving better jobs to Protestant workers? Apparently not. In fact, the data showed that Protestants don’t earn higher hourly wages than Catholics—and yet they do manage to have higher incomes overall. So how does Spenkuch explain the Protestant-Catholic income gap?

He identified three factors:

1. Protestants tend to work a few more hours per week than Catholics.

2. Protestants are more likely than Catholics to be self-employed.

3. Protestant women are more likely than Catholic women to work full-time.

It appears that Jörg Spenkuch had found living proof of the Protestant work ethic, the theory put forth in the early 1900s by the German sociologist Max Weber, who argued that capitalism took off in Europe in part because Protestants embraced the earthly notion of hard work as part of their divine mission.

So what does all this mean for the disgruntled factory worker drowning his economic sorrows in the beer hall? Unfortunately, not much. For him, it’s probably too late unless he wants to shake up his life and start working harder. But at least he can push his kids to follow the lead of the hardworking Protestants a few towns over.*

Once you start looking at the world through a long lens, you will find many examples of contemporary behaviors that are driven by root causes from centuries past.

Why, for instance, are some Italian towns more likely than others to participate in civic and philanthropic programs? Because, as some researchers argue, during the Middle Ages these towns were free city-states rather than areas ruled by Norman overlords. Such an independent history apparently fosters a lasting trust in civic institutions.

In Africa, some countries that regained independence from their colonial rulers have experienced brutal wars and rampant corruption; others haven’t. Why? One pair of scholars found an answer that goes back many years. When the European powers began their mad “Scramble for Africa” in the nineteenth century, they carved up existing territories by looking at maps from afar. When creating new borders, they considered two essential criteria: land mass and water. The actual Africans who lived in these territories were not a major concern for the colonialists, since to them one African looked pretty much like the next one.

This method might make sense if you are cutting a cherry pie. But a continent is more problematic. These new colonial borders often split up large, harmonious ethnic groups. Suddenly, some members of the group became residents of one new country; others, a second country—along with, often, members of a different ethnic group with whom the first group wasn’t so harmonious. Ethnic strife tended to be tamped down by colonial rule, but when the Europeans eventually returned to Europe, the African countries where unfriendly ethnic groups had been artificially jumbled were far more likely to devolve into war.

The scars of colonialism still haunt South America as well. Spanish conquistadors who found silver or gold in Peru, Bolivia, and Colombia would enslave the locals to work in the mines. What kind of long-term effect did this have? As several economists have found, people in those mining areas are to this day poorer than their neighbors, and their children are less likely to be vaccinated or get an education.

There is another case—a bizarre one, to be sure—in which the long arm of slavery reaches across history. Roland Fryer, an economist at Harvard, is consumed with closing the gap between blacks and whites in education, income, and health. Not long ago, he set out to understand why whites outlive blacks by several years. One thing was clear: heart disease, historically the biggest killer of both whites and blacks, is far more common among blacks. But why?

Fryer crunched all sorts of numbers. But he found that none of the obvious stressors—diet, smoking, even poverty—could account for the entire gap.

Then he found something that might. Fryer happened upon an old illustration called “An Englishman Tastes the Sweat of an African.” It showed a slave trader in West Africa who appeared to be licking the slave’s face.

Why would he do that?

One possibility was that he was somehow screening the slave for illness, not wanting to contaminate the rest of his cargo. Fryer wondered if the slave trader was perhaps testing the slave’s “saltiness.” That, after all, is what sweat tastes like. If so, why—and might this answer inform the broader agenda Fryer was pursuing?

The ocean journey of a slave from Africa to America was long and gruesome; many slaves died en route. Dehydration was a major cause. Who, Fryer wondered, is less likely to suffer from dehydration? Someone with a high degree of salt sensitivity. That is, if you are able to retain more salt, you will also retain more water—and be less likely to die during the Middle Passage. So perhaps the slave trader in the illustration wanted to find the saltier slaves in order to ensure his investment.

Fryer, who is black, mentioned this theory to a Harvard colleague, David Cutler, a prominent health economist who is white. Cutler at first thought it was “absolutely crazy,” but upon deeper inspection it made sense. Indeed, some earlier medical research made a similar claim, although it was in considerable dispute.

Fryer began to fit the pieces together. “You might think anyone who could survive a voyage like this would be very fit and therefore would have a longer life expectancy,” he says. “But actually this peculiar selection mechanism says that you can survive an ordeal such as the Middle Passage, but it’s horrible for hypertension and related diseases. And salt sensitivity is a highly heritable trait, meaning that your descendants, i.e., black Americans, stand a pretty good chance of being hypertensive or of having cardiovascular disease.”

Fryer looked for further evidence that might support his theory. American blacks are about 50 percent more likely to have hypertension than American whites. Again, this could be due to differences like diet and income. So what did the hypertension rates of other black populations look like? Fryer found that among Caribbean blacks—another population brought from Africa as slaves—hypertension rates were also elevated. But blacks who still live in Africa, he noted, have hypertension rates statistically indistinguishable from those of whites in America. The evidence was hardly conclusive, but Fryer was convinced that the selection mechanism of the slave trade could be a long-lasting root cause of African-Americans’ higher mortality rates.

As you can imagine, Fryer’s theory isn’t universally popular. Many people are uncomfortable talking about genetic racial difference at all. “People e-mail me and say, ‘Can’t you see the slippery slope here!? Can you see the perils of this argument?’ ”

Fresh medical research may prove that the salt-sensitivity theory isn’t even right. But if it is, even in small measure, the potential benefits are huge. “There’s something that can be done,” Fryer says. “A diuretic that helps your body get rid of your salts. A little common pill.”

You might think that medicine, with such strong doses of science and logic, is one field in which root causes are always well understood.

Alas, you would be wrong. The human body is a complex, dynamic system about which a great deal remains unknown. Writing as recently as 1997, the medical historian Roy Porter put it this way: “We live in an age of science, but science has not eliminated fantasies about health; the stigmas of sickness, the moral meanings of medicine continue.” As a result, gut hunches are routinely passed off as dogma while conventional wisdom flourishes even when there is no data to back it up.

Consider the ulcer. It is essentially a hole in your stomach or small intestine, producing a searing and surging pain. By the early 1980s, the causes of an ulcer were said to be definitively known: they were inherited or caused by psychological stress and spicy food, either of which could produce an overabundance of stomach acid. To anyone who has ever eaten a pile of jalapeños, this seems plausible. And as any doctor could attest, a patient with a bleeding ulcer was likely to be stressed out. (A doctor might just as easily note that shooting victims tend to bleed a lot, but that doesn’t mean the blood caused the gunshot.)

Since the causes of ulcers were known, so too was the treatment. Patients were advised to relax (to cut down on stress), drink milk (to soothe the stomach), and take a Zantac or Tagamet pill (to block the production of stomach acid).

How well did this work?

To put it charitably: so-so. The treatment did help manage a patient’s pain, but the condition wasn’t cured. And an ulcer is more than a painful nuisance. It can easily become fatal due to peritonitis (caused by a hole going clear through the stomach wall) or complications from bleeding. Some ulcers required major surgery, with all the attendant complications.

Although ulcer patients didn’t make out so well under the standard treatment, the medical community did just fine. Millions of patients required the constant service of gastroenterologists and surgeons, while pharmaceutical companies got rich: the acid blockers Tagamet and Zantac were the first true blockbuster drugs, taking in more than $1 billion a year. By 1994, the global ulcer market was worth more than $8 billion.

Over the years, the occasional medical researcher had suggested that ulcers and other stomach ailments, including cancer, might have a different root cause—perhaps even bacterial. But the medical establishment was quick to point out the glaring flaw in this theory: How could bacteria possibly survive in the acidic cauldron of the stomach?

And so the ulcer-treatment juggernaut rolled on. There wasn’t much of an incentive to find a cure—not, at least, by the people whose careers depended on the prevailing ulcer treatment.

Fortunately the world is more diverse than that. In 1981, a young Australian medical resident named Barry Marshall was on the hunt for a research project. He had just taken up a rotation in the gastroenterology unit at Royal Perth Hospital, where a senior pathologist had stumbled onto a mystery. As Marshall later described it: “We’ve got 20 patients with bacteria in their stomach, where you shouldn’t have bacteria living because there’s too much acid.” The senior doctor, Robin Warren, was looking for a young researcher to help “find out what’s wrong with these people.”

The squiggly bacteria resembled a species called Campylobacter, which can cause infection in people who spend time with chickens. Were these human bacteria indeed Campylobacter? What kind of diseases might they lead to? And why were they so concentrated among patients with gastric trouble?

Barry Marshall, as it turns out, was already familiar with Campylobacter, for his father had worked as a refrigeration engineer in a chicken-packing plant. Marshall’s mother, meanwhile, was a nurse. “We used to have a lot of arguments about what was really true in medicine,” he told an interviewer, the esteemed medical journalist Norman Swan. “She would ‘know’ things because they were folklore, and I would say, ‘That’s old-fashioned. There’s no basis for it in fact.’ ‘Yes, but people have been doing it for hundreds of years, Barry.’ ”

Marshall was excited by the mystery he inherited. Using samples from Dr. Warren’s patients, he tried to culture the squiggly bacteria in the lab. For months, he failed. But after an accident—the culture was left in the incubator three days longer than intended—it finally grew. It wasn’t Campylobacter; it was a previously undiscovered bacterium, henceforth known as Helicobacter pylori.

“We cultured it from lots of people after that,” Marshall recalls. “Then we could say, ‘We know which antibiotic kills these bacteria.’ We figured out how they could live in the stomach, and we could play around with it in the test tube, do all kinds of useful experiments… We were not looking for the cause of ulcers. We wanted to find out what these bacteria were, and we thought it would be fun to get a nice little publication.”

Marshall and Warren continued to look for these bacteria in patients who came to see them with stomach trouble. The doctors soon made a startling discovery: among 13 patients with ulcers, all 13 also had the squiggly bacteria! Was it possible that H. pylori, rather than merely showing up in these patients, was actually causing the ulcers?

Back in the lab, Marshall tried infecting some rats and pigs with H. pylori to see if they developed ulcers. They didn’t. “So I said, ‘I have to test it out on a human.’ ”

The human, Marshall decided, would be himself. He also decided not to tell anyone, even his wife or Robin Warren. First he had a biopsy taken of his stomach to make sure he didn’t already have H. pylori. All clear. Then he swallowed a batch of the bacteria that he had cultured from a patient. In Marshall’s mind, there were two likely possibilities:

1. He would develop an ulcer. “And then, hallelujah, it’d be proven.”

2. He wouldn’t develop an ulcer. “If nothing happened, my two years of research to that point would have been wasted.”

Barry Marshall was probably the only person in human history rooting for himself to get an ulcer. If he did, he figured it would take a few years for symptoms to arise.

But just five days after he gulped down the H. pylori, Marshall began having vomiting attacks. Hallelujah! After ten days, he had another biopsy taken of his stomach, “and the bacteria were everywhere.” Marshall already had gastritis and was apparently well on his way to getting an ulcer. He took an antibiotic to wipe out the infection. His and Warren’s investigation had proved that H. pylori was the true cause of ulcers—and, as further investigation would show, of stomach cancer as well. It was an astonishing breakthrough.

Granted, there was much testing to come—and an enormous pushback from the medical community. Marshall was variously ridiculed, pilloried, and ignored. Are we to seriously believe that some loopy Australian found the cause of ulcers by swallowing a batch of some bacteria that he says he discovered himself? No $8 billion industry is ever happy when its reason for being is under attack. Talk about gastric upset! An ulcer, rather than requiring a lifetime of doctor’s visits and Zantac and perhaps surgery, could now be vanquished with a cheap dose of antibiotics.

It took years for the ulcer proof to fully take hold, for conventional wisdom dies hard. Even today, many people still believe that ulcers are caused by stress or spicy foods. Fortunately, doctors now know better. The medical community finally came to acknowledge that while everyone else was simply treating the symptoms of an ulcer, Barry Marshall and Robin Warren had uncovered its root cause. In 2005, they were awarded the Nobel Prize.

The ulcer discovery, stunning as it was, constitutes just one small step in a revolution that is only beginning to unfold, a revolution aimed toward finding the root cause of illness rather than simply swatting away symptoms.

H. pylori, it turns out, isn’t some lone-wolf bacterial terrorist that managed to slip past security and invade the stomach. In recent years, enterprising scientists—aided by newly powerful computers that facilitate DNA sequencing—have learned that the human gut is home to thousands of species of microbes. Some are good, some are bad, others are situationally good or bad, and many have yet to reveal their nature.

Just how many microbes do each of us host? By one estimate, the human body contains ten times as many microbial cells as human cells, which puts the number easily in the trillions and perhaps in the quadrillions. This “microbial cloud,” as the biologist Jonathan Eisen calls it, is so vast that some scientists consider it the largest organ in the human body. And within it may lie the root of much human health … or illness.

In labs all over the world, researchers have begun to explore whether the ingredients in this sprawling microbial stew—much of which is hereditary—may be responsible for diseases like cancer and multiple sclerosis and diabetes, even obesity and mental illness. Does it seem absurd to think that a given ailment that has haunted humankind for millennia may be caused by the malfunction of a microorganism that has been merrily swimming through our intestines the whole time?

Perhaps—just as it seemed absurd to all those ulcer doctors and pharmaceutical executives that Barry Marshall knew what he was talking about.

To be sure, these are early days in microbial exploration. The gut is still a frontier—think of the ocean floor or the surface of Mars. But already the research is paying off. A handful of doctors have successfully treated patients suffering from intestinal maladies by giving them a transfusion of healthy gut bacteria.

Where do those healthy bacteria come from, and how are they sluiced into the sick person’s gut? Before going further, let us offer two notes of caution:

1. If you happen to be eating as you read this, you may wish to take a break.

2. If you are reading this book many years after it was written (assuming there are still people, and they still read books), the method described below may seem barbarically primitive. In fact we hope that is the case, for it would mean the treatment has proven valuable but that delivery methods have improved.

Okay, so a sick person needs a transfusion of healthy gut bacteria. What is a viable source?

Doctors like Thomas Borody, an Australian gastroenterologist who drew inspiration from Barry Marshall’s ulcer research, have identified one answer: human feces. Yes, it appears that the microbe-rich excrement of a healthy person may be just the medicine for a patient whose own gut bacteria are infected, damaged, or incomplete. Fecal matter is obtained from a “donor” and blended into a saline mixture that, according to one Dutch gastroenterologist, looks like chocolate milk. The mixture is then transfused, often via an enema, into the gut of the patient. In recent years, doctors have found fecal transplants to be effective in wiping out intestinal infections that antibiotics could not. In one small study, Borody claims to have used fecal transplants to effectively cure people who were suffering from ulcerative colitis—which, he says, was “previously an incurable disease.”

But Borody has been going beyond mere intestinal ailments. He claims to have successfully used fecal transplants to treat patients with multiple sclerosis and Parkinson’s disease. Indeed, while Borody is careful to say that much more research is needed, the list of ailments that may have a root cause living in the human gut is nearly endless.

To Borody and a small band of like-minded brethren who believe in the power of poop, we are standing at the threshold of a new era in medicine. Borody sees the benefits of fecal therapy as “equivalent to the discovery of antibiotics.” But first, there is much skepticism to overcome.

“Well, the feedback is very much like Barry Marshall’s,” says Borody. “I was initially ostracized. Even now my colleagues avoid talking about this or meeting me at conferences. Although this is changing. I’ve just had a nice string of invitations to speak at national and international conferences about fecal transplantation. But the aversion is always there. It’d be much nicer if we could come up with a non-fecal-sounding therapy.”

Indeed. One can imagine many patients being turned off by the words fecal transplant or, as researchers call it in their academic papers, “fecal microbiota transplantation.” The slang used by some doctors (“shit swap”) is no better. But Borody, after years of performing this procedure, believes he has finally come up with a less disturbing name.

“Yes,” he says, “we call it a ‘transpoosion.’ ”