Gut Feelings: The Intelligence of the Unconscious - Gerd Gigerenzer (2007)

Part 2. GUT FEELINGS IN ACTION

It is ironic, moreover, that the best lessons in “fast and frugal rules of thumb” may well come from understanding the cognitive processes of those master clinicians who consistently make superb decisions without obvious recourse to the canon of evidence-based medicine.

—C. D. Naylor1

9. LESS IS MORE IN HEALTH CARE

A glass of red wine at dinner prevents heart attacks; butter kills you; all treatments and tests are desirable, as long as you can afford them—most of us have strong intuitions about what is good and bad in health care. Although we act on these beliefs, they are typically based on rumor, hearsay, or trust. Few make a serious effort to find out what medical research knows, although many consult consumer reports when buying a refrigerator or computer. How do economists make health care decisions? We asked 133 male economists at the 2006 meeting of the American Economic Association whether they take PSA (prostate-specific antigen) tests for prostate cancer screening, and why. Among those over fifty, the majority participated in screening, but very few had read any medical literature on the topic and two-thirds said that they did not weigh the pros and cons of screening.2 Most just did whatever their doctor told them to do. Like John Q. Public, they relied on the gut feeling:


If you see a white coat, trust it.


Trust in authority, rumor, and hearsay were efficient guides in human history before the advent of books and medical research. Learning by firsthand experience was potentially deadly; finding out by oneself which plants were poisonous was a bad strategy. Is blind trust in the health expert still sufficient today, or do patients need to research more carefully? The answer depends not only on the expertise of your doctor but on the legal and financial system in which your health care system operates.

CAN DOCTORS TRUST PATIENTS?

Daniel Merenstein, a family physician, is not sure he will ever be the doctor he wants to be. As a third-year resident, he saw a highly educated fifty-three-year-old man for a physical examination.3 They discussed the importance of diet, exercise, wearing seat belts, and the risks and benefits of screening for prostate cancer. While proper diet, exercise, and seat belts have proved beneficial to health, there is no proof that men who participate in screening with PSA tests live longer than those who don’t—contrary to what some physicians and patients believe. But there is proof that those who test positive may be harmed through treatments for slow-growing cancers that, even if untreated, would not cause problems in a man’s lifetime. After radical prostatectomy, about three out of ten men become incontinent and six out of ten impotent.4 This is why nearly all national guidelines recommend that physicians discuss the pros and cons of PSA tests with the patient, and why the U.S. Preventive Services Task Force concludes that the evidence is insufficient to recommend for or against routine PSA screening.5 Merenstein spent much time keeping up-to-date with current medical studies so he could practice what is known as evidence-based medicine. After learning about the pros and cons, the patient declined the PSA test. Merenstein never saw the man again; after Merenstein graduated, the patient went to another office. His new doctor ordered PSA testing without discussing its risks and benefits with him.

The patient was unlucky. He was subsequently diagnosed with a horrible, incurable form of prostate cancer. Although there is no evidence that early detection of this cancer could have saved or prolonged the man’s life, Dr. Merenstein and his residency were put on trial in 2003. Merenstein assumed that he’d be accused of failing to discuss prostate cancer screening with the patient. Yet the plaintiff’s attorney claimed that the PSA test was the standard of care in the Commonwealth of Virginia and that Merenstein should have ordered the test, not discussed it. Four Virginia physicians testified that they simply do the test without informing their patients. The defense brought in national experts who testified that the benefits of PSA screening are unproved and questionable, whereas severe harms are documented, and who emphasized the national guidelines on shared decision making.

In his closing arguments, the plaintiff’s lawyer contemptuously referred to “evidence-based medicine” as merely a cost-saving method, naming the residency and Merenstein as its disciples and the experts as its founders. He called upon the jury to return a verdict that would teach residencies not to send more doctors out on the streets believing in evidence-based medicine. The jury was convinced. Merenstein was exonerated, but his residency was found liable for $1 million. Before the trial, Merenstein believed in the value of keeping up with the current medical literature and bringing it to the patient. He now looks at the patient as a potential plaintiff. Once burned, he feels he has no choice but to overtreat patients, even at the risk of causing unnecessary harm, in order to protect himself from them. “I order more tests now, am more nervous around patients; I am not the doctor I should be.”6

CAN PATIENTS TRUST DOCTORS?

The story of young Kevin in the second chapter makes us wonder about the damage caused by overdiagnosis in health care. Merenstein and his residency have learned the hard way that they are supposed to perform tests on their patients in order to protect themselves, even if a test’s potential harms are proved and its potential benefits are not. Clearly something is going wrong with health care. The good old-fashioned gut feeling “If you see a white coat, trust it” has done much good. But it cannot work as well when physicians fear lawsuits, overmedication and overdiagnosis have become a lucrative business, and aggressive direct-to-consumer advertising has become legal. All lead instead to a decrease in the quality and an increase in the costs of health care. Let me define two consequences:7

Overdiagnosis is the detection of a medical condition through testing that otherwise would not have been noticed within the patient’s lifetime.

Overtreatment is the treatment of a medical condition that otherwise would not have been noticed within the patient’s lifetime.

Would you rather receive a thousand dollars in cash or a free total-body computed tomography (CT) scan? In a telephone survey of a random sample of five hundred Americans, 73 percent said they would prefer the CT.8 Do these optimists know what they are getting? Obviously not. There is no evidence to support the benefit or even the safety of total-body CT screening; it is not endorsed by any professional medical organization and is even discouraged by several.9 Nonetheless, CT scans and other high-technology screening tests are successfully marketed by an increasing number of independent entrepreneurs, including physicians. Professional TV actors dressed up as doctors spread slogans like “Take the test, not the chance.”

Physicians who sell CT screening might respond that people have the right to make use of it without waiting years before its effectiveness or harms have been proved—after all, a normal result can give consumers “peace of mind.” Sounds comforting, but is it true that one has peace of mind if the CT results are normal? Absolutely not; it’s more an illusion of certainty. Consider electron-beam CT, which is performed to identify persons with an increased risk of coronary artery disease. The chance that it correctly identifies persons with increased risk is only 80 percent; that is, 20 percent of those who are at risk are sent home with a false peace of mind. Its false-alarm rate is even worse. Among people who are not at risk, 60 percent are nevertheless told that their results are suspicious.10 That is, many of those who have no reason to worry may spend the rest of their lives frightened about a nonexistent medical condition. I have rarely heard of such a poor high-tech test, worse than other noninvasive and less expensive testing methods. I myself would rather pay a thousand dollars to avoid the test—and save my peace of mind.
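The electron-beam CT numbers above (80 percent sensitivity, 60 percent false-alarm rate) can be turned into natural frequencies to see why “peace of mind” is an illusion. The base rate of 10 in 100 people at increased risk below is an assumed figure for illustration only; the chapter does not state one.

```python
# Natural-frequency sketch of the electron-beam CT figures in the text.
# Sensitivity (80%) and false-alarm rate (60%) come from the chapter;
# the 10% base rate is an ASSUMED number chosen purely for illustration.
def ct_outcomes(population, base_rate, sensitivity, false_alarm_rate):
    at_risk = population * base_rate
    not_at_risk = population - at_risk
    hits = at_risk * sensitivity                   # at-risk people correctly flagged
    misses = at_risk - hits                        # sent home with false peace of mind
    false_alarms = not_at_risk * false_alarm_rate  # healthy people told "suspicious"
    correct_rejections = not_at_risk - false_alarms
    return hits, misses, false_alarms, correct_rejections

hits, misses, false_alarms, correct = ct_outcomes(1000, 0.10, 0.80, 0.60)
print(f"Of 1,000 people: {misses:.0f} at-risk sent home reassured, "
      f"{false_alarms:.0f} healthy people needlessly alarmed")
# Of those flagged, only hits / (hits + false_alarms) are actually at risk:
print(f"Chance a 'suspicious' result signals real risk: {hits / (hits + false_alarms):.0%}")
```

Under this assumed base rate, a “suspicious” result means real risk only about one time in eight, while one in five of the genuinely at-risk walk away falsely reassured.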

Do doctors take the tests they recommend to patients? I once gave a lecture to a group of sixty physicians, including representatives of physicians’ organizations and health insurance companies. The atmosphere was casual, and the organizer’s warm personality helped to develop a sense of common agenda. Our discussion turned to breast cancer screening, in which some 75 percent of American women over fifty participate. A gynecologist remarked that after a mammogram, it is she, the physician, who is reassured: “I fear not recommending a mammogram to a woman who may later come back with breast cancer and ask me ‘Why didn’t you do a mammogram?’ So I recommend that each of my patients be screened. Yet I believe that mammography screening should not be recommended. But I have no choice. I think this medical system is perfidious, and it makes me nervous.”11 Another doctor asked her whether she herself participates in mammography screening. “No,” she said, “I don’t.” The organizer then asked all sixty physicians the same question (for men: “If you were a woman, would you participate?”). The result was an eye-opener: not a single female doctor in this group participated in screening, and no male physician said he would do so if he were a woman.

If a woman is a lawyer, or the wife of a lawyer, does she get better treatment? Lawyers seem to be regarded by doctors as especially litigious patients who should be treated with caution when it comes to risky procedures such as surgery. The rate of hysterectomy in the general population in Switzerland was 16 percent, whereas among lawyers’ wives it was only 8 percent—among female doctors it was 10 percent.12 In general, the less well educated a woman is and the better private insurance she has, the more likely it is that she’ll get a hysterectomy. Similarly, children in the general population had significantly more tonsillectomies than the children of physicians and lawyers. Lawyers and their children apparently get better treatment, but here, better means less.

So what do you do if your mother is sick and you want to know what your doctor really thinks? Here is a helpful rule:


Don’t ask your doctors what they recommend. Ask them what they would do if it were their mother.


My experience has been that doctors change their advice when I ask about their mother or other relatives. The question shifts their point of view; a mother would not sue. Yet not every patient is ready to accept that doctors are under external pressure, and that patients must therefore take on some responsibility for their treatment. The doctor-patient relationship is deeply emotional, as the case of a friend and novelist illustrates.

“We can’t meet tomorrow morning, I’ve got to go to my doctor,” he told me.

“I hope it’s nothing serious?”

“Only a colonoscopy,” my friend reassured me.

“Only? Do you have pain?”

“No,” he replied, “my doctor said I need to have one, I’m forty-five. Don’t worry, in my family, nobody ever had colon cancer.”

“It can hurt. Did your doctor tell you what the possible benefits of a colonoscopy are?”

“No,” my friend said, “he just said it’s a routine test, recommended by medical organizations.”

“Why don’t we find out on the Internet?”

We first looked up the report of the U.S. Preventive Services Task Force. It said that there is insufficient evidence for or against routine screening with colonoscopy. My friend is Canadian and responded that he does not bank on everything American. So we looked up the Canadian Task Force report, and it had the same result. Just to be sure, we checked Bandolier at Oxford University in the United Kingdom, and once again we found the same result. No serious health association we looked up reported that people should have a routine colonoscopy—after all, a colonoscopy can be extremely unpleasant—but many recommended the simpler, cheaper, and noninvasive fecal occult blood test. What did my friend do? If you think that he canceled his doctor’s appointment the next day, you are as wrong as I was. Unable to bear the evidence, he got up and left, refusing to discuss the issue any further. He wanted to trust his doctor.

DOCTORS’ DILEMMA

Patients tend to trust their doctors, but they do not always consider the situation in which the doctors find themselves. Most physicians try to do their best in a world in which time and knowledge are severely limited. In the United States, the average time patients have to describe their complaints before they are interrupted by their physicians is twenty-two seconds. The total time the physician spends with a patient is five minutes—“how are you” and other formal niceties included. That is markedly different in countries such as Switzerland and Belgium that have an “open market” in which patients have access to more than one general practitioner or specialist. In this competitive situation the doctor invests time in his patients to encourage them to return. Here the average duration of a visit is fifteen minutes.13

Continuing education is indispensable in the rapidly changing world of medicine. Yet most physicians have neither the time to read even a few of the thousands of articles published every month in medical journals nor the methodological skills to evaluate the claims in these articles. Rather, continuing education mostly happens in seminars sponsored by the pharmaceutical industry, usually at a nice vacation spot, with spouses’ and other expenses included. Pharmaceutical firms conveniently provide summaries of scientific studies of their featured products, which their representatives distribute in the form of advertisements and leaflets to physicians. As a recent investigation revealed, these are not neutral summaries. The assertions in 175 different leaflets distributed to German physicians could be verified in only 8 percent of the cases.14 In the remaining 92 percent of cases, statements in the original study were falsely reported, severe side effects of medication were not revealed, the period during which medication could safely be taken was exaggerated, or—should doctors have wanted to check the original studies—the cited source was not provided or was impossible to find. As a consequence, many physicians have only a tenuous connection with the latest medical research.

For patients and doctors alike, geography is destiny. The surgeons in one medical referral region in Vermont removed the tonsils of 8 percent of the children living there, while those in another community removed the tonsils of 70 percent of the children. In one region in Iowa, 15 percent of all men had undergone prostate surgery by age eighty-five; in another region, it was 60 percent. Women are subject to this same geographical power over their bodies. In one region in Maine, 20 percent of the women had a hysterectomy by the age of seventy; in another region, over 70 percent underwent this operation.15 There is little reason to believe that these striking regional differences have much to do with patients’ conditions. Whether or not people undergo a treatment depends on local custom, while the kind of treatment depends on the attending physician. For localized prostate cancer, for instance, most urologists recommend radical surgery, whereas most radiation oncologists recommend radiation treatment. The authors of the Dartmouth Atlas of Health Care conclude that “the ‘system’ of care in the United States is not a system at all, but a largely unplanned and irrational sprawl of resources, undisciplined by the laws of supply and demand.”16

At a time when everyone is worried about exploding health care costs, we spend billions of dollars every year on care that provides little or no benefit to people, and sometimes even causes them harm. Can we counteract these problems and instill in our health care system a good dose of rationality? The system in fact needs a three-pronged cure: it must develop efficient and transparent policies in place of physicians’ defensive practices and local custom; it needs to find common ground among medical experts about what constitutes good treatment; and finally it needs reform in the practice of litigation that allows physicians to do what is best for the patient rather than follow self-protecting procedures. In the next section, I illustrate how one can achieve the first goal.

HOW TO IMPROVE PHYSICIANS’ JUDGMENTS

There are two classical proposals, both of which follow the spirit of Franklin’s rule. According to clinical-decision theory, patients and doctors should choose between alternative treatments by surveying all possible consequences, and then estimating the numerical probability and utility of each consequence. One then multiplies these, adds them up, and chooses the treatment with the highest expected utility. The beauty of this approach is that it embodies shared decision making: the physician provides the alternatives, consequences, and the probabilities, and the patient is responsible for attaching numbers to the potential benefits and harms. Yet decision theorists have convinced few doctors to engage in this calculation because it is time-consuming, and most patients resist attaching numerical values to the potential harms of a tumor versus those of a heart attack. Proponents of clinical-decision analysis will respond that their intuitions need to be changed, yet proof that expected-utility calculations are the best form of clinical decision does not exist, and there are even reports that they do not always lead to better decisions. Last but not least, when intuition clashes with their deliberate reasoning, people tend to be less satisfied with the choice they make.17
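The expected-utility procedure described above can be sketched in a few lines: for each treatment, multiply each consequence’s probability by its utility, sum, and pick the maximum. The treatments, probabilities, and utilities below are hypothetical placeholders, not figures from the text; in the shared-decision model, the doctor would supply the probabilities and the patient the utility numbers.

```python
# A minimal sketch of the clinical-decision-theory calculation described above.
# All probabilities and utilities are HYPOTHETICAL placeholders for illustration.
def expected_utility(consequences):
    """consequences: list of (probability, utility) pairs for one treatment."""
    return sum(p * u for p, u in consequences)

# Probabilities per treatment must cover the possible outcomes (sum to 1);
# utilities are the patient's ratings on a 0-100 scale.
treatments = {
    "surgery":   [(0.70, 90), (0.20, 40), (0.10, 0)],  # cure / complications / death
    "radiation": [(0.60, 85), (0.35, 60), (0.05, 0)],
}

best = max(treatments, key=lambda t: expected_utility(treatments[t]))
for name, consequences in treatments.items():
    print(name, expected_utility(consequences))
print("choose:", best)
```

Even this toy version shows why few patients cooperate: the method demands precise numbers for outcomes, such as impotence versus death, that most people are unwilling or unable to quantify.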

The second proposal is to introduce complex statistical aids for physicians making treatment decisions, which might lead to better results than do their intuitions.18 We’ll see this type of method in the next section. Although these decision aids are more widely adopted than expected-utility calculations, they are still rare in clinical practice, and are again at odds with medical intuition. A majority of physicians don’t understand complex decision aids and end up abandoning them. As a result, physicians are left with their own clinical intuition, biased by self-protective treatment, specialty, and geography.

Is there a way to respect the nature of intuitions and improve treatment decisions? I believe that the science of intuition provides such an alternative. To that end, I was glad to read in the renowned medical journal Lancet that our research on rules of thumb is starting to have an impact on medicine. As the epigraph to this chapter reveals, rules of thumb are seen as an explication of clinical masterminds’ intuitions. Yet in the same issue of the Lancet, another article provided a different interpretation of our work: “The next frontier will involve fast and frugal heuristics; rules for patients and clinicians alike.”19 Here, rules of thumb are seen as an alternative to complex decision analysis. My own conviction is that physicians already use simple rules of thumb but for fear of lawsuits do not always admit it. Instead they tend to use these rules either unknowingly or covertly, leaving them little possibility for systematic learning. The ensuing problems for health care are obvious. My alternative is to develop intuitive decisions into a science, discuss them openly, connect them with the available evidence, and then train medical students to use them in a disciplined and informed way.

The following story illustrates that program. It looks at three ways to make treatment allocations: by clinical intuition, by a complex statistical system, and by a fast and frugal rule of thumb. The story begins several years ago, when I gave a talk to the Society for Medical Decision Making in beautiful Tempe, Arizona. I explained in what situations simple rules can be faster, less costly, and more accurate than complex strategies. When I stepped down from the podium, Lee Green, a medical researcher from the University of Michigan, approached me and said, “Now I think I understand my puzzle.” Here is his story.

TO THE INTENSIVE CARE UNIT?

A man is rushed to a hospital with severe chest pains. The emergency physicians suspect the possibility of a heart attack (acute ischemic heart disease). They need to act, and quickly. Should the man be assigned to the coronary care unit or to a regular nursing bed with electrocardiographic telemetry? This is a routine situation. Every year, between one and two million patients are admitted to coronary care units in the United States.20 How do doctors make this decision?

In a Michigan hospital, doctors relied on the long-term-risk factors of coronary artery disease, including family history, being male, advanced age, smoking, diabetes mellitus, increased serum cholesterol, and hypertension. These physicians sent about 90 percent of the patients with severe chest pain into the coronary care unit. This is a sign of defensive decision making; doctors fear being sued if patients assigned to a regular bed die of a heart attack. As a consequence, the care unit became overcrowded, the quality of care decreased, and costs went up. You might think that even if a patient doesn’t have a heart attack, it was better to be safe than sorry. But being in the ICU carries its own risks. Some twenty thousand Americans die every year from a hospital-transmitted infection, and many more contract one. Such infections are particularly prevalent in the intensive care unit, making it one of the most dangerous places in the hospital—a dear friend of mine died in the ICU from a disease he’d picked up there. Yet when putting patients into this extremely dangerous situation, doctors protect themselves from being sued.

A team of medical researchers from the University of Michigan was called in to improve conditions. When they checked the quality of physicians’ decisions—and quality control is not yet always the rule in hospitals—they found a disturbing result. Not only did doctors send most patients into the unit; they sent those who should have been there (who had a heart attack) as often as those who should not have been there (who did not have a heart attack). Doctors’ decisions were no better than chance, but nobody seemed to notice. As a second study revealed, the long-term-risk factors doctors were looking for were not the most relevant ones for discriminating between patients with and without acute ischemic heart disease. Specifically, the physicians looked for a history of hypertension and diabetes, “pseudo-diagnostic” cues, instead of the nature and location of patients’ symptoms and certain clues in the electrocardiogram, all of which are more powerful predictors of a heart attack.21

What to do? The team first tried to solve the complex problem with a complex strategy. They introduced the heart disease predictive instrument.22 It consists of a chart with some fifty probabilities and a long formula that enable the physician, with the help of a pocket calculator, to compute the probability that a patient has acute ischemic heart disease. The physicians were taught to find the right probabilities for each patient, type these into the calculator, press ENTER, and read off the resulting number. If it was higher than a given threshold, the patient was sent to the care unit. A quick glance at the chart makes it clear why the physicians were not happy using this and similar systems (Figure 9-1): they don’t understand them.


Figure 9-1: The heart disease predictive instrument chart. It comes with a pocket calculator. If you don’t understand it, then you know why most physicians don’t like it.

Nevertheless, after the physicians were first exposed to the system, their decisions improved markedly and overcrowding eased in the coronary care unit. So the team surmised that calculation, rather than intuition, worked in these cases. But they were well-trained researchers and tested their conclusion by taking the chart and the pocket calculator away from the physicians. If calculation were the key, the quality of their decisions should fall back to the initial chance level. Yet the physicians’ performance did not drop. The researchers were surprised. Had the physicians memorized the probabilities on the chart? A test showed that they hadn’t, nor had they understood the formula in the pocket calculator. The researchers then returned the calculator and the chart to the physicians, withdrew them again, and so on. It made no difference. After the physicians’ first exposure to the chart, their intuitions improved permanently, even without further access to the calculating tools. Here is the puzzle: how could the physicians make the right calculations when they no longer had the key tools?

It was at this point that I met Green, the principal investigator, and it was during my talk that he found the answer: the physicians did not need the chart and the calculator because they did not calculate. But what then improved their intuitions? All that seemed to matter were the right cues, which the physicians had memorized. They still worked with their intuitions, but now they knew what to look for, whereas earlier they had looked in the wrong places. This insight opened up a third alternative, beyond mere intuition and complex calculation, a rule of thumb for coronary care allocations, designed by Green together with David Mehr. It corresponded to the natural thinking of physicians but was empirically informed. Let me explain the logic of constructing such a rule.

Transparent Diagnostic Rules

The heart disease predictive instrument was proved effective on some twenty-eight hundred patients in six New England hospitals. Why not use it in another hospital, such as in Michigan? As I mentioned before, it lacks transparency. When systems with heavy calculation and scores of probabilities conflict with their intuitions, physicians tend to avoid the more complicated method.23 Yet there is another drawback to complexity that we saw in the last chapter. When there is high uncertainty, simple diagnostic methods tend to be more accurate. Predicting heart attacks is extremely difficult, and no even remotely perfect method exists.

Let us take for granted that the predictive instrument is excellent for the New England patients, but it does not necessarily follow that it will perform equally well in Michigan. The patients in the Michigan hospitals differ from those in New England, but we do not know how and to what extent. One way to find out would be to start a new study with several thousand patients in the Michigan hospitals. That option is not available, however, and even if it were, such a study would take years. In the absence of data, we can use the simplifying principles introduced in the previous chapters.

But how? One way is to reduce the number of factors in the complex diagnostic instrument, and use one-reason decision making. That would lead to a fast and frugal tree (see below). It is like Take the Best but can solve a different class of problems: classifying one object (or person) into two or more categories.

Fast and Frugal Tree

A fast and frugal tree asks only a few yes-or-no questions and allows for a decision after each one.24 In the tree developed by Green and Mehr (Figure 9-2), if there is a certain anomaly in the electrocardiogram (the so-called ST segment), the patient is immediately admitted to the coronary care unit. No other information is required. If that is not the case, a second cue is considered: whether the patient’s chief complaint was chest pain. If not, the patient is assigned to a regular nursing bed. All other information is ignored. If the answer is yes, then a final question is asked. This third question is a composite one: whether any of five other factors is present. If so, the patient is sent to the coronary care unit. This decision tree is fast and frugal in several respects: it ignores all fifty probabilities and asks at most three yes-or-no questions.


Figure 9-2: A fast and frugal decision tree for coronary care unit allocation (based on Green and Mehr, 1997).

This fast and frugal tree puts the most important factor at the top. Changes in the ST segment send the endangered patients quickly into the care unit. The second factor, chest pain, sends patients who shouldn’t be in the care unit to a regular nursing bed in order to reduce dangerous overcrowding. If neither of these factors is decisive, the third one comes into play. Physicians prefer this fast and frugal tree to a complex system, because it is transparent and can be easily taught.
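The three questions of the Green–Mehr tree described above can be written as a short sequential procedure; the function and argument names here are my own labels, not terms from the original study.

```python
# The Green-Mehr fast and frugal tree for coronary care allocation,
# as described in the text. Names are illustrative labels of my own.
def allocate(st_segment_change, chest_pain_is_chief_complaint, other_factors_present):
    """other_factors_present: how many of the five additional clinical cues apply."""
    if st_segment_change:                   # question 1: ECG anomaly -> care unit
        return "coronary care unit"
    if not chest_pain_is_chief_complaint:   # question 2: no chest pain -> regular bed
        return "regular nursing bed"
    if other_factors_present >= 1:          # question 3: any of the five other cues?
        return "coronary care unit"
    return "regular nursing bed"

print(allocate(True, False, 0))   # ST change alone decides; nothing else is consulted
print(allocate(False, True, 2))
print(allocate(False, True, 0))
```

Each question can end the search, which is exactly what makes the tree fast and frugal: most patients are classified after one or two cues, and no probabilities are multiplied anywhere.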

But how accurate is such a simple rule? If you were rushed to the hospital with severe chest pains, would you prefer to be diagnosed by a few yes-or-no questions or by the chart with probabilities and the pocket calculator? Or would you simply trust a physician’s intuitions? Figure 9-3 shows the diagnostic accuracy of each of these three methods in the Michigan hospital. Recall that there are two aspects to accuracy. On the vertical axis is the proportion of patients correctly assigned to the coronary care unit (i.e., who actually had a heart attack), which should ideally be high; on the horizontal axis is the proportion of patients incorrectly assigned, which should be low. The diagonal line represents performance at the level of chance. Points above the diagonal show performance better than chance, and those below reflect performance worse than chance. A perfect strategy would be in the upper-left-hand corner, but nothing like this exists in the uncertain world of heart disease. Physicians’ accuracy before the intervention of the Michigan researchers was at chance level—even slightly below. As mentioned before, they sent about 90 percent of the patients into the care unit but could not discriminate between those who should be there and those who should not. The performance of the heart disease predictive instrument is represented by the squares; there is more than one square because one can make various trade-offs between misses and false alarms.25 Its accuracy was substantially better than chance.


Figure 9-3: Which method can best predict heart attacks? The three methods shown are physicians’ intuitive judgments, the complex heart disease predictive instrument, and the fast and frugal tree.

How did the fast and frugal tree do? Recall that the complex instrument had more information than the simple tree did and made use of sophisticated calculations. Nevertheless, the fast and frugal tree was in fact more accurate in predicting actual heart attacks. It sent fewer patients who suffered a heart attack into the regular bed than did the complex system; that is, it had fewer misses. It also cut physicians’ high false-alarm rate down to almost half. Simplicity had paid off again.26

In general, a fast and frugal tree consists of three building blocks:

Search rule: Look up factors in order of importance.

Stopping rule: Stop the search if a factor allows it.

Decision rule: Classify the object according to this factor.

A fast and frugal tree is different from a full decision tree. Full trees are not rules of thumb; they are information-greedy and complex rather than simple and transparent. Figure 9-4 shows both kinds of trees. A full tree has 2^n exits or leaves, whereas a fast and frugal tree has only n + 1 (where n is the number of factors). With four factors, this makes 16 versus 5 leaves (see Figure 9-4). With 20 factors, this makes about a million versus 21 leaves. Constructing full trees runs into other problems as well. Not only do they quickly become computationally intractable, but as a tree grows in size, there are fewer and fewer data available to provide reliable estimates for what to do at each stage. For example, if you start with ten thousand patients and try to divide them up among the million leaves, you will end up with unreliable information. Unlike the full tree, the fast and frugal tree introduces order—which of the factors are the most important ones?—to make itself efficient.
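The leaf-count comparison is simple arithmetic: a full tree needs one exit for every combination of n yes-or-no answers (2^n), while a fast and frugal tree needs one exit per question plus a final one (n + 1).

```python
# Exits (leaves) of a full binary decision tree versus a fast and frugal tree.
for n in (4, 10, 20):
    full = 2 ** n    # every combination of n yes/no answers gets its own exit
    frugal = n + 1   # one early exit per question, plus one at the very end
    print(f"{n} factors: full tree {full:,} leaves, fast and frugal tree {frugal}")
```

With 20 factors the full tree has 2^20 = 1,048,576 leaves, which is why ten thousand patients spread over them leave almost every leaf with no reliable data at all.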


Figure 9-4: Full decision trees quickly become computationally intractable when the number of cues increases, whereas fast and frugal trees do not.

Medical Intuition Can Be Trained

The moral of the overcrowding story is this: physicians’ intuitions can be improved not only by complex procedures that are in danger of being misunderstood and avoided, but by simple and empirically informed rules. The latter can reduce overcrowding, increase the quality of care, and decrease the wide variability in physicians’ treatment choices. Geography no longer need be destiny, and physicians no longer need to make unreliable decisions. Yet this change in methodology must be supported by legal reform that frees physicians from a fear of doing the best for their patients. An effective litigation law would start from the simple insights that less can be more and that nothing is absolutely certain.

A systematic training of physicians to use rules of thumb would give them empirically sound, quick, and transparent diagnostic methods. As Green reported, physicians love the fast and frugal tree, and it is still, years later, used in the Michigan hospitals. The next step would be to train physicians to understand the building blocks from which heuristics can be constructed and adjusted for other patient populations, educating clinical intuition across the board. Truly efficient health care requires mastering the art of focusing on what’s important and ignoring the rest.