
Gut Feelings: The Intelligence of the Unconscious - Gerd Gigerenzer (2007)


Human rational behavior is shaped by a scissors whose blades are the structure of task environments and the computational capabilities of the actor.

—Herbert A. Simon1



An ant rushes over a sandy beach on a path full of twists and turns. It turns right, left, back, then halts, and moves ahead again. How can we explain the complexity of the path it chose? We can think up a sophisticated program in the ant’s brain that might explain its complex behavior, but we’ll find that it does not work. What we have overlooked in our efforts to speculate about the ant’s brain is the ant’s environment. The structure of the wind-and-wave-molded beach, its little hills and valleys, and its obstacles shape the ant’s path. The apparent complexity of the ant’s behavior reflects the complexity of the ant’s environment, rather than the ant’s mind. The ant may be following a simple rule: get out of the sun and back to the nest as quickly as possible, without wasting energy by climbing obstacles such as sand mountains and sticks. Complex behavior does not imply complex mental strategies.

The Nobel laureate Herbert Simon argued that the same holds for humans: “A man, viewed as a behaving system, is quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he finds himself.”2 In this view, people adapt to their environments much as gelatin does; if you wish to know what form it will have when it solidifies, also study the shape of the mold. The ant’s path illustrates a general point: to understand behavior, one has to look at both the mind and its environment.


A lone, hungry rat runs through what psychologists call a T-maze (Figure 5-1, left). It can turn either left or right. If it turns left, it will find food in eight out of ten cases; if it turns right, there will only be food in two out of ten cases. The amount of food it finds is small, so it runs over and over again through the maze. Under a variety of experimental conditions, rats turn left most of the time, as one would expect. But sometimes they turn right, though this is the worse option, puzzling many a researcher. According to the logical principle called maximizing, the rat should always turn left, because there it can expect food 80 percent of the time. Sometimes, rats turn left in only about 80 percent of the cases, and right 20 percent of the time. Their behavior is then called probability matching, because it reflects the 80/20 percent probabilities. It results, however, in a smaller amount of food; the expectation is only 68 percent.3 The rat’s behavior seems irrational. Has evolution miswired the brain of this poor animal? Or are rats simply stupid?
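The 68 percent figure follows from a two-line calculation. A minimal sketch, using the 80/20 split given above:

```python
p_left = 0.8  # food appears on the left in 8 of 10 trials

# Maximizing: always turn left
maximizing = p_left  # expect food 80 percent of the time

# Probability matching: turn left 80 percent of the time, right 20 percent.
# Success = P(left chosen and food left) + P(right chosen and food right)
matching = p_left**2 + (1 - p_left)**2  # 0.64 + 0.04 = 0.68
```

Matching the probabilities thus costs the lone rat twelve percentage points of food compared to always turning left.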

We can understand the rat’s behavior once we look into its natural environment rather than into its small brain. Under the natural conditions of foraging, a rat competes with many other rats and animals for food (the right-hand side of Figure 5-1). If all go to the spot that has the most food, each will get only a small share. The one mutant organism that sometimes chooses the second-best patch would face less competition, get more food, and so be favored by natural selection. Thus, rats seem to rely on a strategy that works in a competitive environment but doesn’t fit the experimental situation, in which an individual is kept in social isolation.


Figure 5-1: Rationality is in numbers. Rats run through a T-maze, in which they get food 80 percent of the time when they turn to the left and 20 percent of the time when they turn to the right. The single rat should turn left all the time, but it often does so only 80 percent of the time, even if this means getting less food. What looks like irrational behavior makes sense when there are many rats competing for limited resources; if all turned left, they would miss out on the food to the right.

The stories of the ant and the rat make the same point. In order to understand behavior, one needs to look not only into the brain or mind but also into the structure of the physical and social environment.


New leaders galvanize companies with inspiring themes and ambitious plans, but they also influence corporate culture in simpler ways. Everyone has his or her personal rules of thumb, which they develop, often unconsciously, to help them make quick decisions. While leaders may not intentionally impose their own rules on the workplace, most employees implicitly follow them. These rules tend to become absorbed into the organizational bloodstream, where they may linger long after the leader has moved on. For example, if an executive makes it clear that excessive e-mail irritates her, employees unsure whether to include her in a message will simply opt against it. A leader who appears suspicious of employee absences discourages people from going to conferences or considering outside educational opportunities. Employees may be grateful that such shortcuts help them avoid protracted mulling over the pros and cons of taking a particular course of action. But as everyone adopts the same rules, the culture shifts: becoming more or less open, more or less inclusive, more or less formal. Because such behavior is difficult to change, leaders should think carefully about what values their rules communicate. They may even want to create new rules to shape the organization to their liking.

When I became director of the Max Planck Institute for Human Development, I wanted to create an interdisciplinary research group whose members actually talked, worked, and published together—a rare thing. Unless one actively creates an environment that supports this goal, collaboration tends to fall apart within a few years or may never get off the ground in the first place. The major obstacle is a mental one. Researchers, like most ordinary people, tend to identify with their ingroup and ignore or even look down on neighboring disciplines. Yet most relevant topics we study today do not respect the historically grown disciplinary borders, and to make progress one must look beyond one’s own narrow point of view. So I came up with a set of rules—not verbalized, but acted upon—that would create the kind of culture I desired. Those rules included

Everyone on the same plane: In my experience, employees who work on different floors interact 50 percent less than those who work on the same floor, and the loss is greater for those working in different buildings. People often behave as if they still lived in the savanna, where they look for others horizontally but not above- or belowground. So when my growing group needed an additional two thousand square feet in which to operate, I vetoed the architect’s proposal that we construct a new building and extended our existing offices horizontally, so that everyone remained on the same plane.

Start on equal footing: To ensure a level playing field at the beginning, I hired all the researchers at once and had them start simultaneously. That way no one knew more than anyone else about the new enterprise, and no one was patronized as a younger sibling.

Daily social gatherings: Informal interaction greases the wheels of formal collaboration. It helps to create trust and curiosity about what others do and know. To ensure a minimum daily requirement of chat, I created a custom. Every day at 4:00 p.m., everyone gathers for conversation and coffee prepared by someone from the group. Because there is no pressure to attend, almost everyone does.

Shared success: If a researcher (or a group) gets an award or publishes an article, he or she provides cake at coffee time. Note that the cake is not given to the successful person. He or she has to buy or bake it, turning everyone else into a beneficiary and sharing the success rather than creating a climate of envy.

Open doors: As director, I try to make myself available for anyone to discuss anything at any time. This open-door policy sets the example for other leaders, who make themselves equally accessible.4

All the members of the original group have since moved on to prestigious appointments elsewhere, but these rules have become an indelible part of who we are and a key to our successful collaboration. Many of the customs have assumed lives of their own—it has been years since I organized afternoon coffee times, yet somehow, every day, they still occur. I would advise all leaders to conduct a mental inventory of their own rules of thumb and to decide whether they want employees to be guided by them. The spirit of an organization is a mirror of the environment the leader creates.


The interplay between mind and environment can be expressed in a powerful analogy coined by Herbert Simon. In the epigraph to this chapter, mind and environment are compared to the blades of a pair of scissors. Just as one cannot understand how scissors cut by looking only at one blade, one will not understand human behavior by studying either cognition or the environment alone. This may seem to be common sense, yet much of psychology has gone a mentalist way, attempting to explain human behavior by attitudes, preferences, logic, or brain imaging and ignoring the structure of the environments in which people live.

Let us have a closer look at one important structure of the environmental blade: its uncertainty, that is, the degree to which surprising, novel, and unexpected things continue to happen. We cannot fully predict the future; usually, we are not even close.


Virtually every other morning I hear an interview on the radio in which a renowned financial expert is asked why certain stocks went up yesterday and others went down. The experts never fail to come up with a detailed, plausible account. The interviewer hardly ever asks the expert to predict which stocks will go up tomorrow. Hindsight is easy, foresight is hard. In hindsight, there is no uncertainty left; we know what has happened, and, if we are imaginative, we can always construct an explanation. In foresight, however, we must face uncertainty.

The stock market is an extreme example of an uncertain environment, with a predictability at or near chance level. The Capital stock-picking contest (chapter 2) that revealed the poor performance of financial experts was not a fluke. A recent study in Stockholm asked professional portfolio managers, analysts, brokers, and investment advisers to predict the performance of twenty blue-chip stocks. The stocks were presented two at a time, and the task was to predict which would perform better. A group of laypeople was given the same task, and their predictions were right 50 percent of the time. That is as one might expect: laypeople performed at chance level, neither better nor worse. How well did the professionals do? They picked the winning stock only 40 percent of the time. This result was replicated in a second study with another group of professionals.5 How is it possible that financial experts’ predictions were consistently worse than chance? The professionals base their predictions on complex information concerning each stock, and heavy competition leads them to create stock picks that vary widely from one expert to the next. Since not everyone can be right, this variability tends to decrease overall performance below chance.

Not all environments are as unpredictable as the stock market, but most are characterized by substantial unpredictability. Virtually nobody predicted the fall of the Berlin Wall, political scientists and the people of West and East Berlin included. Forecasters were surprised by the 1989 earthquake in California, the baby-boom population explosion, and the advent of the personal computer. Many don’t seem to realize the limited predictability of our world, and both individuals and firms waste huge amounts of money on consultants. Each year, the “prediction industry”—the World Bank, stock brokerage firms, technology consultants, and business consulting firms, among others—earns some $200 billion as fortune-tellers, despite its generally poor track record. To predict the future is a challenge for laypeople, experts, and politicians alike. As Winston Churchill once complained, the future is one damn thing after another.6


It is a common credo that in predicting the future, one should use as much information as possible and feed it into the most sophisticated computer. A complex problem demands a complex solution, so we are told. In fact, in unpredictable environments, the opposite is true.

High School Dropouts

Marty Brown is the father of two teenagers. He is weighing two high schools he might send his youngest son to, White High and Gray High. Haunted by his elder son’s dropping out of school, Marty searches for a school with a low dropout rate. Neither school, however, provides reliable information about its dropout rate to the public. So Marty gathers information that could help him infer future dropout rates, including the schools’ attendance rates, their writing scores, their social science test scores, the availability of an English as a second language program, and class size. From his earlier experience with other schools, he has an idea of which of these are important clues. At some point, his intuition tells him that White High is the better choice. Marty feels strongly about his intuition and sends his youngest son to White High.

How likely is it that Marty’s intuition is correct? In order to answer this question, we need to go through the scheme in Figure 3-4. First, we need to understand the rule of thumb that led to his intuitive feeling, and second, analyze in what environments this rule works. A number of psychological experiments suggest that people often, but not always, base their intuitive judgments on a single good reason.7 A heuristic called Take the Best explains how a gut feeling can result from one-reason decision making. Let us assume that, like many experimental subjects, Marty relies on Take the Best. All he needs is a subjective feeling based on his experience with other schools as to which clues are better than others (this ranking need not be perfect). Assume that the top clues are attendance rate, writing score, and social science test score, in that order. The heuristic looks the clues up one by one and rates their values as high or low. If the first clue, the attendance rate, allows for a decision, then the process is stopped and all other information is ignored; if not, the second clue is checked, and so on. Here is a concrete illustration:

Attendance rate: White High high, Gray High high → no decision; check the next clue
Writing score: White High high, Gray High low → stop search; choose White High

The first clue, attendance rate, is not conclusive; thus the writing score is checked, and is conclusive. Search is stopped and the inference is made that White High has the lower dropout rate.
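Under the assumption that Marty ranks the clues as above, the heuristic can be sketched in a few lines. The clue values (high = 1, low = 0) are hypothetical, chosen to match the illustration:

```python
def take_the_best(ranked_clues, school_a, school_b):
    """Take the Best: check clues one by one in order of subjective
    validity; decide on the first clue that discriminates and ignore
    all remaining clues. Returns the school inferred to be better
    (lower dropout rate), or None if no clue discriminates."""
    for clue in ranked_clues:
        a, b = school_a[clue], school_b[clue]
        if a != b:  # the clue discriminates: stop search, decide
            return school_a if a > b else school_b
    return None  # no clue discriminates: guess

# Hypothetical clue values for Marty's two schools (high = 1, low = 0)
white_high = {"name": "White High", "attendance": 1, "writing": 1, "social": 0}
gray_high  = {"name": "Gray High",  "attendance": 1, "writing": 0, "social": 1}

best = take_the_best(["attendance", "writing", "social"], white_high, gray_high)
# attendance ties, writing discriminates -> White High; the social
# science score is never even looked up
```

Note that the social science clue favors Gray High, but the heuristic never sees it: search stopped at the writing score.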

But how accurate is an intuition based on this rule of thumb? If Marty had used many more reasons, weighing and combining them as in Franklin’s balance sheet method, would he have had a better chance of choosing the right school? I think it is fair to say that almost everyone believed the answer would definitely be yes before 1996, when my research group at the Max Planck Institute discovered the power of one-reason decision making.8 Here is the short story.

Since the reasons for dropping out may vary between regions, let me concentrate on one large city: Chicago. To test the question of whether more clues are better than one good clue, we obtained information from fifty-seven schools on eighteen clues for dropout rates, including the proportion of low-income students, of students with limited English, of Hispanic students, and of black students; the average SAT score; the average income of the teachers; the parent participation rate; attendance rates; writing scores; social science test scores; and the availability of the English as a second language program. Now we were in a position to systematically study the problem Marty faced. How could we predict which of two schools has a higher dropout rate? According to Franklin’s rule, we must consider all eighteen clues, weigh each carefully, and then make a prediction. The modern variant of Franklin’s rule, made convenient by fast computers, is called multiple regression, where multiple stands for multiple clues. It determines the “optimal” weights for each clue, and adds them up, just like in Franklin’s rule, but with complex computations. Our question was, how accurate is the simple Take the Best compared to this sophisticated strategy?

To answer this question, we did a computer simulation and fed in information on half of the schools—the eighteen clues and the schools’ actual dropout rates. Based on this information, the complex strategy estimated the “optimal” weights and Take the Best estimated the order of clues. Then we tested both on the other half of the schools, using clues but no information about dropout rates.9 This is called prediction in Figure 5-2 and corresponds to the situation Marty faces, in which he has experience with some schools but not with both of those he needs to choose from. As a control, we tested both strategies when all information about all schools was available. This hindsight task was to fit the data after the fact and therefore involved no prediction. What were the results?


Figure 5-2: Intuitions based on a simple rule of thumb can be more accurate than complex calculations. How can we predict which Chicago high schools have higher dropout rates? If the facts for all high schools are already known (hindsight), the complex strategy (“multiple regression”) does better; but if one has to predict dropout rates that are not yet known, the simple rule of thumb (“Take the Best”) is more accurate.

The simple Take the Best predicted better than the complex strategy did (Figure 5-2), and it did so with less information. On average it looked at only three clues before it stopped, whereas the complex strategy weighed and added all eighteen. To explain what was already known about the schools (hindsight), the complex strategy was best. Yet when making predictions about what was not yet known, one good reason proved to be better than all reasons. If Marty’s intuition follows Take the Best, he is more likely to make the right choice than if he carefully weighs and adds all of the available clues with a sophisticated computer program. This result teaches an important lesson:

In an uncertain environment, good intuitions must ignore information.

But why did ignoring information pay in this case? High school dropout rates are highly unpredictable—in only 60 percent of the cases could the better strategy correctly predict which school had the higher rate. (Note that 50 percent would be chance.) Just as a financial adviser can produce a respectable explanation for yesterday’s stock results, the complex strategy can weigh its many reasons so that the resulting equation fits well with what we already know. Yet, as Figure 5-2 clearly shows, in an uncertain world, a complex strategy can fail exactly because it explains too much in hindsight. Only part of the information is valuable for the future, and the art of intuition is to focus on that part and ignore the rest. A simple rule that relies only on the best clue has a good chance of hitting on that useful piece of information.
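The effect can be reproduced in a toy simulation. This is a sketch with made-up data, not the Chicago school set: ordinary least squares on eighteen clues stands in for the complex strategy, and a model using only the single valid clue stands in for one-reason decision making. With few observations and many free weights, the complex model fits hindsight almost perfectly but predicts the held-out half worse than the simple one:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, k = 20, 20, 18
X = rng.normal(size=(n_train + n_test, k))
y = X[:, 0] + rng.normal(size=n_train + n_test)  # only clue 0 is valid

Xtr, ytr = X[:n_train], y[:n_train]
Xte, yte = X[n_train:], y[n_train:]

# Complex strategy: fit "optimal" weights for all 18 clues (hindsight)
w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
fit_err  = np.mean((Xtr @ w - ytr) ** 2)  # fitting what is already known
pred_err = np.mean((Xte @ w - yte) ** 2)  # predicting what is not yet known

# Simple strategy: one good reason, the single valid clue
w1 = (Xtr[:, 0] @ ytr) / (Xtr[:, 0] @ Xtr[:, 0])
simple_err = np.mean((Xte[:, 0] * w1 - yte) ** 2)
```

The seventeen spurious weights soak up noise in the training half; that is the "explains too much in hindsight" failure in miniature.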

Relying on a simple strategy as opposed to a complex one not only has personal consequences for concerned parents like Marty; it can also affect public policy. According to the complex strategy, the best predictors for a high dropout rate were the school’s percentage of Hispanic students, students with limited English, and black students—in that order. In contrast, Take the Best ranked attendance rate first, then writing score, then social science test score. On the basis of the complex analysis, a policy maker might recommend helping minorities to assimilate and supporting the English as a second language program. The simpler and better approach instead suggests that a policy maker should focus on getting students to attend class and teaching them the basics more thoroughly. Policy, not just accuracy, is at stake.

This analysis also provides a possible explanation for the conflict Harry experienced when trying to choose between his two girlfriends using the balance sheet (chapter 1). Mate choice, after all, involves a high degree of uncertainty. Perhaps Harry’s feelings followed Take the Best, his heart going with the one most important reason. In that case, Harry’s intuition may be superior to the complex calculation.


A solution to a given problem is called optimal if one can prove that no better solution exists. Some skeptics might ask, Why should intuition rely on a rule of thumb instead of the optimal strategy? To solve a problem by optimization—rather than by a rule of thumb—implies both that an optimal solution exists and that a strategy exists to find it. Computers would seem to be the ideal tool for finding the best solution to a problem. Yet paradoxically, the advent of high-speed computers has opened our eyes to the fact that the best strategy often cannot be found. Try to solve the following problem.

The Fifty-Cities Campaign Tour

A politician runs for president of the United States and plans to tour its fifty largest cities. Time is pressing, and the candidate travels in a convoy of automobiles. She wants to begin and end in the same city. What is the route with the shortest distance? The organizers have no idea. With a bit more brainpower, couldn’t they determine the shortest route? It seems an easy task. Simply enumerate all possible routes, measure the total distance of each, and choose the shortest. For instance, there are only twelve different routes for five cities.10 Using a pocket calculator, it takes only minutes to determine the shortest route. However, with ten cities, there are already some 181,000 different routes, and the calculations become demanding. With fifty cities there are approximately

49!/2 ≈ 3 × 10^62

routes. Not even the fastest computer can check this many possibilities in a lifetime, in a century, or in a millennium. Such a problem is called “computationally intractable.” In other words, we cannot determine the best route, however smart we are. Picking the best solution, optimization, is out of reach. What to do when optimization is no longer possible? Welcome to the world of rules of thumb. In this world, the question is, how does one find a good-enough solution? While we were pondering, the organizers had already finished planning the tour: the same as with the last candidate, with a few incremental changes due to closed highways.
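Both the combinatorics and one classic good-enough rule can be sketched in a few lines. The route count uses (n − 1)!/2, since the starting city and the direction of travel don't matter; the four-city distance matrix is invented for illustration:

```python
import math

def round_trip_routes(n):
    """Number of distinct closed tours through n cities, when starting
    city and travel direction don't matter: (n - 1)! / 2."""
    return math.factorial(n - 1) // 2

# 5 cities -> 12 routes; 10 -> 181,440; 50 -> roughly 3 x 10**62
counts = [round_trip_routes(n) for n in (5, 10, 50)]

def nearest_neighbor_tour(dist, start=0):
    """A rule of thumb: from each city, go to the nearest unvisited one.
    Fast and usually good enough, but not guaranteed to be optimal."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        here = tour[-1]
        nearest = min(unvisited, key=lambda city: dist[here][city])
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour + [start]  # return to the starting city

# A made-up symmetric 4-city distance matrix
dist = [[ 0, 10, 15, 20],
        [10,  0, 35, 25],
        [15, 35,  0, 30],
        [20, 25, 30,  0]]
tour = nearest_neighbor_tour(dist)  # [0, 1, 3, 2, 0]
```

With four cities there are only (4 − 1)!/2 = 3 distinct tours, so the heuristic's answer can be checked by hand; with fifty, checking is hopeless and the heuristic is all one has.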


Figure 5-3: The fifty-cities presidential campaign begins and ends in Boston. How can one find the shortest route? Good luck; not even the fastest computer can find the solution!


Consider tick-tack-toe. Player 1 makes a cross in one of a grid of nine squares, player 2 makes a circle in one of the empty squares, player 1 makes another cross, and so on. If a player manages to place three crosses or circles in a row (including diagonally), that person wins. In 1945, a robot was displayed in the entrance hall of the Museum of Science and Industry in Chicago, inviting the visitors to play a game of tick-tack-toe.11 To their amazement, they never managed to beat the robot. It always won or tied because it knew the optimal solution to the game.


Figure 5-4: Tick-tack-toe. Can you find the best strategy?

Player 1 makes a cross in the center square. If player 2 makes a circle in a middle square, as shown in Figure 5-4, player 1 makes a second cross in the adjacent corner square, which forces player 2 to sacrifice the next circle to prevent a row of three. Then player 1 makes a cross in the middle square adjacent to the other two crosses, which guarantees a win. Similarly, one can show that if the circle player makes the first circle in a corner, as opposed to a middle square, the cross player can always force a draw. This strategy either wins or draws, but never loses.

Here, the method to find the solution is enumeration and classification. For instance, for the first move, there are three options: center, corner, and middle square. All nine possibilities are in one of these three classes. The rest is enumeration of possibilities for the subsequent moves. By mere counting, one can prove that no other strategy does better. For simple situations such as tick-tack-toe, we know the best strategy. Good news? Yes and no: knowing the optimal strategy is exactly what makes the game boring.
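The enumeration can also be carried out mechanically. A brute-force minimax sketch walks the full game tree and confirms that perfect play by both sides ends in a draw:

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position for X with best play: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if all(board):  # board full, no winner
        return 0
    scores = []
    for i in range(9):
        if board[i] is None:
            board[i] = player
            scores.append(minimax(board, "O" if player == "X" else "X"))
            board[i] = None
    return max(scores) if player == "X" else min(scores)

value = minimax([None] * 9, "X")  # 0: perfect play by both sides draws
```

The same exhaustive counting that proves the robot's strategy unbeatable is what makes the game boring once you know it.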

Now consider chess. For each move, there are on average some thirty possibilities, which makes 30^20 sequences in twenty moves, amounting to some

3.5 × 10^29
possible sequences of moves. This is a small number compared to that for the city tour problem. Can a chess computer determine the optimal sequence of moves for twenty moves? Deep Blue, the IBM chess computer, can examine some 200 million possible moves per second. At this breathtaking speed, Deep Blue would still need some fifty-five thousand billion years to think twenty moves ahead and pick the best one. (In comparison, the Big Bang is estimated to have occurred only some 14 billion years ago.) But twenty moves are not yet a complete game of chess. As a consequence, chess computers such as Deep Blue cannot find the best sequence of moves but have to rely on rules of thumb, just as grand masters do.
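The back-of-the-envelope arithmetic is easy to verify, using the figures given in the text:

```python
# About thirty legal moves per position, looking twenty moves ahead
sequences = 30 ** 20                 # roughly 3.5 x 10**29 move sequences

# Deep Blue examines some 200 million positions per second
seconds = sequences / 2e8
years = seconds / (60 * 60 * 24 * 365)  # roughly 5.5 x 10**13 years --
# the "fifty-five thousand billion years" of the text, about four
# thousand times the age of the universe
```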

How do we know whether or not an optimal solution can be found for a game, or another well-defined problem? A problem is called “intractable” if the only known way of finding the perfect solution requires checking a number of steps that increase exponentially with the size of the problem. Chess is computationally intractable, as are the classic computer games Tetris, where one has to arrange a sequence of falling blocks, and Minesweeper.12

These games illustrate that even when problems are well defined, the perfect solution is often out of reach. A famous example from astronomy is the three-body problem. Three celestial bodies—such as earth, moon, and sun—move under no other influence than their mutual gravitation. How can we predict their movements? No general solution for this problem (or with four or more bodies) is known, whereas the two-body problem can be solved. With earthly bodies, even two defy a solution. There is no way to perfectly predict the dynamics of their mutual attraction, particularly if the attraction is emotional. In these situations, good rules of thumb become indispensable.

Ill-Defined Problems

Games such as chess and go have a well-defined structure. All permissible moves are defined by a few rules, nonpermissible moves can be easily detected, and what constitutes a victory is unequivocal. Victory in a political debate, on the other hand, is ill-defined. The set of permissible actions is not clearly defined, nor is what constitutes a winner. Is it better arguments, rhetoric, or one-liners? Unlike chess, a debate allows both candidates and their followers to claim victory. Similarly, in most bargaining situations (with the exception of auctions), the rules between buyers and sellers, employers and unions, are incompletely specified and need to be negotiated in the process. In everyday situations, rules are only partially known, can be overthrown by a powerful player, or are kept intentionally ambiguous. Uncertainty is prevalent; deception, lying, and lawbreaking possible. As a consequence, there are no optimal strategies known for winning a battle, leading an organization, bringing up children, or investing in the stock market. But, of course, good-enough strategies do exist.

In fact, people often prefer to retain a certain amount of ambiguity rather than try to spell out every detail. This is true even of legal contracts. The law in many countries assumes that a contract should spell out all possible consequences of each party’s actions, including punishments. Yet every smart lawyer knows that there is no perfectly watertight contract. Moreover, a large proportion of people who enter legal agreements feel that it is better to leave parts of the contract less precisely defined than they could be. As the legal expert Robert Scott argues, people may sense that there is no way to generate perfect certainty and bet instead on the psychological factor of reciprocity, a powerful motivation for both sides of a contract. Trying to spell out every eventuality can signal a lack of trust to the other party and may actually do more harm than good.

The study of the match between the mind and its environment is still in its infancy. Prevailing explanations in the social sciences still look at only one of Simon’s blades, focusing either on attitudes, traits, preferences, and other factors inside the mind or on external factors such as economic and legal structures. To understand what goes on inside our minds, we must look outside of them, and to understand what goes on outside, we should look inside.