MORAL BEHAVIOR - GUT FEELINGS IN ACTION

Gut Feelings: The Intelligence of the Unconscious - Gerd Gigerenzer (2007)


There is nothing divine about morality; it is a purely human affair.

—Albert Einstein



On July 13, 1942, the men of the German Reserve Police Battalion 101, stationed in Poland, were awakened at the crack of dawn and driven to the outskirts of a small village. Armed with additional ammunition, but with no idea what to expect, the five hundred men gathered around their well-liked commander, the fifty-three-year-old Major Wilhelm Trapp. Nervously, Trapp explained that he and his men had been assigned a frightfully unpleasant task and that the orders came from the highest authorities. There were some eighteen hundred Jews in the village who were said to be involved with the partisans. The order was to take the male Jews of working age to a work camp. The women, children, and elderly were to be shot on the spot. As he spoke, Trapp had tears in his eyes and visibly fought to control himself. He and his men had never before been confronted with such an order. Concluding his speech, Trapp made an extraordinary offer: if any of the older men did not feel up to the task that lay before them, they could step out.

Trapp paused for a moment. The men had a few seconds to decide. A dozen men stepped forward. The others went on to participate in the massacre. Many of them, after they had done their duty once, vomited or had other visceral reactions that made it impossible for them to continue killing; they were then assigned to other tasks. Almost every man was horrified and disgusted by what he was doing. Yet why did a mere dozen men out of five hundred declare themselves unwilling to participate in the mass murder?

In his seminal book Ordinary Men, historian Christopher Browning describes his search for an answer, based on the documents from the legal prosecution of the Reserve Police Battalion 101 after the war. There were detailed testimonies of some 125 men, many of which “had a ‘feel’ of candor and frankness conspicuously absent from the exculpatory, alibi-laden, and mendacious testimony so often encountered in such court records.”1 An obvious explanation would be anti-Semitism. Yet Browning concludes that this is unlikely. Most of the battalion members were middle-aged family men, considered too old to be drafted into the German army and conscripted instead into the police battalion. Their formative years had taken place in the pre-Nazi era, and they knew different political standards and moral norms. They came from the city of Hamburg, by reputation one of the least nazified cities in Germany, and from a social class that had been anti-Nazi in its political culture. These men did not seem to be a potential group of mass murderers.

Browning examines a second explanation: conformity with authority. But the extensive court interviews indicate that this was not the primary reason either. Unlike in the Milgram experiment, where an authoritative researcher told participants to apply electric shocks to other people, Major Trapp explicitly allowed for “disobedience.” His extraordinary intervention relieved the individual policemen from direct pressure to obey the order from the highest authorities. The men who stepped out experienced no sanctions from him, although Trapp did have to restrain a captain who was furious that the first man to refuse duty was from his company. If it was neither anti-Semitism nor fear of authority, what had turned ordinary men into mass killers? Browning points to several possible causes, including the lack of forewarning and time to think, concern about career advancement, and fear of retribution from other officers. Yet he concludes that there is a different explanation, based on how men in uniforms identify with their comrades. Many policemen seemed to follow a social rule of thumb:

Don’t break ranks.

In Browning’s words, the men felt “the strong urge not to separate themselves from the group by stepping out”2 even if conforming meant violating the moral imperative “don’t kill innocent people.” Stepping out meant losing face by admitting weakness and leaving one’s comrades to do more than their share of the ugly task. For most, it was easier to shoot than to break ranks. Browning ends his book with a disturbing question: “Within virtually every social collective, the peer group exerts tremendous pressures on behavior and sets moral norms. If the men of Reserve Police Battalion 101 could become killers under such circumstances, what group of men cannot?” From a moral point of view, nothing can justify this behavior. Social rules, however, can help us understand why certain situations promote or inhibit morally significant actions.


Since 1995, some fifty thousand U.S. citizens have died waiting in vain for a suitable organ donor. As a consequence, a black market in kidneys and other organs has emerged as an illegal alternative. Although most Americans say they approve of organ donation and in most states it is possible to register online, relatively few have actually signed a donor card. Why are only 28 percent of Americans but a striking 99.9 percent of French citizens potential donors?3 What keeps Americans from signing and saving lives?

If moral behavior were the result of deliberate reasoning, then the problem might be that Americans are not aware of the need for organs. That would call for an information campaign to raise public awareness. Dozens of such campaigns have already been launched in the United States and in other countries, yet they have failed to change the consent rate. But France apparently doesn’t need to enlighten its citizens. One might speculate about national characters. Have the French reached a higher stage of moral development, or are they less anxious than the Americans about having their bodies opened postmortem? Perhaps Americans fear that, as several popular novels and films have suggested, emergency room doctors won’t work as hard to save patients who have agreed to donate their organs. But why are only 12 percent of Germans potential donors, compared to 99.9 percent of Austrians? After all, Germans and Austrians share language and culture and are close neighbors. A glance at the striking differences in Figure 10-1 shows that something very powerful must be at work, something that is stronger than deliberate reasoning, national stereotypes, and individual preferences. I call this force the default rule:

If there is a default, do nothing about it.

How would that rule explain why people in the United States die because there are too few donors, whereas France has plenty? In countries such as the United States, Great Britain, and Germany, the legal default is that nobody is a donor without registering to be one. You need to opt in. In countries such as France, Austria, and Hungary, everyone is a potential donor unless they opt out. The majority of Americans, British, French, Germans, and other nationals seem to employ the same default rule. Their behavior is a consequence of both this rule and the legal environment, leading to the striking contrasts between countries. Interestingly, among those who do not follow the default, most opt in but few opt out—28 percent of Americans opted in and 0.1 percent of the French opted out. If people were guided by stable preferences rather than rules of thumb, the striking differences in Figure 10-1 should not exist. In this classical economic view, the default would have little effect because people would immediately override any default that challenges their preference. After all, one only needs to sign a form to opt in, or to opt out. But the evidence indicates that it is the default rule rather than a stable preference that drives most people’s behavior.
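The arithmetic behind the default rule is simple enough to sketch. In the toy model below (the function name and the "active fraction" parameter are my own illustration, not from the text), passive citizens keep whatever the law presets, and only an active minority overrides it:

```python
def potential_donor_rate(policy: str, active_fraction: float) -> float:
    """Share of potential organ donors predicted by the default rule.

    Passive citizens keep the legal default; only the 'active' fraction
    overrides it. An illustrative model, not empirical data.
    """
    if policy == "opt-in":             # default: not a donor
        return active_fraction         # only active citizens sign up
    if policy == "opt-out":            # default: donor
        return 1.0 - active_fraction   # only active citizens deregister
    raise ValueError(f"unknown policy: {policy}")

# With the figures from the text (28 percent of Americans opted in,
# 0.1 percent of the French opted out), the rule reproduces the contrast:
print(potential_donor_rate("opt-in", 0.28))    # United States
print(potential_donor_rate("opt-out", 0.001))  # France
```

Under this sketch, the dramatic gap between countries requires no difference in preferences at all; it follows entirely from which default the law happens to set.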


Figure 10-1: Why are so few Americans willing to donate organs? The proportion of citizens who are potential organ donors varies strikingly between countries with opt-in policies and opt-out policies. In the United States, the policy varies from state to state; some have an opt-in policy, whereas others force citizens to make a choice (based on Johnson and Goldstein, 2003).

An online experiment independently demonstrated that people tend to follow the default rule.4 Americans were asked to assume they had just moved into a new state where the default was to be an organ donor and were given the choice to confirm or change this status. Another group was asked the same question, but the status quo was not to be a donor; a third group was required to make a choice without a default. Even in this hypothetical situation, in which sticking with the default took exactly as much effort as departing from it, the default made a difference. When people had to opt out, more than 80 percent were happy with their status as donors—a slightly higher proportion than in the no-default condition. Yet when people had to opt in, only half as many said they would change their status to become donors.

One possible rationale behind the default rule could be that the existing default is seen as a reasonable recommendation—primarily because it has been implemented in the first place—and following it relieves a person from many decisions. The default rule is not restricted to moral issues. For instance, the states of Pennsylvania and New Jersey offer drivers the choice between an insurance policy with an unrestricted right to sue and a cheaper one with suit restrictions.5 The unrestricted policy is the default in Pennsylvania, whereas the restricted one is the default in New Jersey. If drivers had preferences concerning the right to sue, one would expect them to ignore the default setting, leaving little variation between the neighboring states. If they instead followed the default rule, more drivers would buy the expensive policy in Pennsylvania. And indeed, 79 percent of the Pennsylvania drivers bought full coverage, whereas only 30 percent of the New Jersey drivers did the same. It was estimated that Pennsylvania drivers spend $450 million each year on full coverage they would not have spent if the default were the same as in New Jersey, and vice versa. Thus, defaults set by institutions can have considerable impact on economic as well as moral behavior. Many people would rather avoid making an active decision, even if it means life or death.


My analysis of moral behavior looks at how the world is, rather than how it should be. The latter is the domain of moral philosophy. The study of moral intuitions will never replace the need for moral prudence and individual responsibility, but it can help us to understand which environments influence moral behavior and so find ways of making changes for the better.

My thesis is that humans have an innate capacity for morals just as they do for language. Children are as genetically prepared to pick up local moral rules as they are the grammar of their native language. From subculture to subculture, they learn subtle distinctions about how to behave in particular situations, distinctions that resemble the intricacies of local dialects. In the same way that native speakers can tell a correct sentence from an incorrect one without being able to explain why, the set of rules underlying the “moral grammar” is typically not in awareness. Moral grammar, I argue, can be described by rules of thumb. Unlike in language, however, these rules are often in conflict with each other, and the result can be either morally repulsive, as in mass killing, or admirable, as with organ donation or risking one’s life to save another. The underlying rule is not good or bad per se. But it can be applied to the wrong situation. I’d summarize my thoughts on moral intuitions in three principles:

· Lack of awareness. A moral intuition, like other gut feelings, appears quickly in consciousness and is strong enough to act upon, yet its underlying rationale cannot be verbalized.

· Roots and rules. The intuition is attached to one of three “roots” (individual, extended family, or community) and to an emotional goal (e.g., prevent harm), and it can be described by rules of thumb. These rules are not necessarily specific to moral behavior, but underlie other actions as well.

· Social environment. Moral behavior is contingent on the social environment. Some moral disasters can be prevented if one knows the rules guiding people’s behavior and the environments triggering these rules.

Moral feelings differ with respect to the roots they are attached to: the individual, the family, or the community. A “classical” liberal, for example, understands morality to be about protecting the rights and liberties of individuals. As long as the rights of each individual are protected, people can do what they want. Other behavior is consequently not seen as a moral issue, but as the result of social conventions or a matter of personal choice. According to this individual-centered view, pornography and drug use are matters of personal taste, whereas homicide and rape are in the moral domain. Yet in other views or cultures, moral feelings extend to the family rather than to the individual alone. In a family-centered culture, each member has a role to play, such as mother, wife, and eldest son, and a lifelong obligation to the entire family. Finally, moral feelings can extend to a community of people who are related symbolically rather than genetically, by religion, local origin, or party membership. The ethics of community include principles that liberals would not acknowledge as the most important moral values, including loyalty to one’s group and respect for authority. Most conservatives embrace the ethics of community and oppose what they see as the narrow morality of individual freedom. Political and religious liberals may have a hard time understanding what conservatives are talking about when they refer to “moral values” or why conservatives would want to restrict the rights of homosexuals who aren’t infringing on the rights of others.

The psychologist Jonathan Haidt proposed five evolved capacities, each like a taste bud: a sensitivity to harm, reciprocity, hierarchy, ingroup, and purity.6 He suggests the mind is prepared to attach moral sentiments to all or several of these, depending on the culture in which it develops. Let me connect the taste buds with the three roots. In a society with an individualistic ethic, only the first two buds are activated: to protect people from harm, and to uphold individual rights by insisting on fairness and reciprocity. According to this ethic, the right to abortion or to free speech and the rejection of torture are moral issues. Western moral psychology has been imprinted with this focus on the individual, so that from its perspective, moral feelings are about personal autonomy.

In a society with a family-oriented ethic, moral feelings concerning harm and reciprocity are rooted in the family, not in the individual. It is the welfare and honor of the family that needs protection. When it leads to nepotism, this ethic may appear suspect from the individualist point of view. In many traditional societies, however, nepotism is a moral obligation, not a crime, and smaller dynasties exist in modern democracies as well, from India to the United States. Yet while individualist societies frown on nepotism, their own behavior toward family members can in turn be resented by other societies. When I first visited Russia in 1980, I found myself in a heated discussion with students who were morally outraged that we Westerners dispose of our parents when they are old, delivering them to homes where they eventually die. They found our unwillingness to take care of our own parents repulsive. A family ethic also activates a sensitivity for hierarchy. It creates emotions of respect, duty, and obedience.

In a society with a community orientation, concerns about harm, reciprocity, and hierarchy relate to the community as its root, rather than to the family or individual. Its ethical view activates all five sensitivities, including those for ingroup and purity. Most tribes, religious groups, or nations advocate virtues of patriotism, loyalty, and heroism, and individuals from time immemorial have sacrificed their lives for their ingroup. In times of war, “support our troops” is the prevailing patriotic feeling, and criticizing them is seen as betrayal. Similarly, most communities have a code of purity, pollution, and divinity. People feel disgusted when this code is violated, be it in connection with eating dogs, sex with goats, or simply not taking a shower every day. Whereas in Western countries moral issues tend to center on personal freedom (such as the right to end one’s life), in other societies, moral behavior is more focused on the ethics of community, including duty, respect, and obedience to authority, and on the ethics of divinity, such as attaining purity and sanctity.

Note that these are orientations rather than clear-cut categories. Each human society draws its moral feelings from the three roots, albeit with different emphases. The Ten Commandments of the Bible, the 613 mitzvot, or laws, of the Torah, and most other religious texts address all three. For instance, “You shall not bear false witness against your neighbor” protects the individual rights of others, “Honor your father and mother” ensures respect of familial authority, and “You shall have no other gods besides me” necessitates obeying the laws of divinity in the community. Because moral feelings are anchored in different roots, conflicts will be the rule rather than the exception.

In contrast to my view, moral psychology—like much of moral philosophy—links moral behavior with verbal reasoning and rationality. Lawrence Kohlberg’s theory of cognitive development, for instance, assumes a logical progression of three levels of moral understanding (each subdivided into two stages). At the lowest level, young children define the meaning of what is right in terms of “I like it,” that is, a selfish evaluation of what brings rewards and avoids punishment. At the intermediate “conventional” level, older children and adults judge what is virtuous by whether “the group approves,” that is, by authority or one’s reference group. At the highest “postconventional” level, what is right is defined by objective, abstract, and universal principles detached from the self or the group. In Kohlberg’s words: “We claim that there is a universally valid form of rational moral thought process which all persons could articulate.”7

The evidence for these stages comes from children’s answers to verbally presented moral dilemmas, rather than from observations of actual behavior. Kohlberg’s emphasis on verbalization contrasts with our first principle, lack of awareness. The ability to describe the grammatical rules of one’s native language would be a poor measure of one’s intuitive knowledge of the grammar. Similarly, children may have a much richer moral system than they can tell. Kohlberg’s emphasis on individual rights, justice, fairness, and the welfare of people also assumes the individual to be the root of moral thinking, rather than the community or family. However, years of experimental studies do not suggest that moral growth resembles strict stages. Recall that Kohlberg’s scheme has three levels, each divided into two stages; thus in theory, there are six stages. Yet stages one, five, and six rarely occur in their pure form in either children or adults; the typical child mixes stages two and three, and adults mix the two stages at the conventional level. On a worldwide scale, only 1 or 2 percent of adults were classified as being at the highest level.

I do not doubt that deliberate thinking about good and bad happens, although it may often take place after the fact to justify our actions. But here I’d like to focus on the moral behavior based on gut feelings.


My first principle of moral intuitions states that people are often unaware of the reasons for their moral actions. In these cases, deliberate reasoning is the justification for, rather than the cause of, moral decisions. Consider this story:

Julie and Mark are sister and brother, traveling together in France on a summer vacation from college. One night in a cabin, they decide to make love, using both birth control pills and condoms, just to be sure. They both enjoyed making love but decided not to do it again. They kept that night a secret, which makes them feel even closer to each other. What do you think about that? Was it OK for them to make love?

Most people who hear this story feel immediately that it was wrong for the siblings to make love.8 Only after being asked why they disapprove or even feel disgusted, however, do they begin to search for reasons. One might point out the danger of inbreeding, only to be reminded that Julie and Mark used two forms of birth control. Another begins to stutter, mumbles, and eventually exclaims, “I don’t know why, but I know it’s wrong!” Haidt called this state of mind “morally dumbfounded.” Many of us find incest between siblings, or even cousins, repulsive, although it did not seem to bother the royal families of ancient Egypt. Similarly, most of us would refuse to eat the brains of our parents when they die, whereas in other cultures not doing so, leaving them to be eaten by worms, would be an insult to the deceased. According to a long philosophical tradition, the absolute truth of ethical issues can be seen intuitively, without having to reason.9 I agree that moral intuitions often seem self-evident, but not that they are necessarily universal truths. Reasoning rarely engenders moral judgment; rather, it serves to explain or justify an intuition after the fact.10

The second principle says that the same rules of thumb can underlie both moral actions and behavior that is not morally colored. As described above, the default rule can solve both problems that we call moral and those we do not. Another example is imitation, which guides behavior in a wide range of situations:11

Do what the majority of your peers do.

This simple rule guides behavior through various stages of development, from middle childhood through teenage and adult life. It virtually guarantees social acceptance in one’s peer group and conformity with the ethics of the community. Violating it may mean being called a coward or an oddball. It can steer moral action, both good and bad (donating to a charity, discriminating against minorities), as well as consumer behavior (what clothes to wear, what CDs to buy). Teenagers tend to buy Nike shoes because their peers do, and skinheads hate foreigners for no other reason than that their peers hate them.

Consider now the don’t-break-ranks rule. This rule has the potential to turn a soldier into both a loyal comrade and a killer. As an American rifleman recalls about comradeship during World War II: “The reason you storm the beaches is not patriotism or bravery. It’s that sense of not wanting to fail your buddies. There’s sort of a special sense of kinship.”12 What appears as inconsistent behavior—how can such a nice guy act so badly; how can that nasty person suddenly be so nice?—can result from the same underlying rule. The rule itself is not good or bad per se, yet it produces actions we might applaud or condemn.

Many psychologists pit feelings against reasons. Yet I have argued that gut feelings themselves have a rationale based on reasons. The difference between intuition and moral deliberation is that the reasons underlying moral intuitions are typically unconscious. Thus, the relevant distinction is not between feelings and reasons, but between feelings based on unconscious reasons and deliberate reasoning.

The third principle is very practical, saying that when one knows both the mechanisms underlying moral behavior and the environments that trigger them, one can prevent or reduce moral disasters. Consider the case of organ donation. A legal system aware of the fact that rules of thumb guide behavior can make the desired option the default. In the United States, simply switching the default would save the lives of many patients who wait in vain for a donor. Setting proper defaults is a simple solution for what looks like a complex problem. Similarly, consider once again the men of the Reserve Police Battalion 101. These men grew up with the Judeo-Christian commandment “Don’t murder.” With his offer, Major Trapp brought this commandment into conflict with the rule “Don’t break ranks.” Yet Trapp could have framed his offer so that obeying the commandment wouldn’t have conflicted with the need to maintain ranks. If he had asked those men who felt up to the task to step out, the number of men who participated in the killing would likely have been considerably smaller. Since we can’t turn back the clock, this is impossible to test, but both of these examples demonstrate that insight into moral intuition can influence moral behavior “from the outside.”

To continue this thought experiment, imagine now the opposite: that the behavior of the reserve policemen was caused by traits such as authoritarianism, attitudes such as anti-Semitism and prejudices against minorities, or other evil motives. In that case, there would be no potential for immediate intervention. The social environment—Major Trapp and the other men—should make little difference, and a single policeman isolated from his comrades would have “decided” to kill, just as he did in the real situation alongside his comrades. Traits, in contrast to rules of thumb, give us little hope for change.

Moral gut feelings are based on evolved capacities. One relevant capacity is the intense identification with one’s peer group that is the basis of much that makes humans unique, including the development of culture, art, and cooperation, but is also the starting point of much suffering, from social pressure for group conformity to hatred and violence against other groups. My analysis may be provocative for those who believe that moral actions are generally based on fixed preferences or independent reasoned reflection. But what may seem disillusioning in fact provides a key to avoiding moral disasters.


People tend to organize themselves in various forms of moral institutions, from a local neighborhood church to the Vatican, from shelters for abused women to Amnesty International. A moral institution has a code of honor or purity, defines what is decent or disgusting, and last but not least, tries to have a positive impact on society. The structure of these institutions affects the moral behavior of those who serve them, as well as the rationalization behind a member’s behavior.

Bailing and Jailing

One of the initial decisions the legal system makes is whether to release a defendant on bail unconditionally or punish him with curfew or imprisonment. In the English system, magistrates, most of whom are members of the local community without legal training, are often responsible for making this decision. In England and Wales, magistrates deal with some two million defendants every year. The work involves sitting in court for a morning or afternoon every one or two weeks, and making decisions in a bench of two or three. How should magistrates decide? The law says that they should pay regard to the nature and seriousness of the offense; to the character, community ties, and bail record of the defendant; as well as to the strength of the prosecution’s case, the likely sentence if convicted, and any other factor that appears to be relevant.13 Yet the law is silent on how magistrates should combine these pieces of information, and the legal system does not provide feedback on whether their decisions were in fact appropriate or not. The magistrates are left to their own intuitions.

How do magistrates actually make these millions of decisions? Magistrates tend to say, with confidence, that they thoroughly examine all the evidence in order to treat individuals fairly and without bias. For instance, one explained that the decision “depends on an enormous weight of balancing information, together with our experience and training.” The chairman of the council stated, “We are trained to question and to assess carefully the evidence we are given.”14 As one explained self-assuredly, “You can’t study magistrates’ complex decision making.”

The truth is that one can; people tend to believe they solve complex problems with complex strategies even if they rely on simple ones. To find out what rationale actually underlies magistrates’ intuitive decisions, researchers observed several hundred hearings in two London courts over a four-month period.15 The average time a bench spent with each case was less than ten minutes. The information available to the London magistrates included the defendants’ age, race, gender, strength of community ties, seriousness of offense, kind of offense, number of offenses, relation to the victim, plea (guilty, not guilty, no plea), previous convictions, bail record, the strength of the prosecution’s case, maximum penalty if convicted, circumstances of adjournment, length of adjournment, number of previous adjournments, prosecution request, defense request, previous court bail decisions, and police bail decision. In addition, they saw whether the defendant was present at the bail hearing, whether or not he was legally represented, and by whom. Not all of this information was available in every case, while in others additional information was provided.

Recall that the magistrates explained—and no doubt also believed—that they carefully examine all the evidence. However, an analysis of the actual bail decisions in court A revealed a simple rule that had the structure of a fast and frugal tree (Figure 10-2, left). It predicted 92 percent of all decisions correctly. When the prosecution opposed bail or requested conditional bail, the magistrates also opposed bail. If not, or if no information was available, a second reason came into play. If a previous court had imposed conditions or remand in custody, the magistrates decided the same. Otherwise they considered a third reason and based their decisions on the actions of the police. The magistrates in court B used a rule of thumb with the same structure and two of the same reasons (Figure 10-2, right).
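The tree for court A can be sketched as a chain of one-reason decisions. In the sketch below, the function and argument names are my own, and the cue wording follows the description above rather than the exact labels of Figure 10-2:

```python
def court_a_bail_decision(prosecution_punitive: bool,
                          previous_court_punitive: bool,
                          police_punitive: bool) -> str:
    """Fast and frugal tree inferred for London court A (after Dhami, 2003).

    Cues are checked one at a time; the first punitive cue settles the
    case, and all later cues are ignored. A False value also covers
    'no information available'.
    """
    if prosecution_punitive:      # prosecution opposed bail or requested conditions
        return "no bail"
    if previous_court_punitive:   # a previous court imposed conditions or custody
        return "no bail"
    if police_punitive:           # the police imposed conditions or custody
        return "no bail"
    return "bail"                 # no punitive cue: unconditional release
```

A single punitive cue anywhere in the chain yields "no bail," and only a defendant with no punitive cue at all is released unconditionally, which is why each bench effectively based its decision on just one reason.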

The rules of thumb in both London courts appear to violate due process. Each bench based its decision on only one reason, such as whether the police had imposed conditions or imprisonment. One could argue that the police, or prosecution, has already looked at all the evidence concerning the defendant, and therefore magistrates simply use a shortcut—although this argument would of course make magistrates dispensable. However, the reasons in the simple tree were related neither to the nature and seriousness of the offense nor to other pieces of information relevant for due process. Furthermore, magistrates actually asked for information concerning the defendant, which they subsequently ignored in their decisions.16 Unless they deliberately deceive the public (and I have no grounds to assume so), these magistrates must be largely unaware of how they make bail decisions.


Figure 10-2: How do English magistrates make bail decisions? Two fast and frugal trees predict the majority of all decisions in two London courts. Magistrates are apparently not aware of their simple rules of thumb (based on Dhami, 2003). No bail = remand in custody or conditional bail; bail = unconditional release.

Higher awareness, however, could open a moral conflict, given the ideal of due process. The magistrates’ official task is to do justice both to a defendant and to the public, so they must try to avoid two errors—similar to those doctors fear—misses and false alarms. A miss occurs when a suspect is released on bail and subsequently commits another crime, threatens a witness, or does not appear in court. A false alarm occurs when a suspect is imprisoned who would not have committed any of these offenses. But a magistrate can hardly solve this task. For one, the English institutions collect no systematic information about the quality of magistrates’ decisions. Even if statistics were kept about when and how often misses occur, it would be impossible to do the same for false alarms: one cannot find out whether an imprisoned person would have committed a crime had he or she been bailed. That is, the magistrates operate in an institution that does not provide feedback about how to protect the defendant and the public. Since they cannot learn how to solve the task they are meant to, they seem to try to solve a different one: to protect themselves rather than the defendant. Magistrates can only be proved to have made a bad decision if a suspect who was released fails to appear in court or commits an offense or crime while on bail. If this happens, magistrates are able to protect themselves against accusations by the media or the victims. The magistrates in court A, for instance, can always argue that neither the prosecution, nor a previous court, nor the police had imposed or requested a punitive decision. Thus, the event was not foreseeable. This defensive decision making is known as “passing the buck.”

The English bail system thus requests that magistrates follow due process, but it doesn’t provide the institutional setting to achieve this goal. The result is a gap between what magistrates do and what they believe they are doing. If magistrates were fully aware of what they are doing, they would come into conflict with the ideal of due process. Here is the starting point for eradicating false self-perceptions and creating the conditions to improve the English bail system.

Split-Brain Institutions

How do institutions shape moral behavior? Like the ant’s behavior on the beach, human behavior adapts to the natural or social environment. Consider another institution that, like the English magistracy, requires its employees to perform a moral duty. The employee can commit two kinds of errors: false alarms and misses. If the institution does not provide systematic feedback concerning false alarms and misses, but blames the employees when a miss occurs, it fosters employees’ instinct for self-protection over their desire to protect their clients and supports self-deception. I call this environmental structure a split-brain institution. The term is borrowed from the fascinating studies of people whose corpus callosum—the connection between the right and left cerebral hemispheres—has been severed.17 A patient with this condition was flashed the picture of a naked body in her left visual field and began to laugh. The experimenter asked her why she was laughing, and she blamed it on his funny tie. The picture only went to her right (nonverbal) side of the brain. Because the brain was split, the left (verbal) side had to do the explaining without any information. Split-brain patients confabulate fascinating post-hoc stories with the left sides of their brains to rationalize behavior initiated by the right side. Similar processes occur in ordinary people. Neuroscientist Mike Gazzaniga, who has studied split-brain patients, calls the verbal side of the brain the interpreter, which comes up with a story to account for behavior produced by unconscious intelligence. I argue that a magistrate’s or any other person’s “interpreter” does the same when it tries to explain a gut feeling.

The analogy only holds to a point. Unlike a split-brain patient, a split-brain institution can impose moral sanctions for confabulating and punishment for awareness of one’s actions. We saw that if magistrates had been fully aware that they were “passing the buck,” they would have realized that their method conflicted with due process. Medical institutions, albeit not moral institutions in the narrow sense, often have a similar split-brain structure. Many Western health systems allow patients to visit a sequence of specialized doctors but do not provide systematic feedback to the doctors concerning the efficacy of their treatments. Doctors are likely to be sued for overlooking a disease but not for overtreatment and overmedication, which promotes doctors’ self-protection over the protection of their patients and supports self-deception.


Simplicity is the ink with which effective moral systems are written. The Ten Commandments are a prime example. According to the Bible, a list of religious precepts was divinely revealed to Moses on Mount Sinai. Engraved on two tablets of stone, their number was small, matching that of human digits. The ten short statements were easy to memorize and have survived for millennia. If God had hired legal advisers on Mount Sinai, they would have complicated matters by adding dozens of further clauses and amendments in an attempt to cover as many aspects of moral life as possible. Completeness, however, does not seem to have been God’s goal. God, I believe, is a satisficer, not a maximizer. He concentrates on the most important issues and ignores the rest.

How many moral rules does a society need? Are ten enough or do we need a system that has the complexity of the American tax law? This law is so comprehensive that even my tax adviser cannot understand all its details. An opaque legal system fails to generate trust and compliance among citizens. Transparency and trust are two sides of the same coin. A complex legal system promotes the interests of lobbying groups who punch innumerable loopholes into its laws. The legal expert Richard Epstein has argued that the ideal of an all-encompassing legal system is an illusion. No system of any complexity can cover more than 95 percent of legal cases; the rest must be decided by judgment. Yet these 95 percent, he argued, can be resolved with a small number of laws. In his seminal book Simple Rules for a Complex World, Epstein, topping Moses, proposed a system of only six laws, including the right to self-ownership and protection against aggression.


So far I have dealt with the question of how behavior is, rather than how it should be. In many situations, people’s moral feelings are based on unconscious rules of thumb. I would not exclude deliberate reasoning as a motivation for moral behavior, but I think it occurs only in unusual contexts, such as in professional debates or in the midst of societal upheaval. Interestingly, the same distinction between simple rules and complex reasoning also exists in moral philosophy, which tries to answer the question of how people ought to behave.

The Ten Commandments exemplify the rules-of-thumb approach. The advantage of a small number of short statements such as “Honor your father and mother” and “Do not commit murder” is that they can easily be understood, memorized, and followed. Simple rules differ from what moral philosophy calls consequentialism,18 in which the ends justify the means. Is it right to torture a suspected terrorist if torture might protect the safety of a country? There are two views. One is to weigh the consequences of both alternatives (torture or no torture) by their probabilities and choose the one with the highest expected benefit. If the negative consequences of torture are small in comparison to its benefits to a country’s safety, the decision is to torture. The other is that moral principles such as “Do not torture” have absolute precedence over any other concerns.

The ideal of maximizing expected utility or happiness is the lifeblood of much moral and legal philosophy. The seventeenth-century French mathematician Blaise Pascal proposed maximizing as the answer to moral problems, such as whether or not one should believe in God.19 He argued that this decision should not be based on blind faith or blind atheism, but on considering the consequences of each action. If one believes in God, but he does not exist, one will forgo a few worldly pleasures. However, if one does not believe in God but he exists, eternal damnation and suffering will result. Therefore, Pascal argued, however small the probability that God exists, the known consequences dictate that believing in God is rational. What counts are the consequences of actions, not the actions themselves. This way of thinking also exists in a collective rather than an individual form, best known by the maxim:

Seek the greatest happiness of the greatest number.

The English legal and social reformer Jeremy Bentham (1748-1832) proposed this guideline and supplied a calculus for actually determining the action that produces the greatest happiness.20 His hedonic calculus is the felicific equivalent of Franklin’s moral algebra, which we met in chapter 1.

The Hedonic Calculus

The value of each pleasure or pain arises from six elements, its

1. intensity,

2. duration,

3. degree of certainty,

4. remoteness,

5. fecundity (the probability of being followed by sensations of the same kind), and

6. purity (the probability of not being followed by sensations of the opposite kind).

To determine the action that likely produces the greatest happiness, and hence moral rightness, Bentham provided the following sequence of instructions for each action. Start with one person whose interests will be affected by the action. Sum up all the values for all potential pleasures and pains the person might experience, and determine the balance for the action. Repeat the process for every other person whose interests are concerned, and determine the balance for all persons. Then repeat the entire process for the next action, and finally choose the action with the highest score.
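Bentham’s sequence of instructions amounts to a summation over persons and then a maximization over actions. A minimal sketch, under the strong assumption that each pleasure and pain has already been collapsed into a single numeric value (the very step the chapter argues is the hard part):

```python
# Minimal sketch of Bentham's summation procedure as described above.
# Each pleasure/pain is assumed to be already reduced to one number,
# folding in intensity, duration, certainty, and so on; the data layout
# is an illustrative assumption.

def felicific_score(action: dict) -> float:
    """Sum the pleasure/pain balance over every affected person."""
    total = 0.0
    for person in action["affected_persons"]:
        # Balance for one person: pleasures positive, pains negative.
        total += sum(person["pleasures"]) - sum(person["pains"])
    return total

def best_action(actions: list) -> dict:
    """Choose the action with the greatest total balance."""
    return max(actions, key=felicific_score)

actions = [
    {"name": "A", "affected_persons": [
        {"pleasures": [3.0], "pains": [1.0]},
        {"pleasures": [2.0], "pains": [0.5]},
    ]},
    {"name": "B", "affected_persons": [
        {"pleasures": [4.0], "pains": [5.0]},
    ]},
]
print(best_action(actions)["name"])  # → A
```

The arithmetic is trivial; everything contentious lives in the invented input numbers, which is precisely the objection raised in the discussion that follows.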

Bentham’s calculus is the prototype of modern consequentialism. How would it work in our world? Assume a Boeing 747 passenger plane packed with four hundred passengers is heading toward Los Angeles on a cloudy evening. The communication between ground and cockpit suddenly breaks down, and a passenger sends a friend a text message that says the plane has been hijacked. Then there is silence. The ground crew suspect that the plane might be headed straight at the Library Tower, like the planned attack the Bush administration is reported to have foiled once. The Boeing would reach the Library Tower in five minutes, and an F-15 fighter aircraft is in the air, ready to strike. The pilot would have to act fast to prevent the plane from descending on the target and its parts from falling into a highly populated area. At the same time, one cannot say with certainty whether an attack on the tower will happen. Would you order the F-15 pilot to shoot down the Boeing, killing four hundred innocent passengers plus crew, or not?

This scenario is both simple and complicated for the hedonic calculus. It is simple because there are only two possible actions, to shoot the plane down or to wait and see what happens. Yet it is complicated because the decision has to be made under limited time and knowledge. How many people are in the Library Tower? Is the event really a repeat of the 9/11 attacks, or was the text message an error, perhaps even a bad joke? Might the F-15 pilot shoot down the wrong plane in the cloudy sky? This situation may not be a fair example for the hedonic calculus, since the calculations of pleasures and pains involve lots of guesswork and possibilities for error. To follow the calculus, one would try to estimate for every person whose interests are concerned—each passenger, crew member, person in the tower, person on the ground nearby, and the relatives and close friends of all of these—the intensity, duration, and other dimensions of each potential pain and pleasure caused if the plane were shot down, and the same if it were not.

Although Bentham’s calculus gave birth to the type of moral system that promoted democratic and liberal reforms, it is silent on the practical issues of real-time decision making. Its problem is twofold. First, if there is no known way to obtain reliable estimates of the values involved, one can select values that justify either shooting or waiting, rationalizing a decision made on other grounds. Nor is this problem limited to decisions under time constraints. The philosopher Daniel Dennett posed the question of whether the meltdown at Three Mile Island was a good thing or a bad thing.21 In planning an action where such a meltdown could happen with some probability, should one assign a positive or negative utility? Do its long-term effects on nuclear policy, considered positive by many, outweigh its negative consequences? Many years after the event, Dennett concluded that it is still too early to say, and also too early to know when the answer will be available. Second, the advantage of complex calculations of this kind has not been proved. We have already seen that even if it’s possible to weigh all reasons, the result is often less accurate than that obtained from one good reason.

After the events of September 11, 2001, the plane scenario seems likely enough that governments have issued rulings to cope with it. In February 2006, Germany’s Federal Constitutional Court ruled that deliberately sacrificing innocent citizens in response to a suspected terrorist action violates the federal constitution, which explicitly protects human dignity. That is, it is illegal to shoot down a hijacked plane with innocent passengers in it. The court also mentioned the danger of false alarms, that is, the possibility of shooting down a plane unnecessarily in moments of confusion and uncertainty. The Russian parliament, on the other hand, passed a law that does allow passenger planes suspected of being used as flying bombs to be shot down. These disparate legal decisions illustrate the conflict between consequentialism and a type of Kantian first-principles ethic that follows the rule “Don’t kill innocent people as a means to an end.”

These two systems differ in whether they are willing to make trade-offs. The idea that one ought to make trade-offs in order to be morally responsible often conflicts with people’s gut feelings.


Diana and David, a young married couple very much in love, have started their respective careers, she as a real estate broker, he as an architect. They find the perfect spot to build their dream house and take out a mortgage. When the recession hits, they stand to lose everything they own, so they head to Las Vegas for a shot at winning the money they need. After losing at the tables, they are approached by a billionaire instantly attracted to Diana. He offers them one million dollars for a night with her.

If you and your spouse were in this financial crisis, would you accept the proposal? This is the plot of Adrian Lyne’s movie Indecent Proposal, which grapples with the morality of trade-offs. Are faithfulness, true love, and honor up for sale? Many people believe nothing would justify trading off these sacred values for money or other secular goods. Economists, however, remind us that we live in a world with scarce resources where eventually everything has its price tag, whether we like it or not. In response, Oscar Wilde is reported to have defined a cynic as someone who knows the price of everything and the value of nothing. The tension in Indecent Proposal arises from the conflict between treating faithfulness as a sacred value or as a commodity. The couple finally accepts the proposal, but after the night is over, they learn that their decision has exacted another price: it threatens to destroy their relationship.

Cultures differ in what they are or aren’t willing to sell. So do liberal Democrats and conservative Republicans. Should the free market be extended to buying and selling body organs, PhDs, and adoption rights for children? Should people have the right to sell themselves as slaves to others? Some cultures sell their children or treat adolescent girls as a commodity to be sold to a bridegroom’s family. Prostitutes earn their living by selling their bodies and sexuality, and politicians are constantly accused of having sold their ideals. If something is deemed to have a moral value, then allowing it to be traded will likely evoke moral outrage. This is one of the reasons why many citizens frown upon experts who attach a monetary value to the life of a person, depending on age, gender, or education, as in insurance calculations or industrial safety standards. Similarly, if an automobile company publicly announced that it did not introduce a particular safety precaution for its cars because it would have cost $100 million to save one life, moral outrage would be virtually guaranteed.22 The overwhelming gut feeling in most cultures is that the value of lives should not be expressed in dollars.

This antipathy to trade-offs suggests that moral intuitions are based on rules of thumb that rely on one-reason decision making rather than on weighing and adding consequences. Again, there may be two kinds of people, moral maximizers who make trade-offs and moral satisficers who don’t. Most likely, every one of us has moral values that we might be willing to trade off and those that we wouldn’t. The dividing line will depend on where our moral feelings have their roots. If they are rooted in the autonomy of the individual, trade-offs are unproblematic unless they do harm to other individuals and violate their rights. Yet if the moral domain is rooted in the family or community, then issues that concern hierarchy, ingroup, and purity are not up for sale.