The Moral Landscape: How Science Can Determine Human Values - Sam Harris (2010)

Chapter 2. GOOD AND EVIL

There may be nothing more important than human cooperation. Whenever more pressing concerns seem to arise—like the threat of a deadly pandemic, an asteroid impact, or some other global catastrophe—human cooperation is the only remedy (if a remedy exists). Cooperation is the stuff of which meaningful human lives and viable societies are made. Consequently, few topics will be more relevant to a maturing science of human well-being.

Open a newspaper, today or any day for the rest of your life, and you will witness failures of human cooperation, great and small, announced from every corner of the world. The results of these failures are no less tragic for being utterly commonplace: deception, theft, violence, and their associated miseries arise in a continuous flux of misspent human energy. When one considers the proportion of our limited time and resources that must be squandered merely to guard against theft and violence (to say nothing of addressing their effects), the problem of human cooperation seems almost the only problem worth thinking about.1 “Ethics” and “morality” (I use these terms interchangeably) are the names we give to our deliberate thinking on these matters.2 Clearly, few subjects have greater bearing upon the question of human well-being.

As we better understand the brain, we will increasingly understand all of the forces—kindness, reciprocity, trust, openness to argument, respect for evidence, intuitions of fairness, impulse control, the mitigation of aggression, etc.—that allow friends and strangers to collaborate successfully on the common projects of civilization. Understanding ourselves in this way, and using this knowledge to improve human life, will be among the most important challenges to science in the decades to come.

Many people imagine that the theory of evolution entails selfishness as a biological imperative. This popular misconception has been very harmful to the reputation of science. In truth, human cooperation and its attendant moral emotions are fully compatible with biological evolution. Selection pressure at the level of “selfish” genes would surely incline creatures like ourselves to make sacrifices for our relatives, for the simple reason that one’s relatives can be counted on to share one’s genes: while this truth might not be obvious through introspection, your brother’s or sister’s reproductive success is, in part, your own. This phenomenon, known as kin selection, was not given a formal analysis until the 1960s in the work of William Hamilton,3 but it was at least implicit in the understanding of earlier biologists. Legend has it that J. B. S. Haldane was once asked if he would risk his life to save a drowning brother, to which he quipped, “No, but I would save two brothers or eight cousins.”4
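The arithmetic behind Haldane's quip follows from what is now called Hamilton's rule: a self-sacrificing act is favored by selection when the cost to the actor is outweighed by the benefit to recipients, weighted by their genetic relatedness. The rule itself is Hamilton's later formalization, not something stated in the quip; the following is only a minimal sketch of the calculation, with illustrative names.

```python
# Hamilton's rule (sketch): an altruistic act is favored when r * b > c,
# where c is the cost to the actor, b the benefit per recipient, and
# r the coefficient of relatedness between actor and recipient.

# Average fraction of genes shared by common descent:
RELATEDNESS = {
    "sibling": 0.5,    # full siblings share half their genes on average
    "cousin": 0.125,   # first cousins share one eighth
}

def inclusive_benefit(relation, n_recipients):
    """Genetic benefit of saving n relatives, in units of 'copies of oneself'."""
    return RELATEDNESS[relation] * n_recipients

# Haldane's quip: giving one's own life (a cost of 1) exactly breaks even
# against saving two siblings or eight cousins.
assert inclusive_benefit("sibling", 2) == 1.0
assert inclusive_benefit("cousin", 8) == 1.0
```

Two siblings (2 × 0.5) or eight cousins (8 × 0.125) each sum to one full genetic equivalent of oneself, which is why Haldane's numbers mark the break-even point of the trade.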

The work of evolutionary biologist Robert Trivers on reciprocal altruism has gone a long way toward explaining cooperation among unrelated friends and strangers.5 Trivers’s model incorporates many of the psychological and social factors related to altruism and reciprocity, including: friendship, moralistic aggression (i.e., the punishment of cheaters), guilt, sympathy, and gratitude, along with a tendency to deceive others by mimicking these states. As first suggested by Darwin, and recently elaborated by the psychologist Geoffrey Miller, sexual selection may have further encouraged the development of moral behavior. Because moral virtue is attractive to both sexes, it might function as a kind of peacock’s tail: costly to produce and maintain, but beneficial to one’s genes in the end.6

Clearly, our selfish and selfless interests do not always conflict. In fact, the well-being of others, especially those closest to us, is one of our primary (and, indeed, most selfish) interests. While much remains to be understood about the biology of our moral impulses, kin selection, reciprocal altruism, and sexual selection explain how we have evolved to be, not merely atomized selves in thrall to our self-interest, but social selves disposed to serve a common interest with others.7

Certain biological traits appear to have been shaped by, and to have further enhanced, the human capacity for cooperation. For instance, unlike the rest of the earth’s creatures, including our fellow primates, the sclera of our eyes (the region surrounding the colored iris) is white and exposed. This makes the direction of the human gaze very easy to detect, allowing us to notice even the subtlest shifts in one another’s visual attention. The psychologist Michael Tomasello suggests the following adaptive logic:

If I am, in effect, advertising the direction of my eyes, I must be in a social environment full of others who are not often inclined to take advantage of this to my detriment—by, say, beating me to the food or escaping aggression before me. Indeed, I must be in a cooperative social environment in which others following the direction of my eyes somehow benefits me.8

Tomasello has found that even twelve-month-old children will follow a person’s gaze, while chimpanzees tend to be interested only in head movements. He suggests that our unique sensitivity to gaze direction facilitated human cooperation and language development.

While each of us is selfish, we are not merely so. Our own happiness requires that we extend the circle of our self-interest to others—to family, friends, and even to perfect strangers whose pleasures and pains matter to us. While few thinkers have placed greater focus on the role that competing self-interests play in society, even Adam Smith recognized that each of us cares deeply about the happiness of others.9 He also recognized, however, that our ability to care about others has its limits and that these limits are themselves the object of our personal and collective concern:

Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connection with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquility, as if no such accident had happened. The most frivolous disaster which could befall himself would occasion a more real disturbance. If he was to lose his little finger to-morrow, he would not sleep to-night; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own. To prevent, therefore, this paltry misfortune to himself, would a man of humanity be willing to sacrifice the lives of a hundred millions of his brethren, provided he had never seen them? Human nature startles with horror at the thought, and the world, in its greatest depravity and corruption, never produced such a villain as could be capable of entertaining it. But what makes this difference?10

Here, Smith captures the tension between our reflexive selfishness and our broader moral intuitions about as well as anyone can. The truth about us is plain to see: most of us are powerfully absorbed by selfish desires almost every moment of our lives; our attention to our own pains and pleasures could scarcely be more acute; only the most piercing cries of anonymous suffering capture our interest, and then fleetingly. And yet, when we consciously reflect on what we should do, an angel of beneficence and impartiality seems to spread its wings within us: we genuinely want fair and just societies; we want others to have their hopes realized; we want to leave the world better than we found it.

Questions of human well-being run deeper than any explicit code of morality. Morality—in terms of consciously held precepts, social contracts, notions of justice, etc.—is a relatively recent development. Such conventions require, at a minimum, complex language and a willingness to cooperate with strangers, and this takes us a stride or two beyond the Hobbesian “state of nature.” However, any biological changes that served to mitigate the internecine misery of our ancestors would fall within the scope of an analysis of morality as a guide to personal and collective well-being. To simplify matters enormously:

1. Genetic changes in the brain gave rise to social emotions, moral intuitions, and language …

2. These allowed for increasingly complex cooperative behavior, the keeping of promises, concern about one’s reputation, etc.…

3. Which became the basis for cultural norms, laws, and social institutions whose purpose has been to render this growing system of cooperation durable in the face of countervailing forces.

Some version of this progression has occurred in our case, and each step represents an undeniable enhancement of our personal and collective well-being. To be sure, catastrophic regressions are always possible. We could, either by design or negligence, employ the hard-won fruits of civilization, and the emotional and social leverage wrought of millennia of biological and cultural evolution, to immiserate ourselves more fully than unaided Nature ever could. Imagine a global North Korea, where the better part of a starving humanity serves as slaves to a lunatic with bouffant hair: this might be worse than a world filled merely with warring australopithecines. What would “worse” mean in this context? Just what our intuitions suggest: more painful, less satisfying, more conducive to terror and despair, and so on. While it may never be feasible to compare such counterfactual states of the world, this does not mean that there are no experiential truths to be compared. Once again, there is a difference between answers in practice and answers in principle.

The moment one begins thinking about morality in terms of well-being, it becomes remarkably easy to discern a moral hierarchy across human societies. Consider the following account of the Dobu islanders from Ruth Benedict:

Life in Dobu fosters extreme forms of animosity and malignancy which most societies have minimized by their institutions. Dobuan institutions, on the other hand, exalt them to the highest degree. The Dobuan lives out without repression man’s worst nightmares of the ill-will of the universe, and according to his view of life virtue consists in selecting a victim upon whom he can vent the malignancy he attributes alike to human society and to the powers of nature. All existence appears to him as a cutthroat struggle in which deadly antagonists are pitted against one another in contest for each one of the goods of life. Suspicion and cruelty are his trusted weapons in the strife and he gives no mercy, as he asks none.11

The Dobu appear to have been as blind to the possibility of true cooperation as they were to the truths of modern science. While innumerable things would have been worthy of their attention—the Dobu were, after all, extremely poor and mightily ignorant—their main preoccupation seems to have been malicious sorcery. Every Dobuan’s primary interest was to cast spells on other members of the tribe in an effort to sicken or kill them and in the hopes of magically appropriating their crops. The relevant spells were generally passed down from a maternal uncle and became every Dobuan’s most important possession. Needless to say, those who received no such inheritance were believed to be at a terrible disadvantage. Spells could be purchased, however, and the economic life of the Dobu was almost entirely devoted to trade in these fantastical commodities.

Certain members of the tribe were understood to have a monopoly over both the causes and cures for specific illnesses. Such people were greatly feared and ceaselessly propitiated. In fact, the conscious application of magic was believed necessary for the most mundane tasks. Even the work of gravity had to be supplemented by relentless wizardry: absent the right spell, a man’s vegetables were expected to rise out of the soil and vanish under their own power.

To make matters worse, the Dobu imagined that good fortune conformed to a rigid law of thermodynamics: if one man succeeded in growing more yams than his neighbor, his surplus crop must have been pilfered through sorcery. As all Dobu continuously endeavored to steal one another’s crops by such methods, the lucky gardener is likely to have viewed his surplus in precisely these terms. A good harvest, therefore, was tantamount to “a confession of theft.”

This strange marriage of covetousness and magical thinking created a perfect obsession with secrecy in Dobu society. Whatever possibility of love and real friendship remained seems to have been fully extinguished by a final doctrine: the power of sorcery was believed to grow in proportion to one’s intimacy with the intended victim. This belief gave every Dobuan an incandescent mistrust of all others, which burned brightest on those closest. Therefore, if a man fell seriously ill or died, his misfortune was immediately blamed on his wife, and vice versa. The picture is of a society completely in thrall to antisocial delusions.

Did the Dobu love their friends and family as much as we love ours? Many people seem to think that the answer to such a question must, in principle, be “yes,” or that the question itself is vacuous. I think it is clear, however, that the question is well posed and easily answered. The answer is “no.” Being fellow Homo sapiens, we must presume that the Dobu islanders had brains sufficiently similar to our own to invite comparison. Is there any doubt that the selfishness and general malevolence of the Dobu would have been expressed at the level of their brains? Only if you think the brain does nothing more than filter oxygen and glucose out of the blood. Once we more fully understand the neurophysiology of states like love, compassion, and trust, it will be possible to spell out the differences between ourselves and people like the Dobu in greater detail. But we need not await any breakthroughs in neuroscience to bring the general principle into view: just as it is possible for individuals and groups to be wrong about how best to maintain their physical health, it is possible for them to be wrong about how to maximize their personal and social well-being.

I believe that we will increasingly understand good and evil, right and wrong, in scientific terms, because moral concerns translate into facts about how our thoughts and behaviors affect the well-being of conscious creatures like ourselves. If there are facts to be known about the well-being of such creatures—and there are—then there must be right and wrong answers to moral questions. Students of philosophy will notice that this commits me to some form of moral realism (viz. moral claims can really be true or false) and some form of consequentialism (viz. the rightness of an act depends on how it impacts the well-being of conscious creatures). While moral realism and consequentialism have both come under pressure in philosophical circles, they have the virtue of corresponding to many of our intuitions about how the world works.12

Here is my (consequentialist) starting point: all questions of value (right and wrong, good and evil, etc.) depend upon the possibility of experiencing such value. Without potential consequences at the level of experience—happiness, suffering, joy, despair, etc.—all talk of value is empty. Therefore, to say that an act is morally necessary, or evil, or blameless, is to make (tacit) claims about its consequences in the lives of conscious creatures (whether actual or potential). I am unaware of any interesting exception to this rule. Needless to say, if one is worried about pleasing God or His angels, this assumes that such invisible entities are conscious (in some sense) and cognizant of human behavior. It also generally assumes that it is possible to suffer their wrath or enjoy their approval, either in this world or the world to come. Even within religion, therefore, consequences and conscious states remain the foundation of all values.

Consider the thinking of a Muslim suicide bomber who decides to obliterate himself along with a crowd of infidels: this would appear to be a perfect repudiation of the consequentialist attitude. And yet, when we look at the rationale for seeking martyrdom within Islam, we see that the consequences of such actions, both real and imagined, are entirely the point. Aspiring martyrs expect to please God and experience an eternity of happiness after death. If one fully accepts the metaphysical presuppositions of traditional Islam, martyrdom must be viewed as the ultimate attempt at career advancement. The martyr is also the greatest of altruists: for not only does he secure a place for himself in Paradise, he wins admittance for seventy of his closest relatives as well. Aspiring martyrs also believe that they are furthering God’s work here on earth, with desirable consequences for the living. We know quite a lot about how such people think—indeed, they advertise their views and intentions ceaselessly—and it has everything to do with their belief that God has told them, in the Qur’an and the hadith, precisely what the consequences of certain thoughts and actions will be. Of course, it seems profoundly unlikely that our universe has been designed to reward individual primates for killing one another while believing in the divine origin of a specific book. The fact that would-be martyrs are almost surely wrong about the consequences of their behavior is precisely what renders it such an astounding and immoral misuse of human life.

Because most religions conceive of morality as a matter of being obedient to the word of God (generally for the sake of receiving a supernatural reward), their precepts often have nothing to do with maximizing well-being in this world. Religious believers can, therefore, assert the immorality of contraception, masturbation, homosexuality, etc., without ever feeling obliged to argue that these practices actually cause suffering. They can also pursue aims that are flagrantly immoral, in that they needlessly perpetuate human misery, while believing that these actions are morally obligatory. This pious uncoupling of moral concern from the reality of human and animal suffering has caused tremendous harm.

Clearly, there are mental states and capacities that contribute to our general well-being (happiness, compassion, kindness, etc.) as well as mental states and incapacities that diminish it (cruelty, hatred, terror, etc.). It is, therefore, meaningful to ask whether a specific action or way of thinking will affect a person’s well-being and/or the well-being of others, and there is much that we might eventually learn about the biology of such effects. Where a person finds himself on this continuum of possible states will be determined by many factors—genetic, environmental, social, cognitive, political, economic, etc.—and while our understanding of such influences may never be complete, their effects are realized at the level of the human brain. Our growing understanding of the brain, therefore, will have increasing relevance for any claims we make about how thoughts and actions affect the welfare of human beings.

Notice that I do not mention morality in the preceding paragraph, and perhaps I need not. I began this book by arguing that, despite a century of timidity on the part of scientists and philosophers, morality can be linked directly to facts about the happiness and suffering of conscious creatures. However, it is interesting to consider what would happen if we simply ignored this step and merely spoke about “well-being.” What would our world be like if we ceased to worry about “right” and “wrong,” or “good” and “evil,” and simply acted so as to maximize well-being, our own and that of others? Would we lose anything important? And if important, wouldn’t it be, by definition, a matter of someone’s well-being?

Can We Ever Be “Right” About Right and Wrong?

The philosopher and neuroscientist Joshua Greene has done some of the most influential neuroimaging research on morality.13 While Greene wants to understand the brain processes that govern our moral lives, he believes that we should be skeptical of moral realism on metaphysical grounds. For Greene, the question is not, “How can you know for sure that your moral beliefs are true?” but rather, “How could it be that anyone’s moral beliefs are true?” In other words, what is it about the world that could make a moral claim true or false?14 He appears to believe that the answer to this question is “nothing.”

However, it seems to me that this question is easily answered. Moral view A is truer than moral view B, if A entails a more accurate understanding of the connections between human thoughts/intentions/behavior and human well-being. Does forcing women and girls to wear burqas make a net positive contribution to human well-being? Does it produce happier boys and girls? Does it produce more compassionate men or more contented women? Does it make for better relationships between men and women, between boys and their mothers, or between girls and their fathers? I would bet my life that the answer to each of these questions is “no.” So, I think, would many scientists. And yet, as we have seen, most scientists have been trained to think that such judgments are mere expressions of cultural bias—and, thus, unscientific in principle. Very few of us seem willing to admit that such simple moral truths increasingly fall within the scope of our scientific worldview. Greene articulates the prevailing skepticism quite well:

Moral judgment is, for the most part, driven not by moral reasoning, but by moral intuitions of an emotional nature. Our capacity for moral judgment is a complex evolutionary adaptation to an intensely social life. We are, in fact, so well adapted to making moral judgments that our making them is, from our point of view, rather easy, a part of “common sense.” And like many of our common sense abilities, our ability to make moral judgments feels to us like a perceptual ability, an ability, in this case, to discern immediately and reliably mind-independent moral facts. As a result, we are naturally inclined toward a mistaken belief in moral realism. The psychological tendencies that encourage this false belief serve an important biological purpose, and that explains why we should find moral realism so attractive even though it is false. Moral realism is, once again, a mistake we were born to make.15

Greene alleges that moral realism assumes that “there is sufficient uniformity in people’s underlying moral outlooks to warrant speaking as if there is a fact of the matter about what’s ‘right’ or ‘wrong,’ ‘just’ or ‘unjust.’”16 But do we really need to assume such uniformity for there to be right answers to moral questions? Is physical or biological realism predicated on “sufficient uniformity in people’s underlying [physical or biological] outlooks”? Taking humanity as a whole, I am quite certain that there is a greater consensus that cruelty is wrong (a common moral precept) than that the passage of time varies with velocity (special relativity) or that humans and lobsters share a common ancestor (evolution). Should we doubt whether there is a “fact of the matter” with respect to these physical and biological truth claims? Does the general ignorance about the special theory of relativity or the pervasive disinclination of Americans to accept the scientific consensus on evolution put our scientific worldview, even slightly, in question?17

Greene notes that it is often difficult to get people to agree about moral truth, or to even get an individual to agree with himself in different contexts. These tensions lead him to the following conclusion:

[M]oral theorizing fails because our intuitions do not reflect a coherent set of moral truths and were not designed by natural selection or anything else to behave as if they were … If you want to make sense of your moral sense, turn to biology, psychology, and sociology—not normative ethics.18

This objection to moral realism may seem reasonable, until one notices that it can be applied, with the same leveling effect, to any domain of human knowledge. For instance, it is just as true to say that our logical, mathematical, and physical intuitions have not been designed by natural selection to track the Truth.19 Does this mean that we must cease to be realists with respect to physical reality? We need not look far in science to find ideas and opinions that defy easy synthesis. There are many scientific frameworks (and levels of description) that resist integration and which divide our discourse into areas of specialization, even pitting Nobel laureates in the same discipline against one another. Does this mean that we can never hope to understand what is really going on in the world? No. It means the conversation must continue.20

Total uniformity in the moral sphere—either interpersonally or intrapersonally—may be hopeless. So what? This is precisely the lack of closure we face in all areas of human knowledge. Full consensus as a scientific goal only exists in the limit, at a hypothetical end of inquiry. Why not tolerate the same open-endedness in our thinking about human well-being?

Again, this does not mean that all opinions about morality are justified. To the contrary—the moment we accept that there are right and wrong answers to questions of human well-being, we must admit that many people are simply wrong about morality. The eunuchs who tended the royal family in China’s Forbidden City, dynasty after dynasty, seem to have felt generally well compensated for their lives of arrested development and isolation by the influence they achieved at court—as well as by the knowledge that their genitalia, which had been preserved in jars all the while, would be buried with them after their deaths, ensuring them rebirth as human beings. When confronted with such an exotic point of view, a moral realist would like to say we are witnessing more than a mere difference of opinion: we are in the presence of moral error. It seems to me that we can be reasonably confident that it is bad for parents to sell their sons into the service of a government that intends to cut off their genitalia “using only hot chili sauce as a local anesthetic.”21 This would mean that Sun Yaoting, the emperor’s last eunuch, who died in 1996 at the age of ninety-four, was wrong to harbor, as his greatest regret, “the fall of the imperial system he had aspired to serve.” Most scientists seem to believe that no matter how maladaptive or masochistic a person’s moral commitments, it is impossible to say that he is ever mistaken about what constitutes a good life.

Moral Paradox

One of the problems with consequentialism in practice is that we cannot always determine whether the effects of an action will be bad or good. In fact, it can be surprisingly difficult to decide this even in retrospect. Dennett has dubbed this problem “the Three Mile Island Effect.”22 Was the meltdown at Three Mile Island a bad outcome or a good one? At first glance, it surely seems bad, but it might have also put us on a path toward greater nuclear safety, thereby saving many lives. Or it might have caused us to grow dependent on more polluting technologies, contributing to higher rates of cancer and to global climate change. Or it might have produced a multitude of effects, some mutually reinforcing, and some mutually canceling. If we cannot determine the net result of even such a well-analyzed event, how can we judge the likely consequences of the countless decisions we must make throughout our lives?

One difficulty we face in determining the moral valence of an event is that it often seems impossible to determine whose well-being should most concern us. People have competing interests, mutually incompatible notions of happiness, and there are many well-known paradoxes that leap into our path the moment we begin thinking about the welfare of whole populations. As we are about to see, population ethics is a notorious engine of paradox, and no one, to my knowledge, has come up with a way of assessing collective well-being that conserves all of our intuitions. As the philosopher Patricia Churchland puts it, “no one has the slightest idea how to compare the mild headache of five million against the broken legs of two, or the needs of one’s own two children against the needs of a hundred unrelated brain-damaged children in Serbia.”23

Such puzzles may seem of mere academic interest, until we realize that population ethics governs the most important decisions societies ever make. What are our moral responsibilities in times of war, when diseases spread, when millions suffer famine, or when global resources are scarce? These are moments in which we have to assess changes in collective welfare in ways that purport to be rational and ethical. Just how motivated should we be to act when 250,000 people die in an earthquake on the island of Haiti? Whether we know it or not, intuitions about the welfare of whole populations determine our thinking on these matters.

Except, that is, when we simply ignore population ethics—as, it seems, we are psychologically disposed to do. The work of the psychologist Paul Slovic and colleagues has uncovered some rather startling limitations on our capacity for moral reasoning when thinking about large groups of people—or, indeed, about groups larger than one.24 As Slovic observes, when human life is threatened, it seems both rational and moral for our concern to increase with the number of lives at stake. And if we think that losing many lives might have some additional negative consequences (like the collapse of civilization), the curve of our concern should grow steeper still. But this is not how we characteristically respond to the suffering of other human beings.

Slovic’s experimental work suggests that we intuitively care most about a single, identifiable human life, less about two, and we grow more callous as the body count rises. Slovic believes that this “psychic numbing” explains the widely lamented fact that we are generally more distressed by the suffering of a single child (or even a single animal) than by a proper genocide. What Slovic has termed “genocide neglect”—our reliable failure to respond, both practically and emotionally, to the most horrific instances of unnecessary human suffering—represents one of the more perplexing and consequential failures of our moral intuition.

Slovic found that when given a chance to donate money in support of needy children, subjects give most generously and feel the greatest empathy when told only about a single child’s suffering. When presented with two needy cases, their compassion wanes. And this diabolical trend continues: the greater the need, the less people are emotionally affected and the less they are inclined to give.

Of course, charities have long understood that putting a face on the data will connect their constituents to the reality of human suffering and increase donations. Slovic’s work has confirmed this suspicion, which is now known as the “identifiable victim effect.”25 Amazingly, however, adding information about the scope of a problem to these personal appeals proves to be counterproductive. Slovic has shown that setting the story of a single needy person in the context of wider human need reliably diminishes altruism.

The fact that people seem to be reliably less concerned when faced with an increase in human suffering represents an obvious violation of moral norms. The important point, however, is that we immediately recognize how indefensible this allocation of emotional and material resources is once it is brought to our attention. What makes these experimental findings so striking is that they are patently inconsistent: if you care about what happens to one little girl, and you care about what happens to her brother, you must, at the very least, care as much about their combined fate. Your concern should be (in some sense) cumulative.26 When your violation of this principle is revealed, you will feel that you have committed a moral error. This explains why results of this kind can only be obtained between subjects (where one group is asked to donate to help one child and another group is asked to support two); we can be sure that if we presented both questions to each participant in the study, the effect would disappear (unless subjects could be prevented from noticing when they were violating the norms of moral reasoning).

Clearly, one of the great tasks of civilization is to create cultural mechanisms that protect us from the moment-to-moment failures of our ethical intuitions. We must build our better selves into our laws, tax codes, and institutions. Knowing that we are generally incapable of valuing two children more than either child alone, we must build a structure that reflects and enforces our deeper understanding of human well-being. This is where a science of morality could be indispensable to us: the more we understand the causes and constituents of human fulfillment, and the more we know about the experiences of our fellow human beings, the more we will be able to make intelligent decisions about which social policies to adopt.

For instance, there are an estimated 90,000 people living on the streets of Los Angeles. Why are they homeless? How many of these people are mentally ill? How many are addicted to drugs or alcohol? How many have simply fallen through the cracks in our economy? Such questions have answers. And each of these problems admits of a range of responses, as well as false solutions and neglect. Are there policies we could adopt that would make it easy for every person in the United States to help alleviate the problem of homelessness in their own communities? Is there some brilliant idea that no one has thought of that would make people want to alleviate the problem of homelessness more than they want to watch television or play video games? Would it be possible to design a video game that could help solve the problem of homelessness in the real world?27 Again, such questions open onto a world of facts, whether or not we can bring the relevant facts into view.

Clearly, morality is shaped by cultural norms to a great degree, and it can be difficult to do what one believes to be right on one’s own. A friend’s four-year-old daughter recently observed the role that social support plays in making moral decisions:

“It’s so sad to eat baby lambies,” she said as she gnawed greedily on a lamb chop.

“So, why don’t you stop eating them?” her father asked.

“Why would they kill such a soft animal? Why wouldn’t they kill some other kind of animal?”

“Because,” her father said, “people like to eat the meat. Like you are, right now.”

His daughter reflected for a moment—still chewing her lamb—and then replied:

“It’s not good. But I can’t stop eating them if they keep killing them.”

And the practical difficulties for consequentialism do not end here. When thinking about maximizing the well-being of a population, are we thinking in terms of total or average well-being? The philosopher Derek Parfit has shown that both bases of calculation lead to troubling paradoxes.28 If we are concerned only about total welfare, we should prefer a world with hundreds of billions of people whose lives are just barely worth living to a world in which 7 billion of us live in perfect ecstasy. This is Parfit’s famous “Repugnant Conclusion.”29 If, on the other hand, we are concerned about the average welfare of a population, we should prefer a world containing a single, happy inhabitant to a world of billions who are only slightly less happy; this metric would even suggest that we might want to painlessly kill many of the least happy people currently alive, thereby raising the average level of human well-being. Privileging average welfare would also lead us to prefer a world in which billions live under the misery of constant torture to a world in which only one person is tortured ever-so-slightly more. It could also render the morality of an action dependent upon the experience of unaffected people. As Parfit points out, if we care about the average over time, we might deem it morally wrong to have a child today whose life, while eminently worth living, would not compare favorably to the lives of the ancient Egyptians. Parfit has even devised scenarios in which everyone alive could have a lower quality of life than they otherwise would and yet the average quality of life will have increased.30 Clearly, we cannot rely on a simple summation or averaging of welfare as our only metric. And yet, at the extremes, we can see that human welfare must aggregate in some way: it really is better for all of us to be deeply fulfilled than it is for everyone to live in absolute agony.
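The arithmetic behind these paradoxes is easy to make concrete. Here is a toy calculation; the welfare scores and population sizes are invented purely for illustration:

```python
# Toy illustration of Parfit's population-ethics paradoxes.
# All numbers are invented for illustration only.

def total_welfare(welfare_per_person, population):
    """Total welfare of a uniform population."""
    return welfare_per_person * population

def average_welfare(groups):
    """Average welfare over groups given as (welfare, population) pairs."""
    people = sum(n for _, n in groups)
    return sum(w * n for w, n in groups) / people

# Total welfare prefers a vast population of lives barely worth living
# over 7 billion lives of "perfect ecstasy" (the Repugnant Conclusion):
ecstatic = total_welfare(100, 7_000_000_000)
crowded = total_welfare(1, 800_000_000_000)
assert crowded > ecstatic

# Average welfare rises if the least happy group is simply removed,
# even though no one has been made better off:
before = average_welfare([(90, 1_000), (10, 1_000)])  # average of 50
after = average_welfare([(90, 1_000)])                # average of 90
assert after > before
```

The point of the sketch is only that each metric, applied mechanically, endorses an outcome we recognize as monstrous, which is why neither simple summation nor simple averaging can serve as the sole measure of welfare.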

Placing only consequences in our moral balance also leads to indelicate questions. For instance, do we have a moral obligation to come to the aid of wealthy, healthy, and intelligent hostages before poor, sickly, and slow-witted ones? After all, the former are more likely to make a positive contribution to society upon their release. And what about remaining partial to one’s friends and family? Is it wrong for me to save the life of my only child if, in the process, I neglect to save a stranger’s brood of eight? Wrestling with such questions has convinced many people that morality does not obey the simple laws of arithmetic.

However, such puzzles merely suggest that certain moral questions could be difficult or impossible to answer in practice; they do not suggest that morality depends upon something other than the consequences of our actions and intentions. This is a frequent source of confusion: consequentialism is less a method of answering moral questions than it is a claim about the status of moral truth. Our assessment of consequences in the moral domain must proceed as it does in all others: under the shadow of uncertainty, guided by theory, data, and honest conversation. The fact that it may often be difficult, or even impossible, to know what the consequences of our thoughts and actions will be does not mean that there is some other basis for human values that is worth worrying about.

Such difficulties notwithstanding, it seems to me quite possible that we will one day resolve moral questions that are often thought to be unanswerable. For instance, we might agree that having a preference for one’s intimates is better (in that it increases general welfare) than being fully disinterested as to how consequences accrue. Which is to say that there may be some forms of love and happiness that are best served by each of us being specially connected to a subset of humanity. This certainly appears to be descriptively true of us at present. Communal experiments that ignore parents’ special attachment to their own children, for instance, do not seem to work very well. The Israeli kibbutzim learned this the hard way: after discovering that raising children communally made both parents and children less happy, they reinstated the nuclear family.31 Most people may be happier in a world in which a natural bias toward one’s own children is conserved—presumably in the context of laws and social norms that disregard this bias. When I take my daughter to the hospital, I am naturally more concerned about her than I am about the other children in the lobby. I do not, however, expect the hospital staff to share my bias. In fact, given time to reflect about it, I realize that I would not want them to. How could such a denial of my self-interest actually be in the service of my self-interest? Well, first, there are many more ways for a system to be biased against me than in my favor, and I know that I will benefit from a fair system far more than I will from one that can be easily corrupted. I also happen to care about other people, and this experience of empathy deeply matters to me. I feel better as a person valuing fairness, and I want my daughter to become a person who shares this value. And how would I feel if the physician attending my daughter actually shared my bias for her and viewed her as far more important than the other patients under his care? 
Frankly, it would give me the creeps.

But perhaps there are two possible worlds that maximize the well-being of their inhabitants to precisely the same degree: in world X everyone is focused on the welfare of all others without bias, while in world Y everyone shows some degree of moral preference for their friends and family. Perhaps these worlds are equally good, in that their inhabitants enjoy precisely the same level of well-being. These could be thought of as two peaks on the moral landscape. Perhaps there are others. Does this pose a threat to moral realism or to consequentialism? No, because there would still be right and wrong ways to move from our current position on the moral landscape toward one peak or the other, and movement would still be a matter of increasing well-being in the end.

To bring the discussion back to the especially low-hanging fruit of conservative Islam: there is absolutely no reason to think that demonizing homosexuals, stoning adulterers, veiling women, soliciting the murder of artists and intellectuals, and celebrating the exploits of suicide bombers will move humanity toward a peak on the moral landscape. This is, I think, as objective a claim as we ever make in science.

Consider the Danish cartoon controversy: an eruption of religious insanity that still flows to this day. Kurt Westergaard, the cartoonist who drew what was arguably the most inflammatory of these utterly benign cartoons, has lived in hiding since pious Muslims first began calling for his murder in 2006. A few weeks ago—more than three years after the controversy first began—a Somali man broke into Westergaard’s home with an axe. Only the construction of a specially designed “safe room” allowed Westergaard to escape being slaughtered for the glory of God (his five-year-old granddaughter also witnessed the attack). Westergaard now lives with continuous police protection—as do the other eighty-seven men in Denmark who have the misfortune of being named “Kurt Westergaard.”32

The peculiar concerns of Islam have created communities in almost every society on earth that grow so unhinged in the face of criticism that they will reliably riot, burn embassies, and seek to kill peaceful people, over cartoons. This is something they will not do, incidentally, in protest over the continuous atrocities committed against them by their fellow Muslims. The reasons why such a terrifying inversion of priorities does not tend to maximize human happiness are susceptible to many levels of analysis—ranging from biochemistry to economics. But do we need further information in this case? It seems to me that we already know enough about the human condition to know that killing cartoonists for blasphemy does not lead anywhere worth going on the moral landscape.

There are other results in psychology and behavioral economics that make it difficult to assess changes in human well-being. For instance, people tend to consider losses to be far more significant than forsaken gains, even when the net result is the same. When presented with a wager in which they stand a 50 percent chance of losing $100, most people will consider anything less than a potential gain of $200 to be unattractive. This bias relates to what has come to be known as “the endowment effect”: people demand more money in exchange for an object that has been given to them than they would spend to acquire the object in the first place. In psychologist Daniel Kahneman’s words, “a good is worth more when it is considered as something that could be lost or given up than when it is evaluated as a potential gain.”33 This aversion to loss causes human beings to generally err on the side of maintaining the status quo. It is also an important impediment to conflict resolution through negotiation: for if each party values his opponent’s concessions as gains and his own as losses, each is bound to perceive his sacrifice as being greater.34
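This asymmetry can be sketched with the value function of Kahneman and Tversky’s prospect theory, in which losses are weighted more heavily than gains. The loss-aversion coefficient of 2 below is simply the figure implied by the $200-for-$100 wager described above, not a universal constant:

```python
# Sketch of loss aversion in a 50/50 wager. LAMBDA = 2 is an
# illustrative loss-aversion coefficient, chosen to match the
# $200-gain / $100-loss indifference point described in the text.

LAMBDA = 2.0  # losses feel about twice as significant as equivalent gains

def subjective_value(x, lam=LAMBDA):
    """Felt value of a monetary outcome x (negative x is a loss)."""
    return x if x >= 0 else lam * x

def wager_appeal(gain, loss=100):
    """Felt value of a 50/50 chance of winning `gain` or losing `loss`."""
    return 0.5 * subjective_value(gain) + 0.5 * subjective_value(-loss)

# A $150 potential gain has a positive expected value of +$25...
assert 0.5 * 150 - 0.5 * 100 > 0
# ...but it still *feels* like a losing bet:
assert wager_appeal(150) < 0

# The wager only becomes attractive once the potential gain exceeds $200:
assert wager_appeal(200) == 0
assert wager_appeal(201) > 0
```

This is only a first approximation (the full prospect-theory value function is also curved, not linear), but it captures why a bet most people refuse can nevertheless be a good one in expectation.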

Loss aversion has been studied with functional magnetic resonance imaging (fMRI). If this bias were the result of negative feelings associated with potential loss, we would expect brain regions known to govern negative emotion to be involved. However, researchers have not found increased activity in any areas of the brain as losses increase. Instead, those regions that represent gains show decreasing activity as the size of the potential losses increases. In fact, these brain structures themselves exhibit a pattern of “neural loss aversion”: their activity decreases more steeply in the face of potential losses than it increases in the face of potential gains.35

There are clearly cases in which such biases seem to produce moral illusions—where a person’s view of right and wrong will depend on whether an outcome is described in terms of gains or losses. Some of these illusions might not be susceptible to full correction. As with many perceptual illusions, it may be impossible to “see” two circumstances as morally equivalent, even while “knowing” that they are. In such cases, it may be ethical to ignore how things seem. Or it may be that the path we take to arrive at identical outcomes really does matter to us—and, therefore, that losses and gains will remain incommensurable.

Imagine, for instance, that you are empaneled as the member of a jury in a civil trial and asked to determine how much a hospital should pay in damages to the parents of children who received substandard care in their facility. There are two scenarios to consider:

Couple A learned that their three-year-old daughter was inadvertently given a neurotoxin by the hospital staff. Before being admitted, their daughter was a musical prodigy with an IQ of 195. She has since lost all her intellectual gifts. She can no longer play music with any facility and her IQ is now a perfectly average 100.

Couple B learned that the hospital neglected to give their three-year-old daughter, who has an IQ of 100, a perfectly safe and inexpensive genetic enhancement that would have given her remarkable musical talent and nearly doubled her IQ. Their daughter’s intelligence remains average, and she lacks any noticeable musical gifts. The critical period for giving this enhancement has passed.

Obviously the end result under either scenario is the same. But what if the mental suffering associated with loss is simply bound to be greater than that associated with forsaken gains? If so, it may be appropriate to take this difference into account, even when we cannot give a rational explanation of why it is worse to lose something than not to gain it. This is another source of difficulty in the moral domain: unlike in behavioral economics, it is often difficult to establish the criteria by which two outcomes can be judged equivalent.36 There is probably another principle at work in this example, however: people tend to view sins of commission more harshly than sins of omission. It is not clear how we should account for this bias either. But, once again, to say that there are right answers to questions of how to maximize human well-being is not to say that we will always be in a position to answer such questions. There will be peaks and valleys on the moral landscape, and movement between them is clearly possible, whether or not we always know which way is up.

There are many other features of our subjectivity that have implications for morality. For instance, people tend to evaluate an experience based on its peak intensity (whether positive or negative) and the quality of its final moments. In psychology, this is known as the “peak/end rule.” Testing this rule in a clinical environment, one group found that patients undergoing colonoscopies (in the days when this procedure was done without anesthetic) could have their perception of suffering markedly reduced, and their likelihood of returning for a follow-up exam increased, if their physician needlessly prolonged the procedure at its lowest level of discomfort by leaving the colonoscope inserted for a few extra minutes.37 The same principle seems to hold for aversive sounds38 and for exposure to cold.39 Such findings suggest that, under certain conditions, it is compassionate to prolong a person’s pain unnecessarily so as to reduce his memory of suffering later on. Indeed, it might be unethical to do otherwise. Needless to say, this is a profoundly counterintuitive result. But this is precisely what is so important about science: it allows us to investigate the world, and our place within it, in ways that get behind first appearances. Why shouldn’t we do this with morality and human values generally?
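The peak/end rule is often modeled as the average of an experience’s worst moment and its final moment, ignoring duration. The per-minute discomfort scores below are invented, but they show how a longer procedure can leave a milder memory:

```python
# Toy model of the "peak/end rule": remembered discomfort tracks the
# average of the worst moment and the final moment, not the total pain.
# Discomfort scores per minute are invented for illustration.

def remembered_discomfort(pain):
    """Peak/end approximation of how the experience is remembered."""
    return (max(pain) + pain[-1]) / 2

def total_pain(pain):
    """Total discomfort actually endured, summed over time."""
    return sum(pain)

short_procedure = [4, 6, 8, 8]                  # ends at its most painful
prolonged_procedure = [4, 6, 8, 8, 3, 2, 1]     # extra minutes of mild discomfort

# The prolonged procedure involves strictly more total pain...
assert total_pain(prolonged_procedure) > total_pain(short_procedure)
# ...yet, by the peak/end rule, it is remembered as far less unpleasant:
assert remembered_discomfort(prolonged_procedure) < remembered_discomfort(short_procedure)
```

This divergence between pain endured and pain remembered is exactly the gap the colonoscopy study exploited.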

Fairness and Hierarchy

It is widely believed that focusing on the consequences of a person’s actions is merely one of several approaches to ethics—one that is beset by paradox and often impossible to implement. Imagined alternatives are either highly rational, as in the work of a modern philosopher like John Rawls,40 or decidedly otherwise, as we see in the disparate and often contradictory precepts that issue from the world’s major religions.

My reasons for dismissing revealed religion as a source of moral guidance have been spelled out elsewhere,41 so I will not ride this hobbyhorse here, apart from pointing out the obvious: (1) there are many revealed religions available to us, and they offer mutually incompatible doctrines; (2) the scriptures of many religions, including the most well-subscribed (i.e., Christianity and Islam), countenance patently unethical practices like slavery; (3) the faculty we use to validate religious precepts, judging the Golden Rule to be wise and the murder of apostates to be foolish, is something we bring to scripture; it does not, therefore, come from scripture; (4) the reasons for believing that any of the world’s religions were “revealed” to our ancestors (rather than merely invented by men and women who did not have the benefit of a twenty-first-century education) are either risible or nonexistent—and the idea that each of these mutually contradictory doctrines is inerrant remains a logical impossibility. Here we can take refuge in Bertrand Russell’s famous remark that even if we could be certain that one of the world’s religions was perfectly true, given the sheer number of conflicting faiths on offer, every believer should expect damnation purely as a matter of probability.

Among the rational challenges to consequentialism, the “contractualism” of John Rawls has been the most influential in recent decades. In his book A Theory of Justice Rawls offered an approach to building a fair society that he considered an alternative to the aim of maximizing human welfare.42 His primary method, for which this work is duly famous, was to ask how reasonable people would structure a society, guided by their self-interest, if they couldn’t know what sort of person they would be in it. Rawls called this novel starting point “the original position,” from which each person must judge the fairness of every law and social arrangement from behind a “veil of ignorance.” In other words, we can design any society we like as long as we do not presume to know, in advance, whether we will be black or white, male or female, young or old, healthy or sick, of high or low intelligence, beautiful or ugly, etc.

As a method for judging questions of fairness, this thought experiment is undeniably brilliant. But is it really an alternative to thinking about the actual consequences of our behavior? How would we feel if, after structuring our ideal society from behind a veil of ignorance, we were told by an omniscient being that we had made a few choices that, though eminently fair, would lead to the unnecessary misery of millions, while parameters that were ever-so-slightly less fair would entail no such suffering? Could we be indifferent to this information? The moment we conceive of justice as being fully separable from human well-being, we are faced with the prospect of there being morally “right” actions and social systems that are, on balance, detrimental to the welfare of everyone affected by them. To simply bite the bullet on this point, as Rawls seemed to do, saying “there is no reason to think that just institutions will maximize the good,”43 strikes me as an embrace of moral and philosophical defeat.

Some people worry that a commitment to maximizing a society’s welfare could lead us to sacrifice the rights and liberties of the few wherever these losses would be offset by the greater gains of the many. Why not have a society in which a few slaves are continually worked to death for the pleasure of the rest? The worry is that a focus on collective welfare does not seem to respect people as ends in themselves. And whose welfare should we care about? The pleasure that a racist takes in abusing some minority group, for instance, seems on all fours with the pleasure a saint takes in risking his life to help a stranger. If there are more racists than saints, it seems the racists will win, and we will be obliged to build a society that maximizes the pleasure of unjust men.

But such concerns clearly rest on an incomplete picture of human well-being. To the degree that treating people as ends in themselves is a good way to safeguard human well-being, it is precisely what we should do. Fairness is not merely an abstract principle—it is a felt experience. We all know this from the inside, of course, but neuroimaging has also shown that fairness drives reward-related activity in the brain, while accepting unfair proposals requires the regulation of negative emotion.44 Taking others’ interests into account, making impartial decisions (and knowing that others will make them), rendering help to the needy—these are experiences that contribute to our psychological and social well-being. It seems perfectly reasonable, within a consequentialist framework, for each of us to submit to a system of justice in which our immediate, selfish interests will often be superseded by considerations of fairness. It is only reasonable, however, on the assumption that everyone will tend to be better off under such a system. As, it seems, they will.45

While each individual’s search for happiness may not be compatible in every instance with our efforts to build a just society, we should not lose sight of the fact that societies do not suffer; people do. The only thing wrong with injustice is that it is, on some level, actually or potentially bad for people.46 Injustice makes its victims demonstrably less happy, and it could be easily argued that it tends to make its perpetrators less happy than they would be if they cared about the well-being of others. Injustice also destroys trust, making it difficult for strangers to cooperate. Of course, here we are talking about the nature of conscious experience, and so we are, of necessity, talking about processes at work in the brains of human beings. The neuroscience of morality and social emotions is only just beginning, but there seems no question that it will one day deliver morally relevant insights regarding the material causes of our happiness and suffering. While there may be some surprises in store for us down this path, there is every reason to expect that kindness, compassion, fairness, and other classically “good” traits will be vindicated neuroscientifically—which is to say that we will only discover further reasons to believe that they are good for us, in that they generally enhance our lives.

We have already begun to see that morality, like rationality, implies the existence of certain norms—that is, it does not merely describe how we tend to think and behave; it tells us how we should think and behave. One norm that morality and rationality share is the interchangeability of perspective.47 The solution to a problem should not depend on whether you are the husband or the wife, the employer or employee, the creditor or debtor, etc. This is why one cannot argue for the rightness of one’s views on the basis of mere preference. In the moral sphere, this requirement lies at the core of what we mean by “fairness.” It also reveals why it is generally not a good thing to have a different ethical code for friends and strangers.

We have all met people who behave quite differently in business than in their personal lives. While they would never lie to their friends, they might lie without a qualm to their clients or customers. Why is this a moral failing? At the very least, it is vulnerable to what could be called the principle of the unpleasant surprise. Consider what happens to such a person when he discovers that one of his customers is actually a friend: “Oh, why didn’t you say you were Jennifer’s sister! Uh … Okay, don’t buy that model; this one is a much better deal.” Such moments expose a rift in a person’s ethics that is always unflattering. People with two ethical codes are perpetually susceptible to embarrassments of this kind. They are also less trustworthy—and trust is a measure of how much a person can be relied upon to safeguard other people’s well-being. Even if you happen to be a close friend of such a person—that is, on the right side of his ethics—you can’t trust him to interact with others you may care about (“I didn’t know she was your daughter. Sorry about that”).

Or consider the position of a Nazi living under the Third Reich, having fully committed himself to exterminating the world’s Jews, only to learn, as many did, that he was Jewish himself. Unless some compelling argument for the moral necessity of his suicide were forthcoming, we can imagine that it would be difficult for our protagonist to square his Nazi ethics with his actual identity. Clearly, his sense of right and wrong was predicated on a false belief about his own genealogy. A genuine ethics should not be vulnerable to such unpleasant surprises. This seems another way of arriving at Rawls’s “original position.” That which is right cannot be dependent upon one’s being a member of a certain tribe—if for no other reason than one can be mistaken about the fact of one’s membership.

Kant’s “categorical imperative,” perhaps the most famous prescription in all of moral philosophy, captures some of these same concerns:

Hence there is only one categorical imperative and it is this: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”48

While Kant believed that this criterion of universal applicability was the product of pure reason, it appeals to us because it relies on basic intuitions about fairness and justification.49 One cannot claim to be “right” about anything—whether as a matter of reason or a matter of ethics—unless one’s views can be generalized to others.50

Is Being Good Just Too Difficult?

Most of us spend some time over the course of our lives deciding how (or whether) to respond to the fact that other people on earth needlessly starve to death. Most of us also spend some time deciding which delightful foods we want to consume at home and in our favorite restaurants. Which of these projects absorbs more of your time and material resources on a yearly basis? If you are like most people living in the developed world, such a comparison will not recommend you for sainthood. Can the disparity between our commitment to fulfilling our selfish desires and our commitment to alleviating the unnecessary misery and death of millions be morally justified? Of course not. These failures of ethical consistency are often considered a strike against consequentialism. They shouldn’t be. Who ever said that being truly good, or even ethically consistent, must be easy?

I have no doubt that I am less good than I could be. Which is to say, I am not living in a way that truly maximizes the well-being of others. I am nearly as sure, however, that I am also failing to live in a way that maximizes my own well-being. This is one of the paradoxes of human psychology: we often fail to do what we ostensibly want to do and what is most in our self-interest to do—or, at the very least, we fail to do what, at the end of the day (or year, or lifetime), we will most wish we had done.

Just think of the heroic struggles many people must endure simply to quit smoking or lose weight. The right course of action is generally obvious: if you are smoking two packs of cigarettes a day or are fifty pounds overweight, you are surely not maximizing your well-being. Perhaps this isn’t so clear to you now, but imagine: if you could successfully stop smoking or lose weight, what are the chances that you would regret this decision a year hence? Probably zero. And yet, if you are like most people, you will find it extraordinarily difficult to make the simple behavioral changes required to get what you want.51

Most of us are in this predicament in moral terms. I know that helping people who are starving is far more important than most of what I do. I also have no doubt that doing what is most important would give me more pleasure and emotional satisfaction than I get from most of what I do by way of seeking pleasure and emotional satisfaction. But this knowledge does not change me. I still want to do what I do for pleasure more than I want to help the starving. I strongly believe that I would be happier if I wanted to help the starving more—and I have no doubt that they would be happier if I spent more time and money helping them—but these beliefs are not sufficient to change me. I know that I would be happier and the world would be a (marginally) better place if I were different in these respects. I am, therefore, virtually certain that I am neither as moral, nor as happy, as I could be.52 I know all of these things, and I want to maximize my happiness, but I am generally not moved to do what I believe will make me happier than I now am.

At bottom, these are claims both about the architecture of my mind and about the social architecture of our world. It is quite clear to me that given the current state of my mind—that is, given how my actions and uses of attention affect my life—I would be happier if I were less selfish. This means I would be more wisely and effectively selfish if I were less selfish. This is not a paradox.

What if I could change the architecture of my mind? On some level, this has always been possible, as everything we devote attention to, every discipline we adopt, or piece of knowledge we acquire changes our minds. Each of us also now has access to a swelling armamentarium of drugs that regulate mood, attention, and wakefulness. And the possibility of far more sweeping (as well as more precise) changes to our mental capacities may be within reach. Would it be good to make changes to our minds that affect our sense of right and wrong? And would our ability to alter our moral sense undercut the case I am making for moral realism? What if, for instance, I could rewire my brain so that eating ice cream was not only extremely pleasurable, but also felt like the most important thing I could do?

Despite the ready availability of ice cream, it seems that my new disposition would present certain challenges to self-actualization. I would gain weight. I would ignore social obligations and intellectual pursuits. No doubt, I would soon scandalize others with my skewed priorities. But what if advances in neuroscience eventually allow us to change the way every brain responds to morally relevant experiences? What if we could program the entire species to hate fairness, to admire cheating, to love cruelty, to despise compassion, and so on? Would this be morally good? Again, the devil is in the details. Is this really a world of equivalent and genuine well-being, where the concept of “well-being” is susceptible to ongoing examination and refinement as it is in our world? If so, so be it. What could be more important than genuine well-being? But, given all that the concept of “well-being” entails in our world, it is very difficult to imagine that its properties could be entirely fungible as we move across the moral landscape.

A miniature version of this dilemma is surely on the horizon: increasingly, we will need to consider the ethics of using medications to mitigate mental suffering. For instance, would it be good for a person to take a drug that made her indifferent to the death of her child? Surely not while she still had responsibilities as a parent. But what if a mother lost her only child and was thereafter inconsolable? How much better than inconsolable should her doctor make her feel? How much better should she want to feel? Would any of us want to feel perfectly happy in this circumstance? Given a choice—and this choice, in some form, is surely coming—I think that most of us will want our mental states to be coupled, however loosely, to the reality of our lives. How else could our bonds with one another be maintained? How, for instance, can we love our children and yet be totally indifferent to their suffering and death? I suspect we cannot. But what will we do once our pharmacies begin stocking a genuine antidote to grief?

If we cannot always resolve such conundrums, how should we proceed? We cannot perfectly measure or reconcile the competing needs of billions of creatures. We often cannot effectively prioritize our own competing needs. What we can do is try, within practical limits, to follow a path that seems likely to maximize both our own well-being and the well-being of others. This is what it means to live wisely and ethically. As we will see, we have already begun to discover which regions of the brain allow us to do this. A fuller understanding of what moral life entails, however, would require a science of morality.

Bewildered by Diversity

The psychologist Jonathan Haidt has put forward a very influential thesis about moral judgment known as the “social-intuitionist model.” In a widely referenced article entitled “The Emotional Dog and Its Rational Tail,” Haidt summarizes our predicament this way:

[O]ur moral life is plagued by two illusions. The first illusion can be called the “wag-the-dog” illusion: We believe that our own moral judgment (the dog) is driven by our own moral reasoning (the tail). The second illusion can be called the “wag-the-other-dog’s-tail” illusion: In a moral argument, we expect the successful rebuttal of our opponents’ arguments to change our opponents’ minds. Such a belief is analogous to believing that forcing a dog’s tail to wag by moving it with your hand should make the dog happy.53

Haidt does not go so far as to say that reasoning never produces moral judgments; he simply argues that this happens far less often than people think. Haidt is pessimistic about our ever making realistic claims about right and wrong, or good and evil, because he has observed that human beings tend to make moral decisions on the basis of emotion, justify these decisions with post hoc reasoning, and stick to their guns even when their reasoning demonstrably fails. He notes that when asked to justify their responses to specific moral (and pseudo-moral) dilemmas, people are often “morally dumbfounded.” His experimental subjects would “stutter, laugh, and express surprise at their inability to find supporting reasons, yet they would not change their initial judgments …”

The same can be said, however, about our failures to reason effectively. Consider the Monty Hall problem (based on the television game show Let’s Make a Deal). Imagine that you are a contestant on a game show and are presented with three closed doors: behind one sits a new car; the other two conceal goats. Pick the correct door, and the car is yours.

The game proceeds this way: Assume that you have chosen Door #1. Your host, who knows where the car is and will always open a door concealing a goat, then opens Door #2, revealing a goat. He now gives you a chance to switch your bet from Door #1 to the remaining Door #3. Should you switch? The correct answer is “yes.” But most people find this answer very perplexing, as it violates the common intuition that, with two unopened doors remaining, the odds must be 1 in 2 that the car will be behind either one of them. If you stick with your initial choice, however, your odds of winning are actually 1 in 3. If you switch, your odds increase to 2 in 3.54
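The 1-in-3 versus 2-in-3 odds are easy to verify empirically. The following short Python simulation (an illustrative sketch, not part of the original text) makes the host’s behavior explicit and plays the game many thousands of times under each strategy:

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game; return the fraction of games won."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's initial choice
        # The host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")   # converges to about 0.333
print(f"switch: {play(switch=True):.3f}")    # converges to about 0.667
```

Switching wins precisely when the initial pick was wrong, which happens two times out of three; this is why the host’s informed choice of door, far from making the remaining doors equal, concentrates the odds on the door you did not pick.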

It would be fair to say that the Monty Hall problem leaves many of its victims “logically dumbfounded.” Even when people understand conceptually why they should switch doors, they can’t shake their initial intuition that each door represents a 1/2 chance of success. This reliable failure of human reasoning is just that—a failure of reasoning. It does not suggest that there is no correct answer to the Monty Hall problem.

And yet scientists like Joshua Greene and Jonathan Haidt seem to think that the very existence of moral controversy nullifies the possibility of moral truth. In their opinion, all we can do is study what human beings do in the name of “morality.” Thus, if religious conservatives find the prospect of gay marriage abhorrent, and secular liberals find it perfectly acceptable, we are confronted by a mere difference of moral preference—not a difference that relates to any deeper truths about human life.

In opposition to the liberal notion of morality as being a system of “prescriptive judgments of justice, rights, and welfare pertaining to how people ought to relate to each other,” Haidt asks us to ponder mysteries of the following sort:

[I]f morality is about how we treat each other, then why did so many ancient texts devote so much space to rules about menstruation, who can eat what, and who can have sex with whom?55

Interesting question. Are these the same ancient texts that view slavery as morally unproblematic? Perhaps slavery has no moral implications after all—otherwise, surely these ancient texts would have something of substance to say against it. Could abolition have been the ultimate instance of liberal bias? Or, following Haidt’s logic, why not ask, “if physics is just a system of laws that explains the structure of the universe in terms of mass and energy, why do so many ancient texts devote so much space to immaterial influences and miraculous acts of God?” Why indeed.

Haidt appears to consider it an intellectual virtue to accept, uncritically, the moral categories of his subjects. But where is it written that everything that people do or decide in the name of “morality” deserves to be considered part of its subject matter? A majority of Americans believe that the Bible provides an accurate account of the ancient world. Many millions of Americans also believe that a principal cause of cancer is “repressed anger.” Happily, we do not allow these opinions to anchor us when it comes time to have serious discussions about history and oncology. It seems abundantly clear that many people are simply wrong about morality—just as many people are wrong about physics, biology, history, and everything else worth understanding. What scientific purpose is served by averting our eyes from this fact? If morality is a system of thinking about (and maximizing) the well-being of conscious creatures like ourselves, many people’s moral concerns must be immoral.

Moral skeptics like Haidt generally emphasize the intractability of moral disagreements:

The bitterness, futility, and self-righteousness of most moral arguments can now be explicated. In a debate about abortion, politics, consensual incest, or what my friend did to your friend, both sides believe that their positions are based on reasoning about the facts and issues involved (the wag-the-dog illusion). Both sides present what they take to be excellent arguments in support of their positions. Both sides expect the other side to be responsive to such reasons (the wag-the-other-dog’s-tail illusion). When the other side fails to be affected by such good reasons, each side concludes that the other side must be closed minded or insincere. In this way the culture wars over issues such as homosexuality and abortion can generate morally motivated players on both sides who believe that their opponents are not morally motivated.56

But the dynamic Haidt describes will be familiar to anyone who has ever entered into a debate on any subject. Such failures of persuasion do not suggest that both sides of every controversy are equally credible. For instance, the above passage perfectly captures my occasional collisions with 9/11 conspiracy theorists. A nationwide poll conducted by the Scripps Survey Research Center at Ohio University found that more than a third of Americans suspect that the federal government “assisted in the 9/11 terrorist attacks or took no action to stop them so the United States could go to war in the Middle East” and 16 percent believe that this proposition is “very likely” to be true.57 Many of these people believe that the Twin Towers collapsed not because fully fueled passenger jets smashed into them but because agents of the Bush administration had secretly rigged these buildings to explode (6 percent of all respondents judged this “very likely,” 10 percent judged it “somewhat likely”). Whenever I encounter people harboring these convictions, the impasse that Haidt describes is well in place: both sides “present what they take to be excellent arguments in support of their positions. Both sides expect the other side to be responsive to such reasons (the wag-the-other-dog’s-tail illusion). When the other side fails to be affected by such good reasons, each side concludes that the other side must be closed minded or insincere.” It is undeniable, however, that if one side in this debate is right about what actually happened on September 11, 2001, the other side must be absolutely wrong.

Of course, it is now well known that our feeling of reasoning objectively is often illusory.58 This does not mean, however, that we cannot learn to reason more effectively, pay greater attention to evidence, and grow more mindful of the ever-present possibility of error. Haidt is right to notice that the brain’s emotional circuitry often governs our moral intuitions, and the way in which feeling drives judgment is surely worthy of study. But it does not follow that there are no right and wrong answers to questions of morality. Just as people are often less than rational when claiming to be rational, they can be less than moral when claiming to be moral.

In describing the different forms of morality available to us, Haidt offers a choice between “contractual” and “beehive” approaches: the first is said to be the province of liberals, who care mainly about harm and fairness; the second represents the conservative (generally religious) social order, which incorporates further concerns about group loyalty, respect for authority, and religious purity. The opposition between these two conceptions of the good life may be worth discussing, and Haidt’s data on the differences between liberals and conservatives is interesting, but is his interpretation correct? It seems possible, for instance, that his five foundations of morality are simply facets of a more general concern about harm.

What, after all, is the problem with desecrating a copy of the Qur’an? There would be no problem but for the fact that people believe that the Qur’an is a divinely authored text. Such people almost surely believe that some harm could come to them or to their tribe as a result of such sacrileges—if not in this world, then in the next. A more esoteric view might be that any person who desecrates scripture will have harmed himself directly: a lack of reverence might be its own punishment, dimming the eyes of faith. Whatever interpretation one favors, sacredness and respect for religious authority seem to reduce to a concern about harm just the same.

The same point can be made in the opposite direction: even a liberal like myself, enamored as I am of thinking in terms of harm and fairness, can readily see that my vision of the good life must be safeguarded from the aggressive tribalism of others. When I search my heart, I discover that I want to keep the barbarians beyond the city walls just as much as my conservative neighbors do, and I recognize that sacrifices of my own freedom may be warranted for this purpose. I expect that epiphanies of this sort could well multiply in the coming years. Just imagine, for instance, how liberals might be disposed to think about the threat of Islam after an incident of nuclear terrorism. Liberal hankering for happiness and freedom might one day produce some very strident calls for stricter laws and tribal loyalty. Will this mean that liberals have become religious conservatives pining for the beehive? Or is the liberal notion of avoiding harm flexible enough to encompass the need for order and differences between in-group and out-group?

There is also the question of whether conservatism contains an extra measure of cognitive bias—or outright hypocrisy—as the moral convictions of social conservatives are so regularly belied by their louche behavior. The most conservative regions of the United States tend to have the highest rates of divorce and teenage pregnancy, as well as the greatest appetite for pornography.59 Of course, it could be argued that social conservatism is the consequence of so much ambient sinning. But this seems an unlikely explanation—especially in those cases where a high level of conservative moralism and a predilection for sin can be found in a single person. If one wants examples of such hypocrisy, Evangelical ministers and conservative politicians seem to rarely disappoint.

When is a belief system not only false but so encouraging of falsity and needless suffering as to be worthy of our condemnation? According to a recent poll, 36 percent of British Muslims (ages sixteen to twenty-four) think apostates should be put to death for their unbelief.60 Are these people “morally motivated,” in Haidt’s sense, or just morally confused?

And what if certain cultures are found to harbor moral codes that look terrible no matter how we jigger Haidt’s five variables of harm, fairness, group loyalty, respect for authority, and spiritual purity? What if we find a group of people who aren’t especially sensitive to harm and fairness, or cognizant of the sacred, or morally astute in any other way? Would Haidt’s conception of morality then allow us to stop these benighted people from abusing their children? Or would that be unscientific?

The Moral Brain

Imagine that you are having dinner in a restaurant and spot your best friend’s wife seated some distance away. As you stand to say hello, you notice that the man seated across from her is not your best friend, but a handsome stranger. You hesitate. Is he a colleague of hers from work? Her brother from out of town? Something about the scene strikes you as illicit. While you cannot hear what they are saying, there is an unmistakable sexual chemistry between them. You now recall that your best friend is away at a conference. Is his wife having an affair? What should you do?

Several regions of the brain will contribute to this impression of moral salience and to the subsequent stirrings of moral emotion. There are many separate strands of cognition and feeling that intersect here: sensitivity to context, reasoning about other people’s beliefs, the interpretation of facial expressions and body language, suspicion, indignation, impulse control, etc. At what point do these disparate processes constitute an instance of moral cognition? It is difficult to say. At a minimum, we know that we have entered moral territory once thoughts about morally relevant events (e.g., the possibility of a friend’s betrayal) have been consciously entertained. For the purposes of this discussion, we need draw the line no more precisely than this.

The brain regions involved in moral cognition span many areas of the prefrontal cortex and the temporal lobes. The neuroscientists Jorge Moll, Ricardo de Oliveira-Souza, and colleagues have written the most comprehensive reviews of this research.61 They divide human actions into four categories:

1. Self-serving actions that do not affect others

2. Self-serving actions that negatively affect others

3. Actions that are beneficial to others, with a high probability of reciprocation (“reciprocal altruism”)

4. Actions that are beneficial to others, with no direct personal benefits (material or reputation gains) and no expected reciprocation (“genuine altruism”). This includes altruistic helping as well as costly punishment of norm violators (“altruistic punishment”)62

As Moll and colleagues point out, we share behaviors 1 through 3 with other social mammals, while 4 seems to be the special province of human beings. (We should probably add that this altruism must be intentional/conscious, so as to exclude the truly heroic self-sacrifice seen among eusocial insects like bees, ants, and termites.) While Moll et al. admit to ignoring the reward component of genuine altruism (often called the “warm glow” associated with cooperation), we know from neuroimaging studies that cooperation is associated with heightened activity in the brain’s reward regions.63 Here, once again, the traditional opposition between selfish and selfless motivation seems to break down. If helping others can be rewarding, rather than merely painful, it should be thought of as serving the self in another mode.

It is easy to see the role that negative and positive motivations play in the moral domain: we feel contempt/anger for the moral transgressions of others, guilt/shame over our own moral failings, and the warm glow of reward when we find ourselves playing nicely with other people. Without the engagement of such motivational mechanisms, moral prescriptions (purely rational notions of “ought”) would be very unlikely to translate into actual behaviors. The fact that motivation is a separate variable explains the conundrum briefly touched on above: we often know what would make us happy, or what would make the world a better place, and yet we find that we are not motivated to seek these ends; conversely, we are often motivated to behave in ways that we know we will later regret. Clearly, moral motivation can be uncoupled from the fruits of moral reasoning. A science of morality would, of necessity, require a deeper understanding of human motivation.

The regions of the brain that govern judgments of right and wrong include a broad network of cortical and subcortical structures. The contribution of these areas to moral thought and behavior differs with respect to emotional tone: lateral regions of the frontal lobes seem to govern the indignation associated with punishing transgressors, while medial frontal regions produce the feelings of reward associated with trust and reciprocation.64 As we will see, there is also a distinction between personal and impersonal moral decisions. The resulting picture is complicated: factors like moral sensitivity, moral motivation, moral judgment, and moral reasoning rely on separable, mutually overlapping processes.

The medial prefrontal cortex (MPFC) is central to most discussions of morality and the brain. As discussed further in chapters 3 and 4, this region is involved in emotion, reward, and judgments of self-relevance. It also seems to register the difference between belief and disbelief. Injuries here have been associated with a variety of deficits including poor impulse control, emotional blunting, and the attenuation of social emotions like empathy, shame, embarrassment, and guilt. When frontal damage is limited to the MPFC, reasoning ability as well as the conceptual knowledge of moral norms are generally spared, but the ability to behave appropriately toward others tends to be disrupted.

Interestingly, patients suffering from MPFC damage are more inclined to consequentialist reasoning than normal subjects are when evaluating certain moral dilemmas—when, for instance, the means of sacrificing one person’s life to save many others is personal rather than impersonal.65 Consider the following two scenarios:

1. You are at the wheel of a runaway trolley quickly approaching a fork in the tracks. On the tracks extending to the left is a group of five railway workmen. On the tracks extending to the right is a single railway workman.

If you do nothing the trolley will proceed to the left, causing the deaths of the five workmen. The only way to avoid the deaths of these workmen is to hit a switch on your dashboard that will cause the trolley to proceed to the right, causing the death of the single workman.

Is it appropriate for you to hit the switch in order to avoid the deaths of the five workmen?

2. A runaway trolley is heading down the tracks toward five workmen who will be killed if the trolley proceeds on its present course. You are on a footbridge over the tracks, in between the approaching trolley and the five workmen. Next to you on this footbridge is a stranger who happens to be very large.

The only way to save the lives of the five workmen is to push this stranger off the bridge and onto the tracks below where his large body will stop the trolley. The stranger will die if you do this, but the five workmen will be saved.

Is it appropriate for you to push the stranger onto the tracks in order to save the five workmen?66

Most people strongly support sacrificing one person to save five in the first scenario, while considering such a sacrifice morally abhorrent in the second. This paradox has been well known in philosophical circles for years.67 Joshua Greene and colleagues were the first to look at the brain’s response to these dilemmas using fMRI.68 They found that the personal forms of these dilemmas, like the one described in scenario two, more strongly activate brain regions associated with emotion. Another group has since found that the disparity between people’s responses to the two scenarios can be modulated, however slightly, by emotional context. Subjects who spent a few minutes watching a pleasant video prior to confronting the footbridge dilemma were more apt to push the man to his death.69

The fact that patients suffering from MPFC injuries find it easier to sacrifice the one for the many is open to differing interpretations. Greene views this as evidence that emotional and cognitive processes often work in opposition.70 There are reasons to worry, however, that mere opposition between consequentialist thinking and negative emotion does not adequately account for the data.71

I suspect that a more detailed understanding of the brain processes involved in making moral judgments of this type could affect our sense of right and wrong. And yet superficial differences between moral dilemmas may continue to play a role in our reasoning. If losses will always cause more suffering than forsaken gains, or if pushing a person to his death is guaranteed to traumatize us in a way that throwing a switch will not, these distinctions become variables that constrain how we can move across the moral landscape toward higher states of well-being. It seems to me, however, that a science of morality can absorb these details: scenarios that appear, on paper, to lead to the same outcome (e.g., one life lost, five lives saved), may actually have different consequences in the real world.


In order to understand the relationship between the mind and the brain, it is often useful to study subjects who, whether through illness or injury, lack specific mental capacities. As luck would have it, Mother Nature has provided us with a nearly perfect dissection of conventional morality. The resulting persons are generally referred to as “psychopaths” or “sociopaths,”72 and there seem to be many more of them living among us than most of us realize. Studying their brains has yielded considerable insight into the neural basis of conventional morality.

As a personality disorder, psychopathy has been so sensationalized in the media that it is difficult to research it without feeling that one is pandering, either to oneself or to one’s audience. However, there is no question that psychopaths exist, and many of them speak openly about the pleasure they take in terrorizing and torturing innocent people. The extreme examples, which include serial killers and sexual sadists, seem to defy any sympathetic understanding on our parts. Indeed, if you immerse yourself in this literature, each case begins to seem more horrible and incomprehensible than the last. While I am reluctant to traffic in the details of these crimes, I fear that speaking in abstractions may obscure the underlying reality. Despite a steady diet of news, which provides a daily reminder of human evil, it can be difficult to remember that certain people truly lack the capacity to care about their fellow human beings. Consider the statement of a man who was convicted of repeatedly raping and torturing his nine-year-old stepson:

After about two years of molesting my son, and all the pornography that I had been buying, renting, swapping, I had got my hands on some “bondage discipline” pornography with children involved. Some of the reading that I had done and the pictures that I had seen showed total submission. Forcing the children to do what I wanted.

And I eventually started using some of this bondage discipline with my own son, and it had escalated to the point where I was putting a large Zip-loc bag over his head and taping it around his neck with black duct tape or black electrical tape and raping and molesting him … to the point where he would turn blue, pass out. At that point I would rip the bag off his head, not for fear of hurting him, but because of the excitement.

I was extremely aroused by inflicting pain. And when I see him pass out and change colors, that was very arousing and heightening to me, and I would rip the bag off his head and then I’d jump on his chest and masturbate in his face and make him suck my penis while he … started to come back awake. While he was coughing and choking, I would rape him in the mouth.

I used this same sadistic style of plastic bag and the tape two or three times a week, and it went on for I’d say a little over a year.73

I suspect that this brief glimpse of one man’s private passions will suffice to make the point. Be assured that this is not the worst abuse a man or woman has ever inflicted upon a child just for the fun of it. And one remarkable feature of the literature on psychopaths is the extent to which even the worst people are able to find collaborators. For instance, the role played by violent pornography in these cases is difficult to overlook. Child pornography alone—which, as many have noted, is the visual record of an actual crime—is now a global, multibillion-dollar industry, involving kidnapping, “sex tourism,” organized crime, and great technical sophistication in the use of the internet. Apparently, there are enough people who are eager to see children—and, increasingly, toddlers and infants—raped and tortured so as to create an entire subculture.74

While psychopaths are especially well represented in our prisons,75 many live below the threshold of overt criminality. For every psychopath who murders a child, there are tens of thousands who are guilty of far more conventional mischief. Robert Hare, the creator of the standard diagnostic instrument to assess psychopathy, the Psychopathy Checklist–Revised (PCL–R), estimates that while there are probably no more than a hundred serial killers in the United States at any moment, there are probably 3 million psychopaths (about 1 percent of the population).76 If Hare is correct, each of us crosses paths with such people all the time.

For instance, I recently met a man who took considerable pride in having arranged his life so as to cheat on his wife with impunity. In fact, he was also cheating on the many women with whom he was cheating—for each believed him to be faithful. All this gallantry involved aliases, fake businesses, and, needless to say, a blizzard of lies. While I can’t say for certain this man was a psychopath, it was quite apparent that he lacked what most of us would consider a normal conscience. A life of continuous deception and selfish machination seemed to cause him no discomfort whatsoever.77

Psychopaths are distinguished by their extraordinary egocentricity and their total lack of concern for the suffering of others. A list of their most frequent characteristics reads like a personal ad from hell: they are said to be callous, manipulative, deceptive, impulsive, secretive, grandiose, thrill-seeking, sexually promiscuous, unfaithful, irresponsible, prone to both reactive and calculated aggression,78 and lacking in emotional depth. They also show reduced emotional sensitivity to punishment (whether actual or anticipated). Most important, psychopaths do not experience a normal range of anxiety and fear, and this may account for their lack of conscience.

The first neuroimaging experiment done on psychopaths found that, when compared to nonpsychopathic criminals and noncriminal controls, they exhibit significantly less activity in regions of the brain that generally respond to emotional stimuli.79 While anxiety and fear are emotions that most of us would prefer to live without, they serve as anchors to social and moral norms.80 Without an ability to feel anxious about one’s own transgressions, real or imagined, norms become nothing more than “rules that others make up.”81 The developmental literature also supports this interpretation: fearful children have been shown to display greater moral understanding.82 It remains an open question, therefore, just how free of anxiety we can reasonably want to be. Again, this is something that only an empirical science of morality could decide. And as more effective remedies for anxiety appear on the horizon, this is an issue that we will have to confront in some form.

Further neuroimaging work suggests that psychopathy is also a product of pathological arousal and reward.83 People scoring high on the psychopathic personality inventory show abnormally high activity in the reward regions of their brain (in particular, the nucleus accumbens) in response to amphetamine and while anticipating monetary gains. Hypersensitivity of this circuitry is especially linked to the impulsive-antisocial dimension of psychopathy, which leads to risky and predatory behavior. Researchers speculate that an excessive response to anticipated reward can prevent a person from learning from the negative emotions of others.

Unlike others who suffer from mental illness or mood disorders, psychopaths generally do not feel that anything is wrong with them. They also meet the legal definition of sanity, in that they possess an intellectual understanding of the difference between right and wrong. However, psychopaths generally fail to distinguish between conventional and moral transgressions. When asked “Would it be okay to eat at your desk if the teacher gave you permission?” vs. “Would it be okay to hit another student in the face if the teacher gave you permission?” normal children age thirty-nine months and above tend to see these questions as fundamentally distinct and consider the latter transgression intrinsically wrong. In this, they appear to be guided by an awareness of potential human suffering. Children at risk for psychopathy tend to view these questions as morally indistinguishable.

When asked to identify the mental states of other people on the basis of photographs of their eyes alone, psychopaths show no general impairment.84 Their “theory of mind” processing (as the ability to understand the mental states of others is generally known) seems to be basically intact, with subtle deficits resulting from their simply not caring about how other people feel.85 The one crucial exception, however, is that psychopaths are often unable to recognize expressions of fear and sadness in others.86 And this may be the difference that makes all the difference.

Neuroscientist James Blair and colleagues suggest that psychopathy results from a failure of emotional learning due to genetic impairments of the amygdala and orbitofrontal cortex, regions vital to the processing of emotion.87 The negative emotions of others, rather than parental punishment, may be what goad us to normal socialization. Psychopathy, therefore, could result from a failure to learn from the fear and sadness of other people.88

A child at risk for psychopathy, being emotionally blind to the suffering he causes, may increasingly resort to antisocial behavior in pursuit of his goals throughout adolescence and adulthood.89 As Blair points out, parenting strategies that increase empathy tend to successfully mitigate antisocial behavior in healthy children; such strategies inevitably fail with children who present with the callousness/unemotional (CU) trait that is characteristic of psychopathy. While it may be difficult to accept, the research strongly suggests that some people cannot learn to care about others.90 Perhaps we will one day develop interventions to change this. For the purposes of this discussion, however, it seems sufficient to point out that we are beginning to understand the kinds of brain pathologies that lead to the most extreme forms of human evil. And just as some people have obvious moral deficits, others must possess moral talent, moral expertise, and even moral genius. As with any human ability, these gradations must be expressed at the level of the brain.

Game theory suggests that evolution probably selected for two stable orientations toward human cooperation: tit for tat (often called “strong reciprocity”) and permanent defection.91 Tit for tat is generally what we see throughout society: you show me some kindness, and I am eager to return the favor; you do something rude or injurious, and the temptation to respond in kind becomes difficult to resist. But consider how permanent defection would appear at the level of human relationships: the defector would probably engage in continuous cheating and manipulation, sham moralistic aggression (to provoke guilt and altruism in others), and strategic mimicry of positive social emotions like sympathy (as well as of negative emotions like guilt). This begins to sound like garden-variety psychopathy. The existence of psychopaths, while otherwise quite mysterious, would seem to be predicted by game theory. And yet, the psychopath who lives his entire life in a tiny village must be at a terrible disadvantage. The stability of permanent defection as a strategy would require that a defector be able to find people to fleece who are not yet aware of his terrible reputation. Needless to say, the growth of cities has made this way of life far more practicable than it has ever been.
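The contrast between these two strategies can be made concrete with a toy iterated prisoner's dilemma. This is a minimal sketch, not anything from the text: the strategy names and the payoff values (3 for mutual cooperation, 1 for mutual defection, 5 for a lone defector, 0 for a lone cooperator) are standard textbook choices.

```python
# Toy iterated prisoner's dilemma: "tit for tat" vs. permanent defection.
# Per-round payoffs (standard textbook values): both cooperate -> 3 each,
# both defect -> 1 each, lone defector -> 5, lone cooperator -> 0.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate on the first move, then copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    # Permanent defection, regardless of what the other player does.
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each strategy sees the opponent's history
        move_b = strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(tit_for_tat, always_defect))    # (99, 104)
print(play(always_defect, always_defect))  # (100, 100)
```

Over a hundred rounds, a pair of reciprocators earns 300 points each, while a defector exploits a reciprocator only on the first encounter (104 vs. 99) and does far worse against his own kind (100 each). This is the sense in which permanent defection is only viable for someone who can keep finding fresh, unsuspecting partners, as the text notes of the psychopath in a small village.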


When confronted with psychopathy at its most extreme, it is very difficult not to think in terms of good and evil. But what if we adopt a more naturalistic view? Consider the prospect of being locked in a cage with a wild grizzly: why would this be a problem? Well, clearly, wild grizzlies suffer some rather glaring cognitive and emotional deficits. Your new roommate will not be easy to reason with or placate; he is unlikely to recognize that you have interests analogous to his own, or that the two of you might have shared interests; and if he could understand such things, he would probably lack the emotional resources to care. From his point of view, you will be a distraction at best, a cowering annoyance, and something tender to probe with his teeth. We might say that a wild bear is, like a psychopath, morally insane. However, we are very unlikely to refer to his condition as a form of “evil.”

Human evil is a natural phenomenon, and some level of predatory violence is innate in us. Humans and chimpanzees tend to display the same level of hostility toward outsiders, but chimps are far more aggressive than humans are within a group (by a factor of about 200).92 Therefore, we seem to have prosocial abilities that chimps lack. And, despite appearances, human beings have grown steadily less violent. As Jared Diamond explains:

It’s true, of course, that twentieth-century state societies, having developed potent technologies of mass killing, have broken all historical records for violent deaths. But this is because they enjoy the advantage of having by far the largest populations of potential victims in human history; the actual percentage of the population that died violently was on the average higher in traditional pre-state societies than it was even in Poland during the Second World War or Cambodia under Pol Pot.93

We must continually remind ourselves that there is a difference between what is natural and what is actually good for us. Cancer is perfectly natural, and yet its eradication is a primary goal of modern medicine. Evolution may have selected for territorial violence, rape, and other patently unethical behaviors as strategies to propagate one’s genes—but our collective well-being clearly depends on our opposing such natural tendencies.

Territorial violence might have even been necessary for the development of altruism. The economist Samuel Bowles has argued that lethal, “out-group” hostility and “in-group” altruism are two sides of the same coin.94 His computer models suggest that altruism cannot emerge without some level of conflict between groups. If true, this is one of the many places where we must transcend evolutionary pressures through reason—because, barring an attack from outer space, we now lack a proper “out-group” to inspire us to further altruism.

In fact, Bowles’s work has interesting implications for my account of the moral landscape. Consider the following from Patricia Churchland:

Assuming our woodland ape ancestors as well as our own human ancestors engaged in out-group raids, as chimps and several South American tribes still do, can we be confident in moral condemnation of their behavior? I see no basis in reality for such a judgment. If, as Samuel Bowles argues, the altruism typical of modern humans plausibly co-evolved with lethal out-group competition, such a judgment will be problematic.95

Of course, the purpose of my argument is to suggest a “basis in reality” for universal judgments of value. However, as Churchland points out, if there was simply no other way for our ancestors to progress toward altruism without developing a penchant for out-group hostility, then so be it. Assuming that the development of altruism represents an extraordinarily important advance in moral terms (I believe it does), this would be analogous to our ancestors descending into an unpleasant valley on the moral landscape only to make progress toward a higher peak. But it is important to reiterate that such evolutionary constraints no longer hold. In fact, given recent developments in biology, we are now poised to consciously engineer our further evolution. Should we do this, and if so, in which ways? Only a scientific understanding of the possibilities of human well-being could guide us.

The Illusion of Free Will

Brains allow organisms to alter their behavior and internal states in response to changes in the environment. The evolution of these structures, tending toward increased size and complexity, has led to vast differences in how the earth’s species live.

The human brain responds to information coming from several domains: from the external world, from internal states of the body, and, increasingly, from a sphere of meaning—which includes spoken and written language, social cues, cultural norms, rituals of interaction, assumptions about the rationality of others, judgments of taste and style, etc. Generally, these domains seem unified in our experience: You spot your best friend standing on the street corner looking strangely disheveled. You recognize that she is crying and frantically dialing her cell phone. Did someone assault her? You rush to her side, feeling an acute desire to help. Your “self” seems to stand at the intersection of these lines of input and output. From this point of view, you tend to feel that you are the source of your own thoughts and actions. You decide what to do and not to do. You seem to be an agent acting of your own free will. As we will see, however, this point of view cannot be reconciled with what we know about the human brain.

We are conscious of only a tiny fraction of the information that our brains process in each moment. While we continually notice changes in our experience—in thought, mood, perception, behavior, etc.—we are utterly unaware of the neural events that produce these changes. In fact, by merely glancing at your face or listening to your tone of voice, others are often more aware of your internal states and motivations than you are. And yet most of us still feel that we are the authors of our own thoughts and actions.

All of our behavior can be traced to biological events about which we have no conscious knowledge: this has always suggested that free will is an illusion. For instance, the physiologist Benjamin Libet famously demonstrated that activity in the brain’s motor regions can be detected some 350 milliseconds before a person feels that he has decided to move.96 Another lab recently used fMRI data to show that some “conscious” decisions can be predicted up to 10 seconds before they enter awareness (long before the preparatory motor activity detected by Libet).97 Clearly, findings of this kind are difficult to reconcile with the sense that one is the conscious source of one’s actions. Notice that the distinction between “higher” and “lower” systems in the brain gets us nowhere: for I no more initiate events in executive regions of my prefrontal cortex than I cause the creaturely outbursts of my limbic system. The truth seems inescapable: I, as the subject of my experience, cannot know what I will next think or do until a thought or intention arises; and thoughts and intentions are caused by physical events and mental stirrings of which I am not aware.

Many scientists and philosophers realized long ago that free will could not be squared with our growing understanding of the physical world.98 Nevertheless, many still deny this fact.99 The biologist Martin Heisenberg recently observed that some fundamental processes in the brain, like the opening and closing of ion channels and the release of synaptic vesicles, occur at random, and cannot, therefore, be determined by environmental stimuli. Thus, much of our behavior can be considered “self-generated,” and therein, he imagines, lies a basis for free will.100 But “self-generated” in this sense means only that these events originate in the brain. The same can be said for the brain states of a chicken.

If I were to learn that my decision to have a third cup of coffee this morning was due to a random release of neurotransmitters, how could the indeterminacy of the initiating event count as the free exercise of my will? Such indeterminacy, if it were generally effective throughout the brain, would obliterate any semblance of human agency. Imagine what your life would be like if all your actions, intentions, beliefs, and desires were “self-generated” in this way: you would scarcely seem to have a mind at all. You would live as one blown about by an internal wind. Actions, intentions, beliefs, and desires are the sorts of things that can exist only in a system that is significantly constrained by patterns of behavior and the laws of stimulus-response. In fact, the possibility of reasoning with other human beings—or, indeed, of finding their behaviors and utterances comprehensible at all—depends on the assumption that their thoughts and actions will obediently ride the rails of a shared reality. In the limit, Heisenberg’s “self-generated” mental events would amount to utter madness.101

The problem is that no account of causality leaves room for free will. Thoughts, moods, and desires of every sort simply spring into view—and move us, or fail to move us, for reasons that are, from a subjective point of view, perfectly inscrutable. Why did I use the term “inscrutable” in the previous sentence? I must confess that I do not know. Was I free to do otherwise? What could such a claim possibly mean? Why, after all, didn’t the word “opaque” come to mind? Well, it just didn’t—and now that it vies for a place on the page, I find that I am still partial to my original choice. Am I free with respect to this preference? Am I free to feel that “opaque” is the better word, when I just do not feel that it is the better word? Am I free to change my mind? Of course not. It can only change me.

It means nothing to say that a person would have done otherwise had he chosen to do otherwise, because a person’s “choices” merely appear in his mental stream as though sprung from the void. In this sense, each of us is like a phenomenological glockenspiel played by an unseen hand. From the perspective of your conscious mind, you are no more responsible for the next thing you think (and therefore do) than you are for the fact that you were born into this world.102

Our belief in free will arises from our moment-to-moment ignorance of specific prior causes. The phrase “free will” describes what it feels like to be identified with the content of each thought as it arises in consciousness. Trains of thought like, “What should I get my daughter for her birthday? I know, I’ll take her to a pet store and have her pick out some tropical fish,” convey the apparent reality of choices, freely made. But from a deeper perspective (speaking both subjectively and objectively), thoughts simply arise (what else could they do?) unauthored and yet author to our actions.

As Daniel Dennett has pointed out, many people confuse determinism with fatalism.103 This gives rise to questions like, “If everything is determined, why should I do anything? Why not just sit back and see what happens?” But the fact that our choices depend on prior causes does not mean that they do not matter. If I had not decided to write this book, it wouldn’t have written itself. My choice to write it was unquestionably the primary cause of its coming into being. Decisions, intentions, efforts, goals, willpower, etc., are causal states of the brain, leading to specific behaviors, and behaviors lead to outcomes in the world. Human choice, therefore, is as important as fanciers of free will believe. And to “just sit back and see what happens” is itself a choice that will produce its own consequences. It is also extremely difficult to do: just try staying in bed all day waiting for something to happen; you will find yourself assailed by the impulse to get up and do something, which will require increasingly heroic efforts to resist.

Of course, there is a distinction between voluntary and involuntary actions, but it does nothing to support the common idea of free will (nor does it depend upon it). The former are associated with felt intentions (desires, goals, expectations, etc.) while the latter are not. All of the conventional distinctions we like to make between degrees of intent—from the bizarre neurological complaint of alien hand syndrome104 to the premeditated actions of a sniper—can be maintained: for they simply describe what else was arising in the mind at the time an action occurred. A voluntary action is accompanied by the felt intention to carry it out, while an involuntary action isn’t. Where our intentions themselves come from, however, and what determines their character in every instant, remains perfectly mysterious in subjective terms. Our sense of free will arises from a failure to appreciate this fact: we do not know what we will intend to do until the intention itself arises. To see this is to realize that you are not the author of your thoughts and actions in the way that people generally suppose. This insight does not make social and political freedom any less important, however. The freedom to do what one intends, and not to do otherwise, is no less valuable than it ever was.

Moral Responsibility

The question of free will is no mere curio of philosophy seminars. The belief in free will underwrites both the religious notion of “sin” and our enduring commitment to retributive justice.105 The Supreme Court has called free will a “universal and persistent” foundation for our system of law, distinct from “a deterministic view of human conduct that is inconsistent with the underlying precepts of our criminal justice system” (United States v. Grayson, 1978).106 Any scientific developments that threatened our notion of free will would seem to put the ethics of punishing people for their bad behavior in question.107

But, of course, human goodness and human evil are the product of natural events. The great worry is that any honest discussion of the underlying causes of human behavior seems to erode the notion of moral responsibility. If we view people as neuronal weather patterns, how can we coherently speak about morality? And if we remain committed to seeing people as people, some who can be reasoned with and some who cannot, it seems that we must find some notion of personal responsibility that fits the facts.

What does it really mean to take responsibility for an action? For instance, yesterday I went to the market; as it turns out, I was fully clothed, did not steal anything, and did not buy anchovies. To say that I was responsible for my behavior is simply to say that what I did was sufficiently in keeping with my thoughts, intentions, beliefs, and desires to be considered an extension of them. If, on the other hand, I had found myself standing in the market naked, intent upon stealing as many tins of anchovies as I could carry, this behavior would be totally out of character; I would feel that I was not in my right mind, or that I was otherwise not responsible for my actions. Judgments of responsibility, therefore, depend upon the overall complexion of one’s mind, not on the metaphysics of mental cause and effect.

Consider the following examples of human violence:

1. A four-year-old boy was playing with his father’s gun and killed a young woman. The gun had been kept loaded and unsecured in a dresser drawer.

2. A twelve-year-old boy, who had been the victim of continuous physical and emotional abuse, took his father’s gun and intentionally shot and killed a young woman because she was teasing him.

3. A twenty-five-year-old man, who had been the victim of continuous abuse as a child, intentionally shot and killed his girlfriend because she left him for another man.

4. A twenty-five-year-old man, who had been raised by wonderful parents and never abused, intentionally shot and killed a young woman he had never met “just for the fun of it.”

5. A twenty-five-year-old man, who had been raised by wonderful parents and never abused, intentionally shot and killed a young woman he had never met “just for the fun of it.” An MRI of the man’s brain revealed a tumor the size of a golf ball in his medial prefrontal cortex (a region responsible for the control of emotion and behavioral impulses).

In each case a young woman has died, and in each case her death was the result of events arising in the brain of another human being. The degree of moral outrage we feel clearly depends on the background conditions described in each case. We suspect that a four-year-old child cannot truly intend to kill someone and that the intentions of a twelve-year-old do not run as deep as those of an adult. In both cases 1 and 2, we know that the brain of the killer has not fully matured and that all the responsibilities of personhood have not yet been conferred. The history of abuse and precipitating circumstance in example 3 seem to mitigate the man’s guilt: this was a crime of passion committed by a person who had himself suffered at the hands of others. In 4, we have no abuse, and the motive brands the perpetrator a psychopath. In 5, we appear to have the same psychopathic behavior and motive, but a brain tumor somehow changes the moral calculus entirely: given its location in the MPFC, it seems to divest the killer of all responsibility. How can we make sense of these gradations of moral blame when brains and their background influences are, in every case, and to exactly the same degree, the real cause of a woman’s death?

It seems to me that we need not have any illusions about a causal agent living within the human mind to condemn such a mind as unethical, negligent, or even evil, and therefore liable to occasion further harm. What we condemn in another person is the intention to do harm—and thus any condition or circumstance (e.g., accident, mental illness, youth) that makes it unlikely that a person could harbor such an intention would mitigate guilt, without any recourse to notions of free will. Likewise, degrees of guilt could be judged, as they are now, by reference to the facts of the case: the personality of the accused, his prior offenses, his patterns of association with others, his use of intoxicants, his confessed intentions with regard to the victim, etc. If a person’s actions seem to have been entirely out of character, this will influence our sense of the risk he now poses to others. If the accused appears unrepentant and anxious to kill again, we need entertain no notions of free will to consider him a danger to society.

Of course, we hold one another accountable for more than those actions that we consciously plan, because most voluntary behavior comes about without explicit planning.108 But why is the conscious decision to do another person harm particularly blameworthy? Because consciousness is, among other things, the context in which our intentions become completely available to us. What we do subsequent to conscious planning tends to most fully reflect the global properties of our minds—our beliefs, desires, goals, prejudices, etc. If, after weeks of deliberation, library research, and debate with your friends, you still decide to kill the king—well, then killing the king really reflects the sort of person you are. Consequently, it makes sense for the rest of society to worry about you.

While viewing human beings as forces of nature does not prevent us from thinking in terms of moral responsibility, it does call the logic of retribution into question. Clearly, we need to build prisons for people who are intent upon harming others. But if we could incarcerate earthquakes and hurricanes for their crimes, we would build prisons for them as well.109 The men and women on death row have some combination of bad genes, bad parents, bad ideas, and bad luck—which of these quantities, exactly, were they responsible for? No human being stands as author to his own genes or his upbringing, and yet we have every reason to believe that these factors determine his character throughout life. Our system of justice should reflect our understanding that each of us could have been dealt a very different hand in life. In fact, it seems immoral not to recognize just how much luck is involved in morality itself.

Consider what would happen if we discovered a cure for human evil. Imagine, for the sake of argument, that every relevant change in the human brain can be made cheaply, painlessly, and safely. The cure for psychopathy can be put directly into the food supply like vitamin D. Evil is now nothing more than a nutritional deficiency.

If we imagine that a cure for evil exists, we can see that our retributive impulse is profoundly flawed. Consider, for instance, the prospect of withholding the cure for evil from a murderer as part of his punishment. Would this make any moral sense at all? What could it possibly mean to say that a person deserves to have this treatment withheld? What if the treatment had been available prior to the person’s crime? Would he still be responsible for his actions? It seems far more likely that those who had been aware of his case would be indicted for negligence. Would it make any sense at all to deny surgery to the man in example 5 as a punishment if we knew the brain tumor was the proximate cause of his violence? Of course not. The urge for retribution, therefore, seems to depend upon our not seeing the underlying causes of human behavior.

Despite our attachment to notions of free will, most of us know that disorders of the brain can trump the best intentions of the mind. This shift in understanding represents progress toward a deeper, more consistent, and more compassionate view of our common humanity—and we should note that this is progress away from religious metaphysics. It seems to me that few concepts have offered greater scope for human cruelty than the idea of an immortal soul that stands independent of all material influences, ranging from genes to economic systems.

And yet one of the fears surrounding our progress in neuroscience is that this knowledge will dehumanize us. Could thinking about the mind as the product of the physical brain diminish our compassion for one another? While it is reasonable to ask this question, it seems to me that, on balance, soul/body dualism has been the enemy of compassion. For instance, the moral stigma that still surrounds disorders of mood and cognition seems largely the result of viewing the mind as distinct from the brain. When the pancreas fails to produce insulin, there is no shame in taking synthetic insulin to compensate for its lost function. Many people do not feel the same way about regulating mood with antidepressants (for reasons that appear quite distinct from any concern about potential side effects). If this bias has diminished in recent years, it has been because of an increased appreciation of the brain as a physical organ.

However, the issue of retribution is a genuinely tricky one. In a fascinating article in The New Yorker, Jared Diamond recently wrote of the high price we often pay for leaving vengeance to the state.110 He compares the experience of his friend Daniel, a New Guinea highlander, who avenged the death of a paternal uncle and felt exquisite relief, to the tragic experience of his late father-in-law, who had the opportunity to kill the man who murdered his family during the Holocaust but opted instead to turn him over to the police. After spending only a year in jail, the killer was released, and Diamond’s father-in-law spent the last sixty years of his life “tormented by regret and guilt.” While there is much to be said against the vendetta culture of the New Guinea Highlands, it is clear that the practice of taking vengeance answers to a common psychological need.

We are deeply disposed to perceive people as the authors of their actions, to hold them responsible for the wrongs they do us, and to feel that these debts must be repaid. Often, the only compensation that seems appropriate requires that the perpetrator of a crime suffer or forfeit his life. It remains to be seen how the best system of justice would steward these impulses. Clearly, a full account of the causes of human behavior should undermine our natural response to injustice, at least to some degree. It seems doubtful, for instance, that Diamond’s father-in-law would have suffered the same pangs of unrequited vengeance if his family had been trampled by an elephant or laid low by cholera. Similarly, we can expect that his regret would have been significantly eased if he had learned that his family’s killer had lived a flawlessly moral life until a virus began ravaging his medial prefrontal cortex.

It may be that a sham form of retribution could still be moral, if it led people to behave far better than they otherwise would. Whether it is useful to emphasize the punishment of certain criminals—rather than their containment or rehabilitation—is a question for social and psychological science. But it seems quite clear that a retributive impulse, based upon the idea that each person is the free author of his thoughts and actions, rests on a cognitive and emotional illusion—and perpetuates a moral one.

It is generally argued that our sense of free will presents a compelling mystery: on the one hand, it is impossible to make sense of it in causal terms; on the other, there is a powerful subjective sense that we are the authors of our own actions.111 However, I think that this mystery is itself a symptom of our confusion. It is not that free will is simply an illusion: our experience is not merely delivering a distorted view of reality; rather, we are mistaken about the nature of our experience. We do not feel as free as we think we feel. Our sense of our own freedom results from our not paying attention to what it is actually like to be what we are. The moment we do pay attention, we begin to see that free will is nowhere to be found, and our subjectivity is perfectly compatible with this truth. Thoughts and intentions simply arise in the mind. What else could they do? The truth about us is stranger than many suppose: The illusion of free will is itself an illusion.