The Moral Landscape: How Science Can Determine Human Values - Sam Harris (2010)

NOTES

Introduction: The Moral Landscape

1. Bilefsky, 2008; Mortimer & Toader, 2005.

2. For the purposes of this discussion, I do not intend to make a hard distinction between “science” and other intellectual contexts in which we discuss “facts”—e.g., history. For instance, it is a fact that John F. Kennedy was assassinated. Facts of this kind fall within the context of “science,” broadly construed as our best effort to form a rational account of empirical reality. Granted, one doesn’t generally think of events like assassinations as “scientific” facts, but the murder of President Kennedy is as fully corroborated a fact as can be found anywhere, and it would betray a profoundly unscientific frame of mind to deny that it occurred. I think “science,” therefore, should be considered a specialized branch of a larger effort to form true beliefs about events in our world.

3. This is not to deny that cultural conceptions of health can play an important role in determining a person’s experience of illness (more so with some illnesses than others). There is evidence that American notions of mental health have begun to negatively affect the way people in other cultures suffer (Watters, 2010). It has even been argued that, with a condition like schizophrenia, notions of spirit possession are palliative when compared to beliefs about organic brain disease. My point, however, is that whatever contributions cultural differences make to our experience of the world can themselves be understood, in principle, at the level of the brain.

4. Pollard Sacks, 2009.

5. In the interests of both simplicity and relevance, I tend to keep my references to religion focused on Christianity, Judaism, and Islam. Of course, most of what I say about these faiths applies to Hinduism, Buddhism, Sikhism, and to other religions as well.

6. There are many reasons to be pessimistic about the future of Europe: Ye’or, 2005; Bawer, 2006; Caldwell, 2009.

7. Gould, 1997.

8. Nature 432, 657 (2004).

9. I am not the first person to argue that morality can and should be integrated with our scientific understanding of the natural world. Of late, the philosophers William Casebeer and Owen Flanagan have each built similar cases (Casebeer, 2003; Flanagan, 2007). Both Casebeer and Flanagan have resurrected Aristotle’s concept of eudaimonia, which is generally translated as “flourishing,” “fulfillment,” or “well-being.” While I rely heavily on these English equivalents, I have elected not to pay any attention to Aristotle. While much of what Aristotle wrote in his Nicomachean Ethics is of great interest and convergent with the case I wish to make, some of it isn’t. And I’d rather not be beholden to the quirks of the great man’s philosophy. Both Casebeer and Flanagan also seem to place greater emphasis on morality as a skill and a form of practical knowledge, arguing that living a good life is more a matter of “knowing how” than of “knowing that.” While I think this distinction is often useful, I’m not eager to give up the fight for moral truth just yet. For instance, I believe that the compulsory veiling of women in Afghanistan tends to needlessly immiserate them and will breed a new generation of misogynistic, puritanical men. This is an instance of “knowing that,” and it is a truth claim about which I am either right or wrong. I am confident that both Casebeer and Flanagan would agree. The difference in our approaches, therefore, seems to me to be more a matter of emphasis. In any case, both Casebeer and Flanagan go into greater philosophical detail than I have on many points, and both their books are well worth reading. Flanagan also offered very helpful notes on an early draft of this book.

10. E. O. Wilson, 1998.

11. Keverne & Curley, 2004; Pedersen, Ascher, Monroe, & Prange, 1982; Smeltzer, Curtis, Aragona, & Wang, 2006; Young & Wang, 2004.

12. Fries, Ziegler, Kurian, Jacoris, & Pollak, 2005.

13. Hume’s argument was actually directed against religious apologists who sought to deduce morality from the existence of God. Ironically, his reasoning has since become one of the primary impediments to linking morality to the rest of human knowledge. However, Hume’s is/ought distinction has always had its detractors (e.g., Searle, 1964); here is Dennett:

If “ought” cannot be derived from “is,” just what can it be derived from?… ethics must be somehow based on an appreciation of human nature—on a sense of what a human being is or might be, and on what a human being might want to have or want to be. If that is naturalism, then naturalism is no fallacy (Dennett, 1995, p. 468).

14. Moore [1903], 2004.

15. Popper, 2002, pp. 60–62.

16. The list of scientists who have followed Hume and Moore with perfect obedience is very long and defies citation. For a recent example within neuroscience, see Edelman (2006, pp. 84–91).

17. Fodor, 2007.

18. I recently had the pleasure of hearing the philosopher Patricia Churchland draw this same analogy. (Patricia, I did not steal it!)

19. De Grey & Rae, 2007.

20. The problem with using a strictly hedonic measure of the “good” grows more obvious once we consider some of the promises and perils of a maturing neuroscience. If, for instance, we can one day manipulate the brain so as to render specific behaviors and states of mind more pleasurable than they now are, it seems relevant to wonder whether such refinements would be “good.” It might be good to make compassion more rewarding than sexual lust, but would it be good to make hatred the most pleasurable emotion of all? One can’t appeal to pleasure as the measure of goodness in such cases, because pleasure is what we would be choosing to reassign.

21. Pinker, 2002, pp. 53–54.

22. It should be clear that the conventional distinction between “belief” and “knowledge” does not apply here. As will be made clear in chapter 3, our propositional knowledge about the world is entirely a matter of “belief” in the above sense. Whether one chooses to say that one “believes” X or that one “knows” X is merely a difference of emphasis, expressing one’s degree of confidence. Understanding belief at the level of the brain has been the focus of my recent scientific research, using functional magnetic resonance imaging (fMRI) (S. Harris et al., 2009; S. Harris, Sheth, & Cohen, 2008).

23. Edgerton, 1992.

24. Cited in Edgerton, 1992, p. 26.

25. Though perhaps even this attributes too much common sense to the field of anthropology, as Edgerton (1992, p. 105) tells us: “A prevailing assumption among anthropologists who study the medical practices of small, traditional societies is that these populations enjoy good health and nutrition … Indeed, we are often told that seemingly irrational food taboos, once fully understood, will prove to be adaptive.”

26. Lehrer, 2010.

27. Filkins, 2010.

28. For an especially damning look at the Bush administration’s Council on Bioethics, see Steven Pinker’s response to its 555-page report, Human Dignity and Bioethics (Pinker, 2008a).

29. S. Harris, 2004, 2006a, 2006b, 2006c, 2007a, 2007b.

30. Judson, 2008; Mooney, 2005.

Chapter 1: Moral Truth

1. In February of 2010, I spoke at the TED conference about how we might one day understand morality in universal, scientific terms (www.youtube.com/watch?v=Hj9oB4zpHww). Normally, when one speaks at a conference the resulting feedback amounts to a few conversations in the lobby during a coffee break. As luck would have it, however, my TED talk was broadcast on the internet as I was in the final stages of writing this book, and this produced a blizzard of useful commentary.

Many of my critics fault me for not engaging more directly with the academic literature on moral philosophy. There are two reasons why I haven’t done this: First, while I have read a fair amount of this literature, I did not arrive at my position on the relationship between human values and the rest of human knowledge by reading the work of moral philosophers; I came to it by considering the logical implications of our making continued progress in the sciences of mind. Second, I am convinced that every appearance of terms like “metaethics,” “deontology,” “noncognitivism,” “antirealism,” “emotivism,” etc., directly increases the amount of boredom in the universe. My goal, both in speaking at conferences like TED and in writing this book, is to start a conversation that a wider audience can engage with and find helpful. Few things would make this goal harder to achieve than for me to speak and write like an academic philosopher. Of course, some discussion of philosophy will be unavoidable, but my approach is to generally make an end run around many of the views and conceptual distinctions that make academic discussions of human values so inaccessible. While this is guaranteed to annoy a few people, the professional philosophers I’ve consulted seem to understand and support what I am doing.

2. Given my experience as a critic of religion, I must say that it has been quite disconcerting to see the caricature of the overeducated, atheistic moral nihilist regularly appearing in my inbox and on the blogs. I sincerely hope that people like Rick Warren have not been paying attention.

3. Searle, 1995, p. 8.

4. There has been much confusion on this point, and most of it is still influential in philosophical circles. Consider the following from J. L. Mackie:

If there were objective values, then they would be entities or qualities or relations of a very strange sort, utterly different from anything else in the universe. Correspondingly, if we were aware of them, it would have to be by some special faculty of moral perception or intuition, utterly different from our ordinary ways of knowing everything else (Mackie 1977, p. 38).

Clearly, Mackie has conflated the two senses of the term “objective.” We need not discuss “entities or qualities or relations of a very strange sort, utterly different from anything else in the universe” in order to speak about moral truth. We need only admit that the experiences of conscious creatures are lawfully dependent upon states of the universe—and, therefore, that actions can cause more harm than good, more good than harm, or be morally neutral. Good and evil need only consist in this, and it makes no sense whatsoever to claim that an action that harms everyone affected by it (even its perpetrator) might still be “good.” We do not require a metaphysical repository of right and wrong, or actions that are mysteriously right or wrong in themselves, for there to be right and wrong answers to moral questions; we simply need a landscape of possible experiences that can be traversed in some orderly way in light of how the universe actually is. The main criterion, therefore, is that misery and well-being not be completely random. It seems to me that we already know that they are not—and, therefore, that it is possible for a person to be right or wrong about how to move from one state to the other.

5. Is it always wrong to slice open a child’s belly with a knife? No. One might be performing an emergency appendectomy.

6. One could respond by saying that scientists agree about science more than ordinary people agree about morality (I’m not sure this is true). But this is an empty claim, for at least two reasons: (1) it is circular, because anyone who disagrees too sharply with the majority opinion in any domain of science won’t count as a “scientist” (so the definition of “scientist” begs the question); (2) scientists are an elite group, by definition. “Moral experts” would also constitute an elite group, and the existence of such experts is completely in line with my argument.

7. Obvious exceptions include “socially constructed” phenomena that require some degree of consensus to be made real. The paper in my pocket really is “money”—but it is only money because a sufficient number of people are willing to treat it as such (see Searle, 1995).

8. Practically speaking, I think we have some very useful intuitions on this front. We care more about creatures that can experience a greater range of suffering and happiness—and we are right to, because suffering and happiness (defined in the widest possible sense) are all that can be cared about. Are all animal lives equivalent? No. Do monkeys suffer more than mice from medical experiments? If so, all other things being equal, it is worse to run experiments on monkeys than on mice.

Are all human lives equivalent? No. I have no problem admitting that certain people’s lives are more valuable than mine (I need only imagine a person whose death would create much greater suffering and prevent much greater happiness). However, it also seems quite rational for us to collectively act as though all human lives were equally valuable. Hence, most of our laws and social institutions generally ignore differences between people. I suspect that this is a very good thing. Of course, I could be wrong about this—and that is precisely the point. If we didn’t behave this way, our world would be different, and these differences would either affect the totality of human well-being, or they wouldn’t. Once again, there are answers to such questions, whether or not we can ever answer them in practice.

9. At bottom, this is purely a semantic point: I am claiming that whatever answer a person gives to the question “Why is religion important?” can be framed in terms of a concern about someone’s well-being (whether misplaced or not).

10. I do not think that the moral philosophy of Immanuel Kant represents an exception either. Kant’s categorical imperative only qualifies as a rational standard of morality given the assumption that it will be generally beneficial (as J. S. Mill pointed out at the beginning of Utilitarianism). One could argue, therefore, that what is serviceable in Kant’s moral philosophy amounts to a covert form of consequentialism. I offer a few more remarks about Kant’s categorical imperative below.

11. For instance, many people assume that an emphasis on human “well-being” would lead us to do terrible things like reinstate slavery, harvest the organs of the poor, periodically nuke the developing world, or nurture our children on a continuous drip of heroin. Such expectations are the result of not thinking about these issues seriously. There are rather clear reasons not to do these things—all of which relate to the immensity of suffering that such actions would cause and the possibilities of deeper happiness that they would foreclose. Does anyone really believe that the highest possible state of human flourishing is compatible with slavery, organ theft, and genocide?

12. Are there trade-offs and exceptions? Of course. There may be circumstances in which the very survival of a community requires that certain of these principles be violated. But this doesn’t mean that they aren’t generally conducive to human well-being.

13. Stewart, 2008.

14. I confess that, as a critic of religion, I have paid too little attention to the sexual abuse scandal in the Catholic Church. Frankly, it felt somehow unsportsmanlike to shoot so large and languorous a fish in so tiny a barrel. This scandal was one of the most spectacular “own goals” in the history of religion, and there seemed to be no need to deride faith at its most vulnerable and self-abased. Even in retrospect, it is easy to understand the impulse to avert one’s eyes: Just imagine a pious mother and father sending their beloved child to the Church of a Thousand Hands for spiritual instruction, only to have him raped and terrified into silence by threats of hell. And then imagine this occurring to tens of thousands of children in our own time—and to children beyond reckoning for over a thousand years. The spectacle of faith so utterly misplaced, and so fully betrayed, is simply too depressing to think about.

But there was always more to this phenomenon that should have compelled my attention. Consider the ludicrous ideology that made it possible: the Catholic Church has spent two millennia demonizing human sexuality to a degree unmatched by any other institution, declaring the most basic, healthy, mature, and consensual behaviors taboo. Indeed, this organization still opposes the use of contraception: preferring, instead, that the poorest people on earth be blessed with the largest families and the shortest lives. As a consequence of this hallowed and incorrigible stupidity, the Church has condemned generations of decent people to shame and hypocrisy—or to Neolithic fecundity, poverty, and death by AIDS. Add to this inhumanity the artifice of cloistered celibacy, and you now have an institution—one of the wealthiest on earth—that preferentially attracts pederasts, pedophiles, and sexual sadists into its ranks, promotes them to positions of authority, and grants them privileged access to children. Finally, consider that vast numbers of children will be born out of wedlock, and their unwed mothers vilified, wherever Church teaching holds sway—leading boys and girls by the thousands to be abandoned to Church-run orphanages only to be raped and terrorized by the clergy. Here, in this ghoulish machinery set to whirling through the ages by the opposing winds of shame and sadism, we mortals can finally glimpse how strangely perfect are the ways of the Lord.

In 2009, the Irish Commission to Inquire into Child Abuse (CICA) investigated such of these events as occurred on Irish soil. Their report runs to 2,600 pages (www.childabusecommission.com/rpt/). Having read only an oppressive fraction of this document, I can say that when thinking about the ecclesiastical abuse of children, it is best not to imagine shades of ancient Athens and the blandishments of a “love that dare not speak its name.” Yes, there have surely been polite pederasts in the priesthood, expressing anguished affection for boys who would turn eighteen the next morning. But behind these indiscretions there is a continuum of abuse that terminates in absolute evil. The scandal in the Catholic Church—one might now safely say the scandal that is the Catholic Church—includes the systematic rape and torture of orphaned and disabled children. Its victims attest to being whipped with belts and sodomized until bloody—sometimes by multiple attackers—and then whipped again and threatened with death and hellfire if they breathed a word about their abuse. And yes, many of the children who were desperate or courageous enough to report these crimes were accused of lying and returned to their tormentors to be raped and tortured again.

The evidence suggests that the misery of these children was facilitated and concealed by the hierarchy of the Catholic Church at every level, up to and including the prefrontal cortex of the current pope. In his former capacity as Cardinal Ratzinger, Pope Benedict personally oversaw the Vatican’s response to reports of sexual abuse in the Church. What did this wise and compassionate man do upon learning that his employees were raping children by the thousands? Did he immediately alert the police and ensure that the victims would be protected from further torments? One still dares to imagine such an effulgence of basic human sanity might have been possible, even within the Church. On the contrary, repeated and increasingly desperate complaints of abuse were set aside, witnesses were pressured into silence, bishops were praised for their defiance of secular authority, and offending priests were relocated only to destroy fresh lives in unsuspecting parishes. It is no exaggeration to say that for decades (if not centuries) the Vatican has met the formal definition of a criminal organization devoted—not to gambling, prostitution, drugs, or any other venial sin—but to the sexual enslavement of children. Consider the following passages from the CICA report:

7.129 In relation to one School, four witnesses gave detailed accounts of sexual abuse, including rape in all instances, by two or more Brothers and on one occasion along with an older resident. A witness from the second School, from which there were several reports, described being raped by three Brothers: “I was brought to the infirmary … they held me over the bed, they were animals. … They penetrated me, I was bleeding.” Another witness reported he was abused twice weekly on particular days by two Brothers in the toilets off the dormitory:

One Brother kept watch while the other abused me … [sexually]… then they changed over. Every time it ended with a severe beating. When I told the priest in Confession, he called me a liar. I never spoke about it again.

I would have to go into his … [Br X’s] … room every time he wanted. You’d get a hiding if you didn’t, and he’d make me do it … [masturbate] … to him. One night I didn’t … [masturbate him] … and there was another Brother there who held me down and they hit me with a hurley and they burst my fingers … [displayed scar].…

7.232 Witnesses reported being particularly fearful at night as they listened to residents screaming in cloakrooms, dormitories or in a staff member’s bedroom while they were being abused. Witnesses were conscious that co-residents whom they described as orphans had a particularly difficult time:

The orphan children, they had it bad. I knew … [who they were]… by the size of them, I’d ask them and they’d say they come from … named institution. … They were there from an early age. You’d hear the screams from the room where Br … X … would be abusing them.

There was one night, I wasn’t long there and I seen one of the Brothers on the bed with one of the young boys … and I heard the young lad screaming crying and Br … X … said to me “if you don’t mind your own business you’ll get the same.”… I heard kids screaming and you know they are getting abused and that’s a nightmare in anybody’s mind. You are going to try and break out. … So there was no way I was going to let that happen to me … I remember one boy and he was bleeding from the back passage and I made up my mind, there was no way it … [anal rape]… was going to happen to me. … That used to play on my mind.

This is the kind of abuse that the Church has practiced and concealed since time out of memory. Even the CICA report declined to name the offending priests.

I have been awakened from my unconscionable slumber on this issue by recent press reports (Goodstein & Callender, 2010; Goodstein, 2010a, 2010b; Donadio, 2010a, 2010b; Wakin & McKinley Jr., 2010), and especially by the eloquence of my colleagues Christopher Hitchens (2010a, 2010b, 2010c, 2010d) and Richard Dawkins (2010a, 2010b).

15. The Church even excommunicated the girl’s mother (http://news.bbc.co.uk/2/hi/americas/7930380.stm).

16. The philosopher Hilary Putnam (2007) has argued that facts and values are “entangled.” Scientific judgments presuppose “epistemic values”—coherence, simplicity, beauty, parsimony, etc. Putnam has pointed out, as I do here, that all the arguments against the existence of moral truth could be applied to scientific truth without any change.

17. Many people find the idea of “moral experts” abhorrent. Indeed, this ramification of my argument has been called “positively Orwellian” and a “recipe for fascism.” Again, these concerns seem to arise from an uncanny reluctance to think about what the concept of “well-being” actually entails or how science might shed light on its causes and conditions. The analogy with health seems important to keep in view: Is there anything “Orwellian” about the scientific consensus on the link between smoking and lung cancer? Has the medical community’s insistence that people should not smoke led to “fascism”? Many people’s reflexive response to the notion of moral expertise is to say, “I don’t want anyone telling me how to live my life.” To which I can only respond, “If there were a way for you and those you care about to be much happier than you now are, would you want to know about it?”

18. This is the subject of that now infamous quotation from Albert Einstein, endlessly recycled by religious apologists, claiming that “science without religion is lame, religion without science is blind.” Far from indicating his belief in God, or his respect for unjustified belief, Einstein was speaking about the primitive urge to understand the universe, along with the “faith” that such understanding is possible:

Though religion may be that which determines the goal, it has, nevertheless, learned from science, in the broadest sense, what means will contribute to the attainment of the goals it has set up. But science can only be created by those who are thoroughly imbued with the aspiration toward truth and understanding. This source of feeling, however, springs from the sphere of religion. To this there also belongs the faith in the possibility that the regulations valid for the world of existence are rational, that is, comprehensible to reason. I cannot conceive of a genuine scientist without that profound faith. This situation may be expressed by an image: science without religion is lame, religion without science is blind (Einstein, 1954, p. 49).

19. These impasses are seldom as insurmountable as skeptics imagine. For instance, Creationist “scientists” can be led to see that the very standards of reasoning they use to vindicate scripture in light of empirical data also reveal hundreds of inconsistencies within scripture—thereby undermining their entire project. The same is true for moral impasses: those who claim to get their morality from God, without reference to any terrestrial concerns, are often susceptible to such concerns in the end. In an extreme case, the New York Times correspondent Thomas Friedman once reported meeting a Sunni militant who had begun fighting alongside the American military against al-Qaeda in Iraq, having been persuaded that the infidel troops were the lesser of two evils. What convinced him? He witnessed a member of al-Qaeda decapitate an eight-year-old girl (Friedman, 2007). It would seem, therefore, that the boundary between the crazy values of Islam and the utterly crazy can be discerned when drawn in the spilled blood of little girls. This is a basis for hope, of sorts.

In fact, I think that morality will be on firmer ground than any other branch of science in the end, since scientific knowledge is only valuable because it contributes to our well-being. Of course, we must include among these contributions the claims of people who say that they value knowledge “for its own sake”—for they are merely describing the mental pleasure that comes with understanding the world, solving problems, etc. It is clear that well-being must take precedence over knowledge, because we can easily imagine situations in which it would be better not to know the truth, or when false knowledge would be desirable. No doubt, there are circumstances in which religious delusion functions in this way: where, for instance, soldiers are vastly outnumbered on the battlefield but, being ignorant of the odds against them and convinced that God is on their side, they manage to draw on emotional resources that would be unavailable to people with complete information and fully justified beliefs. However, the fact that a combination of ignorance and false knowledge can occasionally be helpful is no argument for the general utility of religious faith (much less for its truth). Indeed, the great weakness of religion, apart from the obvious implausibility of its doctrines, is that the cost of holding irrational and divisive beliefs on a global scale is extraordinarily high.

20. The physicist Sean Carroll finds Hume’s analysis of facts and values so compelling that he elevates it to the status of mathematical truth:

Attempts to derive ought from is are like attempts to reach an odd number by adding together even numbers. If someone claims that they’ve done it, you don’t have to check their math; you know that they’ve made a mistake (Carroll, 2010a).
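Carroll’s arithmetic analogy can be made explicit in a single line (the notation here is mine, added only to spell out the parity argument he is leaning on): the even numbers are closed under addition, so no sum of them can ever be odd.

```latex
2a + 2b = 2(a + b) \quad\Rightarrow\quad \text{any finite sum of even numbers is itself even, never odd.}
```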

21. This spurious notion of “ought” can be introduced into any enterprise and seem to plant a fatal seed of doubt. Asking why we “ought” to value well-being makes even less sense than asking why we “ought” to be rational or scientific. And while it is possible to say that one can’t move from “is” to “ought,” we should be honest about how we get to “is” in the first place. Scientific “is” statements rest on implicit “oughts” all the way down. When I say, “Water is two parts hydrogen and one part oxygen,” I have uttered a quintessential statement of scientific fact. But what if someone doubts this statement? I can appeal to data from chemistry, describing the outcome of simple experiments. But in so doing, I implicitly appeal to the values of empiricism and logic. What if my interlocutor doesn’t share these values? What can I say then? As it turns out, this is the wrong question. The right question is, why should we care what such a person thinks about chemistry?

So it is with the linkage between morality and well-being: To say that morality is arbitrary (or culturally constructed, or merely personal) because we must first assume that the well-being of conscious creatures is good, is like saying that science is arbitrary (or culturally constructed, or merely personal) because we must first assume that a rational understanding of the universe is good. Yes, both endeavors rest on assumptions (and, as I have said, I think the former will prove to be more firmly grounded), but this is not a problem. No framework of knowledge can withstand utter skepticism, for none is perfectly self-justifying. Without being able to stand entirely outside of a framework, one is always open to the charge that the framework rests on nothing, that its axioms are wrong, or that there are foundational questions it cannot answer. Occasionally some of our basic assumptions do turn out to be wrong or limited in scope—e.g., the parallel postulate of Euclidean geometry does not apply to geometry as a whole—but these errors can be detected only by the light of other assumptions that stand firm.

Science and rationality generally are based on intuitions and concepts that cannot be reduced or justified. Just try defining “causation” in noncircular terms. Or try justifying transitivity in logic: if A = B and B = C, then A = C. A skeptic could say, “This is nothing more than an assumption that we’ve built into the definition of ‘equality.’ Others will be free to define ‘equality’ differently.” Yes, they will. And we will be free to call them “imbeciles.” Seen in this light, moral relativism—the view that the difference between right and wrong has only local validity within a specific culture—should be no more tempting than physical, biological, mathematical, or logical relativism. There are better and worse ways to define our terms; there are more and less coherent ways to think about reality; and there are—is there any doubt about this?—many ways to seek fulfillment in this life and to not find it.

22. We can, therefore, let this metaphysical notion of “ought” fall away, and we will be left with a scientific picture of cause and effect. To the degree that it is in our power to produce the worst possible misery for everyone in this universe, we can say that if we don’t want everyone to experience the worst possible misery, we shouldn’t do X. Can we readily conceive of someone who might hold altogether different values and want all conscious beings, himself included, reduced to the state of worst possible misery? I don’t think so. And I don’t think we can intelligibly ask questions like, “What if the worst possible misery for everyone is actually good?” Such questions seem analytically confused. We can also pose questions like “What if the most perfect circle is really a square?” or “What if all true statements are actually false?” But if someone persists in speaking this way, I see no obligation to take his views seriously.

23. And even if minds were independent of the physical universe, we could still speak about facts relative to their well-being. But we would be speaking about some other basis for these facts (souls, disembodied consciousness, ectoplasm, etc.).

24. On a related point, the philosopher Russell Blackford wrote in response to my TED talk, “I’ve never yet seen an argument that shows that psychopaths are necessarily mistaken about some fact about the world. Moreover, I don’t see how the argument could run.” While I discuss psychopathy in greater detail in the next chapter, here is such an argument in brief: We already know that psychopaths have brain damage that prevents them from having certain deeply satisfying experiences (like empathy) that seem good for people both personally and collectively (in that they tend to increase well-being on both counts). Psychopaths, therefore, don’t know what they are missing (but we do). The position of a psychopath also cannot be generalized; it is not, therefore, an alternative view of how human beings should live (this is one point Kant got right: even a psychopath couldn’t want to live in a world filled with psychopaths). We should also realize that the psychopath we are envisioning is a straw man: watch interviews with real psychopaths, and you will find that they do not tend to claim to be in possession of an alternative morality or to be living deeply fulfilling lives. These people are generally ruled by compulsions that they don’t understand and cannot resist. It is absolutely clear that, whatever they might believe about what they are doing, psychopaths are seeking some form of well-being (excitement, ecstasy, feelings of power, etc.), but because of their neurological and social deficits, they are doing a very bad job of it. We can say that a psychopath like Ted Bundy takes satisfaction in the wrong things, because living a life purposed toward raping and killing women does not allow for deeper and more generalizable forms of human flourishing. Compare Bundy’s deficits to those of a delusional physicist who finds meaningful patterns and mathematical significance in the wrong places. The mathematician John Nash, while suffering the symptoms of his schizophrenia, seems a good example: his “Eureka!” detectors were poorly calibrated; he saw meaningful patterns where his peers would not—and these patterns were a very poor guide to the proper goals of science (i.e., understanding the physical world). Is there any doubt that Ted Bundy’s “Yes! I love this!” detectors were poorly coupled to the possibilities of finding deep fulfillment in this life, or that his obsession with raping and killing young women was a poor guide to the proper goals of morality (i.e., living a fulfilling life with others)?

While people like Bundy may want some very weird things out of life, no one wants utter, interminable misery. People with apparently different moral codes are still seeking forms of well-being that we recognize—like freedom from pain, doubt, fear, etc.—and their moral codes, however vigorously they might want to defend them, are undermining their well-being in obvious ways. And if someone claims to want to be truly miserable, we are free to treat them like someone who claims to believe that 2 + 2 = 5 or that all events are self-caused. On the subject of morality, as on every other subject, some people are not worth listening to.

25. From the White House press release: www.bioethics.gov/about/creation.html.

26. Oxytocin is a neuroactive hormone that appears to govern social recognition in animals and the experience of trust (and its reciprocation) in humans (Zak, Kurzban, & Matzner, 2005; Zak, Stanton, & Ahmadi, 2007).

27. Appiah, 2008, p. 41.

28. The Stanford Encyclopedia of Philosophy has this to say on the subject of moral relativism:

In 1947, on the occasion of the United Nations debate about universal human rights, the American Anthropological Association issued a statement declaring that moral values are relative to cultures and that there is no way of showing that the values of one culture are better than those of another. Anthropologists have never been unanimous in asserting this, and in recent years human rights advocacy on the part of some anthropologists has mitigated the relativist orientation of the discipline. Nonetheless, prominent contemporary anthropologists such as Clifford Geertz and Richard A. Shweder continue to defend relativist positions. http://plato.stanford.edu/entries/moral-relativism/.

1947? Please note that this was the best the social scientists in the United States could do with the crematoria of Auschwitz still smoking. My spoken and written collisions with Richard Shweder, Scott Atran, Mel Konner, and other anthropologists have convinced me that awareness of moral diversity does not entail, and is a poor surrogate for, clear thinking about human well-being.

29. Pinker, 2002, p. 273.

30. Harding, 2001.

31. For a more complete demolition of feminist and multicultural critiques of Western science, see P. R. Gross, 1991; P. R. Gross & Levitt, 1994.

32. Weinberg, 2001, p. 105.

33. Dennett, 1995.

34. Ibid., p. 487.

35. See, for instance, M. D. Hauser, 2006. Experiments show that even eight-month-old infants want to see aggressors punished (Bloom, 2010).

36. www.gallup.com/poll/118378/Majority-Americans-Continue-Oppose-Gay-Marriage.aspx.

37. There is now a separate field called “neuroethics,” formed by a confluence of neuroscience and philosophy, which loosely focuses on matters of this sort. Neuroethics is more than bioethics with respect to the brain (that is, it is more than an ethical framework for the conduct of neuroscience): it encompasses our efforts to understand ethics itself as a biological phenomenon. There is a quickly growing literature on neuroethics (recent, book-length introductions can be found in Gazzaniga, 2005, and Levy, 2007), and there are other neuroethical issues that are relevant to this discussion: concerns about mental privacy, lie detection, and the other implications of an advancing science of neuroimaging; personal responsibility in light of deterministic and random processes in the brain (neither of which lend any credence to common notions of “free will”); the ethics of emotional and cognitive enhancement; the implications of understanding “spiritual” experience in physical terms; etc.

Chapter 2: Good and Evil

1. Consider, for instance, how much time and money we spend to secure our homes, places of business, and cars against unwanted entry (and to have doors professionally unlocked when keys are lost). Consider the cost of internet and credit card security, and the time dissipated in the use and retrieval of passwords. When phone service is interrupted for five minutes in a modern society, the cost is measured in billions of dollars. I think it safe to say that the costs of preventing theft are far higher. Add to the expense of locking doors the pains we take to prepare formal contracts—locks of another sort—and the costs soar beyond all reckoning. Imagine a world that had no need for such prophylactics against theft (admittedly, it is difficult). It would be a world of far greater disposable wealth (measured in both time and money).

2. There are other ways of thinking about human cooperation, including politics and law, but I take the normative claims of ethics to be foundational.

3. Hamilton, 1964a, 1964b.

4. McElreath & Boyd, 2007, p. 82.

5. Trivers, 1971.

6. G. F. Miller, 2007.

7. For a recent review that also looks at the phenomenon of indirect reciprocity (i.e., A gives to B, and then B gives to C, or C gives to A, or both), see Nowak, 2005. For doubts about the sufficiency of kin selection and reciprocal altruism to account for cooperation—especially among eusocial insects—see D. S. Wilson & Wilson, 2007; E. O. Wilson, 2005.

8. Tomasello, 2007.

9. Smith, [1759] 1853, p. 3.

10. Ibid. pp. 192–193.

11. Benedict, 1934, p. 172.

12. Consequentialism has undergone many refinements since the original utilitarianism of Jeremy Bentham and John Stuart Mill. My discussion will ignore most of these developments, as they are generally of interest only to academic philosophers. The Stanford Encyclopedia of Philosophy provides a good summary article (Sinnott-Armstrong, 2006).

13. J. D. Greene, 2007; J. D. Greene, Nystrom, Engell, Darley, & Cohen, 2004; J. D. Greene, Sommerville, Nystrom, Darley, & Cohen, 2001.

14. J. D. Greene, 2002, pp. 59–60.

15. Ibid., pp. 204–205.

16. Ibid., p. 264.

17. Let us briefly cover a few more philosophical bases: What would have to be true for a practice like the forced veiling of women to be objectively wrong? Would this practice have to cause unnecessary suffering in all possible worlds? No. It only need cause unnecessary suffering in this world. Must it be analytically true that compulsory veiling is immoral—that is, must the wrongness of the act be built into the meaning of the word “veil”? No. Must it be true a priori—that is, must this practice be wrong independent of human experience? No. The wrongness of the act very much depends on human experience. It is wrong to force women and girls to wear burqas because it is unpleasant and impractical to live fully veiled, because this practice perpetuates a view of women as being the property of men, and because it keeps the men who enforce it brutally obtuse to the possibility of real equality and communication between the sexes. Hobbling half of the population also directly subtracts from the economic, social, and intellectual wealth of a society. Given the challenges that face every society, this is a bad practice in almost every case. Must compulsory veiling be ethically unacceptable without exception in our world? No. We can easily imagine situations in which forcing one’s daughter to wear a burqa could be perfectly moral—perhaps to escape the attention of thuggish men while traveling in rural Afghanistan. Does this slide from brute, analytic, a priori, and necessary truth to synthetic, a posteriori, contingent, exception-ridden truth pose a problem for moral realism? Recall the analogy I drew between morality and chess. Is it always wrong to surrender your Queen in a game of chess? No. But generally speaking, it is a terrible idea. Even granting the existence of an uncountable number of exceptions to this rule, there are still objectively good and objectively bad moves in every game of chess. Are we in a position to say that the treatment of women in traditional Muslim societies is generally bad? Absolutely we are. Should there be any doubt, I recommend that readers consult Ayaan Hirsi Ali’s several fine books on the subject (A. Hirsi Ali, 2006, 2007, 2010).

18. J. D. Greene, 2002, pp. 287–288.

19. The philosopher Richard Joyce (2006) has argued that the evolutionary origins of moral beliefs undermine them in ways that the evolutionary origins of mathematical and scientific beliefs do not. I do not find his reasoning convincing, however. For instance, Joyce asserts that our mathematical and scientific intuitions could have been selected for only by virtue of their accuracy, whereas our moral intuitions were selected for based on an entirely different standard. In the case of arithmetic (which he takes as his model), this may seem plausible. But science has progressed by violating many (if not most) of our innate, proto-scientific intuitions about the nature of reality. By Joyce’s reasoning, we should view these violations as a likely step away from the Truth.

20. Greene’s argument actually seems somewhat peculiar: on his account, consequentialism cannot be true, because there is simply too much diversity of opinion about morality; and yet he seems to believe that most people will converge on consequentialist principles if given enough time to reflect.

21. Faison, 1996.

22. Dennett, 1995, p. 498.

23. Churchland, 2008a.

24. Slovic, 2007.

25. This seems related to a more general finding in the reasoning literature, in which people are often found to put more weight on a salient anecdote than on large-sample statistics (Fong, Krantz, & Nisbett, 1986; Stanovich & West, 2000). It also appears to be an especially perverse version of what Kahneman and Frederick call “extension neglect” (Kahneman & Frederick, 2005), in which our valuations reliably fail to increase with the size of a problem. For instance, the value most people will place on saving 2,000 lives will be less than twice as large as the value they will place on saving 1,000 lives. Slovic’s result, however, suggests that saving the larger group could be valued less than saving the smaller (even if the larger group contained the smaller). If ever there were a nonnormative result in moral psychology, this is it.

26. There may be some exceptions to this principle: for instance, if you thought that either child would suffer intolerably if the other died, you might believe that both dying would be preferable to one dying. Whether or not such cases actually exist, they are clearly exceptions to the general rule that negative consequences should be additive.

27. Does this sound crazy? Jane McGonigal designs games with such real-world outcomes in mind: www.iftf.org/user/46.

28. Parfit, 1984.

29. While Parfit’s argument is rightfully celebrated, and Reasons and Persons is a philosophical masterpiece, a very similar observation first appears in Rawls, [1971] 1999, pp. 140–141.

30. For instance:

How Only France Survives. In one possible future, the worst-off people in the world soon start to have lives that are well worth living. The quality of life in different nations then continues to rise. Though each nation has its fair share of the world’s resources, such things as climate and cultural traditions give to some nations a higher quality of life. The best-off people, for many centuries, are the French.

In another possible future, a new infectious disease makes nearly everyone sterile. French scientists produce just enough of an antidote for all of France’s population. All other nations cease to exist. This has some bad effects on the quality of life for the surviving French. Thus there is no new foreign art, literature, or technology that the French can import. These and other bad effects outweigh any good effects. Throughout this second possible future the French therefore have a quality of life that is slightly lower than it would be in the first possible future (Parfit, ibid., p. 421).

31. P. Singer, 2009, p. 139.

32. Graham Holm, 2010.

33. Kahneman, 2003.

34. LeBoeuf & Shafir, 2005.

35. Tom, Fox, Trepel, & Poldrack, 2007. But as the authors note, this protocol examined the brain’s appraisal of potential loss (i.e., decision utility) rather than experienced losses, where other studies suggest that negative affect and associated amygdala activity can be expected.

36. Pizarro and Uhlmann make a similar observation (D. A. Pizarro & Uhlmann, 2008).

37. Redelmeier, Katz, & Kahneman, 2003.

38. Schreiber & Kahneman, 2000.

39. Kahneman, 2003.

40. Rawls, [1971] 1999; Rawls & Kelly, 2001.

41. S. Harris, 2004, 2006a, 2006d.

42. He later refined his view, arguing that justice as fairness must be understood as “a political conception of justice rather than as part of a comprehensive moral doctrine” (Rawls & Kelly, 2001, p. xvi).

43. Rawls, [1971] 1999, p. 27.

44. Tabibnia, Satpute, & Lieberman, 2008.

45. It is not unreasonable, therefore, to expect people who are seeking to maximize their well-being to also value fairness. Valuing fairness, they will tend to view its breach as less than ethical—that is, as not being conducive to their collective well-being. But what if they don’t? What if the laws of nature allow for different and seemingly antithetical peaks on the moral landscape? What if there is a possible world in which the Golden Rule has become an unshakable instinct, while there is another world of equivalent happiness where the inhabitants reflexively violate it? Perhaps this is a world of perfectly matched sadists and masochists. Let’s assume that in this world every person can be paired, one-for-one, with the saints in the first world, and while they are different in every other way, these pairs are identical in every way relevant to their well-being. Stipulating all these things, the consequentialist would be forced to say that these worlds are morally equivalent. Is this a problem? I don’t think so. The problem lies in how many details we have been forced to ignore in the process of getting to this point. What possible reason do we have to worry that the principles of human well-being are this elastic? This is like worrying that there is a possible world in which the laws of physics, while as consistent as they are in our world, are completely antithetical to physics as we know it. Okay, what if? Exactly how much should this possibility concern us as we try to predict the behavior of matter in our world?

And the Kantian commitment to viewing people as ends in themselves, while a very useful moral principle, is difficult to map onto the world with precision. Not only are the boundaries between self and world hard to define, but one’s individuality with respect to one’s own past and future is somewhat mysterious. For instance, we are each heirs to our actions and to our failures of action. Does this have any moral implications? If I am currently disinclined to do some necessary and profitable work, to eat well, to make regular visits to doctor and dentist, to avoid dangerous sports, to wear my seat belt, to save money, etc.—have I committed a series of crimes against the future self who will suffer the consequences of my negligence? Why not? And if I do live prudently, despite the pain it causes me, out of concern for the interests of my future self, is this an instance of my being used as a means to someone else’s end? Am I merely a resource for the person I will be in the future?

46. Rawls’s notion of “primary goods,” access to which must be fairly allocated in any just society, seems parasitic upon a general notion of human well-being. Why are “basic rights and liberties,” “freedom of movement and free choice of occupation,” “the powers and prerogatives of offices and positions of authority,” “income and wealth,” and “the social bases of self-respect” of any interest to us at all if not as constituents of happy human lives? Of course, Rawls is at pains to say that his conception of the “good” is partial and merely political—but to the degree that it is good at all, it seems beholden to a larger conception of human well-being. See Rawls, 2001, pp. 58–60.

47. Cf. Pinker, 2008b.

48. Kant, [1785] 1995, p. 30.

49. As Patricia Churchland notes:

Kant’s conviction that detachment from emotions is essential in characterizing moral obligation is strikingly at odds with what we know about our biological nature. From a biological point of view, basic emotions are Mother Nature’s way of getting us to do what we prudentially ought. The social emotions are a way of getting us to do what we socially ought, and the reward system is a way of learning to use past experiences to improve one’s performance in both domains (Churchland, 2008b).

50. However, one problem that people often have with consequentialism is that it entails moral hierarchy: certain spheres of well-being (i.e., minds) will be more important than others. The philosopher Robert Nozick famously observed that this opens the door to “utility monsters”: hypothetical creatures who could get enormously greater life satisfaction from devouring us than we would lose (Nozick, 1974, p. 41). But, as Nozick observes, we are just such utility monsters. Leaving aside the fact that economic inequality allows many of us to profit from the drudgery of others, most of us pay others to raise and kill animals so that we can eat them. This arrangement works out rather badly for the animals. How much do these creatures actually suffer? How different is the happiest cow, pig, or chicken from those who languish on our factory farms? We seem to have decided, all things considered, that it is proper that the well-being of certain species be entirely sacrificed to our own. We might be right about this. Or we might not. For many people, eating meat is simply an unhealthy source of fleeting pleasure. It is very difficult to believe, therefore, that all of the suffering and death we impose on our fellow creatures is ethically defensible. For the sake of argument, however, let’s assume that allowing some people to eat some animals yields a net increase in well-being on planet earth.

In this context, would it be ethical for cows being led to slaughter to defend themselves if they saw an opportunity—perhaps by stampeding their captors and breaking free? Would it be ethical for a fish to fight against the hook in light of the fisherman’s justified desire to eat it? Having judged some consumption of animals to be ethically desirable (or at least ethically acceptable), we appear to rule out the possibility of warranted resistance on their parts. We are their utility monsters.

Nozick draws the obvious analogy and asks if it would be ethical for our species to be sacrificed for the unimaginably vast happiness of some superbeings. Provided that we take the time to really imagine the details (which is not easy), I think the answer is clearly “yes.” There seems no reason to suppose that we must occupy the highest peak on the moral landscape. If there are beings who stand in relation to us as we do to bacteria, it should be easy to admit that their interests must trump our own, and to a degree that we cannot possibly conceive. I do not think that the existence of such a moral hierarchy poses any problems for our ethics. And there is no compelling reason to believe that such superbeings exist, much less ones that want to eat us.

51. Traditional utility theory has been unable to explain why people so often behave in ways that they know they will later regret. If human beings were simply inclined to choose the path leading to their most satisfying option, then willpower would be unnecessary, and self-defeating behavior would be unheard of. In his fascinating book, Breakdown of Will, the psychiatrist George Ainslie examines the dynamics of human decision making in the face of competing preferences. To account for both the necessity of human will, along with its predictable failures, Ainslie presents a model of decision making in which each person is viewed as a community of present and future “selves” in competition, and each “self” discounts future rewards more steeply than seems strictly rational.

The multiplicity of competing interests in the human mind causes us each to function as a loose coalition of interests that may be unified only by resource limitations—like the fact that we have only one body with which to express our desires, moment to moment. This obvious constraint upon our fulfilling mutually incompatible ends keeps us bargaining with our “self” across time: “Ulysses planning for the Sirens must treat Ulysses hearing them as a separate person, to be influenced if possible and forestalled if not” (Ainslie, 2001, p. 40).

Hyperbolic discounting of future rewards leads to curiosities like “preference reversal”: for example, most people prefer $10,000 today to $15,000 three years from now, but prefer $15,000 in thirteen years to $10,000 in ten years. Given that the latter scenario is simply the first seen at a distance of ten years, it seems clear that people’s preferences reverse depending on the length of the delay. The deferral of a reward is less acceptable the closer one gets to the possibility of enjoying it.
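The reversal above falls out of the standard one-parameter hyperbolic model, V = A/(1 + kD), where A is the amount, D the delay, and k the discount rate. Here is a minimal sketch in Python; the rate k = 0.25 is an illustrative assumption (any k greater than 1/6 produces the same flip with these dollar amounts), not a value estimated by Ainslie.

```python
# Minimal sketch of "preference reversal" under hyperbolic discounting.
# Model: V = A / (1 + k * D), with amount A, delay D in years, and an
# assumed discount rate k = 0.25 (illustrative only).

def present_value(amount, delay_years, k=0.25):
    return amount / (1 + k * delay_years)

# $10,000 now vs. $15,000 in three years: the immediate reward wins.
print(present_value(10_000, 0), present_value(15_000, 3))    # 10000.0 vs. ~8571.4

# The same pair viewed from ten years away: the preference reverses.
print(present_value(10_000, 10), present_value(15_000, 13))  # ~2857.1 vs. ~3529.4

# Under exponential discounting (V = A * d**D), the second comparison is
# just the first multiplied by d**10, so no such reversal can ever occur.
```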

52. I am also not as healthy or as well educated as I could be. I believe that such statements are objectively true (even where they relate to subjective facts about me).

53. Haidt, 2001, p. 821.

54. The wisdom of switching doors is seen more easily if you imagine having made your initial selection among a thousand doors, rather than three. Imagine you picked Door #17, and Monty Hall then opens every door except your Door #17 and Door #562, revealing goats as far as the eye can see. What should you do next? Stick with Door #17 or switch to Door #562? It should be obvious that your initial choice was made in a condition of great uncertainty, with a 1-in-1,000 chance of success and a 999-in-1,000 chance of failure. The opening of 998 doors has given you an extraordinary amount of information—concentrating the remaining odds of 999-in-1,000 on Door #562.
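For readers who remain unconvinced, the advantage of switching is easy to verify by simulation. The following sketch is mine, not anything from the literature cited here; it exploits the fact that, since the host opens every door except the player’s pick and one other without ever revealing the car, switching wins exactly when the initial pick was a goat.

```python
import random

# Monty Hall with an arbitrary number of doors. The host opens every
# door except the player's pick and one other, never revealing the car,
# so switching wins exactly when the initial pick was a goat.

def play(switch, doors=3):
    car = random.randrange(doors)
    pick = random.randrange(doors)
    return (pick != car) if switch else (pick == car)

trials = 100_000
for doors in (3, 1_000):
    wins = sum(play(switch=True, doors=doors) for _ in range(trials))
    print(f"{doors} doors, always switching: win rate {wins / trials:.3f}")
# Expected output: roughly 0.667 with three doors, 0.999 with a thousand.
```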

55. Haidt, 2008.

56. Haidt, 2001, p. 823.

57. http://newspolls.org/question.php?question_id=716. Incidentally, the same research found that 16 percent of Americans also believe that it is “very likely” that the “federal government is withholding proof of the existence of intelligent life from other planets” (http://newspolls.org/question.php?question_id=715).

58. This is especially obvious in split-brain research, where language areas in the left hemisphere routinely confabulate explanations for right-hemisphere behavior (Gazzaniga, 1998; M. S. Gazzaniga, 2005; Gazzaniga, 2008; Gazzaniga, Bogen, & Sperry, 1962).

59. Blow, 2009.

60. “Multiculturalism ‘drives young Muslims to shun British values.’” The Daily Mail (January 29, 2007).

61. Moll, de Oliveira-Souza, & Zahn, 2008; 2005.

62. Moll et al., 2008, p. 162.

63. Including the nucleus accumbens, the caudate nucleus, the ventromedial and orbitofrontal cortex, and the rostral anterior cingulate (Rilling et al., 2002).

64. Though, as is often the case with neuroimaging work, the results do not divide as neatly as all that. In fact, one of Moll’s earlier studies on disgust and moral indignation found medial regions also involved in these negative states (Moll, de Oliveira-Souza et al., 2005).

65. Koenigs et al., 2007.

66. J. D. Greene et al., 2001.

67. This thought experiment was first introduced by Foot (1967) and later elaborated by Thomson (1976).

68. J. D. Greene et al., 2001.

69. Valdesolo & DeSteno, 2006.

70. J. D. Greene, 2007.

71. Moll et al., 2008, p. 168. There is an additional concern, one that bedevils much neuroimaging research: the regions that Greene et al. label as “emotional” have also been implicated in other types of processing—memory and language, for instance (G. Miller, 2008b). This is an instance of the “reverse inference” problem raised by Poldrack (2006), discussed below in the context of my own research on belief.

72. While some researchers have sought to differentiate these terms, most use them interchangeably.

73. Salter, 2003, pp. 98–99. See also Stone, 2009.

74. www.missingkids.com.

75. Twenty percent of male and female prison inmates are psychopaths, and they are responsible for more than 50 percent of serious crimes (Hare, 1999, p. 87). The recidivism rate of psychopaths is three times higher than that of other offenders (and the violent recidivism rate is three to five times higher) (Blair, Mitchell, & Blair, 2005, p. 16).

76. Nunez, Casey, Egner, Hare, & Hirsch, 2005. For reasons that may have something to do with the sensationalism just mentioned, psychopathy does not exist as a diagnostic category, or even as an index entry, in The Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). The two DSM-IV diagnoses that seek to address the behavioral correlates of psychopathy—antisocial personality disorder (ASPD) and conduct disorder—do not capture its interpersonal and emotional components at all. Antisocial behavior is common to several disorders, and people with ASPD may not score high on the PCL-R, Hare’s Psychopathy Checklist-Revised (de Oliveira-Souza et al., 2008; Narayan et al., 2007). The inadequacies of the DSM-IV’s treatment of the syndrome are very well brought out in Blair et al., 2005. There are many motives for antisocial behavior and many routes to becoming a violent felon. The hallmark of psychopathy isn’t bad behavior per se, but an underlying spectrum of emotional and interpersonal impairments. And psychopathy, as a construct, is far more predictive of specific behaviors (e.g., recidivism) than the DSM-IV criteria are.

77. It would appear, however, that the same could be said of the great Erwin Schrödinger (Teresi, 2010).

78. Frontal lobe injury can result in a condition known as “acquired sociopathy,” which shares some of the features of developmental psychopathy. While they are often mentioned in the same context, acquired sociopathy and psychopathy differ, especially with regard to the type of aggression they produce. Reactive aggression is triggered by an annoying or threatening stimulus and is often associated with anger. Instrumental aggression is directed toward a goal. The man who lashes out after being jostled on the street has expressed reactive aggression; the man who attacks another man to steal his wallet or to impress his fellow gang members has displayed instrumental aggression. Subjects suffering from acquired sociopathy, who have generally sustained injuries to their orbitofrontal lobes, display poor impulse control and tend to exhibit increased levels of reactive aggression. However, they do not show a heightened tendency toward instrumental aggression. Psychopaths are prone to aggression of both types. Most important, instrumental aggression seems most closely linked to the callous/unemotional (CU) trait that is the hallmark of the disorder. Studies of same-sex twins suggest that the CU trait is also most associated with heritable causes of antisocial behavior (Viding, Jones, Frick, Moffitt, & Plomin, 2008).

Moll, de Oliveira-Souza, and colleagues found that the correlation between gray matter reductions and psychopathy extends beyond the frontal cortex, which would explain why acquired sociopathy and psychopathy are distinct disorders. Psychopathy was correlated with gray matter reductions in a wide network of structures, including the bilateral insula, the superior temporal sulci, the supramarginal/angular gyri, the caudate (head), the fusiform cortex, and the middle frontal gyri, among others. It would be exceedingly unlikely for an injury to affect such a wide network selectively.

79. Kiehl et al., 2001; Glenn, Raine, & Schug, 2009. However, when given personal vs. impersonal moral dilemmas to solve, psychopaths, unlike MPFC patients, tend to produce the same answers as normal controls, albeit without the same emotional response (Glenn, Raine, Schug, Young, & Hauser, 2009).

80. Hare, 1999, p. 76.

81. Ibid., p. 132.

82. Blair et al., 2005.

83. Buckholtz et al., 2010.

84. Richell et al., 2003.

85. Dolan & Fullam, 2004.

86. Dolan & Fullam, 2006; Blair et al., 2005.

87. Blair et al., 2005. The first book-length treatment of psychopathy appears to be Cleckley’s The Mask of Sanity. While it is currently out of print, this book is still widely referenced and much revered. It is worth reading, if only for the author’s highly (and often inadvertently) amusing prose. Hare, 1999, Blair et al., 2005, and Babiak & Hare, 2006, provide more recent book-length discussions of the disorder.

88. Blair et al., 2005. The developmental literature suggests that, because punishment (the unconditioned stimulus) rarely follows a specific transgression (the conditioned stimulus) closely in time, the aversive conditioning brought on by corporal punishment tends to get associated with the person who metes it out, rather than with the behavior in need of correction. Blair also observes that if punishment were the primary source of moral instruction, children would be unable to learn the difference between conventional transgressions (e.g., talking in class) and moral ones (e.g., hitting another student), as breaches of either sort tend to elicit punishment. And yet healthy children can readily distinguish between these forms of misbehavior. Thus, it would seem that they receive their correction directly from the distress that others exhibit when true moral boundaries have been crossed. Other mammals also find the suffering of their conspecifics highly aversive. We know this from work in monkeys (Masserman, Wechkin, & Terris, 1964) and rats (Church, 1959) that would seem scarcely ethical to perform today. For instance, the conclusion of the former study reads: “A majority of rhesus monkeys will consistently suffer hunger rather than secure food at the expense of electroshock to a conspecific.”

89. Subsequent reviews of the neuroimaging literature have produced a somewhat muddled view of the underlying neurology of psychopathy (Raine & Yang, 2006). While individual studies have found anatomical and functional abnormalities in a wide variety of brain regions—including the amygdala, hippocampus, corpus callosum, and putamen—the only result common to all studies is that psychopaths tend to show reduced gray matter in the prefrontal cortex (PFC). Reductions in gray matter in three regions of the PFC—the medial and lateral orbital areas and the frontal poles—correlate with psychopathy scores, and these regions have been shown in other work to be directly involved in the regulation of social conduct (de Oliveira-Souza et al., 2008). Recent findings suggest that the correlation between cortical thinning and psychopathy may be significant only for the right hemisphere (Yang, Raine, Colletti, Toga, & Narr, 2009). The brains of psychopaths also show reduced white matter connections between orbital frontal regions and the amygdala (M. C. Craig et al., 2009). In fact, the difference in the average volume of gray matter in orbitofrontal regions seems to account for half of the variation in antisocial behavior between the sexes: men and women don’t seem to differ in their experience of anger, but women tend to be both more fearful and more empathetic—and are thus better able to control their antisocial impulses (Jones, 2008).

90. Blair et al. hypothesize that the orbitofrontal deficits of psychopathy underlie the propensity for reactive aggression, while the amygdala dysfunction leads to “impairments in aversive conditioning, instrumental learning, and the processing of fearful and sad expressions” that allow for learned, instrumental aggression and make normal socialization impossible. Kent Kiehl, author of the first fMRI study on psychopathy, now believes that the functional neuroanatomy of the disorder comprises a network of structures including the orbital frontal cortex, insula, anterior and posterior cingulate, amygdala, parahippocampal gyrus, and anterior superior temporal gyrus (Kiehl et al., 2001). He refers to this network as the “paralimbic system” (Kiehl, 2006). Kiehl is currently engaged in a massive and ongoing fMRI study of incarcerated psychopaths, using a 1.5 Tesla scanner housed in a tractor-trailer that can be moved from prison to prison. He hopes to build a neuroimaging database of 10,000 subjects (G. Miller, 2008a; Seabrook, 2008).

91. Trivers, 2002, p. 53. For an extensive discussion of the details here, see Dawkins, [1976] 2006, pp. 202–233.

92. Jones, 2008.

93. Diamond, 2008. Pinker, 2007, makes the same point: “If the wars of the twentieth century had killed the same proportion of the population that die in the wars of a typical tribal society, there would have been two billion deaths, not 100 million.”

It is easy to conclude that life is cheap in an honor culture, ruled by vengeance and the law of talion (“eye for an eye”), but, as William Ian Miller observes, by at least one measure these societies value life even more than we do. Our modern economies thrive because we tend to limit personal liability. If I sell you a defective ladder, and you fall and break your neck, I may have to pay you some compensation. But I will not have to pay you nearly as much as I would be willing to pay to avoid having my own neck broken. In our society we are constrained by the value a court places on the other guy’s neck; in a culture ruled by talion law, we are constrained by the value we place on our own (W. I. Miller, 2006).

94. Bowles, 2006, 2008, 2009.

95. Churchland, 2008a.

96. Libet, Gleason, Wright, & Pearl, 1983.

97. Soon, Brass, Heinze, & Haynes, 2008. Libet later argued that while we don’t have free will with respect to initiating behavior, we might have free will to veto an intention before it becomes effective (Libet, 1999, 2003). I think his reasoning was clearly flawed, as there is every reason to think that a conscious veto must also arise on the basis of unconscious neural events.

98. Fisher, 2001; Wegner, 2002; Wegner, 2004.

99. Heisenberg, 2009; Kandel, 2008; Karczmar, 2001; Libet, 1999; McCrone, 2003; Planck & Murphy, 1932; Searle, 2001; Sperry, 1976.

100. Heisenberg, 2009.

101. One problem with this approach is that quantum mechanical effects are probably not, as a general rule, biologically salient. Quantum effects do drive evolution, however, as high-energy particles like cosmic rays cause point mutations in DNA, and the behavior of such particles passing through the nucleus of a cell is governed by the laws of quantum mechanics. Evolution, therefore, seems unpredictable in principle (Silver, 2006).

102. The laws of nature do not strike most of us as incompatible with free will because we have not imagined how human action would appear if all cause-and-effect relationships were understood. But imagine that a mad scientist has developed a means of controlling the human brain at a distance: What would it be like to watch him send a person to and fro on the wings of her “will”? Would there be even the slightest temptation to impute freedom to her? No. But this mad scientist is nothing more than causal determinism personified. What makes his existence so inimical to our notion of free will is that when we imagine him lurking behind a person’s thoughts and actions—tweaking electrical potentials, manufacturing neurotransmitters, regulating genes, etc.—we cannot help but let our notions of freedom and responsibility travel up the puppet’s strings to the hand that controls them. To see that the addition of randomness does nothing to change this situation, we need only imagine the scientist basing the inputs to his machine on a shrewd arrangement of roulette wheels. How would such unpredictable changes in the states of a person’s brain constitute freedom?

Swapping any combination of randomness and natural law for a mad scientist, we can see that all the relevant features of a person’s inner life would be conserved—thoughts, moods, and intentions would still arise and beget actions—and yet we are left with the undeniable fact that the conscious mind cannot be the source of its own thoughts and intentions. This discloses the real mystery of free will: if our experience is compatible with its utter absence, how can we say that we see any evidence for it in the first place?

103. Dennett, 2003.

104. The phrase “alien hand syndrome” describes a variety of neurological disorders in which a person no longer recognizes ownership of one of his hands. Actions of the nondominant hand in the split-brain patient can have this character, and in the acute phase after surgery this can lead to overt, intermanual conflict. Zaidel et al. (2003) prefer the phrase “autonomous hand,” as patients typically experience their hand to be out of control but do not ascribe ownership of it to someone else. Similar anomalies can be attributed to other neurological causes: for instance, in sensory alien hand syndrome (following a stroke in the right posterior cerebral artery) the right arm will sometimes choke or otherwise attack the left side of the body (Pryse-Phillips, 2003).

105. See S. Harris, 2004, pp. 272–274.

106. Burns & Bechara, 2007, p. 264.

107. Others have made a similar argument. See Burns & Bechara, 2007, p. 264; J. Greene & Cohen, 2004, p. 1776.

108. Cf. Levy, 2007.

109. The neuroscientist Michael Gazzaniga writes:

Neuroscience will never find the brain correlate of responsibility, because that is something we ascribe to humans—to people—not to brains. It is a moral value we demand of our fellow, rule-following human beings. Just as optometrists can tell us how much vision a person has (20/20 or 20/200) but cannot tell us when someone is legally blind or has too little vision to drive a school bus, so psychiatrists and brain scientists might be able to tell us what someone’s mental state or brain condition is but cannot tell us (without being arbitrary) when someone has too little control to be held responsible. The issue of responsibility (like the issue of who can drive school buses) is a social choice. In neuroscientific terms, no person is more or less responsible than any other for actions. We are all part of a deterministic system that someday, in theory, we will completely understand. Yet the idea of responsibility, a social construct that exists in the rules of a society, does not exist in the neuronal structures of the brain (Gazzaniga, 2005, pp. 101–102).

While it is true that responsibility is a social construct attributed to people and not to brains, it is a social construct that can make more or less sense given certain facts about a person’s brain. I think we can easily imagine discoveries in neuroscience, along with advances in brain-imaging technology, that would allow us to attribute responsibility to persons in a far more precise way than we do at present. A “Twinkie defense” would be entirely uncontroversial if we learned that there was something in the creamy center of every Twinkie that obliterated the frontal lobe’s inhibitory control over the limbic system.

But perhaps “responsibility” is simply the wrong construct: for Gazzaniga is surely correct to say that “in neuroscientific terms, no person is more or less responsible than any other for actions.” Conscious actions arise on the basis of neural events of which we are not conscious. Whether they are predictable or not, we do not cause our causes.

110. Diamond, 2008.

111. In the philosophical literature, one finds three approaches to the problem: determinism, libertarianism, and compatibilism. Both determinism and libertarianism are often referred to as “incompatibilist” views, in that both maintain that if our behavior is fully determined by background causes, free will is an illusion. Determinists believe that we live in precisely such a world; libertarians (no relation to the political view that goes by this name) believe that our agency rises above the field of prior causes—and they inevitably invoke some metaphysical entity, like a soul, as the vehicle for our freely acting wills. Compatibilists, like Daniel Dennett, maintain that free will is compatible with causal determinism (see Dennett, 2003; for other compatibilist arguments see Ayer, Chisholm, Strawson, Frankfurt, Dennett, and Watson—all in Watson, 1982). The problem with compatibilism, as I see it, is that it tends to ignore that people’s moral intuitions are driven by deeper, metaphysical notions of free will. That is, the free will that people presume for themselves and readily attribute to others (whether or not this freedom is, in Dennett’s sense, “worth wanting”) is a freedom that slips the influence of impersonal, background causes. The moment you show that such causes are effective—as any detailed account of the neurophysiology of human thought and behavior would—proponents of free will can no longer locate a plausible hook upon which to hang their notions of moral responsibility. The neuroscientists Joshua Greene and Jonathan Cohen make the same point:

Most people’s view of the mind is implicitly dualist and libertarian and not materialist and compatibilist … [I]ntuitive free will is libertarian, not compatibilist. That is, it requires the rejection of determinism and an implicit commitment to some kind of magical mental causation … contrary to legal and philosophical orthodoxy, determinism really does threaten free will and responsibility as we intuitively understand them (J. Greene & Cohen, 2004, pp. 1779–1780).

Chapter 3: Belief

1. Brains do not fossilize, so we cannot examine the brains of our ancient ancestors. But comparing the neuroanatomy of living primates offers some indication of the types of physical adaptations that might have led to the emergence of language. For instance, diffusion-tensor imaging of macaque, chimpanzee, and human brains reveals a gradual increase in the connectivity of the arcuate fasciculus—the fiber tract linking the temporal and frontal lobes. This suggests that the relevant adaptations were incremental, rather than saltatory (Ghazanfar, 2008).

2. N. Patterson, Richter, Gnerre, Lander, & Reich, 2006, 2008.

3. Wade, 2006.

4. Sarmiento, Sawyer, Milner, Deak, & Tattersall, 2007; Wade, 2006.

5. It seems, however, that the Neanderthal copy of the FOXP2 gene carried the same two crucial mutations that distinguish modern humans from other primates (Enard et al., 2002; Krause et al., 2007). FOXP2 is now known to play a central role in spoken language, and its disruption leads to severe linguistic impairments in otherwise healthy people (Lai, Fisher, Hurst, Vargha-Khadem, & Monaco, 2001). The introduction of a human FOXP2 gene into mice changes their ultrasonic vocalizations, decreases exploratory behavior, and alters cortico-basal ganglia circuits (Enard et al., 2009). The centrality of FOXP2 for language development in humans has led some researchers to conclude that Neanderthals could speak (Yong, 2008). In fact, one could argue that the faculty of speech must precede Homo sapiens, as “it is difficult to imagine the emergence of complex subsistence behaviors and selection for a brain size increase of approximately 75 percent, both since about 800,000 years ago, without complex social communication” (Trinkaus, 2007).

Whether or not they could speak, the Neanderthals were impressive creatures. Their average cranial capacity was 1,520 cc, slightly larger than that of their Homo sapiens contemporaries. In fact, human cranial capacity has decreased by about 150 cc over the millennia to its current average of 1,340 cc (Gazzaniga, 2008). Generally speaking, the correlation between brain size and cognitive ability is less than straightforward, as there are several species that have larger brains than we do (e.g., elephants, whales, dolphins) without exhibiting signs of greater intelligence. There have been many efforts to find some neuroanatomical measure that reliably tracks cognitive ability, including allometric brain size (brain size proportional to body mass), “encephalization quotient” (brain size proportional to the expected brain size for similar animals, corrected for body mass; for primates, EQ = [brain weight] / [0.12 × (body weight)^0.67]), the size of the neocortex relative to the rest of the brain, etc. None of these metrics has proved especially useful. In fact, among primates, there is no better predictor of cognitive ability than absolute brain size, irrespective of body mass (Deaner, Isler, Burkart, & van Schaik, 2007). By this measure, our competition with Neanderthals looks especially daunting.
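In code, the primate EQ formula just quoted is a one-liner. The sample weights below are round, commonly cited figures, used purely for illustration (Python):

# The primate encephalization quotient: EQ = brain / (0.12 * body ** 0.67),
# with both weights in grams. The example values are rough, illustrative
# figures for a typical human, not measurements from any particular study.

def primate_eq(brain_weight_g, body_weight_g):
    return brain_weight_g / (0.12 * body_weight_g ** 0.67)

print(round(primate_eq(1_350, 65_000), 1))  # roughly 6.7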

There are several genes involved in brain development that have been found to be differentially regulated in human beings compared to other primates; two of special interest are microcephalin and ASPM (the abnormal spindle-like microcephaly-associated gene). The modern variant of microcephalin, which regulates brain size, appeared approximately 37,000 years ago (more or less coincident with the ascendance of modern humans) and has increased in frequency under positive selection pressure ever since (P. D. Evans et al., 2005). One modern variant of ASPM, which also regulates brain size, has spread widely in just the last 5,800 years (Mekel-Bobrov et al., 2005). As these authors note, this can be loosely correlated with the spread of cities and the development of written language. The possible significance of these findings is also discussed in Gazzaniga (2008).

6. Fitch, Hauser, & Chomsky, 2005; Hauser, Chomsky, & Fitch, 2002; Pinker & Jackendoff, 2005.

7. Regrettably, language is also the basis of our ability to wage war effectively, to perpetrate genocide, and to render our planet uninhabitable.

8. While general information sharing has been undeniably useful, there is good reason to think that the communication of specifically social information has driven the evolution of language (Dunbar, 1998, 2003). Humans also transmit social information (i.e., gossip) in greater quantity and with higher fidelity than nonsocial information (Mesoudi, Whiten, & Dunbar, 2006).

9. Cf. S. Harris, 2004, pp. 243–244.

10. A. R. Damasio, 1999.

11. Westbury & Dennett, 1999.

12. Bransford & McCarrell, 1977.

13. Rumelhart, 1980.

14. Damasio draws a similar distinction (A. R. Damasio, 1999).

15. For the purposes of studying belief in the lab, therefore, there seems to be little problem in defining the phenomenon of interest: believing a proposition is the act of accepting it as “true” (e.g., marking it as “true” on a questionnaire); disbelieving a proposition is the act of rejecting it as “false”; and being uncertain about the truth value of a proposition is the disposition to do neither of these things, but to judge it, rather, as “undecidable.”

In our search for the neural correlates of subjective states like belief and disbelief, we are bound to rely on behavioral reports. Therefore, having presented an experimental subject with a written statement—e.g., “The United States is larger than Guatemala”—and watched him mark it as “true,” we may wonder whether we can take him at his word. Does he really believe that the United States is larger than Guatemala? Does this statement, in other words, really seem true to him? This is rather like worrying, with reference to a subject who has just performed a lexical decision task, whether a given stimulus really seems like a word to him. While it may seem reasonable to worry that experimental subjects might be poor judges of what they believe, or that they might attempt to deceive experimenters, such concerns seem misplaced—or, if appropriate here, they should haunt all studies of human perception and cognition. As long as we are content to rely on subjects to report their perceptual judgments (about when, or whether, a given stimulus appeared), or their cognitive ones (about what sort of stimulus it was), there seems to be no special problem in taking reports of belief, disbelief, and uncertainty at face value. This is not to ignore the possibility of deception (or self-deception), implicit cognitive conflict, motivated reasoning, and other sources of confusion.

16. Blakeslee, 2007.

17. These considerations run somewhat against David Marr’s influential thesis that any complex information-processing system should be understood first at the level of “computational theory” (i.e., the level of highest abstraction) in terms of its “goals” (Marr, 1982). Thinking in terms of goals can be extremely useful, of course, in that it unifies (and ignores) a tremendous amount of bottom-up detail: the goal of “seeing,” for instance, is complicated at the level of its neural realization and, what is more, it has been achieved by at least forty separate evolutionary routes (Dawkins, 1996, p. 139). Consequently, thinking about “seeing” in terms of abstract computational goals can make a lot of sense. In a structure like the brain, however, the “goals” of the system can never be fully specified in advance. We currently have no inkling what else a region like the insula might be “for.”

18. There has been a long debate in neuroscience over whether the brain is best thought of as a collection of discrete modules or as a distributed, dynamical system. It seems clear, however, that both views are correct, depending on one’s level of focus (J. D. Cohen & Tong, 2001). Some degree of modularity is now an undeniable property of brain organization, as damage to one brain region can destroy a specific ability (e.g., the recognition of faces) while sparing most others. There are also distinct differences in cell types and patterns of connectivity that articulate sharp borders between regions. And some degree of modularity is ensured by limitations on information transfer over large distances in the brain.

While regional specialization is a general fact of brain organization, strict partitioning generally isn’t: as has already been said, most regions of the brain serve multiple functions. And even within functionally specific regions, the boundaries between their current function and their possible functions are provisional, fuzzy, and in the case of any individual brain, guaranteed to be idiosyncratic. For instance, the brain shows a general capacity to recover from focal injuries, and this entails the recruitment and repurposing of other (generally adjacent) brain areas. Such considerations suggest that we cannot expect true isomorphism between brains—or even between a brain and itself across time.

There is legitimate concern, however, that current methods of neuroimaging tend to beg the question in favor of the modularity thesis—leading, among uncritical consumers of this research, to a naïve picture of functional segregation in the brain. Consider functional magnetic resonance imaging (fMRI), which is the most popular method of neuroimaging at present. This technique does not give us an absolute measure of neural activity. Rather, it allows us to compare changes in blood flow throughout the brain between two experimental conditions. We can, for example, compare instances in which subjects believe statements to be true to instances in which they believe statements to be false. The resulting image reveals which regions of the brain are more active in one condition or the other. Because fMRI allows us to detect signal changes throughout the brain, it is not, in principle, blind to widely distributed or combinatorial processing. But its dependence on blood flow as a marker for neural activity reduces spatial and temporal resolution, and the statistical techniques we use to analyze our data require that we focus on relatively large clusters of activity. It is, therefore, in the very nature of the tool to deliver images that appear to confirm the modular organization of brain function (cf. Henson, 2005). The problem, as far as critics are concerned, is that this method of studying the brain ignores the fact that the whole brain is active in both experimental conditions (e.g., during belief and disbelief), and regions that don’t survive this subtractive procedure may well be involved in the relevant information processing.
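To make the logic of such subtractive contrasts concrete, here is a minimal sketch using simulated data rather than real scans; the array sizes, effect size, and significance threshold are all illustrative assumptions (Python, using NumPy and SciPy):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 5000

# Simulated per-trial responses for two conditions (say, "belief" and
# "disbelief"); a small cluster of voxels responds more in condition A.
cond_a = rng.normal(0.0, 1.0, size=(n_trials, n_voxels))
cond_b = rng.normal(0.0, 1.0, size=(n_trials, n_voxels))
cond_a[:, :50] += 0.8  # the "active" cluster

# The subtraction: a voxel-by-voxel test of whether mean signal differs
# between conditions. Activity common to both conditions cancels out,
# which is precisely the critics' complaint about the method.
t_vals, p_vals = stats.ttest_ind(cond_a, cond_b, axis=0)
print("voxels surviving p < 0.001:", int((p_vals < 0.001).sum()))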

fMRI also rests on the assumption that there is a more or less linear relationship between changes in blood flow, as measured by blood-oxygen-level-dependent (BOLD) changes in the MR signal, and changes in neuronal activity. While the validity of fMRI seems generally well supported (Logothetis, Pauls, Augath, Trinath, & Oeltermann, 2001), there is some uncertainty about whether the assumed linear relationship between blood flow and neuronal activity holds for all mental processes (Sirotin & Das, 2009). There are also potential problems with comparing one brain state to another on the assumption that changes in brain function are additive in the way that the components of an experimental task may be (this is often referred to as the problem of “pure insertion”) (Friston et al., 1996). There are also questions about what “activity” is indicated by changes in the BOLD signal. The principal correlate of blood-flow changes in the brain appears to be presynaptic/neuromodulatory activity (as measured by local field potentials), not axonal spikes. This fact poses a few concerns for the interpretation of fMRI data: fMRI cannot readily differentiate task-specific activity from neuromodulation; nor can it differentiate bottom-up from top-down processing. In fact, fMRI may be blind to the difference between excitatory and inhibitory signals, as metabolism also increases with inhibition. It seems quite possible, for instance, that increases in recurrent inhibition in a given region might be associated with greater BOLD signal but decreased neuronal firing. For a discussion of these and other limitations of the technology, see Logothetis, 2008; M. S. Cohen, 1996, 2001. Such concerns notwithstanding, fMRI remains the most important tool for studying human brain function noninvasively.

A more sophisticated, neural network analysis of fMRI data has shown that representational content—which can appear, under standard methods of data analysis, to be strictly segregated (e.g., face-vs.-object perception in the ventral temporal lobe)—is actually intermingled and dispersed across a wider region of the cortex. Information encoding appears to depend not on strict localization, but on a combinatorial pattern of variations in the intensity of the neural response across regions once thought to be functionally distinct (Hanson, Matsuka, & Haxby, 2004).

There are also epistemological questions about what it means to correlate any mental state with physiological changes in the brain. And yet, while I consider the so-called “hard problem” of consciousness (Chalmers, 1996) a real barrier to scientific explanation, I do not think it will hinder the progress of cognitive neuroscience generally. The distinction between consciousness and its contents seems paramount. It is true that we do not understand how consciousness emerges from the unconscious activity of neural networks—or even how it could emerge. But we do not need such knowledge to compare states of mind through neuroimaging. To consider one among countless examples from the current literature: neuroscientists have begun to investigate how envy and schadenfreude are related in neuroanatomical terms. One group found activity in the ACC (anterior cingulate cortex) to be correlated with envy, and the magnitude of signal change was predictive of activity in the striatum (a region often associated with reward) when subjects witnessed those they envied experiencing misfortune (signifying the pleasure of schadenfreude) (Takahashi et al., 2009). This reveals something about the relationship between these mental states that may not be obvious by introspection. The finding that right-sided lesions in the MPFC impair the perception of envy (a negative emotion), while analogous left-sided lesions impair the perception of schadenfreude (a positive emotion) fills in a few more details (Shamay-Tsoory, Tibi-Elhanany, & Aharon-Peretz, 2007)—as there is a wider literature on the lateralization of positive and negative mental states. Granted, the relationship between envy and schadenfreude was somewhat obvious without our learning their neural correlates. But improvements in neuroimaging may one day allow us to understand the relationship between such mental states with great precision. This may deliver conceptual surprises and even personal epiphanies. And if the mental states and capacities most conducive to human well-being are ever understood in terms of their underlying neurophysiology, neuroimaging may become an integral part of any enlightened approach to ethics.

It seems to me that progress on this front does not require that we solve the “hard problem” of consciousness (or that it even admit of a solution). When comparing mental states, the reality of human consciousness is a given. We need not understand how consciousness relates to the behavior of atoms to investigate how emotions like love, compassion, trust, greed, fear, and anger differ (and interact) in neurophysiological terms.

19. Most inputs to cortical dendrites come from neurons in the same region of cortex: very few arrive from other cortical regions or from ascending pathways. For instance, only 5 percent to 10 percent of inputs to layer 4 of visual cortex arrive from the thalamus (R. J. Douglas & Martin, 2007).

20. The apparent (qualified) existence of “grandmother cells” notwithstanding (Quiroga, Reddy, Kreiman, Koch, & Fried, 2005). For a discussion of the limits of traditional “connectionist” accounts of mental representation, see Doumas & Hummel, 2005.

21. These data were subsequently published as S. Harris, Sheth, & Cohen, 2008.

22. The post-hoc analysis of neuroimaging data is a limitation of many studies, and in our original paper we acknowledged the importance of distinguishing between results predicted by a specific model of brain function and those that arise in the absence of a prior hypothesis. This caveat notwithstanding, I believe that too much has been made of the distinction between descriptive and hypothesis-driven research in science generally and in neuroscience in particular. There must always be a first experimental observation, and one gets no closer to physical reality by running a follow-up study. To have been the first person to observe blood-flow changes in the right fusiform gyrus in response to visual stimuli depicting faces (Sergent, Ohta, & MacDonald, 1992)—and to have concluded, on the basis of these data, that this region of cortex plays a role in facial recognition—was a perfectly legitimate instance of scientific induction. Subsequent corroboration of these results increased our collective confidence in this first set of data (Kanwisher, McDermott, & Chun, 1997) but did not constitute an epistemological advance over the first study. All subsequent hypothesis-driven research that has taken the fusiform gyrus as a region of interest derives its increased legitimacy from the descriptive study upon which it is based (or, as has often been the case in neuroscience, from the purely descriptive, clinical literature). If the initial descriptive study was in error, then any hypothesis based on it would be empty (or only accidentally correct); if the initial work was valid, then follow-up work would merely corroborate it and, perhaps, build upon it. The injuries suffered by Phineas Gage and H.M. were inadvertent, descriptive experiments, and the wealth of information learned from these cases—arguably more than was learned from any two experiments in the history of neuroscience—did not suffer for lack of prior hypothesis. Indeed, these clinical observations became the basis of all subsequent hypotheses about the function of the frontal and medial temporal lobes.

23. E. K. Miller & Cohen, 2001; Desimone & Duncan, 1995. While damage to the PFC can result in a range of deficits, the most common is haphazard, inappropriate, and impulsive behavior, along with the inability to acquire new behavioral rules (Bechara, Damasio, & Damasio, 2000). As many parents can attest, the human capacity for self-regulation does not fully develop until after adolescence; this is when the white-matter connections in the PFC finally mature (Sowell, Thompson, Holmes, Jernigan, & Toga, 1999).

24. Spinoza, [1677] 1982.

25. D. T. K. Gilbert, 1991; D. T. K. Gilbert, Douglas, & Malone, 1990; J. P. Mitchell, Dodson, & Schacter, 2005.

26. This truth bias may interact with (or underlie) what has come to be known as the “confirmation bias” or “positive test strategy” heuristic in reasoning (Klayman & Ha, 1987): people tend to seek evidence that confirms a hypothesis rather than evidence that refutes it. This strategy is known to produce frequent reasoning errors. Our bias toward belief may also explain the “illusory-truth effect,” where mere exposure to a proposition, even when it was revealed to be false or attributed to an unreliable source, increases the likelihood that it will later be remembered as being true (Begg, Robertson, Gruppuso, Anas, & Needham, 1996; J. P. Mitchell et al., 2005).

27. This was due to a greater decrease in signal during disbelief trials than during belief trials. This region of the brain is known to have a high level of resting-state activity and to show reduced activity compared to baseline for a wide variety of cognitive tasks (Raichle et al., 2001).

28. Bechara et al., 2000. The MPFC is also activated by reasoning tasks that incorporate high emotional salience (Goel & Dolan, 2003b; Northoff et al., 2004). Individuals with MPFC lesions test normally on a variety of executive function tasks but often fail to integrate appropriate emotional responses into their reasoning about the world. They also fail to habituate normally to unpleasant somatosensory stimuli (Rule, Shimamura, & Knight, 2002). The circuitry in this region that links decision making to emotions seems rather specific, as MPFC lesions do not disrupt fear conditioning or the normal modulation of memory by emotionally charged stimuli (Bechara et al., 2000). While reasoning appropriately about the likely consequences of their actions, these persons seem unable to feel the difference between good and bad choices.

29. Hornak et al., 2004; O’Doherty, Kringelbach, Rolls, Hornak, & Andrews, 2001.

30. Matsumoto & Tanaka, 2004.

31. Schnider, 2001.

32. Northoff et al., 2006.

33. Kelley et al., 2002.

34. When compared with both belief and uncertainty, disbelief was associated in our study with bilateral activation of the anterior insula, a primary region for the sensation of taste (Faurion, Cerf, Le Bihan, & Pillias, 1998; O’Doherty, Rolls, Francis, Bowtell, & McGlone, 2001). This area is widely thought to be involved with negatively valenced feelings like disgust (Royet, Plailly, Delon-Martin, Kareken, & Segebarth, 2003; Wicker et al., 2003), harm avoidance (Paulus, Rogalsky, Simmons, Feinstein, & Stein, 2003), and the expectation of loss in decision tasks (Kuhnen & Knutson, 2005). The anterior insula has also been linked to pain perception (Wager et al., 2004) and even to the perception of pain in others (T. Singer et al., 2004). The frequent association between activity in the anterior insula and negative affect appears to make at least provisional sense of the emotional tone of disbelief.

While disgust is regularly classed as a primary human emotion, infants and toddlers do not appear to feel it (Bloom, 2004, p. 155). This would account for some of their more arresting displays of incivility. Interestingly, people suffering from Huntington’s disease, as well as presymptomatic carriers of the HD allele, exhibit reduced feelings of disgust and are generally unable to recognize the emotion in others (Calder, Keane, Manes, Antoun, & Young, 2000; Gray, Young, Barker, Curtis, & Gibson, 1997; Halligan, 1998; Hayes, Stevenson, & Coltheart, 2007; I. J. Mitchell, Heims, Neville, & Rickards, 2005; Sprengelmeyer, Schroeder, Young, & Epplen, 2006). The recognition deficit has been correlated with reduced activity in the anterior insula (Hennenlotter et al., 2004; Kipps, Duggins, McCusker, & Calder, 2007)—though other work has found that HD patients and carriers are impaired in processing a range of (predominantly negative) emotions: including disgust, anger, fear, sadness, and surprise (Henley et al., 2008; Johnson et al., 2007; Snowden et al., 2008).

We must be careful not to draw too strong a connection between disbelief and disgust (or any other mental state) on the basis of these data. While a connection between these states of mind seems intuitively plausible, equating disbelief with disgust represents a “reverse inference” of a sort known to be problematic in the field of neuroimaging (Poldrack, 2006). One cannot reliably infer the presence of a mental state on the basis of brain data alone, unless the brain regions in question are known to be truly selective for a single mental state. If it were known, for instance, that the anterior insulae were active if and only if subjects experienced disgust, then we could draw quite a strong inference about the role of disgust in disbelief. But there are very few regions of the brain whose function is so selective as to justify inferences of this kind. The anterior insula, for instance, appears to be involved in a wide range of neutral/positive states—including time perception, music appreciation, self-recognition, and smiling (A. D. Craig, 2009).
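The logic of Poldrack’s caution can be made explicit with Bayes’ rule. In the sketch below, every probability is invented, chosen only to show how a region’s lack of selectivity dilutes the inference from activation to mental state (Python):

# P(mental state | activation) via Bayes' rule, under assumed rates.
# All numbers here are hypothetical, for illustration only.

def posterior(p_act_given_state, p_act_given_other, prior):
    joint = p_act_given_state * prior
    return joint / (joint + p_act_given_other * (1 - prior))

# A highly selective region: activation is strong evidence of the state.
print(round(posterior(0.9, 0.05, prior=0.3), 2))  # ~0.89
# An insula-like region, active in many other states: weak evidence.
print(round(posterior(0.9, 0.60, prior=0.3), 2))  # ~0.39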

And there may also be many forms of disgust: While subjects tend to rate a wide range of stimuli as equivalently “disgusting,” one group found that disgust associated with pathogen-related acts, social-sexual acts (e.g., incest), and nonsexual moral violations activated different (but overlapping) brain networks (J. S. Borg, Lieberman, & Kiehl, 2008). To further complicate matters, they did not find the insula implicated in any of this disgust processing, with the exception of the subjects’ response to incest. This group is not alone in suggesting that the insula may not be selective for disgust and may be more generally sensitive to other factors, including self-monitoring and emotional salience. As the authors note, the difficulty in interpreting these results is compounded by the fact that their subjects were engaged in a memory task and not required to explicitly evaluate how disgusting a stimulus was until after the scanning session. This may have selected against insular activity; at least one other study suggests that the insula may only be preferentially active in response to attended stimuli (Anderson, Christoff, Panitz, De Rosa, & Gabrieli, 2003).

35. These results seem to pull the rug out from under one widely subscribed view in moral philosophy, generally described as “non-cognitivism.” Non-cognitivists hold that moral claims lack propositional content and, therefore, do not express genuine beliefs about the world. Unfortunately for this view, our brains appear to be unaware of this breakthrough in metaethics: we seem to accept the truth of moral assertions in the same way as we accept any other statements of fact.

In this first experiment on belief, we also analyzed the brain’s response to uncertainty: the mental state in which the truth value of a proposition cannot be judged. Not knowing what one believes to be true—Is the hotel north of Main Street, or south of Main Street? Was he talking to me, or to the man behind me?—has obvious behavioral/emotional consequences. Uncertainty prevents the link between thought and subsequent behavior/emotion from forming. It can be distinguished readily from belief and disbelief in this regard, because in the latter states, the mind has settled upon a specific, actionable representation of the world. The results of our study suggest two mechanisms that might account for this difference.

The contrasts—uncertainty minus belief and uncertainty minus disbelief—yielded signal in the anterior cingulate cortex (ACC). This region of the brain has been widely implicated in error detection (Schall, Stuphorn, & Brown, 2002) and response conflict (Gehring & Fencsik, 2001), and it regularly responds to increases in cognitive load and interference (Bunge, Ochsner, Desmond, Glover, & Gabrieli, 2001). It has also been shown to play a role in the perception of pain (Coghill, McHaffie, & Yen, 2003).

The opposite contrasts, belief minus uncertainty and disbelief minus uncertainty, showed increased signal in the caudate nucleus, which is part of the basal ganglia. One of the primary functions of the basal ganglia is to provide a route by which cortical association areas can influence motor action. The caudate has displayed context-specific, anticipatory, and reward-related activity in a variety of animal studies (Mink, 1996) and has been associated with cognitive planning in humans (Monchi, Petrides, Strafella, Worsley, & Doyon, 2006). It has also been shown to respond to feedback in both reasoning and guessing tasks when compared to the same tasks without feedback (Elliott, Frith, & Dolan, 1997).

In cognitive terms, one of the principal features of feedback is that it systematically removes uncertainty. The fact that both belief and disbelief showed highly localized signal changes in the caudate, when compared to uncertainty, appears to implicate basal ganglia circuits in the acceptance or rejection of linguistic representations of the world. Delgado et al. showed that the caudate response to feedback can be modulated by prior expectations (Delgado, Frank, & Phelps, 2005). In a trust game played with three hypothetical partners (neutral, bad, and good), they found that the caudate responded strongly to violations of trust by a neutral partner, to a lesser degree with a bad partner, but not at all when the partner was assumed to be morally good. On their account, it seems that the assumption of moral goodness in a partner led subjects to ignore or discount feedback. This result seems convergent with our own: one might say that subjects in their study were uncertain of what to conclude when a trusted collaborator failed to cooperate.

The ACC and the caudate display an unusual degree of connectivity, as the surgical lesioning of the ACC (a procedure known as a cingulotomy) causes atrophy of the caudate, and the disruption of this pathway is thought to be the basis of the procedure’s effect in treating conditions like obsessive-compulsive disorder (Rauch et al., 2000; Rauch et al., 2001).

There are, however, different types of uncertainty. For instance, there is a difference between expected uncertainty—where one knows that one’s observations are unreliable—and unexpected uncertainty, where something in the environment indicates that things are not as they seem. The difference between these two modes of cognition has been analyzed within a Bayesian statistical framework in terms of their underlying neurophysiology. It appears that expected uncertainty is largely mediated by acetylcholine and unexpected uncertainty by norepinephrine (Yu & Dayan, 2005). Behavioral economists sometimes distinguish between “risk” and “ambiguity”: the former being a condition where probability can be assessed, as in a game of roulette, the latter being the uncertainty born of missing information. People are generally more willing to take even very low-probability bets in a condition of risk than they are to act in a condition of missing information. One group found that ambiguity was negatively correlated with activity in the dorsal striatum (caudate/putamen) (Hsu, Bhatt, Adolphs, Tranel, & Camerer, 2005). This result fits very well with our own, as the uncertainty provoked by our stimuli would have taken the form of “ambiguity” rather than “risk.”

36. There are many factors that bias our judgment, including arbitrary anchors on estimates of quantity, availability biases on estimates of frequency, insensitivity to the prior probability of outcomes, misconceptions of randomness, nonregressive predictions, insensitivity to sample size, illusory correlations, overconfidence, the valuing of worthless evidence, hindsight bias, confirmation bias, biases based on ease of imaginability, and other nonnormative modes of thinking. See Baron, 2008; J. S. B. T. Evans, 2005; Kahneman, 2003; Kahneman, Krueger, Schkade, Schwarz, & Stone, 2006; Kahneman, Slovic, & Tversky, 1982; Kahneman & Tversky, 1996; Stanovich & West, 2000; Tversky & Kahneman, 1974.

37. Stanovich & West, 2000.

38. Fong et al., 1986. Once again, asking whether something is rationally or morally normative is distinct from asking whether it has been evolutionarily adaptive. Some psychologists have sought to minimize the significance of the research on cognitive bias by suggesting that subjects make decisions using heuristics that conferred adaptive fitness on our ancestors. As Stanovich and West (2000) observe, what serves the genes does not necessarily advance the interests of the individual. We could also add that what serves the individual in one context may not serve him in another. The cognitive and emotional mechanisms that may (or may not) have optimized us for face-to-face conflict (and its resolution) have clearly not prepared us to negotiate conflicts waged from afar—whether with email or other long-range weaponry.

39. Ehrlinger, Johnson, Banner, Dunning, & Kruger, 2008; Kruger & Dunning, 1999.

40. Jost, Glaser, Kruglanski, & Sulloway, 2003. Amodio et al. (2007) used EEG to look for differences in neurocognitive function between liberals and conservatives on a Go/No-Go task. They found that liberalism correlated with increased event-related potentials in the anterior cingulate cortex (ACC). Given the ACC’s well-established role in mediating cognitive conflict, they concluded that this difference might, in part, explain why liberals are less set in their ways than conservatives, and more aware of nuance, ambiguity, etc. Inzlicht (2009) found a nearly identical result for religious nonbelievers versus believers.

41. Rosenblatt, Greenberg, Solomon, Pyszczynski, & Lyon, 1989.

42. Jost et al., 2003, p. 369.

43. D. A. Pizarro & Uhlmann, 2008.

44. Kruglanski, 1999. The psychologist Drew Westen describes motivated reasoning as “a form of implicit affect regulation in which the brain converges on solutions that minimize negative and maximize positive affect states” (Westen, Blagov, Harenski, Kilts, & Hamann, 2006). This seems apt.

45. The fact that this principle often breaks down, spectacularly and unselfconsciously, in the domain of religion is precisely why one can reasonably question whether the world’s religions are in touch with reality at all.

46. Bechara et al., 2000; Bechara, Damasio, Tranel, & Damasio, 1997; A. Damasio, 1999.

47. S. Harris et al., 2008.

48. Burton, 2008.

49. Frith, 2008, p. 45.

50. Silver, 2006, pp. 77–78.

51. But this allele has also been linked to a variety of psychological traits, like novelty seeking and extraversion, which might also account for its persistence in the genome (Benjamin et al., 1996).

52. Burton, 2008, pp. 188–195.

53. Joseph, 2009.

54. Houreld, 2009; LaFraniere, 2007; Harris, 2009.

55. Mlodinow, 2008.

56. Wittgenstein, 1969, p. 206.

57. Analogical reasoning is generally considered a form of induction (Holyoak, 2005).

58. Sloman & Lagnado, 2005; Tenenbaum, Kemp, & Shafto, 2007.

59. For a review of the literature on deductive reasoning see Evans, 2005.

60. Cf. J. S. B. T. Evans, 2005, pp. 178–179.

61. For example, Canessa et al., 2005; Goel, Gold, Kapur, & Houle, 1997; Osherson et al., 1998; Prabhakaran, Rypma, & Gabrieli, 2001; Prado, Noveck, & Van Der Henst, 2009; Rodriguez-Moreno & Hirsch, 2009; Strange, Henson, Friston, & Dolan, 2001. Goel and Dolan (2003a) found that when syllogistic reasoning was modulated by a strong belief bias, the ventromedial prefrontal cortex was preferentially engaged, while such reasoning without an effective belief bias appeared to be driven by a greater activation of the (right) lateral prefrontal cortex. Elliott et al. (1997) found that guessing appears to be mediated by the ventromedial prefrontal cortex. Bechara et al. (1997) report that patients suffering ventromedial prefrontal damage fail to act according to their correct conceptual beliefs while engaged in a gambling task. Prior to our 2008 study, it was unclear how these findings would relate to belief and disbelief per se. They suggested, however, that the medial prefrontal cortex would be among our regions of interest.

While decision making is surely related to belief processing, the “decisions” that neuroscientists have tended to study are those that precede voluntary movements in tests of sensory discrimination (Glimcher, 2002). The initiation of such movements requires the judgment that a target stimulus has appeared—we might even say that this entails the “belief” that an event has occurred—but such studies are not designed to examine belief as a propositional attitude. Decision making in the face of potential reward is obviously of great interest to anyone who would understand the roots of human and animal behavior, but the link to belief per se appears tenuous. For instance, in a visual-decision task (in which monkeys were trained to detect the coherent motion of random dots and signal their direction with eye movements), Gold and Shadlen found that the brain regions responsible for this sensory judgment were the very regions that subsequently initiated the behavioral response (Gold & Shadlen, 2000, 2002; Shadlen & Newsome, 2001). Neurons in these regions appear to act as integrators of sensory information, initiating the trained behavior whenever a threshold of activation has been reached. We might be tempted to say, therefore, that the “belief” that a stimulus is moving to the left is located in the lateral intraparietal area, the frontal eye fields, and the superior colliculus—as these are the brain regions responsible for initiating eye movements. But here we are talking about the “beliefs” of a monkey—a monkey that has been trained to reproduce a stereotyped response to a specific stimulus in expectation of an immediate reward. This is not the kind of “belief” that has been the subject of my research.
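The integrate-to-threshold behavior just described is often formalized as a drift-diffusion process. Here is a minimal sketch of that idea, with the drift rate, noise level, and threshold chosen arbitrarily for illustration (Python):

import random

def decide(drift=0.1, noise=1.0, threshold=30.0):
    """Accumulate noisy evidence until one of two bounds is crossed."""
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + random.gauss(0.0, noise)
        steps += 1
    return ("preferred", steps) if evidence > 0 else ("other", steps)

# With even a small positive drift, the accumulator almost always reaches
# the favored bound -- the "trained behavior" fires once threshold is hit.
choices = [decide()[0] for _ in range(1_000)]
print("fraction choosing the drift-favored direction:",
      choices.count("preferred") / 1_000)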

The literature on decision making has generally sought to address the link between voluntary action, error detection, and reward. Insofar as the brain’s reward system involves a prediction that a specific behavior will lead to future reward, we might say that this is a matter of belief formation—but there is nothing to indicate that such beliefs are explicit, linguistically mediated, or propositional. We know that they cannot be, as most studies of reward processing have been done in rodents, monkeys, titmice, and pigeons. This literature has investigated the link between sensory judgments and motor responses, not the difference between belief and disbelief in matters of propositional truth. This is not to minimize the fascinating progress that has occurred in this field. In fact, the same economic modeling that allows behavioral ecologists to account for the foraging behavior of animal groups also allows neurophysiologists to describe the activity of the neuronal assemblies that govern an individual animal’s response to differential rewards (Glimcher, 2002). There is also a growing literature on neuroeconomics, which examines human decision making (as well as trust and reciprocity) using neuroimaging. Some of these findings are discussed here.

62. This becomes especially feasible using more sophisticated techniques of data analysis, like multivariate pattern classification (Cox & Savoy, 2003; P. K. Douglas, Harris, & Cohen, 2009). Most analyses of fMRI data are univariate and merely look for correlations between the activity at each point in the brain and the task paradigm. This approach ignores the interrelationships that surely exist between regions. Cox and Savoy demonstrated that a multivariate approach, in which statistical pattern recognition methods are used to look for correlations across all regions, allows for a very subtle analysis of fMRI data in a way that is far more sensitive to distributed patterns of activity (Cox & Savoy, 2003). With this approach, they were able to determine which visual stimulus a subject was viewing (out of ten possible types) by examining a mere 20 seconds of his experimental run.

Pamela Douglas, a graduate student in Mark Cohen’s cognitive neuroscience lab at UCLA, recently took a similar approach to analyzing my original belief data (P. K. Douglas, Harris, & Cohen, 2009). She built a machine-learning classifier by first performing an unsupervised independent component (IC) analysis on each of our subjects’ three scanning sessions. She then selected the IC time-course values that corresponded to the maximum value of the hemodynamic response function (HRF) following either “belief” or “disbelief” events. These values were fed into a selection process, whereby ICs that were “good predictors” were promoted as features for training a Naïve Bayes classifier. To test the accuracy of her classification, Douglas performed a leave-one-out cross-validation. By this measure, her Naïve Bayes classifier correctly labeled the “left out” trial 90 percent of the time. Given such results, it does not seem far-fetched that, with further refinements in both hardware and techniques of data analysis, fMRI could become a means of accurate lie detection.
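The shape of such a pipeline can be sketched in a few lines of Python. Everything here is a simplification: the data are synthetic, the feature-selection step of the actual study is omitted, and the array shapes are invented for the example; the published analysis should be consulted for the real details:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Synthetic stand-in for preprocessed fMRI data: one row per trial
# ("belief" or "disbelief" event), one column per voxel.
n_trials, n_voxels, n_components = 60, 500, 10
X_voxels = rng.standard_normal((n_trials, n_voxels))
y = rng.integers(0, 2, size=n_trials)  # 1 = belief, 0 = disbelief

# Unsupervised step: reduce the voxel data to independent component
# scores, a rough analogue of sampling IC time courses at the peak of
# the hemodynamic response for each event. (For simplicity, the ICA is
# fit once on all trials; the published pipeline was more involved.)
ica = FastICA(n_components=n_components, random_state=0)
X_features = ica.fit_transform(X_voxels)

# Supervised step with leave-one-out cross-validation: train a Naive
# Bayes classifier on all trials but one, test on the held-out trial,
# and repeat for every trial.
scores = cross_val_score(GaussianNB(), X_features, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```

Even at this toy scale, the logic is the same: unsupervised decomposition supplies the features, and a supervised classifier is scored only on trials it has never seen.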

63. Holden, 2001.

64. Broad, 2002.

65. Pavlidis, Eberhardt, & Levine, 2002.

66. Allen & Iacono, 1997; Farwell & Donchin, 1991. Spence et al. (2001) appear to have published the first neuroimaging study on deception. Their research suggests that “deception” is associated with bilateral increases in activity in the ventrolateral prefrontal cortex (BA 47), a region often associated with response inhibition and the suppression of inappropriate behavior (Goldberg, 2001).

The results of the Spence study were subject to some obvious limitations, however, perhaps the most glaring being that the subjects were told precisely when to lie by means of a visual cue. Needless to say, this did much to rob the experiment of verisimilitude. The natural ecology of deception is one in which a potential liar must notice when questions draw near to factual terrain that he is committed to keeping hidden, and he must lie as the situation warrants, while respecting the criteria for logical coherence and consistency that he and his interlocutor share. (It is worth noting that unless one respects the norms of reasoning and belief formation, it is impossible to lie successfully. This is not an accident.) To be asked to lie automatically in response to a visual cue simply does not simulate ordinary acts of deception. Spence et al. did much to remedy this problem in a subsequent study, in which subjects could lie at their own discretion about topics related to their personal histories (Spence, Kaylor-Hughes, Farrow, & Wilkinson, 2008). This study largely replicated their earlier findings with respect to the primary involvement of the ventrolateral PFC (though now almost entirely in the left hemisphere). There have been other neuroimaging studies of deception—as “guilty knowledge” (Langleben et al., 2002), “feigned memory impairment” (Lee et al., 2005), etc.—but the challenge, apart from reliably finding the neural correlates of any of these states, is to find a result that generalizes to all forms of deception.

It is not entirely obvious that these studies have given us a sound basis for detecting deception through neuroimaging. Focusing on the neural correlates of belief and disbelief might render irrelevant whatever differences exist between types of deception, modes of stimulus presentation, etc. Is there a difference, for instance, between denying what is true and asserting what is false? Recasting the question in terms of a proposition to be believed or disbelieved might circumvent any problem posed by the “directionality” of a lie. Another group (Abe et al., 2006) took steps to address the directionality issue by asking subjects to alternately deny true knowledge and assert false knowledge. However, this study suffered from the usual limitations, in that subjects were directed when to lie, and their lies were limited to whether they had previously viewed an experimental stimulus.

A functional neuroanatomy of belief might also add to our understanding of the placebo response—which can be both profound and profoundly unhelpful to the process of vetting pharmaceuticals. For instance, 65 percent to 80 percent of the effect of antidepressant medication seems attributable to positive expectation (Kirsch, 2000). There are even forms of surgery that, while effective, are no more effective than sham procedures (Ariely, 2008). While some neuroimaging work has been done in this area, the placebo response is currently operationalized in terms of symptom relief, without reference to a subject’s underlying state of mind (Lieberman et al., 2004; Wager et al., 2004). Finding the neural correlates of belief might allow us to eventually control for this effect during the process of drug design.

67. Stoller & Wolpe, 2007.

68. Grann, 2009.

69. There are, however, reasons to doubt that our current methods of neuroimaging, like fMRI, will yield a practical mind-reading technology. Functional MRI studies as a group have several important limitations. Perhaps first and most important are those of statistical power and sensitivity. If one chooses to analyze one’s data at extremely conservative thresholds to exclude the possibility of type I (false positive) detection errors, this necessarily increases one’s type II (false negative) error. Further, most studies implicitly assume uniform detection sensitivity throughout the brain, a condition known to be violated for the low-bandwidth, fast-imaging scans used for fMRI. Field inhomogeneity also tends to increase the magnitude of motion artifacts. When motion is correlated to the stimuli, this can produce false positive activations, especially in the cortex.
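The tradeoff between type I and type II errors is easy to demonstrate with simulated data. In the sketch below, every number (voxel count, effect size, the two thresholds) is invented purely for illustration: the conservative threshold all but eliminates false positives while missing most genuinely active voxels, and the lenient one does the reverse:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulate one z-statistic per voxel: 5 percent of voxels carry a
# true effect of (illustrative) size 3.0; the rest are pure noise.
n_voxels, effect = 10_000, 3.0
active = rng.random(n_voxels) < 0.05
z = rng.standard_normal(n_voxels) + effect * active

for alpha in (1e-2, 1e-6):  # lenient vs. very conservative threshold
    cutoff = norm.isf(alpha)  # one-tailed z cutoff for this alpha
    detected = z > cutoff
    type_one = np.sum(detected & ~active)   # false positives
    type_two = np.sum(~detected & active)   # misses (false negatives)
    print(f"alpha={alpha:g} (z > {cutoff:.2f}): "
          f"{type_one} false positives, {type_two} misses")
```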

We may also discover that the underlying physics of neuroimaging grants only so much scope for human ingenuity. If so, an era of cheap, covert lie detection might never dawn, and we will be forced to rely upon some relatively costly, cumbersome technology. Even so, I think it safe to say that the time is not far off when lying, on the weightiest matters—in court, before a grand jury, during important business negotiations, etc.—will become a practical impossibility. This fact will be widely publicized, of course, and the relevant technology will be expected to be in place, or accessible, whenever the stakes are high. This very assurance, rather than the incessant use of these machines, will change us.

70. Ball, 2009.

71. Pizarro & Uhlmann, 2008.

72. Kahneman, 2003.

73. Rosenhan, 1973.

74. McNeil, Pauker, Sox, & Tversky, 1982.

75. There are other reasoning biases that can affect medical decisions. It is well known, for instance, that the presence of two similar options can create “decisional conflict,” biasing a choice in favor of a third alternative. In one experiment, neurologists and neurosurgeons were asked to determine which patients to admit to surgery first. Half the subjects were given a choice between a woman in her early fifties and a man in his seventies. The other half were given the same two patients, plus another woman in her fifties who was difficult to distinguish from the first: 38 percent of doctors chose to operate on the older man in the first scenario; 58 percent chose him in the second (LeBoeuf & Shafir, 2005). This is a bigger change in outcomes than might be apparent at first glance: in the first case, the woman’s chance of getting the surgery is 62 percent; in the second, the remaining 42 percent is split between two nearly indistinguishable women, leaving the original woman with a 21 percent chance.
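As a check on that arithmetic (a trivial sketch; the even split between the two similar women is my assumption, implied but not reported by the study):

```python
p_man_1 = 0.38                 # scenario 1: man vs. one woman
p_woman_1 = 1 - p_man_1        # -> 0.62

p_man_2 = 0.58                 # scenario 2: man vs. two similar women
p_woman_2 = (1 - p_man_2) / 2  # assume the women split the rest evenly -> 0.21

print(p_woman_1, p_woman_2)    # 0.62 0.21
```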

Chapter 4: Religion

1. Marx, [1843] 1971.

2. Freud, [1930] 1994; Freud & Strachey, [1927] 1975.

3. Weber, [1922] 1993.

4. Zuckerman, 2008.

5. Norris & Inglehart, 2004.

6. Finke & Stark, 1998.

7. Norris & Inglehart, 2004, p. 108.

8. It does not seem, however, that socioeconomic inequality explains religious extremism in the Muslim world, where radicals are, on average, wealthier and more educated than moderates (Atran, 2003; Esposito, 2008).

9. http://pewglobal.org/reports/display.php?ReportID=258.

10. http://pewforum.org/surveys/campaign08/.

11. Pyysiäinen & Hauser, 2010.

12. Zuckerman, 2008.

13. Paul, 2009.

14. Hall, Matz, & Wood, 2010.

15. Decades of cross-cultural research on “subjective well-being” (SWB) by the World Values Survey (www.worldvaluessurvey.org) indicate that religion may make an important contribution to human happiness and life satisfaction at low levels of societal development, security, and freedom. The happiest and most secure societies, however, tend to be the most secular. The greatest predictors of a society’s mean SWB are social tolerance (of homosexuals, gender equality, other religions, etc.) and personal freedom (Inglehart, Foa, Peterson, & Welzel, 2008). Of course, tolerance and personal freedom are directly linked, and neither seems to flourish under the shadow of orthodox religion.

16. Paul, 2009.

17. Culotta, 2009.

18. Buss, 2002.

19. I am indebted to the biologist Jerry Coyne for pointing this out (personal communication). The neuroscientist Mark Cohen has further observed (personal communication), however, that many traditional societies are far more tolerant of male promiscuity than of female promiscuity—for instance, the sanction for being raped has often been as bad as, or worse than, that for initiating a rape. Cohen speculates that in such cases religion may offer a post-hoc justification for a biological imperative. This may be so. I would only add that here, as elsewhere, the task of maximizing human well-being is clearly separable from Pleistocene biological imperatives.

20. Foster & Kokko, 2008.

21. Fincher, Thornhill, Murray, & Schaller, 2008.

22. Dawkins, 1994; D. Dennett, 1994; D. C. Dennett, 2006; D. S. Wilson & Wilson, 2007; E. O. Wilson, 2005; E. O. Wilson & Hölldobler, 2005, pp. 169–172; Dawkins, 2006.

23. Boyer, 2001; Durkheim & Cosman, [1912] 2001.

24. Stark, 2001, pp. 180–181.

25. Livingston, 2005.

26. Dennett, 2006.

27. http://pewforum.org/docs/?DocID=215.

28. http://pewforum.org/docs/?DocID=153.

29. Boyer, 2001, p. 302.

30. Barrett, 2000.

31. Bloom, 2004.

32. Brooks, 2009.

33. E. M. Evans, 2001.

34. Hood, 2009.

35. D’Onofrio, Eaves, Murrelle, Maes, & Spilka, 1999.

36. Previc, 2006.

37. In addition, the densities of a specific type of serotonin receptor have been inversely correlated with high scores on the “spiritual acceptance” subscale of the Temperament and Character Inventory (J. Borg, Andree, Soderstrom, & Farde, 2003).

38. Asheim Hansen & Brodtkorb, 2003; Blumer, 1999; Persinger & Fisher, 1990.

39. Brefczynski-Lewis, Lutz, Schaefer, Levinson, & Davidson, 2007; Lutz, Brefczynski-Lewis, Johnstone, & Davidson, 2008; Lutz, Greischar, Rawlings, Ricard, & Davidson, 2004; Lutz, Slagter, Dunne, & Davidson, 2008; A. Newberg et al., 2001.

40. Anastasi & Newberg, 2008; Azari et al., 2001; A. Newberg, Pourdehnad, Alavi, & d’Aquili, 2003; A. B. Newberg, Wintering, Morgan, & Waldman, 2006; Schjoedt, Stodkilde-Jorgensen, Geertz, & Roepstorff, 2008, 2009.

41. S. Harris et al., 2008.

42. Kapogiannis et al., 2009.

43. S. Harris et al., 2009.

44. D’Argembeau et al., 2008; Moran, Macrae, Heatherton, Wyland, & Kelley, 2006; Northoff et al., 2006; Schneider et al., 2008.

45. Bechara et al., 2000.

46. Hornak et al., 2004; O’Doherty et al., 2003; Rolls, Grabenhorst, & Parris, 2008.

47. Matsumoto & Tanaka, 2004.

48. A direct comparison of belief minus disbelief in Christians and nonbelievers did not show any significant group differences for nonreligious stimuli. For religious stimuli, there were additional regions of the brain that did differ by group; however, these results seem best explained by a common reaction in both groups to statements that violate religious doctrines (i.e., “blasphemous” statements).

The opposite contrast, disbelief minus belief, yielded increased signal in the superior frontal sulcus and the precentral gyrus. The engagement of these areas is not readily explained on the basis of prior work. However, a region-of-interest analysis revealed increased signal in the insula for this contrast. This partially replicates our previous finding for this contrast and supports the work of Kapogiannis et al., who also found signal in the insula to be correlated with the rejection of religious statements deemed false. The significance of the anterior insula for negative affect/appraisal has been discussed above. Because Kapogiannis et al. did not include a nonreligious control condition in their experiment, they interpreted the insula’s recruitment as a sign that violations of religious doctrine might provoke “aversion, guilt, or fear of loss” in people of faith. Our prior work suggests, however, that the insula is active during disbelief generally.

In our study, Christians appeared to make the largest contribution to the insula signal bilaterally, while the pooled data from both groups produced signal in the left hemisphere exclusively. Kapogiannis et al. also found that religious subjects produced bilateral insula signal on disbelief trials, while data from both believers and nonbelievers yielded signal only on the left. Taken together, these findings suggest that there may be a group difference between religious believers and nonbelievers with respect to insular activity. Indeed, Inbar et al. found that heightened feelings of disgust predict socially conservative attitudes (in their study, self-reported disapproval of homosexuality) (Inbar, Pizarro, Knobe, & Bloom, 2009). Our finding of bilateral insula signal for this contrast in our first study might be explained by the fact that we did not control for religious belief (or political orientation) during recruitment. Given the rarity of nonbelievers in the United States, even on college campuses, one would expect that most of the subjects in our first study possessed some degree of religious faith.

49. We obtained these results, despite the fact that our two groups accepted and rejected diametrically opposite statements in half of our experimental trials. This would seem to rule out the possibility that our data could be explained by any property of the stimuli apart from their being deemed “true” or “false” by the participants in our study.

50. Wager et al., 2004.

51. T. Singer et al., 2004.

52. Royet et al., 2003; Wicker et al., 2003.

53. Izuma, Saito, & Sadato, 2008.

54. Another key region that appears to be preferentially engaged by religious thinking is the posterior medial cortex. This area is part of the “resting state” network that shows greater activity during both rest and self-referential tasks (Northoff et al., 2006). It is possible that one difference between responding to religious and nonreligious stimuli is that, for both groups, a person’s answers serve to affirm his or her identity: i.e., for every religious trial, Christians were explicitly affirming their religious worldview, while nonbelievers were explicitly denying the truth claims of religion.

The opposite contrast, nonreligious minus religious statements, produced greater signal in left hemisphere memory networks, including the hippocampus, the parahippocampal gyrus, middle temporal gyrus, temporal pole, and retrosplenial cortex. It is well known that the hippocampus and the parahippocampal gyrus are involved in memory retrieval (Diana, Yonelinas, & Ranganath, 2007). The anterior temporal lobe is also engaged by semantic memory tasks (K. Patterson, Nestor, & Rogers, 2007), and the retrosplenial cortex displays especially strong reciprocal connectivity with structures in the medial temporal lobe (Buckner, Andrews-Hanna, & Schacter, 2008). Thus, judgments about the nonreligious stimuli presented in our study seemed more dependent upon those brain systems involved in accessing stored knowledge.

Among our religious stimuli, the subset of statements that ran counter to Christian doctrine yielded greater signal for both groups in several brain regions, including the ventral striatum, paracingulate cortex, middle frontal gyrus, the frontal poles, and inferior parietal cortex. These regions showed greater signal both when Christians rejected stimuli contrary to their doctrine (e.g., The Biblical god is a myth) and when nonbelievers affirmed the truth of those same statements. In other words, these brain areas responded preferentially to “blasphemous” statements in both subject groups. The ventral striatum signal in this contrast suggests that decisions about these stimuli may have been more rewarding for both groups: Nonbelievers may take special pleasure in making assertions that explicitly negate religious doctrine, while Christians may enjoy rejecting such statements as false.

55. Festinger, Riecken, & Schachter, [1956] 2008.

56. Atran, 2006a.

57. Atran, 2007.

58. Bostom, 2005; Butt, 2007; Ibrahim, 2007; Oliver & Steinberg, 2005; Rubin, 2009; Shoebat, 2007.

59. Atran, 2006b.

60. Gettleman, 2008.

61. Ariely, 2008, p. 177.

62. Pierre, 2001.

63. Larson & Witham, 1998.

64. Twenty-one percent of American adults (and 14 percent of those born on American soil) are functionally illiterate (www.nifl.gov/nifl/facts/reading_facts.html), while only 3 percent of Americans agree with the statement “I don’t believe in God.” Despite their near invisibility, atheists are the most stigmatized minority in the United States—more so than homosexuals, African Americans, Jews, Muslims, Asians, or any other group. Even after September 11, 2001, more Americans would vote for a Muslim for president than would vote for an atheist (Edgell, Gerteis, & Hartmann, 2006).

65. Morse, 2009.

66. And if there were a rider to this horse, he would be entirely without structure and oblivious to the details of perception, cognition, emotion, and intention that owe their existence to electrochemical activity in specific regions of the brain. If there is a “pure consciousness” that might occupy such a role, it will bear little resemblance to what most religious people mean by a “soul.” A soul this diaphanous would be just as at home in the brain of a hyena (and seems just as likely to be there) as it would in the brain of a human being.

67. Levy (2007) poses the same question.

68. Collins, 2006.

69. It is worth recalling in this context that it is, in fact, possible for an established scientist to destroy his career by saying something stupid. James Watson, the codiscoverer of the structure of DNA, a Nobel laureate, and the original head of the Human Genome Project, recently accomplished this feat by asserting in an interview that people of African descent appear to be innately less intelligent than white Europeans (Hunte-Grubbe, 2007). A few sentences, spoken off the cuff, resulted in academic defenestration: lecture invitations were revoked, award ceremonies canceled, and Watson was forced to immediately resign his post as chancellor of Cold Spring Harbor Laboratory.

Watson’s opinions on race are disturbing, but his underlying point was not, in principle, unscientific. There may very well be detectable differences in intelligence between races. Given the genetic consequences of a population living in isolation for tens of thousands of years, it would be very surprising if there were no differences between racial or ethnic groups waiting to be discovered. I say this not to defend Watson’s fascination with race, or to suggest that such race-focused research might be worth doing. I am merely observing that there is, at least, a possible scientific basis for his views. While Watson’s statement was obnoxious, one cannot say that his views are utterly irrational or that, by merely giving voice to them, he has repudiated the scientific worldview and declared himself immune to its further discoveries. Such a distinction would have to be reserved for Watson’s successor at the Human Genome Project, Dr. Francis Collins.

70. Collins, 2006, p. 225.

71. Van Biema, 2006; Paulson, 2006.

72. Editorial, 2006.

73. Collins, 2006, p. 178.

74. Ibid., pp. 200–201.

75. Ibid., p. 119.

76. It is true that the mysterious effectiveness of mathematics for describing the physical world has lured many scientists to mysticism, philosophical Platonism, and religion. The physicist Eugene Wigner famously posed the problem in a paper entitled “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” (Wigner, 1960). While I’m not at all sure that it exhausts this mystery, I think there is something to be said for Craik’s idea (Craik, 1943) that an isomorphism between brain processes and the processes in the world that they represent might account for the utility of numbers and certain mathematical operations. Is it really so surprising that certain patterns of brain activity (i.e., numbers) can map reliably onto the world?

77. Collins also has a terrible tendency to cherry-pick and misrepresent the views of famous scientists like Stephen Hawking and Albert Einstein. For instance, he writes:

Even Albert Einstein saw the poverty of a purely naturalistic worldview. Choosing his words carefully, he wrote, “science without religion is lame, religion without science is blind.”

The one choosing words carefully here is Collins. As we saw above, when read in context (Einstein, 1954, pp. 41–49), this quote reveals that Einstein did not in the least endorse theism and that his use of the word “God” was a poetical way of referring to the laws of nature. Einstein had occasion to complain about such deliberate distortions of his work:

It was, of course, a lie what you read about my religious convictions, a lie which is being systematically repeated. I do not believe in a personal God and I have never denied this but have expressed it clearly. If something is in me which can be called religious then it is the unbounded admiration for the structure of the world so far as our science can reveal it (cited in R. Dawkins, 2006, p. 36).

78. Wright, 2003, 2008.

79. Polkinghorne, 2003; Polkinghorne & Beale, 2009.

80. Polkinghorne, 2003, pp. 22–23.

81. In 1996, the physicist Alan Sokal submitted the nonsense paper “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity” to the journal Social Text. While the paper was patently insane, this journal, which still stands “at the forefront of cultural theory,” avidly published it. The text is filled with gems like the following:

[T]he discourse of the scientific community, for all its undeniable value, cannot assert a privileged epistemological status with respect to counter-hegemonic narratives emanating from dissident or marginalized communities … In quantum gravity, as we shall see, the space-time manifold ceases to exist as an objective physical reality; geometry becomes relational and contextual; and the foundational conceptual categories of prior science—among them, existence itself—become problematized and relativized. This conceptual revolution, I will argue, has profound implications for the content of a future postmodern and liberatory science (Sokal, 1996, p. 218).

82. Ehrman, 2005. Bible scholars agree that the earliest Gospels were written decades after the life of Jesus. We don’t have the original texts of any of the Gospels. What we have are copies of copies of copies of ancient Greek manuscripts that differ from one another in literally thousands of places. Many show signs of later interpolation—which is to say that people have added passages to these texts over the centuries, and these passages have found their way into the canon. In fact, there are whole sections of the New Testament, like the Book of Revelation, that were long considered spurious, that were included in the Bible only after many centuries of neglect; and there are other books, like the Shepherd of Hermas, that were venerated as part of the Bible for hundreds of years only to be rejected finally as false scripture. Consequently, it is true to say that generations of Christians lived and died having been guided by scripture that is now deemed to be both mistaken and incomplete by the faithful. In fact, to this day, Roman Catholics and Protestants cannot agree on the full contents of the Bible. Needless to say, such a haphazard and all-too-human process of cobbling together the authoritative word of the Creator of the Universe seems a poor basis for believing that the miracles of Jesus actually occurred.

The philosopher David Hume made a very nice point about believing in miracles on the basis of testimony: “No testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous, than the fact, which it endeavours to establish …” (Hume, 1996, vol. IV, p. 131). This is a good rule of thumb. Which is more likely, that Mary, the mother of Jesus, would have sex outside of wedlock and then feel the need to lie about it, or that she would conceive a child through parthenogenesis the way aphids and Komodo dragons do? On the one hand, we have the phenomenon of lying about adultery—in a context where the penalty for adultery is death—and on the other, we have a woman spontaneously mimicking the biology of certain insects and reptiles. Hmm …
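Hume's rule of thumb also admits a simple probabilistic paraphrase. The following gloss is my own formalization, not Hume's notation or anything in the sources cited here: testimony should persuade us of a miracle only when a false report would be the greater improbability.

```latex
% A Bayesian gloss on Hume's maxim (an illustrative formalization):
% testimony T should make miracle M credible only if
\[
  P(M \mid T) > P(\neg M \mid T)
  \quad\Longleftrightarrow\quad
  P(T \mid M)\,P(M) > P(T \mid \neg M)\,P(\neg M),
\]
% i.e., only if a false or mistaken report (T occurring without M)
% is itself less probable than the miracle.
```

On any sober assignment of priors, a lie about adultery wins this comparison over human parthenogenesis by many orders of magnitude.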

83. Editorial, 2008.

84. Maddox, 1981.

85. Sheldrake, 1981.

86. I have publicly lamented this double standard on a number of occasions (S. Harris, 2007a; S. Harris & Ball, 2009).

87. Collins, 2006, p. 23.

88. Langford et al., 2006.

89. Masserman et al., 1964.

90. Our picture of chimp notions of fairness is somewhat muddled. There is no question that they notice inequity, but they do not seem to care if they profit from it (Brosnan, 2008; Brosnan, Schiff, & de Waal, 2005; Jensen, Call, & Tomasello, 2007; Jensen, Hare, Call, & Tomasello, 2006; Silk et al., 2005).

91. Range et al., 2009.

92. Siebert, 2009.

93. Silver, 2006, p. 157.

94. Ibid., p. 162.

95. Collins, 2006.

96. Of course, I also received much support, especially from scientists, and even from scientists at the NIH.

97. Miller, it should be noted, is also a believing Christian and the author of Finding Darwin’s God (K. R. Miller, 1999). For all its flaws, this book contains an extremely useful demolition of “intelligent design.”

98. C. Mooney & S. Kirshenbaum, 2009, pp. 97–98.

99. The claim is ubiquitous, even at the highest levels of scientific discourse. From a recent editorial in Nature, insisting on the reality of human evolution:

The vast majority of scientists, and the majority of religious people, see little potential for pleasure or progress in the conflicts between religion and science that are regularly fanned into flame by a relatively small number on both sides of the debate. Many scientists are religious, and perceive no conflict between the values of their science—values that insist on disinterested, objective inquiry into the nature of the Universe—and those of their faith (Editorial, 2007).

From the National Academy of Sciences:

Science can neither prove nor disprove religion … Many scientists have written eloquently about how their scientific studies have increased their awe and understanding of a creator … The study of science need not lessen or compromise faith (National Academy of Sciences [U.S.] & Institute of Medicine [U.S.], 2008, p. 54).

Chapter 5: The Future of Happiness

1. Allen, 2000.

2. Los Angeles Times, July 5, 1910.

3. As indicated above, I think it is reasonably clear that concerns about angering God and/or suffering an eternity in hell are based on specific notions of harm. Not believing in God or hell leaves one blissfully unconcerned about such liabilities. Under Haidt’s analysis, concerns about God and the afterlife would seem to fall under the categories of “authority” and/or “purity.” I think such assignments needlessly parcel what is, at bottom, a more general concern about harm.

4. Inbar et al., 2009.

5. Schwartz, 2004.

6. D. T. Gilbert, 2006.

7. www.ted.com/talks/daniel_kahneman_the_riddle_of_experience_vs_memory.html.

8. Ibid.

9. Lykken & Tellegen, 1996.

10. D. T. Gilbert, 2006, pp. 220–222.

11. Simonton, 1994.

12. Rilling et al., 2002.