The Moral Landscape: How Science Can Determine Human Values - Sam Harris (2010)

Chapter 3. BELIEF

A candidate for the presidency of the United States once met a group of potential supporters at the home of a wealthy benefactor. After brief introductions, he spotted a bowl of potpourri on the table beside him. Mistaking it for a bowl of trail mix, he scooped up a fistful of this decorative debris—which consisted of tree bark, incense, flowers, pinecones, and other inedible bits of woodland—and delivered it greedily into his mouth.

What our hero did next went unreported (suffice it to say that he did not become the next president of the United States). We can imagine the psychology of the scene, however: the candidate wide-eyed in ambush, caught between the look of horror on his host’s face and the panic of his own tongue, having to quickly decide whether to swallow the vile material or disgorge it in full view of his audience. We can see the celebrities and movie producers feigning not to notice the great man’s gaffe and taking a sudden interest in the walls, ceiling, and floorboards of the room. Some were surely less discreet. We can imagine their faces from the candidate’s point of view: a pageant of ill-concealed emotion, ranging from amazement to schadenfreude.

All such responses, their personal and social significance, and their moment-to-moment physiological effects, arise from mental capacities that are distinctly human: the recognition of another’s intentions and state of mind, the representation of the self in both physical and social space, the impulse to save face (or to help others to save it), etc. While such mental states undoubtedly have analogs in the lives of other animals, we human beings experience them with a special poignancy. There may be many reasons for this, but one is clearly paramount: we alone, among all earth’s creatures, possess the ability to think and communicate with complex language.

The work of archeologists, paleoanthropologists, geneticists, and neuroscientists—not to mention the relative taciturnity of our primate cousins—suggests that human language is a very recent adaptation.1 Our species diverged from its common ancestor with the chimpanzees only 6.3 million years ago. And it now seems that the split with chimps may have been less than decisive, as comparisons between the two genomes, focusing on the greater-than-expected similarity of our X chromosomes, reveal that our species diverged, interbred for a time, and then diverged for good.2 Such rustic encounters notwithstanding, all human beings currently alive appear to have descended from a single population of hunter-gatherers that lived in Africa around 50,000 BCE. These were the first members of our species to exhibit the technical and social innovations made possible by language.3

Genetic evidence indicates that a band of perhaps 150 of these people left Africa and gradually populated the rest of the earth. Their migration would not have been without its hardships, however, as they were not alone: Homo neanderthalensis laid claim to Europe and the Middle East, and Homo erectus occupied Asia. Both were species of archaic humans that had developed along separate evolutionary paths after one or more prior migrations out of Africa. Both possessed large brains, fashioned stone tools similar to those of Homo sapiens, and were well armed. And yet over the next twenty thousand years, our ancestors gradually displaced, and may have physically eradicated, all rivals.4 Given the larger brains and sturdier build of the Neanderthals, it seems reasonable to suppose that only our species had the advantage of fully symbolic, complex speech.5

While there is still controversy over the biological origins of human language, as well as over its likely precursors in the communicative behavior of other animals,6 there is no question that syntactic language lies at the root of our ability to understand the universe, to communicate ideas, to cooperate with one another in complex societies, and to build (one hopes) a sustainable, global civilization.7 But why has language made such a difference? How has the ability to speak (and, more recently, to read and write) given modern humans a greater purchase on the world? What, after all, has been worth communicating these last 50,000 years? I hope it will not seem philistine of me to suggest that our ability to create fiction has not been the driving force here. The power of language surely results from the fact that it allows mere words to substitute for direct experience and mere thoughts to simulate possible states of the world. Utterances like, “I saw some very scary guys in front of that cave yesterday,” would have come in quite handy 50,000 years ago. The brain’s capacity to accept such propositions as true—as valid guides to behavior and emotion, as predictive of future outcomes, etc.—explains the transformative power of words. There is a common term we use for this type of acceptance; we call it “belief.”8

What Is “Belief”?

It is surprising that so little research has been done on belief, as few mental states exert so sweeping an influence over human life. While we often make a conventional distinction between “belief” and “knowledge,” these categories are actually quite misleading. Knowing that George Washington was the first president of the United States and believing the statement “George Washington was the first president of the United States” amount to the same thing. When we distinguish between belief and knowledge in ordinary conversation, it is generally for the purpose of drawing attention to degrees of certainty: I’m apt to say “I know it” when I am quite certain that one of my beliefs about the world is true; when I’m less sure, I may say something like “I believe it is probably true.” Most of our knowledge about the world falls between these extremes. The entire spectrum of such convictions—ranging from better-than-a-coin-toss to I-would-bet-my-life-on-it—expresses gradations of “belief.”

It is reasonable to wonder, however, whether “belief” is really a single phenomenon at the level of the brain. Our growing understanding of human memory should make us cautious: over the last fifty years, the concept of “memory” has decomposed into several forms of cognition that are now known to be neurologically and evolutionarily distinct.9 This should make us wonder whether a notion like “belief” might not also shatter into separate processes when mapped onto the brain. In fact, belief overlaps with certain types of memory, as memory can be equivalent to a belief about the past (e.g., “I had breakfast most days last week”),10 and certain beliefs are indistinguishable from what is often called “semantic memory” (e.g., “The earth is the third planet from the sun”).

There is no reason to think that any of our beliefs about the world are stored as propositions, or within discrete structures, inside the brain.11 Merely understanding a simple proposition often requires the unconscious activation of considerable background knowledge12 and an active process of hypothesis testing.13 For instance, a sentence like “The team was terribly disappointed because the second stage failed to fire,” while easy enough to read, cannot be understood without some general concept of a rocket launch and a team of engineers. So there is more to even basic communication than the mere decoding of words. We must expect that a similar penumbra of associations will surround specific beliefs as well.

And yet our beliefs can be represented and expressed as discrete statements. Imagine hearing any one of the following assertions from a trusted friend:

1. The CDC just announced that cell phones really do cause brain cancer.

2. My brother won $100,000 in Las Vegas over the weekend.

3. Your car is being towed.

We trade in such representations of the world all the time. The acceptance of such statements as true (or likely to be true) is the mechanism by which we acquire most of our knowledge about the world. While it would not make any sense to search for structures in the brain that correspond to specific sentences, we may be able to understand the brain states that allow us to accept such sentences as true.14 When someone says “Your car is being towed,” it is your acceptance of this statement as true that sends you racing out the door. “Belief,” therefore, can be thought of as a process taking place in the present; it is the act of grasping, not the thing grasped.

The Oxford English Dictionary defines multiple senses of the term “belief”:

1. The mental action, condition, or habit, of trusting to or confiding in a person or thing; trust, dependence, reliance, confidence, faith.

2. Mental acceptance of a proposition, statement, or fact as true, on the ground of authority or evidence; assent of the mind to a statement, or to the truth of a fact beyond observation, on the testimony of another, or to a fact or truth on the evidence of consciousness; the mental condition involved in this assent.

3. The thing believed; the proposition or set of propositions held true.

Definition 2 is exactly what we are after, and 1 may apply as well. These first two senses of the term describe a mental act or state, and are quite different from sense 3, which refers to the content of a belief—the proposition held true.

Consider the following claim: Starbucks does not sell plutonium. I suspect that most of us would be willing to wager a fair amount of money that this statement is generally true—which is to say that we believe it. However, before reading this statement, you are very unlikely to have considered the prospect that the world’s most popular coffee chain might also trade in one of the world’s most dangerous substances. Therefore, it does not seem possible for there to have been a structure in your brain that already corresponded to this belief. And yet you clearly harbored some representation of the world that amounts to this belief.

Many modes of information processing must lay the groundwork for us to judge the above statement as “true.” Most of us know, in a variety of implicit and explicit ways, that Starbucks is not a likely proliferator of nuclear material. Several distinct capacities—episodic memory, semantic knowledge, assumptions about human behavior and economic incentives, inductive reasoning, etc.—conspire to make us accept the above proposition. To say that we already believed that one cannot buy plutonium at Starbucks is to merely put a name to the summation of these processes in the present moment: that is, “belief,” in this case, is the disposition to accept a proposition as true (or likely to be).

This process of acceptance often does more than express our prior commitments, however. It can revise our view of the world in an instant. Imagine reading the following headline in tomorrow’s New York Times: “Most of the World’s Coffee Is Now Contaminated by Plutonium.” Believing this statement would immediately influence your thinking on many fronts, as well as your judgment about the truth of the former proposition. Most of our beliefs have come to us in just this form: as statements that we accept on the assumption that their source is reliable, or because the sheer number of sources rules out any significant likelihood of error.

In fact, everything we know outside of our personal experience is the result of our having encountered specific linguistic propositions—the sun is a star; Julius Caesar was a Roman emperor; broccoli is good for you—and found no reason (or means) to doubt them. It is “belief” in this form, as an act of acceptance, which I have sought to better understand in my neuroscientific research.15

Looking for Belief in the Brain

For a physical system to be capable of complex behavior, there must be some meaningful separation between its input and output. As far as we know, this separation has been most fully achieved in the frontal lobes of the human brain. Our frontal lobes are what allow us to select among a vast range of responses to incoming information in light of our prior goals and present inferences. Such “higher-level” control of emotion and behavior is the stuff of which human personalities are made. Clearly, the brain’s capacity to believe or disbelieve statements of fact—You left your wallet on the bar; that white powder is anthrax; your boss is in love with you—is central to the initiation, organization, and control of our most complex behaviors.

But we are not likely to find a region of the human brain devoted solely to belief. The brain is an evolved organ, and there does not seem to be a process in nature that allows for the creation of new structures dedicated to entirely novel modes of behavior or cognition. Consequently, the brain’s higher-order functions had to emerge from lower-order mechanisms. An ancient structure like the insula, for instance, helps monitor events in our gut, governing the perception of hunger and primary emotions like disgust. But it is also involved in pain perception, empathy, pride, humiliation, trust, music appreciation, and addictive behavior.16 It may also play an important role in both belief formation and moral reasoning. Such promiscuity of function is a common feature of many regions of the brain, especially in the frontal lobes.17

No region of the brain evolved in a neural vacuum or in isolation from the other mutations simultaneously occurring within the genome. The human mind, therefore, is like a ship that has been built and rebuilt, plank by plank, on the open sea. Changes have been made to her sails, keel, and rudder even as the waves battered every inch of her hull. And much of our behavior and cognition, even much that now seems essential to our humanity, has not been selected for at all. There are no aspects of brain function that evolved to hold democratic elections, to run financial institutions, or to teach our children to read. We are, in every cell, the products of nature—but we have also been born again and again through culture. Much of this cultural inheritance must be realized differently in individual brains. The way in which two people think about the stock market, recall that Christmas is a national holiday, or solve a puzzle like the Tower of Hanoi will almost surely differ. This poses an obvious challenge when attempting to identify mental states with specific brain states.18

Another factor that makes the strict localization of any mental state difficult is that the human brain is characterized by massive interconnectivity: it is mostly talking to itself.19 And the information it stores must also be more fine-grained than the concepts, symbols, objects, or states that we subjectively experience. Representation results from a pattern of activity across networks of neurons and does not generally entail stable, one-to-one mappings of things/events in the world, or concepts in the mind, to discrete structures in the brain.20 For instance, thinking a simple thought like Jake is married cannot be the work of any single node in a network of neurons. It must emerge from a pattern of connections among many nodes. None of this bodes well for one who would seek a belief “center” in the human brain.

As part of my doctoral research at UCLA, I studied belief, disbelief, and uncertainty with functional magnetic resonance imaging (fMRI).21 To do this, we had volunteers read statements from a wide variety of categories while we scanned their brains. After reading a proposition like “California is part of the United States” or “You have brown hair,” participants would judge it to be “true,” “false,” or “undecidable” with the click of a button. This was, to my knowledge, the first time anyone had attempted to study belief and disbelief with the tools of neuroscience. Consequently, we had no basis to form a detailed hypothesis about which regions of the brain govern these states of mind.22 It was, nevertheless, reasonable to expect that the prefrontal cortex (PFC) would be involved, given its wider role in controlling emotion and complex behavior.23

The seventeenth-century philosopher Spinoza thought that merely understanding a statement entails the tacit acceptance of its being true, while disbelief requires a subsequent process of rejection.24 Several psychological studies seem to support this conjecture.25 Understanding a proposition may be analogous to perceiving an object in physical space: we may accept appearances as reality until they prove otherwise. The behavioral data acquired in our research support this hypothesis, as subjects judged statements to be “true” more quickly than they judged them to be “false” or “undecidable.”26
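As a rough illustration of how such a behavioral result might be tested, here is a minimal sketch in Python. The reaction times are invented for the example, and a simple permutation test stands in for whatever statistics the study actually used; nothing here reproduces the published analysis.

```python
# A minimal sketch, with invented reaction times, of testing whether "true"
# judgments are faster than "false" judgments (a permutation test on means).
import random

random.seed(0)

# Hypothetical reaction times in milliseconds (illustrative values only).
rt_true = [random.gauss(850, 120) for _ in range(40)]   # "true" judgments
rt_false = [random.gauss(950, 120) for _ in range(40)]  # "false" judgments

observed_diff = sum(rt_false) / len(rt_false) - sum(rt_true) / len(rt_true)

# Permutation test: how often does a random relabeling of the trials produce
# a difference at least as large as the one observed?
pooled = rt_true + rt_false
extreme = 0
n_perm = 10_000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = sum(pooled[40:]) / 40 - sum(pooled[:40]) / 40
    if diff >= observed_diff:
        extreme += 1

print(f"'true' judged {observed_diff:.0f} ms faster; p ~ {extreme / n_perm:.4f}")
```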

When we compared the mental states of belief and disbelief, we found that belief was associated with greater activity in the medial prefrontal cortex (MPFC).27 This region of the frontal lobes is involved in linking factual knowledge with relevant emotional associations,28 in changing behavior in response to reward,29 and in goal-based actions.30 The MPFC is also associated with ongoing reality monitoring, and injuries here can cause people to confabulate—that is, to make patently false statements without any apparent awareness that they are not telling the truth.31 Whatever its cause in the brain, confabulation seems to be a condition in which belief processing has run amok. The MPFC has often been associated with self-representation,32 and one sees more activity here when subjects think about themselves than when they think about others.33

The greater activity we found in the MPFC for belief compared to disbelief may reflect the greater self-relevance and/or reward value of true statements. When we believe a proposition to be true, it is as though we have taken it in hand as part of our extended self: we are saying, in effect, “This is mine. I can use this. This fits my view of the world.” It seems to me that such cognitive acceptance has a distinctly positive emotional valence. We actually like the truth, and we may, in fact, dislike falsehood.34

The involvement of the MPFC in belief processing suggests an anatomical link between the purely cognitive aspects of belief and emotion/reward. Even judging the truth of emotionally neutral propositions engaged regions of the brain that are strongly connected to the limbic system, which governs our positive and negative affect. In fact, mathematical belief (e.g., “2 + 6 + 8 = 16”) showed a similar pattern of activity to ethical belief (e.g., “It is good to let your children know that you love them”), and these were perhaps the most dissimilar sets of stimuli used in our experiment. This suggests that the physiology of belief may be the same regardless of a proposition’s content. It also suggests that the division between facts and values does not make much sense in terms of underlying brain function.35

Of course, we can differentiate my argument concerning the moral landscape from my fMRI work on belief. I have argued that there is no gulf between facts and values, because values reduce to a certain type of fact. This is a philosophical claim, and as such, I can make it before ever venturing into the lab. However, my research on belief suggests that the split between facts and values should look suspicious: First, belief appears to be largely mediated by the MPFC, which seems to already constitute an anatomical bridge between reasoning and value. Second, the MPFC appears to be similarly engaged, irrespective of a belief’s content. This finding of content-independence challenges the fact/value distinction very directly: for if, from the point of view of the brain, believing “the sun is a star” is importantly similar to believing “cruelty is wrong,” how can we say that scientific and ethical judgments have nothing in common?

And we can traverse the boundary between facts and values in other ways. As we are about to see, the norms of reasoning seem to apply equally to beliefs about facts and to beliefs about values. In both spheres, evidence of inconsistency and bias is always unflattering. Similarities of this kind suggest that there is a deep analogy, if not identity, between the two domains.

The Tides of Bias

If one wants to understand how another person thinks, it is rarely sufficient to know whether or not he believes a specific set of propositions. Two people can hold the same belief for very different reasons, and such differences generally matter. In the year 2003, it was one thing to believe that the United States should not invade Iraq because the ongoing war in Afghanistan was more important; it was another to believe it because you think it is an abomination for infidels to trespass on Muslim land. Knowing what a person believes on a specific subject is not identical to knowing how that person thinks.

Decades of psychological research suggest that unconscious processes influence belief formation, and not all of them assist us in our search for truth. When asked to judge the probability that an event will occur, or the likelihood that one event caused another, people are frequently misled by a variety of factors, including the unconscious influence of extraneous information. For instance, if asked to recall the last four digits of their Social Security numbers and then asked to estimate the number of doctors practicing in San Francisco, the resulting numbers will show a statistically significant relationship. Needless to say, when the order of questions is reversed, this effect disappears.36

There have been a few efforts to put a brave face on such departures from rationality, construing them as random performance errors or as a sign that experimental subjects have misunderstood the tasks presented to them—or even as proof that research psychologists themselves have been beguiled by false norms of reasoning. But efforts to exonerate our mental limitations have generally failed. There are some things that we are just naturally bad at. And the mistakes people tend to make across a wide range of reasoning tasks are not mere errors; they are systematic errors that are strongly associated both within and across tasks. As one might expect, many of these errors decrease as cognitive ability increases.37 We also know that training, using both examples and formal rules, mitigates many of these problems and can improve a person’s thinking.38
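A toy simulation can make the Social Security example above concrete. The numbers below are invented, and the “drift toward the anchor” is simply assumed; the point is only to show how an irrelevant number, if it nudges later estimates even slightly, yields a statistically detectable correlation.

```python
# A toy simulation (invented numbers, not the cited experiment) of anchoring:
# an irrelevant four-digit number slightly biases a later, unrelated estimate,
# producing a measurable correlation between the two.
import random
import statistics

random.seed(1)

anchors, estimates = [], []
for _ in range(500):
    anchor = random.randint(0, 9999)           # last four digits of an SSN
    unbiased_guess = random.gauss(7000, 2000)  # what the person would say anyway
    # Assumption: the reported estimate drifts modestly toward the anchor.
    estimate = 0.8 * unbiased_guess + 0.2 * anchor + random.gauss(0, 500)
    anchors.append(anchor)
    estimates.append(estimate)

r = statistics.correlation(anchors, estimates)  # requires Python 3.10+
print(f"correlation between irrelevant anchor and estimate: r = {r:.2f}")
```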

Reasoning errors aside, we know that people often acquire their beliefs about the world for reasons that are more emotional and social than strictly cognitive. Wishful thinking, self-serving bias, in-group loyalties, and frank self-deception can lead to monstrous departures from the norms of rationality. Most beliefs are evaluated against a background of other beliefs and often in the context of an ideology that a person shares with others. Consequently, people are rarely as open to revising their views as reason would seem to dictate.

On this front, the internet has simultaneously enabled two opposing influences on belief: On the one hand, it has reduced intellectual isolation by making it more difficult for people to remain ignorant of the diversity of opinion on any given subject. But it has also allowed bad ideas to flourish—as anyone with a computer and too much time on his hands can broadcast his point of view and, often enough, find an audience. So while knowledge is increasingly open-source, ignorance is, too.

It is also true that the less competent a person is in a given domain, the more he will tend to overestimate his abilities. This often produces an ugly marriage of confidence and ignorance that is very difficult to correct for.39 Conversely, those who are more knowledgeable about a subject tend to be acutely aware of the greater expertise of others. This creates a rather unlovely asymmetry in public discourse—one that is generally on display whenever scientists debate religious apologists. For instance, when a scientist speaks with appropriate circumspection about controversies in his field, or about the limits of his own understanding, his opponent will often make wildly unjustified assertions about just which religious doctrines can be inserted into the space provided. Thus, one often finds people with no scientific training speaking with apparent certainty about the theological implications of quantum mechanics, cosmology, or molecular biology.

This point merits a brief aside: while it is a standard rhetorical move in such debates to accuse scientists of being “arrogant,” the level of humility in scientific discourse is, in fact, one of its most striking characteristics. In my experience, arrogance is about as common at a scientific conference as nudity. At any scientific meeting you will find presenter after presenter couching his or her remarks with caveats and apologies. When asked to comment on something that lies to either side of the very knife edge of their special expertise, even Nobel laureates will say things like, “Well, this isn’t really my area, but I would suspect that X is …” or “I’m sure there are several people in this room who know more about this than I do, but as far as I know, X is …” The totality of scientific knowledge now doubles every few years. Given how much there is to know, all scientists live with the constant awareness that whenever they open their mouths in the presence of other scientists, they are guaranteed to be speaking to someone who knows more about a specific topic than they do.

Cognitive biases cannot help but influence our public discourse. Consider political conservatism: this is a fairly well-defined perspective that is characterized by a general discomfort with societal change and a ready acceptance of social inequality. As simple as political conservatism is to describe, we know that it is governed by many factors. The psychologist John Jost and colleagues analyzed data from twelve countries, acquired from 23,000 subjects, and found this attitude to be correlated with dogmatism, inflexibility, death anxiety, and need for closure, and anticorrelated with openness to experience, cognitive complexity, self-esteem, and social stability.40 Even the manipulation of a single one of these variables can affect political opinions and behavior. For instance, merely reminding people of the fact of death increases their inclination to punish transgressors and to reward those who uphold cultural norms. One experiment showed that judges could be led to impose especially harsh penalties on prostitutes if they were simply prompted to think about death prior to their deliberations.41

And yet after reviewing the literature linking political conservatism to many obvious sources of bias, Jost and his coauthors reach the following conclusion:

Conservative ideologies, like virtually all other belief systems, are adopted in part because they satisfy various psychological needs. To say that ideological belief systems have a strong motivational basis is not to say that they are unprincipled, unwarranted, or unresponsive to reason and evidence.42

This has more than a whiff of euphemism about it. Surely we can say that a belief system known to be especially beholden to dogmatism, inflexibility, death anxiety, and a need for closure will be less principled, less warranted, and less responsive to reason and evidence than it would otherwise be.

This is not to say that liberalism isn’t also occluded by certain biases. In a recent study of moral reasoning,43 subjects were asked to judge whether it was morally correct to sacrifice the life of one person to save one hundred, while being given subtle clues as to the races of the people involved. Conservatives proved less biased by race than liberals and, therefore, more even-handed. Liberals, as it turns out, were very eager to sacrifice a white person to save one hundred nonwhites, but not the other way around—all the while maintaining that considerations of race had not entered into their thinking. The point, of course, is that science increasingly allows us to identify aspects of our minds that cause us to deviate from norms of factual and moral reasoning—norms which, when made explicit, are generally acknowledged to be valid by all parties.

There is a sense in which all cognition can be said to be motivated: one is motivated to understand the world, to be in touch with reality, to remove doubt, etc. Alternately, one might say that motivation is an aspect of cognition itself.44 Nevertheless, motives like wanting to find the truth, not wanting to be mistaken, etc., tend to align with epistemic goals in a way that many other commitments do not. As we have begun to see, all reasoning may be inextricable from emotion. But if a person’s primary motivation in holding a belief is to hew to a positive state of mind—to mitigate feelings of anxiety, embarrassment, or guilt, for instance—this is precisely what we mean by phrases like “wishful thinking” and “self-deception.” Such a person will, of necessity, be less responsive to valid chains of evidence and argument that run counter to the beliefs he is seeking to maintain. To point out nonepistemic motives in another’s view of the world, therefore, is always a criticism, as it serves to cast doubt upon a person’s connection to the world as it is.45

Mistaking Our Limits

We have long known, principally through the neurological work of Antonio Damasio and colleagues, that certain types of reasoning are inseparable from emotion.46 To reason effectively, we must have a feeling for the truth. Our first fMRI study of belief and disbelief seemed to bear this out.47 If believing a mathematical equation (vs. disbelieving another) and believing an ethical proposition (vs. disbelieving another) produce the same changes in neurophysiology, the boundary between scientific dispassion and judgments of value becomes difficult to establish.

However, such findings do not in the least diminish the importance of reason, nor do they blur the distinction between justified and unjustified belief. On the contrary, the inseparability of reason and emotion confirms that the validity of a belief cannot merely depend on the conviction felt by its adherents; it rests on the chains of evidence and argument that link it to reality. Feeling may be necessary to judge the truth, but it cannot be sufficient.

The neurologist Robert Burton argues that the “feeling of knowing” (i.e., the conviction that one’s judgment is correct) is a primary positive emotion that often floats free of rational processes and can occasionally become wholly detached from logical or sensory evidence.48 He infers this from neurological disorders in which subjects display pathological certainty (e.g., schizophrenia and Cotard’s delusion) and pathological uncertainty (e.g., obsessive-compulsive disorder). Burton concludes that it is irrational to expect too much of human rationality. On his account, rationality is mostly aspirational in character and often little more than a façade masking pure, unprincipled feeling.

Other neuroscientists have made similar claims. Chris Frith, a pioneer in the use of functional neuroimaging, recently wrote:

[W]here does conscious reasoning come into the picture? It is an attempt to justify the choice after it has been made. And it is, after all, the only way we have to try to explain to other people why we made a particular decision. But given our lack of access to the brain processes involved, our justification is often spurious: a post-hoc rationalization, or even a confabulation—a “story” born of the confusion between imagination and memory.49

I doubt Frith meant to deny that reason ever plays a role in decision making (though the title of his essay was “No One Really Uses Reason”). He has, however, conflated two facts about the mind: while it is true that all conscious processes, including any effort of reasoning, depend upon events of which we are not conscious, this does not mean that reasoning amounts to little more than a post hoc justification of brute sentiment. We are not aware of the neurological processes that allow us to follow the rules of algebra, but this doesn’t mean that we never follow these rules or that the role they play in our mathematical calculations is generally post hoc. The fact that we are unaware of most of what goes on in our brains does not render the distinction between having good reasons for what one believes and having bad ones any less clear or consequential. Nor does it suggest that internal consistency, openness to information, self-criticism, and other cognitive virtues are less valuable than we generally assume.

There are many ways to make too much of the unconscious underpinnings of human thought. For instance, Burton observes that one’s thinking on many moral issues—ranging from global warming to capital punishment—will be influenced by one’s tolerance for risk. In evaluating the problem of global warming, one must weigh the risk of melting the polar ice caps; in judging the ethics of capital punishment, one must consider the risk of putting innocent people to death. However, people differ significantly with respect to risk tolerance, and these differences appear to be governed by a variety of genes—including genes for the D4 dopamine receptor and the protein stathmin (which is primarily expressed in the amygdala). Believing that there can be no optimal degree of risk aversion, Burton concludes that we can never truly reason about such ethical questions. “Reason” will simply be the name we give to our unconscious (and genetically determined) biases. But is it really true to say that every degree of risk tolerance will serve our purposes equally well as we struggle to build a global civilization? Does Burton really mean to suggest that there is no basis for distinguishing healthy from unhealthy—or even suicidal—attitudes toward risk?

As it turns out, dopamine receptor genes may play a role in religious belief as well. People who have inherited the most active form of the D4 receptor are more likely to believe in miracles and to be skeptical of science; the least active forms correlate with “rational materialism.”50 Skeptics given the drug L-dopa, which increases dopamine levels, show an increased propensity to accept mystical explanations for novel phenomena.51 The fact that religious belief is both a cultural universal and appears to be tethered to the genome has led scientists like Burton to conclude that there is simply no getting rid of faith-based thinking.

It seems to me that Burton and Frith have misunderstood the significance of unconscious cognitive processes. On Burton’s account, worldviews will remain idiosyncratic and incommensurable, and the hope that we might persuade one another through rational argument and, thereby, fuse our cognitive horizons is not only vain but symptomatic of the very unconscious processes and frank irrationality that we would presume to expunge. This leads him to conclude that any rational criticism of religious irrationality is an unseemly waste of time:

The science-religion controversy cannot go away; it is rooted in biology … Scorpions sting. We talk of religion, afterlife, soul, higher powers, muses, purpose, reason, objectivity, pointlessness, and randomness. We cannot help ourselves … To insist that the secular and the scientific be universally adopted flies in the face of what neuroscience tells us about different personality traits generating idiosyncratic worldviews … Different genetics, temperaments, and experience led to contrasting worldviews. Reason isn’t going to bridge this gap between believers and nonbelievers.52

The problem, however, is that we could have said the same about witchcraft. Historically, a preoccupation with witchcraft has been a cultural universal. And yet belief in magic is now in disrepute almost everywhere in the developed world. Is there a scientist on earth who would be tempted to argue that belief in the evil eye or in the demonic origins of epilepsy is bound to remain impervious to reason?

Lest the analogy between religion and witchcraft seem quaint, it is worth remembering that belief in magic and demonic possession is still epidemic in Africa. In Kenya elderly men and women are regularly burned alive as witches.53 In Angola, Congo, and Nigeria the hysteria has mostly targeted children: thousands of unlucky boys and girls have been blinded, injected with battery acid, and otherwise put to torture in an effort to purge them of demons; others have been killed outright; many more have been disowned by their families and rendered homeless.54 Needless to say, much of this lunacy has spread in the name of Christianity. The problem is especially intractable because the government officials charged with protecting these suspected witches also believe in witchcraft. As was the case in the Middle Ages, when the belief in witchcraft was omnipresent in Europe, only a truly panoramic ignorance about the physical causes of disease, crop failure, and life’s other indignities allows this delusion to thrive.

What if we were to connect the fear of witches with the expression of a certain receptor subtype in the brain? Who would be tempted to say that the belief in witchcraft is, therefore, ineradicable?

As someone who has received many thousands of letters and emails from people who have ceased to believe in the God of Abraham, I know that pessimism about the power of reason is unwarranted. People can be led to notice the incongruities in their faith, the self-deception and wishful thinking of their coreligionists, and the growing conflict between the claims of scripture and the findings of modern science. Such reasoning can inspire them to question their attachment to doctrines that, in the vast majority of cases, were simply drummed into them on mother’s knee. The truth is that people can transcend mere sentiment and clarify their thinking on almost any subject. Allowing competing views to collide—through open debate, a willingness to receive criticism, etc.—performs just such a function, often by exposing inconsistencies in a belief system that make its adherents profoundly uncomfortable. There are standards to guide us, even when opinions differ, and the violation of such standards generally seems consequential to everyone involved. Self-contradiction, for instance, is viewed as a problem no matter what one is talking about. And anyone who considers it a virtue is very unlikely to be taken seriously. Again, reason is not starkly opposed to feeling on this front; it entails a feeling for the truth.

Conversely, there are occasions when a true proposition just doesn’t seem right no matter how one squints one’s eyes or cocks one’s head, and yet its truth can be acknowledged by anyone willing to do the necessary intellectual work. It is very difficult to grasp that tiny quantities of matter contain vast amounts of explosive energy, but the equations of physics—along with the destructive yield of our nuclear bombs—confirm that this is so. Similarly, we know that most people cannot produce or even recognize a series of digits or coin tosses that meets a statistical test for randomness. But this has not stopped us from understanding randomness mathematically—or from factoring our innate blindness to randomness into our growing understanding of cognition and economic behavior.55
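For readers curious what a “statistical test for randomness” looks like in practice, here is a minimal sketch of one of the simplest such tests, the runs test. The “human-style” sequence is invented; it illustrates the typical failure of alternating too often and therefore containing more runs than chance would predict.

```python
# A minimal sketch of the runs test, one simple statistical test of randomness.
# Sequences people produce by hand tend to alternate too often, yielding more
# runs than a truly random sequence would and hence a large positive z-score.
import math

def runs_test_z(seq):
    """Z-score for the number of runs in a binary 'H'/'T' sequence."""
    n1, n2 = seq.count("H"), seq.count("T")
    n = n1 + n2
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    expected = 2 * n1 * n2 / n + 1
    variance = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return (runs - expected) / math.sqrt(variance)

humanish = "HTHTHHTHTHTHHTTHTHTH"  # an invented, human-style "random" sequence
print(f"z = {runs_test_z(humanish):+.2f}")  # well above zero: too many runs
```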

The fact that reason must be rooted in our biology does not negate the principles of reason. Wittgenstein once observed that the logic of our language allows us to ask, “Was that gunfire?” but not “Was that a noise?”56 This seems to be a contingent fact of neurology, rather than an absolute constraint upon logic. A synesthete, for instance, who experiences crosstalk between his primary senses (seeing sounds, tasting colors, etc.), might be able to pose the latter question without any contradiction. How the world seems to us (and what can be logically said about its seemings) depends upon facts about our brains. Our inability to say that an object is “red and green all over” is a fact about the biology of vision before it is a fact of logic. But that doesn’t prevent us from seeing beyond this very contingency. As science advances, we are increasingly coming to understand the natural limits of our understanding.

Belief and Reasoning

There is a close relationship between belief and reasoning. Many of our beliefs are the product of inferences drawn from particular instances (induction) or from general principles (deduction), or both. Induction is the process by which we extrapolate from past observations to novel instances, anticipate future states of the world, and draw analogies from one domain to another.57 Believing that you probably have a pancreas (because people generally have the same parts), or interpreting the look of disgust on your son’s face to mean that he doesn’t like Marmite, are examples of induction. This mode of thinking is especially important for ordinary cognition and for the practice of science, and there have been a variety of efforts to model it computationally.58 Deduction, while less central to our lives, is an essential component of any logical argument.59 If you believe that gold is more expensive than silver, and silver more expensive than tin, deduction reveals that you also believe gold to be more expensive than tin. Induction allows us to move beyond the facts already in hand; deduction allows us to make the implications of our current beliefs more explicit, to search for counterexamples, and to see whether our views are logically coherent. Of course, the boundaries between these (and other) forms of reasoning are not always easy to specify, and people succumb to a wide range of biases in both modes.
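As a toy illustration of deduction making implicit beliefs explicit, the sketch below computes the transitive closure of the “more expensive than” beliefs mentioned above. The representation is an illustrative convenience, of course, not a claim about how such beliefs are stored in the brain.

```python
# A toy illustration: from "gold > silver" and "silver > tin", deduction
# (here, a transitive closure) yields "gold > tin" without any new observation.
def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

beliefs = {("gold", "silver"), ("silver", "tin")}  # "X is more expensive than Y"
print(("gold", "tin") in transitive_closure(beliefs))  # True: the deduced belief
```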

It is worth reflecting on what a reasoning bias actually is: a bias is not merely a source of error; it is a reliable pattern of error. Every bias, therefore, reveals something about the structure of the human mind. And diagnosing a pattern of errors as a “bias” can only occur with reference to specific norms—and norms can sometimes be in conflict. The norms of logic, for instance, don’t always correspond to the norms of practical reasoning. An argument can be logically valid, but unsound in that it contains a false premise and may, therefore, lead to a false conclusion (e.g., Scientists are smart; smart people do not make mistakes; therefore, scientists do not make mistakes).60 Much research on deductive reasoning suggests that people have a “bias” for sound conclusions and will judge a valid argument to be invalid if its conclusion lacks credibility. It’s not clear that this “belief bias” should be considered a symptom of native irrationality. Rather, it seems an instance in which the norms of abstract logic and practical reason may simply be in conflict.
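To make the distinction between validity and soundness concrete, here is a minimal sketch that checks the form of the syllogism above by brute force over tiny imagined worlds, and then shows why the argument nevertheless fails: one of its premises is false of the actual world. It is an illustration only, not a general theorem prover.

```python
# A minimal sketch separating validity from soundness for the syllogism in the
# text: "Scientists are smart; smart people do not make mistakes; therefore,
# scientists do not make mistakes."
from itertools import product

def premise1(p):    # "Scientists are smart."
    return (not p["scientist"]) or p["smart"]

def premise2(p):    # "Smart people do not make mistakes." (false in reality)
    return (not p["smart"]) or (not p["mistakes"])

def conclusion(p):  # "Scientists do not make mistakes."
    return (not p["scientist"]) or (not p["mistakes"])

def holds(rule, world):
    return all(rule(person) for person in world)

# Validity (a toy check): in every possible two-person world where both
# premises hold, the conclusion holds as well.
valid = True
for bits in product([False, True], repeat=6):
    world = [
        {"scientist": bits[0], "smart": bits[1], "mistakes": bits[2]},
        {"scientist": bits[3], "smart": bits[4], "mistakes": bits[5]},
    ]
    if holds(premise1, world) and holds(premise2, world) and not holds(conclusion, world):
        valid = False

print("valid form:", valid)  # True

# Soundness: in anything like the actual world, premise 2 is false, so the
# conclusion is not established even though the form is valid.
actual = [{"scientist": True, "smart": True, "mistakes": True}]
print("premise 2 true of the world:", holds(premise2, actual))  # False
```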

Neuroimaging studies have been performed on various types of human reasoning.61 As we have seen, however, accepting the fruits of such reasoning (i.e., belief) seems to be an independent process. While this is suggested by my own neuroimaging research, it also follows directly from the fact that reasoning accounts only for a subset of our beliefs about the world. Consider the following statements:

1. All known soil samples contain bacteria; so the soil in my garden probably contains bacteria as well (induction).

2. Dan is a philosopher; all philosophers have opinions about Nietzsche; therefore, Dan has an opinion about Nietzsche (deduction).

3. Mexico shares a border with the United States.

4. You are reading at this moment.

Each of these statements must be evaluated by different channels of neural processing (and only the first two require reasoning). And yet each has the same cognitive valence: being true, each inspires belief (or being believed, each is deemed “true”). Such cognitive acceptance allows any apparent truth to take its place in the economy of our thoughts and actions, at which time it becomes as potent as its propositional content demands.

A World Without Lying?

Knowing what a person believes is equivalent to knowing whether or not he is telling the truth. Consequently, any external means of determining which propositions a subject believes would constitute a de facto “lie detector.” Neuroimaging research on belief and disbelief may one day enable researchers to put this equivalence to use in the study of deception.62 It is possible that this new approach could circumvent many of the impediments that have hindered the study of deception in the past.

When evaluating the social cost of deception, we need to consider all of the misdeeds—premeditated murders, terrorist atrocities, genocides, Ponzi schemes, etc.—that must be nurtured and shored up, at every turn, by lies. Viewed in this wider context, deception commends itself, perhaps even above violence, as the principal enemy of human cooperation. Imagine how our world would change if, when the truth really mattered, it became impossible to lie. What would international relations be like if every time a person shaded the truth on the floor of the United Nations an alarm went off throughout the building?

The forensic use of DNA evidence has already made the act of denying one’s culpability for certain actions comically ineffectual. Recall how Bill Clinton’s cantatas of indignation were abruptly silenced the moment he learned that a semen-stained dress was en route to the lab. The mere threat of a DNA analysis produced what no grand jury ever could—instantaneous communication with the great man’s conscience, which appeared to be located in another galaxy. We can be sure that a dependable method of lie detection would produce similar transformations, on far more consequential subjects.

The development of mind-reading technology is just beginning—but reliable lie detection will be much easier to achieve than accurate mind reading. Whether or not we ever crack the neural code, enabling us to download a person’s private thoughts, memories, and perceptions without distortion, we will almost surely be able to determine, to a moral certainty, whether a person is representing his thoughts, memories, and perceptions honestly in conversation. The development of a reliable lie detector would only require a very modest advance over what is currently possible through neuroimaging.

Traditional methods for detecting deception through polygraphy never achieved widespread acceptance,63 as they measure the peripheral signs of emotional arousal rather than the neural activity associated with deception itself. In 2002, in a 245-page report, the National Research Council (an arm of the National Academy of Sciences) dismissed the entire body of research underlying polygraphy as “weak” and “lacking in scientific rigor.”64 More modern approaches to lie detection, using thermal imaging of the eyes,65 suffer a similar lack of specificity. Techniques that employ electrical signals at the scalp to detect “guilty knowledge” have limited application, and it is unclear how one can use these methods to differentiate guilty knowledge from other forms of knowledge in any case.66

Methodological problems notwithstanding, it is difficult to exaggerate how fully our world would change if lie detectors ever became reliable, affordable, and unobtrusive. Rather than spirit criminal defendants and hedge fund managers off to the lab for a disconcerting hour of brain scanning, there may come a time when every courtroom or boardroom will have the requisite technology discreetly concealed behind its wood paneling. Thereafter, civilized men and women might share a common presumption: that wherever important conversations are held, the truthfulness of all participants will be monitored. Well-intentioned people would happily pass between zones of obligatory candor, and these transitions would cease to be remarkable. Just as we’ve come to expect that certain public spaces will be free of nudity, sex, loud swearing, and cigarette smoke—and now think nothing of the behavioral constraints imposed upon us whenever we leave the privacy of our homes—we may come to expect that certain places and occasions will require scrupulous truth telling. Many of us might no more feel deprived of the freedom to lie during a job interview or at a press conference than we currently feel deprived of the freedom to remove our pants in the supermarket. Whether or not the technology works as well as we hope, the belief that it generally does work would change our culture profoundly.

In a legal context, some scholars have already begun to worry that reliable lie detection will constitute an infringement of a person’s Fifth Amendment privilege against self-incrimination.67 However, the Fifth Amendment has already succumbed to advances in technology. The Supreme Court has ruled that defendants can be forced to provide samples of their blood, saliva, and other physical evidence that may incriminate them. Will neuroimaging data be added to this list, or will it be considered a form of forced testimony? Diaries, emails, and other records of a person’s thoughts are already freely admissible as evidence. It is not at all clear that there is a distinction between these diverse sources of information that should be ethically or legally relevant to us.

In fact, the prohibition against compelled testimony itself appears to be a relic of a more superstitious age. It was once widely believed that lying under oath would damn a person’s soul for eternity, and it was thought that no one, not even a murderer, should be placed between the rock of Justice and so hard a place as hell. But I doubt whether even many fundamentalist Christians currently imagine that an oath sworn on a courtroom Bible has such cosmic significance.

Of course, no technology is ever perfect. Once we have a proper lie detector in hand, well-intentioned people will begin to suffer its propensity for positive and negative error. This will raise ethical and legal concerns. It is inevitable, however, that we will deem some rate of error to be acceptable. If you doubt this, remember that we currently lock people away in prison for decades—or kill them—all the while knowing that some percentage of the condemned must be innocent, while some percentage of those returned to our streets will be dangerous psychopaths guaranteed to reoffend. We are currently living with a system in which the occasional unlucky person gets falsely convicted of murder, suffers for years in prison in the company of terrifying predators, only to be finally executed by the state. Consider the tragic case of Cameron Todd Willingham, who was convicted of setting fire to the family home and thereby murdering his three children. While protesting his innocence, Willingham served over a decade on death row and was finally executed. It now seems that he was almost surely innocent—the victim of a chance electrical fire, forensic pseudoscience, and of a justice system that has no reliable means of determining when people are telling the truth.68

We have no choice but to rely upon our criminal justice system, despite the fact that judges and juries are very poorly calibrated truth detectors, prone to both type I (false positive) and type II (false negative) errors. Anything that can improve the performance of this antiquated system, even slightly, will raise the quotient of justice in our world.69
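A back-of-the-envelope calculation shows why some rate of error is unavoidable even for a very good lie detector. All of the numbers below are invented; the point is only that the proportion of honest people falsely flagged (type I errors) depends on how rarely people lie, not merely on the accuracy of the test.

```python
# A hedged, back-of-the-envelope sketch (all numbers invented) of type I and
# type II errors for a lie detector applied to many statements.
def error_counts(base_rate_lying, sensitivity, specificity, n_statements):
    lies = n_statements * base_rate_lying
    honest = n_statements - lies
    true_positives = lies * sensitivity            # lies correctly flagged
    false_positives = honest * (1 - specificity)   # honest statements flagged (type I)
    false_negatives = lies * (1 - sensitivity)     # lies missed (type II)
    return true_positives, false_positives, false_negatives

tp, fp, fn = error_counts(
    base_rate_lying=0.05,  # assume 5% of statements are lies
    sensitivity=0.95,      # assume 95% of lies are detected
    specificity=0.95,      # assume 95% of honest statements pass
    n_statements=10_000,
)
print(f"lies flagged: {tp:.0f}, honest flagged: {fp:.0f}, lies missed: {fn:.0f}")
# With these assumptions, about half of all flagged statements are honest.
```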

Do We Have Freedom of Belief?

While belief might prove difficult to pinpoint in the brain, many of its mental properties are plain to see. For instance, people do not knowingly believe propositions for bad reasons. If you doubt this, imagine hearing the following account of a failed New Year’s resolution:

This year, I vowed to be more rational, but by the end of January, I found that I had fallen back into my old ways, believing things for bad reasons. Currently, I believe that robbing others is a harmless activity, that my dead brother will return to life, and that I am destined to marry Angelina Jolie, just because these beliefs make me feel good.

This is not how our minds work. A belief—to be actually believed—entails the corollary belief that we have accepted it because it seems to be true. To really believe a proposition—whether about facts or values—we must also believe that we are in touch with reality in such a way that if it were not true, we would not believe it. We must believe, therefore, that we are not flagrantly in error, deluded, insane, self-deceived, etc. While the preceding sentences do not suffice as a full account of epistemology, they go a long way toward uniting science and common sense, as well as reconciling their frequent disagreements. There can be no doubt that there is an important difference between a belief that is motivated by an unconscious emotional bias (or other nonepistemic commitments) and a belief that is comparatively free of such bias.

And yet many secularists and academics imagine that people of faith knowingly believe things for reasons that have nothing to do with their perception of the truth. A written debate I had with Philip Ball—who is a scientist, a science journalist, and an editor at Nature—brought this issue into focus. Ball thought it reasonable for a person to believe a proposition just because it makes him “feel better,” and he seemed to think that people are perfectly free to acquire beliefs in this way. People often do this unconsciously, of course, and such motivated reasoning has been discussed above. But Ball seemed to think that beliefs can be consciously adopted simply because a person feels better while under their spell. Let’s see how this might work. Imagine someone making the following statement of religious conviction:

I believe Jesus was born of a virgin, was resurrected, and now answers prayers because believing these things makes me feel better. By adopting this faith, I am merely exercising my freedom to believe in propositions that make me feel good.

How would such a person respond to information that contradicted his cherished belief? Given that his belief is based purely on how it makes him feel, and not on evidence or argument, he shouldn’t care about any new evidence or argument that might come his way. In fact, the only thing that should change his view of Jesus is a change in how the above propositions make him feel. Imagine our believer undergoing the following epiphany:

For the last few months, I’ve found that my belief in the divinity of Jesus no longer makes me feel good. The truth is, I just met a Muslim woman who I greatly admire, and I want to ask her out on a date. As Muslims believe Jesus was not divine, I am worried that my belief in the divinity of Jesus could hinder my chances with her. As I do not like feeling this way, and very much want to go out with this woman, I now believe that Jesus was not divine.

Has a person like this ever existed? I highly doubt it. Why do these thoughts not make any sense? Because beliefs are intrinsically epistemic: they purport to represent the world as it is. In this case, our man is making specific claims about the historical Jesus, about the manner of his birth and death, and about his special connection to the Creator of the Universe. And yet while claiming to represent the world in this way, it is perfectly clear that he is making no effort to stay in touch with the features of the world that should inform his belief. He is only concerned about how he feels. Given this disparity, it should be clear that his beliefs are not based on any foundation that would (or should) justify them to others, or even to himself.

Of course, people do often believe things in part because these beliefs make them feel better. But they do not do this in the full light of consciousness. Self-deception, emotional bias, and muddled thinking are facts of human cognition. And it is a common practice to act as if a proposition were true, in the spirit of: “I’m going to act on X because I like what it does for me and, who knows, X might be true.” But these phenomena are not at all the same as knowingly believing a proposition simply because one wants it to be true.

Strangely, people often view such claims about the constraints of rationality as a sign of “intolerance.” Consider the following from Ball:

I do wonder what [Sam Harris] is implying here. It is hard to see it as anything other than an injunction that “you should not be free to choose what you believe.” I guess that if all Sam means is that we should not leave people so ill-informed that they have no reasonable basis on which to make those decisions, then fair enough. But it does seem to go further—to say that “you should not be permitted to choose what you believe, simply because it makes you feel better.” Doesn’t this sound a little like a Marxist denouncement of “false consciousness,” with the implication that it needs to be corrected forthwith? I think (I hope?) we can at least agree that there are different categories of belief—that to believe one’s children are the loveliest in the world because that makes you feel better is a permissible (even laudable) thing. But I slightly shudder at the notion, hinted here, that a well-informed person should not be allowed to choose their belief freely … surely we cannot let ourselves become proscriptive to this degree?70

What cognitive freedom is Ball talking about? I happen to believe that George Washington was the first president of the United States. Have I, on Ball’s terms, chosen this belief “freely”? No. Am I free to believe otherwise? Of course not. I am a slave to the evidence. I live under the lash of historical opinion. While I may want to believe otherwise, I simply cannot overlook the incessant pairing of the name “George Washington” with the phrase “first president of the United States” in any discussion of American history. If I wanted to be thought an idiot, I could profess some other belief, but I would be lying. Likewise, if the evidence were to suddenly change—if, for instance, compelling evidence of a great hoax emerged and historians reconsidered Washington’s biography, I would be helplessly stripped of my belief—again, through no choice of my own. Choosing beliefs freely is not what rational minds do.

This does not mean, of course, that we have no mental freedom whatsoever. We can choose to focus on certain facts to the exclusion of others, to emphasize the good rather than the bad, etc. And such choices have consequences for how we view the world. One can, for instance, view Kim Jong-il as an evil dictator; one can also view him as a man who was once the child of a dangerous psychopath. Both statements are, to a first approximation, true. (Obviously, when I speak about “freedom” and “choices” of this sort, I am not endorsing a metaphysical notion of “free will.”)

As to whether there are “different categories of belief”: perhaps, but not in the way that Ball suggests. I happen to have a young daughter who does strike me as the “loveliest in the world.” But is this an accurate account of what I believe? Do I, in other words, believe that my daughter is really the loveliest girl in the world? If I learned that another father thought his daughter the loveliest in the world, would I insist that he was mistaken? Of course not. Ball has mischaracterized what a proud (and sane and intellectually honest) father actually believes. Here is what I believe: I believe that I have a special attachment to my daughter that largely determines my view of her (which is as it should be). I fully expect other fathers to have a similar bias toward their own daughters. Therefore, I do not believe that my daughter is the loveliest girl in the world in any objective sense. Ball is simply describing what it’s like to love one’s daughter more than other girls; he is not describing belief as a representation of the world. What I really believe is that my daughter is the loveliest girl in the world for me.

One thing that both factual and moral beliefs generally share is the presumption that we have not been misled by extraneous information.71 Situational variables, like the order in which unrelated facts are presented, or whether identical outcomes are described in terms of gains or losses, should not influence the decision process. Of course, the fact that such manipulations can strongly influence our judgment has given rise to some of the most interesting work in psychology. However, a person’s vulnerability to such manipulations is never considered a cognitive virtue; rather, it is a source of inconsistency that cries out for remedy.

Consider one of the more famous cases from the experimental literature, The Asian Disease Problem:72

Imagine that the United States is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

If Program A is adopted, 200 people will be saved.

If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.

Which one of the two programs would you favor?

In this version of the problem, a significant majority of people favor Program A. The problem, however, can be restated this way:

If Program A is adopted, 400 people will die.

If Program B is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.

Which one of the two programs would you favor?

Put this way, a majority of respondents will now favor Program B. And yet there is no material or moral difference between these two scenarios, because their outcomes are the same. What this shows is that people tend to be risk-averse when considering potential gains and risk-seeking when considering potential losses, so describing the same event in terms of gains or losses evokes different responses. Another way of stating this is that people tend to overvalue certainty: finding the certainty of saving life inordinately attractive and the certainty of losing life inordinately painful. When presented with the Asian Disease Problem in both forms, however, people agree that each scenario merits the same response. Invariance of reasoning, both logical and moral, is a norm to which we all aspire. And when we catch others departing from this norm, whatever the other merits of their thinking, the incoherency of their position suddenly becomes its most impressive characteristic.
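The equivalence of the two framings is a matter of simple arithmetic. Here is a minimal sketch in Python, my own illustration rather than anything from the original experiment, that makes the expected outcome of each program explicit:

```python
# A minimal sketch of the arithmetic above; the names are illustrative,
# not taken from the original experiment.

TOTAL_AT_RISK = 600

def expected_saved(p_all_saved: float, total: int = TOTAL_AT_RISK) -> float:
    """Expected number saved if everyone is saved with probability
    p_all_saved and no one is saved otherwise."""
    return p_all_saved * total

# Gain framing: Program A saves 200 for certain; Program B saves all 600 with p = 1/3.
saved_a = 200
saved_b = expected_saved(1 / 3)         # 200.0

# Loss framing: under Program A, 400 die; under Program B, all 600 die with p = 2/3.
deaths_a = TOTAL_AT_RISK - saved_a      # 400
deaths_b = TOTAL_AT_RISK - saved_b      # 400.0

print(saved_a, saved_b)    # 200 200.0
print(deaths_a, deaths_b)  # 400 400.0
```

In expectation, the two programs are identical under either description; only the wording changes.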

Of course, there are many other ways in which we can be misled by context. Few studies illustrate this more powerfully than one conducted by the psychologist David L. Rosenhan,73 in which he and seven confederates had themselves committed to psychiatric hospitals in five different states in an effort to determine whether mental health professionals could detect the presence of the sane among the mentally ill. In order to get committed, each researcher complained of hearing a voice repeating the words “empty,” “hollow,” and “thud.” Beyond that, each behaved perfectly normally. Upon winning admission to the psychiatric ward, the pseudopatients stopped complaining of their symptoms and immediately sought to convince the doctors, nurses, and staff that they felt fine and were fit to be released. This proved surprisingly difficult. While these genuinely sane patients wanted to leave the hospital, repeatedly declared that they experienced no symptoms, and became “paragons of cooperation,” their average length of hospitalization was nineteen days (ranging from seven to fifty-two days), during which they were bombarded with an astounding range of powerful drugs (which they discreetly deposited in the toilet). None were pronounced healthy. Each was ultimately discharged with a diagnosis of schizophrenia “in remission” (with the exception of one who received a diagnosis of bipolar disorder). Interestingly, while the doctors, nurses, and staff were apparently blind to the presence of normal people on the ward, actual mental patients frequently remarked on the obvious sanity of the researchers, saying things like “You’re not crazy. You’re a journalist.”

In a brilliant response to the skeptics at one hospital who had heard of this research before it was published, Rosenhan announced that he would send a few confederates their way and challenged them to spot the coming pseudopatients. The hospital kept vigil, while Rosenhan, in fact, sent no one. This did not stop the hospital from “detecting” a steady stream of pseudopatients. Over a period of a few months, fully 10 percent of their new patients were deemed to be shamming by both a psychiatrist and a member of the staff. While we have all grown familiar with phenomena of this sort, it is startling to see the principle so clearly demonstrated: expectation can be, if not everything, almost everything. Rosenhan concluded his paper with this damning summary: “It is clear that we cannot distinguish the sane from the insane in psychiatric hospitals.”

There is no question that human beings regularly fail to achieve the norms of rationality. But we do not merely fail—we fail reliably. We can, in other words, use reason to understand, quantify, and predict our violations of its norms. This has moral implications. We know, for instance, that the choice to undergo a risky medical procedure will be heavily influenced by whether its possible outcomes are framed in terms of survival rates or mortality rates. We know, in fact, that this framing effect is no less pronounced among doctors than among patients.74 Given this knowledge, physicians have a moral obligation to handle medical statistics in ways that minimize unconscious bias. Otherwise, they cannot help but inadvertently manipulate both their patients and one another, guaranteeing that some of the most important decisions in life will be unprincipled.75

Admittedly, it is difficult to know how we should treat all of the variables that influence our judgment about ethical norms. If I were asked, for instance, whether I would sanction the murder of an innocent person if it would guarantee a cure for cancer, I would find it very difficult to say “yes,” despite the obvious consequentialist argument in favor of such an action. If I were asked to impose a one-in-a-billion risk of death on everyone for this purpose, however, I would not hesitate. The latter course would be expected to kill six or seven people, and yet it still strikes me as obviously ethical. In fact, such a diffusion of risk aptly describes how medical research is currently conducted. And we routinely impose far greater risks than this on friends and strangers whenever we get behind the wheel of our cars. If my next drive down the highway were guaranteed to deliver a cure for cancer, I would consider it the most ethically important act of my life. No doubt the role that probability is playing here could be experimentally calibrated. We could ask subjects whether they would impose a 50 percent chance of death upon two innocent people, a 10 percent chance on ten innocent people, etc. How we should view the role that probability plays in our moral judgments is not clear, however. It seems difficult to imagine ever fully escaping such framing effects.
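The expected-value arithmetic behind these figures is easy to check. The following sketch is my own illustration; the population figure of roughly 6.8 billion (circa 2010) is an assumption, not something taken from the text:

```python
# A rough sketch of the expected-death arithmetic; the population figure
# (~6.8 billion, circa 2010) is an assumption, not from the text.

def expected_deaths(population: int, risk_per_person: float) -> float:
    """Expected number of deaths when each person independently bears
    a risk_per_person chance of dying."""
    return population * risk_per_person

print(expected_deaths(6_800_000_000, 1e-9))  # ~6.8 deaths from a one-in-a-billion risk

# The calibration cases mentioned above each carry one expected death.
print(expected_deaths(2, 0.5))    # 1.0
print(expected_deaths(10, 0.1))   # 1.0
```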

Science has long been in the values business. Despite a widespread belief to the contrary, scientific validity is not the result of scientists abstaining from making value judgments; rather, scientific validity is the result of scientists making their best effort to value principles of reasoning that link their beliefs to reality, through reliable chains of evidence and argument. This is how norms of rational thought are made effective.

To say that judgments of truth and goodness both invoke specific norms seems another way of saying that they are both matters of cognition, as opposed to mere sentiment. That is why one cannot defend one’s factual or moral position by reference to one’s preferences. One cannot say that water is H2O or that lying is wrong simply because one wants to think this way. To defend such propositions, one must invoke a deeper principle. To believe that X is true or that Y is ethical is also to believe others should share these beliefs under similar circumstances.

The answer to the question “What should I believe, and why should I believe it?” is generally a scientific one. Believe a proposition because it is well supported by theory and evidence; believe it because it has been experimentally verified; believe it because a generation of smart people have tried their best to falsify it and failed; believe it because it is true (or seems so). This is a norm of cognition as well as the core of any scientific mission statement. As far as our understanding of the world is concerned—there are no facts without values.