The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life - Robert Trivers (2011)
Chapter 3. Neurophysiology and Levels of Imposed Self-Deception
Although study of the neurophysiology of deceit and self-deception is just beginning, there are already some interesting findings. Evidence suggests a greatly diminished role for the conscious mind in guiding human behavior. Contrary to our imagination, the conscious mind seems to lag behind the unconscious in both action and perception—it is much more observer of action than initiator. The precise details of the neurobiology of active thought suppression suggest that one part of the brain has been co-opted in evolution to suppress another part, a very interesting development if true. At the same time, evidence from social psychology makes it clear that trying to suppress thoughts sometimes produces a rebound effect, in which the thought recurs more often than before. Other work shows that suppressing neural activity in an area of the brain related to lying appears to improve lying, as if the less conscious the more successful.
There is something called induced self-deception, in which the self-deceived person acts not for the benefit of self but for someone who is inducing the self-deception. This can be parent, partner, kin group, society, or whatever, and it is an extremely important factor in human life. You are still practicing self-deception but not for your own benefit. Among other things, it means that we need to be on guard to avoid this fate—not defensive via self-deception but via greater consciousness.
Finally, we have treated self-deception as part of an offensive strategy, but is this really true? Consider the opposite—and conventional—view, that self-deception serves a purely defensive function, for example, protecting our degree of happiness in the face of reality. An extreme form is the notion that we would not get out of bed in the morning if we knew how bad things were—we levitate ourselves out via self-deception. This makes no coherent sense as a general truth, but in practicing self-deception, we may sometimes genuinely fool ourselves for personal benefit (absent any effect on others). Placebo effects and hypnosis provide unusual examples, in that they show direct health benefits from self-deception, although this typically requires a third party, either hypnotist or doctor-model. And people can almost certainly induce positive immune effects with the help of personal self-deception, as we shall see in Chapter 6.
THE NEUROPHYSIOLOGY OF CONSCIOUS KNOWLEDGE
Because we live inside our conscious minds, it is often easy to imagine that decisions arise in consciousness and are carried out by orders emanating from that system. We decide, “Hell, let’s throw this ball,” and we then initiate the signals to throw the ball, shortly after which the ball is thrown. But detailed study of the neurophysiology of action shows otherwise. More than twenty years ago, it was first shown that an impulse to act begins in the brain region involved in motor preparation about six-tenths of a second before consciousness of the intention, after which there is a further delay of as much as half a second before the action is taken. In other words, when we form the conscious intention to throw the ball, areas of the brain involved in throwing have already been activated more than half a second earlier.
Much more recent work, from 2008, gives a more dramatic picture of preconscious neural activity. The original work involved a neural area, the supplementary motor area involved in late motor planning. An important distinction is whether preparatory neural activity is related to a particular decision (throw the ball) or just activation in general (do something). A novel experiment settled the matter. While seeing a series of letters flash in front of him or her, each a half-second apart, an individual is asked to hit one of two buttons (with left or right index finger) whenever he or she feels like it and to remember which letter was seen when the conscious choice was made. After this, the subject had to choose which of four letters was the one he or she saw when consciously deciding to press the button. This served roughly to demarcate when conscious knowledge of the decision is made, since each letter is visible for only half a second and conscious knowledge of intention occurs about one second before the action itself.
What about prior unconscious intention? Computer software can search through fMRI images (showing blood flow associated with neural activity) taken in various parts of the brain during intervals prior to action. Most strikingly, a full seven seconds before consciousness of impending action, activity occurs in the lateral and medial prefrontal cortex, quite some distance from the supplementary motor area and the motor neurons themselves. Given the slowness of the fMRI response, it is estimated that fully ten seconds before consciousness of intent, the neural signals begin that will later give rise to the consciousness and then the behavior itself. This work also helps explain earlier findings that people develop anticipatory skin conductance responses to risky decisions well before they consciously realize that such decisions are risky.
One point is well worth emphasizing. From the time a person becomes conscious of the intent to do something (throw a ball), he or she has about a second to abort the action, and this can occur up to one hundred milliseconds before action (one-tenth of a second). These effects can themselves operate below consciousness—that is, subliminal effects operating at two hundred milliseconds before action can affect the chance of action. In that sense, the proof of a long chain of unconscious neural activity before conscious intention is formed (after which there is about a one-second delay before action) does not obviate the concept of free will, at least in the sense of being able to abort bad ideas and also being able to learn, both consciously and unconsciously, from past experience.
On the flip side, it is now clear that consciousness requires some time for perception to occur. Put another way, a neural signal travels from the toe to the brain in about twenty milliseconds but takes twenty-five times as long, a full five hundred milliseconds (half a second) to register in consciousness. Once again, consciousness lags reality and by a large amount, plenty of time for unconscious biases to affect what enters consciousness.
In short, the best evidence shows that our unconscious mind is ahead of our conscious mind in preparing for decisions, that consciousness occurs relatively late in the process (after about ten seconds), and that there is ample time for the decision to be aborted after consciousness (one second). In addition, incoming information requires about half a second to enter consciousness, so that the conscious mind seems more like a post-hoc evaluator and commentator upon—including rationalizing—our behavior, rather than the initiator of the behavior. Chris Rock, the comedian, says that when you meet him for the first time (conscious mind and all), you are not really meeting him—you are only meeting his representative.
THE NEUROPHYSIOLOGY OF THOUGHT SUPPRESSION
One particular kind of self-deception—consciously mediated efforts at suppressing true information from consciousness—has been studied by neurophysiologists in a most revealing way. The resulting data are striking in our context: different sections of the brain appear to have been co-opted in evolution to suppress the activity of other sections to create self-deceptive thinking.
Consider the active conscious suppression of memory. In real life, we actively attempt to suppress our thoughts: I won’t think about this today; please, God, keep this woman from my mind, and so on. In the laboratory, individuals are instructed to forget an arbitrary set of symbols they have just learned. The effect of such efforts is highly variable, measured as the degree of memory achieved a month later when attempting to recall the symbols. This variation turns out to be associated with variation in the underlying neurophysiology. The more highly the dorsolateral prefrontal cortex (DLPFC) is activated during directed forgetting, the more it suppresses ongoing activity in the hippocampus (where memories are typically stored) and the less is remembered a month later. The DLPFC is otherwise often involved in overcoming cognitive obstacles and in planning and regulating motor activity, including suppressing unwanted responses. One is tempted to imagine that this area of the brain was co-opted for the new function of suppressing memories because it was often involved in affecting other brain areas, in particular, suppressing behavior. There is a physical component to this—I know it well. When I experience an unwanted thought and act to suppress it, I often experience an involuntary twitch in one or both of my arms, as if trying to push something down and out of sight.
THE IRONY OF TRYING TO SUPPRESS ONE’S THOUGHTS
The neurophysiological work employed meaningless strings of letters or numbers during short periods of memorization followed by short periods of attempted forgetting, results measured a month later. But another factor operates if we try to suppress something meaningful. One might easily suppose that a conscious decision to suppress a thought (don’t think of a white bear) could easily be achieved, each recurrence of the thought suppressed more deeply so that soon enough the thought itself fails to recur. But this is not what happens. The mind seems to resist suppression, and under some conditions we do precisely what we are trying to suppress. For example, we may blurt out the very truth we are trying to hide from others, as if involuntarily or contra-voluntarily. The suppressed thought often comes back to consciousness, sometimes at the rate of once per minute, and often for days. As with the neurophysiology of thought suppression, some people are better at thought suppression and some try harder. But few people are completely successful.
Two processes are thought to work simultaneously. On the one hand, there is an effort to consciously suppress the undesired thought, initially and whenever it reappears. On the other hand, an unconscious process to search for the prohibited word, as if looking for errors, that is, thoughts that need additional suppression. This process is itself subject to errors, especially when we are under cognitive load. When one is distracted or overburdened mentally, the unconscious search for the thought is not combined with suppression of it, so that the suppressed thought may burst forth more often than expected.
IMPROVING DECEPTION THROUGH NEURAL INHIBITION
The first great advances in neurophysiology came from the ability to measure ongoing brain activity in space and time, first crudely through EEG and then more precisely through fMRI and PET scans. Now a recent method (as we saw in Chapter 1) has taken the opposite approach and selectively knocked out brain activity in particular parts of the brain to see the effects. This was achieved by applying external electrical stimulation on the scalp to inhibit brain activity directly underneath. For example, stimulation can be applied to a brain area involved in deception (at the anterior prefrontal cortex, aPFC) while a person chooses whether to lie in response to a series of questions designed to determine whether she was involved in the mock crime of stealing money from a room. Although in general we expect any artificially induced effect on life—for example, rapping a person hard on his or her knee—to be negative much more often than positive, this intervention was clearly positive where deception was concerned. At least three key components were altered in an advantageous direction. Reaction time while lying was decreased under inhibition, as was physiological arousal. So people were quicker and more relaxed. The electrical inhibition also appeared to reduce the moral conflict during lying. That is, people felt less guilt under inhibition, and the less guilt they felt, the quicker their response times. In addition, people with this area knocked out lied more frequently on relevant questions and less on irrelevant ones, thus more finely tuning their lying.
This is a very striking result. Artificially suppressing mental activity improves performance. This provides an analogy to self-deception, because the suppression of mental activity can come externally via a magnetic device applied to the skull or internally via neuronal suppression emanating from elsewhere in the brain—via self-deception in service of deceit. The only thing we do not know is whether the external inhibition also knocked out consciousness to aspects of the deception, as we might well expect.
Incidentally, two recent studies in China suggest that the brains of those regarded as pathological liars show more white matter in the areas of the brain believed to be involved in deception. “White matter” refers not to the neurons themselves but to the supporting glial cells that nourish the neurons, especially their long, thin dendritic extensions. We know from work on jugglers that the more they practice, the more white matter shows up in the “juggling center” of their brains, so this correlation with lying may result from repeated practice.
UNCONSCIOUS SELF-RECOGNITION SHOWS SELF-DECEPTION
The classic experimental work demonstrating self-deception took place some thirty years ago and involved (largely unconscious) verbal denial or projection of one’s own voice. In a brilliant series of experiments, true and false information was shown to be simultaneously stored within an individual, but with a strong bias toward the true information being hidden in the unconscious mind and the false in the conscious. In turn, people’s tendency to deny (or project) their voices could be affected by making them feel worse or better about themselves, respectively. Thus, one could argue that the self-deception was ultimately directed toward others.
The experiment was based on a simple fact of human biology. We are physiologically aroused by the sound of a human voice but more so to the sound of our own voice (for example, as played from a tape recorder). We are unconscious of these effects. Thus one can play a game of self-recognition, in which people are asked whether a voice is their own (conscious self-recognition) while at the same time recording (via higher arousal) whether unconscious self-recognition has been achieved.
Here is how it worked. People were asked to read the same paragraph from a book. These recordings were chopped into two-, four-, six-, twelve-, and twenty-four-second segments, and a master tape was created consisting of a mixture of these segments of their own and other voices (matched for age and sex). Meantime, each individual was hooked up to a machine measuring his or her galvanic skin response (GSR), a measure of arousal that is normally twice as great for hearing one’s own voice as hearing someone else’s. People were asked to press a button to indicate that they thought the recording was of themselves and another button to indicate how sure they were.
Several interesting facts were discovered. Some people denied their own voices some of the time; this was the only kind of mistake they made and they seemed to be unconscious of making it (when interviewed later, only one was aware of having made this mistake). And yet the skin had it correct—that is, it showed the large increase in GSR expected upon hearing one’s own voice. By contrast, another set of people heard themselves talking when they were not—they projected their voice, and this was the only error they made. Although half were aware later that they had sometimes made this mistake, the skin once again had it correct. This is unconscious self-recognition shown to be superior to conscious recognition. There were two other categories: those who never made mistakes and those who made both kinds, sometimes fooling even their skin, but for simplicity we neglect these two categories (about which nothing more is known, in any case).
It is well known that making people feel bad about themselves leads to less self-involvement (e.g., looking in the mirror). In the above experiment, people made to feel bad by a poor score on a pseudo-exam just taken (in fact, with grades randomly assigned) started to deny their voices. Made to feel good by a good score, they started to hear themselves talking when they were not. It was as if self-presentation was expanding under success and contracting in response to failure.
Another interesting feature—never analyzed statistically—was that deniers also showed the highest levels of arousal to all stimuli. It was as if they were primed to respond quickly, to deny the reality, and get it out of sight. By contrast, inventing reality (projecting) seems a more relaxed enterprise, with more relaxed arousal levels typical of those who make no mistakes. Perhaps reality that needs to be denied is more threatening than is the absence of reality one wishes to construct. Also, denial can be dealt with quickly, with low cognitive load, but requires an aroused state for quick detection and deletion.
There is a parallel in the way in which the brain responds to familiar faces. Some people have damage to a specific part of their brain that inhibits their ability to recognize familiar faces consciously. When asked to choose familiar over unfamiliar faces or match names with faces, the individual performs at chance levels. He or she nonetheless recognizes familiar faces unconsciously, as shown through changes in brain activity and skin conductance. When asked to state which face he or she trusts more, choice is above chance in the expected direction. Thus, there is some access to unconscious knowledge, but not much.
Can we study this in other animals? Some birds show the human pattern exactly. In playback experiments, they show greater physiological arousal to hearing their own species’song (compared to that of others) but a stronger response still to their own voices. These birds could easily be trained to peck at a button when they recognized their own voice (this would be analogous to verbal self-recognition), while measures of physiological arousal would reveal something closer to unconscious self-recognition (GSR in humans). When birds are made to lose fights, do they start avoiding pecking to their own voice (denial) and when made to win fights, show the opposite effect?
CAN ONE HALF OF THE BRAIN HIDE FROM THE OTHER?
Our left and right brain are connected by a corpus callosum, an ancient vertebrate symmetry that has important effects on daily life. The brains partly receive information independently (left ear, right brain) and also act independently (left brain runs right hand). I have often noticed that my right brain may not actively engage in a search unless the left brain makes the goal explicit by saying it out aloud. That is, I will be searching for an object in the visual world or in my pockets, including left pocket, and I will not find it until I say the word out loud (“lighter”), then suddenly I spot it in my left visual field or feel it in my left pocket (this is a consequence of the brain being cross-wired—left-side information goes primarily to the right brain, which in turn controls movements by the left side). This happens, I believe, because the information I am searching for is not shared freely across the corpus callosum between the two sides of the brain but is apprehended by the right brain only when it hears the name of what is being searched for. Then suddenly the left visual field and left tactile side—under control of the right brain—are open to inspection.
Does this curious fact have anything to do with deceit and self-deception? I believe it does, because when I want to hide something from myself—for example, keys just lifted unconsciously from another person—they are promptly stored in my left pocket, where they will be slow to be discovered even when I am consciously searching for them. Likewise, I have noticed that “inadvertent” touching of women (that is, unconscious prior to the action) occurs exclusively with my left hand and comes as a surprise to my dominant left brain, which controls the right side of my body. In effect, the left brain, the linguistic side, is associated with consciousness; the right side (left hand) is less conscious.
This is supported by evidence that processes of denial—and subsequent rationalization—appear to reside preferentially in the left brain and are inhibited by the right brain. People with paralysis on the right side of the body (due to a stroke in the left brain) never or very rarely deny their condition. But a certain small percentage of those with left-side paralysis deny their stroke (anosognosia) and when confronted with strong counterevidence (film of their inability to move their left arm), they indulge in a remarkable series of rationalizations denying the cause of their paralysis (due to arthritis, not feeling very mobile today, overexercise). This is especially common and strong in individuals with large lesions to the right central side of the brain, and it is consistent with other evidence that the right brain is more emotionally honest and the left actively engaged in self-promotion. Normally people show a shorter response time to threatening words, but those with anosognosia show a longer time, demonstrating that they implicitly repress information regarding their own condition.
So far we have spoken of self-deception evolving in the service of the actor, hiding deception and promoting an illusory self. Now consider effects of others on us. We are highly sensitive to others, and to their opinions, desires, and actions. More to the point, they can manipulate and dominate us. This can result in self-deception being imposed on us by others (with varying degrees of force). Extreme examples are instructive. A captive may come to identify with his or her captor, an abused wife may take on the worldview of her abuser, and molested children may blame themselves for the transgressions against them. These are cases of imposed self-deception, and if they are acting functionally from the standpoint of the victimized (by no means certain), they probably do so by reducing conflict with the dominant individual. At least this is often the theory of the participants themselves. An abused wife may be deeply frightened and may rationalize acquiescence as the path least likely to provoke additional severe assaults—this is most effective if actually believed.
The situations need not be nearly as extreme. Consider birds. In many small species, the male begins dominant—he has the territory into which the female settles. And he can displace her from preferred feeding sites. But as time goes on, his dominance drops, and when she reaches the stage of egg-laying, there is a reversal: she now displaces him from preferred sites. The presumption is that risk of extra-pair paternity and the growing importance of female parental investment shifts the dominance toward her. The very same thing may often be true in human relationships.
This finding caught my attention many years ago because it appeared to capture exactly so many of my own relationships with women, one after the other—I was initially dominant but thoroughly subordinate at the end. It was only later that I noticed that the ruling system of self-deception had changed accordingly—from mine to hers. Initially, discussions were all biased in my favor, but I hardly noticed—wasn’t that the way it should be? Then came a short time when we may have spoken as equals, followed by rapid descent into her system of self-deception—I would apologize to her for what were, in fact, her failings.
Sex, for example, is an attributional nightmare—who is causing what effect on whom?—so sexual dysfunction on either or both sides can easily be seen as caused by the other person. Whether manipulated by guilt or fear of losing the relationship, you may now be practicing self-deception on behalf of someone else, not yourself—a most unenviable position.
IMPLICIT VERSUS EXPLICIT SELF-ESTEEM
Let us consider another example of imposed self-deception, one with deeper social implications. It is possible to measure something called a person’s explicit preference as well as an implicit one. The explicit simply asks people to state their preferences directly—for example, for so-called black people over white (to use the degraded language of the United States), where the actor is one or the other. The implicit measure is more subtle. It asks people to push a right-hand button for “white” names (Chip, Brad, Walter) or “good” words (“joy,” “peace,” “wonderful,” “happy”) and left for “black” names (Tyrone, Malik, Jamal) or “bad” words (“agony,” “nasty,” “war,” “death”)—and then reverses everything, white or bad, black or good. We now look at latencies—how long does it take an individual to respond when he or she must punch white or good versus white or bad—and assume that shorter latencies (quicker responses) means the terms are, by implication, more strongly associated in the brain, hence the term “implicit association test” (IAT). Invented only in 1998, it has now generated an enormous literature, including (unusual for the social sciences) actual improvements in methodology. Several websites harvest enormous volumes of IAT data over the Internet (for example, at Harvard, Yale, and the University of Washington), and these studies have produced some striking findings.
For example, black and white people are similar in their explicit tendency to value self over other, blacks indeed somewhat more strongly so. But when it comes to the implicit measures, whites respond even more strongly in their own favor than they do explicitly, while blacks—on average—prefer white over black, not by a huge margin but, nevertheless, they prefer other to self. This is most unexpected from an evolutionary perspective, where self is the beginning (if not end) of self-interest. To find an organism valuing (unrelated) other people more than self on an implicit measure using generic good terms, such as “pleasure” and “friend,” versus bad, such as “terrible” and “awful,” is to find an organism not obviously oriented toward its own self-interest.
This has the earmarks of an imposed self-deception—valuing yourself less than you do others—and it probably comes with some negative consequences. For example, priming black students for their ethnicity strongly impairs their performance on mental tests. This was indeed one of the first demonstrations of what are now hundreds of “priming” effects. Black and white undergraduates at Stanford arrived in a lab to take a relatively difficult aptitude test. In one situation, the students were simply given the exams; in the other, each was asked to give a few personal facts, one of which was their own ethnicity. Black and white students scored equally well with no prime. With a prime, white scores were slightly (but not significantly) better, while black scores plummeted by nearly half. You can even manipulate one person’s performance in opposite directions by giving opposing primes. Asian women perform better on math tests when primed with “Asian” and worse when primed with “woman.” No one knows how long the effect of such primes endures, nor does anyone know how often a prime appears: how often is an African American reminded that he or she is such? Once a month? Once a day? Every half-hour?
The strong suggestion, then, is that it is possible for a historically degraded and/or despised minority group, now socially subordinate, to have an implicit self-image that is negative, to prefer other to self—indeed, oppressor to self—and to underperform as soon as they are made conscious of the subordinate identity. This suggests the power of imposed or induced self-deception—some or, indeed, many subordinate individuals adopting the dominant stereotype regarding themselves. Not all, of course, and the latter presumably are more likely to oppose their subjugation since they are conscious of it. In any case, revolutionary moments often seem to occur in history when large numbers of individuals have a change in consciousness, regarding themselves and their status. Whether there is an accompanying change in IAT is unknown.
FALSE CONFESSIONS, TORTURE, AND FLATTERY
A few more forms of induced self-deception are worth mentioning. It is surprisingly easy to convince people to make false confessions to major crimes even though this may—and often does—result in incarceration for long periods of time. All that is required is a susceptible victim and good old-fashioned police work applied 24/7: isolation of the victim from others, sleep deprivation, coercive interrogation in which denial and refutation are not permitted, false facts provided, and hypothetical stories told—“we have your blood on the murder weapon; perhaps you woke in a state of semiconsciousness and killed your parents without intending to or being aware of it”—with the implication that a confession will end the interrogation when, in fact, it will only begin the suspect’s misery. People differ in how susceptible they are to these pressures and in how much self-deception is eventually induced. Some go on to create false memories to back up their false confessions—with no obvious benefit to themselves.
There is also a kind of imposed self-deception that could be considered defensive self-deception. Consider an individual being tortured. The pain can be so great that something called disassociation occurs—the pain is separated from other mental systems, presumably to reduce its intensity. It is as if the psyche or nervous system protects itself from severe pain by objectifying it, distancing it, and splitting it off from the rest of the system. One can think of this as being imposed by the torturer but also as a defensive reaction permitting immediate survival under most unfavorable circumstances. We know from many, many personal accounts that this is but a temporary solution and that the torture itself and utter helplessness against it endure long afterward as psychological and biological costs. Of course, there are much more modest forms of disassociation from pain than those of torture—such as a mother distracting her child by tickling him or her.
A relatively gentle form of imposed self-deception is flattery, in which the subordinate gains in status by massaging the ego or self-image of the dominant. In royal courts, the sycophant has ample time to study the king, while the latter pays little attention to the former. The king is also presumed to have limited insight into self on general grounds; being dominant, he has less time and motivation to study his own self-deception.
Imposed self-deceptions are sometimes involved in “cons,” deliberate attempts to extract resources through deception (Chapter 8). For example, in one situation, the con artist’s success depended on him inducing in his victim the conviction that they knew each other already. This was accomplished by wrapping his arms around the shoulders of his (male) victim, and saying, “What have you been up to, old bean?” The victim, if deferential, may quickly create a memory of when they might have met, supplying facts that the con artist can use later as evidence that they did indeed know each other.
One form of induced self-deception is widespread and very important. The ability of leaders to induce self-deception in their subjects has had large historical effects. As we shall see in Chapter 10, false historical narratives widely shared within a population can easily be exploited to arouse sentiments in favor of war. At the same time, political success often may turn on the ability of leaders to arouse the belief in people that something is in their self-interest when it is not.
FALSE MEMORIES OF CHILD ABUSE
In the late 1970s and the 1980s, the emerging evidence of the sexual abuse of children and women set off two epidemics of false accusations, with immense costs to innocent people who were either imprisoned or tried for nonexistent crimes, or publicly accused and shamed. All of these consequences were based on the implantation of false memories, a case of imposed self-deception with large social costs.
The two epidemics were linked. One claimed a high incidence of past childhood sexual abuse in women—discovered only through “recovered memory therapy,” a variety of techniques specifically designed to elicit such memories (or create them). Women went to see a therapist for other reasons, with no past memory of abuse, and emerged convinced that they had been subjected to repeated, sustained abuse. Suggestions from the therapist, leading questions, hypnosis in an effort to retrieve the memories—these were some of the tools that managed to instill what turned out to be false memories.
The second epidemic was a natural outgrowth of the first. If so much unsuspected sexual abuse had been going on in the past, then surely it must be continuing in the present. In 1983 in California, teachers at a preschool were accused of the usual sexual abuse of children, but also of subjecting them to Satanic rituals involving the slaughter of pet rabbits, and even subjecting them to an airplane ride where similar activities took place. This was a common feature of both epidemics—you can impose false memories on other people but you cannot keep the newly freed memory from making up whatever it wishes. The increasingly unlikely “memories” eventually led to the collapse of these movements. But not before dozens of communities had gone through the wrenching trauma of learning that their children had been sexually abused, attacked by robots and lobsters, and forced to eat live frogs.
Some people were imprisoned for imaginary abuses, while some innocent parents had to endure the public shame of others believing they had practiced pedophilia on their own children. Alas, there was no lack of clinical psychologists willing to play the fool and testify in court that in their expert opinion, the women and children were telling the truth.
IS SELF-DECEPTION THE PSYCHE’S IMMUNE SYSTEM?
The major alternative view of self-deception that comes out of psychology is that self-deception is defensive, whether against our primitive unconscious urges (the Freudian system) or against attacks on our happiness (social psychology). In the latter view, happiness is treated as an outcome in its own right, a part of our mental health. Thus, it is an outcome worth protecting, and for this purpose we have a “psychological immune system” to protect our mental health just as the actual immune system protects our physical health. Healthy people are happy and optimistic, feel a greater sense of control over their lives, and so on. Since self-deception can sometimes create these effects, it is directly selected to do so. We cook the facts, we bias the logic, we overlook the alternatives—in short, we lie to ourselves. Meanwhile, we apparently have a “reasonability center” that determines just how far we will be permitted to protect our happiness via self-deception (without, for example, looking ridiculous to others or becoming dangerously delusional). Why was evolution unable to produce a more sensible way of regulating such an important emotion as happiness?
Regarding the evidence, of course successful organisms are expected to feel happier, more optimistic, and more in control. They are also more likely to show self-enhancement. Does this mean that the self-enhancement is causing the happiness, optimism, and sense of control? Hardly. Depressed people show much less self-enhancement on common traits than do happier souls—they may even show self-deprecation. This is sometimes used to argue that without self-deception, we would all be depressed. This almost certainly inverts cause and effect. A time of depression is not a good time for self-inflation, especially if this inflation is oriented toward others—depression seems instead better suited to opportunities for self-examination.
Before turning to the imaginary psychological immune system, it is well to remember that the real immune system deals with a major problem common to all of life: that of parasites, organisms that eat us from the inside (see Chapter 6). The immune system uses a variety of direct reality-based molecular mechanisms to attack, disable, engulf, and kill a veritable zoo of invading organisms—thousands of species of viruses, bacteria, fungi, protozoa, and worms—themselves using techniques honed over hundreds of millions of years of intense natural selection. The immune system also stores away an accurate and large library of previous attacks, with the appropriate counterresponse programmed in advance.
By contrast, the psychological immune system works not by fixing what makes us unhappy but by putting it in context, rationalizing it, minimizing it, and lying about it. If the physical immune system worked this way, it would do so by telling you, “Okay, you have a bad cold, but at least you don’t have the flu the fellow down the street has.” Thus, the real psychological immune system must be the one that causes us to go out and fix the problem. Guilt motivates us toward reparative altruism, unhappiness toward efforts to improve our lives to diminish the unhappiness, laughter to appreciate the logical absurdities in life, and so on. Self-deception traps us in the system, offering at best temporary gains while failing to address real problems.
It is true that as a highly social species, we are very sensitive to the actions and opinions of others and can be deeply affected by them—lowering our self-opinion and our happiness—but, again, why adopt something as dubious as self-deception to solve this problem? Note that a defensive view of self-deception is congenial to an inflated moral self-image—I am not lying to myself the better to deceive you, but rather I lie to myself to defend against your attacks on myself and my happiness.
There is some slack in the system. You are also part of your own social world. The eye that beholds you could be your eye studying your own behavior. What does it see? First, your conscious act, then your unconscious self? Let us initially assume so. Can fooling this inner eye help in fooling some other part of yourself, sometimes to your benefit? I believe so. We can also try to suppress painful memories about events we cannot affect. A man’s daughter is murdered by an unknown killer: “When she died, I wrapped her memory in blankets and tried to forget it.” Presumably the recurring painful memory serves no purpose and there is no loss in forgetting. There are also various efforts to mold our consciousness that are not, by definition, self-deceptive. They can involve us in various self-improvement projects, including meditation, prayer, optimism, a sense of purpose, meaning, and control, so-called positive illusions. As we shall see in Chapter 6, one important benefit of such projects is improved immune function. Here I wish to discuss two related examples in some depth: the placebo effect and hypnosis. Both demonstrate that belief can cure.
THE PLACEBO EFFECT
The placebo effect and the benefits of hypnosis, including self-hypnosis, are examples of self-beneficial self-deception that usually requires a third party—a person in a lab coat with a stethoscope in the first case and someone swinging a watch and talking to you in a rhythmic way in the second. The term “placebo effect” refers to the fact that a chemically inert or innocuous substance, administered as if it were a medicine, often produces beneficial—even medicinal—effects. This effect is so consistent and strong that all medical research trials on a new medicine routinely have a placebo control. That is, if you are testing whether a pill helps people with arthritis, you must give an equal number of people a similar-looking pill lacking the key chemical. Only if your medicine works better than the placebo can it be said to have any effect of its own. Of course it would be nice to add a third category to the analysis—no placebo, no medicine—to measure more precisely the placebo effect itself, but doctors have been slow to realize the value of doing this.
What such work does reveal is that a sizable minority of people do not show a placebo effect, while others enjoy strong self-induced effects. This is consistent with what we know about hypnosis, as well as the ability to destroy memory of nonsense material. Presumably this variation is positively associated with the ability to be manipulated by others (indeed, all three examples above involve third-party effects). This suggests that an ability to self-deceive for positive effect is vulnerable to parasitism by others, allowing them to manipulate your suggestibility to their own benefit.
The following effects are very pronounced and demonstrate a clear connection between cost and perceived benefit. The placebo effect is stronger
• the larger the pill,
• the more expensive it is,
• when given in capsule form instead of a pill,
• the more invasive the procedure (an injection works better than a pill, and sham surgery better still),
• the more the patient is active (rubbing in the medicine),
• the more it has side effects, and
• the more the “doctor” looks like one (white lab coat with stethoscope).
The color of pills affects their effectiveness in different situations: white for pain (through association with aspirin?); red, orange, and yellow for stimulation; and blue and green for tranquilizers. Indeed, blue placebos can increase sleep via the blueness alone with probable immediate immune benefits (Chapter 6).
The general rules of the placebo effect are consistent with cognitive dissonance theory (Chapter 7)—the more a person commits to a position, the more he or she needs to rationalize the commitment, and greater rationalization apparently produces greater positive effects. Surgery offers repeated examples of the placebo effect. One of the great classics is the case of angina (heart pain) treated surgically in the United States in the 1960s by a minor chest operation in which two arteries near the heart were fused to (allegedly) increase blood flow to the heart, thereby reducing pain. It did the trick—pain was reduced, patients were happy, and so were the surgeons. Then some scientists did a nice study. They subjected a series of people to the same operation, opening the chest and cutting near the arteries, but they did not join any together. Everyone was sewn up the same way and nobody knew who had received which “operation” when later effects were evaluated. The beneficial effects were identical to those of the original operation. In other words, the entire effect seems to be that of a placebo. The joining of the two arteries had nothing to do with any beneficial effect.
Surgery appears to be unusually prone to placebo effects—presumably because of the great cost and the apparent massing of group support. In any case, some interventions are dubious from the outset and carry the potential for future complications—to be corrected by further surgery—think, for example, of Michael Jackson’s face. So there are built-in incentives for an entire subdiscipline to develop in unhealthy ways. Remunerectomies, for example, are performed solely to remove a patient’s wallet. Consider arthroscopic surgery, meant to correct defects in the knee, often due to osteoarthritis. A small study suggested that sham operations—with all the features of real ones—produced virtually the same benefits as the actual operations, suggesting that these were mainly beneficial as placebos. The actual operations were associated with greater maximum pain than the placebos, presumably because they were more invasive, but for overall level of pain and other measures, the placebo and surgery produced remarkably similar effects.
For effects on pain, the placebo has been studied in some detail, and there is no question that in some individuals, the mere belief that a pain reliever has been received is sufficient to induce the production of endorphins that, in turn, reduce the sensation of pain. That is, what the brain expects to happen in the near future affects its physiological state. It anticipates, and you can gain the benefit of that anticipation. The tendency of Alzheimer’s patients not to experience placebo effects may be related to their inability to anticipate the future.
Expectancy can create strong placebo effects through a mixture of past experiences of genuine medical effects and placebos. As one author has put it:
The medical treatment that people receive can be likened to conditioning trials. The doctor’s white coat, the voice of a caring person, the smell of a hospital or a practice, the prick of a syringe or the swallowing of a pill have all acquired a specific meaning through previous experience, leading to an expectation of pain relief.
Depression seems especially sensitive to the placebo effect. Numerous studies have shown that genuine antidepressants account for about 25 percent of the improvement, while the placebo effect accounts for the remaining 75 percent. Believing you are getting something to help you is more than half the battle. After all, depression is marked by hopelessness, and placebos offer nothing if not hope. I always think about this when I am being given an antidepressant. I am told not to wait for an effect for at least three or four weeks—“it needs to build up.” In other words, expect no direct test of utility anytime soon, and the usual rule of regression to the mean—or, things get better after they have gotten worse—will give you all the evidence you later need. In the meantime, get with the program! The most recent meta-analysis (2010) reveals a striking (and very welcome) fact. Placebos work as well as antidepressants for mild depression, but for severe depression, there is a sharp bifurcation: real medicine shows strong benefits and placebos almost none. This, as we have noted, is a characteristic feature of self-deception directed toward others: a modest amount works, but a great deal fails to impress.
The ability to produce autostimulatory effects is nicely illustrated by work on female sexuality. Women who appear to be sexually dysfunctional in failing to respond orgasmically can be induced to greater arousal by giving them false feedback on the blood flow to their pelvis (a correlate of arousal) to sexual stimuli. They appear to be talking themselves into greater arousal, somewhat like the sight of a man’s own erection may increase his sexual desire.
There is no doubt that placebo effects operate in athletics as well. Trials have shown that cyclists who are told they have been given caffeine (without getting any) respond about half as strongly as those given the caffeine itself (along with word that they are getting it). Merely telling the cyclists they are getting a heavier dose of caffeine produces a stronger positive athletic response. Even that cliché of working out—no pain, no gain—has a built-in placebo effect.
One can even induce a placebo effect out of a placebo effect. That is, you can tell someone with irritable bowel syndrome that he or she will now receive a placebo—an inert chemical with no medicine in it—but then tell the person that the placebo effect is powerful, often involuntary, helped by a positive attitude, and finally, that taking the pills faithfully is critical. With this much helpful verbiage, it is not surprising that a placebo identified as such still produces benefits.
The analogy with religion is strong and tempting. Both involve strong belief. Both involve a series of conditioned associations, including common doctor or pastoral elements. And, indeed, until very recently (up to about five thousand years ago), medicine and religion were one and the same. You can easily imagine that regular religious attendance (especially if the music is good!) would intensify placebo and other immune benefits, just as regular visits to a caring and sensible doctor or adviser might.
A striking feature of placebo effects is that they are highly variable across a population. Typically roughly one-third show very strong effects, perhaps one-third moderate, and one-third none. This is an example of what we have emphasized repeatedly, that the deceit and self-deception system must be an evolving one, with important genetic variation for forms and degree of self-deception. We do not know how much of the variation just mentioned is genetic, but recent work shows that people with depressive disorders differ in the degree to which they show a placebo effect based on particular genes.
What else correlates with a tendency to show a placebo effect? For one thing, suggestibility, as in ease of being hypnotized, is a trait that also shows high variability, some people being highly resistant and others easily manipulated. It should hardly surprise us that ease of being hypnotized and the placebo reaction co-vary strongly and positively. Each is a kind of self-deception requiring a third party, a hypnotist or “doctor.” When people are divided into those who are easily hypnotized and those who are not, hypnotizing the susceptible to concentrate only on the ink color in which words are printed in the Stroop test (naming the ink color of words that themselves denote different colors) causes them to show no interference from the words themselves. But people who are not susceptible show no improvement on the Stroop test. This, then, is a benefit of ease of being hypnotized: greater ability to concentrate or tolerate cognitive load.
We began this chapter with the illusion of conscious control. We then moved successively into deeper and subtler forms of external control—imposed self-deception in general, torture with its dissociations, false accusations of others and of self, the placebo effect, and hypnosis. It would now be valuable to tie these kinds of conflicts into our two major social relationships: the family (Chapter 4) and the two sexes (Chapter 5). When do we impose self-deceptions on family members and on sexual partners, and when and how are these imposed on us?