The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life - Robert Trivers (2011)

Chapter 7. The Psychology of Self-Deception


How do we achieve our various self-deceptions? If not in precise mechanistic terms, then in psychological ones, what are the psychological processes that help us achieve self-deception? We both seek out information and act to destroy it, but when do we do which and how do we do it? To give an answer to this, we need to trace the flow of information from the moment it arrives until the moment it leaves, that is, is represented to others. From the “rooter to the tooter,” as we say for pigs. At every single stage—from its biased arrival, to its biased encoding, to organizing it around false logic, to misremembering and then misrepresenting it to others—the mind continually acts to distort information flow in favor of the usual good goal of appearing better than one really is—beneffective to others, for example. Misrepresentation of self to others is believed to be the primary force behind misrepresentation of self to self. This is way beyond simple computational error, the problems of subsampling from larger samples, or valid systems of logic that occasionally go awry. This is self-deception, a series of biasing procedures that affect every aspect of information acquisition and analysis. It is systematic deformation of the truth at each stage of the psychological process. This is why psychology is both the study of information acquisition and analysis and also the study of its continual degradation and destruction.

One important fact is worth stressing at the outset. Self-deception does not require that the truth and falsehood regarding something be simultaneously stored—as in our example of voice recognition (Chapter 3). Falsehood alone may be stored. As we saw for the old-age positivity bias (Chapter 6), the earlier the information is shunted aside—or indeed entirely avoided—the less storage of truth occurs and the less need there will be for (potentially costly) suppression later on. At the same time, since less information is stored, there are greater potential costs associated with complete ignorance. As time after acquisition increases, the choice between suppressing and retaining the truth should be more subtle and complex. The study of exactly how these conflicting forces have played out over time is a completely open field whose exploration will be most revealing.

In what follows, I begin with a review of some of the biasing that takes place during information processing. This is by no means an exhaustive look but more an impressionistic one of the ways in which various psychological processes support a deceptive function. This may include biases in predicting future feelings. Especially important are the roles of denial, projection, and cognitive dissonance in molding deceit and self-deception.

AVOIDING SOME INFORMATION AND SEEKING OUT OTHER

However much we champion freedom of thought, we actually spend much of our time censoring input. We seek out publications that mirror or support our prior views and largely avoid those that don’t. If I see yet another article suggesting the medical benefits of marijuana, you can trust me to give it a careful read; an article on its health hazards is worth at best a quick glance. Regarding tobacco, I couldn’t care less. The scientific facts were established decades ago and it has been years since my last cigarette. So this bias in my attention span is both directly adaptive—I smoke marijuana, so I am interested in its effects—and serves self-deception, because I hype the positive and neglect the negative, the better to defend the behavior from my own inspection and that of others.

A lab experiment measured this kind of bias precisely by confronting people with the chance that they might have a tendency toward a serious medical condition and telling them a simple test would suggest whether they were vulnerable. If they applied their saliva to a strip of material and it changed color, this indicated either vulnerability or not (depending on experimental group). People led to believe that a color change was good looked at the strip 60 percent longer than did those who thought it would be bad (actually the strip never changed color). In another experiment, people listened to a tape describing the dangers of smoking, while being asked to pay attention to content. Meanwhile, there was some background static and the subjects had the option of decreasing its volume. Smokers chose not to decrease the static, while nonsmokers lowered the level, the better to hear what was being said.

Some people avoid taking HIV and other diagnostic tests, the better not to hear bad news. “What I don’t know can’t hurt me.” As expected, this is especially likely when little or nothing can be done either way. It is also not surprising that those who feel more secure about themselves are more willing to consider negative information. In short, we actively avoid learning negative information about ourselves, especially when it can’t lead to any useful counteraction and when we feel otherwise insecure about ourselves. Self-deception is here acting in service of maintaining and projecting a positive self-view.

In many situations, we can choose what to concentrate on. At a cocktail party, we could overhear two conversations. Depending on which views we wish to hear, we may attend to one conversation instead of the other. We are likely to be aware of the general tenor of the information we are avoiding but none of its details, so here again biased processes of information-gathering may work early enough to leave no information at all that may later need to be hidden. In one experiment, people were convinced that they were likely—or highly unlikely—to be chosen for a prospective date. If yes, they spent slightly more time studying the positive rather than negative attributes of the prospective date, but if no, they spent more time looking at the negative, as if already rationalizing their pending disappointment.

BIASED ENCODING AND INTERPRETATION OF INFORMATION

Assuming we do attend to incoming information, we can still do so in a biased way. In one experiment, people looked at a figure that could be either a capital B or the number 13 (or a horse or a seal) and were told the stimulus could be either a letter or a number (or a farm animal or an ocean animal). Having been given differential food rewards for the general categories ahead of time, people quickly developed a sharp perceptual bias in the appropriate direction on items presented for only four hundred milliseconds, that is, ones just reaching consciousness. Eye tracking showed that the first look was usually toward the preferred category (about 60 percent of the time). These studies suggest that the impact of motivation on information processing extends to preconscious processing of visual stimuli and thus guides what the visual system presents to conscious awareness. Similar work has now been done using colors.

The point is that our perceptual systems are set up to orient very quickly toward preferred information—in this case, shapes associated with food rewards. This itself has nothing to do with deceit and self-deception—it will often give direct benefits. But the same quick-biasing procedure is available to us when the information is preferred because it boosts our self-esteem, or our ability to fool others. There are few more powerful forces in the service of self-deception than personal fantasies, so when these are aroused, selective attention is expected to be especially intense.

A related effect was shown sixty years ago: hungrier children, asked to draw a coin, drew it larger. Instruments for gaining satisfaction (money buys food) are more attractive and are perceived as being larger. Recent work confirms that a glass appears larger when you are thirstier, especially if attention is called to your thirst, and even garden implements appear larger if gardening has been subliminally linked to suggestions that it is fun.

Our initial biases may have surprisingly strong effects. In one experiment, people were preselected for strong attitudes for and against capital punishment. They were then presented with a mixed bag of facts supporting both positions. Instead of drawing the two sides together, this split the group more sharply. Those who were already against capital punishment now had a new set of arguments at hand, and vice versa. Biased interpretation drove the process. Those in favor of capital punishment accepted pro arguments as sound and rejected anti arguments as unsound. As before, self-affirming thoughts were negatively associated with this behavior—think better of yourself and you practice less self-deception. One important implication is that self-deception is a force that often drives people apart—certainly friends, lovers, neighbors—although under common group aims, such as war, shared self-deceptions are also uniquely powerful in binding people together.

BIASED MEMORY

There are also many processes of memory that can be biased to produce welcome results. We more easily remember positive information about ourselves and either forget the negative or, with time, transmute it to be neutral or even positive. Differential rehearsal, as in telling others, can itself produce the effect, an example of self-deception at the end of the process (the “tooter”) affecting earlier processes. Complementary memory biases may actively work in the same direction. When given a “skills class,” people remember their skills prior to the class as being worse than they rated them at the time, probably to create an illusion of progress. They later misremember their actual performance after the class as being better than it was, presumably in service of the same delusion. In effect, we construct a consistent, self-serving picture from a series of biased memories.

Memories are continually distorted in self-serving ways. Men and women both remember having fewer sexual partners, and more sex with each partner, than was actually true. People likewise remember voting in elections in which they did not vote and giving to charity when they did not. If they did vote, they remember supporting the winning candidate rather than the one they actually voted for. They remember their children as being more precocious and talented than they were. And so on.

Although people often think of memory as a photo whose sharpness gradually degrades with time, we know that memory is both reconstructive and easily manipulated. That is, people continually re-create their own memories, and it is relatively easy to affect this process in another person. If a police officer asks a witness about a nonexistent red sports car right near an accident, the officer will often learn about the red sports car in subsequent questioning—it can sometimes end up as one of the most vividly remembered details of the accident itself. As mentioned, differential rehearsal of material after the fact can produce reliable biases in memory.

Take another example. Health information can easily be distorted in memory even when it is presented in a clear and memorable fashion. People were given a cholesterol screening and then tested one, three, and six months later for their memory of the result. Respondents usually (89 percent) recalled their risk category accurately and their memory did not decay with time, but more than twice as many people misremembered their cholesterol level as lower than it actually was rather than higher. This same kind of memory bias is true of daily experiences, in which people recall their good behavior more easily than their bad but show no such bias in recalling the behavior of others.

Or we can invent completely fictitious memories. As has been said, “My memory is so good I can remember things that never happened.” One case is memorable in my own life. For many years I told the story of how in 1968 I went deep into the bowels of Harvard’s Widener Library to find a book coauthored by my father in 1948, published by the State Department, which laid out the de-Nazification procedures for all Nazis too unimportant to be hanged at Nuremberg. It was a complex system of graded steps. If you were a member of the SA, two slaps on the wrist; if of the SS, you lost your job for five years—that kind of thing. Yet almost none of this is true. No such book exists. Yes, the trip to the bowels took place and a book on Nazis was located with my father as a coauthor and it was published by the State Department. Only it was published in 1943 and is a minor piece on the structure of Nazi organizations in Nazi-occupied territories. Hardly the basis for the reinvention of Germany, but is this not the point of false memory—to improve things, especially appearances? I added nice little touches along the way. I liked to say that I trusted no one, including myself, and thus went to the bowels to see whether this family story was true. But this only added to the falsehood, since there really was no “family story” about this minor 1943 work. It also illustrates a general feature of false-memory construction: new details are added that support the general argument and then become part of the memory.

One can even reverse exactly who is saying what to whom. Gore Vidal remembers an interview with Tom Brokaw on NBC’s morning Today Show in which Brokaw began by asking about Vidal’s writings on bisexuality, to which Vidal replied that he was there to talk politics. Brokaw persisted with bisexuality; Vidal stood firm until they concentrated on politics. Yet years later, when Brokaw was asked what his most difficult interview had been, he cited his interview with Vidal. Reason: Vidal kept insisting they talk about bisexuality when all he wanted to discuss was politics. Positions exactly reversed—and, as expected, in the service of self-improvement: Brokaw looks better being interested in politics than in bisexuality.

In arguments with other people, lab work shows that we naturally tend to remember the good arguments on our side and the poor ones on the other, and to forget our own weak arguments and the other side’s strong ones. This bolsters our own side and image, of course, which presumably is its function. Memory distortions are more powerful the more they are motivated to maintain our self-esteem, to excuse failures or bad decisions, and to push the causes of current problems deeper into the past. Thus, most people maintain the illusion of improvement, where such mistakes as must be acknowledged can at least be attributed to the failings of an earlier version of oneself.

RATIONALIZATION AND BIASED REPORTING

We reconstruct internal motives and narratives to rationalize otherwise bad or questionable behavior. We can attribute behavior to external contingencies rather than internal ones, thereby helping defend ourselves. So general beliefs that cheating is not bad—or that it is unintentional or occurs in a world without free will—all serve to rationalize our cheating, as indeed they do.

Biases show up in unexpected places, even when there are no clear benefits or costs at issue. The classic experiment in this domain was beautifully designed to put people in an awkward situation with one of two escapes. People were offered the chance to sit next to a crippled person or one who was not. Each was watching a television set in front of him or her. Sometimes the two TVs had the same show, sometimes different ones. When it was the same show, people preferentially chose to sit next to the handicapped person, as if demonstrating their lack of bias, but if the two TVs had different shows, people chose to sit away from the crippled person, as if now having a justification (more interesting show) for an otherwise arbitrary choice. Similarly, a meta-analysis of many studies shows that white Americans choose to help black Americans more or less equally (compared to helping whites) but not when they can rationalize less helping on grounds such as distance or risk. Here people are not denying or misremembering their behavior—rather, they are denying the underlying intention and rationalizing it as the product of external forces. This has the advantage of reducing their responsibility for behavior performed.

A belief in determinism can provide a ready excuse for misbehavior, just as can unconsciousness: the “I had no choice” defense. Relatively deterministic views of human behavior may provide some cover for socially malevolent behavior. Experimentally inducing a deterministic view (reading an essay on how genes and environment together determine human behavior) increases cheating on a computer-based task that permits cryptic cheating. What this work shows is that by manipulating a variable that reduces personal responsibility, we easily induce immoral behavior in ourselves (at least as viewed by others).

PREDICTING FUTURE FEELINGS

It is an interesting fact that we show systematic biases in predicting our own future feelings, under the general rule that what we are feeling at present will extend into the future. When imagining a good outcome, we overestimate our future happiness, and vice versa for a bad one. It is as if we assay our current feelings and then project them into the future. We do not imagine that we will “regress to the mean,” that is, return naturally to our average level of happiness. We do not assume that we will be less happy in the future than we are now when we are up, or happier when we are currently down. Thus, one week after the 2004 US presidential election, Kerry supporters were less dejected than they thought they would be and Bush supporters, less ecstatic.

There is evidence that we make similar mistakes when trying to predict the feelings of others, whether friends or strangers. We overestimate the effect of an emotional event on their future feelings, much as we do for ourselves. Indeed, our forecasting of them is positively correlated with their own, but neither is very predictive of the future.

The problem is in the interpretation. Some see this as a form of self-deception in which we are unconscious of the degree to which our system of self-deception will readjust our thinking in the future. I doubt this. We project easily into the future because it expresses our current emotional state. Verbal predictions regarding our future mental states may be a relatively recent invention with limited selective effects. The relevant trade-offs are already built into our behavior whatever our verbal predictions.

Certain exceptions to this rule also stand out. I remember “courting” a Nigerian beauty at a very safe distance at a club in Amsterdam for three hours without ever having the courage to approach her. When she left, she threw me a look of withering contempt that burned right into my soul. If a social psychologist had been there to measure my “affective forecasting,” I doubt I would have guessed that twenty-five years later, the memory still sears in my consciousness. I believe I would have predicted that within a year or two the whole evening would have been completely forgotten.

ARE ALL BIASES DUE TO SELF-DECEPTION?

A hallmark of self-deception is bias. Mere computational error is not enough. Such error is often randomly distributed around the truth and shows no particular pattern. Self-deception produces biases, patterns where the data point in one direction—usually that of self-enhancement or self-justification. Are there biases that are real but not driven by self-deception? Of course there are.

Consider the following. Sounds that are coming toward us are perceived as closer and louder than they really are, while the opposite is true for receding sounds. This is a bias and it has a perfectly good explanation. Approaching objects are inherently more dangerous than are receding ones—hence the value of earlier and more acute detection. Perhaps the organism is measuring distances in Darwinian units rather than Newtonian ones. From that viewpoint, there is no bias.

Or consider another example. From the top of a tree, the drop to the ground looks much farther than does the same distance viewed from the ground up. There is no social component to these biases. You are directly saving yourself—not trying to manipulate the opinions of others. Many other errors have similarly innocent explanations. Some are simple optical illusions, holes in our sensory system that produce startling biases under particular conditions. Others are general rules that work well in most situations but fail badly in some.

Of course the errors we make are very numerous. In the words of one psychologist, we can fall short, overreach, skitter off the edge, miss by a mile, take our eyes off the prize, or throw the baby out with the bathwater. And we can exaggerate our accomplishments, diminish our defects, and do the reverse for those of others. Many of these may serve self-deceptive functions but not all. Sometimes when we take our eyes off the prize, we have only been momentarily distracted; sometimes when we miss by a mile we have only (badly) miscalculated. At other times, it is precisely our intention to throw out the baby with the bathwater or to miss by a mile, so in principle we have to scrutinize our biases to see which ones serve the usual goal of self-enhancement or, in some other fashion, deception of others, and which ones subserve the function of rational calculation in our direct self-interest.

DENIAL AND PROJECTION

Denial and projection are fundamental psychological processes—the deletion (or negation) of reality and the creation of new reality. The one virtually requires the other. Projecting reality may require deleting some, while denial tends to create a hole in reality that needs to be filled. For example, denial of personal malfeasance may by necessity require projection onto someone else. Once years ago while driving I took a corner too sharply and my one-year-old baby fell over in the backseat and started to cry. I heard myself harshly berating her nine-year-old sister (my stepdaughter) for not supporting her—as if she should know by now that I like to take my corners on two wheels. The very harshness of my voice served to signal that something was amiss. Surely the child’s responsibility in this misdemeanor was, at most, 10 percent, the remaining 90 percent lying with me, but since I was denying my own portion, she had to endure a tenfold increase in hers. It is as if there is a “responsibility equation” such that decrease of one portion must necessarily be matched by an increase elsewhere.

A rather more serious example of denial and projection concerns 9/11. Any major disaster has multiple causes and multiple responsible parties. There’s nothing wrong with assigning the lion’s share of cause and responsibility to Osama bin Laden and his men, but what about creating a larger picture that looks back over time and includes us (US citizens) in the model, not so much directly causing it as failing to prevent it? If we were capable of self-criticism, what would we admit to? How did we, however indirectly, contribute to this disaster? Surely through repeated inattention to airline safety (see Chapter 9) but also in our foreign policy.

This final admission is often hardest to make and is almost never made publicly, but sensible societies sometimes guide behavior after the fact in a useful way. It is easy for personal biases to affect one’s answer here, but I will set out what seem to me to be obvious questions. To wit, are there no legitimate grievances against the United States and its reckless and sometimes genocidal (Cambodia, Central America) foreign policy in the past fifty years? Is there any chance that our blind backing of Israel—like all our “client states,” right or wrong, you’re our boys—has unleashed some legitimate anger elsewhere, among, say, Palestinians, Lebanese, Syrians, and those who identify with them or with justice itself? In other words, is 9/11 a signal to us that perhaps we should look at our foreign policy more critically and from the viewpoint of multiple others, not just the usual favored few? One need not mention this in public but can start to make small adjustments in private. Again, the larger message is that exterminating one’s enemies is not the only useful counterresponse to their actions, but becomes so if one’s own responsibility is completely denied and self-criticism aborted.

DENIAL IS SELF-REINFORCING

Denial is also self-reinforcing—once you make that first denial, you tend to commit to it: you will deny, deny the denial, deny that, and so on. In the voice-recognition experiments, not only do deniers deny their own voice, they also deny the denial. A person decides that an article on which he is a coauthor is not fraudulent. To do so, he must deny the first wave of incoming evidence, as he duly does. Then comes the second wave. Cave in? Admit fault and cut his losses? Not too likely. Not when he can deny once more and perhaps cite new evidence in support of denial—evidence to which he becomes attached in the next round. He is doubling down at each turn—double or nothing—and as nothing is what he would have gotten at the very beginning, with no cost, he is tempted to justify each prior mistake by doubling down again. Denial leads to denial, with potential costs mounting at each turn.

In trading stock, the three most important rules are “cut your losses, cut your losses, and cut your losses.” This is difficult to do because there is natural resistance. Benefits are nice; we like to enjoy them. But to do so, we must sell a stock after it has risen in value; then we can enjoy the profit. By the same token, we are loss averse. Loss feels bad and is to be avoided. One way to avoid a cost is to hold the stock after it has fallen—the loss is only on paper and the stock may soon rebound. Of course, as it sinks lower, one may wish to hold it longer. This style of trading eventually puts one in a most unenviable position, holding a portfolio of losers. Indeed, this is exactly what happens. People trading on their own tend to sell good stocks, buy less good ones, and hold on to their bad ones. Instead, “cut your losses, cut your losses, cut your losses.”

YOUR AGGRESSION, MY SELF-DEFENSE

One of the most common cases of denial coupled with projection concerns aggression—who is responsible for the fight? By adding one earlier action by the other party, we can always push causality back one link, and memory is notoriously weak when it comes to chronological order.

An analogy can be found in animal species that have evolved to create the illusion that they are oriented 180 degrees in the opposite direction and are moving backward instead of forward. For example, a beetle has its very long antennae slung underneath its body so they protrude out the back end, creating the illusion of a head. When attacked, usually at the apparent “head” end (that is, the tail), it rushes straight forward—exactly the opposite of what is expected, helping it to escape. Likewise, there are fish with two large, false eyespots on the rear end of their body, creating the illusion that the head is located there. The fish feed at the bottom, moving slowly backward, but again, when attacked at the apparent “head” end, take off rapidly in the opposite direction. What is notable here is that the opposite of the truth (180 degrees) is more plausible than a smaller deviation from the truth (say, a 20-degree difference in angle of motion). And so also in human arguments. Is this an unprovoked attack or a defensive response to an unprovoked attack? Is causation going in this direction, or 180 degrees opposite? “Mommy, he started it.” “Mommy, she did.”

COGNITIVE DISSONANCE AND SELF-JUSTIFICATION

Cognitive dissonance refers to an internal psychological contradiction that is experienced as a state of tension or discomfort ranging from minor pangs to deep anguish. Thus, people will often act to reduce cognitive dissonance. The individual is seen to hold two cognitions—ideas, attitudes, or beliefs—that are inconsistent: “Smoking will kill you, and I smoke two packs a day.” The contradiction could be resolved by stopping cigarettes or by rationalizing their use: “They relax me, and they prevent weight gain.” Most people jump to the latter task, generating self-justifications rather than facing the much more difficult (if healthier) choice. But sometimes there is only one choice, because the cost has already been suffered: you can rationalize it or live with the truth.

Take a classic case. Subjects were split into two groups, one comprising people who would endure a painful or embarrassing test to join a group and the other comprising people who would pay only a modest fee. Then each was asked to evaluate the group based on a tape of a group discussion arranged to be as dull and near-incoherent as possible. Those who suffered the higher cost evaluated the group more positively than did those who paid the small entry fee. And the effect is strong. The low-cost people rated the discussion as dull and worthless and the people as unappealing and boring. This is roughly how the tape was designed to appear. By contrast, those who paid the high cost (reading sexually explicit material aloud in an embarrassing situation) claimed to find the discussion interesting and exciting and the people attractive and sharp.

How does that make sense? According to the prevailing orthodoxy, less pain, more gain, and the mind should measure accordingly. What we find is: more pain, more post-hoc rationalization to increase the apparent benefit of the pain. The cost is already gone, and you cannot get it back, but you can create an illusion that the cost was not so great or the return benefit greater. You can choose, in effect, to get that cost back psychologically, and that is exactly what most people do. This particular experiment has been replicated many times with the same result. But it is still not quite clear why this makes sense. Certainly it works in the service of consistency—since you suffered a larger cost, it must have been for a larger benefit. People can be surprisingly unconscious of this effect in their own behavior. Even when the experiment is fully explained and the evidence of individual bias demonstrated, people see that the general result is true but claim that it does not apply to them. They take an internal view of their own behavior, in which lack of consciousness of the manipulating factor means it is not a manipulating factor.

The need to reduce cognitive dissonance also strongly affects our reaction to new information. We like our biases confirmed and we are willing to manipulate and ignore incoming information to bring about that blessed state. This is so regular and strong as to have a name—the confirmation bias. In the words of one British politician, “I will look at any additional evidence to confirm the opinion to which I have already come.”

So powerful is our tendency to rationalize that negative evidence is often immediately greeted with criticism, distortion, and dismissal so that not much dissonance need be suffered, nor change of opinion required. President Franklin Roosevelt uprooted more than a hundred thousand Japanese Americans, most of them US citizens, and interned them for the remainder of World War II, all based on anticipation of possible disloyalty for which no evidence was ever produced except the following classic from a US general: “The very fact that no sabotage has taken place is a disturbing and confirming indication that such action will be taken.”

Supplying a balanced set of information to those with divergent views on a subject, as we saw earlier in the case of capital punishment, does not necessarily bring the two sides closer together; quite the contrary. Facts counter to one’s biases have a way of arousing one’s biases. This can lead to those with strong biases being both the least informed and the most certain in their ignorance. In one experiment, people were fed politically congenial misinformation followed by an immediate correction. Most people believed the misinformation more strongly after the correction.

One important trigger of cognitive dissonance reduction is a decision that can no longer be changed, which invites post-hoc rationalization. When women are asked to rank a set of household appliances in terms of attractiveness and then offered a choice between two appliances they have ranked equally attractive, they later rank the one they chose as more attractive than the one they rejected, apparently based solely on ownership. A very simple study showing how people value items more strongly after they have committed to them focused on people buying tickets at a racetrack. Right after they bought their ticket, they were much more confident that it was a good choice than while waiting in line with the intention of buying the same ticket. One upshot of this effect is that people like items more when they cannot return them than when they can, despite the fact that they say they like the option to return items.

A bizarre and extreme case of cognitive dissonance reduction occurs in men sentenced to life imprisonment without the possibility of parole for a crime—let us say a spousal murder committed with repeated knife blows. Surprisingly few will admit that the initial act was a mistake. Quite the contrary: they may be aggressive in its defense. “I would do it again in a second; she deserved everything she got.” It is difficult for them to resist reliving the crime, fantasizing again about the victim’s terror, pain, unanswered screams for help, and so on. They are justifying something with horribly negative consequences (now for themselves as well) that they cannot change. Their fate is instead to relive the pleasures of the original mistake, over and over again.

SOCIAL EFFECTS OF COGNITIVE DISSONANCE REDUCTION

The tendency of cognitive dissonance resolution to drive different individuals apart has been described in terms of a pyramid. Two individuals can begin very close on a subject—at the top of a pyramid, so to speak—but as contradictory forces of cognitive dissonance come into play and self-justification ensues, they may slide down the pyramid in different directions, emerging far apart at the bottom. As two experts on the subject put it:

We make an early, apparently inconsequential decision and then we justify it to reduce the ambiguity of the approach. This starts a process of entrapment—action, justification, further action—that increases our intensity and commitment and may take us far from our original intentions or principles.

As we saw in Chapter 5, this process may be an important force driving married couples toward divorce rather than reconciliation. What determines the degree to which any given individual is prone to move down the pyramid when given the choice is a very important (unanswered) question.

A novel implication of cognitive dissonance concerns the best way to turn a possible foe into a friend. One might think that giving a gift to another would be the best way to start a relationship of mutual giving and cooperation. But it is the other way around—getting the other person to give you a gift is often the better way of inducing positive feelings toward you, if for no other reason than to justify the initial gift. This has been shown experimentally: subjects cajoled into giving a person a gift later rate that person more highly than do those not so cajoled. The following folk expression from more than two hundred years ago captures the counterintuitive form of the argument (given reciprocal altruism):

He that has once done you a kindness
will be more ready to do you another
than he whom you yourself have obliged.

COGNITIVE DISSONANCE IN MONKEYS AND YOUNG CHILDREN

It is of some interest to know whether animals show cognitive dissonance and at what age children show such effects. Birds often show the human bias of preferring items for which they have worked harder (in their case, food) over identical items obtained with less work. The same is sometimes true of rats.

A more novel set of experiments shows that when a monkey is forced to choose between two items it is equally fond of (say, picking a blue M&M over a red one), it will then prefer another color (say, a yellow M&M) over the one it just rejected (red), as if needing consistency. That is, having rejected red once, to remain consistent it must do so again. But if the initial choice is made by the human experimenter (blue over red), this either has no effect on the monkey’s subsequent choice or the monkey then chooses the one the human kept for itself, as if this must be the better one.

Nearly identical experiments run on four-year-olds produce nearly identical results. When the children are forced to choose between two equivalent objects, they continue to reject the one they rejected the first time, as if staying true to themselves. That is, having rejected one, the child acts as if there must have been a good reason and rejects it again. This occurs even if the child does not see which item it chose until after having made its choice. Once again, as with the monkeys, when the experimenter makes the choice instead of the child, this either has no effect on how the child chooses or it chooses the one the experimenter kept for itself, as if this must be the better one.

In short, though there are only a few studies of cognitive dissonance in other animals and in children, they tend to give similar results: each party acts as if it is rationalizing its prior choice as having been based on sound logic and hence worth repeating when given the same opportunity. Given the theory advanced in this book, it is tempting to argue that the children and the monkeys may be projecting a general illusion of consistency to impress others.

By now we have laid the foundations for an understanding of the evolution, biology, and psychology of self-deception. We can now apply our logic to everyday life, including airplane crashes, historical narratives, warfare, religion, other intellectual systems, and our own lives. The applications extend in all directions.