The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life - Robert Trivers (2011)

Chapter 8. Self-Deception in Everyday Life


The logic we have been developing applies with full force to everyday life—so much so that its validity can, in part, be tested there. How much does our system of thought help us understand our lives? What interesting facts of everyday life are completely hidden from us until research or logic reveals them? Some biases in our thinking have been studied in surprising detail, and others are known only from anecdotes. I begin with the study of the stock market and what it reveals about sex differences in overconfidence, as well as the unconscious use of language to hype the upside of the market—that is, to encourage trading.


Overconfidence must—in competitive situations—sometimes give an advantage, but insofar as it induces risky and ultimately unprofitable behavior, it must also have costs. Clearly our confidence in ourselves is an important variable in many situations affecting and predicting our behavior. Others would do well to attend to our self-confidence—that is, if they can measure it accurately. After all, they may just have met you, but you have known yourself all your life. So we expect overconfidence on deceptive grounds alone (see Chapter 1). In general, across many species, including our own, males are more likely to profit from overconfidence than are females. Certainly their potential reproductive success is usually higher (because males usually invest less per offspring), so the payoff for successful overconfidence is likely to be higher as well (see Chapter 5).

Stock trading by amateurs (via computer-placed orders) provides a nice situation from daily life to study the bias. Competitive interactions are at a minimum—your overconfidence is not directly affecting any of the other investors you are competing against, none of whom knows you—so with no benefits from overconfidence, costs are expected to dominate. Under perfect information, stock prices are at their true value, so that trading produces random effects. Under mildly imperfect information, prices are close to true values, so that trading produces near-random direct effects. But trading is costly, as you pay a fee for every trade. Given these facts, it is clear that there is substantial overtrading in the US stock markets. Nearly 100 percent of stocks change hands every month and five billion are traded per day (2007). Given the cost of each trade, the net effect of this level of trading is negative. To cite but one example, in the general population, males trade stocks more often than do females (45 percent more in one sample), and they suffer accordingly: a 2.7 percent annual loss in returns compared with a 1.7 percent loss for females. The sex difference probably reflects the greater reproductive returns to financial success for males than for females, an upside bias that is expected in many male activities given their greater chance in general to achieve especially high reproduction.
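The arithmetic behind this claim is worth making explicit. If prices follow a random walk, the expected gross return of any trade is zero, so the expected net return is simply the negative of the accumulated fees—whoever trades more, loses more. A toy simulation sketches the point (the function name, fee rate, and trade counts are illustrative assumptions, not figures from the studies cited):

```python
import random

def expected_net_return(trades_per_year, fee_rate, n_sims=20000, seed=1):
    """Average net return of a trader who repeatedly swaps holdings
    on a zero-drift random walk. Gross returns average zero, so the
    expected net return is just the negative of accumulated fees."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        wealth = 1.0
        for _ in range(trades_per_year):
            wealth *= 1.0 + rng.gauss(0.0, 0.02)  # random price move
            wealth *= 1.0 - fee_rate              # fee paid on every trade
        total += wealth - 1.0
    return total / n_sims
```

Under these assumptions, a trader who trades 45 percent more often pays roughly 45 percent more in fees and ends up with a correspondingly worse net return, mirroring the 2.7 versus 1.7 percent gap in annual returns.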

One work was notable for studying multiple kinds of overconfidence as possible correlates of trading volume. The key correlate of overconfidence turned out to be the good old “above-average effect.” The average investor rated him- or herself above average in ability and past performance. And the more an individual did so, the more he or she traded, even though there was no correlation with actual past performance. The result was more trading with no average gain and an average loss due entirely to the transaction costs. Believing that there is more information than in fact there is—that is, underestimating the variance of the signal—was not correlated with trading activity; only overestimation of self was.

Overconfidence in currency markets provides a nice contrast. Here transaction costs are negligible (about one-hundredth of the 3 percent stock cost), so there is no immediate downside to overtrading. There is a widespread tendency for professional traders to overestimate their success and their ability to forecast correctly. Overconfidence has no effect on profitability (expected given negligible transaction costs), but there are positive social correlates. Overconfidence is positively associated with individual rank and trading experience. Here cause and effect are by no means certain, since in other domains it is well known that people of superior rank and age show higher confidence, with no superiority in actual performance.

Greater male than female overconfidence has been detected in studies of arithmetic contests. An individual can either be paid piecemeal (50 cents per correct answer adding sets of five numbers for five minutes) or in competition with three others, winner take all: $2 per correct answer for the highest scorer, nothing for the other three. Under perfect information about one’s relative skill, the top one-fourth should choose to compete and the rest should choose to work piecemeal. This is far from what happens: 35 percent of women choose to compete, close to expected, but fully 75 percent of men choose to compete in a task in which, on average, only 25 percent can win. Overall, when matched for ability, women have a 38 percent lower chance of deciding to compete. This means that on the upper end of ability, women undercompete, and on the lower end, men greatly overcompete. This is yet another example of a degree of self-deception—here in the form of overconfidence—having a positive effect under certain circumstances and negative under others, the net effect being negative.
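The expected-value logic behind “the top one-fourth should compete” is simple: piecemeal pays 50 cents per correct answer for certain, while winner-take-all pays $2 per correct answer only if you beat the other three, so the break-even winning probability is 0.50/2.00 = 0.25. A sketch (the function is hypothetical and assumes the same score under either payment scheme):

```python
def tournament_beats_piecemeal(win_prob, answers=10):
    """Compare expected pay: $0.50 per answer guaranteed,
    versus $2.00 per answer only if you beat the other three."""
    piecemeal = 0.50 * answers
    tournament = 2.00 * answers * win_prob
    return tournament > piecemeal

# Indifference point: 2.00 * p = 0.50, i.e. p = 0.25 --
# only the top quarter should rationally choose to compete.
```

Against this benchmark, 75 percent of men choosing to compete means that fully half of the male entrants are competing at an expected loss.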

Another cause of misbehavior in stock trading is a tendency toward thrill seeking. Like those who are overconfident, those who have a special need for thrills tend to trade more often to their own disadvantage, and this is independent of overconfidence. Men, in turn, are vastly overrepresented among thrill seekers, at least as measured by speeding tickets, drug use, gambling, and participation in dangerous sports (such as hang gliding). In Finland, those with more speeding tickets trade more often to their own disadvantage. What the advantage is to the thrill seeking has not been measured, but it probably has to do with showing off—the stunt properly executed may be a sight worth recounting.


A nice example of unconscious persuasion concerns metaphors about the stock market taken from daily news broadcasts. The stock market moves up or down in response to a great range of variables, about most of which we are completely ignorant. The movement mirrors a random walk, with no particular pattern. And yet at the end of the day, its movements are described by the media in two kinds of language (agent or object) that are often used for movement more generally. The average listener will be completely unconscious of the metaphors being used. The key distinction is whether an agent controls the movement of something or it is an object moved by outside forces (such as gravity). Here are examples of the agent metaphor for stock movements: “the NASDAQ climbed higher,” “the Dow fought its way upward,” “the S&P dove like a hawk.” The object metaphors sound more like: “the NASDAQ dropped off a cliff,” “the S&P bounced back.”

Agent metaphors tempt us to think that a trend will continue; object ones do not. The interesting point is that there is a systematic bias in the use of the language—up trends are more the action of agents, while down trends are externally caused. Both of these metaphors are stronger for movement that is consistent, and the bias exists whether reporting is occurring after a long up market or a long down market. Even experimental student commentators unconsciously adopt the appropriate bias: agent for the up trends, external factors for the down. Now here is the average upward bias. The more a market moves up during a day, the more it is given an agent metaphor that, in turn, (unconsciously) suggests continued upward movement. Since the opposite is true for down days—less agent metaphor, less expectation of continued downward movement—the net effect is positive. Investment information should lead to more investment, on average. Surely the effect of this bias in media language is to encourage investment overall, just as supplying information about the day’s trends instead of merely reporting them (up or down) gives a greater expectation of a trend and hence greater trading after up movement, at greater net loss (there is a cost to trading and no benefit during a random walk). Perhaps the function of the financial commentators in the first place (from the standpoint of those who employ them) is to hype interest in the market.


The use of metaphor is a key part of language, structuring meaning by embedding more abstract concepts in day-to-day events—such as moving into new spaces at a given rate, and so on. Metaphor often flies just below radar and may have important unconscious effects. Euphemisms, for example, may not just soften meaning but invert it. “Waterboarding” sounds like something you would like to do with your children on a Mediterranean vacation, and “stress positions” the perfect way to end a workout, while all of us could benefit from some good “sleep management.” But each of these, in fact, refers to a form of torture—repeated near-drowning, long-term painful bending and stretching, wholesale sleep deprivation. In the same vein are terms such as “collateral damage” (civilians killed during military operations), “extraordinary rendition” (kidnapping followed by torture), “enhanced interrogation” (torture), “friendly fire” (death at the hands of your own soldiers), and the “final solution” (genocide of European Jews).

There is also something that has been aptly called the euphemism treadmill, in which each new euphemism soon becomes tainted by what it refers to so that a new euphemism must be invented to take its place. “Garbage collection” becomes “sanitation work,” which morphs into “environmental services.” “Toilet” turns into “bathroom” (so you are washing in there), which turns into “restroom” (so you are taking a nap in there). “Slum” to “ghetto” to “inner city,” with “ghetto” making a modest comeback lately as a synonym for lower-class black culture—“he is so ‘ghetto.’” It seems as if we are running from the negative connotations of words, with no net progress. The association is soon reestablished, so we have to keep running.

We all know of examples. In my younger days, “retarded” went to “disabled” to “mentally challenged,” and now one is a person with “special needs.” “School security guard” is now a “school safety agent.” The other day a phone “operator” told me he was an “information assistant.” Not quite sure how much elevation he thereby achieved, but notice that the euphemism is longer than what it replaces, as often happens—in other words, this enterprise is trending in the wrong direction, at least where efficiency is concerned.

The euphemism treadmill has several important implications. For one thing, it means that concepts are in charge, not words, contradicting entire disciplines (see Chapter 13 for cultural anthropology). That is, the words keep changing, but not, so far as we can see, the underlying concept. It also means that we are expected to be vigilant about the various changes introduced—otherwise, why make them? But any advantage tends to be strictly temporary.

The treadmill also suggests the novel notion that we will finally have relaxed about some of our distinctions—racial, sexual, whatever—when the treadmill stops. Some of the running has deeper meaning than simply running from negativity. Yes, “Negro” is Spanish for black, but it is uncomfortably close to the common “white” mispronunciation of “Nigrah,” itself rather too close to the racially insulting “n-word.” The initial attempt to fight back is to overstate the case. Hence, “black” is chosen not just to achieve parity with “white” but to frighten anti-black people with their worst racial nightmares, the black man unfettered—Black Panthers—invisible at night except for their yellow eyes. Incidentally, “colored people” was the genteel acknowledgment of intermixing (without taking any responsibility for it), so it was condescending. When you are in a time of revolutionary mind change, you push for racial solidarity—“all of us brothers and sisters are ‘black.’” But then you want to move to the next stage, defined not by some other group but by your own roots. All other people do it: Italian Americans, Chinese Americans, Japanese Americans, etc. What is a person supposed to say, “oppressed black slave American”? So there was a natural turn to “African American”—at least it says where most of the genes came from. In this case, then, linguistic change seems to match logically the stages through which a particular group passes.

There is also something one could call the malphemism treadmill, where a word is forced to take on negative connotations. Thus, “tendentious” originally meant strongly stated minority views apt to provoke a response. In the UK and Australia, this is still its meaning, but in the United States, a negative connotation has been added—being of the minority, the views are likely to be wrong—so it is incorrect views that arouse natural resistance. Perhaps the fact that “tendentious” rhymes with “pretentious” makes this shift in meaning easier. Criticism of Israel is often said to be tendentious, which in the United States is often literally true; such criticism is a strongly stated minority opinion likely to provoke disagreement. That it is thereby false is another matter. The larger tendency to produce malphemisms in the press is suggested by the following double whammy: “the tragedy of the vitamin D deficiency epidemic,” probably referring to a small increase in D-deficient individuals with negligible overall health effects.

An extraordinary verbal one-step has been spearheaded in multiple disciplines in the past fifty years—the switch from “sex” to “gender” as words to denote the two sexes. From time immemorial (at least a thousand years), sex referred to whether an individual was a male (sperm producer) or a female (egg producer). In the past hundred years, the word was extended to “having sex.” “Gender” was strictly a linguistic term. It referred to the fact that in various languages, words may be feminine, masculine, or neuter, apparently in almost random ways. “Sun” is feminine in German, masculine in Spanish, and neuter in Russian, but “moon” is feminine in Spanish and Russian, and masculine in German. In German, a person’s mouth, neck, bosom, elbows, fingers, nails, feet, and body are masculine, while noses, lips, shoulders, breasts, hands, and toes are feminine and hair, ears, eyes, chin, legs, knees, and the heart are neuter. Pronouns are assigned by gender, so you can say about a turnip, “He is in the kitchen.” You tell me. I have been a biologist for forty-five years and I can see no rhyme or reason to this system. It seems completely arbitrary, and this is perhaps the point. Since grammatical gender is arbitrary and meaningless, so also are biological sex differences if they can be rendered in the language of gender.

In a remarkable burst of activity, in fewer than forty years, “gender” took over entirely in many disciplines as the word for sex. Thus a person’s gender is male or female—not the ending on the word itself, but the person’s actual sex. And likewise, for cows and everyone else, “gender” has replaced sex. The pressure for all of this was twofold: to disassociate sex differences from sexual behavior and to minimize the apparent biological differences between the sexes in favor of differences imposed by verbiage itself (“culture”)—symbolized by the gender of words. The more arbitrary the gender of words, the more arbitrary the assignment of sex differences.


What about linguistic effects at a much smaller level—biases in favor of the initials of one’s own name, for example? People prefer letters that are found in their own first and last names. That is, when choosing between two letters based on attractiveness (asked to do so quickly and with no thought), people consistently choose letters contained within their own names. This is especially true for the first initials of their first and last names, but in fact it is true throughout each name. The effect is robust to various forms of measurement and occurs, so far as can be seen, completely outside of consciousness—nobody appears to be aware they are choosing letters on the basis of self-similarity. The effect is found in every language examined: eleven European languages using the Roman alphabet, as well as Greek and Japanese. A similar effect is found for one’s own birth dates—a preference for these numbers against a random set of numbers. The effect appears in children as young as eight and in university students, demonstrating that it remains strong despite the person’s having been exposed by then to millions of letters and many, many numbers.

The simplest explanation would be that the name-letter bias is due solely to familiarity of one’s own name, since familiarity can increase attractiveness, but there is good reason to believe that more than familiarity is involved. Young Japanese women show a strong preference for their first-name letters and a weak one for those of their last name, which they will soon change, while the opposite is true for Japanese men. This suggests that it is the personal significance of the name that produces the effect, not the frequency with which it has been encountered. Nor does the overfrequency of letters have much to do with their popularity, at least at the top end: the most frequent letters are not the most popular. At the bottom end, it is true that many letters that are rarely encountered—W, X, Y, Z, and Q—are also often unattractive, but when encountered often, as W is among the Walloons of Belgium, the letter fails to rise in popularity. More to the point, the name-letter effect is enhanced by such variables as positive parenting style (see below) that are associated with self-esteem, but not obviously with word usage. In short, the name-letter effect appears to be primarily narcissistic: with a minor frequency effect, we love the initials of our names above those of others, because they are our own.

For one brief shining moment, it appeared as if the name-letter effect had widespread important effects on our behavior of which we were completely unconscious. Too many Larry and Laura lawyers, too many Geoffreys publishing in the geosciences. Too many people’s last names (first four letters) match those of towns or streets or states where they live. People appeared to be making major life decisions based on trivial egoistic coincidences. Causality was strongly implied by evidence that people tend to migrate to states that match their own last names. Fortunately, perhaps, the entire edifice collapsed when a very careful reanalysis replicated all the original findings and then showed that every single one was due to hidden biases in procedure or logic. For example, forty years ago, there was a wave of enthusiasm for naming babies Geoffrey, Laura, or Larry. Hence, they are overrepresented in a variety of enterprises today besides the geosciences and law. Likewise, place of birth for the migration study was often noted as place of residence several years later (when the child was first given a social security number) and the subjects may already have migrated away. Since people have a strong tendency to return to where they were born, this alone would create a spurious correlation as, indeed, it did.

What we do know about the costs or benefits associated with the name-letter effect is nonetheless surprising. Preference for one’s own first initials can lead to a real cost, that is, lower performance when one’s own initials are associated with signs of lower performance (though the reverse is not true). Self-love in this context gives a cost but not a benefit. In schools in the United States, Cs and Ds are low grades and As and Bs high. People with a C or a D at the beginning of either their first or last names show lower academic performance (grade-point average) than do those with As, Bs, or other letters, apparently because lower grades (Cs and Ds) are (unconsciously) less aversive to them. It is notable that self-love does not benefit those with initials of A or B—they score just like those with other initials—but self-love harms those with C or D. If your name is Charles Darwin, you will tend to do slightly less well academically than everyone around you. And these biases have ramifying effects in life. When law schools are ranked in terms of quality, students with first initials in their names of either C or D are preferentially located in inferior schools.

For academic performance, one could argue that teachers unconsciously downgrade students with the initials C and D, but direct experiments prove that self-initiated failure works just fine. When given the choice—after trying to solve ten difficult anagrams (of which two are impossible)—people will choose to push a button associated with failure (and a lower possible prize) if it matches their own initials, but they will not show an upward bias. Once again, self-love is associated with failure but not success. Is it possible that some among us tend not to respond to such arbitrary biases and thus succeed more often while seeing life more objectively?

How do these implicit self-biases come about? There is some evidence that early parenting style, both as remembered by individuals and, separately, by their mothers, is associated with the degree of name-letter bias and (in some cases) birth-date bias according to the following rules: warm and positive parenting produces a stronger positive self bias, while being controlling or overprotective has the opposite effect. The variables had similar effects on explicit self-esteem, as measured by asking people to rate themselves on a series of traits, such as “I feel that I have a number of good qualities” (1 to 7—completely true to completely untrue), but the implicit effect is still significant when corrected for explicit self-esteem. Recent work even suggests that daily events can affect one’s name-letter bias, but only among those with low explicit self-esteem; a greater number of negative events in the previous twenty-four hours lowers implicit self-esteem, that is, preference for one’s own name letters.


As we have seen, we usually think of deception where self-image is concerned as involving inflation of self—you are bigger, brighter, better-looking than you really are. But there is a second kind of deception—deceiving down—in which the organism is selected to make itself appear smaller, stupider, and perhaps even uglier, thereby gaining an advantage. In herring gulls and various other seabirds, offspring actively diminish their apparent size and degree of aggressiveness as fledglings, to be permitted to remain near their parents, thereby consuming more parental investment. In many species of fish, frogs, and insects (see Chapter 2), males diminish apparent size, color, and aggressiveness to resemble females and steal paternity of eggs. These findings indicate that deceiving down has often been a viable strategy in other species, and thus is likely to be one in humans as well, which should lead to self-deceptive self-diminishment.

For example, appearing less threatening may permit you to approach more closely. This is a minority strategy that probably owes some of its success to the fact that most people are doing the opposite, so our guard is not as well developed in this direction. I remember students whose approach was so low-key, so noninvasive, you would never imagine that they would end up consuming far more of your time (to less effect) than many of their more talented counterparts who were representing themselves honestly or with an upward bias. Whether they were self-deceiving downward is, of course, difficult to say.

The most memorable version of deceiving down that I know of is referred to in African-American culture as “dummying up.” This can refer to a specific situation in which you pretend not to know anything—for example, complete failure to witness a crime at which you were present or complete ignorance of a hidden relationship. But it can also refer to a general style. You can represent yourself as being less intelligent or less conscious than you really are, often the better to minimize the work you have to do. Thus an employee may dummy up to avoid doing more difficult tasks. I have often watched Spanish-speaking people in Panama and sometimes in the United States represent themselves as understanding much less English than in fact they do, all to gain benefits from English-speaking Americans who readily believe the dummying up—another example of being victimized by one’s own prejudices.

I once asked Huey Newton how he dealt with dummying up directed at him, a problem he must have faced often as head of a major organization (the Black Panther Party). In reply, he imagined a situation in which a waiter always managed to avoid seeing you when you were calling him and otherwise appeared to be working while not actually doing anything. Here is how Huey would dress him down: “Oh, so you are so dumb that you happen to be looking the other way whenever I am trying to get your attention? And you are so dumb that when you know I am watching you, you decide to polish silverware that needs no polishing? And you are so dumb that you are always walking toward the pantry without ever reaching it? Well, you’re not that damn dumb!”—followed by verbal or physical assault. Perhaps the ultimate in dummying up is that alleged of chimpanzees by several African peoples living near them—that the chimps can easily understand human speech but pretend not to in order to avoid being put to work!


It has been argued that visual depictions of the face that show more of the face relative to the rest of the body—that is, the face appears closer to you and is higher in “face-ism”—will give the impression of higher dominance, and people do indeed rate such faces as being more dominant. The word “face,” after all, can be used to imply confrontation, as in “face-off,” “face-to-face,” “in your face,” “loss of face,” and so on. In short, the more I project my face on you, the more dominant I appear.

Consistent with this, the faces of a discriminated-against minority in the United States, African Americans, show lower face-ism than do those of European Americans in a variety of American and European periodicals, American portrait paintings, and US stamps. The difference shows up even when relative status is controlled for. Only when the artist is an African American is there an exception—there is no ethnic difference, with all face-ism ratings being on the high side. The degree of consciousness of the artists about these effects is, of course, unknown, but I would guess that many of the presenters of stimuli are unconscious of the effect, as are almost all of the recipients.

Similar findings have emerged for the two sexes in a wide range of US periodicals (such as Time and Ms.), in 3,500 media photos from eleven countries (including Kenya, Mexico, India, and France), in portraits and self-portraits dating back to the fifteenth century, and in amateur drawings of the faces of the two sexes. In all of these samples, men score higher in face-ism than do women. That is, relatively more of their face is presented in the picture—especially surprising since women have slightly larger heads for a given body size. On the other hand, women have breasts, and this may lead to a bias toward showing less head and more body. In any case, the correlation is true for every single country studied and every century from the seventeenth onward. The general face-ism effect appears to be all but universal, showing up in children’s books, Fortune 500 websites, and prime-time television, among other places. Ms. magazine (feminist) is only slightly less biased in the usual direction than the rest of US publications.

There are some weak associations between higher face-ism and higher perceived intelligence, but no evidence that this affects the between-sex or ethnic comparisons, with one small exception. In photos from a variety of US periodicals, men shown in relatively intellectual professions had higher face-ism scores than similar women, and the effect was reversed for more physical professions.

Even politicians’ self-presentations—that is, the photos they choose to post on their websites—show the usual bias, at least in the United States, Canada, Australia, and Norway. The bias remains the same whether twice as many women as men are serving in the legislature or one-tenth as many (compare Norway and the United States). Once again, though, in the United States, African-American politicians are an exception, showing the highest face-ism index for any ethnic group. Again, this suggests awareness among them that higher face-ism equals higher perceived dominance (and perhaps intelligence). Among female politicians in the United States, the more a woman’s votes are interpreted as “pro-women,” the more she emphasizes her face in photos of herself.

The degree to which people are conscious of face-ism is unknown, and so is its mechanism. Does a white photo selector see a black face and say “subordinate,” then search for a relatively low face-ism picture? Or does he or she find black faces somewhat aversive, and so prefer them when they are smaller? And do black people viewing the photos find black pictures attractive and therefore easily tolerated up close, or are they saying “equally dominant” or “I wish myself and people like me to appear equally dominant”?

There is a curious result concerning George W. Bush’s head. Someone thought to analyze his face-ism index in cartoons rendered 78 days before and 134 days after the start of each of his two wars. The authors of this study predicted that, dominant leader that he was, his face-ism index would increase with the outbreak of war. In fact, it decreased in both cases. Because in every major recent US war the president has made sure to appear as if he were forced into it, after every concession and reasonable effort, the authors argued that this lowered his apparent dominance. Or perhaps cartoonists knew something the rest of us did not about how each war would turn out. More likely still, the cartoonists were unconsciously reflecting the bias toward inflating one’s own country (and leaders) prior to war, so as to impress adversaries, but not continuing once war was under way.


There is an analogy between the coevolutionary struggle in nature and struggles in human life over deception in which (over a period of months or years) each move by a deceiver is matched by a countermove from the deceived and vice versa. The advantage lies with the deceiver, who usually has the first move. This is true even of situations in which the very best minds are enlisted in fighting the deception. Consider the ubiquitous invasive “species” of spam, unwanted computer messages. They offer a variety of services to induce a transfer of funds, however small, directly or from third parties. In some cases, companies send out spam to lure unsuspecting viewers to their websites, since each visit garners them more pay from the advertisers employing them. When spam first became a problem, computer software engineers leaped in on the side of prevention and protection, devising means of spotting incoming spam and blocking it. This led Bill Gates, in a burst of enthusiasm in 2004, to proclaim that the problem of junk e-mail “will be solved by 2006.” Gates saw that defenses could easily be erected against the set of spamming devices then in use, but he could not imagine that these defenses could quickly be bypassed at little cost and that newer forms of spamming would easily be invented. By 2006, the amount of spam was higher than ever, having doubled in the previous year alone. Spam, of course, is a human invention for human purposes, with the computer and the Internet serving as the tools of replication.

After an initially successful counterattack by the anti-spam forces that resulted in a decrease in spam, the protective measures introduced could all be circumvented so that by the end of 2006, roughly nine out of every ten e-mail messages were junk. The initial attack against spam blended three filtering strategies. Software scanned each incoming message and looked at where the message was from, what words it contained, and which website it was connected to. The first was bypassed in spectacular fashion by devising programs that infected other computers with viruses that sent out the spam instead. In late 2006, an estimated quarter-million computers were unknowingly conscripted to send out spam every day. This achieved two aims at once: there was no sender’s address to screen, and no additional cost to send.

The second screening device searched statistically for word usages suggestive of spam, but this maneuver was overcome by embedding the words in pictures whose extra expense was offset by the first device, the use of pirated computers. Efforts to spot and analyze images were, in turn, offset by “speckling” the images with polka dots and background bouquets of color that interfered with the computer scanners. To block detection of multiple copies of the same message, programs were written that automatically changed a few pixels in each picture. It was as if an individual could change successive fingerprints by minute amounts to evade detection, reminiscent of the ability of octopuses (see Chapter 2) to rapidly spin out a random series of cryptic patterns, again to avoid targeting. HIV uses the same trick, mutating its coat proteins at a high rate to prevent the immune system from concentrating on it. As for the problem of linked sites, some scams do not require any. Spam can hype so-called penny stocks (inexpensive stocks in obscure companies) that may give a quick 5 percent profit in a matter of days, when enough people invest to raise the value, after which the spammer sells his or her interest in the stock and it collapses.
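The pixel-changing trick can be shown in miniature. A minimal sketch, assuming a toy filter that fingerprints each image by hashing its bytes (the image data here is fake, and real duplicate detectors are more sophisticated): flipping a single byte per copy changes the fingerprint completely, so each copy looks new.

```python
import hashlib

# A toy spam filter that blocks any image whose fingerprint (hash)
# it has seen before -- a simplified stand-in for duplicate detection.
seen_fingerprints = set()

def is_blocked(image_bytes: bytes) -> bool:
    fingerprint = hashlib.sha256(image_bytes).hexdigest()
    if fingerprint in seen_fingerprints:
        return True
    seen_fingerprints.add(fingerprint)
    return False

spam_image = bytearray(b"\x89PNG...fake image data...")

print(is_blocked(bytes(spam_image)))   # False: first sighting, now fingerprinted
print(is_blocked(bytes(spam_image)))   # True: an exact copy is caught

# The spammer's countermove: alter one "pixel" (here, one byte) per copy.
spam_image[10] ^= 1
print(is_blocked(bytes(spam_image)))   # False: the fingerprint no longer matches
```

Because a cryptographic hash scatters any one-bit change across the whole digest, the defender cannot recognize near-duplicates this way and must fall back on fuzzier, costlier image analysis.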

The point is that each move is matched by a countermove and a new move is always possible, so deceiver leads and deceived responds with costs potentially mounting by the year on both sides with no net gain. Intellectual powers among programmers increasingly will be required on both sides. One inevitable cost in this context is the destruction of true information by spam detectors that are too stringent, thus excluding some true information. This, as we saw in Chapter 2, is a universal problem in animal discrimination. Greater powers of discrimination will inevitably increase so-called false negatives—rejecting something as false that is in fact true. So as we act to exclude more spam, we inevitably delete more true messages. And now there is something more dangerous, called malware—special infiltrating codes that download proprietary information and ship it to one’s enemies. As with newly appearing natural parasites (such as living viruses), malware is increasing at a more rapid rate than defenses against it.
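The discrimination trade-off can be made concrete with a sketch. Assuming a toy filter that assigns each message a made-up “spamminess” score and blocks everything above a threshold (real filters use statistical models), tightening the threshold catches more spam but also deletes more genuine mail:

```python
# Hypothetical scores: higher means more spam-like.
legit_scores = [0.1, 0.2, 0.3, 0.5, 0.6]   # genuine messages
spam_scores  = [0.4, 0.7, 0.8, 0.9, 0.95]  # actual spam

def blocked(scores, threshold):
    """Count how many messages score above the blocking threshold."""
    return sum(score > threshold for score in scores)

# Lowering the threshold makes the filter stricter.
for threshold in (0.65, 0.45, 0.25):
    spam_caught = blocked(spam_scores, threshold)
    legit_lost  = blocked(legit_scores, threshold)
    print(f"threshold {threshold}: catches {spam_caught}/5 spam, "
          f"deletes {legit_lost}/5 genuine messages")
```

At the strictest setting all five spam messages are caught, but three of five genuine messages are destroyed with them: there is no threshold that eliminates one kind of error without inflating the other.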


One striking discovery is that humor and laughter appear to be positively associated with immune benefits. Humor in turn can be seen as anti-self-deception. Humor is often directed at drawing attention to the contradictions that deceit and self-deception may be hiding. Reversals of fortune associated with showing off—usually entrained by self-deception—are often comical to onlookers. A staple of silent films is the man strutting down the street, dressed to the nines, showing off, with head held high—so that he does not see the banana peel underneath him, producing an almost perfect visual metaphor for self-deception. The organism is directing its behavior toward others, with an upward gaze that causes it to pay no attention to the surface on which it is actually walking. Result: cartwheel and complete loss of bodily control, of the strut, of the head held high, and of the well-presented clothes—the whole show destroyed by a single contradiction.

Those who are low in self-deception (as judged by a classic paper-and-pencil test) appreciate humor more (as measured by actual facial movements in response to comedic material) than do those high in self-deception. At the same time, those with greater implicit biases toward black people or toward traditional sex roles laugh more in response to racially and sexually charged humor than do those with less implicit biases. Is it possible that the greater internal contradiction in them is released by appropriate humor on the subject, resulting in greater laughter? Laughter is an ancient mammalian trait, found in rats as well as chimpanzees. Tickling a rat will produce laughter-like sounds, and the rats will seek out the pleasure of being tickled. Chimpanzees will pant-laugh when being chased, an action that signals that the chase is not aggressive or aversive.

Humor permits discussion of taboo topics and the views of disempowered groups. Also, people know self-deception is negative and costly but necessary, so humor permits us to bring out this truth for enjoyment and consumption—we are all self-deceivers. Humor permits a kind of societal-level criticism in which no one need be threatened—it is all just a joke.


Recreational drugs and self-deception are obviously intimately connected. For one thing, drug use is often, to varying degrees at least, harmful and addiction almost invariably so. I am speaking of a wide range of both legal and illegal chemicals with effects from mild to severe: marijuana, alcohol, tobacco, uppers, downers, cocaine, heroin, and so on. Hence, this cost must be rationalized to the mind and, through the mind, to others. Thus, self-deception is a virtual requirement of drug use. I remember the first time I tried cocaine, I said to myself, “Why, this drug will pay for itself! I am so much more clear-headed and will get so much more work done while using it.” Of course, in reality the drug was very expensive and entirely counterproductive where work was concerned. Huey Newton and I used to joke that we could practice drug abuse without self-deception, thus reducing or wiping out the cost, but it was a lie. Even the pleasant joke served to minimize the problem.

A second effect of drug use is often to separate our daily life into an up phase while using the drug and a down phase while recovering from it. This tends to split our personalities into two parts that then may be in conflict. The hungover self may remonstrate with the drunken self of the night before (and more generally), but the drunken state will usually forget all of this as soon as its time comes. It is tempting to imagine that the hungover self is more conscious of the two selves than is the drunken self. The latter is into enjoyment and would wish to suppress information from the other self that might cut into the pleasure. But in the hungover state, you are very aware of what went on the night before. Perhaps when you are drunk, your hungover self watches with dismay and attempts to call out—and sometimes (thank God) some information gets through.

My reason for imagining that the hungover self is the more conscious of the two rides partly on an analogy to split personalities. Many years ago, it was shown that among those rare people with two personalities, the second personality usually emerged in early adulthood and may have been strikingly different from the first. The first could be a shy and retiring British gentleman, the second a flamboyant Spanish fellow with a taste for flamenco. Typically the first personality knew nothing about the second, while the second had been watching the first for many years. Thus, therapy to unite such an individual into a single personality usually focuses on the second personality as the primary one. By analogy, then, the drunken self is like the first personality: it does not know that there is a second personality watching it.

A third factor of some importance is that the cost of drug use/abuse is often experienced as physiological pain, which you are then tempted to add to the pain of a given social interaction and to project onto those around you. So the pain of arguments is that much greater, but, denying your own responsibility for that portion of the pain due to your drug use, you project your full anger onto the other person. Abusive drunks—surely we have all met one or two by now, if not in the mirror—fit the mold. So drug addicts tend to be irritable and morally righteous about it at the same time.

Finally, let us not forget that decisions made while high—while feeling an unnatural affinity for those close by, while feeling especially good about the future—are expected often to be biased away from one’s true interests, just as the drug boosts us from our natural states. It would be nice to know the answer to the question: Are relatively more self-deceived individuals relatively more likely to be drug addicts? One expects the answer to be yes, but I do not know of any evidence. Certainly it is commonly claimed that con artists and thieves end up ensnared by a hard drug—and I have seen several such cases myself—but for the rest of us, semi-addicted to milder stuff, I do not know.

Another problem that baffles me: whence the anti-pleasure bias? It is often said by opponents of medical marijuana that we already have legal drugs that promote appetite or suppress pain, so why should we give in to illegal ones? Yet the illegal ones also give pleasure, so that you survive with good appetite and feeling better—why is that an impediment rather than a virtue? In fact, I now believe the ideal medicine for a root canal is cocaine, and not its chemical analog procaine, which numbs the pain but doesn’t make you feel good.


Socially, a potential cost of self-deception is greater manipulation (and deception) by others. If you are unconscious of your actions and others are conscious, they may manipulate your behavior without your being aware of it. Consider the story of a man who insisted, “You can’t make a town man drunk.” This occurred in rural Jamaica some thirty-five years ago, when a man from Kingston (“town”) was passing through and bragging at a bar. Of course, we locals resisted his view and for a while there was a spirited argument. Then one local had a bright idea: he switched sides. He agreed with the town man—you can’t make a town man drunk—and bought him a drink. Soon we all caught on, switched sides, and bought the man a drink. The town man was now in drunkard’s paradise: everyone agreed with his opinions and everyone was buying him drinks. He got drunker and drunker, finally swaying on his chair, then falling to the ground, then vomiting, then slipping and falling in his own vomit. I say this not with pride but to describe the truth: we doubled over with laughter—as he sank each step lower, we howled the more in pleasure. As Huey Newton was fond of saying, we owned him. We could have robbed him, killed him—he no longer had any control over his destiny. This is a terrible danger in self-deception—not that he was truly deluded into thinking it was impossible to make a Kingstonian drunk but that he had entered into fantasy land, selling a fantasy and then believing others had bought it. He was completely unaware of what was going on and he could have died from this as certainly as from a heart attack.

This must be a very general and important cost of self-deception. You are trying to deceive others socially by being unconscious of a critical part of social reality. What if others are conscious of that very part while you are not? Your entire environment may be oriented against you, all with superior knowledge, while you peer out, ignorant and hobbled by self-deception. In the town man’s case, it was his sense of superiority that served as a resource mined by those surrounding him.


Bless Bernie Madoff. He has brought con artists back to public attention and given them the attention they deserve, almost as much as when Ponzi swindled thousands of people out of hundreds of thousands of dollars in a pyramid scheme—where early investors are paid high returns, not out of actual earnings but out of the donations of others joining the scheme. As word of mouth spreads about the high returns, more and more want to join the fun. By definition, such an operation can’t continue indefinitely. Typically those who invest early and depart early earn a nice return, as does the swindler himself, though he also may suffer later prison time. Everyone else loses—most people, everything they invested. Madoff stole a staggering $50 billion. He was a classic swindler; smooth and attractive in style, he made you pursue him. Many times he told people “the books are closed” on investment with him, only later to relent and permit them to lose their money with him. As always, some people did not buy in and a few spotted the scheme for what it was. This is what we have expected all along: an evolutionary game, with multiple actors, caught in a frequency-dependent interaction such that most actors will not be forced out of the game anytime soon, and new strategies are always appearing. Incidentally, one of Madoff’s victims had just published a book on gullibility when he learned that it applied to himself: he lost $400,000. In self-defense, he said he was only trying to buy a safe investment with modest returns (more than 10 percent annually) for his family. Modest? What positive feature in the universe increases by more than 10 percent annually, year after year?
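The arithmetic behind that rhetorical question can be sketched with made-up numbers. Assuming a scheme that promises 10 percent a year with no real earnings behind it, the paper balances owed to investors compound, so the fresh money needed to keep the scheme afloat must grow on the same relentless schedule:

```python
# Why "more than 10 percent annually, year after year" is not modest:
# a Ponzi scheme's obligations compound, while nothing real backs them.
# Illustrative numbers only -- not Madoff's actual books.
promised_rate = 0.10
owed = 1_000_000.0  # year-0 deposits, all "growing" at the promised rate

for year in range(1, 9):
    owed *= 1 + promised_rate  # paper balances compound; no earnings exist
    print(f"year {year}: the scheme owes ${owed:,.0f}")

# By the rule of 72, the hole doubles roughly every 72 / 10 = 7.2 years,
# so the inflow of fresh victims must double on the same schedule --
# which no recruiting pool can sustain indefinitely.
```

The collapse is therefore built into the promise itself: the only open question is whether the scheme runs out of new victims before the swindler runs out of nerve.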

Most con artists operate on a much smaller scale. They are professional thieves whose art consists of extracting money voluntarily from others, as did Madoff, just on a much smaller scale. They often survive on the unconsciousness, including self-deception, of their victims, as did Madoff. Here it is useful to distinguish between the “long con” and the “short con.” The long con may run for several days, may result in tens of thousands of dollars lost at the end, and often involves activating the victim’s system of self-deception, while the short con is usually over in a matter of minutes for a few dollars and typically involves lulling the victim into temporary unconsciousness regarding a key variable. During long cons, the victim is often put into a trance-like state of mind, as one of his or her weaknesses, often greed, is amplified by the con artist. Because the same illegal or “special situation” can, in principle, be repeated indefinitely, there is no upward limit to the victim’s fantasies, an easily exploitable resource to help overcome contradictions should they arise. Victims in this state are said to “glow” and to be easily spotted by other con artists. Getting the victim into that state is called “putting him under the ether”—presumably into a deep state of self-deception.

As it looks to the victim: “You’re experiencing the ride singing ‘yo ho ho it’s a pirate’s life for me’ but you never see any of the trappings of the ride itself.” The con artist induces an internal ride in the victim that is very satisfying but is hard to view sideways so as to see where, in fact, the ride is taking you. Once we have taken the bait, we stop asking questions, much as people do in the instrumental phase of any activity, that is, when they are carrying out a project. In the memorable phrase of a great con artist of the street, “I plucked his dreams right out of his head and then sold them back to him—and at a good price, too!”

Incidentally, con artists demonstrate again the importance of frequency-dependent effects. At low frequency they do well, at high frequency not so well. A shopkeeper may be fooled once by a short-change game but usually not twice. The con artist must always be on the move to fresh victims. Here the frequency-dependent effect occurs directly through learning (and also passing this information on to others), while in other systems it is genetic and may require several generations of selection to show an effect.

A medium-length con (about two hours and netting $40) was run against me years ago in Jamaica. I was leaving Kingston one Saturday morning when a short, wiry man hitched a ride. When I asked him where he was going, he said Caymanas Racecourse, the local horse-racing track. He was a jockey—in fact running in the day’s third race, as he proved to me, pointing to his name on the racing form, a name he had introduced at the very beginning of our relationship. He had recently lost his car in an accident, which had also left him broke. After further discussion, it was proposed that I invest in a gambling scheme—betting, as is perfectly legal, on the day’s races, based on his insider knowledge. I remember my thought processes well. As a seasoned virtual Jamaican, I knew that the races were entirely fixed ahead of time, the general public betting not on horses but on how the race would be thrown. The very fact that this man was proposing such a financially advantageous scheme to me (I provide the cash for betting based on his special knowledge, proceeds to be split evenly) was a testament to my fluency in Jamaican culture—my general likability, if you will, augmented by my cultural competence. Why else had we hit it off so quickly? And it was a scheme that was foolproof as far as his stealing from me was concerned: we would buy matching sets of tickets. Our payoffs were yoked. And now that I had made the key breakthrough, it could be repeated ad libitum, $2,000 won this time, $20,000 the next, and so on.

I do remember one feature of his style that was off-putting: he called me “boss” more than once. This is something I have never liked but in this situation it jarred with my self-image as a fellow Jamaican: someone able to get this opportunity in part because I was not a boss. At one point I asked him not to call me “boss,” as if to say, “please, don’t interfere with my fantasy.”

We bought $80 worth of matching bets, many coupled with each other, so that should multiple horses come in, the winnings would be very large, but if a single horse failed, we would win nothing. No problem for me, I thought. This is as near to a sure thing as I have seen in my lifetime. Let’s maximize gains! The first horse did come in, as my friend crouched down on the imaginary winner and whipped it home—in a bar where we were now drinking. Didn’t he have to run in the third race? Again, this caused some small internal unease because of the obvious contradiction—not only did he risk being late for his own race, but he also risked arriving drunk—but I was willing to suppress the truth to maintain the fantasy. I dropped him at the track and continued on my way. Within four races, all of my bets were busted. Rounding a corner too quickly, now half drunk, I struck a rock and had to change a tire. Outside in the broiling-hot Jamaican sun, the truth had plenty of time to sink in. The man knew nothing about the track, was certainly not a jockey, and could no more predict the future than I but he was only too happy to have a series of risky bets bought for him by a complete stranger who, as an additional bonus, would deliver him to the track.

The whole experience now seems to be a metaphor for self-deception itself: the smooth and seamless takeoff, the intoxicating heights, the occasional doubts easily brushed aside, followed by reality itself and an appreciation of the growing costs: no longer just the monetary losses but also an inability to deal with moment-to-moment reality. The upside is temporary and psychological, while the downside is real and enduring.


Given the importance of perceiving deception, for example, in spotting an intended “terrorist,” there is a great demand for anyone who can scientifically uncover a lie—hence, the vaunted lie-detector test and a series of new ones, accessing deeper regions of our brains. The classical test measures three variables: heart rate, breathing amplitude, and galvanic skin response (GSR), a measure of physiological arousal. A series of innocuous questions is interspersed with incriminating ones, and systematic deviations in the underlying three measures are recorded. Especially significant, it is argued, are contrasts between key lies (“did you kill Betty Sue?”), of which only the perpetrator is guilty, and much more minor infractions, of which most people are probably guilty (“did you ever steal from your office?”). The guilty are presumed to respond more to the main question and the guiltless to the harmless lie. But these hard-and-fast rules rarely work so well in real life, and some people appear nearly completely unresponsive to variation in these questions.

The only question that gives truly reliable results is called the “guilty knowledge test.” Interspersed among otherwise innocuous questions is one that refers to a fact that only the criminal could know—the victim was lying on a red satin sheet before she met her demise. Any deviation from the background responses is evidence of deception—high arousal, low arousal, anything different from the responses to questions about which the person is ignorant.
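The logic of the test can be sketched numerically. A minimal sketch with hypothetical arousal readings: responses to the innocuous questions establish a baseline, and what matters is how far the response to the key item deviates from it, in either direction:

```python
import statistics

# Hypothetical arousal readings (arbitrary units) for the innocuous
# questions -- they establish the subject's baseline.
baseline = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2]
key_item = 8.4  # response to the one fact only the criminal could know

mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)
z = abs(key_item - mean) / sd  # standardized deviation, direction ignored

print(f"deviation of key item: {z:.1f} standard deviations")
if z > 3:
    print("response stands out from the baseline: guilty knowledge suspected")
```

An innocent subject, for whom the red satin sheet is just one more meaningless detail, should produce a key-item reading indistinguishable from the baseline scatter.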

I once inadvertently experienced the benefits of the guilty knowledge test when I was trying to counsel a youngster (thirteen years old) about his unfortunate tendency to steal his neighbors’ bicycles, an escalation of his previous petty larceny. I told him, “Don’t steal; don’t steal your neighbors’ tools; don’t steal your neighbors’ toys.” At first his eyes showed alarm as I talked about stealing, but as I ran down my boring list, he visibly relaxed and looked me in the eye. Then I added “and don’t steal your neighbors’ bicycle.” Suddenly his eyes darted up, down, and around, until I continued droning through my list and he relaxed again. Guilty knowledge.

There is now a raft of new lie-detector tests coming out of neurophysiology and heavily funded by “antiterror” money coursing through the US government. Each test tends to claim high success, but this is usually based on modeling neurophysiological data after the fact against known honest and deceptive responses in a study population to gain the tightest fit. The tightness of the fit is then highlighted, but this is an illusion. The key is whether your method applied to a fresh set of subjects gives any fit at all, much less the high one claimed.
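The after-the-fact-fitting trap can be demonstrated with pure noise. A sketch under stated assumptions: the “honest/deceptive” labels and candidate “detectors” below are all random, yet picking the detector that best fits the study population yields an impressively tight fit—which evaporates on fresh subjects:

```python
import random

random.seed(1)

def accuracy(predictions, labels):
    """Fraction of labels the predictions get right."""
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

# Random honest (0) / deceptive (1) labels for two groups of 20 subjects.
train_labels = [random.choice([0, 1]) for _ in range(20)]
fresh_labels = [random.choice([0, 1]) for _ in range(20)]

# Try 10,000 random "detectors" and keep the one that fits training best --
# the analogue of tuning a model to the study population after the fact.
best = max(
    ([random.choice([0, 1]) for _ in range(20)] for _ in range(10_000)),
    key=lambda preds: accuracy(preds, train_labels),
)

print(f"fit on study population: {accuracy(best, train_labels):.0%}")
print(f"fit on fresh subjects:   {accuracy(best, fresh_labels):.0%}")
```

With enough candidate models, one of them will fit any finite data set well by chance alone; only performance on subjects never used in the fitting says anything about the method.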

Another weakness of this line of work is the tendency to believe that lying per se gives off cues—not a particular kind of lie in a particular kind of situation. Contrast two kinds of lies. A little recorded lie you have waiting and ready for an expected question—where have you been the past two hours? This lie should light up memory areas of the brain, among others. By contrast, a simple denial, in which you suppress the truth and assert a falsehood, should light up areas involved in cognitive control. And so on. But at this time we are nowhere near devising a neurologically valid lie-detector test.