The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life - Robert Trivers (2011)

Chapter 13. Self-Deception and the Structure of the Social Sciences


There is structure to our knowledge. Take science, for example, with its various subdisciplines of mathematics, physics, chemistry, biology, psychology, and others. Or history and philosophy and philology. Or literature, biography, and poetry. How do processes of self-deception affect the structure of knowledge? We have already addressed history; here I wish to focus on social biology and the social sciences, economics, cultural anthropology, and psychology. If we believe, as we have seen over and over, that self-deception deforms human cognitive function among individuals, airline pilots, governmental agencies, war planners, and so on, how can we imagine that our very systems of knowledge are not likewise systematically deformed?

Of course, I can pretend no overview of this immense subject—all of knowledge itself—but several points strike me as important. First, we expect knowledge to be more deformed, the more deformation is advantageous to those in control. If you are trying to land a missile more accurately or transmit knowledge more quickly, you will be drawn to science itself, which is based on a series of increasingly sophisticated and remorseless anti-deceit and anti-self-deception mechanisms. It seems likely that the enormous success of science in part reflects this feature. Second, it seems manifest that the greater the social content of a discipline, especially human, the greater will be the biases due to self-deception and the greater the retardation of the field compared with less social disciplines. It may be that the intrinsic complexity of social phenomena impedes rapid scientific progress, but modern physics is very complex, and its findings were unearthed by procedures relatively unimpeded by self-deception. The study of history seems to be a conflict between a few honest historians trying to gain a true picture of the past and the greater number, who are primarily interested in promoting an uplifting view of the group past—in short, a false historical narrative.

Another possibility regarding the development of social disciplines is that a prior moral stance regarding a subject may influence the development of theory and knowledge in that subject—so that, in a sense, justice may precede truth (and false justice, untruth). Let us begin with this topic.


The usual assumption within academia is that we will derive a theory of justice from our larger theory of the truth. But what if our prior stance regarding justice impedes our search for the truth? For example, an unconscious bias toward an unjust stance will invite cognitive biases in favor of this stance. The “truth” that one produces on the justice of a situation will have been distorted by the prior commitment to an unjust position. In short, injustice invites self-deception, unconsciousness, and inability to perceive reality, while justice has the opposite effect. This can be a very pervasive effect in life. That is, we can construct social theory—at the microlevel, marriage, family, job; at the macrolevel, society, war, etc.—and think we are pursuing the truth objectively, but we may only be fleshing out our biases. This suggests that an early attachment to fairness or justice may be a lifelong aid in discerning the truth regarding social reality. Of course, if your attachment is to pseudo-justice, one may have exactly the opposite effects. It is possible to use an alleged attachment to justice defensively—for example, to prohibit outside knowledge from entering your discipline—which may lead you far from truth, as we shall see for cultural anthropology. Behavior may cause belief, as I have been arguing, but that still leaves open the question of what causes the behavior in the first place, that is, the just or unjust stance.


The success of science appears in great part to be due to a series of built-in devices that guard against deceit and self-deception at every turn. First, everything is supposed to be explicit. Famous mathematical proofs (for example, Gödel’s theorem) begin with a set of all the symbols used and what they mean. By contrast, in the social sciences, entire subdisciplines may flourish in the interstices of poorly defined words. Scientific work is supposed to be described explicitly in detail, with terms and methods defined to permit the work to be repeated exactly in its entirety by anyone else. This is the key guard against untruth: repeating work to see whether the same results emerge. Think of the number of tantalizing hoaxes that are dismissed because they can’t pass this first hurdle—for example, achieving atomic energy via cold fusion. Of course, full-time hoaxes, such as psychoanalysis, preclude experimental tests at the outset (in favor of such bedrock data as clinical lore). The requirement for exact description permitting exact repetition applies not just to experimental work but also to any way of gathering data that reveals patterns of interest.

Experiments are conducted under controlled conditions—that is, with certain key variables held constant and/or varied in a logical and systematic manner. The results are then subjected to a statistical apparatus that has grown very sophisticated in the past one hundred years. Very complex sets of data can now be rigorously searched for information regarded as statistically significant. By convention, data that can be generated by chance more than 5 percent of the time are rejected as unreliable. For important results, such as medical findings, we prefer an error rate of 1 percent or less. Finally, meta-analyses can be performed on large numbers of related studies to see what statistically valid generalizations can be made across the full range of evidence. Every single one of these advances tends to minimize the opportunities for deceit and self-deception. They also permit us to rank information by degree of reliability (statistical significance) and effect size (weak or strong).
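The 5 percent convention can be made concrete with a small, purely illustrative calculation: an exact one-sided binomial test on a made-up coin-flip experiment. The numbers (60 heads in 100 flips) are invented for the example and come from no particular study.

```python
from math import comb

def binom_p_value(k, n, p=0.5):
    """One-sided probability of observing k or more successes in n
    trials by chance alone, under the null hypothesis of success rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustration: 60 heads in 100 flips of a presumed-fair coin.
p_val = binom_p_value(60, 100)
print(round(p_val, 3))   # about 0.028: below the conventional 5 percent bar
print(p_val < 0.05)      # True: "statistically significant" by convention
print(p_val < 0.01)      # False: not significant at the stricter medical cutoff
```

The same result thus passes one conventional threshold and fails the other, which is exactly why ranking findings by significance level, rather than treating "significant" as a binary stamp, matters.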

The acid test of science is its ability to predict the future, in particular, hitherto unknown facts. Yes, light really is bent by gravity (per Einstein); in an eclipse of the sun, the apparent position of stars in the nearby background was altered by the sun’s gravity. The same principle operates on much more humble work. That ants produce a 1:3 ratio of investment in the sexes (unlike almost all other animals) was first predicted on kinship grounds alone (the female ants producing the ratio are three times as related to their sisters as to their brothers, unlike almost all other species) and has been confirmed by detailed evidence from dozens of scientific studies. Of course, scientists will pretend that “predictions” lack any foreknowledge, when in fact they are “post-dictions.” This is the beauty of Einstein’s prediction compared to that concerning ants: How on earth could Einstein have had advance information about the apparent positions of stars during a solar eclipse more than ten years into the future? By contrast, one can easily bone up on ant sex ratios before launching one’s prediction.
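The relatedness arithmetic behind the ant prediction can be sketched in a few lines. The 3/4 and 1/4 coefficients are the standard values for a haplodiploid colony with a singly mated queen; this is a sketch of the logic, not of any particular study's data.

```python
# A worker ant's relatedness to her siblings (haplodiploidy, singly mated queen):
r_sister = 3 / 4    # full sisters share all of the father's genome plus half the mother's
r_brother = 1 / 4   # brothers carry only half of the mother's genome, none of the father's

# Workers controlling investment are predicted to allocate between the sexes
# in proportion to relatedness: 3:1 in favor of females,
# i.e., the 1:3 male:female investment ratio described in the text.
investment_ratio = r_sister / r_brother
print(investment_ratio)   # 3.0
```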

There is one more key requirement to true science. Science asks that, whenever possible, knowledge be built on preexisting knowledge. Key assumptions may already be contradicted (or supported) by preexisting knowledge, and where no such knowledge exists, science suggests the value of producing it. Errors in the foundation—of both buildings and disciplines—are the most costly. Yet there is surprising resistance in many quarters of social science to adopt—much less embrace—this feature of real science.

The structure of the natural sciences is as follows. Physics rests on mathematics, chemistry on physics, biology on chemistry, and, in principle, the social sciences on biology. At least the final step is one devoutly to be wished and, one hopes, soon to be achieved. Yet discipline after discipline—from economics to cultural anthropology—continues to resist growing connections to the underlying science of biology, with devastating effects. Instead of employing only assumptions that meet the test of underlying knowledge, one is free to base one’s logic on whatever comes to mind and to pursue this policy full time, in complete ignorance of its futility.

By contrast, mathematics gave physics rigor, physics gave chemistry an exact atomic model, and chemistry gave biology an exact molecular model. And biology? You would think it would have much to give—most important, an explicit, well-tested theory of self-interest, but also a vastly expanded set of evidence, including a detailed understanding of many underlying variables (immunological, endocrinological, genetic) that would otherwise remain obscure.


In physics, we imagine precious little self-deception. What difference does it make for everyday life whether the gravitational effect of the mu meson is positive or negative? None at all. So the field is expected to advance relatively unimpeded by forces of deceit and self-deception—with one exception. Physicists will overemphasize the importance and value of their work to others. They will talk of producing “a theory of everything” and make other grand claims, but their social utility, in my opinion, is primarily connected to warfare. Their major function has been to build bigger bombs, delivered more accurately over greater distances, and this has probably been their main function reaching back into prehistory. When I read of nine billion euros spent on a supercollider in which tiny particles are accelerated to incredible speeds and then run into one another, I think “bombs.” This factor may lead to more resources being directed toward physics, and toward some subareas within it, than is objectively sensible, but it is unlikely to have much effect on constructing theory.

In my opinion, a key to the development of the very solid and sophisticated science of physics is the complete absence from its subject matter of social interactions or social content of any sort. More generally, I imagine that the greater the social content of a discipline, the more slowly it will develop, because it faces, in part, greater forces of deceit and self-deception that impede progress. Thus, psychology, sociology, anthropology, and economics have direct implications for our view of ourselves and of others, so one might expect their very structure to be easily deformed by self-deception. The same can be said for some branches of biology, especially social theory and (separately) human genetics. Many of these illusions have in common that function is interpreted at a higher level than is warranted (for example, society instead of individual).


For roughly a century, biologists had the social world analyzed almost upside down. They argued that natural selection favored what was good for the group or the species, when in fact it favors what is good for the individual (measured in survival and reproduction), as Darwin well knew. More precisely, natural selection works on the genes within an individual to promote their own survival and reproduction, which is usually equivalent to what is beneficial for the individual propagating the genes. Yet almost from the moment Darwin’s theory was published, scientists in the discipline reverted to the older view of benefit as serving a higher function (species, ecosystem, and so on), only now they cited Darwin as support for their belief. In turn, the false theory was just the kind of social theory you would expect people to adopt in a group-living species whose members are concerned with increasing one another’s group orientation. This theory also can be used to justify individual behavior by claiming that such behavior serves group benefit (for example, murder justified as population regulation) and can be used to create the ideal of a conflict-free world.

For example, take the classic case of male infanticide, first studied in depth in the langur monkeys of India, and now known for more than one hundred species. Male murder of dependent offspring (fathered by a previous male) was rationalized as a population-control mechanism that kept the species from eating itself out of house and home. Male murder thus served the interests of all. Of course, it did no such thing. Since a nursing infant inhibits its mother’s ovulation, murder of the infant brought the bereaved mother into reproductive readiness quicker, which aided the male’s reproduction but at a cost to the dead infant and its mother. In some populations, as many as 10 percent of all young are murdered by adult males—each murder gaining on average only two months of maternal time for the new male. These deaths are unrelated to population density (as would be expected if they served a population-regulation function), but they are correlated with the frequency with which males take over new groups. What this work shows is that an enormous social cost can be levied every generation by natural selection on males, even though there is only a modest male gain (two months of female labor) compared with the female loss (twelve months of maternal care).

It was famously argued that male aggression is intrinsically good for the species, since it is always better for the species if the stronger of two males takes control of a favored female. But this is precisely what is not known. Whether an aggressively successful male has genes at other loci that are beneficial to his progeny is an open question that must be answered in each separate case (especially by the choosing female). Perhaps the success of aggressive males spreads genes only for aggressiveness, which are otherwise useless for the species (or a female’s daughters). In any case, male elephant seals fighting for access to females clumped together on breeding islands typically kill about 10 percent of the young every year (fathered by other males) by trampling them to death during fights. In what sense is male aggression good for the species? Are they eliminating inferior genes underfoot?

Close relationships are also easily imagined to be conflict-free. Thus, mother/offspring coevolution is allegedly favored—each party evolving to help the other. As we saw in Chapter 4, nothing like this is actually true of real families. Even in the formation of the placenta, the mother does not help the invading fetal tissue—she puts up chemical and physical obstacles (the better to avoid later excess investment). Likewise, in the 1960s, bird watchers liked to imagine that the families they loved to observe were free of conflict, but this was soon proven wrong when rates of extra-pair paternity exceeding 20 percent were regularly reported.

Thus, for years evolutionary biologists have used a form of argumentation that helped cement in the social sciences and elsewhere the notion that evolution favored what was good for the family, the group, the culture, the species, and perhaps even the ecosystem, while minimizing the reality of conflict within any of these entities. Anthropologists soon rationalized warfare itself as favored by evolution because it too was such a nifty population-regulation device. Note that the error is virtually irrelevant for nonsocial traits. The human locking kneecap allows us to stand erect without wasting energy in tensed legs. It evolved because it benefited the individual with the new kneecap, but if you said it evolved to benefit the species, you would not misinterpret the kneecap. Not so for social traits. Here, as we have seen, we can exactly invert the meaning of a trait by failing to see how it is favored among individuals, even though it may be more costly to others. Instead we imagine that everyone benefits. This often amounts to reaffirming Pangloss’s theorem—that everything is for the best in the best of all possible worlds.

Likewise, altruism toward others presents no great problem for species-advantage thinking, because as long as benefit is greater than cost, there is a net benefit for the species. Of course, at the individual level, altruism is a problem to explain and requires special conditions, such as kinship or reciprocal relations, with internal conflict in both cases. The latter generates a sense of fairness to evaluate nonreciprocal relations, an adaptation unnecessary under a group-selected view.
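The individual-level condition for altruism mentioned here is Hamilton's rule, which can be sketched with invented numbers (the particular values of r, b, and c below are chosen purely for illustration):

```python
def altruism_favored(r, b, c):
    """Hamilton's rule: an altruistic act is favored by selection when the
    benefit b to the recipient, devalued by relatedness r, exceeds the cost
    c to the actor."""
    return r * b > c

# Helping a full sibling (r = 1/2) is worthwhile only if benefit exceeds twice cost.
print(altruism_favored(0.5, 3.0, 1.0))   # True:  0.5 * 3.0 > 1.0
print(altruism_favored(0.5, 1.5, 1.0))   # False: 0.5 * 1.5 < 1.0

# Under naive species-level accounting, any act with b > c would qualify,
# so both cases above would (wrongly) be expected to evolve.
print(3.0 > 1.0, 1.5 > 1.0)              # True True
```

The contrast in the last lines is the point of the paragraph: the species-advantage view predicts altruism far too easily, and so never needs the kinship and reciprocity machinery, including a sense of fairness, that individual-level selection requires.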


The short answer is no. Economics acts like a science and quacks like one—it has developed an impressive mathematical apparatus and awards itself a Nobel Prize each year—but it is not yet a science. It fails to ground itself in underlying knowledge (in this case, biology). This is curious on its face, because models of economic activity must inevitably be based on some notion of what an individual organism is up to. What are we trying to maximize? Here economists play a shell game. People are expected to attempt to maximize their “utility.” And what is utility? Well, anything people wish to maximize. In some situations, you will try to maximize money acquired, in others food, and in yet others sex over food and money. So we need “preference functions” to tell us when one kind of utility takes precedence over another. These must be empirically determined, since economics by itself can provide no theory for how the organism is expected to rank these variables. But determining all of the preference functions by measurement in all the relevant situations is hopeless from the outset, even for a single organism, much less a group.

As it turns out, biology now has a well-developed theory of exactly what utility is (even if it misrepresented the truth for some one hundred years) based on Darwin’s concept of reproductive success. If you are talking about utility (that is, benefit) to a living creature, then it is useful to know that this ultimately refers to the individual’s inclusive fitness, that is, the number of its surviving offspring plus effects (positive and negative) on the reproductive success of relatives, each devalued by its relatedness to them. In many situations, the added precision of this definition (compared to reproductive success alone) makes no difference, but by resolutely acting as if they can produce a science out of whole cloth, that is, independent of noneconomic scientific knowledge, economists miss out on a whole series of linkages that may be critical. They often implicitly assume, as we noted in the first chapter, that market forces will naturally constrain the cost of deception in social and economic systems, but such a belief fails to correspond with what we know from daily life, much less biology more generally. Yet such is the detachment of this “science” from reality that these contradictions arouse notice only when the entire world is hurtling into an economic depression based on corporate greed wedded to false economic theory.

The mistake is partly related to the fact that “utility” has ambiguity built into it. It can refer to utility of your actions to you or to others, including the rest of your group. Economists easily imagine that the two kinds of utility are well aligned. They often argue that individuals acting for personal utility (undefined) will tend to benefit the group (provide general utility). Thus they tend to be blind to the possibility that unrestrained pursuit of personal utility can have disastrous effects on group benefit. This is a well-known fallacy in biology, with hundreds of examples. Nowhere do we assume in advance that the two kinds of utility are positively aligned. This must be shown separately for any given case.

One recent effort by economics to link up with allied disciplines is called behavioral economics, a link with psychology that is most welcome. But as usual, economists resolutely refuse to make the final link to evolutionary theory, even when going through the motions. That is, even those economists who propose evolutionary explanations of economic behavior often do so with unusual, counterlogical assumptions. For example, a common recent mistake (published in all the best journals) is to assume that our behavior evolved specifically to fit artificial economic games.

To imagine how bizarre this is, consider the ultimatum game described in Chapter 2. People often reject unfair offers of a split of money by anonymous others (for example, 80 percent to the proposer and 20 percent to the recipient) even though they thereby lose money. Thus, the game measures our sense of fairness: How much are we willing to suffer in order to punish someone acting unfairly toward us? But a group of economists (with some anthropologists thrown in for added rigor) has made the extraordinary argument that people are acting as if they had evolved to fit this unusual lab situation. Put differently, that we reject unfair offers at a cost to ourselves in order to punish the perpetrator in a completely anonymous exchange means to them that the bias evolved to fit exactly this situation—one-time exchanges with no possible return benefit for the actor, or relatedness, only a group benefit. Once again, group trumps individual. But this is as logical as arguing that our terror watching a horror film evolved to fit movie showings. Biologists have brought living creatures into the laboratory for centuries to study their traits, but no one I know of has shortcut the study of the function of the trait by imagining that the trait evolved to fit the laboratory.

A recent Nobel winner in economics wondered how it was possible for his well-developed science to fail completely to predict the catastrophic economic events that started in 2008. One part, of course, is that economic events are intrinsically complex, involving many factors, and the final result, the aggregate of the behavior of an enormous number of people, though not quite as complex as the weather, is almost as difficult to predict. As for the cause the economist located, it was infatuation with beautiful mathematics at the cost of attention to reality. Surely this is part of the problem, but nowhere does he suggest that the first piece of reality they should pay attention to—and this has been obvious for some thirty years now—is biology, in particular evolutionary theory. If only thirty years ago economists had built a theory of economic utility on a theory of biological self-interest—forget the beautiful math and pay attention to the relevant math—we might have been spared some of the extravagances of economic thought regarding, for example, built-in anti-deception mechanisms kicking in to protect us from the harmful effects of unrestrained economic egotism by those already at the top.

Finally, when a science is a pretend science rather than the real thing, it also falls into sloppy and biased systems for evaluating the truth. Consider the following, a common occurrence during the past fifteen years. The World Bank advises developing countries to open their markets to foreign goods, let the markets rule, and slash the welfare state. When the program is implemented and fails, the diagnosis is simple: “Our advice was good but you failed to follow it closely enough.” There is little risk of being falsified with this kind of procedure.


Cultural anthropology made a tragic left turn in the mid-1970s from which it has yet to recover (at least in the United States). Before then, the field was called social anthropology and included all forms of human social behavior, especially as displayed by different cultures and peoples. The field was meant to partner with physical anthropology, the study of the body, including fossils and artifacts from the past. But suddenly in the early 1970s, strong social theory emerged from biology and a variety of subjects were addressed seriously for the first time: kinship theory, including parent/offspring relations; relative parental investment and the evolution of sex differences; the sex ratio; reciprocal altruism and a sense of fairness; and so on. Social anthropologists had a choice: accept the new work, master it, and rewrite their own discipline along the new lines, or reject the new work and protect their own expertise (such as it was). As has been noted, “Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everyone gets busy on the proof.” This is perhaps especially true in academia.

Consider your dilemma as a social anthropologist. You have invested twenty years of your life in mastering social anthropology. Along the way, you have completely neglected biology. Now comes the choice: acknowledge biology (painful), invest three years in catching up (nearly unimaginable), then compete with people twenty years younger than you and better trained (impossible)—or instead ride the old horse for all she is worth, whipping social anthropology until she bleeds? Even in physics, it was famously said that the field advanced one funeral at a time—only death could get people to change their minds. But notice the intermediate path not taken. They could have said, “I will not retool myself; it is too late. But I will make sure my students learn something useful about the new work in biology (they can even teach me) while I continue to do my work.” Complete rejection is redolent of self-deception. Outright denial is the easiest immediate path but entrains mounting costs, now onto the third generation, making it ever harder to resist each new wave of denial.

Certainly the social anthropologists rose to the challenge, even renaming their field “cultural anthropology” to more explicitly rule out the relevance of biology in advance. Now we were no longer social organisms but cultural ones. The justification, in turn, was moral. Out of biological thinking flowed biological determinism (the notion that genetics influences daily life), whose downstream effects included fascism, racism, sexism, heterosexism, and other odious “isms.” To mention natural selection was to imply the existence and perhaps even utility of genes, which was prohibited on the moral grounds just given. Thus an entire new area of social theory would be ruled out based on the alleged pernicious influences of its assumptions, which were, in fact, widely accepted as true (genes exist, they affect social traits, natural selection alters their relative frequencies, and this produces meaningful patterns). Once you remove biology from human social life, what do you have? Words. Not even language, which of course is deeply biological, but words alone that then wield magical powers, capable of biasing your every thought, science itself reduced to one of many arbitrary systems of thought.

And what has been the upshot of this? Thirty-five wasted years and counting. Years wasted in not synthesizing social and physical anthropology. Strong people welcome new ideas and make them their own. Weak people run from new ideas, or so it seems, and then are driven into bizarre mind states, such as believing that words have the power to dominate reality, that social constructs such as gender are much stronger than the 300 million years of genetic evolution that went into producing the two sexes—whose facts in any case they remain resolutely ignorant of, the better to develop a thoroughly word-based approach to the subject.

In many ways, cultural anthropology is now all about self-deception—other people’s. Science itself is a social construct, one among many equally valid ways of viewing the world: the properties of viruses may also be social constructs, the penis may, in some meaningful sense, be the square root of −1, and so on. As a result, most US anthropology departments consist of two completely separate sections, in which, as one biological colleague put it, “they think we’re Nazis and we think they are idiots”—hardly a platform for synthesis and mutual growth.


In the 1960s, psychologists often explicitly disavowed the importance of biology. At Harvard, to get a PhD in psychology, you were required to take one semester of physics. This was to give you an idea of what an exact science looked like. No biology was required. Like economists, psychologists were going to create their field out of itself: learning theory, social psychology, psychoanalysis—essentially competing guesses about what was important in human development, none with any foundation. Psychoanalysis was a long-running fraud, as we shall see below, and learning theory made far-reaching and implausible claims about the ability of reinforcement to mold all behavior adaptively. It was soon shown on logical grounds alone that reinforcement could not produce language, or even just associations of actions and their effects when the latter were delayed more than a few moments.

On the positive side, psychology has always concentrated on the individual and was thus congenial to an approach based on individual advantage. Recently a school of evolutionary psychology has developed, while psychology has been increasingly integrated with other areas of biology, sensory physiology long ago but now neurophysiology and immunology. So psychology is rapidly becoming the branch of evolutionary biology it always wished to be.

Social psychology somewhat lags the rest of psychology, another example perhaps of the retarding effects of deceit and self-deception on disciplines with more social content. It has generated artificial methodologies meant to shortcut work and achieve quick results, the curse of psychology for more than a century: wishing to say more than available knowledge permits. A key such method was that of self-reports, or questionnaire-answering behavior—what people say about themselves. In retrospect, it seems unwise to have tried to build a science of human behavior on people’s verbal responses to questions. For one thing, forces of deceit and self-deception—or call them issues of self-presentation and self-perception, if you prefer—loom large. We often do not tell the truth about ourselves to others and we often do not know the truth in the first place. In using these measures, exactly how were they screening out deception, never mind self-deception, to arrive at the truth? And how is this possible in the absence of an explicit theory of deceit and self-deception? Building a science on this foundation led to numerous significant correlations between ill-defined variables that are poorly measured, but little or no cumulative growth over time. Instruments (that is, questionnaires) were said to be well-validated, predictive, and internally consistent, that is, people answer the same way a month apart, the measures correlate with some other measures, and all questions point in the same direction (or are reverse scored). Not a very impressive nod toward methodology, but fortunately this era is coming to a close, with new methodologies that access unconscious biases directly.


Freud claimed to have developed a detailed science of self-deception and human development: psychoanalysis. But one measure of a field is whether it grows and prospers or wilts and withers, and psychoanalysis has not prospered. As it turned out, the empirical foundation for developments in the field was something called clinical lore, essentially what psychiatrists told one another over drinks after a day’s work. That is, when you asked a psychiatrist what his basis was (and the psychiatrist was almost always a he) for believing that a key part of the female psyche was “penis envy” or that the route to understanding males lay in something called castration anxiety, you were told that the basis was shared experiences, assumptions, and assertions among psychoanalysts about what went on during psychotherapy—something inaccessible to you, unverifiable, and, as a system, providing no hope for improvement. Indeed, the failure to state or develop methodologies capable of producing useful information is almost the definition of nonscience, and in this regard, psychoanalysis has been spectacularly successful. When is the last time you heard of a large, double-blind study of penis envy or castration anxiety?

Freud’s theory consisted of two parts: self-deception and psychosocial development. The theory of self-deception had many creative concepts—denial, projection, reaction formation, ego defense mechanisms, and so on—but these were wedded to a larger system that made no sense at all: the id (instinctual forces heavily based on alleged critical transitions in early life—oral, anal, and oedipal), the ego (roughly, the conscious mind), and the superego (the conscience, or something like that, formed by interaction with parents and significant others).

His theory of psychosocial development was corrupt, in the sense that it was built on weak and suspect assumptions that had little or no factual support. The argument was heavily centered on sexual attraction within the nuclear family—and its suppression—but there is good reason to doubt that this should be a major offspring concern. Almost all species of animals are selected to avoid close inbreeding, which has real genetic costs, and they have evolved mechanisms—for example, early exposure to parents and siblings causes sexual disinterest—that minimize inbreeding. This is especially true from the offspring’s viewpoint. That is, a father may gain enough in relatedness by forcing sex on a daughter (and thereby producing a child) to offset the genetic cost, but the daughter is unlikely to gain enough in relatedness to offset her cost. The son could in principle benefit from impregnating his mother, but selection would be weak at best, since she is ending her reproduction while he is beginning his, and there are other very good reasons for showing deference to one’s mother (especially for a male’s maternal genes).

Thus, in claiming that sexual tendencies within the family arose from the unconscious needs of the child, Freud was committing a classic case of denial and projection—denying the inappropriate sexual advances made toward young women by their male relatives (as his women patients were describing to him) and imagining instead that the women were lusting after precisely these couplings.

He was also obtuse to harsh parental treatment as a cause of offspring malfunction. Once again, his tendency was to blame the victim. One of Freud’s celebrated analyses was that of Daniel Schreber, psychotic since adulthood, with sensations of being tormented physically, bound and restricted, and unable to control his fears. Freud construed this whole syndrome as resulting from the child’s failure to mature properly, getting stuck in some early stage of development, but he never considered the father’s possible role in this—indeed, he speaks warmly of him as a highly regarded educator with numerous books—even though the man was a sadistic educator and parent. He advocated tying children into bed at night and using a series of other torture devices, all in the name of good posture. Alas, the father applied his theories to his own children. One son committed suicide; the other survived to become Freud’s celebrated case.

The degree to which Freud’s habit of cocaine abuse during his early years helped fuel his grandiosity is impossible to know, but he certainly easily believed in other phantasmagorical things, for example, that the number twenty-three played a recurring and decisive role in human life, or that thought could be transported instantaneously across wide distances without the use of electrical devices, and so on. What is truly extraordinary is that he was able to build a cult that took over whole sections of psychiatry and psychology, and provided employment for generations of like-minded people, charging high fees, four times a week, to misinterpret the lives of those they were talking to.

Freud’s own attitude toward empirical verification was nicely summarized in his response to someone who asked whether, after thirty years of theorizing, it might not be time for some experimental testing. Though allowing that experiments could do no harm, Freud said:

The wealth of dependable observations on which these assertions rest, make them independent of experimental verification.

This is an unusual assertion, since it implies that observations can count in favor of the theory but counterevidence cannot count against it. Put differently, the worlds of experimental truth and psychoanalytic truth are independent, as indeed they are. Contrast the position of the famous physicist Richard Feynman:

It doesn’t matter how beautiful the guess is, or how smart the guesser is, or how famous the guesser is; if the experiment disagrees with the guess, the guess is wrong. That’s all there is to it.


We have seen numerous ways in which self-deception may deform the structure of intellectual disciplines. This seems obvious in both evolutionary biology and the social sciences, where increasing relevance to human social behavior is matched by decreasing rates of progress, in part because such fields induce more self-deception in their practitioners. One common bias is that life naturally evolves to subserve function at higher levels. Not genes but individuals, not individuals but groups, not groups but species, not species but ecosystems, and, with a little extra energy, not ecosystems but the entire universe. Certainly religion seems to promote this bias, always tempted to see a larger pattern than is warranted. Science provides some hope, since it has a built-in series of mechanisms that guard against deceit and self-deception, but it too is vulnerable to the construction of pseudo-sciences (Freud), not to mention outright fraud. Over the long haul, however, falsehood has no chance, which is why over time science tends to outstrip competing enterprises.