The Pleasure Instinct: Why We Crave Adventure, Chocolate, Pheromones, and Music - Gene Wallenstein (2008)
Part II. The Pleasures of the Sensory World
Chapter 8. In Search of Pretty Things
One eye sees, the other feels.
The real voyage of discovery consists not in seeking new landscapes but in having new eyes.
Martin watched the first flurries of snow begin to fall and finally relaxed as he switched off the outside light and urged his body upstairs to bed. The workday had been a frenzy of meetings and deadlines that blurred into a whirl, beginning when he entered the freeway on-ramp in the morning and subsiding only after a glass of wine before dinner. His drive home had been the pièce de résistance of the day. Another car seemed to emerge from thin air as he was changing lanes, causing him to veer suddenly. The remainder of his commute was accompanied by white knuckles and a sickening cold sweat—the holidays always seemed to bring out the amateurs.
No matter how tense his day, Martin never seemed to have any difficulty getting to sleep once his head hit the pillow. His last thoughts on this cold December evening were of his beautiful wife and his four-year-old son’s warm, crooked smile—a sight that always overwhelmed him.
Well before daybreak Martin was roused from sleep for his customary middle-of-the-night trip to the bathroom and immediately felt the gripping headache. Stumbling into the bathroom, he noticed a strange numbness in his left leg and the left side of his face. In the dim glow of the bathroom night-light, he saw that something was also wrong with his face. It was hard to say exactly what was different at first, but slowly he realized that his mouth seemed at odds with the rest of his face—the left side kind of drooping placidly. Worse yet, when he moved his right arm toward the drug cabinet, his hand seemed to vanish from sight, only to reappear once it came in contact with the mirror. Later that morning in the hospital, his doctors had a hard time convincing him that although just forty-one years old, he had experienced a stroke that could have ended his life.
After six months of physical therapy, Martin began to feel like himself again. The stroke had been a warning, but he’d survived and come through fairly unscathed save for occasional slurred speech. The new Martin was a different person altogether—finding more time to relax with his family, and even for a jog around his neighborhood each morning before work. All of his attending doctors pronounced him fit—and lucky—yet he often felt odd sensations during the day. More and more frequently during his morning jogs, Martin seemed to have brief moments of intense fear and panic. These fleeting episodes occurred especially when he ran through the park at the end of the street and were often triggered by the sudden appearance of a dog—any dog, no matter how small—or even another person walking toward him.
Although by all standard tests Martin had excellent vision, he mentioned during a visit with his doctor that he occasionally had trouble estimating the distance of approaching cars while crossing a street—a problem that had led to several close calls in the past few months. When asked about any other difficulties, he reluctantly confessed that sometimes when talking with a coworker he had difficulty understanding what she was saying, because her mouth seemed to “fade in and out” of sight when she spoke. This must have been a surprise to the attending neurologist, but it was enough of a clue to raise the possibility that his patient was suffering from akinetopsia, or visual motion blindness.
Martin was referred to a specialist that week for a battery of visual acuity, motion, and perimetry tests. His static visual acuity was completely normal. In tests using moving stimuli, however, Martin seemed to have severe deficits in specific portions of his visual field. Objects at rest that began moving proved to be particularly troublesome.
A common test for akinetopsia involves what psychologists and other vision scientists refer to as object tracking. The patient is seated directly in front of a computer display and asked to focus his or her gaze and follow the movement of a colored circle. The circle begins in the middle of the screen, and the tester can manipulate the object’s gradual movement to different corners of the screen. People with normal vision see the circle move smoothly from the center to the corner locations, and grow bored quickly. Patients with akinetopsia have an entirely different experience. They report seeing the still object disappear from the center and reappear in one of the corners. Once the circle moves, it is as good as gone until it comes to rest. Martin had a similar experience in his object tracking test, so his neurologist immediately ordered a second MRI scan of his injured brain. The damage was concentrated on one side of Martin’s posterior parietal lobe, an area toward the back top of the brain that is deeply involved in—not surprisingly—motion perception.
The fact that Martin had stroke damage to his posterior parietal lobe provides an explanation for his akinetopsia, yet the particular way this disorder impacted his life—creating a sudden, blinding fear of dogs and people coming toward him—can only be understood if we consider how the pleasure instinct and experience guide the development of the visual brain.
How Pleasure Fine-Tunes the Visual Brain
The emergence of the earliest primates from the mammalian branch some sixty million years ago came with dramatic changes in the sensory systems of this lineage. Based on the abundance of fossils these animals left behind, we know that they were rather small—probably weighing only a few ounces—and closely resembled modern-day prosimians such as galagos, tarsiers, and lemurs. They were adapted to a much warmer climate than we have today, with tropical rain forests covering a significant portion of the planet. Their small size and prehensile hands and feet allowed them to grab onto and forage among the fine terminal branches that make up the rain forest canopy. This unique niche came with its own challenges in terms of identifying potential foods, usually fruit, seeds, and insects camouflaged against a background of green leaves and thickets, and recognizing potential predators. These two key selection factors—needing to locate hidden fruit and predators—promoted the gradual shift from a sensory system dominated by smell, as in most mammals, to a new model where vision reigned supreme in the emerging early primates. These new creatures had large, forward-facing eyes with a high density of photoreceptors in the center of their retina, an area called the fovea. In early primates this high concentration of photoreceptors came with new brain-stem circuitry that evolved to focus their visual gaze frontally toward anything that moved, and a dramatic increase in the size of the brain areas devoted to vision relative to those devoted to olfaction.
With the shift toward frontal vision, early primates sacrificed some ability to detect the presence of food or predators using smell, yet these anatomical changes gave them distinct advantages over other groups of mammals. In particular, the shift from side-oriented eyes to a frontal position permitted the evolution of binocularity and stereoscopic vision, two functions critically important for fine visual acuity and determining the size and distance of an object.
The favoring of vision over smell in early primates was not just a matter of the visual cortex getting bigger—entirely new brain areas devoted to specialized visual functions evolved in these animals that never existed in other mammals. One important innovation was the evolution of brain areas in the posterior parietal lobe and medial temporal regions that were devoted to the visual guidance of muscle movement. Evolutionary biologists have argued that the presence of improved frontal visual acuity and a propensity to live among fine tree branches necessitated the development of neural systems designed to improve eye-hand coordination. The emergence of posterior parietal areas for perceiving visual motion is the result of this evolutionary ratchet effect (chapter 2), where one adapted function served as a selection factor for yet another adaptation. In this case, the development of increased frontal visual acuity in combination with prehensile hands and feet made the rain-forest canopy a viable niche for early primates. This shift from ground-level to tree-branch foraging incurred a high survival cost on primates with poor eye-hand coordination. Simply moving from one swaying branch to another might prove fatal if the distance to the next limb or its degree of movement was miscalculated. Such conditions served as strong selection factors in driving the evolution of specialized brain regions devoted to perceiving object motion.
The evolution of brain regions in the posterior parietal lobe and medial temporal areas that facilitate object tracking and eye-hand coordination, in turn, created the conditions in which selection factors arose for the development of color vision. It is thought that until about forty million years ago early primates had only one primary photoreceptor type tuned to a single distribution of light wavelengths. In terms of function, this mechanism allowed a primate to see the world in basic shades of gray. Most modern-day prosimians have two distinct types of photoreceptor (dichromatic) that are maximally sensitive to different light wavelengths. These animals have a rudimentary capacity for color vision. During the early period in the evolutionary history of primates, a mutation caused the genes that normally control photoreceptor development to duplicate. This process resulted in a higher density of photoreceptors, which eventually diverged into two and then three distinct classes, each sensitive to a different wavelength of visible light. Recent work has shown that the advance from dichromacy to trichromatic color vision specifically enhanced the ability of primates to distinguish nutrient-rich fruits from the background coloration of leaves. Thus the evolution of improved eye-hand coordination and frontal vision that gave early primates a novel niche created the conditions where identifying colored fruits against a uniformly green background of leaves served as a selection pressure for color vision.
The evolutionary history of primate vision is a story riddled with ratchet effects, and so too is the ontogenetic development of vision in humans. By the fifth week of gestation, human embryos show the first signs of an early eyecup that begins to differentiate into a lens and a retina. At this point in development, the eyes face laterally to each side, much like an early mammal. The retina itself is derived from the same neural ectoderm that comprises the central nervous system and is therefore considered part of the brain. Each retina is attached to the brain-stem by a broad group of fibers called the optic nerve. By the fourteenth week of gestation, the eyes begin to face forward in the familiar primate form, and photoreceptor cells start to form in the center of the retina and gradually fill in from the fovea outward. Development continues from the retina inward to brain areas, advancing along the same path as normal sensory input. Retinal development is followed by the emergence of several brain-stem sites such as the superior colliculus (involved in controlling eye movements), then the visual cell groups in the thalamus (e.g., lateral geniculate nucleus), and finally neocortical areas such as the primary visual cortex (also known as V1).
Primates have tremendous developmental investment in vision. There are more than forty known cortical areas devoted specifically to visual information processing. By the end of the second trimester, a human fetus has a fairly mature primary visual cortex, and most secondary and tertiary visual centers have undergone rapid growth both in terms of cell numbers and synaptic connections between the areas. A human’s visual system, however, does most of its development after birth and is even more dependent on experience for normal maturity than the other senses. While the other sensory systems can be stimulated fairly early in the womb and thus undergo considerable experience-expectant growth before birth, vision is the exception to the rule. Scarcely any light makes its way to the fetus, and consequently most of the visual system’s fine-tuning must be guided by the pleasure instinct after birth.
Any parent will tell you that infants are hungry for visual experiences, yet they are rather particular in their choices. Much like the other sensory systems, there seems to be a characteristic sequence of preferred stimulation types that is attractive to infants. Little Kai, who is now crawling, has followed a pattern of visual development that is an echo of primate phylogeny. At birth his visual system was relatively immature when compared to the sensory systems responsible for touch, taste, smell, and hearing. For the first few weeks, Kai could only lock onto faces and high-contrast objects that were within about five to ten inches of his face. Infants this age have great difficulty focusing on objects outside this range, since the muscles that control lens shape (which affects light refraction) are so immature. By four months, however, Kai could focus his vision across a much larger range of distances and was fascinated by anything that moved. He was particularly fond of ceiling fans, but the love affair always ended once the fan was turned off, suggesting that it was the motion that piqued his interest. Toy manufacturers are, of course, sensitive to these developmental milestones. One might argue tongue in cheek that the preference of newborns for things that move was an important (albeit artificial) selection factor in the evolution of the mobile (insofar as parents tend to select toys that are attractive to newborns).
Similar to what we have seen with the other sensory systems, the neocortical areas devoted to vision tend to specialize. Once information enters the first neocortical area dedicated to vision (primary visual cortex or V1), it diverges to a number of additional cortical regions, each with a different specialty. Some areas are responsible for processing object motion and location. Others process object form, color, texture, shading, shape, and so forth. Parallel processing is a general operating principle for all sensory systems in the brain. The strength of parallel processing lies in the fact that it facilitates speed of information transfer through the brain and adds to the robustness of the information through built-in redundancy (that is, multiple channels are used).
As I throw a multicolored ball in the air in front of my son, the information makes its way through Kai’s retinas and quickly stimulates brain-stem sites, such as the superior colliculus, that focus his attention toward the object. This brain-stem attention-grabbing circuitry is about as evolutionarily ancient as any primate brain region, since it is found in every vertebrate. From the brain-stem sites, the visual information about the ball travels through Kai’s thalamus and enters his primary visual cortex, which provides him with the first conscious perception of the object. Thus, although the brain-stem activation produced a change in behavior causing Kai to focus on the ball, this processing is beneath the surface of consciousness.
Once the visual information reaches V1, it quickly diverges into two dominant streams: one responsible for processing information about an object’s spatial location and movement (the “where” pathway) and another responsible for processing the form features of the object such as shape, color, and texture. The latter stream is known as the “what” pathway.
The “where” and “what” visual pathways develop and mature at different rates in humans. In primates the brain pathways devoted to processing object motion develop and mature far earlier than those responsible for processing advanced object form information. For example, while Kai was delighted by the appearance of almost any slowly moving object when he was four months old, he is now a ten-month-old clearly in love with bright, primary colors. This is not really a transition from preferring objects that move to preferring objects that have bright colors. Rather, it is the addition of a fondness for bright, primary colors that joins the list of pleasure-inducing forms of stimulation. This sequence is characteristic of all human infants and maps onto the relative maturity of the “where” and “what” pathways.
At four months an infant’s posterior parietal areas that comprise the where pathway are just beginning to undergo a major increase in synaptic pruning. Remember from earlier chapters that during synaptic pruning, experience becomes the crucial instrument in shaping and fine-tuning brain circuitry in the early stages of development. Hence a four-month-old infant’s where pathway is just beginning to enter a phase of synaptic pruning, where its continued development and fine-tuning depend on appropriate stimulation. In this case, appropriate stimulation consists of any experiences that would optimally activate the mature circuit—namely, moving objects.
If we were to design this process in a robot—creating a sensory perception system that depends on experience for fine-tuning—any good engineer would build in a process to increase the probability that the optimal forms of required stimulation are experienced at the right times. Likewise, nature doesn’t rely on the mere dumb luck that a developing infant will just happen to encounter specific forms of stimulation that are required for normal development. Nature has solved this problem by linking the brain circuitry that supports natural reward (primary reinforcing stimuli) with the maturing circuitry from the primary sensory systems. For example, to make the experience of motion perception pleasurable to Kai at four months, the growing circuitry in his posterior parietal lobe begins to establish reciprocal connections to several brain-stem and limbic regions that are involved in natural reward, motivation, and analgesia. Thus the activation of Kai’s posterior parietal lobe circuitry by a slowly moving object such as a ceiling fan begins to be accompanied by pleasurable sensations much like those corresponding to primary reinforcing stimuli (such as sweets). This process ensures that Kai naturally seeks out objects that fill this experience-expectant requirement for the successful fine-tuning of his visual where pathway and the continued refinement of his capacity to discern motion.
Our crawling ten-month-old Kai is now entering a phase where some areas in his what pathway are undergoing extreme synaptic pruning. At about this time regions such as V8, devoted to processing color vision, begin to mature and consequently need proper stimulation for continued growth and refinement. Kai’s emerging attraction to objects that are composed of bright primary colors—reds, greens, and blues—will encourage him to seek out these optimal forms of stimulation that provide the fine-tuning needed in region V8. Indeed, the primary colors correspond to the wavelengths of light that optimally activate distinct classes of brain cells in region V8. If these cells are damaged in an adult, for instance by a stroke or related trauma, the result is a complete loss of color vision with no change in other features associated with visual acuity. The developmental pattern that is seen in infants—object motion pathways maturing before most brain areas involved in visual object recognition—echoes the evolution of vision in primates. Comparative studies suggest that the object motion pathway evolved well before most brain regions that are devoted to object recognition. For instance, the object motion pathway is observed in all mammals, but features that support object recognition such as trichromacy did not appear until the divergence of the primate lineage from other mammals.
The Pleasure of Learning
Vision is no different in terms of general developmental properties than any other sensory system. Genes play a direct role in mapping out the major brain regions dedicated to vision and the general pathways that connect them. Somewhere in the mere twenty-five thousand or so genes that comprise the human genome, there is enough information to ensure that the enormously complicated wiring of the human brain (and the rest of the body, for that matter) is mapped out. Genes do not code specific paths, but rather cause the development of unique molecular markers that are used by growing brain fibers as targets. This process gets the connections approximately right but leaves the remainder of the job—the fine-tuning—to experience.
Fine-tuning the visual system takes a long time. While the neural pathways that mediate motion processing mature fairly early, the circuitry responsible for higher-order processing and detailed visual acuity continues to be fine-tuned by experience well into the toddler period. Like the other sensory systems, brain cells that comprise visual circuitry are not mini-blank slates waiting to be written upon. They come preprogrammed with certain receiver biases from the very beginning. Experiments done in the early 1960s showed that neurons in the primary visual cortex, visual thalamus, and even in the retina itself respond optimally to certain forms of stimulation and are barely activated at all by others. For instance, many cells in V1 tend to respond to straight lines of a particular orientation. If we could record from cells in your primary visual cortex right now, we could perform the following experiment. Imagine I display a completely white screen directly in front of your eyes. While you focus on the screen, I lower a pencil held by its tip until it enters your visual field. Light reflected off the pencil enters your retina, where photoreceptors transduce the light energy into electrical impulses that are then sent along the visual pathways we have been discussing. At each stage in the processing of this image, some cells respond to this particular form of stimulus; however, most remain quiet. Starting with your retina, then your thalamus, and including V1, only a select group of cells have a preferred tuning for this specific orientation of the pencil. Other V1 cells are sensitive to straight lines, but they will only respond when the pencil is rotated to their preferred orientation (for example, horizontal instead of vertical).
Interestingly, our V1 cells are not ordered haphazardly, but have a strict anatomical organization that is related to the degree of angular rotation of a viewed edge. All of the cells can be activated by a straight line, but the line has to have the correct orientation to excite a given cell and get it communicating with other neurons. The functional consequence of this physiological arrangement is edge detection, a capacity that is critical for many aspects of vision, such as identifying the natural boundaries of an object. Many brain theorists envision a scenario where information from multiple edge detector cells is integrated at higher cortical areas to form a representation of the entire object. Experimental evidence shows that multiple cells from V1 with different orientation tuning converge on the same neurons at higher cortical areas such as V2, so this theoretical position has anatomical support.
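The logic of orientation-tuned cells can be sketched in a few lines of code. The toy model below is purely illustrative and not drawn from the book (real simple cells are usually modeled with Gabor filters): two hypothetical "cells" are represented as small oriented filters swept across a bar stimulus, and the vertically tuned cell responds strongly to a vertical bar while its horizontally tuned neighbor stays silent.

```python
# Toy sketch of orientation tuning in V1 (illustrative assumption, not a
# model from the text): each "cell" is a small oriented filter, and its
# response is the rectified sum of its output at every image position.

def bar_image(orientation, size=9):
    """A bright one-pixel bar on a dark background, vertical or horizontal."""
    img = [[0.0] * size for _ in range(size)]
    for k in range(size):
        if orientation == "vertical":
            img[k][size // 2] = 1.0
        else:
            img[size // 2][k] = 1.0
    return img

# Receptive fields: excited by a centered bar, inhibited by its flanks.
VERTICAL_RF = [[-1, 2, -1]] * 3                      # prefers vertical bars
HORIZONTAL_RF = [list(row) for row in zip(*VERTICAL_RF)]  # prefers horizontal

def response(rf, img):
    """Total rectified response as the receptive field sweeps the image."""
    h, w = len(rf), len(rf[0])
    total = 0.0
    for i in range(len(img) - h + 1):
        for j in range(len(img[0]) - w + 1):
            s = sum(rf[a][b] * img[i + a][j + b]
                    for a in range(h) for b in range(w))
            total += max(0.0, s)   # cells cannot fire "negatively"
    return total

stim = bar_image("vertical")
print(response(VERTICAL_RF, stim), response(HORIZONTAL_RF, stim))  # → 42.0 0.0
```

Each filter only "speaks up" when the bar matches its preferred orientation, which is exactly the property that lets higher areas such as V2 combine many such detectors into a representation of an object's edges.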
Edge detection by V1 cells tuned to straight lines of a particular orientation is just one example of preexisting biases that are built into brain cells from the earliest period of development. Tuning biases like this have been found in newborn brain cells in virtually every mammalian species tested. Interestingly, although most cells in V1 have a signal preference shortly after birth, experience plays a critical role in shaping the tuning to match the particular ecological niche that an organism inhabits. In most species, a critical period exists in the earliest stages of visual development where if V1 cells are denied stimuli that match their preferred orientation, they may die or be retuned to another orientation. If the cells are stimulated by the appropriate signals during this period, however, the tuning of the cell becomes increasingly sharpened and specific to the original bias. This has two important effects. First, the increased tuning makes information transfer less noisy, since variation in terms of what kinds of stimuli may excite a cell naturally decreases. A second consequence of this process is that while some signals may be detected quite easily with minimal stimulation, other signals that are slightly different from the preferred tuning will be completely missed. Hence, existing biases that are in place at or near birth can become magnified with experience, while others may die out or even be replaced. Individual brain cells are far from being blank slates at birth.
The critical periods for visual system tuning—like the other sensory systems we have encountered—occur when that particular circuit is undergoing synaptic pruning (see chapter 3). During this period brain cells increase their sensitivity to some forms of stimulation and necessarily lose their responsiveness to others. The sharpened tuning of cortical cells that support visual perception results in increased visual acuity for some features and a decrease for others.
Experiments since the 1960s have demonstrated that cats and monkeys who are denied visual stimulation in a particular eye during this period of extreme plasticity have marked visual deficits as adults. Moreover, the primary visual cortex (and other visual areas) of the deprived animals looks very different from that of normally reared controls. Usually there is an approximately equal portion of visual cortical area in V1 devoted to processing information from each eye. If one eye is covered during the critical period, the portions of V1 that receive information from the competing eye expand and take over the areas that would have been associated with the covered eye. As we have seen in earlier chapters, synaptic pruning is a process that is ruled by competition. Two synaptic connections vying for the same space will each struggle to stabilize into a mature circuit, but the one that is activated by visual experiences the most usually wins. The old adage “Use it or lose it” rings true in this case. With the appropriate stimulation (nurture), the initial visual circuitry laid down by nature is further shaped to match environmental contingencies.
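The winner-take-all character of this competition can be captured in a deliberately simple toy model (a hypothetical sketch, not a model from the text): two inputs to the same target grow in proportion to how often experience drives them, while sharing a fixed total amount of synaptic strength, so one input's gain is necessarily the other's loss.

```python
# Toy model of activity-dependent synaptic competition (hypothetical
# illustration): Hebbian growth plus a fixed resource budget produces
# the "use it or lose it" outcome seen in ocular dominance experiments.

def compete(rate_a, rate_b, steps=500, lr=0.1):
    """Final weights of two inputs firing at the given activity rates."""
    w_a = w_b = 0.5                   # both connections start out equal
    for _ in range(steps):
        w_a += lr * rate_a * w_a      # Hebbian growth: use strengthens
        w_b += lr * rate_b * w_b
        total = w_a + w_b             # limited resource: normalization makes
        w_a, w_b = w_a / total, w_b / total  # the weights compete directly
    return w_a, w_b

# An "open eye" driven ten times more than a "deprived eye" takes over:
open_eye, deprived_eye = compete(rate_a=1.0, rate_b=0.1)
print(round(open_eye, 3), round(deprived_eye, 3))  # → 1.0 0.0
```

With equal activity the two inputs keep their equal shares of territory, mirroring the roughly even split of V1 between the two eyes in normally reared animals.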
When cats or monkeys are reared in a carefully controlled environment where they only experience lines of a single orientation (for example, all vertical or all horizontal) during the critical period, their V1 cells stop responding to other orientations and retune to fire only at the experienced orientation. Later, as adults, these animals show poor visual acuity for detecting edges at novel orientations relative to control animals.
Humans also show signs of this effect. For instance, at least one study has demonstrated that North American Indians reared in traditional teepee-shaped dwellings have better visual acuity for oblique or diagonal angles when compared to people raised in “carpentered” environments (that is, houses and apartments) that are filled predominantly with vertical and horizontal orientations. We begin with a set of preexisting preferences for visual scenes, but there is considerable latitude in how early exposure can retune brain cells that support vision and the connections between them.
At birth, humans have a visual acuity of about 20/600, which is roughly thirty times poorer than that of normal adults. The attraction babies have for high-contrast objects and faces provides just enough stimulation for growing visual cortex cells to continue to mature at a reasonable pace. In the first three months, visual acuity steadily increases. Infants become more attracted to even finer gradations of contrast and are particularly fond of contrasting patterns that have pronounced lateral symmetry. It is not until they are able to experience these more nuanced visual patterns that a second period of tremendous growth and synaptic pruning kicks into gear in V1 and higher cortical areas that are responsible for so-called hyperacuity.
Theoretical calculations of visual acuity based on the actual physical size and density of photoreceptors suggest that we should not be able to see as well as we do. Higher cortical areas, however, have a rich bag of tricks for solving visual problems like completing object patterns from partial or obscured inputs. These mechanisms radically improve visual acuity beyond the expected theoretical limits.
Interestingly, the development of hyperacuity depends more on experience in the second six months than the first. This is because humans must first experience simple visual patterns that promote the development of subcortical areas and V1. Once this circuitry matures in the first six months, babies become increasingly attracted to richer patterns of visual stimulation, such as scenes with more subtle contrasts and strong lateral symmetry. These experiences are, in turn, needed for the stimulation and normal maturation of higher cortical regions during their growth spurt in the second six months that support hyperacuity. This developmental pattern is so lawful that pediatricians often use visual tests of hyperacuity as an indicator of normal brain growth at twelve months.
This sequence is yet another example of how the brain bootstraps its own development. When workers construct a suspension bridge, they first extend a thin cable across the body of water. They then use that small cable to hoist a larger one, followed by a third, and so forth. Before long they have created a thick cable of intertwined wires that can support a Friday afternoon rush hour. The brain does something quite similar in its development. Just enough physiological maturation occurs to facilitate functional capabilities that, in turn, permit stimulation for the next phase of development. We see this modus operandi again and again in brain development—across every sensory system.
Bootstrapping as a mechanism for individual development is somewhat analogous to ratchet effects in phylogenetic development. In an earlier example, we saw how the evolution of visual motion processing paved the way for early primates’ ability to forage among the fine branches that comprise the rain-forest canopy. Living in this new ecological niche created selection pressures for being able to clearly identify pigmented fruits and predators against a background of green forest. Compelling evidence from comparative studies demonstrates that trichromacy evolved in primates as a response to such selection pressures.
Bootstrapping is a highly efficient way to drive development, given that only twenty-five thousand or so genes are available to code the staggering amount of information required to grow a human. Clearly, not every step in development is written into the genes. Bootstrapping as a general developmental mechanism requires only information about the starting conditions and a means to encourage organisms to seek out the appropriate forms of experience necessary to stimulate further growth into the next stage.
A legacy of this process is that adult humans are strangely drawn to the same forms of visual stimulation that supported their brain development as newborns, babies, and toddlers. I’m not implying that the average adult finds pleasure in spending hour upon hour watching a ceiling fan or Big Bird look for Ernie. We are, however, attracted to the same general patterns of stimuli that bootstrapping required for normal brain development. This, of course, is manifested in a diverse number of ways in adults, some of which are undoubtedly flavored by cultural conventions.
Humans across different cultures are attracted to bright primary colors, scenes with pronounced lateral symmetry, and high-contrast objects. These biological preferences play an important role in shaping what we find attractive. The advertising industry has been aware of these biases in our sensory processing for decades and designs product packaging that taps into these preferences. Often we are not even consciously aware of why we are attracted to a product. Keep in mind, however, that conscious awareness is not something evolution cares about. My son Kai does not need to know that he is attracted to faces and objects with bright colors for these forms of stimulation to benefit his visual development.
In the parlance of evolutionary biology, such innate preferences are sometimes called receiver biases. The term comes from an analogy in signal theory and has been applied to studies of animal communication. Modern communication devices can be built in two very different ways. One method, for example, is to develop a general device that uses a wide range of electromagnetic frequencies—much like the AM/FM radio. A problem with this general approach, however, is that the sender and receiver both have to be on the same frequency for communication to occur. There is a certain amount of luck in this happening, since the probability of the receiver and sender naturally sharing a channel decreases with an increasing number of frequencies.
An alternative approach is to build a receiver that is pretuned to specific frequencies. In this scenario, a radio will only pick up one or two selected frequencies, but with little potential for interference from other signals. Broadcasters who want to reach listeners with these radios will, of course, have to use devices that are specifically tuned to send signals at these frequencies. Only broadcasters who can send signals at the selected frequencies and with sufficient intensity will be successful in their communication attempts.
There are abundant examples where nature has followed suit, adopting either of the two strategies in intraspecies communication. The second approach is intriguing for our present discussion because receiver biases can arise from any number of sources. There are compelling examples where a physical feature that is found to be attractive by one sex and the preference for that feature by the opposite sex did not coevolve via genetic correlations. That is, a mating preference for a feature can sometimes emerge from developmental constraints rather than adaptations related to reproductive success.
It’s important to point out, however, that most receiver biases are probably not associated with pleasure. For instance, Bolivian anuran frogs have an auditory system that is tuned to best hear vocalizations such as mating calls at 800 hertz. This particular receiver bias is the result of the physical properties of mature hair cells embedded in a frog’s cochlea. In this case, stimulation of the frog’s auditory system at 800 hertz is not a developmental requirement for normal brain growth and maturation. Indeed, what seems to work best in the anuran species during development is broad-spectrum stimulation across many different frequencies. Hence this particular receiver bias is not a bootstrapping mechanism. Rather, stimulation of the frog’s auditory system at this frequency is thought to be related more to detecting potential mates that are, in turn, tuned to vocalizing at this particular frequency.
The types of receiver biases we have been focused on thus far are those linked to the activation of key pleasure circuits in the developing brain. These pleasure-related receiver biases persist into adulthood, when they may play a critical role in driving sexual selection.
As we have seen, sexual selection is often used as a theoretical framework for understanding mate choice, but it goes far beyond this realm in terms of explaining behavioral phenotypes. Evolutionary biologists have never had an easy time accounting for the appearance of so many uniquely human functions such as art, music, humor, and dance from a survival-of-the-fittest perspective. This perspective neglects the obvious fact that our ancestors had to both survive and reproduce for their genes to make their way through the ages. Reproduction is itself a competitive act. Individuals must identify what traits are attractive to the opposite sex and do everything possible to amplify their appearance and to hide flaws that might reveal potential weakness. The pioneering biologist Amotz Zahavi argued that organisms are naturally attracted to very specific anatomical features that are used as fitness indicators. A classic example of this is the peacock’s elaborate plume. Peahens tend to be attracted to, and prefer mating with, the most highly ornamented peacocks. The question is why. One argument, discussed earlier, is that highly ornamented peacocks are simply more conspicuous to peahens and therefore better at attracting their attention, but careful field studies fail to support this theoretical position. Experiments that remove the impact of this variable by regulating the amount of time different peacocks are exposed to the same peahen still result in a mating preference given to the peacock with the most elaborate plume.
Zahavi and other biologists have taken the position that an elaborate plume signifies the biological fitness of a peacock, since it provides evidence that the animal is strong enough to survive even though the exaggerated plume diverts precious energy resources toward its growth and maintenance. It thus serves as an energy resource handicap that must be overcome. Others have suggested that besides the energy requirement for growth, maintaining an elaborate plume is like wearing a target, since being conspicuous to potential mates also means being conspicuous to predators. In this perspective, the plume indicates fitness because it represents a survival handicap in the context of predation.
The animal kingdom abounds in examples of sexual dimorphisms such as this, a trait that becomes exaggerated in one sex—often the male—and used to attract potential mates. But why do females of the same species develop preferences for these traits in the first place? Humans are clearly not exempt from this, although the exaggerated traits occur prominently in both sexes. From breast implants to the hair weave inspired by a midlife crisis, we spend vast amounts of time and money pursuing activities designed to improve our attractiveness to the opposite sex. However, many of these “improvements” have no obvious impact on our overall health or survival. Sexual selection theory has been used as a theoretical framework for explaining this human propensity for self-adornment. Although there is notable cultural and individual variability in descriptions of physical traits that are preferred in a potential mate, there is also surprising agreement across the globe on what makes a person physically attractive. Given that there is some consensus as to what makes a person attractive, how did these biases toward specific physical features emerge?
The same forms of visual stimuli that play a role in developmental bootstrapping reemerge in the adult as pleasure-inducing receiver biases. Babies who have an innate fondness for faces and strong symmetry grow up to be adults whose eyes linger longest on potential mates with maximally symmetric faces. Small receiver biases that are present at birth can be magnified by ongoing sexual selection. Say, for example, that most females prefer tall men. Even a small bias toward taller-than-average men will have a significant effect on the evolution of male height in generations to come. Given this bias, tall men will have a higher probability of fathering offspring than shorter-than-average men. Assuming both male height and the preference for tall men in women are genetically correlated, their offspring should be taller and, most important, prefer taller mates. Hence, the process of sexual selection can take small receiver biases and shape them into widely accepted notions that define physical attractiveness. In the next chapter, we will see how pleasure-inducing receiver biases have become important detectors of fitness during mate selection and hence serve as perhaps the most powerful driving force of sexual selection. This fundamental mechanism has had a profound impact on many facets of our everyday lives.