
Chapter 8. THAT’S LIFE: WHY YOU AND YOUR iPOD MUST DIE

Seth Cook is the oldest living American with a particularly rare genetic disorder. He’s lost all his hair. His skin is covered in wrinkles. His arteries are hardened. His joints hurt from arthritis. He takes an aspirin and a blood thinner every day.

He is twelve years old.

Seth has Hutchinson-Gilford progeria syndrome, often just called progeria. Progeria is very rare—thought to occur in just 1 of every 4 to 8 million births. It’s also very unfair; the word comes from the Greek for prematurely old, and that’s the difficult fate in store for people born with it. Children who have progeria age at up to ten times the speed of people without it. By the time a baby who has progeria is about a year and a half old, his or her skin starts to wrinkle and hair starts to fall out. Cardiovascular problems, like hardening of the arteries, and degenerative diseases, like arthritis, soon follow. Most people who have progeria die in their teens of a heart attack or a stroke; nobody is known to have lived past thirty.

Hutchinson-Gilford progeria isn’t the only disease that causes accelerated aging—it’s just the most heartbreaking, because it’s the fastest, and it starts at birth. Another aging disorder, Werner syndrome, doesn’t manifest itself until someone carrying the mutation that causes it reaches puberty; it’s sometimes called adult-onset progeria. After puberty, rapid aging sets in, and people who have Werner syndrome usually die of age-related disease by their early fifties. Werner syndrome, although more common than Hutchinson-Gilford progeria, is still very rare, affecting just one in a million.

Because these rapid-aging diseases are so uncommon, they haven’t been the focus of much research (and they’re called orphan diseases for that reason). But that’s starting to change, as scientists have realized that they hold clues about the normal aging process. In April 2003, researchers announced that they had isolated the genetic mutation that causes progeria. The mutation occurs in a gene that is responsible for the production of a protein called lamin A. Normally, lamin A provides structural support for the nuclear membrane, the package that houses your genes at the core of every cell. Lamin A is like the rods that hold up a tent—the nuclear membrane is organized around it and supported by it. In people who have progeria, lamin A is defective and cells deteriorate much more rapidly.

In 2006, a different team of researchers established a link between lamin A deterioration and normal human aging. Tom Misteli and Paola Scaffidi, researchers at the National Institutes of Health, reported in Science that the cells of normal elderly people show the same kinds of defects that are found in the cells of people who have progeria. That’s very significant—it’s the first confirmation that the accelerated aging that characterizes progeria is related to normal human aging on a genetic level.

The implications are far-reaching. More or less since Darwin described adaptation, natural selection, and evolution, scientists have been debating where aging fits into the picture. Is it just wear and tear, the way your favorite shirt picks up little stains and rips and marks over the years, eventually fraying and wearing out? Or is it the product of evolution? In other words, is aging accidental or intentional?

Progeria and the other accelerated-aging diseases suggest that aging is preprogrammed, that it’s part of the design. Think about it—if a single genetic error can trigger accelerated aging in a baby or an adolescent, then aging can’t only be caused by a lifetime of wear and tear. The very existence of the progeria gene demonstrates that there could be genetic controls for aging. That, of course, raises a question you’ve no doubt come to expect. Are we programmed to die?

LEONARD HAYFLICK IS one of the fathers of modern aging research. During the 1960s he discovered that (with one special exception) cells only divide a fixed number of times before they up and quit. This limit on cellular reproduction is appropriately called the Hayflick limit; in humans the limit is around fifty-two to sixty.

The Hayflick limit is related to the loss of a genetic buffer called telomeres at the ends of chromosomes. Every time a cell reproduces it loses a little bit of DNA. In order to prevent that information loss from making a difference, your chromosomes have what amounts to extra information at their tips; those bits of information are telomeres.

Imagine you have a manuscript and need to make fifty copies but Kinko’s has just thrown you a curveball. Instead of charging you money, they’re just going to take one page off the end of your manuscript after every copy. That’s a problem—your manuscript is two hundred pages long; if you give them a page after every copy, the last copy is only going to have one hundred fifty pages and whoever gets it is going to miss a quarter of the story. So, being a highly evolved organism with a gift for clever solutions, you add fifty blank pages to the end of your manuscript and present Kinko’s with a two-hundred-fifty-page manuscript. Now, all fifty copies will have the complete story; you won’t lose a page of precious information until you decide to make copy fifty-one. Telomeres are like blank pages; as cells reproduce, telomeres are shortened, and the truly valuable DNA is protected. But once a cell replicates between fifty and sixty times, the telomeres are essentially gone and the good stuff is in jeopardy.
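If it helps to see the bookkeeping laid out, here’s a minimal sketch of the copy-shop analogy in Python—a toy model whose numbers come straight from the analogy, not from real cell biology:

```python
# Toy model of the Hayflick limit: a "genome" protected by a disposable telomere buffer.
# The numbers mirror the manuscript analogy (200 story pages, 50 blank pages), not real cells.
STORY_PAGES = 200     # the information that actually matters
BLANK_PAGES = 50      # the telomere buffer tacked onto the end

def copy_manuscript(copies):
    """Each copy trims one page off the end; report whether the story survived intact."""
    buffer_left = BLANK_PAGES
    for n in range(1, copies + 1):
        if buffer_left > 0:
            buffer_left -= 1          # only a blank page is sacrificed
        else:
            return f"copy {n}: the buffer is gone, story pages start to be lost"
    return f"all {copies} copies made with the full {STORY_PAGES}-page story intact"

print(copy_manuscript(50))   # the buffer is just used up; every copy is complete
print(copy_manuscript(60))   # past the limit; essential pages begin to disappear
```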

Now, why would we evolve a limit against cellular reproduction?

In a word? Cancer.

IF THERE’S A health-related word more closely associated with fear and mortality than cancer, I don’t know what it is. It’s so widely assumed to be a likely death sentence that, in millions of families, it’s barely spoken out loud; if it’s mentioned at all, it’s in a kind of stage whisper.

As you no doubt know, cancer isn’t a specific disease; it’s a family of diseases characterized by cell growth gone haywire. And the truth is, some cancers are highly treatable—many of them have higher survival rates and better chances for complete recovery than other common health problems, such as heart attacks and strokes.

As we’ve discussed, your body has multiple lines of cancer defense. There are specific genes responsible for tumor suppression. There are genes responsible for creating specialized cancer hunters programmed to seek and destroy cancer cells. There are genes responsible for repairing the genes that fight cancer. Cells even have a mechanism to commit a kind of hara-kiri. Apoptosis, or programmed cell death, occurs when a cell detects that it has become infected or damaged—or when other cells detect a problem, and “convince” the dangerous cell to kill itself. And on top of that there’s the Hayflick limit.

The Hayflick limit is a potent check against cancer—if everything goes wrong in a cell and it becomes cancerous, the Hayflick limit still prevents its unchecked reproduction, essentially shutting down tumor growth before it really gets going. If a cell can only reproduce a specific number of times before it runs out of steam, it can’t reproduce uncontrollably, right?

Right—as far as it goes. The problem is, cancer cells are sneaky little villains with a few tricks up their cellular sleeves. One of those is an enzyme called telomerase. Remember that the Hayflick limit works through telomeres—when they run out, cells die or lose the ability to reproduce. So what does telomerase do? It lengthens those telomeres at the ends of chromosomes. In normal cells telomerase is usually inactive, so telomeres shorten a little with every division. But cancer cells can sometimes kick telomerase into high gear, replenishing their telomeres as fast as they wear down. When that happens, there’s essentially no loss of genetic information, because the telomere buffer never runs out. The expiration date programmed into cells is canceled, and the cell can reproduce forever.
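To see why active telomerase cancels the limit, here’s the same toy model with a rebuild step added—again just an illustrative sketch with invented numbers, not a claim about real enzyme kinetics:

```python
# Toy comparison: with telomerase switched on, the buffer is rebuilt as fast as it wears down,
# so the division counter never reaches the point where essential DNA would erode.
def divisions_until_loss(buffer_size=50, telomerase_active=False, max_divisions=10_000):
    """Return the division at which real DNA would start to be lost, or None if it never happens."""
    buffer_left = buffer_size
    for n in range(1, max_divisions + 1):
        if telomerase_active:
            buffer_left += 1           # telomerase re-extends the telomere...
        if buffer_left == 0:
            return n                   # ...so this branch is never reached
        buffer_left -= 1               # each division still shaves off a bit of telomere
    return None                        # no loss within the simulated window

print(divisions_until_loss(telomerase_active=False))  # ~50: the Hayflick limit kicks in
print(divisions_until_loss(telomerase_active=True))   # None: effectively immortal
```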

When cancer cells are successful, it’s usually with the help of telomerase. More than 90 percent of the cells in cancerous human tumors use telomerase. That’s how they become tumors—without telomerase, cancer cells would die out after dividing fifty to sixty times, or perhaps a little longer. With telomerase helping them to short-circuit the Hayflick limit, they can multiply uncontrollably, wreaking the biological havoc we’re all too familiar with. On top of all that, successful cancer cells—the cells we most want to die on their own—have found a way around apoptosis, or programmed cell death. They ignore the suicide command that noncancerous cells obey when they become infected or damaged. In biological terms, that makes cancer cells “immortal”—they can divide forever. Scientists are currently working to perfect a test that detects increased telomerase activity; that could give doctors a powerful new tool to help reveal hidden cancer cells.

The other exception to the Hayflick limit, by the way, is those current stars of political, medical, and ethical debate—stem cells. Stem cells are “undifferentiated” cells—in other words, they can give rise to many different kinds of cells. A B-cell that makes your antibodies can only produce another B-cell, and a skin cell can only produce another skin cell. Stem cells can produce many types of cells—the mother of all stem cells, of course, is the single cell that started you off in your mother. A zygote (the union of a sperm and an egg) obviously has to be able to produce every kind of cell; otherwise you’d still be a zygote. Stem cells are not subject to the Hayflick limit—they’re also immortal. They pull off this feat by using telomerase to rebuild their telomeres the same way that some cancer cells do. You can see why scientists believe stem cells have such potential to cure disease and alleviate suffering—they have the potential to become anything and they never run out of steam.

Many scientists believe cancer prevention is the “reason” cells have evolved with a limit on the number of times they can reproduce. The flip side to the Hayflick limit, of course—compromise, compromise—is aging. Once cells hit the limit, future reproductions don’t really work and things start to break down.

CANCER PROTECTION AND the Hayflick limit aren’t the only evolutionary explanations for the aging mechanism. For one thing, they don’t explain why different animals—even closely related ones—have such different life expectancies.

It’s interesting to note that, in mammals, with a few exceptions, there’s a close correlation between size and life expectancy. The bigger you are, the longer you live. (That doesn’t mean you should head to Dairy Queen—it’s the natural size of the species that matters, not the size of the individual; the bigger the species, the longer its average member lives.) The longer life expectancy of larger mammals is at least partially due to their superior ability to repair DNA. But that explains, at least in part, how we live longer; it doesn’t explain why we big creatures developed those superior repair mechanisms.

One theory suggests that there is a direct connection between shorter life expectancy and greater external threats. I’m not just saying that the risk of being eaten reduces an animal’s life expectancy, although it does, of course. Essentially, animals with a greater risk of being eaten evolve to live shorter lives—even if they aren’t eaten. Here’s how—if a species faces significant environmental threats and predators, it’s under greater evolutionary pressure to reproduce at an early age, so it evolves to reach adulthood faster. (A shorter life span also means a shorter length of time between generations, which allows a species to evolve faster—which is important for species that face a lot of environmental threats; that’s one of the things that helps rodents develop resistance to poisons relatively quickly.) At the same time, there’s never any real evolutionary pressure to evolve mechanisms to repair DNA errors that occur over time, because most individuals in the species don’t live long enough to experience those errors. You wouldn’t buy an extended warranty on an iPod if you were only going to keep it for a week. On the flip side, a species that is more dominant in its environment, and that can continue to reproduce for most of its life, will gain an advantage in repairing accumulated DNA errors. If it lives longer, it can reproduce more.

I believe that programmed aging confers an evolutionary benefit on the species, not the individual. According to this thinking, aging acts like a biological version of planned obsolescence. Planned obsolescence is the often denied but never disproved notion that manufacturers of everything from refrigerators to cars build a shelf life into their products, essentially guaranteeing that they wear out after a limited number of years. This does two things—one arguably to the consumer’s benefit, the other certainly to the manufacturer’s benefit. First, it makes way for new, improved versions. Second, it means you need to buy a new fridge. Some people accused Apple of employing planned obsolescence in the development of its superpopular iPods a few years ago—manufacturing them with batteries that only lasted for about eighteen months and couldn’t be replaced, forcing consumers to buy a new model when their battery died. (Apple now has a battery replacement program, although it’s tantamount to an iPod replacement program—for a small fee, they send you a new or refurbished equivalent to your now-powerless purchase.)

Biogenic obsolescence—that is to say, aging—might accomplish two similar ends. First, by clearing out older models, aging makes room for new ones—and that room is exactly what allows for change, for evolution. Second, aging can protect the group by eliminating individuals that have become laden with parasites, preventing them from infecting the next generation. Sex and reproduction, in turn, are the way a species gets upgraded.

THE PROSPECT OF programmed aging opens the door to all kinds of exciting possibilities. Already, scientists are exploring benefits that may be found by turning aging mechanisms off—and by turning them back on. The possibility of short-circuiting telomerase—the enzyme that cancer cells use to make themselves immortal—may lead to powerful new weapons against cancer.

A year before they established that link, the researchers who first connected progeria-related aging to normal aging also demonstrated that it is possible to reverse the cellular damage caused by progeria. They applied a “molecular Band-Aid” to progeria cells in their lab and eliminated the defective lamin A. After a week, more than 90 percent of the cells they treated looked normal. They haven’t been able to reverse progeria in people yet, but every new insight is a step in the right direction. The combined implication of the two studies isn’t exactly a map to Ponce de León’s fabled fountain of youth, but it’s certainly intriguing. Cells in aging humans are programmed to break down in a similar fashion to progeria cells. And scientists have been able to reverse those breakdowns in the lab. The operative words in the last two sentences?

Aging. And reverse. Now that’s something to look forward to.

Speaking of things to look forward to, this book is all about life. About why we are who we are and why we work the way we do. And there’s one place where all of that really comes together—evolution’s ultimate laboratory—the womb.

CONGRATULATIONS! YOU’RE HAVING a baby!

Over the next nine months, millions of years of interaction with disease, parasites, plagues, ice ages, heat waves, and countless other evolutionary pressures—not to mention a little romance—will come together in a stunningly complex interaction of genetic information, cellular reproduction, methyl marking, and the commingling of germ lines to produce your little peanut.

You and your partner are doing the evolution dance, contributing eons of genetic history to the next generation. It’s an amazing, uplifting, deeply moving process. Which is why you should be forgiven when you go to the hospital to have your baby and feel a little put off by the surroundings—just about everybody in the place is sick, trying to ward off disease or death, and you’re there to bring a little life into the world.

You look at the directory to find out where to go and you read something like

CARDIOLOGY

ENDOCRINOLOGY

GASTROENTEROLOGY

GENERAL SURGERY

You skip ahead and read

HEMATOLOGY

INFECTIOUS DISEASES

INTENSIVE CARE UNIT (ICU)

LABORATORY MEDICINE AND PATHOLOGY

And then, finally, there it is—Obstetrics and Gynecology—sandwiched right between those two heartwarmers Neurosurgery and Psychiatry.

Soon you will be hustled upstairs, hurried into a hospital gown, and hooked up to an IV; if you’ve ever been to a hospital before because you were actually sick—instead of pregnant—it’s all probably feeling a bit too familiar right about now. You’re having a baby—couldn’t they make it a little more fun?

Of course, all of the medical drama is for very good reason; in 2000 the United Nations estimated that more than half a million mothers died of complications resulting from pregnancy—but less than 1 percent of those deaths were in the developed world. So there’s no question that modern medicine has helped to remove a great portion of the risk from childbirth. But the approach tends to be disease-oriented—treating pregnancy as a risk to be managed rather than an evolutionary miracle that just needs to be helped along.

Perhaps our ability to make pregnancy and childbirth even safer and more comfortable would benefit from asking the same questions we’re starting to ask about our relationship to disease. Why has evolution led humans to give birth the way we do?

CHILDBIRTH IN HUMANS is riskier and longer, and certainly seems more painful, than it is in any of our genetic cousins. Ultimately, that can be traced to two things—crossword puzzles and marching bands. Well, maybe not crossword puzzles and marching bands per se, but it is because of the two characteristically human traits that allow us to do them—big brains and bipedalism. When it comes to birth, those two traits are a tricky combination.

The skeletal adaptations that allow us to walk on two feet changed the structure of the human pelvis—unlike the pelvis of monkeys, apes, and chimps, the human pelvis regularly has to bear the weight of your entire upper body. (Chimps do walk on two legs from time to time, but usually only to carry food or wade across rivers and streams.) The evolution toward bipedalism included selection of a specialized pelvis that makes walking upright possible—which in true evolutionary style came with a compromise. According to Wenda Trevathan, a biological anthropologist who has spent much of her career studying the evolution of birth, the human pelvis is “twisted” in the middle; it starts off pretty wide, and is broad from side to side at the birth canal’s “entrance,” but gets narrower as it goes on, ending in an “exit” that presents a pretty tight squeeze for an infant’s skull.

Millions of years after we learned to walk on two feet, we started evolving bigger brains. Bigger brains need bigger skulls. And eventually (after a few million years, that is) human women with small birth canals were giving birth to human babies with big skulls. That, by the way, is one of the reasons why a newborn’s head is so vulnerable—the skull is actually composed of separate plates connected by tissue called sutures that give it the flexibility to squeeze through the birth canal. The plates don’t start fusing together until the baby is about twelve to eighteen months old, and they don’t become fully fused until adulthood (much later than in chimps).

The big brain is so difficult to get out of the tight birth canal that most of human brain development takes place after birth. When monkeys are born, their brains are more than 65 percent of the size they’ll reach when fully grown. But baby human brains are only 25 percent of their adult size—that’s one reason babies are so helpless for the first three months; their brains are in a state of rapid development. Many doctors actually call it the fourth trimester.

On top of all that, the human birth canal isn’t one constant shape, so the fetus has to twist its way through. When it does emerge, it’s usually facing away from its mother because of all that twisting, adding one more difficulty to human birth. Chimps and monkeys come out facing their mothers. Imagine a mother chimp squatting during delivery and the baby chimp emerging from the birth canal facing upward toward its mother and you’ve got a pretty good picture. The mother chimp can reach down, cradle the infant’s head from behind its neck, and help with its delivery. In humans, the mother can’t do that (even if she is squatting) because the baby is facing away—if she tries to assist the baby she risks bending its neck or spine the wrong way and causing serious injury. Trevathan believes this “triple threat” of big brains, a pelvis designed for walking, and backward-facing babies led to the nearly universal human tradition of helping one another with delivery. Every other primate generally goes it alone when it comes time to give birth.

If you pause and think about this for a moment in light of everything we know about evolutionary pressure, it’s a little confusing. Why would evolution favor adaptations that made reproduction more dangerous? Well, it wouldn’t—unless it made survival so much more likely that it outweighed the increased reproductive risk. For example, if an adaptation allowed twice as many babies to reach adulthood and get pregnant, it might be worth the risk that a small percentage of them wouldn’t survive childbirth.

It’s pretty clear that big brains are a big advantage. But what about walking upright? Why did we evolve in that direction? Why aren’t we a bunch of smart hominids crawling to the grocery store on all fours or swinging to the library through the trees instead of strolling along a sidewalk?

Something clearly sent our human ancestors off in a different evolutionary direction from the one followed by the ancestors of the modern chimp or ape. Whatever it was, it ultimately prompted a cascade of evolutionary dominoes, with one adaptation leading to another. As a writer named Elaine Morgan (whom we’ll hear more from shortly) put it, “Our ancestors entered the Pliocene [a geological epoch roughly 2 to 5 million years ago] as hairy quadrupeds with no language and left it hairless, upright and discussing what kinds of bananas they liked best.” And that’s not all. We also became fatter, developed prominent noses with nostrils pointing downward, and lost much of our sense of smell.

So what happened?

THE CONVENTIONAL WISDOM about our shift from all fours to two feet is the “savanna hypothesis.” The savanna theory holds that our apelike ancestors abandoned the dark African forests and moved into the great grassy plains, perhaps because of climate changes that led to massive environmental change. In the forest, food was plentiful—fruits, nuts, and leaves could be found in abundance. But out in the savanna, life was tougher, so the theory goes, and our ancestors had to find new ways to get food. Males began to hunt bravely for meat among the herds of grazing animals. Some combination of these new circumstances—the need to scan the horizon for food or predators, the need to cover long distances between food and water—led the savanna hominid to begin walking upright. Other adaptations were similarly related to the new environment—hunting required tools and cooperation; smarter prehumans made better tools and better teammates, so they survived longer and attracted more mates, and the process selected for bigger brains. The savanna was hot, and all those brave males chasing animals tended to overheat, so they lost their hair to keep them cool.

That’s the conventional theory, anyway.

But Elaine Morgan isn’t a conventionalist, and she isn’t buying it. Morgan is a prolific Welsh writer who originally became interested in evolution more than thirty years ago. As she read books describing the savanna theory, she was immediately skeptical. For starters, she couldn’t understand why evolution—so concerned with reproduction—would be driven only by the requirements of the male. “The whole thing was very focused on the male,” she recalls. “Their premise was that the important thing was the evolution of man-the-hunter. I began to think: ‘They must have this wrong.’” Shouldn’t evolution be at least as influenced by the needs of women and children?

In a word?

Yes.

BY THE TIME Morgan was questioning it, the savanna hypothesis was well entrenched in the scientific community. And as with most well-entrenched theories, those who challenged it were generally ignored or ridiculed. But that wasn’t the kind of thing to stop Elaine Morgan. So, certain that the savanna theory’s men-only approach to evolution didn’t make sense, Morgan set out to write a book exposing its flaws. It wasn’t intended to be a scientific book; rather, she attacked the savanna theory with that ancient and highly effective debunker of all things highfalutin—common sense.

The Descent of Woman was published in 1972, and it roundly savaged the idea that male behavior was the driving force in human evolution. Humans started walking on two legs so we could cover distances between water and food faster than we could on four legs? Yeah, right—ever race a cheetah? Even some of the slower quadrupeds can outrun us. We lost our hair because the males got too hot chasing antelope? So why do females have even less hair than males? And what about all those other hairless animals running around the savanna? Oh, right, there aren’t any. Every hairless mammal is aquatic or at least plays in the mud—think of hippos, elephants, and the African warthog. But there aren’t any hairless primates.

In researching her book, Morgan came across the work of a marine biologist named Alister Hardy. In 1960, Hardy offered a different theory to explain our evolutionary divergence from other primates. He suggested that a band of woodland apes became isolated on a large island around what is now Ethiopia and adapted to the water, regularly wading, swimming, and foraging for food in lagoons. Hardy first got the idea nearly thirty years earlier when reading a book by Professor Wood Jones, called Man’s Place among the Mammals, which asked why humans were the only land mammals with fat attached to their skin. Pinch your dog or cat and you’ll feel the difference when you grab a fistful of nothing but skin. As a marine biologist, Hardy made an immediate connection to aquatic mammals—like hippos, sea lions, and whales—all of which have fat directly attached to the skin. He figured there could be only one reason for humans to share a trait otherwise found only in aquatic or semiaquatic mammals—an aquatic or semiaquatic past.

An aquatic ape.

Nobody took Hardy’s theory seriously, not even seriously enough to challenge it. Until Elaine Morgan came along. And she took it seriously enough to write five books about it—so far.

Morgan builds a compelling case. Here’s the essence of the aquatic ape hypothesis, as it’s now known. For a long stretch of time, our prehuman ancestors spent time in and around the water. They caught fish and learned to hold their breath for long periods while diving for food. Their ability to survive on land and in water gave them twice as many options to avoid predators as their land-bound cousins—chased by a leopard, the semiaquatic ape could dive into the water; chased by a crocodile, it could run into the forest. Apes that spent time in the water would naturally evolve toward bipedalism—standing upright allowed them to venture into deeper water and still breathe, and the water helped to support their upper bodies, making it easier to balance on two feet.

The aquatic ape theory explained why, like many other aquatic mammals, we lost our fur—to become more streamlined in the water. It explained the development of our prominent nose and downward-facing nostrils, which allowed us to dive. The only other primate with a prominent nose (that we know of) is the aptly named proboscis monkey—which just so happens to be semiaquatic itself and can also be seen wading in the water on two legs or going for a swim.

Finally, the aquatic theory may explain why our fat is attached to our skin. As in other aquatic mammals, such as dolphins and seals, fat attached to the skin streamlines the body, letting us move through the water using less energy. Human babies are also born with significantly more fat than baby chimps or monkeys. Providing all that fat is an additional burden to the mother, so there’s got to be a good reason for it. Most scientists agree it helps to keep the baby warm. (Remember brown fat? The special heat-generating fat that is usually only found in human newborns?) Elaine Morgan thinks that besides keeping babies warm, the extra fat also helps to keep them afloat. Fat is less dense than muscle, so a higher percentage of body fat makes people more buoyant.

The debate over the semiaquatic ape is far from over. Most mainstream anthropologists certainly still subscribe to the savanna hypothesis. And the semiaquatic versus savanna smackdown tends to provoke emotion on both sides that makes it harder to resolve. One of the things that get lost in the scientific shouting is just what the aquatic ape hypothesis actually holds. It doesn’t suggest that there was some prehuman animal that lived mostly underwater and only surfaced periodically for air like some kind of primate whale. A British computer programmer named Algis Kuliukas read Morgan’s work after his wife gave birth in a birthing tub. He was shocked to find that many of the scholars who railed against Morgan’s theory freely acknowledged the possibility that human ancestors spent time in the water and that their time in the water could have influenced evolution. If they acknowledged that, what was all the fuss about?

Kuliukas realized a good deal of the controversy over the theory was related to a lack of understanding over just what the theory actually held. He wrote:

[Some critics]…never really “got” what the theory was. They think they have—but they’re just wrong. They think it’s suggesting that humans went through some “phase” of almost becoming mermaids or something and they reject it as nonsense on that basis.

So Kuliukas decided to try and add a little clarity to the conversation by proposing a simple summation of the aquatic ape hypothesis:

That water has acted as an agent of selection in the evolution of humans more than it has in the evolution of our ape cousins. And that, as a result, many of the major physical differences between humans and the other apes are best explained as adaptations to moving (e.g. wading, swimming and/or diving) better through various aquatic media and from greater feeding on resources that might be procured from such habitats.

When you put it like that, it starts to sound an awful lot like common sense, don’t you think?

LET’S IMAGINE THAT Alister, Elaine, and Algis are right. Some of our ancestors spent a lot of time in and around the water, so much so that it influenced our evolution. And let’s further assume that it was in this environment that we first learned to stand on our own two feet. That, in turn, allowed for the change to our pelvis and twisted the birth canal, making childbirth more difficult. So that means the first bipedal childbirths might have been of semiaquatic apes in a semiaquatic environment.

That still doesn’t explain the lack of evolutionary pressure against bipedalism and the accompanying reproductive risk caused by the change in pelvic shape. Unless—what if the water changed the equation somehow and made the process easier? If the water made the birthing process easier, then most of the evolutionary pressure would favor the advantages those aquatic apes gained from the shift to two feet.

But if the water made it easier for aquatic apes with small pelvic openings to give birth, then shouldn’t water make it easier for humans with small pelvic openings to give birth?

LEGEND HAS IT that the first medical water birth took place in the early nineteenth century in France. Birth attendants were struggling to help a woman who had been in labor for more than forty-eight hours when one of the midwives suggested a warm bath might help the expectant mother to relax. According to the story, the baby was born shortly after the woman settled into the tub.

A Russian researcher named Igor Tjarkovsky is often credited as the father of modern water birthing. He designed a special tank in the 1960s for water birthing, but the trend didn’t really catch on in the West until the early 1980s or so. The reaction of the medical establishment wasn’t encouraging. In medical journals and the popular press, doctors suggested that water birthing was dangerous, filled with unacceptable risks of infection and drowning. It wasn’t until 1999, when Ruth Gilbert and Pat Tookey of the Institute of Child Health in London published a serious study showing that water birth was at least as safe as conventional methods, that all these predictions of doom and gloom were shown to be largely baseless.

An even more recent Italian study, published in 2005, has confirmed the safety of water birthing—and demonstrated some stunning advantages. The Italian researchers compared 1,600 water births at a single institution over eight years to the conventional births at the same place during the same time.

First of all, there was no increase in infection in either mothers or newborns. In fact, there was apparently additional protection for the newborn against aspiration pneumonia. Babies don’t gasp for air until they feel air on their face; while they’re underwater, the mammalian diving reflex triggers them to hold their breath. (Fetuses do “breathe” in their mother’s womb, but they’re actually sucking in amniotic fluid, not air—a crucial part of their lung development.) When babies are delivered conventionally, they take their first breath as soon as they feel air on their face; sometimes, if they draw in a big breath before the doctor can clean their face, they inhale fecal matter or “birthing residue” that can infect their lungs—aspiration pneumonia. But babies delivered underwater don’t face that risk—until they’re brought to the surface they don’t switch from fetal circulation to regular circulation, so there’s no risk of them inhaling water, and attendants have plenty of time to clean their faces while they’re still underwater, before lifting them out and triggering that first breath.

The study revealed many more benefits. First-time mothers delivering in water had a much shorter first stage of labor. Whether the water relaxed nervous minds or tired muscles or had some other effect, it clearly accelerated the delivery process. Women delivering in water also had a dramatic reduction in the need for episiotomies—the surgical cut routinely performed in hospital births to expand a woman’s vaginal opening in order to prevent complications from tearing. Most of the time they just weren’t necessary—the water simply allowed for more of a stretch.

And perhaps most remarkably, the vast majority of the women who gave birth in water needed no painkillers. Only 5 percent of the women who started their labor in water asked for an epidural—compared to 66 percent of the women who gave birth through conventional means.

The behavior of human newborns in the water offers another tantalizing suggestion that the aquatic ape theory holds water. A child development researcher named Myrtle McGraw documented these surprising abilities back in 1939—not only do very young babies reflexively hold their breath, they also make rhythmic movements that propel them through the water. Dr. McGraw found that this “water-friendly” behavior is instinctual and lasts until babies are about four months old, when the movements become less organized.

Primitive swimming would be an awfully surprising instinct for an animal that evolved into its more or less current form on the hot, dry plains of the African savanna. Especially when that animal is born relatively helpless, with almost no other instinctual behavior besides eating, sleeping, and breathing.

And crying. Can’t forget crying. Of course, if you are having a baby, you won’t.

GIVE YOUR BABY a few years and he or she will trade in the cries for whys. Why do I have to go to bed? Why do you have to go to work? Why can’t I have dessert for breakfast? Why does my stomach hurt? Why?

You tell your toddler to keep the questions coming. That’s what this book is all about. Questions. Two in particular, many times over. The first is, “Why?”

Why do so many Europeans inherit a genetic disorder that fills their organs with iron?

Why do the great majority of people with Type 1 diabetes come from Northern Europe?

Why does malaria want us in bed but the common cold want us at work?

Why do we have so much DNA that doesn’t seem to do anything?

The second question, of course, is, “What can we do with that?”

What can we do with the idea that hemochromatosis protected people from the plague?

What can we do with the possibility that diabetes was an adaptation to the last ice age?

What does it mean for me to understand that malaria wants me laid up and the cold wants me on the move to help them each spread?

And what does it mean that we have all this genetic code that probably came from viruses and sometimes jumps around our genome?

Oh, not much.

Just develop new ways to combat infection by limiting bacterial access to iron and provide better treatment to people whose iron deficiencies are actually natural defenses against highly infectious environments.

Just open up exciting new avenues of research by leading us to explore animals, like the wood frog, that use high blood sugar to survive the cold and manage it successfully.

Just lead us to search for ways to direct the evolution of infectious agents away from virulence and toward harmlessness—instead of waging an antibiotic war that we may never be able to win.

Just…who knows?

If we don’t ask, we’ll never find out.