Curious Folks Ask: 162 Real Answers on Amazing Inventions, Fascinating Products, and Medical Mysteries - Sherry Seethaler (2009)

Chapter 8. Health nuts

Counting calories

How do nutritionists determine the caloric content of a complex dish when it is impossible or impractical to total up the individual ingredients?

One way is to measure the amount of thermal energy produced when the food is completely combusted into carbon dioxide and water in a device called a bomb calorimeter. To avoid overestimating the actual calories available from food, bomb calorimetry measurements of fecal matter must be subtracted from those of the food.

This method is sometimes used for animal feed, but it is not very popular because bomb calorimeters are pricy. Plus, it is kind of a drag for researchers to have to follow people around with baggies to determine the caloric content of what passes through our digestive systems unscathed.

Instead, total energy content is usually determined by adding up the energy contributions of the fat, protein, and carbohydrate in the food. Fat can be extracted from the food with chemical solvents and then quantified. Protein contains 16 percent nitrogen on average, so the amount of protein is estimated by multiplying the food’s total nitrogen content by 6.25. The amount of carbohydrate is usually calculated from the total mass of the food minus the amount of fat, protein, moisture, and minerals.

Back in the late 19th century, W. O. Atwater and other researchers at the U.S. Department of Agriculture did the dirty work to determine the average amount of energy yielded from fat, protein, and carbohydrates after accounting for losses in digestion. The conversion factors are 9 calories (referred to as kilocalories outside the United States) per gram of fat, 4 calories per gram of protein, and 4 calories per gram of carbohydrate.

The 9-4-4 conversion factors can be misleading because different fats, proteins, and carbohydrates have different structures and digestibilities. For example, multiplying the total grams of carbohydrate by 4 overestimates the amount of energy the body can extract from high-fiber foods. For this reason, the mass of insoluble fiber usually is subtracted from the total carbohydrate before the energy calculation is made.
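To make the label arithmetic concrete, here is a minimal sketch, in Python, of the general-factor calculation described above, using made-up nutrient values for a hypothetical food: protein estimated from nitrogen, carbohydrate computed by difference, insoluble fiber subtracted, and the 9-4-4 factors applied.

```python
# Sketch of the Atwater general-factor calculation described above.
# All nutrient masses are hypothetical, in grams per serving.

NITROGEN_TO_PROTEIN = 6.25  # protein is ~16% nitrogen, so protein ~ nitrogen x 6.25

# Atwater general factors, calories (kcal) per gram
CAL_PER_G_FAT = 9
CAL_PER_G_PROTEIN = 4
CAL_PER_G_CARB = 4

def estimate_calories(fat_g, nitrogen_g, total_mass_g, moisture_g, ash_g, insoluble_fiber_g):
    """Estimate calories per serving using the general (9-4-4) conversion factors."""
    protein_g = nitrogen_g * NITROGEN_TO_PROTEIN
    # Carbohydrate "by difference": whatever mass is not fat, protein, water, or minerals (ash)
    carb_g = total_mass_g - fat_g - protein_g - moisture_g - ash_g
    # Insoluble fiber yields little usable energy, so subtract it before applying the factor
    digestible_carb_g = carb_g - insoluble_fiber_g
    return (fat_g * CAL_PER_G_FAT
            + protein_g * CAL_PER_G_PROTEIN
            + digestible_carb_g * CAL_PER_G_CARB)

# Hypothetical 100-gram serving: 5 g fat, 1.6 g nitrogen (~10 g protein),
# 60 g water, 1 g minerals, 3 g insoluble fiber
print(estimate_calories(fat_g=5, nitrogen_g=1.6, total_mass_g=100,
                        moisture_g=60, ash_g=1, insoluble_fiber_g=3))
# -> 169.0 (45 + 40 + 84 kcal)
```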

If the exact ingredients are known, it is possible to use the Atwater specific factor system, which is a series of tables listing the number of calories in the fats, carbohydrates, and proteins in specific foods. For example, the tables reveal that the protein in eggs provides nearly 1 more calorie per gram than the protein in soybeans. On average, the specific factor system yields energy values that are about 5 percent lower than those obtained using the general conversion factors.

Fat carbs, skinny carbs

What’s the big deal about carbohydrates? Why should we be on a low-carb diet? (Or should we?)

You are right to be skeptical. We are at one end of a pendulum swing. Remember not so long ago, when fat was the bad guy and “low-fat” or “no-fat” labels sold food?

Throughout much of the low-fat era, we acted as if, as long as we ate foods that were low in fat, we could eat whatever we wanted, even if the foods were high in calories or sugar. The problem is simple: if you take in more calories than you burn, regardless of whether those calories come from fat or carbohydrates, you gain weight.

Cutting carbohydrates results in weight loss only if total calorie intake is reduced, not if the calories from carbohydrates are simply replaced by calories from fat or protein.

Toward the end of the low-fat era, but before the low-carb craze, we started to develop a more nuanced view of fat, accepting, for example, that fish oils can help reduce heart disease. Now we also need to develop a less black-and-white view of carbohydrates.

Not all carbs are created equal. Many of the carbohydrates in the typical American diet come from highly refined grains. To make them easier to use in cooking, grains are milled to remove their outer coating, leaving the starchy portion of the grain. Unfortunately, the outer portion of the grain is high in fiber, B vitamins, and trace minerals such as copper and zinc.

In addition, foods made from refined grains, like white bread, are digested quickly into glucose, causing blood sugar to spike rapidly. Foods that cause rapid increases in blood sugar are said to have a high glycemic index, and diets that are filled with such foods have been linked to heart disease and diabetes.

On the other hand, carbohydrate sources like whole grain bread, brown rice, and whole grain pasta have a low glycemic index. Not only do they not cause spikes in blood sugar, but they also are rich in vitamins, minerals, and fiber, which can protect against cancer and decrease cholesterol.

Most Americans do not get enough servings of whole grains every day. You can find out more about carbohydrates and the glycemic index at the Harvard School of Public Health website: http://www.hsph.harvard.edu/nutritionsource/carbohydrates.html.

Perhaps in terms of diet we could learn from the French, who are inclined to define healthy eating more in terms of balance, variety, and freshness. Healthy eating is not about demonizing certain foods. Studies repeatedly show that Americans are not getting enough of the foods that are good for us, especially fruits, vegetables, and whole grains. Hopefully, when the pendulum swings again, “balance” will be the mot du jour.

Combo meal

For breakfast, I usually have orange juice, an egg, toast, and coffee. If I ate these items separately over a period of, say, two to three hours, would I get more nutritional benefit from this combination of food, rather than by eating them all in one sitting?

Despite our natural aversion to pairing certain foods (pickle cookies, anyone?) and the popularity of “dissociated diets,” there is limited scientific data on the value of eating foods separately versus in combination.

It is well known that vitamins and minerals can interact synergistically or antagonistically, but most of these interactions are determined by the composition of the overall diet. For example, a magnesium deficiency can interfere with the body’s metabolism of sodium, potassium, calcium, and phosphorus.

Within a single meal, some nutrient interactions occur. One study showed that when calcium was added to a meal, it significantly reduced iron absorption. So calcium fortification of orange juice or milk in your coffee could reduce the amount of iron your body can extract from your toast and egg. On the other hand, the vitamin C in orange juice facilitates iron absorption. In any case, the body seems to adjust over time, because another study showed that iron levels in the blood were unaffected by taking calcium supplements with two meals daily for a few months.

Proponents of dissociated diets would tell you to eat your egg separately from your toast and juice, and not to add butter or whole milk to your breakfast mix, because intake of carbohydrates, fats, and proteins should be spread throughout the day. One of their arguments is that if carbohydrates, which effectively stimulate the release of insulin, are ingested with fats, insulin will cause more fat to be stored.

This argument was debunked in a comparison of two low-calorie diets—one that combined fats and carbohydrates within meals, and another that separated fats and carbohydrates into different meals. Weight loss was the same in the group on the dissociated diet as it was for the group on the mixed diet. Both diets decreased blood glucose, insulin, cholesterol, and blood pressure by a similar amount.

Some people recommend minimizing the intake of liquids with food to avoid diluting the digestive enzymes, but if your goal is to rehydrate, this rule doesn’t hold water. Liquids are better retained when they are consumed with meals.

Of course, individual foods are a combination of many nutrients. For instance, an egg is approximately equal parts protein and fat, with a tad of carbohydrate and a long list of vitamins and minerals. So our digestive systems come equipped to tackle those pickle cookies—if only our palates were up to it.

Chug-a-lug

I try to drink eight glasses of water a day, but instead of drinking one glass every hour or so, I drink three 8-ounce glasses when I wake up in the morning, another three at lunch, and two in the afternoon. Am I getting the necessary hydration that my body needs?

Aliens studying earthlings over the past couple decades would almost certainly have noted a strange phenomenon: we have become as attached to our water bottles as Charlie Brown’s friend Linus is attached to his blanket. (I confess: A large water bottle is within reach as I write this.)

“‘Drink at least eight glasses of water a day.’ Really? Is there scientific evidence for ‘8x8’?” may seem like a surprising title for a journal article (published in the American Journal of Physiology in 2002) considering the ubiquitous nature of this health recommendation. Perhaps even more surprising is the conclusion of its author, Heinz Valtin, a physician and kidney specialist at Dartmouth Medical School.

Although Valtin found some evidence that individuals with very low fluid intake are at greater risk for bladder cancer, colorectal cancer, heart disease, and migraines, the research studies failed to definitively prove the connection. Overall, he concluded that the wide range of claims about the health benefits of drinking eight glasses of water a day is largely unsupported. He also argued that although hot weather and physical activity increase water needs, eight glasses is more than sedentary individuals in a temperate climate typically need.

In terms of getting the maximum hydration from the water consumed, a study found that a few glasses of water ingested over a couple of hours are largely retained, while the same amount of water ingested in 15 minutes is not. Individual variation is large and depends on daily salt intake. The sodium ion concentration in the blood influences a brain sensor called an osmostat, which sends signals that control thirst and water retention.

Water ingested with food is better retained, and contrary to popular belief, it does not slow digestion. Rats allowed to drink during a meal digested their food at the same rate as rats deprived of water during the meal.

Water intake can influence calorie intake. On a 12-week diet, middle-aged and older adults who drank two glasses of water a half hour before meals ate less and lost 5 pounds more than those who did not drink water before meals. Water does not seem to curb appetite in younger adults, but in another study, increasing the water content of the foods themselves decreased calorie intake.

Cocoa craze

Is chocolate good for you? Does it matter what kind of chocolate you eat?

Chocolate has been touted as “the new red wine” for its putative health benefits. The excitement centers on a class of compounds called flavonoids, which are antioxidants. Raw cocoa is one of the richest known sources of flavonoids, with more than 10 percent flavonoids by weight.

Studies indicate that isolated flavonoids, or chocolate that contains flavonoids, may have favorable effects on five risk factors associated with heart disease. First, flavonoids scavenge free radicals, thereby inhibiting the oxidation of low-density lipoprotein (LDL). This process is beneficial because the oxidation of LDL promotes the formation of plaques—deposits—in the arteries. Second, flavonoids inhibit another early event in plaque formation—the adherence of white blood cells to the lining of the arteries.

Third, they increase high-density lipoprotein (HDL), which helps remove cholesterol from the body. Fourth, like aspirin, flavonoids reduce the reactivity of platelets—the smallest structural units in the blood. As a result, platelets become less likely to stick together to form a blood clot. Fifth, flavonoids increase nitric oxide levels, which dilates the blood vessels and reduces blood pressure.

Some evidence suggests that flavonoids protect against cancer and possibly neurodegenerative diseases. They have also been shown to decrease insulin resistance.

However, all the support for the health benefits of flavonoids comes from epidemiological studies and very short-term experimental studies. Although epidemiological studies address the long-term consumption of flavonoids, such studies are problematic because they compare naturally occurring populations, which may differ in more than just their cocoa consumption habits. So far, no long-term experimental studies have addressed the health benefits of chocolate consumption, nor have different types of chocolate been systematically compared.

Not all chocolate is created equal. The concentration of flavonoids depends on the variety of cocoa plant and the growing conditions, but by far the most important factor is how the beans were processed. Most chocolate products on the market today contain little or no flavonoids because flavonoids are destroyed by fermentation, roasting, and treatment with alkali. Experimental studies of chocolate consumption often use specially prepared high-flavonoid chocolate that is not commercially available.

If otherwise processed the same way, dark chocolate contains more flavonoids than milk chocolate. In the United States, the Food and Drug Administration mandates that dark chocolate contain at least 15 percent chocolate liquor from ground or melted cocoa nibs. Milk chocolate must contain at least 10 percent. White chocolate does not contain any cocoa solids and therefore is devoid of flavonoids.

Go the distance

There has been speculation that women will do better than men in ultra marathons of 50 miles, 100 miles, or more, because male marathon runners “hit the wall” after about 20 miles, when they’ve used almost all their glycogen and start burning mostly fat. Women don’t have this problem, because they are better at burning fat. Is there much evidence that women are doing better than men in ultra marathons?

Until 1972, women were officially barred from running marathons in the United States, and not until 1984 were they permitted to run the Olympic marathon. Once women were allowed to compete in the 26.2-mile race, their times improved so rapidly that a 1992 article in the journal Nature predicted that women would catch up to men by 1998. That did not happen, but now less than 12 minutes separate the fastest female and male marathoners.

In the ultra (anything longer than a marathon), women have already caught up to men—at least in one race. In 2002 and 2003 a female runner won one of the world’s most grueling races, the Badwater Ultramarathon. The race begins in the Badwater Basin in Death Valley, California, and continues 135 miles to the base of Mount Whitney, with more than 8,500 feet of elevation gain, in brutal summer heat. Women are often in the top five finishers of Badwater.

Many hypotheses have been proposed to explain why women may have an advantage over men in long-distance running. Psychological factors, such as better resistance to pain or better ability to pace oneself, could play a role. Size may matter; lighter runners are better able to maintain a balance between production and dissipation of thermal energy. Some studies, but not all, have found that women burn fat better than men do during prolonged exercise.

After years of dramatic improvements, female distance runners now seem to be improving at the same rate as men, rather than faster. Also, in the largest comparison to date, men and women who ran equally fast at near-marathon distances performed similarly in 50-mile and 100-mile races. So the idea that women have an advantage over men at long distances is still controversial.

Marathons, even ultras, have become a social and fitness phenomenon. More than 400,000 marathon finishing times were recorded in the United States last year, and women are now 40 percent of marathon finishers. Considering how recently women’s distance running gained societal acceptance, I think we ain’t seen nothin’ yet!

Exercise regimen

I try to run four miles a day on my treadmill, but I don’t have the stamina to do it all at one time. Therefore, I run one mile, and later I run another mile, and so on. Is there a drawback to the benefits I receive by breaking it up this way (such as calories burned)?

Many studies have compared the benefits of continuous versus split exercise sessions because any differences have implications for public health recommendations. This research suggests that the health benefits of split exercise sessions compare favorably with those of continuous exercise sessions.

Regardless of the structure of the workout, the number of calories burned is elevated both during and after exercise. This means that the same old couch potato routine burns more calories after a workout. How long your metabolism remains cranked up after exercise depends on the workout’s duration and intensity. The increase in metabolism following exercise is due to the many processes involved in repairing and getting fuel back into the muscles, and removing lactic acid and other cellular waste products.

Splitting the same workout into multiple sessions does not alter how many calories are burned during the total workout, as long as the total work performed remains the same. But some studies show that the post-exercise calorie burn is greater when the workout session is split. The difference is small—it would amount to about one slice of apple pie guilt-free after a month of daily split workouts instead of continuous workouts. Varying the intensity during a continuous workout, while maintaining the same average intensity, has been shown to have the same effect as splitting the workout.

Split and continuous workouts also seem to have an equally favorable effect on blood pressure, other measures of cardiorespiratory fitness, and cholesterol levels. The only caveat is that nearly all the studies comparing continuous and split workouts have been short-term. Therefore, it is possible that undiscovered long-term differences exist in how different workout schedules affect the risk of heart disease, diabetes, and cancer.

The 2008 Physical Activity Guidelines for Americans from the U.S. Department of Health and Human Services (http://www.health.gov/paguidelines/) recommend that adults get at least 2.5 hours, and preferably at least 5 hours, of moderately intense physical activity each week. Anything that gets you moving is fair game, but a combination of muscle-strengthening and aerobic exercise is ideal. According to the guidelines, episodes of aerobic activity should be at least 10 minutes long.

Pounding the pavement

When walking/jogging through our suburban neighborhood, I stay on the sidewalk for safety. My wife claims that walking/jogging in the street is easier on the joints, because it is a “softer” surface. Assuming that she is not hit by a car first, is there really any difference to an average recreational jogger/walker between concrete and blacktop?

According to conventional wisdom, concrete is a more damaging running surface than asphalt. A runner strikes the ground approximately 1,000 times per mile. Therefore, anything that reduces the impact of each foot strike, even by a small amount, should decrease stress-related injuries.

In May 1997, Runner’s World magazine rated running surfaces from worst (1) to best (10). Here are surfaces and their ratings: snow (2), concrete (2.5), sand (6), asphalt (6), treadmill (6.5), synthetic track (7), cinders (7.5), dirt (8), wood chips (9), and grass (9.5).

Some articles in popular magazines cite clinical studies that claim improper running surface is a leading cause of stress-related injuries. However, studies of injured runners visiting clinics cannot prove that running on a particular surface caused an injury. It is also necessary to determine how many people run on that surface without pulling up lame.

A search turned up four studies in the medical literature that compared injuries in people who ran primarily on concrete versus those who ran primarily on asphalt. In total, more than 4,600 recreational and competitive runners were surveyed and resurveyed over a period of 2 to 12 months. Three of the studies found that running surface made no difference in the number of injuries sustained.

The fourth study found that running surface made no difference in injuries to male runners, but that female runners who ran on concrete more than two-thirds of the time had more injuries than those who ran primarily on asphalt. This was the smallest of the four studies, and it had only 15 female runners who ran primarily on concrete. It is possible that something else about these runners predisposed them to injury.

An interesting biomechanics study, published in Sports Medicine in October 1986, may help explain why runners’ injuries do not seem to be related to surface hardness. The study’s author found, unexpectedly, that the peak value of the vertical force caused by a foot striking concrete was actually lower than on asphalt or grass. The runner’s foot also remained in contact with the concrete a few milliseconds longer than with the other surfaces.

This led to the conclusion that just before striking the surface, the runner subconsciously adjusts leg stiffness based on perception of surface hardness to cushion the landing.

Totally radical

I understand that antioxidants decrease the number of free radicals in the body. How do you determine your number of free radicals?

Free radicals (the kind generated in chemical reactions, not in Berkeley in the ’60s) are molecules with unpaired electrons. Electrons like to hang out in pairs, and when they find themselves solitary, they try to break up happy electron couples in other molecules.

DNA, proteins, and fats can all be damaged by free radicals, and free radicals have been implicated in a wide range of diseases, including cancer, Alzheimer’s disease, and heart disease.

Free radicals’ bad rap is not entirely fair, however. They are produced as a normal part of many chemical reactions in our cells, and they play a number of important roles in the body. For example, our immune system uses free radicals as weapons against invading bacteria and viruses.

It is not possible to find out how many free radicals you have. Doctors don’t test for free radicals because the complexity of different body tissues makes these tests impractical, according to Joseph Scherger, a professor and physician at the University of California San Diego School of Medicine.

Scherger points out that although elevated levels of free radicals cannot be measured directly, their effects can be measured. For example, free radicals can cause inflammation in the blood vessels, which leads to atherosclerosis, or clogging of the arteries. Inflammation increases the amount of a chemical in the blood called C-reactive protein (CRP). So levels of CRP provide indirect information about free radical activity.

Someday, it might be possible to scan for free radical activity in particular organs or tissues. Electron spin resonance—a measurement technique that detects free radicals based on how they behave in a magnetic field—has been used to detect free radicals in small animals. Extending the technique to humans poses several challenges. For instance, it involves administering chemicals that are not safe for humans in order to trap the radicals so that they can be measured.

It might be difficult to draw conclusions from a free radical test, since levels of free radicals are dynamic. Elevated free radicals might be a sign of a chronic problem or your body’s normal response to a temporary infection.

So even with such a test, advice from doctors on how to keep a balance between free radicals and antioxidants would still be to avoid smoking, minimize intake of trans fats, and keep your diet rich in fruits and vegetables.

It’s elemental

What can you tell me about indium—element 49—and its role, if any, in human nutrition? Are there any websites I can look at?

An Internet search turns up some amazing claims about the health benefits of indium. Among the plethora of ailments indium is purported to cure are addictions, hair loss, the appearance of aging, cancer, birth defects, low and high blood pressure, and weight problems. Most of these claims are presented without any supporting evidence, but a few sites refer to—but distort—scientific studies.

For example, one site claims that in 1971, Dr. Henry Schroeder discovered that indium supplements resulted in a lower body weight, especially in females, and may give women the extra boost to burn more calories and lose weight. Schroeder published a paper that year, in The Journal of Nutrition, describing the effects of low doses of indium (as indium chloride) on the growth and life span of mice. However, he reported that indium stunted the growth of mice, especially females, not that it turned them into fat-burning machines.

Also, in contrast to what one would expect if indium really were a cure-all, Schroeder found no statistically significant differences in the life spans, or numbers of tumors, in mice getting indium supplements compared to controls.

A radioactive isotope of indium is used in medicine, including in cancer treatment, but these uses exploit the radioactivity of the particular isotope, not indium’s purported nutritional value.

An increase in the use of indium (nonradioactive, of course) in the electronics industry (for example, in semiconductors and solar cells), and concerns over possible health risks to workers, have stimulated a few recent studies on indium exposure. Animal studies have shown that, in high doses, indium can have adverse effects on the liver and kidneys and on fetal development.

Because many essential minerals are toxic in large doses, the adverse effects of high doses of indium do not disprove the benefits of low doses. However, indium has no known biological function, and the scientific literature does not support the claims about indium’s benefits on health.

Color me young

I have heard that middle-aged people can prevent their hair from turning gray by taking a vitamin B complex containing para-aminobenzoic acid (PABA). Is there any truth to this?

PABA probably is most familiar as an ingredient once widely used in sunscreens, but bacteria in our intestines also make it. Although PABA is sometimes called vitamin Bx and is found in foods such as brewer’s yeast, liver, and whole grains along with other B vitamins, it is not officially classified as a vitamin because its intake is not essential for human health.

The claim that PABA can prevent graying of hair has roots in studies conducted in the 1940s and ’50s. They concluded that PABA consumed in large doses caused darkening of hair in some people with white or gray hair. The length of time the hair was gray before PABA treatment began did not appear to influence the darkening effect. One study also noted a darkening of hair in individuals with nongray hair.

The dosages of PABA used in the studies were high, from hundreds to thousands of milligrams per day. Much lower doses (30 milligrams or more) can cause nausea, fever, rashes, and liver toxicity, according to the 2007 edition of Dietary Supplements, by pharmacist and nutritionist Pamela Mason.

In addition, the outcomes of the PABA studies were highly variable. In some, the majority of people taking PABA did not have a change in hair color. It is not clear what factors may have led to the inconsistent results, or even how PABA could reverse graying.

Severe malnutrition can cause graying of hair, as can large deficiencies of individual nutrients, including copper, zinc, and folic acid. Nonetheless, genetics appears to be the dominant factor that determines when an individual’s hair turns gray.

Hair color depends on the presence or absence of the pigment melanin, which is produced in organelles called melanosomes within cells called melanocytes by the process of (don’t worry, tedium does not cause gray hair) melanogenesis.

Gray hair has a marked reduction in the number of active melanocytes within the hair bulb. As a result, fewer melanosomes are incorporated into the growing hair shaft.

The pigment changes are accompanied by alterations in the hair structure as the central, or medullary, layer of the hair thickens and the surrounding cortical layer thins. What triggers the decrease in melanogenesis, and how the pigment changes relate to the changes in the structure and texture of gray hair, are not yet understood.

Vitamin virtues

Does any scientific evidence show whether purchased vitamins (multi- or individual) are effective?

For people with special nutritional needs or vitamin deficiencies, vitamin supplements can be beneficial. An example is supplementation with folic acid before and during pregnancy, which significantly reduces the risk of birth defects, especially neural tube defects such as spina bifida.

Even in developed nations, severe vitamin deficiencies are not entirely a thing of the past. Cases of rickets—slowed growth and bone deformities caused by vitamin D deficiency—crop up regularly in small numbers of infants in the United States. Vitamin deficiencies are also common in people over age 65.

More than one-third of U.S. adults take multivitamins, and nearly three-quarters use nutritional supplements of some kind. After multivitamins, the most popular nutritional supplements are calcium, vitamin E, and vitamin C.

Yet, although many studies have examined the effectiveness of multivitamins and individual nutritional supplements on preventing a range of ailments, including cancer, cardiovascular disease, and age-related cognitive declines, results are highly variable. Some show increased risk, others show decreased risk, and still others reveal no effect.

Several comprehensive reviews have concluded that the overall quantity, quality, and consistency of evidence is weak that nutritional supplements benefit the general adult U.S. population. They also call for future research to better control for prior nutritional status of study participants. Furthermore, supplementation studies lasting a few months, or even years, may be inadequate, because chronic diseases can take more than a decade to develop.

Food and Drug Administration oversight of nutritional supplements is loose. Supplements are categorized as food rather than as drugs, which have tighter oversight, and supplements sometimes contain contaminants. For instance, a study by the International Olympic Committee showed that some supplements for athletes contained undeclared steroids.

Products that have the ConsumerLab.com “CL Seal” have been tested for product label accuracy and ingredient quality. Similarly, the verification mark of US Pharmacopeia (USP), a nonprofit, nongovernment organization, signifies that the supplement was produced through USP-verified good manufacturing practices.

The FDA is phasing in new regulations that require manufacturers to evaluate the composition of their supplements. However, none of these oversight measures ensures that the product works. Without FDA review, labels are allowed to claim that a supplement affects a body structure or function (but not that it prevents or treats disease).

Fuel economy

What is a person’s metabolic rate based on?

Metabolic rate has three components: resting metabolic rate (the energy it takes just to be alive—to breathe and for our cells to go about their daily business), the energy expended on eating (digesting, absorbing, and storing food), and the energy required for all other activities.

Resting metabolic rate accounts for approximately 60 percent of the calories we expend every day. Eating (excluding the calories burned getting to the nearest burger joint) makes up about 10 percent of daily energy expenditure. The remaining 30 percent of calories are burned as a result of activity.
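As a rough worked example (with a hypothetical 2,000-calorie day, not measured values), those percentages translate into a simple breakdown:

```python
# Rough breakdown of daily energy expenditure using the approximate
# percentages above; the 2,000-calorie total is a hypothetical example.
total_calories = 2000

resting = 0.60 * total_calories   # resting metabolic rate (~60%)
eating = 0.10 * total_calories    # digesting, absorbing, and storing food (~10%)
activity = 0.30 * total_calories  # exercise plus NEAT (~30%)

print(f"Resting: {resting:.0f} kcal, eating: {eating:.0f} kcal, activity: {activity:.0f} kcal")
# -> Resting: 1200 kcal, eating: 200 kcal, activity: 600 kcal
```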

Activity can be divided into exercise and nonexercise activity thermogenesis (NEAT). NEAT is the energy burned during daily activities that are not fitness-related, such as standing, ambulating, and fidgeting. Researchers discovered, by having people wear motion-sensing undergarments, that lean, self-proclaimed “couch potatoes” engage in approximately two hours more NEAT behaviors each day than their obese counterparts. The differences in NEAT meant that the obese people burned 350 fewer calories per day than the lean people. Interestingly, even when the obese people lost weight, they did not increase their NEAT.

In fact, another study showed that even moderate weight loss (15 to 20 pounds, or 7 to 9 kilograms) actually decreases metabolic rate. This finding explains why it is difficult to maintain weight loss through dieting alone: the body burns fewer calories at the new, lower weight. On the other hand, exercise burns calories in the short term and can crank up metabolic rate in the long term by building muscle.

Muscle mass determines a large proportion of the individual differences in metabolic rate, because, even at rest, muscle tissue consumes more fuel than fat. Differences in muscle mass explain why women have, on average, a 10 percent lower total daily energy expenditure than men. Also, metabolism tends to slow down with age because of loss of muscle mass, not just because of reduced activity.

Metabolism is regulated by intricate feedback mechanisms between the body and the brain. For example, during starvation, certain thyroid hormones drop rapidly, leading to a 40 percent decrease in resting metabolic rate. The thyroid gland is under the influence of the pituitary gland in the brain, which receives orders from a brain region known as the hypothalamus. The hypothalamus is influenced by leptin, which is produced by fat cells. When it was discovered a decade ago, leptin (from the Greek leptos, meaning thin) was thought to have potential as a magic skinny pill, but alas, controlling metabolism is not so simple.

Fit to be sweaty

I read somewhere that people who are aerobically fit sweat more than people who are less fit. Is this true?

Studies have shown that people who are aerobically fit do sweat more, and begin sweating more quickly, than people who are less fit when they exercise at similar relative intensities.

“Relative intensity” means a fixed percentage—say, 80 percent—of individuals’ maximal aerobic power, which is ascertained from a person’s oxygen uptake and carbon dioxide production during exercise. To get a fit person to exercise at 80 percent of his or her maximal aerobic power, experimenters need to crank up the tension on an exercise bicycle, or the incline or speed of a treadmill, compared to the setting that gets a nonfit person exercising at 80 percent of maximal power.
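As a simple illustration, the sketch below uses hypothetical maximal oxygen uptake (VO2max) values, not figures from any particular study, to show how the same relative intensity corresponds to very different absolute workloads for a fit and an unfit exerciser.

```python
# Same relative intensity (80% of maximal aerobic power) for two hypothetical people.
# VO2max values (ml of oxygen per kg of body weight per minute) are illustrative only.
RELATIVE_INTENSITY = 0.80

vo2max = {"fit runner": 60.0, "sedentary adult": 35.0}

for person, vo2 in vo2max.items():
    target = RELATIVE_INTENSITY * vo2
    print(f"{person}: exercise hard enough to consume ~{target:.0f} ml O2/kg/min")
# The fit runner must work much harder in absolute terms (~48 vs. ~28 ml O2/kg/min)
# to reach the same 80 percent of his or her own maximum.
```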

Exercise physiologists compare people who are exercising at the same relative intensity, rather than doing an identical task, because they are trying to understand how the body adapts to training, and what happens to sweating, heart rate, oxygen consumption, and so on as people get close to their physical limits, whatever those limits are.

Therefore, a couch potato probably would sweat more than a marathoner when trotting 100 feet to the mailbox. But fit people get more sweaty, more quickly when they push themselves equally hard with respect to their own physical limits. Other individual differences, including gender (on average, men sweat more than women), also influence sweating.

Red and white

My daughter does not eat red meat. I’ve seen the TV commercial from the pork industry that calls pork “the other white meat,” suggesting that it compares to chicken as far as nutrition is concerned. How does pork compare to beef?

This advertisement is a clever marketing ploy by the pork industry, which is attempting to piggyback on the growing popularity of chicken. Since the 1970s, per-capita consumption of chicken has increased, while consumption of beef has declined. Pork consumption has held relatively steady, at 50 pounds per person annually.

Although pork is paler than beef, the U.S. Department of Agriculture classifies all meat from livestock—including pork, veal, beef, and lamb—as red meat. The red color comes from myoglobin, which is an iron-containing protein that holds oxygen in muscle. Pork has less myoglobin than beef, but more than the white meat of chicken.

Hogs are leaner than they used to be due to improved breeding and feeding, but clearly fat content and nutritional value also depend on the cut of pork selected and how it is cooked. Studies have shown that cholesterol and triglyceride levels in consumers of lean pork, lean beef, or white meat (chicken or fish) following a fat-controlled diet are similar. This indicates that these levels depend on the fat content, not the protein source itself.

Meat is a good source of minerals and B vitamins. On average, pork has less iron and zinc than beef, but about the same amount of copper. In terms of the B vitamins, pork has more thiamine than beef and about the same amount of niacin and riboflavin.

Your daughter may be concerned about fat intake, or studies that have linked consumption of red meat to increased risk of certain types of cancer, including colon and breast cancer. The exact relationship between red meat consumption and cancer risk is uncertain because consumers of red meat and nonconsumers usually have other differences in diet. For instance, people who abstain from eating red meat may consume more fruits and vegetables high in antioxidants.

People avoid certain types of meat for a variety of reasons besides health concerns. Their religion may label certain animals sacred or unclean. They may feel that it is unethical to eat mammals, or any animal at all. They may also be concerned about the environmental impact of factory farms, or about the greater resources needed to produce meat compared with an equal number of calories from plant sources.

Hold the sunny side

Since the medical profession touts the need to avoid excess egg consumption due to the yolks, I’ve been wondering why science has not made any effort to create a smaller yolk content. Or has this been attempted?

Chickens lay the occasional yolkless egg, but hens that consistently produce meringue-ready eggs would be expensive and technically difficult to breed. After all, the yolk is not just a critical ingredient in hollandaise sauce, but it also provides nourishment for a developing chick. So eggs with no yolk, or a very small yolk, would be sterile.

Each egg starts out as a single cell in the ovary of the mother’s body. The egg cells are already present when a female bird hatches. When she is a few months old, yolk is added to one of these cells. A surge in estrogen stimulates the liver to produce vitellogenin, the major protein in egg yolk. Vitellogenin is transported to the oviduct—the tube that leads from the ovary—via the bloodstream.

The finished yolk passes down the oviduct to the place where the albumen, or egg white, is produced. The albumen is added in layers, and the yolk ends up floating in a watery layer of albumen surrounded by a thick, tough layer of albumen that acts as a shock absorber. The motion of the egg twists the albumen at either end, producing the white stringlike anchors—chalazae—that keep the yolk centered.

If everything is functioning normally, the outer membrane and shell are added further down the oviduct, and the hen lays a perfect egg. When things go awry, eggs can end up with double yolks or no yolks. The oviduct is an assembly line with multiple eggs in progress at once. If two yolks drop into the oviduct at the same time, they may end up encased in the same albumen and shell. Conversely, if something interferes with yolk production, the hen may lay an egg containing albumen only.

The size of the yolk relative to the albumen increases as hens age. Also, across different breeds of chickens, a moderate amount of natural variation occurs in the ratio of yolk to albumen. Theoretically, yolks could be made even smaller by tinkering with one or more of the at least four genes involved in vitellogenin production.

Egg yolks have gotten a bad rap because of their cholesterol content, but the yolk has a richer concentration of vitamins and minerals than the white. Studies have shown that eating an egg or two a day does not increase heart disease risk in healthy individuals.

Grain of salt

Recently, some TV commercials have claimed that their products contain “sea salt that contains less sodium than regular table salt.” Aren’t they both sodium chloride (NaCl)? And don’t we get some of our table salt from seawater? Also, it has become very difficult to avoid excess salt. Some products contain more than 1,000 mg of sodium per serving. Is there any way to get these manufacturers to use a lot less salt?

For people competing in endurance events, hyponatremia—an abnormally low concentration of sodium in the blood—is a real danger when they drink too much water without replenishing the sodium lost in sweat. However, in most countries the average salt (sodium chloride) intake is at least double the maximum of 5 grams of salt (about one teaspoonful, which supplies roughly 2,000 milligrams of sodium) per day recommended by the World Health Organization.
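Because nutrition labels list sodium while the recommendation above is expressed in grams of salt, a quick conversion is handy. The sketch below simply applies sodium’s share of the mass of sodium chloride (about 39 percent); the intake amounts are illustrative.

```python
# Convert grams of salt (sodium chloride) to milligrams of sodium.
# Sodium makes up about 39% of the mass of NaCl (molar masses: Na ~23, Cl ~35.5).
SODIUM_FRACTION = 23.0 / (23.0 + 35.5)  # ~0.393

def salt_to_sodium_mg(salt_grams):
    return salt_grams * SODIUM_FRACTION * 1000

print(round(salt_to_sodium_mg(5)))   # WHO maximum of 5 g salt -> ~1966 mg sodium
print(round(salt_to_sodium_mg(10)))  # double that, 10 g salt -> ~3932 mg sodium
```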

Multiple sources of evidence show that high salt consumption can increase blood pressure. Elevated blood pressure is the single most important cause of heart attack and stroke. However, studies reveal considerable individual variation in the effect of salt consumption on blood pressure.

Salty foods irritate the stomach lining, and high consumption has been linked to stomach cancer. Some evidence also suggests that high salt intake can lead to water retention, increase the risk of kidney stones, contribute to osteoporosis, and worsen asthma symptoms.

Over 85 percent of the mineral composition of seawater is sodium chloride. The purer the sea salt, the more sodium chloride it contains. The label on my inexpensive bottle of sea salt says that it is more than 99 percent sodium chloride. That is about the same as regular table salt, which is mostly mined from deposits left by ancient salt lakes. Low-sodium salt is sodium chloride mixed with another mineral salt, such as potassium chloride.

Most dietary salt comes from processed food, so check the sodium content on the label, because the manufacturer’s claims may be misleading. Salt has been used as a preservative for thousands of years and has other roles in cooking, but it is often added to make poor-quality ingredients palatable. Unfortunately, salty food desensitizes the tongue to salt.

A gradual reduction in salt exposure across the diet can be achieved without affecting consumers’ taste perceptions; this strategy has been effective in several countries. Education, lobbying, and consumer demand would drive manufacturers to make more changes. A creative way to reduce salt intake and high-calorie, low-nutrient processed food might be a family assembly line that makes meals from good-quality ingredients and then freezes homemade TV dinners for those rushed days.

Quicksilver

Is there a difference in the amount of mercury in fish, whether you eat it raw or cook it? Is it possible to avoid the mercury in a fish by how you prepare it?

Mercury in fish is tightly bound to protein and is not removed during cooking processes such as smoking, broiling, baking, boiling, pan frying, and deep frying. Nor does the addition of lemon juice release mercury from its bound state. On the other hand, the cooking method affects the health benefits of fish, and the mercury concentration is strongly dependent on the type of fish.

Mercury originates from natural sources (volcanoes) and human sources (coal-fired power plants, waste incineration, gold mining). Organisms do not readily absorb mercury in the form in which it is usually released into the environment—metallic or inorganic mercury. Once rainwater carries inorganic mercury into lakes and oceans, microbes convert it into methylmercury, or organic mercury. (In chemistry, “organic” refers to carbon-containing compounds and has nothing to do with organic agriculture.)

Organic mercury is readily absorbed by organisms and accumulates in their tissues. It bioaccumulates in the aquatic food chain. In other words, short-lived species low in the food chain (such as shellfish and salmon) have low concentrations of mercury, while longer-lived predators (such as swordfish and shark) have high concentrations. The levels in albacore tuna are lower than those in swordfish but higher than those in salmon.

Industrial catastrophes that have resulted in mass consumption of high levels of mercury reveal that it is toxic to nerve cells, especially in children exposed during their early development. Studies of the effects of exposure to lower levels of mercury have been conflicting, but based on the possible risks, the U.S. Environmental Protection Agency and the U.S. Food and Drug Administration have issued advisories for women of childbearing age, pregnant women, nursing mothers, and young children.

At the same time, studies suggest that intake of fatty acids in fish by pregnant and nursing women is beneficial for the development of brain cells in infants. In addition, fish, except deep-fried fish, has well-documented cardiovascular benefits. For example, omega-3 polyunsaturated fatty acids in fish decrease the risk of heart attack by improving the fluidity of heart cell membranes.

In response to the confusion about the role of fish in a healthy diet, a 2006 article in the Journal of the American Medical Association concluded that, with the exception of a few fish species, the benefits of moderate fish consumption (two servings per week) outweigh the risks. The article recommends that nursing mothers and pregnant women avoid shark, swordfish, golden bass, and king mackerel; limit intake of albacore tuna to 6 ounces per week; and consult advisories for locally caught fish. But they should get at least 12 ounces per week of other fish and shellfish. See http://www.epa.gov/waterscience/fish/ for a list of mercury levels in different species and local fish advisories.