Curious Folks Ask: 162 Real Answers on Amazing Inventions, Fascinating Products, and Medical Mysteries - Sherry Seethaler (2009)
Chapter 1. Ingenious inventions
How does a frost-free freezer work?
In a non-frost-free freezer, water vapor from the air condenses and then freezes on the cooling coils in the freezer (or on the plastic of the freezer compartment covering the coils). If you put off defrosting long enough, eventually so much ice accumulates that there is no longer room for even a TV dinner.
Frost-free freezers prevent this buildup by doing a mini-defrost every six hours or so. A timer turns on a heating coil, which surrounds the cooling coils, and a temperature sensor turns off the heater when the temperature starts rising above freezing.
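The control logic is simple enough to sketch in a few lines of code. The sketch below is purely illustrative: the interval, the cutoff temperature, and the heater and sensor interfaces are invented for the example, not taken from any real appliance controller.

```python
# Illustrative sketch of a timer-triggered defrost cycle; all names and
# numbers here are invented for the example, not from a real controller.
DEFROST_INTERVAL_HOURS = 6   # the timer fires roughly every six hours
CUTOFF_TEMP_F = 38           # shut the heater off just above freezing

def defrost_cycle(read_coil_temp_f, heater):
    """Run one defrost: heat the coils until they warm past freezing."""
    heater.on()
    while read_coil_temp_f() < CUTOFF_TEMP_F:
        pass  # a real controller would sleep here instead of busy-waiting
    heater.off()
```

The timer and temperature sensor divide the work just as in the freezer: the timer decides when a cycle starts, and the sensor decides when it ends.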
Full of cold air
How does canned air work? Why is the air cold when it comes out of the can?
The air or gas in the can is under pressure, and it expands as it escapes from the can. Inside the can, where the gas molecules are closer together, there are attractive forces (albeit weak) between the molecules. Because of these forces, heat energy is needed to separate the molecules. The heat comes from the environment or your skin if it is near the nozzle where the gas escapes.
Most refrigerators and air conditioners work by taking advantage of the cooling effect of an expanding gas (or a liquid expanding into a gas). Refrigerator coils contain a gas that the compressor squeezes into a liquid. Compressing the gas generates heat, which escapes through the coils on the back of the refrigerator.
An expansion valve is then opened between the compressed liquid and the heat-exchange coils inside the refrigerator. The abrupt drop in pressure, akin to releasing the nozzle on canned air, causes the liquid to expand rapidly into a gas. As the expansion occurs, heat from the inside of the refrigerator is transferred to the gas.
Air conditioners are similar to refrigerators, except that air conditioners also have fans to help circulate the cool air indoors and dissipate the warm air outdoors.
May the Force be with you
Is a lightsaber (yes, the Star Wars sword) possible?
Glow-in-the-dark Halloween costume accessories aside, it is not possible to solidify light or make it terminate in midair. However, in his book Physics of the Impossible, physicist Michio Kaku explains how to make something akin to a lightsaber. Plasma—an extremely hot ionized gas—could be confined to a hollow rod dotted with small holes that would allow the glowing plasma to escape. Plasma can be hot enough to cut steel. The plasma saber would have to be plugged into a high-energy power supply, though, so it would be more unwieldy than the George Lucas version.
A popular weapon in science fiction is a “graser,” or gamma-ray laser. Has anyone built one? Does any theory suggest that it is or is not possible? What would be likely uses?
Gamma-ray lasers are technically possible. Lasers that produce emissions in the microwave, infrared, visible, ultraviolet, and even X-ray ranges already exist. The trick to producing gamma rays is finding an adequate lasing medium. This is a substance (gas, liquid, or solid) that gets excited when energy is pumped in. It releases that energy as photons, or particles of light, when it returns to the unexcited state.
In other types of lasers, it is the electrons within the atoms of the lasing medium that get excited to higher energy levels. Whether the photons released are lower-energy microwaves or higher-energy X-rays depends on the size of the energy gap between the electrons’ excited and relaxed states.
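The connection between the energy gap and the kind of light emitted follows from Planck's relation, E = hf: photon energy is proportional to frequency. A quick illustrative calculation (the two frequencies below are just representative values for microwaves and X-rays):

```python
# Photon energy from frequency via Planck's relation E = h * f.
PLANCK_H_JS = 6.626e-34    # Planck's constant, joule-seconds
EV_IN_JOULES = 1.602e-19   # one electron-volt expressed in joules

def photon_energy_ev(frequency_hz):
    """Energy of a photon of the given frequency, in electron-volts."""
    return PLANCK_H_JS * frequency_hz / EV_IN_JOULES

microwave = photon_energy_ev(1e10)  # ~10 GHz: tens of millionths of an eV
xray = photon_energy_ev(1e18)       # ~10^18 Hz: thousands of eV
```

An energy gap roughly a hundred million times larger separates X-ray transitions from microwave ones, which is why different lasing media are needed for different parts of the spectrum.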
Gamma rays are too energetic to be produced by electrons jumping from a high to low energy level. Instead, they are produced when an atom’s nucleus switches from a high to low energy state. In laser light, photon emission is organized, but getting gamma-ray photons to move in step with each other requires many nuclei to change energy states in unison. This is trickier than getting electrons to change states in unison.
A few elements, including hafnium, have an excited nucleus state that is long-lived, so these elements show promise as a lasing medium for a gamma-ray laser. The U.S. Department of Defense is interested in the problem because a gamma-ray laser would be a formidable weapon.
The laser would also have many nonmilitary applications. For instance, it could be used to probe atoms and molecules to gain an unprecedented understanding of their structure and function, to treat cancerous tumors, or to kick-start nuclear fusion for energy production.
Catching a wave
Why does my radio crackle with static or some other interference? This occurs on AM stations—in particular, more loudly on distant stations and not as badly on some local stations. Is there any way to eliminate this problem?
Many natural sources (static electricity, lightning, solar flares) and man-made sources (motors, electrical equipment) can interfere with radio reception. AM is more susceptible to static than FM because of differences in the characteristics of the transmitted radio signal.
In AM—amplitude modulation—the height of the radio waves, if you visualize them as waves on the ocean, varies according to the signal. In FM—frequency modulation—it is not the height of the waves, but rather the number of waves passing a given point each second, that encodes your favorite music or radio show. Most interference affects the amplitude rather than the frequency of a radio signal.
In addition, radio waves in the frequency range transmitted by AM radio (near 1 megahertz), but not FM (near 100 megahertz), can reflect off the ionosphere, the upper layer of the atmosphere. Since radio waves travel in straight lines, the curvature of the Earth limits their range. Bouncing off the ionosphere and being reflected to Earth allows AM radio waves to travel long distances compared to (ground-based) FM signals. However, interactions with the ionosphere create static, so more distant AM stations have more static than local ones.
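The line-of-sight limit imposed by the Earth's curvature can be estimated with simple geometry. A sketch (the tower height is an arbitrary example, and atmospheric refraction, which extends the range somewhat, is ignored):

```python
import math

R_EARTH_M = 6.371e6  # mean radius of the Earth, in meters

def horizon_km(antenna_height_m):
    """Approximate distance to the horizon, d = sqrt(2 * R * h), ignoring refraction."""
    return math.sqrt(2 * R_EARTH_M * antenna_height_m) / 1000

# A 100-meter tower can "see" only a few tens of kilometers; AM signals
# reach much farther than this only by bouncing off the ionosphere.
d = horizon_km(100)
```
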
You cannot eliminate natural sources of static, but here are some tips for improving radio reception. Turn off any unneeded appliances. Touch lamps or lamps with dimmer switches may need to be unplugged. If practical, try moving your radio to different places in the house (for example, a windowsill) to see where reception is best.
Just turning the radio or moving the power cord can help, because sometimes the AM radio antenna is inside the radio, and sometimes it is in the cord. It is also possible to purchase an external AM loop antenna for some radios. In the case of your car radio, examine the base of the antenna for signs of corrosion.
Out, damned spot!
After I wash my car windows and let them air-dry on a bright, sunny day, I notice a grid of circles about the size of quarters visible on the glass that wasn’t there before I washed them. On other cars, the grid is sometimes slightly differently sized and sometimes not perfect circles.
These are mineral deposits, such as calcium and iron, left behind when water evaporates. The pattern depends on how the water evaporates (how well water sheets from your car, the amount of wind, and so on). Also, the concentration of minerals varies in water from different sources.
Vinegar, a weak acid, can help dissolve the minerals and is supposedly the secret of expert car detailers. However, it will remove any wax on your car and cannot fix the paint if the minerals have etched it. Auto aficionados seeking to prevent mineral deposits can purchase a water deionizer that attaches to a garden hose.
Glass stretch marks
I have seen a grid of circles on the factory-provided window tints that also appear to act as the laminate layer. This is especially true on the rear windows of older BMWs after you wash them. It can be seen more prominently with polarized sunglasses. It still looks like a pattern of regular circles about the size of quarters throughout the entire glass.
If (as just discussed) the circles are due to mineral deposits left behind as water evaporates, they should appear on all the windows as well as the paint. Also, they will not form a perfectly regular grid.
On the other hand, if the pattern is very regular, and you see it on only the rear and side windows, the grid is part of the safety glass itself. The rear and side windows are usually made from tempered glass, and the tempering process creates a stress pattern that is visible within the glass.
To temper glass, manufacturers heat it to about 1,200 degrees F (650 degrees C) and then rapidly cool the outer surfaces by blowing air over them. The center of the glass cools more gradually. As the center cools and contracts, it pulls the outer surfaces into compression, leaving the core in tension and creating a stress pattern along the midplane of the glass.
Tempered glass is stronger than regular glass, but when it does break, the internal stress causes the glass to shatter into many small pieces. Since it would be dangerous if a stone shattered the windshield while someone was driving, the windshield is made from laminated safety glass. Some upscale auto manufacturers offer laminated safety glass in side windows for added occupant safety and break-in resistance.
Laminated safety glass consists of two sheets of nontempered glass sandwiched together with a sheet of vinyl in the middle, to which the glass adheres when it breaks. Windows made from laminated safety glass lack the grid of circles characteristic of windows made from tempered glass.
The rear windows of certain cars with window tints definitely have a more noticeable grid pattern. It is possible that the plastic tinted layer has some pattern associated with it, but it is more likely that the tint acts like polarized sunglasses to block some of the scattered light, making it easier to discern the stress pattern in the tempered glass.
Since contact lenses move with your eyes as they move, how are bifocal contact lenses possible?
One bifocal contact lens design—called alternating, or translating, vision—is similar to bifocal glasses. Each lens has two segments. The distance correction is on top, and the near correction is below. The eye moves between the two lens powers as the gaze shifts up and down.
Conversely, simultaneous-vision lenses are designed so that the eyes look through both near and distance powers at once, and the visual system determines which power to use.
Alternating-bifocal contact lenses can be weighted, or slightly flattened at the base, so that the lens is supported by the lower lid and is shifted upward relative to the pupil when the gaze is directed downward.
The simplest form of simultaneous vision is monovision. One eye, usually the dominant one, is fitted with the distance correction, and the other eye is fitted with the near correction. More complicated designs are concentric ring lenses and aspheric lenses. Concentric ring designs feature a bull’s-eye pattern of the near and far prescriptions. Aspheric designs have the two powers blended across the lens.
Because simultaneous-vision lenses maintain both the near and far prescription powers in front of the pupil at all times, both powers focus light onto the retina. Therefore, the retina receives two images—one that is in focus and one that is out of focus. Over time, the brain learns to make sense of this strange state of affairs by paying attention to the clear image and ignoring the superimposed out-of-focus image.
The adaptation is not perfect. Monovision reduces depth perception because only one eye receives a clear image of any scene. Simultaneous lenses with more than one power per lens reduce visual acuity—the sharpness of an image—because the out-of-focus image creates a veiling effect on the retina.
Another problem with bifocal contact lenses is that the way lenses fit over an individual’s cornea is unique. As a result, it is not easy to predict where the optical center of the lens will be and whether the power zones will line up correctly with the pupil.
Because of these challenges, bifocal contact lenses are not as popular as single-prescription lenses. However, the technology has improved, and there are more designs to choose from. Which design is best for an individual depends on the shape of the eye as well as a person’s lifestyle and activities.
Why is it so difficult to make a hearing aid that works?
Designing a good hearing aid is actually a difficult engineering problem. When people suffer hearing loss, they often lose the ability to hear some sounds but not others. For example, presbycusis—age-related hearing loss—usually first diminishes the ability to hear higher-pitched sounds.
Therefore, if a hearing aid simply amplified all sounds equally, the sounds that were already audible would become uncomfortably loud. For this reason, hearing aids need to be adjusted to each patient’s particular hearing deficits.
Hearing aids also must amplify speech sounds while minimizing background noise. Directional microphones can help, because a listener usually turns to face a person who is speaking, allowing the microphone to pick up the voice but not sounds from other directions. However, random noise can come from the same direction as the speaker’s voice, especially if sounds reverberate substantially off surfaces in the room.
Sometimes the solution to one problem leads to another. For example, reducing the size of hearing aids is desirable, not just for comfort and aesthetics, but also to minimize the occlusion effect—that hollow sound of one’s own voice when something blocks the ear canals. Unfortunately, shrinking the device places the microphone closer to the hearing aid output and increases feedback. Feedback occurs when some of the amplified sound is fed back to the microphone in a repeating cycle, causing an annoying whistle or squeal.
The technology is improving gradually; the hearing aid industry has been transitioning from analog to digital devices. Digital signal processing permits more sophisticated techniques for enhancing speech and reducing feedback and background noise.
On the road again
Assuming all things are equal, does a car get better mileage if the road is wet or dry, the air is very humid or dry, the altitude is high or at sea level, the temperature is very cold or very hot?
According to members of SAE International (the Society of Automotive Engineers), a car gets better mileage:
• When the road is dry. This is because the tires get better traction, and the power is transferred to the road more efficiently.
• In humid conditions. This is because there is less need to throttle the engine. Throttling is a way of controlling the speed of an internal combustion engine, but it consumes some of the engine’s power.
An internal combustion engine is a cylinder in which air and gasoline are mixed, compressed by a piston, and ignited. In the first step of the four-stroke combustion cycle used by most cars, a valve opens to take in air and gasoline as the piston moves downward. Throttling restricts the flow of air into the cylinder, forcing the piston to pull against a partial vacuum during the intake stroke, which wastes energy (a so-called pumping loss).
For a particular power output, the engine needs a constant amount of oxygen to burn the required amount of fuel. When more water molecules are in the air, some of the oxygen molecules are displaced. Therefore, in humid conditions, the engine must take in a greater volume of air to get the same amount of oxygen. The intake valve can remain open longer, and less work is required to pump the gases through the engine.
• At high altitude. This is because there is less drag on a vehicle in thinner air. It also takes less effort to expel the exhaust, because the atmospheric pressure “pushing back” on the engine is lower. In addition, less throttling occurs, because a larger volume of air must be taken in to get enough oxygen to burn the same amount of fuel.
• When it is very hot (assuming the air conditioning is off). This is because the air density is lower, so, for the same reason just described, there is less need to throttle the engine.
You may not be able to change where you drive or the weather conditions, but making certain that your tires are properly inflated is an excellent way to improve mileage. Inflating them to more than the manufacturer’s recommendation can reduce traction, but inflating too little can reduce the size of your wallet. With too little air, the tires flatten out, resulting in increased rolling friction, which slows down the wheel and decreases gas mileage.
As I was engaged in my weekly chore of raising the weights and slightly resetting the time on our 1780s grandfather clock, I wondered how people of that era could accurately set their clocks, which undoubtedly gained or lost at least a minute or two every week. I assume that those with almanacs could try to approximate the time by coordinating with sunrise or sunset, but I don’t know if that’s true. So how did they set their clocks?
In the late 1700s, the almanac, with its elaborate tables of astronomical and seasonal events, was important in keeping track of time. But back then people still relied on the rising and setting of the sun to mark time. They were much less obsessed than we are now with accurate time-keeping.
In fact, until the late 1800s, cities and towns had independent times, depending on their observation of the sun. Time zones were not considered necessary until trains crisscrossed the country. Pressure from the railroads led the U.S. government to divide the country into four time zones, which were synchronized at noon on November 18, 1883, when the master clock at the U.S. Naval Observatory transmitted the time to major cities via telegraph.
Lost with digital
Why is it possible to point your watch’s hour hand toward the sun and then find south between the hour hand and the 12 (assuming you’re in the Northern Hemisphere)? How does this relate to sundials?
The sun reaches its high point in the sky at astronomical noon—a moment also known as the meridian. (It comes from the same Latin stem as the terms ante meridiem, or a.m., and post meridiem, or p.m.) North of the Tropic of Cancer, the sun is due south at the meridian, because the sun is only ever directly overhead between the Tropic of Cancer and the Tropic of Capricorn.
Therefore, at noon, the shadow cast by a sundial’s shadow maker—the gnomon—points directly north. For a sundial to tell time, the noon mark must be oriented to true (celestial, not magnetic) north.
As the Earth rotates, the sun appears to move from east to west around the sky, and the shadow cast by the gnomon moves clockwise 15 degrees per hour (360 degrees in 24 hours).
Think of your watch as a little sundial. If you line up the hour hand with a shadow cast by the sun—a shadow points away from the sun—you can look toward the 12 to find the north/south line. However, because 360 degrees on a watch face corresponds to 12 hours rather than 24, the north/south line actually runs through a point halfway between the hour hand and the 12. This halfway point faces north between 6 a.m. and 6 p.m., after which it faces south. Point the hour hand at the sun itself, as in the question, and everything flips: between 6 a.m. and 6 p.m., the halfway point faces south.
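The rule can be expressed numerically. In the sketch below (Northern Hemisphere, local solar time, valid roughly between 6 a.m. and 6 p.m.), the hour hand is aimed at the sun itself, azimuths are compass degrees with 0 = north, and the function name is my own invention:

```python
def south_azimuth(sun_azimuth_deg, solar_hour):
    """Approximate azimuth of due south (0 = north, 180 = south) when the
    hour hand is aimed at the sun. Valid roughly 6 a.m. to 6 p.m.,
    Northern Hemisphere, local solar time."""
    # The hour hand sits (hour % 12) * 30 degrees clockwise from the 12,
    # so the halfway point lies (hour % 12) * 15 degrees back toward the 12.
    line = (sun_azimuth_deg - (solar_hour % 12) * 15) % 360
    other = (line + 180) % 360
    # During daytime hours, south is the end of the line nearer the sun.
    sep = lambda a, b: min(abs(a - b) % 360, 360 - abs(a - b) % 360)
    return line if sep(line, sun_azimuth_deg) <= 90 else other

# At local solar noon the sun is due south, and the rule agrees:
# south_azimuth(180, 12) gives 180.
```
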
Even correcting for daylight saving time, your watch is not a perfect measure of direction, because it is set according to your time zone, but astronomical noon varies across a time zone. Also, because of the Earth’s tilt on its axis and its elliptical orbit around the sun, successive astronomical noons are sometimes more and sometimes less than 24 hours apart, causing up to an additional quarter hour difference between watch and sun time.
The date designations B.C. and A.D. (before Christ and after the death of Christ) seem to leave a gap. In other words, how do we account for the time of Christ’s life between these designations? It looks like there is a 30-year life span or so that cannot be included in either the designation “before his life” or “after his death.”
A.D. is from Latin, meaning anno Domini or “in the year of our Lord.” The monk Dionysius Exiguus, who worked out the B.C./A.D. system in the sixth century, assigned A.D. 1 to the year he thought Christ was born. However, most religious scholars place the birth of Christ between 4 and 7 B.C. by comparing what is said in the Bible to known historical and astronomical events.
Let there be light
In 2007, Congress changed the dates on which daylight saving time begins and ends. Have any studies been done to determine if DST has overall economic or societal benefits? I believe it was invented by Benjamin Franklin to aid farmers, but we are far from an agrarian society today.
Benjamin Franklin is often credited with proposing daylight saving time in the Journal de Paris in 1784, but his essay was a tongue-in-cheek recommendation that people go to bed earlier and get up earlier. (See http://webexhibits.org/daylightsaving/franklin3.html.)
DST was not adopted until World War I. The rationale was to conserve energy by aligning traditional work hours with daylight hours to reduce the need for artificial light. Farmers, who disliked having to deliver their goods earlier in the day, successfully fought to get DST repealed after WWI. DST was not readopted until WWII.
Between 1945 and 1966, localities could choose when to observe DST. Mass confusion resulted, with radio and TV stations and transportation companies needing to publish new schedules every time a locality began or ended DST. The Uniform Time Act of 1966 addressed this problem by stipulating that any state that chose to observe DST had to begin on the last Sunday of April and end on the last Sunday of October.
Some studies suggest that DST reduces traffic accidents because the evening rush hour occurs during daylight. On the other hand, one study showed that more accidents occur the Monday after we spring forward, probably because commuters are sleep-deprived and/or in a rush.
Proponents of DST cite figures from a 1975 U.S. Department of Transportation study conducted when DST was extended during the oil embargo. The study found that DST reduced the national electricity load by about 1 percent. In 2001, the California Energy Commission estimated that daily electricity consumption would drop by about 0.5 percent if DST were extended through the winter months.
Energy consumption is thought to decrease during DST because people use less electric lighting in the evenings, which is only partly offset by an increase in the use of lights in the morning. People are also drawn outdoors when there is sunlight and therefore use household appliances less frequently.
However, some studies that examined system-wide energy use, including commercial and residential lighting, as well as heating and air conditioning, found no effect or even negative effects of DST, depending on the climate. Also, some studies suggest an overall energy penalty, considering how much the electricity conservation is offset by people taking advantage of the daylight by using more gasoline to go places in the evenings.
Since 2007, DST runs from the second Sunday in March to the first Sunday in November. Because commerce and lifestyles have changed dramatically since many of the studies on the energy-saving potential of DST were conducted, Congress will review the impact of the DST change and reserves the right to revoke it.
Can you give me a clear and reasonable explanation of the basis of the Fahrenheit scale? We all know that the Celsius or Centigrade scale is based on the freezing and boiling points of water at sea level, but so far nobody has been able to tell me how the Fahrenheit scale was created.
Most historians agree that Daniel Fahrenheit modified a scale developed by the Danish astronomer Ole Rømer. Rømer’s scale had fewer subdivisions and placed the freezing point of water at a fractional degree, which Fahrenheit found cumbersome. There are conflicting accounts about how Fahrenheit calibrated his thermometers, but in a paper he wrote in 1724, Fahrenheit described using three fixed points (as translated in A History of the Thermometer and Its Use in Meteorology, by W. E. Knowles Middleton, 1966).
To get the 0 on his scale, Fahrenheit said he used a mixture of ice, salt, and water. For his second calibration point, at 32 degrees, he used a mixture of ice and water.
Fahrenheit wrote that the third point was fixed at 96 degrees, where “the spirit expands” when the thermometer is held under the armpit or in the mouth of a healthy person long enough to acquire the heat of the body. (Later, thermometers based on Fahrenheit’s design were recalibrated, and normal body temperature ended up at 98.6 degrees.)
Although these are Fahrenheit’s words, Middleton points out that they may not be completely accurate because, as an instrument maker, Fahrenheit might have wanted to conceal his methods.
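For reference, the modern conversion between the two scales follows from their fixed points: water freezes at 32 F (0 C) and boils at 212 F (100 C) at sea level.

```python
def fahrenheit_to_celsius(f):
    """Linear conversion fixed by 32 F = 0 C and 212 F = 100 C."""
    return (f - 32) * 5 / 9

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

# The 98.6 F "normal" body temperature corresponds to 37 C.
body_c = fahrenheit_to_celsius(98.6)
```
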
Spying on Martians
Why don’t we use the Hubble telescope to look at Mars? If it can take such great pictures from deep space, it seems pictures of Mars should be possible. Can the Hubble view something this close?
The Hubble Space Telescope is useful for studying objects in the solar system (other than the Earth and the moon, which are too close). Only space probes that have passed close to Mars have been able to take clearer pictures of the planet than Hubble. Hubble has been used to monitor the atmosphere of Mars to better understand its weather patterns, particularly to gain insight into what causes the enormous dust storms that occur periodically.
To what extent has data from the Hipparcos astrometric satellite been used to identify stars with planets, and to what extent could this data be used if fully exploited for this purpose? Why hasn’t Hipparcos led to identification of thousands of stars with planets?
Hipparcos (HIgh-Precision PARallax COllecting Satellite) was the first space mission dedicated to measuring the distances, motions, colors, and brightness of stars. The mission was named in honor of the second-century B.C. Greek astronomer who, without a telescope, developed a catalog of 1,080 stars.
The distance to a star can be determined mathematically from its apparent shift in position, or parallax, as compared six months apart, when the Earth has revolved from one side of the sun to the other. The Hipparcos satellite determined the distances to more than 100,000 stars with unprecedented accuracy—and cataloged roughly a million more at lower precision—because the satellite was unobstructed by the Earth’s atmosphere, which blurs the starlight reaching telescopes on Earth.
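The geometry behind the measurement reduces to a simple reciprocal relationship: a star's distance in parsecs is one divided by its annual parallax in arcseconds, with one parsec equal to about 3.26 light-years. A small illustrative sketch:

```python
PARSEC_IN_LIGHT_YEARS = 3.26

def distance_parsecs(parallax_arcsec):
    """Distance in parsecs from annual parallax in arcseconds: d = 1 / p."""
    return 1.0 / parallax_arcsec

# The smaller the parallax, the farther the star. A parallax of
# 0.1 arcsecond puts a star at 10 parsecs, about 32.6 light-years.
d_ly = distance_parsecs(0.1) * PARSEC_IN_LIGHT_YEARS
```
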
Searching for planets was not one of the goals of the Hipparcos mission. In fact, it was during the period when Hipparcos was collecting data (1989 to 1993) that the first planets outside the solar system were detected, and the search for extrasolar planets was becoming a hot area in astronomy.
Hipparcos took multiple measurements of the same stars over time, so it could detect the slight dimming of starlight caused by a planet passing in front of a star—a transit.
Hipparcos recorded such a dimming of the star HD 209458 in 1991, but no one noticed until astronomers monitoring the star from the ground observed its transits in 1999. That discovery prompted astronomers to reexamine the data from the mission.
Astronomers estimate that only about 10 percent of planets would pass directly in front of a star as seen from a particular vantage point. Also, it is difficult to detect the reflected light from planets directly, because the light from the star drowns it out. Therefore, most planets are detected indirectly from the “wobble” of a star caused by the planet’s gravitational pull. Hipparcos was not designed to detect wobble.
Still, Hipparcos measurements are playing an important role in the search for planets. Hipparcos data on the distance to the stars is helping astronomers determine the mass of the objects causing the wobble. Mass is important because it reveals whether a star’s wobble is due to the presence of a planet or another star (which would be much more massive than a planet).
Hundreds of planets have been detected outside our solar system. See http://planetquest.jpl.nasa.gov/ for the latest count.
We continue to learn a lot about the universe with the Hubble Telescope, but what are we learning with the International Space Station?
The International Space Station research program was envisioned to be highly diverse and multidisciplinary and to include both basic and applied science. However, NASA’s plans for the use of the space station narrowed after President Bush’s Vision for Space Exploration was announced in 2004. Fundamental research in life sciences and microgravity continues, but a major emphasis is preparing for long-duration space missions.
Data is being collected on the effects of space flight and microgravity on human health. Previously, bodily processes such as fluctuations in levels of vitamins, minerals, and hormones could be measured only before and after space flight. Now, the time course of physiological changes can be studied because of the addition of a minus-80-degree-Celsius freezer on the space station. The freezer is used to store the biological samples collected during the mission until they can be returned to Earth for analysis.
One significant problem for astronauts living in a microgravity environment is loss of bone mineral density. Bone mineral density declines at an average rate of about 1 percent per month on the space station, more than 10 times faster than the average loss in postmenopausal women.
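The 1 percent-per-month figure compounds over a mission. A quick back-of-the-envelope calculation (the six-month duration is simply a typical expedition length, used here for illustration):

```python
# Compounding a 1 percent-per-month bone density loss over six months.
monthly_loss = 0.01
months = 6
remaining = (1 - monthly_loss) ** months   # fraction of density remaining
total_loss_pct = (1 - remaining) * 100     # roughly 5.9 percent overall
```
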
In one study, astronauts wore sensors to measure the forces on legs and feet during daily activities on the space station. The results are being used to design better exercise programs and equipment to curb bone loss in astronauts on future expeditions.
The performance and degradation of hundreds of materials are also being tested in a series of experiments mounted outside the space station. The space environment exposes materials to atomic oxygen, cycles of heating and cooling, radiation, and collisions with small meteoroids. Materials that perform well will be considered for use on satellites and future space exploration vehicles.
Crystallization, melting, solidification, and the behavior of fluids are also being studied. Since these processes are different in zero-gravity conditions, the results of the experiments will address unanswered questions in physics and help in designing better ways to manufacture various materials.
The astronauts on the space station have a great view of Earth, and they have collected hundreds of thousands of images. They observe glaciers and floating rafts of ice, wispy clouds in the upper atmosphere, brilliantly colored auroras, and sprites—flashes of light occurring in the upper atmosphere over thunderstorms. They have also captured unique, high-spatial-resolution shots of city lights and are studying the ecological effects of industrial activities. These observations will reveal long-term planetary changes.
Man or machine
The data you speak of about what we are learning from International Space Station research was quite adequately collected by the Russians during Mir’s 11 years in orbit. There is no valid mission for the International Space Station. I suggest you research the use of humans versus robots for space exploration.
Many unanswered questions about the effects of spaceflight on human health, as well as fundamental questions in materials science, can be explored in the microgravity environment on the International Space Station. However, my description of the space station research was not intended to make the case for a human space program. Whether there is an interesting research plan for the space station is a different question than whether that research justifies the cost of building and maintaining it. The latter is not a purely scientific question.
The total cost to complete the International Space Station is projected to be well over $100 billion, shared by the United States, Russia, Japan, Canada, and several European countries. NASA spends about $2 billion per year on the space station. In addition, it spends approximately $4 billion on the Space Shuttle, which is mainly used to service the space station.
Proponents of manned spaceflight offer as a rationale the human need to explore and the power of human spaceflight to excite the public. They acknowledge that geopolitics has always been a large driving force behind government spending on the space program, but they argue that science ultimately benefits. They say it is important to put things into context: NASA’s budget is a tiny fraction of the U.S. defense budget.
The main scientific argument in favor of manned space exploration is that humans can make critical decisions about data collection. For example, although three unmanned Soviet probes collected and returned rock samples from the moon, the Apollo astronauts identified and collected samples that were considerably more diverse and consisted of 1,000 times as much material.
Critics of manned space exploration say that the expense and risk to human life outweigh the benefits. They do not dispute that manned spaceflights have yielded important scientific knowledge, but they argue that robotic missions are revolutionizing our knowledge of the solar system and are becoming even more effective and efficient as the technology improves.
For example, scientists are still learning about Saturn and its moon Titan from the data provided by Cassini and the Huygens probe, and about Mars from the rovers Spirit and Opportunity. The Cassini/Huygens and Spirit/Opportunity missions cost $3 billion and $1 billion, respectively.
When shooting free throws, some basketball players have a very low arc, and others have a high one. Some players like to shoot off the backboard. Has it ever been proven which strategies are best from a mathematical or scientific viewpoint? Also, in baseball, coming off the bat, what angle of the ball gives the most distance?
In basketball, using the backboard provides about a 50 percent better chance of succeeding for close shots (except for very tall players, who can dunk), according to the paper “Basketball Shooting Strategies,” published in Sports Engineering.
Energy absorbed as the ball bounces off the backboard helps compensate for shooting error. As a player gets farther from the basket, the advantage of the backboard diminishes.
The merits of the overhand push shot compared with the underhand loop shot are still disputed. Underhand shots are more stable and allow a player to put more spin on the ball. Overhand shots decrease the distance to the hoop and minimize the velocity of release.
The optimal angle of release for the basketball depends on a player’s position. For instance, in “Basketball Shooting Strategies” it was determined to be 48 degrees (upward from horizontal) for a player attempting a 3-point jump shot 20 feet from the basket, releasing the ball 8 feet above the floor.
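The physics behind a figure like that can be sketched with simple drag-free projectile arithmetic. In the snippet below, the 20-foot distance, 8-foot release height, and 10-foot rim come from the scenario above; the function name and the no-air-resistance model are my own simplification, not the paper’s full analysis.

```python
import math

def release_speed(distance_ft, rise_ft, angle_deg, g=32.2):
    """Speed (ft/s) needed for a drag-free ball launched at angle_deg
    to pass through a point distance_ft away and rise_ft above the
    release point, from y = x*tan(a) - g*x^2 / (2*v^2*cos^2(a))."""
    a = math.radians(angle_deg)
    denom = 2 * math.cos(a) ** 2 * (distance_ft * math.tan(a) - rise_ft)
    return math.sqrt(g * distance_ft ** 2 / denom)

# 3-point jump shot: 20 ft to the basket, released 8 ft up, rim at 10 ft
v = release_speed(distance_ft=20, rise_ft=2, angle_deg=48)
print(f"required release speed: {v:.1f} ft/s")  # about 26.7 ft/s
```

Not coincidentally, 48 degrees is essentially the minimum-speed angle for that geometry (45 degrees plus half the angle of elevation to the rim), and the minimum-speed trajectory is also the most forgiving of small errors.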
In baseball, mathematical models to determine the optimal batting angle must take into consideration nearly 30 factors. These include the physical features of the ball and bat, and the spin, speed, and direction of the pitched ball.
The optimal bat swing angle decreases from about 9 degrees (upward from horizontal) to 7 degrees as the pitch changes from fastball with backspin, to knuckleball with no spin, to curveball with topspin, according to the paper “How to Hit Home Runs,” published in the American Journal of Physics.
Undercutting the ball center with the bat also helps maximize the ball’s range. Optimal undercut is about an inch. Slightly less undercut is needed for a curveball than for a fastball.
Undercutting gives the ball backspin. Because of an aerodynamic lift force, a baseball projected with backspin travels farther than one without. However, a baseball can be projected faster without spin. Therefore, optimal batting trades off spin and speed.
Slow curveballs pitched with topspin can be batted farther than fastballs with backspin because a ball with initial topspin has a larger outgoing backspin. But for a given pitch type, batting range increases with pitch speed.
What is the most widely accepted method by which the Egyptians built the Giza Pyramids?
Most Egyptologists believe that the pyramid is the natural evolution of the burial system, which began with a simple pit and progressed to the mastaba—a rectangular structure made of brick or stone. The first known Egyptian pyramid, the Step Pyramid of Djoser, probably began as a mastaba and was expanded by adding successively smaller mastabas on top.
In the century between the construction of the Step Pyramid and the Great Pyramid at Giza, the ancient Egyptians perfected their craft through trial and error. For example, some archaeologists believe that the tower-like Meidum Pyramid began as a step pyramid and suffered a catastrophic collapse during an attempt to convert it into a true pyramid.
The collapse at Meidum may have occurred while the Bent Pyramid was being constructed. If so, this could explain why the angle of ascent decreases abruptly partway up the Bent Pyramid, and why the construction technique also changed at this bend. Up to the bend, the stones in the pyramid body were laid to slope inward. After the bend, and in later pyramids, the stones were laid horizontally, a more stable configuration.
Construction on the Great Pyramid of Giza began about 2600 B.C. It is estimated to have taken 20 years and perhaps 30,000 workers (although estimates vary widely). The builders were likely a combination of skilled craftsmen and peasants who were unable to farm during the Nile’s flood season.
Some of the stone was quarried nearby, and some came from upriver and was transported by barge at flood time. It is thought that the ancient Egyptians possessed no tools more sophisticated than levers, rollers, and bronze saws. Sleds lubricated with water may have been used to drag the stones up a ramp to the growing pyramid. As each new layer of stone was laid, the ramp was extended in length, as well as height, to keep its slope constant.
The workmanship of the Great Pyramid is extraordinary. For example, it rests on a base of limestone blocks that is within half an inch of being perfectly level. Such accuracy was likely achieved by flooding the area, leaving just the high spots exposed. These would be cut down, some water released, and the process repeated until the base was level.
The Great Pyramid still holds many mysteries. One is the purpose of the four “air shafts” that run diagonally through it. Such shafts would have been a construction nightmare and are absent from previous and subsequent pyramids.
If the Great Pyramid at Giza could be weighed, would it be heavier than every other building in the world?
Modern buildings do not compete with the Great Pyramid at Giza in Egypt in terms of mass. In fact, better materials and design have permitted skyscrapers to become less massive even as they have grown taller. For example, Chicago’s Willis Tower (formerly the Sears Tower) weighs 223,000 tons, 142,000 tons less than the Empire State Building, which was built four decades earlier and is 200 feet (61 meters) shorter.
Designers of modern buildings are usually interested in maximizing internal space; consequently, buildings are up to 95 percent air on the inside. On the other hand, the Great Pyramid is nearly solid stone with the exception of two small burial chambers. Most descriptions of the Great Pyramid give its weight as six million tons.
However, according to Guinness World Records, the largest pyramid is actually the Quetzalcóatl Pyramid in Cholula, Mexico. Its volume is 4.3 million cubic yards, compared to 3.27 million cubic yards for the Great Pyramid at Giza.
Unable to find any estimates of the mass of the Quetzalcóatl Pyramid, I calculated it from the pyramid’s volume and the density of the material it is made from—adobe. My rough calculation puts it at slightly less than the mass of the (granite, basalt, and limestone) Great Pyramid at Giza, making the latter the most massive building in the world.
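My back-of-the-envelope calculation is easy to reproduce. The volumes below are the Guinness figures quoted above; the densities are my own ballpark assumptions for adobe brick and for limestone-dominated masonry, so treat the result as a rough comparison rather than a precise weight.

```python
CUBIC_YARD_TO_M3 = 0.7646  # one cubic yard in cubic meters

def pyramid_mass_tonnes(volume_cubic_yards, density_kg_per_m3):
    """Rough mass estimate: volume times an assumed average density."""
    return volume_cubic_yards * CUBIC_YARD_TO_M3 * density_kg_per_m3 / 1000

giza = pyramid_mass_tonnes(3.27e6, 2500)    # mostly limestone, ~2,500 kg/m^3
cholula = pyramid_mass_tonnes(4.3e6, 1600)  # adobe brick, ~1,600 kg/m^3
print(f"Giza:    {giza / 1e6:.1f} million tonnes")
print(f"Cholula: {cholula / 1e6:.1f} million tonnes")
```

The denser stone more than makes up for the smaller volume, and the Giza estimate lands close to the six-million-ton figure in most descriptions.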
Today, websites for physics departments hardly mention simple mechanics and the simple machines that produce a mechanical advantage. I couldn’t find any faculty members who conduct research in these areas. Much research has been done on miniaturization (micro machines, lab on a chip, etc.). Just as simple machines are applied on a “macro” scale, I bet there is just as much of an opportunity to employ them on a “micro” scale.
Classical mechanics is built on the foundations of Newton’s laws of inertia, acceleration, action and reaction, and gravitation. It describes the motions of simple machines, such as levers, ramps, screws, pulleys, wheels, and axles, and the compound machines made from them. These principles have been thoroughly understood for centuries, so a modern physicist would have difficulty securing funding to research the fundamental principles that govern the behavior of “macro” machines.
However, ongoing academic and industrial research in various fields is applied to making better machines. For example, materials science is an interdisciplinary field that brings together physicists, chemists, engineers, and even biologists, because some man-made materials are inspired by nature. New materials can be tailored for optimal performance, slower aging, and resistance to shear and other types of stresses.
The hubbub about “micro” machines (actually, most of it is about even smaller “nano” machines) is not the result of researchers hopping on a little bandwagon. Experiments in which researchers use an atomic-force microscope to probe molecular machines are analogous to experiments that were once used to develop fundamental laws of macroscopic mechanics. But the functioning of molecular machines is not analogous to that of big machines.
Unlike a large object rolling down an inclined plane, small particles in fluids experience significant drag and thermal fluctuations. Molecules are always in a state of random motion and are constantly colliding with each other. Electromagnetic interactions make molecules sticky. These factors throw a wrench in attempts to miniaturize machines. If it were possible to keep scaling down a macro machine, eventually, due to friction, random motion, and intermolecular interactions, it would stop working.
Yet molecular machines are a reality. As you read this, lots of protein motors are hard at work in your body, moving cargo within cells, beating the cilia in your windpipe, and contracting your muscles. Also, chemists have synthesized very simple molecular machines from smaller molecules. The development of more complex molecular machines will require advances in the understanding of these systems’ chemistry and physics. The interesting outstanding questions about basic scientific principles, along with the availability of new tools for investigating the questions, fuel researchers’ fascination with the tiny.
One evening, I was walking through the house in the dark and was struck by how many little lights there were: clock radios, DVR, appliances, computers, surge protectors. I did a count and came up with 50 LED lights. How much energy will they use in a year, operating 24 hours a day?
The energy use of any device is its power consumption multiplied by how long it is on. The amount of power consumed by light-emitting diodes (LEDs) typically varies from less than a watt to a few watts, and it depends on the other elements in the circuit that draw power. For household LEDs, a good estimate is 0.5 watts per LED.
Therefore, energy used equals 50 LEDs times 0.5 watts per LED times 24 hours times 365 days. This comes out to 219,000 watt-hours, or 219 kilowatt-hours (kWh). According to my last electric bill, the cost per kWh, including taxes and other charges, was almost 14 cents. So the yearly cost of all those little lights is approximately $30.
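That arithmetic is easy to rerun with your own numbers; the 0.5-watt and 14-cent figures below are the estimates from above, so substitute your own LED count and utility rate.

```python
num_leds = 50
watts_each = 0.5           # rough estimate for a small indicator LED circuit
hours_per_year = 24 * 365
price_per_kwh = 0.14       # all-in rate from my bill; yours will differ

kwh = num_leds * watts_each * hours_per_year / 1000  # watt-hours -> kWh
cost = kwh * price_per_kwh
print(f"{kwh:.0f} kWh per year, about ${cost:.2f}")  # 219 kWh, about $30.66
```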
Why do certain electrical cords (those used by fans, in particular) curl up over time? Certain others do not.
Most small household appliance cords have a jacket made of rubber or plastic, some varieties of which are cheaper and less durable than others. Rubber and plastic consist of long chain-like molecules called polymers. The number of bonds between the chains, which prevent them from sliding past each other, and the length of the chains, give a material its characteristic durability and flexibility.
Over time, pressure (bending), exposure to sunlight, temperature changes, and exposure to certain chemicals can cause the polymer molecules to become misaligned and/or to form cross-links, which deform and stiffen the cord jacket, respectively.
Mr. Weasley’s collection
I just returned from a vacation, and I’m wondering how electrical utilization evolved such that the United States, England, and Europe implemented different outlet designs and voltage standards. How many different outlet designs and voltage standards are there in the world?
Enough different types of plugs are in use around the globe to make the plug-collecting hobby of Harry Potter’s best friend’s dad seem stimulating. Electric Current Abroad, a publication available from the U.S. Department of Commerce, lists 12 different plug types, but it says this list includes only those most commonly used.
Electricity was used primarily for lighting when it was introduced to households in the late 1800s. The first domestic appliances were plugged into light sockets. In the 1920s, the first two-prong plugs and sockets were manufactured, followed by three-prong plugs and sockets. The third prong—the ground—is a safety feature. It disconnects the power supply by tripping a breaker or blowing a fuse in the event of a short circuit.
As electricity use in the home and office flourished, different countries came up with their own variations of the two- and three-prong plugs. Even in that pre-globalization era, efforts were undertaken to come up with international standards for plugs, but finding one that would fit with all existing installations was difficult.
The savvy globe-trotter can easily purchase an adaptor to plug into a strange-shaped outlet. But beware—an adaptor just makes it possible to get the plug into the wall. It does not change the line voltage to suit the appliance. Switching the voltage requires a converter or transformer.
Two basic voltage standards are used: the North American 110-120 volts and the European 220-240 volts. They arose as the electric power industry evolved simultaneously in Europe and North America. At that time, many countries were colonized by European powers, so the European standard is more common.
Nearly all countries use alternating current (AC). The first distribution system, designed by Thomas Edison, was direct current (DC). In the end, AC proved to be more practical, because it could be transmitted at high voltage and then stepped down to a lower voltage. This allowed smaller wires to be used for a given amount of power. The frequency of AC is 60 Hz (cycles per second) in some countries and 50 Hz in others. Simple adaptors and transformers cannot convert the frequency, and certain devices, such as clocks and appliances with synchronous motors, are sensitive to it.
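The advantage of high-voltage transmission comes from the fact that resistive loss in a wire grows with the square of the current (loss = I²R, with I = P/V). A short sketch with made-up numbers shows the effect; the 1-ohm wire and the two voltages are hypothetical illustrations, not real grid figures.

```python
def line_loss_watts(power_w, line_volts, wire_ohms):
    """Resistive loss in a transmission wire delivering power_w
    at line_volts: loss = I^2 * R, where I = P / V."""
    current = power_w / line_volts
    return current ** 2 * wire_ohms

# Hypothetical: 10 kW delivered through a wire with 1 ohm of resistance,
# at household voltage versus a line voltage 20 times higher
low = line_loss_watts(10_000, 120, 1.0)
high = line_loss_watts(10_000, 2_400, 1.0)
print(f"loss at 120 V: {low:.0f} W; loss at 2,400 V: {high:.1f} W")
```

Raising the voltage 20-fold cuts the loss 400-fold, which is why the same power can travel long distances on much thinner wires.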
Fortunately, many new electronic devices come with power supplies designed for use almost anywhere. Check the label for INPUT, which should list acceptable current type (AC or DC), voltage range, and frequency.
Electrical energy transmitted by wires has long been a staple of our economy. Alternating current is generated in three overlapping phases, with voltage stepped up by transformers for long-distance transmission. Multiple generators are synchronized so that the power phases are in consonance. Now we are adding wind and solar electrical power to our grids. How are these alternative energy sources created or adapted to be compatible with the power grids?
The steam turbines that produce most of the electricity in the United States generate alternating current (AC)—current that reverses direction many times per second. Most of these turbines are powered by the burning of fossil fuels, especially coal (nearly 50 percent of electricity generation) and natural gas (over 20 percent), or by nuclear fission (nearly 20 percent). Steam is also produced by burning biomass materials, such as wood or waste, by geothermal heat from the Earth’s crust, or by radiant heat from the sun.
Turbines are also driven by flowing or falling water—hydropower—which generates 6 percent of U.S. electricity, and wind, which is a small (1 percent) but growing source of energy. In some cases gears are used to maintain constant generator output in response to variable input. Otherwise, variable-speed generation turbines, including those driven by wind, must be linked to the grid through a device called an inverter that supplies code-compliant power.
Unlike AC produced by turbines, solar photovoltaic cells produce direct current (DC)—current that flows in one direction. An inverter converts DC into AC. If the system is connected to the grid, the inverter also synchronizes the current with the AC cycle on the grid. A basic inverter operates by running DC input to two switches that feed opposite sides of a transformer. The transformer converts direct current into alternating current when the switches are turned on and off rapidly.
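The two-switch idea can be caricatured numerically: connect the DC source to the transformer with one polarity for half a cycle, then flip it. The toy sketch below (the names and numbers are mine, and a real inverter adds filtering and control electronics) produces the square-wave approximation of AC described above.

```python
def square_wave_inverter(dc_volts, freq_hz, samples, duration_s):
    """Toy model of a two-switch inverter: the DC source feeds the
    transformer in alternating polarity each half cycle, producing
    a square-wave approximation of AC."""
    out = []
    for i in range(samples):
        t = i * duration_s / samples
        half_cycles = int(t * freq_hz * 2)      # which half cycle is this?
        polarity = 1 if half_cycles % 2 == 0 else -1
        out.append(polarity * dc_volts)
    return out

# One full 60 Hz cycle from a 12-volt DC source
wave = square_wave_inverter(dc_volts=12, freq_hz=60, samples=1000, duration_s=1/60)
```

Better inverters switch in more elaborate patterns (pulse-width modulation, for example) so that, after filtering, the output approximates a smooth sine wave.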
In the case of distributed power generation—for example, individuals with photovoltaic systems that send excess power back to the grid—inverters also play a critical safety role. If the power on the grid is interrupted, the inverter switches to “island mode” so that no power is sent to the grid. This feature protects utility transmission line workers attempting to make repairs on the grid.
The existing grid was designed for centralized generation, and it cannot integrate distributed power generation on a large scale. To accelerate integration, the U.S. Department of Energy recently launched the Solar Energy Grid Integration Systems (SEGIS) research initiative. SEGIS aims to develop intelligent system controls that facilitate communication between utilities and distributed photovoltaic systems to improve energy management.
Why does California require the use of tire chains when you drive on snowy roads in winter? We stopped using tire chains in New England about 25 years ago.
Chains provide good traction but can damage roads. Some states ban chains, while others, like California, require them. I can think of three reasons California requires them:
1. The average Californian has less experience driving in the snow than those who grew up in the Great White North.
2. In California, most people do not put snow tires on their cars in the winter.
3. The snowy part of the state also happens to be mountainous, and the combination of unpredictable weather, steep grades, inexperience, and poor tires can spell disaster.
How close does a 1-horsepower engine relate to the power of an actual horse? Did they do actual measurements, or did they just adopt the term? Did they have steam engines before they had the term horsepower? If so, how did they define the power of those early engines?
James Watt is usually credited with introducing the term horsepower in the late 1700s to market his new steam engines. But just as Watt did not invent the steam engine (his rotative steam engines built on earlier pumping steam engines), he was not the first to compare engine power to a horse’s power. Nearly a century earlier, Thomas Savery, who invented the first steam engine that approached commercial success, also stated the power of his engines in terms of the number of horses they could replace.
Between Savery’s and Watt’s time, the power of a horse was defined inconsistently by different engine makers. Watt estimated that a horse could sustain a work rate of 33,000 foot-pounds per minute, which has been the accepted definition of 1 horsepower ever since. Power is also now measured in watts, and 1 horsepower is equivalent to 746 watts.
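Watt’s figure converts neatly to SI units; the few lines below reproduce the 746-watt equivalence from the standard definitions of the foot and the pound-force.

```python
FT_TO_M = 0.3048                   # one foot in meters
LBF_TO_N = 0.45359237 * 9.80665    # one pound-force in newtons

ft_lbf_per_min = 33_000            # Watt's definition of 1 horsepower
# foot-pounds per minute -> joules per minute -> joules per second (watts)
watts = ft_lbf_per_min * FT_TO_M * LBF_TO_N / 60
print(f"1 horsepower = {watts:.1f} watts")  # 1 horsepower = 745.7 watts
```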
Sources differ on how Watt came up with his definition of horsepower. Some say he based it on how quickly a draft horse could turn a mill wheel. According to another account, he based it on ponies lifting coal at a coal mine, but he increased the number by 50 percent to estimate the power of a horse, rather than that of a pony. Another source claims that Watt based it on the power of a horse but deliberately overestimated a horse’s power by 50 percent so that he would not be accused of exaggerating the number of horses his engines could replace.
In pulling contests, draft horses have been observed to have a peak power output of nearly 15 horsepower for a few seconds. The average horse cannot work at a rate of 1 horsepower over long periods, but a fit draft horse can sustain 1 horsepower for hours.
Although the definition of horsepower was standardized more than two centuries ago, the method of measuring an engine’s power has varied. For example, before the 1970s, American automakers measured and advertised their engines’ gross power—the power at the engine’s crankshaft with no belt-driven accessories. Since then, automakers have quoted net horsepower—the power remaining after losses caused by standard power-consuming accessories.
Man’s best friend v. 2.0
A group of dog fanciers has created a “new breed,” the Labradoodle, by mating a Labrador with a poodle. How many generations would it take for the “breed” to breed true—that is, for a Labradoodle mated to a Labradoodle to produce a Labradoodle?
It depends on what characteristics (coat color and texture, height, bone structure) define a Labradoodle, and how much variation in each characteristic is considered acceptable.
As any parent knows, genetics can be surprising. For instance, two brown-eyed people can have a blue-eyed child. The gene for brown eyes is dominant—a child who inherits the gene for brown eyes from either parent will have brown eyes. The gene for blue eyes is recessive—a child needs copies of the gene from both parents to be blue-eyed.
As a dog breeder, it is easier to select for a recessive trait, because when a dog displays that trait, one can infer its genetic makeup. When a dog has a dominant trait, it could carry two dominant genes for the trait, or the dominant gene and a recessive gene. In the latter case, descendants may crop up that have inherited two copies of the recessive gene. Deducing which ancestors were carrying that recessive gene, and eliminating the gene from the gene pool, may take many generations of crosses.
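The bookkeeping can be illustrated with a toy Punnett-square calculation (the allele letters and the function are hypothetical, standing in for any single dominant/recessive gene pair). Crossing two dogs that look identical because each hides a recessive gene shows why the trait keeps resurfacing.

```python
from itertools import product

def offspring(parent1, parent2):
    """All equally likely offspring genotypes from two parents,
    each contributing one of its two gene copies."""
    return [''.join(sorted(a + b)) for a, b in product(parent1, parent2)]

# 'B' = dominant version of the gene, 'b' = recessive. Cross two carriers:
pups = offspring('Bb', 'Bb')       # ['BB', 'Bb', 'Bb', 'bb']
shows_recessive = pups.count('bb')
hidden_carriers = pups.count('Bb')
print(f"{shows_recessive}/4 show the recessive trait; "
      f"{hidden_carriers}/4 look dominant but carry the hidden gene")
```

Of the three pups showing the dominant trait, two are carriers, which is why flushing a recessive gene out of a breeding line takes so many generations.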
In practice, genetics is even more complicated. Usually a gene comes in more than two possible versions. Also, the activity of genes can be modified by other genes. For example, multiple genes interact to specify the color and shade of a dog’s coat.
It took more than a quarter century for Boston terriers, black Russian terriers, and golden retrievers, breeds developed since the 1850s (recent enough for a somewhat reliable historical account), to breed true.
The process can be sped up by increasing inbreeding, because breeding related dogs decreases the amount of variation in the gene pool. However, too much inbreeding can lead to genetic defects and health problems, such as decreased immunity and increased risk of cancer.
The Labradoodle originated in Australia in the 1970s or 1980s (accounts differ) as an attempt to produce a low-allergy guide dog for the blind. Labradoodles are not recognized as a breed by the American Kennel Club or other well-respected registries. The Labradoodle breed standard is currently too broad. For example, three categories of coat textures are recognized, ranging from a relatively flat Labrador-like coat to a curly poodle-type coat.
Taking to the sky
I read that the Wright brothers, who flew the first airplane, built their own engine, although at the time nobody knew how to build one and fire it up. There were no tools to make the hole for the piston, and so forth. Can you shed some light on this?
Charles Taylor, a talented machinist who worked in the Wright brothers’ bicycle shop, built the engine for the 1903 Wright Flyer, generally considered the world’s first powered airplane that actually flew. The engine was a 12-horsepower, four-cylinder internal-combustion model weighing 170 pounds.
Taylor purchased some of the parts he needed. The ignition switch came from the local hardware store. Parts that needed to be cast from molten metal were ordered from a foundry. Otherwise, Taylor used the tools in the Wrights’ shop. For example, their lathe bored the holes for the pistons. The shop was set up for metalworking because when the Wrights were not refining their flying machines, they designed and built custom bicycles.
It took Taylor six weeks to make the engine. However, he did not invent the internal combustion engine. Such engines had been around for four decades and were being used in automobiles when the Wrights were building their Flyer. The Wrights wrote to a dozen automobile companies but could not find an engine that was sufficiently light and powerful.
The engine was very simple by today’s standards. Gasoline was fed into the engine by gravity from a fuel tank attached to a wing. It had no carburetor and no spark plugs, and it tended to stall.
It is especially impressive that the Wright Flyer took to the sky while its main competition, Samuel Langley’s Aerodrome, which had a considerably more powerful engine, failed. The Wrights’ advantage stemmed from the very scientific approach they took toward designing an airplane.
Unlike other aspiring aviators of the time, the Wrights realized that they would need to control the airplane’s pitch (up-and-down movement), yaw (side-to-side movement), and roll (rotation around an axis running the length of the plane). They built and tested a series of gliders beginning in 1899 and carefully noted the effects of each change they made. They even created their own wind tunnel to test small-scale models of different types of wings.
It took many crashes and improvements of their successive gliders before they felt ready to add a motor. Then, Taylor’s motor, despite its limitations, permitted the Wrights to make history.
When you hold up a carpenter’s level, with the bubble in the middle, what is it level to? The Earth is round, so how can anything be level?
When the bubble is centered, the carpenter’s level is parallel to a tangent to the Earth at that location. A tangent is a straight line that touches a sphere at just one point. It makes a 90-degree angle with the radius of the sphere at that point.
Since Earth is a bumpy sphere, the tangent is not always parallel to the ground itself. On a hillside, the tug of gravity is still toward the center of the planet, so the level is aligned when it is perpendicular to a line to the Earth’s center.
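For any tool-sized distance, the difference between “tangent-flat” and “curved with the Earth” is unmeasurably small. A quick calculation using the standard sagitta approximation (drop ≈ L²/2R) makes the point; the 4-foot level length is just an example I chose.

```python
EARTH_RADIUS_M = 6.371e6  # mean radius of the Earth

def curvature_drop_m(length_m):
    """How far Earth's surface curves away from a tangent line
    over a horizontal length, approximated as L^2 / (2R)."""
    return length_m ** 2 / (2 * EARTH_RADIUS_M)

drop = curvature_drop_m(1.2)  # a 4-foot level is about 1.2 m long
print(f"curvature over a 4-ft level: {drop * 1e9:.0f} nanometers")
```

That is about a tenth of a micron over four feet, far below anything a bubble vial could register.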
Architecture by numbers
How were the ancient Romans able to engineer and build all their magnificent buildings using their unwieldy number system?
Roman architecture borrowed both principles of design and methods of construction from the Greeks. However, it was the Romans’ adoption of concrete as a standard construction technique that revolutionized architecture. Concrete permits more imaginative design because it can be poured, and because it is strong enough to span vast distances.
Up until the last two centuries, structures were designed and built based on prior experience. A concept would be tried, and if it worked, variations of the concept might be employed for generations. Catastrophic failures were not uncommon, but architects and engineers learned from the failures and modified their designs accordingly.
The Romans used mathematics to design their buildings, particularly geometry and systems of proportions. However, only much more recently has mathematics been used to design buildings by taking into consideration the mechanical properties of the materials being used and the loads acting on a structure. These calculations require calculus, which was not developed until the 17th century.
Had the Romans needed more sophisticated mathematics to design their buildings, their number system might have been a constraint, but lifelong experience with Roman numerals would have made them seem a lot less cumbersome to them than they do to us.