Death by Black Hole: And Other Cosmic Quandaries - Neil deGrasse Tyson (2014)

SECTION 3. WAYS AND MEANS OF NATURE

Chapter 17. COLORS OF THE COSMOS

Only a few objects in Earth’s nighttime sky are bright enough to trigger our retina’s color-sensitive cones. The red planet Mars can do it. So can the blue supergiant star Rigel (Orion’s right kneecap) and the red supergiant Betelgeuse (Orion’s left armpit). But aside from these standouts, the pickings are slim. To the unaided eye, space is a dark and colorless place.

Not until you aim large telescopes does the universe show its true colors. Glowing objects, like stars, come in three basic colors: red, white, and blue—a cosmic fact that would have pleased the founding fathers. Interstellar gas clouds can take on practically any color at all, depending on which chemical elements are present, and depending on how you photograph them, whereas a star’s color follows directly from its surface temperature: Cool stars are red. Tepid stars are white. Hot stars are blue. Very hot stars are still blue. How about very, very hot places, like the 15-million-degree center of the Sun? Blue. To an astrophysicist, red-hot foods and red-hot lovers both leave room for improvement. It’s just that simple.

Or is it?

A conspiracy of astrophysical law and human physiology bars the existence of green stars. How about yellow stars? Some astronomy textbooks, many science-fiction stories, and nearly every person on the street make up the Sun-Is-Yellow movement. Professional photographers, however, would swear the Sun is blue; “daylight” film is color-balanced on the expectation that the light source (presumably the Sun) is strong in the blue. The old blue-dot flash cubes were just one example of the attempt to simulate the Sun’s blue light for indoor shots when using daylight film. Loft artists would argue instead that the Sun is pure white, offering them the most accurate view of their selected paint pigments.

No doubt the Sun acquires a yellow-orange patina near the dusty horizon during sunrise and sunset. But at high noon, when atmospheric scattering is at a minimum, the color yellow does not spring to mind. Indeed, light sources that are truly yellow make white things look yellow. So if the Sun were pure yellow, then snow would look yellow—whether or not it fell near fire hydrants.

TO AN ASTROPHYSICIST, “cool” objects have surface temperatures between 1,000 and 4,000 degrees Kelvin and are generally described as red. Yet the filament of a high-wattage incandescent lightbulb rarely exceeds 3,000 degrees Kelvin (tungsten melts at 3,680 degrees) and looks very white. Below about 1,000 degrees, objects become dramatically less luminous in the visible part of the spectrum. Cosmic orbs with these temperatures are failed stars. We call them brown dwarfs even though they are not brown and emit hardly any visible light at all.
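
In the astronomer’s scheme, the color label follows from how the blackbody (Planck) curve tilts across the visible band: cool surfaces tilt toward the red, hot ones toward the blue, and once the peak moves beyond the violet, making the surface hotter still just keeps the blue end on top, which is why very hot and very, very hot both read as blue. Here is a minimal Python sketch of that bookkeeping; the 450- and 650-nanometer probe wavelengths and the classification thresholds are arbitrary illustrative choices, not standard astronomical values.

    import math

    H = 6.626e-34   # Planck constant (J s)
    C = 2.998e8     # speed of light (m/s)
    K = 1.381e-23   # Boltzmann constant (J/K)

    def planck(wavelength_m, temp_k):
        """Blackbody spectral radiance at a single wavelength (Planck's law)."""
        a = 2.0 * H * C**2 / wavelength_m**5
        return a / (math.exp(H * C / (wavelength_m * K * temp_k)) - 1.0)

    def apparent_color(temp_k, blue=450e-9, red=650e-9):
        """Rough color label from the tilt of the Planck curve across the visible band."""
        ratio = planck(blue, temp_k) / planck(red, temp_k)
        if ratio < 0.7:
            return "red"       # cool: the red end of the curve dominates
        if ratio < 1.6:
            return "white"     # tepid: blue and red come out roughly even
        return "blue"          # hot: the ratio saturates near 4.4, so hotter stays blue

    for t in (3_000, 5_800, 15_000, 15_000_000):
        print(t, apparent_color(t))   # red, white, blue, blue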

While we are on the subject, black holes aren’t really black. They actually evaporate, very slowly, by emitting small quantities of light from the edge of their event horizon in a process first described by the physicist Stephen Hawking. Depending on a black hole’s mass, it can emit any form of light. The smaller black holes are, the faster they evaporate, ending their lives in a runaway flash of energy rich in gamma rays, as well as visible light.
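
Hawking’s result supplies the scalings behind that claim: a hole’s temperature runs inversely with its mass, and its evaporation time grows as the cube of its mass. Here is a minimal Python sketch of those two relations, with rounded physical constants and the details of which particles get emitted ignored.

    import math

    G     = 6.674e-11    # gravitational constant (m^3 kg^-1 s^-2)
    C     = 2.998e8      # speed of light (m/s)
    HBAR  = 1.055e-34    # reduced Planck constant (J s)
    KB    = 1.381e-23    # Boltzmann constant (J/K)
    M_SUN = 1.989e30     # mass of the Sun (kg)
    YEAR  = 3.156e7      # seconds in a year

    def hawking_temperature_k(mass_kg):
        """T = hbar c^3 / (8 pi G M k_B): smaller holes are hotter."""
        return HBAR * C**3 / (8 * math.pi * G * mass_kg * KB)

    def evaporation_time_years(mass_kg):
        """t ~ 5120 pi G^2 M^3 / (hbar c^4): lifetime grows as mass cubed."""
        return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4) / YEAR

    print(hawking_temperature_k(M_SUN))    # about 6e-8 kelvins: colder than the microwave background
    print(evaporation_time_years(M_SUN))   # about 2e67 years: vastly longer than the universe's age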

MODERN SCIENTIFIC IMAGES shown on television, in magazines, and in books often use a false color palette. TV weather forecasters have gone all the way, denoting things like heavy rainfall with one color and lighter rainfall with another. When astrophysicists create images of cosmic objects, they typically assign an arbitrary sequence of colors to an image’s range of brightness. The brightest part might be red and the dimmest parts blue. So the colors you see bear no relation at all to the actual colors of the object. As in meteorology, some of these images have color sequences that relate to other attributes, such as the object’s chemical composition or temperature. And it’s not uncommon to see an image of a spiral galaxy that has been color-coded for its rotation: the parts coming toward you are shades of blue while the parts moving away are shades of red. In this case, the assigned colors evoke the widely recognized blue and red Doppler shifts that reveal an object’s motion.

For the map of the famous cosmic microwave background, some areas are hotter than average. And, as must be the case, some areas are cooler than average. The range spans about one one-hundred-thousandth of a degree. How do you display this fact? Make the hot spots blue, and the cold spots red, or vice versa. In either case, a very small fluctuation in temperature shows up as an obvious difference on the picture.
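
The recipe behind such a picture is simple bookkeeping: subtract the average, then hand the leftover deviations to a diverging color map so that hot and cold land on opposite hues. Here is a minimal matplotlib sketch, using made-up random fluctuations in place of a real microwave background map.

    import numpy as np
    import matplotlib.pyplot as plt

    # Invented "sky map": an average temperature of 2.725 K with fluctuations
    # of a few hundred-thousandths of a degree (illustration only, not real data).
    rng = np.random.default_rng(0)
    sky = 2.725 + 1e-5 * rng.standard_normal((180, 360))

    # Subtract the mean so the tiny hot and cold deviations are all that remain,
    # then let a diverging colormap assign red to hot spots and blue to cold spots.
    deviation = sky - sky.mean()
    plt.imshow(deviation, cmap="coolwarm", vmin=-3e-5, vmax=3e-5)
    plt.colorbar(label="temperature deviation (K)")
    plt.title("False color: a hundred-thousandth of a degree, made obvious")
    plt.show()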

Sometimes the public sees a full-color image of a cosmic object that was photographed using invisible light such as infrared or radio waves. In most of these cases, we have assigned three colors, usually red, green, and blue (or “RGB” for short), to three different regions within the band. From this exercise, a full-color image can be constructed as though we were born with the capacity to see colors in these otherwise invisible parts of the spectrum.

The lesson is that common colors in common parlance can mean very different things to scientists than they do to everybody else. For the occasions when astrophysicists choose to speak unambiguously, we do have tools and methods that quantify the exact color emitted or reflected by an object, avoiding the tastes of the image maker or the messy business of human color perception. But these methods are not public-friendly. They involve the logarithmic ratio of the flux emitted by an object as measured through multiple filters in a well-defined system corrected for the detector’s sensitivity profile. (See, I told you it wasn’t public-friendly.) When that ratio decreases, for example, the object is technically turning blue no matter what color it appears to be.
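
For the record, the simplest such measure is a two-filter color index: take the calibrated flux through a blue filter and through a yellow-green “visual” filter, and form minus 2.5 times the base-10 logarithm of their ratio. The Python sketch below uses invented flux values and omits the zero-point calibration that ties real measurements to a standard photometric system.

    import math

    def color_index(flux_blue, flux_visual):
        """A two-filter color: -2.5 log10 of the flux ratio (zero-point calibration omitted).
        Smaller, or more negative, values mean a bluer object."""
        return -2.5 * math.log10(flux_blue / flux_visual)

    # Invented calibrated fluxes through blue and visual filters (arbitrary units):
    print(color_index(flux_blue=3.2, flux_visual=2.0))   # about -0.5: a hot, blue star
    print(color_index(flux_blue=0.9, flux_visual=2.0))   # about +0.9: a cool, red star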

THE VAGARIES OF human color perception took their toll on the wealthy American astronomer and Mars fanatic Percival Lowell. During the late 1800s and early 1900s, he made quite detailed drawings of the Martian surface. To make such observations, you need steady dry air, which reduces smearing of the planet’s light en route to your eyeball. In the arid air of Arizona, atop Mars Hill, Lowell founded the Lowell Observatory in 1894. The iron-rich, rusty surface of Mars looks red at any magnification, but Lowell also recorded many patches of green at the intersections of what he described and illustrated as canals—artificial waterways, presumably made by real live Martians who were eager to distribute precious water from the polar icecaps to their cities, hamlets, and surrounding farmlands.

Let’s not worry here about Lowell’s alien voyeurism. Instead, let’s just focus on his canals and green patches of vegetation. Percival was the unwitting victim of two well-known optical illusions. First, in almost all circumstances, the brain attempts to create visual order where there is no order at all. The constellations in the sky are prime examples—the result of imaginative, sleepy people asserting order on a random assortment of stars. Likewise, Lowell’s brain interpreted uncorrelated surface and atmospheric features on Mars as large-scale patterns.

The second illusion is that gray, when viewed next to yellow-red, appears green-blue, an effect first pointed out by the French chemist M. E. Chevreul in 1839. Mars shows a dull red surface broken by regions of gray-brown, and those color-neutral patches, surrounded by yellow-orange terrain, duly looked bluish green to Lowell’s eye.

In another peculiar but less embarrassing physiological effect, your brain tends to color balance the lighting environment in which you are immersed. Under the canopy of a rain forest, for example, where nearly all of the light that reaches the jungle floor has been filtered green (for having passed through leaves), a milk-white sheet of paper ought to look green. But it doesn’t. Your brain makes it white in spite of the lighting conditions.

In a more common example, walk past a window at night while the people inside are watching television. If the TV is the only light in the room, the walls will glow a soft blue. But the brains of the people immersed in the light of the television actively color balance their walls and see no such discoloration around them. This bit of physiological compensation may prevent residents of our first Martian colony from taking notice of the prevailing red of their landscape. Indeed, the first images sent back to Earth in 1976 from the Viking lander, though pale, were purposefully color-tinted to a deep red so that they would fulfill the visual expectations of the press.

AT MID-TWENTIETH CENTURY, the night sky was systematically photographed from a location just outside San Diego, California. This seminal database, known as the Palomar Observatory Sky Survey, served as the foundation for targeted, follow-up observations of the cosmos for an entire generation. The cosmic surveyors photographed the sky twice, using identical exposures in two different kinds of black-and-white Kodak film—one ultrasensitive to blue light, the other ultrasensitive to red. (Indeed the Kodak corporation had an entire division whose job it was to serve the photographic frontier of astronomers, whose collective needs helped to push Kodak’s R&D to its limits.) If a celestial object piqued your interest, you’d be sure to look at both the red- and blue-sensitive images as a first indication of the quality of light it emits. For example, extremely red objects are bright on the red image but barely visible on the blue. This kind of information guided subsequent observing programs for the targeted object.

Although modestly sized compared with the largest ground-based telescopes, the 94-inch Hubble Space Telescope has taken spectacular color images of the cosmos. The most memorable of these photographs are part of the Hubble Heritage series that will secure the telescope’s legacy in the hearts and minds of the public. What astrophysicists do to make color images will surprise most people. First, we use the same digital CCD technology found in household camcorders, except that we used it a decade before you did and our detectors are much, much higher quality. Second, we filter the light in any one of several dozen ways before it hits the CCD. For an ordinary color photo, we obtain three successive images of the object, seen through broadband red, green, and blue filters. In spite of their names, taken together these filters span the entire visible spectrum. Next, we combine the three images in software the way the wetware of your brain combines the signals from the red-, green-, and blue-sensitive cones in your retina. This generates a color picture that greatly resembles what you would see if the iris in your eyeball were 94 inches in diameter.
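
In software, the combining step amounts to scaling each filtered exposure and stacking the three as the red, green, and blue channels of a single array. Below is a bare-bones numpy sketch with invented data; real pipelines also register the exposures and apply gentler, nonlinear brightness stretches.

    import numpy as np

    def rgb_composite(red_img, green_img, blue_img):
        """Stack three filtered exposures into one color image, much as the brain
        combines the signals from its red-, green-, and blue-sensitive cones."""
        def stretch(img):
            # Scale an exposure to the 0..1 range so no single filter dominates.
            img = img.astype(float)
            lo, hi = img.min(), img.max()
            return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
        return np.dstack([stretch(red_img), stretch(green_img), stretch(blue_img)])

    # Three invented exposures of the same field through red, green, and blue filters:
    rng = np.random.default_rng(1)
    r, g, b = (rng.poisson(50, (512, 512)) for _ in range(3))
    color_image = rgb_composite(r, g, b)   # shape (512, 512, 3), ready to display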

Suppose, however, that the object were emitting light strongly at specific wavelengths due to the quantum properties of its atoms and molecules. If we know this in advance, and use filters tuned to these emissions, we can narrow our image sensitivity to just these wavelengths, instead of using broadband RGB. The result? Sharp features pop out of the picture, revealing structure and texture that would otherwise go unnoticed. A good example lives in our cosmic backyard. I confess to having never actually seen Jupiter’s red spot through a telescope. Sometimes it’s paler than at other times, and the best way to bring it out is through a filter that isolates the red wavelengths of light coming from the molecules in its gas clouds.

In the galaxy, oxygen emits a pure green color when found near regions of star formation, amid the rarefied gas of the interstellar medium. (This was the mysterious element “nebulium” described earlier.) Filter for it and oxygen’s signature arrives at the detector unpolluted by any ambient green light that may also occupy the scene. The vivid greens that jump out of many Hubble images come directly from oxygen’s nighttime emissions. Filter for other atomic or molecular species and the color images become a chemical probe of the cosmos. The Hubble can do this so well that its gallery of famous color images bears little resemblance to classical RGB images of the same objects taken by others who have tried to simulate the color response of the human eye.

The debate rages over whether or not these Hubble images contain “true” colors. One thing is certain: they do not contain “false” colors. They are the actual colors emitted by actual astrophysical objects and phenomena. Purists insist that we are doing a disservice to the public by not showing cosmic colors as the human eye would perceive them. I maintain, however, that if your retina were tunable to narrow-band light, then you would see just what the Hubble sees. I further maintain that my “if” in the previous sentence is no more contrived than the “if” in “If your eyes were the size of large telescopes.”

The question remains: if you added together the visible light of all light-emitting objects in the universe, what color would you get? In simpler phrasing, what color is the universe? Fortunately, some people with nothing better to do have actually calculated the answer to this question. After an erroneous report that the universe is a cross between medium aquamarine and pale turquoise, Karl Glazebrook and Ivan Baldry of Johns Hopkins University corrected their calculations and determined that the universe is really a light shade of beige, or perhaps cosmic latte. Glazebrook and Baldry’s chromatic revelations came from a survey of the visible light from more than 200,000 galaxies, occupying a large and representative volume of the universe.

The nineteenth-century English astronomer Sir John Herschel invented color photography. To the frequent confusion but occasional delight of the public, astrophysicists have been messing with the process ever since, and will continue forever to do so.