The Battery: How Portable Power Sparked a Technological Revolution - Henry Schlesinger (2010)
Chapter 15. The Endless Frontier
“A spider web of metal, sealed in a thin glass container, a wire heated to brilliant glow, in short, the thermionic tube of radio sets…Its gossamer parts, the precise location and alignment involved in its construction, would have occupied a master craftsman of the guild for months; now it is built for thirty cents. The world has arrived at an age of cheap complex devices of great reliability; and something is bound to come of it.”
—Dr. Vannevar Bush, Director of the Office of
Scientific Research and Development (OSRD)
Bombers flowing off assembly lines and warships splashing into the water after christening are iconic images of America’s World War II industrial effort. Sources of pride and propaganda, reports from factory floors on the home front were nearly as ubiquitous and dramatic as dispatches from the distant battlefields of Europe and Asia. The portrayal of American manufacturing was anything but subtle and was documented in everything from newsreels to Norman Rockwell’s 1943 Saturday Evening Post cover of Rosie the Riveter. The message was unashamedly unambiguous, from the halo of an upturned protective welder’s mask above Rosie’s head right down to her foot resting comfortably on a ragged copy of Hitler’s Mein Kampf.
In the public’s mind, America’s manufacturing capacity—its immense size and speed—became linked with military superiority and inevitable victory. “So the American way of war is bound to be like the American way of life…It is the army of a nation of colossal business enterprises, often wastefully run in detail, but winning by their mere scale and by their ability to wait until that scale tells,” wrote the Cambridge professor D. W. Brogan in Harper’s Magazine (May 1944).
Less well known is the parallel, though no less ambitious, scientific and technological war effort. The Manhattan Project, of course, has been well documented, but there were other facilities and efforts that have fallen into obscurity. The Rad Lab at MIT, which worked to perfect the British system of radio detection and ranging (radar), operated at the cutting edge of science and employed some 3,000 personnel to create some hundred different radar systems by the end of the war.
College professors, along with bright graduate and undergraduate students in engineering and talented hobbyists, were pressed into service. At one point during the war, Bell Labs alone employed nearly a thousand scientists and engineers working solely on military projects. Personnel and facilities expanded so quickly that it was not uncommon to see temporary Quonset huts and hastily constructed wooden structures popping up almost overnight adjacent to corporate headquarters. Most of this work, of course, was done in secret, hidden away from the newsreel cameras. And Rosie, unfortunately, had no iconic equivalent among the lab rats who labored to create the next generations of weaponry.
IT WAS ONLY AFTER THE war ended, as books and magazine stories began to appear, that the full scope of the enormous wartime scientific effort began to emerge. In addition to large weapons systems, World War II was a war fought with sophisticated portable devices, many of them relying on batteries.
The most intriguing product of this scientific push was the proximity fuse. Classified as top secret during the war and now largely forgotten, except by military history buffs, it demanded secrecy and manpower surpassed only by the Manhattan Project. The proximity fuse pushed the limits of the technology in an effort that required scientists and engineers scattered across a half dozen or more facilities, with Johns Hopkins's Applied Physics Lab (APL) serving as the hub. Research was begun in 1939 by British scientists; the effort was taken up in the United States by the wartime National Defense Research Committee (NDRC) and its Office of Scientific Research and Development (OSRD) in 1940. From there it grew to include a multitude of military, civilian government, and private entities.
IN THEORY THE PROXIMITY FUSE was an easily understood device based on perfected technology. The idea was simple: a fragmentation bomb that exploded near a target—say within a hundred feet—would be more effective in combat than a projectile or traditional bomb that needed a direct hit. The comparison was one between a shotgun and a rifle. What’s more, the weapon was necessary, if only for antiaircraft artillery, since planes had become faster and more durable in the years between the two wars.
What was needed was a projectile with an electronics system; radio technology would provide the key. A small, basic transmitter that bounced a signal off a plane could trigger the detonation by way of an equally basic receiver capable of picking up those returning radio waves. If engineered right, such a weapon could even blow the German V rocket bombs out of the sky before they reached London.
Simple, except for the design parameters. Engineers could easily make a “bench model” that would work just fine, as long as it stayed safely on the bench. In the field, the unit had to withstand the massive g-force of firing—something like 20,000 Gs—and then still more stress from the projectile’s natural spin as it cruised through the air. It’s fair to say that no electrical device had ever been designed for that kind of punishment. The unit also had to be small enough to fit on top of standard-size artillery shells, which meant pushing the limits of miniaturization.
To do this, the teams designed extraordinarily small subminiature vacuum tubes, each tube just slightly larger than a pencil eraser. Very early on, it became obvious that standard wiring wouldn’t work. Radio wiring and all that went along with it was good enough for the living room radio, but far too large to mount on an artillery shell.
To solve the problem, the engineers perfected the concept of the circuit board or printed circuits. Developed in the mid-1930s by the German refugee Paul Eisler, the process used a conductive foil rather than wires to make connections between different components. Although others had been working on similar processes as a means to reduce the jumble of wires in telephone and telegraph switching stations and reduce factory mistakes in wiring radios, Eisler seems to have brought the system up to date.
And then there was the battery. The battery had to be reliable for several seconds, but also capable of being stored for years without losing its charge. National Carbon came up with what may be one of the greatest design solutions in battery history. The battery looked very much like a miniature voltaic pile. However, the small metallic disks, one stacked on top of the other, had a hollow center into which was placed a glass ampoule of electrolyte. When the shell was fired, the ampoule shattered and the natural spin of the projectile distributed the fluid, activating the battery in midflight. From an engineering standpoint, it was a brilliant solution, incorporating the extreme design parameters into the battery’s function.
© Chris Costello
In truth, this was not an entirely new concept. A similar technical solution was used in the Hertz horn or chemical horn naval mines developed in the 1860s by the Prussian scientist Dr. Albert Hertz. In Hertz’s design, which was later perfected by other countries, the battery to detonate the mine was activated when a glass vial filled with acidic electrolyte was broken by a passing ship’s hull.
Still, what made the proximity fuse battery so clever was the fact that the engineers had taken an outdated wet battery format, updated it slightly, and then incorporated what were seemingly insurmountable obstacles—the G-force and rotation—into the design. It was very much a case of engineering lemonade from some very large technical lemons.
The fuses were a breakthrough, but that also proved a problem. What if the enemy got hold of one? If a dud landed harmlessly on the battlefield, the enemy could reverse engineer the thing to use against American targets. To play it safe, the military first cleared the fuse for use against aircraft in the Pacific theater in 1943. If there was a dud, it would splash into the ocean and sink. It wasn’t until 1945, toward the very end of the war, that the fuse saw widespread use during the pre-invasion bombardment of the Battle of Iwo Jima. In the end, millions were produced and fitted into numerous types of artillery armaments.
After the war, a 1946 magazine ad for Eveready sought to associate the company with proximity fuses—virtually unknown among the general public—with the headline, “The Shell with a ‘Radio Brain’” followed by the line, “Army, Navy lift censorship on mystery weapon that licked V Bomb, Kamikaze Attacks.”
THE WAR EFFORT REQUIRED RELIABLE power far beyond anything previously available. What changed was not so much the batteries themselves as what they powered. Even prior to America’s entrance into the war in 1941, the War Department (predecessor to the Department of Defense) tasked Motorola with developing a receiver-transmitter for the front lines.
After serving in World War I, Paul Galvin, Motorola’s founder, had started out with a battery company—the Chicago-based D&G Storage Battery Company—before going into the battery eliminator business in the 1920s with Galvin Manufacturing Corporation. After several setbacks, including a failed entrance into the home radio market, he entered the car radio market under the name Motorola.
World War II was the first “battery-powered war” with the introduction of portable radios into combat zones. Back on the home front, factories ran around the clock to produce enough batteries to power the war effort.
© Chris Costello
Now, with a government contract in hand, he produced a portable radio, the Radio Set SCR-300. It could be transported into combat zones by a single soldier, but not particularly easily. Weighing in at over thirty pounds, with a range of about three miles, the unit was rigged as a backpack and featured a telephone-type handset. Motorola produced some 50,000 of the units for the war. Sealed in a steel case, the SCR-300 required eighteen vacuum tubes powered by a single, unique shoebox-size B-80 battery that provided three different power levels for different circuits.
Designed specifically for foot soldiers in the vanguard, the more portable SCR-536 handheld walkie-talkie (or Handie-Talkie) followed in 1941; looking very much like an enlarged version of those clunky cell phones from the 1980s, the device had a range of about one mile and required five tubes and more than a pound of batteries to fire it up. The thing weighed five pounds, and typical battery life was about a single day. Still, it incorporated a few interesting design features. The bulky telephone handset was gone. The case itself functioned as a handset—similar to today’s cell phones—with the mouthpiece and earpiece positioned at the top and bottom; the on/off switch was activated when you pulled out the giant forty-inch antenna. Another key advantage was that these were crystal-controlled sets—the crystals being quartz—so that tuning was done by changing out a crystal and not fiddling with the dial. By the end of the war, Galvin had manufactured more than 100,000 of the units.
Early battery advertisements often featured action-packed adventures in which the always reliable battery was the hero of the story.
© Eveready Battery Company, Inc. Reprinted with permission
The question remains: just who invented the walkie-talkie? Like the backpack set, it was manufactured by Galvin. However, an independent inventor by the name of Al Gross is also credited. A ham radio operator from Cleveland, Ohio (call sign W8PAL), Gross is said to have thought up the idea as early as 1938. Fascinated with radios since he sneaked into the radio operator’s “shack” on a cruise with his parents when he was nine years old, Gross came up with the idea of a small, handheld radio while still a teen.
And this is where the story gets complicated. Recruited by the Office of Strategic Services or OSS (forerunner to the Central Intelligence Agency) during the war, Gross worked on what would become known as the Joan-Eleanor project. The small handhelds known as Joans (named for a WAVE major, Joan Marshall) were issued to intelligence agents behind enemy lines. Compact, they weighed just four pounds and measured 6.5 inches by 2.25 inches by 1.5 inches. The Eleanors (named for Eleanor Goddard, the wife of one of the engineers on the project) weighed some forty pounds and were mounted in aircraft flying at 40,000 feet.
Adding to the confusion were the efforts of the Canadian Donald Hings, who is said to have invented a portable two-way radio in 1937 while working for Consolidated Mining and Smelting Company in Vancouver.
In the end, it was Galvin who received the patent for the walkie-talkie, since patents couldn’t be issued for secret spy gear. And, for those given to quibbling, a case can be made that technically, Gross had only invented half of a walkie-talkie, since half of the system was mounted in an aircraft.
THE MINE DETECTOR WAS INVENTED by a Polish military engineer named Jozef Kosacki, who was living in England after the German invasion of Poland in 1939. It was while working at St. Andrews in Scotland that he came up with a device using available technology—a long pole with a flat disk on the end holding two coils in parallel. Weighing in at under thirty pounds, the device was simple: one coil at the end of the pole sent out an oscillating signal and the other received it, while the operator listened in on what was essentially a telephone strapped to his waist with a headset. When a metallic object, such as a mine, disturbed the signal, the operator could clearly hear it. Kosacki never patented the device, called the Mine Detector (Polish) Mark I or simply the Polish Detector; instead he gave it to both the British and Americans, and it no doubt saved thousands of lives.
There were weapons as well, such as the M1 rocket launcher, which fired small fin-stabilized rockets against armored vehicles and emplacements. Fired from the shoulder, the M1 launcher was quickly nicknamed “bazooka” by troops in the field after the nonsensical instrument played by radio comic Bob Burns, which he improvised from plumbing pipes. The original bazooka rocket was ignited by two standard-size D cells in the stock, though later models eliminated the batteries altogether in favor of a small magneto activated by the trigger.
Innovative batteries were also introduced at sea. By the end of the war, the U.S. Navy had developed the MK26 torpedo, which used seawater as an electrolyte. This would not only reduce the weight of the torpedo, always a factor aboard ships, but also provide virtually unlimited storage time, since the batteries never “went bad.” Though the MK26 never saw combat, the battery system, developed by Bell Telephone, did provide proof of concept for later torpedoes. The development of the seawater battery would also lead to more peaceful applications, such as automatic triggering of rescue beacons.
Batteries powered the first signal beacons for downed pilots, a forerunner to both the identification transponders and black boxes now carried on private and commercial aircraft. Large bulky things, they consisted of stacks of reinforced batteries and crudely “shock-proofed” circuitry powering a simple radio transmitter in a sturdy metal frame that automatically sent out an SOS signal when the plane crashed.
Naturally, there were also innovations on the other side, notably the Enigma machine. Despite popular belief, the Nazis didn’t invent the Enigma; it was originally built in the 1920s and intended for businesses. As one brochure printed in English proclaimed, “One secret, well-protected, may pay for the whole cost of the machine…” Scaled down considerably over the years, the device was not only portable enough to carry around easily, but featured a series of small electric “lamps” that were battery powered.
A MAJOR PROBLEM WITH MUCH of the new electronics going into the battlefield wasn’t the engineering, but the batteries. Very early on, the military discovered that batteries do not do well in the tropics. Batteries shipped to the Pacific and North African theaters of war arrived depleted. It seems the heat and humidity were speeding up the chemical reactions. What was needed was a battery that could function in any environment.
Sam Ruben came up with the solution. Working with new chemistries and containers in his New Rochelle, New York, lab, he hit on the mercuric-oxide or mercury cell, the first new battery chemistry in over a century. The battery worked well, but Ruben, who had only a tiny lab, couldn’t produce the millions of batteries the war effort required and handed off the contract to P. R. Mallory Company (later Duracell). With its works classified as “top secret,” the company turned out millions of the batteries, later known as the Ruben-Mallory or RM cells, running round-the-clock shifts to meet demand for the “sealed in steel” batteries that powered everything from field radios to the newly designed L-shaped flashlights soldiers could wear on their belts. Later, as the war intensified, Ray-O-Vac was brought on board to help fill the orders.
As for Ruben, when informed that his royalties would total some $2 million a year, he promptly renegotiated the terms downward. “I felt that it would be unconscionable to receive such large royalties for military requirements during wartime,” he later wrote. “Consequently, we agreed upon a payment of $150,000 per year, which would amply cover laboratory operation and staff expenses.”
And, in one of the stranger technological footnotes, Ruben saw the unlikeliest use for his batteries appear in 1957 when the Soviet Union launched Sputnik (Russian for co-traveler), the first man-made satellite. The twenty-three-inch diameter satellite that circled for three weeks contained not only a radio transmitter, sending out little more than beeps or pulses, but also a temperature regulation system. Surprisingly, both were powered by batteries suspiciously similar to Ruben’s design, at least according to a story in the Soviet publication Young Technique in October 1957. Apparently, the American military had shared the top-secret technology with the Soviet Union during the war. Even more surprising, in 1961 an official Soviet government publication called Knowledge Is Power gave credit to the United States for development of the batteries.
It would not be the last time Ruben’s batteries traveled into space. During the failed Apollo 13 mission, astronauts used light-up pens powered by Ruben’s cells as a light source.
“As you know, due to the explosion, we were forced to ration our electric power and water. With regards to the former, we never turned on the lights in the spacecraft after the accident,” the astronauts James Lovell, Fred Haise, and John Swigert wrote Ruben. “As a result, your pen lights served as a means of ‘seeing’ to do the job during the many hours of darkness when the sunlight was not coming through the windows. We never wore out even one set during the trip; in fact, they still illuminate today…Their size was also a convenience, as it was handy to grip the lights between clinched [sic] teeth to copy the lengthy procedures that were voiced up from earth.”
AMERICA EMERGED VICTORIOUS FROM WORLD War II with companies fattened by profits from “cost-plus” wartime production and endowed with new technology and processes. The infrastructure and knowledge base so hastily assembled during the war was now quickly put to peacetime use. Not only were America’s factories, rail lines, and talent pool intact, there was another key component: the GI Bill (Servicemen’s Readjustment Act of 1944). Fearing the economic impact of millions of American soldiers returning home en masse, FDR signed the bill nearly a year before the German surrender.
In addition to providing low-interest, no-money-down mortgages, it also offered college tuition and student stipends for returning vets. By 1947 close to half of the 16 million war veterans were either enrolled in college or receiving job training. At one point veterans made up nearly half of the college students in the United States. All told, some 91,000 scientists and 450,000 engineers studied through GI Bill benefits following World War II, including 14 Nobel Prize–winners in science.
Some saw the peacetime potential of the advanced technology early on. Dr. Vannevar Bush, who envisioned and then headed the National Defense Research Committee as well as its expanded wartime successor, the Office of Scientific Research and Development, charged with applying the latest technology to warfare, was quick to spot the future. In two landmark essays, “As We May Think” (The Atlantic) and “Science, the Endless Frontier: A Report to the President,” he exhibited uncanny prescience as to the future role of technology. Both pieces were written in the summer of 1945, months prior to Japan’s September surrender that marked the end of the war.
Bush, who had received his doctorate from MIT, was something of an amateur inventor himself as well as the cofounder of the American Appliance Company, which would eventually morph into Raytheon (Greek for “light from the gods”), a major defense contractor.
He was that rare member of his nineteenth and twentieth century–spanning generation who was not only welcoming of technological change, but could also extrapolate with a fair amount of accuracy the role it would play in society. Though born in the noisy coal- and steam-powered nineteenth century when most Americans lived on farms, Bush was able to foresee an inevitable future of increasingly advanced circuitry, more sophisticated devices, and an expanding role of technology emerging from cutting-edge science. In “As We May Think,” he imagined a device he called a “memex” that many have compared to the Internet, though it more closely resembled an enormous database.
However, it was in “Science, the Endless Frontier” that he forcefully advocated a national effort to promote science. “The pioneer spirit is still vigorous within this nation,” Bush wrote to FDR. “Science offers a largely unexplored hinterland for the pioneer who has the tools for his task. The rewards of such exploration both for the Nation and the individual are great. Scientific progress is one essential key to our security as a nation, to our better health, to more jobs, to a higher standard of living, and to our cultural progress.”
What happened after the war, as Bush’s predictions proved more or less accurate, was something of a mid-twentieth-century version of “beating swords into ploughshares,” that is to say, rewiring bombs into radios and televisions. The value of military technology was not so much in the armaments themselves, but in their components and the processes used to create them. Printed circuit technology, for example, perfected for proximity fuses, seemed to hold particular fascination for private companies.
In a postwar publication issued by the National Bureau of Standards, Cledo Brunetti and Roger W. Curtis reported,
…printed circuits are now the subject of intense interest of manufacturers and research laboratories in this country and abroad. From February to June 1947, the Bureau received over one hundred inquiries from manufacturers seeking to apply printed circuit or printed circuit techniques to the production of electronic items. Proposed applications include radios, hearing aids, television sets, electronic measuring and control equipment, personal radiotelephones [sic], radar, and countless other devices.
By using the printed circuit board technique, in which connections between components were essentially painted onto the board, engineers could eliminate much of the bird’s-nest wiring and the galvanized chassis common in many electrical devices. They could also cut production costs, reduce the wiring of a device more or less to two dimensions, and fit a good deal more circuitry into a compact space.
In old movies and television shows, a standard joke was to hit or shake a broken electrical appliance to get it to work. This method was actually often effective—at least temporarily—to reestablish a contact between loose connections. Fix-it shops and television repairmen, both long gone from the American landscape, did a thriving business by hunting out and resoldering loose connections. With the advent of the circuit boards, their days were numbered.
Industry had learned a few other tricks from wartime requirements, like effectively using multipurpose vacuum tubes so that receivers needed only four or five tubes instead of the standard seven or eight. Engineers also began placing components closer together to save space while some simple components even performed double duty. For instance, the World War II–era walkie-talkies were turned on by pulling up the antenna. Why not apply the same principle to consumer and industrial products?
A good many of these clever design innovations came directly from combat requirements, particularly efforts in reducing size. In a lengthy monograph on miniaturization and microminiaturization following the war that was produced for the U.S. Army Electronics Command at Fort Monmouth, New Jersey, the command historian, William R. Stevenson, wrote, “Miniaturization as a major design goal in communications-electronic equipment began to be felt only after the requirements of the Armed Forces began to be impressed on industry. Here adequacy for combat service in many cases was so dependent on size, that miniaturization had to be employed regardless of cost.”
This was all good news, of course, except for the batteries. Although relatively cheap, reliable, and long-lasting, they still weren’t up to the task of powering more than a few tubes, even subminiature tubes, for any length of time. No matter how small you made vacuum tubes—and companies like Raytheon could make them very small indeed—they were still power-hungry little beasts.
Quite simply, the battery industry had run into the brick wall known as Faraday’s First Law of Electrolysis, which states that the amount of chemical change at an electrode is directly proportional to the quantity of electricity passed through it: in practical terms, to double the output of any battery, the amount of active material in that battery must be doubled. New materials, such as mercury and cadmium, had given batteries longer life and more power, but the output was still far too low, or the materials much too expensive, to meet the needs of the American consumer for any kind of really sophisticated device.

Besides, consumers were accustomed to the status quo. If you had to wait a minute or so for the tubes to “heat up,” the inconvenience was minimal. Even when tubes “blew out,” which they did with annoying regularity, it was easy enough to pull the most likely culprits and test them on countertop devices at the local hardware and drugstores that also stocked replacements. And if radios and other appliances weren’t particularly portable, then that was fine, too. That’s just the way things were.
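Faraday's law makes the "brick wall" concrete: deliverable charge scales strictly linearly with the mass of active electrode material. The short sketch below illustrates the arithmetic using a zinc anode as a hypothetical example (zinc being the workhorse of dry cells of the era); the constants are standard, but the example itself is the author's paraphrase turned into numbers, not anything from the original text.

```python
# Faraday's first law of electrolysis, illustrated: the charge a cell
# can deliver is proportional to the moles of active material consumed.
FARADAY = 96485.0  # Faraday constant, coulombs per mole of electrons

def theoretical_charge_coulombs(mass_g, molar_mass_g_mol, electrons_per_atom):
    """Maximum charge obtainable from `mass_g` of active electrode material."""
    moles_of_material = mass_g / molar_mass_g_mol
    return moles_of_material * electrons_per_atom * FARADAY

# Hypothetical zinc anode: molar mass ~65.38 g/mol, 2 electrons per atom
one_gram = theoretical_charge_coulombs(1.0, 65.38, 2)
two_grams = theoretical_charge_coulombs(2.0, 65.38, 2)

print(f"1 g of zinc: {one_gram / 3600:.2f} Ah")   # ~0.82 Ah
print(f"2 g of zinc: {two_grams / 3600:.2f} Ah")  # ~1.64 Ah
# Doubling the material exactly doubles the deliverable charge --
# which is why smaller batteries necessarily meant less power.
```

The ~0.82 Ah-per-gram figure for zinc is the well-known theoretical ceiling; real cells of the 1940s delivered far less, which only sharpened the problem the text describes.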
DESPITE THE PROBLEMS, SOME NICHE companies did strive for miniaturization. Hearing aid manufacturers, for instance, made use of subminiature tubes. Raytheon, seeking to establish itself in the consumer market, bought up a Chicago company called Belmont Radio Corporation (another wartime manufacturer transitioning back into peacetime products) and quickly introduced what is widely believed to be the first “pocket radio.” Measuring just 3 inches wide, 6¼ inches high, and less than an inch thick, the Belmont Boulevard required three batteries, a 22.5 volt B and two 1.5 volt A batteries, to power its five subminiature tubes.
Did people want a miniature and portable radio with an earpiece instead of a speaker? Apparently they didn’t, at least not Belmont’s version. The Boulevard (model 5P113) was a failure in the marketplace, though today it’s extraordinarily rare and much sought after by collectors.
However, what’s really interesting about the Boulevard is just how little it resembles a “pocket radio” or any of today’s portable electronic devices. The metallic case was a two-tone gold and silver, and customers had a choice of trim that included Moroccan leather, pin seal, alligator, or suede. Clearly the Boulevard was a rich man’s toy or gentleman’s accessory, very much like a solid gold cigarette case, engraved hip flask, or sterling silver flashlight. From the twenty-first-century perspective, the Boulevard looked very much like the kind of personal possession designed to become an heirloom, though it was packed with technology destined for obsolescence.
Another company that saw the potential in miniaturization and battery-powered products was the Hamilton Watch Company. A well-known American manufacturer of timepieces with a history that stretched back more than a century, Hamilton made a very large bet on battery power in 1946 by launching an ambitious effort to produce an electric wristwatch. Hamilton probably didn’t really know what it faced, since it took more than a decade to bring the watch to market.
Not only did the company have to develop the watch itself, but Hamilton started off by also trying to design a new kind of battery to power it. In the end, the battery proved too much for the watchmakers, and they brought in National Carbon (not yet Eveready) to modify one of Sam Ruben’s button battery designs to create a 1.5 volt battery called the Energizer.
Still, the firm was optimistic. The effort moved ahead slowly year after year. Then, a lengthy 1956 story in the New York Times, months prior to the release of the watch, outlined Hamilton’s plan:
“A revolution is coming to the ancient art of timekeeping, specifically in the design of watches,” the story enthused. “It is being sparked by new discoveries in electronics and advances in miniaturization (the process of getting more and more equipment in less and less space), and it will take two forms. Scheduled for the immediate future is the electric watch, no bigger than the one you are wearing, which will run entirely on the current from a battery. In the more distant future, say 1975, is an ‘atomic’ wrist watch, a time piece operated by a midget nuclear power plant.”
Finally, in early January 1957, Hamilton announced the first electric watch to the world. Called the Hamilton 500 or Hamilton Electric, the watch retailed for $175.00 (around $1,300 in constant dollars) in solid gold and $90.00 ($700 in constant dollars) for gold-filled. Unfortunately, despite more than a decade of work, the design left much to be desired. The $1.75 Energizer battery tended to discharge more quickly than the year-long life cycle Hamilton had envisioned. The watch was also something of a quirky hybrid: rather than redesign the works, Hamilton simply replaced the mainspring with a small motor-type arrangement to power a balance wheel and traditional array of gears, which didn’t make for the most reliable timepiece.
Still, it was the “watch of the future.” The 1950s were, after all, a time of unbridled and quaintly fanciful predictions when it came to future technology. At a time of unprecedented prosperity, the possibility of personal flying cars, robotic housekeepers, and vacations on Mars seemed more than likely at some future date. Disneyland opened one of its more popular attractions, Tomorrowland, which included a TWA Moonliner rocket ship. And why not? The future along with a whole world of technological miracles was arriving very quickly indeed.
Press reports marveled at the size of the electric watch’s battery along with the newness of the concept. The little battery was enthusiastically described as small enough to rest on a fingertip and as large around as a shirt button or an aspirin, and it was often photographed alongside the watch itself. Since the watch looked rather ordinary, the little Energizer was the star.
Perhaps because the new electric watch looked so much like a standard Hamilton—nobody could tell you were wearing a piece of the future on your wrist—the company released a model in a futuristic asymmetrical case. Elvis bought one, and so did Rod Serling, host of the Twilight Zone television show. More recently, Will Smith and Tommy Lee Jones wore a pair of Hamilton Electrics for their roles in the Men in Black movies.
Production of the Hamilton Electric ended in 1969 with little fanfare as quartz watches began arriving in the marketplace, beginning with a very limited production run of the Seiko 35 SQ, which sold for about $1,200 (more than $6,000 in constant dollars). The Hamilton Electric's life as a consumer product lasted just about as long as the time it took to develop it. The "atomic wristwatch" never came to pass, though the National Bureau of Standards did produce an atomic clock as early as 1949, and today consumers have a wide choice of timepieces capable of picking up its broadcast signal for "atomic accuracy."
Courtesy of the United States Army Communications-Electronics Museum
THE AMERICAN CONSUMER HAD DEVELOPED a taste, even a fascination, for electronic gadgets following the war. Virtually every household in America was wired for electricity, thanks in large part to the Rural Electrification Act of 1936. Even as batteries were being relegated to powering toys, flashlights, hearing aids, and a few simple devices, consumers and hobbyists could not get enough of electronic gadgets.
In 1947, a small company in Michigan began buying war surplus components to repackage them into do-it-yourself kits, beginning with oscilloscopes, then moving on to more consumer-oriented devices such as ham radios and phonograph amplifiers. The Heath Company became Heathkit, a mail-order outfit not unlike Hugo Gernsback's Electro Importing Company. The gimmick was that do-it-yourself hobbyists needed only a couple of basic tools to complete the assembly. A few "pleasant evenings at home," the catalog copy promised, was all it took to build a state-of-the-art amplifier.
It didn't matter if you knew Ohm's Law or understood how the thing worked; you could still build yourself a pretty decent ham radio or home stereo at about half the price of an assembled unit bought in a store. Barry Goldwater, notably, was a Heathkit enthusiast as well as a ham radio operator, building more than a hundred Heathkit projects over the years. The idea was not unlike paint by numbers for electronics. "We Won't Let You Fail" was the Heathkit motto. And in New York City, at the site of the future World Trade Center, was Radio Row, an area of a few blocks flooded with war surplus equipment, the overflow from the dusty stores spilling out onto the sidewalk in bulging cardboard boxes. Hobbyists would scrounge through boxes brimming with old tubes, transformers, and equipment with serious-looking faceplates, mysterious dials, and toggle switches. Newspapermen, tow truck operators, and ambulance chasers made pilgrimages down to Radio Row to snatch up army surplus tank radios to monitor police and fire calls.
THEN THE ENTIRE GAME CHANGED, though deceptively quietly, in 1947, when two Bell Labs scientists, John Bardeen and Walter Brattain, in their fourth-floor lab in Murray Hill, New Jersey, poked and prodded the surface of a hunk of gray germanium—an element somewhat similar to its tin and silicon neighbors on the periodic table—using a battery as a power source, and managed to increase the output of an electrical signal. A few years later, in 1956, their work would earn them, along with team leader William Shockley, the Nobel Prize.
What Bardeen and Brattain had done was to "dope" the germanium, that is, to introduce impurities into it. Depending on which impurities were added, the crystalline structure ended up with either an excess of electrons (called N-type, for negative) or a shortage of electrons (P-type, for positive). A weak current flowing through a circuit of the doped material could be enhanced by applying a small additional charge; conversely, with another type of impurity, the current could be blocked entirely until a charge was applied. By stacking the doped layers in either a P-N-P or N-P-N configuration, the little devices could be made to work as amplifiers or as on/off switches.
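The two behaviors described above, amplification and switching, can be captured in a toy model. The sketch below is purely illustrative: the gain and supply-limit figures are invented for the example, not taken from any real transistor.

```python
# Idealized model of a transistor's two roles: a small control
# (base) current either switches a larger (collector) current on
# and off, or is multiplied by a gain factor to amplify a signal.

def collector_current(base_current_ma, gain=100, supply_limit_ma=50.0):
    """Return collector current in mA for a given base current in mA.

    With no base current the device blocks entirely (switch 'off');
    a small base current is multiplied by `gain` (amplification);
    a large one saturates at the supply's limit (switch fully 'on').
    """
    if base_current_ma <= 0:
        return 0.0  # no control current: the circuit is blocked
    return min(base_current_ma * gain, supply_limit_ma)

# Amplifier: a weak 0.25 mA signal becomes a 25 mA output.
print(collector_current(0.25))  # 25.0

# Switch open: zero base current blocks the current entirely.
print(collector_current(0.0))   # 0.0

# Saturation: a large input simply turns the device fully on.
print(collector_current(5.0))   # 50.0, limited by the supply
```

The same device thus serves as either an amplifier (operate it in the proportional region) or a binary switch (drive it between cutoff and saturation), which is exactly the dual use the Bell team's configuration made possible.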
Naturally, the transistor's development was not a pure science quest. The transatlantic telephone line between North America and Europe, known as TAT-1, required repeaters to boost its signal. It might have been possible to send a telegraph-quality electrical pulse around the world with a tiny thimble-sized battery, but voice-quality telephone communications demanded much more. The flexible repeaters designed by Bell Labs were eight feet long and spaced every thirty-seven miles or so when the initial two cables were laid in the mid-1950s. The British Post Office, responsible for telephone communications, had its own tube repeater. Both models relied on vacuum tubes—specially designed and reliable vacuum tubes, but vacuum tubes nonetheless, with a limited life expectancy, and they would eventually have to be replaced. Transistors held the answer.
Vacuum tubes, of course, could perform the same functions, but they required much higher voltages. The same amount of "work" could now be done with considerably less electrical power and in a fraction of the space. Even better, the little sandwiches of semiconductive material didn't blow out like tubes. Batteries were back as a viable power source.
Several months later, Bell scientists had a working device, and a memo was circulated to name the thing. Among the names in contention were "Iotatron," "Crystal Triode," and, of course, "Transistor." The term transistor was a combination of the words transconductance (or transfer) and varistor (a device used to protect circuits against excessive voltages).
In June of the following year, Bell Labs held a press conference at its offices on West Street in lower Manhattan. The press release read, in part, “An amazingly simple device, capable of performing efficiently nearly all of the functions of an ordinary vacuum tube, was demonstrated for the first time yesterday at Bell Telephone Laboratories where it was invented.” A demonstration was given, along with a lengthy technical description of the science involved.
At the time, not many outside the fields of science and technology realized the significance of what the Bell scientists had done. The New York Times didn't appear to have much enthusiasm for the new device, famously burying the announcement deep inside the paper in a regular column called "News of Radio." And even then, it wasn't the lead item, following an announcement that Eve Arden would be starring in a new show called Our Miss Brooks, "…playing the role of a school teacher who encounters a variety of adventures." Eve Arden had somehow upstaged one of the most important technological breakthroughs of the twentieth century.
Transistors fared only somewhat better in the now-defunct Herald Tribune and in mainstream science and technology magazines like Popular Science and Popular Electronics. To be fair, the New York Times wasn't alone in its seeming indifference. Aside from the professional technical journals and a few hobbyist publications, which hailed the announcement with varying degrees of geeky enthusiasm, the overall response was one of muted, earnest, and perfunctory reporting. What the scientists at Bell unveiled at their press conference was not a product like stereophonic sound or CinemaScope images on the big screen that the general public could immediately appreciate, if not fully understand. This new electrical component was tiny, and its applications seemed somewhat distant.
On the other hand, the military immediately understood the significance of the transistor and tried to have it classified as top secret. This was more than just institutional paranoia. The Cold War was beginning to take shape. Churchill had delivered his Iron Curtain speech in 1946 at Westminster College in Fulton, Missouri; that same year, George Kennan, then the senior American diplomat in Moscow, transmitted his historic "long telegram," which would form the basis of a decades-long policy of containing Soviet ambitions; and a year later, President Harry Truman signed into law the National Security Act of 1947.
With tensions between East and West mounting, a device that could work at the heart of weapons and communications systems without many of the shortcomings of vacuum tubes was immensely valuable. Fortunately, Bell Labs eventually prevailed, and the patent for Three Electrode Circuit Element Utilizing Semiconductive Materials was duly filed in 1948, number 2,569,347.
For years following their discovery, transistors moved forward with incremental improvements. The first commercially available transistor came on the market from Raytheon around 1950, but manufacturers didn't line up to place large orders. Transistors did find some use in a handful of obscure industrial applications, along with a few do-it-yourself kits that challenged adventurous hobbyists to test their soldering skills on a new format called the circuit board.
At least part of the problem was the fact that there really wasn’t a clearly defined consumer market for transistors. In a portable radio they could eliminate the large A battery, but most radios plugged into the wall. Then, in 1952, the Sonotone Corporation, a manufacturer of hearing aids, became the first company to offer a consumer product using transistors—albeit in hybrid combination with subminiature tubes. Interestingly, this was done under an agreement with AT&T that provided royalty-free licenses to manufacturers like Sonotone in observance of Alexander Bell’s devotion to the deaf. In a bit of historical coincidence, Bardeen’s wife, like Bell’s, was hearing impaired.
Two years later, Bell Labs built the first all-transistor computer—TRADIC (Transistor Digital Computer, or Transistorized Airborne Digital Computer)—for the U.S. Air Force, using more than 700 transistors and diodes along with 10,000 germanium crystal rectifiers. The entire unit fit into just a few square feet. This was a big step forward in the emerging computer field. The state-of-the-art ENIAC (Electronic Numerical Integrator and Computer) was a monster nicknamed "The Giant Brain." Secretly commissioned by the military during World War II to calculate artillery tables, ENIAC needed some 18,000 vacuum tubes, 1,800 square feet, and constant attention by a dedicated staff to change the tubes, which blew out with maddening regularity.
Extending the comparison, the first microprocessor made by Intel, the 4004, introduced in the early 1970s, packed 2,300 transistors onto a single chip, while today's processors contain the equivalent of nearly 300 million transistors.
Part of the problem with early transistors was the price. Vacuum tubes were plentiful and inexpensive thanks to economies of scale that pumped millions of them into the market every year. Tubes were available for under a dollar, while Raytheon's early transistor, for instance, sold for $18.00 a pop (a little more than $150.00 in constant dollars). Even in the boom years of the 1950s, manufacturers were keenly aware that most consumers were "price sensitive" when it came to household items. Another problem was quality control. Transistors were more difficult to manufacture, with a much higher rejection rate than tubes coming off the assembly line, which only added to the cost of the "good" transistors.
What turned things around was not a public clamoring for products packed with transistors, but the military. By the early 1950s, the Pentagon began spending tens of millions of dollars on transistors, actually paying for the construction of a Western Electric plant in Pennsylvania and financing transistor production facilities at existing General Electric, RCA, Raytheon, and Sylvania plants.
It was in the new and ambitious weapons systems, some of them on the drawing boards since the 1940s, like the Nike Ajax, the first surface-to-air missile, that transistors found viable applications. This kind of government investment, similar to the $30,000 Morse managed to squeeze out of Congress for his experimental telegraph line, would be repeated again and again through the years in programs leading to the development of the Internet, GPS, and a host of other technologies.
DESPITE THE DRAWBACKS OF THE new technology, transistors dramatically expanded the kinds of work batteries could perform. Not only did they require less power than tubes, but a lot of them could be packed tightly together on a circuit board to create relatively small, complex devices. This was also a huge step forward in terms of manufacturing. The boards were originally assembled much like tube-based units, by assembly lines, primarily made up of women, soldering the components into place. Then, in 1949, two members of the U.S. Army Signal Corps, Moe Abramson and Stanislaus F. Danko, developed what would become known as the "Auto-Sembly" process. To form the circuits, the components' wirelike leads were inserted into tiny precut holes in a circuit board, the ends clipped, and the board run over a bath of molten solder to make the contacts between components. Because of the way each board's surface was designed, the solder would stick and harden to form the circuits between the components' contacts on the underside, but would not adhere where it wasn't needed. Companies could eliminate long assembly lines of workers soldering individual connections.
Taken together, these advances made for more durable, compact electrical components, perfect for military gadgets intended to be shot from very large guns, carried by troops, or installed in planes or ships.
BATTERY TECHNOLOGY WAS ALSO ADVANCING. Even toys were becoming more sophisticated, and the old zinc-carbon formulations were falling behind. When Lewis Frederick Urry was transferred from Eveready's Canadian operation to its Parma, Ohio, facility, his first assignment was to find a way to extend the life of the company's line of batteries. It was, he knew, a hopeless task. Faraday's Law again! What Urry did instead was take up the cause of the alkaline battery, which had been around for years but never as a consumer item. Edison himself had developed one for cars, but they remained far too expensive for everyday use. Running through the materials, Urry finally found a combination that worked. Where he succeeded was in making the switch to powdered zinc rather than a solid piece of the metal.
The powder, he realized, offered more surface area for the chemicals to react on. It was an innovative solution, but also a very conventional one when it came to power sources. Since Volta, scientists had been increasing surface area, first by adding disks to voltaic piles, then plates to trough batteries. The Smee battery featured a roughened surface and, in a manner of speaking, so did Planté's lead storage battery.
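The geometry behind Urry's insight is easy to check. If a fixed volume of zinc is divided into n equal spherical grains, the total surface area grows as the cube root of n; the numbers below are a back-of-the-envelope sketch, not figures from Eveready's actual formulation.

```python
# Dividing a fixed volume of zinc into n equal spherical grains:
# volume is conserved, so each grain's radius is R / n**(1/3), and
# total area = n * 4*pi*(R / n**(1/3))**2 = (4*pi*R**2) * n**(1/3).

def surface_area_ratio(n_grains):
    """Surface area of n equal spheres (same total volume)
    relative to a single solid sphere of that volume."""
    return n_grains ** (1.0 / 3.0)

print(surface_area_ratio(1))          # 1.0 -- a solid piece of zinc
print(surface_area_ratio(1000))       # ~10x the reactive surface
print(surface_area_ratio(1_000_000))  # ~100x
```

A million fine grains expose roughly a hundred times the surface of a solid slug of the same weight, which is why the powdered-zinc cell could deliver current so much more readily.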
The first modern alkaline battery was born, with an estimated life span forty times that of the zinc-carbon formulation. Selling the idea to his superiors, however, would prove more difficult. By way of demonstration, Urry put a standard D cell and his new formulation into a pair of toy cars that he raced around the company cafeteria. "Our car went several lengths of this long cafeteria," he said in one interview, "but the other car barely moved. Everybody was coming out of their labs to watch. They were all oohing and aahing and cheering."