The Battery: How Portable Power Sparked a Technological Revolution - Henry Schlesinger (2010)
Chapter 17. Smaller and Smaller
“The future of integrated electronics is the future of electronics itself…”
—Gordon Moore, Electronics magazine, 1965
The reign of the vacuum tube lasted nearly half a century until transistors made it obsolete. Although small, reliable, and energy efficient—at least compared with power-gobbling tubes—transistors remained state-of-the-art for less than a decade. The rapid growth in the complexity of electronic devices strained transistors’ practicality much as vacuum tubes had been stretched to their limits. Computers, for instance, which sometimes required upward of 100,000 diodes and 25,000 transistors, were becoming hugely expensive to manufacture. IBM’s 7030 (“Stretch”)—the company’s all-transistor supercomputer, conceived in the mid-1950s—used an astonishing 170,000 transistors wired into circuit boards.
And the military, at an early critical juncture in the Cold War, needed new technology for advanced weapons systems. In 1951, the navy briefly sponsored something called Project Tinkertoy, dreamed up by Robert Henry at the National Bureau of Standards. The idea was as simple as it was clever, and the navy thought it had real potential. Small ceramic wafers that snapped together, each about seven-eighths of an inch square (some documents list five-eighths), would each house a different type of standardized component. The idea was to cut production time and cost by using automation to attach the transistorized components to the wafers, which could then be quickly assembled into complete units. For whatever reason, the Tinkertoy concept faded quickly, but not before the navy had invested nearly $5 million in the effort.
The U.S. Army Signal Corps followed a short time later by investing heavily in a more sophisticated concept called Micro-Modules proposed by RCA. Like Project Tinkertoy, the Micro-Module effort also included ceramic wafers, though slightly smaller versions measuring a little more than a third of an inch square and a hundredth of an inch thick and holding multiple components. As with Tinkertoy, the modules were essentially tiny circuit boards, but they pushed circuit density to impressive new levels. As a demonstration, RCA built a radio into a fountain pen—the ultimate transistor radio—and the army brass loved it. Clearly this was the future for transistors.
But the Micro-Module project, though very clever, was already obsolete by the time it was announced in 1959. In fact, the timing could not have been worse. RCA and the Signal Corps’ joint announcement at the Institute of Radio Engineers (a predecessor of the Institute of Electrical and Electronics Engineers, or IEEE) convention was overshadowed by Texas Instruments’ debut of the first integrated circuit (IC)—the computer chip. And the computer chip was not an incremental advance, but a substantial technological leap forward. Not only was it significantly smaller than the transistor, but almost from the start its potential seemed nearly unlimited.
TI wasn’t the only player in the new field. Fairchild Semiconductor—financed through the defense contractor Fairchild Camera and Instrument Corp.—was also in the game. In fact, TI’s patent application was filed several months before Fairchild’s, but it was held up over its broad wording; Fairchild’s application had been written more narrowly, speeding its approval. Of course, the lawyers became involved and the case dragged on for a decade, eventually ending up in the U.S. Court of Customs and Patent Appeals (which ceased to exist in the early 1980s). After much legal wrangling, the court upheld Fairchild’s patent claim while TI retained credit for building the first integrated circuit. But in the end, of course, it didn’t matter. Integrated circuits had arrived and were clearly the future.
As with the first transistor, the military found immediate use for the technology even as chips from both companies began rolling off the line in 1961. And if anyone doubted the potential of the new technology, those reservations were soon put to rest when TI unveiled what it called a molecular electronic computer. Built under contract for the air force in 1961, the diminutive computer measured just 6.3 cubic inches and weighed in at 10 ounces. The unit didn’t include any kind of user interface—neither a screen nor a keyboard—but it got the point across. As TI proudly noted, the miniature package included 47 chips that did the work of about 8,500 transistors, diodes, resistors, and capacitors.
A year or two later, ICs were built for the Minuteman missile systems of the early 1960s as well as for NASA, which was responding to President Kennedy’s challenge to put a man on the moon by the end of the decade.
VERY SOON BATTERIES WOULD BE powering very small components doing highly complex calculations. Just as the telegraph had been responsible for the death of distance by severing the connection between the flow of information and travel time, the IC shattered the long-standing relationship between the complexity of a task and the size of the device performing it. Small and portable devices could now be built to perform extremely complex work quickly. This was made clear in 1996, when a group of electrical engineering students at the University of Pennsylvania commemorated the fiftieth anniversary of ENIAC by replicating its entire processing architecture—all 30 tons of it, with 18,000 vacuum tubes, 7,200 diodes, and 1,500 relays—on a single computer chip that measured a petite 7.44 by 5.29 millimeters.
WRITING FOR ELECTRONICS MAGAZINE IN April of 1965, Gordon Moore, who was then heading Fairchild Semiconductor’s R&D effort (he would later leave to cofound Intel), predicted a doubling of the number of components on ICs every year and saw no reason why that shouldn’t continue far into the future. “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year,” he wrote. “Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000.”
The landmark technical essay formed the basis of what’s become known as Moore’s Law—the informal prediction that the number of processing components on a chip doubles roughly every two years. (Moore’s original 1965 estimate was a doubling every year; he relaxed it to every two years in 1975.) In a very real sense, all technology is interim technology, but some is clearly more temporary than others. What Moore predicted correctly was that ICs would not become the 8-track players of computing.
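Moore’s 1965 arithmetic is easy to reproduce. The sketch below assumes a 1965 starting point of roughly 64 components per chip (an illustrative figure, in line with the handful of doublings Moore’s own plot showed) and applies his original doubling-every-year rate through 1975:

```python
# Back-of-envelope check of Moore's 1965 projection.
# Assumption (illustrative): about 2**6 = 64 components per chip in 1965.
components = 64
for year in range(1966, 1976):  # ten doublings, 1966 through 1975
    components *= 2             # Moore's original rate: doubling every year
print(components)               # 64 * 2**10 = 65536, Moore's "65,000" rounded
```

Ten annual doublings multiply the count by 2¹⁰ = 1,024, which is how a mid-1960s chip carrying a few dozen components becomes Moore’s projected 65,000 by 1975.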
“The future of integrated electronics is the future of electronics itself,” Moore wrote in the 1965 article.
The advantages of integration will bring about a proliferation of electronics, pushing science into many new areas…integrated circuits will lead to such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment. The electronic wristwatch needs only a display to be feasible today.
Yes, there were electronic wristwatches, but what Moore was referring to was the problem of a practical, small-scale user interface capable of matching the IC’s data output. For large computers, you could always use a CRT, like a television screen, or a printer of the kind Western Union employed at the time, but both were unrealistic solutions for a portable device. And since the concept of portability implies that the power supply is a convenient size as well, something small and less power thirsty would be needed. The answer, interestingly, arrived nearly simultaneously with the publication of Moore’s essay. Researchers at RCA’s lab in New Jersey were on the verge of a major breakthrough in liquid crystal displays, or LCDs.
The LCD can actually be traced to the late nineteenth century, when the Austrian botanist Friedrich Reinitzer happened to notice that some organic crystals—cholesteryl benzoate—exhibited strange properties when exposed to heat. They turned cloudy and then clear at specific temperatures. That is to say, they acted unexpectedly but consistently, which is always of interest to scientists. He related the discovery to a professor of physics, Otto Lehmann, who carried on the research just long enough to note some interesting refraction properties and came up with the name fliessende Kristalle (“flowing crystals,” which entered English as liquid crystals). The substance was a scientific curiosity, but not much more, until the scientists at RCA got hold of it in the early 1960s.
As it turned out, the liquid crystals reacted not only to heat, but also to an electric field. So, if you squished the crystals between two panes of thin glass with a conductive surface and applied a relatively small amount of power, they would align and become opaque. Treated with the right kind of dye, they even changed color in a predictable manner.
A few years later, RCA showed off a very primitive version of an LCD and a window that went dark when current was applied. And then…nothing. The powers that be at RCA were far from enthusiastic. “The people who were asked to commercialize [the technology] saw it as a distraction from their main electronic focus,” said George Heilmeier, one of the LCD proponents who left RCA when the project languished and eventually went on to head the Defense Advanced Research Projects Agency (DARPA), the military’s primary research and development organization and the agency responsible for the early development of the Internet.
At the time, RCA was one of the most successful companies in the world with a good portion of the cathode ray tube (CRT) market and little interest in pursuing something so esoteric. The whole enterprise lay dormant until 1968 when a Japanese television crew filmed Heilmeier demonstrating his proof of concept LCD for a documentary called Firms of the World: Modern Alchemy.
A year later an engineer at Sharp Corporation recognized the technology as a possible display solution for a pocketable calculator then on the drawing boards. RCA, apparently still not interested in the technology, offered little assistance. So, with the American company proving uncooperative and virtually nothing current published on the obscure field, the engineers at Sharp did what any good engineer would do—they watched the videotape of Heilmeier’s demonstration. Although all the bottles in the lab had their labels conveniently turned from the camera, the engineers were able to gather enough clues to at least begin their research and somehow managed to perfect the technology within a relatively short amount of time.
Then, in May of 1973 Sharp introduced the Elsi Mate EL-805 pocket calculator to the world. The unit, which housed five ICs, was less than an inch thick and weighed just 7.5 ounces. But the real surprise was that it could run for a hundred hours on a single AA battery; power consumption was estimated at 1/9000 that of other battery-powered calculators on the market. LCDs had solved the problem of the power-hungry user interface.
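The hundred-hour figure squares with a quick back-of-envelope calculation. Assuming a typical AA cell of roughly 2 amp-hours at 1.5 volts (illustrative values, not from Sharp’s spec sheet), the calculator’s average draw works out to mere tens of milliwatts:

```python
# Rough sanity check of the EL-805's claimed battery life.
# Assumed (illustrative) AA cell: about 2 Ah at 1.5 V.
capacity_wh = 2.0 * 1.5        # ~3 watt-hours stored in one AA cell
runtime_h = 100.0              # Sharp's claimed runtime on one cell
avg_draw_w = capacity_wh / runtime_h
print(f"average draw = {avg_draw_w * 1000:.0f} mW")  # about 30 mW
```

Thirty milliwatts is far below what an LED display of the era needed just to stay lit, which is why the reflective, power-sipping LCD made the hundred-hour claim plausible.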
By any standard, the Elsi Mate was a breakthrough, far surpassing even TI’s collaboration with Canon on a “portable” calculator introduced in 1970 called the Canon Pocketronic, which weighed in at nearly two pounds and was anything but pocketable at a bulky 4 by 8.2 by 1.9 inches. The TI unit didn’t even have a screen, but rather a thermal paper printer. Users could read the paper printout behind a small glass magnifying window.
The Datamath, or TI-2500, introduced in 1972, was better, measuring 3 by 5.5 by 1.7 inches and offering an LED display, but it needed a half dozen AA batteries to power up. The Datamath had actually been preceded to market by the Japanese firm Busicom, whose LE-120A, or “Handy,” needed only four AA batteries to power its LED display.
And there was a problem. LEDs, though far more energy efficient than incandescent bulbs, were still relatively piggish about power consumption when it came to small battery-operated devices. This was made painfully clear when Hamilton announced its Pulsar digital watch—the first watch with no moving parts—to great fanfare in 1970. Called a “time computer,” the Pulsar was proudly promoted as state-of-the-art. With the equivalent of 1,500 transistors in its ICs, all dedicated to keeping perfect time for the owner, it was well worth the estimated $2,100 price tag (about $10,000 in constant dollars).
Except that even before the first watches shipped to stores, engineers discovered that the LEDs were draining the batteries at an astonishing rate. According to Hamilton at the time, it was the first use of LEDs in a wristwatch, and mistakes were natural. The ICs worked fine, keeping more or less precise time, but it took the company two years to solve the user interface problem with the LEDs.
First, the engineers replaced the originally planned single silver-zinc rechargeable battery with two replaceable power cells (a certificate for a free second set was included with the purchase). To save power, the face stayed dark until the user pushed a button that lit the LEDs. According to the manual, you could push the button twenty-five times a day and the watch would run for a year. It wasn’t the most elegant engineering solution, but the novelty of the watch seemed to outweigh the small inconvenience. Johnny Carson proudly displayed one on The Tonight Show, Richard Nixon was said to have worn one, and James Bond wore one, albeit briefly, in Live and Let Die (1973).
A year later, Seiko came along with a power-sipping LCD watch that set the standard with a continuous time readout. A nearly perfect match for the low energy requirements of ICs, LCDs began appearing everywhere within a decade, becoming the user interface of choice for a new generation of portable gadgets.