When Science Goes Wrong: Twelve Tales From the Dark Side of Discovery - Simon LeVay (2008)

EPILOGUE

There but for the grace of God go I. That is my own reaction to the stories just recounted, and I think most scientists would share it. There are so many opportunities for science to go wrong that scientists who reach the end of their careers without stumbling on one of them can count themselves not just smart or circumspect or morally superior, but also fortunate.

Of course, I picked dramatic or memorable examples of scientific failure for this book. They’re not typical of how science can go wrong, because mostly it does so in more mundane ways. Just as the spectacular successes of science are mere islands in a sea of worthy journeywork, so the spectacular failures are outnumbered by those that are slightly regrettable, modestly burdensome or just partially incorrect. It is the destiny of most scientists to be neither canonised nor vilified by the judgment of history, but to be forgotten.

Similarly, most scientific accidents don’t cause dozens of deaths, as the anthrax release at Sverdlovsk did. Most don’t kill anyone, and, of those that do, most kill just one person - the very person whose mistake caused the accident. One example: in August of 1996, Dartmouth College chemistry professor Karen Wetterhahn spilled a drop or two of dimethylmercury on her gloved hand. The drops penetrated both the glove and her skin, condemning her to a slow, painful death from mercury poisoning.

The example of scientific fraud recounted in this book - Victor Ninov’s alleged fabrication of data supporting the discovery of a new chemical element - is also an extreme, unrepresentative case. Yes, there have been other cases that rival or outdo his: the fraudulent claim by South Korea’s Hwang Woo-Suk to have created human stem cells by cloning is the most dramatic recent example. But most fraud consists of slight prettying-up or cherry-picking of data, omission of references to prior work in order to make one’s own work seem more original, self-plagiarism and the like.

Indeed, fraud merges imperceptibly into acceptable scientific practice. It’s common, for example, for scientists to write up accounts of their research in which the sequence of experiments does not correspond to historical reality, or to introduce a paper with a hypothesis that wasn’t actually formulated until after some of the experimental results came in. This is often thought to be justified: it aids comprehension to present the study as a logical sequence of ideas and experiments. But such deception causes harm if it leaves the reader thinking that a result was predicted by a hypothesis, or that a hypothesis was stimulated by prior results, when neither was the case. It can make a scientist’s conclusions seem more believable than the evidence actually warrants.

It would be an interesting exercise to go back in the scientific literature - say, 20 years or so - and pick a random selection of 100 papers and ask, ‘Were they right in their main findings and conclusions, and were they as original as their authors claimed?’ I don’t know what fraction of them would have significant faults, but it would probably be substantial and certainly much higher than most non-scientists would believe.

Most likely, those pieces of erroneous research would not have gone wrong in any memorable way - no conscious fraud, no switched labels, no blatant plagiarism - nor would they probably have any dire consequences. They probably resulted from countless trivial errors and omissions - the use of reagents whose specificity was less than expected, the selection of human subjects who were not fully representative of the group being investigated, the use of inappropriate statistical tests or a lack of familiarity with the prior literature. Probably, in the ensuing decades, no one ever took the trouble to point out that the studies were wrong or to ask why; many scientific papers are not cited even once by other scientists, after all. The rising tide of scientific progress simply erases them from collective consciousness.

Still, science does sometimes go wrong in ways that are truly dramatic - accidents or drug trials in which people are injured or killed, erroneous claims that grab media attention and that take years to set right. And sometimes scientists themselves are appalled by the uses to which their discoveries are put by others. Take foetal ultrasound monitoring, a technique pioneered by the Scottish gynaecologist and anti-abortion campaigner Ian Donald. ‘My own personal fears are that my researches into early intrauterine life may yet be misused towards its more accurate destruction,’ wrote Donald in 1972. A decade or so later, ultrasound was being used to facilitate the abortion of millions of female foetuses in the third world.

Can anything be done that might cause science to go wrong less often? Should anything be done, even? These are thorny questions that are probably best left to professional ethicists or administrators or philosophers of science, but here are a few thoughts.

For a start, it’s worth pointing out that it may take years or decades for the ill-effects of scientific discoveries and inventions to become evident. Take a field of applied science that I haven’t covered in this book - industrial chemistry. In 1901, a German chemist, Wilhelm Normann, developed a process for turning vegetable oils into solid fats by hydrogenation. At the time, this invention seemed like an unalloyed benefit to humanity: it provided the means to produce inexpensive edible fats that resisted spoilage. It took more than half a century, and millions of premature deaths from heart disease, before the harmful effects of these fats on human health became apparent. In 1928, General Motors chemist Thomas Midgley, Jr, invented chlorofluorocarbon refrigerants - Freons. Again, the invention seemed to offer nothing but benefit to humanity, and it took decades before the downside - the destructive effect of these chemicals on the Earth’s protective ozone layer - was understood. Looking back, it’s hard to see how any programme of regulatory oversight could have anticipated these dire consequences, given the lack of relevant knowledge at the time. In addition, there may never be agreement on the net benefit or harm of a discovery. It may depend on one’s views about abortion, for example, in the case of Donald’s invention. Thus, preventing the long-term ill-effects of scientific inventions and discoveries is about as hard as predicting the future of civilisation and it is probably pointless to try.

Certainly, there can and should be oversight of science, especially in its applications. In medical research, for example, there are the Institutional Review Boards and national regulatory bodies that do their best to see that research using human subjects is conducted ethically and safely. IRBs came up in several chapters of this book. I mentioned how their absence in the 1930s permitted unethical research such as Mary Tudor’s stuttering study to go forward. I described how Robert Iacono circumvented IRBs and all other regulatory oversight by taking his patient to China for experimental surgery for Parkinson’s disease. I also recounted how Jesse Gelsinger died needlessly in a clinical trial that was overseen by a whole web of IRBs and government agencies.

It can be argued, however, that regulatory control and safety consciousness are themselves a significant impediment to science - that they actually cause more harm than they avert. If we look back at some of the historical highlights of medical research, for example, we see what may look in hindsight like reckless risk-taking and a near-total lack of concern for ethical considerations.

William Harvey discovered the circulation of the blood through a series of experiments on unanaesthetised animals that would turn the stomach of a modern reader. Edward Jenner picked an eight-year-old boy at random to test his first smallpox vaccine, then later inoculated him with smallpox in an effort to see if he had become immune to the disease - all, apparently, without so much as a by-your-leave. Walter Reed tested his mosquito theory for the transmission of yellow fever by having his colleagues expose themselves to insects that had fed on previous victims, causing one of them to develop a fatal infection. Thomas Starzl’s first liver-transplant patient died shortly after surgery, as did every one of his patients over the next four years. Who would persist in the face of such odds: a madman, or a visionary?

To wish that none of these things had happened is to wish that none of those great advances saw the light of day. Risk-taking is part of scientific exploration, just as it was part of terrestrial exploration. No one expected that all those ships that set out for the Spice Islands would return safely home, and many didn’t. Maybe we should allow the quest for the Magic Islands of the periodic table to take its victims too, even if they be self-inflicted victims like Victor Ninov.

This would be particularly true if there is a certain indivisible character trait that predisposes people both to the taking of great risks and to great scientific achievement. Many risk-taking scientists never make great discoveries, certainly, but few scientists make great discoveries without taking great risks - if only the risk of devoting a lifetime to the pursuit of a scientific will-o’-the-wisp. ‘My concern,’ James Wilson told the New York Times after Jesse Gelsinger’s death, ‘is, I’m going to get timid, that I’ll get risk averse.’ Which, in his mind at least, meant an end to productive science.

Of course, some of the episodes described in this book happened not in the process of scientific discovery, but during the application of scientific procedures to fairly mundane tasks - the identification of a rapist, the prediction of tomorrow’s weather, the siting of a dam or the production of a germ-warfare agent. In such cases, stricter oversight could hardly cramp scientific creativity. In fact, as a result of disasters like the one that struck the St. Francis Dam, large engineering projects are now tightly regulated and reviewed, greatly reducing the risk that such a calamity will be repeated. The Houston Crime Lab now operates under much closer oversight than was the case when Josiah Sutton was wrongly convicted. Weather forecasters are better trained and better equipped than they were before the Great October Storm. And germ-warfare agents, hopefully, are no longer being produced.

Even among the mishaps that arose in the course of genuine scientific discovery, some might have been avoided by steps that did not impinge greatly on the process of science itself. Getting a volcanologist to wear a hard hat or to heed seismological warnings hardly seems like a major impediment to the study of volcanoes. Asking a neuroscientist to verify the identity of the drugs he is testing doesn’t seem like putting a major roadblock in his path. Still, there may be an irreducible core of risk in science that cannot be eliminated without eliminating science’s rewards. When one very successful scientist, tissue-engineering pioneer Robert Langer, was recently appointed to the Board of Directors of MIT’s Whitehead Institute, the Institute’s announcement included the following: ‘Bob Langer’s work and life typifies so many of the strengths we aspire to at Whitehead - brash, audacious, risk-taking science.’ Somehow, Langer has parlayed that risk-taking trait into 800 scientific papers and many important discoveries, while at the same time avoiding all the traps that risk-taking makes scientists prone to. I sincerely hope that his science never does ‘go wrong’ in any serious way, but if it should do so, I hope that he and others in his situation are judged for the entirety of their work, not for that one misstep. For it is so often just one such misstep - one momentary spasm of greed, haste, carelessness, credulity or plain bad luck - that leads to disaster.