
Dark Age America: Climate Change, Cultural Collapse, and the Hard Future Ahead - John Michael Greer (2016)

Chapter 6. THE SUICIDE OF SCIENCE

Every human society likes to think that its core cultural and intellectual projects, whatever those happen to be, are the be-all and end-all of human existence. As each society rounds out its trajectory through time with the normal process of decline and fall, in turn, its intellectuals face the dismaying experience of watching those projects fail and betray the hopes that were so fondly confided to them.

It’s important not to underestimate the shattering force of this experience. The plays of Euripides offer cogent testimony of the despair felt by ancient Greek thinkers as their grand project of reducing the world to rational order dissolved in a chaos of competing ideologies and brutal warfare. Fast-forward most of a millennium, and Augustine’s The City of God anatomized the comparable despair of Roman intellectuals at the failure of their dream of a civilized world at peace under the rule of law.

Skip another millennium and a bit, and the collapse of the imagined unity of Christendom into a welter of contending sects and warring nationalities had a similar impact on cultural productions of all kinds as the Middle Ages gave way to the era of the Reformation. No doubt when people a millennium or so from now assess the legacies of the twenty-first century, they’ll have no trouble tracing a similar tone of despair in our arts and literature, driven by the failure of science and technology to live up to the messianic fantasies of perpetual progress that have been loaded onto them since Francis Bacon’s time.

There are several reasons why such projects so reliably fail. To begin with, of course, the grand designs of intellectuals in a mature society normally presuppose access to the kind and scale of resources that such a society supplies to its more privileged inmates. When the resource needs of an intellectual project can no longer be met, it doesn’t matter how useful it would be if it could be pursued further, much less how closely aligned it might happen to be to somebody’s notion of the meaning and purpose of human existence.

Furthermore, as a society begins its one-way trip down the slippery chute labeled “Decline and Fall,” and its ability to find and distribute resources starts to falter, its priorities necessarily shift. Triage becomes the order of the day, and projects that might ordinarily get funding end up out of luck so that more immediate needs can get as much of the available resource base as possible. A society’s core intellectual projects tend to face this fate a good deal sooner than other more pragmatic concerns. When the barbarians are at the gates, funds that might otherwise be used to pay for schools of philosophy tend to get spent hiring soldiers instead.

Modern science, the core intellectual project of the contemporary industrial world, and technological complexity, its core cultural project, are as subject to these same two vulnerabilities as were the corresponding projects of other civilizations. Yes, I’m aware that this is a controversial claim, but I’d argue that it follows necessarily from the nature of both projects. Scientific research, like most things in life, is subject to the law of diminishing returns; what this means in practice is that the more research has been done in any field, the greater an investment is needed on average to make the next round of discoveries. Consider the difference between the absurdly cheap hardware that was used in the late nineteenth century to detect the electron and the fantastically expensive facility that had to be built to detect the Higgs boson; that’s the sort of shift in the cost-benefit ratio of research that I have in mind.

A civilization with ample resources and a thriving economy can afford to ignore the rising cost of research and can gamble that new discoveries will be valuable enough to cover the costs. A civilization facing resource shortages and economic contraction can’t. If the cost of new discoveries in particle physics continues to rise along the same curve that gave us the Higgs boson’s multibillion-Euro price tag, for example, the next round of experiments, or the one after that, could easily rise to the point that, in an era of resource depletion, economic turmoil, and environmental payback, no consortium of nations on the planet will be able to spare the resources for the project. Even if the resources could theoretically be spared, furthermore, there will be many other projects begging for them, and it’s far from certain that another round of research into particle physics would be the best available option.

The project of technological complexification is even more vulnerable to the same effect. Though true believers in progress like to think of new technologies as replacements for older ones, it’s actually more common for new technologies to be layered over existing ones. Consider, as one example out of many, the US transportation grid, in which air lanes, freeways, railroads, local roads, and navigable waterways are all still in use, reflecting most of the history of transport on this continent from colonial times to the present. The more recent the transport mode, by and large, the more expensive it is to maintain and operate, and the exotic new transportation schemes floated in recent years are no exception to that rule.

Now factor in economic contraction and resource shortages. The most complex and expensive parts of the technostructure tend also to be the most prestigious and politically influential, and so the logical strategy of a phased withdrawal from unaffordable complexity—for example, shutting down airports and using the proceeds to make good some of the impact of decades of malign neglect on the nation’s rail network—is rarely if ever a politically viable option. As contraction accelerates, the available resources come to be distributed by way of a political free-for-all in which rational strategies for the future play no significant role. In such a setting, will new technological projects be able to get the kind of ample funding they’ve gotten in the past? Let’s be charitable and simply say that this isn’t likely.

Thus the end of the age of fossil-fueled extravagance means the coming of a period in which science and technology will have a very hard row to hoe, with each existing or proposed project having to compete for a slice of a shrinking pie of resources against many other equally urgent needs. That in itself would be a huge challenge. What makes it much worse is that many scientists, technologists, and their supporters in the lay community are currently behaving in ways that all but guarantee that when the resources are divided up, science and technology will draw the short sticks.

It has to be remembered that science and technology are social enterprises. They don’t happen by themselves in some sort of abstract space insulated from the grubby realities of human collective life. Laboratories, institutes, and university departments are social constructs, funded and supported by the wider society. That funding and support doesn’t happen by accident; it exists because people outside the scientific community believe that the labors of scientists and engineers will benefit the rest of society to a degree that outweighs the costs.

Historically speaking, it’s only in exceptional circumstances that something like scientific research gets as large a cut of a society’s total budget as it does today.1 As recently as a century ago, the sciences received only a tiny fraction of the support they currently receive; a modest number of university positions with limited resources provided most of what institutional backing the sciences got, and technological progress was largely a matter of individual inventors pursuing projects on their own nickel in their off hours—consider the Wright brothers, who carried out the research that led to the first successful airplane in between waiting on customers in their bicycle shop, and without benefit of research grants.

The transformation of scientific research and technological progress from the part-time activity of an enthusiastic fringe culture to its present role as a massively funded institutional process took place over the course of the twentieth century. Plenty of things drove that transformation, but among the critical factors were the successful efforts of scientists, engineers, and the patrons and publicists of science and technology to make a case for science and technology as forces for good in society, producing benefits that would someday be extended to all. In the boom times that followed the Second World War, it was arguably easier to make that case than it had ever been before, but it took a great deal of work—not merely propaganda but actual changes in the way that scientists and engineers interacted with the public and met their concerns—to overcome the public wariness toward science and technology that made the mad scientist such a stock figure in the popular media of the time.

These days, the economic largesse that made it possible for the latest products of industry to reach most American households is increasingly a fading memory, and that’s made life rather more difficult for those who argue for science and technology as forces for good. Still, there’s another factor, which is the increasing failure of the proponents of institutional science and technology to make that case in any way that convinces the general public.

Here’s a homely example. I have a friend who suffered from severe asthma. She was on four asthma medications, each accompanied by its own bevy of nasty side effects, which more or less kept the asthma under control without curing it. After many years of this, she happened to learn that another health problem she had was associated with a dietary allergy, cut the offending food out of her diet, and was startled and delighted to find that her asthma cleared up as well.

After a year with no asthma symptoms, she went to her physician, who expressed surprise that she hadn’t had to come in for asthma treatment in the meantime. She explained what had happened. The doctor admitted that the role of that allergy as a cause of severe asthma was well-known. When she asked the doctor why she hadn’t been told this, so she could make an informed decision, the only response she got was, and I quote, “We prefer to medicate for that condition.”

Most of the people I know have at least one such story to tell about their interactions with the medical industry, in which the convenience and profit of the industry took precedence over the well-being of the patient. Quite a few have simply stopped going to physicians, since the side effects from the medications they received have been reliably worse than the illness they had when they went in. Since today’s mainstream medical industry founds its publicity on its claims to a scientific basis, the growing public unease with mainstream industrial medicine splashes over onto science in general. For that matter, whenever some technology seems to be harming people, it’s a safe bet that somebody in a lab coat with a prestigious title will appear on the media insisting that everything’s all right. Some of the time, the person in the lab coat is correct, but it’s happened often enough that everything was not all right that the trust once reposed in scientific experts is getting noticeably threadbare these days.

Public trust in scientists has taken a beating for several other reasons as well. One of the more awkward of these is the way that the vagaries of scientific opinion concerning climate change have been erased from our collective memory by one side in the current climate debate. It’s probably necessary for me to note here that I find the arguments for disastrous anthropogenic climate change far stronger than the arguments against it, and I have discussed the likely consequences of our civilization’s maltreatment of the atmosphere repeatedly in my books, as well as in an earlier chapter of this one. The fact remains that in my teen years, in the 1970s and 1980s, scientific opinion was still sharply divided on the subject of future climates, and a significant number of experts believed that the descent into a new ice age was likely.

I would encourage anyone who doubts that claim to get past the shouting and obtain copies of the following books, which document that fact: The Weather Machine by Nigel Calder, After the Ice Age by E. C. Pielou, and Ice Ages by Windsor Chorlton, which was part of Time Life’s Planet Earth series. (There are many others, but these are still readily available on the used-book market.) The authors were by no means nonentities. Nigel Calder was a highly respected science writer and media personality, and E. C. Pielou is still one of the most respected Canadian ecologists. Windsor Chorlton occupied a less exalted station in the food chain of science writers, but all the volumes in the Planet Earth series were written in consultation with acknowledged experts and summarized the state of the art in the earth sciences at the time of publication.

Because certain science fiction writers have been among the most vitriolic figures denouncing those who remember the warnings of an imminent ice age, I’d also encourage readers who have their doubts to pick up copies of The Winter of the World by Poul Anderson and The Time of the Great Freeze by Robert Silverberg, both of which are set in an ice age future. My younger readers may not remember these authors; those who do will know that both of them were respected, competent SF writers who paid close attention to the scientific thought of their time and wrote about futures defined by an ice age at the time when this was still a legitimate scientific extrapolation.

These books exist. I still own copies of most of them. Those of my readers who take the time to find and read them will discover, in each nonfiction volume, a thoughtfully developed argument suggesting that the Earth would soon descend into a new ice age, and in each of the novels, a lively story set in a future shaped by the new ice age in question. Those arguments turned out to be wrong, no question. They were made by qualified experts, at a time when the evidence concerning climate change was a good deal more equivocal than it’s become since that time, and the more complete evidence that was gathered later settled the matter; but the arguments and the books existed, many people alive today know that they existed, and when scientists associated with climate activism insist that they didn’t, the result is a body blow to public trust in science.

It’s far from the only example of the same kind. Many of my readers will remember the days when all cholesterol was bad and polyunsaturated fats were good for you. Most of my readers will recall drugs that were introduced to the market with loud assurances of safety and efficacy, and then withdrawn in a hurry when those assurances turned out to be dead wrong. Those readers who are old enough may even remember when continental drift was being denounced as the last word in pseudoscience, a bit of history that a number of science writers these days claim never happened.2 Support for science depends on trust in scientists, and that’s become increasingly hard to maintain at a time when it’s unpleasantly easy to point to straightforward falsifications of the kind just outlined.

On top of all this, there’s the impact of the atheist movement on public debates concerning science. I hasten to say that I know quite a few atheists, and the great majority of them are decent, compassionate people who have no trouble with the fact that their beliefs aren’t shared by everyone around them. Unfortunately, the atheists who have managed to seize the public limelight rarely merit description in those terms. Most of my readers will be wearily familiar with the sneering bullies who so often claim to speak for atheism these days; I can promise you that as a public figure in a minority faith, I get to hear from them far too often for my taste.

Mind you, there’s a certain wry amusement in the way that the resulting disputes are playing out in contemporary culture. Even diehard atheists have begun to notice that every time Richard Dawkins opens his mouth, a couple of dozen people decide to give God a second chance. Still, the dubious behavior of the “angry atheist” crowd affects the subject of this chapter at least as powerfully as it does the field of popular religion. A great many of today’s atheists claim the support of scientific materialism for their beliefs, and no small number of the most prominent figures in the atheist movement hold down day jobs as scientists or science educators. In the popular mind, as a result, these people, their beliefs, and their behavior are quite generally conflated with science as a whole.

The implications of all these factors are best explored by way of a simple thought experiment. Let’s say, dear reader, that you’re an ordinary American citizen. Over the last month, you’ve heard one scientific expert insist that the latest fashionable heart drug is safe and effective, while two of your drinking buddies have told you in detail about the ghastly side effects it gave them and three people you know have died from the side effects of other drugs similarly pronounced safe and effective. You’ve heard another scientific expert denounce acupuncture as crackpot pseudoscience, while your Uncle Jeff, who messed up his back in Iraq, got more relief from three visits to an acupuncturist than he got from six years of conventional treatment. You’ve heard still another scientific expert claim yet again that nobody ever said back in the 1970s that the world was headed for a new ice age, and you read the same books I did when you were in high school and know the expert is either misinformed or lying. Finally, you’ve been on the receiving end of yet another diatribe by yet another atheist of the sneering-bully type just mentioned, who vilified your religious beliefs in terms that would count as hate speech in most other contexts and used the prestige of science to justify his views and excuse his behavior.

Given all this, will you vote for a candidate who says that you have to accept a cut in your standard of living in order to keep laboratories and university science departments fully funded?

No, I didn’t think so.

In miniature, that’s the crisis faced by science as we move into the endgame of industrial civilization, just as comparable crises challenged Greek philosophy, Roman jurisprudence, and medieval theology in the endgames of their own societies. When a society assigns one of its core intellectual or cultural projects to a community of specialists, those specialists need to think—and think hard—about the way their words and actions will come across to those outside that community. That’s important enough when the society is still in a phase of expansion; when it tips over its historic peak and begins the long road down, it becomes an absolute necessity—but it’s a necessity that, very often, the specialists in question never get around to recognizing until it’s far too late.

For all the reasons just surveyed, it’s unlikely that science as a living tradition will be able to survive in its current institutional framework as the Long Descent picks up speed around us. It’s by no means certain that it will survive at all. The abstract conviction that science is humanity’s best hope for the future, even if it were more broadly held than it is, offers little protection against the consequences of popular revulsion driven by the corruptions, falsifications, and abusive behaviors sketched out above, especially when this is added to the hard economic realities that beset any civilization’s core intellectual projects in its twilight years. The resurgence of religion in the declining years of a culture, a commonplace of history in the run-up to previous dark ages, could have taken many forms in the historical trajectory of industrial society; at this point I think it’s all too likely to contain a very large dollop of hostility toward science and complex technology.

* * *

It’s important, in order to make sense of the fate of science and technology in the impending dark age, to recall that these things function as social phenomena, and fill social roles, in ways that have more than a little in common with the intellectual activities of civilizations of the past. That doesn’t mean, as some postmodern theorists have argued, that science and technology are purely social phenomena;3 both of them have to take the natural world into account, and so have an important dimension that transcends the social. That said, the social dimension also exists, and since human beings are social mammals, that dimension has an immense impact on the way that science and technology function in this or any other human society.

From a social standpoint, it’s thus not actually all that relevant that the scientists and engineers of contemporary industrial society can accomplish things with matter and energy that weren’t within the capacities of Babylonian astrologer-priests, Hindu gurus, Chinese literati, or village elders in New Guinea. Each of these groups has been assigned a particular social role, the role of interpreter of nature, by their respective societies, and each of them is accorded substantial privileges for fulfilling the requirements of that role. It’s therefore possible to draw precise and pointed comparisons between the different bodies of people filling that very common social role in different societies.

The exercise is worth doing, not least because it helps sort out the far-from-meaningless distinction between the aspects of modern science and technology that unfold from their considerable capacities for doing things with matter and energy, on the one hand, and the aspects of modern science and technology that unfold from the normal dynamics of social privilege, on the other. What’s more, since modern science and technology weren’t around in previous eras of decline and fall, but privileged intellectual castes certainly were, recognizing the common features that unite today’s scientists, engineers, and promoters of scientific and technological progress with equivalent groups in past civilizations makes it a good deal easier to anticipate the fate of science and technology in the decades and centuries to come.

A specific example will be more useful here than any number of generalizations, so let’s consider the fate of philosophy in the waning years of the Roman world. The extraordinary intellectual adventure we call classical philosophy began in the Greek colonial cities of Ionia around 585 BCE, when Thales of Miletus first proposed a logical rather than a mythical explanation for the universe, and proceeded through three broad stages from there. The first stage, that of the so-called Presocratics, focused on the natural world, and the questions it asked and tried to answer can more or less be summed up as “What exists?” Its failures and equivocal successes led the second stage, which extended from Socrates through Plato and Aristotle to the Old Academy and its rivals, to focus attention on different questions, which can be summed up just as neatly as “How can we know what exists?”

That was an immensely fruitful shift in focus. It led to the creation of classical logic—one of the great achievements of the human mind—and it also drove the transformations that turned mathematics from an assortment of rules of thumb to an architecture of logical proofs and thus laid the foundations on which Newtonian physics and other quantitative sciences eventually built. Like every other great intellectual adventure of our species, though, it never managed to fulfill all the hopes that had been loaded onto it. The philosopher’s dream of human society made wholly subject to reason turned out to be just as unreachable as the scientist’s dream of the universe made wholly subject to the human will. As that failure became impossible to ignore, classical philosophy shifted focus again, to a series of questions and attempted answers that amounted to “Given what we know about what exists, how should we live?”

That’s the question that drove the last great age of classical philosophy, the age of the Epicureans, the Stoics, and the Neoplatonists. At first these and other schools carried on lively and far-reaching debates, but as the Roman world stumbled toward its end under the burden of its unsolved problems, the philosophers closed ranks. Debates continued, but they focused more and more tightly on narrow technical issues within individual schools. What’s more, the schools themselves closed ranks; pure Stoic, Aristotelian, and Epicurean philosophy gradually dropped out of fashion, and by the fourth century CE, a Neoplatonism enriched with bits and pieces of all the other schools stood effectively alone, the last school standing in the long struggle Thales had kicked off ten centuries before.

Now, I have to confess to a strong personal partiality for the Neoplatonists. It was from Plotinus and Proclus, respectively the first and last great figures of classical Neoplatonism, that I first grasped why philosophy matters and what it can accomplish, and for all its problems—like every philosophical account of the world, it has its share—Neoplatonism still makes intuitive sense to me in a way that few other philosophies do. What’s more, the men and women who defended classical Neoplatonism in its final years were people of great intellectual and personal dignity, committed to proclaiming the truth as they knew it in the face of intolerance and persecution that ended up costing no few of them their lives.4

The awkward fact remains that classical philosophy, like modern science, functioned as a social phenomenon and filled certain social roles. The intellectual power of the final Neoplatonist synthesis and the personal virtues of its last proponents have to be balanced against its blind support of a deeply troubled social order. In all the long history of classical philosophy, it never seems to have occurred to anyone that debates about the nature of justice might reasonably address, say, the ethics of slavery. While a stonecutter like Socrates could take an active role in philosophical debate in Athens in the fifth century BCE, furthermore, the institutionalization of philosophy meant that by the last years of classical Neoplatonism, its practice was restricted to those with ample income and leisure, and its values inevitably became more and more closely tied to the social class of its practitioners.

That’s the thing that drove the ferocious rejection of philosophy by the underclass of the age, the slaves and urban poor who made up the vast majority of the population throughout the Roman Empire and who received little if any benefit from the intellectual achievements of their society. To them, the subtleties of Neoplatonist thought were irrelevant to the increasingly difficult realities of life on the lower end of the social pyramid in a brutally hierarchical and increasingly dysfunctional world. That’s one important reason—there were others, some of which will be considered in a later chapter—why so many of them turned for solace to a new religious movement from the eastern fringes of the empire, a despised sect that claimed that God had been born on Earth as a mere carpenter’s son and communicated through his life and death a way of salvation that privileged the poor and downtrodden above the rich and well-educated.

It was as a social phenomenon, filling certain social roles, that Christianity attracted persecution from the imperial government, and it was in response to Christianity’s significance as a social phenomenon that the imperial government executed an about-face under Constantine and took the new religion under its protection. Like plenty of autocrats before and since, Constantine clearly grasped that the real threat to his position and power came from other members of his own class—in his case, the patrician elite of the Roman world—and saw that he could undercut those threats and counter potential rivals through an alliance of convenience with the leaders of the underclass. That’s the political subtext of the Edict of Milan, which legalized Christianity throughout the empire and brought it imperial patronage.

The patrician class of late Roman times, like its equivalent today, exercised power through a system of interlocking institutions from which outsiders were carefully excluded, and it maintained a prickly independence from the central government. By the fourth century, tensions between the bureaucratic imperial state and the patrician class, with its local power bases and local loyalties, were rising toward a flashpoint. The rise of Christianity thus gave Constantine and his successors an extraordinary opportunity.

Most of the institutions that undergirded patrician power were linked to Pagan religion. Local senates, temple priesthoods, philosophical schools, and other elements of elite culture normally involved duties drawn from the traditional faith. A religious pretext to strike at those institutions must have seemed as good as any other, and the Christian underclass offered one other useful feature: mobs capable of horrific acts of violence against prominent defenders of the patrician order.

That was why, for example, a Christian mob in 415 CE dragged the Neoplatonist philosopher Hypatia from her chariot as she rode home from her teaching gig at the Academy in Alexandria, cudgeled her to death, cut the flesh from her bones with sharpened oyster shells—the cheap pocketknives of the day—and burned the bloody gobbets to ashes. What doomed Hypatia was not only her defense of the old philosophical traditions but also her connection to Alexandria’s patrician class. Her ghastly fate was as much the vengeance of the underclass against the elite as it was an act of religious persecution. She was far from the only victim of violence driven by those paired motives. It was as a result of such pressures that, by the time the emperor Justinian ordered the last academies closed in 529 CE, the classical philosophical tradition was essentially dead.

That’s the sort of thing that happens when an intellectual tradition becomes too closely affiliated with the institutions, ideologies, and interests of a social elite. If the elite falls, so does the tradition—and if it becomes advantageous for anyone else to target the elite, the tradition can be a convenient target, especially if it’s succeeded in alienating most of the population outside the elite in question.

Modern science is extremely vulnerable to such a turn of events. There was a time when the benefits of scientific research and technological development routinely reached the poor as well as the privileged, but that time has long since passed. These days, the benefits of research and development move up the social ladder, while the costs and negative consequences move down. Nearly all the jobs eliminated by automation, globalization, and the computer revolution, for example, sat at the bottom end of the job market, and what replaced them was a handful of jobs that require far more extensive (and expensive) education. In the same way, changes in US health care in recent decades have disproportionately benefited the privileged, while subjecting most others to substandard care at prices so high that medical bills are the leading cause of bankruptcy in the US today.5

It’s all very well for the promoters of progress to gabble on about science as the key to humanity’s destiny, but the poor know from hard experience that the destiny thus marketed isn’t for them. To the poor, progress means fewer jobs with lower pay and worse conditions, more surveillance and impersonal violence carried out by governments that show less and less interest in paying even lip service to the concept of civil rights, a rising tide of illnesses caused by environmental degradation and industrial effluents, and glimpses from afar of an endless stream of lavishly advertised tech-derived trinkets, perks and privileges that they will never have. Between the poor and any appreciation for modern science stands a wall made of failed schools, defunded libraries, denied opportunities, and the systematic use of science and technology to benefit other people at their expense. Such a wall, it probably bears noting, makes a good surface against which to sharpen oyster shells.

It seems improbable that anything significant will be done to change this picture until it’s far too late for such changes to have any meaningful effect. Barring dramatic transformations in the distribution of wealth, the conduct of public education, the funding for such basic social amenities as public libraries, and a great deal more, the underclass of the modern industrial world can be expected to grow more and more disenchanted with science as a social phenomenon in our culture, and to turn instead—as their equivalents in the Roman world and so many other civilizations did—to some tradition from the fringes that places itself in stark opposition to everything modern scientific culture stands for. Once that process gets under way, it’s simply a matter of waiting until the corporate elite that funds science, defines its values, and manipulates it for PR purposes becomes sufficiently vulnerable that some other power center decides to take it out, using institutional science as a convenient point of attack.

Saving anything from the resulting wreck will be a tall order. Still, the same historical parallel discussed above offers some degree of hope. The narrowing focus of classical philosophy in its last years meant, among other things, that a substantial body of knowledge that had once been part of the philosophical movement was no longer identified with it by the time the cudgels and shells came out, and much of it was promptly adopted by Christian clerics and monastics as useful for the Church. That’s how classical astronomy, music theory, and agronomy, among other things, found their way into the educational repertoire of Christian monasteries and nunneries in the dark ages. What’s more, once the power of the patrician class was broken, a carefully sanitized version of Neoplatonist philosophy found its way into Christianity, where it’s still a living presence in some denominations today.

Something along the same lines may well happen again as the impending deindustrial dark age grows closer. Certainly today’s defenders of science are doing their best to shove a range of scientific viewpoints out the door. There’s an interesting distinction between the sciences that get this treatment and those that don’t: the ones being flung aside are those that focus on observation of natural systems rather than control of artificial ones, and any science that raises doubts about the possibility or desirability of infinite technological expansion can likewise expect to find itself shivering in the dark outside in very short order.

Thus it’s entirely possible that observational sciences, if they can squeeze through the bottleneck imposed by the loss of funding and prestige, will be able to find a new home in whatever intellectual tradition replaces modern scientific rationalism in the deindustrial future. It’s at least as likely that such dissident sciences as ecology, which has always raised challenging questions about the fantasies of the manipulative sciences, may find themselves eagerly embraced by a future intellectual culture that has no trouble at all recognizing the futility of those fantasies. That said, it’s still going to take some hard work to preserve what’s been learned in those fields—and it’s also going to take more than the usual amount of prudence and plain dumb luck not to get caught up in the conflict when the sharp edge of the shell gets turned on modern science.

* * *

All the factors already discussed feed into the widening chasm between the sciences and the rest of human culture that C. P. Snow discussed in his famous work The Two Cultures.6 That chasm has opened up a good deal further since Snow’s time, and its impact on the future deserves discussion here, not least because it’s starting to become impossible to ignore, even among those who accept the vision of the universe presented by contemporary scientific thought.

The driving force here is the extreme mismatch between the way science works and the way scientists expect their claims to be received by the general public. Within the community of researchers, the conclusions of the moment are, at least in theory, open to constant challenge, but only from within the scientific community. The general public is not invited to take part in those challenges. Quite the contrary; it’s supposed to treat the latest authoritative pronouncement as truth pure and simple, even when that contradicts the authoritative pronouncements of six months before.

That the authoritative pronouncements of science do contradict themselves on a regular basis will be obvious, as already noted, to anyone who remembers the days when polyunsaturated fats were supposed to be good for you and all cholesterol was bad—but woe betide anyone outside the scientific community who mentions this when a scientist trots out the latest authoritative pronouncement. The reaction is as predictable as it is counterproductive: how dare ordinary citizens express an opinion on the subject!

Now, of course, there are reasons why scientists might not want to field a constant stream of suggestions and challenges from people who don’t have training in relevant disciplines. The fact remains that expecting people to blindly accept whatever scientists say, when scientific opinion on so many subjects has been whirling around like a weathercock for decades now, is not a strategy with a long shelf life. Sooner or later people start asking why they should take anything a scientist says on faith, and for many people in North America today, “sooner or later” has already arrived.

There’s another, darker, reason why such questions are increasingly common just now. I’m thinking here of the recent revelation that the British scientists tasked by the government with making dietary recommendations have been taking payola of various kinds from the sugar industry.7 That’s hardly a new thing these days. Especially but not only in those branches of science concerned with medicine, pharmacology, and nutrition, the prostitution of the scientific process by business interests has become an open scandal. When a scientist gets behind a podium and makes a statement about the safety or efficacy of a drug, a medical treatment, or what have you, the first question asked by an ever-increasing number of people outside the scientific community these days is “Who’s paying him?”

It would be bad enough if that question were being asked because of scurrilous rumors or hostile propaganda. Unfortunately, it’s being asked because there’s nothing particularly unusual about the behavior of the British scientists mentioned above.8 These days, in any field where science comes into contact with serious money, scientific studies are increasingly just another dimension of marketing. From influential researchers being paid to put their names on dubious studies to give them unearned credibility, to the systematic concealment of “outlying” data that doesn’t support the claims made for this or that lucrative product, the corruption of science is an ongoing reality, and one that existing safeguards within the scientific community are not effectively countering.

Scientists have by and large treated the collapse in scientific ethics as an internal matter. That’s a lethal mistake, because the view that matters here is the view from outside. What looks to insiders like a manageable problem that will sort itself out in time, looks from outside the laboratory and the faculty lounge like institutionalized corruption on the part of a self-proclaimed elite whose members cover for each other and are accountable to no one. It doesn’t matter, by the way, how inaccurate that view is in specific cases, how many honest men and women are laboring at lab benches, or how overwhelming the pressure to monetize research that’s brought to bear on scientists by university administrations and corporate sponsors. None of that finds its way into the view from outside, and in the long run, the view from outside is the one that counts.

The corruption of science by self-interest is an old story, and unfortunately it’s most intense in those fields where science impacts the lives of nonscientists most directly: medicine, pharmacology, and nutrition. I mentioned earlier a friend whose lifelong asthma, which landed her in the hospital repeatedly and nearly killed her twice, was cured at once by removing a common allergen from her diet. The physician’s comment, “We prefer to medicate for that,” makes perfect sense from a financial perspective, since a patient who’s cured of an ailment is a good deal less lucrative for the doctor and the rest of the medical profession than one who has to keep on receiving regular treatments and prescriptions. As a result of that interaction among others, though, the friend in question has lost most of what respect she once had for mainstream medicine, and is now using herbalism to meet her health care needs.

It’s an increasingly common story these days, and plenty of other accounts could be added here. The point I want to make, though, is that it’s painfully obvious that the physician who preferred to medicate never thought about the view from outside. I have no way of knowing what combination of external pressures and personal failings led that physician to conceal a less costly cure from my friend and keep her on expensive and ineffective drugs with a gallery of noxious side effects instead, but from outside the walls of the office, it certainly looked like a callous betrayal of whatever ethics the medical profession might still have left—and again, the view from outside is the one that counts.

It counts because institutional science has the authority and prestige it possesses today only because enough of those outside the scientific community accept its claim to speak the truth about nature. Not that many years ago, all things considered, scientists didn’t have that authority or prestige, and no law of nature or of society guarantees that they’ll keep either one indefinitely. Every doctor who would rather medicate than cure, every researcher who treats conflicts of interest as just another detail of business as usual, every scientist who insists in angry tones that nobody without a PhD in this or that discipline is entitled to ask why this week’s pronouncement should be taken any more seriously than the one it just disproved—and let’s not even talk about the increasing, and increasingly public, problem of overt scientific fraud in the pharmaceutical field among others—is hastening the day when modern science is taken no more seriously by the general public than, say, academic philosophy is today.

That day may not be all that far away. That’s the message that should be read, and is far too rarely read, in the accelerating emergence of countercultures that reject the authority of science in one field or another. As a thoughtful essay in Salon9 pointed out, that crisis of authority is what gives credibility to such movements as climate denialism and the “anti-vaxxers” (the growing number of parents who refuse to have their children vaccinated). A good many people these days, when the official voices of the scientific community say this or that, respond by asking “Why should we believe you?”—and too many of them don’t get a straightforward answer that addresses their concerns.

A bit of personal experience from a different field may be relevant here. Back in the late 1980s and early 1990s, when I lived in Seattle, I put a fair amount of time into collecting local folklore concerning ghosts and other paranormal phenomena. I wasn’t doing this out of any particular belief, or for that matter any particular unbelief; instead, I was seeking a sense of the mythic terrain of the Puget Sound region, the landscapes of belief and imagination that emerged from the experiences of people on the land, with an eye toward the career in writing fiction that I then hoped to launch. While I was doing this research, when something paranormal was reported anywhere in the region, I generally got to hear about it fairly quickly, and in the process I got to watch a remarkable sequence of events that repeated itself like a broken record more often than I can count.

Whether the phenomenon that was witnessed was an unusual light in the sky, a seven-foot-tall hairy biped in the woods, a visit from a relative who happened to be dead at the time, or what have you, two things followed promptly once the witness went public. The first was the arrival of a self-proclaimed skeptic, usually a member of CSICOP (the Committee for the Scientific Investigation of Claims of the Paranormal), who treated the witness with scorn and condescension, made dogmatic claims about what must have happened, and responded to any disagreement with bullying and verbal abuse. The other thing that followed was the arrival of an investigator from one of the local organizations of believers in the paranormal, who was invariably friendly and supportive, listened closely to the account of the witness, and took the incident seriously. I’ll let you guess which of the proposed explanations the witness usually ended up embracing, not to mention which organization he or she often joined.

The same process on a larger and far more dangerous scale is shaping attitudes toward science across a wide and growing sector of American society. Notice that, unlike climate denialism, the anti-vaxxer movement isn’t powered by billions of dollars of grant money, but it’s getting increasing traction. The reason is as simple as it is painful: parents are asking physicians and scientists, “How do I know this substance you want to put into my child is safe?”—and the answers they’re getting are not providing them with the reassurance they need.

It’s probably necessary here to point out that I’m no fan of the anti-vaxxer movement. Since epidemic diseases are likely to play a massive role in the future ahead of us, I’ve looked into anti-vaxxer arguments with some care, and they don’t convince me at all. It’s clear from the evidence that vaccines do, far more often than not, provide protection against dangerous diseases. While some children are harmed by the side effects of vaccination, that’s true of every medical procedure, and the toll from side effects is orders of magnitude smaller than the annual burden of deaths from these same diseases in the pre-vaccination era. Nor does the anti-vaxxer claim that vaccines cause autism hold water; the epidemiology of autism spectrum disorders simply doesn’t support that claim.

That is to say, I don’t share the beliefs that drive the anti-vaxxer movement. Similarly, I’m sufficiently familiar with the laws of thermodynamics and the chemistry of the atmosphere to know that when the climate denialists insist that dumping billions of tons of carbon dioxide into the atmosphere can’t change its capacity to retain heat, they’re smoking their shorts. I’ve retained enough of a childhood interest in paleontology, and studied enough of biology and genetics since then, to be able to follow the debates between evolutionary biology and so-called creation science, and I’m solidly on Darwin’s side of the bleachers. I could go on; I have my doubts about a few corners of contemporary scientific theory, but then so do plenty of scientists.

That is to say, I don’t agree with the anti-vaxxers, the climate denialists, the creationists, or their equivalents, but I understand why they’ve rejected the authority of science, and it’s not just because they’re ignorant cretins. It’s because they’ve seen far too much of the view from outside. Parents who encounter a medical industry that would rather medicate than heal are more likely to listen to anti-vaxxers; Americans who watch climate change activists demand that the rest of the world cut its carbon footprint, while the activists themselves get to keep cozy middle-class lifestyles, are more likely to believe that global warming is a politically motivated hoax; Christians who see atheists using evolution as a stalking horse for their ideology are more likely to turn to creation science—and all three, and others, are not going to listen to scientists who insist that they’re wrong, until and unless the scientists stop and take a good hard look at how they and their proclamations look when viewed from outside.

I’m far from sure that anybody in the scientific community is willing to take that hard look. It’s possible. The arrogant bullying that has long been standard practice among the self-proclaimed skeptics and “angry atheists” has taken on a sullen and defensive tone recently, as though it’s started to sink in that yelling hate speech at people who disagree with you might not be the best way to win their hearts and minds. Still, for that same act of reflection to get any traction in the scientific community, a great many people in that community are going to have to rethink the way they deal with the public, especially when science, technology, and medicine cause harm. That, in turn, is going to happen only if enough of today’s scientists remember the importance of the view from outside.

That view has another dimension, and it’s a considerably harsher one. Among the outsiders whose opinion of contemporary science matters most are some who haven’t been born yet: our descendants, who will inhabit a world shaped by science and the technologies that have resulted from scientific research. The most likely futures for our descendants are those in which the burdens left behind by today’s science and technology are much more significant than the benefits. Those most likely futures, as noted in previous chapters, will be battered by unstable climate and rising oceans due to anthropogenic climate change; stripped of most of their topsoil, natural resources, and ecosystems; and strewn with the radioactive and chemical trash that our era produced in such abundance and couldn’t be bothered to store safely—and most of today’s advanced technologies will have long since rusted into uselessness, because the cheap abundant energy and other nonrenewable resources that were needed to keep them running all got used up in our time.

People living in such a future aren’t likely to remember that a modest number of scientists signed petitions and wrote position papers protesting some of these things. They’re even less likely to recall the utopian daydreams of perpetual progress and limitless abundance that encouraged so many other people in the scientific community to tell themselves that these things didn’t really matter—and if by chance they do remember those daydreams, their reaction to them won’t be pretty. That science today, like every other human institution in every age, combines high ideals and petty motives in the usual proportions will not matter to them in the least.

Unless something changes sharply very soon, their view from outside may well see modern science—all of it, from the first gray dawn of the scientific revolution straight through to the flame-lit midnight when the last laboratory was sacked and burned by a furious mob—as a wicked dabbling in accursed powers that eventually brought down just retribution upon a corrupt and arrogant age. So long as the proponents and propagandists of science ignore the view from outside, and blind themselves to the ways that their own defense of science is feeding the forces that are rising against it, that’s far more likely to become the default belief system of the deindustrial dark ages than the comfortable fantasies of perpetual scientific advancement cherished by so many people today.