
The Beginning of Infinity: Explanations That Transform the World - David Deutsch (2011)

Chapter 9. Optimism

The possibilities that lie in the future are infinite. When I say ‘It is our duty to remain optimists,’ this includes not only the openness of the future but also that which all of us contribute to it by everything we do: we are all responsible for what the future holds in store. Thus it is our duty, not to prophesy evil but, rather, to fight for a better world.

Karl Popper, The Myth of the Framework (1994)

Martin Rees suspects that civilization was lucky to survive the twentieth century. For throughout the Cold War there was always a possibility that another world war would break out, this time fought with hydrogen bombs, and that civilization would be destroyed. That danger seems to have receded, but in Rees’s book Our Final Century, published in 2003, he came to the worrying conclusion that civilization now had only a 50 per cent chance of surviving the twenty-first century.

Again this was because of the danger that newly created knowledge would have catastrophic consequences. For example, Rees thought it likely that civilization-destroying weapons, particularly biological ones, would soon become so easy to make that terrorist organizations, or even malevolent individuals, could not be prevented from acquiring them. He also feared accidental catastrophes, such as the escape of genetically modified micro-organisms from a laboratory, resulting in a pandemic of an incurable disease. Intelligent robots, and nanotechnology (engineering on the atomic scale), ‘could in the long run be even more threatening’, he wrote. And ‘it is not inconceivable that physics could be dangerous too.’ For instance, it has been suggested that elementary-particle accelerators that briefly create conditions that are in some respects more extreme than any since the Big Bang might destabilize the very vacuum of space and destroy our entire universe.

Rees pointed out that, for his conclusion to hold, it is not necessary for any one of those catastrophes to be at all probable, because we need be unlucky only once, and we incur the risk afresh every time progress is made in a variety of fields. He compared this with playing Russian roulette.
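
To put Rees’s point in arithmetic terms (a gloss of mine, with illustrative numbers, not figures from the text): suppose that each episode of progress independently carried even a small probability $p$ of catastrophe. Then the probability of surviving $n$ such episodes would be

$$S(n) = (1-p)^n,$$

which dwindles towards zero as $n$ grows: with $p = 0.01$, $S(100) \approx 0.37$ and $S(1000) \approx 4\times10^{-5}$. As the next paragraphs argue, however, the premiss of this little model is false for civilization: $p$ is not a fixed feature of the world, like the bullet in the revolver, but depends on what people think and do.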

But there is a crucial difference between the human condition and Russian roulette: the probability of winning at Russian roulette is unaffected by anything that the player may think or do. Within its rules, it is a game of pure chance. In contrast, the future of civilization depends entirely on what we think and do. If civilization falls, that will not be something that just happens to us: it will be the outcome of choices that people make. If civilization survives, that will be because people succeed in solving the problems of survival, and that too will not have happened by chance.

Both the future of civilization and the outcome of a game of Russian roulette are unpredictable, but in different senses and for entirely unrelated reasons. Russian roulette is merely random. Although we cannot predict the outcome, we do know what the possible outcomes are, and the probability of each, provided that the rules of the game are obeyed. The future of civilization is unknowable, because the knowledge that is going to affect it has yet to be created. Hence the possible outcomes are not yet known, let alone their probabilities.

The growth of knowledge cannot change that fact. On the contrary, it contributes strongly to it: the ability of scientific theories to predict the future depends on the reach of their explanations, but no explanation has enough reach to predict the content of its own successors - or their effects, or those of other ideas that have not yet been thought of. Just as no one in 1900 could have foreseen the consequences of innovations made during the twentieth century - including whole new fields such as nuclear physics, computer science and biotechnology - so our own future will be shaped by knowledge that we do not yet have. We cannot even predict most of the problems that we shall encounter, or most of the opportunities to solve them, let alone the solutions and attempted solutions and how they will affect events. People in 1900 did not consider the internet or nuclear power unlikely: they did not conceive of them at all.

No good explanation can predict the outcome, or the probability of an outcome, of a phenomenon whose course is going to be significantly affected by the creation of new knowledge. This is a fundamental limitation on the reach of scientific prediction, and, when planning for the future, it is vital to come to terms with it. Following Popper, I shall use the term prediction for conclusions about future events that follow from good explanations, and prophecy for anything that purports to know what is not yet knowable. Trying to know the unknowable leads inexorably to error and self-deception. Among other things, it creates a bias towards pessimism. For example, in 1894 the physicist Albert Michelson made the following prophecy about the future of physics:

The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote … Our future discoveries must be looked for in the sixth place of decimals.

Albert Michelson, address at the opening of the Ryerson Physical Laboratory, University of Chicago, 1894

What exactly was Michelson doing when he judged that there was only an ‘exceedingly remote’ chance that the foundations of physics as he knew them would ever be superseded? He was prophesying the future. How? On the basis of the best knowledge available at the time. But that consisted of the physics of 1894! Powerful and accurate though it was in countless applications, it was not capable of predicting the content of its successors. It was poorly suited even to imagining the changes that relativity and quantum theory would bring - which is why the physicists who did imagine them won Nobel prizes. Michelson would not have put the expansion of the universe, or the existence of parallel universes, or the non-existence of the force of gravity, on any list of possible discoveries whose probability was ‘exceedingly remote’. He just didn’t conceive of them at all.

A century earlier, the mathematician Joseph-Louis Lagrange had remarked that Isaac Newton had not only been the greatest genius who ever lived, but also the luckiest, for ‘the system of the world can be discovered only once.’ Lagrange would never know that some of his own work, which he had regarded as a mere translation of Newton’s into a more elegant mathematical language, was a step towards the replacement of Newton’s ‘system of the world’. Michelson did live to see a series of discoveries that spectacularly refuted the physics of 1894, and with it his own prophecy.

Like Lagrange, Michelson himself had already contributed unwittingly to the new system - in this case with an experimental result. In 1887 he and his colleague Edward Morley had observed that the speed of light relative to an observer remains constant when the observer moves. This astoundingly counter-intuitive fact later became the centrepiece of Einstein’s special theory of relativity. But Michelson and Morley did not realize that that was what they had observed. Observations are theory-laden. Given an experimental oddity, we have no way of predicting whether it will eventually be explained merely by correcting a minor parochial assumption or by revolutionizing entire sciences. We can know that only after we have seen it in the light of a new explanation. In the meantime we have no option but to see the world through our best existing explanations - which include our existing misconceptions. And that biases our intuition. Among other things, it inhibits us from conceiving of significant changes.

When the determinants of future events are unknowable, how should one prepare for them? How can one? Given that some of those determinants are beyond the reach of scientific prediction, what is the right philosophy of the unknown future? What is the rational approach to the unknowable - to the inconceivable? That is the subject of this chapter.

The terms ‘optimism’ and ‘pessimism’ have always been about the unknowable, but they did not originally refer especially to the future, as they do today. Originally, ‘optimism’ was the doctrine that the world - past, present and future - is as good as it could possibly be. The term was first used to describe an argument of Leibniz (1646-1716) that God, being ‘perfect’, would have created nothing less than ‘the best of all possible worlds’. Leibniz believed that this idea solved the ‘problem of evil’, which I mentioned in Chapter 4: he proposed that all apparent evils in the world are outweighed by good consequences that are too remote to be known. Similarly, all apparently good events that fail to happen - including all improvements that humans are unsuccessful in achieving - fail because they would have had bad consequences that would have outweighed the good.

Since consequences are determined by the laws of physics, the larger part of Leibniz’s claim must be that the laws of physics are the best possible too. Alternative laws that made scientific progress easier, or made disease an impossible phenomenon, or made even one disease slightly less unpleasant - in short, any alternative that would seem to be an improvement upon our actual history with all its plagues, tortures, tyrannies and natural disasters - would in fact have been even worse on balance, according to Leibniz.

That theory is a spectacularly bad explanation. Not only can any observed sequence of events be explained as ‘best’ by that method, but an alternative Leibniz could equally well have claimed that we live in the worst of all possible worlds, and that every good event is necessary in order to prevent something even better from happening. Indeed, some philosophers, such as Arthur Schopenhauer, have claimed just that. Their stance is called philosophical ‘pessimism’. Or one could claim that the world is exactly halfway between the best possible and the worst possible - and so on. Notice that, despite their superficial differences, all those theories have something important in common: if any of them were true, rational thought would have almost no power to discover true explanations. For, since we can always imagine states of affairs that seem better than what we observe, we would always be mistaken in judging that they were better, no matter how good our explanations were. So, in such a world, the true explanations of events are never even imaginable. For instance, in Leibniz’s ‘optimistic’ world, whenever we try to solve a problem and fail, it is because we have been thwarted by an unimaginably vast intelligence that determined that it was best for us to fail. And, still worse, whenever someone rejects reason and decides instead to rely on bad explanations or logical fallacies - or, for that matter, on pure malevolence - they still achieve, in every case, a better outcome on balance than the most rational and benevolent thought possibly could have. This does not describe an explicable world. And that would be very bad news for us, its inhabitants. Both the original ‘optimism’ and the original ‘pessimism’ are close to pure pessimism as I shall define it.

In everyday usage, an optimist is said to call a glass half full while a pessimist calls it half empty. But those attitudes are not what I am referring to either: they are matters not of philosophy but of psychology - more ‘spin’ than substance. The terms can also refer to moods, such as cheerfulness or depression, but, again, moods do not necessitate any particular stance about the future: the statesman Winston Churchill suffered from intense depression, yet his outlook on the future of civilization, and his specific expectations as wartime leader, were unusually positive. Conversely, the economist Thomas Malthus, a notorious prophet of doom (of whom more below), is said to have been a serene and happy fellow, who often had his companions at the dinner table in gales of laughter.

Blind optimism is a stance towards the future. It consists of proceeding as if one knows that the bad outcomes will not happen. The opposite approach, blind pessimism, often called the precautionary principle, seeks to ward off disaster by avoiding everything not known to be safe. No one seriously advocates either of these two as a universal policy, but their assumptions and their arguments are common, and often creep into people’s planning.

Blind optimism is also known as ‘overconfidence’ or ‘recklessness’. An often-cited example, perhaps unfairly, is the judgement of the builders of the ocean liner Titanic that it was ‘practically unsinkable’. The largest ship of its day, it sank on its maiden voyage in 1912. Designed to survive every foreseeable disaster, it collided with an iceberg in a manner that had not been foreseen. A blind pessimist argues that there is an inherent asymmetry between good and bad consequences: a successful maiden voyage cannot possibly do as much good as a disastrous one can do harm. As Rees points out, a single catastrophic consequence of an otherwise beneficial innovation could put an end to human progress for ever. So the blindly pessimistic approach to building ocean liners is to stick with existing designs and refrain from attempting any records.

But blind pessimism is a blindly optimistic doctrine. It assumes that unforeseen disastrous consequences cannot follow from existing knowledge too (or, rather, from existing ignorance). Not all shipwrecks happen to record-breaking ships. Not all unforeseen physical disasters need be caused by physics experiments or new technology. But one thing we do know is that protecting ourselves from any disaster, foreseeable or not, or recovering from it once it has happened, requires knowledge; and knowledge has to be created. The harm that can flow from any innovation that does not destroy the growth of knowledge is always finite; the good can be unlimited. There would be no existing ship designs to stick with, nor records to stay within, if no one had ever violated the precautionary principle.

Because pessimism needs to counter that argument in order to be at all persuasive, a recurring theme in pessimistic theories throughout history has been that an exceptionally dangerous moment is imminent. Our Final Century makes the case that the period since the mid twentieth century has been the first in which technology has been capable of destroying civilization. But that is not so. Many civilizations in history were destroyed by the simple technologies of fire and the sword. Indeed, of all civilizations in history, the overwhelming majority have been destroyed, some intentionally, some as a result of plague or natural disaster. Virtually all of them could have avoided the catastrophes that destroyed them if only they had possessed a little additional knowledge, such as improved agricultural or military technology, better hygiene, or better political or economic institutions. Very few, if any, could have been saved by greater caution about innovation. In fact most had enthusiastically implemented the precautionary principle.

More generally, what they lacked was a certain combination of abstract knowledge and knowledge embodied in technological artefacts, namely sufficient wealth. Let me define that in a non-parochial way as the repertoire of physical transformations that they would be capable of causing.

An example of a blindly pessimistic policy is that of trying to make our planet as unobtrusive as possible in the galaxy, for fear of contact with extraterrestrial civilizations. Stephen Hawking recently advised this, in his television series Into the Universe. He argued, ‘If [extraterrestrials] ever visit us, I think the outcome would be much as when Christopher Columbus first landed in America, which didn’t turn out very well for the Native Americans.’ He warned that there might be nomadic, space-dwelling civilizations who would strip the Earth of its resources, or imperialist civilizations who would colonize it. The science-fiction author Greg Bear has written some exciting novels based on the premise that the galaxy is full of civilizations that are either predators or prey, and in both cases are hiding. This would solve the mystery of Fermi’s problem. But it is implausible as a serious explanation. For one thing, it depends on civilizations becoming convinced of the existence of predator civilizations in space, and totally reorganizing themselves in order to hide from them, before being noticed - which means before they have even invented, say, radio.

Hawking’s proposal also overlooks various dangers of not making our existence known to the galaxy, such as being inadvertently wiped out if benign civilizations send robots to our solar system, perhaps to mine what they consider an uninhabited system. And it rests on other misconceptions in addition to that classic flaw of blind pessimism. One is the Spaceship Earth idea on a larger scale: the assumption that progress in a hypothetical rapacious civilization is limited by raw materials rather than by knowledge. What exactly would it come to steal? Gold? Oil? Perhaps our planet’s water? Surely not, since any civilization capable of transporting itself here, or raw materials back, across galactic distances must already have cheap transmutation and hence does not care about the chemical composition of its raw materials. So essentially the only resource of use to it in our solar system would be the sheer mass of matter in the sun. But matter is available in every star. Perhaps it is collecting entire stars wholesale in order to make a giant black hole as part of some titanic engineering project. But in that case it would cost it virtually nothing to omit inhabited solar systems (which are presumably a small minority - otherwise it is pointless for us to hide in any case); so would it casually wipe out billions of people? Would we seem like insects to it? This can seem plausible only if one forgets that there can be only one type of person: universal explainers and constructors. The idea that there could be beings that are to us as we are to animals is a belief in the supernatural.

Moreover, there is only one way of making progress: conjecture and criticism. And the only moral values that permit sustained progress are the objective values that the Enlightenment has begun to discover. No doubt the extraterrestrials’ morality is different from ours; but that will not be because it resembles that of the conquistadors. Nor would we be in serious danger of culture shock from contact with an advanced civilization: it will know how to educate its own children (or AIs), so it will know how to educate us - and, in particular, to teach us how to use its computers.

A further misconception is Hawking’s analogy between our civilization and pre-Enlightenment civilizations: as I shall explain in Chapter 15, there is a qualitative difference between those two types of civilization. Culture shock need not be dangerous to a post-Enlightenment one.

As we look back on the failed civilizations of the past, we can see that they were so poor, their technology was so feeble, and their explanations of the world so fragmentary and full of misconceptions that their caution about innovation and progress was as perverse as expecting a blindfold to be useful when navigating dangerous waters. Pessimists believe that the present state of our own civilization is an exception to that pattern. But what does the precautionary principle say about that claim? Can we be sure that our present knowledge, too, is not riddled with dangerous gaps and misconceptions? That our present wealth is not pathetically inadequate to deal with unforeseen problems? Since we cannot be sure, would not the precautionary principle require us to confine ourselves to the policy that would always have been salutary in the past - namely innovation and, in emergencies, even blind optimism about the benefits of new knowledge?

Also, in the case of our civilization, the precautionary principle rules itself out. Since our civilization has not been following it, a transition to it would entail reining in the rapid technological progress that is under way. And such a change has never been successful before. So a blind pessimist would have to oppose it on principle.

This may seem like logic-chopping, but it is not. The reason for these paradoxes and parallels between blind optimism and blind pessimism is that those two approaches are very similar at the level of explanation. Both are prophetic: both purport to know unknowable things about the future of knowledge. And since at any instant our best knowledge contains both truth and misconception, prophetic pessimism about any one aspect of it is always the same as prophetic optimism about another. For instance, Rees’s worst fears depend on the unprecedentedly rapid creation of unprecedentedly powerful technology, such as civilization-destroying bio-weapons: his pessimism about our safety is at the same time a prophetic optimism about the pace of technological progress.

If Rees is right that the twenty-first century is uniquely dangerous, and if civilization nevertheless survives it, it will have had an appallingly narrow escape. Our Final Century mentions only one other example of a narrow escape, namely the Cold War - so that will make two narrow escapes in a row. Yet, by that standard, civilization must already have had a similarly narrow escape during the Second World War. For instance, Nazi Germany came close to developing nuclear weapons; the Japanese Empire did successfully weaponize bubonic plague - and had tested the weapon with devastating effect in China and had plans to use it against the United States. Many feared that even a conventionally won victory by the Axis powers could bring down civilization. Churchill warned of ‘a new dark age, made more sinister and perhaps more protracted by the lights of perverted science’ - though, as an optimist, he worked to prevent that. In contrast, the Austrian writer Stefan Zweig and his wife committed suicide in 1942, in the safety of neutral Brazil, because they considered civilization to be already doomed.

So that would make it three narrow escapes in a row. But was there not a still earlier one? In 1798, Malthus had argued, in his influential Essay on the Principle of Population, that the nineteenth century would inevitably see a permanent end to human progress. He had calculated that the exponentially growing population at the time, which was a consequence of various technological and economic improvements, was reaching the limit of the planet’s capacity to produce food. And this was no accidental misfortune. He believed that he had discovered a law of nature about population and resources. First, the net increase in population, in each generation, is proportional to the existing population, so the population increases exponentially (or ‘in geometrical ratio’, as he put it). But, second, when food production increases - for instance, as a result of bringing formerly unproductive land into cultivation - the increase is the same as it would have been if that innovation had happened at any other time. It is not proportional to whatever the population happens to be. He called this (rather idiosyncratically) an increase ‘in arithmetical ratio’, and argued that ‘Population, when unchecked, increases in a geometrical ratio. Subsistence increases only in an arithmetical ratio. A slight acquaintance with numbers will shew the immensity of the first power in comparison of the second.’ His conclusion was that the relative well-being of humankind in his time was a temporary phenomenon and that he was living at a uniquely dangerous moment in history. The long-term state of humanity must be an equilibrium between the tendency of populations to increase on the one hand and, on the other, starvation, disease, murder and war - just as happens in the biosphere.
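
In modern notation (my gloss, not Malthus’s own), his two ‘ratios’ amount to the following, where $P_0$ is the initial population, $r > 1$ the growth factor per generation, $F_0$ the initial food supply and $c$ a fixed increment per generation:

$$P_n = P_0 r^n \quad\text{(geometrical ratio)}, \qquad F_n = F_0 + cn \quad\text{(arithmetical ratio)}.$$

For any $r > 1$, the exponential $P_n$ must eventually overtake the linear $F_n$, however large $c$ may be - which is all that a ‘slight acquaintance with numbers’ can show. The hidden premiss is that $c$ stays constant: in other words, that future increases in food production would owe nothing to the creation of new knowledge.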

In the event, throughout the nineteenth century, a population explosion happened much as Malthus had predicted. Yet the end to human progress that he had foreseen did not, in part because food production increased even faster than the population. Then, during the twentieth century, both increased faster still.

Malthus had quite accurately foretold the one phenomenon, but had missed the other altogether. Why? Because of the systematic pessimistic bias to which prophecy is prone. In 1798 the forthcoming increase in population was more predictable than the even larger increase in the food supply not because it was in any sense more probable, but simply because it depended less on the creation of knowledge. By ignoring that structural difference between the two phenomena that he was trying to compare, Malthus slipped from educated guesswork into blind prophecy. He and many of his contemporaries were misled into believing that he had discovered an objective asymmetry between what he called the ‘power of population’ and the ‘power of production’. But that was just a parochial mistake - the same one that Michelson and Lagrange made. They all thought they were making sober predictions based on the best knowledge available to them. In reality they were all allowing themselves to be misled by the ineluctable fact of the human condition that we do not yet know what we have not yet discovered.

Neither Malthus nor Rees intended to prophesy. They were warning that unless we solve certain problems in time, we are doomed. But that has always been true, and always will be. Problems are inevitable. As I said, many civilizations have fallen. Even before the dawn of civilization, all our sister species, such as the Neanderthals, became extinct through challenges with which they could easily have coped, had they known how. Genetic studies suggest that our own species came close to extinction about 70,000 years ago, as a result of an unknown catastrophe which reduced its total numbers to only a few thousand. Being overwhelmed by these and other kinds of catastrophe would have seemed to the victims like being forced to play Russian roulette. That is to say, it would have seemed to them that no choices that they could have made (except, perhaps, to seek the intervention of the gods more diligently) could have affected the odds against them. But this was a parochial error. Civilizations starved, long before Malthus, because of what they thought of as the ‘natural disasters’ of drought and famine. But it was really because of what we would call poor methods of irrigation and farming - in other words, lack of knowledge.

Before our ancestors learned how to make fire artificially (and many times since then too), people must have died of exposure literally on top of the means of making the fires that would have saved their lives, because they did not know how. In a parochial sense, the weather killed them; but the deeper explanation is lack of knowledge. Many of the hundreds of millions of victims of cholera throughout history must have died within sight of the hearths that could have boiled their drinking water and saved their lives; but, again, they did not know that. Quite generally, the distinction between a ‘natural’ disaster and one brought about by ignorance is parochial. Prior to every natural disaster that people once used to think of as ‘just happening’, or being ordained by gods, we now see many options that the people affected failed to take - or, rather, to create. And all those options add up to the overarching option that they failed to create, namely that of forming a scientific and technological civilization like ours. Traditions of criticism. An Enlightenment.

If a one-kilometre asteroid had approached the Earth on a collision course at any time in human history before the early twenty-first century, it would have killed at least a substantial proportion of all humans. In that respect, as in many others, we live in an era of unprecedented safety: the twenty-first century is the first ever moment when we have known how to defend ourselves from such impacts, which occur once every 250,000 years or so. This may sound too rare to care about, but the timing is random. A probability of one in 250,000 of such an impact in any given year means that a typical person on Earth would have a far larger chance of dying of an asteroid impact than in an aeroplane crash. And the next such object to strike us is already out there at this moment, speeding towards us with nothing to stop it except human knowledge. Civilization is vulnerable to several other known types of disaster with similar levels of risk. For instance, ice ages occur more frequently than that, and ‘mini ice ages’ much more frequently - and some climatologists believe that they can happen with only a few years’ warning. A ‘super-volcano’ such as the one lurking under Yellowstone National Park could blot out the sun for years at a time. If one erupted tomorrow our species could survive, by growing food using artificial light, and civilization could recover. But many would die, and the suffering would be so tremendous that such events should merit almost as much preventative effort as extinction itself. We do not know the probability of a spontaneously occurring incurable plague, but we may guess that it is unacceptably high, since pandemics such as the Black Death in the fourteenth century have already shown us the sort of thing that can happen on a timescale of centuries. Should any of those catastrophes loom, we now have at least a chance of creating, in time, the knowledge required to survive.
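
A rough calculation supports that comparison (apart from the 1-in-250,000 rate, the figures here are illustrative assumptions of mine, not the author’s). If such an impact killed, say, a quarter of the world’s population, a typical person’s annual risk of dying that way would be

$$\frac{1}{250{,}000} \times \frac{1}{4} = 10^{-6},$$

whereas worldwide deaths in aeroplane crashes, of the order of a thousand per year among several billion people, imply an annual risk of roughly $10^{-7}$ per person - an order of magnitude smaller.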

We have such a chance because we are able to solve problems. Problems are inevitable. We shall always be faced with the problem of how to plan for an unknowable future. We shall never be able to afford to sit back and hope for the best. Even if our civilization moves out into space in order to hedge its bets, as Rees and Hawking both rightly advise, a gamma-ray burst in our galactic vicinity would still wipe us all out. Such an event is thousands of times rarer than an asteroid collision, but when it does finally happen we shall have no defence against it without a great deal more scientific knowledge and an enormous increase in our wealth.

But first we shall have to survive the next ice age; and, before that, other dangerous climate change (both spontaneous and human-caused), and weapons of mass destruction and pandemics and all the countless unforeseen dangers that are going to beset us. Our political institutions, ways of life, personal aspirations and morality are all forms or embodiments of knowledge, and all will have to be improved if civilization - and the Enlightenment in particular - is to survive every one of the risks that Rees describes and presumably many others of which we have no inkling.

So - how? How can we formulate policies for the unknown? If we cannot derive them from our best existing knowledge, or from dogmatic rules of thumb like blind optimism or pessimism, where can we derive them from? Like scientific theories, policies cannot be derived from anything. They are conjectures. And we should choose between them not on the basis of their origin, but according to how good they are as explanations: how hard to vary.

Like the rejection of empiricism, and of the idea that knowledge is ‘justified, true belief’, understanding that political policies are conjectures entails the rejection of a previously unquestioned philosophical assumption. Again, Popper was a key advocate of this rejection. He wrote:

The question about the sources of our knowledge … has always been asked in the spirit of: ‘What are the best sources of our knowledge - the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?’ I propose to assume, instead, that no such ideal sources exist - no more than ideal rulers - and that all ‘sources’ are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’

‘Knowledge without Authority’ (1960)

The question ‘How can we hope to detect and eliminate error?’ is echoed by Feynman’s remark that ‘science is what we have learned about how to keep from fooling ourselves’. And the answer is basically the same for human decision-making as it is for science: it requires a tradition of criticism, in which good explanations are sought - for example, explanations of what has gone wrong, what would be better, what effect various policies have had in the past and would have in the future.

But what use are explanations if they cannot make predictions and so cannot be tested through experience, as they can be in science? This is really the question: how is progress possible in philosophy? As I discussed in Chapter 5, it is obtained by seeking good explanations. The misconception that evidence can play no legitimate role in philosophy is a relic of empiricism. Objective progress is indeed possible in politics just as it is in morality generally and in science.

Political philosophy traditionally centred on a collection of issues that Popper called the ‘who should rule?’ question. Who should wield power? Should it be a monarch or aristocrats, or priests, or a dictator, or a small group, or ‘the people’, or their delegates? And that leads to derivative questions such as ‘How should a king be educated?’ ‘Who should be enfranchised in a democracy?’ ‘How does one ensure an informed and responsible electorate?’

Popper pointed out that this class of questions is rooted in the same misconception as the question ‘How are scientific theories derived from sensory data?’ which defines empiricism. It is seeking a system that derives or justifies the right choice of leader or government, from existing data - such as inherited entitlements, the opinion of the majority, the manner in which a person has been educated, and so on. The same misconception also underlies blind optimism and pessimism: they both expect progress to be made by applying a simple rule to existing knowledge, to establish which future possibilities to ignore and which to rely on. Induction, instrumentalism and even Lamarckism all make the same mistake: they expect explanationless progress. They expect knowledge to be created by fiat with few errors, and not by a process of variation and selection that is making a continual stream of errors and correcting them.

The defenders of hereditary monarchy doubted that any method of selection of a leader by means of rational thought and debate could improve upon a fixed, mechanical criterion. That was the precautionary principle in action, and it gave rise to the usual ironies. For instance, whenever pretenders to a throne claimed to have a better hereditary entitlement than the incumbent, they were in effect citing the precautionary principle as a justification for sudden, violent, unpredictable change - in other words, for blind optimism. The same was true whenever monarchs happened to favour radical change themselves. Consider also the revolutionary utopians, who typically achieve only destruction and stagnation. Though they are blind optimists, what defines them as utopians is their pessimism that their supposed utopia, or their violent proposals for achieving and entrenching it, could ever be improved upon. Additionally, they are revolutionaries in the first place because they are pessimistic that many other people can be persuaded of the final truth that they think they know.

Ideas have consequences, and the ‘who should rule?’ approach to political philosophy is not just a mistake of academic analysis: it has been part of practically every bad political doctrine in history. If the political process is seen as an engine for putting the right rulers in power, then it justifies violence, for until that right system is in place, no ruler is legitimate; and once it is in place, and its designated rulers are ruling, opposition to them is opposition to rightness. The problem then becomes how to thwart anyone who is working against the rulers or their policies. By the same logic, everyone who thinks that existing rulers or policies are bad must infer that the ‘who should rule?’ question has been answered wrongly, and therefore that the power of the rulers is not legitimate, and that opposing it is legitimate, by force if necessary. Thus the very question ‘Who should rule?’ begs for violent, authoritarian answers, and has often received them. It leads those in power into tyranny, and to the entrenchment of bad rulers and bad policies; it leads their opponents to violent destructiveness and revolution.

Advocates of violence usually have in mind that none of those things need happen if only everyone agreed on who should rule. But that means agreeing about what is right, and, given agreement on that, rulers would then have nothing to do. And, in any case, such agreement is neither possible nor desirable: people are different, and have unique ideas; problems are inevitable, and progress consists of solving them.

Popper therefore applies his basic question ‘How can we detect and eliminate errors?’ to political philosophy in the form ‘How can we rid ourselves of bad governments without violence?’ Just as science seeks explanations that are experimentally testable, so a rational political system makes it as easy as possible to detect, and persuade others, that a leader or policy is bad, and to remove them without violence if they are. Just as the institutions of science are structured so as to avoid entrenching theories, but instead to expose them to criticism and testing, so political institutions should not make it hard to oppose rulers and policies, non-violently, and should embody traditions of peaceful, critical discussion of them and of the institutions themselves and everything else. Thus, systems of government are to be judged not for their prophetic ability to choose and install good leaders and policies, but for their ability to remove bad ones that are already there.

That entire stance is fallibilism in action. It assumes that rulers and policies are always going to be flawed - that problems are inevitable. But it also assumes that improving upon them is possible: problems are soluble. The ideal towards which this is working is not that nothing unexpected will go wrong, but that when it does it will be an opportunity for further progress.

Why would anyone want to make the leaders and policies that they themselves favour more vulnerable to removal? Indeed, let me first ask: why would anyone want to replace bad leaders and policies at all? That question may seem absurd, but perhaps it is absurd only from the perspective of a civilization that takes progress for granted. If we did not expect progress, why should we expect the new leader or policy, chosen by whatever method, to be any better than the old? On the contrary, we should then expect any changes on average to do as much harm as good. And then the precautionary principle advises, ‘Better the devil you know than the devil you don’t.’ There is a closed loop of ideas here: on the assumption that knowledge is not going to grow, the precautionary principle is true; and on the assumption that the precautionary principle is true, we cannot afford to allow knowledge to grow. Unless a society is expecting its own future choices to be better than its present ones, it will strive to make its present policies and institutions as immutable as possible. Therefore Popper’s criterion can be met only by societies that expect their knowledge to grow - and to grow unpredictably. And, further, they are expecting that if it did grow, that would help.

This expectation is what I call optimism, and I can state it, in its most general form, thus:

The Principle of Optimism
All evils are caused by insufficient knowledge.

Optimism is, in the first instance, a way of explaining failure, not prophesying success. It says that there is no fundamental barrier, no law of nature or supernatural decree, preventing progress. Whenever we try to improve things and fail, it is not because the spiteful (or unfathomably benevolent) gods are thwarting us or punishing us for trying, or because we have reached a limit on the capacity of reason to make improvements, or because it is best that we fail, but always because we did not know enough, in time. But optimism is also a stance towards the future, because nearly all failures, and nearly all successes, are yet to come.

Optimism follows from the explicability of the physical world, as I explained in Chapter 3. If something is permitted by the laws of physics, then the only thing that can prevent it from being technologically possible is not knowing how. Optimism also assumes that none of the prohibitions imposed by the laws of physics are necessarily evils. So, for instance, the lack of the impossible knowledge of prophecy is not an insuperable obstacle to progress. Nor are insoluble mathematical problems, as I explained in Chapter 8.

That means that in the long run there are no insuperable evils, and in the short run the only insuperable evils are parochial ones. There can be no such thing as a disease for which it is impossible to discover a cure, other than certain types of brain damage - those that have dissipated the knowledge that constitutes the patient’s personality. For a sick person is a physical object, and the task of transforming this object into the same person in good health is one that no law of physics rules out. Hence there is a way of achieving such a transformation - that is to say, a cure. It is only a matter of knowing how. If we do not, for the moment, know how to eliminate a particular evil, or we know in theory but do not yet have enough time or resources (i.e. wealth), then, even so, it is universally true that either the laws of physics forbid eliminating it in a given time with the available resources or there is a way of eliminating it in the time and with those resources.
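
Stated schematically (a rendering of mine, not a formula from the text), that closing dichotomy is exhaustive: for any transformation $T$, time $t$ and resources $R$,

$$\mathrm{forbidden}(T, t, R) \;\lor\; \exists K\,\big[\,K \text{ is knowledge that achieves } T \text{ within } t \text{ using } R\,\big],$$

with no third possibility - no transformation that the laws of physics permit but that no possible knowledge could bring about.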

The same must hold, equally trivially, for the evil of death - that is to say, the deaths of human beings from disease or old age. This problem has a tremendous resonance in every culture - in its literature, its values, its objectives great and small. It also has an almost unmatched reputation for insolubility (except among believers in the supernatural): it is taken to be the epitome of an insuperable obstacle. But there is no rational basis for that reputation. It is absurdly parochial to read some deep significance into this particular failure, among so many, of the biosphere to support human life - or of medical science throughout the ages to cure ageing. The problem of ageing is of the same general type as that of disease. Although it is a complex problem by present-day standards, the complexity is finite and confined to a relatively narrow arena whose basic principles are already fairly well understood. Meanwhile, knowledge in the relevant fields is increasing exponentially.

Sometimes ‘immortality’ (in this sense) is even regarded as undesirable. For instance, there are arguments from overpopulation; but those are examples of the Malthusian prophetic fallacy: what each additional surviving person would need to survive at present-day standards of living is easily calculated; what knowledge that person would contribute to the solution of the resulting problems is unknowable. There are also arguments about the stultification of society caused by the entrenchment of old people in positions of power; but the traditions of criticism in our society are already well adapted to solving that sort of problem. Even today, it is common in Western countries for powerful politicians or business executives to be removed from office while still in good health.

There is a traditional optimistic story that runs as follows. Our hero is a prisoner who has been sentenced to death by a tyrannical king, but gains a reprieve by promising to teach the king’s favourite horse to talk within a year. That night, a fellow prisoner asks what possessed him to make such a bargain. He replies, ‘A lot can happen in a year. The horse might die. The king might die. I might die. Or the horse might talk!’ The prisoner understands that, while his immediate problems have to do with prison bars and the king and his horse, ultimately the evil he faces is caused by insufficient knowledge. That makes him an optimist. He knows that, if progress is to be made, some of the opportunities and some of the discoveries will be inconceivable in advance. Progress cannot take place at all unless someone is open to, and prepares for, those inconceivable possibilities. The prisoner may or may not discover a way of teaching the horse to talk. But he may discover something else. He may persuade the king to repeal the law that he had broken; he may learn a convincing conjuring trick in which the horse would seem to talk; he may escape; he may think of an achievable task that would please the king even more than making the horse talk. The list is infinite. Even if every such possibility is unlikely, it takes only one of them to be realized for the whole problem to be solved. But if our prisoner is going to escape by creating a new idea, he cannot possibly know that idea today, and therefore he cannot let the assumption that it will never exist condition his planning.

Optimism implies all the other necessary conditions for knowledge to grow, and for knowledge-creating civilizations to last, and hence for the beginning of infinity. We have, as Popper put it, a duty to be optimistic - in general, and about civilization in particular. One can argue that saving civilization will be difficult. That does not mean that there is a low probability of solving the associated problems. When we say that a mathematical problem is hard to solve, we do not mean that it is unlikely to be solved. All sorts of factors determine whether mathematicians even address a problem, and with what effort. If an easy problem is not deemed to be interesting or useful, they might leave it unsolved indefinitely, while hard problems are solved all the time.

Usually the hardness of a problem is one of the very factors that cause it to be solved. Thus President John F. Kennedy said in 1962, in a celebrated example of an optimistic approach to the unknown, ‘We choose to go to the moon. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard.’ Kennedy did not mean that the moon project, being hard, was unlikely to succeed. On the contrary, he believed that it would. What he meant by a hard task was one that depends on facing the unknown. And the intuitive fact to which he was appealing was that although such hardness is always a negative factor when choosing among means to pursue an objective, when choosing the objective itself it can be a positive one, because we want to engage with projects that will involve creating new knowledge. And an optimist expects the creation of knowledge to constitute progress - including its unforeseeable consequences.

Thus, Kennedy remarked that the moon project would require a vehicle ‘made of new metal alloys, some of which have not yet been invented, capable of standing heat and stresses several times more than have ever been experienced, fitted together with a precision better than the finest watch, carrying all the equipment needed for propulsion, guidance, control, communications, food and survival’. Those were the known problems, which would require as-yet-unknown knowledge. That this was ‘on an untried mission, to an unknown celestial body’ referred to the unknown problems that made the probabilities, and the outcomes, profoundly unknowable. Yet none of that prevented rational people from forming the expectation that the mission could succeed. This expectation was not a judgement of probability: until far into the project, no one could predict that, because it depended on solutions not yet discovered to problems not yet known. When people were being persuaded to work on the project - and to vote for it, and so on - they were being persuaded that our being confined to one planet was an evil, that exploring the universe was a good, that the Earth’s gravitational field was not a barrier but merely a problem, and that overcoming it and all the other problems involved in the project was only a matter of knowing how, and that the nature of the problems made that moment the right one to try to solve them. Probabilities and prophecies were not needed in that argument.

Pessimism has been endemic in almost every society throughout history. It has taken the form of the precautionary principle, and of ‘who should rule?’ political philosophies and all sorts of other demands for prophecy, and of despair in the power of creativity, and of the misinterpretation of problems as insuperable barriers. Yet there have always been a few individuals who see obstacles as problems, and see problems as soluble. And so, very occasionally, there have been places and moments when there was, briefly, an end to pessimism. As far as I know, no historian has investigated the history of optimism, but my guess is that whenever it has emerged in a civilization there has been a mini-enlightenment: a tradition of criticism resulting in an efflorescence of many of the patterns of human progress with which we are familiar, such as art, literature, philosophy, science, technology and the institutions of an open society. The end of pessimism is potentially a beginning of infinity. Yet I also guess that in every case - with the single, tremendous exception (so far) of our own Enlightenment - this process was soon brought to an end and the reign of pessimism was restored.

The best-known mini-enlightenment was the intellectual and political tradition of criticism in ancient Greece that culminated in the so-called ‘Golden Age’ of the city-state of Athens in the fifth century BCE. Athens was one of the first democracies, and was home to an astonishing number of people who are regarded to this day as major figures in the history of ideas, such as the philosophers Socrates, Plato and Aristotle, the playwrights Aeschylus, Aristophanes, Euripides and Sophocles, and the historians Herodotus, Thucydides and Xenophon. The Athenian philosophical tradition continued a tradition of criticism dating back to Thales of Miletus over a century earlier, one that had included Xenophanes of Colophon (570-480 BCE), one of the first to question anthropocentric theories of the gods. Athens grew wealthy through trade, attracted creative people from all over the known world, became one of the foremost military powers of the age, and built a structure, the Parthenon, which is to this day regarded as one of the great architectural achievements of all time. At the height of the Golden Age, the Athenian leader Pericles tried to explain what made Athens successful. Though he no doubt believed that the city’s patron goddess, Athena, was on their side, he evidently did not consider ‘the goddess did it’ to be a sufficient explanation for the Athenians’ success. Instead, he listed specific attributes of Athenian civilization. We do not know exactly how much of what he described was flattery or wishful thinking, but, in assessing the optimism of a civilization, what that civilization aspired to be must be even more important than what it had yet succeeded in becoming.

The first attribute that Pericles cited was Athens’ democracy. And he explained why. Not because ‘the people should rule’, but because it promotes ‘wise action’. It involves continual discussion, which is a necessary condition for discovering the right answer, which is in turn a necessary condition for progress:

Instead of looking upon discussion as a stumbling-block in the way of action, we think it an indispensable preliminary to any wise action at all.

Pericles, ‘Funeral Oration’, c. 431 BCE

He also mentioned freedom as a cause of success. A pessimistic civilization considers it immoral to behave in ways that have not been tried many times before, because it is blind to the possibility that the benefits of doing so might offset the risks. So it is intolerant and conformist. But Athens took the opposite view. Pericles also contrasted his city’s openness to foreign visitors with the closed, defensive attitude of rival cities: again, he expected that Athens would benefit from contact with new, unforeseeable ideas, even though, as he acknowledged, this policy gave enemy spies access to the city too. He even seems to have regarded the lenient treatment of children as a source of military strength:

In education, where our rivals from their very cradles by a painful discipline seek after manliness, in Athens we live exactly as we please, and yet are just as ready to encounter every legitimate danger.

A pessimistic civilization prides itself on its children’s conformity to the proper patterns of behaviour, and bemoans every real or imagined novelty.

Sparta was, in all the above respects, the opposite of Athens. The epitome of a pessimistic civilization, it was notorious for its citizens’ austere ‘spartan’ lifestyle, for the harshness of its educational system, and for the total militarization of its society. Every male citizen was a full-time soldier, owing absolute obedience to his superiors, who were themselves obliged to follow religious tradition. All other work was done by slaves: Sparta had reduced an entire neighbouring society, the Messenians, to the status of helots (a kind of serf or slave). It had no philosophers, historians, artists, architects, writers - or other knowledge-creating people of any kind apart from the occasional talented general. Thus almost the entire effort of the society was devoted to preserving itself in its existing state - in other words, to preventing improvement. In 404 BCE, twenty-seven years after Pericles’ funeral oration, Sparta decisively defeated Athens in war and imposed an authoritarian form of government on it. Although, through the vagaries of international politics, Athens became independent and democratic again soon afterwards, and continued for several generations to produce art, literature and philosophy, it was never again host to rapid, open-ended progress. It became unexceptional. Why? I guess that its optimism was gone.

Another short-lived enlightenment happened in the Italian city-state of Florence in the fifteenth century. This was the time of the early Renaissance, a cultural movement that revived the literature, art and science of ancient Greece and Rome after more than a millennium of intellectual stagnation in Europe. It became an enlightenment when the Florentines began to believe that they could improve upon that ancient knowledge. This era of dazzling innovation, known as the Golden Age of Florence, was deliberately fostered by the Medici family, who were in effect the city’s rulers - especially Lorenzo de’ Medici, known as ‘the Magnificent’, who was in charge from 1469 to 1492. Unlike Pericles, the Medici were not devotees of democracy: Florence’s enlightenment began not in politics but in art, and then philosophy, science and technology, and in those fields it involved the same openness to criticism and desire for innovation both in ideas and in action. Artists, instead of being restricted to traditional themes and styles, became free to depict what they considered beautiful, and to invent new styles. Encouraged by the Medici, the wealthy of Florence competed with each other in the innovativeness of the artists and scholars whom they sponsored - such as Leonardo da Vinci, Michelangelo and Botticelli. Another denizen of Florence at this time was Niccolò Machiavelli, the first secular political philosopher since antiquity.

The Medici were soon promoting the new philosophy of ‘humanism’, which valued knowledge above dogma, and virtues such as intellectual independence, curiosity, good taste and friendship over piety and humility. They sent agents all over the known world to obtain copies of ancient books, many of which had not been seen in the West since the fall of the Western Roman Empire. The Medici library made copies which it supplied to scholars in Florence and elsewhere. Florence became a powerhouse of newly revived ideas, new interpretations of ideas, and brand-new ideas.

But that rapid progress lasted for only a generation or so. A charismatic monk, Girolamo Savonarola, began to preach apocalyptic sermons against humanism and every other aspect of the Florentine enlightenment. Urging a return to medieval conformism and self-denial, he proclaimed prophecies of doom if Florence continued on its path. Many citizens were persuaded, and in 1494 Savonarola managed to seize power. He reimposed all the traditional restrictions on art, literature, thought and behaviour. Secular music was banned. Clothing had to be plain. Frequent fasting became effectively compulsory. Homosexuality and prostitution were violently suppressed. The Jews of Florence were expelled. Gangs of ruffians inspired by Savonarola roamed the city searching for taboo artefacts such as mirrors, cosmetics, musical instruments, secular books, and almost anything beautiful. A huge pile of such treasures was ceremonially burned in the so-called ‘Bonfire of the Vanities’ in the centre of the city. Botticelli is said to have thrown some of his own paintings into the fire. It was the bonfire of optimism.

Eventually Savonarola was himself discarded and burned at the stake. But, although the Medici regained control of Florence, optimism did not. As in Athens, the tradition of art and science continued for a while, and, even a century later, Galileo was sponsored (and then abandoned) by the Medici. But by that time Florence had become just another Renaissance city-state lurching from one crisis to another under the rule of despots. Fortunately, somehow that mini-enlightenment was never quite extinguished. It continued to smoulder in Florence and several other Italian city-states, and finally ignited the Enlightenment itself in northern Europe.

There may have been many enlightenments in history, shorter-lived and shining less brilliantly than those, perhaps in obscure subcultures, families or individuals. For example, the philosopher Roger Bacon (1214-94) is noted for rejecting dogma, advocating observation as a way of discovering the truth (albeit by ‘induction’), and making several scientific discoveries. He foresaw the invention of microscopes, telescopes, self-powered vehicles and flying machines - and that mathematics would be a key to future scientific discoveries. He was thus an optimist. But he was not part of any tradition of criticism, and so his optimism died with him.

Bacon studied the works of ancient Greek scientists and of scholars of the ‘Islamic Golden Age’ - such as Alhazen (965-1039), who made several original discoveries in physics and mathematics. During the Islamic Golden Age (between approximately the eighth and thirteenth centuries), there was a strong tradition of scholarship that valued and drew upon the science and philosophy of European antiquity. Whether there was also a tradition of criticism in science and philosophy is currently controversial among historians. But, if there was, it was snuffed out like the others.

It may be that the Enlightenment has ‘tried’ to happen countless times, perhaps even all the way back to prehistory. If so, those mini-enlightenments put our recent ‘lucky escapes’ into stark perspective. It may be that there was progress every time - a brief end to stagnation, a brief glimpse of infinity, always ending in tragedy, always snuffed out, usually without trace. Except this once.

The inhabitants of Florence in 1494 or Athens in 404 BCE could be forgiven for concluding that optimism just isn’t factually true. For they knew nothing of such things as the reach of explanations or the power of science or even laws of nature as we understand them, let alone the moral and technological progress that was to follow when the Enlightenment got under way. At the moment of defeat, it must have seemed at least plausible to the formerly optimistic Athenians that the Spartans might be right, and to the formerly optimistic Florentines that Savonarola might be. Like every other destruction of optimism, whether in a whole civilization or in a single individual, these must have been unspeakable catastrophes for those who had dared to expect progress. But we should feel more than sympathy for those people. We should take it personally. For if any of those earlier experiments in optimism had succeeded, our species would be exploring the stars by now, and you and I would be immortal.

TERMINOLOGY

Blind optimism (recklessness, overconfidence) Proceeding as if one knew that bad outcomes will not happen.

Blind pessimism (precautionary principle) Avoiding everything not known to be safe.

The principle of optimism All evils are caused by insufficient knowledge.

Wealth The repertoire of physical transformations that one is capable of causing.

MEANINGS OF ‘THE BEGINNING OF INFINITY’ ENCOUNTERED IN THIS CHAPTER

- Optimism. (And the end of pessimism.)

- Learning how not to fool ourselves.

- Mini-enlightenments like those of Athens and Florence were potential beginnings of infinity.

SUMMARY

Optimism (in the sense that I have advocated) is the theory that all failures - all evils - are due to insufficient knowledge. This is the key to the rational philosophy of the unknowable. It would be contentless if there were fundamental limitations to the creation of knowledge, but there are not. It would be false if there were fields - especially philosophical fields such as morality - in which there were no such thing as objective progress. But truth does exist in all those fields, and progress towards it is made by seeking good explanations. Problems are inevitable, because our knowledge will always be infinitely far from complete. Some problems are hard, but it is a mistake to confuse hard problems with problems unlikely to be solved. Problems are soluble, and each particular evil is a problem that can be solved. An optimistic civilization is open and not afraid to innovate, and is based on traditions of criticism. Its institutions keep improving, and the most important knowledge that they embody is knowledge of how to detect and eliminate errors. There may have been many short-lived enlightenments in history. Ours has been uniquely long-lived.