The Beginning of Infinity: Explanations That Transform the World - David Deutsch (2011)
Chapter 18. The Beginning
‘This is Earth. Not the eternal and only home of mankind, but only a starting point of an infinite adventure. All you need do is make the decision [to end your static society]. It is yours to make.’
[With that decision] came the end, the final end of Eternity. - And the beginning of Infinity.
Isaac Asimov, The End of Eternity (1955)
The first person to measure the circumference of the Earth was the astronomer Eratosthenes of Cyrene, in the third century BCE. His result was fairly close to the actual value, which is about 40,000 kilometres. For most of history this was considered an enormous distance, but with the Enlightenment that conception gradually changed, and nowadays we think of the Earth as small. That was brought about mainly by two things: first, by the science of astronomy, which discovered titanic entities compared with which our planet is indeed unimaginably tiny; and, second, by technologies that have made worldwide travel and communication commonplace. So the Earth has become smaller both relative to the universe and relative to the scale of human action.
Thus, in regard to the geography of the universe and to our place in it, the prevailing world view has rid itself of some parochial misconceptions. We know that we have explored almost the whole surface of that formerly enormous sphere; but we also know that there are far more places left to explore in the universe (and beneath the surface of the Earth’s land and oceans) than anyone imagined while we still had those misconceptions.
In regard to theoretical knowledge, however, the prevailing world view has not yet caught up with Enlightenment values. Thanks to the fallacy and bias of prophecy, a persistent assumption remains that our existing theories are at or fairly close to the limit of what is knowable - that we are nearly there, or perhaps halfway there. As the economist David Friedman has remarked, most people believe that an income of about twice their own should be sufficient to satisfy any reasonable person, and that no genuine benefit can be derived from amounts above that. As with wealth, so with scientific knowledge: it is hard to imagine what it would be like to know twice as much as we do, and so if we try to prophesy it we find ourselves just picturing the next few decimal places of what we already know. Even Feynman made an uncharacteristic mistake in this regard when he wrote:
I think there will certainly not be novelty, say for a thousand years. This thing cannot keep going on so that we are always going to discover more and more new laws. If we do, it will become boring that there are so many levels one underneath the other … We are very lucky to live in an age in which we are still making discoveries. It is like the discovery of America - you only discover it once.
The Character of Physical Law (1965)
Among other things, Feynman forgot that the very concept of a ‘law’ of nature is not cast in stone. As I mentioned in Chapter 5, this concept was different before Newton and Galileo, and it may change again. The concept of levels of explanation dates from the twentieth century, and it too will change if I am right that, as I guessed in Chapter 5, there are fundamental laws that look emergent relative to microscopic physics. More generally, the most fundamental discoveries have always not only consisted of new explanations but employed new modes of explanation - and they always will. As for being boring, that is merely a prophecy that criteria for judging problems will not evolve as fast as the problems themselves; but there is no argument for that other than a failure of imagination. Even Feynman cannot get round the fact that the future is not yet imaginable.
Shedding that kind of parochialism is something that will have to be done again and again in the future. A level of knowledge, wealth, computer power or physical scale that seems absurdly huge at any given instant will later be considered pathetically tiny. Yet we shall never reach anything like an unproblematic state. Like the guests at Infinity Hotel, we shall never be ‘nearly there’.
There are two versions of ‘nearly there’. In the dismal version, knowledge is bounded by laws of nature or supernatural decree, and progress has been a temporary phase. Though this is rank pessimism by my definition, it has gone under various names - including ‘optimism’ - and has been integral to most world views in the past. In the cheerful version, all remaining ignorance will soon be eliminated or confined to insignificant areas. This is optimistic in form, but the closer one looks, the more pessimistic it becomes in substance. In politics, for instance, utopians promise that a finite number of already-known changes can bring about a perfected human state, and that is a well-known recipe for dogmatism and tyranny.
In physics, imagine that Lagrange had been right that ‘the system of the world can be discovered only once’, or that Michelson had been right that all physics still undiscovered in 1894 was about ‘the sixth place of decimals’. They were claiming to know that anyone who subsequently became curious about what underlay that ‘system of the world’ would be enquiring futilely into the incomprehensible. And that anyone who ever wondered at an anomaly, and suspected that some fundamental explanation contained a misconception, would be mistaken.
Michelson’s future - our present - would have been lacking in explanatory knowledge to an extent that we can no longer easily imagine. A vast range of phenomena already known to him, such as gravity, the properties of the chemical elements, and the luminosity of the sun, remained to be explained. He was claiming that these phenomena would only ever appear as a list of facts or rules of thumb, to be memorized but never understood or fruitfully questioned. Every such frontier of fundamental knowledge that existed in 1894 would have been a barrier beyond which nothing would ever be amenable to explanation. There would be no such thing as the internal structure of atoms, no dynamics of space and time, no such subject as cosmology, no explanation for the equations governing gravitation or electromagnetism, no connections between physics and the theory of computation … The deepest structure in the world would be an inexplicable, anthropocentric boundary, coinciding with the boundary of what the physicists of 1894 thought they understood. And nothing inside that boundary - like, say, the existence of a force of gravity - would ever turn out to be profoundly false.
Nothing very important would ever be discovered in the laboratory that Michelson was opening. Each generation of students who studied there, instead of striving to understand the world more deeply than their teachers, could aspire to nothing better than to emulate them - or, at best, to discover the seventh decimal place of some constant whose sixth was already known. (But how? The most sensitive scientific instruments today depend on fundamental discoveries made after 1894.) Their system of the world would for ever remain a tiny, frozen island of explanation in an ocean of incomprehensibility. Michelson’s ‘fundamental laws and facts of physical science’, instead of being the beginning of an infinity of further understanding, as they were in reality, would have been the last gasp of reason in the field.
I doubt that either Lagrange or Michelson thought of himself as pessimistic. Yet their prophecies entailed the dismal decree that no matter what you do, you will understand no further. It so happens that both of them had made discoveries which could have led them to the very progress whose possibility they denied. They should have been seeking that progress, should they not? But almost no one is creative in fields in which they are pessimistic.
I remarked at the end of Chapter 13 that the desirable future is one where we progress from misconception to ever better (less mistaken) misconception. I have often thought that the nature of science would be better understood if we called theories ‘misconceptions’ from the outset, instead of only after we have discovered their successors. Thus we could say that Einstein’s Misconception of Gravity was an improvement on Newton’s Misconception, which was an improvement on Kepler’s. The neo-Darwinian Misconception of Evolution is an improvement on Darwin’s Misconception, and his on Lamarck’s. If people thought of it like that, perhaps no one would need to be reminded that science claims neither infallibility nor finality.
Perhaps a more practical way of stressing the same truth would be to frame the growth of knowledge (all knowledge, not only scientific) as a continual transition from problems to better problems, rather than from problems to solutions or from theories to better theories. This is the positive conception of ‘problems’ that I stressed in Chapter 1. Thanks to Einstein’s discoveries, our current problems in physics embody more knowledge than Einstein’s own problems did. His problems were rooted in the discoveries of Newton and Euclid, while most problems that preoccupy physicists today are rooted in - and would be inaccessible mysteries without - the discoveries of twentieth-century physics.
The same is true in mathematics. Although mathematical theorems are rarely proved false once they have been around for a while, what does happen is that mathematicians’ understanding of what is fundamental improves. Abstractions that were originally studied in their own right are understood as aspects of more general abstractions, or are related in unforeseen ways to other abstractions. And so progress in mathematics also goes from problems to better problems, as does progress in all other fields.
Optimism and reason are incompatible with the conceit that our knowledge is ‘nearly there’ in any sense, or that its foundations are. Yet comprehensive optimism has always been rare, and the lure of the prophetic fallacy strong. But there have always been exceptions. Socrates famously claimed to be deeply ignorant. And Popper wrote:
I believe that it would be worth trying to learn something about the world even if in trying to do so we should merely learn that we do not know much … It might be well for all of us to remember that, while differing widely in the various little bits we know, in our infinite ignorance we are all equal.
Conjectures and Refutations (1963)
Infinite ignorance is a necessary condition for there to be infinite potential for knowledge. Rejecting the idea that we are ‘nearly there’ is a necessary condition for the avoidance of dogmatism, stagnation and tyranny.
In 1996 the journalist John Horgan caused something of a stir with his book The End of Science: Facing the Limits of Knowledge in the Twilight of the Scientific Age. In it, he argued that the final truth in all fundamental areas of science - or at least as much of it as human minds would ever be capable of grasping - had already been discovered during the twentieth century.
Horgan wrote that he had originally believed science to be ‘open-ended, even infinite’. But he became convinced of the contrary by (what I would call) a series of misconceptions and bad arguments. His basic misconception was empiricism. He believed that what distinguishes science from unscientific fields such as literary criticism, philosophy or art is that science has the ability to ‘resolve questions’ objectively (by comparing theories with reality), while other fields can produce only multiple, mutually incompatible interpretations of any issue. He was mistaken in both respects. As I have explained throughout this book, there is objective truth to be found in all those fields, while finality or infallibility cannot be found anywhere.
Horgan accepts from the bad philosophy of ‘postmodern’ literary criticism its wilful confusion between two kinds of ‘ambiguity’ that can exist in philosophy and art. The first is the ‘ambiguity’ of multiple true meanings, either intended by the author or existing because of the reach of the ideas. The second is the ambiguity of deliberate vagueness, confusion, equivocation or self-contradiction. The first is an attribute of deep ideas, the second an attribute of deep silliness. By confusing them, one ascribes to the best art and philosophy the qualities of the worst. Since, in that view, readers, viewers and critics can attribute any meaning they choose to the second kind of ambiguity, bad philosophy declares the same to be true of all knowledge: all meanings are equal, and none of them is objectively true. One then has a choice between complete nihilism or regarding all ‘ambiguity’ as a good thing in those fields. Horgan chooses the latter option: he classifies art and philosophy as ‘ironic’ fields, irony being the presence of multiple conflicting meanings in a statement.
However, unlike the postmodernists, Horgan thinks that science and mathematics are the shining exceptions to all that. They alone are capable of non-ironic knowledge. But there is also, he concludes, such a thing as ironic science - the kind of science that cannot ‘resolve questions’ because, essentially, it is just philosophy or art. Ironic science can continue indefinitely, but that is precisely because it never resolves anything; it never discovers objective truth. Its only value is in the eye of the beholder. So the future, according to Horgan, belongs to ironic knowledge. Objective knowledge has already reached its ultimate bounds.
Horgan surveys some of the open questions of fundamental science, and judges them all either ‘ironic’ or non-fundamental, in support of his thesis. But that conclusion was made inevitable by his premises alone. For consider the prospect of any future discovery that would constitute fundamental progress. We cannot know what it is, but bad philosophy can already split it, on principle, into a new rule of thumb and a new ‘interpretation’ (or explanation). The new rule of thumb cannot possibly be fundamental: it will just be another equation. Only a trained expert could tell the difference between it and the old equation. The new ‘interpretation’ will by definition be pure philosophy, and hence must be ‘ironic’. By this method, any potential progress can be pre-emptively reinterpreted as non-progress.
Horgan rightly points out that his prophecy cannot be proved false by placing it in the context of previous failed prophecies. The fact that Michelson was wrong about the achievements of the twentieth century, and Lagrange about those of the nineteenth, does not imply that Horgan was wrong about those of the twenty-first. However, it so happens that our current scientific knowledge includes a historically unusual number of deep, fundamental problems. Never before in the history of human thought has it been so obvious that our knowledge is tiny and our ignorance vast. And so, unusually, Horgan’s pessimism contradicts existing knowledge as well as being a prophetic fallacy. For example, the problem-situation of fundamental physics today has a radically different structure from that of 1894. Although physicists then were aware of some phenomena and theoretical issues which we now recognize as harbingers of the revolutionary explanations to come, their importance was unclear at the time. It was hard to distinguish those harbingers from anomalies that would eventually be cleared up with existing explanations plus the tweaking of the ‘sixth place of decimals’ or minor terms in a formula. But today there is no such excuse for denying that some of our problems are fundamental. Our best theories are telling us of profound mismatches between themselves and the reality that they are supposed to explain.
One of the most blatant examples of that is that physics currently has two fundamental ‘systems of the world’ - quantum theory and the general theory of relativity - and they are radically inconsistent. There are many ways of characterizing this inconsistency - known as the problem of quantum gravity - corresponding to the many proposals for solving it that have been tried without success. One aspect is the ancient tension between the discrete and the continuous. The resolution that I described in Chapter 11, in terms of continuous clouds of fungible instances of a particle with diverse discrete attributes, works only if the spacetime in which this happens is itself continuous. But if spacetime is affected by the gravitation of the cloud, then it would acquire discrete attributes.
In cosmology, there has been revolutionary progress even in the few years since The End of Science was written - and also since I wrote The Fabric of Reality soon afterwards. At the time, all viable cosmological theories had the expansion of the universe gradually slowing down, due to gravity, ever since the initial explosion at the Big Bang and for ever in the future. Cosmologists were trying to determine whether, despite slowing down, its expansion rate was sufficient to make the universe expand for ever (like a projectile that has exceeded escape velocity) or whether it would eventually recollapse in a ‘Big Crunch’. Those were believed to be the only two possibilities. I discussed them in The Fabric of Reality because they were relevant to the question: is there a bound on the number of computational steps that a computer can execute during the lifetime of the universe? If there is, then physics will also impose a bound on the amount of knowledge that can be created - knowledge-creation being a form of computation.
Everyone’s first thought was that unbounded knowledge-creation is possible only in a universe that does not recollapse. However, on analysis it turned out that the reverse is true: in universes that expand for ever, the inhabitants would run out of energy. But the cosmologist Frank Tipler discovered that in certain types of recollapsing universes the Big Crunch singularity is suitable for performing the faster-and-faster trick that we used in Infinity Hotel: an infinite sequence of computational steps could be executed in a finite time before the singularity, powered by the ever-increasing tidal effects of the gravitational collapse itself. To the inhabitants - who would eventually have to upload their personalities into computers made of something like pure tides - the universe would last for ever because they would be thinking faster and faster, without limit, as it collapsed, and storing their memories in ever smaller volumes so that access times could also be reduced without limit. Tipler called such universes ‘omega-point universes’. At the time, the observational evidence was consistent with the real universe being of that type.
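To see the arithmetic behind that trick, here is a minimal illustrative schedule (my own numbers, not Tipler’s actual model): of the time T remaining before the singularity, let the first computational step take T/2, the second T/4, and in general the nth step T/2^n. Then the total time consumed is

\frac{T}{2} + \frac{T}{4} + \frac{T}{8} + \cdots = \sum_{n=1}^{\infty} \frac{T}{2^{n}} = T,

which is finite, even though infinitely many steps are executed. Subjectively the inhabitants think an unbounded number of thoughts; objectively the Big Crunch arrives after only the finite time T.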
A small part of the revolution that is currently overtaking cosmology is that the omega-point models have been ruled out by observation. Evidence - including a remarkable series of studies of supernovae in distant galaxies - has forced cosmologists to the unexpected conclusion that the universe not only will expand for ever but has been expanding at an accelerating rate. Something has been counteracting its gravity.
We do not know what. Pending the discovery of a good explanation, the unknown cause has been named ‘dark energy’. There are several proposals for what it might be, including effects that merely give the appearance of acceleration. But the best working hypothesis at present is that in the equations for gravity there is an additional term, of a form first mooted by Einstein in 1917 and then dropped because he realized that his explanation for it was bad. It was proposed again in the 1980s as a possible effect of quantum field theory, but again there is no theory of the physical meaning of such a term that is good enough to predict, for instance, its magnitude. The problem of the nature and effects of dark energy is no minor detail, nor does anything about it suggest a perpetually unfathomable mystery. So much for cosmology being a fundamentally completed science.
Depending on what dark energy turns out to be, it may well be possible to harness it in the distant future, to provide energy for knowledge-creation to continue for ever. Because this energy would have to be collected over ever greater distances, the computation would have to become ever slower. In a mirror image of what would happen in omega-point cosmologies, the inhabitants of the universe would notice no slowdown, because, again, they would be instantiated as computer programs whose total number of steps would be unbounded. Thus dark energy, which has ruled out one scenario for the unlimited growth of knowledge, would provide the literal driving force of another.
The new cosmological models describe universes that are infinite in their spatial dimensions. Because the Big Bang happened a finite time ago, and because of the finiteness of the speed of light, we shall only ever see a finite portion of infinite space - but that portion will continue to grow for ever. Thus, eventually, ever more unlikely phenomena will come into view. When the total volume that we can see is a million times larger than it is now, we shall see things that have a probability of one in a million of existing in space as we see it today. Everything physically possible will eventually be revealed: watches that came into existence spontaneously; asteroids that happen to be good likenesses of William Paley; everything. According to the prevailing theory, all those things exist today, but many times too far away for light to have reached us from them - yet.
Light becomes fainter as it spreads out: there are fewer photons per unit area. That means that ever larger telescopes are needed to detect a given object at ever larger distances. So there may be a limit to how distant - and therefore how unlikely - a phenomenon we shall ever be able to see. Except, that is, for one type of phenomenon: a beginning of infinity. Specifically, any civilization that is colonizing the universe in an unbounded way will eventually reach our location.
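That fading follows the standard inverse-square relation - ordinary textbook physics, not anything specific to this argument. Ignoring cosmological complications such as redshift, a source of luminosity L delivers at distance d a photon flux per unit collecting area of roughly

F = \frac{L}{4\pi d^{2}},

so detecting an object ten times as far away requires about a hundred times the collecting area. That is why the required telescope size grows so rapidly with distance, and why there may be a practical bound on how improbable a phenomenon we can ever observe directly.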
Hence a single infinite space could play the role of the infinitely many universes postulated by anthropic explanations of the fine-tuning coincidences. In some ways it could play that role better: if the probability that such a civilization could form is not zero, there must be infinitely many such civilizations in space, and they will eventually encounter each other. If they could estimate that probability from theory, they could test the anthropic explanation.
Furthermore, anthropic arguments could not only dispense with all those parallel universes,* they could dispense with the variant laws of physics too. Recall from Chapter 6 that all the mathematical functions that occur in physics belong to a relatively narrow class, the analytic functions. They have a remarkable property: if an analytic function is non-zero at even one point, then throughout its entire domain it can equal zero only at isolated points. So this must be true of ‘the probability that an astrophysicist exists’ expressed as a function of the constants of physics. We know little about this function, but we do know that it is non-zero for at least one set of values of the constants, namely ours. Hence we also know that it is non-zero for almost all values. It is presumably unimaginably tiny for almost all sets of values - but, nevertheless, non-zero. And hence, almost whatever the constants were, there would be infinitely many astrophysicists in our single universe.
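In standard textbook terms - a general property of analytic functions, not anything peculiar to this argument - the claim is this: if a function f is analytic on a connected domain and is non-zero at even one point, then it can equal zero only at isolated points of that domain. For example, \sin x is non-zero at x = \pi/2, and accordingly it vanishes only at the isolated points x = n\pi. So, writing P(c) for ‘the probability that an astrophysicist exists’ as a function of any one constant of physics c (the others held fixed), and assuming that P is analytic, the fact that P is non-zero at our own value of c implies that P(c) > 0 for all c outside a set of isolated points.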
Unfortunately, at this point the anthropic explanation of fine-tuning has cancelled itself out: astrophysicists exist whether there is fine-tuning or not. So, in the new cosmology even more than in the old one, the anthropic argument does not explain the fine-tuning. Nor, therefore, can it solve the Fermi problem, ‘Where are they?’ It may turn out to be a necessary part of the explanation, but it can never explain anything by itself. Also, as I explained in Chapter 8, any theory involving an anthropic argument must provide a measure for defining probabilities in an infinite set of things. It is unknown how to do that in the spatially infinite universe that cosmologists currently believe we live in.
That issue has a wider scope. For example, there is the so-called ‘quantum suicide argument’ in regard to the multiverse. Suppose you want to win the lottery. You buy a ticket and set up a machine that will automatically kill you in your sleep if you lose. Then, in all the histories in which you do wake up, you are a winner. If you do not have loved ones to mourn you, or other reasons to prefer that most histories not be affected by your premature death, you have arranged to get something for nothing with what proponents of this argument call ‘subjective certainty’. However, that way of applying probabilities does not follow directly from quantum theory, as the usual one does. It requires an additional assumption, namely that when making decisions one should ignore the histories in which the decision-maker is absent. This is closely related to anthropic arguments. Again, the theory of probability for such cases is not well understood, but my guess is that the assumption is false.
A related assumption occurs in the so-called simulation argument, whose most cogent proponent is the philosopher Nick Bostrom. Its premise is that in the distant future the whole universe as we know it is going to be simulated in computers (perhaps for scientific or historical research) many times - perhaps infinitely many times. Therefore virtually all instances of us are in those simulations and not the original world. And therefore we are almost certainly living in a simulation. So the argument goes. But is it really valid to equate ‘most instances’ with ‘near certainty’ like that?
For an inkling of why it might not be, consider a thought experiment. Imagine that physicists discover that space is actually many-layered like puff pastry; the number of layers varies from place to place; the layers split in some places, and their contents split with them. Every layer has identical contents, though. Hence, although we do not feel it, instances of us split and merge as we move around. Suppose that in London space has a million layers, while in Oxford it has only one. I travel frequently between the two cities, and one day I wake up having forgotten which one I am in. It is dark. Should I bet that I am much more likely to be in London, just because a million times as many instances of me ever wake up in London as in Oxford? I think not. In that situation it is clear that counting the number of instances of oneself is no guide to the probability one ought to use in decision-making. We should be counting histories not instances. In quantum theory, the laws of physics tell us how to count histories by measure. In the case of multiple simulations, I know of no good argument for any way of counting them: it is an open question. But I do not see why repeating the same simulation of me a million times should in any sense make it ‘more likely’ that I am a simulation rather than the original. What if one computer uses a million times as many electrons as another to represent each bit of information in its memory? Am I more likely to be ‘in’ the former computer than the latter?
A different issue raised by the simulation argument is this: will the universe as we know it really be simulated often in the future? Would that not be immoral? The world as it exists today contains an enormous amount of suffering, and whoever ran such a simulation would be responsible for recreating it. Or would they? Are two identical instances of a quale the same thing as one? If so, then creating the simulation would not be immoral - no more so than reading a book about past suffering is immoral. But in that case how different do two simulations of people have to be before they count as two people for moral purposes? Again, I know of no good answer to those questions. I suspect that they will be answered only by the explanatory theory from which AI will also follow.
Here is a related but starker moral question. Take a powerful computer and set each bit randomly to 0 or 1 using a quantum randomizer. (That means that 0 and 1 occur in histories of equal measure.) At that point all possible contents of the computer’s memory exist in the multiverse. So there are necessarily histories present in which the computer contains an AI program - indeed, all possible AI programs in all possible states, up to the size that the computer’s memory can hold. Some of them are fairly accurate representations of you, living in a virtual-reality environment crudely resembling your actual environment. (Present-day computers do not have enough memory to simulate a realistic environment accurately, but, as I said in Chapter 7, I am sure that they have more than enough to simulate a person.) There are also people in every possible state of suffering. So my question is: is it wrong to switch the computer on, setting it executing all those programs simultaneously in different histories? Is it, in fact, the worst crime ever committed? Or is it merely inadvisable, because the combined measure of all the histories containing suffering is very tiny? Or is it innocent and trivial?
An even more dubious example of anthropic-type reasoning is the doomsday argument. It attempts to estimate the life expectancy of our species by assuming that the typical human is roughly halfway through the sequence of all humans. Hence we should expect the total number who will ever live to be about twice the number who have lived so far. Of course this is prophecy, and for that reason alone cannot possibly be a valid argument, but let me briefly pursue it in its own terms. First, it does not apply at all if the total number of humans is going to be infinite - for in that case every human who ever lives will live unusually early in the sequence. So, if anything, it suggests that we are at the beginning of infinity.
Also, how long is a human lifetime? Illness and old age are going to be cured soon - certainly within the next few lifetimes - and technology will also be able to prevent deaths through homicide or accidents by creating backups of the states of brains, which could be uploaded into new, blank brains in identical bodies if a person should die. Once that technology exists, people will consider it far more foolish not to make frequent backups of themselves than people today consider it foolish not to back up their computers. If nothing else, evolution alone will ensure that, because those who do not back themselves up will gradually die out. So there can be only one outcome: effective immortality for the whole human population, with the present generation being one of the last that will have short lives. That being so, if our species will nevertheless have a finite lifetime, then knowing the total number of humans who will ever live provides no upper bound on that lifetime, because it cannot tell us how long the potentially immortal humans of the future will live before the prophesied catastrophe strikes.
In 1993 the mathematician Vernor Vinge wrote an influential essay entitled ‘The Coming Technological Singularity’, in which he estimated that, within about thirty years, predicting the future of technology would become impossible - an event that is now known simply as ‘the Singularity’. Vinge associated the approaching Singularity with the achievement of AI, and subsequent discussions have centred on that. I certainly hope that AI is achieved by then, but I see no sign yet of the theoretical progress that I have argued must come first. On the other hand, I see no reason to single out AI as a mould-breaking technology: we already have billions of humans.
Most advocates of the Singularity believe that, soon after the AI breakthrough, superhuman minds will be constructed and that then, as Vinge put it, ‘the human era will be over.’ But my discussion of the universality of human minds rules out that possibility. Since humans are already universal explainers and constructors, they can already transcend their parochial origins, so there can be no such thing as a superhuman mind as such. There can only be further automation, allowing the existing kind of human thinking to be carried out faster, and with more working memory, and delegating ‘perspiration’ phases to (non-AI) automata. A great deal of this has already happened with computers and other machinery, as well as with the general increase in wealth which has multiplied the number of humans who are able to spend their time thinking. This can indeed be expected to continue. For instance, there will be ever-more-efficient human-computer interfaces, no doubt culminating in add-ons for the brain. But tasks like internet searching will never be carried out by super-fast AIs scanning billions of documents creatively for meaning, because they will not want to perform such tasks any more than humans do. Nor will artificial scientists, mathematicians and philosophers ever wield concepts or arguments that humans are inherently incapable of understanding. Universality implies that, in every important sense, humans and AIs will never be other than equal.
Similarly, the Singularity is often assumed to be a moment of unprecedented upheaval and danger, as the rate of innovation becomes too rapid for humans to cope with. But this is a parochial misconception. Throughout the first few centuries of the Enlightenment, there has been a constant feeling that rapid and accelerating innovation is getting out of hand. But our capacity to cope with, and enjoy, changes in our technology, lifestyle, ethical norms and so on has been increasing too, with the weakening and extinction of some of the anti-rational memes that used to sabotage it. In future, when the rate of innovation increases still further owing to the sheer clock rate and throughput of brain add-ons and AI computers, our capacity to cope with that will increase at the same rate or faster: if everyone were suddenly able to think a million times as fast, no one would feel hurried as a result. Hence I think that the concept of the Singularity as a sort of discontinuity is a mistake. Knowledge will continue to grow exponentially or even faster, and that is astounding enough.
The economist Robin Hanson has suggested that there have been several singularities in the history of our species, such as the agricultural revolution and the industrial revolution. Arguably, even the early Enlightenment was a ‘singularity’ by that definition. Who could have predicted that someone who lived through the English Civil War - a bloody struggle of religious fanatics versus an absolute monarch - and through the victory of the religious fanatics in 1651, might also live through the peaceful birth of a society that saw liberty and reason as its principal characteristics? The Royal Society, for instance, was founded in 1660 - a development that would hardly have been conceivable a generation earlier. Roy Porter marks 1688 as the beginning of the English Enlightenment. That is the date of the ‘Glorious Revolution’, the beginning of predominantly constitutional government along with many other rational reforms which were part of that deeper and astonishingly rapid shift in the prevailing world view.
Also, the time beyond which scientific prediction has no access is different for different phenomena. For each phenomenon it is the moment at which the creation of new knowledge may begin to make a significant difference to what one is trying to predict. Since our estimates of that, too, are subject to the same kind of horizon, we should really understand all our predictions as implicitly including the proviso ‘unless the creation of new knowledge intervenes’.
Some explanations do have reach into the distant future, far beyond the horizons that make most other things unpredictable. One of them is that fact itself. Another is the infinite potential of explanatory knowledge - the subject of this book.
To attempt to predict anything beyond the relevant horizon is futile - it is prophecy - but wondering what is beyond it is not. When wondering leads to conjecture, that constitutes speculation, which is not irrational either. In fact it is vital. Every one of those deeply unforeseeable new ideas that make the future unpredictable will begin as a speculation. And every speculation begins with a problem: problems in regard to the future can reach beyond the horizon of prediction too - and problems have solutions.
In regard to understanding the physical world, we are in much the same position as Eratosthenes was in regard to the Earth: he could measure it remarkably accurately, and he knew a great deal about certain aspects of it - immensely more than his ancestors had known only a few centuries before. He must have known about such things as seasons in regions of the Earth about which he had no evidence. But he also knew that most of what was out there was far beyond his theoretical knowledge as well as his physical reach.
We cannot yet measure the universe as accurately as Eratosthenes measured the Earth. And we, too, know how ignorant we are. For instance, we know from universality that AI is attainable by writing computer programs, but we have no idea how to write (or evolve) the right one. We do not know what qualia are or how creativity works, despite having working examples of qualia and creativity inside all of us. We learned the genetic code decades ago, but have no idea why it has the reach that it has. We know that both of the deepest prevailing theories in physics must be false. We know that people are of fundamental significance, but we do not know whether we are among those people: we may fail, or give up, and intelligences originating elsewhere in the universe may be the beginning of infinity. And so on for all the problems I have mentioned and many more.
Wheeler once imagined writing out all the equations that might be the ultimate laws of physics on sheets of paper all over the floor. And then:
Stand up, look back on all those equations, some perhaps more hopeful than others, raise one’s finger commandingly, and give the order ‘Fly!’ Not one of those equations will put on wings, take off, or fly. Yet the universe ‘flies’.
C. W. Misner, K. S. Thorne and J. A. Wheeler, Gravitation (1973)
We do not know why it ‘flies’. What is the difference between laws that are instantiated in physical reality and those that are not? What is the difference between a computer simulation of a person (which must be a person, because of universality) and a recording of that simulation (which cannot be a person)? When there are two identical simulations under way, are there two sets of qualia or one? Double the moral value or not?
Our world, which is so much larger, more unified, more intricate and more beautiful than that of Eratosthenes, and which we understand and control to an extent that would have seemed godlike to him, is nevertheless just as mysterious, yet open, to us now as his was to him then. We have lit only a few candles here and there. We can cower in their parochial light until something beyond our ken snuffs us out, or we can resist. We already see that we do not live in a senseless world. The laws of physics make sense: the world is explicable. There are higher levels of emergence and higher levels of explanation. Profound abstractions in mathematics, morality and aesthetics are accessible to us. Ideas of tremendous reach are possible. But there is also plenty in the world that does not and will not make sense until we ourselves work out how to rectify it. Death does not make sense. Stagnation does not make sense. A bubble of sense within endless senselessness does not make sense. Whether the world ultimately does make sense will depend on how people - the likes of us - choose to think and to act.
Many people have an aversion to infinity of various kinds. But there are some things that we do not have a choice about. There is only one way of thinking that is capable of making progress, or of surviving in the long run, and that is the way of seeking good explanations through creativity and criticism. What lies ahead of us is in any case infinity. All we can choose is whether it is an infinity of ignorance or of knowledge, wrong or right, death or life.