The Beginning of Infinity: Explanations That Transform the World - David Deutsch (2011)
Chapter 5. The Reality of Abstractions
The fundamental theories of modern physics explain the world in jarringly counter-intuitive ways. For example, most non-physicists consider it self-evident that when you hold your arm out horizontally you can feel the force of gravity pulling it downwards. But you cannot. The existence of a force of gravity is, astonishingly, denied by Einstein’s general theory of relativity, one of the two deepest theories of physics. This says that the only force on your arm in that situation is that which you yourself are exerting, upwards, to keep it constantly accelerating away from the straightest possible path in a curved region of spacetime. The reality described by our other deepest theory, quantum theory, which I shall describe in Chapter 11, is even more counter-intuitive. To understand explanations like those, physicists have to learn to think about everyday events in new ways.
The guiding principle is, as always, to reject bad explanations in favour of good ones. In regard to what is or is not real, this leads to the requirement that, if an entity is referred to by our best explanation in the relevant field, we must regard it as really existing. And if, as with the force of gravity, our best explanation denies that it exists, then we must stop assuming that it does.
Furthermore, everyday events are stupendously complex when expressed in terms of fundamental physics. If you fill a kettle with water and switch it on, all the supercomputers on Earth working for the age of the universe could not solve the equations that predict what all those water molecules will do - even if we could somehow determine their initial state and that of all the outside influences on them, which is itself an intractable task.
Fortunately, some of that complexity resolves itself into a higher-level simplicity. For example, we can predict with some accuracy how long the water will take to boil. To do so, we need know only a few physical quantities that are quite easy to measure, such as its mass, the power of the heating element, and so on. For greater accuracy we may also need information about subtler properties, such as the number and type of nucleation sites for bubbles. But those are still relatively ‘high-level’ phenomena, composed of intractably large numbers of interacting atomic-level phenomena. Thus there is a class of high-level phenomena - including the liquidity of water and the relationship between containers, heating elements, boiling and bubbles - that can be well explained in terms of each other alone, with no direct reference to anything at the atomic level or below. In other words, the behaviour of that whole class of high-level phenomena is quasi-autonomous - almost self-contained. This resolution into explicability at a higher, quasi-autonomous level is known as emergence.
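To see how few quantities such a high-level prediction needs, here is a minimal sketch, assuming a simple energy-balance model that ignores heat losses and nucleation details; the specific-heat constant is the standard approximate value, and the example figures are illustrative only.

```python
# A high-level prediction about boiling, using only emergent quantities
# (mass, power, temperatures) - no reference to individual molecules.

SPECIFIC_HEAT_WATER = 4186  # J/(kg.degC), approximate

def time_to_boil(mass_kg, power_watts, start_temp_c=20.0, boil_temp_c=100.0):
    """Estimate seconds until boiling under an idealized energy balance:
    all of the element's power goes into heating the water."""
    energy_needed = mass_kg * SPECIFIC_HEAT_WATER * (boil_temp_c - start_temp_c)
    return energy_needed / power_watts

# 1.5 kg of water on a 2 kW element: about 251 seconds - roughly four minutes.
print(f"{time_to_boil(1.5, 2000):.0f} seconds")
```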
Emergent phenomena are a tiny minority. We can predict when the water will boil, and that bubbles will form when it does, but if you wanted to predict where each bubble will go (or, to be precise, what the probabilities of its various possible motions are - see Chapter 11), you would be out of luck. Still less is it feasible to predict the countless microscopically defined properties of the water, such as whether an odd or an even number of its electrons will be affected by the heating during a given period.
Fortunately, we are uninterested in predicting or explaining most of those properties, despite the fact that they are the overwhelming majority. That is because none of them has any bearing on what we want to do with the water - such as understand what it is made of, or make tea. To make tea, we want the water to be boiling, but we do not care what the pattern of bubbles was. We want its volume to be between a certain minimum and maximum, but we do not care how many molecules that is. We can make progress in achieving those purposes because we can express them in terms of those quasi-autonomous emergent properties about which we have good high-level explanations. Nor do we need most of the microscopic details in order to understand the role of water in the cosmic scheme of things, because nearly all of those details are parochial.
The behaviour of high-level physical quantities consists of nothing but the behaviour of their low-level constituents with most of the details ignored. This has given rise to a widespread misconception about emergence and explanation, known as reductionism: the doctrine that science always explains and predicts things reductively, i.e. by analysing them into components. Often it does, as when we use the fact that inter-atomic forces obey the law of conservation of energy to make and explain a high-level prediction that the kettle cannot boil water without a power supply. But reductionism requires the relationship between different levels of explanation always to be like that, and often it is not. For example, as I wrote in The Fabric of Reality:
Consider one particular copper atom at the tip of the nose of the statue of Sir Winston Churchill that stands in Parliament Square in London. Let me try to explain why that copper atom is there. It is because Churchill served as prime minister in the House of Commons nearby; and because his ideas and leadership contributed to the Allied victory in the Second World War; and because it is customary to honour such people by putting up statues of them; and because bronze, a traditional material for such statues, contains copper, and so on. Thus we explain a low-level physical observation - the presence of a copper atom at a particular location - through extremely high-level theories about emergent phenomena such as ideas, leadership, war and tradition.
There is no reason why there should exist, even in principle, any lower-level explanation of the presence of that copper atom than the one I have just given. Presumably a reductive ‘theory of everything’ would in principle make a low-level prediction of the probability that such a statue will exist, given the condition of (say) the solar system at some earlier date. It would also in principle describe how the statue probably got there. But such descriptions and predictions (wildly infeasible, of course) would explain nothing. They would merely describe the trajectory that each copper atom followed from the copper mine, through the smelter and the sculptor’s studio and so on … In fact such a prediction would have to refer to atoms all over the planet, engaged in the complex motion we call the Second World War, among other things. But even if you had the superhuman capacity to follow such lengthy predictions of the copper atom’s being there, you would still not be able to say ‘Ah yes, now I understand why they are there’. [You] would have to inquire into what it was about that configuration of atoms, and those trajectories, that gave them the propensity to deposit a copper atom at this location. Pursuing that inquiry would be a creative task, as discovering new explanations always is. You would have to discover that certain atomic configurations support emergent phenomena such as leadership and war, which are related to one another by high-level explanatory theories. Only when you knew those theories could you understand why that copper atom is where it is.
Even in physics, some of the most fundamental explanations, and the predictions that they make, are not reductive. For instance, the second law of thermodynamics says that high-level physical processes tend towards ever greater disorder. A scrambled egg never becomes unscrambled by the whisk, and never extracts energy from the pan to propel itself upwards into the shell, which never seamlessly reseals itself. Yet, if you could somehow make a video of the scrambling process with enough resolution to see the individual molecules, and play it backwards, and examine any part of it at that scale, you would see nothing but molecules moving and colliding in strict obedience to the low-level laws of physics. It is not yet known how, or whether, the second law of thermodynamics can be derived from a simple statement about individual atoms.
There is no reason why it should be. There is often a moral overtone to reductionism (science should be essentially reductive). This is related both to instrumentalism and to the Principle of Mediocrity, which I criticized in Chapters 1 and 3. Instrumentalism is rather like reductionism except that, instead of rejecting only high-level explanations, it tries to reject all explanations. The Principle of Mediocrity is a milder form of reductionism: it rejects only high-level explanations that involve people. While I am on the subject of bad philosophical doctrines with moral overtones, let me add holism, a sort of mirror image of reductionism. It is the idea that the only valid explanations (or at least the only significant ones) are of parts in terms of wholes. Holists also often share with reductionists the mistaken belief that science can only (or should only) be reductive, and therefore they oppose much of science. All those doctrines are irrational for the same reason: they advocate accepting or rejecting theories on grounds other than whether they are good explanations.
Whenever a high-level explanation does follow logically from low-level ones, that also means that the high-level one implies something about the low-level ones. Thus, additional high-level theories, provided that they were all consistent, would place more and more constraints on what the low-level theories could be. So it could be that all the high-level explanations that exist, taken together, imply all the low-level ones, as well as vice versa. Or it could be that some low-level, some intermediate-level and some high-level explanations, taken together, imply all explanations. I guess that that is so.
Thus, one possible way that the fine-tuning problem might eventually be solved would be if some high-level explanations turned out to be exact laws of nature. The microscopic consequences of that might well seem to be fine-tuned. One candidate is the principle of the universality of computation, which I shall discuss in the next chapter. Another is the principle of testability, for, in a world in which the laws of physics do not permit the existence of testers, they also forbid themselves to be tested. However, in their current form such principles, regarded as laws of physics, are anthropocentric and arbitrary - and would therefore be bad explanations. But perhaps there are deeper versions, to which they are approximations, which are good explanations, well integrated with those of microscopic physics, as the second law of thermodynamics is.
In any case, emergent phenomena are essential to the explicability of the world. Long before humans had much explanatory knowledge, they were able to control nature by using rules of thumb. Rules of thumb have explanations, and those explanations were about high-level regularities among emergent phenomena such as fire and rocks. Long before that, it was only genes that were encoding rules of thumb, and the knowledge in them, too, was about emergent phenomena. Thus emergence is another beginning of infinity: all knowledge-creation depends on, and physically consists of, emergent phenomena.
Emergence is also responsible for the fact that discoveries can be made in successive steps, thus providing scope for the scientific method. The partial success of each theory in a sequence of improving theories is tantamount to the existence of a ‘layer’ of phenomena that each theory explains successfully - though, as it then turns out, partly mistakenly.
Successive scientific explanations are occasionally dissimilar in the way they explain their predictions, even in the domain where the predictions themselves are similar or identical. For instance, Einstein’s explanation of planetary motion does not merely correct Newton’s: it is radically different, denying, among many other things, the very existence of central elements of Newton’s explanation, such as the gravitational force and the uniformly flowing time with respect to which Newton defined motion. Likewise, the astronomer Johannes Kepler’s theory, which said that the planets move in ellipses, did not merely correct the celestial-sphere theory: it denied the spheres’ existence. And Newton’s did not substitute a new shape for Kepler’s ellipses, but a whole new way for laws to specify motion - through infinitesimally defined quantities like instantaneous velocity and acceleration. Thus each of those theories of planetary motion was ignoring or denying its predecessor’s basic means of explaining what was happening out there.
This has been used as an argument for instrumentalism, as follows. Each successive theory made small but accurate corrections to what its predecessor predicted, and was therefore a better theory in that sense. But, since each theory’s explanation swept away that of the previous theory, the previous theory’s explanation was never true in the first place, and so one cannot regard those successive explanations as constituting a growth of knowledge about reality. From Kepler to Newton to Einstein we have successively: no force needed to explain orbits; an inverse-square-law force responsible for every orbit; and again no force needed. So how could Newton’s ‘force of gravity’ (as distinct from his equations predicting its effects) ever have been an advance in human knowledge?
It could, and was, because sweeping away the entities through which a theory makes its explanation is not the same as sweeping away the whole of the explanation. Although there is no force of gravity, it is true that something real (the curvature of spacetime), caused by the sun, has a strength that varies approximately according to Newton’s inverse-square law, and affects the motion of objects, seen and unseen. Newton’s theory also correctly explained that the laws of gravitation are the same for terrestrial and celestial objects; it made a novel distinction between mass (the measure of an object’s resistance to being accelerated) and weight (the force required to prevent the object from falling under gravity); and it said that the gravitational effect of an object depends on its mass and not on other attributes such as its density or composition. Later, Einstein’s theory not only endorsed all those features but explained, in turn, why they are so. Newton’s theory, too, had been able to make more accurate predictions than its predecessors precisely because it was more right than they were about what was really happening. Before that, even Kepler’s explanation had included important elements of the true explanation: planetary orbits are indeed determined by laws of nature; those laws are indeed the same for all planets, including the Earth; they do involve the sun; they are mathematical and geometrical in character; and so on. With the hindsight provided by each successive theory, we can see not only where the previous theory made false predictions, but also that wherever it made true predictions this was because it had expressed some truth about reality. So its truth lives on in the new theory - as Einstein remarked, ‘There could be no fairer destiny for any physical theory than that it should point the way to a more comprehensive theory in which it lives on as a limiting case.’
As I explained in Chapter 1, regarding the explanatory function of theories as paramount is not just an idle preference. The predictive function of science is entirely dependent on it. Also, in order to make progress in any field, it is the explanations in existing theories, not the predictions, that have to be creatively varied in order to conjecture the next theory. Furthermore, the explanations in one field affect our understanding of other fields. For instance, if someone thinks that a conjuring trick is due to supernatural abilities of the conjurer, it will affect how they judge theories in cosmology (such as the origin of the universe, or the fine-tuning problem) and in psychology (how the human mind works) and so on.
By the way, it is something of a misconception that the predictions of successive theories of planetary motion were all that similar. Newton’s predictions are indeed excellent in the context of bridge-building, and only slightly inadequate when running the Global Positioning System, but they are hopelessly wrong when explaining a pulsar or a quasar - or the universe as a whole. To get all those right, one needs Einstein’s radically different explanations.
Such large discontinuities in the meanings of successive scientific theories have no biological analogue: in an evolving species, the dominant strain in each generation differs only slightly from that in the previous generation. Nevertheless, scientific discovery is a gradual process too; it is just that, in science, all the gradualness, and nearly all the criticism and rejection of bad explanations, takes place inside the scientists’ minds. As Popper put it, ‘We can let our theories die in our place.’
There is another, even more important, advantage in that ability to criticize theories without staking one’s life on them. In an evolving species, the adaptations of the organisms in each generation must have enough functionality to keep the organism alive, and to pass all the tests that they encounter in propagating themselves to the next generation. In contrast, the intermediate explanations leading a scientist from one good explanation to the next need not be viable at all. The same is true of creative thought in general. This is the fundamental reason that explanatory ideas are able to escape from parochialism, while biological evolution, and rules of thumb, cannot.
That brings me to the main subject of this chapter: abstractions. In Chapter 4 I remarked that pieces of knowledge are abstract replicators that ‘use’ (and hence affect) organisms and brains to get themselves replicated. That is a higher level of explanation than the emergent levels I have mentioned so far. It is a claim that something abstract - something non-physical, such as the knowledge in a gene or a theory - is affecting something physical. Physically, nothing is happening in such a situation other than that one set of emergent entities - such as genes, or computers - is affecting others, which is already anathema to reductionism. But abstractions are essential to a fuller explanation. You know that if your computer beats you at chess, it is really the program that has beaten you, not the silicon atoms or the computer as such. The abstract program is instantiated physically as a high-level behaviour of vast numbers of atoms, but the explanation of why it has beaten you cannot be expressed without also referring to the program in its own right. That program has also been instantiated, unchanged, in a long chain of different physical substrates, including neurons in the brains of the programmers and radio waves when you downloaded the program via wireless networking, and finally as states of long- and short-term memory banks in your computer. The specifics of that chain of instantiations may be relevant to explaining how the program reached you, but it is irrelevant to why it beat you: there, the content of the knowledge (in it, and in you) is the whole story. That story is an explanation that refers ineluctably to abstractions; and therefore those abstractions exist, and really do affect physical objects in the way required by the explanation.
The computer scientist Douglas Hofstadter has a nice argument that this sort of explanation is essential in understanding certain phenomena. In his book I Am a Strange Loop (2007) he imagines a special-purpose computer built of millions of dominoes. They are set up - as dominoes often are for fun - standing on end, close together, so that if one of them is knocked over it strikes its neighbour and so a whole stretch of dominoes falls, one after another. But Hofstadter’s dominoes are spring-loaded in such a way that, whenever one is knocked over, it pops back up after a fixed time. Hence, when a domino falls, a wave or ‘signal’ of falling dominoes propagates along the stretch in the direction in which it fell until it reaches either a dead end or a currently fallen domino. By arranging these dominoes in a network with looping, bifurcating and rejoining stretches, one can make these signals combine and interact in a sufficiently rich repertoire of ways to make the whole construction into a computer: a signal travelling down a stretch can be interpreted as a binary ‘1’, and the lack of a signal as a binary ‘0’, and the interactions between such signals can implement a repertoire of operations - such as ‘and’, ‘or’ and ‘not’ - out of which arbitrary computations can be composed.
One domino is designated as the ‘on switch’: when it is knocked over, the domino computer begins to execute the program that is instantiated in its loops and stretches. The program in Hofstadter’s thought experiment computes whether a given number is a prime or not. One inputs that number by placing a stretch of exactly that many dominoes at a specified position, before tripping the ‘on switch’. Elsewhere in the network, a particular domino will deliver the output of the computation: it will fall only if a divisor is found, indicating that the input was not a prime.
Hofstadter sets the input to the number 641, which is a prime, and trips the ‘on switch’. Flurries of motion begin to sweep back and forth across the network. All 641 of the input dominoes soon fall as the computation ‘reads’ its input - and snap back up and participate in further intricate patterns. It is a lengthy process, because this is a rather inefficient way to perform computations - but it does the job.
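For readers who want the high level pinned down, here is a sketch in Python of the abstract computation that the network instantiates - plain trial division, with a Boolean standing in for the output domino. It deliberately models none of the domino-level mechanics (signals, gates, spring-loaded resets); the function name and structure are illustrative, not Hofstadter’s.

```python
# The abstract computation instantiated by the domino network:
# the output domino falls only if a divisor of the input is found.

def output_domino_falls(n):
    """Return True if a divisor of n is found (n is composite), False if
    n is prime - the output domino then stays standing. Assumes n >= 2."""
    for candidate in range(2, int(n ** 0.5) + 1):
        if n % candidate == 0:
            return True   # a divisor was found: the output domino falls
    return False          # no divisor exists: the domino never falls

print(output_domino_falls(641))  # False - 641 is prime, so the domino stands
```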
Now Hofstadter imagines that an observer who does not know the purpose of the domino network watches the dominoes performing and notices that one particular domino remains resolutely standing, never affected by any of the waves of downs and ups sweeping by.
The observer points at [that domino] and asks with curiosity, ‘How come that domino there is never falling?’
We know that it is the output domino, but the observer does not. Hofstadter continues:
Let me contrast two different types of answer that someone might give. The first type of answer - myopic to the point of silliness - would be, ‘Because its predecessor never falls, you dummy!’
Or, if it has two or more neighbours, ‘Because none of its neighbours ever fall.’
To be sure, this is correct as far as it goes, but it doesn’t go very far. It just passes the buck to a different domino.
In fact one could keep passing the buck from domino to domino, to provide ever more detailed answers that were ‘silly, but correct as far as they go’. Eventually, after one had passed the buck billions of times (many more times than there are dominoes, because the program ‘loops’), one would arrive at that first domino - the ‘on switch’.
At that point, the reductive (to high-level physics) explanation would be, in summary, ‘That domino did not fall because none of the patterns of motion initiated by knocking over the “on switch” ever include it.’ But we knew that already. We can reach that conclusion - as we just have - without going through that laborious process. And it is undeniably true. But it is not the explanation we were looking for because it is addressing a different question - predictive rather than explanatory - namely, if the first domino falls, will the output domino ever fall? And it is asking at the wrong level of emergence. What we asked was: why does it not fall? To answer that, Hofstadter then adopts a different mode of explanation, at the right level of emergence:
The second type of answer would be, ‘Because 641 is prime.’ Now this answer, while just as correct (indeed, in some sense it is far more on the mark), has the curious property of not talking about anything physical at all. Not only has the focus moved upwards to collective properties … these properties somehow transcend the physical and have to do with pure abstractions, such as primality.
Hofstadter concludes, ‘The point of this example is that 641’s primality is the best explanation, perhaps even the only explanation, for why certain dominoes did fall and certain others did not fall.’
Just to correct that slightly: the physics-based explanation is true as well, and the physics of the dominoes is also essential to explaining why prime numbers are relevant to that particular arrangement of them. But Hofstadter’s argument does show that primality must be part of any full explanation of why the dominoes did or did not fall. Hence it is a refutation of reductionism in regard to abstractions. For the theory of prime numbers is not part of physics. It refers not to physical objects, but to abstract entities - such as numbers, of which there is an infinite set.
Unfortunately, Hofstadter goes on to disown his own argument and to embrace reductionism. Why?
His book is primarily about one particular emergent phenomenon, the mind - or, as he puts it, the ‘I’. He asks whether the mind can consistently be thought of as affecting the body - causing it to do one thing rather than another, given the all-embracing nature of the laws of physics. This is known as the mind-body problem. For instance, we often explain our actions in terms of choosing one action rather than another, but our bodies, including our brains, are completely controlled by the laws of physics, leaving no physical variable free for an ‘I’ to affect in order to make such a choice. Following the philosopher Daniel Dennett, Hofstadter eventually concludes that the ‘I’ is an illusion. Minds, he concludes, can’t ‘push material stuff around’, because ‘physical law alone would suffice to determine [its] behaviour’. Hence his reductionism.
But, first of all, physical laws can’t push anything either. They only explain and predict. And they are not our only explanations. The theory that the domino stands ‘because 641 is a prime (and because the domino network instantiates a primality-testing algorithm)’ is an exceedingly good explanation. What is wrong with it? It does not contradict the laws of physics. It explains more than any explanation purely in terms of those laws. And no known variant of it can do the same job.
Second, that reductionist argument would equally deny that an atom can ‘push’ (in the sense of ‘cause to move’) another atom, since the initial state of the universe, together with the laws of motion, has already determined the state at every other time.
Third, the very idea of a cause is emergent and abstract. It is mentioned nowhere in the laws of motion of elementary particles, and, as the philosopher David Hume pointed out, we cannot perceive causation, only a succession of events. Also, the laws of motion are ‘conservative’ - that is to say, they do not lose information. That means that, just as they determine the final state of any motion given the initial state, they also determine the initial state given the final state, and the state at any time from the state at any other time. So, at that level of explanation, cause and effect are interchangeable - and are not what we mean when we say that a program causes a computer to win at chess, or that a domino remained standing because 641 is a prime.
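A toy calculation may help to make ‘conservative’ concrete. The following sketch, under the simplifying assumption of a single particle moving with constant acceleration (all numbers arbitrary), shows that the final state determines the initial state exactly as the initial state determines the final one:

```python
# Information-preserving dynamics: running the law backwards from the
# final state recovers the initial state (up to floating-point rounding).

def step_forward(x, v, a, dt):
    """Position and velocity after time dt under constant acceleration a."""
    return x + v * dt + 0.5 * a * dt * dt, v + a * dt

def step_backward(x, v, a, dt):
    """Invert step_forward: the same law, read from effect to cause."""
    v0 = v - a * dt
    return x - v0 * dt - 0.5 * a * dt * dt, v0

x1, v1 = step_forward(0.0, 3.0, a=-9.8, dt=0.1)
print(step_backward(x1, v1, a=-9.8, dt=0.1))  # recovers (0.0, 3.0)
```

At this level nothing distinguishes cause from effect: the law is a bijection between states at different times.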
There is no inconsistency in having multiple explanations of the same phenomenon, at different levels of emergence. Regarding microphysical explanations as more fundamental than emergent ones is arbitrary and fallacious. There is no escape from Hofstadter’s 641 argument, and no reason to want one. The world may or may not be as we wish it to be, and to reject good explanations on that account is to imprison oneself in parochial error.
So the answer ‘Because 641 is a prime’ does explain the immunity of that domino. The theory of prime numbers on which that answer depends is not a law of physics, nor an approximation to one. It is about abstractions, and infinite sets of them at that (such as the set of ‘natural numbers’ 1, 2, 3, …, where the ellipsis ‘ … ’ denotes continuation ad infinitum). It is no mystery how we can have knowledge of infinitely large things, like the set of all natural numbers. That is just a matter of reach. Versions of number theory that confined themselves to ‘small natural numbers’ would have to be so full of arbitrary qualifiers, workarounds and unanswered questions that they would be very bad explanations until they were generalized to the case that makes sense without such ad-hoc restrictions: the infinite case. I shall discuss various sorts of infinity in Chapter 8.
When we use theories about emergent physical quantities to explain the behaviour of water in a kettle, we are using an abstraction - an ‘idealized’ model of the kettle that ignores most of its details - as an approximation to a real physical system. But when we use a computer to investigate prime numbers, we are doing the reverse: we are using the physical computer as an approximation to an abstract one which perfectly models prime numbers. Unlike any real computer, the latter never goes wrong, requires no maintenance, and has unlimited memory and unlimited time to run its program.
Our own brains are, likewise, computers which we can use to learn about things beyond the physical world, including pure mathematical abstractions. This ability to understand abstractions is an emergent property of people which greatly puzzled the ancient Athenian philosopher Plato. He noticed that the theorems of geometry - such as Pythagoras’ theorem - are about entities that are never experienced: perfectly straight lines with no thickness, intersecting each other on a perfect plane to make a perfect triangle. These are not possible objects of any observation. And yet people knew about them - and not just superficially: at the time, such knowledge was the deepest knowledge, of anything, that human beings had ever had. Where did it come from? Plato concluded that it - and all human knowledge - must come from the supernatural.
He was right that it could not have come from observation. But it could not have done so even if people had been able to observe perfect triangles (as arguably they could today, using virtual reality). As I explained in Chapter 1, empiricism has multiple fatal flaws. But it is no mystery where our knowledge of abstractions comes from: it comes from conjecture, like all our knowledge, and through criticism and seeking good explanations. It is only empiricism that made it seem plausible that knowledge outside science is inaccessible; and it is only the justified-true-belief misconception that makes such knowledge seem less ‘justified’ than scientific theories.
As I explained in Chapter 1, even in science, almost all rejected theories are rejected for being bad explanations, without ever being tested. Experimental testing is only one of many methods of criticism used in science, and the Enlightenment has made progress by bringing those other methods to bear in non-scientific fields too. The basic reason that such progress is possible is that good explanations about philosophical issues are as hard to find as in science - and criticism is correspondingly effective.
Moreover, experience does play a role in philosophy - only not the role of experimental testing that it plays in science. Primarily, it provides philosophical problems. There would have been no philosophy of science if the issue of how we can acquire knowledge of the physical world had been unproblematic. There would be no such thing as political philosophy if there had not first been a problem of how to run societies. (To avoid misunderstanding, let me stress that experience provides problems only by bringing already-existing ideas into conflict. It does not, of course, provide theories.)
In the case of moral philosophy, the empiricist and justificationist misconceptions are often expressed in the maxim that ‘you can’t derive an ought from an is’ (a paraphrase of a remark by the Enlightenment philosopher David Hume). It means that moral theories cannot be deduced from factual knowledge. This has become conventional wisdom, and has resulted in a kind of dogmatic despair about morality: ‘you can’t derive an ought from an is, therefore morality cannot be justified by reason’. That leaves only two options: either to embrace unreason or to try living without ever making a moral judgement. Both are liable to lead to morally wrong choices, just as embracing unreason or never attempting to explain the physical world leads to factually false theories (and not just ignorance).
Certainly you can’t derive an ought from an is, but you can’t derive a factual theory from an is either. That is not what science does. The growth of knowledge does not consist of finding ways to justify one’s beliefs. It consists of finding good explanations. And, although factual evidence and moral maxims are logically independent, factual and moral explanations are not. Thus factual knowledge can be useful in criticizing moral explanations.
For example, in the nineteenth century, if an American slave had written a bestselling book, that event would not logically have ruled out the proposition ‘Negroes are intended by Providence to be slaves.’ No experience could, because that is a philosophical theory. But it might have ruined the explanation through which many people understood that proposition. And if, as a result, such people had found themselves unable to explain to their own satisfaction why it would be Providential if that author were to be forced back into slavery, then they might have questioned the account that they had formerly accepted of what a black person really is, and what a person in general is - and then a good person, a good society, and so on.
Conversely, advocates of highly immoral doctrines almost invariably believe associated factual falsehoods as well. For instance, ever since the attack on the United States on 11 September 2001, millions of people worldwide have believed it was carried out by the US government, or the Israeli secret service. Those are purely factual misconceptions, yet they bear the imprint of moral wrongness just as clearly as a fossil - made of purely inorganic material - bears the imprint of ancient life. And the link, in both cases, is explanation. To concoct a moral explanation for why Westerners deserve to be killed indiscriminately, one needs to explain factually that the West is not what it pretends to be - and that requires uncritical acceptance of conspiracy theories, denials of history, and so on.
Quite generally, in order to understand the moral landscape in terms of a given set of values, one needs to understand some facts as being a certain way too. And the converse is also true: for example, as the philosopher Jacob Bronowski pointed out, success at making factual, scientific discoveries entails a commitment to all sorts of values that are necessary for making progress. The individual scientist has to value truth, and good explanations, and be open to ideas and to change. The scientific community, and to some extent the civilization as a whole, has to value tolerance, integrity and openness of debate.
We should not be surprised at these connections. The truth has structural unity as well as logical consistency, and I guess that no true explanation is entirely disconnected from any other. Since the universe is explicable, it must be that morally right values are connected in this way with true factual theories, and morally wrong values with false theories.
Moral philosophy is basically about the problem of what to do next - and, more generally, what sort of life to lead, and what sort of world to want. Some philosophers confine the term ‘moral’ to problems about how one should treat other people. But such problems are continuous with problems of individuals choosing what sort of life to lead, which is why I adopt the more inclusive definition. Terminology aside, if you were suddenly the last human on Earth, you would be wondering what sort of life to want. Deciding ‘I should do whatever pleases me most’ would give you very little clue, because what pleases you depends on your moral judgement of what constitutes a good life, not vice versa.
This also illustrates the emptiness of reductionism in philosophy. For if I ask you for advice about what objectives to pursue in life, it is no good telling me to do what the laws of physics mandate. I shall do that in any case. Nor is it any good telling me to do what I prefer, because I don’t know what I prefer to do until I have decided what sort of life I want to lead or how I should want the world to be. Since our preferences are shaped in this way, at least in part, by our moral explanations, it does not make sense to define right and wrong entirely in terms of their utility in meeting people’s preferences. Trying to do so is the project of the influential moral philosophy known as utilitarianism, which played much the same role as empiricism did in the philosophy of science: it acted as a liberating focus for the rebellion against traditional dogmas, while its own positive content contained little truth.
So there is no avoiding what-to-do-next problems, and, since the distinction between right and wrong appears in our best explanations that address such problems, we must regard that distinction as real. In other words, there is an objective difference between right and wrong: those are real attributes of objectives and behaviours. In Chapter 14 I shall argue that the same is true in the field of aesthetics: there is such a thing as objective beauty.
Beauty, right and wrong, primality, infinite sets - they all exist objectively. But not physically. What does that mean? Certainly they can affect you - as examples like Hofstadter’s show - but apparently not in the same sense that physical objects do. You cannot trip over one of them in the street. However, there is less to that distinction than our empiricism-biased common sense assumes. First of all, being affected by a physical object means that something about the physical object has caused a change, via the laws of physics (or, equivalently, that the laws of physics have caused a change via that object). But causation and the laws of physics are not themselves physical objects. They are abstractions, and our knowledge of them comes - just as for all other abstractions - from the fact that our best explanations invoke them. Progress depends on explanation, and therefore trying to conceive of the world as merely a sequence of events with unexplained regularities would entail giving up on progress.
This argument that abstractions really exist does not tell us what they exist as - for instance, which of them are purely emergent aspects of others, and which exist independently of the others. Would the laws of morality still be the same if the laws of physics were different? If they were such that knowledge could best be obtained by blind obedience to authority, then scientists would have to avoid what we think of as the values of scientific inquiry in order to make progress. My guess is that morality is more autonomous than that, and so it makes sense to say that such laws of physics would be immoral, and (as I remarked in Chapter 4) to imagine laws of physics that would be more moral than the real ones.
The reach of ideas into the world of abstractions is a property of the knowledge that they contain, not of the brain in which they may happen to be instantiated. A theory can have infinite reach even if the person who originated it is unaware that it does. However, a person is an abstraction too. And there is a kind of infinite reach that is unique to people: the reach of the ability to understand explanations. And this ability is itself an instance of the wider phenomenon of universality - to which I turn next.
TERMINOLOGY
Levels of emergence Sets of phenomena that can be explained well in terms of each other without analysing them into their constituent entities such as atoms.
Natural numbers The whole numbers 1, 2, 3 and so on.
Reductionism The misconception that science must or should always explain things by analysing them into components (and hence that higher-level explanations cannot be fundamental).
Holism The misconception that all significant explanations are of components in terms of wholes rather than vice versa.
Moral philosophy Addresses the problem of what sort of life to want.
MEANINGS OF ‘THE BEGINNING OF INFINITY’ ENCOUNTERED IN THIS CHAPTER
- The existence of emergent phenomena, and the fact that they can encode knowledge about other emergent phenomena.
- The existence of levels of approximation to true explanations.
- The ability to understand explanations.
- The ability of explanation to escape from parochialism by ‘letting our theories die in our place’.
SUMMARY
Reductionism and holism are both mistakes. In reality, explanations do not form a hierarchy with the lowest level being the most fundamental. Rather, explanations at any level of emergence can be fundamental. Abstract entities are real, and can play a role in causing physical phenomena. Causation is itself such an abstraction.