The Beginning of Infinity: Explanations That Transform the World - David Deutsch (2011)
Chapter 12. A Physicist’s History of Bad Philosophy
With Some Comments on Bad Science
By the way, what I have just outlined is what I call a ‘physicist’s history of physics’, which is never correct …
Richard Feynman, QED: The Strange Theory of Light and Matter (1985)
READER: So, I am an emergent, quasi-autonomous flow of information in the multiverse.
DAVID: You are.
READER: And I exist in multiple instances, some of them different from each other, some not. And those are the least weird things about the world according to quantum theory.
READER: But your argument is that we have no option but to accept the theory’s implications, because it is the only known explanation of many phenomena and has survived all known experimental tests.
DAVID: What other option would you like to have?
READER: I’m just summarizing.
DAVID: Then yes: quantum theory does have universal reach. But if all you want to explain is how we know that there are other universes, you don’t have to go via the full theory. You need look no further than what a Mach-Zehnder interferometer does to a single photon: the path that was not taken affects the one that was. Or, if you want the same thing writ large, just think of a quantum computer: its output will depend on intermediate results being computed in vast numbers of different histories of the same few atoms.
READER: But that’s just a few atoms existing in multiple instances. Not people.
DAVID: Are you claiming to be made of something other than atoms?
READER: Ah, I see.
DAVID: Also, imagine a vast cloud of instances of a single photon, some of which are stopped by a barrier. Are they absorbed by the barrier that we see, or is each absorbed by a different, quasi-autonomous barrier at the same location?
READER: Does it make a difference?
DAVID: Yes. If they were all absorbed by the barrier we see, it would vaporize.
READER: So it would.
DAVID: And we can ask - as I did in the story of the starship and the twilight zone - what is holding up those barriers? It must be other instances of the floor. And of the planet. And then we can consider the experimenters who set all this up and who observe the results, and so on.
READER: So that trickle of photons through the interferometer really does provide a window on a vast multiplicity of universes.
DAVID: Yes. It’s another example of reach - just a small portion of the reach of quantum theory. The explanation of those experiments in isolation isn’t as hard to vary as the full theory. But in regard to the existence of other universes it’s incontrovertible all the same.
READER: And that’s all there is to it?
DAVID: Yes.
READER: But then why is it that only a small minority of quantum physicists agree?
DAVID: Bad philosophy.
READER: What’s that?
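Before turning to that question: the interferometer argument in the dialogue above can be sketched numerically. This is my own minimal illustration, not the book's presentation; it assumes a common 50/50 beamsplitter convention (transmission amplitude 1/√2, reflection amplitude i/√2) and shows that blocking the path the photon "did not take" changes what happens on the path it did take.

```python
import math

t = 1 / math.sqrt(2)    # transmission amplitude at a 50/50 beamsplitter
r = 1j / math.sqrt(2)   # reflection amplitude (i phase, a standard convention)

def beamsplitter(a, b):
    """Mix the amplitudes arriving at the two input ports of a beamsplitter."""
    return t * a + r * b, r * a + t * b

# A single photon enters port 0 of the first beamsplitter.
a, b = beamsplitter(1, 0)           # amplitudes on the two paths

# Both paths open: they recombine at the second beamsplitter and interfere.
out0, out1 = beamsplitter(a, b)
# |out0|^2 == 0 and |out1|^2 == 1: every photon exits port 1.

# Block path b: its amplitude never reaches the second beamsplitter.
out0_b, out1_b = beamsplitter(a, 0)
# Now each output port gets probability 0.25 (the other 0.5 was absorbed):
# removing the unused path has changed the outcome on the path that was used.
```

With both paths open the outputs interfere and one detector never fires; with one path blocked, the formerly dark detector fires a quarter of the time.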
Quantum theory was discovered independently by two physicists who reached it from different directions: Werner Heisenberg and Erwin Schrödinger. The latter gave his name to the Schrödinger equation, which is a way of expressing the quantum-mechanical laws of motion.
Both versions of the theory were formulated between 1925 and 1927, and both explained motion, especially within atoms, in new and astonishingly counter-intuitive ways. Heisenberg’s theory said that the physical variables of a particle do not have numerical values. Instead, they are matrices: large arrays of numbers which are related in complicated, probabilistic ways to the outcomes of observations of those variables. With hindsight, we now know that that multiplicity of information exists because a variable has different values for different instances of the object in the multiverse. But, at the time, neither Heisenberg nor anyone else believed that his matrix-valued quantities literally described what Einstein called ‘elements of reality’.
The Schrödinger equation, when applied to the case of an individual particle, described a wave moving through space. But Schrödinger soon realized that for two or more particles it did not. It did not represent a wave with multiple crests, nor could it be resolved into two or more waves; mathematically, it was a single wave in a higher-dimensional space. With hindsight, we now know that such waves describe what proportion of the instances of each particle are in each region of space, and also the entanglement information among the particles.
Although Schrödinger’s and Heisenberg’s theories seemed to describe very dissimilar worlds, neither of which was easy to relate to existing conceptions of reality, it was soon discovered that, if a certain simple rule of thumb was added to each theory, they would always make identical predictions. Moreover, these predictions turned out to be very successful.
With hindsight, we can state the rule of thumb like this: whenever a measurement is made, all the histories but one cease to exist. The surviving one is chosen at random, with the probability of each possible outcome being equal to the total measure of all the histories in which that outcome occurs.
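Stated schematically (my own notation, not the book's), writing $\mu(h)$ for the measure of history $h$:

```latex
\Pr(\text{outcome } x) \;=\; \sum_{h \,:\, h \text{ yields } x} \mu(h),
\qquad \sum_{h} \mu(h) \;=\; 1 .
```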
At that point, disaster struck. Instead of trying to improve and integrate those two powerful but slightly flawed explanatory theories, and to explain why the rule of thumb worked, most of the theoretical-physics community retreated rapidly and with remarkable docility into instrumentalism. If the predictions work, they reasoned, why worry about the explanation? So they tried to regard quantum theory as being nothing but a set of rules of thumb for predicting the observed outcomes of experiments, saying nothing (else) about reality. This move is still popular today, and is known to its critics (and even to some of its proponents) as the ‘shut-up-and-calculate interpretation of quantum theory’.
This meant ignoring such awkward facts as (1) the rule of thumb was grossly inconsistent with both theories; hence it could be used only in situations where quantum effects were too small to be noticed. Those happened to include the moment of measurement (because of entanglement with the measuring instrument, and consequent decoherence, as we now know). And (2) it was not even self-consistent when applied to the hypothetical case of an observer performing a quantum measurement on another observer. And (3) both versions of quantum theory were clearly describing some sort of physical process that brought about the outcomes of experiments. Physicists, both through professionalism and through natural curiosity, could hardly help wondering about that process. But many of them tried not to. Most of them went on to train their students not to. This counteracted the scientific tradition of criticism in regard to quantum theory.
Let me define ‘bad philosophy’ as philosophy that is not merely false, but actively prevents the growth of other knowledge. In this case, instrumentalism was acting to prevent the explanations in Schrödinger’s and Heisenberg’s theories from being improved or elaborated or unified.
The physicist Niels Bohr (another of the pioneers of quantum theory) then developed an ‘interpretation’ of the theory which later became known as the ‘Copenhagen interpretation’. It said that quantum theory, including the rule of thumb, was a complete description of reality. Bohr excused the various contradictions and gaps by using a combination of instrumentalism and studied ambiguity. He denied the ‘possibility of speaking of phenomena as existing objectively’ - but said that only the outcomes of observations should count as phenomena. He also said that, although observation has no access to ‘the real essence of phenomena’, it does reveal relationships between them, and that, in addition, quantum theory blurs the distinction between observer and observed. As for what would happen if one observer performed a quantum-level observation on another, he avoided the issue - which became known as the ‘paradox of Wigner’s friend’, after the physicist Eugene Wigner.
In regard to the unobserved processes between observations, where both Schrödinger’s and Heisenberg’s theories seemed to be describing a multiplicity of histories happening at once, Bohr proposed a new fundamental principle of nature, the ‘principle of complementarity’. It said that accounts of phenomena could be stated only in ‘classical language’ - meaning language that assigned single values to physical variables at any one time - but classical language could be used only in regard to some variables, including those that had just been measured. One was not permitted to ask what values the other variables had. Thus, for instance, in response to the question ‘Which path did the photon take?’ in the Mach-Zehnder interferometer, the reply would be that there is no such thing as which path when the path is not observed. In response to the question ‘Then how does the photon know which way to turn at the final mirror, since this depends on what happened on both paths?’, the reply would be an equivocation called ‘particle-wave duality’: the photon is both an extended (non-zero volume) and a localized (zero-volume) object at the same time, and one can choose to observe either attribute but not both. Often this is expressed in the saying ‘It is both a wave and a particle simultaneously.’ Ironically, there is a sense in which those words are precisely true: in that experiment the entire multiversal photon is indeed an extended object (wave), while instances of it (particles, in histories) are localized. Unfortunately, that is not what is meant in the Copenhagen interpretation. There the idea is that quantum physics defies the very foundations of reason: particles have mutually exclusive attributes, period. And it dismisses criticisms of the idea as invalid because they constitute attempts to use ‘classical language’ outside its proper domain (namely describing outcomes of measurements).
Later, Heisenberg called the values about which one was not permitted to ask potentialities, of which only one would become actual when a measurement was completed. How can potentialities that do not happen affect actual outcomes? That was left vague. What caused the transition between ‘potential’ and ‘actual’? The implication of Bohr’s anthropocentric language - which was made explicit in most subsequent presentations of the Copenhagen interpretation - was that the transition is caused by human consciousness. Thus consciousness was said to be acting at a fundamental level in physics.
For decades, various versions of all that were taught as fact - vagueness, anthropocentrism, instrumentalism and all - in university physics courses. Few physicists claimed to understand it. None did, and so students’ questions were met with such nonsense as ‘If you think you’ve understood quantum mechanics then you don’t.’ Inconsistency was defended as ‘complementarity’ or ‘duality’; parochialism was hailed as philosophical sophistication. Thus the theory claimed to stand outside the jurisdiction of normal (i.e. all) modes of criticism - a hallmark of bad philosophy.
Its combination of vagueness, immunity from criticism, and the prestige and perceived authority of fundamental physics opened the door to countless systems of pseudo-science and quackery supposedly based on quantum theory. Its disparagement of plain criticism and reason as being ‘classical’, and therefore illegitimate, has given endless comfort to those who want to defy reason and embrace any number of irrational modes of thought. Thus quantum theory - the deepest discovery of the physical sciences - has acquired a reputation for endorsing practically every mystical and occult doctrine ever proposed.
Not every physicist accepted the Copenhagen interpretation or its descendants. Einstein never did. The physicist David Bohm struggled to construct an alternative that was compatible with realism, and produced a rather complicated theory which I regard as the multiverse theory in heavy disguise - though he was strongly opposed to thinking of it in that way. And in Dublin in 1952 Schrödinger gave a lecture in which at one point he jocularly warned his audience that what he was about to say might ‘seem lunatic’. It was that, when his equation seems to be describing several different histories, they are ‘not alternatives but all really happen simultaneously’. This is the earliest known reference to the multiverse.
Here was an eminent physicist joking that he might be considered mad. Why? For claiming that his own equation - the very one for which he had won the Nobel prize - might be true.
Schrödinger never published that lecture, and seems never to have taken the idea further. Five years later, and independently, the physicist Hugh Everett published a comprehensive theory of the multiverse, now known as the Everett interpretation of quantum theory. Yet it took several more decades before Everett’s work was even noticed by more than a handful of physicists. Even now that it has become well known, it is endorsed by only a small minority. I have often been asked to explain this unusual phenomenon. Unfortunately I know of no entirely satisfactory explanation. But, to understand why it is perhaps not quite as bizarre and isolated an event as it may appear, one has to consider the broader context of bad philosophy.
Error is the normal state of our knowledge, and is no disgrace. There is nothing bad about false philosophy. Problems are inevitable, but they can be solved by imaginative, critical thought that seeks good explanations. That is good philosophy, and good science, both of which have always existed in some measure. For instance, children have always learned language by making, criticizing and testing conjectures about the connection between words and reality. They could not possibly learn it in any other way, as I shall explain in Chapter 16.
Bad philosophy has always existed too. For instance, children have always been told, ‘Because I say so.’ Although that is not always intended as a philosophical position, it is worth analysing it as one, for in four simple words it contains remarkably many themes of false and bad philosophy. First, it is a perfect example of bad explanation: it could be used to ‘explain’ anything. Second, one way it achieves that status is by addressing only the form of the question and not the substance: it is about who said something, not what they said. That is the opposite of truth-seeking. Third, it reinterprets a request for true explanation (why should something-or-other be as it is?) as a request for justification (what entitles you to assert that it is so?), which is the justified-true-belief chimera. Fourth, it confuses the nonexistent authority for ideas with human authority (power) - a much-travelled path in bad political philosophy. And, fifth, it claims by this means to stand outside the jurisdiction of normal criticism.
Bad philosophy before the Enlightenment was typically of the because-I-say-so variety. When the Enlightenment liberated philosophy and science, they both began to make progress, and increasingly there was good philosophy. But, paradoxically, bad philosophy became worse.
I have said that empiricism initially played a positive role in the history of ideas by providing a defence against traditional authorities and dogma, and by attributing a central role - albeit the wrong one - to experiment in science. At first, the fact that empiricism is an impossible account of how science works did almost no harm, because no one took it literally. Whatever scientists may have said about where their discoveries came from, they eagerly addressed interesting problems, conjectured good explanations, tested them, and only lastly claimed to have induced the explanations from experiment. The bottom line was that they succeeded: they made progress. Nothing prevented that harmless (self-)deception, and nothing was inferred from it.
Gradually, though, empiricism did begin to be taken literally, and so began to have increasingly harmful effects. For instance, the doctrine of positivism, developed during the nineteenth century, tried to eliminate from scientific theories everything that had not been ‘derived from observation’. Now, since nothing is ever derived from observation, what the positivists tried to eliminate depended entirely on their own whims and intuitions. Occasionally these were even good. For instance, the physicist Ernst Mach (father of Ludwig Mach of the Mach-Zehnder interferometer), who was also a positivist philosopher, influenced Einstein, spurring him to eliminate untested assumptions from physics - including Newton’s assumption that time flows at the same rate for all observers. That happened to be an excellent idea. But Mach’s positivism also caused him to oppose the resulting theory of relativity, essentially because it claimed that spacetime really exists even though it cannot be ‘directly’ observed. Mach also resolutely denied the existence of atoms, because they were too small to observe. We laugh at this silliness now - when we have microscopes that can see atoms - but the role of philosophy should have been to laugh at it then.
Instead, when the physicist Ludwig Boltzmann used atomic theory to unify thermodynamics and mechanics, he was so vilified by Mach and other positivists that he was driven to despair, which may have contributed to his suicide just before the tide turned and most branches of physics shook off Mach’s influence. From then on there was nothing to discourage atomic physics from thriving. Fortunately also, Einstein soon rejected positivism and became a forthright defender of realism. That was why he never accepted the Copenhagen interpretation. I wonder: if Einstein had continued to take positivism seriously, could he ever have thought of the general theory of relativity, in which spacetime not only exists but is a dynamic, unseen entity bucking and twisting under the influence of massive objects? Or would spacetime theory have come to a juddering halt like quantum theory did?
Unfortunately, most philosophies of science since Mach’s have been even worse (Popper’s being an important exception). During the twentieth century, anti-realism became almost universal among philosophers, and common among scientists. Some denied that the physical world exists at all, and most felt obliged to admit that, even if it does, science has no access to it. For example, in ‘Reflections on my Critics’ the philosopher Thomas Kuhn wrote:
There is [a step] which many philosophers of science wish to take and which I refuse. They wish, that is, to compare [scientific] theories as representations of nature, as statements about ‘what is really out there’.
Imre Lakatos and Alan Musgrave, eds., Criticism and the Growth of Knowledge (1970)
Positivism degenerated into logical positivism, which held that statements not verifiable by observation are not only worthless but meaningless. This doctrine threatened to sweep away not only explanatory scientific knowledge but the whole of philosophy. In particular: logical positivism itself is a philosophical theory, and it cannot be verified by observation; hence it asserts its own meaninglessness (as well as that of all other philosophy).
The logical positivists tried to rescue their theory from that implication (for instance by calling it ‘logical’, as distinct from philosophical), but in vain. Then Wittgenstein embraced the implication and declared all philosophy, including his own, to be meaningless. He advocated remaining silent about philosophical problems, and, although he never attempted to live up to that aspiration, he was hailed by many as one of the greatest geniuses of the twentieth century.
One might have thought that this would be the nadir of philosophical thinking but unfortunately there were greater depths to plumb. During the second half of the twentieth century, mainstream philosophy lost contact with, and interest in, trying to understand science as it was actually being done, or how it should be done. Following Wittgenstein, the predominant school of philosophy for a while was ‘linguistic philosophy’, whose defining tenet was that what seem to be philosophical problems are actually just puzzles about how words are used in everyday life, and that philosophers can meaningfully study only that.
Next, in a related trend that originated in the European Enlightenment but spread all over the West, many philosophers moved away from trying to understand anything. They actively attacked the idea not only of explanation and reality, but of truth, and of reason. Merely to criticize such attacks for being self-contradictory like logical positivism - which they were - is to give them far too much credence. For at least the logical positivists and Wittgenstein were interested in making a distinction between what does and does not make sense - albeit that they advocated a hopelessly wrong one.
One currently influential philosophical movement goes under various names such as postmodernism, deconstructionism and structuralism, depending on historical details that are unimportant here. It claims that because all ideas, including scientific theories, are conjectural and impossible to justify, they are essentially arbitrary: they are no more than stories, known in this context as ‘narratives’. Mixing extreme cultural relativism with other forms of anti-realism, it regards objective truth and falsity, as well as reality and knowledge of reality, as mere conventional forms of words that stand for an idea’s being endorsed by a designated group of people such as an elite or consensus, or by a fashion or other arbitrary authority. And it regards science and the Enlightenment as no more than one such fashion, and the objective knowledge claimed by science as an arrogant cultural conceit.
Perhaps inevitably, these charges are true of postmodernism itself: it is a narrative that resists rational criticism or improvement, precisely because it rejects all criticism as mere narrative. Creating a successful postmodernist theory is indeed purely a matter of meeting the criteria of the postmodernist community - which have evolved to be complex, exclusive and authority-based. Nothing like that is true of rational ways of thinking: creating a good explanation is hard not because of what anyone has decided, but because there is an objective reality that does not meet anyone’s prior expectations, including those of authorities. The creators of bad explanations such as myths are indeed just making things up. But the method of seeking good explanations creates an engagement with reality, not only in science, but in good philosophy too - which is why it works, and why it is the antithesis of concocting stories to meet made-up criteria.
Although there have been signs of improvement since the late twentieth century, one legacy of empiricism that continues to cause confusion, and has opened the door to a great deal of bad philosophy, is the idea that it is possible to split a scientific theory into its predictive rules of thumb on the one hand and its assertions about reality (sometimes known as its ‘interpretation’) on the other. This does not make sense, because - as with conjuring tricks - without an explanation it is impossible to recognize the circumstances under which a rule of thumb is supposed to apply. And it especially does not make sense in fundamental physics, because the predicted outcome of an observation is itself an unobserved physical process.
Many sciences have so far avoided this split, including most branches of physics - though relativity may have had a narrow escape, as I mentioned. Hence in, say, palaeontology, we do not speak of the existence of dinosaurs millions of years ago as being ‘an interpretation of our best theory of fossils’: we claim that it is the explanation of fossils. And, in any case, the theory of evolution is not primarily about fossils or even dinosaurs, but about their genes, of which not even fossils exist. We claim that there really were dinosaurs, and that they had genes whose chemistry we know, even though there is an infinity of possible rival ‘interpretations’ of the same data which make all the same predictions and yet say that neither the dinosaurs nor their genes were ever there.
One of them is the ‘interpretation’ that dinosaurs are only a manner of speaking about certain sensations that palaeontologists have when they gaze at fossils. The sensations are real, but the dinosaurs were not. Or, if they were, we can never know of them. The latter is one of many tangles that one gets into via the justified-true-belief theory of knowledge - for in reality here we are, knowing of them. Then there is the ‘interpretation’ that the fossils themselves come into existence only when they are extracted from the rock in a manner chosen by the palaeontologist and experienced in a way that can be communicated to other palaeontologists. In that case, fossils are certainly no older than the human species. And they are evidence not of dinosaurs, but only of those acts of observation. Or one can say that dinosaurs are real, but not as animals, only as a set of relationships between different people’s experiences of fossils. One can then infer that there is no sharp distinction between dinosaurs and palaeontologists, and that ‘classical language’, though unavoidable, cannot express the ineffable relationship between them. None of those ‘interpretations’ is empirically distinguishable from the rational explanation of fossils. But they are ruled out for being bad explanations: all of them are general-purpose means of denying anything. One can even use them to deny that Schrödinger’s equation is true.
Since explanationless prediction is actually impossible, the methodology of excluding explanation from a science is just a way of holding one’s explanations immune from criticism. Let me give an example from a distant field: psychology.
I have mentioned behaviourism, which is instrumentalism applied to psychology. It became the prevailing interpretation in that field for several decades, and, although it is now largely repudiated, research in psychology continues to downplay explanation in favour of stimulus-response rules of thumb. Thus, for instance, it is considered good science to conduct behaviouristic experiments to measure the extent to which a human psychological state such as, say, loneliness or happiness is genetically coded (like eye colour) or not (such as date of birth). Now, there are some fundamental problems with such a study from an explanatory point of view. First, how can we measure whether different people’s ratings of their own psychological state are commensurable? That is to say, some proportion of the people claiming to have happiness level 8 might be quite unhappy but also so pessimistic that they cannot imagine anything much better. And some of the people who claim only level 3 might in fact be happier than most, but have succumbed to a craze that promises extreme future happiness to those who can learn to chant in a certain way. And, second, if we were to find that people with a particular gene tend to rate themselves happier than people without it, how can we tell whether the gene is coding for happiness? Perhaps it is coding for less reluctance to quantify one’s happiness. Perhaps the gene in question does not affect the brain at all, but only how a person looks, and perhaps better-looking people are happier on average because they are treated better by others. There is an infinity of possible explanations. But the study is not seeking explanations.
It would make no difference if the experimenters tried to eliminate the subjective self-assessment and instead observed happy and unhappy behaviour (such as facial expressions, or how often a person whistles a happy tune). The connection with happiness would still involve comparing subjective interpretations which there is no way of calibrating to a common standard; but in addition there would be an extra level of interpretation: some people believe that behaving in ‘happy’ ways is a remedy for unhappiness, so, for those people, such behaviours might be a proxy for unhappiness.
For these reasons, no behavioural study can detect whether happiness is inborn or not. Science simply cannot resolve that issue until we have explanatory theories about what objective attributes people are referring to when they speak of their happiness, and also about what physical chain of events connects genes to those attributes.
So how does explanation-free science address the issue? First, one explains that one is not measuring happiness directly, but only a proxy such as the behaviour of marking checkboxes on a scale called ‘happiness’. All scientific measurements use chains of proxies. But, as I explained in Chapters 2 and 3, each link in the chain is an additional source of error, and we can avoid fooling ourselves only by criticizing the theory of each link - which is impossible unless an explanatory theory links the proxies to the quantities of interest. That is why, in genuine science, one can claim to have measured a quantity only when one has an explanatory theory of how and why the measurement procedure should reveal its value, and with what accuracy.
There are circumstances under which there is a good explanation linking the measurable proxy such as marking checkboxes with a quantity of interest, and in such cases there need be nothing unscientific about the study. For example, political opinion surveys may ask whether respondents are ‘happy’ with a given politician facing re-election, under the theory that this gives information about which checkbox the respondents will choose in the election itself. That theory is then tested at the election. There is no analogue of such a test in the case of happiness: there is no independent way of measuring it. Another example of bona-fide science would be a clinical trial to test a drug purported to alleviate (particular identifiable types of) unhappiness. In that case, the objective of the study is, again, to determine whether the drug causes behaviour such as saying that one is happier (without also experiencing adverse side effects). If a drug passes that test, the issue of whether it really makes the patients happier, or merely alters their personality to have lower standards or something of that sort, is inaccessible to science until such time as there is a testable explanatory theory of what happiness is.
In explanationless science, one may acknowledge that actual happiness and the proxy one is measuring are not necessarily equal. But one nevertheless calls the proxy ‘happiness’ and moves on. One chooses a large number of people, ostensibly at random (though in real life one is restricted to small minorities such as university students, in a particular country, seeking additional income), and one excludes those who have detectable extrinsic reasons for happiness or unhappiness (such as recent lottery wins or bereavement). So one’s subjects are just ‘typical people’ - though in fact one cannot tell whether they are statistically representative without an explanatory theory. Next, one defines the ‘heritability’ of a trait as its degree of statistical correlation with how genetically related the people are. Again, that is a non-explanatory definition: according to it, whether one was a slave or not was once a highly ‘heritable’ trait in America: it ran in families. More generally, one acknowledges that statistical correlations do not imply anything about what causes what. But one adds the inductivist equivocation that ‘they can be suggestive, though.’
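The non-explanatory character of that definition can be shown with a toy computation (my own sketch, not drawn from any actual study): a trait caused entirely by family circumstance still 'runs in families', so the correlational definition scores it as maximally 'heritable' despite having no genetic cause at all.

```python
import random

random.seed(0)

# Toy population: 1000 families, two siblings each. The trait is fixed
# entirely by which family one is born into -- a purely environmental
# cause, as in the slavery example in the text.
family_trait = [1.0 if random.random() < 0.3 else 0.0 for _ in range(1000)]
sibling_a = family_trait[:]    # each sibling simply shares the family's
sibling_b = family_trait[:]    # circumstance, not any gene

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(sibling_a, sibling_b)
# r == 1.0: by the correlational definition, the trait is 100% 'heritable',
# even though its cause is, by construction, entirely environmental.
```

The computation asserts nothing about genes; it only measures that relatives resemble each other, whatever the cause.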
Then one does the study and finds that ‘happiness’ is, say, 50 per cent ‘heritable’. This asserts nothing about happiness itself, until the relevant explanatory theories are discovered (at some time in the future - perhaps after consciousness is understood and AIs are commonplace technology). Yet people find the result interesting, because they interpret it via everyday meanings of the words ‘happiness’ and ‘heritable’. Under that interpretation - which the authors of the study, if they are scrupulous, will nowhere have endorsed - the result is a profound contribution to a wide class of philosophical and scientific debates about the nature of the human mind. Press reports of the discovery will reflect this. The headline will say, ‘New Study Shows Happiness 50% Genetically Determined’ - without quotation marks around the technical terms.
So will subsequent bad philosophy. For, suppose that someone now does dare to seek explanatory theories about the cause of human happiness. Happiness is a state of continually solving one’s problems, they conjecture. Unhappiness is caused by being chronically baulked in one’s attempts to do that. And solving problems itself depends on knowing how; so, external factors aside, unhappiness is caused by not knowing how. (Readers may recognize this as a special case of the principle of optimism.)
Interpreters of the study say that it has refuted that theory of happiness. At most 50 per cent of unhappiness can be caused by not knowing how, they say. The other 50 per cent is beyond our control: genetically determined, and hence independent of what we know or believe, pending the relevant genetic engineering. (Using the same logic on the slavery example, one could have concluded in 1860 that, say, 95 per cent of slavery is genetically determined and therefore beyond the power of political action to remedy.)
At this point - taking the step from ‘heritable’ to ‘genetically determined’ - the explanationless psychological study has transformed its correct but uninteresting result into something very exciting. For it has weighed in on a substantive philosophical issue (optimism) and a scientific issue about how the brain gives rise to mental states such as qualia. But it has done so without knowing anything about them.
But wait, say the interpreters. Admittedly we can’t tell whether any genes code for happiness (or part of it). But who cares how the genes cause the effect - whether by conferring good looks or otherwise? The effect itself is real.
The effect is real, but the experiment cannot detect how much of it one can alter without genetic engineering, just by knowing how. That is because the way in which those genes affect happiness may itself depend on knowledge. For instance, a cultural change may affect what people deem to be ‘good looks’, and that would then change whether people tend to be made happier by virtue of having particular genes. Nothing in the study can detect whether such a change is about to happen. Similarly, it cannot detect whether a book will be written one day which will persuade some proportion of the population that all evils are due to lack of knowledge, and that knowledge is created by seeking good explanations. If some of those people consequently create more knowledge than they otherwise would have, and become happier than they otherwise would have been, then part of the 50 per cent of happiness that was ‘genetically determined’ in all previous studies will no longer be so.
The interpreters of the study may respond that it has proved that there can be no such book! Certainly none of them will write such a book, or arrive at such a thesis. And so the bad philosophy will have caused bad science, which will have stifled the growth of knowledge. Notice that this is a form of bad science that may well have conformed to all the best practices of scientific method - proper randomizing, proper controls, proper statistical analysis. All the formal rules of ‘how to keep from fooling ourselves’ may have been followed. And yet no progress could possibly be made, because it was not being sought: explanationless theories can do no more than entrench existing, bad explanations.
It is no accident that, in the imaginary study I have described, the outcome appeared to support a pessimistic theory. A theory that predicts how happy people will (probably) be cannot possibly take account of the effects of knowledge-creation. So, to whatever extent knowledge-creation is involved, the theory is prophecy, and will therefore be biased towards pessimism.
Behaviouristic studies of human psychology must, by their nature, lead to dehumanizing theories of the human condition. For refusing to theorize about the mind as a causative agent is the equivalent of regarding it as a non-creative automaton.
The behaviourist approach is equally futile when applied to the issue of whether an entity has a mind. I have already criticized it in Chapter 7, in regard to the Turing test. The same holds in regard to the controversy about animal minds - such as whether the hunting or farming of animals should be legal - which stems from philosophical disputes about whether animals experience qualia analogous to those of humans when in fear and pain, and, if so, which animals do. Now, science has little to say on this matter at present, because there is as yet no explanatory theory of qualia, and hence no way of detecting them experimentally. But this does not stop governments from trying to pass the political hot potato to the supposedly objective jurisdiction of experimental science. So, for instance, in 1997 the zoologists Patrick Bateson and Elizabeth Bradshaw were commissioned by the National Trust to determine whether stags suffer when hunted. They reported that they do, because the hunt is ‘grossly stressful … exhausting and agonizing’. However, that assumes that the measurable quantities denoted there by the words ‘stress’ and ‘agony’ (such as enzyme levels in the bloodstream) signify the presence of qualia of the same names - which is precisely what the press and public assumed that the study was supposed to discover. The following year, the Countryside Alliance commissioned a study of the same issue, led by the veterinary physiologist Roger Harris, who concluded that the levels of those quantities are similar to those of a human who is not suffering but enjoying a sport such as football. Bateson responded - accurately - that nothing in Harris’s report contradicted his own. But that is because neither study had any bearing on the issue in question.
This form of explanationless science is just bad philosophy disguised as science. Its effect is to suppress the philosophical debate about how animals should be treated, by pretending that the issue has been settled scientifically. In reality, science has, and will have, no access to this issue until explanatory knowledge about qualia has been discovered.
Another way in which explanationless science inhibits progress is that it amplifies errors. Let me give a rather whimsical example. Suppose you have been commissioned to measure the average number of people who visit the City Museum each day. It is a large building with many entrances. Admission is free, so visitors are not normally counted. You engage some assistants. They will not need any special knowledge or competence; in fact, as will become clear, the less competent they are, the better your results are going to be.
Each morning your assistants take up their stations at the doors. They mark a sheet of paper whenever someone enters through their door. After the museum closes, they count all their marks, and you add together all their counts. You do this every day for a specified period, take the average, and that is the number that you report to your client.
However, in order to claim that your count equals the number of visitors to the museum, you need some explanatory theories. For instance, you are assuming that the doors you are observing are precisely the entrances to the museum, and that they lead only to the museum. If one of them leads to the cafeteria or the museum shop as well, you might be making a large error if your client does not consider people who go only there to be ‘visitors to the museum’. There is also the issue of museum staff - do they count as visitors? And there are visitors who leave and come back on the same day, and so on. So you need quite a sophisticated explanatory theory of what the client means by ‘visitors to the museum’ before you can devise a strategy for counting them.
Suppose you count the number of people coming out as well. If you have an explanatory theory saying that the museum is always empty at night, and that no one enters or leaves other than through the doors, and that visitors are never created, destroyed, split or merged, and so on, then one possible use for the outgoing count is to check the ingoing one: you would predict that they should be the same. Then, if they are not the same, you will have an estimate of the accuracy of your count. That is good science. In fact reporting your result without also making an accuracy estimate makes your report strictly meaningless. But unless you have an explanatory theory of the interior of the museum - which you never see - you cannot use the outgoing count, or anything else, to estimate your error.
Now, suppose you are doing your study using explanationless science instead - which really means science with unstated, uncriticized explanations, just as the Copenhagen interpretation really assumed that there was only one unobserved history connecting successive observations. Then you might analyse the results as follows. For each day, subtract the count of people entering from the count of those leaving. If the difference is not zero, then - and this is the key step in the study - call that difference the ‘spontaneous-human-creation count’ if it is positive, or the ‘spontaneous-human-destruction count’ if it is negative. If it is exactly zero, call it ‘consistent with conventional physics’.
The less competent your counting and tabulating are, the more often you will find those ‘inconsistencies with conventional physics’. Next, prove that non-zero results (the spontaneous creation or destruction of human beings) are inconsistent with conventional physics. Include this proof in your report, but also include a concession that extraterrestrial visitors would probably be able to harness physical phenomena of which we are unaware. Also, that teleportation to or from another location would be mistaken for ‘destruction’ (without trace) and ‘creation’ (out of thin air) in your experiment and that therefore this cannot be ruled out as a possible cause of the anomalies.
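The key step can be mocked up in a few lines (Python, with invented numbers - the visitor totals, error rates and seed are all hypothetical). The only 'physics' in the simulation is a counter occasionally missing someone, yet the analysis faithfully reports 'spontaneous creation' and 'destruction', and the worse the counting, the more such 'anomalies' it finds:

```python
import random

def count_with_errors(true_events, miss_rate, rng):
    """Tally events, occasionally missing one - the only 'mechanism' here."""
    return sum(1 for _ in range(true_events) if rng.random() >= miss_rate)

def run_study(days, visitors_per_day, miss_rate, seed=0):
    """Return how many days show a nonzero (entering minus leaving)
    difference - i.e. how many days the explanationless analysis would
    label 'spontaneous creation or destruction of human beings'."""
    rng = random.Random(seed)
    anomalies = 0
    for _ in range(days):
        entered = count_with_errors(visitors_per_day, miss_rate, rng)
        left = count_with_errors(visitors_per_day, miss_rate, rng)
        if entered != left:
            anomalies += 1
    return anomalies

# The less competent the counting, the more 'inconsistencies with
# conventional physics' the study finds.
print(run_study(days=100, visitors_per_day=500, miss_rate=0.001))
print(run_study(days=100, visitors_per_day=500, miss_rate=0.05))
```

Nothing in the output distinguishes counting errors from teleportation; only an explanatory theory of how the counts were made could do that.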
When headlines appear of the form ‘Teleportation Possibly Observed in City Museum, Say Scientists’ and ‘Scientists Prove Alien Abduction is Real,’ protest mildly that you have claimed no such thing, that your results are not conclusive, merely suggestive, and that more studies are needed to determine the mechanism of this perplexing phenomenon.
You have made no false claim. Data can become ‘inconsistent with conventional physics’ by the mundane means of containing errors, just as genes can ‘cause happiness’ by countless mundane means such as affecting your appearance. The fact that your paper does not point this out does not make it false. Moreover, as I said, the crucial step consists of a definition, and definitions, provided only that they are consistent, cannot be false. You have defined an observation of more people entering than leaving as a ‘destruction’ of people. Although, in everyday language, that phrase has a connotation of people disappearing in puffs of smoke, that is not what it means in this study. For all you know, they could be disappearing in puffs of smoke, or in invisible spaceships: that would be consistent with your data. But your paper takes no position on that. It is entirely about the outcomes of your observations.
So you had better not name your research paper ‘Errors Made When Counting People Incompetently’. Aside from being a public-relations blunder, that title might even be considered unscientific, according to explanationless science. For it would be taking a position on the ‘interpretation’ of the observed data, about which it provides no evidence.
In my view this is a scientific experiment in form only. The substance of scientific theories is explanation, and explanation of errors constitutes most of the content of the design of any non-trivial scientific experiment.
As the above example illustrates, a generic feature of experimentation is that the bigger the errors you make, either in the numbers or in your naming and interpretation of the measured quantities, the more exciting the results are, if true. So, without powerful techniques of error-detection and -correction - which depend on explanatory theories - this gives rise to an instability where false results drown out the true. In the ‘hard sciences’ - which usually do good science - false results due to all sorts of errors are nevertheless common. But they are corrected when their explanations are criticized and tested. That cannot happen in explanationless science.
Consequently, as soon as scientists allow themselves to stop demanding good explanations and consider only whether a prediction is accurate or inaccurate, they are liable to make fools of themselves. This is the means by which a succession of eminent physicists over the decades have been fooled by conjurers into believing that various conjuring tricks have been done by ‘paranormal’ means.
Bad philosophy cannot easily be countered by good philosophy - argument and explanation - because it holds itself immune. But it can be countered by progress. People want to understand the world, no matter how loudly they may deny that. And progress makes bad philosophy harder to believe. That is not a matter of refutation by logic or experience, but of explanation. If Mach were alive today I expect he would have accepted the existence of atoms once he saw them through a microscope, behaving according to atomic theory. As a matter of logic, it would still be open to him to say, ‘I’m not seeing atoms, I’m only seeing a video monitor. And I’m only seeing that theory’s predictions about me, not about atoms, come true.’ But the fact that that is a general-purpose bad explanation would be borne in upon him. It would also be open to him to say, ‘Very well, atoms do exist, but electrons do not.’ But he might well tire of that game if a better one seems to be available - that is to say, if rapid progress is made. And then he would soon realize that it is not a game.
Bad philosophy is philosophy that denies the possibility, desirability or existence of progress. And progress is the only effective way of opposing bad philosophy. If progress cannot continue indefinitely, bad philosophy will inevitably come again into the ascendancy - for it will be true.
TERMINOLOGY
Bad philosophy Philosophy that actively prevents the growth of knowledge.
Interpretation The explanatory part of a scientific theory, supposedly distinct from its predictive or instrumental part.
Copenhagen interpretation Niels Bohr’s combination of instrumentalism, anthropocentrism and studied ambiguity, used to avoid understanding quantum theory as being about reality.
Positivism The bad philosophy that everything not ‘derived from observation’ should be eliminated from science.
Logical positivism The bad philosophy that statements not verifiable by observation are meaningless.
MEANINGS OF ‘THE BEGINNING OF INFINITY’ ENCOUNTERED IN THIS CHAPTER
- The rejection of bad philosophy.
SUMMARY
Before the Enlightenment, bad philosophy was the rule and good philosophy the rare exception. With the Enlightenment came much more good philosophy, but bad philosophy became much worse, with the descent from empiricism (merely false) to positivism, logical positivism, instrumentalism, Wittgenstein, linguistic philosophy, and the ‘postmodernist’ and related movements.
In science, the main impact of bad philosophy has been through the idea of separating a scientific theory into (explanationless) predictions and (arbitrary) interpretation. This has helped to legitimize dehumanizing explanations of human thought and behaviour. In quantum theory, bad philosophy manifested itself mainly as the Copenhagen interpretation and its many variants, and as the ‘shut-up-and-calculate’ interpretation. These appealed to doctrines such as logical positivism to justify systematic equivocation and to immunize themselves from criticism.