The Philosophy of Physics (2016)

5
Further Adventures in Space and Time

This chapter steps away from philosophical issues stemming from symmetries but stays firmly focused on space and time. The three topics we cover tend to be of a more epistemological flavour than the previous chapter’s ontological problems: we begin with a look at the idea of the ‘true geometry of the world’ and consider whether we could ever discover such a thing. We then consider a similar problem involved in the idea of measuring time and finding a ‘true time.’ Finally, by way of limbering up for the next chapter on statistical physics, we consider the status of determinism in physics.

5.1 Can We Know the World’s Geometry?

We tend to think of the world as having some definite geometry, and we might also tend to think that this geometry is one of those things that scientific work can help us discover. For example, depending on what the geometry of space is, the internal angles of a triangle will sum to more than, less than, or exactly 180 degrees. If only we could make a big enough triangle, we could test this. (In fact, Carl Friedrich Gauss is reported to have performed such an experiment in the 1820s by measuring the angles of light beamed between three peaks in Hanover - whether this experiment was really supposed to constitute a test of the deviation of the world’s geometry from Euclidean geometry is a matter of debate among historians of mathematics.) Likewise for other plane figures, such as squares and circles, whose measured properties would be altered accordingly. It is just a matter of measurement. Or is it?

Poincaré’s Parable of the Surveyors

Henri Poincaré famously invoked a kind of ‘discworld’ (long before Terry Pratchett, and also before general relativity came along with its curved spacetime) to show that any number of (mutually inconsistent) world geometries could be made consistent with our observations of the world, including our direct sensory experience.1 A team of flatland surveyors is confined to a closed Euclidean disc (i.e. with an edge at radius R), armed with rigid rods and light rays to make their measurements. He then makes rod length temperature-dependent, adds light refraction to this world, and has the temperature fall off as one strays from the disc’s center, with distance ρ. (Note that he actually encloses his beings in a large sphere, but his discussion suggests taking a cross-section through the sphere’s center, giving a disc of radius R in which the distance ρ of an inhabitant is measured from the center. The temperature is then proportional to R² − ρ².) All objects in this world dilate and contract by the same factor (proportional to R² − ρ²), and the thermal equilibration happens instantaneously. As one probes further out from the origin the rods contract more and more, becoming smaller and smaller (as well as colder, though the flatlanders wouldn’t be able to measure this since their thermometers suffer the same distortions). The surveyors know nothing of this distorting force, naturally assuming their bodies and instruments to be rigid on account of feeling and observing no such effects. A similar force afflicts light rays, which have an index of refraction inversely proportional to R² − ρ². In modern discussions, we speak more generally of ‘universal forces’ rather than temperature: all we need is to postulate a force that dilates all objects uniformly in the same way, so that it goes completely unnoticed.

Of course, in a flat Euclidean world, the ratio of the circumference of a circle to its radius is simply 2π. However, with the distorting forces of the temperature (or whatever universal field one postulates generating the same behavior), our surveyors, in figuring out the intrinsic geometry of their world (using whatever tools we might use to do the same: string, rulers, lasers, etc.), will find values greater than 2π, characteristic of a hyperbolic (Lobachevskian) geometry (i.e. one with negative curvature). Likewise, measured triangles will have internal angles adding up to less than 180 degrees. The effect will be more dramatic as one measures larger and larger radii, circumferences, and triangles. Moreover, since their measuring instruments would shrink as they approached the boundary, they would never reach it (see fig. 5.1). From their results they would (wrongly, by construction: we know it is a finite Euclidean disc) infer that they live in an infinite non-Euclidean world - if distance is defined in terms of what is measured with rulers and the like, then the space is infinite in extent! They have adopted, wrongly as it transpires, a (perfectly rational) ‘rigid body hypothesis,’ which ensures that merely moving about in space will not distort shapes and sizes.

But now suppose that maverick physicist Albert Fleinstein (the flatland counterpart of Einstein) points out that all of the surveyors’ results are compatible with the presence of precisely the forces introduced above in a flat, closed Euclidean world. So we have two theories:

T1 The world is infinite and non-Euclidean (hyperbolic).

T2 The world is finite and Euclidean, though with universal forces.

Fig. 5.1 Poincaré’s surveyors are enclosed in a sphere of radius R. As they move a distance ρ from the center, objects that were, e.g. 1 meter long at the center will be just (R² − ρ²)/R² meters long - the boundary is unreachable since there ρ = R, so that R² − ρ² = 0. Hence, the space would be deemed infinite by beings confined within its borders.
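To make the surveyors’ predicament concrete, here is a minimal numerical sketch (my own illustration, not Poincaré’s, using only the contraction factor from the caption): since rods at distance ρ shrink by the factor (R² − ρ²)/R², a coordinate step dρ is measured as dρ·R²/(R² − ρ²), so the measured radius is the integral of that factor, while the measured circumference of a coordinate circle of radius ρ is 2πρ·R²/(R² − ρ²).

```python
# Minimal sketch (my own illustration): what Poincaré's surveyors would measure,
# assuming rods at distance rho from the center shrink by the factor (R^2 - rho^2)/R^2.
import numpy as np

R = 1.0

def measured_radius(rho, n=100000):
    # measured radius = integral from 0 to rho of R^2/(R^2 - r^2) dr (trapezoid rule)
    r = np.linspace(0.0, rho, n)
    return np.trapz(R**2 / (R**2 - r**2), r)

def measured_circumference(rho):
    # a coordinate circle of radius rho, measured with the shrunken rulers
    return 2 * np.pi * rho * R**2 / (R**2 - rho**2)

for rho in (0.1, 0.5, 0.9, 0.99):
    ratio = measured_circumference(rho) / measured_radius(rho)
    print(f"rho = {rho}: measured circumference/radius = {ratio:.2f} (Euclidean: {2*np.pi:.2f})")

# The ratio exceeds 2*pi and grows with rho (the hyperbolic signature described in the
# text), and the measured radius diverges as rho -> R: the surveyors judge the space infinite.
```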

The problem Poincaré poses is: how can Fleinstein’s fellow discfolk decide from within their world which theory is correct? There is no ‘stepping out’ of their two-dimensional standpoint, to our God’s eye view, to check. According to Poincaré the question is undecidable by experience or reason, and must simply be settled by stipulation, as a matter of convention. The problem is that any experience or experiment that supports one theory will equally support the other. We have what philosophers of science call ‘underdetermination of theory by evidence’: the evidence can’t decide the matter. But it isn’t a case of simply not having gathered enough data: no possible data, consistent with the construction of the simplified world, can settle the controversy, since any such data will be derivable from either theory.

The conventional choice could be made for various reasons (simplicity, coherence with their other theories, closeness to ‘the experienced world,’ etc.). But there is no absolute correctness to either choice since there is no absolute criterion on which to base it. A convention is just another name for an ‘implicit definition’ (an arbitrary choice of the language employed). For Poincaré facts of geometry are conventions in this sense: free and bounded only by the avoidance of contradiction. Poincaré himself believed that the flatlanders would be best served by choosing the ‘Euclidean space + forces’ option, invoking its superior simplicity and closeness to our everyday intuitions about space.2

Our choice is not ... imposed by experience. It is simply guided by experience. But it remains free; we choose this geometry rather than that geometry, not because it is more true, but because it is more convenient. ([36], p. 145)

Of course, we can easily quibble with this, pointing out that these curious forces are rather messy, while using the geometry of space (to ‘embody’ or ‘geometrize’ the force) is somehow more elegant and unified. This is not really relevant, however: the key point is that one can apparently gerrymander a finite flat space picture with some curious distorting forces to capture all of the empirical facts that a geometrical picture with an effectively infinite negatively curved space might generate. In which case, we cannot be said to know the world’s true geometry. In each case, we define the terms of the theory in such a way that the laws or axioms (e.g. of geometry) come out true.3

A Topological Parable

Hans Reichenbach ([39], pp. 63-66) considers a similar example involving beings that live on the surface of a sphere. One can, he argues, once again generate a parallel story, concerning topological features of a space, by this time redefining aspects concerning the reidentification of objects in the space. One might think it is perfectly easy to tell what the shape of space is in this case by simply planting a flag in the ground and traveling far enough to return to the starting point and finding your flag. Reichenbach points out, however, that this depends on a convention about objects ‘being the same.’ One might have a situation in which one is in a flat space but as one moves out certain features (such as the flag you planted) are mysteriously duplicated so that it only looks as though you are back where you started. There would need to be some principle of ‘pre-established harmony’ in the overall space, where the other stretches of space ‘know’ that the flag was placed a certain distance away so that it could be duplicated accordingly. This example is harder to uphold if we imagine stretching a very long rope around the space so that one could tie a knot in it. The sphere’s surface would provide an obstruction when pulled tight, which should be missing in the flat infinite space with duplication. However, this is too quick: presumably one must have tethered the rope to something (a tree say), which essentially is no different from leaving a flag. All one would see, as one completes the journey, is an end of a rope tied to a tree: there is no certainty that it is the other end of your rope! It might simply lead off in a straight line again, onto the next clone of your world.4

It is clear that this kind of thinking can be ramped up a dimension so that we imagine the same scenario occurring in a universe just like ours - of course, Poincaré was using his case as a possible analogue for our world.

We can ask: is the world open or closed (by analogy with the surface of a sphere)? If it is closed one could imagine setting off from the Earth keeping a straight line course and eventually returning to Earth. We face the same reidentification dilemma: is this the same Earth you left or an identical one some great distance from the ‘real’ Earth? If you think that it is a different Earth then you have to accept all of the strange coincidences (your cup was left in the same place on the desk in your office as the cup here on this imposter desk in this twin Earth’s version of your office). You can try to ‘catch’ the twin Earth (or some other copy) out by leaving a special message locked in a safe where only you have the combination. But you travel once again and come to find the safe opens with your code and the same message is in there: it is an assumption (though perfectly reasonable) that this is the same safe and message; the pre-established harmony story would have the same observational consequences. The more acceptable alternative is, of course, that you live in a universe with ‘Asteroids-geometry’ (a toroidal structure, so that going far enough in one direction brings you back to your starting point). In other words, the kind of topological structure (whether a space is open or closed for example) depends to a certain extent on our preference for a good causal story, with no spooky influences, such as the curious duplicating of one world in another location.

Reichenbach ([39], §17) has also extended this to other scientific facts at the basis of our theories, such as the uniformity of time (relating to the metric of time). Here we stipulate (by a ‘coordinative definition’) that, e.g. a pendulum’s swings cover equal periods of time - and this is a stipulation because we can’t compare successive durations; there is simply no way to test such a thing:

We cannot carry back the later time interval and place it next to the earlier one. It is possible to make empirical statements about clocks, but such statements would concern something else. Two clocks stand next to each other, and we observe that the beginning as well as the end of their periods coincide. Further observation may show that the ends of their periods always coincide. This experience teaches us that two clocks standing next to each other and having equal periods once will always have equal periods. But this is all. Whether both clocks require more time for later periods cannot be determined. (ibid., p. 116)

This leads to a curiosity in physics (one that we return to in §5.2): the laws of physics themselves suggest that such periods will be equal, but those very laws were the result of experience with clocks “calibrated according to the principle of the equality of their periods” (ibid.). This is a very tight logical circle! To break out of it, says Reichenbach, requires an acceptance of conventional elements in the measure of time. Again, according to Reichenbach, any such definitions are chosen for the way they simplify description. This does not point to their truth, but identifying such conventional elements (given that they are contributions from the mind) is an essential part of separating out subjective from objective structure in our descriptions of physical reality.

Realism versus Conventionalism

This brings crashing home the point that conventionalism has a tendency to align with anti-realism about whatever is subject to the conventionalist stance. To say that something is conventional is to remove it from the objective world. However, what it also reveals, if we accept it, is that scientific theory-choice goes beyond empirical evidence in such cases.5

More recent work within the area of philosophy of cosmology has tended to focus on the epistemological opacity of various features of spacetimes (in general relativistic universes), which implies that we can never be fully sure of the structure of the universe: there are multiple consistent (but unobservable) developments of the observable part. The issue arises from the existence of causal horizons beyond which we can’t have knowledge. This is rather different from the kind of conventionalism mentioned above: there might be a fact of the matter, but it is simply our empirical limitations (i.e. restrictions on what we are able to experience, observe, and measure) that prevent us from finding it out. Unless we consider not being able to probe higher dimensions as an ‘empirical limitation’ (which seems wrongheaded in any case), the cases discussed by Poincaré and Reichenbach transcend empirical matters.

But what picture of the world are we left with then? One with an ‘indeterminate’ geometry and topology? Or a world with a definite geometry and topology, but one that will lie forever beyond our view? Why should we care in any case? There are many other conventions (driving on the left side of the road in Australia, for example) whose ‘real truth’ we do not fret about. The difference is that we can witness other conventions, and imagine changing them and doing otherwise with visible effects. Changing the convention would make a difference to the world. But that might cause us to be even less impressed by these geometrical examples: if the choice has no observational impact whatsoever then is it a difference worth worrying about? Perhaps a better example, used by Poincaré, is that of different coordinates (Cartesian versus polar) or the choice of units in the making of measurements. We often have to switch from pounds to kilograms, or stone, because of the different conventions for weight measurement in different countries. We don’t think that one is ‘more correct,’ and yet these choices do not make a difference to the measured quantity. We also have the element of convenience of a unit relative to purpose, or providing a better fit with other units, and so on. Likewise there are all sorts of ways of measuring temperature (Fahrenheit, Celsius, etc.), but though there might be a disagreement about the numerical value given, there will be no disagreement about the qualitative aspects: the chicken in the oven will cook in the same amount of time regardless of whether we have the setting at 350 degrees Fahrenheit or 176.6 degrees Celsius. We simply fix some set of units to know what we’re talking about and to be able to specify what to do in a recipe. They give us, as beings that interact with the world and each other, a grip on temperature. Nobody imbues the units with any physical significance beyond their convenience. Again, we seem to be back to anti-realism about conventional elements.

Clark Glymour [18] has argued that even when there are conventionalist choices to be made we need not be forced into anti-realism. There are often reasons to say that while we cannot decide the matter, there is nonetheless a fact of the matter - these situations occur more in the kinds of cases where we are empirically constrained from finding certain things out. He also suggests that the deadlock can be broken with solid methodological considerations beyond empirical factors [19]: empirical equivalence does not mean equivalence in all scientifically relevant respects. General relativity (Einstein’s theory of gravity) also causes some problems for the “free to choose” idea where geometry is concerned (topology is a different matter) since there the field equations involve a dynamical interplay between the geometry and the matter distribution such that they are bound together with the latter seemingly uniquely selecting the former - in §4.4 we saw that this isn’t quite so straightforward as is often supposed.

Another escape from anti-realism is to argue that, as with units, we know that there aren’t really two separate ‘theories of the world’ being offered: one and the same physical content is represented by both systems. This suggests treating a theory as a kind of equivalence class of its observationally identical presentations: theories are systems that tell us what we will observe and explain what we have observed. This is associated with the ‘positivist’ school in philosophy of science according to which what is not observable (such as the difference between the conventionalist scenarios presented by Poincaré) is strictly speaking meaningless. If we don’t want to go down this path then we have to say something about the ontological nature of whatever it is that the two theories are redundantly representing, about which positivism remains silent. One realist option, due to Adolf Grünbaum [21], argues that what is shown by the geometric underdetermination cases is simply that space is ‘metrically amorphous’ (lacking in intrinsic metrical structure) so that Poincaré is seen to be right that geometry is conventional, but this needn’t lead us into anti-realism itself.

‘Structuralist’ positions will point out that the structure revealed by the equivalence class (i.e. whatever is common to both descriptions) exhausts what we can know (epistemic structuralism) or, in more extreme versions of structuralism, exhausts what there is (ontic structuralism). Another option that fits well with such cases in which there doesn’t seem to be a fact of the matter about which is correct is ‘constructive empiricism’ (due to Bas van Fraassen). This is realist about observables (on which the two theories match) but agnostic about the unobservables (on which the two theories do not match). However, to be pushed into such extreme (and global: applying to all theories) positions by a cluster of theories might be going too far. One can potentially rescue realism from some conventionalist dilemmas so long as there is a ‘dictionary’ linking the respective theoretical structures, as well as the matching of the structure of observables. This is hard to deny, but there might be some problem cases that slip through the net, in which the theoretical structures are simply too heterogeneous to be mapped onto one another in the required way.

More recent work, especially that occurring in string theory, has raised the spectre of conventional aspects in physics once again. Transformations known as dualities between (what appear to be) physically different string theories lead to the same observable content. One simple yet striking example is ‘T-duality.’ Here a string theory defined on a space with a large radius r is indistinguishable (using strings and the laws they obey) from a string theory defined on a space with a small radius, 1/r - the details needn’t concern us here. One possible response is that the radius is conventional, just as the geometrical structure of discworld is. However, there are other options here, matching those above: we might remain agnostic about the issue, though perhaps still accepting that one of the radii correctly describes the space. Or we might take the theories to be simply different ways of talking about the same physical possibility.6 If we follow this latter route then it seems hard to retain the naive picture of the world as strings living in spacetime.
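A hedged sketch of why the two descriptions agree, using the textbook closed-string spectrum on a circle (the momentum number n, winding number w, and string scale α′ are standard notation, not symbols introduced in this chapter): the relevant contribution to a string’s mass-squared is

\[ m^2 \;\supset\; \left(\frac{n}{r}\right)^{2} + \left(\frac{w\,r}{\alpha'}\right)^{2}, \]

which is unchanged under r → α′/r provided the momentum and winding numbers are swapped (n ↔ w). No experiment performed with strings therefore distinguishes the large radius from the small one; the main text’s r and 1/r correspond to setting α′ = 1.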

5.2 Measuring Time

If you wear a wristwatch, then as its battery approaches expiry you will notice that it ‘slows down.’ If you’re like me, then this slowing down is gauged relative to your laptop, which has its time set (I’m told) by an atomic clock: we assume that this atomic clock is more reliable and so if there is any drift between watch-time and the laptop-time, we can usually safely assume the problem is the former.

Without this kind of comparison (and assumption), how do we judge whether a clock is slowing down or speeding up? How do we tell whether a pair of time intervals are the same or different, which is what is required? After all, we only experience the world as it unfolds. We can’t measure Newton’s (invisible) absolute time, and even using ‘sensible measures’ (clocks of various kinds) we can’t archive the intervals that have passed (tick-tocked) in order to compare them. Unlike spatial distances, where we can place objects side by side, intervals are one-off entities. To return to the watch versus laptop example again, how do we know that it wasn’t a case of the laptop clock speeding up because of some fault? The two scenarios would be observationally identical: the relative drift between the times shown doesn’t tell us which scenario caused it, the slowing down of one or the speeding up of the other.

A Convenient Time

Poincaré identified this as a key problem with time measurement, one that is both practical and philosophical - it is in many ways the temporal analogue of his geometrical conventionalism.7 The problem is that while we can with confidence state when events are before, after, or simultaneous with one another (topological ordering), it is not so easy to state when two intervals of time are identical (the metric properties: the how much): we can’t just ‘sense’ such a thing. He didn’t have the luxury of a laptop set by an atomic clock, but he uses a similar example:

Of two watches we have no right to say that one goes true, the other wrong: we can only say that it is advantageous to conform to the indications of the first. ([37], p. 228)

We might use a pendulum, for example, and assume that its beats are all of equal duration, but we know that there are all sorts of irregularities caused by temperature, air pressure, and so on. Correcting for these (and subtracting them somehow) would still leave the equality approximate, since there are electromagnetic influences and even tiny gravitational perturbations from other astronomical objects beyond the Earth. The pendulum clock is so prone to disturbance that the Earth’s rotation itself was used as a watch instead, so that each full rotation is a tick assumed to have the same duration.

But this new watch has its own problems. There is a slowing down of the Earth’s rotation due to the tides (and other influences), which results in a measured speeding up of the Moon’s (and other bodies’) motion relative to the Earth’s ‘ticks’ (when combined with Newton’s laws of motion). The Earth’s slowing down, however, is measured from the Moon’s apparent speeding up! The observed acceleration of the Moon would be in conflict with Newton’s law and conservation of energy if the Earth’s rotation were taken to be uniform, so an appropriate correction is made, attributing a deceleration to the Earth.

But, as Poincaré points out, this puts the weight on Newton’s laws, which are also approximate, as empirical facts. Moreover, with this definition of time based on Newton’s laws, we could pick any periodic phenomenon as our watch, and so long as we made the appropriate corrections, so that any observed feature remains consistent with Newton’s laws and the conservation of energy, we have much the same principles at work. Some such watch might, however, result in very complex corrections and a messy statement of Newton’s laws. This is the key for Poincaré; as with his discworld, he argues we tend to adopt the more convenient standard of time measurement, rather than the ‘most true’:

Time should be so defined that the equations of mechanics may be as simple as possible. ([37], pp. 227-228)

The similarity to the discworld case should now be clear: we have options for either sticking to one set of laws (or, in Poincaré’s terms, one “enunciation” of the laws) or choosing some other more complex statement. We can keep the Earth’s rotation as our clock, treating it as perfectly uniform modulo corrections for tidal friction and other influences knocking it off its ‘true’ course, so that it makes a fine t for Newton’s equally fine equations (the uncorrected rotation would lead to accelerated motions in systems referred to it); or we can find a more suitable (more uniform, correction-free) periodic phenomenon.

There is something strangely circular about all this: we can choose to take any one of the planetary objects as a clock and, imposing Newton’s laws (and solid principles of physics, such as the conservation of energy) while making observations, we make whatever corrections to our clock are needed to get the whole system consistent. A choice of object is made purely to achieve the greatest simplicity of the form in which the laws are expressed. The time t, then, that features in Newton’s equations is defined by those very laws (together with observations that are supposed to be used to confirm the laws)! This has much in common with pulling yourself up by your own bootstraps.8
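To make the circularity vivid, here is a minimal toy sketch (my own construction, not Poincaré’s or the text’s): a planet obeying a perfectly simple law looks ‘accelerated’ when referred to a nonuniform clock, and ‘correcting’ the clock just amounts to stipulating that the law holds.

```python
# Toy illustration (my own construction): the time variable t is chosen so that
# the laws take their simplest form, rather than read off some absolute clock.
import numpy as np

omega = 2 * np.pi                      # 'true' angular rate of a toy planet
T = np.linspace(0.0, 10.0, 2000)       # dynamical time (not directly observable)
theta = omega * T                      # the simple law: uniform angular motion

# Our raw clock (the Earth's rotation, say) runs nonuniformly - it slowly loses time.
raw_clock = T - 0.002 * T**2

# Referred to the raw clock, the 'law' looks messy: the angular rate appears to drift.
apparent_rate = np.gradient(theta, raw_clock)
print("rate against raw clock varies between",
      round(apparent_rate.min(), 3), "and", round(apparent_rate.max(), 3))

# The ephemeris-style move: define corrected time so that the law holds exactly,
# i.e. stipulate that equal angles swept correspond to equal times.
corrected_time = theta / omega
print("rate against corrected clock:",
      round(float(np.gradient(theta, corrected_time).mean()), 3), "(uniform by construction)")
```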

New Standards

A more robust watch is the atomic clock, which is far less susceptible to external perturbations - theoretical calculations show that its various beats are so uniform (identical in duration) that it will lose only a second in tens of millions of years.9 But even that second is still an irregularity, and the same procedure must be adopted in our scientific practice: assume some laws of physics (not necessarily Newton’s), add our observations, and then figure out how the system realizing the t in the equations will need to be adjusted to make the observations and the theory consistent.

There are two opposing interpretations of this procedure (i.e. in terms of what clocks are measuring and how t maps to the world): realism and anti-realism (conventionalism) about time. We have already seen the latter: Poincaré’s claim that there is just no fact of the matter about what the ‘true time’ is, only choices that result in simpler and more complex formulations of the laws. But much of the terminology of ‘corrections’ to the time variable at least suggests an underlying true time that our advancing, ever more precise choices of clock are approximating: better clocks in this sense are those that map more faithfully onto Newton’s true time.10 John Lucas explicitly adopts this viewpoint:

The fact that we have a rational theory of clocks vindicates Newton’s doctrine of absolute time. If we really regarded time simply as the measure of process, we should have no warrant for regarding some processes as regular and others as irregular. ([30], p. 91)

However, greater precision does not necessarily mean greater accuracy (in terms of mapping onto some quantity in the world: absolute duration). Sklar considers the relation as a causal one, rather than a mapping (though only to dispel such a notion):

Of course, deviation of any clock from its ideal rate is something to be explained by causal interaction in the material world. But there is no “causal” explanation as to why clocks in general record time intervals more or less accurately. What we mean by time intervals is just this numerical abstraction and idealization from the uniformity more or less of relative rates of clocks of various kinds of construction. It is, of course, still an important observation of Newton’s that only when we date events by the ideal time metric will our dynamical laws of nature take on their familiar simple form. But that does not seem to call for absolute time as a “cause” either. ([47], p. 74)

Precision can also refer to an ability to control the various errors and perturbations, at least with theoretical knowledge on how to remove them from calculations. Moreover, we have seen that the conventionalist is perfectly capable of biting the bullet and accepting that we really do not have any such (absolute) warrant for distinguishing regular from irregular.

There have been several important advances in time measurement since the turn of the twentieth century. Firstly, there was ‘ephemeris time,’ which essentially followed the idea that since the solar system could be viewed as a kind of clockwork machine, it should be used to define time - ‘ephemeris’ refers to the catalogue (an ‘almanac’) of positions of some astronomical object over time. The ‘hands’ of this clock are the positions of the Moon and planets (relative to the ‘fixed stars’), as determined by Newton’s laws. Ephemeris time is then just the rate at which these ‘tick.’ The unit in this case was the tropical year (fixed as of 1900, to avoid inevitable fluctuations): the time it took the Earth that year to complete its cycle of seasons, very nearly one full orbit around the Sun.

The next step was the creation of atomic time. A clock, of course, is simply something that oscillates (preferably in a uniform fashion, modulo the problems raised above), along with a register of the number of cycles that have occurred. The specific oscillator provides the ‘frequency standard’ (e.g. a pendulum, a pulse, the Earth’s rotation, the solar orbit, etc.). Atomic time is based on the recognition that atoms vibrate at specific frequencies, and so it involves an atomic frequency standard. This nicely fits the natural criterion of a standard, that it be ‘universal’ (freely recreatable wherever and whenever one wishes): atoms of the same kind are identical, unlike pendulums. The atomic second was, however, defined in terms of the second of ephemeris time: a second of ephemeris time was measured to correspond to 9,192,631,770 (± 20) cycles of a particular transition of the caesium atom.

The study of standards is a fascinating one, and hasn’t received nearly enough attention from philosophers of physics.11 However, I raise it here to simply point out that the same philosophical issues are raised regardless of the standard we use. There is the same question of what is being measured: a ‘real time’ or simply physical processes that are linked to the clocks via correlations. However, a future advance (still a ‘work in progress’) will attempt to base a set of standards (for time, space, and mass) purely on the fundamental (universal) constants of nature: Planck’s constant ħ, the constant of gravitation G, and the speed of light, c. I leave it as an interesting exercise for you to figure out what difference (if any) this change in standards would make.

5.3 Determinism and Indeterminism in Physics

The most famous characterization of determinism (indeed, amounting to the very definition of the claim for most, and quoted whenever the word ‘determinism’ is mentioned) is due to Pierre Simon de Laplace:

We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world [so that] to it nothing would be uncertain, the future as well as the past would be present to its eyes. (In [33], pp. 281-282)

Hence, the present state of the world is understood to have been ‘brought into existence’ by a unique prior state together with the laws of nature (Laplace’s ‘forces’). We have in here, then, laws of nature and the relation of cause and effect. We also have the notion of predictability (in principle), as based on these other elements. It is not surprising that this vision of a deterministic universe was couched in the framework of the clockwork-conceived Newtonian solar system, with the planets, Sun, and Moon linked by gravitation. For example, given the initial positions and velocities of all particles, their masses, and all the forces F acting on them, Newton’s second law F = ma would let us compute their motions just as Laplace’s ‘intelligence’ (or ‘Demon’) could. Because of the interlocking nature of the forces between all of the objects in the system, one has the potential to know its state at any instant one could care to choose.

In more modern terms, then, determinism simply means that for a given initial condition, given the laws (and any boundary conditions), there is one and only one possible outcome (relative to those laws). We can represent this diagrammatically as follows:
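A minimal schematic of the sort of diagram intended here (my rendering, with x(0) the initial state and x(t) the state at any other time t):

\[ x(0) \;\xrightarrow{\ \text{laws}\ }\; x(t) \qquad \text{(exactly one } x(t) \text{ for each } t\text{).} \]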

Note that here x(t) might be either to the past or future of x(0) (the initial state): hence, the idea is that the laws and an initial state will determine a unique history. This can be further transformed into a statement about replicating initial conditions, since it follows that whenever some state is reproduced, it will duplicate the behavior of the original: like causes will have like effects, in other words (like replaying a videotape). The phenomenon of chaos often fools people into thinking that it implies a failure of determinism, since we lose the ability to predict future states (over certain timescales). However, determinism (in the above sense) is preserved; what is lost is the ability to replicate initial conditions exactly, and the laws are such that even small errors in this replication will be amplified into large divergences in later states. Indeed, in general, if we don’t have a perfect grip on the initial conditions, we will pick up some uncertainty in how the system will evolve.
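A minimal sketch (my own example, not from the text) of this point, using the logistic map as a stand-in for a chaotic law: the evolution is perfectly deterministic - rerunning the same initial condition reproduces the history exactly - yet a tiny error in the initial condition is amplified enormously.

```python
# Deterministic chaos in the logistic map x -> r*x*(1 - x): same input, same history;
# slightly different input, wildly different later states.
def trajectory(x0, steps=50, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))   # the deterministic 'law'
    return xs

run1 = trajectory(0.3)
run2 = trajectory(0.3)           # exact replication of the initial condition
run3 = trajectory(0.3 + 1e-9)    # a tiny error in the initial condition

print(run1 == run2)                        # True: like causes, like effects
print(abs(run1[-1] - run3[-1]))            # order-one difference after 50 steps
```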

Indeterminism, by contrast, means that for some initial condition, given the laws, there is more than one possible outcome (though only an individual outcome might be observed).12 In this case, we can see that the ‘like causes will have like effects’ principle is violated: we can duplicate the law and the initial conditions, and yet get different behavior. This can be represented by a branching structure as follows:
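A minimal schematic of the branching structure referenced above (again my rendering, with x₁(t), x₂(t), ... the alternative outcomes permitted by the laws):

\[ x(0) \;\xrightarrow{\ \text{laws}\ }\; \bigl\{\, x_1(t),\; x_2(t),\; x_3(t),\; \dots \,\bigr\}. \]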

We saw in the previous chapter (in the context of the hole argument) how the laws of general relativity have some freedom so that the same initial state could lead to what looked like distinct future states, but those states formed an equivalence class under the theory’s central symmetry, which would collapse the branching structure. However, this should lead us to suspect that deciding whether a theory is deterministic or not isn’t quite as simple as one might believe: in that case it was a matter of interpretation whether one collapsed the branching possibilities (that is, whether one viewed them as representing physically distinct possibilities).

Uncertainties and Probabilities

Of course, this branching situation might also reflect uncertainty in terms of our knowledge of the outcome rather than any uncertainty on the part of Nature. A toss of a die is an obvious case in which the uncertainty is in our heads, and if only we knew “all the forces acting in nature at a given instant” on the die, we would be able to compute its outcome. Here we have a branching into six possible outcomes, and though we cannot predict with certainty which of the six will be realized following a throw, we can in this case have some say over the distribution of events in a large sequence of throws. Likewise, in the above case in which we don’t have perfect knowledge of the initial conditions, we will be faced with uncertainty of this epistemic type: there will be a statistical spread of possible outcomes. This links the discussion to probabilities and their interpretation.

There are three broad categories of interpretation regarding probabilities:

· Objectivist. Probabilities pick out ‘real’ features in the world, independently of the existence of humans. The relative frequency interpretation, according to which probabilities are ratios of outcomes in repeated trials, views probabilities objectively in terms of a correspondence between the probability and the number of times (or percentage) an outcome is found in the repeated run (strictly speaking, an infinite run) - see the sketch following this list. It is possible to think of objective probabilities in terms of ‘propensities,’ which are dispositions13 to produce the kinds of outcomes that would ground the relative frequencies just mentioned.

· Subjectivist. Probabilities refer to the degree of belief (a value between 0 and 1) that an agent has. It is, of course, dependent on the agent’s beliefs and is sometimes called the ‘personalist’ approach since each agent determines their own interpretation of probabilities. This is tamed (made more ‘objective’) by adding various kinds of constraints so that agents are forced to at least be consistent in their assignments of probabilities. Note that probabilities for single events can be dealt with on this approach (e.g. where we do not have the luxury of an ensemble of copies of the event).

· Evidentialist. Probabilities are objective facts about the levels of support between empirical claims. This is an inductive logic approach analyzing relations between statements: it locates probability neither in the world nor in the head, but in a kind of abstract formal space.
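The sketch promised under the objectivist entry - a minimal illustration (my own) of the relative frequency idea, using simulated die throws: the proportion of sixes approaches 1/6 as the run gets longer, even though each individual throw is (for us) unpredictable.

```python
# Relative frequency of a six in ever longer runs of simulated die throws.
import random

random.seed(0)                       # the 'chanciness' here is only pseudo-random
for n in (10, 1000, 100000):
    throws = [random.randint(1, 6) for _ in range(n)]
    print(n, "throws:", throws.count(6) / n)   # tends toward 1/6, roughly 0.167
```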

These categories split apart into sub-categories; however, we need not concern ourselves with the details here. What matters is that some treat probability as a feature of the world (‘ontic’: the world itself is ‘chancy’) while others treat it as a mental contribution (‘epistemic’: the world may or may not be chancy, but our knowledge of it is uncertain). To return to the case of the toss of a die above: it might be that there is a fact of the matter, determined by physical law, about which side of the die will be face up, yet the subjectivist can still assign a probability based on incomplete knowledge of the situation.

In the case of quantum mechanics, the orthodox interpretation is that there is ‘ontic uncertainty’ (objective probability) since the laws are themselves only capable of generating probabilities for outcomes. As we see in Chapter 7, quantum mechanics caused many to give up on the notion of determinism, and the related notion of cause and effect. This reflects the ‘standard viewpoint,’ one replicated across countless popular TV programmes on physics (but also more serious academic literature), that classical physics is deterministic and quantum mechanics came along and destroyed this neat deterministic picture, underwritten by the uncertainty principle (sometimes called ‘the principle of indeterminacy’). However, we need to be far more careful in our assessments of whether a theory is or is not deterministic - and after all, we are concerned with physical theories here (and their interpretation), and whether we think that our world is actually deterministic will depend on what our best theories say, and how faithfully we take them to map to our world.

Defining Determinism

We also need to be careful in how determinism is defined: what exactly are the necessary components of this thesis? We saw that it is usually bundled together with causality and prediction, but this has recently come under fire. When we pull apart these elements, we are left with a formulation that forces us to revise the standard view. Causality faces the troubles identified by David Hume long ago, and solidified by Bertrand Russell: causation does not appear in our theories; rather, all we have are functional relationships of various kinds. Causation faces too many philosophical problems of its own to make it reasonable to base a definition of determinism on it. Likewise prediction, which also fails to secure an ontological notion of determinism since to predict is to perform a mental act, and this is highly dependent on what skills we attribute to whatever is performing such acts. True, if we can make accurate predictions using some theory then it perhaps offers up some evidence toward that theory’s status in terms of determinism, but strictly speaking it belongs in the realm of epistemology. Moreover, the existence of chaos, which involves a lack of predictability yet is still, we want to say, deterministic, should also lead us to wish to tease these two concepts apart. Again, since theories are our guideposts to reality, we ought to couch our definition of determinism in terms of theories and their interpretations.

The preferred formulation of modern philosophers of physics can be discerned from some of what we already said above. Let’s assume that a theory is defined by the states and laws governing its systems of interest. The trick is then to consider pairs of systems that are ‘prepared’ in the same way (i.e. in the same state) at some instant of time. Given such preparation, determinism is the claim that the systems will share the same state at all future times, so long as they are subject to the same laws:
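A minimal formal rendering of this claim (my notation, with s_A(t) and s_B(t) the states of the two similarly prepared systems at time t, and t₀ the time of preparation):

\[ s_A(t_0) = s_B(t_0) \ \text{(and same laws)} \;\Longrightarrow\; s_A(t) = s_B(t) \ \text{ for all } t \ge t_0. \]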

This makes no mention of predictability and causality, though they can easily be incorporated. What’s more, this account can easily be extended to the consideration of entire histories (i.e. universes or worlds): determinism just means that worlds that agree up to some time agree at all times. The hole argument can be considered in just this way by thinking about the state of an entire (instantaneous) slice through the spacetime (the universe), together with all of spacetime before that slice, and asking whether the future behavior of the fields defined on the slice (from which the state is constructed) is uniquely determined. Couched in our preferred way of thinking about determinism, we then ask whether it is possible to have a second universe identical to the first up to this same slice but differing thereafter. Of course, we found that it was indeed possible to have a second universe provided we treated the spacetime’s points as real entities independently of the fields defined with respect to them: the symmetry of the theory means that we can deform the fields to the future of the slice while still producing a legal solution of the equations (i.e. while remaining consistent with the laws). But this same freedom means that the very slicing we used, to set up the test of determinism, is itself unphysical. (If we are thinking in terms of worlds - or rather models of worlds - instead, then the identities between initial and final states above would instead be isomorphisms between [portions of] the worlds.)

Denying Determinism

Several notable violations of determinism occur as a result of ‘interference’ or a breakdown of the theory (so that a solution cannot be extended to later times). For example, a major problem that threatens determinism in a Newtonian universe (the natural environment for Laplace’s demon’s party-trick) is the absence of any speed limit. Causal influences can propagate at whatever speed you like. Interactions can be infinitely rapid: indeed gravitation is a perfect example of such an influence, though not one mediated by a propagating particle of course. This implies that particles are able to shoot off to ‘spatial infinity’ in a finite interval of time (so that they are simply absent from the space thereafter), or to swoop in from infinity (showing up without having been anywhere in the space at the start of the interval) - the inward particles have been dubbed ‘space invaders’! This allows for future states to be meddled with in a way not determined by an initial state. Perhaps surprisingly, special relativity makes the world safer for determinism, since the speed limit (involving a transformation of the causal structure of spacetime) prohibits such space invaders and their reversals. Another option for outlawing space invaders and defectors is to enforce (global) conservation of energy: after all, particles coming in and out of the world will be bringing (creating) and taking (destroying) energy as they do so - this also raises the point that the number of particles in the world is not invariant, which might be a cause for concern.
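A toy illustration (my own simplified example; the space-invader constructions in the literature are more elaborate) of how the lack of a speed limit opens the door: under the position-dependent force F(x) = 2m x(1 + x²), the trajectory x(t) = tan t solves Newton’s second law, since

\[ \dot{x} = 1 + x^{2}, \qquad m\ddot{x} = 2m\,x\,(1 + x^{2}) = F(x), \]

and the particle reaches spatial infinity as t → π/2. Run backwards, this is a particle that appears from spatial infinity within a finite time - a ‘space invader.’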

Another failure of determinism in Newtonian mechanics concerns the simple breakdown of the applicability of the theory as a result of a (collision) singularity that occurs because of the form of the inverse-square law. This of course contains a 1/r² term, where r is the distance between a pair of particles. If we consider the mutual gravitational attraction of a pair of particles, a collision will obviously mean that the distance is zero, at which point the force is undefined (it blows up). The laws of the theory simply cannot determine what will occur after such a singularity has occurred.

An example of Newtonian indeterminism that puts us in mind of Zeno’s paradoxes was devised by Jon Pérez Laraudogoitia [35]. Known as a ‘supertask’ (performing infinitely many steps in a finite time), it goes as follows. Firstly, we need infinitely many equal point masses, arranged along a meter-long line and spaced according to an infinite geometric series (so that particle p_n sits at the 1/2ⁿ meter mark, say). The first particle, at the start, is taken to be moving toward the second particle at one meter per second, and the disturbance is then passed on to the remaining particles, one after another, always propagating at one meter per second - obviously, since this is laid out in a 1-meter line, the whole thing will be over within a second. During each (elastic) collision a particle p_n will transfer its momentum to p_{n+1}, thereupon coming to a state of complete rest where p_{n+1} was previously at rest - one can envisage a version of the toy known as ‘Newton’s cradle’ with infinitely many balls. After a second all the collisions will have completed, and the entire system will be at rest. But, and this is where the indeterminism springs from, if this is a possible Newtonian process (as it appears to be), then so is its time reverse (since Newtonian mechanics does not have a preferred direction of time). If we play the tape backwards in this case we have what appears to be a spontaneous self-excitation of the particles’ motion at t > 0, which of course conflicts with determinism. What is curious about this example is that the momentum (and with it the kinetic energy ½mv²) that we imparted at the start has been gobbled up by the infinite sequence of collisions. This again points to non-conservation of momentum (at least at a global level).14
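For concreteness, on the illustrative spacing just given (p_n at the 1/2ⁿ meter mark, each elastic collision a simple equal-mass momentum swap), the nth collision occurs at

\[ t_n \;=\; \sum_{k=2}^{n+1} \frac{1}{2^{k}} \;=\; \frac{1}{2} - \frac{1}{2^{\,n+1}} \;<\; \frac{1}{2}\ \mathrm{s}, \]

so all infinitely many collisions are indeed over well within the second, after which every particle sits at rest where its successor used to be.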

General relativity is far more complex to deal with, and we see clearly the context-dependence of determinism in the fact that the particular spacetime structure is based on the particular solution to the field equations of the theory. There is simply too much freedom in creating universes - though this provides a theme-park experience for philosophers of physics. Virtually all of the troubles stem from the fact that general relativity’s equations are ‘local’ in that they link curvature and energy at a point. While locally things look quite simple (approximating Minkowski spacetime), globally things can become unhinged in a variety of ways: the global structure is something to be fixed by hand rather than by the theory. Some of these choices (i.e. for some choices of energy distribution) are better suited to determinism than others; some don’t even allow for the setting up of the initial value problem in which determinism is couched (i.e. data on an instantaneous slice that is ‘pushed along’ by the laws, thus generating the spacetime: a solution to Einstein’s equations). Spaces with ‘closed timelike curves’ (theoretically, those permitting time travel) are of this kind. In general relativity, given the dynamical nature of spacetime (coupled to mass and energy), an infinitely dense mass creates infinite curvature, which effectively creates an ‘edge’ to spacetime: a singularity.15 This is a generic feature of the worlds of general relativity. The existence of singularities in general relativity leads to the problem that one will have situations in which the theory cannot predict what will occur at such singularities (as with the Newtonian singularities above). One way of viewing this breakdown of determinism is in terms of a limit of the theory’s applicability, pointing to some successor theory able to deal with the singular behavior, or able to smear it out somehow.

There are more arbitrary ways in which determinism can be made to break down in general relativity, for example by simply ‘deleting’ that part to the future of some slice through spacetime (and that slice itself) in which case we have an abrupt end to the spacetime. This, strange though it may seem, is a physically possible world according to general relativity. While not very satisfying, we can clearly see that handling the issue of determinism in general relativity is fraught with difficulties and exotic potential counterexamples.16

As we will see in Chapter 7, quantum mechanics is not necessarily indeterministic: so long as a kind of nonlocality is preserved in the quantum theory it is perfectly possible to have a deterministic version (known as de Broglie-Bohm theory). The infamous many-worlds interpretation is also deterministic in that the total state (represented by a wavefunction for the entire universe - or, rather, ‘multiverse’) at any time suffices to determine it for all times - what is problematic, however, is making sense of outcomes and their probabilities for realization in a world in which ‘everything happens.’

5.4 Further Readings

As with the previous chapter, many of the discussions of the topics in the present chapter lie more within metaphysics and other areas (such as the philosophy of time, chance, and probability) than philosophy of physics.

Fun

· Craig Callender and Ralph Edney (2001) Introducing Time: A Graphic Guide. Icon Books.
- An excellent overview, in brief cartoons, of many of the major philosophical topics in philosophy of time (dealing mostly with physics-based issues).

Serious

· Barry Dainton (2010) Time and Space (2nd edn). Acumen Publishing.
- A very clear and comprehensive treatment of issues in the philosophy of space and time, including both philosophy of physics and more metaphysical issues.

· Lawrence Sklar (1974) Space, Time, and Spacetime. University of California Press.
- Slightly older, but still comprehensive introduction to issues in the philosophy of spacetime physics. It covers the epistemology of geometry in great depth, and also covers the relationship between causal ordering and time (not covered in this chapter).

Connoisseurs

· John Earman (1986) A Primer on Determinism. Dordrecht: Reidel.
- The classic text that did much to modify the simplistic discussions of determinism in classical and quantum theories. Its ‘connoisseur’ level placement is not an indicator of its reading difficulty: it is a sparkling read, as with his book World Enough and Space-Time.

Notes

1 Actually, Hermann von Helmholtz has the distinction of the creation of a discworld with two-dimensional beings confined to it (with no knowledge of higher dimensions): “On the Origin and Meaning of Geometrical Axioms” (in P. Pesic (ed.) Beyond Geometry: Classic Papers from Riemann to Einstein, Dover Publications, 2007: pp. 53-68). In any case, Poincaré himself uses beings confined to the interior of a sphere (see next note), but it has become ‘conventional’ to speak of Poincaré’s disc!

2 John Norton expresses this very clearly by noting that the observational consequences O follow from the conjunction of a geometry G and some physical theory P about the bodies traversing the geometry. That is: G + P = O. Of course, we can preserve O by tweaking either the geometry or the physical theory so long as we perform a compensatory adjustment on the other - this example can be found in Norton’s exceptionally clear guide “Philosophy of Space and Time” (in M. Salmon (ed.), Introduction to the Philosophy of Science, Prentice-Hall, 1992: pp. 179-232).

3 For Poincaré, all we have to go on are the observed motions of objects. From these observations we make inferences to a spatial reality underlying them. But Poincaré’s response was that it is the group of possible transformations of objects that matters: this is invariant across the cases, since the objects are observed to move in the same way in the two scenarios (the whole point of the example being that the same body of evidence is compatible with two conflicting visions of an underlying spatial reality). This is closely related to Felix Klein’s Erlangen Programme in which spatial geometry is characterized by its group of motions. The motions are initially derived from our visual and tactual-motor experience of the world, in bringing about displacements and alterations of objects. The convention of Euclidean space is selected, according to Poincaré, precisely because its group of transformations is the closest match to the coarse (physical) group of displacements we experience in our encounters with the world. It is fascinating to see how what we consider to be ‘pure’ subjects of mathematics like group theory originate in such observations - for more on these origins, see P. Pesic’s collection of the original papers: Beyond Geometry: Classic Papers from Riemann to Einstein, Dover Publications, 2007.

4 It is a fun exercise to try and come up with a counterexample that would lead one to definitively tell whether one lived on the surface of a sphere (such as the Earth: though only its surface, with no access to higher dimensions) or not. If you manage this feat, drop a line to the ‘Flat Earth Society’: http://www.tfes.org.

5 There are a number of famous cases in which what were thought to be conventional choices were no such thing. For example, David Malament demonstrated that simultaneity in special relativity (a standard example wheeled out by conventionalists) can be shown to be non-conventional (and can be uniquely defined) given certain undeniable assumptions - see his “Causal Theories of Time and the Conventionality of Simultaneity” (Noûs 11, 1977: 293-300).

6 I prefer to distinguish such dualities from the standard conventionalist cases. For the reasons why, and a general overview of dualities, see, e.g. my “A Philosopher Looks at String Dualities” (Studies in History and Philosophy of Modern Physics 42(1): 54-67).

7 Newton was no slouch, and identified the basis of the problem in his Principia:

In astronomy, absolute time is distinguished from relative time by the equation of common time. For natural days, which are commonly considered equal for the purpose of measuring time, are actually unequal. Astronomers correct this inequality in order to measure celestial motions on the basis of a truer time. It is possible that there is no uniform motion by which time may have an exact measure. All motions can be accelerated and retarded, but the flow of absolute time cannot be changed. The duration or perseverance of the existence of things is the same, whether their motions are rapid or slow or null; accordingly, duration is rightly distinguished from its sensible measures and is gathered from them by means of an astronomical equation. Moreover, the need for using this equation in determining when phenomena occur is proved by experience with a pendulum clock and also by ellipses of the satellites of Jupiter. ([34], p. 410)

Poincaré simply disagrees that duration is distinct from the various relative measures of duration: the measures are not ‘measures of’ some real underlying quantity - see Harvey Brown’s Physical Relativity (Oxford University Press, 2005, §2.2.3) for more on this, including, in later chapters, the story followed into general relativity.

8 Hans Reichenbach expresses this point (that the metric of time, or duration, is a conventional element) nicely, as follows: “It is impossible in an absolute sense to compare two consecutive units of a clock; if we nonetheless wish to call them equal, this assertion has the nature of a definition” (“Methods of Physical Knowledge” [1929]; reprinted in H. Reichenbach et al. (eds.) Hans Reichenbach: Selected Writings 1909-1953, Volume Two, Springer, 1978: p. 184). To establish sameness of duration requires what Reichenbach (and the logical empiricists) call a “coordinative definition”: it is defined by definitional linkage to some observable phenomenon (yet not by experience itself); but as Reichenbach goes on to argue (similarly to Poincaré), any such coordinations (e.g. with the Earth’s rotation, with atoms, with light rays, and so on) involve arbitrary elements (such as a notion of simultaneity, which is a spatial notion that suffers similarly from its own ‘problem of congruence’).

9 An excellent semi-popular treatment of atomic clocks can be found in Tony Jones’ Splitting the Second: The Story of Atomic Time (IOP Publishing, 2000). A more advanced, though still very readable treatment of modern time measurement (including discussions of some of the issues raised here) is Claude Audoin and Bernard Guinot’s The Measurement of Time: Time, Frequency and the Atomic Clock (Cambridge University Press, 2001).

10 Note that Newton’s laws of motion do not by themselves imply absolute space (since the laws are the same in all uniformly moving frames): we are unable to determine whether events separated in time are spatially coincident - this is, of course, just the content of Galilean relativity. But temporal relationships between spatially separated events have a different status: here we can say whether two spatially distant events are simultaneous or not (according to Newton’s theory). According to Sklar [46], Newton was aware of this difference, which is why he utilizes practical (physical) arguments from astronomy to argue for the reality of absolute time, but thought experiments (the bucket and the globes arguments) to argue for absolute space and motion.

11 However, Eran Tal has made a good start in exposing many interesting features of time standards. See e.g. his “Making Time: A Study in the Epistemology of Measurement” (The British Journal for the Philosophy of Science, forthcoming) for a philosophical investigation of time standardization.

12 This does not mean that there is only one branch realized. In ‘branching time’ models the world literally (i.e. topologically) takes multiple courses, so that it is the tree that is realized, rather than a single branch - for a discussion of branching in relation to indeterminism, see (though note that it is rather logic-heavy): T. Placek, N. Belnap, and K. Kishida’s “On Topological Issues of Indeterminism” (Erkenntnis 79, 2014: 403-436).

13 Of course, it does not help us much to define propensities in terms of dispositions, since they are just as slippery! The basic idea is best explained by simply thinking of propensities as brute chancy features in the world. Karl Popper famously based such a view on radioactive decay (half-life), which seemed to be an irreducibly chancy business - see his “The Propensity Interpretation of Probability” (The British Journal for the Philosophy of Science 10(37), 1959: 25-42).

14 Pérez Laraudogoitia offers an excellent summary of a range of supertasks, including more that are relevant to the issue of determinism, in his online encyclopaedia article: http://plato.stanford.edu/entries/spacetime-supertasks/.

15 The definition of a singularity in general relativity is much broader than this, and the association with infinite curvature is rather outmoded (though it does capture much of the physical interpretation). The more mathematical treatment involves the idea that a singularity is a kind of ‘boundary’ on which the curves in the spacetime (that might represent motions of observers) end. (For a technical account of singularities relevant to the concerns of this section, see Robert Geroch’s “What is a Singularity in General Relativity?” Annals of Physics 48, 1968: 526-540.)

16 There is far more to the story than this. A notable addition is provided by the notion of a ‘naked singularity,’ which is a singularity not clothed in the usual event horizon blocking any undetermined surprises from view. But without such a horizon to mask the goings-on it is possible for something like the space invaders mentioned earlier to appear! It is a serious piece of physics to try and find ways to forbid such naked singularities (cosmic censorship hypotheses) from finding a home in our world. I refer the interested reader to the brilliant, though technically demanding, Bangs, Crunches, Whimpers, and Shrieks: Singularities and Acausalities in Relativistic Spacetimes, by John Earman (Oxford University Press, 1995). This includes a discussion of supertasks in a range of generally relativistic spacetimes (especially so-called ‘Malament-Hogarth’ spacetimes containing both infinite and finite length worldlines) that appear to be exploitable to test what appear to be unprovable mathematical conjectures that would require infinite time to complete (e.g. Goldbach’s conjecture that every even number is the sum of two primes). The observer with the infinite worldline could simply crank through all even numbers testing whether the conjecture holds while the observer with the finite length worldline sits in waiting for the result to be relayed to them. Whether or not these are just quirky mathematical games or point to something deep about computability in the world remains a matter of debate.