The Shape of Inner Space: String Theory and the Geometry of the Universe's Hidden Dimensions - Shing-Tung Yau, Steve Nadis (2010)

Chapter 3. A NEW KIND OF HAMMER

Despite the rich history of geometry and its spectacular accomplishments to date, we should keep in mind that this is an evolving story rather than a finished one, with the subject constantly reinventing itself. One of the more recent transformations, which has contributed to string theory, is geometric analysis, an approach that has swept over the field only in the last few decades. The goal of this approach, broadly stated, is to exploit the powerful methods of analysis, an advanced form of differential calculus, to understand geometric phenomena and, conversely, to use geometric intuition to understand analysis. While this won’t be the last transformation in geometry—other revolutions are being plotted as we speak—geometric analysis has already racked up a number of impressive successes.

My personal involvement in this area began in 1969, during my first semester of graduate studies at Berkeley. I needed a book to read during Christmas break. Rather than selecting Portnoy’s Complaint, The Godfather, The Love Machine, or The Andromeda Strain—four top-selling books of that year—I opted for a less popular title, Morse Theory, by the American mathematician John Milnor. I was especially intrigued by Milnor’s section on topology and curvature, which explored the notion that local curvature has a great influence on geometry and topology. This is a theme I’ve pursued ever since, because the local curvature of a surface is determined by taking the derivatives of that surface, which is another way of saying it is based on analysis. Studying how that curvature influences geometry, therefore, goes to the heart of geometric analysis.

Having no office, I practically lived in Berkeley’s math library in those days. Rumor has it that the first thing I did upon arriving in the United States was visit that library, rather than, say, explore San Francisco as others might have done. While I can’t remember exactly what I did, forty years later, I have no reason to doubt the veracity of that rumor. I wandered around the library, as was my habit, reading every journal I could get my hands on. In the course of rummaging through the reference section during winter break, I came across a 1968 article by Milnor, whose book I was still reading. That article, in turn, mentioned a theorem by Alexandre Preissman that caught my interest. As I had little else to do at the time (with most people away for the holidays), I tried to see if I could prove something related to Preissman’s theorem.

Preissman looked at two nontrivial loops, A and B, on a given surface. A loop is simply a curve that starts at a particular point on a surface and winds around that surface in some fashion until coming back to the same starting point. Nontrivial means the loops cannot be shrunk down to a point while resting on that surface. Some obstruction prevents that, just as a loop through a donut hole cannot be shrunk indefinitely without slicing clear through the donut (in which case the loop would no longer be on the surface, and the donut, topologically speaking, would no longer be a donut). If one were to go around loop A and, from there, immediately go around loop B, the combined path would trace out a new loop called B × A. Conversely, one could just as well go around loop B first and then loop A, thus tracing out a loop called A × B. Preissman proved that in a space whose curvature is everywhere negative—one that slopes inward like the inside of a saddle—the loops B × A and A × B can never be smoothly deformed into each other simply by bending, stretching, or shrinking, except in one special case: If a multiple of A (a loop made by going around A an integer number of times) can be smoothly deformed to a multiple of B, then B × A can be smoothly deformed to A × B and vice versa. In this single, exceptional case, loops A and B are said to be commuting, just as the operations of addition and multiplication are commutative (2 + 3 = 3 + 2, and 2 × 3 = 3 × 2), whereas subtraction and division are noncommutative (2 − 3 ≠ 3 − 2, and 2/3 ≠ 3/2).

My theorem was somewhat more general than Preissman’s. It applied to a space whose curvature is not positive (that is, it can either be negative or, in some places, zero). To prove the more general case, I had to make use of some mathematics that had not previously been linked to topology or differential geometry: group theory. A group is a set of elements to which specific rules apply: There is an identity element (e.g., the number 1) and an inverse element (e.g., 1/x for every x). A group is closed, meaning that when two elements are combined through a specific operation (such as addition or multiplication), the result will also be a member of the group. Furthermore, the operations must obey the associative law—that is, a × (b × c) = (a × b) × c.
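As a concrete illustration (a standard one, not spelled out in the text), the integers under ordinary addition satisfy all of these rules, with 0 serving as the identity and −a as the inverse of each a:

$$
\begin{aligned}
&\text{Closure:} && a + b \in \mathbb{Z}\\
&\text{Identity:} && a + 0 = a\\
&\text{Inverse:} && a + (-a) = 0\\
&\text{Associativity:} && (a + b) + c = a + (b + c)
\end{aligned}
$$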

The elements of the group I considered (which is known as the fundamental group) consisted of loops one could draw on the surface, such as the aforementioned A and B. A space with nontrivial loops has a nontrivial fundamental group. (Conversely, if every loop can be shrunk to a point, we say the space has a trivial fundamental group.) I proved that if those two elements are commuting, if A × B = B × A, then there must be a “subsurface” of lower dimension—specifically a torus—sitting somewhere inside the surface.

In two dimensions, we can think of a torus as the “product” of two circles. We start with one circle—the one that goes around the donut hole—and imagine that each point on this circle is the center of an identical circle. When you put all those identical circles together, you get a torus. (One could similarly make a donut shape by stringing Cheerios onto a cord and tying the ends together into a tight circle.) And that’s what we mean by saying the product of these two circles is a torus. In the case of my theorem, which built upon Preissman’s work, those two circles are represented by the loops A and B.

Both Preissman’s efforts and mine were rather technical and may seem obscure. But the important point is that both arguments show how the global topology of a surface can affect its overall geometry, not just the local geometry. That’s true because the loops in this example define a fundamental group, which is a global rather than local feature of a space. To demonstrate that one loop can be continuously deformed to another, you may have to move over the entire surface, which makes it a global property of that space. This is, in fact, one of the major themes in geometry today—to see what kind of global geometric structures a given topology can support. We know, for instance, that the average curvature of a surface topologically equivalent to a sphere cannot be negative. Mathematicians have compiled a long list of statements like that.


3.1—The geometer Charles Morrey (Photo by George M. Bergman)

As far as I could tell, my proof looked OK, and when the vacation was over, I showed it to one of my instructors, Blaine Lawson, who was then a young lecturer at the university. Lawson thought it looked OK, too, and together we used some ideas from that paper to prove a different theorem on a similar subject that related curvature to topology. I was pleased to have finally contributed something to the great body of mathematics, but I didn’t feel as if what I’d done was especially noteworthy. I was still searching for a way to truly make my mark.

It dawned on me that the answer might lie in a class I was taking on nonlinear partial differential equations. The professor, Charles Morrey, impressed me greatly. His course, on a subject that was anything but fashionable, was very demanding, drawing on a textbook that Morrey himself had written, which was extremely difficult to read. Before long, everyone but me had dropped out of the course, many of the students having left to protest the bombing of Cambodia. Yet Morrey continued his lectures, apparently putting a great deal of effort into their preparation, even with just one student in the class.

Morrey was a master of partial differential equations, and the techniques he’d developed were very deep. It’s fair to say that Morrey’s course laid the foundation for the rest of my mathematics career.

Differential equations pertain to just about anything that takes place or changes on infinitesimal scales, including the laws of physics. Some of the most useful and difficult of these equations, called partial differential equations, describe how something changes with respect to multiple variables. With partial differential equations, we can ask not only how something changes with respect to, say, time but also how it changes with respect to other variables, such as space (as in moving along the x-axis, y-axis, and z-axis). These equations provide a means of peering into the future and seeing how a system might evolve; without them, physics would have no predictive power.
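A standard illustration (offered here as an example, not one drawn from the text) is the heat equation, which tracks how a temperature u changes with respect to time and all three spatial directions at once:

$$
\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}
$$

Given the temperature everywhere at one instant, the equation determines it at all later times—exactly the predictive power described above.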

Geometry needs differential equations, too. We use such equations to measure the curvature of objects and how it changes. And this makes geometry essential to physics. To give a simple example, the question of whether a rolling ball will accelerate—whether its velocity will change over time—is strictly governed by the curvature of the ball’s trajectory. That’s one reason curvature is so closely linked to physics. That’s also why geometry—the “science of space” that is all about curvature—is instrumental to so many areas of physics.

The fundamental laws of physics are local, meaning that they can describe behavior in particular (or localized) regions but not everywhere at once. This is even true in general relativity, which attempts to describe the curvature of all spacetime. The differential equations that describe that curvature, after all, are derivatives taken at single points. And that poses a problem for physicists. “So you’d like to go from local information like curvature to figuring out the structure of the whole thing,” says UCLA mathematician Robert Greene. “The question is how.”1

Let’s start by thinking about the curvature of the earth. As it’s hard to measure the entire globe at once, Greene suggests the following picture instead: Imagine a dog attached by a leash to a stake in the front yard. If the dog can move a little bit, it can learn something about the curvature of the tiny patch to which it’s confined. In this case, we’ll assume that patch has positive curvature. Imagine that every yard all over the world has a dog attached to a post, and every single patch around those posts has positive curvature. From all these local curvature measurements, one can infer that topologically, the world must be a sphere.


3.2—Think of an object moving along a particular path. The velocity, which reflects how the object’s position changes with time, can be obtained by taking the derivative of the position curve. The derivative yields the slope of that curve at a given point in time, which also represents the velocity. The acceleration, which reflects how the object’s velocity changes with time, can be obtained by taking the derivative of the velocity curve. The value of the acceleration at a given point in time is given by the slope of the velocity curve.
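In symbols (a restatement of the caption, not part of the original figure): if x(t) denotes the object’s position as a function of time t, then the velocity and acceleration are

$$
v(t) = \frac{dx}{dt}, \qquad a(t) = \frac{dv}{dt} = \frac{d^{2}x}{dt^{2}}.
$$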

There are, of course, more rigorous ways of determining the curvature of a patch than basing it on how the surface feels to a dog. For example, if a leash has length r, as a dog walks around the post with the leash fully extended, the animal will trace out a circle whose circumference is exactly 2πr, assuming the space (or ground) is perfectly flat. The circumference will be somewhat smaller than 2πr on the surface of a sphere (with positive curvature) that slopes “downward” in all directions, and it will be larger than 2πr if the post is sitting in a dip or saddle point (with negative curvature) that slopes downward in some directions and upward in others. So we can determine the curvature of each patch by measuring the longest round-trip journey each dog can make and then combining the results from the various patches.
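The dog-and-leash procedure can, in fact, be made exact. A classical formula of Bertrand, Diguet, and Puiseux (a standard result, not quoted in the text) recovers the Gaussian curvature K at the post purely from circumference measurements:

$$
K = \lim_{r \to 0} \frac{3}{\pi}\,\frac{2\pi r - C(r)}{r^{3}},
$$

where C(r) is the measured circumference at leash-length r. On flat ground, C(r) = 2πr and K = 0; a shortfall in circumference signals positive curvature, while an excess signals negative curvature.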

That’s pretty much what differential geometers do. We measure the curvature locally, at a particular point, but attempt to use that knowledge to understand the entire space. “Curvature governs topology” is the basic slogan that we geometers embrace. And the tools we use to achieve that aim are differential equations.

Geometric analysis—a relatively recent development that we’ll be taking up in a moment—carries this idea further, but the general approach of including differential equations in geometry has been going on for centuries, dating back nearly to the invention of calculus itself. Leonhard Euler, the great eighteenth-century Swiss mathematician, was one of the earliest practitioners in this area. Among his many accomplishments was the use of partial differential equations to systematically study surfaces in three-dimensional space. More than two hundred years later, we are, in many ways, still following in Euler’s footsteps. In fact, Euler was one of the first to look at nonlinear equations, and those equations lie at the heart of geometric analysis today.

Nonlinear equations are notoriously difficult to solve, partly because the situations they describe are more complicated. For one thing, nonlinear systems are inherently less predictable than linear systems—the weather being a familiar example—because small changes in the initial conditions can lead to wildly different results. Perhaps the best-known statement of this is the so-called butterfly effect of chaos theory, which fancifully refers to the possibility that air currents generated by the flapping of a butterfly’s wings in one part of the world might conceivably cause a tornado to sprout up elsewhere.

Linear systems, by contrast, hold far fewer surprises and are consequently much easier to comprehend. An algebraic equation like y = 2x is called linear because when you graph it, you literally get a straight line. Any value of x you pick automatically yields a single value for y. Doubling the value of x doubles the value of y, and vice versa. Change, when it comes, is always proportional; a small change in one parameter will never lead to a fantastically large change in another. Our world would be much easier to understand—though far less interesting—if nature worked that way. But it does not, which is why we have to talk about nonlinear equations.

We do have some methods, however, to make things a bit more manageable. For one thing, we draw on linear theory as much as we can when confronting a nonlinear problem. To analyze a wiggly (nonlinear) curve, for example, we can take the derivative of that curve (or the function defining it) to get the tangents—which are, essentially, linear elements or straight lines—at any point on the curve we want.

Approximating the nonlinear world with linear mathematics is a common practice, but, of course, it does nothing to change the fact that the universe is, at its heart, nonlinear. To truly make sense of it, we need techniques that merge geometry with nonlinear differential equations. That’s what we mean by geometric analysis, an approach that has been useful in string theory and in recent mathematics as well.

I don’t want to give the impression that geometric analysis started in the early 1970s, when I cast my lot with this approach. In mathematics, no one can claim to have started anything from scratch. The idea of geometric analysis, in a sense, dates back to the nineteenth century—to the work of the French mathematician Henri Poincaré, who had in turn built upon the efforts of Riemann and those before him.

Many of my immediate predecessors in mathematics made further critical contributions, so that by the time I came on the scene, the field of nonlinear analysis was already reaching a maturity of sorts. The theory of two-dimensional, nonlinear partial differential equations (of the sort we call elliptic, which will be discussed in Chapter 5) had been worked out previously by Morrey, Aleksei Pogorelov, and others. In the 1950s, Ennio De Giorgi and John Nash paved the way for dealing with such equations in higher dimensions—indeed, in any dimension. Additional progress on the higher-dimensional theory was subsequently made by people like Morrey and Louis Nirenberg, which meant that I had entered the field at almost the perfect time for applying these techniques to geometric problems.

Nevertheless, while the approach that my colleagues and I were taking in the 1970s was not brand new, our emphasis was rather different. To someone of Morrey’s bent, partial differential equations were fundamental in their own right—a thing of beauty to be studied, rather than a means to an end. And while he was interested in geometry, he saw it primarily as a source of interesting differential equations, which is also how he viewed many areas of physics. Although both of us shared an awe for the power of these equations, our objectives were almost opposite: Instead of trying to extract nonlinear equations from geometric examples, I wanted to use those equations to solve problems in geometry that had previously been intractable.

Up to the 1970s, most geometers had shied away from nonlinear equations, but my contemporaries and I tried not to be intimidated. We vowed to learn what we could to manage these equations and then exploit them in a systematic way. At the risk of sounding immodest, I can say the strategy has paid off, going far beyond what I’d initially imagined. Over the years, we’ve managed to solve, through geometric analysis, many outstanding problems that have yet to be solved by any other means. “The blending of geometry with [partial differential equation] theory,” notes Imperial College mathematician Simon Donaldson, “has set the tone for vast parts of the subject over the past quarter century.”2

So what do we do in geometric analysis? We’ll start first with the simplest example I can think of. Suppose you draw a circle and compare it with an arbitrary loop or closed curve of somewhat smaller circumference—this could be just a rubber band you’ve carelessly tossed on your desk. The two curves look different and clearly have different shapes. Yet you can also imagine that the rubber band can easily be deformed (or stretched) to make a circle—an identical circle, in fact.

There are many ways of doing so. The question is, what’s the best way? Is there a way to do it that will always work so that, in the process, the curve doesn’t develop a knot or a kink? Can you find a systematic way of deforming that irregular curve into a circle without resorting to trial and error? Geometric analysis can use the geometry of the arbitrary curve (i.e., the rubber band in our example) to prescribe a way of driving that curve to a circle. The process should not be arbitrary. The geometry of the curve itself ought to determine a precise, and preferably canonical, way of getting to a circle. (For mathematicians, canonical is a watered-down way of saying “unique,” which is sometimes too strong. Suppose you want to travel from the north pole to the south pole. There are many great circles connecting these points. Each of these paths offers the shortest route, but none of them is unique; we call them canonical instead.)

You can ask the same questions in higher dimensions, too. Instead of a circle and a rubber band, let’s compare a sphere or fully inflated basketball with a deflated basketball with all kinds of dents and dimples. The trick, again, is to turn that deflated basketball into a perfect sphere. Of course, we can do it with a pump, but how can we do it through math? The mathematical equivalent of a pump in geometric analysis is a differential equation, which is the driving mechanism for the evolution of shape by means of tiny, continuous changes. Once you’ve determined the starting point—the geometry of the deflated basketball—and identified the proper differential equation, you’ve solved the problem.


3.3—A technique in geometric analysis, called curve shortening flow, can provide a mathematical prescription for turning any non-self-intersecting closed curve into a circle, without running into any complications—such as snags, tangles, or knots—along the way.
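The driving equation behind this figure can be written down explicitly (a standard formulation, not given in the text). In curve shortening flow, each point of the curve γ moves along its unit normal N at a speed equal to the curvature κ at that point:

$$
\frac{\partial \gamma}{\partial t} = \kappa\, N.
$$

Sharply bent stretches therefore straighten out fastest, and a theorem of Gage, Hamilton, and Grayson guarantees that any embedded closed curve in the plane eventually becomes convex and shrinks to a perfectly round point.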

The hard part, of course, is finding the right differential equation for the job and even determining whether there is, in fact, an equation that’s up to the task. (Fortunately, Morrey and others have developed tools for analyzing these equations—tools that can tell us whether the problem we’re trying to solve has a solution at all and, if so, whether that solution is unique.)

The class of problems I’ve just described comes under the heading of geometric flow. Such problems have lately garnered a good deal of attention, as they were used in solving the hundred-year-old Poincaré conjecture, which we’ll get to later in this chapter. But I should emphasize that problems of this sort constitute just a fraction of the field we now call geometric analysis, which encompasses a far broader range of applications.

When you’re holding a hammer, as the saying goes, every problem looks like a nail. The trick is figuring out which problems are best suited to a particular line of attack. One important class of questions that geometric analysis lets us solve involves minimal surfaces. These are nails for which geometric analysis may sometimes be almost the perfect hammer.

Odds are, we’ve all seen minimal surfaces at one time or another. When we dip the plastic ring from a soap bubble kit into the jar of soapy water, surface tension will force the soap film that forms to be perfectly flat, thereby covering the smallest possible area. A minimal surface, to be more mathematical about it, is the smallest possible surface that can span a given closed-loop boundary.

Minimization has been a foundational concept in both geometry and physics for hundreds of years. In the seventeenth century, for example, the French mathematician Pierre de Fermat showed that light traveling through different media always follows the path that takes the least time, which was one of the first great physics principles expressed in terms of minimization.

“You often see this phenomenon in nature,” explains Stanford mathematician Leon Simon, “because of all the possible configurations you can have, the ones that actually occur have the least energy.”3 The least-area shape corresponds to the lowest energy state, which, other things being equal, tends to be the preferred state. On a least-area surface, the surface tension forces balance out exactly, which is another way of saying its mean curvature is zero. That’s why the surface of a liquid tends to be flat (with zero curvature) and why soap films tend to be flat as well.

A confusing aspect of minimal surfaces stems from the fact that the terminology has not changed over the centuries while the mathematics has become increasingly sophisticated. It turns out there is a large class of related surfaces that are sometimes called minimal surfaces even though they are not necessarily area-minimizing. This class includes the surface whose area is smallest compared with all other surfaces bounded by the same border—which might be called a true minimal surface or “ground state”—but it also includes an even greater number of so-called stationary surfaces that minimize the area in small patches (locally) but not necessarily everywhere (globally). Surfaces in this category, which have zero surface tension and zero mean curvature, are of great interest to mathematicians and engineers. We tend to think of minimal surfaces in terms of a family, all of whose members are similar. And while every minimal surface is intriguing, one stands out as truly exceptional.

Finding the shortest path, or geodesic, is the one-dimensional version of the generally more complex problem of finding minimal surfaces in higher dimensions. The shortest path between any two points—such as a straight line on a plane, or the segment of a great circle connecting two points on the globe—is sometimes called a geodesic, although that term (to confuse matters further) also includes paths that are not necessarily the shortest but are still of considerable importance to geometers and physicists. If you take two points on a great circle that are not on opposite ends of the “globe,” there will be two ways of going from one to the other—the short way around and the long way around. Both paths, or arcs, are geodesics, but only one represents the shortest distance between those points. The long way around also minimizes length, but only locally: Among all possible paths one might draw that are close to that geodesic, it alone offers the shortest path. But it is not the shortest path among all possibilities, since one could take the short way around instead. (Things get even more complicated on an ellipsoid—a flattened sphere made by rotating an ellipse around one of its axes—on which many geodesics do not minimize length among all possible paths.)
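For readers who want the analytic statement (it is not included in the text), geodesics are exactly the curves x(t) that satisfy the geodesic equation,

$$
\frac{d^{2}x^{\mu}}{dt^{2}} + \Gamma^{\mu}_{\alpha\beta}\,\frac{dx^{\alpha}}{dt}\frac{dx^{\beta}}{dt} = 0,
$$

where the coefficients Γ (the Christoffel symbols) are built from the metric that defines distances on the surface. Both the short arc and the long arc of a great circle satisfy this equation, which is why both count as geodesics even though only one minimizes length globally.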


3.4—The shortest distance between A and B lies along a “great circle” (which in this case happens to be the equator) by way of point P. This path is also called a geodesic. The path from A to B via point Q is a geodesic too, even though this route obviously does not represent the shortest distance between the two points. (But it is the shortest path compared with all other routes one might take in the vicinity of that arc.)


3.5—Joseph Plateau postulated that for any simple closed curve, one could find a minimal surface—a surface, in other words, of minimal area—bounded by that curve. The minimal surface spanning the curve (in bold) in this example, called an Enneper surface, is named after the German mathematician Alfred Enneper. (Image courtesy of John F. Oprea)

To find those minimal distances, we need to use differential equations. To find minimum values, you look for places where the derivative is zero. A surface of minimum area likewise satisfies a particular differential equation—namely, one expressing the fact that the mean curvature is zero everywhere. Once you’ve found this specific partial differential equation, you have lots of information to bring to bear on the problem, because over the years we’ve learned a lot about these equations.
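For the special case of a surface that can be written as a graph z = f(x, y), the condition of everywhere-zero mean curvature becomes the classical minimal surface equation (a standard form, not written out in the text):

$$
(1 + f_{y}^{2})\,f_{xx} - 2 f_{x} f_{y}\, f_{xy} + (1 + f_{x}^{2})\,f_{yy} = 0,
$$

a nonlinear partial differential equation of precisely the elliptic type mentioned earlier in this chapter.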

“But it’s not as if we’ve plundered a well-developed field and just taken things straight off the shelf. It’s been a two-way street, because a lot of information about the behavior of partial differential equations has been developed through geometry,” says Robert Greene.4 To see what we’ve learned through this marriage of geometric analysis and minimal surfaces, let’s resume our discussion of soap films.

In the nineteenth century, the Belgian physicist Joseph Plateau conducted classic experiments in this area, dipping wires bent into assorted shapes in tubs of soapy water. Plateau concluded that the soap films that formed were always minimal surfaces. He hypothesized, moreover, that for any given closed curve, you can always produce a minimal surface with that same boundary. Sometimes there’s just one such surface, and we know it’s unique. Other times there is more than one surface that minimizes the area, and we don’t know how many there are in total.

Plateau’s conjecture was not proved until 1930, when Jesse Douglas and Tibor Rado independently arrived at solutions to the so-called Plateau problem. Douglas was awarded the Fields Medal for his work in 1936, the first year the award was given.


3.6—Although the original version of the Plateau problem related to surfaces spanning simple closed curves, you can also ask—and sometimes answer—more complicated versions of that same question such as this: If your boundary consists of not just a single closed curve but rather several closed curves, such as circles, can you find a minimal surface that connects them all? Here are some examples of minimal surfaces that offer solutions to that more complicated framing of the Plateau problem. (Image courtesy of the 3D-XplorMath Consortium)

Not every minimal surface is as simple as a soap film. Some minimal surfaces that mathematicians think about are much more complex—riddled with intricate twists and folds called singularities—yet many of these can be found in nature as well. Following up on the Douglas-Rado work a couple of decades later, Robert Osserman of Stanford (author of a masterful book on geometry called Poetry of the Universe) showed that the minimal surfaces encountered in Plateau-style experiments exhibit only one kind of singularity, of a particularly simple sort: disks or planes crossing each other along a straight line. Then, in the 1970s, a colleague—William Meeks, a professor at the University of Massachusetts with whom I studied at Berkeley—and I carried this a step further.


3.7—The mathematician William Meeks (Photo courtesy of Joaquín Pérez)

We looked at situations in which the minimal surfaces are so-called embedded disks, meaning that the surface does not fold back anywhere along its vast extent to cross itself. (Locally, such a crossing would look like the intersection of two or more planes.) In particular, we were interested in convex bodies, where a line segment or geodesic connecting any two points in the object always stays on or in the object. A sphere and a cube are thus convex, while a saddle is not. Nor is any hollow, dented, or crescent-shaped object convex, because lines connecting some points will necessarily stray from the object. We proved that for any closed curve you can draw on the boundary of a convex body, the minimal surface spanning that curve is always embedded: It won’t have any of the folds or crossings that Osserman talked about. In a convex space, we showed, everything goes down nice and smooth.

We had thus settled a major question in geometry that had been debated for decades. But that was not the end of the story. To prove that version of the Plateau problem, Meeks and I drew on something called Dehn’s lemma. (A lemma is a statement proven in the hopes of proving another, more general statement.) This problem was thought to have been proved in 1910 by the German mathematician Max Dehn, but an error was uncovered more than a decade later. Dehn stated that if a disk in a three-dimensional space has a singularity, meaning that it intersects itself in a fold or crisscross, then it can be replaced by a disk with no singularity but with the same boundary circle. The statement would be quite useful, if true, because it means that geometers and topologists could simplify their jobs immensely by replacing a surface that crosses itself with one that has no crossings at all.


3.8—Dehn’s lemma, a geometric version of which was proven by William Meeks and the author (Yau), provides a mathematical technique for simplifying a surface that crosses, or intersects, itself into a surface with no crossings, folds, or other singularities. The lemma is typically framed in terms of topology, but the geometric approach taken by Meeks and Yau offers a more precise solution.

The lemma was finally proved in 1956 by the Greek mathematician Christos Papakyriakopoulos—an effort lionized in a limerick penned by John Milnor:

The perfidious lemma of Dehn
drove many a good man insane
but Christos Papa-
kyriakop-
oulos proved it without any pain.

Meeks and I applied Papakyriakopoulos’s topology-based approach to the geometry problem inspired by Plateau. We then flipped that around, using geometry to prove stronger versions of both Dehn’s lemma and the related loop theorem than topologists had been able to achieve. First, we showed that you could find a least-area disk in such a space that was embedded and hence not self-crossing. But in this particular setting (called equivariant), there’s not just one disk to consider but all its symmetry pairs—a situation that was like looking in a multiply bent funhouse mirror that has not just one mirror image but many. (The case we considered involved a finite, though arbitrarily large, number of mirror images or symmetry pairs.) We proved that the minimal surface disk would neither intersect itself nor intersect any of the other disks in its symmetry group. You might say that the disks in that group are all “parallel” to each other, with one exception: In cases where the disks do intersect, they must overlap completely.

While that was considered an important problem on its own, it turned out to be even more important than we thought, as it tied into a famous problem in topology known as the Smith conjecture, which dates back to the 1930s. The American topologist Paul Smith was then thinking about rotating an ordinary, three-dimensional space around an infinitely long vertical axis. Smith knew that, if the axis were a perfectly straight line, the rotation could be easily done. But such a rotation would be impossible, he conjectured, if the axis were knotted.

You might wonder why someone would consider such a strange notion, but this is exactly the sort of thing topologists and geometers worry about. “All your intuition tells you the conjecture is obviously true,” notes Cameron Gordon of the University of Texas, “for how can you possibly rotate space around a knotted line?” The work Meeks and I had done on Dehn’s lemma and the loop theorem contained the last two pieces needed to solve the Smith conjecture. The conjecture was proved by combining our results with those of William Thurston and Hyman Bass. Gordon took on the job of assembling those disparate pieces into a seamless proof that upheld Smith’s original assertion that you cannot rotate a three-dimensional space around a knotted line. It turns out, however, despite how ridiculous it might seem, that the statement is false in higher dimensions, where such rotations around knotted lines are indeed possible.5

This proof was a nice example of geometers and topologists working together to solve a problem that would surely have taken longer had they been pursuing it entirely on their own. It was also the first time I’m aware of that minimal-surface arguments had been applied to a question in topology. Moreover, it provided some validation of the idea of using geometry to solve problems in topology and physics. Although we’ve talked about topology, we haven’t really said much about physics yet, which leaves the question of whether geometric analysis has had anything to contribute there.

At an international geometry conference held at Stanford in 1973, a problem from general relativity came to my attention that would show how powerful geometric analysis could be for physics, although quite a few years passed before I tried to do anything about it. At that conference, the University of Chicago physicist Robert Geroch spoke of a long-standing riddle called the positive mass conjecture, or positive energy conjecture. It states that in any isolated physical system, the total mass or energy must be positive. (In this case, it’s OK to speak of mass and energy interchangeably since, as Einstein showed most plainly through his famous equation E = mc², the two concepts are equivalent.) Because the universe can be thought of as an isolated system, the conjecture also applies to the universe as a whole. The question was important enough to have warranted its own special session at major general-relativity meetings for years because it related to the stability of spacetime and the consistency of Einstein’s theory itself. Simply put, spacetime cannot be stable unless its overall mass is positive.

At the Stanford conference, Geroch laid down the gauntlet, challenging geometers to solve a problem that physicists, up to that point, had been unable to settle on their own. He felt that geometers might help not only because of the fundamental connection between geometry and gravity but also because the statement that the matter density must be positive is equivalent to saying the average curvature of space at each point must be positive.

Geroch was anxious for some resolution of this issue. “It was hard to believe the conjecture was wrong, but it was equally hard to prove it was right,” he recently said. Yet one cannot rely on intuition when it comes to matters like this, he added, “because it doesn’t always lead us correctly.”6

His challenge stuck in my mind, and several years later, while I was working on a different question with my former graduate student Richard Schoen (now a Stanford professor), it occurred to us that some of the geometric analysis techniques we’d recently developed might be applied to the positive mass conjecture. The first thing we did, employing a strategy commonly used on big problems, was to break the problem up into smaller pieces, which could then be taken on one at a time. We proved a couple of special cases first, before tackling the full conjecture, which is difficult for a geometer even to comprehend, let alone attempt to prove. Moreover, we didn’t believe it was true from a pure geometry standpoint, because it seemed to be too strong a statement.

We weren’t alone. Misha Gromov, a famous geometer now at New York University and the Institut des Hautes Études Scientifiques in France, told us that based on his geometric intuition, the general case was clearly wrong, and many geometers agreed. On the other hand, most physicists thought it was true (as they kept bringing it up at their conferences, year after year). That was enough to inspire us to take a closer look at the idea and see if it made any sense.


3.9—Stanford mathematician Richard Schoen

The approach we took involved minimal surfaces. This was the first time anyone had applied that strategy to the positive mass conjecture, probably because minimal surfaces had no obvious connection to the problem. Nevertheless, Schoen and I sensed that this avenue might pay off. Just as in engineering, you need the right tools to solve a problem (although, after a proof is complete, we often find there’s more than one way of arriving at the solution). If the local matter density was in fact positive, as postulated in general relativity, then the geometry had to behave in a manner congruent with that fact. Schoen and I decided that minimal surfaces might offer the best way of determining how the local matter density affects the global geometry and curvature.

The argument is difficult to explain mainly because the Einstein field equation, which relates the physics in this situation to geometry, is a complicated, nonlinear formulation that is not intuitive. Basically, we started off by assuming that the mass of a particular space was not positive. Next we showed that you could construct an area-minimizing surface in such a space whose average curvature was non-negative. The surface, in other words, could have zero average curvature. That would be impossible, however, if the space in which the surface sat was our universe, where the observed matter density is positive. And assuming that general relativity is correct, a positive matter density implies positive curvature.

While this argument might seem circular, it actually is not. The matter density can be positive in a particular space, such as our universe, even though the total mass is not positive. That’s because there are two contributions to total mass—one coming from matter and the other coming from gravity. Even though the matter contribution may be positive, as we assumed in our argument, the gravity contribution could be negative, which means the total mass could be negative, too.

Put in other terms, starting from the premise that the total mass was not positive, we proved that an area-minimizing “soap film” could be found, while at the same time we showed that in a universe like ours, such a film could not exist, because its curvature would be all wrong. The supposition of nonpositive mass had thus led to a major contradiction, pointing to the conclusion that the mass and energy must be positive. We proved this in 1979, thereby resolving the issue as the physicist Geroch had hoped someone might.

That discovery was just the first stage of our work, which Schoen and I broke down into two parts, because the problem Geroch had proposed was really a special case, what physicists call the time-symmetric case. Schoen and I had taken on the special case first, and the argument that brought us to a contradiction was based on that same assumption. To prove the more general case, we needed to solve an equation proposed by P. S. Jang, who had been Geroch’s student. Jang did not try to solve the equation himself, because he believed it had no global solution. Strictly speaking, that was true, but Schoen and I felt we could solve the equation if we made one assumption, which allowed the solution to blow up to infinity at the boundary of a black hole. With that simplifying assumption, we were able to reduce the general case to the special case that we’d already proved.

Our work on this problem received important guidance, as well as motivation, from the physics community. Although our proof rested on pure mathematics—built upon nonlinear arguments few physicists are comfortable with—the intuition of physicists, nevertheless, gave us hope that the conjecture might be true, or was at least worth expending the time and energy to find out. Relying on our geometric intuition in turn, Schoen and I then managed to succeed where physicists had previously failed.

The dominion of geometers in this area did not last long, however. Two years later, the physicist Edward Witten of the Institute for Advanced Study in Princeton proved the positive mass conjecture in an entirely different way—a way that depends on linear (as opposed to nonlinear) equations, which certainly made the argument more accessible to physicists.

Yet both proofs affirmed the stability of spacetime, which was comforting to say the least. “Had the positive mass theorem been untrue, this would have had drastic implications for theoretical physics, since it would mean that conventional spacetime is unstable in general relativity,” Witten explains.7

Although the average citizen has not lost sleep over this issue, the implications concern more than just theoretical physicists, as the concepts extend to the universe as a whole. I say this because the energy of any system tends to drop to the lowest energy level allowable. If the energy is positive, then there is a floor, set at zero, that it must stay above. But if the overall energy can be negative, there is no bottom. The ground state of general relativity theory—the vacuum—would keep dropping to lower and lower energy levels. Spacetime itself would keep degenerating and deteriorating until the universe as a whole disappeared. Fortunately, that is not the case. Our universe is still here, and it appears that spacetime has been saved—at least for now. (More on its possible demise later.)

Despite those rather sweeping implications, one might think that the two proofs of the positive mass conjecture were somewhat beside the point. After all, many physicists had been simply operating under the assumption that the positive mass conjecture was true. Did the proofs really change anything? Well, to my mind, there is an important difference between knowing that something is true and assuming it is true. To some extent, that is the difference between science and belief. In this case, we did not know the conjecture was true until it was a proven fact. As Witten stated in his 1981 paper that presented the proof, “it is far from obvious that the total energy is always positive.”8

Beyond those more philosophical issues surrounding the proofs of the positive mass conjecture, the theorem also offers some clues for thinking about mass, which turns out to be a subtle and surprisingly elusive concept in general relativity. The complications stem in part from the intrinsic nonlinearity of the theory itself. That nonlinearity means that gravity, too, is nonlinear. And being nonlinear, gravity can interact with itself and, in the process, create mass—the kind of mass that is especially confusing to deal with.

In general relativity, mass can only be defined globally. In other words, we think in terms of the mass of an entire system, enclosed in a figurative box, as measured from far, far away (from infinity, actually). In the case of “local” mass—the mass of a given body, for instance—there is no clear definition yet, even though this may seem like a simpler issue to the layperson. (Mass density is a similarly ill-defined concept in general relativity.) The question of where mass comes from and how you define it has fascinated me for decades, and when time permits, I continue to work on it with math colleagues like Melissa Liu and Mu-Tao Wang from Columbia. I now feel that we’re finally narrowing in on a definition of local mass, incorporating ideas from various physicists and geometers, and we may even have the problem in hand. But we couldn’t have begun to even think about this issue without first having established as a baseline that the total mass is positive.

In addition, the positive mass theorem led Schoen and me to another general relativity-related proof of some note, this time concerning black holes. When most people think of exotic astrophysical entities like black holes, geometry is the farthest thing from their mind. Yet geometry has a lot to say about black holes, and through geometry, people were able to say something about the existence of these objects before there was strong astronomical evidence for them. This was a major triumph of the geometry of general relativity.

In the 1960s, Stephen Hawking and Roger Penrose proved, through geometry (though a different kind of geometry than we’ve been discussing here) and the laws of general relativity, that if a trapped surface (i.e., an extremely curved surface from which light cannot escape) exists, then the surface would eventually evolve, or devolve, into the kind of singularity thought to lie in the center of a black hole—a place at which we believe the curvature of spacetime approaches infinity. If you find yourself in such a place, spacetime curvature will continue to increase as you move toward the center. And if there’s no cap to the curvature—no upper limit—then the curvature will keep getting bigger until you reach the center, where its value will be infinite.

That’s the funny thing about curvature. When we walk on the surface of Earth, which has a huge radius (about 4,000 miles) compared with us (normally not much more than 6 feet tall), we can’t detect its curvature at all. But if we were to walk on a planet with just a 10- or 20-foot radius (like that inhabited by Saint-Exupéry’s Little Prince), we could not ignore its curvature. Because the curvature of a sphere is inversely proportional to the radius squared, as the radius goes to infinity, the curvature goes to zero. Conversely, as the radius goes to zero, the curvature blows up, so to speak, and goes to infinity.
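In symbols, the relationship just stated reads: a sphere of radius r has Gaussian curvature

$$
K = \frac{1}{r^{2}},
$$

so Earth, with a radius of about 4,000 miles, is trillions of times less sharply curved than a Little Prince-style planet with a radius of 10 feet.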


3.10a—Cambridge University physicist Stephen Hawking (Photo by Philip Waterson, LBIPP, LRPS)


3.10b—Oxford University mathematician Roger Penrose (© Robert S. Harris [London])


3.11—The smaller the sphere, the more sharply it’s curved. Conversely, as the radius of a sphere increases to infinity, its curvature decreases to zero.

Imagine, then, a flash of light emitted simultaneously over the surface of an ordinary two-dimensional sphere. Light will move from this surface in two directions, both inwardly and outwardly. The inward-moving flash will form a surface of rapidly decreasing area that converges toward the center, whereas the surface area of the outgoing flash steadily increases. A trapped surface is different from a typical sphere in that the surface area decreases regardless of whether you move inward or outward.9 You’re trapped no matter which direction you head. In other words, there’s no way out.

How can this be possible? Well, it’s possible, in part, because that’s the definition of a trapped surface. But the explanation also stems from the fact that trapped surfaces have what’s called positive mean curvature taken to the extreme. Even outward-going rays of light get wrapped around by this intense curvature—as if the roof and walls were closing in on them—and, as a result, they end up converging toward the center. “If the surface area is initially decreasing, it will continue to decrease because there’s a focusing effect,” my colleague Schoen explains. “You can also think of great circles on the globe that start at the north pole and separate, but because the curvature is positive on a sphere, the lines start to converge and eventually come together at the south pole. Positive curvature gives you this focusing effect.”10

Penrose and Hawking had proved that once formed, trapped surfaces would degenerate into objects from which light cannot escape—objects that we call black holes. But what does it take, exactly, to make a trapped surface? Before Schoen and I began our work, people had generally asserted that if the matter density in a given region were high enough, a black hole would inevitably form, but these arguments were rather vague and involved a lot of hand-waving. No one had ever formulated the statement in a clear, rigorous manner. This is the problem that Schoen and I attacked, again using minimal-surface approaches that came directly out of our work on the positive mass theorem.

We wanted to know the precise conditions under which you’d produce a trapped surface, and in 1979 we proved that when the density of a region reaches twice that of a neutron star (an environment already 100 trillion times denser than water), the curvature will be high enough that a trapped surface will invariably form. Our argument, coupled with that of Hawking and Penrose, spells out the circumstances under which black holes must exist. More specifically, we showed that when a celestial object has a matter density greater than that of a neutron star, it will collapse directly to a black hole and not to another state. This was a purely mathematical discovery about objects whose existence would soon be confirmed by observation. (A few years ago, Demetrios Christodoulou of ETH Zurich developed another mechanism for the formation of trapped surfaces through gravitational collapse.)11

More recently, Felix Finster, Niky Kamran, Joel Smoller, and I studied the question of whether spinning black holes are stable in the face of certain perturbations. That is, you could “kick” these objects in various ways, so to speak, but they will not split into two or spin out of control or otherwise fall apart. Although this work looks robust, it is not yet complete and we cannot rule out the possibility of other, more general kinds of kicks that might be destabilizing.

Two years later, Finster, Kamran, Smoller, and I offered what we believe to be the first rigorous mathematical proof of a long-standing black hole problem posed by Roger Penrose. In 1969, Penrose suggested a mechanism for extracting energy from a rotating black hole by drawing down its angular momentum. In this scenario, a piece of matter spiraling toward a black hole can split into two fragments—one that crosses the event horizon and plunges into the hole, and another that is catapulted out with even more energy than the original lump of in-falling matter. Rather than looking at a particle, my colleagues and I considered its analogue—a wave traveling toward the black hole—proving that the mathematics of the Penrose process, as it is known, is entirely sound. While discussing our proof at a 2008 Harvard conference on geometric analysis, Smoller joked that someday, we might use this mechanism to solve our energy crisis.

Although geometers have helped to penetrate some of the enigmas of black holes, the study of these objects now lies primarily in the hands of astrophysicists, who are presently making observations almost to the edge of the event horizon—the point beyond which no observations are possible because nothing (including light) can make it back from the “other side.” Nevertheless, had it not been for the work of theorists like Hawking, Penrose, John Wheeler, Kip Thorne, and others, it’s doubtful that astronomers would have had the confidence to look for such things in the first place.

Despite these great successes, I don’t want to give the impression that this is all there is to geometric analysis. I’ve focused on the developments I know best, namely, those I was directly or indirectly involved in. But the field is much bigger than that, having involved the efforts of more than one hundred top scholars from all over the world, and I’ve given just a small taste of that overall effort. We’ve also managed to get through the bulk of our chapter on geometric analysis without mentioning some of the discipline’s biggest achievements. I cannot describe them all; a mere outline of these topics that I wrote in 2006 filled seventy-five single-spaced pages, but we will discuss three that I consider to be among the most important.

The first of these milestones lies in the realm of four-dimensional topology. The principal goal of a topologist is not unlike that of a taxonomist: to classify the kind of spaces or manifolds that are possible in a given dimension. (A manifold is a space or surface of any dimension, and we will use these words interchangeably. In the next chapter, however, we’ll describe manifolds in greater detail.) Topologists try to lump together objects that have the same basic structure even though there may be wild differences in their outward appearance and detailed structure. Two-dimensional surfaces—with the insistence that they be compact (i.e., bounded and noninfinite) and orientable (having both an inside and an outside)—can be classified by the number of holes they have: Tori, or donut-like surfaces, have at least one hole, whereas the surfaces of topological spheres have none. If two such surfaces have the same number of holes, they are equivalent to a topologist, regardless of how different they may appear. (Thus, both a coffee mug and the donut being dipped in it are tori of genus one. If you prefer milk with your donut, the glass you’re drinking out of will be the topological equivalent of a sphere—made, for instance, by pushing the north pole toward the south pole and then modifying the shape a bit.)

While the two-dimensional situation has been understood for more than a century, higher dimensions have proved more challenging. “Remarkably, the classification is easier in five dimensions and higher,” notes University of Warwick mathematician John D. S. Jones. “Three and four dimensions are by far the most difficult.”12 Coincidentally, these happen to be the dimensions deemed most important in physics. William Thurston worked out a classification scheme in 1982 that carved up three-dimensional space into eight basic types of geometries. This hypothesis, known as Thurston’s geometrization conjecture, was proved about two decades later (as will be discussed shortly).

The assault on the fourth dimension began at about the same time that Thurston advanced his bold proposition. Four-dimensional spaces are not only harder to visualize but also harder to describe mathematically. One example of a four-dimensional object is a three-dimensional object, like a bouncing basketball, whose shape changes over time as it smashes against the ground, recoils, and then expands. The detailed geometry of such shapes is confusing, to say the least, yet essential to understand if we are ever to truly make sense of the four-dimensional spacetime we supposedly inhabit.

Some clues came in 1982, when Simon Donaldson, then a second-year graduate student at Oxford, published the first of several papers on the structure of four-dimensional space. To gain a window into the fourth dimension, Donaldson drew on nonlinear partial differential equations developed in the 1950s by the physicists Chen Ning Yang and Robert Mills. The Yang-Mills equations—which describe the strong forces that bind quarks and gluons inside the atomic nucleus, the weak forces associated with radioactive decay, and the electromagnetic forces that act on charged particles—operate within the context of four-dimensional space. Rather than just trying to solve the equations, which one would normally attempt by drawing on the geometric and topological features of the underlying space, Donaldson turned the problem on its head: The solution to those equations, he reasoned, should yield information about the four-dimensional space in which they operate. More specifically, the solutions should point to key identifying features—what mathematicians call invariants—that can be used to determine whether four-dimensional shapes are different or the same.


3.12—The geometer Simon Donaldson

Donaldson’s work shed light on the invariants he was hoping to find, but it also turned up something unexpected and mysterious: a new class of “exotic” spaces that appear only in four dimensions. To explain what exotic means, we first have to explain what it means to call two surfaces or manifolds the same. Mathematicians have different ways of comparing manifolds. One is the notion of topological equivalence. Here we can borrow the example from earlier in this chapter of two basketballs, one fully inflated and the other deflated. We say the two objects are effectively the same (or homeomorphic) if we can go from one to the other by folding, bending, squishing, or stretching but not cutting. Going from one manifold to another in this fashion is called continuous mapping. It’s a one-to-one mapping, meaning that a single point on one surface corresponds to a single point on the other. What’s more, any two points that are near each other on one surface will end up near each other on the other surface as well.

But another way of comparing manifolds is a bit more subtle and more stringent. In this case, the question is whether you can go from one manifold to another smoothly, without introducing what mathematicians call singularities, such as sharp corners or spikes on the surface. Manifolds that are equivalent in this sense are called diffeomorphic. To qualify, a function that takes you from one manifold to another—transferring one set of coordinates in one space to a different set of coordinates in the other space—has to be a smooth function that is differentiable, which of course means you can take the derivative of this function at all times and at all places. A graph of the function would not appear jagged in any sense: There are no hard edges or steep vertical jumps or rises that would render the whole notion of a derivative meaningless.
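
A one-dimensional illustration, standard textbook fare rather than anything drawn from Donaldson’s work, may make the distinction concrete. Consider the map from the real line to itself given by

\[ f(x) = x^3 . \]

This map is continuous, one-to-one, and onto, and its inverse, \( f^{-1}(y) = y^{1/3} \), is continuous as well, so f is a homeomorphism. But that inverse is not differentiable at y = 0, where its graph has a vertical tangent, so f is not a diffeomorphism. (The two copies of the line are still diffeomorphic by way of other maps, such as the identity; the point is only that a particular mapping can pass the topological test while failing the smooth one.)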

As an example, let’s place a sphere inside a large ellipsoid, or watermelon-shaped surface, so that the centers of the two objects coincide. Drawing radial lines outward in all directions from that common center matches each point on the sphere with a point on the watermelon, and you can do this for every single point on both surfaces. The mapping in this case is not only continuous and one-to-one but also smooth: there’s nothing especially tricky about the function linking the two objects, as each point simply travels along a straight radial line, with no zigzags or sharp turns along the way. The two objects in this case, a sphere and an ellipsoid, are both homeomorphic and diffeomorphic.
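
For readers who want to see the function written out, here is one way to do it, assuming a unit sphere and an ellipsoid with semiaxes a, b, and c, both centered at the origin: the point p = (x, y, z) on the sphere is sent to

\[ f(x, y, z) = \frac{(x, y, z)}{\sqrt{x^2/a^2 + y^2/b^2 + z^2/c^2}} , \]

which is precisely the point where the radial line through p meets the ellipsoid. Because the quantity under the square root is strictly positive on the sphere, both f and its inverse are smooth.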

The so-called exotic sphere offers a contrary example. An exotic sphere is a seven-dimensional manifold that is everywhere smooth, yet it cannot be smoothly deformed into a regular, round (seven-dimensional) sphere, even though it can be continuously deformed into one. The two surfaces are thus homeomorphic but not diffeomorphic. John Milnor, whose book was mentioned earlier in this chapter, won a Fields Medal largely for establishing that such exotic spheres exist. People hadn’t believed spaces like these were possible before, which is why they were called exotic.

In two dimensions, flat Euclidean space is about the simplest space you can imagine—just a smooth plane, like a tabletop, that stretches endlessly in all directions. Is a flat two-dimensional disk, which is a subset of that plane, both homeomorphic and diffeomorphic to that plane? Yes it is. You can imagine a bunch of people standing on the plane and grabbing an end of the disk and then walking in an outward direction without stopping. As they march out toward infinity, they’ll cover the plane in a nice, continuous, one-to-one fashion. They’re topologically identical. It’s also pretty easy to imagine that this stretching process, which involves moving a point radially outward, can be done smoothly.
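
One explicit choice of such a stretching map, offered purely as an example, first discards the disk’s boundary circle (a disk with its rim attached, being compact, could never cover the infinite plane) and then sends each remaining point x, with the radius taken to be 1, to

\[ \varphi(x) = \frac{x}{1 - |x|^2} , \]

which leaves the center fixed and pushes points near the rim out toward infinity. This map is smooth, one-to-one, and onto the entire plane, and its inverse is smooth as well, confirming that the (open) disk and the plane are homeomorphic and diffeomorphic alike.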

This same basic result holds for three dimensions and every other dimension you might pick except for four dimensions, where you can have manifolds that are homeomorphic to a plane (or flat Euclidean space) without being diffeomorphic to it. In fact, there are infinitely many four-dimensional manifolds that are homeomorphic but not diffeomorphic to four-dimensional Euclidean space, what we call R^4 (to indicate a real, as opposed to complex, coordinate space in four dimensions).

This is a peculiar and puzzling fact about four dimensions. In a spacetime of 3 + 1 dimensions (three spatial dimensions and one of time), for instance, “electric fields and magnetic fields look similar,” Donaldson says. “But in other dimensions, they are geometrically distinct objects. One is a tensor [which is a kind of matrix] and the other is a vector, and you can’t really compare them. Four dimensions is a special case in which both are vectors. Symmetries appear there that you don’t see in other dimensions.”13
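
A simple count suggests why four dimensions are singled out. In a spacetime with d spatial dimensions, the electric field has d components, one for each spatial direction, while the magnetic field is really an antisymmetric array with d(d-1)/2 independent components. The two counts agree only when

\[ d = \frac{d(d-1)}{2} , \quad \text{that is, when } d = 3 , \]

which is exactly the case of four-dimensional spacetime: three dimensions of space plus one of time.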

No one yet knows, from a fundamental standpoint, exactly what makes four dimensions so special, Donaldson admits. Prior to his work, we knew virtually nothing about “smooth equivalence” (diffeomorphism) in four dimensions, although the mathematician Michael Freedman (formerly at the University of California, San Diego) had provided insights on topological equivalence (homeomorphism). In fact, Freedman topologically classified all simply connected four-dimensional manifolds, building on the prior work of Andrew Casson (now at Yale).

Donaldson provided fresh insights that could be applied to the very difficult problem of classifying smooth (diffeomorphic) four-dimensional manifolds, thereby opening a door that had previously been closed. Before his efforts, these manifolds were almost totally impenetrable. And though the mysteries largely remain, at least we now know where to start. On the other hand, Donaldson’s approach was exceedingly difficult to implement in practice. “We worked like dogs trying to extract information from it,” explained Harvard geometer Clifford Taubes.14

In 1994, Edward Witten and his physics colleague Nathan Seiberg came up with a much simpler method for studying four-dimensional geometry, despite the fact that their solution sprang from a theory in particle physics called supersymmetry, whereas Donaldson’s technique sprang from geometry itself. “This new equation had all of the information of the old one,” says Taubes, “but it’s probably 1000 times easier to get all the information out.”15 Taubes has used the Seiberg-Witten approach, as have many others, to further our understanding of geometric structures in four dimensions, a grasp that is still rather tentative but nevertheless indispensable for pondering questions about spacetime in general relativity.

For most four-dimensional manifolds, Witten showed that the number of solutions to the Seiberg-Witten equation depends solely on the topology of the manifold in question. Taubes then proved that the number of solutions to those equations, which is dictated by topology, is the same as the number of subspaces or curves of a certain type (or family) that can fit within the manifold. Knowing how many curves of this sort can fit in the manifold enables you to deduce the geometry of the manifold, while providing other information as well. So it’s fair to say that Taubes’s theorem has greatly advanced the study of such manifolds.
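
For the class of four-dimensional manifolds known as symplectic, Taubes’s result is often summarized by the slogan

\[ \mathrm{SW} = \mathrm{Gr} , \]

shorthand for the statement that the Seiberg-Witten count of solutions (SW) agrees with a Gromov-style count of the curves just described (Gr). Take the slogan as a mnemonic rather than a full theorem; the precise hypotheses matter.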

This whole excursion into the four-dimensional realm, going back to the work of the physicists Yang and Mills in the 1950s, represents a strange, still-unfolding episode in which physics has influenced math, which has in turn influenced physics. Though its origins lay in physics, Yang-Mills theory was aided by geometry, and it helped us better understand the forces that bind elementary particles together. That process was then reversed by the geometer Donaldson, who exploited Yang-Mills theory to gain insights into the topology and geometry of four-dimensional space. The same pattern, this give-and-take between physics and math, has continued with the work of the physicists Seiberg and Witten and beyond. Taubes summed up the dynamic history this way: “Once upon a time a Martian arrived, gave us the Yang-Mills equations and left. We studied them and out came Donaldson theory. Years later the Martian has returned and given us the Seiberg-Witten equations.”16 While I can’t guarantee that Taubes is right, that’s about as plausible an explanation as I’ve heard.

The second major accomplishment of geometric analysis—and many would place this at the very top—relates to the proof of the famous conjecture formulated in 1904 by Henri Poincaré, which for more than a century stood as the central problem of three-dimensional topology. One reason I consider it so beautiful is that the conjecture can be summed up in a single sentence that nevertheless kept people busy for one hundred years. Stated in simple terms, the conjecture says that a compact three-dimensional space is topologically equivalent to a sphere if every possible loop you can draw in that space can be shrunk to a point without tearing either that loop or the space in the process. We say that a space satisfying this requirement, as discussed earlier in this chapter, has a trivial fundamental group.
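
In more formal language, the statement reads: if M is a compact three-dimensional manifold (without boundary) whose fundamental group is trivial,

\[ \pi_1(M) = \{1\} , \]

meaning every loop in M can be shrunk to a point, then M is homeomorphic to the three-dimensional sphere S^3.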

The Poincaré conjecture sounds simple enough, but it’s not entirely obvious. Let’s take a two-dimensional analogue, even though the actual problem, and the hardest one to solve, is in three dimensions. Start with a sphere, say a globe, and place a rubber band on the equator. Now let’s gently nudge that rubber band up toward the north pole, keeping it on the surface at all times. If the rubber band is taut enough, when it reaches the north pole, it will shrink down virtually to a point. That’s not the case with a torus. Suppose a rubber band runs through the hole and around the other side. There’s no way to shrink that rubber band to a point without cutting right through the donut. A rubber band running along the outside of the donut can be nudged to the top of the donut and from there moved down to the donut’s inner ring, but it cannot shrink to a point while maintaining contact with the surface. To a topologist, therefore, a sphere is fundamentally different from a donut or any other manifold with a hole (or multiple holes, for that matter). The Poincaré conjecture is essentially a question about what a sphere really means in topology.
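
The rubber-band test can be recorded in the language of fundamental groups: every loop on the sphere shrinks, while a loop on the torus is labeled by two winding numbers, one counting trips around the hole and one counting trips through it. In symbols,

\[ \pi_1(S^2) = \{1\} , \qquad \pi_1(T^2) = \mathbb{Z} \oplus \mathbb{Z} . \]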

Before getting to the proof itself, I’m going to back up a few decades to the year 1979, when I was still at the Institute for Advanced Study. I had invited more than a dozen researchers from all over the world working in geometric analysis to come to Princeton and try to lay out a foundation for our field. I identified 120 outstanding questions in geometry, about half of which have since been completely solved. The Poincaré conjecture was not on that list. In part this was because there was no need to draw attention to the problem, given that it was arguably the most renowned in all of mathematics. It also didn’t make the list because I was looking for more narrowly defined problems that I felt could be answered in the end—and hopefully within a reasonable time frame. Although we usually learn something through our struggles, we make the most progress by solving problems; that’s what guides mathematicians more than anything else. At the time, however, no one knew exactly how to proceed with Poincaré.

One person who did not participate in our discussions was the mathematician Richard Hamilton, who was then at Cornell and has since settled down in the Columbia math department. Hamilton had just embarked on an ambitious project to find a good dynamical way of changing a complicated, unsmooth metric to a much smoother metric. This effort showed no sign of a short-term payoff, which was apparently how he liked it. He was interested in an extremely difficult set of equations related to the Ricci flow—an example of a geometric flow problem that we touched on earlier in this chapter. Essentially it’s a technique for smoothing out bumps and other irregularities to give convoluted spaces more uniform curvature and geometry so that their essential, underlying shapes might be more readily discerned. Hamilton’s project did not make my list of geometry’s 120 top problems, either, because he hadn’t published anything on it yet. He was still toying around with the idea without trying to make a big splash.
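
In symbols, the flow Hamilton was toying with evolves the metric g of a space, the object encoding all of its distances and angles, according to the equation

\[ \frac{\partial g_{ij}}{\partial t} = -2\, R_{ij} , \]

where R_{ij} is the Ricci curvature. Regions of positive curvature tend to shrink and regions of negative curvature to spread out, which is why the flow smooths the geometry in much the way heat flow evens out temperature.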

I found out what he was up to in 1979, when I gave a talk at Cornell. Hamilton didn’t think his equations could be used to solve the Poincaré conjecture; he just thought it was an interesting thing to explore. And I must admit that when I first saw the equations, I was skeptical about their utility, too. They looked too difficult to work with. But work with them he did, and in 1983, he published a paper revealing solutions to what are now called the Hamilton equations. In that paper, Hamilton had solved a special case of the Poincaré conjecture—namely, the case in which the Ricci curvature is positive. (We’ll say more about Ricci curvature, which has close ties to physics, in the next chapter.)

My initial skepticism prompted me to go through Hamilton’s paper, line by line, before I could believe it. But his argument quickly won me over—so much so, in fact, that the next thing I did was to get three of my graduate students from Princeton to work on the Hamilton equations right away. I immediately suggested to him that his approach could be used to solve Thurston’s geometrization conjecture about classifying three-dimensional space into specific geometries, which by extension would imply a general proof of Poincaré itself. At the time, I wasn’t aware of any other tools that were up to the job. To my surprise, Hamilton took up the problem with great vigor, pushing ahead with his investigation of Ricci flow over the next twenty years, mostly on his own but with some interactions with me and my students. (Those interactions picked up considerably in 1984, when both Hamilton and I moved to the University of California, San Diego, where we occupied adjacent offices. His seminars on Ricci flow were attended by all my students. We learned a lot from him, though I hope he might have picked up a useful tip or two from me as well. One of the things I missed most upon relocating to Harvard in 1987 was working in such close proximity to Hamilton.)

Regardless of who was around him, Hamilton stuck to his program with steadfast determination. All told, he published a half dozen or so long, important papers—about ninety pages each—and in the end, none of his arguments were wasted. All were ultimately used in the coming ascent of Mount Poincaré.

He showed, for instance, how roundish geometric objects would invariably evolve to spheres—in accordance with Poincaré—as space deformed under the influence of Ricci flow. But more complicated objects, he realized, would inevitably run into snags, producing folds and other singularities. There was no way around it, so he needed to know exactly what kind of singularities could crop up. To catalog the full range of possibilities that might occur, he drew on the work I’d done with Peter Li, which I’d brought to Hamilton’s attention some years before, though he generalized our results in impressive ways.

My contribution to this effort dates back to 1973, when I began using a new technique I had developed for harmonic analysis—a centuries-old area of mathematics that is used to describe equilibrium situations. My method was based on an approach called the maximum principle, which basically involves looking at worst-case scenarios. Suppose, for instance, you want to prove the inequality A < 0. You then ask: What’s the biggest value A can possibly assume? If you can show that even in the worst case—at its largest conceivable value—A is still less than zero, then you’ve finished the job and have my permission to take the rest of the day off. I applied this maximum principle to a variety of nonlinear problems, sometimes in collaboration with my former Hong Kong classmate S. Y. Cheng. Our work concerned questions arising in geometry and physics that are mathematically classified as elliptic. Although problems of this sort can be incredibly difficult, they are simplified by the fact that they do not involve any variation in time and can therefore be considered static and unchanging.
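
A textbook instance of the principle, given here in place of the more elaborate nonlinear estimates of that period: if a function u is subharmonic on a bounded region Ω, that is, if

\[ \Delta u \ge 0 \quad \text{throughout } \Omega , \]

an elliptic, time-independent condition, then its largest value is attained on the boundary of the region:

\[ \max_{\overline{\Omega}} u = \max_{\partial \Omega} u . \]

Checking the worst case along the boundary therefore controls the function everywhere inside.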

In 1978, Peter Li and I took on the more complicated, time-dependent or dynamic situation. In particular, we studied equations that describe how heat propagates in a body or manifold. Our best-known contribution in this area, the Li-Yau inequality, provides a mathematical description of how a variable like heat may change over time. (Hamilton looked at the change in a different variable, entropy, which measures the randomness of a system.) The Li-Yau relation is called an inequality because something, in this case the heat or entropy at one point in time, is bigger or smaller than something else, the heat or entropy at another time.
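
In its simplest form, assuming a complete n-dimensional manifold with nonnegative Ricci curvature, the Li-Yau inequality says that any positive solution u of the heat equation (∂u/∂t = Δu) satisfies

\[ \frac{|\nabla u|^2}{u^2} - \frac{1}{u}\frac{\partial u}{\partial t} \le \frac{n}{2t} \]

for all times t > 0. The left-hand side compares how sharply the solution varies in space with how quickly it changes in time, and the bound loosens, as it must, near the initial instant t = 0.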

Our approach provided a quantitative way of seeing how a singularity might develop in a nonlinear system: we charted the distance between two points over time. If the two points collided, so that the distance between them shrank to zero, you got a singularity, and understanding those singularities was the key to understanding almost everything about how heat moves. In particular, our technique offered a way of getting as close to the singularity as possible, showing what happened just before the collision occurred, such as how fast the points were moving, which is a bit like trying to reconstruct what happened before a car crash.

To obtain a close-up view of the singularity—or resolve it, as we say in mathematics—we developed a special kind of magnifying glass. In essence, we zoomed in on the region at which space pinches down to a single point. Then we enlarged that region, smoothing out the creases or pinch points in the process. We did this not once or twice but an infinite number of times. Not only did we enlarge the space, so that we could see the whole picture, but we also enlarged time, in a sense, which effectively meant slowing it down. The next step was to compare that description of the point of singularity—or, equivalently, at the limit after an infinite number of blow-ups—with descriptions of the system before the two points collide. The Li-Yau inequality provides an actual measure of the changes in the “before” and “after” shots.
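
In the Ricci flow setting, where such blow-ups have become most familiar, the magnifying glass takes a standard form (the notation here is illustrative rather than lifted from the original papers): given times t_k approaching the singular time and magnification factors λ_k growing without bound, one examines the rescaled metrics

\[ g_k(t) = \lambda_k \, g\!\left( t_k + \frac{t}{\lambda_k} \right) . \]

Multiplying the metric by λ_k enlarges distances, while dividing t by λ_k stretches out, and thereby slows down, time; each rescaled metric still satisfies the flow equation, and the limit after infinitely many such magnifications is the close-up model of the singularity.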

Hamilton took advantage of our approach to get a more detailed look at the Ricci flow, probing the structure of the singularities that might form therein. Incorporating our inequality into his Ricci flow model was a difficult task, which took him nearly five years, because the setting in which his equations resided was far more nonlinear—and hence more complex—than ours.

One of Hamilton’s approaches was to focus on a special class of solutions that appear stationary in a particular frame of reference, much as, in general relativity, you can pick a rotating reference frame in which the people and objects on a spinning carousel are not moving, which makes the situation much simpler to analyze. By picking stationary solutions that were easier to understand, Hamilton figured out the best way of incorporating the Li-Yau estimation methods into his equations. This, in turn, afforded him a clearer picture of Ricci flow dynamics, that is, of how things move and evolve. In particular, he was interested in how singularities arise through these complex motions in spacetime. Ultimately, he was able to describe the structure of all possible singularities that might occur (although he was unable to show that all of these singularities would actually occur). Of the singularities that Hamilton identified, all but one were manageable: they could be removed by topological “surgery,” a notion that he introduced and studied extensively in four dimensions. The surgical procedure is quite complicated, but if it can be performed successfully, one can show that the space under study is indeed equivalent to a sphere, just as Poincaré had posited.
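
These self-similar solutions are known today as Ricci solitons. In the gradient case, the metric satisfies

\[ R_{ij} + \nabla_i \nabla_j f = \lambda \, g_{ij} \]

for some function f and constant λ: under the flow, such a metric changes only by rescaling and by sliding along itself, so in the right “frame” it appears to stand still. The cigar-shaped singularity of the next paragraph is modeled on exactly such a soliton.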

But there was one kind of singularity, a cigar-shaped protuberance, that Hamilton could not dispose of in this fashion. If he could show that the “cigar” does not appear, he would understand the singularity problem much better and be a big step closer to solving both the Poincaré and the Thurston conjectures. The key to doing so, Hamilton concluded, lay in adapting the Li-Yau estimate to the more general case, in which curvature does not have to be positive. He immediately enlisted me to work with him on this problem, which turned out to be surprisingly obstinate. Yet we made considerable progress and felt it was only a matter of time before we’d see the project through.

We were surprised when, in November 2002, the first of three papers on geometric applications of Ricci flow techniques was posted on the Internet by Grisha Perelman, a geometer based in St. Petersburg, Russia. The second and third papers appeared online less than a year later. In these papers, Perelman aimed to “carry out some details of the Hamilton program” and “give a brief sketch of the proof of the geometrization conjecture.”17 He, too, had used the Li-Yau inequality to control the behavior of singularities, though he incorporated these equations in a different way than Hamilton had, while introducing many innovations of his own.

In a sense, Perelman’s papers came out of the blue. No one knew that he’d even been working on Ricci flow-related problems, as he’d made a name for himself in an entirely different branch of mathematics, metric geometry, by solving a famous conjecture put forth by the geometers Jeff Cheeger and Detlef Gromoll. But in the years prior to his 2002 online publication, Perelman had largely dropped out of circulation. Occasionally, mathematicians would receive e-mails in which he inquired about the literature on Ricci flow. But nobody guessed that Perelman was seriously working on the Ricci flow as a way to solve the Poincaré conjecture, as he hadn’t told many people (or perhaps anyone) exactly what he was up to. In fact, he’d been keeping such a low profile that many of his peers weren’t sure he was still doing mathematics at all.

Equally surprising were the papers themselves—a scant sixty-eight pages in total—which meant that it took people a long time to digest their contents and flesh out the key arguments outlined in his approach. Among other advances, Perelman showed how to get past the cigar singularity problem that Hamilton had not yet resolved. Indeed, it is now widely acknowledged that the program pioneered by Hamilton and carried through by Perelman has solved the long-standing Poincaré problem and the more recent geometrization conjecture.

If that consensus is correct, the collective efforts of Hamilton and Perelman represent a great triumph for mathematics and perhaps the crowning achievement of geometric analysis. These contributions far exceed the established standards for a Fields Medal, which Perelman was duly awarded and which Hamilton would have deserved as well had he not been ineligible under the prize’s age restriction. (Winners must be no older than forty.) So far as geometric analysis is concerned, I estimate that roughly half of the theorems, lemmas, and other tools developed in this field over the previous three decades were incorporated in the work of Hamilton and Perelman that culminated in proofs of the Poincaré and Thurston geometrization conjectures.

These are some of the nails that the hammer of geometric analysis has helped to drive home. But you may recall that I promised to describe the three biggest successes of geometric analysis. Advances in four-dimensional topology and the Poincaré conjecture, along with the Ricci flow methods that led to its proof, constitute the first two. That leaves number three—a matter I’ve given considerable thought to and which will be taken up next.