
The Ego Tunnel: The Science of the Mind and the Myth of the Self - Thomas Metzinger (2009)

Part I. THE CONSCIOUSNESS PROBLEM

Chapter 2. A TOUR OF THE TUNNEL

THE ONE-WORLD PROBLEM: THE UNITY OF CONSCIOUSNESS

Once upon a time, I had to write an encyclopedia article on “Consciousness.” The first thing I did was to photocopy all existing encyclopedia articles on the topic I could find and track down the historical references. I wanted to know whether in the long history of Western philosophy there was a common philosophical insight running like a thread through humanity’s perennial endeavor to understand the conscious mind. To my surprise, I found two such essential insights.

The first is that consciousness is a higher-order form of knowledge accompanying thoughts and other mental states. The Latin concept of conscientia is the original root from which all later terminologies in English and the Romance languages developed. This in turn is derived from cum (“with,” “together”) and scire (“to know”). In classical antiquity, as well as in the scholastic philosophy of the Christian Middle Ages, conscientia typically referred either to moral conscience or to knowledge shared by certain groups of people—again, most commonly of moral ideas. Interestingly, being truly conscious was connected to moral insight. (Isn’t it a beautiful notion that becoming conscious in the true sense could be related to moral conscience? Philosophers would have a new definition of the entity they call a zombie—an amoral person, ethically fast asleep but with eyes wide open.)1

In any case, many of the classical theories stated that becoming conscious had to do with installing an ideal observer in your mind, an inner witness providing moral guidance as well as a hidden, entirely private knowledge about the contents of your mental states. Consciousness connected your thoughts with your actions by submitting them to the moral judgment of the ideal observer. Whatever we may think about these early theories of consciousness-as-conscience today, they certainly possessed philosophical depth and great beauty: Consciousness was an inner space providing a point of contact between the real human being and the ideal one inside, the only space in which you could be together with God even before death. From the time of René Descartes (1596-1650), however, the philosophical interpretation of conscientia simply as higher-order knowledge of mental states began to predominate. It has to do with certainty; in an important sense, consciousness is knowing that you know while you know.

The second important insight seems to be the notion of integration: Consciousness is what binds things together into a comprehensive, simultaneous whole. If we have this whole, then a world appears to us. If the information flow from your sensory organs is unified, you experience the world. If your senses come apart, you lose consciousness. Philosophers like Immanuel Kant or Franz Brentano have theorized about this “unity of consciousness”: What exactly is it that, at every single point in time, blends all the different parts of your conscious experience into one single reality? Today it is interesting to note that the first essential insight—knowing that you know something—is mainly discussed in philosophy of mind,2 whereas the neuroscience of consciousness focuses on the problem of integration: how the features of objects are bound together. The latter phenomenon—the One-World Problem of dynamic, global integration—is what we must examine if we want to understand the unity of consciousness. But in the process we may discover how both these essential questions—the top-down version discussed in philosophy of mind and the bottom-up version discussed in the neurosciences—are two sides of the same coin.

What would it be like to have the experience of living in many worlds at the same time, of genuine parallel realities opening up in your mind? Would there be parallel observers, too? The One-World Problem is so simple that it is easily overlooked: In order for a world to appear to us, it has to be one world first. For most of us, it seems obvious that we live our conscious lives in a single reality, and the world we wake up to every morning is the same world we woke up to the day before. Our tunnel is one tunnel; there are no back alleys, side streets, or alternative routes. Only people who have suffered severe psychiatric disorders or have experimented with major doses of hallucinogens can perhaps conceive of what it means to live in more than one tunnel at a time. The unity of consciousness is one of the major achievements of the brain: It is the not-so-simple phenomenological fact that all the contents of your current experience are seamlessly correlated, forming a coherent whole, the world in which you live your life.

But the problem of integration has to be solved on several subglobal levels first. Imagine you are no longer able to bind the various features of a seen object—its color, surface texture, edges, and so on—into a single entity. In a disorder known as apperceptive agnosia, no coherent visual model emerges on the level of conscious experience, despite the fact that all the patient’s low-level visual processes are intact. Sufferers typically have a fully intact visual field that is consciously perceived, but they are unable to recognize what it is they are looking at. They cannot, for example, distinguish shapes from one another, match shapes with each other, or copy drawings. Apperceptive agnosia is usually caused by a lack of oxygen supply to the brain—for instance, through carbon monoxide poisoning. Patients may well have a coherent, integrated visual world-model, but certain types of visual information are no longer available to them to act upon. On a functional level, they cannot use gestalt grouping cues or figure/ground cues to organize their visual field.3 Now imagine you are no longer able to integrate your perception of an object with the categorical knowledge that would allow you to identify it, and you consequently cannot subjectively experience what it is you are perceiving—as in astereognosia (the inability to recognize objects by touch, typically associated with lesions in two regions of the primary somatosensory cortex) or autotopagnosia (the inability to identify and name one’s own body parts, also associated with cortical lesions). There are also patients suffering from what has been called disjunctive agnosia, who cannot integrate seeing and hearing—whose conscious life seems to be taking place in a movie with the wrong soundtrack. As one patient described his experience, someone “was standing in front of me and I could see his mouth moving, but I noticed that the mouth moving did not belong to what I heard.”4

Now, what if everything came apart? There are neurological patients with wounded brains who describe “shattered worlds,” but in these cases there is at least some kind of world left—something that could be experienced as having been shattered in the first place. If the unified, multimodal scene—the Here and Now, the situation as such—dissolves completely, we simply go blank. The world no longer appears to us.

A number of new ideas and hypotheses in the neurosciences suggest how this “world-binding” function works. One such is the dynamical core hypothesis,5 which posits that a highly integrated and internally differentiated neurodynamic pattern emerges from the constant background chatter of millions of neurons incessantly firing away. Giulio Tononi, a neuroscientist at the University of Wisconsin-Madison who is a leading advocate of this hypothesis, speaks of a “functional cluster” of neurons, whereas I have coined the concept of causal density.6

The basic idea is simple: The global neural correlate of consciousness is like an island emerging from the sea—as noted, it is a large set of neural properties underlying consciousness as a whole, underpinning your experiential model of the world in its totality at any given moment. The global NCC has many different levels of description: Dynamically, we can describe it as a coherent island, made of densely coupled relations of cause and effect, emerging from the waters of a much less coherent flow of neural activity. Or we could adopt a neurocomputational perspective and look at the global NCC as something that results from information-processing in the brain and hence functions as a carrier of information. At this point, it becomes something more abstract, which we might envision as an information cloud hovering above a neurobiological substrate. The “border” of this information cloud is functional, not physical; the cloud is physically realized by widely distributed firing neurons in your head. Just like a real cloud, which is made of tiny water droplets suspended in the air, the neuronal activation pattern underlying the totality of your conscious experience is made of millions of tiny electrical discharges and chemical transitions at the synapses. In strict terms, it has no fixed location in the brain, though it is coherent.

But why is it coherent? What holds all the droplets—all the micro-events—together? We do not yet know, but there are some indications that the unified whole appears by virtue of the temporal fine-structure characterizing the conscious brain’s activity—that is, the rhythmic dance of neuronal discharges and synchronous oscillations. This is why the border of this whole is a functional border, outlining the island of consciousness in an ocean made up of a myriad of less integrated and less densely coupled neural micro-events. Whatever information is within this cloud of firing neurons is conscious information. Whatever is within the cloud’s boundary (the “dynamical core”) is part of our inner world; whatever is outside of it is not part of our subjective reality. Conscious experience can thus be seen as a special global property of the overall neural dynamics of your brain, a special form of information-processing based on a globally integrated data format.

We also possess the first mathematical instruments that allow us to describe the causal complexity within the dynamical core of consciousness. Technical details aside, they show us how self-organization in our brains strikes an optimal balance between integration and segregation, creating the wonderful richness and diversity of conscious contents and the unity of consciousness at the same time.

What does all this mean? What we want for consciousness is not a uniform state of global synchrony, a state in which many nerve cells simply fire together simultaneously. We find such uniformity in states of unconsciousness such as deep sleep and during epileptic seizures; in these cases, the synchrony wipes out all the internal complexity: It is as if the synchrony had glossed over all the colors and shapes, the objects making up our world. We want large-scale coherence spanning many areas of the brain and flexibly binding many different contents into a conscious hierarchy: the letters into the page, the page into the book, the hand holding the book into your bodily self, and the self sitting in a chair in the room and understanding the words. We want a unity of consciousness that—internally—is as differentiated as possible. On the other hand, maximal differentiation is not optimal, either, because then our world would fall apart into unconnected pieces of mental content and we would lose consciousness. The trick with consciousness is to achieve just the right trade-off between the parts and the whole—and at any single moment a widely distributed network of neurons in the brain seems to achieve just that, as a cloud of single nerve cells, dispersed in space, fires away in intricate patterns of synchronous activity, perhaps with one pattern becoming embedded in the next. Just like the water droplets that form a real cloud, some elements leave the aggregate at any given moment, while others join it. Consciousness is a large-scale, unified phenomenon emerging from a myriad of physical micro-events. As long as a sufficiently high degree of internal correlation and causal coupling allows this island of dancing micro-events in your brain to emerge, you live in a single reality. A single, unified world appears to you.
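
To make this trade-off a little more tangible, here is a minimal numerical sketch—my own toy illustration using coupled phase oscillators (a standard Kuramoto model), not Tononi’s formalism or anything from the studies mentioned above, and with all parameter values chosen arbitrarily. With zero coupling the “neurons” keep their individual rhythms but never cohere; with very strong coupling they collapse into the uniform, seizure-like synchrony that wipes out internal complexity; only intermediate coupling lets a coherent yet internally differentiated pattern emerge.

```python
# Toy Kuramoto-oscillator sketch of the integration/segregation trade-off.
# Illustrative only: all values (N, frequencies, coupling strengths) are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 100                              # number of model "neurons" (phase oscillators)
omega = rng.normal(10.0, 1.0, N)     # natural frequencies in rad/s (assumed)
dt, steps = 0.001, 20000             # integration step and number of steps

def simulate(K):
    """Integrate the mean-field Kuramoto model with coupling strength K."""
    theta = rng.uniform(0, 2 * np.pi, N)
    R_values, freq_sum = [], np.zeros(N)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()          # mean field
        R, psi = np.abs(z), np.angle(z)
        dtheta = omega + K * R * np.sin(psi - theta)
        theta += dt * dtheta
        R_values.append(R)
        freq_sum += dtheta
    mean_R = np.mean(R_values[steps // 2:])    # integration: global phase coherence
    freq_spread = np.std(freq_sum / steps)     # differentiation: distinct average rates
    return mean_R, freq_spread

for K in (0.0, 2.0, 20.0):
    R, spread = simulate(K)
    print(f"coupling K={K:5.1f}  integration R={R:.2f}  differentiation (rate spread)={spread:.2f}")
```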

This emergence can happen during “offline states” as well: In dreams, however, the binding of contents does not work quite as well, which is why your dream reality is frequently so bizarre, why you have difficulty focusing your attention, why scenes follow each other so quickly. Nevertheless, there is still an overall situation, you are still present, and that is why phenomenal experience continues. But when you move into deep sleep and the island dissolves back into the sea, your world disappears as well. We humans have known this since Greek antiquity: Sleep is the little brother of death; it means letting go of the world.7

One of the intriguing characteristics of current research into consciousness is how old philosophical ideas reappear in the best of cutting-edge neuroscience—in new disguise, as it were. Aristotle and Franz Brentano alike pointed out that consciously perceiving must also mean being aware of the fact that one is consciously perceiving, right now, at this very moment. In a certain sense, we must perceive the perceiving while it happens. If this idea is true, the brain state creating your conscious perception of the book in your hand right now must have two logical parts: one portraying the book and one continuously representing the state itself. One part points at the world, and one at itself. Conscious states could be exactly those states that “metarepresent” themselves while representing something else. This classical idea has logical problems, but the insight itself can perhaps be preserved in an empirically plausible framework.

Work being done by Dutch neuroscientist Victor Lamme in Amsterdam and in Stanislas Dehaene’s lab at the NeuroSpin Center in the CEA campus of Saclay and at the Pitié-Salpêtrière Hospital in Paris converges on the central importance of so-called recurrent connections as a functional basis for consciousness.8 In conscious visual processing, for example, high-level information is dynamically mapped back to low-level information, but it all refers to the same retinal image. Each time your eyes land on a scene (remember, your eye makes about three saccades per second), there is a feedforward-feedback cycle about the current image, and that cycle gives you the detailed conscious percept of that scene. You continuously make conscious snapshots of the world via these feedforward-feedback cycles. In a more general sense, the principle is that the almost continuous feedback-loops from higher to lower areas create an ongoing cycle, a circular nested flow of information, in which what happened a few milliseconds ago is dynamically mapped back to what is coming in right now. In this way, the immediate past continuously creates a context for the present—it filters what can be experienced right now. We see how an old philosophical idea is refined and spelled out by modern neuroscience on the nuts-and-bolts level. A standing context-loop is created. And this may be a deeper insight into the essence of the world-creating function of conscious experience: Conscious information seems to be integrated and unified precisely because the underlying physical process is mapped back onto itself and becomes its own context. If we apply this idea not to single representations, such as the visual experience of an apple in your hand, but to the brain’s unified portrait of the world as a whole, then the dynamic flow of conscious experience appears as the result of a continuous large-scale application of the brain’s prior knowledge to the current situation. If you are conscious, the overall process of perceiving, learning, and living creates a context for itself—and that is how your reality turns into a lived reality.
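
For readers who like to see mechanisms spelled out, here is a deliberately simple sketch of such a standing context-loop—a toy recurrent update of my own, not the models used by Lamme or Dehaene, with arbitrary weights and made-up inputs. The point it illustrates is only this: because the previous internal state is fed back and mixed with the current input, the very same present input is processed differently depending on the immediate past.

```python
# Minimal recurrent-update sketch (illustrative only; weights and inputs are arbitrary).
import numpy as np

rng = np.random.default_rng(1)
n_in, n_state = 4, 8
W_in = rng.normal(0, 0.5, (n_state, n_in))      # feedforward weights (assumed)
W_rec = rng.normal(0, 0.5, (n_state, n_state))  # recurrent (feedback) weights (assumed)

def step(state, x):
    """One feedforward-feedback cycle: the new state depends on the input AND the previous state."""
    return np.tanh(W_in @ x + W_rec @ state)

x_now = np.array([1.0, 0.0, 0.0, 0.0])          # the "current retinal image"
context_a = np.array([0.0, 1.0, 0.0, 0.0])      # two different immediate pasts
context_b = np.array([0.0, 0.0, 1.0, 0.0])

state_a = step(step(np.zeros(n_state), context_a), x_now)
state_b = step(step(np.zeros(n_state), context_b), x_now)

# Identical present input, different resulting states: the immediate past acts as context.
print("difference caused by context:", np.linalg.norm(state_a - state_b))
```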

Another fascinating scientific route into the One-World Problem is increasingly receiving attention. It has long been known that in deep meditation the experience of unity and holistic integration is particularly salient. Thus, if we want to know what consciousness is, why not consult those people who cultivate it in its purest form? Or even better, why not use our modern neuroimaging techniques to look directly into their brains while they maximize the unity and holism of their minds?

Antoine Lutz and his colleagues at the W. M. Keck Laboratory for Functional Brain Imaging and Behavior at the University of Wisconsin studied Tibetan monks who had experienced at least ten thousand hours of meditation. They found that meditators self-induce sustained high-amplitude gamma-band oscillations and global phase-synchrony, visible in EEG recordings made while they are meditating.9 The high-amplitude gamma activity found in some of these meditators seems to be the strongest reported in the scientific literature. Why is this interesting? As Wolf Singer and his coworkers have shown, gamma-band oscillations, caused by groups of neurons firing away in synchrony about forty times per second, are one of our best current candidates for creating unity and wholeness (although their specific role in this respect is still very much debated). For example, on the level of conscious object-perception, these synchronous oscillations often seem to be what makes an object’s various features—the edges, color, and surface texture of, say, an apple—cohere as a single unified percept. Many experiments have shown that synchronous firing may be exactly what differentiates an assembly of neurons that gains access to consciousness from one that also fires away but in an uncoordinated manner and thus does not. Synchrony is a powerful causal force: If a thousand soldiers walk over a bridge together, nothing happens; however, if they march across in lock-step, the bridge may well collapse.

The synchrony of neural responses also plays a decisive role in figure-background segregation—that is, the pop-out effect that lets us perceive an object against a background, allowing a new gestalt to emerge from the perceptual scene. Ulrich Ott is Germany’s leading meditation researcher, working at the Bender Institute of Neuroimaging at the Justus-Liebig-Universität in Giessen. He confronted me with an intriguing idea: Could deep meditation be the process, perhaps the only process, in which human beings can sometimes turn the global background into the gestalt, the dominating feature of consciousness itself? This assumption would fit in nicely with an intuition held by many, among others Antoine Lutz, namely that the fundamental subject/object structure of experience can be transcended in states of this kind.

Interestingly, this high-amplitude oscillatory activity in the brains of experienced meditators emerges over several dozen seconds. They can’t just switch it on; instead, it begins to unfold only when the meditator manages effortlessly to “step out of the way.” The full-blown meditative state emerges only slowly, but this is exactly what the theory predicts: As a gigantic network phenomenon, the level of neural synchronization underlying the unity of consciousness will require more time to develop, because the amount of time required to achieve synchronization is proportional to the size of the neural assembly—in meditation, an orchestrated group of many hundreds of millions of nerve cells must be formed. The oscillations also correlate with the meditators’ verbal reports of the intensity of the meditative experience. Another interesting finding is that there are significant postmeditative changes to the baseline activity of the brain. Apparently, repeated meditative practice changes the deep structure of consciousness. If meditation is seen as a form of mental training, it turns out that oscillatory synchrony in the gamma range opens just the right time window needed to promote synaptic change efficiently.
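
To give a rough sense of how such phase synchrony is quantified, the following sketch computes a phase-locking value (PLV), one common measure of synchrony between two signals. It uses simulated 40-hertz “channels” whose phases are known by construction rather than real EEG recordings, and every number in it is an assumption of mine, not a value taken from the Lutz study.

```python
# Toy phase-locking computation on simulated 40 Hz "channels" (not the Lutz et al. pipeline).
import numpy as np

rng = np.random.default_rng(2)
fs, seconds, f_gamma = 1000, 5, 40.0           # sampling rate, duration, gamma frequency (assumed)
t = np.arange(0, seconds, 1 / fs)

base_phase = 2 * np.pi * f_gamma * t
locked_1 = base_phase + 0.2 * rng.standard_normal(t.size)              # small independent jitter
locked_2 = base_phase + 0.2 * rng.standard_normal(t.size)
drift_1 = base_phase + np.cumsum(0.05 * rng.standard_normal(t.size))   # independent phase drift
drift_2 = base_phase + np.cumsum(0.05 * rng.standard_normal(t.size))

def plv(phase_a, phase_b):
    """Phase-locking value: 1 = perfectly locked phases, near 0 = unrelated phases."""
    return np.abs(np.exp(1j * (phase_a - phase_b)).mean())

print("PLV, synchronized channels:", round(plv(locked_1, locked_2), 2))   # close to 1
print("PLV, drifting channels    :", round(plv(drift_1, drift_2), 2))     # much lower
```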

To sum up, it would seem that feature-binding occurs when the widely distributed neurons that represent the reflection of light, the surface properties, and the weight of, say, this book start dancing together, firing at the same time. This rhythmic firing pattern creates a coherent cloud in your brain, a network of neurons representing a single object—the book—for you at a particular moment. Holding it all together is coherence in time. Binding is achieved in the temporal dimension. The unity of consciousness is thus seen to be a dynamic property of the human brain. It spans many levels of organization, it self-organizes over time, and it constantly seeks an optimal balance between the parts and the whole as they gradually unfold. It shows up on the EEG as a slowly evolving global property, and, as demonstrated by our meditators, it can be cultivated and explored from the inside, from the first-person perspective. Please also see the interview with Wolf Singer at the end of this chapter.

But the next problem in formulating a complete theory of consciousness is more difficult.

THE NOW PROBLEM: A LIVED MOMENT EMERGES

Here is something that, as a philosopher, I have always found both fascinating and deeply puzzling: A complete scientific description of the physical universe would not contain the information as to what time is “now.” Indeed, such a description would be free of what philosophers call “indexical terms.” There would be no pointers or little red arrows to tell you “You are here!” or “Right now!” In real life, this is the job of the conscious brain: It constantly tells the organism harboring it what place is here and what time is now. This experiential Now is the second big problem for a modern theory of consciousness.10

The biological consciousness tunnel is not a tunnel only in the simple sense of being an internal model of reality in your brain. It is also a time tunnel—or, more precisely, a tunnel of presence. Here we encounter a subtler form of inwardness—namely, an inwardness in the temporal domain, subjectively experienced.

The empirical story will have to deal with short-term memory and working memory, with recurrent loops in neural networks, and with the binding of single events into larger temporal gestalts (often simply called the psychological moment). The truly vexing aspect of the Now Problem is conceptual: It is very hard to say what exactly the puzzle consists of. At this point, philosophers and scientists alike typically quote a passage from the fourteenth chapter of the eleventh book of St. Augustine’s Confessions. Here the Bishop of Hippo famously notes, “What then is time? If no one asks me, I know. If I wish to explain it to one that asketh, I know not.” The primary difficulty with the Now Problem is not the neuroscience but how to state it properly. Let me try: Consciousness is inwardness in time. It makes the world present for you by creating a new space in your mind—the space of temporal internality. Everything is in the Now. Whatever you experience, you experience it as happening at this moment.

You may disagree at first: Is it not true that my conscious, episodic memory of my last walk on the beach refers to something in the past? And is it not true that my conscious thoughts and plans about next weekend’s trip to the mountains refer to the future? Yes, this is true—but they are always embedded in a conscious model of the self as remembering the starfish on the beach right now, as planning a new route to the peak at this very moment.

A major function of conscious experience consists, as the great British psychologist Richard Gregory has put it, in “flagging the dangerous present.”11 One essential function of consciousness is to help an organism stay in touch with the immediate present—with all those properties in both itself and the environment that may change fast and unpredictably. This idea relates to a classic concept introduced by Bernard Baars of the Neurosciences Institute in San Diego, best known for his book A Cognitive Theory of Consciousness, in which he outlines his global-workspace theory as a model for consciousness. His fruitful metaphor of consciousness as the content of a global workspace of the mind implies that only the critical aspects are represented in consciousness. Conscious information is exactly that information that must be made available for every single one of your cognitive capacities at the same time. You require a conscious representation only if you do not know exactly what will happen next and which capacities (attention, cognition, memory, motor control) you will need to react properly to the challenge around the corner. This critical information must remain active so that different modules or brain mechanisms can access it simultaneously.

My idea is that this simultaneity is precisely why we need the conscious Now. In order to effect this, our brains learned to simulate temporal internality. In order to create a common platform—a blackboard on which messages to our various specialized brain areas can be posted—we need a common frame of reference, and this frame of reference is a temporal one. Although, strictly speaking, no such thing as Now exists in the outside world, it proved adaptive to organize the inner model of the world around such a Now—creating a common temporal frame of reference for all the mechanisms in the brain so that they can access the same information at the same time. A certain point in time had to be represented in a privileged manner in order to be flagged as reality. The past is outside-time, as is the future. But there is also inside-time, this time, the Now, the moment you’re currently living. All your conscious thoughts and feelings take place in this lived moment.
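
Baars’s blackboard metaphor can be caricatured in a few lines of code. The sketch below is only my toy rendering of that metaphor, not his formal model, and all the module names are placeholders: one item at a time is posted to the workspace so that several specialized “modules”—attention, memory, motor control—can access the same information simultaneously.

```python
# Toy "global workspace" in the spirit of Baars's metaphor (illustrative only).
from typing import Callable, Dict

class GlobalWorkspace:
    def __init__(self) -> None:
        self.modules: Dict[str, Callable[[str], None]] = {}

    def register(self, name: str, handler: Callable[[str], None]) -> None:
        """A specialized capacity (attention, memory, motor control, ...) subscribes."""
        self.modules[name] = handler

    def broadcast(self, content: str) -> None:
        """The one item currently made available to all modules at the same time."""
        for handler in self.modules.values():
            handler(content)

ws = GlobalWorkspace()
ws.register("attention", lambda c: print(f"attention orients to: {c}"))
ws.register("memory", lambda c: print(f"memory stores: {c}"))
ws.register("motor", lambda c: print(f"motor control prepares a response to: {c}"))

# Unpredictable, critical information gets posted to the blackboard:
ws.broadcast("unexpected movement on the left")
```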

How are we going to find this special form of inwardness in the biological brain? Of course, conscious time experience has other elements. We experience simultaneity. (And have you ever noticed that you cannot will two different actions at the same moment or simultaneously make two decisions?) We experience succession: of the notes in a piece of music, of two thoughts drifting by in our minds, one after the other. We experience duration: A musical tone or an emotion may stay constant over time. From all this emerges what the neuroscientist Ernst Pöppel, one of the pioneer researchers in this field, and his colleague Eva Ruhnau, director of the University of Munich’s Human Science Center, describe as a temporal gestalt:12 Musical notes can form a motif—a bound pattern of sounds constituting a whole that you recognize as such from one instant to the next. Similarly, individual thoughts can form more complex conscious experiences, which may be described as unfolding patterns of reasoning.

By the way, there is an upper limit to what you can consciously experience as taking place in a single moment: It is almost impossible to experience a musical motif, a rhythmic piece of poetry, or a complex thought that lasts for more than three seconds as a unified temporal gestalt. When I was studying philosophy in Frankfurt, professors typically did not extemporize during their lectures; instead, they read from a manuscript for ninety minutes, firing rounds of excessively long, nested sentences, one after another, at their students. I suspected that these lectures were not aimed at successful communication at all (although they were frequently about it) but that this was a kind of intellectual machismo. (“I am going to demonstrate the inferiority of your intelligence to you by spouting fantastically complex and seemingly endless sentences. They will make your short-term buffer collapse, because you cannot integrate them into a single temporal gestalt anymore. You won’t understand a thing, and you will have to admit that your tunnel is smaller than mine!”)

I assume many of my readers have encountered this type of behavior themselves. It is a psychological strategy we inherited from our primate ancestors, a slightly more subtle form of ostentatious display behavior that made its way into academia. What enables this new kind of machismo is the limited capacity of the moving window of the Now. Looking through this window, we see enduring objects and meaningful chains of events. Underlying all these experiences of duration, succession, and the formation of temporal wholes is the rock-solid bed of presence. In order to understand what the appearance of a world is, we urgently need a theory of how the human brain generates this temporal sense of presence.

Presence is a necessary condition for conscious experience. If the brain could solve the One-World Problem but not the Now Problem, a world could not appear to you. In a deep sense, appearance is simply presence, and the subjective sense of temporal immediacy is the definition of an internal space of time.

Is it possible to transcend this subjective Now-ness, to escape the tunnel of presence? Imagine you are lost in a daydream. Completely. Your conscious mind is not “flagging the dangerous present” anymore. Those animals in the history of our planet that did this too often did not stand a chance of becoming our ancestors; they were eaten by other, less pensive animals. But what actually happens at the moment you fully lose contact with your present surroundings, say, in a manifest daydream? You are suddenly somewhere else. Another lived Now emerges in your mind. Now-ness is an essential feature of consciousness.

And, of course, it is an illusion. As modern-day neuroscience tells us, we are never in touch with the present, because neural information-processing itself takes time. Signals take time to travel from your sensory organs along the multiple neuronal pathways in your body to your brain, and they take time to be processed and transformed into objects, scenes, and complex situations. So, strictly speaking, what you are experiencing as the present moment is actually the past.

At this point, it becomes clear why philosophers speak about “phenomenal” consciousness or “phenomenal” experience. A phenomenon is an appearance. The phenomenal Now is the appearance of a Now. Nature optimized our time experience over the last couple of million years so that we experience something as taking place now because this arrangement is functionally adequate in organizing our behavioral space. But from a more rigorous, philosophical point of view, the temporal inwardness of the conscious Now is an illusion. There is no immediate contact with reality.

This point gives us a second fundamental insight into the tunnel-like nature of consciousness: The sense of presence is an internal phenomenon, created by the human brain. Not only are there no colors out there, but there is also no present moment. Physical time flows continuously. The physical universe does not know what William James called the “specious present,” nor does it know an expanded, or “smeared,” present moment. The brain is an exception: For certain physical organisms, such as us, it has proved viable to represent the path through reality as if there were an extended present, a chain of individual moments through which we live our lives. I like James’s metaphor, according to which the present is not a knife-edge but a saddleback with a breadth of its own, on which we sit perched and from which we look in two directions into time. Of course, from the illusory smearing of the present moment in human consciousness it does not follow that some kind of nonsmeared present could not exist on the level of physics—but remember, a complete physical description of the universe would not contain the word “now”; there would be no little red arrow telling us “This is your place in the temporal order.” The Ego Tunnel is just the opposite of a God’s-eye view of the world. It has a Now, a Here—and a Me, being there now.

The lived Now has a fascinating double aspect. From an epistemological point of view, it is an illusion (the present is an appearance). The moving window of the conscious Now, though, has proved functionally advantageous for creatures like us: It successfully bundles perception, cognition, and conscious will in a way that selects just the right parameters of interaction with the physical world, in environments like those in which our ancestors fought for survival. In this sense, it is a form of knowledge: functional, nonconceptual knowledge about what will work with this kind of body and these kinds of eyes, ears, and limbs.

What we experience as the present moment embodies implicit knowledge about how we can integrate our sensory perceptions with our motor behavior in a fluid and adaptive manner. However, this type of knowledge applies only to the kind of environment we found on the surface of this planet. Other conscious beings, in other parts of the universe, might have evolved completely different forms of time experience. They might be frozen into an eternal Now or have a fantastically high resolution, living for only a few of our Earth minutes and experiencing more intense individual moments than a million human beings experience in a lifetime. They could be masters of boredom, subjects of an extremely slow passage of time. A good (and more difficult) question is how much room for variation there is in terms of subjective time experience. If my argument is sound, conscious minds can be situated only in one single, real Now at a time—because this is one of the essential features of consciousness. Is it logically possible to live in two or more absolutely equivalent Nows at the same time, to have a subjective perspective originating from multiple points in the temporal order? I don’t think so, because there would no longer be one single, present “self ” who had these experiences. Moreover, it’s hard to imagine a situation in which experiencing multiple lived presents might have been adaptive. Thus, although no such thing as an extended present exists from a strict philosophical point of view or from the perspective of a physicist, there must be deep biological truths and a profound evolutionary wisdom behind the way conscious beings such as ourselves happen to represent time in the brain.

Even given a radically materialist view of mind and consciousness, one must concede that there is a complex physical property that (as far as we know) exists only in biological nervous systems on this planet. This new property is a virtual window of presence, and it is implemented in the brains of vertebrates and particularly of higher mammals. It is the lived Now. The physical passage of time existed before this property emerged, but then something new was added—a representation of time, including an illusory, smeared present, plus the fact that the beings harboring this new property in their brains could not recognize it as a representation. Billions of conscious, time-representing nervous systems created billions of individual perspectives.

At this point, we also touch on a deeper and more general principle running through modern research on consciousness. The more aspects of subjective experience we can explain in a hardheaded, materialistic manner, the more our view of what the self-organizing physical universe itself is will change. Very obviously, and in a strictly no-nonsense, non-metaphorical, and nonmysterious way, the physical universe itself possesses an intrinsic potential for the emergence of subjectivity. Crude versions of objectivism are false, and reality is much richer than we thought.

THE REALITY PROBLEM: HOW YOU WERE BORN AS A NAIVE REALIST

Minimal consciousness is the appearance of a world. However, if we solve the One-World Problem and the Now Problem, all we have is a model of a unified world and a model of the present moment in the brain. We have a representation of a single world and a representation of a single moment. Clearly, the appearance of a world is something different. Imagine you could suddenly apprehend the whole world, your own body, the book in your hands, and all of your current surroundings as a “mental model.” Would this still be conscious experience?

Now, try to imagine something even more difficult: The robust sense of presence you are enjoying right now is itself only a special kind of image. It is a time representation in your brain—a fiction, not the real thing. What would happen if you could distance yourself from the current moment—if the Now-ness of this current moment turned out not to be the real Now but only an elegant portrait of presence in your mind? Would you still be conscious? This is not simply an empirical issue; it also possesses a distinct philosophical flavor. The pivotal question is how to get from a world-model and a Now-model to exactly what you have as you are reading this: the presence of a world.

The answer lies in the transparency of phenomenal representations. Recall that a representation is transparent if the system using it cannot recognize it as a representation. A world-model active in the brain is transparent if the brain has no chance of discovering that it is a model. A model of the current moment is transparent if the brain has no chance of discovering that it is simply the result of information-processing currently going on in itself. Imagine you are watching a movie on TV—2001: A Space Odyssey, say—and you have just watched the scene in which the victorious ape-man throws his bone-weapon high into the air, at which point the film jumps into the future, matching the image of the tumbling bone to that of a spacecraft. Dr. Heywood R. Floyd reaches Moon Base Clavius in his lunar landing craft and discusses with the local Soviet scientists “the potential for culture shock and social disorientation” presented by the discovery of a monolith on the moon. When they arrive at the gigantic black monolith, a member of the exploring party reaches out and strokes its smooth surface, mirroring the awe and curiosity the ape-men exhibited millions of years earlier. The scientists and astronauts gather around it for a group photo, but suddenly an earsplitting high-pitched tone is picked up by their earphones—a tone emitted by the monolith as the sun shines down on it. You are completely engaged in the scene unfolding in front of you, to the point of identifying with the bewildered spacesuited humans. However, you can distance yourself from the movie at any time and become aware that there is a separate you sitting on the couch in the living room and only watching all this. You can also move up close to the screen and inspect the little pixels, thousands of little squares of light rapidly blinking on and off, creating a continuous flowing image as soon as you are a couple of yards away. Not only is this flowing image made up of individual pixels, but the temporal dynamic is not really continuous at all—the individual pixels blink on and off according to a certain rhythm, changing their color in abrupt steps.

You cannot do this distancing with your consciousness. It is a different kind of medium. If you look at the book in your hands and try to apprehend individual pixels, you can’t see any. The appearance of the book is dense and impenetrable. Visual attention cannot dissolve the fluidity, the continuity, of your book experience as it can discover the individual pixels when you take a closer look at the TV screen. The blinding speed with which your brain activates the visual model of the book and integrates it with the tactile sensations in your fingers is simply too fast.

One might argue that this disparity exists because the system creating the “pixels” is also the one trying to detect them. Of course, in the continuous flow of information-processing in the brain, nothing like pixels really exists. Still, could your inability to break the book percept down into pixels be caused by something other than the speed of integration in the brain? If your brain worked much more slowly (say, if it could detect nothing briefer than time spans of a year), you still wouldn’t be able to detect those “pixels.” You would still perceive a seamless passage of time, because the conscious working of our brain is not a single uniform event but a multilayered chain of events in which different processes are densely coupled and interacting all the time. The brain creates what are called higher-order representations. If you attend to your perception of a visual object (such as this book), then there is at least one second-order process (i.e., attentional processing) taking a first-order process—in this case, visual perception—as its object. If the first-order process—the process creating the seen object, the book in your hands—integrates its information in a smaller time-window than the second-order process (namely, the attention you’re directing at this new inner model), then the integration process on the first-order level will itself become transparent, in the sense that you cannot consciously experience it. By necessity, you are now blind to the fundamental construction process. Transparency is not so much a question of the speed of information-processing as of the speed of different types of processing (such as attention and visual perception) relative to each other.
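
The relative-timescale point can be illustrated with a toy calculation—my own numbers, chosen purely for illustration and not taken from the book’s sources. A first-order process produces rapid “construction frames,” while a second-order readout standing in for attention can only average over much longer windows; the fast structure is then simply not recoverable from the readout, which is the sense in which the construction process remains invisible.

```python
# Toy illustration of relative integration windows (all numbers are assumptions).
import numpy as np

rng = np.random.default_rng(3)
frame_ms = 5                                    # granularity of the first-order construction
attention_window_ms = 100                       # assumed attentional readout window
n_frames = 400

construction = rng.normal(0.0, 1.0, n_frames)   # rapid low-level fluctuations
frames_per_window = attention_window_ms // frame_ms

# What the second-order readout "sees": one averaged value per 100 ms window.
readout = construction.reshape(-1, frames_per_window).mean(axis=1)

print("std of raw construction frames:", round(construction.std(), 2))
print("std of attentional readout    :", round(readout.std(), 2))
# The fast structure is averaged away: individual frames cannot be recovered from the readout.
```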

Just as swiftly and effortlessly, the book-model is bound with other models, such as the models of your hands and of the desk, and seamlessly integrated into your overall conscious space of experience. Because it has been optimized over millions of years, this mechanism is so fast and so reliable that you never notice its existence. It makes your brain invisible to itself. You are in contact only with its content; you never see the representation as such; therefore, you have the illusion of being directly in contact with the world. And that is how you become a naive realist, a person who thinks she is in touch with an observer-independent reality.

If you talk to neuroscientists as a philosopher, you will be introduced to new concepts and find some of them extremely useful. One I found particularly helpful was the notion of metabolic price. If a biological brain wants to develop a new cognitive capacity, it must pay a price. The currency in which the price is paid is sugar. Additional energy must be made available and more glucose must be burned to develop and stabilize this new capacity. As in nature in general, there is no such thing as a free lunch. If an animal is to evolve, say, color vision, this new trait must pay by making new sources of food and sugar available to it. If a biological organism wants to develop a conscious self or think in concepts or master a language, then this step into a new level of mental complexity must be sustainable. It requires additional neural hardware, and that hardware requires fuel. That fuel is sugar, and the new trait must enable our animal to find this extra amount of energy in its environment.

Likewise, any good theory of consciousness must reveal how it paid for itself. (In principle, consciousness could be a by-product of other traits that paid for themselves, but the fact that it has remained stable over time suggests that it was adaptive.) A convincing theory must explain how having a world appear to you enabled you to extract more energy from your environment than a zombie could. This evolutionary perspective also helps solve the puzzle of naive realism.

Our ancestors did not need to know that a bear-representation was currently active in their brains or that they were currently attending to an internal state representing a slowly approaching wolf. Thus neither image required them to burn precious sugar. All they needed to know was “Bear over there!” or “Wolf approaching from the left!” Knowing that all of this was just a model of the world and of the Now was not necessary for survival. This additional kind of knowledge would have required the formation of what philosophers call metarepresentations, or images about other images, thoughts about thoughts. It would have required additional hardware in the brain and more fuel. Evolution sometimes produces superfluous new traits by chance, but these luxurious properties are rarely sustained over long periods of time. Thus, the answer to the question of why our conscious representations of the world are transparent—why we are constitutionally unable to recognize them as representations—and why this proved a viable, stable strategy for survival and procreation probably is that the formation of metarepresentations would not have been cost-efficient: It would have been too expensive in terms of the additional sugar we would have had to find in our environment.

On a smaller time scale, there is another way of understanding why we were all born as naive realists. Why are we unaware of the tunnel-like nature of consciousness? As noted, the robust illusion of being directly in touch with the outside world has to do with the speed of neural information-processing in our brains. Further, subjective experience is not generated by one process alone but by various interacting functions: multisensory integration, short-term memory, attention, and so on. My theory says that, in essence, consciousness is the space of attentional agency: Conscious information is exactly that set of information currently active in our brains to which we can deliberately direct our high-level attention. Low-level attention is automatic and can be triggered by entirely unconscious events. For a perception to be conscious does not mean you deliberately access it with the help of your attentional mechanisms. On the contrary: Most things we’re aware of are on the fringe of our consciousness and not in its focus. But whatever is available for deliberately directed attention is what is consciously experienced. Nevertheless, if we carefully direct our visual attention at an object, we are constitutionally unable to apprehend the earlier processing stages. “Taking a closer look” doesn’t help: We are unable to attend to the construction process that generates the model of the book in our brains. As a matter of fact, attention often seems to do exactly the opposite: By stabilizing the sensory object, it makes that object appear even more real.

That is why the walls of the tunnel are impenetrable for us: Even if we believe that something is just an internal construct, we can experience it only as given and never as constructed. This fact may well be cognitively available to us (because we may have a correct theory or concept of it), but it is not attentionally or introspectively available, simply because on the level of subjective experience, we have no point of reference “outside” the tunnel. Whatever appears to us—however it is mediated—appears as reality.

Please try for a moment to inspect closely the holistic experience of seeing and simultaneously touching the book in your hands and of feeling its weight. Try hard to become aware of the construction process in your brain. You will find two things: First, you cannot do it. Second, the surface of the tunnel is not two-dimensional: It possesses considerable depth and is composed of very different sensory qualities—touch, sound, even smell. In short, the tunnel has a high-dimensional, multimodal surface. All this contributes to the fact that you cannot recognize the walls of the tunnel as an inner surface; this simply does not resemble any tunnel experience you’ve ever had.

Why are the walls of the neurophenomenological cave so impenetrable? An answer is that in order to be useful (like the desktop on the graphical user-interface of your personal computer), the inside surface of the cave must be closed and fully realistic. It acts as a dynamic filter. Imagine you could introspectively become aware of ever deeper and earlier phases of your information-processing while looking at the book in your hands. What would happen? The representation would no longer be transparent, but it would still remain inside the tunnel. A flood of interacting patterns would suddenly rush at you; alternative interpretations and intensely competing associations would invade your reality. You would lose yourself in the myriad of micro-events taking place in your brain at every millisecond—you would get lost inside yourself. Your mind would explode into endless loops of self-exploration. Maybe this is what Aldous Huxley meant when, in his 1954 classic, The Doors of Perception, he quoted William Blake: “If the doors of perception were cleansed, everything would appear to man as it is, infinite. For man has closed himself up, till he sees all things through narrow chinks of his cavern.”

The dynamic filter of phenomenal transparency is one of nature’s most intriguing inventions, and it has had far-reaching consequences. Our inner images of the world around us are quite reliable. In order to be good representations, our conscious models of bears, of wolves, of books in our hands, of smiles on our friends’ faces, must serve as a window on the world. This window must be clean and crystal clear. That is what phenomenal transparency is: It contributes to the effortlessness and seamlessness that are the hallmark of reliable conscious perceptions that portray the world around us in a sufficiently accurate manner. We don’t have to know or care about how this series of little miracles keeps unfolding in our brains; we can simply enjoy conscious experience as an invisible interface to reality. As long as nothing goes wrong, naive realism makes for a very relaxed way of living.

However, questions arise. Are there people who aren’t naive realists, or special situations in which naive realism disappears? My theory—the self-model theory of subjectivity—predicts that as soon as a conscious representation becomes opaque (that is, as soon as we experience it as a representation), we lose naive realism. Consciousness without naive realism does exist. This happens whenever, with the help of other, second-order representations, we become aware of the construction process—of all the ambiguities and dynamical stages preceding the stable state that emerges at the end. When the window is dirty or cracked, we immediately realize that conscious perception is only an interface, and we become aware of the medium itself. We doubt that our sensory organs are working properly. We doubt the existence of whatever it is we are seeing or feeling, and we realize that the medium itself is fallible. In short, if the book in your hands lost its transparency, you would experience it as a state of your mind rather than as an element of the outside world. You would immediately doubt its independent existence. It would be more like a book-thought than a book-perception.

Precisely this happens in various situations—for example, in visual hallucinations during which the patient is aware of hallucinating, or in ordinary optical illusions when we suddenly become aware that we are not in immediate contact with reality. Normally, such experiences make us think something is wrong with our eyes. If you could consciously experience earlier processing stages of the representation of the book in your hands, the image would probably become unstable and ambiguous; it would start to breathe and move slightly. Its surface would become iridescent, shining in different colors at the same time. Immediately you would ask yourself whether this could be a dream, whether there was something wrong with your eyes, whether someone had mixed a potent hallucinogen into your drink. A segment of the wall of the Ego Tunnel would have lost its transparency, and the self-constructed nature of the overall flow of experience would dawn on you. In a nonconceptual and entirely nontheoretical way, you would suddenly gain a deeper understanding of the fact that this world, at this very moment, only appears to you.

What if you were born with an awareness of your internal processing? Obviously you would still not be in contact with reality as such, because you would still only know it under a representation. But you would also continuously represent yourself as representing. As in a dream in which you have become aware that you’re dreaming, your world would no longer be experienced as a reality but as a form of mental content. It would all be one big thought in your mind, the mind of an ideal observer.

We have arrived at a minimalist concept of consciousness. We have an answer to the question of how the brain moves from an internal world-model and an internal Now-model to the full-blown appearance of a world. The answer is this: If the system in which these models are constructed is constitutionally unable to recognize the world-model and the current psychological moment—the experience of the present—as models, as only internal constructions, then the system will of necessity generate a reality tunnel. It will have the experience of being in immediate contact with a single, unified world in a single Now. For any such system, a world appears. This is equivalent to the minimal notion of consciousness we took as our starting point.

If we can solve the One-World Problem, the Now Problem, and the Reality Problem, we can also find the global neural correlate of consciousness in the human brain. Recall that there is a specific NCC for forms of conscious content (one for the redness of the rose, another for the rose as a whole, and so on) as well as a global NCC, which is a much larger set of neural properties underlying consciousness as a whole, or all currently active forms of conscious content, underpinning your experiential model of the world in its totality at a given moment. Solving the One-World Problem, the Now Problem, and the Reality Problem involves three steps: First, finding a suitable phenomenological description of what it’s like to have all these experiences; second, analyzing their contents in more detail (the representational level); and third, describing the functions bringing about these contents. Discovering the global NCC means discovering how these functions are implemented in the nervous system. This would also allow us to decide which other beings on this planet enjoy the appearance of a world; these beings will have a recognizable physical counterpart in their brains.

On the simplest and most fundamental level, the global NCC will be a dynamic brain state exhibiting large-scale coherence. It will be fully integrated with whatever generates the virtual window of presence, because in a sense it is this window. Finally, it will have to make earlier processing stages unavailable to high-level attention. I predict that by 2050 we will have found the GNCC, the global neural correlate of consciousness. But I also predict that in the process we will discover a series of technical problems that may not be so easy to solve.

THE INEFFABILITY PROBLEM: WHAT WE WILL NEVER BE ABLE TO TALK ABOUT

Imagine I’m holding color swatches of two similar shades of green up in front of you. There’s a difference between the two shades, but it’s barely noticeable. (The technical term sometimes used by experts in psychophysics is JND, or “just noticeable difference.” The JND is a statistical distinction, not an exact quantity.) The two shades (I’ll call them Green No. 24 and Green No. 25) are the nearest possible neighbors on the color chart; there’s no shade of green between them that you could discriminate. Now I put my hands behind my back, mix the swatches, and hold one up. Is it Green No. 24 or Green No. 25? The interesting discovery is that conscious perception alone does not enable you to tell the difference. This means that understanding consciousness may also involve understanding the subtle and the ultrafine, not just the whole.

We now must move from the global to the more subtle aspects of consciousness. If it is really true that some aspects of the contents of consciousness are ineffable—and many philosophers, including me, believe this to be the case—how are we going to do solid scientific research on them? How can we reductively explain something we cannot even talk about properly?

The contents of consciousness can be ineffable in many different ways. You cannot explain to a blind man the redness of a rose. If the linguistic community you live in does not have a concept for a particular feeling, you may not be able to discover it in yourself or name it so as to share it with others. A third type of ineffability is formed by all those conscious states (“conscious” because they could in principle be attended to) so fleeting you cannot form a memory trace of them: brief flickers on the fringe of your subjective awareness—perhaps a hardly detectable color change or a mild fluctuation in some emotion, or a barely noticeable glimmer in the mélange of your bodily sensations. There might even be longer episodes of conscious experience—during the dream state, say, or under anesthesia—that are systematically unavailable to memory systems in the brain and that no human being has ever reported. Maybe this is also true of the very last moments before death. Here, however, I’m offering a clearer and better-defined example of ineffability to illustrate the Ineffability Problem.

You can’t tell me if the green card I’m holding up is Green No. 24 or Green No. 25. It is well known from perceptual psychology experiments that our ability to discriminate sensory values such as hues greatly exceeds our ability to form direct concepts of them. But in order to talk about this specific shade of green, you need a concept. Using a vague category, like “Some kind of light green,” is not enough, because you lose the determinate value, the concrete qualitative suchness of the experience.

Between 430 and 650 nanometers, human beings can discriminate more than 150 different wavelengths, or different subjective shades, of color. But if asked to reidentify single colors with a high degree of accuracy, they can do so for fewer than 15.13 The same is true for other sensory experiences. Normal listeners can discriminate about 1,400 steps of pitch difference across the audible frequency range, but they can recognize these steps as examples of only about 80 different pitches. The University of Toronto philosopher Diana Raffman has stated the point clearly: “We are much better at discriminating perceptual values (i.e. making same/different judgments) than we are at identifying or recognizing them.”14
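To make the mismatch between discrimination and identification concrete, consider a small, purely illustrative simulation (a toy model, not drawn from the studies just cited): side-by-side comparison only has to detect a difference between two simultaneously present shades, whereas later identification has to recover an exact shade from a coarse memory category. Everything in the sketch, including the helper names `discriminate` and `identify_from_memory`, is invented for illustration.

```python
import random

# Toy model: 150 discriminable shades of green, but memory retains only
# about 15 coarse categories. All numbers are illustrative assumptions.
N_SHADES = 150        # just-noticeably-different shades in the green range
N_CATEGORIES = 15     # roughly how many can be reidentified later
TRIALS = 10_000

def discriminate(shade_a, shade_b):
    """Side-by-side comparison: both shades are present, so even a
    one-step (one-JND) difference can be detected."""
    return shade_a != shade_b

def identify_from_memory(shade):
    """Delayed identification: only the coarse category survives in
    memory, so the exact shade has to be guessed within it."""
    category = shade * N_CATEGORIES // N_SHADES
    low = category * N_SHADES // N_CATEGORIES
    high = (category + 1) * N_SHADES // N_CATEGORIES
    return random.randrange(low, high)   # best guess inside the category

random.seed(0)
correct = 0
for _ in range(TRIALS):
    shade = random.randrange(N_SHADES)         # e.g., "Green No. 24"
    neighbor = min(shade + 1, N_SHADES - 1)    # its nearest neighbor
    # Discrimination succeeds whenever the two shades really differ...
    assert shade == neighbor or discriminate(shade, neighbor)
    # ...but reidentification from memory mostly fails.
    if identify_from_memory(shade) == shade:
        correct += 1

print(f"Reidentification accuracy: {correct / TRIALS:.1%}")   # roughly 10%
```

In this caricature, discrimination is perfect while reidentification hovers around chance within a memory category, which is exactly the pattern Raffman describes.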

Technically, this means we do not possess introspective identity criteria for many of the simplest states of consciousness. Our perceptual memory is extremely limited. You can see and experience the difference between Green No. 24 and Green No. 25 if you see both at the same time, but you are unable consciously to represent the sameness of Green No. 25 over time. Of course, it may appear to you to be the same shade of Green No. 25, but the subjective experience of certainty going along with this introspective belief is itself appearance only, not knowledge. Thus, in a simple, well-defined way, there is an element of ineffability in sensory consciousness: You can experience a myriad of things in all their glory and subtlety without having the means of reliably identifying them. Without that, you cannot speak about them. Certain experts—vintners, musicians, perfume designers—can train their senses to a much finer degree of discrimination and develop special technical terms to describe their introspective experience. For example, connoisseurs may describe the taste of wine as “connected,” “herby,” “nutty,” or “foxy.” Nonetheless, even experts of introspection will never be able to exhaust the vast space of ineffable nuances. Nor can ordinary people identify a match to that beautiful shade of green they saw yesterday. That individual shade is not vague at all; it is what a scientist would call a maximally determinate value, a concrete and absolutely unambiguous content of consciousness.

As a philosopher, I like these kinds of findings, because they elegantly demonstrate how subtle is the flow of conscious experience. They show that there are innumerable things in life you can fathom only by experiencing them, that there is a depth in pure perception that cannot be grasped or invaded by thought or language. I also like the insight that qualia, in the classic sense coined by Clarence Irving Lewis, never really existed—a point also forcefully made by eminent philosopher of consciousness Daniel C. Dennett.15 Qualia is a term philosophers use for simple sensory experiences, such as the redness of red, the awfulness of pain, the sweetness of peach pie. Typically, the idea was that qualia form recognizable inner essences, irreducible simple properties—the atoms of experience. However, in a wonderful way, this story was too simple—empirical consciousness research now shows us the fluidity of subjective experience, its uniqueness, the irreplaceable nature of the single moment of attention. There are no atoms, no nuggets of consciousness.

The Ineffability Problem is a serious challenge for a scientific theory of consciousness—or at least for finding all its neural correlates. The problem is simply put: To pinpoint the minimally sufficient neural correlate of Green No. 24 in the brain, you must assume your subjects’ verbal reports are reliable—that they can correctly identify the phenomenal aspect of Green No. 24 over time, in repeated trials in a controlled experimental setting. They must be able to recognize introspectively the subjectively experienced “suchness” of this particular shade of green—and this seems to be impossible.

The Ineffability Problem arises for the simplest forms of sensory awareness, for the finest nuances of sight and touch, of smell and taste, and for those aspects of conscious hearing that underlie the magic and beauty of a musical experience.16 But it may also appear for empathy, for emotional and intrinsically embodied forms of communication (see chapter 6 and my conversation with Vittorio Gallese, page 174). Once again, these empirical findings are philosophically relevant, because they redirect our attention to something we’ve known all along: Many things you can express by way of music (or other art forms, like dance) are ineffable, because they can never become the content of a mental concept or be put into words. On the other hand, if this is so, sharing the ineffable aspects of our conscious lives becomes a dubious affair: We can never be sure if our communication was successful; there is no certainty about what it actually was that we shared. Furthermore, the Ineffability Problem threatens the comprehensiveness of a neuroscientific theory of consciousness. If the primitives of sensory consciousness are evasive, in the sense that even the experiencing subject possesses no internal criteria to reidentify them by introspection, then we cannot match them with the representational content of neural states—even in principle. Some internal criteria exist, but they are crude: absolutes, such as “pure sweetness,” “pure blue,” “pure red,” and so on. But matching Green No. 24 or Green No. 25 with their underlying physical substrates in a systematic manner seems impossible, because these shades are just too subtle. If we cannot do the mapping, we cannot do the reduction—that is, arrive at the claim that your conscious experience of Green No. 24 is identical with a certain brain state in your head.

Remember, reduction is a relationship not between the phenomena themselves but between theories. T1 is reduced to T2. One theory—say, about our subjective, conscious experience—is reduced to another—say, about large-scale dynamics in the brain. Theories are built out of sentences and concepts. But if there are no concepts for certain objects in the domain of one theory, they cannot be mapped onto or reduced to concepts in the other. This is why it may be impossible to do what most hard scientists in consciousness research would like to do: show that Green No. 24 is identical with a state in your head.

What to do? If identification is not possible, elimination seems to be the only alternative. If the qualities of sensory consciousness cannot be turned into what philosophers call proper theoretical entities because we have no identity criteria for them, then the cleanest way of solving the Ineffability Problem may be to follow the path that neurophilosopher Paul Churchland and others suggested long ago—to deny the existence of qualia in the first place. Would the best solution be simply to say that by visually attending to this ineffable shade of Green No. 25 in front of us, we are already directly in touch with a hardware property? That is, what we experience is not some sort of phenomenal representational content but neural dynamics itself? In this view, our experience of Green No. 25 would not be a conscious experience at all but instead something physical—a brain state. For centuries, when speaking about “qualities” and color experiences, we were actually misdescribing states of our own bodies, internal states we never recognized as such—the walls of the Ego Tunnel.

We could then posit that if we lack the necessary first-person knowledge, then we must define third-person criteria for these ineffable states. If there are no adequate phenomenological concepts, let’s form adequate neurobiological concepts instead. Certainly if we look at the brain dynamics underlying what subjects later describe as their conscious experience of greenness, we will observe sameness across time. In principle, we can find objective identity criteria, some mathematical property, something that remains the same in our description connecting the experience of green you had yesterday with the experience you’re having right now. And then could we not communicate our inner experiences in neurobiological terms, by saying something like “Imagine the Cartesian product of the experiential green manifold and the Möbius strip of calmness—that is, mildly K-314γ, but moving to Q-512δ and also slightly resembling the 372.509-dimensional shape of Irish moss in norm-space”?

I actually do like science fiction. This sci-fi scenario is conceivable, in principle. But are we willing to give up our authority over our own inner states—the authority allowing us to say that these two states must be the same because they feel the same? Are we willing to hand this epistemological authority over to the empirical sciences of the mind? This is the core of the Ineffability Problem, and certainly many of us would not be ready to take the jump into a new system of description. Because traditional folk-psychology is not only a theory but also a practice, there may be a number of deeper problems with Churchland’s strategy of what he calls “eliminative materialism.” In his words, “Eliminative materialism is the thesis that our commonsense conception of psychological phenomena constitutes a radically false theory, a theory so fundamentally defective that both the principles and the ontology of that theory will eventually be displaced, rather than smoothly reduced, by completed neuroscience.”17 Churchland has an original and refreshingly different perspective: If we just gave up the idea that we ever had anything like conscious minds in the first place and began to train our native mechanisms of introspection with the help of the new and much more fine-grained conceptual distinctions offered by neuroscience, then we would also discover much more; we would enrich our inner lives by becoming materialists. “I suggest, then, that those of us who prize the flux and content of our subjective phenomenological experience need not view the advance of materialist neuroscience with fear and foreboding,” he has noted. “Quite the contrary. The genuine arrival of a materialist kinematics and dynamics for psychological states and cognitive processes will constitute not a gloom in which our inner life is suppressed or eclipsed, but rather a dawning, in which its marvelous intricacies are finally revealed—most notably, if we apply [it] ourselves, in direct self-conscious introspection.”

Still, many people would be disinclined to turn something that was previously ineffable into a public property about which they could communicate using the vocabulary of neuroscience. They would feel that this was not what they wanted to know at the outset. More important, they might fear that in pursuit of solving the problem, we had lost something deeper along the way. Theories of consciousness have cultural consequences. I will return to this issue.

THE EVOLUTION PROBLEM: COULDN’T ALL OF THIS HAVE HAPPENED IN THE DARK?

The Evolution Problem is one of the most difficult problems for a theory of consciousness. Why, and in what sense, was it necessary to develop something like consciousness in the nervous systems of animals? Couldn’t zombies have evolved instead? Here, the answer is both yes and no.

As I noted in the Introduction, conscious experience is not an all-or-nothing phenomenon; it comes in many shades and flavors. There is a long history of consciousness on this planet. We have strong, converging evidence that all of Earth’s warm-blooded vertebrates (and probably certain other creatures) enjoy phenomenal experience. The basic brain features of sensory consciousness are preserved among mammals and exhibit strong homologies due to common ancestry. They may not have language and conceptual thought, but it is likely that they all have sensations and emotions. They are clearly able to suffer. But since they do all this without verbal reports, it is almost impossible to investigate this issue more deeply. What we must understand is how Homo sapiens managed to acquire—over the course of our biological history and individually as infants—this amazing property of living our lives in the Ego Tunnel successfully and without realizing it.

First, let’s not forget that evolution is driven by chance, does not pursue a goal, and achieved what we now consider the continuous optimization of nervous systems in a blind process of hereditary variation and selection. It is incorrect to assume that evolution had to invent consciousness—in principle it could have been a useless by-product. No necessity was involved. Not everything is an adaptation, and even adaptations are not optimally designed, because natural selection can act only on what is already there. Other routes and solutions were and remain possible. Nevertheless, a lot of what happened in our brains and in those of our ancestors clearly was adaptive and had survival value.

Today, we have a long list of potential candidate functions of consciousness: Among them are the emergence of intrinsically motivating states, the enhancement of social coordination, a strategy for improving the internal selection and resource allocation in brains that got too complex to regulate themselves, the modification and interrogation of goal hierarchies and long-term plans, retrieval of episodes from long-term memory, construction of storable representations, flexibility and sophistication of behavioral control, mind reading and behavior prediction in social interaction, conflict resolution and troubleshooting, creating a densely integrated representation of reality as a whole, setting a context, learning in a single step, and so on. It is hard to believe that consciousness should have none of these functions. Consider one example only.

There is a consensus among many leading figures in the consciousness community that at least one of the central functions of phenomenal experience is making information “globally available” to an organism. Bernard Baars’s global-workspace metaphor has a functional aspect: Put simply, this theory says that conscious information is that subset of active information in the brain that requires monitoring because it’s not clear which of your mental capacities you will need to access this information next. Will you need to direct focal attention at it? Will you need to form a concept of it, to think about it, to report it to other human beings? Will you need to make a flexible behavioral response—one that you have selected and weighed against alternatives? Will you need to link this information to episodic memory, perhaps in order to compare it with things you have seen or heard before? Part of Baars’s idea is that you become conscious of something only when you don’t know which of the tools in your mental toolbox you’ll have to use next.
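The functional point of the metaphor, though not Baars’s model itself, can be caricatured in a few lines of code: routine, well-learned input is handled by a fast local subroutine, while novel or ambiguous input is broadcast so that any capacity might pick it up. The module names and the `Agent` class below are invented purely for illustration.

```python
# Toy "global workspace": routine input stays local; novel input is
# broadcast to every module because it is unclear which capacity will
# be needed next. A caricature of the broadcast idea, not Baars's theory.

MODULES = ["attention", "working_memory", "verbal_report", "motor_planning"]

class Agent:
    def __init__(self):
        self.mastered = {"tie_shoes", "ride_bicycle"}   # well-practiced skills

    def handle(self, stimulus):
        if stimulus in self.mastered:
            # Fast, unconscious subroutine: no broadcast required.
            return f"{stimulus}: handled locally, below the threshold of report"
        # Novel or ambiguous input: make it globally available to all modules.
        broadcast = [f"{module} receives '{stimulus}'" for module in MODULES]
        self.mastered.add(stimulus)       # with practice it becomes routine
        return "; ".join(broadcast)

agent = Agent()
print(agent.handle("tie_shoes"))   # routine -> stays local
print(agent.handle("juggle"))      # novel   -> global broadcast
print(agent.handle("juggle"))      # practiced now -> handled locally again
```

Note how, in the toy version, a stimulus that has been broadcast once is handled locally afterward; this anticipates the point about skill learning below.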

Note that when you learn a difficult task for the first time, such as tying your shoes or riding a bicycle, your practicing is always conscious. It requires attention, and it takes up many of your resources. Yet as soon as you’ve mastered tying your shoes or riding a bicycle, you forget all about the learning process—to the point that it becomes difficult to teach the skill to your children. It quickly sinks below the threshold of awareness and becomes a fast and efficient subroutine. But whenever the system is confronted with a novel or challenging stimulus, its global workspace is activated and represented in consciousness. This is also the point when you become aware of the process.

Of course, a much more differentiated theory is needed, because there are degrees of availability. Some things in life, such as the ineffable shade of Green No. 25, are available for attention, say, but not for memory or conceptual thought. Other things are available for selective motor control but are accessed so quickly you don’t really attend to them: If 100-yard sprinters were to wait until they consciously heard the starter’s shot, they would already have lost the race; fortunately, their bodies hear it before they do. There are many degrees of conscious experience, and the closer science looks, the more blurry the border between conscious and unconscious processing becomes. But the general notion of global availability allows us to tell a convincing story about the evolution of consciousness. Here is my part of the story: Consciousness is a new kind of organ.

Biological organisms evolved two different kinds of organs. One kind, such as the liver or the heart, forms part of an organism’s “hardware.” Organs of this type are permanently realized. Then there are “virtual organs”—feelings (courage, anger, desire) and the phenomenal experience of seeing colored objects or hearing music or having a certain episodic memory. The immune response, which is realized only when needed, is another example of a virtual organ: For a certain time, it creates special causal properties, has a certain function, and does a job for the organism. When the job is done, it disappears. Virtual organs are like physical organs in that they fulfill a specific function; they are coherent assemblies of functional properties that allow you to do new things. Though part of a behavioral repertoire on the macro level of observable traits, they can also be seen as composed of billions of concerted micro-events—immune cells or neurons firing away. Unlike a liver or a heart, they are realized transiently. What we subjectively experience are the processes brought about by the ongoing activity of one or many of such virtual organs.

Our virtual organs make information globally available to us, allowing us to access new facts and sometimes entirely new forms of knowledge. Take as an example the fact that you are holding this book in your hands right now. The phenomenal book (i.e., the conscious book-experience) and the phenomenal hands (i.e., the conscious experience of certain parts of a bodily self) are examples of currently active virtual organs. The neural correlates in your brain work for you as object emulators, internally simulating the book you are holding, without your being aware of the fact. The same is true of the conscious hand-experience, which is part of the bodily subject emulator. The brain is also making other facts available to you: the fact that this book exists, that it has certain invariant surface properties, a certain weight, and so on. As soon as all this information about the existence and properties of the book becomes conscious, it is available for the guidance of attention, for further cognitive processing, for flexible behavior.

Now we can begin to see what the central evolutionary function of consciousness must have been: It makes classes of facts globally available for an organism and thereby allows it to attend to them, to think about them, and to react to them in a flexible manner that automatically takes the overall context into account. Only if a world appears to you in the first place can you begin to grasp the fact that an outside reality exists. This is the necessary precondition for discovering the fact that you exist as well. Only if you have a consciousness tunnel can you realize that you are part of this reality and are present in it right now.

Moreover, as soon as this global stage—the consciousness tunnel—has been stabilized, many other types of virtual organs can be generated and begin their dance in your nervous system. Consciousness is an inherently biological phenomenon, and the tunnel is what holds it all together. Within the tunnel, the choreography of your subjective life begins to unfold. You can experience conscious emotions and thereby discover that you have certain goals and needs. You can apprehend yourself as a thinker of thoughts. You can discover that there are other people—other agents—in the environment and learn about your relationship to them; unless a certain type of conscious experience makes this fact globally available to you, you cannot cooperate with them, selectively imitate them, or learn from them in other ways. If you are smart, you may even begin to control their behavior by controlling their conscious states. If you successfully deceive them—if, say, you manage to install a false belief in their minds—then you have activated a virtual organ in another brain.

Phenomenal states are neurocomputational organs that make survival-relevant information globally available within a window of presence. They let you become aware of new facts within a unified psychological moment. Clearly, being able to use all the tools in your mental toolbox to react to new classes of facts must have been a major adaptive advantage. Every new virtual organ, every new sensory experience, every new conscious thought had a metabolic price; it was costly to activate them, if only for a couple of seconds or minutes at a time. But since they paid for their additional glucose consumption with gains in security, survival, and procreation, they spread across populations and sustain themselves to this day. They allowed us to discriminate between what we can eat and what we can’t, to search for and detect novel sources of food, to plan our attack on our prey. They allowed us to read other people’s minds and cooperate more efficiently with our fellow hunters. Finally, they allowed us to learn from past experience.

The interim conclusion is that making a world appear in an organism’s brain was a new computational strategy. Flagging the dangerous present world as real kept us from getting lost in our memories and our fantasies. Flagging the present enables a conscious organism to plan different and more efficient ways of escape or of deceiving or stalking its prey, namely by comparing internal dry runs of the target behavior with the features of a given world. If you have a conscious, transparent world-model, you can, for the first time, directly compare what is actual with what is only possible, the actual world with simulated possible worlds you’ve designed in your mind. High-level intelligence means not only having offline states in which you can simulate potential threats or desired outcomes but also comparing the real situation with a number of possible goal-states. After you have found a path from the real world into the most desirable possible world in your mind, you can begin to act.
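The computational idea in this paragraph, comparing the actual world with internally simulated possible worlds before acting, is essentially what computer scientists call offline planning. The following sketch is only an analogy under invented assumptions (a small grid world, a desired goal state, a breadth-first search over simulated moves), not a model of biological planning.

```python
from collections import deque

# Minimal sketch of "simulate first, act afterward": search simulated
# successor worlds for a path from the actual state to a desired state,
# then execute only the first step. The grid and states are invented.

def simulate(state, action):
    """Predict the next world-state without acting in the real world."""
    x, y = state
    dx, dy = action
    return (x + dx, y + dy)

def plan(actual_world, desired_world, obstacles, size=5):
    actions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    frontier = deque([(actual_world, [])])
    visited = {actual_world}
    while frontier:
        state, path = frontier.popleft()
        if state == desired_world:         # the most desirable possible world
            return path
        for action in actions:
            nxt = simulate(state, action)  # a dry run, not a real movement
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in obstacles and nxt not in visited):
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None                            # no path from the actual world

path = plan(actual_world=(0, 0), desired_world=(4, 4), obstacles={(2, 2), (3, 2)})
print("Simulated path:", path)
print("First real action:", path[0] if path else "none")
```

The crucial design choice mirrors the text: the simulated states are kept strictly apart from the one state marked as actual, and action begins only once a path between them has been found.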

It is easy to overlook the causal relevance of this first evolutionary step, the fundamental computational goal of conscious experience. It is the one necessary functional property on which everything else rests. We can simply call it “reality generation”: It allowed animals to represent explicitly the fact that something is actually the case. A transparent world-model lets you discover that something is really out there, and by integrating your portrait of the world with the subjective Now, it lets you grasp the fact that the world is present. This step opened up a new level of complexity. Thus, having a global world-model is a new way of processing information about the world in a highly integrated manner. Every conscious thought, every bodily sensation, every sound and every sight, every experience of empathy or of sharing the goals of another human being makes a different class of facts available for the adaptive, flexible, and selective form of processing that only conscious experience can provide. Whatever is elevated to the level of global availability suddenly becomes more fluid and more context-sensitive and is directly related to all other contents of your conscious mind.

The functions of global availability can be specific: Conscious color vision gives you information about nutritional value, as when you notice the luscious red berries among the green leaves. The conscious experience of empathy provides you with a nonlinguistic form of knowledge about the emotional states of a fellow human being. Once you have this form of awareness, you can attend to it, adapt your motor behavior to it, and associate it with memories of the past. Phenomenal states do not just represent facts about berries or about the feelings of other human beings; they also bind these things into a global processing stage and allow you to use all your mental capacities to explore them further. In short, individual conscious experiences from the object level upward are virtual organs that transiently make knowledge available to you in an entirely new data format—the consciousness tunnel. And your unified global model of a single world provides a holistic frame of reference in which all this can take place.

If a creature such as Homo sapiens evolves the additional ability to run offline simulations in its mind, then it can represent possible worlds—worlds that are not experienced as present. This species can have episodic memory. It can develop the ability to plan. It can ask itself, “How would a world look in which I had many children? What would the world be like if I were perfectly healthy? Or if I were rich and famous? And how can I make these things happen? Can I imagine a path leading from the present world into this imagined world?”

Such a being can also enjoy mental time travel, because it can switch back and forth between “inside-time” and “outside-time.” It can compare present experiences to past ones—but it can also hallucinate or get lost in its own daydreams. If it wants to use these new mental abilities properly, its brain must come up with a robust and reliable way to tell the difference between representation and simulation. The being must stay anchored in the real world; if you lose yourself in daydreams, sooner or later another animal will come along and eat you. Therefore, you need a mechanism that reliably shows you the difference between the one real world and the many possible ones. And this trick must be achieved on the level of conscious experience itself, which is not an easy problem. As I discussed, conscious experience already is a simulation and never brings the subject of experience—you—into direct contact with reality. So the question is, How can you avoid getting lost in the labyrinth of your conscious mind?

A major function of the transparent conscious model of reality is to represent facticity—that is, to generate a rock-bottom frame of reference for the organism using it: something that unfailingly defines what is real (even if it isn’t); something you cannot fool around or tamper with. Transparency solved the problem of simulating a multitude of possible inner worlds without getting lost in them; it did so by allowing biological organisms to represent explicitly that one of those worlds is an actual reality. I call this the “world-zero hypothesis.”

Human beings know that some of their conscious experiences do not refer to the real world but are only representations in their minds. Now we can see how fundamental this step was, and we can recognize its functional value. Not only were we able to have conscious thoughts, but we could also experience them as thoughts, rather than hallucinating or getting lost in a fantasy. This step allowed us to become superbly intelligent. It let us compare our memories and goals and plans with our present situation, and it helped us seek mental bridges from the present to a more desirable reality.

The distinction between things that only appear to us and real, objective facts became an element of our lived reality. (Please note that this is probably not true of most other animals on this planet.) By consciously experiencing some elements of our tunnel as mere images or thoughts about the world, we became aware of the possibility of misrepresentation. We understood that sometimes we can be wrong, since reality is only a specific type of appearance. As evolved representational systems, we could now represent one of the most important facts about ourselves—namely, that we are representational systems. We were able to grasp the notions of truth and falsity, of knowledge and illusion. As soon as we had grasped this distinction, cultural evolution exploded, because we became ever more intelligent by systematically increasing knowledge and minimizing illusion.

The discovery of the appearance/reality distinction was possible because we realized that some of the content of our conscious minds is constructed internally and because we could introspectively apprehend the construction process. The technical term here would be phenomenal opacity—the opposite of transparency. Those things in the evolution of consciousness that are old, ultrafast, and extremely reliable—such as the qualities of sensory experience—are transparent; abstract conscious thought is not. From an evolutionary perspective, thinking is very new, quite unreliable (as we all know), and so slow that we can actually observe it going on in our brains. In conscious reasoning, we witness the formation of thoughts; some processing stages are available for introspective attention. Therefore, we know that our thoughts are not given but made.

The inner appearance of a fully realistic world, as present in the here and now, was an elegant way of creating a frame of reference and a reliable anchor for all those kinds of mental activity necessary for higher forms of intelligence. You can grasp and design possible worlds only if a robust first-order reality is already in place. That was the fundamental breakthrough—as well as the central function of consciousness as such. As it turned out, the consciousness tunnel possessed obvious survival value and was adaptive because it supplied a unified and robust frame of reference for higher levels of reality-modeling. Nevertheless, all this is not even half the story: We need to take one last step up the ladder, a big one. Our brief tour d’horizon concludes with the deepest and most difficult puzzle of all: the subjectivity of consciousness.

THE WHO PROBLEM: WHAT IS THE ENTITY THAT HAS CONSCIOUS EXPERIENCE?

Consciousness is always bound to an individual first-person perspective; this is part of what makes it so elusive. It is a subjective phenomenon. Someone has it. In a deep and indisputable way, your inner world truly is not just someone’s inner world but your inner world—a private realm of experience that only you have direct access to.

The conscious mind is not a public object—or such is the orthodox view, which may yet be overthrown by the Consciousness Revolution. In any event, the orthodox view holds that scientific research can be conducted only on objects exhibiting properties that are, at least in principle, observable to all of us. Green No. 24 is not. Neither is the distinct sensory quality of the scent of mixed amber and sandalwood, nor is your empathic experience of understanding the emotions of another human being when you see him in tears. Brain states, on the other hand, are observable. Brain states also clearly have what philosophers call representational content. There are receptive fields for the various sensory stimuli. We know where emotional content originates, and we have good candidates for the seat of episodic memory in the brain, and so on.

Conscious experience has content, too—phenomenal content—and I touched upon it in the Introduction: Its phenomenal content is its subjective character—how an experience privately and inwardly feels to you, what it is like to have it. But this particular content, it seems, is accessible only to a single person—you, the experiencing subject. And who is that?

To form a successful theory of consciousness, we must match first-person phenomenal content to third-person brain content. We must somehow reconcile the inner perspective of the experiencing self with the outside perspective of science. And there will always be many of us who intuitively think this can never be done. Many people think consciousness is ontologically irreducible (as philosophers say), because first-person facts cannot be reduced to third-person facts. It is more likely, however, that consciousness is epistemically irreducible (as philosophers say). The idea is simple: One reality, one kind of fact, but two kinds of knowledge: first-person knowledge and third-person knowledge. Even though consciousness is a physical process, these two different forms of knowing can never be conflated. Knowing every last thing about a person’s brain states will never allow us to know what they are like for the person herself. But the concept of a first-person perspective turns out to be vague the moment we take a close look at it. What is this mysterious first person? What does the word “I” refer to? If not simply to the speaker, does it refer to anything in the known world at all? Is the existence of an experiencing self a necessary component of consciousness? I don’t think it is—for one thing, because there seem to be “self-less” forms of conscious experience. In certain severe psychiatric disorders, such as Cotard’s syndrome, patients sometimes stop using the first-person pronoun and, moreover, claim that they do not really exist. M. David Enoch and William Trethowan have described such cases in their book Uncommon Psychiatric Syndromes: “Subsequently the subject may proceed to deny her very existence, even dispensing altogether with the use of the personal pronoun ‘I’. One patient even called herself ‘Madam Zero’ in order to emphasize her non-existence. One [patient] said, referring to herself, ‘It’s no use. Wrap it up and throw “it” in the dustbin’.”18

Mystics of all cultures and all times have reported deep spiritual experiences in which no “self” was present, and some of them, too, stopped using the pronoun “I.” Indeed, many of the simple organisms on this planet may have a consciousness tunnel with nobody living in it. Perhaps some of them have only a consciousness “bubble” instead of a tunnel, because, together with the self, awareness of past or future disappears as well.

Note that up to now, in defining the problems for a grand unified theory of consciousness, we have assumed only a minimalist notion: the appearance of a world. But as you are reading these sentences, not only is the light on but there is also somebody home. Human consciousness is characterized by various forms of inwardness, all of which influence one another: First, it is an internal process in the nervous system; second, it creates the experience of being in a world; third, the virtual window of presence gives us temporal internality, a Now. But the deepest form of inwardness was the creation of an internal self/world border.

In evolution, this process started physically, with the development of cell membranes and an immune system to define which cells in one’s body were to be treated as one’s own and which were intruders.19 Billions of years later, nervous systems were able to represent this self/world distinction on a higher level—for instance, as body boundaries delineated by an integrated but as yet unconscious body schema. Conscious experience then elevated this fundamental strategy of partitioning reality to a previously unknown level of complexity and intelligence. The phenomenal self was born, and the conscious experience of being someone gradually emerged. A self-model, an inner image of the organism as a whole, was built into the world-model, and this is how the consciously experienced first-person perspective developed.

How to comprehend subjectivity is the deepest puzzle in consciousness research. In order to overcome it, we must understand how the conscious self was born into the tunnel, how nature managed to evolve a centered model of reality, creating inner worlds that not only appear but that appear to someone. We must understand how the consciousness tunnel turned into an Ego Tunnel.

CHAPTER TWO APPENDIX

THE UNITY OF CONSCIOUSNESS: A CONVERSATION WITH WOLF SINGER


Wolf Singer is professor of neurophysiology and director of the Department of Neurophysiology at the Max Planck Institute for Brain Research in Frankfurt, Germany. In 2004, he founded the Frankfurt Institute for Advanced Studies (FIAS), which conducts basic theoretical research in various areas of science, bringing together theorists from the disciplines of biology, chemistry, neuroscience, physics, and computer science. His main research interest lies in understanding the neuronal processes underlying higher cognitive functions, such as visual perception, memory, and attention. He is also dedicated to making the results of brain research known to the general public and is a recipient of the Max Planck Prize for Public Science.

Singer has been particularly active in the philosophical debate concerning free will. He is coeditor (with Christoph Engel) of Better Than Conscious? Decision Making, the Human Mind, and Implications for Institutions (2008).

Metzinger: Wolf, given the current state of the art, what is the relation between consciousness and feature-binding?

Singer: A unique property of consciousness is its coherence. The contents of consciousness change continuously, at the pace of the experienced present, but at any one moment all the contents of phenomenal awareness are related to one another, unless there is a pathological condition causing a disintegration of conscious experience. This suggests a close relation between consciousness and binding. It seems that only those results of the numerous computational processes that have been bound successfully will enter consciousness simultaneously. This notion also establishes a close link among consciousness, short-term memory, and attention. Evidence indicates that stimuli need to be attended to in order to be perceived consciously, and only then will they have access to short-term memory.

Metzinger: But why is there a binding problem to begin with?

Singer: The binding problem results from two distinct features of the brain: First, the brain is a highly distributed system, in which a very large number of operations are carried out in parallel; second, it lacks a single convergence center, in which the results of these parallel computations could be evaluated in a coherent way. The various processing modules are interconnected, in an exceedingly dense and complex network of reciprocal connections, and these appear to be generating globally ordered states, by means of powerful self-organizing mechanisms. It follows that representations of complex cognitive contents—perceptual objects, thoughts, action plans, reactivated memories—must have a distributed structure as well. This requires that neurons participating in a distributed representation of a particular type of content convey two messages in parallel: First, they have to signal whether the feature they’re tuned to is present; second, they have to indicate which of the many other neurons they’re cooperating with in forming a distributed representation. It is widely accepted that neurons signal the presence of the feature they encode by increasing their discharge frequency; however, there’s less consensus about how neurons signal with which other neurons they cooperate.

Metzinger: What are the constraints for such a signaling?

Singer: Because representations of cognitive contents can change rapidly, the signature that defines these relations needs to be decipherable with very high temporal resolution. We’ve proposed that this relation-defining signature is the precise synchronization of the discharges of the individual neurons.

Metzinger: But why synchronization?

Singer: Precise synchronization increases the impact of neuronal discharges, favoring further joint processing of the synchronized messages. Further evidence indicates that such synchronization is best achieved if neurons engage in rhythmic, oscillatory discharges, because oscillatory processes can be synchronized more easily than temporally unstructured activation sequences.

Metzinger: Then this isn’t just a hypothesis—there’s supportive experimental evidence.

Singer: Since the discovery of synchronized oscillatory discharges in the visual cortex more than a decade ago, more and more evidence has supported the hypothesis that synchronization of oscillatory activity may be the mechanism for the binding of distributed brain processes. The relevant oscillation frequencies differ for different structures; in the cerebral cortex they typically cover the range of beta and gamma oscillations, 20 to 80 Hz. What makes the synchronization phenomena particularly interesting in the present context is that they occur in association with a number of functions relevant for conscious experience.
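To give a concrete, if highly simplified, sense of what “precisely synchronized oscillations” means in signal terms, here is an illustrative sketch (not Singer’s experimental procedure): two simulated 40-Hz signals that oscillate in phase yield a phase-locking value near 1, while a third signal at the same frequency but with a slowly drifting phase does not. All parameters are invented, and the phase-locking value is just one common way of quantifying synchrony.

```python
import numpy as np
from scipy.signal import hilbert

# Toy illustration of binding by synchrony: two 40-Hz signals oscillating
# in phase (same assembly) versus one with a drifting phase (different
# assembly). A phase-locking value (PLV) near 1 means precise synchrony.

rng = np.random.default_rng(0)
fs, duration, f = 1000, 2.0, 40.0                 # sample rate, seconds, Hz
t = np.arange(0, duration, 1 / fs)

neuron_a = np.sin(2 * np.pi * f * t) + 0.3 * rng.standard_normal(t.size)
neuron_b = np.sin(2 * np.pi * f * t) + 0.3 * rng.standard_normal(t.size)
drift = 2 * np.pi * 1.5 * t                       # slow 1.5-Hz phase drift
neuron_c = np.sin(2 * np.pi * f * t + drift) + 0.3 * rng.standard_normal(t.size)

def plv(x, y):
    """Phase-locking value computed from the analytic phases of two signals."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

print(f"PLV(a, b) = {plv(neuron_a, neuron_b):.2f}   # in phase: close to 1")
print(f"PLV(a, c) = {plv(neuron_a, neuron_c):.2f}   # drifting phase: much lower")
```

The experiments described below turn on essentially this kind of contrast: widely distributed signals that either do or do not join into a globally synchronized pattern.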

Metzinger: Which functions are those?

Singer: These oscillations occur during the encoding of perceptual objects, when coherent representations of the various attributes of these objects have to be formed. The oscillations are consistently observed when subjects direct their attention toward an object and retain information about it in working memory. And finally, the oscillations are a distinctive correlate of conscious perception.

Metzinger: What is the evidence here?

Singer: In a test in which subjects are exposed to stimuli that are degraded by noise so that the stimuli are consciously perceived only half the time, you can study the brain activity selectively associated with conscious experience. Since the physical attributes of the stimuli are the same throughout, you can simply compare brain signals in cases where the subjects consciously perceive the stimuli with the signals in cases where they don’t. Investigations reveal that during conscious perception, widely distributed regions of the cerebral cortex transiently engage in precisely synchronized high-frequency oscillations. When the stimuli are not consciously perceived, the various processing regions still engage in high-frequency oscillations—indicating that some stimulus-processing is performed—but these are local processes and do not join into globally synchronized patterns. This suggests that access to consciousness requires that a sufficiently large number of processing areas—or in other words, a sufficient number of distributed computations—be bound by synchronization and that those coherent states be maintained over a sufficiently long period.

Metzinger: This could be interesting from a philosophical perspective. Wouldn’t this ideally account for the unity of consciousness?

Singer: Indeed, this would also account for the unity of consciousness—for the fact that the contents of phenomenal awareness, although they change from moment to moment, are always experienced as coherent. Admittedly, the argument is somewhat circular, but if it is a necessary prerequisite for access to consciousness that activity be sufficiently synchronized across a sufficient number of processing regions, and if synchronization is equivalent with semantic binding, with integrating the meaning, it follows that the contents of consciousness can only be coherent.

Metzinger: What remains to be shown, if what you describe here turns out to be the case?

Singer: Even if the proposed scenario turns out to be true, the question remains as to whether we have arrived at a satisfactory description of the neuronal correlates of consciousness. What do we gain by saying that the neuronal correlate of consciousness is a particular metastable state of a very complex, highly dynamic, nonstationary distributed system—a state characterized by sequences of ever-changing patterns of precisely synchronized oscillations? Further research will lead to more detailed descriptions of such states—but these will likely be abstract, mathematical descriptions of state vectors. Eventually, advanced analytic methods may reveal the semantic content, the actual meaning of such state vectors, and it may become possible to manipulate these states and thereby alter the contents of consciousness, thus providing causal evidence for the relation between neuronal activity and the contents of phenomenal awareness. However, this is probably about as close as we can come, in our attempts to identify the neuronal correlates of consciousness. How these neuronal activation patterns eventually give rise to subjective feelings, emotions, and so on, will probably remain a conundrum for quite some time even if we arrive at precise descriptions of neuronal states corresponding to consciousness.

Metzinger: In your field, what are the most urgent questions, and where is the field moving?

Singer: The most challenging questions are how information is encoded in distributed neuronal networks and how subjective feelings, the so-called qualia, emerge from distributed neuronal activity. It is commonly held that neurons convey information by modulating their discharge rate—that is, by signaling the presence of contents for which they are specialized through increases in their firing rate. However, accumulating evidence suggests that complex cognitive contents are encoded by the activity of distributed assemblies of neurons and that the information is contained in the relations between the amplitudes and in the duration of the discharges. The great challenge for future work is to extract the information encoded in these high-dimensional time series. This requires simultaneous recordings from a large number of neurons and identification of the relevant spatio-temporal patterns. It is still unclear which aspects of the large number of possible patterns the nervous system exploits to encode information, so searching for these patterns will require developing new and highly sophisticated mathematical search algorithms. Thus, we’ll need close collaboration between experimentalists and theoreticians to advance our understanding of the neuronal processes underlying higher cognitive functions.
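As a crude illustration of what extracting information from such high-dimensional recordings can amount to (a sketch of the general idea only, not of the methods used in Singer’s laboratory), one can simulate binned spike counts from a small population under two stimulus conditions and check whether a simple nearest-centroid decoder reads the stimulus back out of the distributed pattern. All rates and tuning parameters below are invented.

```python
import numpy as np

# Crude sketch of population decoding: simulate spike counts from 50
# neurons under two stimulus conditions, then decode the stimulus from
# the distributed pattern with a nearest-centroid rule. Numbers invented.

rng = np.random.default_rng(1)
n_neurons, n_trials = 50, 200

base_rate = rng.uniform(2.0, 10.0, size=n_neurons)     # spikes per time bin
tuning = rng.normal(0.0, 2.0, size=n_neurons)           # per-neuron preference
labels = rng.integers(0, 2, size=n_trials)               # stimulus A = 0, B = 1
rates = base_rate + np.outer(labels, tuning)              # condition shifts rates
counts = rng.poisson(np.clip(rates, 0.1, None))           # noisy spike counts

train, test = np.arange(150), np.arange(150, n_trials)
centroid_a = counts[train][labels[train] == 0].mean(axis=0)
centroid_b = counts[train][labels[train] == 1].mean(axis=0)

dist_a = np.linalg.norm(counts[test] - centroid_a, axis=1)
dist_b = np.linalg.norm(counts[test] - centroid_b, axis=1)
predicted = (dist_b < dist_a).astype(int)

accuracy = (predicted == labels[test]).mean()
print(f"Decoding accuracy from the population pattern: {accuracy:.0%}")
```

Real recordings, of course, pose the much harder problem Singer points to: the relevant patterns are temporal as well as spatial, and it is not known in advance which of them the nervous system actually uses.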

Metzinger: Wolf, why are you so interested in philosophy, and what kind of philosophy would you like to see in the future? What relevant contributions from the humanities are you waiting for?

Singer: My interest in philosophy is nurtured by the evidence that progress in neurobiology will provide some answers to the classic questions treated in philosophy. This is the case for epistemology, philosophy of mind, and moral philosophy. Progress in cognitive neuroscience will tell us how we perceive and to what extent our perceptions are reconstructions rather than representations of absolute realities. As we learn more about the emergence of mental functions from complex neuronal interactions, we will gain insight into possible solutions of the mind-body problem, and as we learn to understand how our brains assign values and distinguish between appropriate and inappropriate conditions, we will learn more about the evolution and constitution of morality.

Conversely, cognitive neuroscience needs the humanities—for several reasons. First, progress in the neurosciences raises a large number of new ethical problems, and these need to be addressed not only by neurobiologists but also by representatives of the humanities. Second, as neuroscience progresses, more and more phenomena that have traditionally been the subject of humanities research can be investigated with neuroscientific methods; thus, the humanities will provide the taxonomy and description of phenomena awaiting investigation at the neuronal level. Brain research begins with the analysis of such phenomena as empathy, jealousy, altruism, shared attention, and social imprinting—phenomena that have traditionally been described and analyzed by psychologists, sociologists, economists, and philosophers. Classification and precise description of these phenomena are prerequisites for the neuroscientific attempts to identify the underlying neuronal processes. There will undoubtedly be close collaborations in the near future between the neurosciences and the humanities—a fortunate development, as it promises to overcome some of the dividing lines that have segregated the natural sciences from the humanities over the last centuries.