The Ego Tunnel: The Science of the Mind and the Myth of the Self - Thomas Metzinger (2009)
Part II. IDEAS AND DISCOVERIES
Chapter 4. FROM OWNERSHIP TO AGENCY TO FREE WILL
Before the use of external tools could develop, a neurodynamic tool had to be in place in our brains. I have been calling this inner tool the PSM, the phenomenal self-model, a distinct and coherent pattern of neural activity that allows you to integrate parts of the world into an inner image of yourself as a whole. Only if you have a self-model can you experience your hands and your arms as parts of your own body. Only if you have a self-model can you experience certain cognitive processes in your brain as your own thoughts and certain events in the motor parts of your brain as your own intentions and acts of will. Our next step is the step from ownership to agency.
THE ALIEN HAND
Imagine that about ten days after undergoing heart surgery you notice a weakness in your left side and experience difficulties walking. For the past three days, you have also had a more specific problem: Somehow, you keep losing control of your left hand—it is acting on its own. Last night, you awoke several times because your left hand was trying to choke you, and you had to use your right hand to fight it off. During the day, your left hand sometimes unbuttons your hospital gown just after your right hand has buttoned it up. Your left hand crushes the paper cups on your tray or starts fighting with your right hand while you’re trying to answer the phone. It’s an unpleasant situation, to say the least—as if someone “from the moon” were controlling your hand. Sometimes you wonder whether it has a mind of its own.1
What does it mean for something to “have a mind of its own”? Having a mind means possessing inner states that have content and embedding such thoughts and inner images of the world into a self-model. Then the organism harboring them can know that they are occurring within itself. So far, so good. But there is an important aspect of having a mind of your own that we’ve not yet discussed: You also need explicit representations of goal-states—your requirements, your desires, your values, what you want to achieve by acting in the world. And you need a conscious Ego to appropriate these goal-states, to make them your own. Philosophers call this having “practical intentionality”: Mental states are often directed at the fulfillment of your personal goals. Having a mind means being not only a thinker and a knower but also an agent—an acting self, with a will of one’s own.
That is where the Alien Hand syndrome, the neurological disorder just described, comes in. The syndrome was first described in 1908, but the term was not introduced until 1972, and it still isn’t clear what the necessary and sufficient conditions in the brain for this kind of disorder are.2 The alien hand crushing cups on the tray and fighting with the healthy right hand seems to have a will of its own. When the alien hand begins unbuttoning the patient’s gown, this is not automatic behavior like the knee-jerk reflex; it appears to be guided by an explicit goal-representation. Apparently a little agent is embedded in the bigger agent—a subpersonal entity pursuing its own goals by hijacking a body part that belongs to the patient. In another typical case, a patient will pick up a pencil and begin scribbling with one hand, reacting with dismay when she becomes aware of this. She will then immediately withdraw the pencil, pull the alien hand to her side with her “good” hand, and indicate that she did not initiate the scribbling herself.3 Another such case study describes the patient’s left hand groping for nearby objects and picking and pulling at her clothes to the point that she refers to her errant hand as an autonomous entity.4
These cases are interesting from a philosophical point of view, because any convincing philosophical theory of the conscious self will have to explain the dissociation of ownership and agency. Patients suffering from Alien Hand syndrome still experience the hand as their own hand; the conscious sense of ownership is still there, but there is no corresponding experience of will in the patient’s mind. As philosophers say, the “volitional act” is missing, and the goal-state driving the alien hand’s behavior is not represented in the person’s conscious mind. The fact that the arm is clearly a subpersonal part of the body makes it even more striking to see how the patient automatically attributes something like intentionality and personhood to it, treating it as an autonomous agent. This conflict between the hand and the willing self can even become a conflict between the hand and the thinking self. For instance, when one patient’s left hand made a move he did not wish to make in a game of checkers, he corrected the move with his right hand. Then, to his frustration, the isolated functional module in his brain that was driving his left arm caused it to repeat the unwanted move.5
Here is the philosophical problem: Is the unwanted move in the game of checkers an action—that is, a bodily movement directly caused by an explicit goal-representation—or is it only an event, something that just happens, caused by something else? At one extreme of the philosophical spectrum, we find denial of the freedom of will: No such things as “actions” or “agents” exist, and, strictly speaking, predetermined physical events are all that have ever existed. We are all automata. If our hardware is damaged, individual subsystems may act up—a sad fact, but certainly no mystery. The other extreme is to hold that there are no blind, purely physical events in the universe at all, that every single event is a goal-driven action, caused by a person—for instance, by the mind of God. Nothing happens by chance; everything is purposeful and ultimately willed.
In fact, in some psychiatric syndromes, patients experience every consciously perceived event in their environment as directly caused by themselves. In other mental diseases, such as schizophrenia, one may feel that one’s body and thoughts are remote-controlled and that the whole world is one big machine, a soulless and meaningless mechanism grinding away. Note that both types of observations illustrate my claim in chapter 1 that we must view the brain as a reality engine: It is a system that constantly makes assumptions about what exists and what doesn’t, thereby creating an inner reality including time, space, and causal relations. Psychiatric diseases are reality-models—alternate ontologies developed to cope with serious and often specific problems. Interestingly, in almost all cases these alternate ontologies can be mapped onto a philosophical ontology—that is, they will correspond to some well-established metaphysical idea about the deeper structure of reality (radical determinism, say, or the omnipotent, omnipresent God’s-eye view).
But to return to the original question: Do actions as such really exist? A position between the two philosophical extremes would define “action” as a particular kind of physical event. Most events in the physical universe are only events, but an extremely tiny subset are also actions—that is, events caused by an explicit goal-representation in the conscious mind of a rational agent. Goal-states must be owned by being part of a self-model. No Ego Tunnel, no action.
The alien hand, however, is not a distinct entity with an Ego Tunnel. It is just a body part and has no self-model. It does not know about its existence, nor does a world appear to it. Due to a brain lesion, it is driven by one of the many unconscious goal-representations constantly fighting for attention in your brain—plausibly, it is driven by visually perceived objects in your immediate vicinity that give rise to what psychologists and philosophers call affordances. There is good evidence that the brain portrays visual objects not only as such but also in terms of possible movements: Is this something I could grasp? Is this something I could unbutton? Is this something I could eat or drink?
The self-model is an important part of the selection mechanism. Right now, as you are reading this book, it is protecting you from these affordances, preventing them from taking over parts of your body. If I were to put a plate of your favorite chocolate cookies in front of you and if you had the firm determination not to reach for it, how long could you keep concentrating on the book? How long before a brief episode of Alien Hand syndrome would pop up and your left hand would do something you hadn’t told it to do? The stronger and more stable your self-model, the less susceptible you are to the affordances surrounding you. Autonomy comes in degrees; it has to do with immunization, with shielding yourself from infection by potential goal-states in the environment.
The phenomenal experience of ownership and the phenomenal experience of agency are thus intimately related, and both are important aspects of the conscious sense of self. If you lose control over your actions, your sense of self is greatly diminished. This is also true of inner actions; for example, many schizophrenics feel that not only their bodies but even their thoughts are controlled by alien forces. One of my pet ideas for many years might well turn out to be true—namely, that thinking is a motor process. Could thoughts be models of successfully terminated actions but from a God’s-eye view—that is, independent of your own vantage point? Could they be abstract forms of grasping—of holding an object and taking it in, into your self? As I discuss in the chapter on the Empathic Ego, there is solid empirical evidence showing that the hand is represented in Broca’s area, a part of our brain that is of recent evolution, distinguishes us from monkeys, and has to do with language production and abstract meaning. The thinking self would then have grown out of the bodily self, by simulating bodily movements in an abstract, mental space. I have been flirting with this idea for a long time, because it would solve Descartes’ mind-body problem; it would show how a thinking thing—a res cogitans—could have evolved out of an extended thing, a res extensa. And this points to a theme running through much of the recent research on agency and the self: In its origin, the Ego is a neurocomputational device for appropriating and controlling the body—first the physical one and then the virtual one.
There is a kind of agency even more subtle than the ability to experience yourself as a coherent acting self and the direct cause of change: This is what I call attentional agency. Attentional agency is the experience of being the entity that controls what Edmund Husserl described as Blickstrahl der Aufmerksamkeit—the “ray of attention.” As an attentional agent, you can initiate a shift in attention and, as it were, direct your inner flashlight at certain targets: a perceptual object, say, or a specific feeling. In many situations, people lose the property of attentional agency, and consequently their sense of self is weakened. Infants cannot control their visual attention; their gaze seems to wander aimlessly from one object to another, because this part of their Ego is not yet consolidated. Another example of consciousness without attentional control is the dream state, and, as I discuss in the next chapter, the Ego of the dream state is indeed very different from that of the waking state. In other cases, too, such as severe drunkenness or senile dementia, you may lose the ability to direct your attention—and, correspondingly, feel that your “self” is falling apart.
Then there is cognitive agency, an interesting parallel to what philosophers call the “cognitive subject.” The cognitive subject is a thinker of thoughts and can also ascribe this faculty to herself. But often thoughts just drift by, like clouds. Meditators—like the Tibetan monks in chapter 2—strive to diminish their sense of self, letting their thoughts drift by instead of clinging to their content, attentively but effortlessly letting them dissolve. If you had never had the conscious experience of causing your own thoughts, ordering and sustaining them, being attached to their content, you would never have experienced yourself as a thinking self. That part of your self-model would simply have dried up and withered away. In order to have Descartes’ experience of the Cogito—the robust experience of being a thinking thing, an Ego—you must also have had the experience of deliberately selecting the contents of your mind. This is what the various forms of agency have in common: Agency allows us to select things: our next thought, the next perceptual object we want to focus on, our next bodily movement. It is also the experience of executive consciousness—not only the experience of initiating change but also of carrying it through and sustaining a more complex action over time. At least this is the way we have described our inner experience for centuries.
A related aspect that bodily agency, attentional agency, and cognitive agency have in common is the subjective sense of effort. Phenomenologically, it is an effort to move your body. It is also an effort to focus your attention. And it certainly is an effort to think in a concentrated, logical fashion. What is the neural correlate of the sense of effort? Imagine we knew this neural correlate (we will soon), and we also had a precise and well-tested mathematical model describing what is common to all three kinds of experiencing a sense of effort. Imagine you are a future mathematician who can understand this description in all of its intricate detail. Now, given this detailed conceptual knowledge, you introspect your own sense of effort, very gently, but with great precision. What would happen? If you were to gently and carefully attend to, say, the sense of effort going along with an act of will, would it still appear as something personal, something that belongs to you?
The Alien Hand syndrome forces us to conclude that what we call the will can be outside our self-model as well as inside it. Such goal-directed movements might not even be consciously experienced at all. In a serious neurological disorder called akinetic mutism, patients do nothing but lie silently in their beds. They have a sense of ownership of their body as a whole, but although they are awake (and go through the ordinary sleep-wake cycle), they are not agents: They do not act in any way. They do not initiate any thoughts. They do not direct their attention. They do not talk or move.6 Then there are those cases in which parts of our bodies perform complex goal-directed actions without our having the conscious experience of these being our actions or our goals, without a conscious act of will having preceded them—in short, without the experience of being an agent. Another interesting aspect—and the third empirical fact that any philosophy of the conscious self must explain—is how, for instance, schizophrenics sometimes lose the sense of agency and executive consciousness entirely and feel themselves to be remote-controlled puppets.
Many of our best empirical theories suggest that the special sense of self associated with agency has to do both with the conscious experience of having an intention and with the experience of motor feedback. That is, the experience of selecting a certain goal-state must be integrated with the subsequent experience of bodily movement. The self-model achieves just that. It binds the processes by which the mind creates and compares competing alternatives for action with feedback from your bodily movements. This binding turns the experience of movement into the experience of an action. But note, once again, that neither the “mind” nor the self-model is a little man in the head; there is no one doing the creating, the comparing, and the deciding. If the dynamical-systems theory is correct, then all of this is a case of dynamical self-organization in the brain. If for some reason the two core elements—the selection of a specific movement pattern and ongoing motor feedback—cannot be successfully bound, you might experience your bodily movements as uncontrolled and erratic (or as controlled by someone else, as schizophrenics sometimes do). Or you might experience them as willed and goal-directed but not as self-initiated, as in the Alien Hand syndrome.
Thus, agency is something independent, because one can retain the sense of ownership yet lose the sense of agency. But can one also hallucinate agency? The answer is yes—and, oddly, many consciousness philosophers have long ignored this phenomenon. You can have the robust, conscious experience of having intended an action even if this wasn’t the case. By directly stimulating the brain, we can trigger not only the execution of a bodily movement but also the conscious experience of having the urge to perform that movement. We can experimentally induce the conscious experience of will.
Here’s an example. Stéphane Kremer and his colleagues at the University Hospital of Strasbourg stimulated a specific brain region (the ventral bank of the anterior cingulate sulcus) in a female patient with medically intractable epileptic seizures, in order to locate the epileptogenic zone before performing surgery. In this case, the stimulation caused rapid eye movements scanning both sides of the visual field. The patient began to search for the nearest object she could grasp, and the arm that was opposite the stimulated side—her left arm—began to wander to the right. She reported a strong “urge to grasp,” which she was unable to control. As soon as she saw a potential target object, her left hand moved toward it and seized it. On the level of her conscious experience, the irrepressible urge to grasp the object started and ended with the stimulation of her brain. This much is clear: Whatever else the conscious experience of will may be, it seems to be something that can be turned on and off with the help of a small electrical current from an electrode in the brain.7
But there are also ways of elegantly inducing the experience of agency by purely psychological means. In the 1990s at the University of Virginia, psychologists Daniel M. Wegner and Thalia Wheatley investigated the necessary and sufficient conditions for “the experience of conscious will” with the help of an ingenious experiment. In a study they dubbed “I Spy,” they led subjects to experience a causal link between a thought and an action, managing to induce the feeling in their subjects that the subjects had willfully performed an action even though the action had in fact been performed by someone else.8
Each subject was paired with a confederate, who posed as another subject. They sat at a table across from each other and were asked to place their fingertips on a little square board mounted on a computer mouse, enabling them to move the mouse together, Ouija-board style. On a computer screen visible to both was a photograph from a children’s book showing some fifty objects (plastic dinosaurs, cars, swans, and so on).
The real subject and the confederate both wore headphones, and it was explained to them that this was an experiment meant to “investigate people’s feelings of intention for acts and how these feelings come and go.” They were told to move the mouse around the computer screen for thirty seconds or so while listening to separate audio tracks containing random words—some of which would refer to one or another object on the screen—along with ten-second intervals of music. The words on each track would be different, but the timing of the music would be the same. When they heard the music, they were to stop the mouse on an object after a few seconds and “rate each stop they made for personal intentionality.” Unknown to the subject, however, the confederate did not hear any words or music at all but instead received instructions from the experimenters to perform particular movements. For four of the twenty to thirty trials, the confederate was told to stop the mouse on a particular object (each time a different one); these forced stops were made to occur within the prescribed musical interval and at various times after the subject had heard the corresponding word over her headphones (“swan,” say).9
Figure 15: Hallucinated agency. How to make subjects think they initiated a movement they never intended. Figure courtesy of Daniel Wegner.
According to the subjects’ ratings, there was a general tendency to perceive the forced stops as intended. The ratings were highest when the corresponding word occurred between one and five seconds before the stop. Based on these findings, Wegner and Wheatley suggest that the phenomenal experience of will, or mental causation, is governed by three principles: The principle of exclusivity holds that the subject’s thought should be the only introspectively available cause of action; the principle of consistency holds that the subjective intention should be consistent with the action; and the principle of priority holds that the thought should precede the action “in a timely manner.”10
The social context and the long-term experience of being an agent of course contribute to creating the sense of agency. One might suspect that the sense of agency is only a subjective appearance, a swift reconstruction after the act; still, today’s best cognitive neuroscience of the conscious will shows that it is also a preconstruction.11 Experiencing yourself as a willing agent has much to do with, as it were, introspectively peeping into the middle of a long processing chain in your brain. This chain leads from certain preparatory processes that might be described as “assembling a motor command” to the feedback you get from perceiving your movements. Patrick Haggard, of University College London, perhaps the leading researcher in the fascinating and somewhat frightening new field of research into agency and the self, has demonstrated that our conscious awareness of movement is not generated by the execution of ready-made motor commands; instead, it is shaped by preparatory processes in the premotor system of the brain. Various experiments show that our awareness of intention is closely related to the specification of which movements we want to make. When the brain simulates alternative possibilities—say, of reaching for a particular object—the conscious experience of intention seems to be directly related to the selection of a specific movement. That is, the awareness of movement is associated not so much with the actual execution as with an earlier brain stage: the process of preparing a movement by assembling different parts of it into a coherent whole—a motor gestalt, as it were.
Haggard points out that the awareness of intention and the awareness of movement are conceptually distinct, but he speculates that they must derive from a single processing stage in the motor pathway. It looks as though our access to the ongoing motor-processing in our brains is extremely restricted; awareness is limited to a very narrow window of premotor activity, an intermediate phase of a longer process. If Haggard is right, then the sense of agency, the conscious experience of being someone who acts, results from the process of binding the awareness of intention together with the representation of one’s actual movements. This also suggests what subjective awareness of intention is good for: It can detect potential mismatches with events occurring in the world outside the brain.
Whatever the precise technical details turn out to be, we are now beginning to see what the conscious experience of agency is and how to explain its evolutionary function. The conscious experience of will and of agency allows an organism to own the subpersonal processes in its brain responsible for the selection of action goals, the construction of specific movement patterns, and the control of feedback from the body. When this sense of agency evolved in human beings, some of the stages in the immensely complex causal network in our brains were raised to the level of global availability. Now we could attend to them, think about them, and possibly even interrupt them. For the first time, we could experience ourselves as beings with goals, and we could use internal representations of these goals to control our bodies. For the first time, we could form an internal image of ourselves as able to fulfill certain needs by choosing an optimal route. Moreover, conceiving of ourselves as autonomous agents enabled us to discover that other beings in our environment probably were agents, too, who had goals of their own. But I must postpone this analysis of the social dimension of the self for a while and turn to a classical problem of philosophy of mind: the freedom of the will.
HOW FREE ARE WE?
As noted previously, the philosophical spectrum on freedom of the will is a wide one, ranging from outright denial to the claim that all physical events are goal-driven and caused by a divine agent, that nothing happens by chance, that everything is, ultimately, willed. The most beautiful idea, perhaps, is that freedom and determinism can peacefully coexist: If our brains are causally determined in the right way, if they make us causally sensitive to moral considerations and rational arguments, then this very fact makes us free. Determinism and free will are compatible. However, I take no position on free will here, because I am interested in two other points. I address the first by asking one simple question: What does ongoing scientific research on the physical underpinnings of actions and of conscious will tell us about this age-old controversy?
Probably most professional philosophers in the field would hold that given your body, the state of your brain, and your specific environment, you could not act differently from the way you’re acting now—that your actions are preordained, as it were. Imagine that we could produce a perfect duplicate of you, a functionally identical twin who is an exact copy of your molecular structure. If we were to put your twin in exactly the same situation you’re in right now, with exactly the same sensory stimuli impinging on him or her, then initially the twin could not act differently from the way you’re acting. This is a widely shared view: It is, simply, the scientific worldview. The current state of the physical universe always determines the next state of the universe, and your brain is a part of this universe.12
The phenomenal Ego, the experiential content of the human self-model, clearly disagrees with the scientific worldview—and with the widely shared opinion that your functionally identical doppelgänger could not have acted otherwise. If we take our own phenomenology seriously, we clearly experience ourselves as beings that can initiate new causal chains out of the blue—as beings that could have acted otherwise given exactly the same situation. The unsettling point about modern philosophy of mind and the cognitive neuroscience of will, already apparent even at this early stage, is that a final theory may contradict the way we have been subjectively experiencing ourselves for millennia. There will likely be a conflict between the scientific view of the acting self and the phenomenal narrative, the subjective story our brains tell us about what happens when we decide to act.
We now have a theory in hand that explains how subpersonal brain events (for instance, those that specify action goals and assemble suitable motor commands) can become the contents of the conscious self. When certain processing stages are elevated to the level of conscious experience and bound into the self-model active in your brain, they become available for all your mental capacities. Now you experience them as your own thoughts, decisions, or urges to act—as properties that belong to you, the person as a whole. It is also clear why these events popping up in the conscious self necessarily appear spontaneous and uncaused. They are the first link in the chain to cross the border from unconscious to conscious brain processes; you have the impression that they appeared in your mind “out of the blue,” so to speak. The unconscious precursor is invisible, but the link exists. (Recently, this has been shown for the conscious veto, as when you interrupt an intentional action at the last instant.)13 But in fact the conscious experience of intention is just a sliver of a complicated process in the brain. And since this fact does not appear to us, we have the robust experience of being able to spontaneously initiate causal chains from the mental into the physical realm. This is the appearance of an agent. (Here we also gain a deeper understanding of what it means to say that the self-model is transparent. Often the brain is blind to its own workings, as it were.)
The science of the mind is now beginning to reintroduce those hidden facts forcefully into the Ego Tunnel. There will be a conflict between the biological reality tunnel in our heads and the neuroscientific image of humankind, and many people sense that this image might present a danger to our mental health. I think the irritation and deep sense of resentment surrounding public debates on the freedom of the will have little to do with the actual options on the table. These reactions have to do with the (perfectly sensible) intuition that certain types of answers will not only be emotionally disturbing but ultimately impossible to integrate into our conscious self-models. This is the first point.14
A note on the phenomenology of will: It is not as well defined as you might think; color experience, for example, is much crisper. Have you ever tried to observe introspectively what happens when you decide to lift your arm and then the arm lifts? What exactly is the deep, fine-grained structure of cause and effect? Can you really observe how the mental event causes the physical event? Look closely! My prediction is that the closer you look and the more thoroughly you introspect your decision processes, the more you’ll realize that conscious intentions are evasive: The harder you look at them, the more they recede into the background. Moreover, we tend to talk about free will as if we all shared a common subjective experience. This is not entirely true: Culture and tradition exert a strong influence on the way we report such experiences. The phenomenology itself may well be shaped by this, because a self-model also is the window connecting our inner lives with the social practice around us. Free will does not exist in our minds alone—it is also a social institution. The assumption that something like free agency exists, and the fact that we treat one another as autonomous agents, are concepts fundamental to our legal system and the rules governing our societies—rules built on the notions of responsibility, accountability, and guilt. These rules are mirrored in the deep structure of our PSM, and this incessant mirroring of rules, this projection of higher-order assumptions about ourselves, created complex social networks. If one day we must tell an entirely different story about what human will is or is not, this will affect our societies in an unprecedented way. For instance, if accountability and responsibility do not really exist, it is meaningless to punish people (as opposed to rehabilitating them) for something they ultimately could not have avoided doing. Retribution would then appear to be a Stone Age concept, something we inherited from animals. 
When modern neuroscience discovers the sufficient neural correlates for willing, desiring, deliberating, and executing an action, we will be able to cause, amplify, extinguish, and modulate the conscious experience of will by operating on these neural correlates. It will become clear that the actual causes of our actions, desires, and intentions often have very little to do with what the conscious self tells us. From a scientific, third-person perspective, our inner experience of strong autonomy may look increasingly like what it has been all along: an appearance only. At the same time, we will learn to admire the elegance and the robustness with which nature built only those things into the reality tunnel that organisms needed to know, rather than burdening them with a flood of information about the workings of their brains. We will come to see the subjective experience of free will as an ingenious neurocomputational tool. Not only does it create an internal user-interface that allows the organism to control and adapt its behavior, but it is also a necessary condition for social interaction and cultural evolution.
Imagine that we have created a society of robots. They would lack freedom of the will in the traditional sense, because they are causally determined automata. But they would have conscious models of themselves and of other automata in their environment, and these models would let them interact with others and control their own behavior. Imagine that we now add two features to their internal self- and other-person models: first, the erroneous belief that they (and everybody else) are responsible for their own actions; second, an “ideal observer” representing group interests, such as rules of fairness for reciprocal, altruistic interactions. What would this change? Would our robots develop new causal properties just by falsely believing in their own freedom of the will? The answer is yes; moral aggression would become possible, because an entirely new level of competition would emerge—competition about who fulfills the interests of the group best, who gains moral merit, and so on. You could now raise your own social status by accusing others of being immoral or by being an efficient hypocrite. A whole new level of optimizing behavior would emerge. Given the right boundary conditions, the complexity of our experimental robot society would suddenly explode, though its internal coherence would remain. It could now begin to evolve on a new level. The practice of ascribing moral responsibility—even if based on delusional PSMs—would create a decisive, and very real, functional property: Group interests would become more effective in each robot’s behavior. The price for egotism would rise. What would happen to our experimental robot society if we then downgraded its members’ self-models to the previous version—perhaps by bestowing insight?
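The causal claim in this thought experiment can be made concrete with a toy agent-based simulation. Everything below is an illustrative assumption, not anything from the book: ten "robots" play a simple public-goods game, and a `moral_blame` flag stands in for the new practice of accusing defectors of immorality (the ascription of responsibility plus an "ideal observer" of group interests). With the flag off, egotism pays and cooperation collapses; with it on, the price for egotism rises and group interests become effective in each robot's behavior.

```python
from dataclasses import dataclass


@dataclass
class Robot:
    cooperates: bool


def payoff(cooperates, n_cooperators, n, moral_blame):
    # Public-goods payoff: pooled contributions are doubled and shared by all.
    share = 2.0 * n_cooperators / n
    cost = 1.0 if cooperates else 0.0
    # The "moral" practice: defectors lose social status when accused.
    blame = 1.5 if (moral_blame and not cooperates) else 0.0
    return share - cost - blame


def step(robots, moral_blame):
    n = len(robots)
    n_coop = sum(r.cooperates for r in robots)
    new = []
    for r in robots:
        # Each robot myopically picks whichever choice pays more,
        # holding the others' current behavior fixed.
        p_coop = payoff(True, n_coop - r.cooperates + 1, n, moral_blame)
        p_def = payoff(False, n_coop - r.cooperates, n, moral_blame)
        new.append(Robot(p_coop >= p_def))
    return new


def run(moral_blame, rounds=10):
    robots = [Robot(i % 2 == 0) for i in range(10)]  # start half cooperative
    for _ in range(rounds):
        robots = step(robots, moral_blame)
    return sum(r.cooperates for r in robots)  # cooperators remaining
```

In this sketch `run(False)` ends with zero cooperators, while `run(True)` ends with all ten: nothing about the robots' low-level dynamics changed, only the socially instituted cost of defection, which is the "decisive, and very real, functional property" the passage describes.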
A passionate public debate recently took place in Germany on freedom of the will—a failed debate, in my view, because it created more confusion than clarity. Here is the first of the two silliest arguments for the freedom of the will: “But I know that I am free, because I experience myself as free!” Well, you also experience the world as inhabited by colored objects, and we know that out there in front of your eyes are only wavelength mixtures of various sorts. The fact that something appears to you in conscious experience, and appears in a certain way, is not an argument for anything. The second argument goes like this: “But this would have terrible consequences! Therefore, it cannot be true.” I certainly share that worry (think of the robot-society thought experiment), but the truth of a claim must be assessed independently of its psychological or political consequences. This is a point of simple logic and intellectual honesty. But neuroscientists have also added to the confusion—and, interestingly, because they often underestimate the radical nature of their positions. This will be my second point in this section.
Neuroscientists like to speak of “action goals,” processes of “motor selection,” and the “specification of movements” in the brain. As a philosopher (and with all due respect), I must say that this, too, is conceptual nonsense. If one takes the scientific worldview seriously, no such things as goals exist, and there is nobody who selects or specifies an action. There is no process of “selection” at all; all we really have is dynamical self-organization. Moreover, the information-processing taking place in the human brain is not even a rule-based kind of processing. Ultimately, it follows the laws of physics. The brain is best described as a complex system continuously trying to settle into a stable state, generating order out of chaos.
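The picture of a system that "settles into a stable state" without any selector is the core idea behind attractor networks. Here is a hedged illustration (a textbook Hopfield-style network, an assumption for this sketch, not a claim about what the brain literally does): each unit merely flips toward its local field, yet the whole system relaxes from a noisy state into a stored pattern, as if something had been "selected."

```python
def sign(x):
    return 1 if x >= 0 else -1


def hopfield_step(w, s):
    # Each unit aligns with its local field; no unit "selects" anything,
    # yet the network as a whole settles into a stable state.
    n = len(s)
    return [sign(sum(w[i][j] * s[j] for j in range(n))) for i in range(n)]


stored = [1, 1, -1, -1, 1, -1, 1, -1]
n = len(stored)

# Hebbian weights for the single stored pattern, zero diagonal.
w = [[0 if i == j else stored[i] * stored[j] for j in range(n)]
     for i in range(n)]

noisy = list(stored)
noisy[0] = -noisy[0]  # corrupt two units
noisy[3] = -noisy[3]

state = noisy
for _ in range(3):
    state = hopfield_step(w, state)
# state has now relaxed back to the stored pattern
```

The point of the sketch is the one made above: the recovery of the pattern looks like goal-directed "selection" from the outside, but the description doing the causal work is pure dynamical self-organization, units following local update rules.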
According to the purely physical background assumptions of science, nothing in the universe possesses an inherent value or is a goal in itself; physical objects and processes are all there is. That seems to be the point of the rigorous reductionist approach—and exactly what beings with self-models like ours cannot bring themselves to believe. Of course, there can be goal representations in the brains of biological organisms, but ultimately—if neuroscience is to take its own background assumptions seriously—they refer to nothing. Survival, fitness, well-being, and security as such are not values or goals in the true sense of either word; obviously, only those organisms that internally represented them as goals survived. But the tendency to speak about the “goals” of an organism or a brain makes neuroscientists overlook how strong their very own background assumptions are. We can now begin to see that even hardheaded scientists sometimes underestimate how radical a naturalistic combination of neuroscience and evolutionary theory could be: It could turn us into beings that maximized their overall fitness by beginning to hallucinate goals.
I am not claiming that this is the true story, the whole story, or the final story. I am only pointing out what seems to follow from the discoveries of neuroscience and how these discoveries conflict with our conscious self-model. Subpersonal self-organization in the brain simply has nothing to do with what we mean by “selection.” Of course, complex and flexible behaviors caused by inner images of “goals” still exist, and we may also continue to call these behaviors “actions.” But even if actions, in this sense, continue to be part of the picture, we may learn that agents do not—that is, there is no entity doing the acting.15
The study of phantom limbs helped us understand how parts of our bodies can be portrayed in the phenomenal self-model even if they do not exist or have never existed. Out-of-body experiences and full-body illusions demonstrated how a minimal sense of self and the experience of “global ownership” can emerge. A brief look at the Alien Hand and the neural underpinnings of the willing self gave us an idea of how the feeling of agency would, by necessity, appear in our conscious brains and how this fact could have contributed to the formation of complex societies. Next, investigating the Ego Tunnel during the dream state will give us even deeper insight into the conditions under which a true subject of experience emerges. How does the Dream Tunnel become an Ego Tunnel?