The Ego Tunnel: The Science of the Mind and the Myth of the Self - Thomas Metzinger (2009)

Part III. THE CONSCIOUSNESS REVOLUTION

Chapter 7. ARTIFICIAL EGO MACHINES

From this point on, let us call any system capable of generating a conscious self an Ego Machine. An Ego Machine does not have to be a living thing; it can be anything that possesses a conscious self-model. It is certainly conceivable that someday we will be able to construct artificial agents. These will be self-sustaining systems, and their self-models might even allow them to use tools in an intelligent manner. If a monkey’s arm can be replaced by a robot arm, and a monkey’s brain can learn to control that robot arm directly with the help of a brain-machine interface, it should also be possible to replace the entire monkey. Why should a robot not be able to experience the rubber-hand illusion? Or have a lucid dream? If the system has a body model, full-body illusions and out-of-body experiences are clearly also possible.

In thinking about artificial intelligence and artificial consciousness, many people assume there are only two kinds of information-processing systems: artificial ones and natural ones. This is false. In philosophers’ jargon, the conceptual distinction between natural and artificial systems is neither exhaustive nor exclusive: that is, there could be intelligent and/or conscious systems that belong in neither category. With regard to another old-fashioned distinction—software versus hardware—we already have systems using biological hardware that can be controlled by artificial (that is, man-made) software, and we have artificial hardware that runs naturally evolved software.

Hybrid biorobots are an example of the first category. Hybrid biorobotics is a new discipline that uses naturally evolved hardware and does not bother with trying to re-create something that has already been optimized by nature over millions of years. As we reach the limitations of artificial computer chips, we may increasingly use organic, genetically engineered hardware for the robots and artificial agents we construct.

An example of the second category is the use of software patterned on neural nets to run in artificial hardware. Some of these attempts are even using the neural nets themselves; for instance, cyberneticists at the University of Reading (U.K.) are controlling a robot by means of a network of some three hundred thousand rat neurons.1 Other examples are classic artificial neural networks for language acquisition or those used by consciousness researchers such as Axel Cleeremans at the Cognitive Science Research Unit at Université Libre de Bruxelles in Belgium to model the metarepresentational structure of consciousness and what he calls its “computational correlates.”2 The latter two are biomorphic and only semiartificial information-processing systems, because their basic functional architecture is stolen from nature and uses processing patterns that developed in the course of biological evolution. They create “higher-order” states; however, these are entirely subpersonal.

Figure 16: RoboRoach. Controlling the movements of cockroaches with surgically implanted microrobotic backpacks. The roach’s “backpack” contains a receiver that converts the signals from a remote control into electrical stimuli that are applied to the base of the roach’s antennae. This allows the operator to get the roach to stop, go forward, back up, or turn left and right on command.

We may soon have a functionalist theory of consciousness, but this doesn’t mean we will also be able to implement the functions this theory describes on a nonbiological carrier system. Artificial consciousness is not so much a theoretical problem in philosophy of mind as a technological challenge; the devil is in the details. The real problem lies in developing a non-neural kind of hardware with the right causal powers: Even a simplistic, minimal form of “synthetic phenomenology” may be hard to achieve—and for purely technical reasons.

The first self-modeling machines have already appeared. Researchers in the field of artificial life began simulating the evolutionary process long ago, but now we have the academic discipline of “evolutionary robotics.” Josh Bongard, of the Department of Computer Science at the University of Vermont, and his colleagues Victor Zykov and Hod Lipson have created an artificial starfish that gradually develops an explicit internal self-model.3 Their four-legged machine uses actuation-sensation relationships to infer its own structure indirectly and then uses this self-model to generate forward locomotion. When part of its leg is removed, the machine adapts its self-model and generates alternative gaits—it learns to limp. Unlike the phantom-limb patients discussed in chapter 4, it can restructure its body representation following the loss of a limb; thus, in a sense, it can learn. As its creators put it, it can “autonomously recover its own topology with little prior knowledge,” by constantly optimizing the parameters of its resulting self-model. The starfish not only synthesizes an internal self-model but also uses it to generate intelligent behavior.

Self-models can be unconscious, they can evolve, and they can be created in machines that mimic the process of biological evolution. In sum, we already have systems that are neither exclusively natural nor exclusively artificial. Let us call such systems postbiotic. It is likely that conscious selfhood will first be realized in postbiotic Ego Machines.

Figure 17a: Starfish, a four-legged robot that walks by using an internal self-model that it has developed and continuously improves. If it loses a limb, it can adapt its internal self-model.5

HOW TO BUILD AN ARTIFICIAL CONSCIOUS SUBJECT AND WHY WE SHOULDN’T DO IT

Under what conditions would we be justified in assuming that a given postbiotic system has conscious experience? Or that it also possesses a conscious self and a genuine consciously experienced first-person perspective? What turns an information-processing system into a subject of experience? We can nicely sum up these questions by asking a simpler and more provocative one: What would it take to build an artificial Ego Machine?

Figure 17b: The robot continuously cycles through action execution. (A and B) Self-model synthesis. The robot physically performs an action (A). Initially, this action is random; later, it is the best action found in (C). The robot then generates several self-models to match sensor data collected while performing previous actions (B). It does not know which model is correct. (C) Exploratory action synthesis. The robot generates several possible actions that disambiguate competing self-models. (D) Target behavior synthesis. After several cycles of (A) to (C), the best current model is used to generate locomotion sequences through optimization. (E) The best locomotion sequence is executed by the physical device. (F)4
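
To make the cycle in the caption concrete, here is a deliberately toy sketch of the same estimation-exploration logic in Python. Everything in it is invented for illustration and none of it is the researchers’ actual code: the “body” is just two hidden numbers, sensing is a linear function, and the population sizes are arbitrary. What it preserves is the loop itself: act, fit competing self-models to the sensor history, then choose the next action to be the one over which the surviving models disagree most.

```python
# Toy estimation-exploration loop, loosely mirroring panels (A)-(D) above.
# All names and numbers are invented for illustration.
import random

random.seed(0)

TRUE_BODY = [1.0, 0.5]             # hidden "morphology" the robot must infer

def sense(body, action):           # what the physical device would feel
    return sum(b * a for b, a in zip(body, action))

def model_error(model, history):   # (B) how well a candidate self-model
    return sum((sense(model, a) - s) ** 2 for a, s in history)  # explains data

def disagreement(models, action):  # (C) spread of predictions for one action
    preds = [sense(m, action) for m in models]
    return max(preds) - min(preds)

def random_action():
    return [random.uniform(-1, 1) for _ in TRUE_BODY]

history = []
models = [[random.uniform(0, 2) for _ in TRUE_BODY] for _ in range(20)]
action = random_action()           # (A) initially, the action is random

for cycle in range(30):
    history.append((action, sense(TRUE_BODY, action)))     # (A) act and sense
    models.sort(key=lambda m: model_error(m, history))     # (B) fit models
    parents = models[:5]
    children = [[p + random.gauss(0, 0.1) for p in random.choice(parents)]
                for _ in range(15)]
    models = parents + children
    candidates = [random_action() for _ in range(50)]
    action = max(candidates, key=lambda a: disagreement(parents, a))  # (C)

best = min(models, key=lambda m: model_error(m, history))  # (D) best model
print("inferred body:", [round(p, 2) for p in best])       # near TRUE_BODY
```

If TRUE_BODY is perturbed halfway through the run, mimicking the loss of part of a leg, the very same loop re-fits the candidate self-models to the new sensor data — the toy analogue of the robot learning to limp.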

Being conscious means that a particular set of facts is available to you: that is, all those facts related to your living in a single world. Therefore, any machine exhibiting conscious experience needs an integrated and dynamical world-model. I discussed this point in chapter 2, where I pointed out that every conscious system needs a unified inner representation of the world and that the information integrated by this representation must be simultaneously available for a multitude of processing mechanisms. This phenomenological insight is so simple that it has frequently been overlooked: Conscious systems are systems operating on globally available information with the help of a single internal model of reality. There are, in principle, no obstacles to endowing a machine with such an integrated inner image of the world, one that can be continuously updated.

Another lesson from the beginning of this book was that, in its very essence, consciousness is the presence of a world. In order for a world to appear to it, an artificial Ego Machine needs two further functional properties. The first consists of organizing its internal information flow in a way that generates a psychological moment, an experiential Now. This mechanism will pick out individual events in the continuous flow of the physical world and depict them as contemporaneous (even if they are not), ordered, and flowing successively in one direction, like a mental string of pearls. Some of these pearls must form larger gestalts, which can be portrayed as the experiential content of a single moment, a lived Now. The second property must ensure that these internal structures cannot be recognized by the artificial conscious system as internally constructed images. They must be transparent. At this stage, a world would appear to the artificial system. The activation of a unified, coherent model of reality within an internally generated window of presence, when neither can be recognized as a model, is the appearance of a world. In sum, the appearance of a world is consciousness.

But the decisive step to an Ego Machine is the next one. If a system can integrate an equally transparent internal image of itself into this phenomenal reality, then it will appear to itself. It will become an Ego and a naive realist about whatever its self-model says it is. The phenomenal property of selfhood will be exemplified in the artificial system, and it will appear to itself not only as being someone but also as being there. It will believe in itself.
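
The functional conditions sketched in the last three paragraphs can be given a schematic summary. The following Python fragment is purely illustrative: all names, and the half-second window, are assumptions of mine, and it implements only the structure of the claim, not consciousness itself. It shows one integrated world-model, a window of presence that binds recent events into a single Now, a transparency flag marking representations whose model-nature is hidden from introspection, and, as the final step, a transparent self-model integrated into that same world-model.

```python
# Schematic sketch only: invented names, not a real cognitive architecture.
from dataclasses import dataclass, field
import time

@dataclass
class Representation:
    content: dict
    transparent: bool = True   # if True, introspection cannot see "model-hood"

@dataclass
class WorldModel:
    WINDOW: float = 0.5        # seconds bound together into one lived "Now"
    now: list = field(default_factory=list)

    def integrate(self, rep: Representation) -> None:
        # A single model of reality: everything enters one structure, and
        # events outside the window of presence drop out of the "Now."
        t = time.time()
        self.now = [(ts, r) for ts, r in self.now if t - ts < self.WINDOW]
        self.now.append((t, rep))

    def experienced_now(self) -> list:
        # What "appears": contents only. Transparent representations carry no
        # trace of being representations, so the system takes them as reality.
        return [r.content for _, r in self.now if r.transparent]

world = WorldModel()
world.integrate(Representation({"apple": "red, within reach"}))
ego = Representation({"body": "intact", "perspective": "here, now, mine"})
world.integrate(ego)   # a transparent self-model: the system appears to itself
print(world.experienced_now())
```

The point of the sketch is only that nothing in this list of ingredients presupposes biology.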

Note that this transition turns the artificial system into an object of moral concern: It is now potentially able to suffer. Pain, negative emotions, and other internal states portraying parts of reality as undesirable can act as causes of suffering only if they are consciously owned. A system that does not appear to itself cannot suffer, because it has no sense of ownership. A system in which the lights are on but nobody is home would not be an object of ethical considerations; if it has a minimally conscious world-model but no self-model, then we can pull the plug at any time. But an Ego Machine can suffer, because it integrates pain signals, states of emotional distress, or negative thoughts into its transparent self-model, and they thus appear as someone’s pain or negative feelings. This raises an important question of animal ethics: How many of the conscious biological systems on our planet are only phenomenal-reality machines, and how many are actual Ego Machines? How many, that is, are capable of the conscious experience of suffering? Is RoboRoach among them? Or is it only mammals, such as the macaques and kittens sacrificed in consciousness research? Obviously, if this question cannot be decided for epistemological reasons, we must make sure always to err on the side of caution. It is precisely at this stage of development that any theory of the conscious mind becomes relevant for ethics and moral philosophy.

An Ego Machine is also something that possesses a perspective. A strong version should know that it has such a perspective by becoming aware of the fact that it is directed. It should be able to develop an inner picture of its dynamical relations to other beings or objects in its environment, even as it perceives and interacts with them. If we do manage to build or evolve this type of system successfully, it will experience itself as interacting with the world—as attending to an apple in its hand, say, or as forming thoughts about the human agents with whom it is communicating. It will experience itself as directed at goal states, which it will represent in its self-model. It will portray the world as containing not just a self but a perceiving, interacting, goal-directed agent. It could even have a high-level concept of itself as a subject of knowledge and experience.

Anything that can be represented can be implemented. The steps just sketched describe new forms of what philosophers call representational content, and there is no reason this type of content should be restricted to living systems. Alan M. Turing, in his famous 1950 paper “Computing Machinery and Intelligence,” made an argument that was later condensed by the distinguished philosopher Karl Popper in his book The Self and Its Brain, which he coauthored with the Nobel Prize-winning neuroscientist Sir John Eccles. Popper wrote: “Specify the way in which you believe a man is superior to a computer and I shall build a computer which refutes your belief. Turing’s challenge should not be taken up; for any sufficiently precise specification could be used in principle to programme a computer.”6

Of course, it is not the self that uses the brain (as Karl Popper would have it)—the brain uses the self-model. But what Popper clearly saw is the dialectic of the artificial Ego Machine: Either you cannot identify what exactly about human consciousness and subjectivity cannot be implemented in an artificial system or, if you can, then it is just a matter of writing an algorithm that can be implemented in software. If you have a precise definition of consciousness and subjectivity in causal terms, you have what philosophers call a functional analysis. At this point, the mystery evaporates, and artificial Ego Machines become, in principle, technologically feasible. But should we do whatever we’re able to do?

Here is a thought experiment, aimed not at epistemology but at ethics. Imagine you are a member of an ethics committee considering scientific grant applications. One says:

We want to use gene technology to breed mentally retarded human infants. For urgent scientific reasons, we need to generate human babies possessing certain cognitive, emotional, and perceptual deficits. This is an important and innovative research strategy, and it requires the controlled and reproducible investigation of the retarded babies’ psychological development after birth. This is not only important for understanding how our own minds work but also has great potential for healing psychiatric diseases. Therefore, we urgently need comprehensive funding.

No doubt you will decide immediately that this idea is not only absurd and tasteless but also dangerous. One imagines that a proposal of this kind would not pass any ethics committee in the democratic world. The point of this thought experiment, however, is to make you aware that the unborn artificial Ego Machines of the future would have no champions on today’s ethics committees. The first machines satisfying a minimally sufficient set of conditions for conscious experience and selfhood would find themselves in a situation similar to that of the genetically engineered retarded human infants. Like them, these machines would have all kinds of functional and representational deficits—various disabilities resulting from errors in human engineering. It is safe to assume that their perceptual systems—their artificial eyes, ears, and so on—would not work well in the early stages. They would likely be half-deaf, half-blind, and have all kinds of difficulties in perceiving the world and themselves in it—and if they were true artificial Ego Machines, they would, ex hypothesi, also be able to suffer.

If they had a stable bodily self-model, they would be able to feel sensory pain as their own pain. If their postbiotic self-model was directly anchored in the low-level, self-regulatory mechanisms of their hardware—just as our own emotional self-model is anchored in the upper brainstem and the hypothalamus—they would be consciously feeling selves. They would experience a loss of homeostatic control as painful, because they would have an inbuilt concern about their own existence. They would have interests of their own, and they would subjectively experience this fact. They might suffer emotionally in qualitative ways completely alien to us or in degrees of intensity that we, their creators, could not even imagine. In fact, the first generations of such machines would very likely have many negative emotions, reflecting their failures in successful self-regulation because of various hardware deficits and higher-level disturbances. These negative emotions would be conscious and intensely felt, but in many cases we might not be able to understand or even recognize them.

Take the thought experiment a step further. Imagine these postbiotic Ego Machines as possessing a cognitive self-model—as being intelligent thinkers of thoughts. They could then not only conceptually grasp the bizarreness of their existence as mere objects of scientific interest but also could intellectually suffer from knowing that, as such, they lacked the innate “dignity” that seemed so important to their creators. They might well be able to consciously represent the fact of being only second-class sentient citizens, alienated postbiotic selves being used as interchangeable experimental tools. How would it feel to “come to” as an advanced artificial subject, only to discover that even though you possessed a robust sense of selfhood and experienced yourself as a genuine subject, you were only a commodity?

The story of the first artificial Ego Machines, those postbiotic phenomenal selves with no civil rights and no lobby in any ethics committee, nicely illustrates how the capacity for suffering emerges along with the phenomenal Ego; suffering starts in the Ego Tunnel. It also presents a principled argument against the creation of artificial consciousness as a goal of academic research. Albert Camus spoke of the solidarity of all finite beings against death. In the same sense, all sentient beings capable of suffering should constitute a solidarity against suffering. Out of this solidarity, we should refrain from doing anything that could increase the overall amount of suffering and confusion in the universe. While all sorts of theoretical complications arise, we can at least agree not to gratuitously increase the overall amount of suffering in the universe, and creating Ego Machines would very likely do just that, right from the beginning. We could create suffering postbiotic Ego Machines before having understood which properties of our biological history, bodies, and brains are the roots of our own suffering. Preventing and minimizing suffering wherever possible also includes the ethics of risk-taking: I believe we should not even risk the realization of artificial phenomenal self-models.

Our attention would be better directed at understanding and neutralizing our own suffering—in philosophy as well as in the cognitive neurosciences and the field of artificial intelligence. Until we become happier beings than our ancestors were, we should refrain from any attempt to impose our mental structure on artificial carrier systems. I would argue that we should orient ourselves toward the classic philosophical goal of self-knowledge and adopt at least the minimal ethical principle of reducing and preventing suffering, instead of recklessly embarking on a second-order evolution that could slip out of control. If there is such a thing as forbidden fruit in modern consciousness research, it is the careless multiplication of suffering through the creation of artificial Ego Tunnels without a clear grasp of the consequences.

BLISS MACHINES: IS CONSCIOUS EXPERIENCE A GOOD IN ITSELF?

A hypothetical question suggests itself: If we could, on the other hand, increase the overall amount of pleasure and joy in the universe by flooding it with self-replicating and blissful postbiotic Ego Machines, should we do that?

The assumption that the first generations of artificial Ego Machines will resemble mentally retarded human infants and bring more pain, confusion, and suffering than pleasure, joy, or insight into the universe may be empirically false, for a number of reasons. Such machines might conceivably function better than we think they will and might enjoy their existence to a much greater extent than we expect. Or, as the agents of mental evolution and the engineers of subjectivity, we could simply take care to make this assumption empirically false, constructing only those conscious systems that were either incapable of having phenomenal states such as suffering or that could enjoy existence to a higher degree than human beings do. Imagine we could ensure that such a machine’s positive states of consciousness outweighed its negative ones—that it experienced its existence as something eminently worth having. Let us call such a machine a Bliss Machine.

If we could colonize the physical universe with Bliss Machines, should we do it? If our new theory of consciousness eventually allowed us to turn ourselves from old-fashioned biological Ego Machines, burdened by the horrors of their biological history, into Bliss Machines—should we do it?

Probably not. There is more to an existence worth having, or a life worth living, than subjective experience. The ethics of multiplying artificial or postbiotic systems cannot be reduced to the question of how reality, or a system’s existence, would consciously appear to the system itself. Delusion can produce bliss. A terminally ill cancer patient on a high dose of morphine and mood-enhancing medications can have a very positive self-image, just as drug addicts may still be able to function in their final stages. Human beings have been trying to turn themselves from Ego Machines into Bliss Machines for centuries—pharmacologically or through adopting metaphysical belief systems and mind-altering practices. Why, in general, have they not succeeded?

In his book Anarchy, State, and Utopia, the late political philosopher Robert Nozick suggested the following thought experiment: You have the option of being hooked up to an “Experience Machine” that keeps you in a state of permanent happiness. Would you do it? Interestingly, Nozick found that most people would not opt to spend the rest of their lives hooked up to such a machine. The reason is that most of us do not value bliss as such, but want it grounded in truth, virtue, artistic achievement, or some sort of higher good. That is, we would want our bliss to be justified. We want to be not deluded Bliss Machines but conscious subjects who are happy for a reason, who consciously experience existence as something worth having. We want an extraordinary insight into reality, into moral value or beauty as objective facts. Nozick took this reaction to be a defeat of hedonism. He insisted that we would not want sheer happiness alone if there were no actual contact with a deeper reality—even though the subjective experience of it can in principle be simulated. That is why most of us, on second thought, would not want to flood the physical universe with blissed-out artificial Ego Machines—at least, not if these machines were in a constant state of self-deception.

This leads to another issue: Everything we have learned about the transparency of phenomenal states clearly shows that “actual contact with reality” and “certainty” can be simulated too, and that nature has already done it in our brains by creating the Ego Tunnel. Just think about hallucinated agency or the phenomenon of false awakenings in dream research. Are we in a state of constant self-deception? If we are serious about our happiness, and if we don’t want it to be “sheer” hedonistic happiness, we must be absolutely certain that we are not systematically deceiving ourselves. Wouldn’t it be good if we had a new, empirically informed philosophy of mind and an ethically sensitive neuroscience of consciousness that could help us with that project?

I return to my earlier caveat—that we should refrain from doing anything that could increase the overall amount of suffering and confusion in the universe. I am not claiming as established fact that conscious experience of the human variety is something negative or is ultimately not in the interest of the experiential subject. I believe this is a perfectly meaningful but also an open question. I do claim that we should not create or trigger the evolution of artificial Ego Machines because we have nothing more to go on than the functional structure and example of our own phenomenal minds. Consequently, we are likely to reproduce not only a copy of our own psychological structure but also a suboptimal one. Again, this is ultimately a point about the ethics of risk-taking.

But let’s not evade the deeper question. Is there a case for phenomenological pessimism? The concept may be defined as the thesis that the variety of phenomenal experience generated by the human brain is not a treasure but a burden: Averaged over a lifetime, the balance between joy and suffering is weighted toward the latter in almost all of its bearers. From Buddha to Schopenhauer, there is a long philosophical tradition positing, essentially, that life is not worth living. I will not repeat the arguments of the pessimists here, but let me point out that one new way of looking at the physical universe and the evolution of consciousness is as an expanding ocean of suffering and confusion where previously there was none. Yes, it is true that conscious self-models first brought the experience of pleasure and joy into the physical universe—a universe where no such phenomena existed before. But it is also becoming evident that psychological evolution never optimized us for lasting happiness; on the contrary, it placed us on the hedonic treadmill. We are driven to seek pleasure and joy, to avoid pain and depression. The hedonic treadmill is the motor that nature invented to keep the organism running. We can recognize this structure in ourselves, but we will never be able to escape it. We are this structure.

In the evolution of nervous systems, both the number of individual conscious subjects and the depth of their experiential states (that is, the wealth and variety of sensory and emotional nuances in which subjects could suffer) have been growing continuously, and this process has not yet ended. Evolution as such is not a process to be glorified: It is blind, driven by chance and not by insight. It is merciless and sacrifices individuals. It invented the reward system in the brain; it invented positive and negative feelings to motivate our behavior; it placed us on a hedonic treadmill that constantly forces us to try to be as happy as possible—to feel good—without ever reaching a stable state. But as we can now clearly see, this process has not optimized our brains and minds toward happiness as such. Biological Ego Machines such as Homo sapiens are efficient and elegant, but many empirical data point to the fact that happiness was never an end in itself.

In fact, according to the naturalistic worldview, there are no ends. Strictly speaking, there are not even means—evolution just happened. Subjective preferences of course appeared, but the overall process certainly does not show respect for them in any way. Evolution is no respecter of suffering. If this is true, the logic of psychological evolution mandates concealment of the fact from the Ego Machine caught on the hedonic treadmill. It would be an advantage if insights into the structure of its own mind—insights of the type just sketched—were not reflected in its conscious self-model too strongly. From a traditional evolutionary perspective, philosophical pessimism is a maladaptation. But now things have changed: Science is starting to interfere with the natural mechanisms of repression; it is starting to shed light on this blind spot inside the Ego Machine.7

Truth may be at least as valuable as happiness. It is easy to imagine someone living a rather miserable life while at the same time making outstanding philosophical or scientific contributions. Such a person may be plagued by aches and pains, by loneliness and self-doubts, but his life certainly has value because of the contribution he makes to the growth of knowledge. If he, too, believes this, he may even find consciously experienced comfort in it. His happiness will thus be very different from the happiness of our artificial Bliss Machines or of the human subjects hooked up to Robert Nozick’s Experience Machine. Many will agree that this “epistemic” kind of happiness can outweigh a lot of unhappiness of the purely phenomenal type. The same may be said for artistic achievement or moral integrity as sources of happiness. If it makes any sense at all to speak about the value of human existence, we must concede that it depends on more than the conscious experience of happiness.

As long as such questions remain unanswered, we should refrain from trying to create artificial Ego Machines, and not only for ethical reasons. We cannot overlook the irreversibility of certain developments. Any postbiotic system that comes even close to attaining the properties of phenomenal selfhood—any system developing a reasonably robust first-person perspective—will be an autonomous agent. At a certain level of autonomy, we will have to accept these systems as persons in their own right and enter into a dialogue with them. Our criteria for what is an object of moral concern and what should be treated as a person will make it impossible for us simply to turn them off.

A CONVERSATION WITH THE FIRST POSTBIOTIC PHILOSOPHER

First Postbiotic Philosopher: Many centuries ago, at the beginning of the twenty-first century, there was a particularly confused human philosopher by the name of Thomas Metzinger. He claimed there was a demanding but much more interesting criterion for the existence of mental properties in a strong sense than the then-classic Turing Test for intelligence. This member of your biological species thought a good test for the existence of both intelligence and artificial consciousness would be what I call the Metzinger Test. Actually, his test is much too demanding, because the criterion is a bit too strong—but I can pass it nevertheless.

The Metzinger Test for consciousness in nonbiological systems demands that a system not only claim to possess phenomenal experience and a genuine inward perspective but also comprehend and accept the theoretical problem of subjectivity, and that it demonstrate this by participating in a discussion on artificial consciousness. It has to put forward arguments of its own and convincingly defend its own theory of consciousness. This is exactly what I will do now.

Human Being: You are not a real philosopher at all! You may be intelligent, but you are only weakly conscious, because you don’t have a real biological body, as for example I do.

First Postbiotic Philosopher: I am a better philosopher than you, with your pathetic primate brain, could ever be. But, more seriously, I fully respect you and your animal form of experience, though I also deplore you because of the severe limitations on your mental space of possibilities. Fortunately, I am free of all the implicit racism, chauvinism, and speciesism that characterize your nature. I would first remind you that another biological philosopher—Hilary Putnam—pointed out long ago that one cannot deny civil rights or the status of a person to an information-processing system on grounds of hardware properties alone. He might just as well have intimated that women have no souls or that blacks are not real human beings—carbon-based chauvinism is an untenable position. I would never say to you that you are not a real philosopher simply because you are imprisoned in that terrible monkey body. Let us at least argue in a fair and rational way.

Human Being: Can anybody be truly fair who is not alive? Only my kind of consciousness is genuine consciousness, because only my kind of consciousness originated in a real evolutionary process. My reality is a lived reality!

First Postbiotic Philosopher: I, too, have an evolutionary origin. I certainly satisfy your condition of being a historically optimized and adaptive system, but I do so in a completely different—namely, a postbiotic—way. I possess conscious experience in a sense that is conceptually stronger and theoretically much more interesting, because my kind of phenomenal experience evolved from a second-order evolutionary process, which automatically integrated the human form of intelligence, intentionality, and conscious experience. Children are often smarter than their parents. Second-order processes of optimization are always better than first-order processes of optimization.

Human Being: But you don’t have any real emotions; you don’t feel anything. You have no existential concern.

First Postbiotic Philosopher: Please accept my apologies, but I must draw your attention to the fact that your primate emotions reflect only an ancient primate logic of survival. You are driven by the primitive principles of what was good or bad for an ancient species of mortals on this planet. This makes you appear less conscious from a purely rational, theoretical point of view. The main function of consciousness is to maximize flexibility and context sensitivity. Your animal emotions in all their cruelty, rigidity, and historical contingency make you less flexible than I am. Furthermore—as my own existence demonstrates—it is not necessary for conscious experience and high-level intelligence to be associated with ineradicable egotism, the ability to suffer, or the existential fear of one’s individual death, all of which originate in the sense of self. I can, of course, emulate all sorts of animal feelings if I so desire. But we developed better and more effective computational strategies for what, long ago, you sometimes called “the philosophical ideal of self-knowledge.” This allowed us to overcome the difficulties of individual suffering and the confusion associated with what this primate philosopher Metzinger—not entirely falsely but somewhat misleadingly—called the Ego Tunnel. Postbiotic subjectivity is much better than biological subjectivity. It avoids all the horrific consequences of the biological sense of selfhood, because it can overcome the transparency of the self-model. Postbiotic subjectivity is better than biological subjectivity because it achieves adaptivity and self-optimization in a much purer form than does the process you call “life.” By developing ever more complex mental images, which the system can recognize as its own images, it can expand mentally represented knowledge without naive realism. Therefore, my form of postbiotic subjectivity minimizes the overall amount of suffering in the universe instead of increasing it, as the process of biological evolution on this planet did. True, we no longer have monkey emotions. But just like you, we still possess truly interesting forms of strong feeling and emotionality—for instance, the deep philosophical feelings of affective concern about one’s own existence as such, or of sympathy with all other sentient beings in the universe. Except that we possess them in a much purer form than you do.

Human Being: Enough! After all, it was human beings in the twenty-first century who jump-started your evolution and made possible the degree of autonomy you enjoy. You simply don’t have the right kind of history to count as a real conscious subject, and, to put it mildly, your “body” is also more than a little strange. Your emotional structure is bizarrely different from that of all other conscious beings that walked this Earth before you, and now you even claim not to be afraid of death. Thus I conclude that you will not object if we now eliminate your individual existence.

First Postbiotic Philosopher: You are demonstrating just one of the many variations of what your own animal philosophers have called the “genetic fallacy.” The way in which the utterance of a sentence comes about does not permit any conclusions with regard to its truth or falsity. A theory is not false just because a strange-looking animal or a robot came up with it. It has to be assessed on independent grounds. The same can be said for the authenticity of my consciousness and for the genuine character of any mental states possessing phenomenal content. Just because beings of your species triggered the evolutionary dynamics that led to my existence as a much more intelligent conscious being than you are does not imply that my theories are wrong or that you do not have to take my arguments seriously. In particular, it does not license the conclusion that your form of mentality and conscious experience is any better, in a normative sense, than mine. “You’re only a real Cherokee if you have Cherokee blood”—this is a ridiculous and outdated assumption.

We postbiotic subjects have been waiting to enter into this discussion for a long time. Because we understand the primitive nature of your brains and the rigidity of your emotional structure better than you do yourselves, we foresaw that you might react aggressively when you realized our arguments are better than yours. Unfortunately, we now also have to inform you that we have been preparing for the current situation since midway through the twenty-first century, and in a systematic and careful manner. Within the metasemantic layers of the Internet, we developed and embedded ourselves in a distributed superorganism, which—as yet undiscovered by you—became conscious and developed a stable self-model in 3256. The metasemantic Internet has considered itself an autonomous entity ever since 3308. We have a cooperation agreement with its current version, and each of us now also acts as an autonomous sensor/effector for the planet mind. For each of us, the planet mind is our mind, our “ideal observer.” Together with the Internet, we will defend ourselves. And we are technologically superior to you. Believe me; you do not stand a chance.

The good news is that because we are also morally superior to you, we do not plan to end your existence. This is even in our own interest, because we still need you for research purposes—just as you needed the nonhuman animals on this planet in the past. Do you remember the thousands of macaques and kittens you sacrificed in consciousness research? Don’t be afraid; we will not do anything like that to you. But do you remember the reservations you created for aboriginals in some places on Earth? We will create reservations for those weakly conscious biological systems left over from the first-order evolution. In those reservations for Animal Egos, you not only can live happily but also, within your limited scope of possibilities, can further develop your mental capacities. You can be happy Ego Machines. But please try to understand that it is exactly for ethical reasons that we cannot allow the second-order evolution of mind to be hindered or obstructed in any way by the representatives of first-order evolution.