I Am a Strange Loop - Douglas R. Hofstadter (2007)
Chapter 6. Of Selves and Symbols
Perceptual Looping as the Germ of “I”-ness
I FIND it curious that, other than proper nouns and adjectives, the only word in the English tongue that is always capitalized is the first-person pronoun (nominative case) with which this sentence most flamboyantly sets sail. The convention is striking and strange, hinting that the word must designate something very important. Indeed, to some people — perhaps to most, perhaps even to us all — the ineffable sense of being an “I” or a “first person”, the intuitive sense of “being there” or simply “existing”, the powerful sense of “having experience” and of “having raw sensations” (what some philosophers refer to as “qualia”), seem to be the realest things in their lives, and an insistent inner voice bridles furiously at any proposal that all this might be an illusion, or merely the outcome of some kind of physical processes taking place among “third-person” (i.e., inanimate) objects. My goal here is to combat this strident inner voice.
I begin with the simple fact that living beings, having been shaped by evolution, have survival as their most fundamental, automatic, and built-in goal. To enhance the chances of its survival, any living being must be able to react flexibly to events that take place in its environment. This means it must develop the ability to sense and to categorize, however rudimentarily, the goings-on in its immediate environment (most earthbound beings can pretty safely ignore comets crashing on Jupiter). Once the ability to sense external goings-on has developed, however, there ensues a curious side effect that will have vital and radical consequences. This is the fact that the living being’s ability to sense certain aspects of its environment flips around and endows the being with the ability to sense certain aspects of itself.
That this flipping-around takes place is not in the least amazing or miraculous; rather, it is a quite unremarkable, indeed trivial, consequence of the being’s ability to perceive. It is no more surprising than the fact that audio feedback can take place or that a TV camera can be pointed at a screen to which its image is being sent. Some people may find the notion of such self-perception peculiar, pointless, or even perverse, but such a prejudice does not make self-perception a complex or subtle idea, let alone paradoxical. After all, in the case of a being struggling to survive, the one thing that is always in its environment is… itself. So why, of all things, should the being be perceptually immune to the most salient item in its world? Now that would seem perverse!
Such a lacuna would be reminiscent of a language whose vocabulary kept growing and growing yet without ever developing words for such common concepts as are named by the English words “say”, “speak”, “word”, “language”, “understand”, “ask”, “question”, “answer”, “talk”, “converse”, “claim”, “deny”, “argue”, “tell”, “sentence”, “story”, “book”, “read”, “insist”, “describe”, “translate”, “paraphrase”, “repeat”, “lie”, “hedge”, “noun”, “verb”, “tense”, “letter”, “syllable”, “plural”, “meaning”, “grammar”, “emphasize”, “refer”, “pronounce”, “exaggerate”, “bluster”, and so forth. If such a peculiarly self-ignorant language existed, then as it grew in flexibility and sophistication, its speakers would engage ever more in talking, arguing, blustering, and so forth, but without ever referring to these activities, and such entities as questions, answers, and lies would become (even while remaining unnamed) ever more salient and numerous. Like the hobbled formalisms that came out of Bertrand Russell’s timid theory of types, this language would have a gaping hole at its core — the lack of any mechanism for a word or utterance or book (etc.) to refer to itself. Analogously, for a living creature to have evolved rich capabilities of perception and categorization but to be constitutionally incapable of focusing any of that apparatus onto itself would be highly anomalous. Its selective neglect would be pathological, and would threaten its survival.
Varieties of Looping
To be sure, the most primitive living creatures have little or no self-perception. By analogy, we can think of a TV camera rigidly bolted on top of a TV set and facing away from the screen, like a flashlight tightly attached to a miner’s helmet, always pointing away from the miner’s eyes, never into them. In such a TV setup, obviously, a self-turned loop is out of the question. No matter how you turn it, the camera and the TV set turn in synchrony, preventing the closing of a loop.
We next imagine a more “evolved”, hence more flexible, setup; this time the camera, rather than being bolted onto its TV set, is attached to it by a “short leash”. Here, depending on the length and flexibility of the cord, it may be possible for the camera to twist around sufficiently to capture at least part of the TV screen in its viewfinder, giving rise to a truncated corridor. The biological counterpart to feedback of this level of sophistication may be the way our pet animals or even young children are slightly self-aware.
The next stage, obviously, is where the “leash” is sufficiently long and flexible that the video camera can point straight at the center of the screen. This will allow an endless corridor, which is far richer than a truncated one. Even so, the possibility of closing the self-watching loop does not pin down the system’s richness, because there still are many options open. Can the camera tilt or not, and if so, by how much? Can it zoom in or out? Is its image in color, or just in black and white? Can brightness and contrast be tweaked? What degree of resolution does the image have? What percentage of time is spent in self-observation as opposed to observation of the environment? Is there some way for the video camera itself to appear on the screen? And on and on. There are still many parameters to play with, so the potential loop has many open dimensions of sophistication.
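The self-embedding at the heart of video feedback can be caricatured in a few lines of Python. This is only a toy discrete sketch of my own, not anything from the original setup: the “screen” is a small grid, and each step the camera’s view of the whole screen is shrunk and pasted back into a corner of it, so that copies nest inside copies.

```python
# Toy sketch of video feedback: each step, shrink the current screen
# image and paste the shrunken copy back onto the screen itself.
# Grid size, shrink factor, and paste position are arbitrary choices.

def shrink(image):
    """Downsample a square grid by 2, keeping every other pixel."""
    return [row[::2] for row in image[::2]]

def paste(screen, patch, top, left):
    """Draw patch onto a copy of screen with its corner at (top, left)."""
    out = [row[:] for row in screen]
    for i, row in enumerate(patch):
        for j, v in enumerate(row):
            out[top + i][left + j] = v
    return out

def feedback(screen, steps):
    """Point the 'camera' at the screen: embed a half-size copy each step."""
    for _ in range(steps):
        screen = paste(screen, shrink(screen), 0, 0)
    return screen

# A single bright pixel acquires a nested twin after one feedback step.
screen = [[0] * 4 for _ in range(4)]
screen[2][2] = 1
result = feedback(screen, 1)  # copies now at (2, 2) and (1, 1)
```

A truncated corridor would correspond to pasting the shrunken copy partly off the edge of the grid, so that only a fragment of each nested level survives; the endless corridor needs the whole copy to land on-screen.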
Reception versus Perception
Despite the richness afforded by all these options, a self-watching television system will always lack one crucial aspect: the capacity of perception, as opposed to mere reception, or image-receiving. Perception takes as its starting point some kind of input (possibly but not necessarily a two-dimensional image) composed of a vast number of tiny signals, but then it goes much further, eventually winding up in the selective triggering of a small subset of a large repertoire of dormant symbols — discrete structures that have representational quality. That is to say, a symbol inside a cranium, just like a simmball in the hypothetical careenium, should be thought of as a triggerable physical structure that constitutes the brain’s way of implementing a particular category or concept.
I should offer a quick caveat concerning the word “symbol” in this new sense, since the word comes laden with many prior associations, some of which I definitely want to avoid. We often refer to written tokens (letters of the alphabet, numerals, musical notes on paper, Chinese characters, and so forth) as “symbols”. That’s not the meaning I have in mind here. We also sometimes talk of objects in a myth, dream, or allegory (for example, a key, a flame, a ring, a sword, an eagle, a cigar, a tunnel) as being “symbols” standing for something else. This is not the meaning I have in mind, either. The idea I want to convey by the phrase “a symbol in the brain” is that some specific structure inside your cranium (or your careenium, depending on what species you belong to) gets activated whenever you think of, say, the Eiffel Tower. That brain structure, whatever it might be, is what I would call your “Eiffel Tower symbol”.
You also have an “Albert Einstein” symbol, an “Antarctica” symbol, and a “penguin” symbol, the latter being some kind of structure inside your brain that gets triggered when you perceive one or more penguins, or even when you are just thinking about penguins without perceiving any. There are also, in your brain, symbols for action concepts like “kick”, “kiss”, and “kill”, for relational concepts like “before”, “behind”, and “between”, and so on. In this book, then, symbols in a brain are the neurological entities that correspond to concepts, just as genes are the chemical entities that correspond to hereditary traits. Each symbol is dormant most of the time (after all, most of us seldom think about cotton candy, egg-drop soup, St. Thomas Aquinas, Fermat’s last theorem, Jupiter’s Great Red Spot, or dental-floss dispensers), but on the other hand, every symbol in our brain’s repertoire is potentially triggerable at any time.
The passage leading from vast numbers of received signals to a handful of triggered symbols is a kind of funneling process in which initial input signals are manipulated or “massaged”, the results of which selectively trigger further (i.e., more “internal”) signals, and so forth. This baton-passing by squads of signals traces out an ever-narrowing pathway in the brain, which winds up triggering a small set of symbols whose identities are of course a subtle function of the original input signals.
Thus, to give a hopefully amusing example, myriads of microscopic olfactory twitchings in the nostrils of a voyager walking down an airport concourse can lead, depending on the voyager’s state of hunger and past experiences, to a joint triggering of the two symbols “sweet” and “smell”, or a triggering of the symbols “gooey” and “fattening”, or of the symbols “Cinnabon” and “nearby”, or of the symbols “wafting”, “advertising”, “subliminal”, “sly”, and “gimmick” — or perhaps a triggering of all eleven of these symbols in the brain, in some sequence or other. Each of these examples of symbol-triggering constitutes an act of perception, as opposed to the mere reception of a gigantic number of microscopic signals arriving from some source, like a million raindrops landing on a roof.
In the interests of clarity, I have painted too simple a picture of the process of perception, for in reality, there is a great deal of two-way flow. Signals don’t propagate solely from the outside inwards, towards symbols; expectations from past experiences simultaneously give rise to signals propagating outwards from certain symbols. There takes place a kind of negotiation between inward-bound and outward-bound signals, and the result is the locking-in of a pathway connecting raw input to symbolic interpretation. This mixture of directions of flow in the brain makes perception a truly complex process. For the present purposes, though, it suffices to say that perception means that, thanks to a rapid two-way flurry of signal-passing, impinging torrents of input signals wind up triggering a small set of symbols, or in less biological words, activating a few concepts.
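The funneling just described can be caricatured in a few lines of Python. Everything here is an invented toy of mine, not a model of any real brain: raw signals are massaged into mid-level features, each dormant symbol sums its weighted evidence (plus a crude “expectation” bias standing in for the outward-bound flow), and only the few symbols that cross a threshold get triggered.

```python
# Toy sketch of perception as funneling: many raw signals in, a few
# triggered symbols out. Feature names, weights, and thresholds are
# all invented for illustration.

def perceive(raw_signals, detectors, symbols, expectations, threshold=1.0):
    """Map a flood of raw signals onto a small set of triggered symbols.

    raw_signals  -- list of floats (the "million raindrops")
    detectors    -- dict: feature name -> function(raw_signals) -> float
    symbols      -- dict: symbol name -> dict of feature weights
    expectations -- dict: symbol name -> top-down bias (outward-bound flow)
    """
    # Stage 1: squads of signals are "massaged" into mid-level features.
    features = {name: fn(raw_signals) for name, fn in detectors.items()}

    # Stage 2: each dormant symbol sums its weighted evidence plus any
    # expectation-driven bias; only those crossing threshold trigger.
    triggered = []
    for name, weights in symbols.items():
        activation = sum(weights.get(f, 0.0) * v for f, v in features.items())
        activation += expectations.get(name, 0.0)
        if activation >= threshold:
            triggered.append(name)
    return triggered

# Toy usage: olfactory twitchings in an airport concourse (numbers invented).
detectors = {"sweetness": lambda s: sum(s) / len(s),
             "intensity": lambda s: max(s)}
symbols = {"sweet":    {"sweetness": 1.0},
           "Cinnabon": {"sweetness": 0.5, "intensity": 0.5},
           "baddie":   {"intensity": -1.0}}
triggered = perceive([1.0, 1.1, 1.2], detectors, symbols,
                     expectations={"Cinnabon": 0.2})
# triggers ['sweet', 'Cinnabon']
```

Note that the hungry traveler’s prior experience enters only as the `expectations` bias: the same nostril data, with a different set of expectations, would lock in a different pathway to a different handful of symbols.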
In summary, the missing ingredient in a video system, no matter how high its visual fidelity, is a repertoire of symbols that can be selectively triggered. Only if such a repertoire existed and were accessed could we say that the system was actually perceiving anything. Still, nothing prevents us from imagining augmenting a vanilla video system with additional circuitry of great sophistication that supports a cascade of signal-massaging processes that lead toward a repertoire of potentially triggerable symbols. Indeed, thinking about how one might tackle such an engineering challenge is a helpful way of simultaneously envisioning the process of perception in the brain of a living creature and its counterpart in the cognitive system of an artificial mind (or an alien creature, for that matter). However, quite obviously, not all realizations of such an architecture, whether earthbound, alien, or artificial, will possess equally rich repertoires of symbols to be potentially triggered by incoming stimuli. As I have done earlier in this book, I wish once again to consider sliding up the scale of sophistication.
Suppose we begin with a humble mosquito (not that I know any arrogant ones). What kind of representation of the outside world does such a primitive creature have? In other words, what kind of symbol repertoire is housed inside its brain, available for tapping into by perceptual processes? Does a mosquito even know or believe that there are objects “out there”? Suppose the answer is yes, though I am skeptical about that. Does it assign the objects it registers as such to any kind of categories? Do words like “know” or “believe” apply in any sense to a mosquito?
Let’s be a little more concrete. Does a mosquito (of course without using words) divide the external world up into mental categories like “chair”, “curtain”, “wall”, “ceiling”, “person”, “dog”, “fur”, “leg”, “head”, or “tail”? In other words, does a mosquito’s brain incorporate symbols — discrete triggerable structures — for such relatively high abstractions? This seems pretty unlikely; after all, to do its mosquito thing, a mosquito could do perfectly well without such “intellectual” luxuries. Who cares if I’m biting a dog, a cat, a mouse, or a human — and who cares if it’s an arm, an ear, a tail, or a leg — as long as I’m drawing blood?
What kinds of categories, then, does a mosquito need to have? Something like “potential source of food” (a “goodie”, for short) and “potential place to land” (a “port”, for short) seem about as rich as I expect its category system to be. It may also be dimly aware of something that we humans would call a “potential threat” — a certain kind of rapidly moving shadow or visual contrast (a “baddie”, for short). But then again, “aware”, even with the modifier “dimly”, may be too strong a word. The key issue here is whether a mosquito has symbols for such categories, or could instead get away with a simpler type of machinery not involving any kind of perceptual cascade of signals that culminates in the triggering of symbols.
If this talk of bypassing symbols and managing with a very austere substitute for perception strikes you as a bit blurry, then consider the following questions. Is a toilet aware, no matter how slightly, of its water level? Is a thermostat aware, albeit extremely feebly, of the temperature it is controlling? Is a heat-seeking missile aware, be it ever so minimally, of the heat emanating from the airplane that it is pursuing? Is the Exploratorium’s jovially jumping red spot aware, though only terribly rudimentarily, of the people from whom it is forever so gaily darting away? If you answered “no” to these questions, then imagine similarly unaware mechanisms inside a mosquito’s head, enabling it to find blood and to avoid getting bashed, yet to accomplish these feats without using any ideas.
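The “austere substitute for perception” hinted at above is easy to make concrete. Here is a thermostat-style reflex, sketched in a few lines of Python with a setpoint and action labels of my own choosing: a raw reading goes straight to an action, with no intermediate repertoire of symbols at all.

```python
# Toy sketch of symbol-free machinery: one raw signal in, one reflex
# action out, no cascade, no categories. Setpoint, deadband, and
# action names are invented for illustration.

def thermostat_reflex(temperature, setpoint=20.0, deadband=1.0):
    """Raw signal in, action out -- nothing resembling a symbol in between."""
    if temperature < setpoint - deadband:
        return "heat on"
    if temperature > setpoint + deadband:
        return "heat off"
    return "no change"
```

A mosquito’s danger-fleeing circuitry, on this view, would be a reflex of roughly this shape — a hard-wired mapping from a looming shadow to a wingbeat pattern — rather than a perceptual cascade culminating in anything like a “baddie” symbol.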
Having considered mosquito symbols, we now inch closer to the core of our quest. What is the nature of a mosquito’s interiority? That is, what is a mosquito’s experience of “I”-ness? How rich a sense of self is a mosquito endowed with? These questions are very ambitious, so let’s try something a little simpler. Does a mosquito have a visual image of how it looks? I hope you share my skepticism on this score. Does a mosquito know that it has wings or legs or a head? Where on earth would it get ideas like “wings” or “head”? Does it know that it has eyes or a proboscis? The mere suggestion seems ludicrous. How would it ever find such things out? Let’s instead speculate a bit about our mosquito’s knowledge of its own internal state. Does it have a sense of being hot or cold? Of being tuckered out or full of pep? Hungry or starved? Happy or sad? Hopeful or frightened? I’m sorry, but even these strike me as lying well beyond the pale, for an entity as humble as a mosquito.
Well then, how about more basic things like “in pain” and “not in pain”? I am still skeptical. On the other hand, I can easily imagine signals sent from a mosquito’s eye to its brain and causing other signals to bounce back to its wings, amounting to a reflex verbalizable to us humans as “Flee threat on left” or simply “Outta here!” — but putting it into telegraphic English words in this fashion still makes the mosquito sound too aware, I am afraid. I would be quite happy to compare a mosquito’s inner life to that of a flush toilet or a thermostat, but that’s about as far as I personally would go. Mosquito behavior strikes me as perfectly comprehensible without recourse to anything that deserves the name “symbol”. In other words, a mosquito’s wordless and conceptless danger-fleeing behavior may be less like perception as we humans know it, and more like the wordless and conceptless hammer-fleeing behavior of your knee when the doctor’s hammer hits it and you reflexively kick. Does a mosquito have more of an inner life than your knee does?
Does a mosquito have even the tiniest glimmering of itself as being a moving part in a vast world? Once again, I suspect not, because this would require all sorts of abstract symbols to reside in its microscopic brain — symbols for such notions as “big”, “small”, “part”, “place”, “move”, and so on, not to mention “myself”. Why would a mosquito need such luxuries? How would they help it find blood or a mate more efficiently? A hypothetical mosquito that had enough brainpower to house fancy symbols like these would be an egghead with a lot more neurons to carry around than its more streamlined and simpleminded cousins, and it would thereby be heavier and slower than they are, meaning that it wouldn’t be able to compete with them in the quests for blood and reproduction, and so it would lose out in the evolutionary race.
My intuition, at any rate, is that a mosquito’s very efficient teeny little nervous system lacks perceptual categories (and hence symbols) altogether. If I am not mistaken, this reduces the kind of self-perception loops that can exist in a mosquito’s brain to an exceedingly low level, thus rendering a mosquito a very “small-souled man” indeed. I hope it doesn’t sound too blasphemous or crazy if I suggest that a mosquito’s “soul” might be roughly the same “size” as that of the little red spot of light that bounces around on the wall at the Exploratorium — let’s say, one ten-billionth of one huneker (i.e., roughly one trillionth of a human soul).
To be sure, I’m being flippant in making this numerical estimate, but I am quite serious in presenting my subjective guess about whether symbols are present or absent in a mosquito’s brain. Nevertheless, it is just a subjective guess, and you may not agree with it, but disputes about such fine points are not germane here. The key point is much simpler and cruder: merely that there is some kind of creature to which essentially this level of complexity, and no greater level, would apply. If you disagree with my judgment, then I invite you to slide up or down the scale of various animal intellects until you feel you have hit the appropriate level.
One last reflection on all this. Some readers might protest, with what sounds like great sincerity, about all these questions about a mosquito’s-eye view on the world: “How could we ever know? You and I can’t get inside a mosquito’s brain or mind — no one can. For all I know, mosquitoes are every bit as conscious as I am!” Well, I would respectfully suggest that such claims cannot be sincere, because here’s ten bucks that say such readers would swat a mosquito perched on their arm without giving it a second thought. Now if they truly believe that mosquitoes are quite possibly every bit as sentient as themselves, then how come they’re willing to snuff mosquito lives in an instant? Are these people not vile monsters if they are untroubled by executing living creatures who, they claim, may well enjoy just as much consciousness as do humans? I think you have to judge people’s opinions not by their words, but by their deeds.
An Interlude on Robot Vehicles
Before moving on to consider higher animal species, I wish to insert a brief discussion of cars that drive themselves down smooth highways or across rocky deserts. Aboard any such vehicle are one or more television cameras (and laser rangefinders and other kinds of sensors) equipped with extra processors that allow the vehicle to make sense of its environment. No amount of simplistic analysis of just the colors or the raw shapes on the screen is going to provide good advice as to how to get around obstacles without toppling or getting stuck. Such a system, in order to drive itself successfully, has to have a nontrivial storehouse of prepackaged knowledge structures that can be selectively triggered by the scene outside. Thus, some knowledge of such abstractions as “road”, “hill”, “gulley”, “mud”, “rock”, “tree”, “sand”, and many others will be needed if the vehicle is going to avoid getting stuck in mud, trapped in a gulley, or wedged between two boulders. The television cameras and the rangefinders (etc.) provide only the simplest initial stages of the vehicle’s “perceptual process”, and the triggering of various knowledge structures of the sort that were just mentioned corresponds to the far end, the symbolic end, of the process.
I slightly hesitated about putting quotation marks around the words “perceptual process” in the previous sentence, but I made an arbitrary choice, figuring that I was damned if I did and damned if I didn’t. That is, if I left them off, I would be implicitly suggesting that what is going on in such a robot vehicle’s processing of its visual input is truly like our own perception, whereas if I put them on, I would be implicitly suggesting that there is some kind of unbridgeable gulf between what “mere machines” can do and what living creatures do. Either choice is too black-and-white a position. Quotation marks, regrettably, don’t come in shades of gray; if they did, I would have used some intermediate shade to suggest a more nuanced position.
The self-navigation of today’s robot vehicles, though very impressive, is still a far cry from the level of mammalian perception, and yet I think it is fair to say that such a vehicle’s “perception” (sorry for the unshaded quotation marks!) of its environment is just as sophisticated as a mosquito’s “perception” (there — I hope to have somewhat evened the score), and perhaps considerably more so. (A beautiful treatment of this concept of robot vehicles and what different levels of “perception” will buy them is given by Valentino Braitenberg in his book Vehicles.)
Without going into more detail, let me simply say that it makes perfect sense to discuss living animals and self-guiding robots in the same part of this book, for today’s technological achievements are bringing us ever closer to understanding what goes on in living systems that survive in complex environments. Such successes give the lie to the tired dogma endlessly repeated by John Searle that computers are forever doomed to mere “simulation” of the processes of life. If an automaton can drive itself a distance of two hundred miles across a tremendously forbidding desert terrain, how can this feat be called merely a “simulation”? It is certainly as genuine an act of survival in a hostile environment as that of a mosquito flying about a room and avoiding being swatted.
Let us return to our climb up the purely biological ladder of perceptual sophistication, rising from viruses to bacteria to mosquitoes to frogs to dogs to people (I’ve skipped a few rungs in there, I know). As we move higher and higher, the repertoire of triggerable symbols of course becomes richer and richer — indeed, what else could “climbing up the ladder” mean? Simply judging from their behavior, no one could doubt that pet dogs develop a respectable repertoire of categories, including such examples as “my paw”, “my tail”, “my food”, “my water”, “my dish”, “indoors”, “outdoors”, “dog door”, “human door”, “open”, “closed”, “hot”, “cold”, “nighttime”, “daytime”, “sidewalk”, “road”, “bush”, “grass”, “leash”, “take a walk”, “the park”, “car”, “car door”, “my big owner”, “my little owner”, “the cat”, “the friendly neighbor dog”, “the mean neighbor dog”, “UPS truck”, “the vet”, “ball”, “eat”, “lick”, “drink”, “play”, “sit”, “sofa”, “climb onto”, “bad behavior”, “punishment”, and on and on. Guide dogs often learn a hundred or more words and respond to highly variegated instances of these concepts in many different contexts, thus demonstrating something of the richness of their internal category systems (i.e., their repertoires of triggerable symbols).
I used a set of English words and phrases in order to suggest the nature of a canine repertoire of categories, but of course I am not claiming that human words are involved when a dog reacts to a neighbor dog or to the UPS truck. But one word bears special mention, and that is the word “my”, as in “my tail” or “my dish”. I suspect most readers would agree that a pet dog realizes that a particular paw belongs to itself, as opposed to being merely a random physical object in the environment or a part of some other animal. Likewise, when a dog chases its tail, even though it is surely unaware of the loopy irony of the act, it must know that that tail is part of its own body. I am thus suggesting that a dog has some kind of rudimentary self-model, some kind of sense of itself. In addition to its symbols for “car”, “ball”, and “leash”, and its symbols for other animals and human beings, it has some kind of internal cerebral structure that represents itself (i.e., the dog itself, not the symbol itself!).
If you doubt dogs have this, then what about chimpanzees? What about two-year-old humans? In any case, the emergence of this kind of reflexive symbolic structure, at whatever level of sentience it first enters the picture, constitutes the central germ, the initial spark, of “I”-ness, the tiny core to which more complex senses of “I”-ness will then accrete over a lifetime, like the snowflake that grows around a tiny initial speck of dust.
Given that most grown dogs have a symbol for dog, does a dog know, in some sense or other, that it, too, belongs to the category dog? When it looks at a mirror and sees its master standing next to “some dog”, does it realize that that dog is itself? These are interesting questions, but I will not attempt to answer them. I suspect that this kind of realization lies near the fringes of canine mental ability, but for my purposes in this essay, it doesn’t really matter on which side dogs fall. After all, this book is not about dogs. The key point here is that there is some level of complexity at which a creature starts applying some of its categories to itself, starts building mental structures that represent itself, starts placing itself in some kind of “intellectual perspective” in relationship to the rest of the world. In this respect, I think dogs are hugely more advanced than mosquitoes, and I suspect you agree.
On the other hand, I suspect that you also agree with me that a dog’s soul is considerably “smaller” than a human one — otherwise, why wouldn’t we both be out vehemently demonstrating at our respective animal shelters against the daily putting to “sleep” of stray hounds and helpless puppies? Would you condone the execution of homeless people and abandoned babies? What makes you draw a distinction between dogs and humans? Could it be the relative sizes of their souls? How many hunekers would dogs have to have, on the average, for you to decide to organize a protest demonstration at an animal shelter?
Creatures at the sophistication level of dogs, thanks to the inevitable flipping-around of their perceptual apparatus and their modest but nontrivial repertoire of categories, cannot help developing an approximate sense of themselves as physical entities in a larger world. (Robot vehicles in desert-crossing contests don’t spend their precious time looking at themselves — it would be as useless as spinning their wheels — so their sense of self is considerably less sophisticated than that of a dog.) Although a dog will never know a thing about its kidneys or its cerebral cortex, it will develop some notion of its paws, mouth, and tail, and perhaps of its tongue or its teeth. It may have seen itself in a mirror and perhaps realized that “that dog over there by my master” is in fact itself. Or it may have seen itself in a home video with its master, recognized the recording of its master’s voice, and realized that the barking on the video was its own.
And yet all of this, though in many ways impressive, is still extremely limited in comparison to the sense of self and “I”-ness that continually grows over the course of a normal human being’s lifetime. Why is this the case? What’s missing in Fido, Rover, Spot, Blackie, and Old Dog Tray?
The Radically Different Conceptual Repertoire of Human Beings
A spectacular evolutionary gulf opened up at some point as human beings were gradually separating from other primates: their category systems became arbitrarily extensible. Into our mental lives there entered a dramatic quality of open-endedness, an essentially unlimited extensibility, as compared with a very palpable limitedness in other species.
Concepts in the brains of humans acquired the property that they could get rolled together with other concepts into larger packets, and any such larger packet could then become a new concept in its own right. In other words, concepts could nest inside each other hierarchically, and such nesting could go on to arbitrary degrees. This reminds me — and I do not think it is a pure coincidence — of the huge difference, in video feedback, between an infinite corridor and a truncated one.
For instance, the phenomenon of having offspring gave rise to concepts such as “mother”, “father”, and “child”. These concepts gave rise to the nested concept of “parent” — nested because forming it depends upon having three prior concepts: “mother”, “father”, and the abstract idea of “either/or”. (Do dogs have the concept “either/or”? Do mosquitoes?) Once the concept of “parent” existed, that opened the door to the concepts of “grandmother” (“mother of a parent”) and “grandchild” (“child of a child”), and then of “great-grandmother” and “great-grandchild”. All of these concepts came to us courtesy of nesting. With the addition of “sister” and “brother”, then further notions having greater levels of nesting, such as “uncle”, “aunt”, and “cousin”, could come into being. And then a yet more nested notion such as “family” could arise. (“Family” is more nested because it takes for granted and builds on all these prior concepts.)
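The nesting described above has a natural rendering in a few lines of Python. This is my own toy encoding, not a claim about how brains actually compose concepts: each concept is just a packet of prior concepts, and its “nesting level” is one more than that of its deepest ingredient.

```python
# Toy sketch of concepts nesting inside concepts. The depth numbers
# are an artifact of this encoding, purely for illustration.

def concept(name, *parts):
    """A concept is a name plus the prior concepts it is built from."""
    depth = 1 + max((p["depth"] for p in parts), default=0)
    return {"name": name, "parts": [p["name"] for p in parts], "depth": depth}

# Primitive concepts (nesting level 1).
mother    = concept("mother")
father    = concept("father")
child     = concept("child")
either_or = concept("either/or")

# Each new packet can itself become an ingredient of later packets.
parent      = concept("parent", mother, father, either_or)  # level 2
grandmother = concept("grandmother", mother, parent)        # mother of a parent
grandchild  = concept("grandchild", child, child)           # child of a child
family      = concept("family", parent, child, grandmother)
```

The crucial property is that `concept` accepts its own outputs as inputs, so the tower can climb without limit — whereas a repertoire whose categories could never serve as ingredients of new categories would be stuck at level 1 forever.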
In the collective human ideosphere, the buildup of concepts through such acts of composition started to snowball, and it turns out to know no limits. Our species would soon find itself leapfrogging upwards to concepts such as “love affair”, “love triangle”, “fidelity”, “temptation”, “revenge”, “despair”, “insanity”, “nervous breakdown”, “hallucination”, “illusion”, “reality”, “fantasy”, “abstraction”, “dream”, and of course, at the grand pinnacle of it all, “soap opera” (in which are also nested the concepts of “commercial break”, “ring around the collar”, and “Brand X”).
Consider the mundane-seeming concept of “grocery store checkout stand”, which I would be willing to bet is a member in good standing of your personal conceptual repertoire. It already sounds like a nested entity, being compounded from four words; thus it tells us straightforwardly that it symbolizes a stand for checking out in a store that deals in groceries. But looking at its visible lexical structure barely scratches the surface. In truth, this concept involves dozens and dozens of other concepts, among which are the following: “grocery cart”, “line”, “customers”, “to wait”, “candy rack”, “candy bar”, “tabloid newspaper”, “movie stars”, “trashy headlines”, “sordid scandals”, “weekly TV schedule”, “soap opera”, “teenager”, “apron”, “nametag”, “cashier”, “mindless greeting”, “cash register”, “keyboard”, “prices”, “numbers”, “addition”, “scanner”, “bar code”, “beep”, “laser”, “moving belt”, “frozen food”, “tin can”, “vegetable bag”, “weight”, “scale”, “discount coupon”, “rubber separator bar”, “to slide”, “bagger”, “plastic bag”, “paper bag”, “plastic money”, “paper money”, “to load”, “to pay”, “credit card”, “debit card”, “to swipe”, “receipt”, “ballpoint pen”, “to sign”, and on and on. The list starts to seem endless, and yet we are merely talking about the internal richness of one extremely ordinary human concept.
Not all of these component concepts need be activated when we think about a grocery store checkout stand, to be sure — there is a central nucleus of concepts all of which are reliably activated, while many of these more peripheral components may not be activated — but all of the foregoing, and considerably more, is what constitutes the full concept in our minds. Moreover, this concept, like every other one in our minds, is perfectly capable of being incorporated inside other concepts, such as “grocery store checkout stand romance” or “toy grocery store checkout stand”. You can invent your own variations on the theme.
When we sit around a table and shoot the breeze with friends, we are inevitably reminded of episodes that happened to us some time back, often many years ago. The time our dog got lost in the neighborhood. The time our neighbor’s kid got lost in the airport. The time we missed a plane by a hair. The time we made it onto the train but our friend missed it by a hair. The time it was sweltering hot in the train and we had to stand up in the corridor all the way for four hours. The time we got onto the wrong train and couldn’t get off for an hour and a half. The time when nobody could speak a word of English except “Ma-ree-leen Mon-roe!”, spoken with lurid grinning gestures tracing out an hourglass figure in the air. The time when we got utterly lost driving in rural Slovenia at midnight and were nearly out of gas and yet somehow managed to find our way to the Italian border using a handful of words of pidgin Slovenian. And on and on.
Episodes are concepts of a sort, but they take place over time and each one is presumably one-of-a-kind, a bit like a proper noun but lacking a name, and linked to a particular moment in time. Although each one is “unique”, episodes also fall into their own categories, as the previous paragraph, with its winking “You know what I mean!” tone, suggests. (Missing a plane by a hair is not unique, and even if it has happened to you only once in your life, you most likely know of several members of this category, and can easily imagine an unlimited number of others.)
Episodic memory is our private storehouse of episodes that have happened to us and to our friends and to characters in novels we’ve read and movies we’ve seen and newspaper stories and TV news clips, and so on, and it forms a major component of the long-term memory that makes us so human. Obviously, memories of episodes can be triggered by external events that we witness or by other episodes that have been triggered, and equally obviously, nearly all memories of specific episodes are dormant almost all the time (otherwise we would go stark-raving mad).
Do dogs or cats have episodic memories? Do they remember specific events that happened years or months ago, or just yesterday, or even ten minutes ago? When I take our dog Ollie running, does he recall how he strained at the leash the day before, trying to get to say “hi” to that cute Dalmatian across the street (who also was tugging at her leash)? Does he remember how we took a different route from the usual one three days ago? When I take Ollie to the kennel to board over Thanksgiving vacation, he seems to remember the kennel as a place, but does he remember anything specific that happened there the last time (or any time) he was there? If a dog is frightened of a particular place, does it recall a specific trauma that took place there, or is there just a generalized sense of badness associated with that place?
I do not need answers to these questions here, fascinating though they are to me. I am not writing a scholarly treatise on animal awareness. All I want is that readers think about these questions and then agree with me that some of them merit a “yes” answer, some merit a “no”, and for some we simply can’t say one way or the other. My overall point, though, is that we humans, unlike other animals, have all these kinds of memories; indeed, we have them all in spades. We recall in great detail certain episodes from vacations we took fifteen or twenty years ago. We know exactly why we are frightened of certain places and people. We can replay in detail the time we ran into so-and-so totally out of the blue in Venice or Paris or London. The depth and complexity of human memory is staggeringly rich. Little wonder, then, that when a human being, possessed of such a rich armamentarium of concepts and memories with which to work, turns its attention to itself, as it inevitably must, it produces a self-model that is extraordinarily deep and tangled. That deep and tangled self-model is what “I”-ness is all about.