
Human: The Science Behind What Makes Us Unique - Michael S. Gazzaniga (2008)

Part IV. BEYOND CURRENT CONSTRAINTS

Chapter 9. WHO NEEDS FLESH?

The principles now being discovered at work in the brain may provide, in the future, machines even more powerful than those we can at present foresee.

J. Z. Young, Doubt and Certainty in Science: A Biologist’s Reflections on the Brain, 1960

Men ought to know that from the brain, and from the brain only, arise our pleasures, joy, laughter and jests, as well as our sorrows, pains, griefs, and tears.

Hippocrates, c. 400 B.C.

I am a fyborg, and so are you. Fyborgs, or functional cyborgs, are biological organisms functionally supplemented with technological extensions.1 For instance, shoes. Wearing shoes has not been a problem for most people. In fact, it has solved many problems, such as walking on gravelly surfaces, avoiding thorns in the foot, walking at high noon across an asphalt parking lot on a June day in Phoenix, or a January day in Duluth, and shoes have prevented over one million stubbed toes in the last month. In general, no one is going to get upset about the existence and use of shoes. Man’s ingenuity came up with a tool to make life easier and more pleasant. After the inventors and engineers were done with the concept, the basic design, and product development, the aesthetics department took over, cranked it around a bit, and came up with high heels. Perhaps not so utilitarian, but they serve a different, more specific purpose: to get across that parking lot looking sexy.

Wearing clothes has also been well accepted. They provide protection both from the cold and the sun, from thorns and brush, and can cover up years’ worth of unsightly intake errors. Watches, a handy tool, are used by quite a few people without any complaint, and are now usually run by a small computer worn on the wrist. Eyeglasses and contact lenses are common. There was no big revolution when those were introduced. Cell phones seem to be surgically attached to the palms of teenagers and, for that matter, most everyone else. Fashioning tools that make life easier is what humans have always done. For thousands of years, we humans have been fyborgs, a term coined by Alexander Chislenko, who was an artificial-intelligence theorist, researcher, and software designer for various private companies and MIT. The first caveman that slapped a piece of animal hide across the bottom of his foot and refused to leave home without it became a fyborg to a limited degree. Chislenko devised a self-test for functional cyborgization:

Are you dependent on technology to the extent that you could not survive without it?

Would you reject a lifestyle free of any technology even if you could endure it?

Would you feel embarrassed and “dehumanized” if somebody removed your artificial covers (clothing) and exposed your natural biological body in public?

Do you consider your bank deposits a more important personal resource storage system than your fat deposits?

Do you identify yourself and judge other people more by possessions, ability to manipulate tools, and position in technological and social systems than by primary biological features?

Do you spend more time thinking about—and discussing—your external “possessions” and “accessories” than your internal “parts”?1

I don’t know about you, but I would much rather hear about my friend’s new Maserati than his liver. Call me a fyborg any day.

Cyborgs, on the other hand, have a physical integration of biological and technological structures. And we now have a few in our midst. Going beyond the manufacture of tools, humans have gotten into the business of aftermarket body parts. Want to upgrade that hip or knee? Hop up on this table. Lost an arm? Let’s see what we can do to help you out. But things start getting a little bit dicier when we get to the world of implants. Replacement hips and knees are OK, but start a discussion about breast implants, and you may end up with a lively or heated debate about a silicone upgrade. Enhancement gets the ire up in some people. Why is that? What is wrong with a body upgrade?

We get into even choppier waters when we start talking about neural implants. Some people fear that tinkering with the brain by use of neural prostheses may threaten personal identity. What is a neural prosthesis? It’s a device implanted to restore a lost or altered neural function. It may be either on the input side (sensory input coming into the brain) or the output side (translating neuronal signals into actions). Currently the most successful neural implant has been used to restore auditory sensory perception: the cochlear implant.

Until recently, “artifacts” or tools that man has created have been directed to the external world. More recently, therapeutic implants—such as artificial joints, cardiac pacemakers, drugs, and physical enhancements—have been used either below the neck or for facial cosmetic purposes (that would include hair transplants). Today, we are using therapeutic implants above the neck. We are using them in the brain. We also are using therapeutic medications that affect the brain to treat mental illness, anxiety, and mood disorders. Things are changing, and they are changing rapidly. Technological and scientific advances in many areas, including genetics, robotics, and computer technology, are predicted to set off a revolution of change such as humans have never experienced before, change that may well affect what it means to be human—changes that we hope will improve our lives, our societies, and the world.

Ray Kurzweil, a researcher in artificial intelligence, makes the point that knowledge in these areas is increasing at an exponential rate, not at a linear rate.2 This is what you would like your stock price to do. The classic example of exponential growth is the story about the smart peasant whom we learned about in math class—the guy who worked a deal with a math-challenged king for a grain of rice on the first square of a chessboard, and to have it doubled on the second, and so on, until by the time the king had reached the end of the chessboard, he had lost his kingdom and then some. Across the first row or two of the chessboard things progressed rather slowly, but there came a point where the doubling was a hefty change.
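
The peasant’s bargain is easy to check for yourself. Here is the chessboard arithmetic run to completion (a simple illustrative calculation, nothing more):

```python
# One grain on square 1, doubled on each of the chessboard's 64 squares.
grains_on_square = [2 ** (n - 1) for n in range(1, 65)]
total = sum(grains_on_square)  # equivalently 2**64 - 1

print(grains_on_square[7])   # square 8, end of the first row: 128 grains
print(grains_on_square[63])  # square 64: 9,223,372,036,854,775,808 grains
print(total)                 # 18,446,744,073,709,551,615 grains in all
```

By the end of the first row the king owes a modest 128 grains; by square 64 he owes more rice than has ever been grown. That sudden runaway is the whole point of exponential growth.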

In 1965, Gordon Moore, one of the cofounders of Intel, the world’s largest semiconductor manufacturing company, made the observation that the number of transistors on an integrated circuit for minimum component cost doubles every twenty-four months. That means that every twenty-four months they could double the number of transistors on a circuit without increasing the cost. That is exponential growth. Carver Mead, a professor at Caltech, dubbed this observation Moore’s law, and it has been viewed both as a prediction and a goal for growth in the technology industry. It continues to be fulfilled. In the last sixty years, computation speed, measured in what are known as floating point operations per second (FLOPS), has increased from 1 FLOPS to over 250 trillion FLOPS! As Henry Markram, project director of the Blue Brain project (which we will talk about later), states, this is “by far the largest man-made growth rate of any kind in the ~10,000 years of human civilization.”3 The graph of exponential change, instead of climbing steadily as a linear graph would, creeps along until a critical point is reached and then turns sharply upward, the line becoming almost vertical. This “knee” in the graph is where Kurzweil thinks we currently are in the rate of change that will occur owing to the knowledge gained in these areas. He thinks we are not aware of it or prepared for it because we have been in the more slowly progressing earlier stage of the graph and have been lulled into thinking that the rate of change is linear.

What are the big changes that we aren’t prepared for? What do they have to do with the unique qualities of being human? You aren’t going to believe them if we don’t work up to them slowly, so that is what we are going to do.

SILICON-BASED AIDS: THE COCHLEAR IMPLANT STORY

Cochlear implants have helped hundreds of thousands of people with severe hearing problems (due to the loss of hair cells in the inner ear, which transmit and also amplify or damp auditory stimuli) for whom a typical hearing aid does not help. In fact, a child who has been born deaf and has the implants placed at an early enough age (eighteen to twenty-four months being optimal) will be able to learn to speak normally, and although his hearing may not be perfect, it will be quite functional. Wonderful as this may sound, in the 1990s, many people in the deaf community worried that cochlear implants might adversely affect deaf culture and that, rather than a therapeutic intervention, the devices were a weapon being wielded by the medical community to commit cultural genocide against the deaf community. Some considered hearing an enhancement, an additional capability on top of what other members of the community had, gained by artificial means. Although people with cochlear implants can still use sign language, apparently they are not always welcome.4 Could this reaction be a manifestation of Richard Wrangham’s theory, which we learned about in chapter 2, that humans are a party-gang species with in-group/out-group bias? This attitude has slowly been changing but is still held by many.

To understand cochlear implants, and all neuroprosthetics, it is important to also understand that the body runs on electricity. David Bodanis, in his book Electric Universe, gives us a vivid description: “Our entire body operates by electricity. Gnarled living electrical cables extend into the depths of our brains; intense electric and magnetic force fields stretch into our cells, flinging food or neurotransmitters across microscopic barrier membranes; even our DNA is controlled by potent electrical forces.”5

A DIGRESSION ON ELECTRIC CITY

The physiology of the brain and central nervous system has been a challenge to understand. We haven’t talked much about physiology, but it is the structure underneath all that occurs in the body and brain. All theories of the brain’s mechanisms must have an understanding of the physiology as their foundation. The electrical nature of the body and brain is perhaps most easily digested bit by bit and, luckily for our digestion, the continuing unfolding story began in one of the most tasty cities of the world, Bologna, Italy. In 1791, Luigi Galvani, a physician and physicist, hung a frog’s leg out on his iron balcony rail. He had hung it with a copper wire. The dang thing started twitching. Something was going on between those two metals. He zapped another frog’s leg with a bit of electricity, and it twitched. After further investigation, he suggested that nerve and muscle could generate their own electrical current, and that was what caused them to twitch. Galvani thought the electricity came from the muscle, but his intellectual sparring partner, physicist Alessandro Volta, who hailed from the southern reaches of Lake Como, was more on the mark, thinking that electricity inside and outside the body was much the same type of electrochemical reaction occurring between metals.

Some sixty years go by, and another physician and physicist, from Germany, Hermann von Helmholtz, who was into everything from visual and auditory perception to chemical thermodynamics and the philosophy of science, figured out a bit more: that electrical current was no by-product of cellular activity; it was what was actually carrying messages along the axon of the nerve cell. He also figured out that even though the speed at which those electrical messages (signals) were conducted was far slower than in a copper wire, the nerve signals maintained their strength, but those in the copper did not. What was going on? Well, in wire, signals are propagated passively, so that must not be what is going on with nerve cells. Helmholtz found that the signals were being propagated by a wavelike action that went as fast as ninety feet per second. Well, Helmholtz had done his bit and passed the problem on.

How did those signals get propagated? Helmholtz’s former assistant, Julius Bernstein, was all over this problem and came up with the membrane theory, published in 1902. Half of it has proven true; the other half, not quite.

When a nerve axon is at rest, there is a 70-millivolt voltage difference between the inside and the outside of the membrane surrounding it, with the inside having a greater negative charge. This voltage difference across the membrane is known as the resting membrane potential.

When you get a blood panel done, part of what is being checked are your electrolyte levels. Electrolytes are electrically charged atoms (ions) of sodium, potassium, and chlorine. Your cells are sitting in a bath of this stuff, but ions are also inside the cells, and it is the difference in their concentrations inside and outside of the cell that constitutes the voltage difference.

Outside the cell are positively charged sodium ions (atoms that are short an electron) balanced by negatively charged chloride ions (chlorine atoms carrying an extra electron). Inside the cell, there is a lot of protein, which is negatively charged, balanced by positively charged potassium ions. However, since the inside of the cell has an overall negative charge, not all the protein is being balanced by potassium. What’s up with that? Bernstein flung caution to the wind and suggested that there were selectively permeable pores (now called ion channels), which allowed only potassium to flow in and out. The potassium flows out of the cell and remains near the outside of the cell membrane, making it more positively charged, while the excess of negatively charged protein ions make the inside surface of the membrane negatively charged. This creates the voltage difference at rest.
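
Bernstein’s insight can be put into numbers with the Nernst equation, the standard formula for the voltage a single ion species sets up across a selectively permeable membrane. Here is a quick sketch using illustrative textbook concentrations (the exact values vary from cell to cell):

```python
import math

# Nernst equation: E = (R*T / z*F) * ln([out]/[in])
R, F = 8.314, 96485.0     # gas constant (J/mol/K), Faraday constant (C/mol)
T = 310.0                 # body temperature in kelvin (37 degrees C)
z = +1                    # charge of the potassium ion

K_out, K_in = 5.0, 140.0  # illustrative potassium concentrations (mM)

E_K = (R * T) / (z * F) * math.log(K_out / K_in)  # in volts
print(round(E_K * 1000, 1))                       # about -89 millivolts
```

The answer comes out near -89 millivolts: close to, though a bit more negative than, the 70-millivolt resting potential, because the real membrane leaks small amounts of other ions besides potassium.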

But what happens when the neuron fires off a signal (which is called an action potential)? Bernstein proposed that for a fraction of a second the membrane loses its selective permeability, letting any ion cross it. Ions would then flow into and out of the cell, neutralizing the charge and bringing the resting potential to zero. No big fancy biochemical reactions were needed, just ion concentration gradients. This second part later needed to be tweaked a bit, but first we encounter another physician and scientist, Keith Lucas.

In 1905, Lucas demonstrated that nerve impulses worked on an all-or-none basis. There is a certain threshold of stimulation that is needed for a nerve to respond, and once that threshold is reached, the nerve cell gives its all. It either fires fully, or it does not fire: all or nothin’, baby. Increasing the stimulus does not increase the intensity of the nerve impulse. With one of his students, Baron Edgar Adrian, he discussed trying to record action potentials from nerves, but World War I intervened, and Lucas died in an airplane accident.

Adrian spent World War I treating soldiers for nerve damage and shell shock, and when it ended, he returned to his alma mater, Cambridge, to take over Lucas’s lab and study nerve impulses. Adrian set out to record those propagated signals, the action potentials, and in doing so, found out a wealth of information and bagged a Nobel Prize along the way.

Adrian found that all action potentials produced by a nerve cell are the same. If the threshold has been reached for generating the signal, it fires with the same intensity, no matter what the location, strength, or duration of the stimulus is. So an action potential is an action potential is an action potential. You’ve seen one, you’ve seen them all. Now this was a bit puzzling. If the action potentials were always the same, how could different messages be sent? How were stimuli distinguished? How could you tell the difference between a flaccid and a firm handshake, between a sunny day and a moonlit night, between a dog bark and a dog bite?

Baron Adrian discovered that the frequency of the action potentials is determined by the intensity of the stimulus. If it is a mild stimulus, such as a feather touching your skin, you get only a couple of action potentials, but if it is a hard pinch, you can get hundreds. The duration of a stimulus determines how long the potentials are generated. If, however, the stimulus is constant, although the action potentials remain constant in strength, they gradually reduce in frequency, and the sensation is diminished. And the nature of the stimulus, whether it is perceptual (visual, olfactory, etc.) or motor, is determined by the type of nerve fiber that is stimulated, its pathway, and its final destination in the brain. Adrian also figured out something cool about the somatosensory cortex, the destination of all those perception neurons. Different mammals have different amounts of somatosensory cortex for different perceptions: Different species do not have equal sensory abilities; it all depends on how big an area in their somatosensory cortex is for a specific ability.
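
Adrian’s rate code can be caricatured in a few lines: every spike identical in size, only the firing rate carrying the news, and the rate adapting while a stimulus is held constant. This is a toy model with invented numbers, not real neural dynamics:

```python
def spike_train(intensity, duration_ms, threshold=1.0, adaptation=0.9):
    """Toy rate code: spikes all share one amplitude; only their timing
    carries information, and the rate decays under a constant stimulus."""
    if intensity < threshold:
        return []                        # below threshold: no spikes at all
    spikes, rate = [], 10.0 * intensity  # spikes per 100 ms, say
    t = 0.0
    while t < duration_ms:
        t += 100.0 / rate                # ms until the next spike
        if t < duration_ms:
            spikes.append((t, 1.0))      # (time, amplitude): amplitude never varies
        rate *= adaptation               # adaptation: frequency falls, amplitude doesn't
    return spikes

feather = spike_train(intensity=1.5, duration_ms=300)
pinch = spike_train(intensity=8.0, duration_ms=300)
print(len(feather), len(pinch))          # the pinch fires far more often
```

The feather and the pinch produce spikes of identical height; the only difference the brain ever sees is how many arrive, and how fast.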

This also applies to the motor cortex. Pigs, for instance, have most of their somatosensory cortex dedicated to their snout. Ponies and sheep also have a big nostril area; it is as large as the area for the entire rest of their bodies. Mice have a huge whisker area, and raccoons have 60 percent of their neocortex devoted to their fingers and palm. We primates have big hand and face areas, for both sensation and motor movement. You get more bang for your buck when you touch something with your index finger than when you use other parts of your body. This is why when you touch an object with your finger in the dark, you are more likely to be able to determine what it is than if you touch it with your back. It is also why you have such dexterous hands and such an expressive face. However, we will never know what it is like to have the perceptions of a pig. Although the basic physiology is the same, the hookups and the motor and somatosensory areas are different among mammalian species. Part of our unique abilities and experiences, and the uniqueness of every animal species, lies in the makeup of the motor and somatosensory cortex.

Next, Alan Hodgkin, one of Adrian’s students, figured out that the current generated by the action potential was more than enough to excite an action potential in the next segment of an axon. Each action potential had more power than it needed to spark the next one. So they could perpetuate themselves forever. This was why, once generated, they didn’t lose their strength. Later, Hodgkin and one of his students (are you following the genealogy?), Andrew Huxley, tweaked Bernstein’s membrane theory, and also received a Nobel Prize for their work. Studying the squid’s giant axon, the largest of all axons (picture a strand of spaghettini), they were able to record action potentials from inside and outside the cell. They confirmed the 70-millivolt difference that Bernstein had proposed, but found that in the action potential, there was actually a 110-millivolt change, and the inside of the cell ended up with a positive charge of 40 millivolts, not the neutral state that Bernstein had supposed.

Somehow, excess positive ions were getting in and staying in the cell. Hodgkin and Huxley suggested that the selectively permeable membrane was also selectively permeable in a second way. It turns out that there is another set of pores, which they called voltage-gated channels, that selectively let in sodium ions when the membrane is stimulated enough, but they let them in for only a thousandth of a second. Then they slam closed, and the other set opens, letting potassium out, and then they slam closed too—all regulated by the changing ion voltage gradients across the cell membrane. Then, since the inside of the cell now has excess sodium, a protein binds to it and carries it out of the cell. This propagating action potential gets passed along from one end of an axon to the other. With the advent of molecular biology, more has been learned. Those ion channels are actually proteins that surround the cell membrane; they have fluid-filled pores that allow the ions to pass through.

So it is electrical current that conducts an impulse along the length of a nerve axon. However, no electricity passes from one neuron to the next, although this had been thought to be the case for many years. Rather, it is chemicals that transmit a signal from one neuron to the next across a tiny gap, called the synapse. These chemicals are now known as neurotransmitters. The neurotransmitter chemical binds to the protein on the synaptic membrane, the binding causes the protein to open its ion channel, and that sets in motion the action potential along the next nerve axon. OK, back to our story of neural implants.

THE RAGING BULL

Electrical stimulation of the brain was pioneered by José Delgado, a neuroscientist who in 1963 put his money where his mouth was. In a reaction against the increasing practice of lobotomy and “psychosurgery” in the late 1940s and early 1950s, he determined to find a more conservative way of treating mental illness, and decided to investigate electrical stimulation. Luckily he was technologically gifted. He developed the first electronic brain implant, which he placed in different brain regions of various animals. By pressing a button that controlled the implanted electrical stimulator, he would get different reactions, depending on where it was implanted. Quite sure of his technology and the information that he had learned from it, he stood in a bull ring at a ranch in Córdoba, Spain, one day in 1963 facing a charging bull with only the stimulator button in his hand and an itchy trigger finger. The electrical stimulator itself was implanted into a part of the charging bull’s brain known as the caudate nucleus. A gentle tap brought the bull to a skidding stop just feet in front of him.6 The button and his theories worked! He had turned off the bull’s aggression, and it stood placidly in front of him. With this demonstration, Delgado put neural implants on the map.

Back to the Cochlear Implant

So far, the cochlear implant is the most successful neural implant. A tiny microphone about the size of a small button is worn externally, usually behind the ear. This attaches magnetically to an internal processor that is surgically implanted under the scalp. A tunnel is drilled through the skull to the cochlea, and a wire is fed from the processor through the tunnel and into the cochlea, which is shaped like one of those twisty seashells. The microphone, made of metal backed by a plastic plate, acts like the eardrum. When the metal vibrates from incoming sound waves, it creates an electrical charge in the plastic, thus converting the sound to electricity, which then travels down a wire to a small portable computer that is worn on the belt. This computer converts the electrical charges to digital representations of what the electrical charges represent acoustically; it runs on software that is continually being fine-tuned and improved. The software can adjust audio frequency ranges and volume to personal preferences.

Let’s just say this software is very complex and is the result of years of research in sound waves and frequencies and how to code them, as well as the physiology of the cochlea. The processed signal is then sent back up the wire to the external button containing the microphone. But the microphone is not home alone. There is also a tiny radio transmitter, which transmits the signal as radio waves through the skin to the internal processor, where it is reconverted back to electricity by a diode. In the processor are up to twenty-two electrodes that correspond to different audio frequencies. The electrical signal fires up the electrodes in different combinations according to the message that the software has encoded, and the end result is then signaled down the wire into the cochlea, where it electrically stimulates the auditory nerve. This whole process takes four milliseconds!* It does not provide perfect hearing; voices sound mechanical. The brain has to learn that certain sounds may not correspond to what they sounded like in the past. Also, after a sound has been learned, a software upgrade may change that sound to actually become more realistic, but the wearer now has to readjust to the sound and its significance.
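
Stripped of years of engineering, the encoding idea comes down to splitting sound into frequency bands and letting each band’s energy drive one electrode. Here is a bare-bones sketch; the band frequencies are invented, and real processors use far more sophisticated coding strategies:

```python
import math

FS = 16000                                 # sampling rate, Hz
N = 1600                                   # 0.1 second of audio
BANDS = [250, 500, 1000, 2000, 4000]       # illustrative electrode center frequencies

def band_energies(samples):
    """Project the signal onto each band's center frequency (a naive DFT);
    the per-band magnitudes are what would drive the corresponding electrodes."""
    energies = []
    for f in BANDS:
        re = sum(x * math.cos(2 * math.pi * f * n / FS) for n, x in enumerate(samples))
        im = sum(x * math.sin(2 * math.pi * f * n / FS) for n, x in enumerate(samples))
        energies.append(math.hypot(re, im))
    return energies

# A pure 1 kHz tone should light up (mostly) the 1 kHz electrode.
tone = [math.sin(2 * math.pi * 1000 * n / FS) for n in range(N)]
e = band_energies(tone)
loudest = BANDS[e.index(max(e))]
print(loudest)  # 1000
```

The tone lands almost entirely in the 1,000-hertz band, so in this cartoon only that electrode would stimulate the auditory nerve. Real speech spreads energy across many bands at once, which is why many electrodes firing in shifting combinations are needed.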

Why am I telling you all this? Because here we have the first successful neuroprosthetic in a human: a merging of silicon with carbon, forming what many consider is the first truly cybernetic organism.

Manfred Clynes and Nathan Kline coined the word cyborg to describe the interaction of artificial and biological components in a single “cybernetic organism.” Their aim was to describe an organism built for space travel. Viewing space as an environment that humans were not adapted for, they suggested, “The task of adapting man’s body to any environment he may choose will be made easier by increased knowledge of homeostatic functioning, the cybernetic aspects of which are just beginning to be understood and investigated. In the past, evolution brought about the altering of bodily functions to suit different environments. Starting as of now, it will be possible to achieve this to some degree without alteration of heredity by suitable biochemical, physiological, and electronic modifications of man’s existing modus vivendi.”7

That was 1960, and this is now, and it is happening. To some extent, we can change man’s existing state without changing his heredity. We have been doing this with drugs to treat physical and mental states that occur in our adapted environment, and now, sophisticated physical apparatuses are also being used. If you were born deaf, that can be changed. And some researchers predict that it may be in the not so far future (less than forty years), if you were born not so swift, mentally or physically, that will be able to be changed. There is even the possibility that if you were born a psychopath, that could be changed, too. Just how much we will be able to tinker with such matters and how extensive the possible changes to one’s current physical and mental states will be are currently matters of intense speculation.

With a cochlear implant, a mechanical device has taken over one of the brain’s functions. Silicon has been substituted for carbon. It is a little different from a heart pacemaker, which stimulates the cardiac muscle to contract. This is directly connected to the brain, and the software determines what is heard. The conspiracy crowd may get a little agitated by this, because the software developer determines what is being heard. Is it ethical to use cochlear implants? Most people do not have a problem with them. Although the wearer may depend on a computer for part of his brain processing, Michael Chorost has written that although he is now a cyborg, his cochlear implant has made him more human,8 allowing him to be more social and participate in a community. People with normal hearing do not think of the cochlear implant as an enhancement. They think of it as a therapeutic intervention. One ethical question that arises is, What if in the future such implants or other devices allow you to have superhuman hearing, hearing enhancement? What if such an implant allows one to hear frequencies the human ear cannot hear? Is that OK too? Would hearing more frequencies provide a survival advantage? Would you be less of a person or less successful if everyone around you had one and you didn’t? Will you have to upgrade to silicon to survive? These are the questions we are going to be facing, and they don’t concern only sensory enhancements.

Artificial Retinas

Progress toward retinal implants has been slower. There are two questions that remain unanswered: How many electrodes will be necessary for the retinal implant to provide useful vision? And how much sight must they generate for it to be useful? Is being able to navigate enough, or must one be able to see well enough to read? Experimental retinal implants that have been tested on humans have only sixteen electrodes, and the vision they provide is only spots of light. A second implant that is not yet ready for human testing has sixty-four electrodes. No one knows how many electrodes will be necessary to provide adequate vision. It may well be that for vision, hundreds or thousands of electrodes will be needed, and their development will be dependent on the continuing advancements in nanotechnology and the miniaturizing of the electrode arrays. Rodney Brooks, a leader in the robotics world, sees the possibility of retinal implants being adapted for night vision, infrared vision, or ultraviolet vision.9 One day you may be able to trade in one good eye for one of these implants to enhance your vision beyond that of natural humans.

Locked-In Syndrome

One of the most terrifying brain injuries that a person can sustain is a lesion to the ventral part of the pons in the brain stem. These people are awake and conscious and intelligent but can’t move any skeletal muscles. That also means that they can’t talk or eat or drink. This is known as locked-in syndrome. The ones who are lucky, if you can call it that, can voluntarily blink or move their eyes, and this is how they communicate. Lou Gehrig’s disease (amyotrophic lateral sclerosis, or ALS) can also result in this syndrome. Phil Kennedy, a neurologist at Emory University, came up with a technology he felt could help these people. After successful trials in rats and monkeys, he was given the OK to try it in humans.

In 1998, for the first time, Kennedy implanted an electrode made up of a tiny hollow glass cone attached to two gold wires. The electrode is coated with neurotrophic factor, which encourages brain cells to grow into the tube and hold it stable in the brain. The electrode is implanted in the left-hand motor region of the brain and picks up the electrical impulses the brain generates. The patient imagines moving his left hand, and the electrode picks up the electrical impulse that this thought produces. The electrical impulse travels down the two wires, which are connected to an amplifier and an FM transmitter outside the skull but under the scalp. The transmitter signals to a receiver external to the scalp. These signals are routed to the patient’s computer, interpreted and converted by software, and end by moving the cursor on the computer screen. Kennedy’s first patients were able, after extensive training, to imagine moving their left hand and thereby move the cursor on the computer screen!10, 11 This was and still is truly amazing. He had captured electrical impulses generated by thinking about a movement and translated them into movement by a computer cursor. It requires huge processing power.12 A myriad of neural signals must be sorted through to remove “noise,” the remaining electrical activity must be digitized, and decoding algorithms must process the neural activity into a command signal—all in a few milliseconds. The result is a command that the computer can respond to.
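
That pipeline (detect spikes in a noisy trace, count them, turn the count into a command) can be sketched in miniature. Every number below is made up, and none of this resembles Kennedy’s actual algorithms; it only shows the shape of the problem:

```python
import random

random.seed(0)

THRESHOLD = 2.5  # amplitude above which a sample counts as a spike (arbitrary units)
GAIN = 3.0       # pixels of cursor movement per detected spike (invented calibration)

def record(firing_rate, n_samples=1000):
    """Synthetic electrode trace: background noise plus occasional spikes."""
    trace = []
    for _ in range(n_samples):
        sample = random.gauss(0.0, 1.0)    # electrical 'noise'
        if random.random() < firing_rate:  # a real spike rides on top of it
            sample += 5.0
        trace.append(sample)
    return trace

def decode(trace):
    """The pipeline in miniature: threshold out the noise, count the
    surviving spikes, and turn the count into a cursor command."""
    spikes = sum(1 for s in trace if s > THRESHOLD)
    return spikes * GAIN                   # horizontal cursor displacement

resting = decode(record(firing_rate=0.01))   # patient at rest
imagined = decode(record(firing_rate=0.20))  # patient imagines moving the hand
print(resting < imagined)                    # more firing, bigger cursor movement
```

Even this cartoon shows why noise matters: some stray noise samples cross the threshold and get counted as spikes, which is one reason the real systems need heavy filtering and months of patient training.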

This is all based upon an implant that can survive in the salty sea-like environment inside the body without corroding, transmit electrical signals without producing toxic by-products, and remain cool enough to avoid cooking the nearby neurons. This was not an easy assignment. This is an incredible first step, which actually, of course, was not the first step but one based on hundreds of thousands of other steps. And one electrode doesn’t provide a lot of information. It took the patient months to learn how to use it, and the cursor could only move horizontally, but the concept worked. There are several groups approaching this drawing board from different angles.13

This type of device is known as a brain-computer interface (BCI). Unlike the cochlear implant, which is supplying sensory input information to the brain, BCIs work on the output from the brain. They pick up electrical potentials generated in the brain as a by-product of neuronal activity and translate the neuronal signals into electrical impulses that can control the computer cursor—or, in the future, other devices.

BASIC-SCIENCE BREAKTHROUGHS

In 1991, Peter Fromherz of the Max Planck Institute in Germany succeeded in developing a neuron-silicon junction. This was between an insulated transistor and a Retzius cell of a leech,14 and was the beginning of actual brain-computer interfaces. The problem that had to be surmounted was that although computers and brains both work electrically, their charge carriers are different. It’s roughly like trying to hook up your gas stove to an electric line. Electrons carry the charge in the solid silicon of the chip, and ions (atoms or molecules that have gained or lost an electron) do the job in liquid water for the biological brain. Semiconductor chips also have to be protected from corrosion in the body’s saltwater environment, as anyone who has ever worked or lived by the ocean knows. Fromherz’s “intellectual and technological challenge” was to join these different systems directly at the level of electronic and ionic signals.15

This technology has allowed another lab more recently to implant a different system, called the BrainGate system, developed by John P. Donoghue at Brown University, using a neural implant developed by Richard Normann at the University of Utah. The implant, known as the Utah electrode array, was originally designed to be used in the visual cortex, but Donoghue thought it would work as well in the motor cortex. In 2004, an implant with ninety-six electrodes was surgically inserted into Matthew Nagle, a quadriplegic patient who had been stabbed in the neck at a Fourth of July celebration three years before while coming to the aid of a friend. Since this patient had been quadriplegic for a few years, no one knew if the part of his brain that controlled his motor system would still respond or whether it would have atrophied from disuse. However, he began to respond right away.

It was also easier to use than Kennedy’s implant. Nagle didn’t need several months of training before he was able to control it. Just by thinking about it, he was able to open simulated e-mail and draw an approximately circular figure on the computer screen using a paint program. He could adjust the volume, channel, and power on his television, and play video games, such as Pong. After a few trials, he was also able to open and close a robotic prosthetic hand by just looking at the hand, and he used a simple multijointed robotic limb to grasp an object and transport it from one location to another.16 This was not done easily or smoothly, but it was possible. Obviously this is huge. Anything that gives such people any degree of control over their environment is momentous. The system still has many bugs to be worked out. When the patient wants to use the system, a cable that leads to the bulky external processing equipment must be attached to a connecter on his skull. Each time it is turned on, a technician has to recalibrate the system. And, of course, the electrode array in the brain is no small potatoes. The risk of infection is ever present, as are the probability of scar tissue eventually causing the implant to lose function, the risk of causing more damage with insertion or movement of the array, and its possible malfunction.

How can a chip with only ninety-six electrodes code for the movement of an arm? The idea that recording the firing of just a few neurons could accomplish a motor activity came from Apostolos Georgopoulos, a neurophysiologist currently at the University of Minnesota. He had observed that an individual nerve cell performs more than one function. A single neuron fires for more than one direction of movement, but has a preferred direction of movement.17 It turned out that the frequency that it was firing determined the direction of the muscle’s movement: If more frequently, it was moving in one direction; less, in another—a bit like Morse code of the brain. Georgopoulos found that through a vector analysis (not everyone has forgotten their high school trig class) of the firing frequency and preferred direction of firing, he could accurately predict the direction of muscular movement.18He also suggested that recording only a few neurons, between 100 and 150, would produce fairly accurate predictions of movement in three-dimensional space.19 This made using a small electrode panel feasible in recording neuronal intentions.

For a locked-in patient, or a paralyzed patient, more autonomy would include feeding himself and being able to get a glass of water without calling for assistance. Controlling a robotic arm to perform these tasks would be great. However, there are still many limiting factors to these systems. Without enumerating all the bugs, one obvious factor is that they are open-loop systems. Information goes out, but none comes back in. In order for a person to be able to control a prosthetic arm to drink a cup of coffee or feed himself at his own pace, sensory information needs to be sent back to the brain to prevent the many a slip ’twixt cup and lip. Anyone who has done the Mr. Small skit knows about this problem.*

The input problem is a complicated business. No one quite knows all the ins and outs of how proprioception works. In addition, there is the need for sensory information, such as how firmly one is grasping a cup, its weight, temperature, and whether it is following a smooth trajectory to the mouth. There is hope that if this information can be programmed into a prosthetic arm, perhaps the real arm could be programmed and directed too. The arm would have its nerves connected to chips that receive signals from the implants in the brain directing its movement, but also incoming sensory signals would be decoded by the chip and sent to the brain to give it feedback. In this way, the implant would serve as a bridge to bypass the severed nerves.

The human arm, however, which we take for granted as we reach for a cup of java or twist a little pasta onto a fork, that whole shoulder-elbow-wrist-hand with all its fingers and network of bones, nerves, tendons, muscles, and ligaments, is immensely complicated. Muscles are flexing and extending together, being stimulated and inhibited, twisting and adjusting their movement constantly, all at varying velocities, all with sensory, proprioceptive, cognitive, and pain feedbacks to the brain telling it the muscles’ position, force, stretch, and velocity. The sensory system actually is sending back to the brain about ten times the information the motor system is sending out. The current implants are obviously still quite crude, but they are being improved every year, being reduced in size and given more capacity, just as personal computers have gotten smaller and faster with more memory. But the idea works. Neurons in your brain can grow onto a computer chip and transfer neuronal signals to it. There can be silicon replacement parts for the brain.

Richard Andersen, a professor of neuroscience at Caltech, has another idea. He thinks instead of using the motor cortex as the site to capture neuronal firings, it would be better and easier to go back up to a higher cortical area where the visual feedback is processed and the planning for the movement is made—the parietal cortex.20 The posterior parietal cortex is situated between the sensory and the motor regions and serves as a bridge from sensation to action. His lab has found that an anatomical map of plans exists within this area, with one part devoted to planning eye movements and another part to planning arm movements.21, 22 The action plans in the arm-movement area exist in a cognitive form, specifying the goal of the intended movement rather than particular signals for all the biomechanical movements. The parietal lobe says, “Get that piece of chocolate into my mouth,” but does not detail all the motions that are necessary: “First extend the shoulder joint, by flexing the blah blah blah….” All these detailed movements are encoded in the motor cortex. Andersen and his colleagues are working on a neural prosthesis for paralyzed patients that records the electrical activity of nerve cells in the posterior parietal cortex. Such an implant would interpret and transmit the patients’ intentions: “Get the coffee to my mouth.” They think this will be much easier for software programmers. These neural signals are decoded using computer algorithms, and are converted into electrical control signals to operate external devices such as a robot arm, an autonomous vehicle, or a computer. The robotic arm or vehicle would simply receive the input as a goal—chocolate in mouth—leaving the determination of how to accomplish the goal to the other systems, such as smart robotic controllers. Smart robots? We’ll get there soon. This bypasses the need for a closed-loop system. This system also needs relatively few neurons to send a signal.23

Brain surgery, implants, infection—can’t they figure out something that doesn’t require going inside the head? Can’t they use EEGs?

Jonathan Wolpaw, chief of the Laboratory of Nervous System Disorders of the New York State Department of Health and State University of New York, thinks so. He has been working on this problem for the last twenty years. When he first began, he had to figure out if the idea of using brain waves captured externally was possible. He made a headset with a series of external electrodes positioned over the motor cortex, where neurons fire to initiate movement. These neurons give off weak electrical signals that the electrodes pick up. Getting useful signals from “a few amplitudes of scalp-recorded EEG rhythms that reflect in a noisy and degraded fashion the combined activity of many millions of neurons and synapses”24 was difficult. After several years, he was able to show that people could learn to control their brain waves to move a computer cursor. The software for this system has been many years in development. The headset electrodes pick up the signals, and because the strength of the signals varies from person to person, and from one part of the cortex to another, the software is constantly surveying the different electrodes for the strongest signals, giving those the greatest influence in the decision-making process as to which way a cursor should move.

Scott Hamel, one of the subjects who test Wolpaw’s system, says it is easiest to use when he is fully relaxed. If he tries too hard, has other things on his mind, or gets frustrated and tense, things don’t go as well.4 Too many neurons are competing for attention. Wolpaw and his group, and others who have taken up the challenge, have found that “a variety of different brain signals, recorded in a variety of different ways and analyzed with a variety of different algorithms, can support some degree of real-time communication and control.”25

However, there is a big problem, and it is not just with externally controlled BCIs. It is also true of the implants. Even in controlled conditions, the results are variable. Users are better on some days than others, and performance can vary widely even within a single session and from trial to trial. Cursor movements are slow and jerky, described by some as ataxic.24 Wolpaw thinks this problem is going to persist unless researchers take into account the fact that BCIs ask the brain to do something entirely new.

This becomes clear if you look at what the brain normally does to produce movement and how it normally does it. The job of the central nervous system (CNS) is to convert sensory inputs into appropriate motor outputs. This job of creating motor outputs is a concerted effort of the entire CNS from the cerebral cortex to the spinal cord. No single area is wholly responsible for an action. Whether you walk, talk, high jump, or bronco bust, there is a collaboration among areas, from the sensory neurons up the spinal cord to the brain stem and eventually to the cortex and back down through the basal ganglia, thalamic nuclei, cerebellum, brain-stem nuclei, and spinal cord to the interneurons and motor neurons. And even though the motor action is smooth and consistent from one time to the next, the activity in all those different brain areas may not be. However, when a BCI is being used, it is a whole new ball game. Motor actions, which are normally produced by spinal motor neurons, are now being produced by the neurons that normally just contribute to the control of the motor neurons. Now they are putting on the whole show. They have to do their own job and assume the role normally performed by spinal motor neurons; their activity becomes the final product, the output, of the entire CNS. They are doing it all.

The brain has some plasticity, but there are limits. Wolpaw makes the point that BCIs provide new output pathways for the brain, but the brain has to learn them. The brain has to change the way it normally functions. He thinks that in order to make BCIs perform better, researchers have to make it easier for the brain to implement these new output pathways. An output pathway can either control a process or select a goal. He also thinks that outputting a goal is easier. Just tell the software the goal, and let it do all the work. Wolpaw is walking into Andersen’s camp.

This technology has not been overlooked by the business world. There are companies that have come up with their own versions that are being developed for playing computer games. One company, Emotiv, has a sixteen-sensor strap-on headset that they claim reads emotions, thoughts, and facial expressions. According to the company, it is the first brain-computer interface that can detect human conscious thoughts and nonconscious emotions. Its current gaming application allows for 3-D characters to reflect the player’s expressions: You wink, it winks; you smile, it smiles. It also allows the manipulation of virtual objects using the player’s thoughts.

Another company, NeuroSky, has come up with a single-electrode device that they claim will read emotions as its software translates them to commands to control a game. Other companies are developing NeuroSky’s technology to use in cell-phone headsets and MP3 players. The sensor will sense your emotional state and pick music that is compatible with it. No downer songs while you are feeling fine, or for those slow-to-wake-up folks; no heavy metal until after 11:00 A.M. Just exactly what is being recorded and used is, of course, not being revealed by either company.

Aiding Faulty Memories with Silicon

Another problem begging for a solution has to do with the increasing elderly population: memory loss. The normal slow loss of memory is annoying enough without the devastating problem of Alzheimer’s disease. Although the neuronal implants that we have discussed have to do with sensory or motor functions, other researchers are concerned with restoring cognitive loss of higher-level thought processes. Theodore Berger at USC has been interested in memory and the hippocampus for years, and more recently he has been working toward creating a prosthesis that will perform the services that Alzheimer’s disease plays havoc with: the transfer of information from immediate memory to long-term memory. The hippocampus has a star role in the formation of new memories about experienced events, as evidenced by the fact that damage to the hippocampus usually results in profound difficulties in forming new memories and also affects retrieval of memories formed prior to the damage. It doesn’t look as if procedural memory, such as learning how to play an instrument, is part of the hippocampus’s job description, for it is not affected by damage to the hippocampus.

The hippocampus is located deep in the brain and is evolutionarily old, which means that it is present in less-evolved animals. Its connections, however, are less complicated than other parts of the brain, and this makes Berger’s goal a tad (and only a tad) easier. Just what the damaged cells in the hippocampus did is still up to conjecture, but that doesn’t slow down Berger and his big plan to develop a chip for people with this type of memory loss. He doesn’t think he needs to know exactly what they did. He thinks all he has to do is provide the bridge between the input of cells on one side and the output of cells on the other side of the damaged cells.

Not that that is a walk in the park. He has to figure out from an electrical input pattern what the output pattern should be. For instance, let’s say that you were a telegraph operator who translates Morse code from one language to another. The problem is, you don’t know or understand either of the languages or codes. You receive a code tapped out in Romanian and then have to translate it and tap it out in Swedish. You have no dictionaries or codebooks to help you. You just have to figure it out. That is what his job has been like, but harder. This has taken several years and the help of researchers from many different disciplines. In Berger’s system, the damaged CNS neurons would be replaced with silicon neurons that mimic their biologic function. The silicon neurons would receive electrical activity as inputs from, and send it as outputs to, regions of the brain with which the damaged region previously was connected. This prosthesis would replace the computational function of the damaged brain and restore the transmission of that computational result to other regions of the nervous system.26 So far his tests on rats and monkeys “worked extremely well,” but tests on humans are still a few years away.4

Caveats and Concerns

Futurists like Ray Kurzweil envision this technology being able to do far more. He foresees enhancement chips: chips that will increase your intelligence, chips that will increase your memory, chips that can have information downloaded into them. Learn French, Japanese, Farsi? No problem, just download it. Do advanced calculus? Download it. Increase your memory? Sure, just get another five-terabyte chip implanted. Mary Fisher Polito, a friend who occasionally suffers from a “senior moment” memory lapse, says, “I hope they hurry up with those chips. I could use some more RAM now.” Kurzweil also envisions the world being populated with such intelligent people that the major problems facing us will be easily solved. “Greenhouse gases? Oh, I know how to fix that. Famine? Who’s hungry? There have been no reports of hunger for the last fifty years. War? That is so retro.” But then, Chris von Ruedon, one of my students, points out, “It’s often the most intelligent people who cause such problems.” Others are concerned about such scenarios as: “Honey, I know that we were saving this money for a vacation, but maybe we should get the twins neural chips instead. It is hard for them in school when so many of the other kids have them and are so much smarter. I know you wanted them to stay natural, but they just can’t keep up, and their friends think they are odd.” Artifact-driven evolution!

But in a sense, the story of human evolution has been artifact-driven ever since the first stone ax was chipped, and perhaps even earlier. Merlin Donald, a cognitive neuroscientist at Case Western Reserve University, thinks that although humanity is greatly concerned about changes in the physical ecology of the external world, we should be paying more attention to what has been going on inside our heads. Information storage and transfer went from the internally stored memory and experience of a single individual to being internally stored and transferred by many individuals as storytellers, to external memory storage on papyrus, then to books and libraries, then to computers and the Internet. He thinks that there have been equally massive changes in the cognitive ecology, due to the advent of these huge banks of external memory storage, and we are not done yet. He predicts that this runaway proliferation of information will probably set our future direction as a species.27 Perhaps that next step in this evolution of information storage may be to store it internally, again with the help of implanted silicon: just another tool.

Or not. The idea that we are messin’ with our innards is disturbing to many. And just what would we do with expanded intelligence? Are we going to use it for solving problems, or will it just allow us to have longer Christmas card lists and bigger social groups? If we spend 90 percent of our time talking about each other, will we solve the world problems or just have more stories to tell? But there is another major problem with Kurzweil’s scenario: No one knows what it is that the brain is doing that makes a person intelligent. Just having a lot of information available doesn’t necessarily make a person more intelligent. And being intelligent does not necessarily make a person wise. As David Gelernter, a computer scientist at Yale, wonders, “What are people well informed about in the information age?…Video games?” He isn’t impressed; in fact, he seems to think people are less informed.28 So what about intelligence? What were those smart robots all about?

SMART ROBOTS?

My desires in a personal robot are rather mundane. I just want it to do all the things I don’t want to do. I want it to get the mail, hand me any personal handwritten letters and invitations, and take everything else and deal with it. I want it to check my e-mail and throw out all the spam and pay my bills. I want it to keep track of finances, fund my retirement, do the taxes, and hand me a net profit at the end of the year. I want it to clean the house (including the windows), and it might as well do all the car maintenance. Ditto with weeding, trapping gophers, and…well, it might as well do the cooking, too, except when I want to. I would like my robot to look like Sophia Loren in Divorce Italian Style, not R2D2. I may have trouble with that one, because my wife wants Johnny Depp doing all the chores. Maybe R2D2 isn’t such a bad idea. As I said, my needs are mundane. I can do all these things, but I’d rather spend my time doing something else. For disabled persons who cannot do any of these things, a personalized robot would allow far more autonomy than they have.

The thing is, this may not be so far off, or at least some of it, and that would be great. But maybe, if we aren’t careful, the smart robot won’t be grumbling about cat hair as it is cleaning the floor. It may be discussing quantum physics or, worse yet, its “feelings.” And if it is intelligent, will it still do all our chores? Just like you and your kids, won’t it figure out a way not to do them? That would mean it would have desires. Once it has feelings, will we feel guilty about making it do all the scut work, and start cleaning up before the robot comes in, and apologizing for the mess? Once it is conscious, will we have to go to court to get it decommissioned so we can get the latest model? Will a robot have rights? As Clynes and Kline pointed out in their original description of a cyborg in space, “The purpose of the Cyborg…is to provide an organizational system in which [such] robot-like problems are taken care of automatically and unconsciously, leaving man free to explore, create, think, and feel.”7 Without my actually merging physically with silicon, without actually becoming a cyborg, a separate silicon assistant could just as easily give me more time to explore, create, think, and feel (and, I might add, gain weight). So I am going to be careful which model I order. I do not want a robot with emotions. I don’t want to feel guilty that my robot is vacuuming while I am out on the deck in the sun eating a now mandatory calorie-reduced lunch and thinking deep thoughts, like maybe I should get up and weed.

How close are we to my idea of a personal robot? If you haven’t been keeping up with what is going on in the world of robotics, you will be amazed. There are currently robots doing plenty of the jobs that are repetitive and/or require precision, from automobile assembly to surgery. Currently the domain of robots is the three Ds—dull, dangerous, or dirty. The dirty category includes toxic waste cleanups. Surgery is none of those three; it is just being done on a microscopic level. Currently Pack Bots that weigh eighteen kilograms are being used as emergency and military robots. They can negotiate rough terrain and obstacles such as rocks, logs, rubble, and debris; they can survive a drop of two meters onto a concrete surface and land upright; and they can function in water up to two meters deep. They can perform search and rescue, and disarm bombs. They are being used to detect roadside bombs and reconnoiter caves. However, these robots do not look like your dream of a handsome search-and-rescue guy (like my brother-in-law) as you are lying at the base of some cliff you foolishly tried to climb. They look like something your kid would build with an erector set.

There are also unmanned robotic aircraft. A robot has driven most of the way across the United States. Driving in an urban setting is still the most difficult test and has yet to be perfected. The Urban Challenge, a sixty-mile competition for autonomous vehicles sponsored by the Defense Advanced Research Projects Agency (DARPA), was held in November 2007. Vehicles had to be able to negotiate city streets, intersections, and the parking lot, including finding a spot, parking legally, and then leaving the lot without a fender bender, while avoiding shopping carts and other random objects. This is not remote control. These are cars controlled by software, driving on their own. It may not be too long before computer programs will drive all cars. We will recline, read the paper, munch a doughnut (I’ll take jelly), and drink a latte on the way to work.

But so far, on the home-cleaning front, all we have is a floor cleaner and vacuum cleaner that looks like a CD player, and a lawn mower. But what these robots have, and what my dream does not have, are wheels. No robot yet can move through the room like Sophia Loren or Johnny Depp. Half the neurons in the human brain are at work in the cerebellum. Part of their job is motivating, not in the sense of “come on, you can do it,” but in the sense of Chuck Berry and Maybelline in the Coupe de Ville motivatin’ up the hill—that is, timing and coordinating muscles and skills.

Developing a robot with animal-like motion is incredibly difficult and has yet to be accomplished, but engineers at Shadow Robot Company in England, under founder Richard Greenhill, think they are getting close. Since 1987, they have been working to build a bipedal robot. Greenhill says, “The need for anthropomorphism in domestic robotics is classically illustrated by the problem of staircases. It is not feasible to alter houses or to remove the staircases. It is possible to design robots with stair-climbing attachments, but these are usually weak spots in the design. Providing a robot with the same locomotive structures as a human will ensure that it can certainly operate in any environment a human can operate in.”29 They are getting there, and along the way they have developed many innovations, one of them being the Shadow Hand, a state-of-the-art robotic hand that can do twenty-four out of the twenty-five movements that a human hand can perform. It has forty “air muscles,” another invention. The shadow hand has touch sensors on its fingertips and can pick up a coin. Many other laboratories are working on other aspects of the anthropomorphic robot. David Hanson, at the University of Texas, has made a substance he has called Flubber, which is very much like human skin and allows lifelike facial expressions.* So it is possible to have a robotic Johnny Depp sitting in your living room, but he isn’t up to doing the tango yet.

Japan Takes the Lead

Japan is a hot spot for robotic research. They have a problem that they are hoping robots will help solve. Japan has the lowest birth rate in the world, and 21 percent of the population is over sixty-five, the highest proportion of elderly in any nation. The population actually started declining in 2005, when births were exceeded by deaths. The government discourages immigration; the population is over 99 percent pure Japanese. Any economist will tell you this is a problem. There aren’t enough young people to do all the work; shortages are already being felt in many areas, including nursing. So if the Japanese don’t want to increase immigration, then they are going to have to figure out a way to take care of their elders. They are looking to robotics.

At Waseda University, researchers have been working on creating facial expressions and upper-body movements that correlate with the emotions of fear, anger, surprise, joy, disgust, sadness, and, because it is Japan, a Zen-like neutral state. Their robot has been created with sensors: It can hear, smell, see, and touch. They are studying how senses translate into emotions and want to develop a mathematical model for this.30 Their robot will then react to external stimuli with humanlike emotions. It is also programmed with instinctual drives and needs. Its needs are driven by appetite (energy consumption), the need for security (if it senses it is in a dangerous situation, it will withdraw), and the need for exploration in a new environment. (I will not order one of these.) The Waseda engineers have also made a talking bot that has lungs, vocal cords, articulators, a tongue, lips, a jaw, a nasal cavity, and a soft palate. It can reproduce a humanlike voice with a pitch control mechanism. They have even built a robot that plays the flute.

At Meiji University, designers have set their sights on making a conscious robot. It may be that from this intersection of robotic technology, computer technology, and the desire to make humanlike robots, a greater understanding of human brain processing will emerge. Building a robot to act and think as a human does means testing the theories of brain processing with software and seeing if the result corresponds to what the human brain is actually doing. As Cynthia Breazeal, who leads a group at MIT, points out, “While many researchers have proposed models of specific components of social referencing, these models and theories are rarely integrated with one another into a coherent, testable instance of the full behavior. A computational implementation allows researchers to bring together these disparate models into a functioning whole.”31 Tohru Suzuki, Keita Inaba, and Junichi Takeno lament that no one yet has presented a good integrated model to explain consciousness. Yak yak yak, but how do you actually hook it all up? So instead of shrugging their shoulders, they went about making their own model and then built a robot using this design.

Actually they built two, and you will see why. They believe that consciousness arises from the consistency of cognition and behavior.32 What does that remind you of? How about mirror neurons? Those same neurons that are firing when you cogitate a behavior and when you perform it. You can’t get more consistent than that. Next they turn to a theory by Merlin Donald—that the ability to imitate motor action is the foundation of communication, language, the human level of consciousness, and human culture in general. This is known as mimesis theory. Donald has been thinking a lot about the origins of language, and he just does not see it happening without fine motor skills, and in particular, the ability to self-program motor skills. After all, language and gesture require the refined movements of muscles. And while other animal species have genetically determined rigid types of behavior, human language is not rigid but flexible. Thus the motor skills required for language must also be flexible. There just had to be voluntary, flexible control of muscles before language could develop. He sees this flexibility coming from one of the fundamentals of motor skill—procedural learning. To vary or refine a motor movement, one needs to rehearse the action, observe its consequences, remember them, and then alter what needs to be altered. Donald calls this a rehearsal loop, something we are all familiar with. He notes that other animals do not do this. They do not initiate and rehearse actions entirely on their own for the purpose of refining their skill.33 Your dog is not practicing shaking hands all day while you are at the office. Merlin thinks that this rehearsal-loop ability is uniquely human and forms the basis for all human culture, including language.

So, Suzuki and pals drew up a plan for a robot that had consistency of behavior and cognition. They built two, to see if they would show imitative behavior. One robot was programmed to make some specific movements, and the second robot copied them! Imitative behavior implies that the robot can distinguish itself from another robot: It is self-aware. They believe that this is the first step on the road to consciousness. Unlike other designs but like many models of human consciousness, this one had feedback loops for both internal and external information. External information (somatic sensation) feedback is needed for a robot to imitate and learn. The external result of action must come back to the interior in order to modify it if need be: Action must be connected to cognition. Internal feedback loops are what connect the cognition to the action. However, these robots don’t look like what I’m pretty sure you are visualizing. They look like something that a mechanic would pull out from under the hood of a Mercedes and charge an arm and a leg to replace.

Meanwhile, Back at MIT

The problem with robots is, they still mostly act like machines. Cynthia Breazeal at MIT sums it up: “Robots today interact with us either as other objects in the environment, or at best in a manner characteristic of socially impaired people. They generally do not understand or interact with people as people. They are not aware of our goals and intentions.”34 She wants to give her robots theory of mind! She wants her robot to understand her thoughts, needs, and desires. If one is building a robot to help the elderly, she continues, “Such a robot should be persuasive in ways that are sensitive to the person, such as helping to remind them when to take medication, without being annoying or upsetting. It must understand what the person’s changing needs are and the urgency for satisfying them so that it can set appropriate priorities. It needs to understand when the person is distressed or in trouble so that it can get help.”

Kismet, the second-generation Cog, is a sociable robot that was built in the lab of Rodney Brooks, director of the MIT Computer Science and Artificial Intelligence Laboratory, predominantly by Cynthia Breazeal when she was Brooks’s graduate student. Part of what makes Kismet a sociable robot is that it has large eyes that look at what it is paying attention to. It is programmed to pay attention to three types of things: moving things, things with saturated color, and things with skin color. It is programmed to look at skin color if it is lonely, and bright colors if it is bored. If it is paying attention to something that moves, it will follow the movement with its eyes. It has a set of programmed internal drives that increase until they release certain behaviors. Thus if its lonely drive is high, it will look around until it finds a person. Then, since that drive is satisfied, another drive will kick in, perhaps boredom, which will increase, and it will start searching for a bright color; this makes it appear to be looking for something specific. It may then find a toy, giving an observer the impression that it was looking specifically for the toy. It also has an auditory system that detects prosody in speech. With this mechanism it has a program that matches certain prosody with specific emotions. Thus it can detect certain emotions such as approval, prohibition, attention getting, and soothing—just like your dog. Incoming perceptions affect Kismet’s “mood” or emotional state, which is a combination of three variables: valence (positive or negative), arousal (how tired or stimulated it is), and novelty. Responding to various motion and prosody cues, Kismet will proceed among different emotional states, which are expressed through its eyes, eyebrows, lips, ears, and the prosody of its voice. Kismet is controlled by the interaction of fifteen different computers running various operating systems—a distributed system with no central control. 
It does not understand what you say to it, and it speaks only gibberish, though gibberish with the proper prosody for the situation. Because this robot simulates human emotions and reactions, many people relate to it on an emotional level and will speak to it as if it were alive. Here we are back to anthropomorphism.

Rodney Brooks wonders if simulated, hard-coded emotions in a robot are the same as real emotions. He presents the argument that most people and artificial intelligence researchers are willing to say that computers with the right software and the right problem can reason about facts, can make decisions, and can have goals; but although they may say that a computer may act as if, behave as if, seem as if, or simulate that it is afraid, it is hard to find anyone who will say that it is viscerally afraid. Brooks sees the body as a compilation of biomolecules that follow specific, well-defined physical laws. The end result is a machine that acts according to a set of specific rules. He thinks that although our physiology and constituent materials may be vastly different, we are much like robots. We are not special or unique. He thinks that we overanthropomorphize humans, “who are after all mere machines.”9 I’m not sure that, by definition, it is possible to overanthropomorphize humans. Perhaps it is better to say we underanthropomorphize machines or undermechanomorphize humans.

Breazeal’s group’s next attempt at developing TOM in a robot is Leonardo. Leo looks like a puckish cross between a Yorkshire terrier and a squirrel that is two and a half feet tall.* He can do everything that Kismet can do and more. They wanted Leo to be able to identify another’s emotional state and why that person is experiencing it. They also want him (they refer to Leo as “he” and “him,” so I will, too) to know the emotional content of an object to another person. They don’t want Leo tramping on the Gucci shoes or throwing out your child’s latest painting that looks like trash to anyone but a parent. They also want people to find Leo easy to teach. Instead of your having to read an instruction manual and learn a whole new form of communication when you get your first robot, they want Leo to be able to learn as we do. You’ll just say, “Leo, water the tomatoes on Thursdays” and show him how to do it, and that’s it. No small ambitions!

They are banking on the neuroscience theory that humans are sociable, and we learn through using our social skills. So first, in order to be responsive in a social way, Leonardo has to be able to figure out the emotional state of the person with whom he is interacting. They approached designing Leo using evidence from neuroscience that “the ability to learn by watching others (and in particular the ability to imitate) could be a crucial precursor to the development of appropriate social behavior—and ultimately the ability to reason about the thoughts, intents, beliefs, and desires of others.” This is the first step on the road to TOM. The design was inspired by the work done on newborns’ facial imitation and simulation ability by Andrew Metzoff and M. Keith Moore, whom we read about in chapter 5. They needed Leonardo to be able to do the five things that we talked about that a baby could do when it was hours old:

1. Locate and recognize the facial features of a demonstrator.

2. Find the correspondence between the perceived features and its own.

3. Identify a desired expression from this correspondence.

4. Move its features into the desired configuration.

5. Use the perceived configuration to judge its own success.35

So they built an imitation mechanism into Leonardo. Like Kismet, he has visual inputs, but they do more. Leo can recognize facial expressions. Leo has a computational system that allows him to imitate the expression he sees. He also has a built-in emotional system that is matched to facial expression. Once this system imitates a person’s expression, it takes on the emotion associated with it.

The visual system also recognizes pointing gestures and uses spatial reasoning to associate the gesture with the object that is indicated. Leonardo also tracks the head pose of another. Together these two abilities allow him to understand the object of attention and share it. He makes and keeps eye contact.

Like Kismet, he has an auditory system, and he can recognize prosody, pitch, and the energy of vocalization to assign a positive or negative emotional value. And he will react emotionally to what he hears. But unlike Kismet, Leo can recognize some words. His verbal tracking system matches words to their emotional appraisal. For instance the word friend has a positive appraisal, and the word bad has a negative one, and he will respond with the emotional expression that matches the words.

Breazeal’s group also incorporated the neuroscience findings that memory is enhanced by body posture and affect.36 As Leo stores information in long-term memory, the memory can be linked with affect. His ability to share attention also allows him to associate emotional messages of others with things in the world. You smile as you look at the painting your kid did; Leo looks at it too, and he files it away in memory as a good thing—he doesn’t toss it with the trash. Shared attention also provides a basis for learning.

So we are reasonably close to a robot that is physically humanlike in appearance and movement, one that can simulate emotions and is sociable. However, you’d better not be doing the rumba with your robot, because it most likely would break your foot if it accidentally trod on it (these puppies are not lightweight). You should also consider its energy requirements (there goes the electric bill). But what about intelligence? Social intelligence is not all my robot will need. It is going to have to outfox gophers, and it is going to have to be pretty dang intelligent to outfox the gophers in my yard, which, I am sure, have the same genetic code as the Caddyshack survivors.

Ray Kurzweil is not worried so much about the physical vehicle. It is the intelligence that interests him. He thinks that once computers are smart enough, that is, smarter than we are, they will be able to design their own vehicles. Others think that humanlike intelligence and all that contributes to it cannot exist without a human body: I think therefore my brain and my body am. Alun Anderson, editor in chief of New Scientist magazine, put it this way when asked what his most dangerous idea was: “Brains cannot become minds without bodies.”37 No brain-in-a-box will ever have humanlike intelligence. We have seen how emotion and simulation affect our thinking, and, without those inputs, we would be, well, a whole ’nother animal. And Jeff Hawkins, creator of the Palm Pilot, thinks since we don’t even know what intelligence is and what processes in the brain produce it, we have a lot of work still to do before we can have intelligent machines.38

ARTIFICIAL INTELLIGENCE

The term artificial intelligence (AI) originated in 1956, when John McCarthy from Dartmouth College, Marvin Minsky from Harvard University, Nathaniel Rochester of the IBM Corporation, and Claude Shannon from the Bell Telephone Laboratories proposed that “a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”39

Looking back at that statement made over half a century ago, it seems as if it was a little optimistic. Today the American Association for Artificial Intelligence defines AI as “the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines.”40 However, despite all the computing power and effort that have gone into making computers intelligent, they still can’t do what a three-year-old child can do: They can’t tell a cat from a dog. They can’t do what any surviving husband can do: They don’t understand the nuances of language. For instance, they don’t know that the question “Have the trash barrels been taken out?” actually means, “Take the trash barrels out,” and that it also has a hidden implication: “If you don’t take the trash out, then….” Use any search engine, and as you gaze at what pops up, you think, “Where did that come from? That is so not what I’m looking for.” Language translation programs are wacky. It is obvious the program has no clue as to the meaning of the words it is translating. Attempts are continually being made, but even with all the processing power, memory, and miniaturization, creating a machine with human intelligence is still a dream. Why?

Artificial intelligence comes in two strengths: weak and strong. Weak AI is what we are used to when we think about computers. It refers to the use of software for problem-solving or reasoning tasks. Weak AI does not include the full range of human cognitive abilities, but it may also have abilities that humans do not have. Weak AI has slowly permeated our lives. AI programs are directing our cell-phone calls, e-mails, and Web searches. They are used by banks to detect fraudulent transactions, by doctors to help diagnose and treat patients, and by lifeguards to scan beaches to spot swimmers in need of help. AI is responsible for the fact that we never encounter a real person when we make a call to any large organization or even many small ones, and for the voice recognition that allows us to answer vocally rather than press a number. Weak AI beat the world champion chess player, and can actually pick stocks better than most analysts. But Jeff Hawkins points out that Deep Blue, IBM’s computer that beat the world chess champion, Garry Kasparov, at chess in 1997, didn’t win by being smarter than a human. It won because it was millions of times faster than a human: It could evaluate two hundred million positions per second. “Deep Blue had no sense of the history of the game, and didn’t know anything about its opponent. It played chess yet didn’t understand chess, in the same way that a calculator performs arithmetic but doesn’t understand mathematics.”38

Strong AI is what flips many people out. Strong AI is a term coined by John Searle, a philosopher at the University of California, Berkeley. The definition presupposes, although he does not, that it is possible for machines to comprehend and to become self-aware. “According to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.”41 Searle maintains that all conscious states are caused by lower level brain processes,42 thus consciousness is an emergent phenomenon, a physical property—the sum of the input from the entire body. Consciousness does not just arise from banter back and forth in the brain. Consciousness is not the result of computation. You have to have a body, and the physiology of the body and its input, to create a mind that thinks and has the intelligence of the human mind.

IS A CONSCIOUS MACHINE POSSIBLE?

The logic behind believing a machine can be conscious is the same logic that is behind creating AI. Because human thought processes are the result of electrical activity, if you can simulate that same electrical activity in a machine, then the result will be a machine with humanlike intelligence and consciousness. And just as with AI, there are some who think that this does not mean that the machine’s thought processes need necessarily be the same as a human’s to produce consciousness. Then there are those who agree with Hawkins and think that it must have the same processes, and that to have those, it has to be hooked up the same way. And there are those who are on the fence.

The quest for artificial intelligence was not originally based on reverse-engineering the brain, because in 1956, when AI was a glimmer of an idea, very little was known about how the brain works. Those early engineers had to wing it when they began to design AI. They initially came up with their own solutions for creating the various components of artificial intelligence, and some of these methods have actually supplied clues to how parts of the brain work. Some of these approaches are based on mathematical rules, such as Bayesian logic, which determines the likeliness of a future event based on similar events in the past, or Markov models, which evaluate the chance that a specific sequence of events will happen and are used in some voice-recognition software. The engineers built “neural nets,” set up to run in parallel and loosely simulating neurons and their connections; they actually learn responses that are not preprogrammed in. These systems have also been used in voice-recognition software. They are also used to detect fraud in credit-card charges, and in face and handwriting recognition. Some are based on inference—the old “if this, then that” logic. There are programs that search through large numbers of possibilities, such as the chess program Deep Blue. Some are planning programs that start with general facts about the world, rules about cause and effect, facts germane to particular situations, and the intended goal—just like the direction finder in your car that plans routes and tells you how to get to the closest Chinese takeout.

But the human brain is different in many ways from a computer. In his book The Singularity Is Near, Kurzweil enumerates the differences.

The brain’s circuits are slower but more massively parallel. The brain has about one hundred trillion interneuronal connections. This is more than any computer yet has.

The brain is constantly rewiring itself and self-organizing.

The brain uses emergent properties, which means that intelligent behavior is rather an unpredictable result of chaos and complexity.

The brain is only as good as it has to be, in terms of evolution. There’s no need to be ten times smarter than everyone else; you need only be a little smarter.

The brain is democratic. We contradict ourselves: We have internal conflicts that may result in a superior solution.

The brain uses evolution. The developing brain of a baby six to eight months old forms many random synapses. The patterns of connections that best make sense of the world are the ones that survive. Certain patterns of brain connections are crucial, whereas some are random. As a result, an adult has far fewer synapses than the toddler.

The brain is a distributed network. There is no dictator or central processor calling the shots. It is also deeply connected: Information has many ways to navigate through the network.

The brain has architectural regions that perform specific functions and have specific patterns of connections.

The overall design of the brain is simpler than the design of a neuron.2

It’s interesting, however, that Kurzweil leaves out something rather major. He ignores the fact that the brain is hooked up to a biological body. So far, AI programs are good only at the thing they are specifically designed for. They don’t generalize and aren’t flexible.2 Deep Blue, with all its connections, massive memory, and power, does not know that it better take the trash out…or else.

Although human-level intelligence has not been achieved, computers surpass some of our abilities. They are better at symbolic algebra and calculus, scheduling complex tasks or sequences of events, laying out circuits for fabrication, and many other mathematically involved processes.9 They are not good at that elusive quality, common sense. They can’t critique a play. As I said before, they are not good at translating from one language to another, nor at nuances within a language. Oddly, it is many of the things that a four-year-old can do, rather than what a physicist or a mathematician can do, that are the hang-ups.

No computer yet has passed the Turing Test, proposed in 1950 by Alan Turing,43 the father of computer science, to answer the question, Can machines think? In the Turing Test, a human judge engages in a natural language conversation with two other parties, one a human and the other a machine, both trying to appear human. If the judge cannot reliably tell which is which, then the machine has passed the test. The conversation is usually limited to written text, so that voice is not a prejudicial factor. Many researchers have a problem with the Turing Test. They do not think that it will indicate whether a machine is intelligent. Behavior isn’t a test of intelligence. A computer may be able to act as if it were intelligent, but that does not mean it is.

PALM PILOT TO THE RESCUE

Jeff Hawkins thinks he knows why no truly intelligent machines have been made. It is not because computers just need to be more powerful and have more memory, as some researchers think. He thinks everyone working on artificial intelligence has been barking up the wrong tree. They have been working under the wrong premise38 and should be paying more attention to how the human brain works. Although John McCarthy and most other AI researchers think that “AI does not have to confine itself to methods that are biologically observable,”44 Hawkins thinks this notion is what has led AI research astray. And he isn’t so happy with neuroscientists, either. Slogging through neuroscience literature to answer the question of just how the brain works, he found that although mounds of research have been done, and tons of data accumulated, no one yet has put it all together and come up with a theory to explain how humans think. He was tired of the failed attempts at AI and concluded that if we don’t know how humans think, then we can’t create a machine that can think like a human. He also concluded that if no one else was going to come up with a theory, he’d just have to do it himself. So he founded the Redwood Center for Theoretical Neuroscience and set about the business. Jeff is no slouch. Or maybe he is. He leaned back, put his feet up on the desk, cogitated, and came up with the memory-prediction theory,38 which presents a large-scale framework of the processes in the human brain. He hopes other computer scientists will take it out for a spin, tweak it, and see if it works.

Hawkins was fascinated when he read a paper written in 1978 by the distinguished neuroscientist Vernon Mountcastle, who had made the observation that the neocortex is remarkably similar throughout, and therefore all regions of the cortex must be performing the same job. Why the end result of that job is different for different areas—that is, vision is the result of processing in the visual cortex, hearing in the auditory cortex, etc.—is not because they have different processing methods. It is because the input signals are different, and because of how the different regions are connected to each other.

One piece of evidence that backs up this conclusion was the demonstration of the plasticity (an ability to change its wiring) of the cortex done by Mriganka Sur at MIT. To see what effect the input to a cortical area had on its structure and function, he rewired visual input in newborn ferrets so that it went to the auditory cortex instead of the visual cortex.45, 46 Would a ferret be able to use another portion of the somatosensory cortex, such as the auditory cortex tissue, to see? It turns out that the input has a big effect. The ferrets could see to some extent. This means that they were seeing with the brain area that normally hears sounds. The new “visual cortical tissue” isn’t wired exactly as it would have been in the normal visual cortex, leading Sur and his colleagues to conclude that input activity can remodel the cortical networks, but it is not the only determinant of cortical structure; there are probably intrinsic cues (genetically determined) that also provide a scaffold of connectivity.47 That means specific areas of the cortex have evolved to process certain types of information and have been wired in a certain way to better accommodate it, but if need be, because the actual mode of processing is the same in all the neurons, any part of the cortex can process it.

This idea that the brain uses the same mechanism to process all information made a lot of sense to Hawkins. It united all the capabilities of the brain into one tidy package. The brain didn’t have to reinvent the wheel every time it expanded its abilities: It has one solution for thousands of problems. If the brain uses a single processing method, then a computer could too, if he could figure out what that method was.

Hawkins is a self-declared neocortical chauvinist. He looks on the neocortex as the seat of our intelligence: It was the last to develop and is larger and better connected than any other mammal’s. However, he fully keeps in mind that all the input that goes into it has been processed by lower-level brain regions: those regions that are evolutionarily older, which we share with other animals. So using his big neocortex, Hawkins came up with his memory-prediction theory, and we are going to check it out.

All the inputs into the neocortex come from our senses, just as in all animals. One surprising thing is that no matter what sense we are talking about, the input into the brain is in the same format: neural signals that are partly electrical and partly chemical. It is the pattern of these signals that determines what sense you experience; it doesn’t matter where they come from. This can be illustrated by the phenomenon of sensory substitution.

Paul Bach y Rita, who was a physician and neuroscientist at the University of Wisconsin, became interested in the plasticity of the brain after caring for his father, who was recovering from a stroke. He understood that the brain is plastic and that it is the brain that sees, not the eyes. He wondered if he could restore vision to a blind person by providing the correct electrical signal but through a different input pathway, that is, not through the eyes, which were no longer functioning and providing input. He created a device that displays visual patterns on the tongue, so that a blind person would be able to wear the device and “see” via sensations on the tongue.48 Visual images from a small TV camera worn on the forehead are delivered to arrays of stimulators in a disc worn on the tongue. (He tried several parts of the body, including the abdomen, back, thigh, forehead, and fingertip, and found the tongue to be the best.) The images from the camera are translated into a neural code, which the stimulator implements by creating specific pressure patterns on the tongue. The nerve impulses created by the pressure patterns are sent to the brain via the intact sensory pathway from the tongue, and the brain quickly learns to interpret these impulses as vision. Wacky, huh? With this system, a congenitally blind person was able to perform assembly and inspection tasks on an electronic assembly line of miniature diodes, and totally blind persons can catch a ball rolling across a table and identify faces.

Hawkins says that an important aspect of all this sensory information is that no matter what sense’s input is being processed, it is arriving in the form of spatial and temporal patterns. When we hear something, it is not only the timing between sounds that is important, the temporal pattern, but also the actual spatial position of the receptor cells in the cochlea is important. With vision, obviously there are spatial patterns, but what we don’t realize is that with every image that we perceive, the eye is actually jumping three times a second to fixate on different points. These movements are known as saccades. Although what we perceive is a stable picture, it actually is not. The visual system automatically deals with these continuously changing images and you perceive them as stable. Touch is also spatial, but Hawkins points out that just one single sensation is not enough to identify an object; it has to be touched in more than one spot, which adds a temporal aspect.

So with this understanding of the input, we go to the six-layered dish towel, the neocortex. Following Mountcastle’s theory, Hawkins assumes that each cell in a particular layer of the dish towel performs the same type of process. So all the neurons in layer I do the same process, then the result is sent to layer II and the layer II cells all do their thing, and so on. However, the information is not just being sent up through the levels, it is also sent laterally to other regions and back down. Each one of those pyramidal neurons may have up to ten thousand synapses. Talk about an information superhighway!

The neocortex is also divided into regions that process different information. Now we come to the notion of hierarchy. The brain treats information in a hierarchical manner. This is not a physical hierarchy such that higher-level cortical areas sit on top of each other, but a hierarchy of information processing, a hierarchy of connectivity. The region at the bottom of the hierarchy is the biggest and receives tons of sensory information, each neuron a specialist in a bit of minutiae. For instance, at the bottom of the hierarchy for visual processing is an area known as V1. Each neuron in V1 specializes in a tiny patch of an image, like a pixel in a camera, but not only that, it has a specific job within the pixel. It fires only with a specific input pattern, such as a 45-degree line slanted down to the left. It makes no difference whether you are looking at a dog or a Pontiac; if there is a 45-degree downward slant to the left, this neuron will fire. Area V2, the next region up the hierarchy, starts putting the information from V1 together. Then it sends what it has pieced together to V4. V4 does its thing, and then the information goes to an area called IT. IT specializes in entire objects. So if all the incoming info matches a face pattern, then a group of neurons specific for face patterns in IT start firing away as long as they are receiving their info from below. “I’m getting a face code, still there, still there, ahh…, OK, it’s gone. I’m out.”

But don’t get the idea that this is a one-way system. Just as much information is going down the hierarchy as coming up. Why?

Computer scientists have been modeling intelligence as if it were the result of computations—a one-way process. They think of the brain as if it, too, were a computer doing tons of computations. They attribute human intelligence to our massively parallel connections, all running at the same time and spitting out an answer. They reason that once computers can match the amount of parallel connections in the brain, they will have the equivalent of human intelligence. But Hawkins points out a fallacy in this reasoning, which he calls the hundred-step rule. He gives this example: When a human is shown a picture and asked to press a button if a cat is in the picture, it takes about a half second or less. This task is either very difficult or impossible for a computer to do. We already know that neurons are much slower than a computer, and in that half second, information entering the brain can traverse only a chain of one hundred neurons. You can come up with the answer with only one hundred steps. A digital computer would take billions of steps to come up with the answer. So how do we do it?

And here is the crux of Hawkins’s hypothesis: “The brain doesn’t ‘compute’ the answers to problems; it retrieves the answers from memory. In essence, the answers were stored in memory a long time ago. It only takes a few steps to retrieve something from memory. Slow neurons are not only fast enough [to] do this, but they constitute the memory themselves. The entire cortex is a memory system. It isn’t a computer at all.”38 And this memory system differs from computer memory in four ways:

1. The neocortex stores sequences of patterns.

2. It recalls patterns autoassociatively, which means it can recall a complete pattern when given only a partial one. You see a head above a wall and know that there is a body connected to it.

3. It stores patterns in invariant form. It can handle variations in a pattern automatically: When you look at your friend from different angles and different distances, although the visual input is completely different, you still recognize her. A computer would not. Each change in input does not cause you to recalculate whom you are looking at.

4. The neocortex stores memory in a hierarchy.

Hawkins proposes that the brain uses its stored memory to make predictions constantly. When you enter your house, your brain is making predictions from past experience: where the door is, where the door handle is, how heavy the door it is, where the light switch is, which furniture is where, etc. When something is brought to your attention, it is because the prediction failed. Your wife painted the back door pink without telling you of her intentions, so you notice it. (“What the heck…?”) It didn’t match the predicted pattern. (In fact, it didn’t match anything.) Thrill seeker that he is, Hawkins proposes that prediction “is the primary function of the neocortex, and the foundation of intelligence.”38 That means that prediction is going on all the time in everything that you do, because all those neocortical cells process in the same manner. Hawkins states, “The human brain is more intelligent than that of other animals because it can make predictions about more abstract kinds of patterns and longer temporal pattern sequences.”38

Rita Rudner, in a comedy routine occasioned by her wedding anniversary, says you have to be very careful about what household activities you perform during the first two weeks of marriage, because those are going to be the ones that you will be stuck doing for the duration. You don’t want to set up a predictable pattern that you will regret! Hawkins sees intelligence as measuring just how well you remember and predict patterns, whether they are patterns of words, numbers, social situations, or physical objects. So this is what is going on when cortical areas are sending information down the cortical hierarchy:

For many years most scientists ignored these feedback connections. If your understanding of the brain focused on how the cortex took input, processed it, and then acted on it, you didn’t need feedback. All you needed were feed forward connections leading from sensory to motor sections of the cortex. But when you begin to realize that the cortex’s core function is to make predictions, then you have to put feedback into the model: the brain has to send information flowing back toward the region that first receives the inputs. Prediction requires a comparison between what is happening and what you expect to happen. What is actually happening flows up, and what you expect to happen flows down.38

So back to the visual processing of the face that we started with: IT is firing away about identifying a face pattern, sending this info forward to the frontal lobes, but also back down the hierarchy. “I’m getting a face code, still there, still there, ahh…, OK, it’s gone, I’m out.” But V4 had already put most of the info together, and while it sent it up to IT, it also yelled back down to V2, “I betcha that’s a face. I got it almost pieced together, and the last ninety-five out of one hundred times the pieces were like this, it was a face, so I betcha that’s what we got now, too!” And V2 is yelling, “I knew it! It seemed so familiar. I was so guessing the same damn thing. I told V1 as soon as it started sending me stuff. Like I am so hot!” This is a simplified rendition, but you get the idea.

The neocortex of mammals got tacked onto the lower-functioning reptilian-type brain (with some modifications). That brain, however, was no small potatoes. It could and still can do a lot. Crocodiles can see, hear, touch, run, swim, maintain all their homeostatic mechanisms, catch prey, have sex, and get a shoe company named after them. We can do most of these same things without our neocortex, although Michael Jordan needed his to get shoes named after him. Having this addition made mammals smarter, and Hawkins says it is because it added memory. Memory allowed an animal to predict the future, by being able to recall previous sensory and behavioral information. The neurons receive their input and recognize it from the day before. “Gee, we got similar signals yesterday, and it turned out to be a delicious thing to eat. Well, hey, all our input is just like yesterday. Let’s predict that it is the same thing as yesterday, a delicious tidbit. Let’s eat it.”

Memory and prediction allow a mammal to take the rigid behaviors that the evolutionarily old brain structures developed and use them more intelligently. Your dog predicts that if he sits, puts his paw on your lap, and cocks his head, you will pet him, just as you did all those other times. He did not have to invent any new movement. Even without his neocortex, he could sit, lift his paw, and cock his head, but now he can remember the past and predict the future. However, animals depend on the environment to access their memory. Your dog sees you, and that gives him his cue. There is no evidence that he is out on the lawn ruminating about what to do to get petted. Merlin Donald maintains that humans have the unique ability to autocue. We can voluntarily recall specific memory items independent of the environment.49 Hawkins thinks that human intelligence is unique because the neocortex of humans is bigger, which allows us to learn more complex models of the world and make more complex predictions. “We see deeper analogy, more structure on structure, than other mammals.” We also have language, which he sees as fitting nicely into the memory prediction framework. After all, language is pure analogy and is just patterns set in a hierarchical structure (semantics and syntax), which is the meat and potatoes of what his framework recognizes. And, just as Merlin Donald suggested, language needed motor coordination.

Humans have also taken their motor behavior to the extreme. Hawkins makes the point that our ability to execute complex movements is due to the fact that our neocortex has taken over most of our motor functions. Knock out the motor cortex of a rat, and you may not notice any change, but knock it out in a human, and the result is paralysis. Our motor cortex is much more connected to our muscles than that of any other species. This is why Michael Jordan needed his neocortex to become the king of basketball. Hawkins thinks our movements are the result of predictions, and predictions cause the motor command to move: “Instead of just making predictions based on the behavior of the old brain, the human neocortex directs behavior to satisfy its predictions.”38

Hawkins doesn’t really foresee my getting a personal robot. He thinks that in order for a robot to act like a human or interact in humanlike ways, it will need all the same sensory and emotional input, and it will need to have had human experiences. To behave as a human, you need to experience life as a human biological entity. This would be extremely difficult to program, and he doesn’t see the point. He projects that such robots would be more expensive and higher maintenance than a real human and couldn’t relate to a human on the level of shared experience. He thinks that we can build an intelligent machine by giving it senses (not necessarily the same as we have; it could have infrared vision, for instance) so that it can learn from observation of the world (rather than having everything programmed in), and a heck of a lot of memory, but it isn’t going to look like Sophia or Johnny.

Hawkins is not worried that an intelligent machine is going to be malevolent or want to take over the world or be concerned that it is a slave to its human oppressor. These fears are based on a false analogy: confusing intelligence with “thinking like a human,” which as we have seen, is often dominated by the emotional drives of the evolutionarily old part of our brain. An intelligent machine would not have the drives and desires of a human. There is a difference between the neocortical intelligence measured by the predictive ability of a hierarchical memory, and what happens to that when input from the rest of the brain is added. He doubts that we will be able to download our minds onto a chip and pop it into a robot, as Ray Kurzweil predicts will be possible. He foresees no way that the trillions of unique connections in the nervous system can be copied and duplicated, and then popped into a robotic body just like yours. All those years of sensory input from the exact dimensions of a specific body have honed the predictions of each brain. Pop it into a different body, and the predictions will be off. Michael Jordan’s timing would be totally off in Danny DeVito’s body, and vice versa.

The Blue Brain Project

Henry Markram, director of the Brain and Mind Institute at the École Polytechnique Fédérale de Lausanne (EPFL), in Switzerland, is a big advocate of the view that in order to understand how the brain works, the biology of the brain is of the utmost importance. He agrees with Hawkins about the problems in modeling artificial intelligence: “‘The main problem in computational neuroscience is that theoreticians [who] do not have a profound knowledge of neuroscience build models of the brain.’ Current models ‘may capture some elements of biological realism, but are generally far from biological.’ What the field needs, he says, is ‘computational neuroscientists [who are] willing to work closely with the neuroscientists to follow faithfully and learn from the biology.’”50 Markram is a detail man. He’s no theoretical windbag. He mucks around at the ion channel, neurotransmitter, dendritic, synaptic level and works his way up.

Markram and his institute, collaborating with IBM and their Blue Gene/L supercomputer, have now taken on the task of reverse engineering the mammalian brain. This project has been dubbed the Blue Brain Project, and it rivals the human genome project in its complexity. To begin with, they are creating a 3-D replica of a rat brain with the intention of eventually being able to create one of a human brain. “The aims of this ambitious initiative are to simulate the brains of mammals with a high degree of biological accuracy and, ultimately, to study the steps involved in the emergence of biological intelligence.”3 It is not an attempt to create a brain or artificial intelligence, but an attempt to represent the biological system. From this, insights about intelligence and even consciousness may be drawn.

Markram makes the fundamental point that there are “quantum leaps in the ‘quality’ of intelligence between different levels of an organism.” Thus the intelligence of an atom is less than that of a DNA molecule, which has less intelligence than the protein it codes, which alone is nothing compared to combinations of proteins that produce different cell types. These different cell types combine to produce different brain areas, which contain and process different types of input. You get the picture. The brain as a whole makes the next quantum leap in the quality of intelligence beyond its physical structures, the separate brain areas, and the neurons. The question is whether it is the interaction between the neurons, that whole thing about being well connected, that is driving that last qualitative leap. So this 3-D model is no flim-flam replica that has ever been done before. In fact, it never could have been done before. It requires the huge computational power of the Blue Gene computer, the biggest, baddest, fastest computer in the world.

They are building the replica specific neuron by specific neuron, because every neuron is anatomically and electrically unique, with unique dendritic connections. The project is founded on an immense amount of research that has been going on for the last hundred years in neuroanatomy, beginning with the unraveling of the microstructure of the neocortical column, and in physiology, beginning with the model of ionic currents and the idea that dendritic branches of neurons affect their processing. The first goal of the project has been accomplished. That was to construct a single neocortical column (NCC) of a two-week-old rat. In preparation for this project, the researchers at EPFL, over the last ten years, have been performing paired recordings of the morphology and physiology of thousands of individual neurons and their synaptic connections in the somatosensory cortex of two-week-old rats. The replica NCC, the “blue column,”* is made up of ten thousand neocortical neurons within the dimensions of an NCC, which is about half a millimeter in diameter and one and a half millimeters tall.3

At the end of 2006, the first column was completed; the model included thirty million synapses in precise 3-D locations! The next step is to compare simulation results of the model with experimental data from the rat brain. Areas where more info is needed can then be identified, and more research will be done to fill in these gaps. This is not a one-shot deal. The circuit will have to be rebuilt over and over again every time a section gets tweaked by new data, and the replica of the real biological circuit will become progressively more accurate.

What’s the Point of Building This Model?

Markram has a whole laundry list of information that will be gleaned from these models. Just as Breazeal thinks her robots will be useful for verifying neuroscientific theories, so Markram thinks of the blue column the same way: “Detailed, biologically accurate brain simulations offer the opportunity to answer some fundamental questions about the brain that cannot be addressed with any current experimental or theoretical approaches.”3 First, he sees it as a way to gather all the random puzzle pieces of information that have been learned about cortical columns, and put them all together in one place. Current experimental methods allow only glimpses at small parts of the structure. This would allow the puzzle to be completed. You jigsaw fans know how satisfying that can be.

Markram has hopes the continual tweaking of the details of the model will allow us to understand the fine control of ion channels, receptors, neurons, and synaptic pathways. He hopes to answer questions about the exact computational function of each element, and their contribution to emergent behavior. He also foresees insight into the mystery of how the emergent properties of these circuits—such as memory storage and retrieval, and intelligence—come about. A detailed model will also aid in disease diagnosis and treatment. Besides identifying weak points in circuits that can cause dysfunction and targeting them for treatment, simulations of neurological or psychiatric disease could also be used to check hypotheses about their origins, to design tests to diagnose them, and to find treatments. It will also provide circuit designs that can be used for silicon chips. Not too shabby!

CHANGING YOUR GENES

Gregory Stock, director of the Program on Medicine, Technology and Society at UCLA, doesn’t think the fields of technology and robotics are going to change what it means to be human. He thinks being a fyborg is where it is at. Machines will stay machines, bodies will remain carbon. The idea of hopping up onto the operating table for a bit of neurosurgery when he feels just fine doesn’t much appeal to him, and he doesn’t think it will appeal to many people, especially when everything you would gain could be had by wearing an external device. I know neurosurgery is not on the top of my to-do list. Why risk it, when you could strap on a watch-like device or clip something on your belt? Why give up a good eye when you could slip on a pair of glasses for night vision? Stock thinks our world is going to be rocked by the fields of genetics and genetic engineering—tinkering with DNA, man directing his own evolution. These changes aren’t going to be the result of some mad scientist cooking up ideas about modifying the human race to his specifications; they are going to creep in slowly as the result of work done to treat genetic diseases and to avoid passing them on to our children. They are also going to come from the realization that much of our temperament is due to our genes ( just like the domesticated Siberian foxes we talked about) and that those genes will be modifiable. “We have already used technology to transform the world around us. The canyons of glass, concrete, and stainless steel in any major city are not the stomping ground of our Pleistocene ancestors. Now our technology is becoming so potent and so precise that we are turning it back on our own selves. And before we’re done, we are likely to transform our own biology as much as we have already changed the world around us.”51

Biology-Based Aids—The Ways to Change Your DNA

You can change your biology by taking medications, or you can change the instruction manual that coded how to build your body. That manual is DNA. There are two ways to tinker with DNA: somatic gene therapy and germ-line therapy. Somatic gene therapy is tinkering with the DNA a person already has in nonreproductive cells; it affects only the current individual. Germ-line therapy is tinkering with the DNA in sperm, egg, or an embryo, so that every cell in the future adult organism has the new DNA, including the reproductive cells. That means the change is passed on to future generations.

Stanley Cohen of Stanford University and Herbert Boyer, then at the University of California, San Francisco, worked only thirty miles apart, but they met in Hawaii. They attended a conference on bacterial plasmids in 1972. A plasmid is a DNA molecule, usually in the shape of a ring. It is separate from the chromosomal DNA but is also able to replicate. It is usually found floating around in bacterial cells. One reason they are important is that these strands of DNA can carry information that makes bacteria resistant to antibiotics. Cohen had been working on ways to isolate specific genes in plasmids and clone them individually by putting them in Escherichia coli bacteria and letting them replicate. Boyer had discovered an enzyme that cut DNA strands at specific DNA sequences, leaving “cohesive ends” that could stick to other pieces of DNA. Shop-talking over lunch, they wondered if Boyer’s enzyme would cut Cohen’s plasmid DNA into specific, rather than random, segments, then bind those segments to new plasmids. They decided to collaborate, and in a matter of months succeeded in splicing a piece of foreign DNA into a plasmid.52 The plasmid acted as a vehicle to carry this new DNA, which then inserted new genetic information into a bacterium. When the bacterium reproduced, it copied the foreign DNA into its offspring. This created a bacterium that was a natural factory, cranking out the new DNA strands. Boyer and Cohen, now considered to be the fathers of biotechnology, understood that they had invented a quick and easy way to make biological chemicals. Boyer went on to cofound the first biotech company, Genentech. Today, people all around the world enjoy the benefits of Boyer and Cohen’s “cellular factories.” Genetically engineered bacteria produce human growth hormone, synthetic insulin, factor VIII for hemophilia, somatostatin for acromegaly, and the clot-dissolving agent called tissue plasminogen activator. 
This line of research suggested that perhaps custom DNA could be added to human cells, but the problem was how to get it into the cell.

The goal of somatic therapy is to replace a defective gene that is causing a disease or dysfunction by the insertion of a good gene into an individual’s cells. In somatic gene therapy, the recipient’s genome is changed, but not in every cell in the body, and the change is not passed along to the next generation. This has not been an easy assignment. Although there has been a lot of research done in this area, and a lot of money spent, the successes have been few and far between.

First of all, there is the problem of just exactly how one inserts genes into a cell. Researchers finally figured out that they should use the experts in cell invasion and replication: viruses. Unlike bacteria, viruses cannot replicate on their own. In reality, a virus is merely a vehicle for DNA or RNA. It consists of DNA or RNA surrounded by a protective coat of protein: That’s it. They are the quintessential houseguests from hell.

Viruses actually sneak their way inside a host cell and then use the cell’s replication apparatus to make copies of their own DNA. However, if you could make that DNA a good copy of a defective gene, and direct it to cells that have a defective copy, well then, you can see the possibilities of a virus acting as the agent of somatic gene therapy: Take the virus’s DNA out, add the DNA that you want, and turn it loose.

To begin with, research has concentrated on diseases that are caused by only a single defective gene in accessible cells, such as blood or lung cells, rather than diseases caused by a host of defects that work in concert with each other. But of course, nothing is as easy as first envisioned. The protein coats of the viruses are foreign to the body, and sometimes they have triggered host reactions that have caused rejection, a problem that recently may have been solved by researchers in Italy.53 Because of the problems with rejection, different DNA vehicles are being explored. Inserting strands of DNA on a chromosome is also tricky, because it matters where it is put. If spliced next to a DNA sequence that regulates the expression of the sequences next to it, it can result in unexpected consequences, such as tumors.54 Moreover, most genetic diseases, such as diabetes, Alzheimer’s disease, heart disease, and various cancers, arise from a host of genes, not just one. Also, the effects of the therapy may not last. The cells that have been modified may not be long-lived, so that the therapy has to be repeated.

Gene therapy has had a few successes, including the treatment of severe combined immunodeficiency disease (also known as bubble-boy disease)55, 56, 57 and X-linked chronic granulomatous disease,58 which is another type of immune deficiency. As I am writing this, the BBC reports that a team at London’s Moorfields Eye Hospital made the first attempt to treat blindness caused by a faulty gene called RPE65 using gene therapy.59 Whether this worked or not will not be known for months. The trouble is, somatic therapy is really a quick fix. The people who have been treated still carry the mutant gene and can pass it on to their offspring. This is the problem that prompts research in germ-line therapy.

In germ-line gene therapy, the embryo’s DNA is changed, including the DNA in its reproductive cells. When it comes time for it to reproduce, its egg or sperm cells carry the new DNA, and the changes are passed on to their offspring. The disease-producing gene or genes are eliminated for good in a particular individual’s genome. This idea could not even have been considered until 1978, when the first test-tube baby was born. In vitro fertilization involves harvesting egg cells from the woman’s ovary, and mixing them on a petri dish with sperm. The resultant embryo is then accessible to manipulation. Very controversial at the time, in vitro fertilization (IVF) is now casual cocktail-party talk. That is not to say the process is enjoyable. It is difficult and both physically and emotionally arduous. Notwithstanding the difficulties, many infertile couples benefit from the technology, to the extent that 1 percent of the babies born in the United States are the result of in vitro fertilization.

Not all in vitro fertilization is done for infertile couples. Some is done for couples who have had a child with a genetic disease, such as cystic fibrosis. It is also done when one or both of the prospective parents know they carry a copy of a defective gene. Embryos conceived in vitro, when they reach the eight-cell stage, can now be screened with the genetic tests that are currently available. Up until 2006, there were just a small handful of diseases that could be tested for. However, a new procedure known as preimplantation genetic haplotyping (PGH),60 developed at Guy’s Hospital in London, has changed that. It is now possible to take a single cell from the early embryo, extract the DNA, replicate it, and then use it for DNA fingerprinting. This not only increases the number of genetic defects that can be detected in preimplantation embryos, now ranging into the thousands, but also increases the number of usable embryos and their survival rate. Before this test was available, if the concern was for X-linked disease, none of the male embryos could be tested, so they were eliminated. Now they too can be screened. Humans are the only animal that can tinker with their chromosomes (and those of other species, too) and guide their genetic reproduction.

The future implications of PGH are huge. There is a Web site called BetterHumans.com. The first page of comments about PGH seems to cover the territory pretty well:

“It’s pretty important considering how much it will affect the lifelong happiness of an individual and how well they can contribute to the world.”

“It is wonderful that this is not illegal yet. Do you not love incrementalism?”

“But once again, we need to define disease. I consider the average lifespan to be a disease.”

“Perhaps it will be possible to extrapolate the genetic tendency for longer life, in which case we can engineer longer lifespans into the populace.”

“When we can clearly say that a given DNA pattern has an unacceptably high propensity for a specific disease—it would be unethical to propagate it.”

“You’re right, it’s not a simple process to weed out disease from socially desirable traits…. Diversity will be important to maintain.”

“However, for public policy: an international ethical board should decide which genetic options lead to medical disorders.”

Those less enthusiastic may agree with Josephine Quintavalle, member of the pro-life activist organization Comment on Reproductive Ethics, who said: “I am horrified to think of these people sitting in judgment on these embryos and saying who should live and who should die.”61

Even before the advent of this type of testing, an earlier version that allowed screening for only a handful of diseases caused different countries to take very different approaches to legislating and regulating its use, giving rise to the phenomenon of reproductive tourism—the one vacation from which you won’t appear so well rested on your return. Obviously this even more exhaustive testing will bring more ethical questions with it.62

Currently if a couple does such testing, they may be concerned only with genetic disease that causes a lifelong affliction or an early death. But the truth is, no embryo is going to be perfect. It may not have the genes coding for childhood-onset diseases like cystic fibrosis or muscular dystrophy, but suppose it had genes that indicated a high probability of developing diabetes in middle age, or heart disease, or Alzheimer’s disease? Are you going to toss it, start all over again, and try for a better one? How about depression? And this is where the future of germ-line therapy and all of those headache-provoking ethical questions may come into play: Don’t toss ’em, change ’em!

Changing the DNA of an embryo changes the DNA in all its future cells, from the brain to the eyeballs to the reproductive organs. It changes the DNA in the future egg and sperm cells also. That means the altered DNA is passed on to all the future offspring, which would therefore be “genetically modified organisms.” In a sense, every organism is genetically modified just by the recombining of genes. Humans have already been guiding their evolution more than they realize, from raising crops to modern medicine. Although modern medicine has found ways to treat such things as infectious disease, diabetes, and asthma, allowing people to live longer, it has also allowed some people—who normally would not have lived to reproductive age—to reproduce and pass those genes on. Inadvertently, this affects evolution, increasing the prevalence of genes coding for these diseases. However, the term genetically modified organisms has come to mean tinkering with DNA by man for the purpose of selecting for or against specific traits. This has been done in plants and on laboratory animals, but not with humans.

Today, in 2007, when you have a child without IVF, you really can’t be held responsible for his or her DNA: You get what you get. That is, unless you know that you carry a defective gene that can produce a disease, and you choose to reproduce anyway. It is a matter of opinion how ethical that is. Now that the human genome has been sequenced, and you will soon be able to get your own personal sequencing done for a few bucks, this laissez-faire attitude about the future DNA of your offspring may not be acceptable.

I can imagine the courtroom scene:

“Mr. Smith, I see here that you had your gene sequencing done in February of 2010. Is that correct?”

“Ah, yeah, I thought it would be cool to get it done.”

“I also see that you received a printout of the results and an explanation of what they meant.”

“Well, yeah, they gave me that paper.”

“Yes, but you signed this paper that said you understood you carried a gene that could cause any of your offspring to have….”

“Yeah, I guess so.”

“And you went ahead and had a child without first doing PGH? You did nothing to prevent this disease in your child?”

“Well, you know, we just got caught up in the moment, and, well, it just happened.”

“Did you tell your partner you knew you were the carrier of these defective genes?”

“Ah, well, I kinda forgot about it.”

“You kinda forgot about it? When we have the technology to prevent this sort of thing?”

But then there is the other side of the coin. Your future teenager may hold you responsible for all that she doesn’t like about herself. “Gee Dad, couldn’t you have been a little more original? Like, everyone has curly blond hair and blue eyes. And maybe you could have made me more athletic. I mean, I can’t even run a marathon without training.”

No one is tinkering with the human germ line just yet. Too much is still unknown about the properties of various genes and how they affect and control each other. It may turn out that it will be too complicated to mess with. Genes that control the expression of certain traits may be so linked with the expression and control of other genes that they may not be able to be isolated. Certain traits may be the result of a constellation of genes that can’t be altered without affecting many other traits. Parents are going to be reluctant to interfere with their children’s genes, and well they should be. Europeans and people in Marin County don’t even want them altering the genes of their vegetables. That is why a different idea is being pursued: an artificial chromosome.

Artificial Chromosomes

The first version of an artificial human chromosome was made by a group at Case Western Reserve University in 1997.63 It was to be used to help illuminate the structure and function of human chromosomes, and possibly to avoid some of the problems of viral and nonviral gene therapy. You will recall that we have twenty-three pairs of chromosomes. The idea is to add an “empty” (and, we hope, inert) chromosome, which can be modified. The artificial chromosome is put into the embryo, and then whatever you order up will be tacked on to it. Some of what is tacked on may have on-and-off switches that would be under the individual’s control when they are older. For instance, there could be a gene for cancer-fighting cells that wouldn’t express itself except in the presence of a particular chemical. That chemical would be given as an injection. A person finds out he has cancer, he gets the injection that turns on the gene that produces the cancer-fighting cells, and voilà, the body cleans up the mess without any further ado. Another type of injection would turn the gene off. And if better sequences are discovered, then when it comes time for your offspring to reproduce, they can replace whatever is on the artificial chromosome with the newer, better version. Some of the genes would have to be able to suppress the expression of genes on the original chromosomes, if they control the trait you want modified.

Of course, this all presupposes IVF. Will humans control their reproduction to this extent? Our current genetically coded sexual urges lead to a great deal of willy-nilly reproduction. In the United States, abortion eliminates half of these unplanned pregnancies. However, if this urge is suppressed by selecting for a population of people that plan everything, will we survive as a species? How much will all this cost? Will only wealthy countries, or the wealthy in each country, be able to afford it? Does that matter?

You may find this disconcerting and think we should be pulling in the reins a bit, but you also need to remember what is driving our behavior. Our genes are programmed to reproduce. Besides urging reproductive behavior, they also make us safeguard our children to ensure that they survive to reproduce themselves. Stock predicts that this safeguarding will include routine PGH, that those who can afford to will no longer reproduce that old-fashioned, rather haphazard way, but will resort to IVF and embryo selection.

And of course, next up after disease prevention will be embryo modification or enhancement. As more is learned about how our brain activity is controlled by our personal genetic code, how mental illness results from specific sequences of DNA, and how different temperaments are coded for, the temptation to tinker may prove irresistible. At first, the motivation will be to prevent disease, but while you’re at it…, how about…? Stock quotes a comment made by James Watson, codiscoverer of DNA’s double-helix structure, at a conference on human germ-line engineering in 1998: “No one really has the guts to say it, but if we could make better human beings by knowing how to add genes, why shouldn’t we?”64 Modification and enhancement will be a fuzzy zone, depending on your point of view. “If you are really stupid, I would call that a disease,” Watson said on a British documentary. “The lower 10 percent who really have difficulty, even in elementary school, what’s the cause of it? A lot of people would like to say, ‘Well, poverty, things like that.’ It probably isn’t. So I’d like to get rid of that, to help the lower 10 percent.”65 Both Watson and Stock realize we are going to have to understand that many of the psychological differences between people (and the similarities) have biological roots.

These technologies will originally be explored for the treatment and prevention of disease, for developing genetically tailored drugs, and for genetic counseling. But obviously they will have applicability to modification and enhancement of the human genome. “OK, I got a couple of embryos here. What did you guys want added? Oh, yeah, here is your order form. I see you have checked tall, symmetrical, blue eyes, happy, male. Hmm, are you sure about that? Everyone is ordering tall males. Jeez, there goes horse racing. Oh, you want the athletic package, and the anticancer, antiaging, antidiabetes, anti-heart disease package. That’s standard. Comes with the chromosome now.”

So humans may soon be taking a hands-on approach to their own evolution. However, tincture of time will not be an aspect of this type of change. Selected-for traits will not be honed by hundreds of thousands of years of physiological, emotional, social, and environmental interactions. Our track record for preserving finely balanced interactions has not been so stellar. Think rabbits in Australia: Introduced in 1859 for hunting on an estate, within ten years the twenty-four original rabbits had multiplied to such an extent that two million could be shot or trapped annually with no noticeable effect on the population. Rabbits have contributed to the demise of one-eighth of all mammalian species in Australia, and an unknown number of plant species. They also munch on plants to the point where plant loss has contributed to massive amounts of erosion. All that to be able to bag a few on the manor. You don’t even want to know how much money has been spent dealing with those rabbits.

Apparently the rabbit lesson wasn’t enough. Another supposedly good idea gone bad was the one hundred cane toads that were introduced to Australia in 1935 because they were thought to be good for controlling beetles in the sugarcane fields of Central and South America. Now there are more than one hundred million across New South Wales and the Northern Territory. They are not popular. Loud and ugly with a voracious appetite and ducts full of poisonous bile, they eat more than beetles. They have had a disastrous effect on indigenous fauna in Australia. Or consider the Indian mongoose, brought to Hawaii to control the rats that had come to Hawaii as stowaways. Not only did they not control the rats, they killed all the land fowl. Or how about the recent introduction of zebra mussels, native to the Black, Caspian, and Azov seas, which were dumped into the Great Lakes in the mid-1980s in the ballast water of vessels from Europe. They are now one of the most injurious invasive species to affect the United States, and have been found as far as Louisiana and Washington. Zebra mussels have altered the ecosystems of the Great Lakes by reducing phytoplankton, the foundation of the local food chain. They have other negative economic impacts, causing damage to the hulls of ships, docks, and other structures and clogging water-intake pipes and irrigation ditches. Need I go on? And these finely balanced systems were visible ones.

What will come of all this genetics research? Exuberant technological scenarios have us becoming so intelligent that we will be capable of solving the entire world’s problems, eradicating disease, and living for hundreds of years. Are the things that we consider problems really problems, or are they solutions for larger problems that we haven’t considered? If a deer had the capacity to enumerate some of the problems it faces, we might hear, “I feel anxious all the time, I always think there is a puma watching me. I can’t get a restful night’s sleep. If I just could get those damn pumas to become vegans, half my problems would be solved.” We have seen what happens when the puma populations wane: forests become overpopulated with deer, which wreak havoc on the vegetation, leading to erosion…on and on. Problems for the individual may be solutions in the big picture. Would animal-rights activists want to tinker with the genomes of carnivores to change them into herbivores? If they think it is wrong for a human to kill and eat a deer, what about a puma?

Genetic enhancement will certainly involve tweaking personality traits. Those that may be considered undesirable, if possessed by no one, may unwittingly cause havoc. Richard Wrangham thinks pride has caused many of society’s problems. But perhaps pride is what motivates us to do a job well. Perhaps shaving the capacity for pride out of the genome would result in people not caring about the quality of their work and hearing the word “whatever” even more, if that is possible. Anxiety is often listed as another undesirable. Maybe the world would be better off without the anxious, but maybe not. Perhaps the anxious are the canaries of the world. So who is going to define what is desirable and not? Will it be well-meaning parents, who think that a perfectly designed child will live a perfect life? Will the result be the same game of Russian roulette that we already have?

CONCLUSION

Being human is interesting, that’s for sure, and it seems that it is getting more so. In a mad frenzy of utilizing our uniquely human abilities, such as our arching and opposing thumbs, which allow us finely tuned movements, and our abilities to question, reason, and explain imperceptible causes and effects, using language, abstract thinking, imagination, auto-cuing, planning, reciprocity, combinatorial mathematics, and so on, science is beginning to model what is going on in that brain of ours and in the brains of other species. We have come across a few more uniquely human abilities as we looked at researchers trying to create smart robots. One is Merlin’s rehearsal loop, and another is his suggestion that humans are the only animals that can autocue. We also learned that each species has unique somatosensory and motor specialties that give each its unique way of perceiving, and moving in, the world.

Some of the motivation for this research is pure curiosity, which is not a uniquely human characteristic; some is from a desire to help relieve suffering from injury or disease, driven by empathy and compassion, which arguably is uniquely human; and some is done to improve the human condition in general, a goal that definitely is uniquely human. Some of the research is driven by desires that we share with all other animals, to reproduce healthy and fit offspring. It remains to be seen whether our desires will drive us to manipulate our chromosomes to the point where we will no longer be Homo sapiens, whether we will be trading up to silicon. Maybe we will be referred to in the future as Homo buttinski.