The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future (2016)

9

INTERACTING

Virtual reality (VR) is a fake world that feels absolutely authentic. You can experience a hint of VR when you watch a movie in 3-D on a jumbo IMAX screen in surround sound. At moments you’ll be fully immersed in a different world, which is what virtual reality aims for. But this movie experience is not full VR, because while your imagination travels to another place in a theater, your body doesn’t. It feels like you are in a chair. Indeed, in a theater you must remain sitting in the same spot looking straight ahead passively in order for the immersive magic to work.

A much more advanced VR experience might be like the world Neo confronts in the movie The Matrix. Even as Neo runs, leaps, and battles a hundred clones in a computerized world, it feels totally real to him. Maybe even hyperreal—realer than real. His vision, hearing, and touch are hijacked by the synthetic world so completely that he cannot detect its artificiality. An even more advanced mode of VR is the holodeck on Star Trek. There, holographic projections of objects and people are so real that, in the fiction, they are solid to the touch. A simulated environment that you can enter at will is a recurring science fiction dream that is long overdue.

Today’s virtual reality is in between the elemental feeling of a 3-D IMAX movie and the ultimate holodeck simulation. A VR experience in 2016 can involve a billionaire’s mansion in Malibu that you walk through, room by overstuffed room, feeling as if you are really there when you are actually standing a thousand miles away wearing a helmet in a real estate agent’s office. That is something I experienced recently. Or it might be a fantasy world full of prancing unicorns where you authentically feel you are flying, once you put on special glasses. Or it may be an alternate version of the office cubicle you are sitting in that includes floating touch screens and an avatar of a distant coworker speaking next to you. In each case, you have a very strong sense that you are physically present in this virtual world, in large part because you can do things—look around, freely move in any direction, move objects—that persuade you that you are “really there.”

Recently I’ve had the opportunity to immerse myself in many prototype VR worlds. The best of these achieve an unshakeable sense of presence. The usual goal when a storyteller increases realism is to get the audience to suspend disbelief. The goal for VR is not to suspend disbelief but to ratchet up belief—that you are somewhere else, and maybe even somebody else. Even if your intellectual mind can figure out you are really in a swivel chair, your embodied “I” will be convinced you are trudging through a swamp.

For the past decade, researchers inventing VR have settled on a standard demonstration of this overpowering presence. The visitor waiting for the demo stands in the center of an actual, nondescript waiting room. A pair of large dark goggles rests on a stool. The visitor dons the goggles and is immediately immersed in a virtual version of the same room she was standing in, with the same nondescript paneling and chairs. Not much is changed from her point of view. She can look around. The scene looks a little coarser through the goggles. But slowly the floor of the room begins to drop away, leaving the visitor standing on a plank that now floats over the receding floor 30 meters below. She is asked to walk out farther on the plank suspended high over a most realistic pit. The realism of the scene has been improved over the years so that by now the response of the visitor is very predictable. Either she cannot move her feet or she trembles as she inches forward, palms sweating.

When I was plunged into this scene myself, I reacted the same way. My mind reeled. My conscious mind kept whispering to me that I was in a dim room in the research labs of Stanford, but my primitive mind had hijacked my body. It was insisting that I was perched on a too narrow plank too high in the sky and that I must back off this plank immediately. Right now! My fear of heights kicked in. My knees began to shake. I was almost nauseated. Then I did something stupid. I decided to jump off the plank a little ways down onto a nearby ledge in the virtual world. But of course there was no “down,” so my real body dove toward the floor. As I fell, I was caught by two strong spotters standing by in the real room precisely for this purpose. My reaction was completely normal; almost everyone falls.

Totally believable virtual reality is just about here. But I have been wrong about VR before. In 1989 a friend of a friend invited me to his lab in Redwood City, California, to see some gear he had invented. The lab turned out to be a couple of rooms in an office complex that were missing most of their desks. The walls were covered by a gallery of neoprene bodysuits embroidered with wires, large gloves sporting electronic components, and rows of duct-taped swimming goggles. The guy I’d gone to see, Jaron Lanier, sported shoulder-length blond dreadlocks. I wasn’t sure where this was going, but Jaron promised me a new experience, something he called virtual reality.

A few minutes later Lanier handed me one black glove, a dozen wires snaking from the fingers across the room to a standard desktop PC. I put it on. Lanier then placed a set of black goggles suspended by a web of straps onto my head. A thick black cable ran down my back from the headgear to his computer. Once my eyes focused inside the goggles, I was in. I was inside a place bathed in a diffuse light blue. I could see a cartoon version of my glove in the exact place my real hand felt it was. The virtual glove moved in sync with my hand. It was now “my” glove, and I felt—in my body, not just my head—very strongly that I was not in an office. Lanier himself then climbed into his own creation. Using his own helmet and glove, he appeared in his own world as a girl avatar, since the beauty of his system was that you could design your avatar to look like anything you wanted. Two of us now inhabited this first mutual dream space. In 1989.

Lanier popularized the term “virtual reality,” but he was not the only person working on immersive simulations in the late 1980s. Several universities, a few startups, and the U.S. military had comparable prototypes, some with slightly different approaches for creating the phenomenon. I felt I had seen the future during my plunge into his microcosmos and wanted as many of my friends and fellow pundits as possible to experience what I had. With the help of the magazine I was then editing (Whole Earth Review), in the fall of 1990 we organized the first public demo of every VR rig then in existence. For 24 hours, from Saturday noon to Sunday noon, anyone who bought a ticket could stand in line to try out as many of the two dozen or so VR prototypes as they could. In the wee hours of the night I saw the psychedelic champion Tim Leary compare VR to LSD. The overwhelming impression spun by the buggy gear was total plausibility. These simulations were real. The views were coarse, the vision often stuttered, but the intended effect was inarguable: You went somewhere else. The next morning William Gibson, an up-and-coming science fiction writer who had stayed up all night testing cyberspace for the first time, was asked what he thought about these new portals to synthetic worlds. It was then that he first uttered his now famous remark: “The future is already here; it’s just not evenly distributed.”

VR was so uneven, however, that it faded. The next steps never happened. All of us, myself included, thought VR technology would be ubiquitous within five years or so—at least by the year 2000. But the breakthrough didn’t come until 2015, 25 years after Jaron Lanier’s pioneering work. The particular problem with VR was that close enough was not close enough. For stays in VR longer than 10 minutes, the coarseness and stuttering motion caused nausea. The cost of gear powerful, fast, and comfortable enough to overcome nausea ran to many tens of thousands of dollars. So VR remained out of reach for consumers, and also out of reach for the many startup developers who needed to jump-start the creation of VR content in order to spark the purchase of the gear.

Twenty-five years later a most unlikely savior appeared: phones! The runaway global success of the smartphone drove the quality of their tiny hi-res screens way up and their cost way down. The eye screens for a VR goggle are approximately the size and resolution of a smartphone screen, so today VR headsets are basically built out of cheap phone screen technology. At the same time, motion sensors in phones followed the same path of increasing performance and decreasing cost, until these motion sensors could be borrowed by VR displays to track head, hand, and body positions for very little. In fact, the first consumer VR models from Samsung and Google use a regular smartphone slipped into an empty head-mounted display unit. Put a Samsung Gear VR on and you look into a phone; your movements are tracked by the phone, so the phone sends you into an alternative world.
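
To make the borrowed-phone-hardware point concrete, here is a minimal sketch of how a headset can turn a phone's raw gyroscope and accelerometer readings into a head-orientation estimate with a standard complementary filter. It is an illustration only, not any vendor's actual tracking code; the sample readings and blend constant are invented.

```python
import math

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into pitch/roll estimates.

    samples: list of (gyro_pitch_rate, gyro_roll_rate, ax, ay, az), with gyro
    rates in radians/second and accelerations in m/s^2.
    Returns a list of (pitch, roll) tuples in radians.
    """
    pitch, roll = 0.0, 0.0
    estimates = []
    for gp, gr, ax, ay, az in samples:
        # Integrate the gyroscope: fast and smooth, but it drifts over time.
        pitch_gyro = pitch + gp * dt
        roll_gyro = roll + gr * dt
        # Use gravity seen by the accelerometer as a drift-free (but noisy) reference.
        pitch_acc = math.atan2(-ax, math.sqrt(ay * ay + az * az))
        roll_acc = math.atan2(ay, az)
        # Blend the two: trust the gyro in the short term, the accelerometer in the long term.
        pitch = alpha * pitch_gyro + (1 - alpha) * pitch_acc
        roll = alpha * roll_gyro + (1 - alpha) * roll_acc
        estimates.append((pitch, roll))
    return estimates

# Fabricated readings: the gyro reports a slow forward tilt while the phone sits nearly level.
fake_samples = [(0.05, 0.0, 0.0, 0.0, 9.81) for _ in range(200)]
print(complementary_filter(fake_samples)[-1])
```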

It’s not difficult to see how VR will soon triumph in movies of the future, particularly visceral genres like horror, erotica, or thrillers—where your gut is also caught up in the story. It’s also easy to imagine VR occupying a prime role in video games. No doubt hundreds of millions of avid players will eagerly don a suit, gloves, and helmet and then teleport to a faraway place to hide, shoot, kill, and explore, either solo or in tight bands of friends. Of course, the major funder of consumer VR development today is the game industry. But VR is much bigger than this.

✵ ✵ ✵

Two benefits propel VR’s current rapid progress: presence and interaction. “Presence” is what sells VR. All the historical trends in cinema technology bend toward increased realism: from sound to color to 3-D to faster, smoother frame rates. Those trends are now being accelerated inside VR. Week by week the resolution increases, the frame rate jumps, the contrast deepens, the color space widens, and the high-fidelity sound sharpens, all of it improving faster than it does on big screens. That is, VR is getting more “realistic” faster than movies are. Within a decade, when you look into a state-of-the-art virtual reality display, your eye will be fooled into thinking you are looking through a real window into a real world. It’ll be bright—no flicker, no visible pixels. You will feel absolutely certain that it is real. Except it isn’t.

The second generation of VR technology relies on a new “light field” projection. (The first commercial light field units are the HoloLens, made by Microsoft, and Magic Leap, funded by Google.) In this design the VR is projected onto a semi-transparent visor, much like a hologram. This permits the projected “reality” to overlay the reality you see normally without goggles. You could be standing in your kitchen and see the robot R2-D2 right before you in perfect resolution. You could walk around it, get closer, even move it to inspect it, and it would retain its authenticity. This overlay is called augmented reality (AR). Because the artificial part is added to your ordinary view of the world, your eyes focus deeper than they do on a screen held near your eyes, so this technological illusion is packed with presence. You almost swear it is really there.

Microsoft’s vision for light field AR is to build the office of the future. Instead of workers sitting in a cubicle in front of a wall of monitor screens, they sit in an open office wearing HoloLenses and see a huge wall of virtual screens around them. Or they click to be teleported to a 3-D conference room with a dozen coworkers who live in different cities. Or they click to a training room where an instructor will walk them through a first-aid class, guiding their avatars through the proper procedures. “See this? Now you do it.” In most ways, the AR class will be superior to a real-world class.

The reason cinematic realism is advancing faster in VR than in cinema itself is a neat trick performed by head-mounted displays. To fill a gigantic IMAX cinema screen with the proper resolution and brightness to convince you it is a mere window into reality requires a massive amount of computation and luminosity. To fill a 60-inch flat screen with the same window-clear realism is a smaller challenge, but still daunting. It is much easier to get a tiny visor in front of your face up to that quality. Because a head-mounted display follows your gaze no matter where you look—it is always in front of your eyes—you see full realism all the time. Turn your gaze anywhere and the realism follows, because the tech is physically attached to your face; the entire 360-degree virtual world appears at the same ultimate resolution as whatever is directly in front of your eyes. In effect you get a virtual IMAX inside the VR. And since what is in front of your eyes is just a small surface area, it is much easier and cheaper to magnify small improvements in quality. This tiny little area can invoke a huge disruptive presence.

But while “presence” will sell it, VR’s enduring benefits spring from its interactivity. It is unclear how comfortable, or uncomfortable, we’ll be with the encumbrances of VR gear. Even the streamlined Google Glass (which I also tried), a very mild AR display not much bigger than sunglasses, seemed too much trouble for most people in its first version. Presence will draw users in, but it is the interactivity quotient of VR that will keep it going. Interacting in all degrees will spread out to the rest of the technological world.

✵ ✵ ✵

About 10 years ago, Second Life was a fashionable destination on the internet. Members of Second Life created full-body avatars in a simulated world that mirrored “first life.” A lot of their time was spent remaking their avatars into beautiful people with glamorous clothes and socializing with other members’ incredibly beautiful avatars. Members devoted lifetimes to building super beautiful homes and slick bars and discos. The environment and avatars were created in full 3-D, but due to technological constraints, members could only view the world in flat 2-D on their desktop screens. (Second Life is rebooting itself as a 3-D world in 2016, code-named Project Sansar.) Avatars communicated via text balloons floating over their heads, typed by owners. It was like walking around in a comic book. This clunky interface held back any deep sense of presence. The main attraction of Second Life was the completely open space for constructing a quasi-3-D environment. Your avatar walked onto an empty plain, like the blank field at a Burning Man festival, and could begin constructing the coolest and most outrageous buildings, rooms, or wilderness places. Physics didn’t matter, materials were free, anything was possible. But it took many hours to master the arcane 3-D tools. In 2009 Mojang, a game company in Sweden, launched Minecraft, a similar construction world in quasi-3-D, but one built from idiot-easy blocks stacked like giant Legos. No learning was necessary. Many would-be builders migrated to Minecraft.

Second Life’s success had risen on the ability of kindred creative spirits to socialize, but when the social mojo moved to the mobile world, no phones had enough computing power to handle Second Life’s sophisticated 3-D, so the biggest audiences moved on. Even more headed to Minecraft, whose crude low-res pixelation allowed it to run on phones. Millions of members are still loyal to Second Life, and today at any hour about 50,000 avatars are simultaneously roaming the imaginary 3-D worlds built by users. Half of them are there for virtual sex, which relies more on the social component than on realism. A few years ago the founder of Second Life, Phil Rosedale, started another VR-ish company trying to harness the social opportunities of an open simulated world and to invent a more convincing VR.

Recently I visited the offices of Rosedale’s startup, High Fidelity. As the name implies, the aim of the project is to raise the realism in virtual worlds occupied by thousands—maybe tens of thousands—of avatars at once: to create a realistic, thriving virtual city. Jaron Lanier’s pioneering VR permitted two occupants at once, and the thing I (and everyone else who visited) noticed was that other people in VR were far more interesting than other things. Experimenting again in 2015, I found the best demos of synthetic worlds are the ones that trigger a deep sense of presence not with the most pixels per inch, but with the most engagement of other people. To that end, High Fidelity is exploiting a neat trick. Taking advantage of the tracking abilities of cheap sensors, it can mirror the direction of your gaze in both worlds. Not just where you turn your head, but where you turn your eyes. Nano-small cameras buried inside the headset look back at your real eyes and transfer your exact gaze onto your avatar. That means that if someone is talking to your avatar, their eyes are staring at your eyes, and yours at theirs. Even if you move, requiring them to rotate their head, their eyes continue to lock onto yours. This eye contact is immensely magnetic. It stirs intimacy and radiates a felt presence.
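
A rough sketch of the geometry behind that eye contact: given each avatar's tracked eye position and gaze direction, the system only has to check whether each gaze ray lands on the other person. The coordinates, tolerance angle, and function names below are illustrative assumptions, not High Fidelity's actual code.

```python
import math

def normalize(v):
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v)

def looking_at(eye_pos, gaze_dir, target_pos, tolerance_deg=5.0):
    """True if a gaze ray from eye_pos along gaze_dir points at target_pos
    to within tolerance_deg degrees."""
    to_target = normalize(tuple(t - e for t, e in zip(target_pos, eye_pos)))
    gaze = normalize(gaze_dir)
    cos_angle = sum(a * b for a, b in zip(gaze, to_target))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle)))) <= tolerance_deg

def mutual_eye_contact(a_pos, a_gaze, b_pos, b_gaze):
    """Eye contact occurs when each avatar's tracked gaze lands on the other."""
    return looking_at(a_pos, a_gaze, b_pos) and looking_at(b_pos, b_gaze, a_pos)

# Two avatars two meters apart, each gazing at the other (made-up coordinates).
print(mutual_eye_contact((0, 1.7, 0), (0, 0, 1), (0, 1.7, 2), (0, 0, -1)))  # True
```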

Nicholas Negroponte, head of MIT’s Media Lab, once quipped in the 1990s that the urinal in the men’s restroom was smarter than his computer because it knew he was there and would flush when he left, while his computer had no idea he was sitting in front of it all day. That is still kind of true today. Laptops and even tablets and phones are largely ignorant of their owners’ use of them. That is starting to change with cheap eye tracking mechanisms like the one in the VR headsets. The newest Samsung Galaxy phone contains eye tracking technology so the phone knows precisely where on the screen you are looking. Gaze tracking can be used in many ways. It can speed up screen navigation since you often look at something before your finger or mouse moves to confirm it. Also, by measuring the duration of thousands of people’s gazes on a screen, software can generate maps that rank areas of greater or lesser attention. Website owners can then discern what part of their front page people actually look at and what parts are glanced over, and use that information to improve the design. An app maker can use gaze patterns of visitors to find which parts of an app’s interface demand too much attention, suggesting a difficulty that needs to be fixed. Mounted in a dashboard in a car, the same gaze technology can detect when drivers are drowsy or distracted.
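
As an illustration of the attention-map idea, a few lines of code can bucket recorded gaze fixations into a coarse grid of dwell times, the raw material for the maps of "greater or lesser attention" described above. The screen size, grid, and sample fixations are made up.

```python
from collections import Counter

def attention_map(gaze_samples, screen_w=1920, screen_h=1080, cols=8, rows=6):
    """Bucket (x, y, dwell_seconds) gaze fixations into a coarse grid and
    return total dwell time per cell, so a designer can see which regions
    of a page actually get looked at."""
    dwell = Counter()
    for x, y, seconds in gaze_samples:
        col = min(cols - 1, int(x / screen_w * cols))
        row = min(rows - 1, int(y / screen_h * rows))
        dwell[(row, col)] += seconds
    return dwell

# Fabricated fixations: most attention lands on the upper-left of the page.
samples = [(120, 90, 0.4), (150, 110, 0.6), (1700, 950, 0.2)]
print(attention_map(samples).most_common(2))
```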

The tiny camera eyes that now stare back at us from any screen can be trained with additional skills. First the eyes were trained to detect a generic face, used in digital cameras to assist focusing. Then they were taught to detect particular faces—say, yours—as identity passwords. Your laptop looks into your face, and deeper into your irises, to be sure it is you before it opens its home page. Recently researchers at MIT have taught the eyes in our machines to detect human emotions. As we watch the screen, the screen is watching us, where we look, and how we react. Rosalind Picard and Rana el Kaliouby at the MIT Media Lab have developed software so attuned to subtle human emotions that they claim it can detect if someone is depressed. It can discern about two dozen different emotions. I had a chance to try a beta version of this “affective technology,” as Picard calls it, on Picard’s own laptop. The tiny eye in the lid of her laptop peering at me could correctly determine if I was perplexed or engaged with a difficult text. It could tell if I was distracted while viewing a long video. Since this perception happens in real time, the smart software can adapt what I am viewing accordingly. Say I am reading a book and my frown shows I’ve stumbled on a certain word; the text could expand a definition. Or if it realizes I am rereading the same passage, it could supply an annotation for that passage. Similarly, if it knows I am bored by a scene in a video, it could jump ahead or speed up the action.
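
The adaptive-reading loop can be sketched as a simple lookup from an inferred viewer state to a response. The classifier itself is assumed here (this is not Picard's actual software), and the signal and context labels are invented to mirror the examples above.

```python
def respond_to_reader(signal, context):
    """Map an inferred reader/viewer state to an adaptation.

    `signal` would come from a hypothetical gaze-and-expression classifier
    (e.g. "perplexed", "rereading", "bored"); `context` is what is on screen.
    """
    rules = {
        ("perplexed", "reading"): "expand a definition of the word under the gaze",
        ("rereading", "reading"): "show an annotation for the passage",
        ("bored", "video"): "skip ahead or speed up the scene",
    }
    return rules.get((signal, context), "no change")

print(respond_to_reader("perplexed", "reading"))
print(respond_to_reader("bored", "video"))
```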

We are equipping our devices with senses—eyes, ears, motion—so that we can interact with them. They will not only know we are there, they will know who is there and whether that person is in a good mood. Of course, marketers would love to get hold of our quantified emotions, but this knowledge will serve us directly as well, enabling our devices to respond to us “with sensitivity” as we hope a good friend might.

In the 1990s I had a conversation with the composer Brian Eno about the rapid changes in music technology, particularly its sprint from analog to digital. Eno made his reputation by inventing what we might now call electronic music, so it was a surprise to hear him dismiss a lot of digital instruments. His primary disappointment was with the instruments’ atrophied interfaces—little knobs, sliders, or tiny buttons mounted on square black boxes. He had to interact with them by moving only his fingers. By comparison, the sensual strings, table-size keyboards, or meaty drumheads of traditional analog instruments offered more nuanced bodily interactions with the music. Eno told me, “The trouble with computers is that there is not enough Africa in them.” By that he meant that interacting with computers using only buttons was like dancing with only your fingertips, instead of your full body, as you would in Africa.

Embedded microphones, cameras, and accelerometers inject some Africa into devices. They provide embodiment in order to hear us, see us, feel us. Swoosh your hand to scroll. Wave your arms with a Wii. Shake or tilt a tablet. Let us embrace our feet, arms, torso, head, as well as our fingertips. Is there a way to use our whole bodies to overthrow the tyranny of the keyboard?

One answer first premiered in the 2002 movie Minority Report. The director, Steven Spielberg, was eager to convey a plausible scenario for the year 2050, and so he convened a group of technologists and futurists to brainstorm the features of everyday life in 50 years. I was part of that invited group, and our job was to describe a future bedroom, or what music would sound like, and especially how you would work on a computer in 2050. There was general consensus that we’d use our whole bodies and all our senses to communicate with our machines. We’d add Africa by standing instead of sitting. We think different on our feet. Maybe we’d add some Italy by talking to machines with our hands. One of our group, John Underkoffler, from the MIT Media Lab, was way ahead in this scenario and was developing a working prototype using hand motions to control data visualizations. Underkoffler’s system was woven into the film. The Tom Cruise character stands, raises his hands outfitted with a VR-like glove, and shuffles blocks of police surveillance data, as if conducting music. He mutters voice instructions as he dances with the data. Six years later, the Iron Man movies picked up this theme. Tony Stark, the protagonist, also uses his arms to wield virtual 3-D displays of data projected by computers, catching them like a beach ball, rotating bundles of information as if they were objects.

It’s very cinematic, but real interfaces in the future are far more likely to use hands closer to the body. Holding your arms out in front of you for more than a minute is an aerobic exercise. For extended use, interaction will more closely resemble sign language. A future office worker is not going to be pecking at a keyboard—not even a fancy glowing holographic keyboard—but will be talking to a device with a newly evolved set of hand gestures, similar to the ones we already use: pinching our fingers together to shrink something, spreading them apart to enlarge it, or holding up two L-shaped pointing hands to frame and select something. Phones are very close to perfecting speech recognition today (including the ability to translate in real time), so voice will be a huge part of interacting with devices. If you’d like a vivid picture of someone interacting with a portable device in the year 2050, imagine them using their eyes to visually “select” from a set of rapidly flickering options on the screen, confirming with lazy audible grunts, and speedily fluttering their hands in their laps or at their waist. A person mumbling to herself while her hands dance in front of her will be the signal in the future that she is working on her computer.

Not only computers. All devices need to interact. If a thing does not interact, it will be considered broken. Over the past few years I’ve been collecting stories of what it is like to grow up in the digital age. As an example, one of my friends had a young daughter under five years old. Like many other families these days, they didn’t have a TV, just computing screens. On a visit to another family who happened to have a TV, his daughter gravitated to the large screen. She went up to the TV, hunted around below it, and then looked behind it. “Where’s the mouse?” she asked. There had to be a way to interact with it. Another acquaintance’s son had access to a computer starting at the age of two. Once, when she and her son were shopping in a grocery store, she paused to decipher the label on a product. “Just click on it,” her son suggested. Of course cereal boxes should be interactive! Another young friend worked at a theme park. Once, a little girl took her picture, and after she did, she told the park worker, “But it’s not a real camera—it doesn’t have the picture on the back.” Another friend had a barely speaking toddler take over his iPad. She could paint and easily handle complicated tasks on apps almost before she could walk. One day her dad printed out a high-resolution image on photo paper and left it on the coffee table. He noticed his toddler came up and tried to unpinch the photo to make it larger. She tried unpinching it a few times, without success, and looked at him, perplexed. “Daddy, broken.” Yes, if something is not interactive, it is broken.

The dumbest objects we can imagine today can be vastly improved by outfitting them with sensors and making them interactive. We had an old standard thermostat running the furnace in our home. During a remodel we upgraded to a Nest smart thermostat, designed by a team of ex-Apple execs and recently bought by Google. The Nest is aware of our presence. It senses when we are home, awake or asleep, or on vacation. Its brain, connected to the cloud, anticipates our routines, and over time it builds up a pattern of our lives so it can warm up the house (or cool it down) a few minutes before we arrive home from work and turn the heat down after we leave; on vacations and weekends it adapts to our different schedule. If it senses we are unexpectedly home, it adjusts itself. All this watching and interacting optimizes our fuel bill.
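
A toy version of that routine-learning behavior: record, hour by hour, whether anyone was home and what temperature they chose, then use those averages as the schedule and fall back to an energy-saving setback otherwise. This is a sketch of the general idea, not Nest's actual algorithm; the temperatures and the occupancy rule are assumptions.

```python
from collections import defaultdict

class LearningThermostat:
    """A toy model of a presence-aware thermostat.

    It records which hours the house is occupied and what temperature the
    occupants chose, then uses those averages as the schedule, falling back
    to an energy-saving setback when nobody is typically home.
    """

    def __init__(self, away_temp_c=16.0):
        self.away_temp_c = away_temp_c
        self.history = defaultdict(list)  # hour -> list of (occupied, chosen_temp)

    def observe(self, hour, occupied, chosen_temp_c):
        self.history[hour].append((occupied, chosen_temp_c))

    def setpoint(self, hour, occupied_now):
        records = self.history.get(hour, [])
        usually_home = records and sum(o for o, _ in records) / len(records) > 0.5
        if occupied_now or usually_home:
            temps = [t for o, t in records if o] or [20.0]
            return sum(temps) / len(temps)  # warm to the learned comfort level
        return self.away_temp_c  # save fuel when nobody is expected

# A fabricated week: evenings at home at 21 degrees C, workdays away.
t = LearningThermostat()
for day in range(7):
    t.observe(hour=9, occupied=False, chosen_temp_c=16.0)
    t.observe(hour=19, occupied=True, chosen_temp_c=21.0)
print(t.setpoint(hour=19, occupied_now=False))  # pre-warms for the usual arrival
print(t.setpoint(hour=9, occupied_now=False))   # stays set back
```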

One consequence of increased interaction between us and our artifacts is a celebration of an artifact’s embodiment. The more interactive it is, the more beautiful it should sound and feel. Since we might spend hours holding it, craftsmanship matters. Apple was the first to recognize that this appetite applies to interactive goods. The gold trim on the Apple Watch is there for the feel of it. We end up caressing an iPad, stroking its magic surface, gazing into it for hours, days, weeks. The satin touch of a device’s surface, the liquidity of its flickers, the presence or lack of its warmth, the quality of its build, the temperature of its glow will come to mean a great deal to us.

What could be more intimate and interactive than wearing something that responds to us? Computers have been on a steady march toward us. At first computers were housed in distant air-conditioned basements, then they moved to nearby small rooms, then they crept closer to us perched on our desks, then they hopped onto our laps, and recently they snuck into our pockets. The next obvious step for computers is to lie against our skin. We call those wearables.

We can wear special spectacles that reveal an augmented reality. Wearing such a transparent computer (an early prototype was Google Glass) empowers us to see the invisible bits that overlay the physical world. We can inspect a cereal box in the grocery store and, as the young boy suggested, simply click it within our wearable to read its meta-information. Apple’s watch is a wearable computer, part health monitor, but mostly a handy portal to the cloud. The super-mega-processing power of the entire internet and World Wide Web is funneled through that little square on your wrist. But wearables in particular mean smart clothes. Of course, itsy-bitsy chips can be woven into a shirt so that the shirt can alert a smart washing machine to its preferred washing cycles, but wearables are more about the wearer. Experimental smart fabrics such as those from Project Jacquard (funded by Google) have conductive threads and thin flexible sensors woven into them. They will be sewn into a shirt you interact with. You use the fingers of one hand to swipe the sleeve of your other arm the way you’d swipe an iPad, and for the same reason: to bring up something on a screen or in your spectacles. A smart shirt like the Squid, a prototype from Northeastern University, can feel—in fact measure—your posture, record it in a quantified way, and then actuate “muscles” in the shirt that contract precisely to hold you in the proper posture, much as a coach would. David Eagleman, a neuroscientist at Baylor College of Medicine, in Texas, invented a supersmart wearable vest that translates one sense into another. The Sensory Substitution Vest takes audio from tiny microphones in the vest and translates those sound waves into a grid of vibrations that can be felt by a deaf person wearing it. Over a matter of months, the deaf person’s brain reconfigures itself to “hear” the vest’s vibrations as sound, so by wearing this interacting cloth, the deaf can hear.
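
The vest's core translation, sound into a spatial pattern of vibration, can be sketched in a few lines: split each short audio frame into frequency bands and drive one motor per band in proportion to that band's energy. The sample rate, band layout, and motor count below are illustrative, not Eagleman's actual design.

```python
import math

def sound_frame_to_motor_levels(samples, sample_rate=8000, motors=16):
    """Convert one short frame of audio into intensity levels for a grid of
    vibration motors: each motor gets the energy of one frequency band.

    A crude single-frequency DFT projection per band keeps the sketch dependency-free.
    """
    n = len(samples)
    levels = []
    for m in range(motors):
        # Center frequency for this motor's band, spread across typical speech energy.
        freq = 100 + m * (3000 - 100) / (motors - 1)
        re = sum(s * math.cos(2 * math.pi * freq * i / sample_rate) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * freq * i / sample_rate) for i, s in enumerate(samples))
        levels.append(math.sqrt(re * re + im * im) / n)
    peak = max(levels) or 1.0
    return [round(level / peak, 2) for level in levels]  # normalize to 0..1 drive strength

# A fabricated 440 Hz tone: the motor nearest 440 Hz should buzz the hardest.
frame = [math.sin(2 * math.pi * 440 * i / 8000) for i in range(256)]
print(sound_frame_to_motor_levels(frame))
```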

You may have seen this coming, but the only way to get closer than wearables over our skin is to go under our skin. Jack into our heads. Directly connect the computer to the brain. Surgical brain implants really do work for the blind, the deaf, and the paralyzed, enabling the handicapped to interact with technology using only their minds. One experimental brain jack allowed a quadriplegic woman to use her mind to control a robotic arm to pick up a coffee bottle and bring it to her lips so she could drink from it. But these severely invasive procedures have not been tried to enhance a healthy person yet. Brain controllers that are noninvasive have already been built for ordinary work and play, and they do work. I tried several lightweight brain-machine interfaces (BMIs) and I was able to control a personal computer simply by thinking about it. The apparatus generally consists of a hat of sensors, akin to a minimal bicycle helmet, with a long cable to the PC. You place it on your head and its many sensor pads sit on your scalp. The pads pick up brain waves, and with some biofeedback training you can generate signals at will. These signals can be programmed to perform operations such as “Open program,” “Move mouse,” and “Select this.” You can learn to “type.” It’s still crude, but the technology is improving every year.
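
In spirit, that biofeedback training maps a brain-wave feature the user learns to modulate onto a command. Here is a minimal sketch under that assumption: calibrate a resting level for each frequency band, then emit a command whenever a band's power clearly exceeds its threshold. The band names, the 1.5x rule, and the command mapping are invented for illustration, not how any particular consumer headset works.

```python
def calibrate(baseline_windows):
    """Estimate a per-band resting level and threshold from calibration data.

    baseline_windows: list of dicts like {"alpha": 4.1, "beta": 2.3}, one per
    window of resting EEG. Thresholds here are simply 1.5x the resting mean,
    a made-up rule standing in for real biofeedback training.
    """
    bands = baseline_windows[0].keys()
    thresholds = {}
    for band in bands:
        mean = sum(w[band] for w in baseline_windows) / len(baseline_windows)
        thresholds[band] = mean * 1.5
    return thresholds

def decode_command(window, thresholds):
    """Translate one window of band powers into a UI command, if any band the
    user has learned to modulate rises clearly above its threshold."""
    mapping = {"alpha": "select this", "beta": "move mouse", "theta": "open program"}
    for band, command in mapping.items():
        if window.get(band, 0.0) > thresholds.get(band, float("inf")):
            return command
    return None

rest = [{"alpha": 4.0, "beta": 2.0, "theta": 3.0} for _ in range(20)]
thr = calibrate(rest)
print(decode_command({"alpha": 7.5, "beta": 2.1, "theta": 2.9}, thr))  # "select this"
```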

In the coming decades we’ll keep expanding what we interact with. The expansion follows three thrusts.

1. More senses

We will keep adding new sensors and senses to the things we make. Of course, everything will get eyes (vision is almost free), and hearing, but one by one we can add superhuman senses such as GPS location sensing, heat detection, X-ray vision, diverse molecule sensitivity, or smell. These permit our creations to respond to us, to interact with us, and to adapt themselves to our uses. Interactivity, by definition, is two way, so this sensing elevates our interactions with technology.

2. More intimacy

The zone of interaction will continue to march closer to us. Technology will get closer to us than a watch and pocket phone. Interacting will be more intimate. It will always be on, everywhere. Intimate technology is a wide-open frontier. We think technology has saturated our private space, but we will look back in 20 years and realize it was still far away in 2016.

3. More immersion

Maximum interaction demands that we leap into the technology itself. That’s what VR allows us to do. Computation so close that we are inside it. From within a technologically created world, we interact with each other in new ways (virtual reality) or interact with the physical world in a new way (augmented reality). Technology becomes a second skin.

Recently I joined some drone hobbyists who meet in a nearby park on Sundays to race their small quadcopters. With flags and foam arches they map out a course over the grass for their drones to race around. The only way to fly drones at this speed is to get inside them. The hobbyists mount tiny eyes at the front of their drones and wear VR goggles to peer through them for what is called a first-person view (FPV). They are now the drone. As a visitor I don an extra set of goggles that piggybacks on their camera signals, so I find myself sitting in the pilot’s seat, seeing what each pilot sees. The drones dart in, out, and around the course obstacles, chasing each other’s tails, bumping into other drones, in scenes reminiscent of a Star Wars pod race. One young guy who’s been flying radio control model airplanes since he was a boy said that being able to immerse himself in the drone and fly from inside was the most sensual experience of his life. He said there was almost nothing more pleasurable than actually, really free flying. There was no virtuality. The flying experience was real.

✵ ✵ ✵

The convergence of maximum interaction plus maximum presence is found these days in free-range video games. For the past several years I’ve been watching my teenage son play console video games. I am not twitchy enough myself to survive more than four minutes in a game’s alterworld, but I find I can spend an hour just watching the big screen as my son encounters dangers, shoots at bad guys, or explores unknown territories and dark buildings. Like a lot of kids his age, he’s played the classic shooter games like Call of Duty, Halo, and Uncharted 2, which have scripted scenes of engagement. However, my favorite game as a voyeur is the now dated game Red Dead Redemption. This is set in the vast empty country of the cowboy West. Its virtual world is so huge that players spend a lot of time on their horses exploring the canyons and settlements, searching for clues, and wandering the land on vague errands. I’m happy to ride alongside as we pass through frontier towns in pursuit of his quests. It’s a movie you can roam in. The game’s open-ended architecture is similar to the very popular Grand Theft Auto, but it’s a lot less violent. Neither of us knows what will happen or how things will play out.

There are no prohibitions about where you can go in this virtual place. Want to ride to the river? Fine. Want to chase a train down the tracks? Fine. How about riding up alongside the train and then hopping on to ride inside it? OK! Or bushwhack across sagebrush wilderness from one town to the next? You can ride away from a woman yelling for help or—your choice—stop to help her. Each act has consequences. She may need help or she may be bait for a bandit. One reviewer, speaking of the free will at work in the game, said: “I’m sincerely and pleasantly surprised that I can shoot my own horse in the back of the head while I’m riding him, and even skin him afterward.” The freedom to move in any direction in a seamless virtual world rendered with the same degree of fidelity as a Hollywood blockbuster is intoxicating.

It’s all interactive details. Dawns in the territory of Red Dead Redemption are glorious, as the horizon glows and heats up. Weather forces itself on the land, which you sense. The sandy yellow soil darkens with appropriate wet splotches as the rain blows down in bursts. Mist sometimes drifts in to cover a town with realistic veiling, obscuring shadowy figures. The pink tint of each mesa fades with the clock. Textures pile up. The scorched wood, the dry brush, the shaggy bark—every pebble or twig—is rendered in exquisite minutiae at all scales, casting perfect overlapping shadows that make a little painting. These nonessential finishes are surprisingly satisfying. The wholesale extravagance is compelling.

The game lives in a big world. A typical player might take around 15 hours to zoom through it once, while a power player intent on achieving all the game rewards would need 40 to 50 hours to complete it. At every step you can choose any direction to take the next step, and the next, and the next, and yet the grass under your feet is perfectly formed and every blade detailed, as if its authors anticipated you would tread on this microscopic bit of the map. At any of a billion spots you can inspect the details closely and be rewarded, but most of this beauty will never be seen. This warm bath of freely given abundance triggers a strong conviction that this is “natural,” that this world has always been, and that it is good. The overall feeling inside one of these immaculately detailed, stunningly interactive worlds stretching to the horizons is of being immersed in completeness. Your logic knows this can’t be true, but as on the plank over the pit, the rest of you believes it. This realism is just waiting for the full immersion of VR interaction. At the moment, the spatial richness of these game worlds must be viewed in 2-D.

Cheap, abundant VR will be an experience factory. We’ll use it to visit environments too dangerous to risk in the flesh, such as war zones, deep seas, or volcanoes. Or we’ll use it for experiences we can’t easily get to as humans—to visit the inside of a stomach, the surface of a comet. Or to swap genders, or become a lobster. Or to cheaply experience something expensive, like a flyby of the Himalayas. But experiences are generally not sustainable. We enjoy travel experiences in part because we are only visiting briefly. VR, at least in the beginning, is likely to be an experience we dip in and out of. Its presence is so strong we may want it only in small, measured doses. But we have no limit on the kind of interacting we crave.

These massive video games are pioneering new ways of interacting. The total interactive freedom suggested by unlimited horizons is illusory in these kinds of games. Players, or the audience, are assigned tasks to accomplish and given motivations to stay till the end. Actions in the game are channeled, funnel-like, toward the next bottleneck of the overall narrative, so the game eventually reveals a destiny, but your choices as a player still matter in what kind of points you accumulate. There’s a tilt in the overall world, so no matter how many explorations you make, you tend to drift over time toward an inevitable incident. When the balance between an ordained narrative and freewill interaction is tweaked just right, it creates the perception of great “game play”—a sweet feeling of being part of something large that is moving forward (the game’s narrative) while you still get to steer (the game’s play).

The games’ designers tweak the balance, but the invisible force that nudges players in certain directions is an artificial intelligence. Most of the action in open-ended games like Red Dead Redemption, especially the interactions of supporting characters, is already animated by AI. When you halt at a random homestead and chat with the cowhand, his responses are plausible because in his heart beats an AI. AI is seeping into VR and AR in other ways as well. It will be used to “see” and map the physical world you are really standing in so that it can transport you to a synthetic world. That includes mapping your physical body’s motion. An AI can watch you as you sit, stand, and move around in, say, your office without the need of special tracking equipment, then mirror that in the virtual world. An AI can read your route through the synthetic environment and calculate the interventions needed to herd you in certain directions, as a minor god might do.

Implicit in VR is the fact that everything—without exception—that occurs in VR is tracked. The virtual world is defined as a world under total surveillance, since nothing happens in VR without tracking it first. That makes it easy to gameify behavior—awarding points, upping levels, scoring powers, and so on—to keep it fun. Meanwhile, the physical world is now so decked out with sensors and interfaces that it has become a parallel tracking world. Think of our sensor-filled real world as a nonvirtual virtual reality that we spend most of our day in. As we are tracked by our surroundings, and indeed as we track our quantified selves, we can use the same interaction techniques that we use in VR. We’ll communicate with our appliances and vehicles using the same VR gestures. We can use the same gameifications to create incentives, to nudge participants in preferred directions in real life. You might go through your day racking up points for brushing your teeth properly, walking 10,000 steps, or driving safely, since these will all be tracked. Instead of getting A-pluses on daily quizzes, you level up. You get points for picking up litter or recycling. Ordinary life, not just virtual worlds, can be gameified.
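
Gameifying tracked behavior reduces to a small bookkeeping engine: tracked events earn points, and accumulated points translate into levels. A toy sketch, with invented event names and point values:

```python
# A toy points-and-levels engine for gameifying tracked daily behavior.
POINTS = {
    "brushed_teeth_properly": 5,
    "walked_10000_steps": 20,
    "drove_safely_today": 10,
    "picked_up_litter": 3,
    "recycled": 2,
}

def score_day(events):
    """Total the points for a day's tracked events."""
    return sum(POINTS.get(e, 0) for e in events)

def level_for(total_points, points_per_level=100):
    """Level up for every `points_per_level` lifetime points."""
    return total_points // points_per_level + 1

day = ["brushed_teeth_properly", "walked_10000_steps", "recycled"]
lifetime = 260 + score_day(day)
print(score_day(day), level_for(lifetime))  # 27 points, level 3
```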

The first technological platform to disrupt a society within the lifespan of a human individual was personal computers. Mobile phones were the second platform, and they revolutionized everything in only a few decades. The next disrupting platform—now arriving—is VR. Here is how a day plugged into virtual and augmented realities may unfold in the very near future.

I am in VR, but I don’t need a headset. The surprising thing that few people expected way back in 2016 is that you don’t need to wear goggles, or even a pair of glasses, in order to get a basic “good enough” augmented reality. A 3-D image projects directly into my eyes from tiny light sources that peek from the corners of my rooms, all without the need of something in front of my face. The quality is good enough for most applications, of which there are tens of thousands.

The very first app I got was the ID overlay. It recognizes people’s faces and then displays their name, association, and connection to me, if any. Now that I am used to this, I can’t roam outside without it. My friends say some quasi-legal ID apps provide a lot more immediate information about strangers, but you need to be wearing gear that keeps what you see private—otherwise you’ll get tagged for rude behavior.

I wear a pair of AR glasses outside to get a sort of X-ray view of my world. I use it first to find good connectivity. The warmer the colors in the world, the closer I am to heavy-duty bandwidth. With AR on I can summon earlier historical views layered on top of whatever place I am looking at, a nifty trick I used extensively in Rome. There, a fully 3-D life-size intact Colosseum appeared synchronized over the ruins as I clambered through them. It’s an unforgettable experience. It also shows me comments virtually “nailed” to different spots in the city left by other visitors that are viewable only from that very place. I left a few notes in spots for others to discover as well. The app reveals all the underground service pipes and cables beneath the street, which I find nerdly fascinating. One of the weirder apps I found is one that will float the dollar value—in big red numbers—over everything you look at. Almost any subject I care about has an overlay app that displays it as an apparition. A fair amount of public art is now 3-D mirages. The plaza in our town square hosts an elaborate rotating 3-D projection that is refreshed twice a year, like a museum art show. Most of the buildings downtown are reskinned with alternative facades inside AR, each facade commissioned by an architect or artist. The city looks different each time I walk through it.

I wore VR goggles all through high school. These lightweight frames give a much more vivid image than glassless AR. In class I’d watch all kinds of simulations, especially how-to rehearsals. I preferred the “ghost” mode in maker classes, like cooking or electrical hacking. That is how I learned how to weld. In AR I slipped my hands into the position of the teacher’s ghostly virtual guide hands in order to correctly grip the virtual welding rod held against the virtual steel tube. I tried to move my hands to follow the ghost hands. My virtual welds were only as good as my actions. For sports I wore a full helmet display. I rehearsed my moves with 360-degree motion on a real field, shadowing a model shadow body. I also spend a lot of time practicing plays in VR in a room. A couple of sports, like broadswording, we played entirely inside VR.

At my “office” I wear an AR visor on my forehead. The visor is a curved band about a hand width wide that is held a few inches away from my eyes for extra comfort during daylong use. The powerful visor throws up virtual screens all around me. I have about 12 virtual screens of all sizes and large data sets I can wrestle with my hands. The visor provides enough resolution and speed that most of my day I am communicating with virtual colleagues. But I see them in a real room, so I am fully present in reality as well. Their photorealistic 3-D avatars capture their life-size likenesses accurately. My coworkers and I usually sit at a virtual table in a real room while we work independently, but we can walk around each other’s avatars. We converse and overhear each other just as if we were in the same room. It is so convenient to pop up an avatar that even if my real coworker is on the other side of the real room, we’ll just meet in AR rather than walk across the gap.

When I want to get really serious about augmented reality, I’ll wear an AR roaming system. I put on special contact lenses that give me full 360-degree views and impeccable fictional apparitions. With the contacts on, it is very difficult to visually ascertain if what I see is fake—except that one part of my brain is aware that a seven-meter-tall Godzilla stalking the street is absolute fantasy. I wear a ring on one finger of each hand to track my gestures. Tiny lenses in my shirt and headband track my body orientation. And GPS in my pocket device tracks my location to within a few millimeters. I can thus wander through my hometown as if it were an alternative world or a game platform. When I rush through the real streets, ordinary objects and spaces are transformed into extraordinary objects and spaces. A real newspaper rack on the real sidewalk becomes an elaborate 22nd-century antigravity transponder in an AR game.

The most intense VR experience of all requires a full-body VR rig. It’s a lot of trouble, so I suit up only occasionally. I have an amateur rig at home that includes a standing harness to prevent me from falling while I flail about. It gives me a full cardio workout while chasing dragons. In fact, VR harnesses have replaced exercise equipment in most basements. But once or twice a month I join some friends at the local realie theater to get access to state-of-the-art VR technology. Wearing my own silk underwear suit for hygienic purposes, I slip into an inflatable exoskeleton that closes around my limbs. This generates amazing haptic feedback. When I grasp a virtual object with my virtual hand, I feel its weight—the pressure against my hand—because the inflatable is squeezing my hand just the right amount. If I bump my shin against a rock in the virtual world, the sheath on my leg will “bump” my shin just so, making a totally believable sensation. A reclining seat holds my torso, giving me the option of doing genuinely felt jumps, flips, and dashes. And the accuracy of the super-hi-res helmet, with binaural sound and even real-time smells, creates a totally convincing presence. Within two minutes of entering, I usually forget where my real body is; I am elsewhere. The best part of a realie theater is that, with zero latency, 250 other people are sharing my world with equal verisimilitude. With them I can do real things in a fantasy world.

✵ ✵ ✵

VR technology offers one more benefit to users. The strong presence generated by VR amplifies two paradoxically opposing traits. It enhances realness, so we might regard a fake world as real—the goal of many games and movies. And it encourages unrealness, fakery to the nth degree. For instance, it is easy to tweak the physics in VR to, say, remove gravity or friction, or to model fictional environments simulating alien planets—say, an underwater civilization. We can also alter our avatars to become other genders, other colors, or other species. For 25 years Jaron Lanier has talked about his desire to use VR to turn himself into a walking lobster. The software would swap his arms for claws, his ears for antennae, and his feet for a tail, not just visually, but kinetically. Recently at the Stanford VR lab Lanier’s dream came true. VR creation software is now agile and robust enough to quickly model such personal fantasies. Using the Stanford VR rig, I too got to modify my avatar. In the experiment, once I was in VR, my arms would become my feet, and my feet my arms. That is, to kick with my virtual foot I had to punch with my real arm. To test how well this inversion worked, I had to burst floating virtual balloons with my arms/feet and feet/arms. The first seconds were awkward and embarrassing. But amazingly, within a few minutes I could kick with my arms and punch with my feet. Jeremy Bailenson, the Stanford professor who devised this experiment and uses VR as the ultimate sociological lab, discovered that it usually took a person only four minutes to completely rewire the feet/arm circuits in their brain. Our identities are far more fluid than we think.

That’s becoming a problem. It’s very difficult to determine how real someone online is. Outward appearances are easily manipulated. Someone may present himself as a lobster, but in reality he is a dreadlocked computer engineer. Formerly you could check their friends to ascertain realness. If a person online did not have any friends on social networks, they probably weren’t who they claimed to be. But now hackers/criminals/rebels can create puppet accounts, with imaginary friends and imaginary friends of friends, working for bogus companies with bogus Wikipedia entries. The most valuable asset that Facebook owns is not its software platform but the fact that it controls the “true name” identities of a billion people, which are verified by references from the true identities of friends and colleagues. That monopoly on a persistent identity is the real engine of Facebook’s remarkable success. And it is fragile. The normal tests we used to prove who we are in digital worlds, such as passwords and captchas, no longer work very well. A captcha is a visual puzzle that used to be easy for humans to solve but hard for computers; now humans have trouble solving them, while machines find them easy. Passwords are easily hacked or stolen. So what is a better solution than passwords? You, yourself.

Your body is your password. Your digital identity is you. All the tools that VR is exploiting, all the ways it needs to capture your movements, to follow your eyes, to decipher your emotions, to encapsulate you as much as possible so you can be transported into another realm and believe you were there—all these interactions will be unique to you, and therefore proof of you. One of the recurring surprises in the field of biometrics—the science behind the sensors that track your body—is that almost everything that we can measure has a personally unique fingerprint. Your heartbeat is unique. Your gait when you walk is unique. Your typing rhythm on a keyboard is distinctive. What words you use most frequently. How you sit. Your blinks. Of course, your voice. When these are combined, they fuse into a metapattern that almost can’t be faked. Indeed, that’s how we identify people in the real world. If I were to meet you and was asked if we had met before, my subconscious mind would churn through a spectrum of subtle attributes—voice, face, body, style, mannerisms, bearing—before aggregating them into a recognition or not. In the technological world, we’ll come to inspect a person with nearly the same spectrum of metrics. The system will check out a candidate’s attributes. Do the pulse, breathing, heart rate, voice, face, iris, expressions, and dozens of other imperceptible biological signatures match who (or what) they claim? Our interactions will become our password.
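
One simple way such a metapattern could be assembled is to score each biometric signal separately for similarity to your stored profile and then fuse the scores into a single weighted decision. The sketch below is illustrative only; the signal list, weights, and threshold are assumptions, not any real system's parameters.

```python
# Fusing several biometric similarity scores into one authentication decision.
WEIGHTS = {
    "heartbeat": 0.15,
    "gait": 0.15,
    "typing_rhythm": 0.20,
    "voice": 0.25,
    "face": 0.25,
}

def fused_match_score(scores):
    """Weighted average of per-signal similarity scores in [0, 1].

    Signals that were not observed simply drop out, and the remaining
    weights are renormalized.
    """
    observed = {k: v for k, v in scores.items() if k in WEIGHTS}
    total_weight = sum(WEIGHTS[k] for k in observed)
    if not total_weight:
        return 0.0
    return sum(WEIGHTS[k] * v for k, v in observed.items()) / total_weight

def is_probably_you(scores, threshold=0.85):
    return fused_match_score(scores) >= threshold

sample = {"heartbeat": 0.9, "typing_rhythm": 0.95, "voice": 0.88, "face": 0.92}
print(round(fused_match_score(sample), 3), is_probably_you(sample))
```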

Degrees of interaction are rising, and will continue to increase. Yet simple noninteractive things, such as a wooden-handled hammer, will endure. Still, anything that can interact, including a smart hammer, will become more valuable in our interactive society. But high interactivity comes at a cost. Interacting demands skills, coordination, experience, and education. Embedded into our technology and cultivated in ourselves. All the more so because we have only begun to invent novel ways to interact. The future of technology resides, in large part, in the discovery of new interactions. In the coming 30 years, anything that is not intensely interactive will be considered broken.