David Eagleman

Livewired


with the grid of solenoids on the back, blind people who use the BrainPort begin to feel that scenes have “openness” and “depth” and that objects are out there. In other words, it’s more than a cognitive translation of what’s happening on the tongue: it grows into a direct perceptual experience. Their experience is not “I feel a pattern on my tongue that codes for my spouse passing by,” but instead a direct sense that their spouse is moving across the living room. If you’re a reader with normal vision, keep in mind this is precisely how your eyes work: electrochemical signals in your retinas are perceived as a friend beckoning you, a Ferrari zooming past on the road, a scarlet kite against an azure sky. Even though all the activity is at the surface of your sensory detectors, you perceive everything as out there. It simply doesn’t matter whether the detector is the eye or the tongue. As the blind participant Roger Behm describes his experience of the BrainPort:

      Last year, when I was up here for the first time, we were doing stuff on the table, in the kitchen. And I got kind of a little emotional, because it’s thirty-three years since I’ve seen before. And I could reach out and I see the different-sized balls. I mean I visually see them. I could reach out and grab them—not grope or feel for them—pick them up, and see the cup, and raise my hand and drop it right in the cup.26

      As you can presumably guess by now, the tactile input can be almost anywhere on the body. Researchers in Japan have developed a variant of the tactile grid—the Forehead Retina System—in which a video stream is converted to small points of touch on the forehead.27 Why the forehead? Why not? It’s not being used for much else.

image

       The Forehead Retina System.

      Another version places a grid of vibrotactile actuators on the abdomen, using vibration intensity to represent the distance to the nearest surfaces.28
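
      To make that mapping concrete, here is a minimal sketch, in Python, of how a depth image might be reduced to per-actuator vibration intensities. The grid size, the range, the direction of the mapping (nearer surfaces assumed to vibrate more strongly), and the function name are illustrative assumptions, not the actual device’s parameters.

```python
import numpy as np

def depth_to_vibration(depth_map, grid_shape=(8, 8), max_range_m=3.0):
    """Sketch of an abdominal vibrotactile mapping: nearer surfaces are
    assumed to produce stronger vibration. Grid size and range are
    illustrative, not the real device's settings.

    depth_map: 2-D array of distances in meters (at least grid-sized).
    Returns per-actuator intensities in [0, 1].
    """
    rows, cols = grid_shape
    h, w = depth_map.shape
    intensities = np.zeros(grid_shape)
    for r in range(rows):
        for c in range(cols):
            # Each actuator summarizes one patch of the depth image.
            patch = depth_map[r * h // rows:(r + 1) * h // rows,
                              c * w // cols:(c + 1) * w // cols]
            nearest = patch.min()
            # Closer surface -> higher intensity, clipped to the usable range.
            intensities[r, c] = np.clip(1.0 - nearest / max_range_m, 0.0, 1.0)
    return intensities
```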

      What these all have in common is that the brain can figure out what to make of visual input coming in through channels normally thought of as touch. But it turns out that touch isn’t the only strategy that works.

      In my laboratory some years ago, Don Vaughn walked with his iPhone held out in front of him. His eyes were closed, and yet he was not crashing into things. The sounds streaming through his earbuds were busily converting the visual world into a soundscape. He was learning to see the room with his ears. He gently moved the phone around in front of him like a third eye, like a miniature walking cane, turning it this way and that to pull in the information he needed. We were testing whether a blind person could pick up visual information through the ears. Although you might not have heard of this approach to blindness before, the idea isn’t new: it began more than half a century earlier.

      In 1966, a professor named Leslie Kay became obsessed with the beauty of bat echolocation. He knew that some humans could learn to echolocate, but it wasn’t easy. So Kay designed a bulky pair of glasses to help the blind community take advantage of the idea.29

      The glasses emitted an ultrasonic sound into the environment. With its short wavelengths, ultrasound can reveal information about small objects when it bounces back. Electronics on the glasses captured the returning reflections and converted them into sounds humans could hear. The note in your ear indicated the distance of the object: high pitches coded for something far away, low pitches for something nearby. The volume of a signal told you about the size of the object: loud meant the object was large; soft told you it was small. The clarity of the signal was used to represent texture: a smooth object became a pure tone; a rough texture sounded like a note corrupted with noise. Users learned to perform object avoidance pretty well; however, because of the low resolution, Kay and his colleagues concluded that the invention served more as a supplement to a guide dog or cane than as a replacement.
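
      As a rough illustration of that encoding (not Kay’s actual circuitry, and with made-up parameter values), a sketch like the following maps an echo’s distance, size, and texture onto pitch, loudness, and noisiness:

```python
import numpy as np

def echo_to_tone(distance_m, size, roughness, duration=0.3, sample_rate=22050):
    """Sketch of the sonic-glasses encoding described in the text.

    distance_m: distance to the object in meters
    size:       relative object size in [0, 1]
    roughness:  surface texture in [0, 1] (0 = smooth, 1 = rough)

    Mapping: far objects -> high pitch, large objects -> loud,
    rough textures -> tone mixed with noise. The ranges and linear
    mappings are illustrative assumptions, not Kay's values.
    """
    t = np.arange(int(duration * sample_rate)) / sample_rate

    # Distance -> pitch: nearby ~200 Hz, ~5 m away ~4000 Hz (assumed range).
    frac = np.clip(distance_m / 5.0, 0.0, 1.0)
    freq = 200.0 + frac * (4000.0 - 200.0)

    tone = np.sin(2 * np.pi * freq * t)          # pure tone = smooth surface
    noise = np.random.uniform(-1, 1, t.size)     # noise = rough surface
    signal = (1.0 - roughness) * tone + roughness * noise

    return np.clip(size, 0.0, 1.0) * signal      # size -> loudness
```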

image

       Professor Kay’s sonic glasses shown on the right. (The other glasses are merely thick, not sonic.)

      Although it was only moderately useful for adults, there remained the question of how well a baby’s brain might learn to interpret the signals, given that young brains are especially plastic. In 1974, in California, the psychologist T. G. R. Bower used a modified version of Kay’s glasses to test whether the idea could work. His participant was a sixteen-week-old baby, blind from birth.30 On the first day, Bower took an object and moved it slowly toward and away from the infant’s nose. By the fourth time he moved the object, he reports, the baby’s eyes converged (both pointed toward the nose), as happens when something approaches the face. When Bower moved the object away, the baby’s eyes diverged. After a few more cycles of this, the baby put up its hands as the object drew near. When objects were moved left and right in front of the baby, Bower reports that the baby tracked them with its head and tried to swipe at them. In his write-up of the results, Bower relates several other behaviors:

      The baby was facing [his talking mother] and wearing the device. He slowly turned his head to remove her from the sound field, then slowly turned back to bring her in again. This behavior was repeated several times to the accompaniment of immense smiles from the baby. All three observers had the impression that he was playing a kind of peek-a-boo with his mother, and deriving immense pleasure from it.

      He goes on to report remarkable results over the next several months:

      The baby’s development after these initial adventures remained more or less on a par with that of a sighted baby. Using the sonic guide the baby seemed able to identify a favorite toy without touching it. He began two-handed reaches around 6 months of age. By 8 months the baby would search for an object that had been hidden behind another object. . . . None of these behavior patterns is normally seen in congenitally blind babies.

      You may wonder why you haven’t heard of these devices before. Just as we saw earlier, the technology was bulky and heavy—not the kind of thing you could reasonably grow up using—and the resolution was fairly low. Further, adults using the ultrasonic glasses generally met with less success than children did31—an issue we’ll return to in chapter 9. So while the concept of sensory substitution took root, it had to wait for the right combination of factors to thrive.

image

      In the early 1980s, a Dutch physicist named Peter Meijer picked up the baton of thinking about the ears as a means to transmit visual information. Instead of using echolocation, he wondered if he could take a video feed and convert it into sound.

      He had seen Bach-y-Rita’s conversion of a video feed into touch, but he suspected that the ears might have a greater capacity to soak in information. The downside of going for the ears was that the conversion from video to sound was going to be less intuitive. In Bach-y-Rita’s dental chair, the shape of a circle, face, or person could be pressed directly against the skin. But how does one convert hundreds of pixels of video into sound?

      By 1991, Meijer had developed a version on a desktop computer, and by 1999 it was portable, worn as glasses with a mounted camera and a computer clipped to the belt. He called his system the vOICe (where “OIC” stands for “Oh, I See”).32 The algorithm manipulates sound along three dimensions: the height of an object is represented by the frequency of the sound, its horizontal position is represented by time via a left-to-right panning of the stereo output (imagine sound sweeping across the ears from left to right, the way you scan a scene with your eyes), and the brightness of an object is represented by volume. Visual information could be captured for a grayscale image of about sixty by sixty pixels.33
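
      To see how such a mapping might work in practice, here is a minimal sketch of a vOICe-style conversion in Python; the frequency range, scan duration, and function name are illustrative assumptions rather than Meijer’s actual settings:

```python
import numpy as np

def image_to_soundscape(img, duration=1.05, sample_rate=22050,
                        f_min=500.0, f_max=5000.0):
    """Sketch of a vOICe-style image-to-sound mapping.

    img: 2-D grayscale array (rows x cols), values in [0, 1],
         row 0 = top of the image.
    Returns stereo samples of shape (n_samples, 2).

    Mapping (as described in the text):
      - vertical position   -> frequency (higher in the image -> higher pitch)
      - horizontal position -> time, panned left to right in stereo
      - pixel brightness    -> loudness
    All numeric parameters are illustrative assumptions, not the
    actual vOICe settings.
    """
    rows, cols = img.shape
    samples_per_col = int(duration * sample_rate / cols)
    t = np.arange(samples_per_col) / sample_rate

    # One frequency per image row, high pitch at the top.
    freqs = np.linspace(f_max, f_min, rows)

    left, right = [], []
    for c in range(cols):                      # scan columns left to right
        column = img[:, c]
        # Sum of sinusoids: each bright pixel contributes its row's tone.
        tone = (column[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        pan = c / max(cols - 1, 1)             # 0 = far left, 1 = far right
        left.append((1.0 - pan) * tone)
        right.append(pan * tone)

    stereo = np.stack([np.concatenate(left), np.concatenate(right)], axis=1)
    peak = np.abs(stereo).max()
    return stereo / peak if peak > 0 else stereo
```

      Feeding it a sixty-by-sixty grayscale frame yields roughly a second of stereo audio in which a bright patch near the top left is heard as a high tone at the start of the sweep, loudest in the left ear.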

      Try to imagine the experience of using these glasses. At first, everything sounds like a cacophony. As one moves around the environment, pitches are buzzing and whining in an alien and useless manner. After a while, one gets a sense of how to use the sounds to navigate around. At this stage it is a cognitive exercise: one is laboriously translating the pitches into something