      There are many other types of receptors in the skin, including stretch, itch, and temperature, and a child can end up missing some but not others. This collectively falls under the term “anaphia,” the inability to feel touch.

      When we look at this constellation of disorders, it becomes clear that our peripheral detectors unpack by dint of specific genetic programs. A minor malfunction in the genes can halt the program, and then the brain doesn’t receive that particular data stream.


      The all-purpose cortex idea suggests how new sensory skills can be added during evolution: with a mutated peripheral device, a new data stream makes its way into some swath of the brain, and the neural processing machinery gets to work. Thus, new skills require only the development of new sensory devices.

      And that’s why we can look across the animal kingdom and find all kinds of strange peripheral devices, each of which is crafted by millions of years of evolution. If you were a snake, your sequence of DNA would fabricate heat pits that pick up infrared information. If you were a black ghost knifefish, your genetic letters would unpack electrosensors that pick up on perturbations in the electrical field. If you were a bloodhound, your code would write instructions for an enormous snout crammed with smell receptors. If you were a mantis shrimp, your instructions would manufacture eyes with sixteen types of photoreceptors. The star-nosed mole has what seem like twenty-two fingers on its nose, and with these it feels around and constructs a 3-D model of its tunnel systems. Many birds, cows, and insects have magnetoreception, with which they orient to the magnetic field of the planet.

      To accommodate such varied peripherals, does the brain have to be redesigned each time? I suggest not. In evolutionary time, random mutations introduce strange new sensors, and the recipient brains simply figure out how to exploit them. Once the principles of brain operation have been established, nature can simply worry about designing new sensors.

      This perspective allows a lesson to come into focus: the devices we come to the table with—eyes, noses, ears, tongues, fingertips—are not the only collection of instruments we could have had. These are simply what we’ve inherited from a lengthy and complex road of evolution.

      But that particular collection of sensors may not be what we have to stick with.

      After all, the brain’s ability to wrap itself around different kinds of incoming information implies the bizarre prediction that one might be able to get one sensory channel to carry another’s information. For example, what if you took a data stream from a video camera and converted it into touch on your skin? Would the brain eventually be able to interpret the visual world simply by feeling it?

      Welcome to the stranger-than-fiction world of sensory substitution.

      The idea that one can feed data into the brain via the wrong channels may sound hypothetical and bizarre. But the first paper demonstrating this was published in the journal Nature more than half a century ago.

      The story begins in 1958, when a physician named Paul Bach-y-Rita received terrible news: his father, a sixty-five-year-old professor, had just suffered a major stroke. He was wheelchair-bound and could barely speak or move. Paul and his brother George, a medical student at the University of Mexico, searched for ways to help their father. And together they pioneered an idiosyncratic, one-on-one rehabilitation program.

[Image: Sensory substitution: feed information into the brain via unusual pathways.]

      As Paul described it, “It was tough love. [George would] throw something on the floor and say ‘Dad, go get it.’ ”10 Or they would have their father try to sweep the porch, even as the neighbors looked on in dismay. But for their father, the struggle was rewarding. As Paul phrased his father’s view, “This useless man was doing something.”

      Stroke victims frequently recover only partially—and often not at all—so the brothers tried not to buy into false hope. They knew that when brain tissue is killed in a stroke, it never comes back.

      But their father’s recovery proceeded unexpectedly well. So well, in fact, that their father returned to his professorship and died much later in life (the victim of a heart attack while hiking in Colombia at nine thousand feet).

      Paul was deeply impressed at the extent of his father’s recovery, and the experience marked a major turning point in his life. Paul realized that the brain could retrain itself and that even when parts of the brain were forever gone, other parts could take over their function. Paul departed a professorship at Smith-Kettlewell in San Francisco to begin a residency in rehabilitation medicine at Santa Clara Valley Medical Center. He wanted to study people like his father. He wanted to figure out what it took for the brain to retrain.

      By the late 1960s, Paul Bach-y-Rita was pursuing a scheme that most of his colleagues assumed to be foolish. He sat a blind volunteer in a reconfigured dental chair in his laboratory. Inset into the back of the chair was a twenty-by-twenty grid of four hundred Teflon tips, which could be extended and retracted by mechanical solenoids. Over the blind man’s head a camera was mounted on a tripod. The video stream of the camera was converted into a poking of the tips against the volunteer’s back.
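      To make the conversion concrete, here is a minimal sketch of the kind of transformation the chair performed. It is an illustration only, not Bach-y-Rita’s actual implementation (his device was built from electromechanical hardware, not software): a grayscale camera frame is averaged down to a twenty-by-twenty grid, and each cell is compared against an assumed brightness threshold to decide whether its Teflon tip extends or retracts.

# A minimal sketch, assuming a digital grayscale frame with values
# in 0..1; the function name and threshold are inventions for this
# illustration, not part of the original apparatus.
import numpy as np

GRID = 20  # the chair used a twenty-by-twenty array of Teflon tips

def frame_to_tactile(frame, threshold=0.5):
    """Map a 2-D grayscale frame onto a GRID x GRID boolean array:
    True = extend the tip, False = retract it."""
    h, w = frame.shape
    bh, bw = h // GRID, w // GRID
    # Average each block of pixels down to one "tactile pixel".
    blocks = frame[:bh * GRID, :bw * GRID].reshape(GRID, bh, GRID, bw)
    return blocks.mean(axis=(1, 3)) > threshold

# Usage: a synthetic 200x200 frame containing a bright vertical bar.
frame = np.zeros((200, 200))
frame[:, 90:110] = 1.0
pokes = frame_to_tactile(frame)
print(pokes.sum(), "of", GRID * GRID, "tips extended")  # 40 of 400

      The vertical bar comes through as a column of extended tips: a coarse pattern, but exactly the kind the volunteers learned to read.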

      Objects were passed in front of the camera while the blind participant in the chair paid careful attention to the feelings in his back. Over days of training, he got better at identifying the objects by their feel, in the same way a person might draw a shape or letter with a finger on another person’s back and ask them to identify it. The experience wasn’t exactly like vision, but it was a start.

      What Bach-y-Rita found astonished the field: the blind subjects could learn to distinguish horizontal from vertical from diagonal lines. More advanced users could learn to distinguish simple objects and even faces—simply by the poking sensations on their back. He published his findings in the journal Nature, under the surprising title “Vision Substitution by Tactile Image Projection.” It was the beginning of a new era—that of sensory substitution.11 Bach-y-Rita summarized his findings simply: “The brain is able to use information coming from the skin as if it were coming from the eyes.”

[Image: A video feed is translated into touch on the back.]

      The technique improved drastically when Bach-y-Rita and his collaborators made a single, simple change: instead of mounting the camera to the chair, they allowed the blind user to point it himself, using his own volition to control where the “eye” looked.12 Why? Because sensory input is best learned when one can interact with the world. Letting users control the camera closed the loop between muscle output and sensory input.13 Perception can be understood not as passive but instead as a way to actively explore the environment, matching a particular action to a specific change in what returns to the brain. It doesn’t matter to the brain how that loop gets established—whether by moving the extraocular muscles that move the eye or using arm muscles to tilt a camera. However it happens, the brain works to figure out how the output maps to the input.
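      As a toy illustration of that loop, consider the sketch below. It is an invention for this page, not anything from the original study: a one-dimensional “scene,” a camera that sees a twenty-sample window of it, and pan commands issued by the “user.” Each motor command produces a lawful shift in the returned image, and recovering that shift from the before-and-after views is the kind of output-to-input regularity the brain can learn to exploit.

# Toy sketch of the closed sensorimotor loop; all names and the
# 1-D "scene" are illustrative assumptions.
import numpy as np

scene = np.zeros(100)
scene[45:48] = 1.0  # one bright spot in a 1-D world
WINDOW = 20         # the "camera" sees a 20-sample slice

def sense(pos):
    """The slice of the scene the camera currently points at."""
    return scene[pos:pos + WINDOW]

def measured_shift(before, after):
    """Estimate the image displacement by finding the lag that
    best realigns the two views."""
    lags = list(range(-10, 11))
    scores = [float(np.dot(before, np.roll(after, k))) for k in lags]
    return lags[int(np.argmax(scores))]

pos = 35
for command in (+5, -8, +3):      # volitional pan commands
    before = sense(pos)
    pos += command                # motor output moves the "eye"
    after = sense(pos)
    # The lag that realigns the views equals the command just issued:
    # a fixed relation between action and sensory consequence.
    print(f"pan {command:+d} -> measured shift {measured_shift(before, after):+d}")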

      For the users, the subjective experience was that visual objects were located “out there,” not on the skin of the back.14 In other words, it was something like vision. Even though the sight of your friend’s face at the coffee shop impinges on your photoreceptors, you don’t perceive that the signal is at your eyes. You perceive that he’s out there, waving at you from a distance. And so it went for the users of the modified dental chair.

      While Bach-y-Rita’s device was the first to hit the public eye, it was not actually the first attempt at sensory substitution. On the other side of the world at the end of the 1890s, a Polish ophthalmologist named Kazimierz Noiszewski developed the Elektroftalm (from the Greek for “electricity” + “eye”) for blind people. A photocell was placed on the forehead