feels so effortless that it’s hard to appreciate the effort the brain exerts to construct it. To lift the lid a little on the process, I flew to Irvine, California, to see what happens when my visual system doesn’t receive the signals it expects.
Dr. Alyssa Brewer at the University of California, Irvine, is interested in understanding just how adaptable the brain is. To that end, she outfits participants with prism goggles that flip the left and right sides of the visual world – and she studies how their visual systems cope.
On a beautiful spring day, I strapped on the prism goggles. The world flipped – objects on the right now appeared on my left, and vice versa. When trying to figure out where Alyssa was standing, my visual system told me one thing, while my hearing told me another. My senses weren’t matching up. When I reached out to grab an object, the sight of my own hand didn’t match the position claimed by my muscles. Two minutes into wearing the goggles, I was sweating and nauseated.
Prism goggles flip the visual world, making it inordinately difficult to perform simple tasks, such as pouring a drink, grabbing an object, or getting through a doorway without bumping into the frame.
Although my eyes were functioning and taking in the world, the visual data stream wasn’t consistent with my other data streams. This spelled hard work for my brain. It was like I was learning to see again for the first time.
I knew that wearing the goggles wouldn’t stay that difficult forever. Another participant, Brian Barton, was also wearing prism goggles – and he had been wearing them for a full week. Brian didn’t seem to be on the brink of vomiting, as I was. To compare our levels of adaptation, I challenged him to a baking competition. The contest would require us to break eggs into a bowl, stir in cupcake mix, pour the batter into cupcake trays, and put the trays in the oven.
It was no contest: Brian’s cupcakes came out of the oven looking normal, while most of my batter ended up dried onto the counter or baked in smears across the baking tray. Brian could navigate his world without much trouble, while I had been rendered inept. I had to struggle consciously through every move.
Wearing the goggles allowed me to experience the normally hidden effort behind visual processing. Earlier that morning, just before putting on the goggles, my brain could exploit its years of experience with the world. But after a simple reversal of one sensory input, it couldn’t any longer.
To progress to Brian’s level of proficiency, I knew I would need to keep interacting with the world for many days: reaching out to grab objects, following the direction of sounds, attending to the positions of my limbs. With enough practice, my brain would get trained up by continual cross-referencing between the senses, just as Brian’s brain had been for seven days. With training, my neural networks would figure out how the various data streams entering the brain matched up with one another.
Brewer reports that after a few days of wearing the goggles, people develop an internal sense of a new left and an old left, a new right and an old right. After a week, they can move around normally, the way Brian could, and they lose track of which left and right were the old ones and which the new. Their spatial map of the world alters. By two weeks into the task, they can read and write well, and they walk and reach with the proficiency of someone without goggles. In that short time span, they master the flipped input.
The brain doesn’t really care about the details of the input; it simply cares about figuring out how to most efficiently move around in the world and get what it needs. All the hard work of dealing with the low-level signals is taken care of for you. If you ever get a chance to wear prism goggles, you should. It exposes how much effort the brain goes through to make vision seem effortless.
Synchronizing the senses
So we’ve seen that our perception requires the brain to compare different streams of sensory data against one another. But one thing makes this sort of comparison a real challenge: timing. The streams of sensory data – vision, hearing, touch, and so on – are each processed by the brain at different speeds.
Consider sprinters at a racetrack. They appear to get off the blocks the instant the gun fires, but it’s not actually instantaneous: watch them in slow motion and you’ll see a sizeable gap between the bang and the start of their movement – almost two tenths of a second. (In fact, if a runner pushes off within one tenth of a second of the gun, he’s disqualified – he’s “jumped the gun”, since no one can genuinely react that fast.) Athletes train to make this gap as small as possible, but their biology imposes fundamental limits: the brain has to register the sound, send signals to the motor cortex, and then down the spinal cord to the muscles of the body. In a sport where thousandths of a second can separate winning from losing, that response seems surprisingly slow.
Could the delay be shortened if we used, say, a flash of light instead of a pistol to start the racers? After all, light travels faster than sound – so wouldn’t that let them get off the blocks faster?
I gathered up some fellow sprinters to put this to the test. In the top photograph, we are triggered by a flash of light; in the bottom photo we’re triggered by the gun.
Sprinters can get off the blocks more quickly in response to a bang (bottom panel) than to a flash (top panel).
We responded more slowly to the light. At first this may seem counterintuitive, given the speed of light in the outside world. But to understand what’s happening, we need to look at the speed of information processing on the inside. Visual data goes through more complex processing than auditory data: it takes longer for signals carrying flash information to work their way through the visual system than for bang signals to work through the auditory system. We were able to respond to the light in 190 milliseconds, but to the bang in only 160 milliseconds. That’s why a pistol is used to start sprinters.
But here’s where it gets strange. We’ve just seen that the brain processes sounds more quickly than sights. And yet take a careful look at what happens when you clap your hands in front of you. Try it. Everything seems synchronized. How can that be, given that sound is processed more quickly? It means that your perception of reality is the end result of fancy editing tricks: the brain hides the difference in arrival times. How? What it serves up as reality is actually a delayed version: your brain collects up all the information from the senses before it decides upon a story of what happened.
These timing difficulties aren’t restricted to hearing and seeing: each type of sensory information takes a different amount of time to process. To complicate things further, there are time differences even within a single sense. For example, it takes longer for signals to reach your brain from your big toe than from your nose. But none of this is obvious to your perception: your brain collects up all the signals first, so that everything seems synchronized. The strange consequence is that you live in the past. By the time you think the moment occurs, it’s already long gone. The cost of synchronizing the incoming sensory information is that our conscious awareness lags behind the physical world. That’s the unbridgeable gap between an event occurring and your conscious experience of it.
When the senses are cut off, does the show stop?
Our experience of reality is the brain’s ultimate construction. Although it’s based on all the streams of data from our senses, it’s not dependent on them. How do we know? Because when you take it all away, your reality doesn’t stop. It just gets stranger.
On a sunny San Francisco day, I took a boat across the chilly waters to Alcatraz, the famous island prison. I was going to see a particular cell called the Hole. If you broke the rules in the outside world, you were sent to Alcatraz. If you broke the rules in Alcatraz, you were sent to the Hole.
I entered the Hole and closed the door behind me. The cell was about ten feet by ten. It was pitch black: not a photon of light leaked in from anywhere. Sounds