Alex Hutchinson

Endure


audio diary he uploaded to the Web throughout the trip, he described the sounds he had become so familiar with on his previous expeditions: “The squeak of the ski poles gliding into the snow, the thud of the sledge over each bump, and the swish of the skis sliding along … And then, when you stop, the unbelievable silence.”

      At first, A. V. Hill’s attempts to calculate the limits of human performance were met with bemusement. In 1924, he traveled to Philadelphia to give a lecture at the Franklin Institute on “The Mechanism of Muscle.” “At the end,” he later recalled, “I was asked, rather indignantly, by an elderly gentleman, what use I supposed all these investigations were which I had been describing.” Hill first tried to explain the practical benefits that might follow from studying athletes but soon decided that honesty was the best policy: “To tell you the truth,” he admitted, “we don’t do it because it is useful but because it’s amusing.” That was the headline in the newspaper the next day: “Scientist Does It Because It’s Amusing.”

      In reality, the practical and commercial value of Hill’s work was obvious right from the start. His VO2max studies were funded by Britain’s Industrial Fatigue Research Board, which also employed his two coauthors. What better way to squeeze the maximum productivity from workers than by calculating their physical limits and figuring out how to extend them? Other labs around the world soon began pursuing similar goals. The Harvard Fatigue Laboratory, for example, was established in 1927 to focus on “industrial hygiene,” with the aim of studying the various causes and manifestations of fatigue “to determine their interrelatedness and the effect upon work.” The Harvard lab went on to produce some of the most famous and groundbreaking studies of record-setting athletes, but its primary mission of enhancing workplace productivity was signaled by its location—in the basement of the Harvard Business School.

      Citing Hill’s research as his inspiration, the head of the Harvard lab, David Bruce Dill, figured that understanding what made top athletes unique would shed light on the more modest limits faced by everyone else. “Secret of Clarence DeMar’s Endurance Discovered in the Fatigue Laboratory,” the Harvard Crimson announced in 1930, reporting on a study in which two dozen volunteers had run on a treadmill for twenty minutes before having the chemical composition of their blood analyzed. By the end of the test, DeMar, a seven-time Boston Marathon champion, had produced almost no lactic acid—a substance that, according to Dill’s view at the time, “leaks out into the blood, producing or tending to produce exhaustion.” In later studies, Dill and his colleagues tested the effects of diet on blood sugar levels in Harvard football players before, during, and after games; and studied runners like Glenn Cunningham and Don Lash, the reigning world record holders at one mile and two miles, reporting their remarkable oxygen processing capacities in a paper titled “New Records in Human Power.”

      Are such insights about endurance on the track or the gridiron really applicable to endurance in the workplace? Dill and his colleagues certainly thought so. They drew an explicit link between the biochemical “steady state” of athletes like DeMar, who could run at an impressive clip for extended periods of time without obvious signs of fatigue, and the capacity of well-trained workers to put in long hours under stressful conditions without a decline in performance.

      At the time, labor experts were debating two conflicting views of fatigue in the workplace. As MIT historian Robin Scheffler recounts, efficiency gurus like Frederick Winslow Taylor argued that the only true limits on the productive power of workers were inefficiency and lack of will—the toddlers-on-a-plane kind of endurance. Labor reformers, meanwhile, insisted that the human body, like an engine, could produce only a certain amount of work before requiring a break (like, say, a weekend). The experimental results emerging from the Harvard Fatigue Lab offered a middle ground, acknowledging the physiological reality of fatigue but suggesting it could be avoided if workers stayed in “physicochemical” equilibrium—the equivalent of DeMar’s ability to run without accumulating excessive lactic acid.

      Dill tested these ideas in various extreme environments, studying oxygen-starved Chilean miners at 20,000 feet above sea level and laborers sweating in the jungle heat of the Panama Canal Zone. Most famously, he and his colleagues studied laborers working on the Hoover Dam, a Great Depression–era megaproject employing thousands of men in the Mojave Desert. During the first year of construction, in 1931, thirteen workers died of heat exhaustion. When Dill and his colleagues arrived the following year, they tested the workers before and after grueling eight-hour shifts in the heat, showing that their levels of sodium and other electrolytes were depleted—a telling departure from physicochemical equilibrium. The fix: one of Dill’s colleagues persuaded the company doctor to amend a sign in the dining hall that said THE SURGEON SAYS DRINK PLENTY OF WATER, adding AND PUT PLENTY OF SALT ON YOUR FOOD. No more men died of heat exhaustion during the subsequent four years of construction, and the widely publicized results helped enshrine the importance of salt in fighting heat and dehydration—even though, as Dill repeatedly insisted in later years, the biggest difference from 1931 to 1932 was moving the men’s living quarters from encampments on the sweltering canyon floor to air-conditioned dormitories on the plateau.

      If there was any remaining doubt about Hill’s vision of the “human machine,” the arrival of World War II in 1939 helped to erase it. As Allied soldiers, sailors, and airmen headed into battle around the world, scientists at Harvard and elsewhere studied the effects of heat, humidity, dehydration, starvation, altitude, and other stressors on their performance, and searched for practical ways of boosting endurance under these conditions. To assess subtle changes in physical capacity, researchers needed an objective measure of endurance—and Hill’s concept of VO2max fit the bill.

      The most notorious of these wartime studies, at the University of Minnesota’s Laboratory of Physiological Hygiene, involved thirty-six conscientious objectors—men who had refused on principle to serve in the armed forces but had volunteered instead for a grueling experiment. Led by Ancel Keys, the influential researcher who had developed the K-ration for soldiers and who went on to propose a link between dietary fat and heart disease, the Minnesota Starvation Study put the volunteers through six months of “semi-starvation,” eating on average 1,570 calories in two meals each day while working 15 hours and walking 22 miles per week.

      In previous VO2max studies, scientists had trusted that they could simply ask their subjects to run to exhaustion in order to produce maximal values. But with men who had been through the physical and psychological torment of months of starvation, “there is good reason for not trusting the subject’s willingness to push himself to the point at which a maximal oxygen intake is elicited,” Keys’s colleague Henry Longstreet Taylor drily noted. Taylor and two other scientists took on the task of developing a test protocol that “would eliminate both motivation and skill as limiting factors” in objectively assessing endurance. They settled on a treadmill test in which the grade got progressively steeper, with carefully controlled warm-up duration and room temperature. When subjects were tested and retested, even a year later, their results were remarkably stable: your VO2max was your VO2max, regardless of how you felt that day or whether you were giving your absolute best. Taylor’s description of this protocol, published in 1955, marked the real start of the VO2max era.

      By the 1960s, growing faith in the scientific measurement of endurance led to a subtle reversal: instead of testing great athletes to learn about their physiology, scientists were using physiological testing to predict who could be a great athlete. South African researcher Cyril Wyndham argued that “men must have certain minimum physiological requirements if they are to reach, say, an Olympic final.” Rather than sending South African runners all the way across the world only to come up short, he suggested, they should first be tested in the lab so that “conclusions can be drawn on the question of whether the Republic’s top athletes have sufficient ‘horse-power’ to compete with the world’s best.”

      In some ways, the man-as-machine view had now been pushed far beyond what Hill initially envisioned. “There is, of course, much more in athletics than sheer chemistry,” Hill had cheerfully acknowledged, noting the importance of “moral” factors—“those qualities of resolution and experience which enable one individual to ‘run himself out’ to a far greater degree of exhaustion than another.” But the urge to focus on the quantifiable at the