use had been unknown merely a decade earlier.
Perhaps even more significant than these miracle drugs, shifts in public health and hygiene also drastically altered the national physiognomy of illness. Typhoid fever[36], a contagion whose deadly swirl could decimate entire districts in weeks, melted away as the putrid water supplies of several cities were cleansed by massive municipal efforts. Even tuberculosis[37], the infamous “white plague” of the nineteenth century, was vanishing, its incidence plummeting by more than half between 1910 and 1940, largely due to better sanitation and public hygiene efforts. The life expectancy of Americans[38] rose from forty-seven to sixty-eight in half a century, a greater leap in longevity than had been achieved over several previous centuries.
The sweeping victories of postwar medicine illustrated the potent and transformative capacity of science and technology in American life. Hospitals proliferated[39]—between 1945 and 1960, nearly one thousand new hospitals were launched nationwide; between 1935 and 1952, the number of patients admitted more than doubled from 7 million to 17 million per year. And with the rise in medical care came the concomitant expectation of medical cure. As one student observed,[40] “When a doctor has to tell a patient that there is no specific remedy for his condition, [the patient] is apt to feel affronted, or to wonder whether the doctor is keeping abreast of the times.”
In new and sanitized suburban towns, a young generation thus dreamed of cures—of a death-free, disease-free existence. Lulled by the idea of the durability[41] of life, they threw themselves into consuming durables: boat-size Studebakers, rayon leisure suits, televisions, radios, vacation homes, golf clubs, barbecue grills, washing machines. In Levittown, a sprawling suburban settlement built in a potato field on Long Island—a symbolic utopia—“illness” now ranked third[42] in a list of “worries,” falling behind “finances” and “child-rearing.” In fact, rearing children was becoming a national preoccupation at an unprecedented level. Fertility rose steadily[43]—by 1957, a baby was being born every seven seconds in America. The “affluent society,”[44] as the economist John Kenneth Galbraith described it, also imagined itself as eternally young, with an accompanying guarantee of eternal health—the invincible society.
But of all diseases, cancer had refused to fall into step in this march of progress. If a tumor was strictly local (i.e., confined to a single organ or site so that it could be removed by a surgeon), the cancer stood a chance of being cured. Extirpations, as these procedures came to be called, were a legacy of the dramatic advances of nineteenth-century surgery. A solitary malignant lump in the breast, say, could be removed via a radical mastectomy pioneered by the great surgeon William Halsted at Johns Hopkins in the 1890s. With the discovery of X-rays in the late 1890s, radiation could also be used to kill tumor cells at local sites.
But scientifically, cancer still remained a black box, a mysterious entity that was best cut away en bloc rather than treated by some deeper medical insight. To cure cancer (if it could be cured at all), doctors had only two strategies: excising the tumor surgically or incinerating it with radiation—a choice between the hot ray and the cold knife.
In May 1937,[45] almost exactly a decade before Farber began his experiments with chemicals, Fortune magazine published what it called a “panoramic survey” of cancer medicine. The report was far from comforting: “The startling fact is that no new principle of treatment, whether for cure or prevention, has been introduced. . . . The methods of treatment have become more efficient and more humane. Crude surgery without anesthesia or asepsis has been replaced by modern painless surgery with its exquisite technical refinement. Biting caustics that ate into the flesh of past generations of cancer patients have been obsolesced by radiation with X-ray and radium. . . . But the fact remains that the cancer ‘cure’ still includes only two principles—the removal and destruction of diseased tissue [the former by surgery; the latter by X-rays]. No other means have been proved.”
The Fortune article was titled “Cancer: The Great Darkness,” and the “darkness,” the authors suggested, was as much political as medical. Cancer medicine was stuck in a rut not only because of the depth of medical mysteries that surrounded it, but because of the systematic neglect of cancer research: “There are not over two dozen funds in the U.S. devoted to fundamental cancer research. They range in capital from about $500 up to about $2,000,000, but their aggregate capitalization is certainly not much more than $5,000,000. . . . The public willingly spends a third of that sum in an afternoon to watch a major football game.”
This stagnation of research funds stood in stark contrast to the swift rise to prominence of the disease itself. Cancer had certainly been present and noticeable in nineteenth-century America, but it had largely lurked in the shadow of vastly more common illnesses. In 1899, when Roswell Park[46], a well-known Buffalo surgeon, had argued that cancer would someday overtake smallpox, typhoid fever, and tuberculosis to become the leading cause of death in the nation, his remarks had been perceived as a rather “startling prophecy,” the hyperbolic speculations of a man who, after all, spent his days and nights operating on cancer. But by the end of the decade, Park’s remarks were becoming less and less startling, and more and more prophetic by the day. Typhoid, aside from a few scattered outbreaks, was becoming increasingly rare. Smallpox was on the decline[47]; by 1949, it would disappear from America altogether. Meanwhile cancer was already outgrowing other diseases, ratcheting its way up the ladder of killers. Between 1900 and 1916,[48] cancer-related mortality grew by 29.8 percent, edging out tuberculosis as a cause of death. By 1926, cancer[49] had become the nation’s second most common killer, just behind heart disease.
“Cancer: The Great Darkness” wasn’t alone in building a case for a coordinated national response to cancer. In May that year,[50] Life carried its own dispatch on cancer research, which conveyed the same sense of urgency. The New York Times published two reports on rising cancer rates, in April and June. When cancer appeared[51] in the pages of Time in July 1937, interest in what was called the “cancer problem” was like a fierce contagion in the media.
Proposals to mount a systematic national response against cancer had risen and ebbed rhythmically in America since the early 1900s. In 1907, a group of cancer surgeons had congregated at the New Willard Hotel in Washington to create an organization to lobby Congress for more funds for cancer research. By 1910, this organization, the American Association for Cancer Research[52], had convinced President Taft to propose to Congress a national laboratory dedicated to cancer research. But despite initial interest in the plan, the efforts had stalled in Washington after a few fitful attempts, largely because of a lack of political support.
In the late 1920s, a decade after Taft’s proposal had been tabled, cancer research found a new and unexpected champion—Matthew Neely, a dogged and ebullient former lawyer from Fairmont, West Virginia, serving his first term in the Senate. Although Neely had relatively little experience in the politics of science, he had noted the marked increase in cancer mortality in the previous decade—from 70,000 men and women in 1911[53] to 115,000 in 1927. Neely asked