there are some young people who drink daily, we might suspect that drinking–and the frequency of drinking–increases with age, and that even a large proportion of youths who are “current drinkers” find their opportunities to drink limited mostly to weekends. One might suspect that young drinkers average less than 35 drinks per month. Reducing the estimate by only 5 drinks per month would cut our estimate of the total drinks consumed in a year by underage drinkers by another billion. The assumptions that analysts make–even when they make no calculation errors–shape the resulting figures.
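The sensitivity of the total to that one assumption is easy to check for yourself. The sketch below works the arithmetic backward; the drinker counts are hypothetical, used only to show how a small per-person change scales up:

```python
# Back-of-envelope sensitivity check for the underage-drinking estimate.
# The "drinkers needed" figure below is derived arithmetic, not a count
# taken from any survey.

drinks_cut_per_month = 5        # revised assumption: 5 fewer drinks per month
months_per_year = 12
cut_per_drinker = drinks_cut_per_month * months_per_year  # 60 drinks/year

# How many underage drinkers would it take for a 5-drink-per-month change
# to move the annual total by a full billion drinks?
drinkers_needed = 1_000_000_000 / cut_per_drinker
print(f"{drinkers_needed:,.0f}")  # roughly 16.7 million
```

In other words, once an estimate assumes tens of millions of underage drinkers, even a modest tweak to the per-person assumption swings the total by a billion drinks.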
D. SOURCES: WHO COUNTED–AND WHY?
While it is sometimes possible to spot obvious blunders, most statistics seem plausible—at least they aren’t obviously wrong. But are they right? In trying to evaluate any number, it helps to ask questions. A good first question is, Who produced this figure? That is, who did the counting—and why?
Numbers don’t exist in nature. Every number is a product of human effort. Someone had to go to the trouble of counting. So we can begin by trying to identify the sources for our numbers, to ask who they are, and why they bothered to count whatever they counted.
Statistics come from all sorts of sources. Government agencies crunch a lot of numbers; they conduct the census and calculate the crime rate, the unemployment rate, the poverty rate, and a host of other statistics. Then there are the pollsters who conduct public opinion polls: sometimes they conduct independent surveys, but often they are working for particular clients, who probably hope that the poll results will support their views. And there are researchers who have collected data to study some phenomenon, and who may be more objective—or not. All sorts of people are sources for statistics.
Typically, we don’t have direct access to the folks who create these numbers. Most of us don’t receive the original reports from government agencies, pollsters, or researchers. Rather, we encounter their figures at second or third hand—in newspaper stories or news broadcasts. The editors and reporters who produce the news winnow through lots of material for potential stories and select only a few numbers to share with their audiences.
In other words, the statistics that we consume are produced and distributed by a variety of people, and those people may have very different agendas. Although we might wish them to be objective sources, intent on providing only accurate, reliable information, in practice we know that some sources present statistics selectively, in order to convince us that their positions are correct. They may have a clear interest in convincing us, say, that their new drug is effective and not harmful, or that their industry deserves a tax break. These interests can inspire deliberate attempts to deceive, as when people knowingly present false or unreliable figures; but bad statistics can also emerge for other, less devious reasons.
When researchers announce their results, when activists try to raise concern for some cause they favor, or when members of the media publish or broadcast news, they all find themselves in competition to gain our attention. There is a lot of information out there, and most of it goes unnoticed. Packaging becomes important. To attract media coverage, claims need to be crafted to seem interesting: each element in a story needs to help grab and hold our attention, and that includes statistics. Thus, people use figures to capture our interest and concern; they emphasize numbers that seem surprising, impressive, or disturbing. When we see a statistic, we should realize that it has survived a process of selection, that many other numbers have not been brought to our attention because someone deemed them less interesting.
This competition for public notice affects all sorts of numbers, even those produced by the most reputable sources. When government agencies announce the results of their newest round of number-crunching, they may be tempted to issue a news release that highlights the most interesting, eye-catching figures. Researchers who hope to publish their work in a visible, high-prestige journal may write up their results in ways intended to convince the journal’s editor that theirs is an especially significant study. In the competition to gain attention, only the most compelling numbers survive.
And, as we have already seen in the section on blunders, people sometimes present numbers they don’t understand. They may be sincere—fully convinced of the validity of their own dubious data. Of course this is going to be true for people with whom we disagree—after all, if they’ve come to the wrong conclusions, there must be something wrong with their evidence. But—and this is awkward—the same is often true for those who agree with us. Their hearts may be in the right place, yet advocates who share our views may not fully understand their own figures, either.
In short, it may seem that we’re bombarded by statistics, but the ones we encounter in news reports are only a tiny fraction of all the numbers out there. They have been selected because someone thought we’d find them especially interesting or convincing. In a sense, the numbers that reach us have often been tailored to shock and awe, to capture and hold our attention. Therefore, when we encounter a statistic, it helps to ask who produced that number, and what their agenda might be. In addition, we should watch for attempts to present data in ways that make them seem particularly impressive. Consider these examples.
D.1 Big Round Numbers
Big round numbers make big impressions. They seem shocking: “I had no idea things were that bad!” They are easy to remember. They are also one of the surest signs that somebody is guessing.
Particularly when advocates are first trying to draw others’ attention to a social problem, they find it necessary to make statistical guesses. If nobody has been paying much attention to the problem, then, in all likelihood, nobody has been bothering to keep accurate records, to count the number of cases. There are no good statistics on it. But as soon as the advocates’ campaign begins to attract the media, reporters are bound to start asking for numbers. Just how common is this problem? How many cases are there? The advocates are going to be pressed for figures, and they are going to have to offer guesses—ballpark figures, educated guesses, guesstimates.
These guesses may be quite sincere. The advocates think this is a serious problem, and so they are likely to think it is a big problem. They spend their days talking to other people who share their concern. If they are going to guess, they are likely to fix on a big round number that confirms their sense of urgency. As a result, their numbers are likely to err on the side of exaggeration.
LOOK FOR: The name says it all–big round numbers.
EXAMPLE: ORNITHICIDE
When birds fly into windows, the collisions are often fatal. These are sad events. We enjoy looking out our windows at birds, and we hate to think that our windows are responsible for killing those same birds. This seems to be just one more way that people disrupt nature.
In recent years, a big round estimate for the number of fatal bird collisions each year has found its way into the news media. For example, an architecture professor interviewed on National Public Radio in 2005 put the annual number of bird collision deaths at one billion. The reporter conducting the interview expressed skepticism: “How accurate is that number, do you think? How would you ever calculate something like that?” After all, a billion is a lot. It is one thousand millions–a very large, very round number. But the professor insisted that the one-billion figure was “based on very careful data.”1
Well, not exactly. The previous best estimate for bird deaths due to fatal window collisions was 3.5 million–a whole lot less than a billion. This estimate simply assumed that the area of the continental United States is about 3.5 million square miles, and that each year, on average, one bird per square mile died after striking a window.2 In other words, the 3.5-million figure wasn’t much more than a guess.
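The arithmetic behind that older estimate is just as easy to reconstruct, which is rather the point–the whole calculation rests on two assumed inputs:

```python
# Reconstructing the older 3.5-million estimate described above:
# the approximate area of the continental United States multiplied by
# an assumed rate of one fatal window collision per square mile per year.

area_sq_miles = 3_500_000          # ~area of the continental U.S.
deaths_per_sq_mile_per_year = 1    # the estimate's only other assumption

estimated_deaths = area_sq_miles * deaths_per_sq_mile_per_year
print(estimated_deaths)  # 3500000 -- a far cry from one billion
```

Change either assumed input and the "estimate" changes with it; nothing in the calculation itself tells you whether one bird per square mile is close to reality.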
Convinced that that number was too low, an ornithologist decided to do some research.3 He arranged to have residents at two houses keep careful track of bird collisions at their homes: one in southern Illinois, the other in a suburb in New York. By coincidence, the Illinois house belonged to former neighbors of ours–an older couple who loved birds and who built a custom home with lots of windows, surrounded