to “get married” without taking out a license; they might even have a wedding ceremony, yet their marriage will not be counted in the official record. Or consider couples that cohabit—live together—without getting married; there is no official record of their living arrangement. And there is the added problem of recordkeeping: is the system for filing, recording, and generally keeping track of marriages accurate, or do mistakes occur? These examples remind us that the official number of marriages reflects certain bureaucratic decisions about what will be counted and how to do the counting.
Now consider a more complicated example: statistics on suicide. Typically, a coroner decides which deaths are suicides. This can be relatively straightforward: perhaps the dead individual left behind a note clearly stating an intent to commit suicide. But often there is no note, and the coroner must gather evidence that points to suicide—perhaps the deceased is known to have been depressed, the death occurred in a locked house, the cause of death was an apparently self-inflicted gunshot to the head, and so on. There are two potential mistakes here. The first is that the coroner may label a death a “suicide” when, in fact, there was another cause (in mystery novels, at least, murder often is disguised as suicide). The second possibility for error is that the coroner may assign another cause of death to what was, in fact, a suicide. This is probably a greater risk, because some people who kill themselves want to conceal that fact (for example, some single-car automobile fatalities are suicides designed to look like accidents so that the individual’s family can avoid embarrassment or collect life insurance benefits). In addition, surviving family members may be ashamed of a relative’s suicide, and they may press the coroner to assign another cause of death, such as accident.
In other words, official records of suicide reflect coroners’ judgments about the causes of death in what can be ambiguous circumstances. The act of suicide tends to be secretive—it usually occurs in private—and the motives of the dead cannot always be known. Labeling some deaths as “suicides” and others as “homicides,” “accidents,” or whatever will sometimes be wrong, although we cannot know exactly how often. Note, too, that individual coroners may assess cases differently; we might imagine one coroner who is relatively willing to label deaths suicides, and another who is very reluctant to do so. Presented with the same set of cases, the first coroner might find many more suicides than the second.11
It is important to appreciate that coroners view their task as classifying individual deaths, as giving each one an appropriate label, rather than as compiling statistics for suicide rates. Whatever statistical reports come out of coroners’ offices (say, total number of suicides in the jurisdiction during the past year) are by-products of their real work (classifying individual deaths). That is, coroners are probably more concerned with being able to justify their decisions in individual cases than they are with whatever overall statistics emerge from those decisions.
The example of suicide records reveals that all official statistics are products—and often by-products—of decisions by various officials: not just coroners, but also the humble clerks who fill out and file forms, the exalted supervisors who prepare summary reports, and so on. These people make choices (and sometimes errors) that shape whatever statistics finally emerge from their organization or agency, and the organization provides a context for those choices. For example, the law requires coroners to choose among a specified set of causes for death: homicide, suicide, accident, natural causes, and so on. That list of causes reflects our culture. Thus, our laws do not allow coroners to list “witchcraft” as a cause of death, although that might be considered a reasonable choice in other societies. We can imagine different laws that would give coroners different arrays of choices: perhaps there might be no category for suicide; perhaps people who kill themselves might be considered ill, and their deaths listed as occurring from natural causes; or perhaps suicides might be grouped with homicides in a single category of deaths caused by humans. In other words, official statistics reflect what sociologists call organizational practices—the organization’s culture and structure shape officials’ actions, and those actions determine whatever statistics finally emerge.
Now consider an even more complicated example. Police officers have a complex job; they must maintain order, enforce the law, and assist citizens in a variety of ways. Unlike the coroner who faces a relatively short list of choices in assigning cause of death, the police have to make all sorts of decisions. For example, police responding to a call about a domestic dispute (say, a fight between husband and wife) have several relatively ill-defined options. Perhaps they should arrest someone; perhaps the wife wants her husband arrested—or perhaps she says she does not want that to happen; perhaps the officers ought to encourage the couple to separate for the night; perhaps they ought to offer to take the wife to a women’s shelter; perhaps they ought to try talking to the couple to calm them down; perhaps they find that talking doesn’t work, and then pick arrest or a shelter as a second choice; perhaps they decide that the dispute has already been settled, or that there is really nothing wrong. Police must make decisions about how to respond in such cases, and some—but probably not all—of those choices will be reflected in official statistics. If officers make an arrest, the incident will be recorded in arrest statistics, but if the officers decide to deal with the incident informally (by talking with the couple until they calm down), there may be no statistical record of what happens. The choices officers make depend on many factors. If the domestic dispute call comes near the end of the officers’ shift, they may favor quick solutions. If their department has a new policy to crack down on domestic disputes, officers will be more likely to make arrests. All these decisions, each shaped by various considerations, will affect whatever statistics eventually summarize the officers’ actions.12
Like our earlier examples of marriage records and coroners labeling suicides, the example of police officers dealing with domestic disputes reveals that officials make decisions (relatively straightforward for marriage records, more complicated for coroners, and far less clear-cut in the case of the police), that official statistics are by-products of those decisions (police officers probably give even less thought than coroners to the statistical outcomes of their decisions), and that organizational practices form the context for those decisions (while there may be relatively little variation in how marriage records are kept, organizational practices likely differ more among coroners’ offices, and there is great variation in how police deal with their complex decisions, with differences among departments, precincts, officers, and so on). In short, even official statistics are social products, shaped by the people and organizations that create them.
THINKING ABOUT STATISTICS AS SOCIAL PRODUCTS
The lesson should be clear: statistics—even official statistics such as crime rates, unemployment rates, and census counts—are products of social activity. We sometimes talk about statistics as though they are facts that simply exist, like rocks, completely independent of people, and that people gather statistics much as rock collectors pick up stones. This is wrong. All statistics are created through people’s actions: people have to decide what to count and how to count it, people have to do the counting and the other calculations, and people have to interpret the resulting statistics, to decide what the numbers mean. All statistics are social products, the results of people’s efforts.
Once we understand this, it becomes clear that we should not simply accept statistics by uncritically treating numbers as true or factual. If people create statistics, then those numbers need to be assessed, evaluated. Some statistics are pretty good; they reflect people’s best efforts to measure social problems carefully, accurately, and objectively. But other numbers are bad statistics—figures that may be wrong, even wildly wrong. We need to be able to sort out the good statistics from the bad. There are three basic questions that deserve to be asked whenever we encounter a new statistic.
1. Who created this statistic? Every statistic has its authors, its creators. Sometimes a number comes from a particular individual. On other occasions, large organizations (such as the Bureau of the Census) claim authorship (although each statistic undoubtedly reflects the work of particular people within the organization).
In asking who the creators are, we ought to be less concerned with the names of the particular individuals who produced a number than with their part in the public drama about statistics. Does a particular statistic come from activists, who are striving to draw attention to