Daniel J. Denis

Applied Univariate, Bivariate, and Multivariate Statistics



a head or tail on the next flip, and so on for the other flips (i.e., no outcome is ever “due” to occur, as the gambler sometimes believes).

      We can easily demonstrate hypothesis testing in a binomial setting using R. For instance, let us return to the coin‐flipping experiment. Suppose you would like to know the probability of obtaining two heads on five flips of a fair coin, where each flip is assumed to have a probability of heads equal to 0.5. In R, we can compute this as follows:

      > dbinom(2, size = 5, prob = 0.5)
      [1] 0.3125


      where dbinom calls the “density for the binomial,” “2” is the number of successes we are specifying, “size = 5” is the number of trials, and “prob = 0.5” is the probability of success on any given trial, which, recall, is assumed constant from trial to trial.
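      Under the hood, dbinom is simply evaluating the binomial probability mass function, p(x) = C(n, x) p^x q^(n − x), where C(n, x) is the binomial coefficient. As a quick check, a minimal sketch using base R’s choose function to reproduce the value by hand:

      > choose(5, 2) * 0.5^2 * 0.5^3
      [1] 0.3125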

      Suppose instead of two heads, we were interested in the probability of obtaining five heads:

      > dbinom(5, size = 5, prob = 0.5)
      [1] 0.03125

      Notice that the probability of obtaining five heads out of five flips of a fair coin is quite a bit less than that of obtaining two heads. We can compute the remaining probabilities in the same way and obtain the complete binomial distribution for this experiment:

Heads   0         1         2         3         4         5
Prob    0.03125   0.15625   0.3125    0.3125    0.15625   0.03125   ∑ = 1.0
Histogram depicting binomial distribution for the probability of the number of heads on a fair coin.
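      Rather than computing these probabilities one at a time, note that dbinom is vectorized, so a single call returns the entire distribution. A short sketch that also confirms the probabilities sum to 1.0 and draws a rough version of the histogram above:

      > dbinom(0:5, size = 5, prob = 0.5)
      [1] 0.03125 0.15625 0.31250 0.31250 0.15625 0.03125
      > sum(dbinom(0:5, size = 5, prob = 0.5))
      [1] 1
      > barplot(dbinom(0:5, size = 5, prob = 0.5), names.arg = 0:5)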

      Binomial distributions are useful for modeling a wide variety of phenomena. But again, remember that the outcome variable must be binary, meaning it can take on only two possible values. If it has more than two possibilities, or is continuous in nature, then the binomial distribution is not suitable. Binomial data will be featured further in our discussion of logistic regression in Chapter 10.

      One can also appreciate the general logic of hypothesis testing through the binomial. If our null hypothesis is that the coin is fair, and we obtain five heads out of five flips, this result has only a 0.03125 probability of occurring. Hence, because the probability of these data is so low under the model that the coin is fair, we typically decide to reject the null hypothesis and infer the statistical alternative hypothesis that p(H) ≠ 0.5. Substantively, we might infer that the coin is not fair, though this substantive alternative also assumes it is the coin that is to “blame” for coming up heads five times. If the flipper was responsible for biasing the coin, for instance, or a breeze suddenly came along that helped the result occur in this particular fashion, then inferring the substantive alternative hypothesis of “unfairness” may not be correct. Perhaps the coin itself is fair, and the flipper or other factors (e.g., the breeze) are what are ultimately responsible for the rejection of the null. This is one reason why rejecting null hypotheses is quite easy, but inferring the correct substantive alternative hypothesis (i.e., the hypothesis that explains why the null was rejected) is much more challenging (see Denis, 2001). As concluded by Denis, “Anyone can reject a null, to be sure. The real skill of the scientist is arriving at the true alternative.”

      The binomial distribution is also well suited for comparing proportions. For details on how to run this simple test in R, see Crawley (2013, p. 365). One can also use binom.test in R to test simple binomial hypotheses, or prop.test to test null hypotheses about proportions. A useful test that employs binomial distributions is the sign test (see Siegel and Castellan, 1988, pp. 80–87 for details). For a demonstration of the sign test in R, see Denis (2020).
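      For instance, the five-heads experiment above can be tested directly with binom.test. A one-sided test against the null that p(H) = 0.5 returns exactly the 0.03125 probability computed earlier (the two-sided default would instead report 0.0625, since it counts both extremes); output abbreviated:

      > binom.test(5, n = 5, p = 0.5, alternative = "greater")
      Exact binomial test
      data: 5 and 5
      number of successes = 5, number of trials = 5, p-value = 0.03125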

      2.1.3 Normal Approximation

      Many distributions in statistics can be regarded as limiting forms of other distributions. What this statement means is best demonstrated through an example of how the binomial and normal distributions are related. As the number of discrete categories along the x‐axis grows larger and larger, the probabilities under the binomial distribution more and more closely resemble areas under the normal curve. It is in this sense that, for a large number of trials, the binomial distribution begins to approximate the normal distribution.

      We can see that the normal curve “approximates” the binomial distribution, though perhaps not tremendously well for only five trials. If we increase the number of trials, however, to, say, 20, the approximation is much improved. And when we increase the number of trials to 100, the binomial distribution looks virtually like a normal density. That is, we say that the normal distribution is the limiting form of the binomial distribution.

      We can express this idea more formally. If the number of trials n in a binomial experiment is made large, the distribution of the number of successes x will tend to resemble a normal distribution. That is, the normal distribution is the limiting form of a binomial distribution as n → ∞ for a fixed p (and where q = 1 − p), where E(xi) is the expectation of the random variable xi (the meaning of “random variable” will be discussed shortly). Since the number of successes x has mean E(x) = np and variance npq, the standardized quantity

      z = (x − np) / √(npq)

      tends toward the standard normal distribution N(0, 1) as n → ∞.
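      A quick numerical illustration of this limiting behavior (a sketch; the cutoff of 55 successes and n = 100 are arbitrary choices for illustration): the exact binomial probability P(x ≤ 55) and its continuity‐corrected normal approximation agree to roughly three decimal places.

      > n <- 100; p <- 0.5; q <- 1 - p
      > pbinom(55, size = n, prob = p)                   # exact binomial probability P(x <= 55)
      > pnorm(55.5, mean = n * p, sd = sqrt(n * p * q))  # normal approximation with continuity correction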

Histogram depicting binomial distributions approximated by normal densities for 5 (far left), 20 (middle), and 100 trials (far right).
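      A figure along these lines can be reproduced in a few lines of base R. A sketch (assuming a fair coin, p = 0.5, throughout) that overlays on each binomial distribution the normal density with matching mean np and standard deviation √(npq):

      > par(mfrow = c(1, 3))  # three panels side by side
      > for (n in c(5, 20, 100)) {
      +   heads <- 0:n
      +   plot(heads, dbinom(heads, size = n, prob = 0.5), type = "h",
      +        xlab = "heads", ylab = "probability", main = paste(n, "trials"))
      +   curve(dnorm(x, mean = n * 0.5, sd = sqrt(n * 0.25)), add = TRUE)
      + }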