If a Bernoulli experiment with probability of success p is repeated N times, independently, then the binomial distribution is the distribution of the random variable counting the number of successes at the end of the N repetitions of the experiment.
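For reference (this is the standard formula, presumably stated with the definition in the full text), if X ∼ ℬ(N, p), then X(Ω) = {0, 1, ..., N} and
\[
  \mathbb{P}(X = k) = \binom{N}{k}\, p^{k} (1-p)^{N-k}, \qquad k = 0, 1, \dots, N.
\]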
EXAMPLE 1.16.– Hypergeometric distribution: Let n and N be two integers such that n ≤ N, let p ∈ ]0, 1[ be such that pN ∈ ℕ, and let X be a random variable on (Ω, ℱ, ℙ) such that
\[
  X(\Omega) = \{ \max(0,\, n - N(1-p)), \dots, \min(n,\, pN) \},
\]
and for any k ∈ X(Ω),
\[
  \mathbb{P}(X = k) = \frac{\binom{pN}{k} \binom{N(1-p)}{n-k}}{\binom{N}{n}}.
\]
X is then said to follow a hypergeometric distribution with parameters N, n and p, and we write X ∼ ℋ(N, n, p).
If we consider an urn containing N indistinguishable balls, k red balls and N − k white balls, with k ∈ {1, ..., N − 1}, and if we simultaneously draw n balls, then the random variable X, equal to the number of red balls obtained, follows a hypergeometric distribution with parameters N, n and p = k/N.
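To make the urn model concrete, here is a minimal Python sketch (standard library only; the helper names are illustrative, not from the book) that compares the hypergeometric probabilities with a direct simulation of the simultaneous draw:

```python
import math
import random
from collections import Counter

def hypergeom_pmf(N: int, n: int, p: float, k: int) -> float:
    """P(X = k) for X ~ H(N, n, p): n balls drawn without replacement
    from an urn of N balls, p*N of which are red."""
    red = round(p * N)
    return math.comb(red, k) * math.comb(N - red, n - k) / math.comb(N, n)

def simulate(N: int, n: int, p: float, trials: int = 100_000) -> Counter:
    """Empirical distribution of the number of red balls drawn."""
    red = round(p * N)
    urn = [1] * red + [0] * (N - red)  # 1 = red ball, 0 = white ball
    return Counter(sum(random.sample(urn, n)) for _ in range(trials))

N, n, p = 20, 5, 0.25  # urn of 20 balls, 5 of them red, 5 drawn
counts = simulate(N, n, p)
for k in range(n + 1):
    print(k, round(hypergeom_pmf(N, n, p, k), 4), counts[k] / 100_000)
```

The empirical frequencies agree with the pmf up to Monte Carlo error, which illustrates why drawing without replacement leads to the hypergeometric rather than the binomial distribution.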
EXAMPLE 1.17.– Poisson distribution: Let λ > 0 and let X be a random variable on (Ω, ℱ, ℙ) such that X(Ω) = ℕ and, for any k ∈ X(Ω),
\[
  \mathbb{P}(X = k) = e^{-\lambda}\, \frac{\lambda^{k}}{k!}.
\]
It is then said that X follows a Poisson distribution with parameter λ, and we write X ∼ 𝒫(λ).
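As a quick sanity check (a standard computation, not reproduced from the excerpt), the series expansion of the exponential shows that these probabilities do sum to 1:
\[
  \sum_{k=0}^{\infty} \mathbb{P}(X = k) = e^{-\lambda} \sum_{k=0}^{\infty} \frac{\lambda^{k}}{k!} = e^{-\lambda}\, e^{\lambda} = 1.
\]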
DEFINITION 1.12.– Let X be a discrete random variable such that X(Ω) = {xᵢ, i ∈ I}, where I ⊂ ℕ.
– X, or the distribution of X, is said to be integrable (or summable) if
\[
  \sum_{i \in I} |x_i|\, \mathbb{P}(X = x_i) < +\infty.
\]
– If X is integrable, then the expectation of X is the real number defined by
\[
  \mathbb{E}[X] = \sum_{i \in I} x_i\, \mathbb{P}(X = x_i).
\]
EXAMPLE 1.18.– The random variable X defined in Example 1.12 admits an expectation; the average winnings in the die-rolling game are therefore equal to 15.
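Since Example 1.12 is not reproduced in this excerpt, the following Python sketch computes the expectation of a generic finite discrete distribution; the fair-die values below are an illustrative assumption, not the game of Example 1.12:

```python
def expectation(dist):
    """E[X] = sum of x_i * P(X = x_i) over the support, for a discrete
    random variable given as a list of (value, probability) pairs."""
    return sum(x * p for x, p in dist)

# Illustrative only: X = face shown by a fair six-sided die.
fair_die = [(k, 1 / 6) for k in range(1, 7)]
print(expectation(fair_die))  # 3.5
```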
The following proposition establishes a link between the expectation of a discrete random variable and measure theory.
PROPOSITION 1.3.– Let X be a discrete random variable such that X(Ω) = {xᵢ, i ∈ I}, where I ⊂ ℕ. It is assumed that
\[
  \sum_{i \in I} |x_i|\, \mathbb{P}(X = x_i) < +\infty.
\]
Then,
\[
  \mathbb{E}[X] = \int_{\Omega} X(\omega)\, \mathrm{d}\mathbb{P}(\omega).
\]
The above proposition also justifies the concept of integrability introduced in Definition 1.12. Further, in this case (i.e. when X is integrable), we write X ∈ L¹(Ω, ℱ, ℙ). When Xᵖ is integrable for a certain real number p ≥ 1 (i.e. when 𝔼[|X|ᵖ] < +∞), we write X ∈ Lᵖ(Ω, ℱ, ℙ).
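A step left implicit here is why integrability of Xᵖ for some p ≥ 1 implies integrability of X. On a probability space, this follows from Jensen's inequality (Proposition 1.4 below) applied to the convex function t ↦ tᵖ on [0, +∞[:
\[
  \mathbb{E}[|X|] \le \big( \mathbb{E}[|X|^{p}] \big)^{1/p} < +\infty,
\]
so that Lᵖ(Ω, ℱ, ℙ) ⊂ L¹(Ω, ℱ, ℙ) for every p ≥ 1.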
Let us look at some of the properties of expectations.
PROPOSITION 1.4.– Let X and Y be two integrable discrete random variables and let a, b ∈ ℝ. Then:
1) Linearity: 𝔼[aX + bY] = a𝔼[X] + b𝔼[Y].
2) Transfer theorem: if g is a measurable function such that g(X) is integrable, then
\[
  \mathbb{E}[g(X)] = \sum_{i \in I} g(x_i)\, \mathbb{P}(X = x_i).
\]
3) Monotonicity: if X ≤ Y almost surely (a.s.), then 𝔼[X] ≤ 𝔼[Y].
4) Cauchy–Schwarz inequality: if X² and Y² are integrable, then XY is integrable and
\[
  \mathbb{E}[|XY|] \le \sqrt{\mathbb{E}[X^{2}]\, \mathbb{E}[Y^{2}]}.
\]
5) Jensen's inequality: if g is a convex function such that g(X) is integrable, then g(𝔼[X]) ≤ 𝔼[g(X)].
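These properties lend themselves to a quick numerical check. The Python sketch below (standard library only; helper names are mine) verifies the transfer theorem, the one-variable form of linearity 𝔼[aX + b] = a𝔼[X] + b, and Jensen's inequality for g = exp on a small discrete distribution:

```python
import math

# A small discrete distribution: pairs (x_i, P(X = x_i)).
dist = [(-1, 0.2), (0, 0.3), (2, 0.5)]

def E(dist, g=lambda x: x):
    """Transfer theorem: E[g(X)] = sum of g(x_i) * P(X = x_i)."""
    return sum(g(x) * p for x, p in dist)

ex = E(dist)                                   # E[X] = 0.8
# Jensen's inequality with the convex g(x) = exp(x): g(E[X]) <= E[g(X)].
assert math.exp(ex) <= E(dist, math.exp)
# Linearity in one variable: E[aX + b] = a E[X] + b.
a, b = 3.0, -2.0
assert abs(E(dist, lambda x: a * x + b) - (a * ex + b)) < 1e-12
print(ex, math.exp(ex), E(dist, math.exp))     # 0.8, ~2.226, ~4.068
```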
DEFINITION 1.13.– Let X be a discrete random variable such that X(Ω) = {xᵢ, i ∈ I}, I ⊂ ℕ, and X² is integrable. The variance of X is the real number
\[
  \operatorname{Var}(X) = \mathbb{E}\big[ (X - \mathbb{E}[X])^{2} \big].
\]
Variance satisfies the following properties.
PROPOSITION 1.5.– If a discrete random variable X admits a variance, then:
1) Var(X) ≥ 0.
2) Var(X) = 𝔼[X²] − (𝔼[X])².
3) For any (a, b) ∈ ℝ², Var(aX + b) = a²Var(X).
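Property 2) is worth deriving explicitly, since it is the form used in practice; it follows from the linearity of the expectation (Proposition 1.4):
\[
  \operatorname{Var}(X) = \mathbb{E}\big[ X^{2} - 2 X\, \mathbb{E}[X] + (\mathbb{E}[X])^{2} \big]
  = \mathbb{E}[X^{2}] - 2 (\mathbb{E}[X])^{2} + (\mathbb{E}[X])^{2}
  = \mathbb{E}[X^{2}] - (\mathbb{E}[X])^{2}.
\]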
1.2.3. σ-algebra generated by a random variable
We now define the σ-algebra generated by a random variable. This concept is important for several reasons. For instance, it makes it possible to define the independence of random variables. It is also at the heart of the definition of conditional expectations; see Chapter 2.
PROPOSITION 1.6.– Let X be a real random variable defined on (Ω, ℱ, ℙ) and taking values in (E, ℰ). Then,
\[
  \sigma(X) := X^{-1}(\mathcal{E}) = \{ X^{-1}(A) ;\; A \in \mathcal{E} \}
\]
is a sub-σ-algebra of ℱ on Ω. It is called the σ-algebra generated by the random variable X. It is the smallest σ-algebra on Ω that makes X measurable:
\[
  \sigma(X) = \bigcap \big\{ \mathcal{G} \text{ is a } \sigma\text{-algebra on } \Omega \,:\, X \text{ is } \mathcal{G}\text{-measurable} \big\}.
\]
EXAMPLE 1.19.– Let ℱ₀ = {∅, Ω} and let X = c ∈ ℝ be a constant. Then, for any Borel set B in ℝ, the event (X ∈ B) has the value ∅ if c ∉ B and Ω if c ∈ B. Thus, the σ-algebra generated by X is ℱ₀. Reciprocally, it can be demonstrated that the only random variables measurable with respect to ℱ₀ are the constant ones.