random variables are the constants. Indeed, let X be a {∅, Ω}-measurable random variable and assume that it takes at least two different values, x and y. It may be assumed that x < y without loss of generality. The event {X ≤ x} = X⁻¹((−∞, x]) would then have to belong to {∅, Ω}; however, it is neither ∅ (it contains the outcomes where X = x) nor Ω (it excludes the outcomes where X = y). Hence X takes a single value: the only {∅, Ω}-measurable random variables are indeed the constants.
PROPOSITION 1.7.– Let X be a random variable on (Ω, ℱ, ℙ) taking values in (E, ℰ) and let σ(X) be the σ-algebra generated by X. Then, a random variable Y is σ(X)-measurable if and only if there exists a measurable function f such that Y = f(X).

This technical result will be useful in certain proofs further on in the text. In general, if it is known that Y is σ(X)-measurable, we cannot (and do not need to) make the function f explicit. Conversely, if Y can be written as a measurable function of X, it automatically follows that Y is σ(X)-measurable.
EXAMPLE 1.20.– A die is rolled twice. This experiment is modeled by Ω = {1, 2, 3, 4, 5, 6}² endowed with the σ-algebra of its subsets and the uniform distribution. Consider the mappings X1, X2 and Y from Ω to ℝ defined by

Xi(ω1, ω2) = ωi for i = 1, 2, and Y(ω1, ω2) = 1 if ω1 is even, 0 otherwise;
thus, Xi is the result of the ith roll and Y is the parity indicator of the first roll. Since Y = f(X1) with f(x) = 1 for x even and f(x) = 0 otherwise, Y is σ(X1)-measurable by Proposition 1.7. On the other hand, Y cannot be written as a function of X2, so Y is not σ(X2)-measurable.
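Since Ω is finite here, Proposition 1.7 can be checked mechanically: Y is σ(X)-measurable exactly when X(ω) = X(ω′) forces Y(ω) = Y(ω′). The following Python sketch (an illustration added here, with our own helper is_function_of and the convention that Y equals 1 when the first roll is even) verifies both claims of the example by enumeration.

```python
from itertools import product

# Sample space of Example 1.20: two rolls of a die, Omega = {1,...,6}^2.
omega = list(product(range(1, 7), repeat=2))

X1 = lambda w: w[0]                      # result of the first roll
X2 = lambda w: w[1]                      # result of the second roll
Y = lambda w: 1 if w[0] % 2 == 0 else 0  # parity indicator of the first roll

def is_function_of(Z, W, space):
    """On a finite space, Z = f(W) for some f iff W(w) = W(w') implies Z(w) = Z(w')."""
    table = {}
    for w in space:
        if W(w) in table and table[W(w)] != Z(w):
            return False  # two outcomes agree on W but disagree on Z
        table[W(w)] = Z(w)
    return True

print(is_function_of(Y, X1, omega))  # True:  Y = f(X1), so Y is sigma(X1)-measurable
print(is_function_of(Y, X2, omega))  # False: Y cannot be written as a function of X2
```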
The σ-algebra generated by X contains all the events that can be observed by drawing X: it represents the information revealed by X.
DEFINITION 1.14.– Let (Ω, ℱ, ℙ) be a probability space.
– Let X and Y be two random variables on (Ω, ℱ, ℙ) taking values in (E1, ℰ1) and (E2, ℰ2), respectively. Then, X and Y are said to be independent if the σ-algebras σ(X) and σ(Y) are independent.
– A family (Xi)i∈I of random variables is said to be independent if the σ-algebras (σ(Xi))i∈I are independent.
– Let 𝒢 be a sub-σ-algebra of ℱ and let X be a random variable. Then, X is said to be independent of 𝒢 if σ(X) is independent of 𝒢 or, in other words, if σ(X) and 𝒢 are independent.
PROPOSITION 1.8.– If X and Y are two integrable and independent random variables, then their product XY is integrable and

𝔼[XY] = 𝔼[X] 𝔼[Y].
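As a quick numerical sanity check (not a proof) of Proposition 1.8, one can estimate both sides of 𝔼[XY] = 𝔼[X]𝔼[Y] by Monte Carlo for two independently simulated random variables; the distributions chosen below are arbitrary.

```python
import random

random.seed(0)
n = 100_000

# X uniform on {1, ..., 6} and Y uniform on [0, 1], drawn independently.
xs = [random.randint(1, 6) for _ in range(n)]
ys = [random.random() for _ in range(n)]

# Empirical E[XY] versus E[X] * E[Y]; both should be close to 3.5 * 0.5 = 1.75.
print(sum(x * y for x, y in zip(xs, ys)) / n)
print((sum(xs) / n) * (sum(ys) / n))
```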
1.2.4. Random vectors
We will now study more closely random variables taking values in ℝᵈ, with d ≥ 2. This concept was already introduced in Definition 1.9; here, we examine the relations between a random vector and its coordinates. When d = 2, we speak of a random couple.
PROPOSITION 1.9.– Let X be a real random vector on the probability space (Ω, ℱ, ℙ), taking values in ℝᵈ. Then, X = (X1, ..., Xd) is such that for any i ∈ {1, ..., d}, Xi is a real random variable.
DEFINITION 1.15.– A random vector is said to be discrete if each of its components, Xi, is a discrete random variable.
DEFINITION 1.16.– Let X = (X1, X2) be a discrete random couple.

The conjoint distribution (or joint distribution or, simply, the distribution) of X is given by the family

(ℙ(X1 = x1, X2 = x2))(x1, x2) ∈ X1(Ω) × X2(Ω).

The marginal distributions of X are the distributions of X1 and X2. These distributions may be derived from the conjoint distribution of X through:

ℙ(X1 = x1) = ∑x2 ∈ X2(Ω) ℙ(X1 = x1, X2 = x2), for x1 ∈ X1(Ω),

and

ℙ(X2 = x2) = ∑x1 ∈ X1(Ω) ℙ(X1 = x1, X2 = x2), for x2 ∈ X2(Ω).
The concepts of joint and marginal distributions extend naturally to vectors of dimension larger than 2.
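Computationally, the joint distribution of a discrete couple is just a table of probabilities indexed by pairs, and each marginal is obtained by summing out the other coordinate. A minimal Python sketch (the dictionary layout and the helper marginal are our own illustrative conventions):

```python
from collections import defaultdict

# Joint distribution of a discrete couple (X1, X2), stored as
# {(x1, x2): P(X1 = x1, X2 = x2)}; the values here are made up for illustration.
joint = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.4}

def marginal(joint, index):
    """Sum the joint probabilities over the other coordinate."""
    m = defaultdict(float)
    for pair, p in joint.items():
        m[pair[index]] += p
    return dict(m)

print(marginal(joint, 0))  # distribution of X1: {0: 0.4, 1: 0.6}
print(marginal(joint, 1))  # distribution of X2: {0: 0.3, 1: 0.7}
```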
EXAMPLE 1.21.– A coin is tossed 3 times, and the result is noted. The universe of possible outcomes is Ω = {T, H}³, endowed with the σ-algebra of its subsets and the uniform distribution (the coin is assumed fair). Let X denote the total number of tails obtained and Y denote the number of tails obtained at the first toss. Then, X(Ω) = {0, 1, 2, 3} and Y(Ω) = {0, 1}.
The couple (X, Y) is, therefore, a random vector (referred to here as a “random couple”), with joint distribution defined by ℙ(X = i, Y = j) for any (i, j) ∈ X(Ω) × Y(Ω). Counting the eight equally likely outcomes gives

ℙ(X = 0, Y = 0) = 1/8, ℙ(X = 1, Y = 0) = 2/8, ℙ(X = 2, Y = 0) = 1/8, ℙ(X = 3, Y = 0) = 0,
ℙ(X = 0, Y = 1) = 0, ℙ(X = 1, Y = 1) = 1/8, ℙ(X = 2, Y = 1) = 2/8, ℙ(X = 3, Y = 1) = 1/8,

which makes it possible to derive the distributions of X and Y (called the marginal distributions of the couple (X, Y)):
Distribution of X: ℙ(X = 0) = 1/8, ℙ(X = 1) = 3/8, ℙ(X = 2) = 3/8, ℙ(X = 3) = 1/8.
Distribution of Y: ℙ(Y = 0) = 1/2, ℙ(Y = 1) = 1/2.
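These values can be confirmed by enumerating the eight equally likely outcomes directly; the sketch below (assuming, as above, a fair coin) recomputes the joint distribution and both marginal distributions in exact arithmetic.

```python
from itertools import product
from collections import Counter
from fractions import Fraction

# Omega = {T, H}^3 with the uniform distribution, as in Example 1.21.
omega = list(product("TH", repeat=3))
p = Fraction(1, len(omega))  # each outcome has probability 1/8

X = lambda w: w.count("T")             # total number of tails
Y = lambda w: 1 if w[0] == "T" else 0  # number of tails at the first toss

joint = Counter()
for w in omega:
    joint[(X(w), Y(w))] += p

# Joint distribution {(i, j): P(X = i, Y = j)}; absent pairs have probability 0.
print(dict(joint))

# Marginal distributions: X is binomial B(3, 1/2), Y is Bernoulli(1/2).
print({i: sum(v for (a, _), v in joint.items() if a == i) for i in range(4)})
print({j: sum(v for (_, b), v in joint.items() if b == j) for j in (0, 1)})
```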
1.2.5. Convergence of sequences of random variables
To conclude this section on random variables, we will review some classic results of convergence for sequences of random variables. Throughout the rest of this book, the abbreviation r.v. signifies random variable.
DEFINITION 1.17.– Let (Xn)n≥1 and X be r.v.s defined on (Ω, ℱ, ℙ).
1)