Algorithm Steps
1 Pad the input by adding a zero outcome X = 0 with probability 0.
2 Group by Xj and sum the corresponding pj.
3 Sort events by outcome Xj into ascending order. Relabel events X0<X1<⋯<Xm and probabilities p0,…,pm.
4 Decumulate probabilities to compute the survival function Sj:=S(Xj) using S0=1−p0 and Sj=Sj−1−pj, j > 0.
5 Difference the outcomes to compute ΔXj=Xj+1−Xj, j=0,…,m−1.
6 Outcome-probability sum-product:
E[X] = Σ_{j=1}^{m} Xj pj = Σ_{j=1}^{m} Xj (Sj−1 − Sj).   (3.10)
7 Survival function sum-product:
E[X] = Σ_{j=1}^{m} S(Xj−1) ΔXj−1 = Σ_{j=1}^{m} Sj−1 (Xj − Xj−1).   (3.11)
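Read literally, Steps (1)–(7) translate into a few lines of code. The following is a minimal sketch, assuming nonnegative outcomes (as in this setting) supplied as parallel lists of outcomes and probabilities; the function name discrete_mean_two_ways and the list-based representation are illustrative, not from the text.

```python
from collections import defaultdict

def discrete_mean_two_ways(outcomes, probs):
    # Step 1: pad with a zero outcome of probability zero.
    grouped = defaultdict(float)
    grouped[0.0] = 0.0
    # Step 2: group duplicate outcomes and sum their probabilities.
    for x, p in zip(outcomes, probs):
        grouped[float(x)] += p
    # Step 3: sort so that X_0 = 0 < X_1 < ... < X_m.
    X = sorted(grouped)
    p = [grouped[x] for x in X]
    m = len(X) - 1
    # Step 4: decumulate to get the survival function S_j = P(X > X_j).
    S = []
    s = 1.0
    for pj in p:
        s -= pj
        S.append(s)
    # Step 5: forward differences dX_j = X_{j+1} - X_j for j = 0, ..., m-1.
    dX = [X[j + 1] - X[j] for j in range(m)]
    # Step 6 (Eq. 3.10): outcome-probability sum-product, starting at j = 1.
    mean_xp = sum(X[j] * p[j] for j in range(1, m + 1))
    # Step 7 (Eq. 3.11): survival-function sum-product.
    mean_sdx = sum(S[j] * dX[j] for j in range(m))
    return mean_xp, mean_sdx
```

Returning both sums makes the equality of Eqs. 3.10 and 3.11 easy to verify numerically, up to floating-point rounding.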
Comments.
1 Step (1) treats 0 as special because the second integral in Eq. 3.9 starts at x = 0. The case where the smallest outcome is > 0 is illustrated in Figure 3.12. Now S(x) = 1 for 0 ≤ x < X1 and ∫_0^∞ S(x) dx includes the shaded dashed rectangle of area X1. Step (1) allows us to systematically deal with any discrete data. It adds a new outcome row only when the smallest observation is > 0.
Figure 3.12 Accounting for the effect of adding 1 to each outcome.
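As a hypothetical illustration of this point, suppose X takes the single value 100 with probability one. Padding gives X0 = 0 with p0 = 0 and X1 = 100, so S(x) = 1 on 0 ≤ x < 100 and 0 beyond, and ∫_0^∞ S(x) dx = 1 × 100 = 100 = E[X]; without the padded row, the sum in Step (7) would miss the rectangle of area X1 entirely.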
2 After Step (3), the Xj are distinct and in ascending order, X0 = 0, and pj = P(X = Xj).
3 In Step (4), Sm=P(X>Xm)=0 since Xm is the maximum value of X.
4 The forward difference ΔX computed in Step (5) replaces dx in various formulas. Since it is a forward difference, ΔXm is undefined; it is also unneeded.
5 Step (6) computes the first integral in Eq. 3.9. It is a sum because X is discrete and has a probability mass function, like a Poisson, rather than a density, like a normal, which explains why we use the notation dF(x), not f(x) dx. The sum starts at j = 1 because X0 = 0. Notice that P(X = Xj) = Sj−1 − Sj is the negative backward difference of S.
6 Step (7) computes the second integral in Eq. 3.9. The sum starts at j = 1, corresponding to the first vertical bar in Panel (b), which extends from X0 = 0 to X1 and has height S(X0).
7 Note the index shift between Eqs. 3.10 and 3.11: Eq. 3.10 pairs Xj with the backward difference Sj−1 − Sj, while Eq. 3.11 pairs Sj−1 with the forward step Xj − Xj−1.
8 Both Eqs. 3.10 and 3.11 are exact evaluations. The approximation occurs when the underlying distribution being modeled is replaced with the discrete sample given by X.
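These comments can be tied together with a short summation-by-parts check, using only X0 = 0 and Sm = 0: the difference of the two sums is
Σ_{j=1}^{m} Xj (Sj−1 − Sj) − Σ_{j=1}^{m} Sj−1 (Xj − Xj−1) = Σ_{j=1}^{m} (Sj−1 Xj−1 − Sj Xj),
which telescopes to S0 X0 − Sm Xm = 0. The two evaluations of E[X] therefore always agree exactly.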
Exercise 29 Apply the Algorithm to X defined by the Simple Discrete Example, Section 2.4.1.
Solution. The sorted data, starting with X0=0, is shown in Table 3.2. From now on we label the outcomes Xj as shown there.
Table 3.2 shows event rank j = 0,…,m = 8, and the columns S and ΔX from Steps (4) and (5). For future applications, it is important that we can recover P(X = Xj) as a difference of Sj. This is easy: P(X = Xj) = ΔSj := S(Xj−1) − S(Xj) is the jump at Xj, and ΔS0 is the jump at X0 = 0, i.e., 1 − P(X > 0). It is the negative backward difference because S is the survival function. Finally, the table shows two computed columns: X ΔS and S ΔX. The totals show that Steps (6) and (7) give the same result for E[X], 27.25.
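For example, reading from Table 3.2, ΔS2 = S1 − S2 = 0.625 − 0.5 = 0.125 = P(X = 8), and ΔS0 = 1 − S0 = 1 − 0.75 = 0.25 = P(X = 0).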
Table 3.2 Simple Discrete Example with nine possible outcomes, ordered by portfolio loss X, with layer width ΔX and exceedance probability S
j | X | ΔX | P(X = Xj) = ΔS | S | X ΔS | S ΔX |
---|---|---|---|---|---|---|
0 | 0 | 1 | 0.25 | 0.75 | 0 | 0.75 |
1 | 1 | 7 | 0.125 | 0.625 | 0.125 | 4.375 |
2 | 8 | 1 | 0.125 | 0.5 | 1 | 0.5 |
3 | 9 | 1 | 0.0625 | 0.4375 | 0.563 | 0.4375 |
4 | 10 | 1 | 0.125 | 0.3125 | 1.25 | 0.3125 |
5 | 11 | 79 | 0.0625 | 0.25 | 0.688 | 19.75 |
6 | 90 | 8 | 0.125 | 0.125 | 11.25 | 1 |
7 | 98 | 2 | 0.0625 | 0.0625 | 6.125 | 0.125 |
8 | 100 | | 0.0625 | 0 | 6.25 | |
Sum | | | 1 | | 27.25 | 27.25 |
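As a quick check, feeding the outcomes and ΔS probabilities of Table 3.2 into the illustrative sketch given after the Algorithm Steps reproduces both totals:

```python
# Outcomes X and probabilities read off the ΔS column of Table 3.2;
# discrete_mean_two_ways is the illustrative function defined earlier.
xs = [0, 1, 8, 9, 10, 11, 90, 98, 100]
ps = [0.25, 0.125, 0.125, 0.0625, 0.125, 0.0625, 0.125, 0.0625, 0.0625]
print(discrete_mean_two_ways(xs, ps))  # both sums give 27.25, matching the table
```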
Exercise 30 Apply the Algorithm to X + 100.
Solution. Step (1) now introduces the 0 row, see Table 3.3.
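Concretely, the padded row has X0 = 0, ΔX0 = 100, and S0 = P(X + 100 > 0) = 1, so it contributes a rectangle of area 100 to the S ΔX sum (and nothing to the X ΔS sum, since X0 = 0); both Eqs. 3.10 and 3.11 then give E[X + 100] = 27.25 + 100 = 127.25.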
Table 3.3 Solution to Exercise 30.
j | X | ΔX | P(X = Xj) = ΔS | S | X ΔS | S ΔX |
---|---|---|---|---|---|---|