
of models and designed to communicate with all models that satisfy an API discussed in section 3.7 (where we show that the API can also be added to some existing models with an adapter class). For our purpose, a model is anything that generates scenarios, also called paths in the context of Monte-Carlo simulations, that is, the value of the relevant simulated variables (essentially, underlying asset prices) on the relevant event dates (when something meaningful is meant to happen for the transaction, like a fixing, a payment, an exercise, or the update of a path dependency). Our library consumes scenarios to evaluate payoffs, without concern for how those scenarios are produced.
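
      As a rough sketch of this idea (the names below, SampleData, IModel, init, and generatePath, are purely illustrative and not the library's actual interface, which is the subject of section 3.7), a model could be pictured as follows:

#include <vector>

//  what the payoff observes on one event date; exactly what a sample
//  contains (numeraire, asset prices, rates...) is developed in
//  chapter 5 and part III, here it is reduced to a bare minimum
struct SampleData
{
    double              numeraire;
    std::vector<double> spots;      //  underlying asset prices
};

//  a scenario = one sample per event date
using Scenario = std::vector<SampleData>;

//  illustrative model interface: a model is anything able to fill
//  a scenario for every simulated path
struct IModel
{
    virtual ~IModel() = default;

    //  called once, with the event dates requested by the product
    virtual void init(const std::vector<double>& eventTimes) = 0;

    //  called once per path: write the simulated values into the scenario
    virtual void generatePath(Scenario& path) = 0;
};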

      An alternative that we don't consider is step‐wise simulation, where all paths are computed simultaneously, date by date. The model first generates all the possible states of the world for the first event date, then moves on to the second event date, and so on until it reaches the final maturity. Step‐wise simulation is natural with particular random generators, control variates (when paths are modified after simulation so that the expectation of some function matches some target), and, more generally, calibration inside simulations as in the particle method of [16].

      Monte-Carlo simulations are explained in detail in [19] and [15], in the dedicated chapters of [20], and in our publication [27], which provides a framework and code in C++ and focuses on a parallel implementation.

      We refer to the figure on page 15 for an illustration of the valuation process. The model produces a number of scenarios; every scenario is consumed by the evaluator to compute the payoff over that scenario. This is repeated over a number of paths, followed by an aggregation of the results. The evaluator is the object that computes payoffs (sums of cash flows normalized by the numeraire, which is also part of the scenarios, as described in chapter 5) as a function of the simulated scenarios. The evaluator conducts its work by traversal of the expression tree that represents the script, while maintaining an internal stack for the storage of intermediate results. We describe the evaluator in detail in section 3.6.
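
      As an illustration, and reusing the illustrative IModel and Scenario above, the overall process could be sketched as the loop below, where payoff stands for the work of the evaluator over one scenario (the real evaluator, described in section 3.6, is considerably richer than a simple callback):

#include <cstddef>
#include <functional>

//  sketch of the valuation loop of the figure on page 15
inline double simulate(
    IModel&                                         model,
    const std::function<double(const Scenario&)>&   payoff,    //  the evaluator's work
    const size_t                                    nPaths)
{
    Scenario path;          //  allocated once, overwritten on every path
    double   sum = 0.0;

    for (size_t i = 0; i < nPaths; ++i)
    {
        model.generatePath(path);   //  the model produces a scenario
        sum += payoff(path);        //  the evaluator consumes it
    }

    return sum / nPaths;            //  aggregation of the results
}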

      The evaluation of the script over a path is typically executed a large number of times; therefore, the overall performance largely depends on how efficiently that evaluation is implemented. This efficiency is achieved with many forms of pre-processing.

      Pre-processing is not only about pre-computing parts of subsequent evaluations, and it is not so much about optimizing the arithmetic calculations performed during simulations. It is mainly about moving most of the administrative, logistic overhead to processing time. CPUs perform mathematical calculations incredibly fast, but accessing data in memory may be slow unless it is carefully managed. With hard-coded payoffs (payoffs coded in C++), it is the compiler that optimizes the code, putting all the right data in the right places in the interest of performance. With scripting, the payoff, as a function of the scenario, is built at run time. The compiler cannot optimize this function for us. This is our responsibility, and this is where the pre-processors come into play.

      A special breed of pre‐processors called indexers arrange data for the fastest possible access during simulations, with a massive performance impact. For instance, when some cash‐flow is related to a given Libor rate of a given maturity, the indexer “informs” the evaluator, before simulations start, where in pre‐allocated memory that Libor will live during simulations, so that, at that point, the evaluator reads the simulated Libor there, directly, without any kind of expensive lookup.

      The most noticeable benefits are achieved in the context of interest rates (and multiple assets) discussed in part III. The dates when coupons are fixed or paid, and the maturities of the simulated data such as discount factors and Libors, are independent of scenarios. The value of these simulated variables may be different in every scenario, but their specification, including maturity, is fixed. Pre-processors identify what rates are required for what event dates, and perform memory allocation, indexing, and other logistics before simulations start. During simulations, where performance matters most, the model communicates the values of the simulated data in a pre-indexed array for fast, random access.
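
      As a minimal sketch of the indexing idea (RateDef, LiborIndexer, and their members are assumed names, not the library's implementation), a pre-processor could assign every Libor found in the script a slot in the corresponding event date's vector:

#include <cstddef>
#include <map>
#include <tuple>
#include <vector>

//  simplified rate definition: start date and tenor
struct RateDef
{
    double start;
    double tenor;

    bool operator<(const RateDef& rhs) const
    {
        return std::tie(start, tenor) < std::tie(rhs.start, rhs.tenor);
    }
};

class LiborIndexer
{
    //  one map per event date: rate definition -> slot in that date's vector
    std::vector<std::map<RateDef, size_t>> myIndex;

public:
    explicit LiborIndexer(const size_t nEventDates) : myIndex(nEventDates) {}

    //  called by the pre-processor for every Libor found in the script;
    //  returns the slot where the model will write that Libor
    size_t request(const size_t eventIdx, const RateDef& def)
    {
        auto& slots = myIndex[eventIdx];
        const auto it = slots.find(def);
        if (it != slots.end()) return it->second;   //  already indexed
        const size_t slot = slots.size();
        slots.emplace(def, slot);                   //  new slot
        return slot;
    }

    //  tells the model how many Libors to simulate on each event date
    size_t count(const size_t eventIdx) const { return myIndex[eventIdx].size(); }
};

      The model then knows, from count(), how many Libors to write on every event date, and the evaluator reads each simulated Libor back at its pre-assigned slot, with no search at simulation time.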

      This is covered and clarified in part III, but we introduce an example straight away. Consider a script for a caplet:

01Jun2021 caplet pays 0.25 * max( 0, libor( 03Jun2021, 3m, act/360, L3) - STRIKE) on 03Sep2021

$$
V_0 = N_0 \, E\left[ \frac{0.25 \, \max\left(0, \, L^{3m}(01Jun2021) - STRIKE\right) \, DF(01Jun2021, 03Sep2021)}{N(01Jun2021)} \right]
$$

where $L^{3m}(01Jun2021)$ is the Libor fixed on the event date for the period from 03Jun2021 to 03Sep2021, $DF(01Jun2021, 03Sep2021)$ is the discount factor from the event date to the payment date, and $N$ is the numeraire.

      While this is all mathematically very clear, a practical implementation is more challenging. The product is scripted at run time, so we don't know, at compile time, exactly what simulation data the evaluator expects from the model. Dedicated data structures must be designed for that purpose, something like (in pseudo-code):

Scenario  := vector of SimulData, one per event date
SimulData := numeraire, vector of libors, vector of discounts, ...
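
      In C++, and with illustrative names (refining the bare-bones SampleData of the earlier sketch; the structures actually used are developed in part III), this could read:

#include <vector>

//  what the evaluator observes on one event date
struct SimulData
{
    double              numeraire;
    std::vector<double> libors;     //  sized and indexed by the pre-processor
    std::vector<double> disc;       //  discount factors, idem
    //  ...
};

//  one SimulData per event date
using Scenario = std::vector<SimulData>;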

      In our simple example, we have one event date, 01Jun2021, and we need one Libor fixed on the event date for a loan starting two business days later, on 03Jun2021, with the coupon paid on 03Sep2021, as well as one discount factor for the payment date. We humans know that from reading the script. A pre-processor figures that out by visiting the script at processing time. This allows it not only to pre-allocate the vectors of Libors and discounts, but also to transform the payoff into something like:

0.25 * max( 0, scen[0].libors[0] - STRIKE) * scen[0].disc[0] / scen[0].numeraire
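
      For illustration, and assuming the SimulData and Scenario sketched above, this transformed payoff corresponds to a C++ function along the lines of:

#include <algorithm>    //  std::max

//  every lookup was resolved into an index at processing time, so the
//  evaluation over a scenario is pure arithmetic and direct array access
inline double capletPayoff(const Scenario& scen, const double STRIKE)
{
    return 0.25 * std::max(0.0, scen[0].libors[0] - STRIKE)
         * scen[0].disc[0] / scen[0].numeraire;
}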

      The pre‐processor also “knows” that event date 0 is 01Jun2021, and that