Patrick Muldowney

Gauge Integral Structures for Stochastic Calculus and Quantum Electrodynamics



having equal probability; so the distinct sample values of the random variable $X$ are $-1$ and $+1$, with equal probabilities $0.5$. Thus it happens, in this case, that the stochastic integral has the same sample space $\Omega$, and the same probabilities, as each of the random variables $Y_t$.

This discrete or step function device is a fairly standard ploy of mathematical analysis. Following through on this device usually involves moving on to functions $Y_t(\omega)$ which are limits of step functions; and this procedure generally involves use of some conditions which ensure that the integral of a “limit of step functions” is equal to the limit of the integrals of the step functions.
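The conditions in question are, typically, dominated- or monotone-convergence conditions; the following is a standard fact of integration theory, stated here for convenience rather than taken from this book. If step functions $Y^{(n)}$ converge pointwise to $Y$, and are dominated by some $P$-integrable function $G$ (that is, $|Y^{(n)}| \le G$ for all $n$), then

\[
\lim_{n\to\infty} \int_{\Omega} Y^{(n)}(\omega)\,dP(\omega) \;=\; \int_{\Omega} \lim_{n\to\infty} Y^{(n)}(\omega)\,dP(\omega) \;=\; \int_{\Omega} Y(\omega)\,dP(\omega).
\]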

      Broadly speaking, it is not unreasonable to anticipate that this approach will succeed for measurable (or “reasonably nice”) functions—such as functions which are “smooth”, or which are continuous.

But the full meaning of measurability is quite technical, involving infinite operations on $\sigma$-algebras of sets. This can make the analysis difficult.
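For reference, the infinite operations in question are the defining closure properties of a $\sigma$-algebra $\mathcal{A}$ of subsets of $\Omega$ (a standard definition, included here for convenience):

\[
A \in \mathcal{A} \;\Rightarrow\; \Omega \setminus A \in \mathcal{A}, \qquad A_1, A_2, A_3, \ldots \in \mathcal{A} \;\Rightarrow\; \bigcup_{n=1}^{\infty} A_n \in \mathcal{A};
\]

a function $f: \Omega \rightarrow \mathbf{R}$ is then measurable if $f^{-1}(B) \in \mathcal{A}$ for every Borel set $B \subseteq \mathbf{R}$.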

      Accordingly, it may be beneficial to seek an alternative approach to the analysis of random variation for which measurability is not the primary or fundamental starting point. Such an alternative is demonstrated in Chapters 2 and 3, leading to an alternative exposition of stochastic integration in ensuing chapters.

      The method of exposition is slow and gradual, starting with the simplest models and examples. The step‐by‐step approach is as follows.

Though there are other forms of stochastic integral, the focus will be on $\int Z\,dX$, where $Z$ and $X$ are stochastic processes.

The sample space will generally be $\Omega = \mathbf{R}^{\mathbf{T}}$, where $\mathbf{R}$ is the set of real numbers, $\mathbf{T}$ is an indexing set such as a real interval $[0,T]$, and $\mathbf{R}^{\mathbf{T}}$ is a cartesian product.

$Z$ and $X$ are stochastic processes $Z_s$, $X_s$ ($s \in \mathbf{T}$); and $g$ is a deterministic function. More often, the process $Z$ is $X$, so the stochastic integral⁶ is $\int X\,dX$.

The approach followed in the exposition is to build up to such stochastic integrals by means of simpler preliminary examples, broadly on the following lines (a small numerical sketch of the first stage follows the list):

– Initially take $\mathbf{T}$ to be a finite set, then a countable set, then an uncountable set such as $[0,T]$.

– Initially, let the process(es) $X$ (and/or $Z$) be very easy versions of random variation, with only a finite number of possible sample values.

– Similarly let the integrand $g$ be an easily calculated function, such as a constant function or a step function.

– Gradually increase the level of “sophistication”, up to the level of recognizable stochastic integrals.
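The following is a minimal sketch (mine, not the book's) of the very first stage of that progression: $\mathbf{T}$ finite, the increments of $X$ taking only the two values $-1$ and $+1$ with equal probability $0.5$, a constant integrand, and the “integral” reduced to a finite Riemann-type sum computed path by path. All names are illustrative.

    # Minimal sketch of the first stage: finite index set, two-valued
    # increments, constant integrand; the "integral" is a finite sum.
    from itertools import product

    T = [1, 2, 3]  # a finite index set (illustrative choice)

    # A sample path is determined by its increments, each -1 or +1 with
    # probability 0.5, so the sample space is the finite set {-1,+1}^3.
    Omega = list(product([-1, +1], repeat=len(T)))
    P = {omega: 0.5 ** len(T) for omega in Omega}  # equal probabilities 1/8

    def path(omega):
        """Sample path of X: cumulative sums of the increments, with X_0 = 0."""
        values, x = [0], 0
        for dx in omega:
            x += dx
            values.append(x)
        return values

    def stochastic_sum(g, omega):
        """Finite analogue of the integral of g(X) dX for one sample path:
        the sum of g(X_{t_{j-1}}) * (X_{t_j} - X_{t_{j-1}})."""
        xs = path(omega)
        return sum(g(xs[j - 1]) * (xs[j] - xs[j - 1]) for j in range(1, len(xs)))

    def g(x):
        return 1.0  # constant integrand: each sum telescopes to X_{t_3} - X_0

    # Expected value over the finite sample space: 0.0, since every
    # increment has mean zero.
    print(sum(P[w] * stochastic_sum(g, w) for w in Omega))

The point of the progression above is that this finite, sample-by-sample computation is entirely unproblematic; the difficulties enter when $\mathbf{T}$ becomes uncountable.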

This progression helps to develop a more robust intuition for this area of random variation. On that basis, the concept of “stochastic sums” is introduced. These are more flexible and more far-reaching than stochastic integrals; and, unlike the latter, they are not over-burdened with issues involving weak convergence.

1 The random variable could be normally distributed, or Poisson, or binomial, etc.

2 It is sometimes convenient to denote the integrand function by $Z_s$, where $Z_s$ can be $g(s)$ (deterministic), $Y_s$ (random, independent of $X$), or $g(X_s)$ (random, dependent on $X$).

3 This construction is also described in Muldowney [115].

4 Recall also that I1 and I2 make reference to a construction.

5 See [MTRV] for discussion of complex-valued random variables.

6 There are other important stochastic integrals.

      The previous chapter makes reference to random variables as functions which are measurable with respect to some probability domain. This conception of random variation is quite technical, and the aim of this chapter is to illuminate it by focussing on some fundamental features.

      In broad practical terms, random variation is present when unpredictable outcomes can, in advance of actual occurrence, be estimated to within some margin of error. For instance, if a coin is tossed we can usually predict that heads is an outcome which is no more or no less likely than tails. So if an experiment consists of ten throws of the coin, it is no surprise if the coin falls heads‐up on, let us say, between four and six occasions. This is an estimated outcome of the experiment, with estimated margin of error.

In fact, with a little knowledge of binomial probability distributions, we can predict that there is approximately 66 per cent chance that heads will be thrown on four, five or six occasions out of the ten throws. So if a ten-throw trial is repeated one hundred times, the outcome should be four, five, or six heads in approximately sixty-six of the one hundred trials.
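For the record, the binomial arithmetic behind this estimate (fair coin, ten independent throws) is

\[
\Pr(4 \le \text{heads} \le 6) \;=\; \sum_{k=4}^{6} \binom{10}{k}\Bigl(\frac{1}{2}\Bigr)^{10} \;=\; \frac{210 + 252 + 210}{1024} \;=\; \frac{672}{1024} \;\approx\; 0.656.
\]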

Such knowledge enables us to estimate good betting odds for placing a wager that a ten-throw trial will produce this outcome. This is the “naive or realistic” view.
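To make the conversion from probability to odds explicit (an illustrative calculation, not from the text): with $p = 672/1024$, the fair odds against the outcome are

\[
\frac{1-p}{p} \;=\; \frac{352}{672} \;=\; \frac{11}{21},
\]

that is, roughly $1$ to $2$ against; a wager accepted at any longer odds is favourable to the bettor.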

Can this fairly easily understandable scenario be expressed in the technical language of probability theory, as in Chapter 1 above? What is the probability space $(\Omega, \mathcal{A}, P)$? What is the $P$-measurable function which represents the random variable corresponding to a single toss of a coin?
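(One standard answer, anticipating the discussion below: $\Omega = \{H, T\}$, $\mathcal{A} = \{\emptyset, \{H\}, \{T\}, \Omega\}$, $P(\{H\}) = P(\{T\}) = \frac{1}{2}$, with the random variable represented, for instance, by the measurable function $X(H) = 1$, $X(T) = -1$.)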

      The following remarks are intended to provide a link between the “naive or realistic” view, and the “sophisticated or mathematical” interpretation of this underlying reality.