Group of authors

Mechanical Engineering in Uncertainties From Classical Approaches to Some Recent Developments


respect to these different sources of uncertainty, a distinction is often made between aleatory and epistemic uncertainties (Vose 2008; National Research Council 2009), although this distinction is debatable, as will be discussed in more detail in section 1.2.

      Aleatory uncertainty is also referred to as irreducible uncertainty, stochastic uncertainty, inherent uncertainty or type I uncertainty. This uncertainty typically arises from environmental stochasticity, fluctuations in time, variations in space, heterogeneities and other intrinsic differences in a system. It is often referred to as irreducible uncertainty because it cannot be reduced further except by modification of the problem under consideration. On the other hand, it can be better characterized when it is empirically estimated. For example, the characterization of the variability of a material property can be improved by increasing the number of samples used, allowing a better estimate of statistical properties such as the mean and standard deviation.
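      The effect of the sample size on the characterization of variability can be sketched in a few lines of Python. The material property and its "true" mean and standard deviation below are hypothetical values chosen purely for illustration, not taken from the text:

```python
import random
import statistics

random.seed(0)

# Hypothetical material property: a yield strength with true mean 350 MPa
# and true standard deviation 15 MPa (illustrative values only).
TRUE_MEAN, TRUE_STD = 350.0, 15.0

def draw_samples(n):
    """Simulate n independent specimen measurements of the property."""
    return [random.gauss(TRUE_MEAN, TRUE_STD) for _ in range(n)]

# The variability itself is not reduced, but its statistical description
# (mean, standard deviation) is estimated better as n grows.
for n in (10, 100, 10_000):
    samples = draw_samples(n)
    print(f"n={n:6d}  mean≈{statistics.mean(samples):7.2f}"
          f"  std≈{statistics.stdev(samples):6.2f}")
```

Note that increasing `n` sharpens the estimates of the mean and standard deviation without shrinking the scatter of the property itself, which is precisely the distinction made in the text between characterizing and reducing aleatory uncertainty.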

      An example of aleatory uncertainty can be seen in an (unbiased) coin toss (that is, “heads or tails”). The intrinsic characteristics of the toss create uncertainty about its outcome, with a probability of 0.5 of obtaining tails. Assuming that tails is an undesirable outcome, it would be desirable to reduce the probability of obtaining tails, and thus the uncertainty about the desired outcome (obtaining heads). However, without breaking the rules of the game, that is, without modifying the problem under consideration, it is not possible to reduce this uncertainty, hence the term “irreducible uncertainty”. On the other hand, if this uncertainty is characterized empirically on the basis of several tosses, it can clearly be characterized better by increasing the number of tosses.
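      The empirical characterization of the coin toss can be illustrated with a short Python simulation (a sketch assuming a fair coin; the toss counts are arbitrary):

```python
import random

random.seed(42)

def estimate_tails_probability(n_tosses):
    """Empirically estimate P(tails) from n_tosses of a fair coin."""
    tails = sum(random.random() < 0.5 for _ in range(n_tosses))
    return tails / n_tosses

# The estimate of the fixed probability 0.5 improves with the number
# of tosses, even though the uncertainty about the outcome of any
# single toss remains irreducibly the same.
for n in (10, 1_000, 100_000):
    print(f"{n:7d} tosses -> P(tails) estimate: "
          f"{estimate_tails_probability(n):.3f}")
```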

      In terms of an engineering example, the amplitude of the gusts that an aircraft is likely to encounter during its lifetime can be seen as an aleatory, or irreducible, uncertainty. Indeed, when designing a new model of aircraft, the engineer would like to reduce this uncertainty as much as possible in order to reduce the weight of the aircraft structure. Unfortunately, since this uncertainty is essentially related to environmental stochasticity, the engineer has no way to reduce it without changing the design being considered; the uncertainty is therefore seen as irreducible. However, it could perhaps be reduced by changing the problem under study. For example, the engineer might consider installing sensors that allow the aircraft to detect the amplitude of turbulence along its trajectory in advance, thus allowing pilots to undertake avoidance maneuvers. Nevertheless, such a choice would have numerous and serious consequences. (Would such an aircraft be certifiable? Is it more economical to avoid turbulence rather than to design the aircraft to withstand it? Would passenger comfort be satisfactory? etc.) Usually, the problem is thus considered fixed, and the reducible or irreducible nature of the uncertainties is assessed for that given problem.

      An example of epistemic uncertainty would be that associated with estimating the age (in years) of an individual. Let us assume that we have just heard on the radio that a Nobel Prize-winning scientist is going to give an acceptance speech in our town, and we would like to know the age of the scientist, which is not mentioned on the radio. At this point, let us say that we can only estimate the age of this person as between 30 and 90 years old; there is thus a lot of uncertainty about their age. However, as mentioned earlier, epistemic uncertainty can be reduced through improved knowledge. If we attend the acceptance speech, we will have the opportunity to see this person, which could allow us to narrow the uncertainty about their age to, let us say, between 50 and 60 years. If we go on to talk to them after the speech, we may be able to obtain additional information that allows us to reduce the uncertainty about their age even further. In this case, the uncertainty could even be reduced to zero if we find out their date of birth.

      A typical example of epistemic uncertainty in engineering problems is measurement uncertainty. As with the scientist’s age, the quantity to be measured has a true value that is fixed (considering measurements at the macroscopic, not the quantum, scale). Nevertheless, measurement instrumentation usually only allows this quantity to be determined with some uncertainty. This uncertainty can be reduced by developing better instruments, drawing on improved knowledge of the measurement phenomena involved; hence the term reducible uncertainty, although, in general, this does not mean that the uncertainty can be reduced to zero.

      The purpose of this chapter is therefore to provide an overview of some of the different approaches used to represent and quantify uncertainties, both aleatory and epistemic. It is organized as follows. In section 1.2, a discussion about the need to distinguish between epistemic and aleatory uncertainty is presented. In section 1.3, an illustration of the probabilistic modeling approach is provided, including illustrations of some cases where its use may be problematic for representing epistemic uncertainty. In section 1.4, p-box theory is presented, which is an extension of probability theory that is designed to better address problematic cases in modeling epistemic uncertainties. In section 1.5, interval analysis is briefly discussed and in section 1.6 fuzzy set theory is addressed. In section 1.7, possibility theory is introduced, while evidence theory (or Dempster–Shafer theory) is presented in section 1.8. Some concluding remarks and discussions are provided in section 1.9.

      The need to classify uncertainty as epistemic or aleatory has been the subject of much debate and discussion (Hoffman and Hammonds 1994; Apostolakis 1999; Der Kiureghian and Ditlevsen 2009; Lemaire 2014). It therefore seemed useful to devote a section to the challenges and the usefulness of this distinction. This debate arises from the controversy between Niels Bohr and Albert Einstein concerning the nature of the randomness observed at the quantum scale. For the former, it was a fundamental randomness, while for the latter, it reflected an incomplete knowledge of phenomena at these scales. To illustrate his point of view, Einstein stated that “God does not play dice”. Subsequent theoretical and experimental work by physicists seems to have proved Niels Bohr right: the randomness observed at these scales is intrinsic to quantum nature. Now that the debate at the microscopic scale has been closed, one may raise the