A collective of authors

Chance, Calculation and Life



POMEROL.

PART 1 Randomness in all of its Aspects

      1

      Classical, Quantum and Biological Randomness as Relative Unpredictability

      We propose the thesis that randomness is unpredictability with respect to an intended theory and measurement. From this point of view, we briefly discuss various forms of randomness that physics, mathematics and computer science have proposed. Computer science allows us to discuss unpredictability in an abstract, yet very expressive way, which yields useful hierarchies of randomness and may help to relate its various forms in natural sciences. Finally, we discuss biological randomness – its peculiar nature and role in ontogenesis and in evolutionary dynamics (phylogenesis). Randomness in biology is positive as it contributes to organisms’ and populations’ structural stability by adaptation and diversity.

      Randomness is everywhere, for better or for worse: vagaries of weather, day-to-day fluctuations in the stock market, random motions of molecules or random genetic mutations are just a few examples. Random numbers have been used for more than 4,000 years, but they have never been in such high demand as they are in our time. What is the origin of randomness in nature, and how does it relate to the only access we have to phenomena, that is, through measurement? How does randomness in nature relate to randomness in sequences of numbers? The theoretical and mathematical analysis of randomness is far from obvious. Moreover, as we will show, it depends on (and is relative to) the particular theory being used, the intended theoretical framework for the phenomena under investigation.

      Democritus (460–370 BCE) attributed the causes of things to necessity and chance alike, arguing, for example, that atoms’ disorderly motion can produce an orderly cosmos. However, the first philosopher to think about randomness was most likely Epicurus (341–270 BCE), who argued that “randomness is objective, it is the proper nature of events”.

      For centuries, though, randomness was mathematically analyzed only in games and gambling. Luca Pacioli (in Summa de arithmetica, geometria, proportioni et proportionalità, 1494) studied how stakes had to be divided among gamblers, particularly in the difficult case when the game stops before the end. It is worth noting that Pacioli, a top Renaissance mathematician, also invented modern double-entry bookkeeping: human activities, from gambling to financial investments, were considered the locus of chance. As a matter of fact, early Renaissance Florence was the birthplace of banks, paper currency and (risky) financial investments and loans1.

      Cardano (in De Ludo Aleae (The Game of Dice), 1525) developed Pacioli’s analysis further. His book was only published in 1663, so Fermat and Pascal independently and more rigorously rediscovered the “laws of chance” for interrupted games in a famous exchange of letters in 1654. Pascal clarified, against common sense, the independence of successive trials in games of chance: dice do not remember the previous draws. Probabilities were generally considered a tool for facing the lack of knowledge in human activities: in contrast to God, we can neither predict the future nor master the consequences of our (risky) actions. For the thinkers of the scientific revolution, randomness is not in nature, which is a perfect “Cartesian Mechanism”: science is meant to discover the gears of its wonderful and exact mechanics. At most, as suggested by Spinoza, two independent, well-determined trajectories may meet (a walking man and a falling tile) and produce a random event. This may be considered a weak form of “epistemic” randomness, as the union of the two systems, if known, may yield a well-determined and predictable system and encounter.
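
      The Fermat–Pascal solution to the interrupted game (the “problem of points”) divides the stakes in proportion to each player’s probability of winning had the game continued. A minimal sketch in Python, assuming a fair per-round probability (the recursion and the illustrative scoreline are ours, not taken from the correspondence):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def p_first_wins(a, b, p=Fraction(1, 2)):
    """Probability that the first player wins the match when they
    need `a` more points and the opponent needs `b` more points,
    with per-round win probability `p` (fair game by default)."""
    if a == 0:
        return Fraction(1)   # first player already won
    if b == 0:
        return Fraction(0)   # opponent already won
    # Next round is won with probability p, lost with probability 1 - p.
    return p * p_first_wins(a - 1, b) + (1 - p) * p_first_wins(a, b - 1)

# Illustrative case: first to 3 points, play interrupted at 2-1.
# The leader needs 1 more point, the trailing player needs 2.
share = p_first_wins(1, 2)
print(share)  # 3/4: the leader should receive 3/4 of the stakes
```

      The exact rational arithmetic (via Fraction) reproduces the division of stakes that Pascal computed by hand.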

      Galileo, who also studied randomness, though only for dice (Sopra le scoperte de i dadi, 1612), was the first to relate measurement and probability (1632). For him, errors in physical measurement are unavoidable, yet small errors are the most probable. Moreover, errors are distributed symmetrically around the mean value, whose reliability increases with the number of measurements.
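
      Galileo’s observations can be illustrated with a short simulation; the symmetric Gaussian error model below is a modern assumption used for illustration, not Galileo’s own:

```python
import random
import statistics

random.seed(0)
true_value = 10.0  # hypothetical quantity being measured

def measure(n):
    """Simulate n measurements of true_value with small,
    symmetrically distributed errors."""
    return [true_value + random.gauss(0, 1.0) for _ in range(n)]

# The sample mean typically approaches the true value as n grows:
for n in (10, 100, 10_000):
    m = statistics.mean(measure(n))
    print(n, round(abs(m - true_value), 3))
```

      Running this shows the deviation of the mean shrinking as the number of measurements increases, which is exactly the growing “reliability” Galileo described.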

      1.1.2. Preliminary remarks

      Randomness is a tricky concept which can come in many flavors (Downey and Hirschfeldt 2010). Informally, randomness means unpredictability, a lack of patterns or correlations. Why is randomness so difficult to understand and model? An intuitive appreciation comes from the myriad of misconceptions and logical fallacies related to randomness, like the gambler’s fallacy. In spite of the work of mathematicians since the Renaissance, the belief persists that after a coin has landed on tails 10 consecutive times, it is more likely to land on heads on the next flip. Similarly, common sense holds that there are “due” numbers in the lottery (since all numbers eventually appear, those that have not come up yet are “due”, and thus more likely to come up soon). Every proposed definition of randomness seems doomed to be falsified by some more or less clever counter-example.
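
      The gambler’s fallacy is easy to check empirically: in a long sequence of fair coin flips, the flip that follows a run of ten tails still comes up heads about half the time. A quick Monte Carlo sketch (sequence length and seed are arbitrary choices):

```python
import random

random.seed(1)
flips = [random.randrange(2) for _ in range(1_000_000)]  # 0 = tails, 1 = heads

# Collect the outcome that immediately follows each run of 10 tails.
after_ten_tails = [
    flips[i + 10]
    for i in range(len(flips) - 10)
    if all(f == 0 for f in flips[i:i + 10])
]

frequency = sum(after_ten_tails) / len(after_ten_tails)
print(round(frequency, 2))  # close to 0.5: no bias toward heads
```

      The coin has no memory: conditioning on the previous ten outcomes leaves the next-flip distribution unchanged, which is Pascal’s independence observation in computational form.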

      Even intuitively, the quality of randomness varies: tossing a coin may seem to produce a sequence of zeroes and ones that is less random than that produced by Brownian motion. This is one of the reasons why those who rely on randomness, in casinos, lotteries, polling, elections and clinical trials, are hard-pressed to “prove” that their choices are “really” random. A new challenge is thus emerging: to “prove randomness”.
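
      In practice, such users attempt to “certify” randomness with batteries of statistical tests. As one illustrative example (not a method proposed in this chapter), the frequency (monobit) test in the style of the NIST statistical test suite checks, under a normal approximation, whether zeroes and ones are balanced:

```python
import math
import random

def monobit_pvalue(bits):
    """Frequency (monobit) test: under the null hypothesis of fair,
    independent bits, the normalized sum of +/-1 values is
    approximately standard normal; return a two-sided p-value."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

random.seed(2)
fair = [random.randrange(2) for _ in range(100_000)]
biased = [1] * 60_000 + [0] * 40_000   # 60% ones: a crude bias

print(monobit_pvalue(fair))            # p-value; uniform under the null
print(monobit_pvalue(biased) < 1e-6)   # True: the bias is detected
```

      A tiny p-value rejects the fairness hypothesis; a single passed test, of course, never “proves” randomness, which is precisely the difficulty noted above.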

      Measurement is a constant underlying issue: we can only associate a number with a “natural” process through measurement. Most of the time we actually associate an interval (an approximation), an integer or a rational number, as a form of counting or drawing.

      Let us finally emphasize that, in spite of the existing theoretical differences in the understanding of randomness, our approach unifies the various forms of randomness in a relativized perspective:

      Randomness is unpredictability with respect to the intended theory and measurement.

      We will proceed from this epistemological standpoint, which will allow us to discuss and compare randomness in different theoretical contexts.