We are concerned here with pseudorandom number generators (RNG’s), in particular those of the highest quality. It turns out to be difficult to find an operational definition of randomness that can be used to measure the quality of a RNG, that is, the degree of independence of the numbers in a given sequence, or to prove that they are indeed independent. The situation for traditional RNG’s (not based on Kolmogorov–Anosov mixing) is well described by Knuth in . The book contains a wealth of information about random number generation, but nothing about where the randomness comes from, or how to measure the quality (randomness) of a generator. With hindsight, it is not surprising that all the widely-used generators described there were later found to have defects (failing tests of randomness and/or giving incorrect results in Monte Carlo (MC) calculations), with the notable exception of RANLUX, which Knuth does mention briefly in the third edition, but without describing its new theoretical basis.

High-level scientific research, like many other domains, has become dependent on Monte Carlo calculations, both in theory and in all phases of experiments. It is well known that the MC method is used primarily for calculations that are too difficult or even impossible to perform analytically, so that our science has become dependent to a large extent on the random numbers used in extensive MC calculations. But how do we know if those numbers are random enough?

In the early days (1960’s), the RNG’s were so poor that, even with the very slow computers of that time, their defects were sometimes obvious, and users would have to try a different RNG. When the result looked good, it was assumed to be correct, but we know now that all the generators of that period had serious defects which could give incorrect results not easily detected. As computers got faster and RNG’s got longer periods, the situation evolved quantitatively, but still unacceptable results were occasionally obtained and of course were not published. This continued until 1992, when the famous paper of Ferrenberg et al. showed that the RNG considered at that time to be the best was giving the wrong answer to a problem in phase transitions, while the older RNG’s known to be defective gave the right answer. Since most often we don’t have any independent way to know the right answer, it became clear that empirical testing of RNG’s, at that time the only known way to verify their quality, was not good enough. The particular problem detected by Ferrenberg et al. was soon solved by Martin Lüscher (in ), but it became clear that if we were to have confidence in MC calculations, we would need a better way to ensure their quality. Fortunately, the theory of Mixing, outlined below, now offers this possibility.

The experience gained from developing, using and discovering defects in many RNG’s has taught us some lessons, which we summarise here (they are explained in detail in ):

- Empirical testing can demonstrate that a RNG has defects (if it fails a test), but passing any number of empirical tests can never prove the absence of defects.
- The period should be much longer than any sequence that will be used in any one calculation, but a long period is not sufficient to ensure lack of defects.
- Making an algorithm more complicated (in particular, combining two or more methods in the same algorithm) may make a better RNG, but it can also make one much worse than a simpler component method alone if the component methods are not statistically independent.
- It is better to use a RNG which has been studied, whose defects are known and understood, than one which looks good but whose defects are not understood.
- There is no general method to determine how good a RNG must be for a particular MC application. The best way to ensure that a RNG is good enough for a given application is to use one designed to be good enough for all applications.

It has been known, at least since the time of Poincaré, that classical dynamical systems of sufficient complexity can exhibit chaotic behaviour, and numerous attempts have been made to make use of this “dynamical chaos” to produce random numbers by numerical algorithms which simulate mechanical systems. It turns out to be very difficult to find an approach which produces a practical RNG, fast enough and accurate enough for general MC applications. To our knowledge, only two such attempts have been successful, both based on the same representation and theory of dynamical systems. This theory grew out of the study of the asymptotic behaviour of classical mechanical systems that have no analytic solutions, developed largely in the Soviet Union around the middle of the twentieth century by Kolmogorov, Rokhlin, Anosov, Arnold, Sinai and others. See, for example, Arnold and Avez for the theory.