Talk:Randomness test

Needs mentioning:

The tests of Maurice Kendall and Bernard Babington Smith: chi-squared, gaps, coupon collector, runs up, runs down.

The discussion by Knuth in The Art of Computer Programming. In the first edition this was dominated by the Kendall and Babington Smith tests. A later edition has more, much of it inspired by Marsaglia and Diehard.

There is newer random number testing software available: NIST, RaBiGeTe, SPRNG, TestU01, gjrand, Bob Jenkins' unnamed tests with ISAAC (cipher), PractRand, and others. These are intended for testing actual data at sizes up to gigabytes at least. They are suites, each containing several different tests, and between them there are many tests which can be quite useful in finding some kinds of non-randomness that would be a problem in some applications. I would guess that some of these tests are not formally published anywhere, despite their usefulness. Others, including Maurer's universal statistical test, for instance, have been heavily analysed in peer-reviewed publications.

There's a huge amount that could be written about this topic, which is of enormous practical importance and often ignored by random or pseudo-random number users, who then publish scientific papers containing wrong answers.

198.142.44.32 00:19, 23 March 2007 (UTC)

This article jumps into the technical details without any simple or "intuitive" introduction to start with! That needs to be added. --Kjetil Halvorsen 23:02, 27 January 2010 (UTC) —Preceding unsigned comment added by Kjetil1001 (talk • contribs)

The example needs more work. Clearly, if arbitrary characters are allowed the description can be much shorter. For example, the MIME base64 encoding only needs 12 characters for 64 bits. — Preceding unsigned comment added by Jowagner (talk • contribs) 17:45, 18 November 2017 (UTC)
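The base64 arithmetic above checks out: 64 bits is 8 bytes, and base64 encodes every 3 bytes as 4 characters, so 8 bytes come out as 12 characters (including padding). A quick Python sketch to confirm:

```python
import base64
import os

# 64 bits = 8 bytes of arbitrary data.
bits64 = os.urandom(8)

# base64 maps each 3-byte group to 4 characters, padding the
# final partial group, so ceil(8/3) * 4 = 12 characters.
encoded = base64.b64encode(bits64)
print(len(encoded))  # 12
```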

The whole thing needs more work. String 1 could be written in APL as 64ρ"01" which is only 7 characters, but so what? Ehrenkater (talk) 19:07, 18 November 2017 (UTC)

Validity of tests and what it means to test randomness
There is no such thing as a test that will reliably pass for every random set. You cannot answer "are these numbers random?" with certainty. What you *can* do is assess the distribution of the numbers to a certain level of assurance, and set that level as the bar. So, the statistical tests essentially determine the *likelihood* of randomness, but never truly determine true randomness.

This is covered a bit in a discussion at random.org: https://www.random.org/analysis/
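To make the "level of assurance" point concrete, here is a minimal sketch of one such statistical test: a chi-squared frequency (equidistribution) test on decimal digits. The function name and sample sizes are my own choices for illustration; the test can only fail to find this kind of non-randomness at a chosen significance level, never prove randomness.

```python
from collections import Counter
import random

def chi_squared_frequency(digits, num_categories=10):
    """Chi-squared statistic for equidistribution of digits 0..9."""
    n = len(digits)
    expected = n / num_categories
    counts = Counter(digits)
    # Sum of (observed - expected)^2 / expected over all categories.
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in range(num_categories))

sample = [random.randrange(10) for _ in range(10000)]
stat = chi_squared_frequency(sample)
# Compare stat against the chi-squared distribution with 9 degrees
# of freedom; at the 5% level the critical value is about 16.92.
# Exceeding it is evidence against uniformity, not proof of it.
```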

Another factor to consider is that "random" is an overloaded term that can mean different things to different people and in different contexts. Quite often, someone looking for "random" is not really looking for mathematical randomness, but something else. For example, many would argue that the same item in a set chosen twice or more in a row is not random, when in fact that has no bearing on randomness by definition. As a result, people asking for randomness are often actually looking for something else, such as uniqueness, even distribution, or certain expectations of arrangement. Spotify has had multiple discussions about the difficulty of providing a satisfactory form of "shuffle" in its playlist algorithms. Statistically random algorithms led to customer complaints that they "weren't random."

https://labs.spotify.com/2014/02/28/how-to-shuffle-songs/

Some software testers may be asked to "test if this generator is random." Poor assumptions of "what is random" on both sides of the testing paradigm can confound this effort. Consider, for example, a test that fails if an item Q in a set of size P is chosen more than once among fewer than P selections. Such a test is fallacious, as true randomness places no limit on the number of times Q may be chosen from P in any number of selections. Random.org points to a Dilbert comic in which a random number generator is seen returning "9,9,9,9,9,9." It is not valid to discount this as non-random, as such a sequence is exactly as likely as any other.
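The "equally likely" claim is simple arithmetic: under a fair decimal generator, any *specific* sequence of six digits has probability 10^-6, whether it looks patterned or not. A trivial sketch (the comparison sequence is an arbitrary example of mine):

```python
# Probability of any one specific sequence of six fair decimal digits.
p_all_nines = (1 / 10) ** 6        # "9,9,9,9,9,9"
p_other = (1 / 10) ** 6            # e.g. "4,8,1,5,9,2" -- same formula

print(p_all_nines)             # 1e-06
print(p_all_nines == p_other)  # True
```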

64.187.160.52 (talk) 20:00, 30 March 2020 (UTC)