Recommendation Needed for a PRNG

I'm looking for a Pseudo-Random Number Generation algorithm capable of producing a random 128-/256-bit number. Security and cryptographic integrity are not important; simplicity and performance are valued above all else. Ideally, the algorithm will be usable on modern mobile phone platforms. Can you recommend such an algorithm? Is it feasible? Thanks in advance!

You should try SFMT: SIMD-oriented Fast Mersenne Twister.
This PRNG was designed to produce 128-bit integers by taking advantage of the vector instructions offered by modern processors.
For more information about this PRNG, please have a look at another post where I recommended SFMT: best pseudo random number generator
For a complete description, see the official page, where you can also download SFMT: http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html
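If you go with SFMT, recent releases expose a struct-based API along these lines. This is only a sketch: the sfmt_* names below are from SFMT 1.4's SFMT.h, so check the header in whichever version you download.

```c
#include <stdint.h>
#include <stdio.h>
#include "SFMT.h"   /* from the SFMT distribution */

int main(void) {
    sfmt_t sfmt;
    sfmt_init_gen_rand(&sfmt, 4321u);   /* seed the generator */

    /* Build a 128-bit value by concatenating four 32-bit outputs. */
    uint32_t r[4];
    for (int i = 0; i < 4; i++)
        r[i] = sfmt_genrand_uint32(&sfmt);
    printf("%08x%08x%08x%08x\n", r[3], r[2], r[1], r[0]);
    return 0;
}
```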

If simplicity is your top priority, look at the generator in this article. The heart of the generator is just two lines of code. It's not state-of-the-art like Mersenne Twister, but it is simpler and still has good statistical properties.
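The article isn't reproduced here, but a "two lines of code" generator of the kind it describes is Marsaglia's paired multiply-with-carry scheme. The following is a sketch from memory, with Marsaglia's customary default seeds; see the linked article for the exact form and its caveats (this is also the MWC-based generator criticized in the next answer).

```c
#include <stdint.h>

static uint32_t m_z = 362436069u;   /* must not be zero */
static uint32_t m_w = 521288629u;   /* must not be zero */

uint32_t get_random(void) {
    /* The two-line core: two multiply-with-carry steps... */
    m_z = 36969u * (m_z & 65535u) + (m_z >> 16);
    m_w = 18000u * (m_w & 65535u) + (m_w >> 16);
    return (m_z << 16) + m_w;       /* ...combined into one output */
}
```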

http://burtleburtle.net/bob/rand/smallprng.html
That one is small (128 bits of state) and fast, and it passes every general-purpose statistical test available at this time. Every other PRNG linked to in the responses here so far fails tests quickly - the MWC-based PRNG fails many, many tests, while SFMT fails only binary matrix rank / linear complexity type tests.
As others have said, to get 128 bits simply concatenate sequential 32-bit outputs. Do not forcibly extract more bits from a PRNG's state than its normal output function yields - that will generally degrade output quality, sometimes by a large amount.
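Reproduced from memory, the core of the small PRNG on that page ("JSF") looks like the sketch below; the `ranval128` helper at the end is an added illustration of the concatenation advice above, not part of the original. See the link for the canonical code.

```c
#include <stdint.h>

typedef struct { uint32_t a, b, c, d; } ranctx;   /* 128 bits of state */

#define rot(x,k) (((x) << (k)) | ((x) >> (32 - (k))))

uint32_t ranval(ranctx *x) {
    uint32_t e = x->a - rot(x->b, 27);
    x->a = x->b ^ rot(x->c, 17);
    x->b = x->c + x->d;
    x->c = x->d + e;
    x->d = e + x->a;
    return x->d;
}

void raninit(ranctx *x, uint32_t seed) {
    x->a = 0xf1ea5eed;
    x->b = x->c = x->d = seed;
    for (int i = 0; i < 20; ++i)
        (void)ranval(x);            /* warm-up rounds */
}

/* 128 bits = four consecutive 32-bit outputs, per the advice above. */
void ranval128(ranctx *x, uint32_t out[4]) {
    for (int i = 0; i < 4; i++)
        out[i] = ranval(x);
}
```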

Related

Is there a way to test the quality of a PRNG for multidimensional use?

I'm in the process of evaluating some PRNGs, both in terms of speed and quality. One aspect of quality I want to test is multidimensional distribution and bias.
I know of TestU01's batteries, and I plan on using them (and, perhaps, others that the NIST suggests).
But what about testing multidimensional bias? Boost's PRNGs have some comments, and the Mersenne Twister is known to be uniform in several hundred dimensions, while the Hellekalek PRNG has good uniform distribution in "several" dimensions (however many that means...).
I imagine the runtime complexity of a battery testing for multidimensional bias would increase with each dimension, so it's possible there isn't a suitable battery for this test. However, I haven't confirmed that suspicion.
Is there a known way to test PRNGs for multidimensional bias? I'd even be okay if the test is limited to 2, 3, or 4 dimensions; that would be better than no test at all.
TestU01 is good. PractRand is arguably better (full disclosure: I wrote PractRand). For some categories of PRNGs, RaBiGeTe is also decent. There are other options which are not good (NIST STS, Diehard, and Dieharder are well known but ineffective).
Any good test suite will test a wide variety of numbers of "dimensions", though fundamentally it is easier to do comprehensive testing for shorter range correlations, so a better job is done on smaller numbers of dimensions.
Generally, anything that passes the TestU01 BigCrush battery and/or one terabyte of output through the PractRand standard battery is likely to be fine for real-world non-cryptographic usage. This kind of testing cannot identify some categories of problems, however, particularly inter-seed correlation issues.
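In practice, hooking a generator up to these suites usually means streaming raw output bytes to stdout and piping them into the tester, e.g. `./gen | RNG_test stdin32` for PractRand. A minimal sketch of the writer side, with xorshift32 as a stand-in for the PRNG under test:

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-in generator (xorshift32); replace with the PRNG under test. */
static uint32_t s = 0x12345678u;
static uint32_t next32(void) {
    s ^= s << 13; s ^= s >> 17; s ^= s << 5;
    return s;
}

int main(void) {
    uint32_t buf[1024];
    for (;;) {                      /* stream until the tester stops reading */
        for (int i = 0; i < 1024; i++)
            buf[i] = next32();
        if (fwrite(buf, sizeof buf[0], 1024, stdout) != 1024)
            return 0;
    }
}
```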

Random number understanding [duplicate]

This question already has answers here:
How does a random number generator work? (9 answers)
how does random() actually work? (7 answers)
Closed 8 years ago.
I can't understand how a computer can make random numbers.
I mean, what piece of hardware can do this? And does the computer have only one source for this, which all the programming languages use?
Thanks in advance.
The short answer is that computers can't easily make truly random numbers. There are a couple of ways to generate random numbers, though - some fast but not truly random, and some slow but true...
Pseudo-Random Generators
Most low-level languages (namely C) have built-in functionality that allows them to generate pseudo-random numbers, but this is not true random number generation. It works by starting with a "seed" value, an initial number, and then modifying this seed, over and over again, to create a "random" sequence.
They fall short in that, with the right seed and factors, conditions can be created to force a certain number to be generated. Also, due to the nature of the generation, when graphed, the results will not be perfectly evenly distributed. As mentioned in another answer, there are things a programmer can do to make it more random, but the method cannot be truly random, for the above reasons. An example is the random number generator in most programming languages. It is hard-coded, and is performed in the CPU.
Entropy Generators
Random number generators that work through entropy generation measure a type of entropy (disorder - or, as I have heard it defined, chaos, though @duffymo has informed me that chaos is not a good synonym. Sorry!) that is presumed to be random. Atmospheric and thermal noise are common things measured. They are generally considered to be "better" than the above choice, as they are, for the most part, closer to true randomness. One issue is that they are slow - numbers cannot be generated unless enough entropy is harvested. An example is random.org, an atmospheric-noise entropacal random number generator (say that 10 times fast!). Generation is performed by whatever piece of hardware makes the measurement of entropy.
Quantum Generators
A subset of entropy generators, quantum generators measure quantum phenomena (effects with no counterpart in classical physics), such as the spin of particles, to determine a number. A downside is that true quantum generators are expensive. An example is this piece of hardware, which uses the path of a photon to determine a number.
Hope this helps!
It can be hardware, but most languages like Java and C# use a software construct best explained by Donald Knuth in his opus "The Art of Computer Programming": the linear congruential generator.
As you can imagine, there are problems with these approaches.
There are attempts to improve it (e.g. Mersenne Twister).
There are extensive statistical tests to assess a given random number generation algorithm called the Diehard Tests. (I always picture big vehicles in a snowstorm being cranked in the cold by honking batteries when I hear about those tests.)
I'd be willing to bet that the period on these pseudo random number generators is more than adequate for your applications.
The best way to generate a truly random number is to use a quantum process from nature in hardware.

Is there a somewhat-reliable way to detect that a list of integers came from a common PRNG?

Basically I'm looking for a detective function. I pass it a list of integers (probably between 20 and 100 integers) and it tells me "Yeah, 84% chance this came from a PRNG, I tested it against the main ones that most modern programming languages use", or "No, only 12% chance this came from a well-known PRNG".
If it helps (or hinders), the integers will always be between 1 and 999.
Does this exist?
Unless you are prepared to break new ground in number theory, you would only be able to detect obsolete, badly designed, or poorly seeded PRNGs. Good PRNGs are explicitly designed to prevent what you are trying to do. Random number generation is a critical part of digital cryptography, so a lot of effort goes into producing random numbers that meet all known tests.
There are batteries of tests to profile PRNGs. See for example this NIST page.
As the comments point out, the first two sentences are overstated and are only strictly true for PRNGs that may be used in cryptography. Weaker (i.e. more predictable) PRNGs might be chosen for other domains in order to improve time or space performance.
You can write a battery of tests for a list of candidate generators, but there are a lot of generators, and some have enormous state, where adjacent values of a well-seeded generator will reveal nothing useful and you'll have to wait a long time before you can get the two data points which will have an informative relationship.
On the plus side; while the list of random number generators that you might encounter is vast, there are telltale signs that will help you identify some classes of simple generators quickly and then you can perform focussed analysis to derive the specific configuration.
Unfortunately even a simple generator like KISS shows that while the generator can be trivially broken when you know its configuration, it can hide its signature from anything that does not know its configuration, leaving you in a situation where you have to individually test for every possible configuration.
There are quality tests like dieharder and TestU01 which will consume many megabytes of data to identify any weakness in a generator; however, these can also identify weaknesses in real RNGs, so they could give a strong false positive.
To consume only 100 integers you would really need to have a list of generators in mind. For example, to detect an LCG used inappropriately, you simply test to see if the bottom three bits cycle through a repeating pattern of 8 values -- but this is by far the easiest case.
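A sketch of that check, assuming you have raw outputs of a power-of-two-modulus LCG (not values already reduced to 1-999, which would destroy the pattern):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* True if the bottom three bits repeat with period 8 - the telltale of a
   modulus-2^k LCG whose low-order bits are exposed in the output. */
bool low_bits_cycle(const uint32_t *v, size_t n) {
    if (n < 16)
        return false;               /* want at least two full periods */
    for (size_t i = 8; i < n; i++)
        if ((v[i] & 7u) != (v[i - 8] & 7u))
            return false;
    return true;
}
```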
If you had a sequence of 625 or more 32-bit integers, you could detect with high confidence whether it came from consecutive calls to Mersenne Twister, because that generator leaks state information in its output values.
For an example of how it is done, see this blog entry.
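The gist of the trick is that MT19937's output tempering is an invertible bijection, so each raw 32-bit output maps back to exactly one word of internal state; collect 624 of them and you have the whole state. A sketch of the inversion, written from memory along the lines of the linked post:

```c
#include <stdint.h>

/* Invert y ^= y >> shift by re-applying until every bit is recovered. */
static uint32_t undo_xorshift_right(uint32_t y, int shift) {
    uint32_t res = y;
    for (int i = 0; i * shift < 32; i++)
        res = y ^ (res >> shift);
    return res;
}

/* Invert y ^= (y << shift) & mask the same way, from the low bits up. */
static uint32_t undo_xorshift_left(uint32_t y, int shift, uint32_t mask) {
    uint32_t res = y;
    for (int i = 0; i * shift < 32; i++)
        res = y ^ ((res << shift) & mask);
    return res;
}

/* Undo MT19937's tempering; the result is one word of generator state. */
uint32_t untemper(uint32_t y) {
    y = undo_xorshift_right(y, 18);
    y = undo_xorshift_left(y, 15, 0xEFC60000u);
    y = undo_xorshift_left(y, 7, 0x9D2C5680u);
    y = undo_xorshift_right(y, 11);
    return y;
}
```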
Similar results are in theory possible when you don't have ideal data such as full 32-bit integers, but you would need a longer sequence and the maths gets harder. You would also need to know - or perhaps guess by trying obvious options - how the numbers were being reduced from the larger range to the smaller one.
Similar results are possible from other PRNGs, but generally only the non-cryptographic ones.
In principle you could identify specific PRNG sequences with very high confidence, but even simple barriers such as missing numbers from the strict sequence can make it a lot harder. There will also be many PRNGs that you will not be able to reliably detect, and typically you will either have close to 100% confidence of a match (to a hackable PRNG) or 0% confidence of any match.
Whether or not a PRNG is hackable (and therefore could be detected from the numbers it emits) is not a general indicator of PRNG quality. Obviously, "hackable" is the opposite of "secure", so don't consider Mersenne Twister for creating unguessable codes. However, do consider it as a source of randomness for e.g. neural networks, genetic algorithms, Monte Carlo simulations and other places where you need a lot of statistically random-looking data.

how does random() actually work?

Every language has a random() function or something similar to generate a pseudo-random number. I am wondering what happens underneath to generate these numbers? I am not programming anything that makes this knowledge necessary, just trying to satisfy my own curiosity.
The entire first chapter of Donald Knuth's seminal work Seminumerical Algorithms is taken up with the subject of random number generation. I really don't think an SO answer is going to come close to describing the issues involved. Read the book.
It turns out to be surprisingly easy to get half-way-decent pseudorandom numbers. For decades the gold standard was a remarkably simple algorithm: keep state x, multiply by constant A (32x32 => 64 bits) then add constant B, then return the low 32-bits, which also become the new x. If A and B are chosen carefully this actually works fairly well.
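In C, that classic scheme is only a few lines. A sketch, using the well-known constants from Numerical Recipes purely for illustration:

```c
#include <stdint.h>

static uint32_t x = 1u;             /* the state; initialize with the seed */

uint32_t next_rand(void) {
    /* 32x32 => 64-bit multiply, add, then keep the low 32 bits. */
    uint64_t t = (uint64_t)x * 1664525u + 1013904223u;
    x = (uint32_t)t;                /* the low bits become the new state */
    return x;
}
```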
Pseudorandom numbers need to be repeatable, too, in order to reproduce behavior during debugging. So, seeding the generator (initializing x with, say, the time-of-day) is typically avoided during debugging.
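A minimal illustration with the C standard library: a fixed seed makes the sequence reproducible from run to run (the seed value here is arbitrary).

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    srand(12345u);                  /* fixed seed: same sequence every run */
    for (int i = 0; i < 3; i++)
        printf("%d ", rand());
    putchar('\n');
    return 0;
}
```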
In recent years, and with more compute cycles available to burn, more sophisticated algorithms have become available, some of them invented since the publication of the otherwise quite authoritative Seminumerical Algorithms. Operating systems are also starting to provide hardware- and network-derived entropy bits for specialized cryptographic purposes.
The Wikipedia page is a good reference.
The actual algorithm used is going to be dependent on the language and the implementation of the language.
random() is a so-called pseudorandom number generator (PRNG). random() is mostly implemented as a linear congruential generator: a function of the form X(n+1) = (a*X(n) + c) mod m, where X(n) is the sequence of generated pseudorandom numbers. The generated sequence of numbers is easily guessable, so this algorithm can't be used as a cryptographically safe PRNG.
Wikipedia:Linear congruential generator
And take a look at the diehard tests for PRNG
PRNG Diehard Tests
To answer your question exactly: the random function is (usually) provided by the operating system.
But how the operating system creates these random numbers is a specialized area in computer science. See for example the wiki page posted in the answers above.
One thing you might want to examine is the family of random devices available on some Unix-like OSes like Linux and Mac OSX. For example, on Linux, the kernel gathers entropy from a variety of sources into a pool which it then uses to seed its pseudo-random number generator. The entropy can come from a variety of sources, the most notable being device driver jitter from keypresses, network events, hard disk activity and (most of all) mouse movements. Aside from this, there are other techniques to gather entropy, some of them even implemented totally in hardware. There are two character devices you can get random bytes from, and on Linux they behave in the following way:
/dev/urandom gives you a constant stream of bytes which is very random but not cryptographically safe because it reuses whatever entropy is available in the pool.
/dev/random gives you cryptographically safe random numbers but it won't give you a constant stream as it uses the entropy available in the pool and then blocks while more entropy is collected.
Note that while Mac OSX uses a different method for its PRNG and therefore does not block, my personal benchmarks (done in college) have shown it to be ever-so-slightly less random than the Linux kernel's. Certainly good enough, though.
So, in my projects, when I need randomness, I typically go for reading from one of the random devices, at least for the seed for an algorithm in my program.
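Reading from these devices is ordinary file I/O. A sketch of pulling a seed from /dev/urandom, as described above:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    unsigned int seed;
    FILE *f = fopen("/dev/urandom", "rb");
    if (!f || fread(&seed, sizeof seed, 1, f) != 1) {
        perror("/dev/urandom");
        return 1;
    }
    fclose(f);
    srand(seed);                    /* seed the C PRNG with kernel entropy */
    printf("%d\n", rand());
    return 0;
}
```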
A pseudorandom number generator (PRNG), also known as a deterministic random bit generator (DRBG),[1] is an algorithm for generating a sequence of numbers whose properties approximate the properties of sequences of random numbers. The PRNG-generated sequence is not truly random, because it is completely determined by an initial value, called the PRNG's seed (which may include truly random values). Although sequences that are closer to truly random can be generated using hardware random number generators, pseudorandom number generators are important in practice for their speed in number generation and their reproducibility.[2]
PRNGs are central in applications such as simulations (e.g. for the Monte Carlo method), electronic games (e.g. for procedural generation), and cryptography. Cryptographic applications require the output not to be predictable from earlier outputs, and more elaborate algorithms, which do not inherit the linearity of simpler PRNGs, are needed.
Good statistical properties are a central requirement for the output of a PRNG. In general, careful mathematical analysis is required to have any confidence that a PRNG generates numbers that are sufficiently close to random to suit the intended use. John von Neumann cautioned about the misinterpretation of a PRNG as a truly random generator, and joked that "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin."[3]
You can check out the Wikipedia page for more here

How to quantify the quality of a pseudorandom number generator?

This is based on this question. A number of answers were proposed that generate non-uniform distributions, and I started wondering how to quantify the non-uniformity of the output. I'm not looking for patterning issues, just single-value aspects.
What are the accepted procedures?
My current thinking is to compute the average Shannon entropy per call by computing the entropy of each value and taking a weighted average. This can then be compared to the expected value.
My concerns are:
1. Is this correct?
2. How do I compute these values without losing precision?
For #1, I'm wondering if I've got it correct.
For #2 the concern is that I would be processing numbers with magnitudes like 1/7 +/- 1e-18 and I'm worried that the floating point errors will kill me for any but the smallest problems. The exact form of the computation could result in some major differences here and I seem to recall that there are some ASM options for some special log cases but I can't seem to find the docs about this.
In this case the use case is to take a "good" PRNG for the range [1,n] and derive from it a generator for the range [1,m]. The question is: how much worse are the results than the input?
What I have is expected occurrence rates for each output value.
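For concreteness, the per-call entropy described above - each value's surprisal -log2(p) weighted by its probability p - is only a few lines in C. A sketch, where `p` would hold the expected occurrence rates mentioned:

```c
#include <math.h>

/* Average Shannon entropy per call, in bits: H = -sum p[i] * log2(p[i]).
   p holds the expected occurrence rate of each of the m output values. */
double shannon_entropy(const double *p, int m) {
    double h = 0.0;
    for (int i = 0; i < m; i++)
        if (p[i] > 0.0)
            h -= p[i] * log2(p[i]);
    return h;   /* a perfectly uniform generator over m values gives log2(m) */
}
```

On the precision worry: comparing H directly against log2(m) cancels most of the significant digits when the distribution is nearly uniform, so it can be more stable to accumulate the deficit from the deviations p[i] - 1/m rather than subtracting two nearly equal totals, though that refinement is beyond this sketch.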
NIST has a set of documents and tools for statistically analyzing random number generators across a variety of metrics.
http://csrc.nist.gov/groups/ST/toolkit/rng/index.html
Many of these tests are also incorporated into the Dieharder PRNG test suite.
http://www.phy.duke.edu/~rgb/General/rand_rate.php
There are a ton of different metrics, because there are many, many different ways to use PRNGs. You can't analyze a PRNG in a vacuum - you have to understand the use case. These tools and documents provide a lot of information to help you with this, but at the end of the day you'll still have to understand what you actually need before you can determine whether the algorithm is suitable. The NIST documentation is thorough, if somewhat dense.
-Adam
This page discusses one way of checking if you are getting a bad distribution: plotting the pseudo-random values in a field and then just looking at them.
TestU01 has an even more exacting test set than Dieharder. The largest test set is called "BigCrush", but it takes a long time to execute, so there are also subsets called just "Crush" and "SmallCrush". The idea is to first try SmallCrush, and if the PRNG passes that, try Crush, and if it passes that, BigCrush. If it passes that too, it should be good enough.
You can get TestU01 here.
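Hooking your own generator into TestU01 takes only a few lines via its external-generator API. A sketch; the function names below are from TestU01's unif01/bbattery headers, so verify them against the version you install (xorshift32 here is just a stand-in generator):

```c
#include "unif01.h"
#include "bbattery.h"

/* Stand-in 32-bit generator under test (xorshift32). */
static unsigned int my_bits(void) {
    static unsigned int s = 0x2545F491u;
    s ^= s << 13; s ^= s >> 17; s ^= s << 5;
    return s;
}

int main(void) {
    unif01_Gen *gen = unif01_CreateExternGenBits("my prng", my_bits);
    bbattery_SmallCrush(gen);       /* then Crush, then BigCrush if it passes */
    unif01_DeleteExternGenBits(gen);
    return 0;
}
```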
