Do stateless random number generators exist?

Is there a difference between generating multiple numbers using a single random number generator (RNG) versus generating one number per generator and discarding it? Do both implementations generate numbers which are equally random? Is there a difference between the normal RNGs and the secure RNGs for this?
I have a web application that is supposed to generate a list of random numbers on behalf of clients. That is, the numbers should appear random from each client's point of view. Does this mean I need to retain a separate RNG per client session? Or can I share a single RNG across all sessions? Or can I create and discard an RNG on a per-request basis?
UPDATE: This question is related to Is a subset of a random sequence also random?

A random number generator has a state -- that's actually a necessary feature. The next "random" number is a function of the previous number and the seed/state. The purists call them pseudo-random number generators. The numbers will pass statistical tests for randomness, but aren't -- actually -- random.
The sequence of random values is finite and does repeat.
Think of a random number generator as shuffling a collection of numbers and then dealing them out in a random order. The seed is used to "shuffle" the numbers. Once the seed is set, the sequence of numbers is fixed and very hard to predict. Some seeds will repeat sooner than others.
Most generators have a period that is long enough that no one will notice it repeating. A 48-bit random number generator will produce on the order of 2^48 -- several hundred trillion -- random numbers before it repeats, with (AFAIK) any 32-bit seed value.
A generator will only generate random-like values when you give it a single seed and let it spew values. If you change seeds, then numbers generated with the new seed value may not appear random when compared with values generated by the previous seed -- all bets are off when you change seeds. So don't.
A sound approach is to have one generator and "deal" the numbers around to your various clients. Don't mess with creating and discarding generators. Don't mess with changing seeds.
Above all, never try to write your own random number generator. The built-in generators in most language libraries are really good. Especially modern ones that use more than 32 bits.
Linux systems have /dev/random and /dev/urandom devices. You can read these once to seed your application's random number generator. They provide more-or-less random values, gathered as "noise" from random system events. Use them sparingly so there are lots of random events between uses.
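A minimal sketch of the "one generator, deal the numbers around" idea, assuming Python on a Unix-like system (the helper name is made up): seed a single application-wide PRNG once from the OS entropy pool (os.urandom reads /dev/urandom on Linux) and draw every value from it.
import os
import random
# One shared generator, seeded once from the OS entropy pool.
rng = random.Random(os.urandom(16))
def next_value_for_client():
    # every session draws from the same underlying sequence
    return rng.random()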

I would recommend using a single generator multiple times. As far as I know, all the generators have a state. When you seed a generator, you set its state to something based on the seed. If you keep spawning new ones, it's likely that the seeds you pick will not be as random as the numbers generated by using just one generator.
This is especially true with most generators I've used, which use the current time in milliseconds as a seed.
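A quick illustration of the millisecond-seed hazard in Python (assuming the standard random module): two generators created within the same millisecond get identical seeds and therefore emit identical sequences.
import random
import time
seed = int(time.time() * 1000)   # the "current time in milliseconds" seed
a = random.Random(seed)
b = random.Random(seed)
print([a.random() for _ in range(3)] == [b.random() for _ in range(3)])  # True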

Hardware-based, true [1] random number generators are possible, but non-trivial, and they often have low output rates. Availability can also be an issue [2]. Googling for "shot noise" or "radioactive decay" in combination with "random number generator" should return some hits.
These systems do not need to maintain state. Probably not what you were looking for.
As noted by others, software systems are only pseudo-random, and must maintain state.
A compromise is to use a hardware-based RNG to provide an entropy pool (stored state) which is made available to seed a PRNG. This is done quite explicitly in the Linux implementation of /dev/random [3] and /dev/urandom [4].
There is some argument about just how random the default inputs to the /dev/random entropy pool really are.
Footnotes:
[1] modulo any problems with our understanding of physics
[2] because you're waiting for a random process
[3] /dev/random features direct access to the entropy pool, seeded from various sources believed to be really or nearly random, and blocks when the entropy is exhausted
[4] /dev/urandom is like /dev/random, but when the entropy is exhausted a cryptographic hash is employed, which makes the entropy pool effectively a stateful PRNG
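As a small sketch of how this compromise surfaces in Python (an assumption about your stack, not part of the original answer): random.SystemRandom draws every value from the OS pool (/dev/urandom on Linux), so the application itself holds no PRNG state to manage.
import random
sys_rng = random.SystemRandom()   # backed by the kernel's entropy pool
print(sys_rng.random())           # float in [0.0, 1.0)
print(sys_rng.randint(1, 6))      # die roll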

If you create an RNG, generate a single random number from it, and then discard the RNG, the number generated is only as random as the seed used to start the RNG.
It would be much better to create a single RNG and draw many numbers from it.
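A tiny Python illustration of the point above: the first number drawn from a freshly seeded generator is a pure function of the seed, so "one generator per number" just relabels the seed as the output.
import random
# Same seed, same first value -- the output carries no randomness beyond the seed.
print(random.Random(12345).random() == random.Random(12345).random())  # True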

As people have already said, it's much better to seed the PRNG once and reuse it. A secure PRNG is simply one which is suitable for cryptographic applications. The only way re-seeding each time will give reasonably random results is when the seed comes from a genuinely random "real world" source -- i.e. specialised hardware. Even then, it's possible that the source is biased, and it will still be theoretically better to reuse the same PRNG.
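For concreteness, a hedged Python sketch of what "secure PRNG" typically means in practice: the secrets module (or random.SystemRandom) draws from the OS CSPRNG, which is the kind of generator suitable for cryptographic uses such as tokens.
import secrets
session_token = secrets.token_hex(16)   # 128-bit token, e.g. for session IDs
pick = secrets.randbelow(100)           # uniform integer in [0, 100)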

Normally, seeding a new state takes quite a while for a serious PRNG, and making new ones each time won't really help much.
The only case I can think of where you might want more than one PRNG is for different subsystems: say, in a casino game, you have one generator for shuffling cards and a separate one for generating the comments made by the computer-controlled characters, so that really dedicated users can't guess outcomes based on character behaviour.
A nice source for seeding is Random.org; they supply random numbers generated from atmospheric noise for free. It can be a better source for seeding than the current time.
Edit: In your case, I would definitely use one PRNG per client, if for no other reason than good programming standards. In any case, if you share one PRNG among clients, you will still be providing pseudo-random values to each, of a quality equal to your PRNG's quality. So that's a viable option, but it seems like bad policy for programming.
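A rough Python sketch of the per-client option (the registry and function names are illustrative, not from the original answer): keep one PRNG per session, each seeded independently from the OS entropy source.
import os
import random
_generators = {}   # session id -> dedicated PRNG
def rng_for(session_id):
    if session_id not in _generators:
        _generators[session_id] = random.Random(os.urandom(16))
    return _generators[session_id]
numbers = [rng_for("client-42").random() for _ in range(10)]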

It's worth mentioning that Haskell is a language which attempts to eliminate mutable state entirely. In order to reconcile this goal with hard requirements like I/O (which requires some form of mutability), monads must be used to thread state from one calculation to the next. This is how Haskell implements its pseudo-random number generator. Strictly speaking, generating random numbers is an inherently stateful operation, but Haskell is able to hide this fact by moving the state "mutation" into the bind (>>=) operation.
This probably sounds a little abstract, and it doesn't really answer your question completely, but I think it is still applicable. From a theoretical standpoint, it is impossible to work with a RNG without involving state. Regardless, there are techniques which can be used to mitigate this interaction and make it appear as if the entire operation is of a stateless nature.
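A rough Python analogue of that idea (illustrative only, and obviously not Haskell): thread the RNG state explicitly, so each call is a pure function from state to (value, next state) with no hidden mutation. The constants are those of a common 48-bit LCG.
def lcg_next(state, a=25214903917, c=11, m=2**48):
    new_state = (a * state + c) % m
    return new_state >> 16, new_state   # (value, next state)
value1, s = lcg_next(42)   # the caller passes the state along by hand...
value2, s = lcg_next(s)    # ...much like bind does behind the scenes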

It's generally better to create a single PRNG and pull multiple values from it. Creating multiple instances means you need to ensure that the seeds for the instances are guaranteed unique, which will require incorporating instance-specific information.
As an aside, there are better "true" Random Number Generators, but they usually require specialized hardware which does things like derive random data from electrical signal variance inside the computer. Unless you're really worried about it, I'd say the Pseudo Random Number Generators built into the language libraries and/or OS are probably sufficient, as long as your seed value is not easily predictable.

The use of a secure PRNG depends on your application. What are the random numbers used for?
If they're something of real value (e.g. anything cryptographically related), you wouldn't want to use anything less.
Secure PRNGs are much slower, and may require libraries for arbitrary-precision arithmetic, primality testing, and so on.

Well, as long as they are seeded differently each time they're created, then no, I don't think there'd be any difference; however, if the seed depended on something like the time, then the results would probably be non-uniform due to the biased seed.

Related

Securely Use Random Number Generator for Lottery Winning

I want to design a lottery winning mechanism using a random number generator. I know that for a computer there is no true randomness, only "pseudorandomness". If the system gets hacked and the random seed is seen, people will know the sequence of random numbers. In fact, there is news that people did this and won several lotteries. I am thinking about two ways of designing my system:
1. Use the random number generator as a global variable. There is only one random seed; the sequence is generated when the system starts.
Cons:
a. Once the random seed is seen, hackers will know the sequence easily.
b. Once the system crashes and restarts, the sequence will repeat itself.
2. Create a random number generator using the timestamp as the random seed each time a number is generated.
Cons:
a. Obviously the timestamp cannot be used directly. Some trick needs to be applied to the timestamp each time, for example adding or subtracting some value. What algorithm can I use for this kind of modification?
b. Is this method even taking advantage of the random number generator? It seems I am just creating a random number by myself...
As we can see, neither of the methods above is secure enough. Which is slightly better? Or is there a better way?
The notion that computers are incapable of truly random numbers hasn't been true for decades. All modern desktop and laptop computers have true hardware-based random number generators. Even most small embedded systems do as well.
That said, it may be the case that your programming language hasn't caught up to the recent hardware, or that even if it has, it's easy to make a mistake with RNGs and get a bad result from a good generator. So it's probably a good idea to use something like random.org unless you know what you're doing.
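If the application happens to be in Python, a hedged sketch of a draw that avoids a stealable seed entirely is to use the OS CSPRNG (random.SystemRandom) rather than a seeded PRNG. The pool size and pick count below are made-up parameters.
import random
csprng = random.SystemRandom()   # no application-level seed to recover or replay
def draw_lottery(pool_size=49, picks=6):
    return sorted(csprng.sample(range(1, pool_size + 1), picks))
print(draw_lottery())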

Equivalent of /dev/urandom on Windows?

My application would like to get a random number, preferably with entropy if available, but does not need cryptographic quality, and would like to ensure that the call does not block if the system entropy pool is depleted (e.g. on a server in a farm).
I am aware of CryptGenRandom, but its behaviour with respect to blocking under adverse entropy conditions is not specified.
On Unix, /dev/urandom supports this use case. Is there equivalent functionality available on Windows? I would prefer to avoid using a non-system RNG simply to get non-blocking semantics.
For a toy application, you could use the standard library function rand(), but the implementation on Windows is of notoriously poor quality. For cryptographically secure random numbers, you can use the Microsoft-specific rand_s() function.
A better bet is simply to include a suitable pseudo-random number generator in your program. The Mersenne Twister is a good choice IMO, particularly as there are plenty of available implementations (including in the C++11 standard library and in Boost).
If I need non-blocking behaviour for random numbers, I generally pre-generate n numbers and store them in memory. For example, if I know I will need 30 random numbers per second and it takes 3 seconds to compute them (including blocks), then I pre-generate 300 while the main code is loading and store them in an array or vector. As I use them up, a separate thread generates a replacement for each number consumed, so when I hit the limit (in this case 300) I can simply start again at the beginning of the array/vector/list and all the random numbers are fresh and non-blocking (as they were pre-generated).
This means you can use any random number generator you like and not worry about blocking behaviour. It does cost a bit more RAM, but that is negligible for the sort of coding I need random numbers for.
Hope this helps, as I couldn't fit this all into a comment:)
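A rough Python sketch of the pre-generation scheme described above (the buffer size and names are illustrative): keep a bounded buffer of random values topped up by a background thread, so the consumer never waits on the entropy source.
import os
import queue
import threading
buffer = queue.Queue(maxsize=300)
def refill():
    while True:
        buffer.put(int.from_bytes(os.urandom(4), "big"))  # blocks while the buffer is full
threading.Thread(target=refill, daemon=True).start()
def next_random():
    return buffer.get()   # fast as long as the refill thread keeps ahead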
You could wait for one good seed full of entropy and follow GMasucci's advice to pre-generate a long list of random numbers.
Unless your system is already compromised, it seems that a good seed is good enough to generate a series of unrelated numbers, as discussed in http://www.2uo.de/myths-about-urandom/
From that discussion I gather that a continuous feed of ("true"/"fresh") random numbers is only needed if your system state is compromised at some point (i.e. your sources of entropy are known and the attacker knows their current state). After feeding your block cipher more randomness, the predictability of its output decreases.
Source of seeds? Two or more pieces of trusted software that are less likely to be already compromised. I try to blur the predictability of functions that use the time as a seed by combining sources: a local rand_function() + some variable delay + mysql's rand().
From there, a list of pseudo-random numbers generated by some good library.

How would one know if one saw a random number generator?

I have been reading various articles about random numbers and their generators. There are usually 3 important conclusions that I draw from them:
Random numbers are not truly random
Much of the time they have a bias (modulo bias)
Humans are incapable of being random number generators when they try to "act randomly"
So, with the last of these observations in mind, how would we be able to:
1. Tell if a sequence of numbers that we see is truly random, and more importantly,
2. Prove that said sequence is really random?
I'm tempted to say that so long as you generate a sufficiently large sample set (1,000,000+), you should see a more or less uniform dispersion of (pseudo)random numbers. However, I'm sure some maths genius has a way of discrediting this, because surely, by the laws of probability, a run of a single number is just as likely as any other sequence.
From what I have read, if you really need random numbers it's best to try and reuse what cryptographic libraries use. The field of cryptography is obviously complex and relies on random numbers for key generation. The section in OWASP's guide titled "Reversible Authentication Tokens" says this...
The only way to generate secure authentication tokens is to ensure there is no way to predict their sequence. In other words: true random numbers.
It could be argued that computers can not generate true random numbers, but using new techniques such as reading mouse movements and key strokes to improve entropy has significantly increased the randomness of random number generators. It is critical that you do not try to implement this on your own; use of existing, proven implementations is highly desirable.
Most operating systems include functions to generate random numbers that can be called from almost any programming language.
My take is that unless you're writing cryptographic libraries yourself, put trust in those who are (e.g. use the Java Cryptography Extension) so you don't have to prove it yourself.
Pretty Simple Test:
If you really want to get into testing random numbers, you could write a program that outputs random numbers from 1 to 100, say 100 times, as an example.
Then look at those numbers and see if there's any patterns. Then follow that test by restarting the program several times and repeating the process.
Examine all data to figure out if random numbers are always random, just random during individual tests, or never. :P
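A slightly more quantitative version of that test, as a hedged Python sketch: draw 1-100 many times and compare observed frequencies with the expected uniform count via a chi-squared statistic (uniform data should give a value around 99 here, the number of degrees of freedom).
import random
from collections import Counter
draws = [random.randint(1, 100) for _ in range(100_000)]
counts = Counter(draws)
expected = len(draws) / 100
chi_sq = sum((counts[v] - expected) ** 2 / expected for v in range(1, 101))
print(f"chi-squared statistic: {chi_sq:.1f}")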
Testing a random number generator is probably mostly up to what you want to look for. Even pure non-repeatability is no guarantee of randomness.
There are some companies that will test a random number generator for the purposes of certification (e.g. online casinos). One that I found quickly is called iTech Labs, though their testing methodology page leaves a lot to be desired in terms of technical detail.
Other testers and certification bodies publish the required data for a certification; there's more specific detail here but not as much as you want.
You could potentially do a statistical analysis and compare the results of your random number generator to a "true" random source but the argument could be made for bias from trying to translate the true random source into your possibility space anyway.
Randomness tests verify the mathematical properties of the sequence: for example, symbol frequencies (all symbols are expected to have the same frequency), local variance, and sequence analysis (the probability of a symbol must not depend on the previous ones).
A definite proof does not exist, but there is a quality factor: the probability that a sequence really is random.
Another criterion could be based on compressibility: true randomness has maximum entropy and can not therefore be compressed.
This test is not reliable for randomness, of course, but allows quick and dirty testing with ready tools such as zlib.
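The compressibility check, as a quick-and-dirty Python sketch (assuming os.urandom as the "random" source): bytes from the OS RNG should barely compress, while patterned data shrinks dramatically.
import os
import zlib
random_bytes = os.urandom(100_000)
patterned = bytes(range(256)) * 400
print(len(zlib.compress(random_bytes)) / len(random_bytes))  # close to 1.0
print(len(zlib.compress(patterned)) / len(patterned))        # far below 1.0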

Truly random number generator

From what I understand, a PRNG uses a seed to generate a sequence of numbers that is not truly random. Would it be possible to create a truly random number generator by reusing a PRNG over and over with a different seed each time it is used? The seed could be extracted from /dev/random or from the current time or clock tick. If not, is there a truly random number generator implemented in software?
Thanks
If you re-seed the PRNG every time you need a random number you can just cut out the middle man and use the seed directly as random number.
But what you're talking about is done in practice. Those are so-called cryptographically-secure PRNGs and they are employed in many operating systems to provide random numbers for cryptographic applications. They get re-seeded frequently from the entropy pool and are designed so that it is computationally very hard to figure out the next number from knowing past ones (something that's very trivial to do for an LCG, for example) and also to figure out past numbers from the current one.
The benefit of this approach is that you don't block when generating random numbers. Entropy in a system is a limited resource and can only come from outside sources, so by using a CSPRNG you can safely stretch it without compromising security.
The simple answer is that there is no such implementation because, as far as I know, it's simply not possible. To generate truly random numbers you need an outside source of entropy like a hardware random number generator.
The clock is not very random, but /dev/random has some randomness -- it's actually like a bucket of randomness that you can deplete, depending on the rate of randomness production and consumption. If you use /dev/random, then you don't have to use an RNG at all; seeding an RNG from /dev/random is redundant.
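A sketch of that point, assuming Python on Linux: read the entropy device directly and use the bytes as your random number, with no PRNG involved.
# Linux-specific: consume bytes from the kernel's entropy device directly.
with open("/dev/random", "rb") as f:
    value = int.from_bytes(f.read(8), "big")
print(value)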
Intel is working on something that could be truly groundbreaking if it works as advertised. It would practically render hardware PRNGs redundant.

How different do random seeds need to be?

Consider code like this (Python):
import random
for i in [1, 2, 3, 4]:
    random.seed(i)
    randNumbers = [random.random() for _ in range(100)]  # a list of 100 random numbers
    doStuff(randNumbers)  # placeholder for the actual processing
I want to make sure that randNumbers differ significantly from one call to another. Do I need to make sure the seed numbers differ significantly between the subsequent calls, or is it sufficient that the seeds are different (no matter how)?
To the pedants: please realize the above code is super-over-simplified
Short answer: Avoid the re-seeding, as it doesn't buy you anything here. Long answer below.
That all depends on what exactly you need. In Common defects in initialization of pseudorandom number generators it is pointed out that linearly dependent seeds (which 1, 2, 3, 4 definitely are) are a bad choice for initializing multiple PRNGs, at least when they are used for simulation and uncorrelated results are desired.
If all you do is rolling a few dice, or generating some pseudo-random input for something uncritical, then it very likely doesn't matter.
Note also that using some classes of PRNG to generate seeds has the same problem of producing linearly dependent numbers (LCGs spring to mind).
If your random number generator is high quality, it shouldn't matter how you seed it. In fact, the best practice would be to seed it only once. Random number generators are designed to have certain statistical behavior once they're started. Frequently reseeding effectively creates a different random number generator, one that may not be as good.
Randomly selecting seeds sounds like a good idea, but it isn't. In fact, because of the "birthday paradox," there's a surprisingly high probability that you'll pick the same seed twice.
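A quick check of the birthday-paradox point, as a hedged Python sketch: the chance that n randomly chosen 32-bit seeds contain at least one duplicate grows surprisingly fast.
def collision_probability(n, space=2**32):
    p_unique = 1.0
    for i in range(n):
        p_unique *= (space - i) / space   # probability all seeds so far are distinct
    return 1 - p_unique
print(collision_probability(100_000))   # already about 0.69 with 100k seeds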
Generally speaking, you only seed your random number generator when you need the random numbers to be generated in identical fashion each time through. This is useful when you have a random component to your processing, but need to test it and therefore want it to be consistent between tests. Otherwise, you let the system seed the generator itself.
In other words, by seeding the random number generator with specific pre-defined seeds, you are actually reducing the randomness of the system as a whole. The random numbers generated with a seed of 1 are indeed pseudo-randomly different from those with a seed of 2, but a hard-coded seed will result in repeated random sequences in each run of the program.
You seem to want pseudo-random numbers that aren't pseudo-random, with a higher probability of consecutive numbers being "significantly" different than pseudo-randomness requires. I doubt that any common PRNG will do this, whatever your seeding strategy.
The seeds themselves should be random so that the output is unpredictable. There can be problems if the seeds differ only in one or two bits (as this question demonstrates).
It depends upon the application for which you're using the PRNG. If you're using something that needs to be cryptographically sound, then the seeds generally need to be extremely difficult to deduce based on the output, different every time the application runs, difficult to simply guess, and impossible to determine by reverse engineering the application (i.e. they can't be hard coded).
If your goal is a game, your requirements may be different. For example, if you're controlling computer strategy, but the computer's strategy remains the same for all runs of the game, you may have an easily beatable game. Then again, you may want that for "easy" mode.
