So they say if you flip a coin 50 times and get heads all 50 times, you're still 50/50 the next flip and 1/4 for the next two. Do you think/know if this same principle applies to computer pseudo-random number generators? I theorize they're less likely to repeat the same number for long stretches.
I ran this a few times and the results are believable, but I'm wondering how many times I'd have to run it before I get an anomalous output.
import random

def genString(iterations):
    mystring = ''
    for _ in range(iterations):
        mystring += str(random.randint(0, 9))
    return mystring

def repeatMax(mystring):
    tempchar = ''
    count = 0
    max = 0
    for char in mystring:
        if char == tempchar:
            count += 1
            if count > max:
                max = count
        else:
            count = 0
            tempchar = char
    return max

for _ in range(10):
    stringer = genString(10 ** 7)  # iteration count not shown in the original; 10**7 digits is large enough that runs of 6-7 identical digits show up
    print(repeatMax(stringer))
I got all 7's and a couple 6's. If I run this 1000 times, will it approximate a normal distribution or should I expect it to stay relatively predictable? I'm trying to understand the predictability of pseudo random number generation.
Failure to produce specific patterns is a typical weakness of PRNGs, but the probability of hitting a substantial run of repeated digits at random is so small it's hard to demonstrate that weakness.
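As a rough sanity check on that claim (a minimal Python sketch of my own; the sizes are arbitrary), the expected number of runs of k or more identical digits in n random digits is roughly n / 10^(k-1), and simulating the longest run directly shows how tightly it concentrates (note that the question's repeatMax reports one less than the run length):

import random

def longest_run(n_digits):
    # Length of the longest run of identical digits in a random n-digit string.
    digits = random.choices("0123456789", k=n_digits)
    best = run = 1
    for prev, cur in zip(digits, digits[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Expected number of runs of length >= k in n digits is roughly n / 10**(k - 1),
# so seeing, say, 10 identical digits in a row needs on the order of a billion digits.
n, trials = 100_000, 100   # sizes chosen only so this finishes in a few seconds
counts = {}
for _ in range(trials):
    m = longest_run(n)
    counts[m] = counts.get(m, 0) + 1
print(sorted(counts.items()))  # typically clusters around 5-7 for n = 100,000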
It's perfectly reasonable for a PRNG to use only a 32-bit state, which (traditionally) means producing a sequence of four billion numbers and then repeating from the start again. In that case your sequence of 50 coin-flips coming out the same is probably never going to happen (four billion tries at something that has a one in a quadrillion chance is unlikely to succeed); but if it does, then it's going to appear way too often.
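To put numbers on that parenthetical (my own back-of-the-envelope, treating the ~2^32 starting positions in the cycle as independent, which is only an approximation):

# Probability that a fixed 50-flip pattern appears at least once among ~2**32
# starting positions, each with a one-in-2**50 chance of matching.
p_single = 2.0 ** -50
positions = 2 ** 32
p_ever = 1.0 - (1.0 - p_single) ** positions
print(p_ever)   # ~3.8e-06, i.e. roughly 2**32 / 2**50 = 2**-18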
Superficially you're looking for k-dimensional equidistribution as a test for whether or not you can expect to find a prescribed pattern in the output without deeper analysis of the specific generator. If your generator claims at least 50-dimensional equidistribution then you're guaranteed to see the 50-heads state at least once.
However, if your generator emits 32-bit results but you only test whether each result maps to heads or tails, you have some chance at success even if the generator fails the k-dimension test, and that chance depends on the specifics of the generator and the mapping function.
If you adjust the implementation of your generator to return just one bit at a time, then you have an opportunity to try to squeeze 50 heads out of just 50 bits of state (or potentially as few as 18, but that generator would probably be faulty). Provided the generator visits all 2**50 possible states, one of those states will produce 50 heads in a row. You may get a few more heads when adjacent states start or end with more zeroes.
Related
I've been using Random (java.util.Random) to shuffle a deck of 52 cards. There are 52! (8.0658175e+67) possibilities. Yet, I've found out that the seed for java.util.Random is a long, which is much smaller at 2^64 (1.8446744e+19).
From here, I'm suspicious whether java.util.Random is really that random; is it actually capable of generating all 52! possibilities?
If not, how can I reliably generate a better random sequence that can produce all 52! possibilities?
Selecting a random permutation requires simultaneously more and less randomness than what your question implies. Let me explain.
The bad news: need more randomness.
The fundamental flaw in your approach is that it's trying to choose between ~2^226 possibilities using 64 bits of entropy (the random seed). To fairly choose between ~2^226 possibilities you're going to have to find a way to generate 226 bits of entropy instead of 64.
There are several ways to generate random bits: dedicated hardware, CPU instructions, OS interfaces, online services. There is already an implicit assumption in your question that you can somehow generate 64 bits, so just do whatever you were going to do, only four times, and donate the excess bits to charity. :)
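Language aside (the question is about Java, but the arithmetic is the same), here is a minimal Python sketch of that "do it four times" idea; secrets.randbits is only a stand-in for whichever 64-bit entropy source you actually have:

import secrets

# Assemble a 256-bit random index from four independent 64-bit draws.
chunks = [secrets.randbits(64) for _ in range(4)]
index = 0
for c in chunks:
    index = (index << 64) | c

FACT_52 = 1
for k in range(2, 53):
    FACT_52 *= k

index %= FACT_52           # reduce into [0, 52! - 1]; the bias from this is on the order of 2**-30, negligible
print(index.bit_length())  # typically around 226 bits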
The good news: need less randomness.
Once you have those 226 random bits, the rest can be done deterministically and so the properties of java.util.Random can be made irrelevant. Here is how.
Let's say we generate all 52! permutations (bear with me) and sort them lexicographically.
To choose one of the permutations all we need is a single random integer between 0 and 52!-1. That integer is our 226 bits of entropy. We'll use it as an index into our sorted list of permutations. If the random index is uniformly distributed, not only are you guaranteed that all permutations can be chosen, they will be chosen equiprobably (which is a stronger guarantee than what the question is asking).
Now, you don't actually need to generate all those permutations. You can produce one directly, given its randomly chosen position in our hypothetical sorted list. This can be done in O(n^2) time using the Lehmer[1] code (also see numbering permutations and the factoradic number system). The n here is the size of your deck, i.e. 52.
There is a C implementation in this StackOverflow answer. There are several integer variables there that would overflow for n=52, but luckily in Java you can use java.math.BigInteger. The rest of the computations can be transcribed almost as-is:
import java.math.BigInteger;
import java.util.Arrays;

public static int[] shuffle(int n, BigInteger random_index) {
    int[] perm = new int[n];
    BigInteger[] fact = new BigInteger[n];
    fact[0] = BigInteger.ONE;
    for (int k = 1; k < n; ++k) {
        fact[k] = fact[k - 1].multiply(BigInteger.valueOf(k));
    }
    // compute factorial code
    for (int k = 0; k < n; ++k) {
        BigInteger[] divmod = random_index.divideAndRemainder(fact[n - 1 - k]);
        perm[k] = divmod[0].intValue();
        random_index = divmod[1];
    }
    // readjust values to obtain the permutation
    // start from the end and check if preceding values are lower
    for (int k = n - 1; k > 0; --k) {
        for (int j = k - 1; j >= 0; --j) {
            if (perm[j] <= perm[k]) {
                perm[k]++;
            }
        }
    }
    return perm;
}

public static void main(String[] args) {
    System.out.printf("%s\n", Arrays.toString(
        shuffle(52, new BigInteger(
            "7890123456789012345678901234567890123456789012345678901234567890"))));
}
[1] Not to be confused with Lehrer. :)
Your analysis is correct: seeding a pseudo-random number generator with any specific seed must yield the same sequence after a shuffle, limiting the number of permutations that you could obtain to 2^64. This assertion is easy to verify experimentally by calling Collections.shuffle twice, passing a Random object initialized with the same seed, and observing that the two random shuffles are identical.
A solution to this, then, is to use a random number generator that allows for a larger seed. Java provides the SecureRandom class, which can be initialized with a byte[] array of virtually unlimited size. You could then pass an instance of SecureRandom to Collections.shuffle to complete the task:
byte seed[] = new byte[...];
Random rnd = new SecureRandom(seed);
Collections.shuffle(deck, rnd);
In general, a pseudorandom number generator (PRNG) can't choose from among all permutations of a 52-item list if its maximum cycle length is less than 52 factorial (roughly 2^226, i.e. about 226 bits).
java.util.Random implements an algorithm with a modulus of 2^48 and a maximum cycle length of not more than that, so much less than 2^226 (corresponding to the 226 bits I referred to). You will need to use another PRNG with a bigger cycle length, specifically one with a maximum cycle length of 52 factorial or greater.
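For reference (a quick check of my own, not part of the original answer), log2(52!) is just under 226, which is where the 226-bit figure comes from:

import math

log2_52_fact = sum(math.log2(k) for k in range(2, 53))
print(log2_52_fact)   # ~225.58, so a state/cycle of at least 226 bits is required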
See also "Shuffling" in my article on random number generators.
This consideration is independent of the nature of the PRNG; it applies equally to cryptographic and noncryptographic PRNGs (of course, noncryptographic PRNGs are inappropriate whenever information security is involved).
Although java.security.SecureRandom allows seeds of unlimited length to be passed in, the SecureRandom implementation could use an underlying PRNG (e.g., "SHA1PRNG" or "DRBG"). And it depends on that PRNG's maximum cycle length whether it's capable of choosing from among 52 factorial permutations.
Let me apologize in advance, because this is a little tough to understand...
First of all, you already know that java.util.Random is not completely random at all. It generates sequences in a perfectly predictable way from the seed. You are completely correct that, since the seed is only 64 bits long, it can only generate 2^64 different sequences. If you were to somehow generate 64 real random bits and use them to select a seed, you could not use that seed to randomly choose between all of the 52! possible sequences with equal probability.
However, this fact is of no consequence as long as you're not actually going to generate more than 2^64 sequences, and as long as there is nothing 'special' or 'noticeably special' about the 2^64 sequences that it can generate.
Let's say you had a much better PRNG that used 1000-bit seeds. Imagine you had two ways to initialize it -- one way would initialize it using the whole seed, and one way would hash the seed down to 64 bits before initializing it.
If you didn't know which initializer was which, could you write any kind of test to distinguish them? Unless you were (un)lucky enough to end up initializing the bad one with the same 64 bits twice, then the answer is no. You could not distinguish between the two initializers without some detailed knowledge of some weakness in the specific PRNG implementation.
Alternatively, imagine that the Random class had an array of 2^64 sequences that were selected completely at random at some time in the distant past, and that the seed was just an index into this array.
So the fact that Random uses only 64 bits for its seed is actually not necessarily a problem statistically, as long as there is no significant chance that you will use the same seed twice.
Of course, for cryptographic purposes, a 64 bit seed is just not enough, because getting a system to use the same seed twice is computationally feasible.
EDIT:
I should add that, even though all of the above is correct, the actual implementation of java.util.Random is not awesome. If you are writing a card game, maybe use the MessageDigest API to generate the SHA-256 hash of "MyGameName"+System.currentTimeMillis(), and use those bits to shuffle the deck. By the above argument, as long as your users are not really gambling, you don't have to worry that currentTimeMillis returns a long. If your users are really gambling, then use SecureRandom with no seed.
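A rough Python sketch of the same idea (my own, not the answer's Java; hashlib stands in for MessageDigest, the string "MyGameName" is just a placeholder, and Python's random.Random replaces java.util.Random), assuming a casual, non-gambling game:

import hashlib
import random
import time

# Derive 256 bits from a name plus the current time, then let Python's
# built-in Fisher-Yates shuffle consume them. This mirrors the
# "hash something unique-ish per game" suggestion; it is NOT suitable
# where money or security is involved.
seed_material = ("MyGameName" + str(time.time())).encode()
seed = int.from_bytes(hashlib.sha256(seed_material).digest(), "big")

deck = list(range(52))
random.Random(seed).shuffle(deck)
print(deck)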
I'm going to take a bit of a different tack on this. You're right on your assumptions - your PRNG isn't going to be able to hit all 52! possibilities.
The question is: what's the scale of your card game?
If you're making a simple klondike-style game? Then you definitely don't need all 52! possibilities. Instead, look at it like this: a player will have 18 quintillion distinct games. Even accounting for the 'Birthday Problem', they'd have to play billions of hands before they'd run into the first duplicate game.
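To put a number on that birthday-problem point (my own estimate, not from the answer): with 2^64 equally likely shuffles, the 50% collision point is around five billion deals.

import math

# Birthday approximation: ~50% chance of any repeat after about
# 1.1774 * sqrt(N) samples drawn uniformly from N possibilities.
N = 2.0 ** 64
deals_for_even_odds = 1.1774 * math.sqrt(N)
print(f"{deals_for_even_odds:.3e}")   # ~5.1e9 deals before a repeat is even likely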
If you're making a monte-carlo simulation? Then you're probably okay. You might have to deal with artifacts due to the 'P' in PRNG, but you're probably not going to run into problems simply due to a low seed space (again, you're looking at quintillions of unique possibilities.) On the flip side, if you're working with large iteration count, then, yeah, your low seed space might be a deal-breaker.
If you're making a multiplayer card game, particularly if there's money on the line? Then you're going to need to do some googling on how the online poker sites handled the same problem you're asking about. Because while the low seed space issue isn't noticeable to the average player, it is exploitable if it's worth the time investment. (The poker sites all went through a phase where their PRNGs were 'hacked', letting someone see the hole cards of all the other players, simply by deducing the seed from exposed cards.) If this is the situation you're in, don't simply find a better PRNG - you'll need to treat it as seriously as a Crypto problem.
A short solution which is essentially the same as dasblinkenlight's:
// Java 7
SecureRandom random = new SecureRandom();
// Java 8
SecureRandom random = SecureRandom.getInstanceStrong();
Collections.shuffle(deck, random);
You don't need to worry about the internal state. Here is the long explanation why:

When you create a SecureRandom instance this way, it accesses an OS-specific true random number generator. This is either an entropy pool, from which values containing random bits are read (e.g. for a nanosecond timer, the nanosecond precision is essentially random), or an internal hardware number generator.

This input (!), which may still contain spurious traces, is fed into a cryptographically strong hash which removes those traces. That is the reason those CSPRNGs are used, not for creating the random numbers themselves! SecureRandom also has a counter which tracks how many bits were consumed (nextBytes(), nextLong(), etc.) and refills itself with entropy bits when necessary.
In short: simply forget the objections and use SecureRandom as a true random number generator.
If you consider the number as just an array of bits (or bytes) then maybe you could use the (Secure)Random.nextBytes solutions suggested in this Stack Overflow question, and then map the array into a new BigInteger(byte[]).
A very simple algorithm is to apply SHA-256 to a sequence of integers incrementing from 0 upwards. (A salt can be appended if desired to "get a different sequence".) If we assume that the output of SHA-256 is "as good as" uniformly distributed integers between 0 and 2^256 - 1, then we have enough entropy for the task.
To get a permutation from the output of SHA256 (when expressed as an integer) one simply needs to reduce it modulo 52, 51, 50... as in this pseudocode:
deck = [0..51]            # the 52 cards
shuffled = []
r = SHA256(i)             # i is the incrementing counter
while deck.size > 0:
    pick = r % deck.size
    r = floor(r / deck.size)
    shuffled.append(deck[pick])
    delete deck[pick]
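Here is a runnable Python version of that pseudocode (my own sketch; hashlib.sha256 over a counter i stands in for the answer's SHA-256 source, and a salt would simply be prepended to the hashed input):

import hashlib

def shuffled_deck(i, salt=b""):
    # Shuffle [0..51] using SHA-256(salt || i) as the entropy source.
    digest = hashlib.sha256(salt + str(i).encode()).digest()
    r = int.from_bytes(digest, "big")   # 256-bit integer, more than the ~226 bits needed

    deck = list(range(52))
    shuffled = []
    while deck:
        pick = r % len(deck)
        r //= len(deck)
        shuffled.append(deck.pop(pick))
    return shuffled

print(shuffled_deck(0))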
My empirical research suggests that java.util.Random is not truly random. If you use the Random class's nextGaussian() method and generate a large enough sample of numbers between -1 and 1, the graph is a normally distributed field, known as the Gaussian model.

The Finnish government-owned gambling bookmaker runs a lottery game drawn once per day, every day of the year, and its winnings table shows that the bookmaker pays out winnings in a normally distributed way. My Java simulation with 5 million draws shows that with numbers drawn using the nextInt() method, winnings are normally distributed in much the same way as the bookmaker deals out winnings in each draw.

My best picks avoid numbers ending in 3 and 7, and it is true that they rarely appear in the winning results. A couple of times I won five out of five picks by avoiding 3 and 7 in the ones column of integers between 1 and 70 (Keno).

The Finnish lottery is drawn once per week on Saturday evenings. If you play a system with 12 numbers out of 39, perhaps you get 5 or 6 correct picks on your coupon by avoiding the values 3 and 7.

The Finnish lottery has numbers 1-40 to choose from, and it takes 4 coupons to cover all the numbers with a 12-number system. The total cost is 240 euros, and in the long term that is too expensive for the regular gambler to play without going broke. Even if you share coupons with other customers willing to buy in, you still have to be quite lucky to make a profit.
I was going through Google interview questions, one of which was to implement random number generation from 1 to 7.

I wrote some simple code, and I would like to understand: if this question were asked of me in an interview and I wrote the code below, would it be acceptable or not?
import time

def generate_rand():
    ret = str(time.time())  # time in seconds, like 12345.1234
    ret = int(ret[-1])
    if ret == 0 or ret == 1:
        return 1
    elif ret > 7:
        ret = ret - 7
        return ret
    return ret

while 1:
    print(generate_rand())
    time.sleep(1)  # just to see the output in STDOUT
(Since the question seems to ask for analysis of issues in the code and not a solution, I am not providing one.)
The answer is unacceptable because:
You need to wait for a second for each random number, and many applications need a few hundred at a time. (If the sleep is just for convenience, note that even microsecond granularity will not yield truly random numbers, since the last microsecond digit increases monotonically until 10 µs have elapsed; you may well complete more than a few calls within that 10 µs span, and those will form a set of monotonically increasing pseudo-random numbers.)
Random numbers should have a uniform distribution: each value should have the same probability, in theory. With this code you skew the results toward 1 (three times the probability, since the digits 0, 1, and 8 all map to it) and 2 (twice the probability, from the digits 2 and 9) compared to the values 3-7.
Typically, answers to this sort of question will try to get a large range of numbers and distribute the ranges evenly across 1-7. For example, the above method would have worked fine if you had wanted randomness from 1-5, since 10 is evenly divisible by 5. Note that this only solves (2) above.
For (1), there are other sources of randomness, such as /dev/random on a Linux OS.
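A sketch of that standard "distribute evenly" approach (my own, assuming os.urandom as the OS entropy source, which also addresses the speed point): draw a byte, reject values that don't fit evenly, and map the rest onto 1-7.

import os

def rand_1_to_7():
    # Uniform integer in [1, 7] via rejection sampling on bytes from the OS.
    while True:
        b = os.urandom(1)[0]    # uniform in [0, 255]
        if b < 252:             # 252 = 7 * 36, the largest multiple of 7 <= 256
            return b % 7 + 1    # rejection keeps the result unbiased

print([rand_1_to_7() for _ in range(20)])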
You haven't really specified the constraints of the problem you're trying to solve, but if it's from a collection of interview questions it seems likely that it might be something like this.
In any case, the answer shown would not be acceptable for the following reasons:
The distribution of the results is not uniform, even if the samples you read from time.time() are uniform.
The results from time.time() will probably not be uniform. The result depends on the time at which you make the call, and if your calls are not uniformly distributed in time then the results will probably not be uniformly distributed either. In the worst case, if you're trying to randomise an array on a very fast processor then you might complete the entire operation before the time changes, so the whole array would be filled with the same value. Or at least large chunks of it would be.
The changes to the random value are highly predictable and can be inferred from the speed at which your program runs. In the very-fast-computer case you'll get a bunch of x followed by a bunch of x+1, but even if the computer is much slower or the clock is more precise, you're likely to get aliasing patterns which behave in a similarly predictable way.
Since you take the time value in decimal, it's likely that the least significant digit doesn't visit all possible values uniformly. It's most likely a conversion from binary to some arbitrary number of decimal digits, and the distribution of the least significant digit can be quite uneven when that happens.
The code should be much simpler. It's a complicated solution with many special cases, which reflects a piecemeal approach to the problem rather than an understanding of the relevant principles. An ideal solution would make the behaviour self-evident without having to consider each case individually.
The last one would probably end the interview, I'm afraid. Perhaps not if you could tell a good story about how you got there.
You need to understand the pigeonhole principle to begin to develop a solution. It looks like you're reducing the time to its least significant decimal digit for possible values 0 to 9. Legal results are 1 to 7. If you have seven pigeonholes and ten pigeons then you can start by putting your first seven pigeons into one hole each, but then you have three pigeons left. There's nowhere that you can put the remaining three pigeons (provided you only use whole pigeons) such that every hole has the same number of pigeons.
The problem is that if you pick a pigeon at random and ask what hole it's in, the answer is more likely to be a hole with two pigeons than a hole with one. This is what's called "non-uniform", and it causes all sorts of problems, depending on what you need your random numbers for.
You would either need to figure out how to ensure that all holes are filled equally, or you would have to come up with an explanation for why it doesn't matter.
Typically the "doesn't matter" answer is that each hole has either a million or a million and one pigeons in it, and for the scale of problem you're working with the bias would be undetectable.
Using the same general architecture you've created, I would do something like this:
import time

def generate_rand():
    ret = int(time.time() * 10)  # tenths of a second, so the value changes between calls
    ret = ret % 8                # will return pseudorandom numbers 0-7
    if ret == 0:
        return 1  # or you could also return the result of another call to generate_rand()
    return ret

while 1:
    print(generate_rand())
    time.sleep(1)
Is it possible to create a random number generator whose pattern for producing the next random number never repeats, even until the universe ends?
I read this security rule-of-thumb:
All processes which require non-trivial random numbers MUST attempt to
use openssl_pseudo_random_bytes(). You MAY fallback to
mcrypt_create_iv() with the source set to MCRYPT_DEV_URANDOM. You MAY
also attempt to directly read bytes from /dev/urandom. If all else
fails, and you have no other choice, you MUST instead generate a value
by strongly mixing multiple sources of available random or secret
values.
http://phpsecurity.readthedocs.org/en/latest/Insufficient-Entropy-For-Random-Values.html
In layman's terms, no. In order to generate a particular piece of data, such as a string or integer, you must have an algorithm of some sort, which obviously cannot be 100% untraceable...

Basically, the final product must come from a series of events (an algorithm) which is impossible to keep 'unrevealed'.
bignum getUniqueRandom()
{
static bignum sum = 0;
sum += rand();
return sum;
}
That way the next number will always be at least as large as the previous one (it grows by a random increment of up to RAND_MAX), and as a result the values returned will essentially never repeat.
edit:
The actual approach when randomness requirements are so high is to use a hardware random number generator; there are, for example, chips that measure radioactive decays in background radiation to generate truly random seeds. Of course, the nature of randomness is such that there is never a guarantee a pattern can't repeat, or you'd be damaging the actual randomness of the result. But the pattern can't be reproduced by any technical/mathematical means, so any repeats are meaningless.
At every step in the execution of a computer program, the total internal state determines what the next total internal state will be. This internal state must be represented by some number of bits--all of the memory used by the program, registers of the processor, anything else that affects it. There can only be 2**N possible states given N bits of state information.
Since any given state T will lead to the same state T+1 (that's what "deterministic" means), the algorithm must eventually repeat itself after no more than 2**N steps. So what limits the cycle length of an RNG is the number of bits of internal state. A simple LCG might have only 32 bits of state, and therefore a cycle <= 2^32. Something like Mersenne Twister has 19968 bits of internal state, and its period is 2^19937-1.
So for any deterministic algorithm to be "unrepeatable in the history of the Universe", you'll probably need most of the atoms of the Universe to be memory for its internal state.
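A toy illustration of that bound (my own sketch, not from the answer; the parameters are deliberately tiny): an LCG with only 8 bits of state must start repeating within 256 steps, and simple cycle detection confirms it.

def tiny_lcg(state):
    # A deliberately tiny LCG: 8 bits of state, parameters chosen for full period.
    return (37 * state + 1) % 256

seen = {}
state = 1
step = 0
while state not in seen:
    seen[state] = step
    state = tiny_lcg(state)
    step += 1
print("cycle length:", step - seen[state])   # 256, bounded by the 2**8 possible states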
This is a question about an SO question; I don't think it belongs on meta despite arguably being so by definition, but if someone feels it should go to Math, Cross Validated, etc., please let me know.
Background:
#ForceBru asked this question about how to generate a 64-bit random number using rand(). #nwellnhof provided an accepted answer that basically takes the low 15 bits of 5 random numbers (because RAND_MAX is apparently only guaranteed to be 15 bits on at least some compilers), glues them together, and then drops the first 11 bits (15*5-64=11). #NikBougalis commented that while this seems reasonable, it won't pass many statistical tests of randomness. #Foon (me) asked for a citation or an example of a test that it would fail. #NikBougalis replied with an answer that didn't enlighten me; #DavidSwartz suggested running it against dieharder.
So, I ran dieharder against the algorithm in question:
unsigned long long llrand() {
    unsigned long long r = 0;
    for (int i = 0; i < 5; ++i) {
        r = (r << 15) | (rand() & 0x7FFF);
    }
    return r & 0xFFFFFFFFFFFFFFFFULL;
}
For comparison, I also ran it against just rand() and against just 8 bits of rand() at a time:
#include <stdio.h>
#include <stdlib.h>

void rand_test()
{
    int x;
    srand(1);
    while (1)
    {
        x = rand();
        fwrite(&x, sizeof(x), 1, stdout);
    }
}

void rand_byte_test()
{
    int x;
    unsigned char c;
    srand(1);
    while (1)
    {
        x = rand();
        c = x % 256;
        fwrite(&c, sizeof(c), 1, stdout);
    }
}
The algorithm in question came back with two tests showing weaknesses: rgb_lagged_sum for ntuple=28 and one of the sts_serial tests for ntuple=8.

Just using rand() failed horribly on many tests, presumably because I'm taking a number that has 15 bits of randomness and passing it off as 32 bits of randomness.

Using the low 8 bits of rand() at a time came back as weak for rgb_lagged_sum with ntuple=2, and (edit) failed dab_monobit with ntuple=12.
My questions are:

1. Am I interpreting the results for the 8-bits-of-rand() case correctly, namely that, given that one test marked "good" came back weak and one failed (for the record, it also came back weak on one of the dieharder tests marked "suspect"), rand()'s randomness should be suspected?
2. Am I interpreting the results for the algorithm under test correctly (namely that it should also be marginally suspected)?
3. Given the description of what the tests that came back weak do (e.g., sts_serial looks at whether the distribution of bit patterns of a certain size is valid), should I be able to determine what the bias likely is?
4. If the answer to 3 is yes, since I'm not seeing it, can someone point out what I should be seeing?
Edit: understood that rand() isn't guaranteed to be great. Also, I tried to think about which values would be less likely, and surmised that zero, the max value, or repeated numbers might be... but in a test of 1,000,000,000 tries, the ratio is very near the expected value of 1 out of every 2^15 draws (e.g., in 1,000,000,000 runs we saw 30512 zeros, 30444 max values, and 30301 repeats, and bc says that 30512 * 2^15 is 999817216; other runs had similar ratios, including cases where the max and/or repeat count was larger than the zero count).
When you run dieharder the column you really need to watch is the p-value column.
The p-value column essentially says: "This is the probability that real random numbers could have produced this result." You want it to be uniformly distributed between 0 and 1.
You'll also want to run it multiple times on suspect cases. For instance, if a test comes back with a p-value of .03 and, when you re-run it, you get .03 again (rather than some higher value), you can have high confidence that your random number generator performs poorly on that test and it's not just a 3% fluke. However, if you get a high value on the re-run, then you're probably looking at a statistical fluke. But it cuts both ways.
Ultimately, knowing facts about random or pseudorandom processes is difficult. But armed with dieharder you have approximate knowledge of many things.
For several valid reasons I have to use BSD's random() to generate awfully large amounts of random numbers, and since its cycle is quite short (~2^69, if I'm not mistaken) the quality of such numbers degrades pretty quickly for my use case. I could use the rng board I have access to but it's painfully slow so I thought I could do this trick: take one number from the board, use it to seed random(), use random() to draw numbers and reseed it when the board says a new number is available. The board generates about 100 numbers per second so my guess is that random() hardly gets to cycle over and the generation rate easily keeps up with my requirements of several millions numbers per second.
Anyway, the problem is that random() claims to uniformly draw numbers between 0 and (2^31)-1, but I've drawn a huge number of values and I've never ever seen a 0 nor a (2^31)-1 so far. Maybe the odd 1 and (2^31)-2, but I've never seen the extremes. Now, I know the problem with random numbers is that you can never be sure (see Dilbert, Debian), but this seems extremely odd nonetheless. Moreover, I tried analysing the generated datasets with Octave using the histc() function, and the lowest and the highest bins contain between half and three quarters of the count of the middle bins (which in turn are uniformly filled, so I guess in some sense the distribution is "uniform").
Can anybody explain this?
EDIT: Some code

The board outputs a structure with three components, and then I do some mumbo-jumbo combining them to produce the seed. I have no specs for this board; it's an ancient piece of hardware thrown together by a previous student some years ago, there's little documentation, and the formula I'm using is one of those suggested in the docs. The STEP parameter tells me how many numbers I can draw using one seed, so I can optimise performance and throttle down CPU usage at the same time.
float n = fabsf(fmod(sqrt(a.s1*a.s1 + a.s2*a.s2 + a.s3*a.s3), 1.0));
unsigned int seed = n * UINT32_MAX;
srandom(seed);
for (int i = 0; i < STEP; i++) {
    long r = random();
    n = (float)r / (UINT32_MAX >> 1);
    [_numbers addObject:[NSNumber numberWithFloat:n]];
}
Are you certain that
int main(void) {
    while (random() != 0L);
}
hangs indefinitely? On my Linux machine (the GNU C library uses the same linear feedback shift register as BSD, albeit with a different seeding procedure) it doesn't.
According to this reference the algorithm produces 'runs' of consecutive zeroes or ones up to length n-1 where n is the size of the shift register. When this has a size of 31 integers (the default case) we can even be certain that, eventually, random() will return 0 a whopping 30 (but never 31) times in a row! Of course, we may have to wait a few centuries to see it happening...
To extend the cycle length, one method is to run two RNGs, with different periods, and XOR their output. See L'Ecuyer 1988 for some examples.
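A toy illustration of that combination idea (my own sketch, not taken from L'Ecuyer; the parameters are deliberately tiny): the pair of internal states of two small full-period generators with coprime periods only repeats after the product of the periods, which is what bounds the period of an XOR-combined output stream.

def step(s1, s2):
    # Advance two tiny generators: an 8-bit LCG with full period 256,
    # and a mod-255 additive generator with period 255 (256 and 255 are coprime).
    return (37 * s1 + 1) % 256, (s2 + 7) % 255

s = start = (1, 1)
period = 0
while True:
    s = step(*s)
    period += 1
    if s == start:
        break
print(period)   # 65280 = 256 * 255, far longer than either generator alone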