What is the best method of generating a number with 256 random bits?
Does concatenating random bytes work?
using System;
using System.Security.Cryptography;

byte[] data = new byte[32];
RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
rng.GetBytes(data); // GetNonZeroBytes would exclude zero bytes and skew the distribution
string number = BitConverter.ToString(data).Replace("-", "");
Furthermore, would it be appropriate to shuffle a deck of cards by sorting on non-duplicate values of these numbers?
Whether or not you can concatenate random bytes depends on the random number generator you are using. Some random number generators exhibit serial correlation; for those generators, concatenating bytes would be a bad idea.
If you are using these random numbers for cryptographic purposes, you should look at Blum Blum Shub. Otherwise, look at the Mersenne Twister.
For shuffling a finite set, look at the Fisher-Yates shuffle.
The correct way to shuffle a deck of cards is with a Knuth shuffle. It's simple and perfect: "perfect" meaning that all possible card orderings are equally likely, assuming the use of a good RNG.
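For reference, a minimal sketch of the Knuth (Fisher-Yates) shuffle in C#. System.Random here is only a stand-in; for secure shuffling you would draw the index from a cryptographic RNG instead:

// Walk the array from the end, swapping each position
// with a uniformly chosen position at or below it.
static void Shuffle<T>(T[] items, Random rng)
{
    for (int i = items.Length - 1; i > 0; i--)
    {
        int j = rng.Next(i + 1); // uniform in 0..i inclusive
        (items[i], items[j]) = (items[j], items[i]);
    }
}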
Yes, concatenating random bytes would work.
EDIT: Not sure why you would need 256 bits to shuffle a deck of cards; can you expand on that part further?
If the random byte generator is good, any method works equally well, and your card-shuffling approach is also appropriate.
Take a fast PRNG like xoroshiro or xorshift and a 'true', entropy-based generator like /dev/random.
Seed the PRNG with 'true' randomness, but also draw a single number from the 'true' source and XOR it into every result the PRNG produces to form the final output.
Then replace this number once in a while (e.g. after 10,000 random numbers have been generated).
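For concreteness, a rough sketch of what I mean (xorshift64 as the stand-in PRNG; TrueRandomUInt64 is a hypothetical wrapper around an entropy source such as /dev/random):

using System;
using System.Security.Cryptography;

class MaskedPrng
{
    ulong state; // xorshift64 state, seeded from the entropy source
    ulong mask;  // 'true' random mask XORed into every output
    int used;    // outputs produced since the mask was last refreshed

    public MaskedPrng()
    {
        state = TrueRandomUInt64() | 1; // avoid the all-zero state
        mask = TrueRandomUInt64();
    }

    public ulong Next()
    {
        state ^= state << 13; // plain xorshift64 step
        state ^= state >> 7;
        state ^= state << 17;
        if (++used == 10000) { mask = TrueRandomUInt64(); used = 0; } // refresh the mask
        return state ^ mask;
    }

    static ulong TrueRandomUInt64()
    {
        byte[] b = new byte[8];
        RandomNumberGenerator.Fill(b); // cryptographic entropy stands in for /dev/random
        return BitConverter.ToUInt64(b, 0);
    }
}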
Perhaps this is naive, but I would hope it improves some properties of the PRNG, such as period length, with negligible impact on speed. What am I getting wrong?
What I am concerned about here is generating UUIDs (fast), which are basically 128-bit numbers that should be "really unique". My concern is that with modern PRNGs like the xorshift family, whose periods are close to 'just' 2^128, the chance of a collision from an entropy-seeded PRNG is not as negligible as it would be with truly random numbers.
The improvements are only minor compared to the plain PRNG. For example, the single true random number used for masking the result can be eliminated by taking the XOR of successive outputs: (p XOR m) XOR (q XOR m) = p XOR q, because the mask m cancels, so this is exactly the XOR of successive plain PRNG outputs. So if you can predict the PRNG, it is not too hard to do the same to the "improved" sequence.
We want to generate a uniform random number from the interval [0, 1].
Let's first generate k random booleans (for example by rand() < 0.5) and use them to decide which subinterval [m*2^{-k}, (m+1)*2^{-k}] the number will fall into. Then we use one rand() call to produce the final output as m*2^{-k} + rand()*2^{-k}.
Let's assume we have arbitrary precision.
Will a random number generated this way be 'more random' than the usual rand()?
PS. I guess the subinterval picking amounts to just choosing the binary representation of the output 0.b_1 b_2 b_3 ... one digit b_i at a time, and the final step appends the representation of rand() to the end of the output.
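A sketch of the construction (illustrative only; System.Random stands in for rand(), and k is fixed at 8):

var rnd = new Random();
int k = 8;
long m = 0;
for (int i = 0; i < k; i++)
    m = (m << 1) | (rnd.NextDouble() < 0.5 ? 1L : 0L); // one subinterval bit per draw
double scale = Math.Pow(2, -k);
double x = m * scale + rnd.NextDouble() * scale; // lands in [m*2^{-k}, (m+1)*2^{-k})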
It depends on the definition of "more random". If you use more random generators, you have more random state, which means the cycle length will be greater. But cycle length is just one property of a random generator. A cycle length of 2^64 is usually OK for almost any purpose (the only exception I know of is when you need a lot of different, long sequences, as in some kinds of simulation).
However, if you combine two bad random generators, they don't necessarily become better; you have to analyze the combination. But there are generators that do work this way. KISS is an example: it combines three not-too-good generators, and the result is a good generator.
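For reference, a sketch of one widely circulated formulation of Marsaglia's KISS (the constants follow the 1999 posting; treat this as illustrative rather than a vetted implementation):

// KISS: add together a linear congruential generator, a 3-shift
// xorshift, and a multiply-with-carry generator.
class Kiss
{
    uint x = 123456789, y = 362436000, z = 521288629, c = 7654321;

    public uint Next()
    {
        x = 69069 * x + 12345;                   // LCG
        y ^= y << 13; y ^= y >> 17; y ^= y << 5; // xorshift
        ulong t = 698769069UL * z + c;           // multiply-with-carry
        c = (uint)(t >> 32);
        z = (uint)t;
        return x + y + z;
    }
}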
For card shuffling, you'll need a cryptographic RNG. Even a very good non-cryptographic RNG is inadequate for this purpose. For example, the Mersenne Twister, which is a good RNG, is not suitable for secure card shuffling: by observing its output numbers it is possible to recover its internal state, so the shuffle result can be predicted.
This can help, but only if you use a different pseudorandom generator for the first and last bits. (It doesn't have to be a different pseudorandom algorithm, just a different seed.)
If you use the same generator, then you will still only be able to construct at most 2^n different shuffles, where n is the number of bits in the random generator's state.
If you have two generators, each with n bits of state, then you can produce up to a total of 2^(2n) different shuffles.
Tinkering with a random number generator, as you are doing by using only one bit of its output at a time and calling it iteratively, usually weakens its random properties. All RNGs fail some statistical tests for randomness, but you are more likely to find that a noticeable cycle crops up if you start making many calls and combining the results.
I would like to apply a permutation test to a sequence with 4,000,000 elements. To my knowledge, this is infeasible because the number of possible permutations is ridiculously large (no RNG will generate uniformly distributed values in the range {1, ..., 4000000!}). I've heard of pseudorandom permutations, though, and it sounds like something I need, but I can't tell whether it's actually a proper replacement for a random shuffle in my case.
If you are running a permutation test I presume that you want to generate a random sample from the set of all possible permutations, so that you can test some statistic calculated on the real data against the distribution of statistics calculated on the permuted data.
Algorithms for generating random permutations, such as those described at http://en.wikipedia.org/wiki/Random_permutation, typically consume many small random numbers, so no single step of the generation process needs numbers anywhere near as large as 4000000!. The only worry is that, since the seed used to generate the random numbers is typically much smaller than 4000000!, not all permutations are reachable.
There are other statistical tests which consume very large quantities of pseudo-random numbers (e.g. MCMC), so I wouldn't worry about this if you are using a random number generator that is commonly used for statistical tests. If you are worried, you could repeat the test with a cryptographically secure random number generator, such as http://docs.oracle.com/javase/6/docs/api/java/security/SecureRandom.html. This will be slower, so you might need to reduce the number of permutations tested, but it is very unlikely to have any characteristic which would stand out far enough to affect your test results: any such characteristic would be a security weakness, because it would mean that, given a large quantity of already-generated numbers, you would have a slightly better than random chance of guessing the next number correctly.
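In C# (used elsewhere on this page), the secure variant would look something like the sketch below; RandomNumberGenerator.GetInt32 returns an unbiased value below the bound, which is the main subtlety when shuffling with a cryptographic RNG:

using System.Security.Cryptography;

// Generate one random permutation of 0..n-1 with a cryptographically
// secure RNG, via a Fisher-Yates shuffle.
static int[] RandomPermutation(int n)
{
    int[] p = new int[n];
    for (int i = 0; i < n; i++) p[i] = i;
    for (int i = n - 1; i > 0; i--)
    {
        int j = RandomNumberGenerator.GetInt32(i + 1); // secure, unbiased index in 0..i
        (p[i], p[j]) = (p[j], p[i]);
    }
    return p;
}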
I'm trying to generate some 8-bit random numbers in C++ and don't want to use division (like rand() % 8 or any scaling method).
One algorithm I found online is the Park-Miller-Carta pseudo-random number generator.
It is a 32-bit random number generator that uses no divisions. From these random numbers I'm trying to extract the lower or upper 8 bits to get random bytes, but this does not seem to work because those bits are not very random.
Are there any tricks to fix this or are there any other algorithms that can do the trick?
How about XORing together the four bytes of the 32-bit random integer?
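A sketch of that folding step, written in C# to match the other snippets on this page (the same two shifts work in C++):

// Fold a 32-bit value into one byte by XORing its four bytes together,
// so the result depends on bits from the whole word.
static byte FoldToByte(uint x)
{
    x ^= x >> 16; // mix the high half into the low half
    x ^= x >> 8;  // mix again
    return (byte)x; // the low byte is now b0 ^ b1 ^ b2 ^ b3
}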
So I get that all the built-in functions only return pseudo-random numbers, as they use the clock or some other hardware state to get the number.
So here's my idea: if I take two pseudo-random numbers and bitwise them together, would the result still be pseudo-random, or would it be closer to truly random?
I figured that if I fiddled about with the numbers a bit, the result would be less replicable. Or am I getting this wrong?
On a side note, why is pseudo-randomness a problem?
It will not be more random; in fact, there is a big risk that the number will be less random (less uniformly distributed). Which bitwise operator were you thinking of?
Let's assume two 4-bit random numbers, 0101 and 1000. ORed together, you would get 1101. With OR there would be a clear bias towards 1111, and with AND towards 0000 (a 75% chance of getting a 1 or a 0, respectively, in each position).
I don't think XOR or XNOR would be biased. But you also wouldn't get any more randomness out of it (see Pavium's answer).
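A quick empirical check of that bias (illustrative only; System.Random is plenty for this purpose):

// Fraction of 1-bits when OR-ing pairs of uniform 4-bit values.
var rnd = new Random();
int ones = 0, trials = 1000000;
for (int i = 0; i < trials; i++)
{
    int v = rnd.Next(16) | rnd.Next(16); // try & or ^ here for comparison
    for (int bit = 0; bit < 4; bit++)
        ones += (v >> bit) & 1;
}
Console.WriteLine((double)ones / (trials * 4.0)); // ~0.75 for OR, ~0.25 for AND, ~0.5 for XOR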
Algorithms executed by computers are deterministic.
You can only generate truly random numbers if there's a non-deterministic input.
Pseudo-random numbers follow a repeating sequence. It may be a long sequence, but the repetition makes them predictable and therefore not truly random.
You can't generate truly random numbers from two pseudo-random numbers.