I need a PRNG that is not only repeatable, but can also traverse its internal states in both directions. E.g.:
r = prng_from_seed(seed)
r.next # => 0.12332132412
r.next # => 0.57842057177
r.next # => 0.74782915912
r.prev # => 0.57842057177
r.next # => 0.74782915912
What relatively strong PRNG algorithms have this feature?
Use a simple counter as the seed, increasing it with next and decreasing it with prev. Then use this counter as the seed of another random generator that produces the actual random number.
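A minimal sketch of this scheme in Python (random.Random here is just a stand-in for whichever generator you prefer; reseeding on every call is slow but simple):

import random

class ReversiblePRNG:
    # Counter-as-seed: each output is the first draw of a fresh
    # generator seeded with seed + counter.
    def __init__(self, seed):
        self.seed = seed
        self.counter = 0

    def _value(self):
        return random.Random(self.seed + self.counter).random()

    def next(self):
        self.counter += 1   # step forward
        return self._value()

    def prev(self):
        self.counter -= 1   # step backward
        return self._value()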
There are RNGs which have fast logarithmic (log2(N)) forward and backward skip-ahead (a.k.a. leap-frog). They are based on the paper by F. Brown, "Random Number Generation with Arbitrary Stride," Trans. Am. Nucl. Soc. (Nov. 1994).
A relatively simple one is used in the famous Monte Carlo code MCNP; Fortran and C++ implementations are at https://github.com/Iwan-Zotow/LCG-PLE63, though it won't pass tests like Diehard.
A bit more complex, but of excellent quality, is the PCG family of generators at http://www.pcg-random.org/
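A sketch of the logarithmic skip-ahead for a plain LCG, following Brown's repeated-squaring idea (the constants are the common Numerical Recipes ones, chosen purely for illustration):

def lcg_skip(state, k, a=1664525, c=1013904223, m=2**32):
    # Jump the LCG x -> (a*x + c) % m forward by k steps in O(log k).
    # For a full-period LCG, jumping backward by k is a forward jump of m - k.
    A, C = 1, 0                  # accumulated affine map, starts as the identity
    while k > 0:
        if k & 1:
            A = (A * a) % m      # fold the current doubled map
            C = (C * a + c) % m  # into the accumulator
        c = (a * c + c) % m      # double the map: (a, c) -> (a*a, a*c + c)
        a = (a * a) % m
        k >>= 1
    return (A * state + C) % m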
Pretty much ALL PRNGs have this feature, but nobody ever implements the go-to-previous state method.
Consider:
Since a PRNG has a finite-sized state, calling next() repeatedly will eventually get into a state you've been in before.
If you start off in a state that will eventually be reached again by making N calls to next(), then you can always find a predecessor state by making N-1 calls to next().
Of course, in real life you would want a much more efficient way to implement prev(). How to do it would depend on the RNG, but it's generally not too hard.
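For example, an LCG step with an odd multiplier and power-of-two modulus can be undone with one modular inverse (illustrative constants again; pow(a, -1, m) needs Python 3.8+):

A, C, M = 1664525, 1013904223, 2**32
A_INV = pow(A, -1, M)   # the inverse exists because gcd(A, M) = 1

def lcg_next(x):
    return (A * x + C) % M

def lcg_prev(x):
    return (A_INV * (x - C)) % M   # exactly undoes lcg_next

assert lcg_prev(lcg_next(123456789)) == 123456789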
Note that, in order to make the best use of their state, PRNGs are designed to have cycles that are as big as possible. Usually all accessible states are in the same cycle.
EDIT:
I should add that when people want to do this in real life, they often just use a counter as the internal state and hash it (or encrypt it with a fixed key) to generate each random number. In a cryptographic context this is called "CTR mode" encryption, which is googlable.
Of course, if you do this, then you get random access to every state.
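A hedged sketch of that counter-hashing idea in Python (SHA-256 as the mixing function; any keyed hash or block cipher would do):

import hashlib

def nth_random(key, n):
    # Random access: the n-th output is just a hash of (key, n), no stepping needed.
    digest = hashlib.sha256(f"{key}:{n}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

Here nth_random("my-seed", 0) and nth_random("my-seed", 10**6) cost exactly the same.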
Related
I have searched for pseudo-RNG algorithms but all I can find seem to generate the next number by using the previous result as seed. Is there a way to generate them non-recursively?
The scenario where I need this is during OpenCL concurrent programming, each thread/pixel needs an independent RNG. I tried to seed them using BIG_NUMBER + work_id, but the result has a strong visual pattern in it. I tried several different RNG algorithms and all have this problem. Apparently they only guarantee the numbers are independent if you generate recursively, but not when you use sequential numbers as seed.
So my question is: Can I generate an array of random numbers, from an array of sequential numbers, independently and constant time for each number? Or is it mathematically impossible?
As the solution to my OpenCL problem, I can just pre-generate a huge array of random numbers recursively first, store it in GPU memory, then use the seed as an index later. But I'm curious about the question above, because it seems very possible just by doing a bunch of overflows and cutoffs, according to my very simple understanding of chaos theory.
Can I generate an array of random numbers, from an array of sequential numbers, independently and constant time for each number? Or is it mathematically impossible?
Sure you can - use a block cipher in counter mode. This is generally known as a counter-based RNG; the first widely used one was the Fortuna RNG.
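As a sketch of the per-item idea (in Python rather than OpenCL, and using hashlib.blake2b as a keyed stand-in for a real block cipher):

import hashlib

def work_item_random(seed: bytes, work_id: int) -> float:
    # Keyed hash of the work id: each item gets an independent value
    # in constant time, with no recurrence between items.
    h = hashlib.blake2b(work_id.to_bytes(8, "little"), key=seed, digest_size=8)
    return int.from_bytes(h.digest(), "little") / 2**64

values = [work_item_random(b"global seed", i) for i in range(1024)]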
I am looking for a satisfying answer to how to generate a random number.
I looked at this, this, this and this.
But I am looking for something else.
Most of the posts mention using the recurrence R[n+1] = (a * R[n] + b) % m, or some other mathematical function, as the pseudo-random generator.
But, weirdly, I am not looking for these; I want a non-algorithmic answer. Precisely, an "interview" answer: something easy to understand, and that won't make the interviewer feel that I mugged up a method :) .
For an interview question, a common answer might be to look at the intervals between keystrokes (ask the user to type something), disk seek times, or input from a disconnected audio source -- that will give you thermal noise from inside your mic socket or whatever.
LavaRnd uses a digital camera with the lens cap on, which is a version of the last.
Some operating systems allow indirect access to some of this random input, usually through a secure random function; it is slower but more secure than the usual RNG.
Depending on what job the interview is for, you can talk about testing the raw data to check for entropy, and concentrating the entropy by using a cryptographic hash function like SHA-256.
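A toy sketch of that concentration step in Python (timing jitter here is a weak stand-in for real keystroke or seek timings; don't use this for anything serious):

import hashlib
import time

def timing_samples(n=64):
    # Crude jitter source: differences between consecutive clock reads.
    samples, last = [], time.perf_counter_ns()
    for _ in range(n):
        now = time.perf_counter_ns()
        samples.append(now - last)
        last = now
    return samples

def concentrated_seed():
    raw = b"".join(s.to_bytes(8, "little") for s in timing_samples())
    return hashlib.sha256(raw).digest()   # 256 bits with the entropy concentrated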
There are also specialised, and expensive, hardware cards which use various quantum effects to generate true random numbers.
Take the system time, add a seed, and take it modulo the upper limit. If the upper limit is less than 0, multiply it by -1 and then subtract the result from the max later... not very strong, but it meets your requirement?
If you have a UI and only need a couple of random numbers, you can ask the user to move the mouse around, enter a few seeds, or type a few words, and use those as seeds.
I'm not sure StackOverflow is the right place to ask this question, because it is half programming and half mathematics. And I'm really sorry if my question is stupid ^_^
I'm studying Monte Carlo simulations via the "Monte Carlo Methods" book. One of the first things I must learn about is the Random Number Generator. The basic algorithm of an RNG is:
1. Initialize: Draw the seed S_0 from the distribution µ on S. Set t = 1.
2. Transition: Set S_t = f(S_{t-1}).
3. Output: Set U_t = g(S_t).
4. Repeat: Set t = t + 1 and return to Step 2.
(µ is a probability distribution on the finite set of states S; the input is S_0, and the random number we desire is the output U_t.)
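In code form, the four steps might look like this tiny Python sketch (the constants are an illustrative LCG, not from the book):

def rng(seed, a=1664525, c=1013904223, m=2**32):
    state = seed                      # Step 1: initialize with the seed S_0
    while True:                       # Step 4: repeat
        state = (a * state + c) % m   # Step 2: transition S_t = f(S_{t-1})
        yield state / m               # Step 3: output U_t = g(S_t) in [0, 1)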
It is not hard to understand, but the problem is that I don't see the random factor; it seems to lie only in the number of repeats. How can we decide when to stop the RNG loop? All the examples I have read that implement an RNG loop 100 times, and they return the same values for a specific seed. That is not random at all >_<
Can someone explain what I'm missing here? Any help will be appreciated. Thanks everyone
You can't get a true sequence of random numbers on a computer, without specialized hardware. (Such specialized hardware performs the equivalent of an initial roll of the dice using physics to provide the randomness. Electronic ones often use the electronic noise of specialized diodes at constant temperatures; others use radioactive decay events.)
Without that specialized hardware, what you can generate are pseudorandom numbers which, as you've observed, always generate the same sequence of numbers for the same initial seed. For simple applications, you can often get away with generating an initial seed from the time of invocation, which is effectively random.
And when I say "simple applications," I am excluding cryptography. (Not just that, but especially that.)
Sometimes when you are trying to debug a simulation, you actually want a reproducible stream of "random" numbers, so you might specifically set the stream to start with a specific seed.
For instance, in the answer to Creating a facet_wrap plot with ggplot2 with different annotations in each plot, rcs starts by creating a reproducible set of data using the R code
set.seed(1)
df <- data.frame(x=rnorm(300), y=rnorm(300), cl=gl(3,100)) # create test data
before going on to demonstrate how to answer the actual question.
This question is NOT about how to use any language to generate a random number between any interval. It is about generating either 0 or 1.
I understand that many random-generator algorithms manipulate a very basic random(0 or 1) function, take a seed from the user, and use an algorithm to generate various random numbers as needed.
The question is: how does the CPU generate either 0 or 1? If I throw a coin, I can generate heads or tails, because I physically throw the coin and let nature decide. But how does a CPU do it? There must be some action the CPU performs (like throwing a coin) to get either 0 or 1 randomly, right?
Could anyone tell me about it?
Thanks
(This has several facets and thus several algorithms. Keep in mind that there are many different forms of randomness used for different purposes, but I understand your question in the way that you are interested in actual randomness used for cryptography.)
The fundamental problem here is that computers are (mostly) deterministic machines. Given the same input in the same state they always yield the same result. However, there are a few ways of actually gathering entropy:
User input. Since users bring outside input into the system, you can use it to derive some bits, similar to how you could use radioactive decay or line noise.
Network activity. Again, an outside source of stuff.
Generally interrupts (which kinda include the first two).
As alluded to in the first item, noise from peripherals, such as audio input or a webcam can be used.
There is dedicated hardware that can generate a few hundred MiB of randomness per second. Usually they give you random numbers directly instead of their internal entropy, though.
How exactly you derive bits from these sources is up to you; you could use the time between events, the actual content of the events, etc. Generally, eliminating bias from entropy sources isn't easy or trivial, and a lot of thought and algorithmic work goes into it (in the case of the aforementioned special hardware, this is all done in hardware and the code using it doesn't need to care).
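One classic debiasing trick is the von Neumann extractor; a short Python sketch (it assumes the input bits are independent, which real sources often violate):

def von_neumann(bits):
    # Read bits in pairs: 01 -> 0, 10 -> 1, discard 00 and 11.
    # Turns independent-but-biased coin flips into unbiased ones.
    return [b1 for b1, b2 in zip(bits[::2], bits[1::2]) if b1 != b2]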
Once you have a pool of actually random bits you can just use them as random numbers (/dev/random on Linux does that). But this has downsides: there is usually little actual entropy and possibly a higher demand for random numbers. So you can use algorithms to "stretch" that initial randomness in a manner that makes it still impossible, or at least very difficult, to predict anything about the following numbers (/dev/urandom on Linux, or both /dev/random and /dev/urandom on FreeBSD, do that). Fortuna and Yarrow are so-called cryptographically secure pseudo-random number generators, designed with exactly that in mind. You still get a very good guarantee about the quality of the random numbers, but you can generate many more of them before the entropy pool runs out.
In any case, the CPU itself cannot give you a random 0 or 1. There's a lot more involved and this usually includes the complete computer system or special hardware built for that purpose.
There is also a second class of computational randomness: Plain vanilla pseudo-random number generators (PRNGs). What I said earlier about determinism – this is the embodiment of it. Given the same so-called seed a PRNG will yield the exact same sequence of numbers every time¹. While this sounds idiotic it has practical benefits.
Suppose you run a simulation involving lots of random numbers, maybe to simulate interaction between molecules or atoms that involve certain probabilities and unpredictable behaviour. In science you want results anyone can independently verify, given the same setup and procedure (or, with computing, the same algorithms). If you used actual randomness the only option you have would be to save every single random number used to make sure others can replicate the results independently.
But with a PRNG all you need to save is the seed and remember what algorithm you used. Others can then get the exact same sequence of pseudo-random numbers independently. Very nice property to have :-)
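A short Python illustration of that property:

import random

def simulate(seed, n=5):
    rng = random.Random(seed)   # private generator, fully determined by the seed
    return [rng.random() for _ in range(n)]

assert simulate(12345) == simulate(12345)   # same seed, same sequence, anywhere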
Footnotes
¹ This even includes the CSPRNGs mentioned above, but they are designed to be used in a special way that includes regular re-seeding with entropy to overcome that problem.
A CPU can only generate a uniform random number, U(0,1), which ranges from 0 to 1. Mathematically, it is a random variable U on the interval [0,1]. Example draws of a U(0,1) random number would be 0.28100002, 0.34522, 0.7921, etc. Any value between 0 and 1 is equally likely, i.e., the values are equiprobable.
You can generate binary random variates that are either 0 or 1 by mapping a draw of U(0,1) to 0 if U(0,1) <= 0.5 and to 1 if U(0,1) > 0.5, since in theory an equal number of draws of U(0,1) will fall below 0.5 and above 0.5.
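In Python, for instance, that thresholding is a one-liner:

import random

def random_bit():
    # Map a U(0,1) draw to 0 or 1 by thresholding at 0.5.
    return 0 if random.random() <= 0.5 else 1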
I'm trying to seed a random number generator with the output of a hash. Currently I'm computing a SHA-1 hash, converting it to a giant integer, and feeding it to srand to initialize the RNG. This is so that I can get a predictable set of random numbers for an infinite set of Cartesian coordinates (I'm hashing the coordinates).
I'm wondering whether Kernel::srand actually has a maximum value that it'll take, after which it truncates it in some way. The docs don't really make this obvious - they just say "a number".
I'll try to figure it out myself, but I'm assuming somebody out there has run into this already.
Knowing what programmers are like, it probably just calls libc's srand(). Either way, it's probably limited to 2^32-1, 2^31-1, 2^16-1, or 2^15-1.
There's also a danger that the value is clipped when cast from a biginteger to a C int/long, instead of only taking the low-order bits.
An easy test is to seed with 1 and take the first output. Then, seed with 2**i + 1 for i in [1..64] or so, take the first output of each, and compare. If you get a match for some i = n and all greater i, then it's probably doing arithmetic modulo 2**n.
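The shape of that probe, sketched in Python (Python's Mersenne Twister actually uses the whole seed, so this particular run finds nothing; it's the technique that matters):

import random

def first_output(seed):
    return random.Random(seed).random()

base = first_output(1)
for i in range(1, 65):
    if first_output(2**i + 1) == base:   # high seed bits discarded?
        print("seeding appears to work modulo 2**%d" % i)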
Note that the random number generator is almost certainly limited to 32 or 48 bits of entropy anyway, so there's little point seeding it with a huge value, and an attacker can reasonably easily predict future outputs given past outputs (and an "attacker" could simply be a player on a public nethack server).
EDIT: So I was wrong.
According to the docs for Kernel::rand(),
Ruby currently uses a modified Mersenne Twister with a period of 2**19937-1.
This means it's not just a call to libc's rand(). The Mersenne Twister is statistically superior (but not cryptographically secure). But anyway.
Testing using Kernel::srand(0); Kernel::sprintf("%x",Kernel::rand(2**32)) for various output sizes (2**16, 2**32, 2**36, 2**60, 2**64, 2**32+1, 2**35, 2**34+1), a few things are evident:
It figures out how many bits it needs (number of bits in max-1).
It generates output in groups of 32 bits, most-significant-bits-first, and drops the top bits (i.e. 0x[r0][r1][r2][r3][r4] with the top bits masked off).
If it's not less than max, it does some sort of retry. It's not obvious what this is from the output.
If it is less than max, it outputs the result.
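That behaviour matches textbook rejection sampling; a minimal Python sketch of the same scheme:

import random

def bounded_rand(max_value):
    k = (max_value - 1).bit_length()   # how many bits are needed
    while True:
        r = random.getrandbits(k)      # draw k bits, top bits masked off
        if r < max_value:              # out of range: retry
            return r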
I'm not sure why 2**32+1 and 2**64+1 are special (they produce the same output from Kernel::rand(2**1024), so they probably have the exact same state); I haven't found another collision.
The good news is that it doesn't simply clip to some arbitrary maximum (i.e. passing in huge numbers isn't equivalent to passing in 2**31-1), which is the most obvious thing that can go wrong. Kernel::srand() also returns the previous seed, which appears to be 128-bit, so it seems likely to be safe to pass in something large.
EDIT 2: Of course, there's no guarantee that the output will be reproducible between different Ruby versions (the docs merely say what it "currently uses"; apparently this was initially committed in 2002). Java has several portable deterministic PRNGs (SecureRandom.getInstance("SHA1PRNG","SUN"), albeit slow); I'm not aware of something similar for Ruby.