Generating Gaussian Random Numbers without a Uniform Random Number Generator - algorithm

I know of many uniform random number generators (RNGs) based on algorithms, physical systems, and so on. Eventually, all of these produce uniformly distributed random numbers. It would be interesting and important to know whether there are Gaussian RNGs, i.e. algorithms or something else that create Gaussian random numbers directly. More precisely: I don't want to use transformations such as Box–Muller or the Marsaglia polar method to get Gaussian numbers from a uniform RNG. I am interested in any paper, algorithm, or even idea for creating Gaussian random numbers without any use of uniform RNGs. Put another way: pretend we don't know that uniform random number generators exist.

As already noted in answers/comments, by virtue of the CLT, a sum of iid random numbers can be made into a reasonable-looking Gaussian. If the incoming stream is uniform, this is basically the Bates distribution. Ami Tavory's answer pretty much amounts to using Bates in disguise. You could also look at the closely related Irwin–Hall distribution; at n = 12 or higher it looks a lot like a Gaussian.
There is one method which is used in practice and does not rely on transforming U(0,1): the Wallace method (Wallace, C. S. 1996. "Fast Pseudorandom Generators for Normal and Exponential Variates." ACM Transactions on Mathematical Software.), also known as the Gaussian pool method; a minimal sketch follows below. I would advise reading its description and seeing if it fits your purpose.
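Here is a heavily simplified Python sketch of the Wallace idea, under stated assumptions: the pool is seeded once with N(0, 1) values (random.gauss is used only for that one-time seeding; the steady state never transforms a uniform into a normal), and pool addresses are chosen at random here, whereas a real Wallace generator uses a cheap deterministic address permutation and a periodic variance correction, both omitted for brevity.

import random

SQRT_HALF = 0.5 ** 0.5

class GaussianPool:
    """Wallace-style pool: new normals come from mixing old normals,
    not from transforming uniform variates."""

    def __init__(self, size=1024):
        # One-time seeding of the pool with standard normals.
        self.pool = [random.gauss(0.0, 1.0) for _ in range(size)]

    def next(self):
        # Pick two distinct pool entries and mix them with a 2x2 rotation:
        # (x, y) -> ((x + y)/sqrt(2), (x - y)/sqrt(2)).
        # Orthogonal transforms map iid N(0, 1) pairs to iid N(0, 1) pairs,
        # so the pool stays Gaussian while emitting output.
        i, j = random.sample(range(len(self.pool)), 2)
        x, y = self.pool[i], self.pool[j]
        self.pool[i] = (x + y) * SQRT_HALF
        self.pool[j] = (x - y) * SQRT_HALF
        return self.pool[i]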

As others have noted, it's a bit unclear what your motivation for this is, and therefore I'm not sure if the following answers your question.
Nevertheless, it is possible to generate (an approximation of) this without the specific formulas transforming uniform RNGs that you mention.
As with any RNG, we have to have some source of randomness (or pseudo-randomness). I'm assuming, therefore, that there is some limitless sequence of binary bits which are independently equally likely to be 0 or 1 (note that it's possible to counter that this is a uniform discrete binary RNG, so I'm unsure if this answers your question).
Choose some large fixed n. For each invocation of the RNG, generate n such bits, sum them as x, and return
(2x - n) / √n
By the de Moivre–Laplace theorem, this is approximately normal with mean 0 and variance 1.
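A minimal Python sketch of this construction, using random.getrandbits(1) as a stand-in for any source of independent fair bits (a physical coin-flip source would serve equally well):

import random

def gaussian_from_bits(n=1024):
    # Sum n independent fair bits; x has mean n/2 and variance n/4.
    x = sum(random.getrandbits(1) for _ in range(n))
    # Standardize: (2x - n)/sqrt(n) has mean 0 and variance 1, and by
    # de Moivre-Laplace it is approximately N(0, 1) for large n.
    return (2 * x - n) / n ** 0.5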

Related

Normally distributed random function without irrational operations

I'm working on a game for which I want deterministic demo playback that is portable between architectures that treat floating point numbers differently. I'm using the Racket language, which conveniently has, as a primitive data type, non-floating-point representations of rational-number fractions. I want to use these to implement an approximately normally-distributed random function that accepts parameters for mean and standard deviation (skewness would be gold-plating).
Because of the limitations I've mentioned, any operation that takes in rational numbers and puts out irrational ones will need to be reimplemented from scratch in a way that produces approximations based on Racket's native fractions, not based on floating points. I've looked around at various algorithms for normal random functions, but of these, even many of the "simplest" ones like the Box-Muller transform involve things like square roots, logarithms, and trig functions. Iterated averaging is easy, so square roots aren't a problem, but I don't want to reinvent any more wheels than I need to here.
What are some algorithms I can use for generating approximately normal random numbers without invoking irrational operations like roots, logarithms, and trig functions?
I settled on a solution after typing up this question but before sending it, so I'll Share My Knowledge Q&A-Style.
After poring over several different SO posts on normally-distributed random numbers, I found that the best solution for my purposes was actually the most naive one: abuse the Central Limit Theorem. Random variables of any distribution, when added up, approximate a normal distribution just fine. In Racket, my solution turned out to be the delightfully concise
(define (random/normal μ σ)
  (+ (* (- (for/sum ([i 12])
             (random/uniform 0 1))
           6)
        σ)
     μ))
where random/uniform is my function for generating uniformly random rational numbers.
In infix, imperative pseudocode, this means:
Function random_normal(μ, σ):
    iterations := 12
    sum := 0
    for i from 1 to iterations:
        sum += random_uniform(0, 1)
    sum -= iterations / 2  # center the distribution on 0
    return σ * sum + μ
Why 12 iterations?
A few SO answers mention this solution, but don't explain why 12 is a magic number here. When we add up those numbers, we want the standard deviation of that random sum to equal 1 so that we can stretch out or squish down the bell curve by the desired amount in a single multiplicative step.
If you sum a sample of N random variables, the standard deviation of the approximately normal distribution this creates is equal to √N · σ, where σ is the standard deviation of the variables themselves.* The standard deviation of a uniform random distribution from 0 to 1 is equal to 1/√12,† so by substituting this in for σ we see that what we want is just √N · (1/√12) = 1, which works out easily to N = 12.
* See "Central Limit Theorem" on Wolfram MathWorld. Equation is given under identity (2), here multiplied by N to give the standard deviation of the sum rather than of the average.
† See "Continuous uniform distribution" on Wikipedia. Table on the right, "variance" square-rooted.
But doesn't this limit your range to ±6 standard deviations?
It does, but the range of your distribution has to be truncated somewhere unless you have infinite memory, and ±6σ is A) almost as good as Box-Muller on a 32-bit machine and B) already huge.
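For readers outside Racket, here is a minimal Python sketch of the same sum-of-12 idea using exact rational arithmetic (fractions.Fraction); random_uniform_fraction is a hypothetical stand-in for the asker's random/uniform:

from fractions import Fraction
import random

def random_uniform_fraction(denom=2 ** 32):
    # Uniformly random rational in [0, 1) with a fixed denominator.
    return Fraction(random.randrange(denom), denom)

def random_normal(mu, sigma):
    # Sum of 12 U(0, 1) draws minus 6: mean 0, variance 12 * (1/12) = 1.
    # Every operation stays rational (pass mu and sigma as Fractions or
    # ints to keep it exact); no roots, logs, or trig functions needed.
    s = sum(random_uniform_fraction() for _ in range(12)) - 6
    return sigma * s + mu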

Converting a Uniform Distribution to a Fat-tailed Distribution

This previous SO question regards converting a Uniform distribution to a Normal distribution.
For Monte-Carlo simulations, I have a need not only for Normal (Gaussian), but for some computationally efficient ways to generate large numbers of samples from "fat-tailed" or heavy-tailed distributions, using a given (64-bit or double) uniform RNG as input. Examples of these distributions include: Log-normal, Pareto, Student-T, and Cauchy.
Use of inverse CDFs is acceptable given computationally efficient means of computing the inverse CDF as needed.
The tag is for language-independent algorithms, but the implementations needed are for basic procedural programming languages (C, Basic, procedural Swift, Python, etc.).
A Cauchy random number can be expressed as:
scale * tan(pi * (RNDU01OneExc()-0.5)) + mu
Where RNDU01OneExc() is a random number in [0, 1), and mu and scale are the offset and scale, respectively.
A log-normal random number can be expressed as exp(Normal(mu, sigma)), where Normal(mu, sigma) is a normally distributed random number with mean mu and standard deviation sigma.
These and other kinds of distributions are mentioned in my article on random number generation and sampling.
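Here are minimal Python sketches of these recipes; the Cauchy and log-normal lines follow the formulas above, and Pareto (from the question's list) via its inverse CDF is added for completeness:

import math
import random

def cauchy(mu=0.0, scale=1.0):
    # Inverse-CDF method from above: tan(pi * (u - 0.5)) is standard Cauchy.
    return scale * math.tan(math.pi * (random.random() - 0.5)) + mu

def lognormal(mu=0.0, sigma=1.0):
    # exp of a normal variate, as described above.
    return math.exp(random.gauss(mu, sigma))

def pareto(xm=1.0, alpha=2.0):
    # Inverse Pareto CDF: xm / (1 - u)**(1/alpha); random.random() is in
    # [0, 1), so 1 - u is in (0, 1] and never divides by zero.
    return xm / (1.0 - random.random()) ** (1.0 / alpha)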

Using VB 6.0 to generate pseudorandom numbers with a Gaussian distribution

I would like to generate some pseudorandom numbers on (-infinity, infinity) with a Gaussian distribution of standard deviation s and mean m. Any suggestions about how to do this? I'd appreciate any help in the right direction, as there seems to be a huge literature out there as how best to generate pseudorandom numbers.
You can generate a Gaussian distribution (also known as a normal distribution) by using a uniform random number generator and an appropriate algorithm. Check out [stackoverflow link to Gaussian algorithms][1]
Do you really want to go from +/- infinity? Does that make sense?
A simple algorithm to use is the Box-Muller method.
Normal Dist. Random # = SQRT(-2*LN(RAND()))*SIN(2*PI()*RAND())
The Box–Muller method is mathematically exact if implemented with a perfect uniform random number generator and infinite precision. (In that formula, mu/mean = 0, sigma = 1, and the random numbers are drawn from (0, 1); note that RAND() must not return exactly 0, or LN(0) will fail.) See http://mathworld.wolfram.com/Box-MullerTransformation.html

Pseudorandom Number Generation with Specific Non-Uniform Distributions

I'm writing a program that simulates various random walks (with differing distributions). At each timestep, I need randomly generated, two dimensional step distances and angles from the distribution of the random walk. I'm hoping someone can check my understanding of how to generate these random numbers.
As I understand it I can use Inverse Transform Sampling as follows:
Suppose f(x) is the pdf of our random walk's non-uniform distribution, and y is a random number from a uniform distribution.
Then if we let f(x) = y and solve for x, we have a random number from the non-uniform distribution.
Is this a feasible solution?
Not quite. The function that needs to be inverted is not f(x), the pdf, but F(x) = P(X ≤ x) = ∫_{-∞}^{x} f(t) dt, the cdf. The good thing is that F is monotone, so it actually has a unique inverse (unlike f).
There are multiple other ways of generating random numbers according to a given distribution. For example, if the cdf F is difficult to compute or to invert, rejection sampling can be a good option if f is easy to compute.
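As a concrete example of inverting the cdf (not the pdf), here is a minimal Python sketch for the exponential distribution, chosen because its cdf inverts in closed form:

import math
import random

def exponential(rate=1.0):
    # F(x) = 1 - exp(-rate * x), so F^-1(u) = -ln(1 - u) / rate.
    # random.random() is in [0, 1), so 1 - u is in (0, 1] and the
    # logarithm is always defined.
    u = random.random()
    return -math.log(1.0 - u) / rate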
You are close, but not quite. Every probability density function (pdf) has a corresponding cumulative distribution function (CDF). An important property of CDF(x) is that its values always lie between 0 and 1. Because it is relatively easy to draw a random number between 0 and 1, we can use that to work our way backwards to the distribution. So changing the word pdf to CDF in your question makes the statement correct.
As an aside, for this to make sense computationally you need an easy-to-calculate inverse of the CDF. One way to do this is to fit a polynomial approximation to the CDF and invert that function. There are more advanced techniques for simulating probability distributions with messy distributions. See this book chapter for the details.
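And here is a minimal sketch of rejection sampling in Python, assuming a pdf that is easy to evaluate, bounded by pdf_max, and supported on a finite interval [lo, hi]:

import random

def rejection_sample(pdf, lo, hi, pdf_max):
    # Propose x uniformly on [lo, hi]; accept with probability
    # pdf(x) / pdf_max. No cdf inversion is required, at the cost
    # of occasionally rejecting proposals.
    while True:
        x = lo + (hi - lo) * random.random()
        if random.random() * pdf_max <= pdf(x):
            return x

# Example: triangular pdf f(x) = 2x on [0, 1], whose maximum is 2.
sample = rejection_sample(lambda x: 2.0 * x, 0.0, 1.0, 2.0)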

Generating random numbers with known mean and variance

From a paper I'm reading right now:
...
S(t+1, k) = S(t, k) + ... + C*∆
...
∆ is a standard random variable with mean 0 and variance 1.
...
How to generate this series of random values with this mean and variance? If someone has links to a C or C++ library I would be glad but I wouldn't mind implementing it myself if someone tells me how to do it :)
Do you have any restrictions on the distribution of Δ? If not, you can just use a uniform distribution on [-√3, √3]. The reason this works is that for a uniform distribution on [a, b] the variance is (b - a)^2 / 12, which equals 1 when b - a = 2√3; see the check below.
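In Python, for instance:

import random

SQRT3 = 3 ** 0.5

def delta():
    # Uniform on [-sqrt(3), sqrt(3)]: mean 0 and variance
    # (b - a)**2 / 12 = (2 * sqrt(3))**2 / 12 = 1, as derived above.
    return random.uniform(-SQRT3, SQRT3)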
You can use the Box-Muller transform.
Suppose U1 and U2 are independent random variables that are uniformly distributed in the interval (0, 1]. Let
Z0 = √(-2 ln U1) · cos(2π U2)
and
Z1 = √(-2 ln U1) · sin(2π U2).
Then Z0 and Z1 are independent random variables with a standard normal distribution (mean 0, standard deviation 1).
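A minimal Python sketch of the transform just described; 1 - random.random() keeps U1 inside (0, 1] so the logarithm is defined:

import math
import random

def box_muller():
    u1 = 1.0 - random.random()   # in (0, 1]
    u2 = random.random()         # in [0, 1)
    r = math.sqrt(-2.0 * math.log(u1))
    # Z0 and Z1 are independent draws from N(0, 1).
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)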
Waffles is a mature, stable C++ library that you can use. In particular, the noise function in the *waffles_generate* module will do what you want.
Aside from center and spread (mean and sd), you also need to know the probability distribution that the random numbers are drawn from. If the paper you are reading doesn't say anything about this, and there's no other reasonable inference supported by context, then the author is probably referring to a normal (Gaussian) distribution, because that's the most common, and because the two parameters needed to completely specify a normal distribution are mean and sd. Many distributions are not specified this way; e.g., a Gamma distribution needs shape and scale (or shape and rate); to specify a Logistic, you need location and scale, etc.
If all you want is mean 0 and variance 1, probably the simplest approach is this. Do you have a uniform random number generator unif() that gives you numbers between 0 and 1? If you want a number that is very close to normally distributed, you can just add up 12 uniform(0, 1) numbers and subtract 6. If you want an exactly normal distribution, you can use the Box–Muller transform, as Mark suggested, if you don't mind throwing in a log, a sine, and a cosine.