I started studying the Mersenne Twister a few days back; its full name is "623-dimensionally equidistributed uniform pseudorandom number generator". What do equidistribution and uniform mean here? I am confused. Thanks in advance.
Equidistribution is a property of a sequence: the proportion of terms that fall within an interval is proportional to the length of the interval.
Uniformity is a property of a distribution. A distribution is uniform over an interval if its probability of taking any value in the interval is constant.
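As a quick empirical illustration (a minimal sketch in Python, whose `random` module happens to use the Mersenne Twister internally), you can bin many draws and check that each bin receives roughly the same share:

```python
import random
from collections import Counter

# Python's random module is based on the Mersenne Twister.
# Bin 1,000,000 draws from [0, 1) into 10 equal intervals;
# uniformity means each bin should hold roughly 10% of the draws.
N = 1_000_000
counts = Counter(int(random.random() * 10) for _ in range(N))
for bin_index in sorted(counts):
    print(bin_index, counts[bin_index] / N)
```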
I am working with NSGA-II and I am wondering how the density of the population changes while the algorithm is running. Suppose you initialize your population according to a uniform distribution. Your population changes after each iteration. How does this affect the density of the population? E.g., if the sample size is huge, will the population after n steps still be roughly uniformly distributed? Does anybody know an answer to this?
Candidate solutions will cluster together after a while; that's normal. A uniform distribution is only necessary for the initial population. If you are worried about getting stuck in a local optimum, you can raise the mutation probability. The mutation operator lets solutions jump to other areas of the search space.
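For illustration, here is a minimal sketch (plain Python, not tied to any particular NSGA-II library; the function name and parameters are made up for the example) of a Gaussian mutation operator whose rate you could raise:

```python
import random

def mutate(individual, rate=0.1, scale=0.5):
    """Gaussian mutation: perturb each gene with probability `rate`.
    Raising `rate` (or `scale`) makes jumps to other areas of the
    search space more likely, which helps escape a local optimum."""
    return [gene + random.gauss(0.0, scale) if random.random() < rate else gene
            for gene in individual]

# Example: mutate a 5-gene individual with a raised mutation rate.
print(mutate([0.2, 0.5, 0.1, 0.9, 0.4], rate=0.3))
```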
I am confused about the concepts of uniform distribution and random numbers. Does a random number follow a uniform distribution, or does it not follow any distribution at all?
Traditionally, random is just that: random. Nothing guarantees that after 100 random numbers, at least one of them is non-zero. You'd probably think something was broken, and you'd probably be right, but that outcome is exactly as likely as any other particular combination of numbers in the same range.
A uniform distribution ensures that, statistically speaking, your values are spread out evenly across the given range: every sub-interval of equal size is equally likely. So if you got 100 uniformly distributed random numbers and they were all zero, something would definitely be broken.
Any number generator that guarantees an exactly uniform spread of its output is not random. That said, the more numbers you generate, the more closely the sample tends to resemble a uniform distribution.
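That last claim is easy to demonstrate empirically. Here is a minimal sketch (assuming Python's standard `random` module) that measures how far the worst bin strays from the ideal frequency of 1/10 as the sample grows:

```python
import random

# As the sample size grows, the empirical bin frequencies
# converge toward the uniform ideal of 0.1 per bin.
for size in (100, 10_000, 1_000_000):
    bins = [0] * 10
    for _ in range(size):
        bins[int(random.random() * 10)] += 1
    worst = max(abs(count / size - 0.1) for count in bins)
    print(f"n={size}: worst bin deviation from 0.1 is {worst:.4f}")
```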
I know of many uniform random number generators (RNGs) based on algorithms, physical systems, and so on. All of these ultimately produce uniformly distributed random numbers. It would be interesting and important to know whether there are Gaussian RNGs, i.e., algorithms or physical systems that create Gaussian random numbers directly. More precisely, I don't want to use transformations such as Box–Muller or the Marsaglia polar method to get Gaussian numbers from uniform RNGs. I am interested in whether there is a paper, an algorithm, or even an idea for creating Gaussian random numbers without any use of uniform RNGs. In other words, let's pretend we don't know that uniform random number generators exist.
As already noted in the answers/comments, by virtue of the CLT a sum of iid random numbers can be made into a reasonable-looking Gaussian. If the incoming stream is uniform, this is basically the Bates distribution. Ami Tavory's answer pretty much amounts to using Bates in disguise. You could also look at the closely related Irwin–Hall distribution; at n = 12 or higher it looks a lot like a Gaussian.
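For concreteness, here is a minimal sketch (plain Python; the function name is mine) of the classic Irwin–Hall trick at n = 12, where the sum of twelve U(0, 1) draws has variance exactly 1:

```python
import random

def irwin_hall_normal():
    """Approximate N(0, 1) via the Irwin-Hall distribution:
    the sum of 12 independent U(0, 1) draws has mean 6 and
    variance 12 * (1/12) = 1, so subtracting 6 centers it."""
    return sum(random.random() for _ in range(12)) - 6.0

samples = [irwin_hall_normal() for _ in range(100_000)]
print(sum(samples) / len(samples))  # should be close to 0
```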
There is one method that is used in practice and does not rely on transforming U(0,1): the Wallace method (Wallace, C. S. 1996. "Fast Pseudorandom Generators for Normal and Exponential Variates." ACM Transactions on Mathematical Software), also known as the Gaussian pool method. I would advise reading the description here and seeing if it fits your purpose.
As others have noted, it's a bit unclear what your motivation for this is, so I'm not sure whether the following answers your question.
Nevertheless, it is possible to generate (an approximation of) a Gaussian without the specific uniform-transforming formulas that you mention.
As with any RNG, we have to have some source of randomness (or pseudo-randomness). I'm assuming, therefore, that there is some limitless sequence of binary bits, each independently equally likely to be 0 or 1 (note that one could counter that this is itself a uniform discrete binary RNG, so I'm unsure whether this answers your question).
Choose some large fixed n. For each invocation of the RNG, generate n such bits, let x be their sum, and return
(2x − n) / √n
Since x is Binomial(n, 1/2) with mean n/2 and variance n/4, by the de Moivre–Laplace theorem this is approximately normal with mean 0 and variance 1.
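A minimal sketch of this bit-summing approach (assuming Python; `random.getrandbits(1)` stands in for the fair-bit source):

```python
import random

def bit_sum_normal(n=1024):
    """Approximate N(0, 1) from raw fair bits only.
    x ~ Binomial(n, 1/2) has mean n/2 and variance n/4, so by
    de Moivre-Laplace, (2x - n) / sqrt(n) is roughly standard normal."""
    x = sum(random.getrandbits(1) for _ in range(n))
    return (2 * x - n) / n ** 0.5

print(bit_sum_normal())
```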
I have an unusual sampling problem that I'm trying to implement for a Monte Carlo technique. I am aware there are related questions and answers regarding the fully-positive problem.
I have a list of n weights w_1,...,w_n and I need to choose k elements, labelled s_1,...,s_k say. The probability distribution that I want to sample from is
p(s_1, ..., s_k) = |w_{s_1} + ... + w_{s_k}| / P_total
where P_total is a normalization factor (the sum of the unnormalized values |w_{s_1} + ... + w_{s_k}| over all possible choices). I don't really care about how the elements are ordered for my purpose.
Note that some of the w_i may be less than zero, hence the absolute-value signs above. With purely non-negative w_i this distribution is relatively straightforward to sample without replacement; a tree method is the most efficient as far as I can tell. With some negative weights, though, I feel like I must resort to explicitly writing out each possibility and sampling from this exponentially large set. Any suggestions or insights would be appreciated!
Rejection sampling is worth a try. Compute the maximum possible weight of a sample: the larger of the absolute values of the sum of the k smallest and the sum of the k greatest weights. Repeatedly generate a uniform random sample and accept it with probability equal to its weight divided by that maximum, until a sample is accepted.
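A minimal sketch of that scheme (plain Python; the names are mine, and it assumes not all weights are zero):

```python
import random

def sample_subset(weights, k):
    """Rejection sampling for p(S) proportional to |sum of w_i, i in S|.
    Any k-subset sum lies between the sum of the k smallest and the
    sum of the k largest weights, so the larger absolute value of
    those two bounds every subset's weight."""
    w = sorted(weights)
    w_max = max(abs(sum(w[:k])), abs(sum(w[-k:])))
    while True:
        subset = random.sample(range(len(weights)), k)  # uniform k-subset
        weight = abs(sum(weights[i] for i in subset))
        if random.random() * w_max < weight:
            return subset

print(sample_subset([0.5, -1.2, 2.0, 0.1, -0.3], k=2))
```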
I would like to generate some pseudorandom numbers on (-infinity, infinity) with a Gaussian distribution of standard deviation s and mean m. Any suggestions about how to do this? I'd appreciate any pointers in the right direction, as there seems to be a huge literature out there on how best to generate pseudorandom numbers.
You can generate a Gaussian distribution (also known as a normal distribution) by using a uniform random number generator and an appropriate algorithm. Check out [stackoverflow link to Gaussian algorithms][1]
Do you really want to go from +/- infinity? Does that make sense?
A simple algorithm to use is the Box-Muller method.
Normal Dist. Random # = SQRT(-2*LN(RAND()))*SIN(2*PI()*RAND())
The Box–Muller method is mathematically exact if implemented with a perfect uniform random number generator and infinite precision. (Note: in that formula, mu/mean = 0 and sigma = 1, and the two RAND() calls are independent random numbers between 0 and 1.) See http://mathworld.wolfram.com/Box-MullerTransformation.html
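For the general case of mean m and standard deviation s from the question, here is a minimal sketch (Python; the function name is mine) of the same transform, scaled and shifted:

```python
import math
import random

def box_muller(m=0.0, s=1.0):
    """Box-Muller transform: two independent U(0, 1) draws become
    one N(m, s^2) draw. u1 is kept away from 0 so log(u1) is finite."""
    u1 = 1.0 - random.random()  # in (0, 1], avoids log(0)
    u2 = random.random()
    z = math.sqrt(-2.0 * math.log(u1)) * math.sin(2.0 * math.pi * u2)
    return m + s * z

print(box_muller(m=5.0, s=2.0))
```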