Speed of a PRNG

Is there a specific algorithm or method for measuring the speed of a pseudorandom number generator?
I recently wrote a PRNG, and from my last question here I learned that big-O analysis is not suitable in my situation.
I want to compare my program's speed against a well-known pseudorandom number generator, but I can't find any useful information on how to do so.
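In practice, PRNG speed is usually compared by benchmarking: generate a large batch of numbers with each generator under identical conditions and measure throughput (numbers or bytes per second). A minimal sketch, where `my_prng` is a hypothetical stand-in for your generator:

```python
import time
import random

# Hypothetical stand-in for "my PRNG": a simple 64-bit LCG step function.
# Replace it with your own generator's next-number function.
def my_prng(state):
    return (6364136223846793005 * state + 1442695040888963407) % (1 << 64)

def throughput(step, state, n=1_000_000):
    """Numbers generated per second by repeatedly applying `step`."""
    start = time.perf_counter()
    for _ in range(n):
        state = step(state)
    return n / (time.perf_counter() - start)

mine = throughput(my_prng, 12345)
builtin = throughput(lambda _: random.getrandbits(64), 0)
print(f"mine: {mine:.2e}/s, Mersenne Twister: {builtin:.2e}/s")
```

For a fair comparison, run both generators in the same language and process, with warm caches, and average over several runs; interpreter overhead dominates in Python, so compiled implementations are usually benchmarked this way in C.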

Related

Algorithms for efficiently computing logistic map

The logistic map is a classic example where floating point numbers fail. It is also a great example of how error propagates badly in numerical algorithms in general, even when dealing with bignums. I was wondering if there are any known algorithms for taming this issue? Is there an efficient way to compute the logistic map that doesn't require naively computing it with huge precision?
It is a classic example because it is a chaotic system. The entire point of a chaotic system is that it shows unbelievable sensitivity to initial conditions. To get an answer within 5% of correct after n iterations requires starting with O(n) digits of the number. Not because your algorithm is bad, but because changing any of those digits changes what the answer should be.
So, no. While you can potentially speed up the calculation somewhat, you can't get away with starting with lower precision.
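To illustrate the sensitivity, here is a small sketch comparing the logistic map at r = 4 (a chaotic parameter choice) computed in double precision against the same orbit computed with 100 decimal digits; the iteration count and precision values are arbitrary illustrative choices:

```python
from decimal import Decimal, getcontext

def logistic_float(x0, n, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x) in double precision."""
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def logistic_decimal(x0, n, digits, r=4):
    """Same orbit computed with `digits` decimal digits of precision."""
    getcontext().prec = digits
    x = Decimal(x0)
    for _ in range(n):
        x = r * x * (1 - x)
    return x

# Roughly one bit of accuracy is lost per iteration at r = 4, so after
# ~60 iterations the 53-bit float result no longer tracks the true orbit,
# while the 100-digit result is still accurate to dozens of digits.
f = logistic_float(0.1, 60)
d = logistic_decimal("0.1", 60, digits=100)
print(f, float(d))
```

This matches the answer's point: the divergence is not a bug in the algorithm but a property of the system, so the only remedy is to start with enough digits for the number of iterations you need.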

Fast source of high quality randomness

Evolutionary algorithms depend heavily on good randomness. Unfortunately, good randomness sources are slow (and hence so is the algorithm).
The question is: if I take one highest-quality random number and use it as the seed for a poor-quality (but fast) random generator, how 'random' will the result be?
I have done some research in this area previously. Evolutionary algorithms belong to a family of metaheuristic algorithms to which the particle swarm algorithm also belongs. A study of the effect of random number generators on the particle swarm algorithm has been conducted here: Impact of the quality of random numbers generators on the performance of particle swarm optimization. It should apply directly to your evolutionary algorithm.
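The seeding pattern the question describes can be sketched as follows; note that the seed only selects which of the fast generator's streams you get, so the statistical quality of the output is entirely that of the fast generator (typically fine for evolutionary algorithms, not for cryptography):

```python
import os
import random

# Draw one high-quality seed from the OS entropy pool...
seed = int.from_bytes(os.urandom(16), "big")

# ...and use it to drive a fast deterministic generator
# (Python's Mersenne Twister here, standing in for any fast PRNG).
fast_rng = random.Random(seed)

samples = [fast_rng.random() for _ in range(5)]
print(samples)
```

A practical side benefit: logging the seed makes an otherwise stochastic evolutionary run exactly reproducible.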

How to translate algorithm complexity to time necessary for computation

If I know the complexity of an algorithm, can I predict how long it will take to run in real life?
A bit more context:
I have been trying to solve a university assignment: find the best possible result in a game from a given position. I have written an algorithm and it works, but very slowly. Its complexity is O(5^n). For 24 elements it takes a few minutes to compute. I'm not sure whether my implementation is wrong or whether this algorithm is simply very slow. Is there a way for me to approximate how much time any algorithm should take?
In the worst case you can rely on extrapolation. Given timings for N = 1, 2, 3, 4 elements (the more the better) and a big-O estimate of the algorithm's complexity, you can estimate the running time for any finite N. The catch is that the precision of this estimate drops as N grows.
What can you do about that? Look up error-estimation techniques for this kind of extrapolation; in practice they usually give good enough results.
Also, don't forget to check that the model is adequate. Given results for N = 1..10 and a big-O complexity, check how well your measurements fit the model, i.e. whether you can pick constants for the big-O formula that match your data. If you can't, you either need more data points to get a wider picture or... well, your complexity estimate may simply be wrong :-).
Useful links:
Brief review of algorithm complexity.
Time complexity catalogue.
A really good place to start - look for examples based on code as input.
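The extrapolation step described above can be sketched as follows. Assuming the model T(n) = c * 5^n from the question, estimate the constant c from a few small measured timings (the numbers below are made up for illustration) and extrapolate:

```python
# Hypothetical measured running times (seconds) for small inputs,
# assuming the model T(n) = c * 5**n.
measured = {6: 0.003, 7: 0.016, 8: 0.079, 9: 0.41}

# Estimate the constant c from each data point and average.
estimates = [t / 5**n for n, t in measured.items()]
c = sum(estimates) / len(estimates)

def predict(n):
    """Extrapolated running time in seconds under the 5**n model."""
    return c * 5**n

print(f"c = {c:.3e}")
print(f"predicted time for n=24: {predict(24):.3e} seconds")
```

If the per-point estimates of c vary wildly instead of converging, that is exactly the model-adequacy failure the answer warns about: the O(5^n) model does not describe your measurements.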
You cannot predict running time based on time complexity alone. There are many factors involved: hardware speed, programming language, implementation details, etc. The only thing you can predict using the complexity is expected time increase when the size of the input increases.
For example, I've personally seen O(N^2) algorithms take longer than O(N^3) ones, especially for small values of N, as in your case. And, by the way, 5^24 is a huge number (about 5.9e16). I wouldn't be surprised if that took a few hours on a supercomputer, let alone on the mid-range personal PC most of us are using.
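A quick back-of-the-envelope check of that figure, assuming (generously) 1e9 simple operations per second on a single desktop core:

```python
ops = 5 ** 24
print(f"{ops:.2e} operations")   # 5.96e+16

# Assumed throughput: 1e9 simple operations per second on one core.
seconds_pc = ops / 1e9
years = seconds_pc / 86400 / 365
print(f"~{years:.1f} years on a single core")   # ~1.9 years
```

So if the program finishes in minutes, it is almost certainly not performing 5^24 elementary operations; either the effective branching factor is lower than 5 or pruning is cutting the tree down dramatically.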

number of seeds in Lagged fibonacci Random number generator

Can anyone tell me the number of seeds in a lagged Fibonacci random number generator as a function of the typical lagged Fibonacci parameters? I would really appreciate a diagram illustrating how the generator works.
The design and properties of Fibonacci generators are discussed in the widely cited paper by Brent, and it includes an overview of implementation on a vector supercomputer of the time. This paper forms the basis of the boost::random Fibonacci generators, so you can see what a modern implementation looks like by inspecting the boost source.
I am not quite sure what this question has to do with CUDA or parallel programming, though.
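To make the seed-count part of the question concrete, here is a toy sketch (the lags and word size are illustrative, not quality-tuned) of an additive lagged Fibonacci generator. For lags j < k and m-bit words, the state is the last k words, so there are at most 2^(k*m) distinct seed states; for an additive generator, at least one seed word should be odd to reach the maximum period:

```python
from collections import deque

class LaggedFibonacci:
    """Additive lagged Fibonacci generator: x[n] = (x[n-j] + x[n-k]) mod 2**m.

    The seed is the initial state of the last k words, so there are at
    most 2**(k*m) distinct seeds for lags j < k and m-bit words.
    """

    def __init__(self, seed_words, j=7, k=10, m=32):
        assert len(seed_words) == k and j < k
        self.j, self.k, self.mask = j, k, (1 << m) - 1
        self.state = deque(seed_words, maxlen=k)

    def next(self):
        x = (self.state[-self.j] + self.state[-self.k]) & self.mask
        self.state.append(x)   # maxlen drops the oldest word
        return x

rng = LaggedFibonacci(list(range(1, 11)))   # toy seed; lags (7, 10) are illustrative
print([rng.next() for _ in range(5)])       # [5, 7, 9, 11, 13]
```

Production lag pairs (e.g. those used by the Boost generators) are much larger, but the state layout, and hence the seed count, follows the same k-words-of-m-bits pattern.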

Binomial Random Variate Generator on CUDA

My problem is the following:
I need to generate a lot of random numbers in parallel from the binomial distribution on CUDA. All the random number generators on CUDA are based on the uniform distribution (as far as I know), which is convenient, since all the algorithms for the binomial distribution need uniform variates anyway.
Is there any library or implementation for binomial random variate generation on CUDA? I see there is one for Java at http://acs.lbl.gov/~hoschek/colt/ , but the algorithm it uses is too complicated to parallelize. There are simpler algorithms for generating a binomial variate B(N, p) with complexity O(N), but that is bad for me because N can be large (around 2^32, the maximum for an integer).
I would appreciate any help. Thanks a lot.
Miguel
P.S.: sorry for my bad english :)
That's an interesting problem. I would attack it by taking an existing solution and adapting it to the way CUDA works.
CiteSeerX is where you can get hold of PDFs of research that might help:
http://citeseerx.ist.psu.edu/
Did you take a look at MDGPU? It was suggested in another question on SO:
http://www-old.amolf.nl/~vanmeel/mdgpu/licence.html
Also, NAG have a library which may help:
http://www.nag.co.uk/numeric/gpus/
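One O(1)-per-variate approach worth sketching, shown here in plain Python rather than CUDA: for large N, a binomial B(N, p) draw can be approximated by a matched normal variate built from two uniforms via Box-Muller, which maps naturally onto GPU uniform generators. This is an approximation, reasonable only when N*p*(1-p) is large; the rounding and clamping below are illustrative choices:

```python
import math
import random

def binomial_normal_approx(n, p, u1, u2):
    """Approximate a Binomial(n, p) draw from two uniforms in (0, 1).

    Box-Muller turns the uniforms into a standard normal, which is then
    shifted and scaled to match the binomial's mean and variance. The
    approximation is reasonable when n*p*(1-p) is large (say, > 10);
    small n or extreme p need an exact method instead.
    """
    z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    x = round(n * p + math.sqrt(n * p * (1 - p)) * z)
    return min(max(x, 0), n)   # clamp to the valid range [0, n]

rng = random.Random(42)
draws = [binomial_normal_approx(2**32, 0.3, rng.random(), rng.random())
         for _ in range(1000)]
mean = sum(draws) / len(draws)
print(mean / 2**32)   # should be close to p = 0.3
```

Since each variate consumes a fixed number of uniforms and no inter-thread state, each CUDA thread can produce its own draw independently, which is exactly the structure the question needs for N around 2^32.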
