Can anyone tell me the number of seeds in a lagged Fibonacci random number generator, as a function of the typical lagged Fibonacci parameters? I would also really appreciate a diagram illustrating how the generator works.
The design and properties of lagged Fibonacci generators are discussed in the widely cited paper by Brent, which includes an overview of an implementation on a vector supercomputer of the time. That paper forms the basis of the boost::random Fibonacci generators, so you can see what a modern implementation looks like by inspecting the Boost source.
I am not quite sure what this question has to do with CUDA or parallel programming, though.
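To answer the seed-count part directly: an additive lagged Fibonacci generator with lags (j, k), j < k, computes x[n] = (x[n-j] + x[n-k]) mod m, so its state is the previous k values and it needs exactly k seed words. Here is a minimal Python sketch of the common additive variant, using the classic lags (24, 55) for illustration; this is a sketch of the general technique, not the Boost implementation:

```python
from collections import deque

class LaggedFibonacci:
    """Additive lagged Fibonacci generator:
    x[n] = (x[n-j] + x[n-k]) mod m, with lags j < k.
    The state is a sliding window of the last k outputs,
    so exactly k seed values are required."""

    def __init__(self, seeds, j=24, k=55, m=2**32):
        assert len(seeds) == k, "need exactly k seed values"
        self.j = j
        self.m = m
        self.state = deque(seeds, maxlen=k)  # window of the last k values

    def next(self):
        # state[-j] is x[n-j]; state[0] is x[n-k] because the deque holds k items
        x = (self.state[-self.j] + self.state[0]) % self.m
        self.state.append(x)  # the oldest value drops out automatically
        return x
```

In practice the k seed words are usually expanded from a single integer seed by a simpler auxiliary generator, so the user only ever supplies one number.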
Is there a specific algorithm or method for measuring the speed of a pseudo-random number generator?
I've recently made a PRNG, and from my last question here I learned that Big-O analysis is not suitable for my situation.
I want to compare my program's speed to that of a well-known pseudo-random number generator, but I can't find any useful information about how to do that.
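For what it's worth, the usual approach is empirical: time a large batch of draws and report throughput, measured under identical conditions for both generators. A minimal Python sketch, where my_prng_next is a hypothetical stand-in for your own generator's output function, benchmarked against the standard library's Mersenne Twister:

```python
import random
import time

def benchmark(rng_next, n=1_000_000):
    """Time n draws from rng_next and return draws per second."""
    start = time.perf_counter()
    for _ in range(n):
        rng_next()
    elapsed = time.perf_counter() - start
    return n / elapsed

# Baseline: the stdlib Mersenne Twister.
print(f"random.random: {benchmark(random.random):,.0f} draws/sec")
# Your generator (my_prng_next is hypothetical; substitute your own):
# print(f"my PRNG:       {benchmark(my_prng_next):,.0f} draws/sec")
```

Run both inside the same process and language, since interpreter overhead dominates in Python; for cross-language comparisons, report bytes of output per second instead.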
In my free time I'm preparing for interview questions like: implement multiplication of numbers represented as arrays of digits. Obviously I'm forced to write it from scratch in a language like Python or Java, so an answer like "use GMP" is not acceptable (as mentioned here: Understanding Schönhage-Strassen algorithm (huge integer multiplication)).
For exactly which ranges of sizes of those two numbers (i.e. number of digits) should I choose:
School grade algorithm
Karatsuba algorithm
Toom-Cook
Schönhage–Strassen algorithm?
Is Schönhage–Strassen, at O(n log n log log n), always a good solution? Wikipedia mentions that Schönhage–Strassen is advisable for numbers beyond 2^2^15 to 2^2^17. What should I do when one number is ridiculously huge (e.g. 10,000 to 40,000 decimal digits) but the second consists of just a couple of digits?
Do all four of those algorithms parallelize easily?
You can browse the GNU Multiple Precision Arithmetic Library's source and see their thresholds for switching between algorithms.
More pragmatically, you should just profile your implementations of the algorithms. GMP puts a lot of effort into optimization, so their algorithms will have different constant factors than yours. The difference could easily move the thresholds around by an order of magnitude. Find out where the times cross as input size increases for your code, and set your thresholds accordingly.
I think all of the algorithms are amenable to parallelization, since they're mostly made up of divide-and-conquer passes. But keep in mind that parallelizing is another thing that will move the thresholds around quite a lot.
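To make the schoolbook/Karatsuba crossover concrete, here is a minimal Karatsuba sketch in Python, operating on plain integers rather than digit arrays for brevity; the CUTOFF constant is exactly the kind of threshold you would tune by profiling:

```python
CUTOFF = 64  # bits; below this, fall back to the base multiply

def karatsuba(x, y):
    """Multiply non-negative integers x and y.
    Splits each number in half and does 3 recursive multiplies
    instead of 4: O(n^1.585) versus O(n^2) for the school method."""
    if x.bit_length() <= CUTOFF or y.bit_length() <= CUTOFF:
        return x * y  # stand-in for the school-grade routine
    half = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> half, x & ((1 << half) - 1)
    y_hi, y_lo = y >> half, y & ((1 << half) - 1)
    a = karatsuba(x_hi, y_hi)                         # high parts
    c = karatsuba(x_lo, y_lo)                         # low parts
    b = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - c   # cross terms, one multiply
    return (a << (2 * half)) + (b << half) + c
```

Below the cutoff this falls back to Python's builtin multiply standing in for the school-grade method; in an interview setting you would substitute your own digit-array routine there.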
I've read about factorization of integers into prime factors and did a proof-of-concept implementation of Pollard's rho algorithm:
https://en.wikipedia.org/wiki/Pollard%27s_rho_algorithm
The algorithm is easy to implement and works so far. However, it may (and does) fail for certain numbers. The Wikipedia page suggests restarting the algorithm with a different start condition or picking a different generator function.
That does not sound very deterministic to me. Is there an approach that guarantees that the algorithm will eventually terminate?
I know that the state of the art is elliptic curve integer factorization, and I plan to implement it for laughs and giggles later. To do so I first need a prime factorization algorithm for small numbers, which is why I have to deal with something like Pollard's rho algorithm first.
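The standard workaround for the failure case is exactly what Wikipedia suggests, made deterministic by trying the constants c = 1, 2, 3, ... in order in the polynomial g(x) = x^2 + c mod n. There is no proof that a small c must succeed, but in practice one almost always does. A minimal Python sketch of that retry loop:

```python
from math import gcd

def pollard_rho(n, c):
    """One run of Pollard's rho with g(x) = x^2 + c mod n,
    using Floyd cycle detection. Returns a factor of n,
    or n itself when this run fails."""
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n          # tortoise: one step
        y = (y * y + c) % n          # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    return d

def find_factor(n):
    """Retry rho with c = 1, 2, 3, ... until a non-trivial factor appears.
    Assumes n is an odd composite: test primality first (e.g. Miller-Rabin),
    because rho never terminates on a prime."""
    if n % 2 == 0:
        return 2
    c = 1
    while True:
        d = pollard_rho(n, c)
        if d != n:
            return d
        c += 1  # this run degenerated; try the next polynomial
```

The procedure is fully deterministic; what remains heuristic is the running time, since no useful worst-case bound is known for rho.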
I'm doing a research paper on the topic, and while I find a lot of examples of and discussion about how the algorithm works and should be implemented, I can't find anything on where it's actually used.
Is there any field in which the algorithm is used today? Or do people just implement it for "shits 'n giggles" (it's fairly simple, so that would make some sense)?
I know that large prime numbers are important in the field of encryption, but I doubt the sieve is used to find or generate those primes. Also, the huge amount of memory needed makes it inefficient for finding large primes anyway.
So is the algorithm, in any form, used anywhere today?
According to the Wikipedia article on the subject, that particular sieve is still a very efficient method for producing the full list of primes up to a few million. Also, the general idea of a sieve is used in several other, more powerful algorithms, such as the general number field sieve for factoring large integers.
You can view a prime sieve as an application of dynamic programming to the complete enumeration and testing of small prime numbers. So your question is really "what do we need prime numbers for?" They are a fundamental part of number theory. As one example, encoding an integer into its prime factorization has all sorts of useful properties and higher-level utility. By adding backtracking to a sieve, we can perform this factorization very quickly.
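A minimal Python sketch of that idea: record each number's smallest prime factor (SPF) while sieving, then factor any number in range by repeatedly backtracking through the table:

```python
def build_spf(limit):
    """Sieve of Eratosthenes variant that records each number's
    smallest prime factor instead of just a prime/composite flag."""
    spf = list(range(limit + 1))               # spf[i] starts as i
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:                        # p is prime
            for multiple in range(p * p, limit + 1, p):
                if spf[multiple] == multiple:  # not claimed by a smaller prime
                    spf[multiple] = p
    return spf

def factorize(n, spf):
    """Factor n by repeatedly dividing out its smallest prime factor:
    O(log n) divisions once the sieve is built."""
    factors = []
    while n > 1:
        factors.append(spf[n])
        n //= spf[n]
    return factors

spf = build_spf(10_000)
print(factorize(9_240, spf))  # [2, 2, 2, 3, 5, 7, 11]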
My problem is the following:
I need to generate a lot of random numbers in parallel using the binomial distribution on CUDA. All the random number generators on CUDA are based on the uniform distribution (as far as I know), which is convenient, since all the algorithms for the binomial distribution need uniform variates.
Is there any library or implementation of binomial random variate generation for CUDA? I see that there is one for Java at http://acs.lbl.gov/~hoschek/colt/ , but it uses an algorithm that is too complicated to parallelize. However, given a binomial variate following B(N, p), there are simpler algorithms of complexity O(N), but those are bad for me because N can be large (around 2^32, the maximum for an integer).
I would appreciate any help. Thanks a lot.
Miguel
P.S.: sorry for my bad English :)
That's an interesting problem. I would attack it by taking a previous solution and adapting it to the way CUDA works.
CiteSeerX is where you can get hold of PDFs of research papers that might help:
http://citeseerx.ist.psu.edu/
Did you take a look at MDGPU? It was suggested in another question here on SO:
http://www-old.amolf.nl/~vanmeel/mdgpu/licence.html
Also, NAG has a library which may help:
http://www.nag.co.uk/numeric/gpus/
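For large N, the O(N) coin-flip method is avoidable: when N*p and N*(1-p) are both large, B(N, p) is well approximated by a normal distribution with mean N*p and variance N*p*(1-p), so each sample costs a single normal variate derived from uniforms, and every sample is independent, which maps naturally onto one-thread-per-sample CUDA kernels (e.g. on top of cuRAND's normal generators). A minimal CPU-side Python sketch of the per-sample logic; the threshold 30 is a common rule of thumb, not a tuned value:

```python
import math
import random

def binomial_approx(n, p):
    """Approximate a Binomial(n, p) draw.
    Exact O(n) coin flipping for small n; normal approximation
    (mean n*p, variance n*p*(1-p)) when n*p and n*(1-p) are large.
    Each call is independent, so this maps directly onto one GPU thread."""
    if n * p < 30 or n * (1 - p) < 30:
        # small case: sum of n Bernoulli trials, O(n)
        return sum(random.random() < p for _ in range(n))
    mu = n * p
    sigma = math.sqrt(n * p * (1 - p))
    # one normal variate per sample; clamp and round to a valid count
    k = round(random.gauss(mu, sigma))
    return min(max(k, 0), n)

samples = [binomial_approx(2**32, 0.3) for _ in range(5)]
print(samples)
```

If you need exact binomial samples at this scale, the usual choice is an accept-reject method such as BTPE, which is exactly the kind of complicated algorithm the question mentions and is indeed harder to parallelize.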