Finding the value of a simple pseudo-random number generation function - random

I have an equation to generate pseudo-random numbers; a well-known example is the Linear Congruential Generator (LCG). The function definition is as below.
R1 = (a*R0 + b) mod c
I can easily write a recursive function to generate random number when seed value (R0) and a,b and c are given. For example
function random(n, r0, a, b, c)
{
    if (n == 0) return r0;
    return (a * random(n-1, r0, a, b, c) + b) % c;
}
Here, if I want to generate the 20th random number, I can call random(20,3,5,17,23).
My question is: how can I calculate the value with pen and paper for a certain value of n, say n = 2019?

An easier way to calculate would be to unroll the recurrence into a closed form:
Rn = (a^n * R0 + b * (a^(n-1) + a^(n-2) + ... + a + 1)) mod c
Considering A = a^n and B = b * (a^(n-1) + ... + a + 1),
and calculating everything modulo c, Rn = (A * R0 + B) mod c.
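For larger n this can also be checked mechanically. Below is a short Python sketch (the function names are mine) that evaluates the closed form with fast modular exponentiation; it agrees with the recursive definition above.
def geom_mod(a, n, c):
    # 1 + a + a^2 + ... + a^(n-1)  (mod c), computed in O(log n) steps
    if n == 0:
        return 0
    if n % 2 == 0:
        return (1 + pow(a, n // 2, c)) * geom_mod(a, n // 2, c) % c
    return (1 + a * geom_mod(a, n - 1, c)) % c

def lcg_nth(n, r0, a, b, c):
    # Rn = (a^n * R0 + b * (a^(n-1) + ... + a + 1)) mod c
    return (pow(a, n, c) * r0 + b * geom_mod(a, n, c)) % c

print(lcg_nth(20, 3, 5, 17, 23))     # same value as the recursive random(20, 3, 5, 17, 23)
print(lcg_nth(2019, 3, 5, 17, 23))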

Related

Matrix chain Multiplication Different Recursive definition

Matrix Chain Multiplication has a dynamic programming solution that uses a recursive definition like this:
Problem: multiply i to j
Sub-problem: multiply i to k + multiply k+1 to j + multiplication cost
This is straightforward to memoize because the (i,j) sub-problems repeat. But I am facing difficulty memoizing the following recursive definition, which is a bit different:
Can someone help memoize this algorithm for matrix chain multiplication?
P is the sequence of orders of the matrices.
For example, for A(2,3)*B(3,4)*C(4,5), P = {2,3,4,5}, i.e. the order of the ith matrix is P[i-1] x P[i].
P is assumed to be 0-indexed.
Here I am multiplying adjacent matrices and recursing:
Pseudocode:
chain_mul(P, n) {
    if (n == 1) return 0
    min_cost = inf
    for (i = 1 to n-1) {
        cost = P[i-1]*P[i]*P[i+1] + chain_mul(P - {P[i]}, n-1)
        if (cost < min_cost) min_cost = cost
    }
    return min_cost
}
Here the repeating sub-problem is the structure of P, i.e. the remaining sequence of orders.
This cannot be memoized efficiently, because the argument P iterates over all the subsets of the initial set P, so the memory required would be O(2^n).
The algorithms that can be memoized call the function on sections of the matrix chain; each section is characterized by two numbers, a start index and an end index. The number of sections is about n * (n + 1) / 2, and it is easy to implement a data structure to store and retrieve the results indexed by two numbers (e.g. a matrix).
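For comparison, here is a Python sketch of the (i, j)-indexed memoization described above, using the same P convention as the question (matrix i has order P[i-1] x P[i]); the function names are mine.
from functools import lru_cache

def chain_mul(P):
    n = len(P) - 1                      # number of matrices in the chain

    @lru_cache(maxsize=None)
    def cost(i, j):                     # cheapest way to multiply matrices i..j (1-based)
        if i == j:
            return 0
        return min(cost(i, k) + cost(k + 1, j) + P[i - 1] * P[k] * P[j]
                   for k in range(i, j))

    return cost(1, n)

print(chain_mul([2, 3, 4, 5]))          # A(2,3)*B(3,4)*C(4,5) -> 64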

Making a customizable LCG that travels backward and forward

How would I go about making an LCG (a type of pseudo-random number generator) travel in both directions?
I know that travelling forward is (a*x+c)%m, but how would I be able to reverse it?
I am using this so I can store the seed at the position of the player in a map and be able to generate things around it by propagating backward and forward in the LCG (like some sort of randomized number line).
All LCGs cycle. In an LCG which achieves maximal cycle length there is a unique predecessor and a unique successor for each value x (which won't necessarily be true for LCGs that don't achieve maximal cycle length, or for other algorithms with subcycle behaviors such as von Neumann's middle-square method).
Suppose our LCG has cycle length L. Since the behavior is cyclic, that means that after L iterations we are back to the starting value. Finding the predecessor value by taking one step backwards is mathematically equivalent to taking (L-1) steps forward.
The big question is whether that can be converted into a single step. If you're using a Prime Modulus Multiplicative LCG (where the additive constant is zero), it turns out to be pretty easy to do. If x_{i+1} = a * x_i % m, then x_{i+n} = a^n * x_i % m. As a concrete example, consider the PMMLCG with a = 16807 and m = 2^31 - 1. This has a maximal cycle length of m - 1 (it can never yield 0 for obvious reasons), so our goal is to iterate m - 2 times. We can precalculate a^(m-2) % m = 1407677000 using readily available exponentiation/mod libraries. Consequently, a forward step is found as x_{i+1} = 16807 * x_i % (2^31 - 1), while a backwards step is found as x_{i-1} = 1407677000 * x_i % (2^31 - 1).
ADDITIONAL
The same concept can be extended to generic full-cycle LCGs by casting the transition in matrix form and doing fast matrix exponentiation to come up with the equivalent one-stage transform. The matrix formulation for x_{i+1} = (a * x_i + c) % m is X_{i+1} = T · X_i % m, where T is the matrix [[a c], [0 1]] and X is the column vector (x, 1) transposed. Multiple iterations of the LCG can be quickly calculated by raising T to any desired power through fast exponentiation techniques using squaring and halving the power. After noticing that powers of the matrix T never alter the second row, I was able to focus on just the first-row calculations and produced the following implementation in Ruby:
def power_mod(ary, mod, power)
  return ary.map { |x| x % mod } if power < 2
  square = [ary[0] * ary[0] % mod, (ary[0] + 1) * ary[1] % mod]
  square = power_mod(square, mod, power / 2)
  return square if power.even?
  return [square[0] * ary[0] % mod, (square[0] * ary[1] + square[1]) % mod]
end
where ary is a vector containing a and c, the multiplicative and additive coefficients.
Using this with power set to the cycle length - 1, I was able to determine coefficients which yield the predecessor for various LCGs listed in Wikipedia. For example, to "reverse" the LCG with a = 1664525, c = 1013904223, and m = 2^32, use a = 4276115653 and c = 634785765. You can easily confirm that the latter set of coefficients reverses the sequence produced by using the original coefficients.
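As a quick sanity check of those numbers, the following Python snippet (names are mine) steps forward with the original coefficients and backward with the "reversed" ones:
M = 2**32
fwd = lambda x: (1664525 * x + 1013904223) % M       # original LCG
bwd = lambda x: (4276115653 * x + 634785765) % M     # reversed coefficients
x = 123456789
assert bwd(fwd(x)) == x and fwd(bwd(x)) == x          # one step forward, one step back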

Algorithm to generate positive random integers smaller than given integer n

There are algorithms to generate random numbers like:
number = (previous_number * constant + other_constant) mod third_constant
for carefully selected constants.
But I need an algorithm to generate random integers in the range 0 to n-1. (Obviously not by running a loop and returning the counter; I need randomness.) How is this possible? Thank you.
Use third_constant = n
The multiply and add operations give you some number; when you take the mod you get an integer from 0 to third_constant - 1, so just use n for the third_constant and you're done.
This is achieved by generating a pseudo-random number in the range [0,1) (which exists in most languages), and then:
r = rand()
res = floor(r * n)
One way to generate a random number in [0,1) is to generate a random integer in [0, MAX_RAND) and divide by MAX_RAND.
You can also generate a random integer in [0, MAX_RAND) (call it r) and return r % n, but beware of bias if MAX_RAND is not significantly larger than n (by 2-3 orders of magnitude).
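A Python sketch of both approaches (rejecting the top sliver of the generator's range is a standard fix for the modulo bias, not something from the answer above):
import random

n = 10
res = int(random.random() * n)          # floor of a uniform [0,1) number times n -> [0, n-1]

RAND_MAX = 2**31                        # pretend the integer generator has range [0, RAND_MAX)
limit = RAND_MAX - RAND_MAX % n         # largest multiple of n not exceeding RAND_MAX
while True:
    r = random.randrange(RAND_MAX)
    if r < limit:                       # reject the top sliver to avoid modulo bias
        res = r % n
        break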

Reverse factorial

Well, we all know that if N is given, it's easy to calculate N!. But what about the inverse?
N! is given and you are to find N. Is that possible? I'm curious.
1. Set X = 1.
2. Generate F = X!
3. Is F equal to the input? If yes, then X is N.
4. If not, set X = X+1, then start again at #2.
You can optimize by using the previous result of F to compute the new F (new F = new X * old F).
It's just as fast as going the opposite direction, if not faster, given that division generally takes longer than multiplication. A given factorial A! is guaranteed to have all integers less than A as factors in addition to A, so you'd spend just as much time factoring those out as you would just computing a running factorial.
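A minimal Python sketch of that incremental search (new F = new X * old F); the name find_n is mine:
def find_n(q):
    x, f = 1, 1
    while f < q:
        x += 1
        f *= x                    # new F = new X * old F
    return x if f == q else None  # None: q is not a factorial

print(find_n(120))       # 5
print(find_n(3628800))   # 10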
If you have Q=N! in binary, count the trailing zeros. Call this number J.
If N is 2K or 2K+1, then J is equal to 2K minus the number of 1's in the binary representation of 2K, so add 1 over and over until the number of 1's you have added is equal to the number of 1's in the result.
Now you know 2K, and N is either 2K or 2K+1. To tell which one it is, count the factors of the biggest prime (or any prime, really) in 2K+1, and use that to test whether Q = (2K+1)!.
For example, suppose Q (in binary) is
1111001110111010100100110000101011001111100000110110000000000000000000
(Sorry it's so small, but I don't have tools handy to manipulate larger numbers.)
There are 19 trailing zeros, which is
10011
Now increment:
1: 10100
2: 10101
3: 10110 bingo!
So N is 22 or 23. I need a prime factor of 23, and, well, I have to pick 23 (it happens that 2K+1 is prime, but I didn't plan that and it isn't needed). So 23^1 should divide 23!, it doesn't divide Q, so
N=22
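Here is a Python sketch of that procedure; the helper names are mine, and it assumes Q really is some N!:
import math

def largest_prime_factor(m):
    d, p = 2, 1
    while d * d <= m:
        while m % d == 0:
            p, m = d, m // d
        d += 1
    return m if m > 1 else p

def invert_factorial_binary(Q):
    if Q == 1:
        return 1                                  # 0! and 1! are both 1
    J = (Q & -Q).bit_length() - 1                 # number of trailing zero bits of Q
    two_k = 2
    while two_k - bin(two_k).count("1") != J:     # J = 2K - popcount(2K)
        two_k += 2
    m = two_k + 1                                 # N is 2K or 2K+1
    p = largest_prime_factor(m)
    e, pk = 0, p
    while pk <= m:                                # exponent of p in (2K+1)!, Legendre's formula
        e += m // pk
        pk *= p
    return m if Q % p**e == 0 else two_k

print(invert_factorial_binary(math.factorial(22)))   # 22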
int inverse_factorial(int factorial) {
    int current = 1;
    while (factorial > current) {
        if (factorial % current) {
            return -1; // not divisible
        }
        factorial /= current;
        ++current;
    }
    if (current == factorial) {
        return current;
    }
    return -1;
}
Yes. Let's call your input x. For small values of x, you can just try all values of n and see if n! = x. For larger x, you can binary-search over n to find the right n (if one exists). Note that we have n! ≈ e^(n ln n - n) (this is Stirling's approximation), so you know approximately where to look.
The problem of course, is that very few numbers are factorials; so your question makes sense for only a small set of inputs. If your input is small (e.g. fits in a 32-bit or 64-bit integer) a lookup table would be the best solution.
(You could of course consider the more general problem of inverting the Gamma function. Again, binary search would probably be the best way, rather than something analytic. I'd be glad to be shown wrong here.)
Edit: Actually, in the case where you don't know for sure that x is a factorial number, you may not gain all that much (or anything) with binary search using Stirling's approximation or the Gamma function, over simple solutions. The inverse factorial grows slower than logarithmic (this is because the factorial is superexponential), and you have to do arbitrary-precision arithmetic to find factorials and multiply those numbers anyway.
For instance, see Draco Ater's answer for an idea that (when extended to arbitrary-precision arithmetic) will work for all x. Even simpler, and probably even faster because multiplication is faster than division, is Dav's answer which is the most natural algorithm... this problem is another triumph of simplicity, it appears. :-)
Well, if you know that M is really the factorial of some integer, then you can use
n! = Gamma(n+1) = sqrt(2*PI) * exp(-n) * n^(n+1/2) * (1 + O(1/n))
You can solve this (or, really, solve ln(n!) = ln Gamma(n+1)) and find the nearest integer.
It is still nonlinear, but you can get an approximate solution by iteration easily (in fact, I expect the n^(n+1/2) factor is enough).
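A Python sketch of that idea, solving ln Gamma(n+1) = ln M by integer binary search (it assumes M is an exact factorial that fits in a double; the function name is mine):
import math

def inverse_factorial_via_gamma(M):
    target = math.log(M)
    lo, hi = 1, 2
    while math.lgamma(hi + 1) < target:       # bracket the answer
        lo, hi = hi, hi * 2
    while lo < hi:                            # binary search on ln(n!)
        mid = (lo + hi) // 2
        if math.lgamma(mid + 1) < target:
            lo = mid + 1
        else:
            hi = mid
    # guard against floating-point off-by-one: pick the closest candidate
    return min((lo - 1, lo, lo + 1), key=lambda k: abs(math.lgamma(k + 1) - target))

print(inverse_factorial_via_gamma(120))        # 5
print(inverse_factorial_via_gamma(3628800))    # 10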
Multiple ways. Use lookup tables, use binary search, use a linear search...
Lookup tables is an obvious one:
for (i = 0; i < MAX; ++i)
Lookup[i!] = i; // you can calculate i! incrementally in O(1)
You could implement this using hash tables for example, or if you use C++/C#/Java, they have their own hash table-like containers.
This is useful if you have to do this a lot of times and each time it has to be fast, but you can afford to spend some time building this table.
Binary search: assume the number is m = (1 + N!) / 2. Is m! larger than N!? If yes, reduce the search between 1 and m!, otherwise reduce it between m! + 1 and N!. Recursively apply this logic.
Of course, these numbers might be very big and you might end up doing a lot of unwanted operations. A better idea is to search between 1 and sqrt(N!) using binary search, or try to find even better approximations, though this might not be easy. Consider studying the gamma function.
Linear search: Probably the best in this case. Calculate 1*2*3*...*k until the product is equal to N! and output k.
If the input number is really N!, it's fairly simple to calculate N.
A naive approach computing factorials will be too slow, due to the overhead of big integer arithmetic. Instead we can notice that, when N ≥ 7, each factorial can be uniquely identified by its length (i.e. number of digits).
The length of an integer x can be computed as floor(log10(x)) + 1.
Product rule of logarithms: log(a*b) = log(a) + log(b)
By using the above two facts, we can say that the length of N! is floor(log10(1) + log10(2) + ... + log10(N)) + 1,
which can be computed by simply adding log10(i) until we reach the length of our input number, since log(1*2*3*...*N) = log(1) + log(2) + log(3) + ... + log(N).
This C++ code should do the trick:
#include <cmath>
#include <iostream>
#include <string>

int main() {
    std::string inputNumber;                 // the decimal digits of N!
    std::cin >> inputNumber;
    double result = 0;
    for (int i = 1; i <= 1000000; ++i) {     // this should work up to 1000000!
        result += log10(i);
        if ((int)result + 1 == (int)inputNumber.size()) {
            std::cout << i << std::endl;
            break;
        }
    }
}
(remember to check for cases where n<7 (basic factorial calculation should be fine here))
Complete code: https://pastebin.com/9EVP7uJM
Here is some clojure code:
(defn- reverse-fact-help [n div]
  (cond (not (= 0 (rem n div))) nil
        (= 1 (quot n div)) div
        :else (reverse-fact-help (/ n div) (+ div 1))))

(defn reverse-fact [n] (reverse-fact-help n 2))
Suppose n=120, div=2. 120/2=60, 60/3=20, 20/4=5, 5/5=1, return 5
Suppose n=12, div=2. 12/2=6, 6/3=2, 2/4=.5, return 'nil'
int p = 1, i = 1;
// assume variable fact_n has the value n!
while (p < fact_n) p = p * ++i;
// i is the number you are looking for if p == fact_n, else fact_n is not a factorial
I know it isn't pseudocode, but it's pretty easy to understand.
inverse_factorial(X)
{
    X_LOCAL = X;
    ANSWER = 1;
    while (1) {
        if (X_LOCAL / ANSWER == 1)
            return ANSWER;
        X_LOCAL = X_LOCAL / ANSWER;
        ANSWER = ANSWER + 1;
    }
}
This function is based on successive approximations! I created it and implemented it in Advanced Trigonometry Calculator 1.7.0
double arcfact(double f) {
    double result = 0, precision = 1000;
    int i = 0;
    if (f > 0) {
        while (precision > 1E-309) {
            while (f > fact(result + precision) && i < 10) {  // fact() is the calculator's factorial routine (not shown here)
                result = result + precision;
                i++;
            }
            precision = precision / 10;
            i = 0;
        }
    }
    else {
        result = 0;
    }
    return result;
}
If you do not know whether a number M is N! or not, a decent test is to check that it's divisible by all the small primes up to the point where the Stirling approximation of that prime is larger than M. Alternatively, if you have a table of factorials but it doesn't go high enough, you can pick the largest factorial in your table and make sure M is divisible by it.
In C from my app Advanced Trigonometry Calculator v1.6.8
double arcfact(double f) {
    double i = 1, result = f;
    while ((result / (i + 1)) >= 1) {
        result = result / i;
        i++;
    }
    return result;
}
What do you think about that? It works correctly for integer factorials.
Simply divide by successive positive integers, i.e.: 5! = 120 ->> 120/2 = 60, 60/3 = 20, 20/4 = 5, 5/5 = 1.
So the last number before the result reaches 1 is your number.
In code you could do something like the following:
res = number
x = 1
while (res != 1) {
    x = x + 1
    res = res / x
}
// x is now your number
or something like that. This calculation needs improvement for non-exact numbers.
Most numbers are not in the range of outputs of the factorial function. If that is what you want to test, it's easy to get an approximation using Stirling's formula or the number of digits of the target number, as others have mentioned, then perform a binary search to determine factorials above and below the given number.
What is more interesting is constructing the inverse of the Gamma function, which extends the factorial function to positive real numbers (and to most complex numbers, too). It turns out construction of an inverse is a difficult problem. However, it was solved explicitly for most positive real numbers in 2012 in the following paper: http://www.ams.org/journals/proc/2012-140-04/S0002-9939-2011-11023-2/S0002-9939-2011-11023-2.pdf . The explicit formula is given in Corollary 6 at the end of the paper.
Note that it involves an integral on an infinite domain, but with a careful analysis I believe a reasonable implementation could be constructed. Whether that is better than a simple successive approximation scheme in practice, I don't know.
C/C++ code to recover N from the factorial (r is the factorial value):
int wtf(int r) {
    int f = 1;
    while (r > 1)
        r /= ++f;
    return f;
}
Sample tests:
Call: wtf(1)
Output: 1
Call: wtf(120)
Output: 5
Call: wtf(3628800)
Output: 10
Full inverted factorial, valid for x > 1, based on the suggested calculation above. If the factorial is expressible in full binary form, the algorithm is:
Suppose the input is the factorial x, x = n!
Return 1 for x = 1
Find the number of trailing 0's in the binary expansion of x; call it t
Calculate x/fact(t), x divided by the factorial of t, mathematically x/(t!)
Find how many times t+1 multiplies into x/fact(t), i.e. log base (t+1) of x/fact(t), rounded down to the nearest integer; call it m
Return m+t
#include <math.h>

__uint128_t factorial(int n);   /* assumed to be defined elsewhere */

int invert_factorial(__uint128_t fact)
{
    if (fact == 1) return 1;
    int t = __builtin_ctzll((unsigned long long)fact);   /* trailing zero bits; the low 64 bits suffice for factorials that fit in 128 bits */
    double res = (double)(fact / factorial(t));
    return t + (int)(log(res) / log(t + 1));
}
128-bit arithmetic gives in at 34!.

Getting N random numbers whose sum is M

I want to get N random numbers whose sum is a value.
For example, let's suppose I want 5 random numbers that sum to 1.
Then, a valid possibility is:
0.2 0.2 0.2 0.2 0.2
Another possibility is:
0.8 0.1 0.03 0.03 0.04
And so on. I need this for the creation of a matrix of belongings for Fuzzy C-means.
Short Answer:
Just generate N random numbers, compute their sum, divide each one by
the sum and multiply by M.
Longer Answer:
The above solution does not yield a uniform distribution which might be an issue depending on what these random numbers are used for.
Another method proposed by Matti Virkkunen:
Generate N-1 random numbers between 0 and 1, add the numbers 0 and 1
themselves to the list, sort them, and take the differences of
adjacent numbers.
This yields a uniform distribution as is explained here
Generate N-1 random numbers between 0 and 1, add the numbers 0 and 1 themselves to the list, sort them, and take the differences of adjacent numbers.
I think it is worth noting that the currently accepted answer does not give a uniform distribution:
"Just generate N random numbers,
compute their sum, divide each one by
the sum"
To see this let's look at the case N=2 and M=1. This is a trivial case, since we can generate a list [x,1-x], by choosing x uniformly in the range (0,1).
The proposed solution generates a pair [x/(x+y), y/(x+y)] where x and y are uniform in (0,1). To analyze this we choose some z such that 0 < z < 0.5 and compute the probability that the first element is smaller than z. This probability should be z if the distribution were uniform. However, we get
Prob(x/(x+y) < z) = Prob(x < z(x+y)) = Prob(x(1-z) < zy) = Prob(x < y(z/(1-z))) = z/(2-2z).
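A quick Monte Carlo check of that formula in Python: with z = 0.4 the uniform answer would be 0.4, while z/(2-2z) = 1/3.
import random

z, trials, hits = 0.4, 1_000_000, 0
for _ in range(trials):
    x, y = random.random(), random.random()
    if x / (x + y) < z:
        hits += 1
print(hits / trials)     # roughly 0.333, not 0.4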
I did some quick calculations, and it appears that the only solution so far that results in a uniform distribution was proposed by Matti Virkkunen:
"Generate N-1 random numbers between 0 and 1, add the numbers 0 and 1 themselves to the list, sort them, and take the differences of adjacent numbers."
Unfortunately, a number of the answers here are incorrect if you'd like uniformly random numbers. The easiest (and fastest in many languages) solution that guarantees uniformly random numbers is just
# This is Python, but most languages support the Dirichlet.
import numpy as np
np.random.dirichlet(np.ones(n))*m
where n is the number of random numbers you want to generate and m is the sum of the resulting array. This approach produces positive values and is particularly useful for generating valid probabilities that sum to 1 (let m = 1).
To generate N positive numbers that sum to a positive number M at random, where each possible combination is equally likely:
Generate N exponentially-distributed random variates. One way to generate such a number can be written as—
number = -ln(1.0 - RNDU())
where ln(x) is the natural logarithm of x and RNDU() is a method that returns a uniform random variate greater than 0 and less than 1. Note that generating the N variates with a uniform distribution is not ideal because a biased distribution of random variate combinations will result. However, the implementation given above has several problems, such as being ill-conditioned at large values because of the distribution's right-sided tail, especially when the implementation involves floating-point arithmetic. Another implementation is given in another answer.
Divide the numbers generated this way by their sum.
Multiply each number by M.
The result is N numbers whose sum is approximately equal to M (I say "approximately" because of rounding error). See also the Wikipedia article Dirichlet distribution.
This problem is also equivalent to the problem of generating random variates uniformly from an N-dimensional unit simplex.
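A Python sketch of the steps above (the function name is mine); random.expovariate supplies the exponential variates directly:
import random

def random_sum_to(n, m):
    e = [random.expovariate(1.0) for _ in range(n)]   # N exponential variates
    s = sum(e)
    return [m * x / s for x in e]                     # normalize, then scale to M

print(random_sum_to(5, 1.0))     # five positive numbers summing (up to rounding) to 1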
However, for better accuracy (compared to the alternative of using floating-point numbers, which often occurs in practice), you should consider generating n random integers that sum to an integer m * x, and treating those integers as the numerators to n rational numbers with denominator x (and will thus sum to m assuming m is an integer). You can choose x to be a large number such as 2^32 or 2^64 or some other number with the desired precision. If x is 1 and m is an integer, this solves the problem of generating random integers that sum to m.
The following pseudocode shows two methods for generating n uniform random integers with a given positive sum, in random order. (The algorithm for this was presented in Smith and Tromble, "Sampling Uniformly from the Unit Simplex", 2004.) In the pseudocode below—
the method PositiveIntegersWithSum returns n integers greater than 0 that sum to m, in random order,
the method IntegersWithSum returns n integers 0 or greater that sum to m, in random order, and
Sort(list) sorts the items in list in ascending order (note that sort algorithms are outside the scope of this answer).
 
METHOD PositiveIntegersWithSum(n, m)
    if n <= 0 or m <= 0: return error
    ls = [0]
    ret = NewList()
    while size(ls) < n
        c = RNDINTEXCRANGE(1, m)
        found = false
        for j in 1...size(ls)
            if ls[j] == c
                found = true
                break
            end
        end
        if found == false: AddItem(ls, c)
    end
    Sort(ls)
    AddItem(ls, m)
    for i in 1...size(ls): AddItem(ret, ls[i] - ls[i - 1])
    return ret
END METHOD

METHOD IntegersWithSum(n, m)
    if n <= 0 or m <= 0: return error
    ret = PositiveIntegersWithSum(n, m + n)
    for i in 0...size(ret): ret[i] = ret[i] - 1
    return ret
END METHOD
Here, RNDINTEXCRANGE(a, b) returns a uniform random integer in the interval [a, b).
In Java:
private static double[] randSum(int n, double m) {
    Random rand = new Random();
    double randNums[] = new double[n], sum = 0;

    for (int i = 0; i < randNums.length; i++) {
        randNums[i] = rand.nextDouble();
        sum += randNums[i];
    }
    for (int i = 0; i < randNums.length; i++) {
        randNums[i] = randNums[i] / sum * m;   // divide by the sum, then scale to m
    }
    return randNums;
}
Generate N-1 random numbers.
Compute the sum of said numbers.
Add the difference between the computed sum and the desired sum to the set.
You now have N random numbers, and their sum is the desired sum.
Just generate N random numbers, compute their sum, divide each one by
the sum.
Expanding on Guillaume's accepted answer, here's a Java function that does exactly that.
public static double[] getRandDistArray(int n, double m)
{
    double randArray[] = new double[n];
    double sum = 0;

    // Generate n random numbers
    for (int i = 0; i < randArray.length; i++)
    {
        randArray[i] = Math.random();
        sum += randArray[i];
    }

    // Normalize sum to m
    for (int i = 0; i < randArray.length; i++)
    {
        randArray[i] /= sum;
        randArray[i] *= m;
    }
    return randArray;
}
In a test run, getRandDistArray(5, 1.0) returned the following:
[0.38106150346121903, 0.18099632814238079, 0.17275044310377025, 0.01732932296660358, 0.24786240232602647]
You're a little slim on constraints. Lots and lots of procedures will work.
For example, are numbers normally distributed? Uniform?
I'll assume that all the numbers must be positive and uniformly distributed around the mean, M/N.
Try this.
mean = M/N.
Generate N-1 values between 0 and 2*mean. This can be done by drawing a standard uniform number u between 0 and 1; the random value is then 2*u*mean, which lies in the appropriate range.
Compute the sum of the N-1 values.
The remaining value is M - sum.
If the remaining value does not fit the constraints (0 to 2*mean) repeat the procedure.
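A Python sketch of that procedure (the function name is mine):
import random

def rand_sum(n, m):
    mean = m / n
    while True:
        vals = [2 * random.random() * mean for _ in range(n - 1)]  # N-1 values in (0, 2*mean)
        last = m - sum(vals)                                       # the remaining value
        if 0 <= last <= 2 * mean:                                  # otherwise repeat
            return vals + [last]

print(rand_sum(5, 1.0))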
