How do I find C(n, r) mod k
where
0 < n, r < 10^5
k = 10^9 + 7 (a large prime)
I have found links to solving this using Lucas' theorem here, but that wouldn't help me in cases where n, r and k are all large. An extension of this problem is finding the sum of a series like:
(C(n,r) + C(n, r-2) + C(n, r-4) + ...) % k
Original constraints hold.
Thanks.
I know an algorithm with complexity O(r * log n).
First, look at the algorithm to calculate C(n,r) without mod k:
int res = 1;
for(int i=1; i<=r; i++){
    res *= (n+1-i);  // multiply by the next factor of n*(n-1)*...*(n-r+1)
    res /= i;        // always divides exactly: res equals C(n, i) here
}
In your case you can't divide, because you are using modular arithmetic. But you can multiply by the modular multiplicative inverse; you can find information about it here: https://en.wikipedia.org/wiki/Modular_multiplicative_inverse.
Your code will look like this:
long long res = 1;
for(int i=1; i<=r; i++){
    res *= (n+1-i);
    res %= k;
    res *= inverse(i,k);  // modular inverse of i modulo k
    res %= k;
}
(Use a 64-bit type for res: the intermediate products exceed the range of int when k is around 10^9.)
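To make the idea concrete, here is a minimal Python sketch of the same loop. It assumes k is prime and uses Fermat's little theorem (pow(i, k-2, k)) as one possible implementation of inverse(i, k):

```python
def binom_mod(n, r, k):
    # C(n, r) mod k, replacing each division by i with a
    # multiplication by the modular inverse of i (k must be prime)
    res = 1
    for i in range(1, r + 1):
        res = res * (n + 1 - i) % k
        res = res * pow(i, k - 2, k) % k  # Fermat inverse of i mod k
    return res
```

For n, r up to 10^5 this runs in O(r log k), well within the stated constraints.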
This is a typical use case for dynamic programming. Pascal's triangle gives us
C(n, r) = C(n-1, r) + C(n-1, r-1)
Also we know
C(n, n) = 1
C(n, 0) = 1
C(n, 1) = n
You can apply modulus to each of the sub-results to avoid overflow.
Time and memory complexity are both O(n^2)
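A short Python sketch of this DP (the table layout and function name are my own choices):

```python
def binom_table(n, k):
    # fill Pascal's triangle row by row, reducing every entry mod k;
    # O(n^2) time and memory, matching the analysis above
    C = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        C[i][0] = 1  # C(i, 0) = 1
        for j in range(1, i + 1):
            C[i][j] = (C[i - 1][j - 1] + C[i - 1][j]) % k
    return C
```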
C(n,r) = n!/(r!(n-r)!) = (n(n-1)...(n-r+1))/r!
As k is a prime, for every r < k we can find its modular multiplicative inverse r^-1 using the extended Euclidean algorithm in O(log k).
So you may calculate (x/r) % k as ((x % k) * r^-1) % k.
Do this for each i from 1 to r and you will get the result.
I think the faster way will be using the modular inverse.
Once the factorials are available, the cost per query is as low as O(log m).
for example
ncr( x, y) % m will be
a = fac(x) % m;
b = fac(y) % m;
c = fac(x-y) % m;
now if you need to calculate (a / b) % m
you can do (a % m) * (pow(b, m - 2) % m) // using Fermat's little theorem, valid because m is prime
https://comeoncodeon.wordpress.com/2011/10/09/modular-multiplicative-inverse/
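A hedged Python sketch of this factorial-plus-Fermat approach (the names fac and ncr_mod are my own; m must be prime for the pow(b, m-2, m) inverse to be valid):

```python
MOD = 10**9 + 7  # the prime modulus from the question

def ncr_mod(x, y, m=MOD):
    # C(x, y) mod m via a = x! % m, b = y! % m, c = (x-y)! % m,
    # then a * (b*c)^(m-2) mod m by Fermat's little theorem
    if y < 0 or y > x:
        return 0
    def fac(t):
        f = 1
        for i in range(2, t + 1):
            f = f * i % m
        return f
    a, b, c = fac(x), fac(y), fac(x - y)
    return a * pow(b * c % m, m - 2, m) % m
```

In practice the factorials would be precomputed once into an array, so that each query really is just the O(log m) exponentiation.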
Could you help me please? I need a fast algorithm for calculating the remainder of the sum of i^i over a given range (from A to B, 1 < A, B < 10^8) divided by 987654321.
For instance, if I have A = 10, B = 15, I should calculate
((11^11) + (12^12) + (13^13) + (14^14)) % 987654321
With the direct approach it takes forever to calculate this. Is there a trick to calculating this kind of remainder?
Using fast modular exponentiation, we can calculate x^n in O(log n) time. In the worst case, if A = 1 and B = n where n can be up to 10^8, the total complexity will be around
log(2) + log(3) + log(4) + ... + log(n)
= log(n!)
~ n*log(n) - n + O(log(n)) (by Stirling's approximation)
Fast Modulo Exponentiation (see Wikipedia)
This method is used to quickly calculate powers of the form x^n (in O(log n) time).
It can be given as a recurrence relation:
x^n = (x^2)^(n/2)         if n is even
    = x * (x^2)^((n-1)/2) if n is odd
So, essentially, instead of multiplying x by itself n times, we repeatedly do
x = x^2;
n = n/2;
until we reach the trivial case n = 1.
Python code (with modulo for this case):
def fast(x, n, mod):
    if n == 1:
        return x % mod
    if n % 2 == 0:
        return fast(x**2 % mod, n // 2, mod)   # integer division for Python 3
    else:
        return x * fast(x**2 % mod, (n - 1) // 2, mod) % mod
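Applied to the original question, Python's built-in three-argument pow already performs this fast modular exponentiation, so the whole sum can be sketched as follows (the function name is mine; note that 987654321 is not prime, but modular exponentiation needs no primality):

```python
M = 987654321

def power_sum(a, b, mod=M):
    # sum of i^i for a < i < b, as in the A = 10, B = 15 example;
    # pow(i, i, mod) computes each term in O(log i) multiplications
    return sum(pow(i, i, mod) for i in range(a + 1, b)) % mod
```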
I am looking for an efficient algorithm for the following problem: for any N, find all i and j such that N = i^j.
I can solve it in O(N^2) as follows:
for i=1 to N
{
    for j=1 to N
    {
        if(Power(i,j)==N)
            print(i,j)
    }
}
I am looking for a better algorithm (or a program in any language) if possible.
Given that i^j=N, you can solve the equation for j by taking the log of both sides:
j log(i) = log(N) or j = log(N) / log(i). So the algorithm becomes
for i=2 to N
{
    j = log(N) / log(i)
    if(Power(i,j)==N)
        print(i,j)
}
Note that due to rounding errors with floating point calculations, you might want to check j-1 and j+1 as well, but even so, this is an O(n) solution.
Also, you need to skip i=1 since log(1) = 0 and that would result in a divide-by-zero error. In other words, N=1 needs to be treated as a special case. Or not allowed, since the solution for N=1 is i=1 and j=any value.
As M Oehm pointed out in the comments, an even better solution is to iterate over j, and compute i with pow(n,1.0/j). That reduces the time complexity to O(logN), since the maximum value for j is log2(N).
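A Python sketch of that exponent-first idea (the name all_powers and the +-1 rounding guard are my own additions):

```python
def all_powers(N):
    # for each candidate exponent j up to log2(N), take the j-th root
    # and check nearby integers to absorb floating-point rounding error
    pairs = []
    for j in range(2, N.bit_length() + 1):
        i = round(N ** (1.0 / j))
        for cand in (i - 1, i, i + 1):
            if cand >= 2 and cand ** j == N:
                pairs.append((cand, j))
    return pairs
```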
Here is a method you can use.
Let's say you have to solve the equation
a^b = n   // b and n are known
You can find a using binary search. If you reach a condition such that
x^b < n and (x+1)^b > n
then no pair (a,b) with that b exists such that a^b = n.
If you apply this method for b ranging over 1..log(n), you get all possible pairs.
So the complexity of this method will be O(log n * log n).
Follow these steps:
function ifPower(n,b)
    min=1, max=n
    while(min <= max)
        mid = min + (max-min)/2
        k = mid^b, l = (mid+1)^b
        if(k == n)
            return mid
        if(l == n)
            return mid + 1
        if(k < n && l > n)
            return -1
        if(k < n)
            min = mid + 2   // 2 because we already checked mid+1
        else
            max = mid - 1
    return -1
function findAll(n)
    s = log2(n)
    for b in range 2 to s  // starting from 2 to ignore the base cases, powers 0 and 1; handle them separately if required
        p = ifPower(n,b)
        if(p != -1)
            print(p,b)
Here, in the algorithm, a^b means a raised to the power b, not a xor b (obvious, but just saying).
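A runnable Python version of the same approach; I've simplified the mid/mid+1 bookkeeping into a plain binary search, which finds the same answers:

```python
def if_power(n, b):
    # binary search for a with a^b == n; returns -1 if no such a exists
    lo, hi = 1, n
    while lo <= hi:
        mid = (lo + hi) // 2
        k = mid ** b
        if k == n:
            return mid
        if k < n:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def find_all(n):
    # try every exponent from 2 up to log2(n)
    return [(a, b) for b in range(2, n.bit_length() + 1)
            if (a := if_power(n, b)) != -1]
```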
Is there a faster algorithm to calculate (n! modulo m)
than reducing at every multiplication step?
And also:
Is there a faster algorithm to calculate (a^p modulo m) than the right-to-left binary method?
Here is my code:
n! mod m
ans=1;
for(int i=1; i<=n; i++)
    ans=(ans*i)%m;
a^p mod m
result=1;
while(p>0){
    if(p%2!=0)
        result=(result*a)%m;
    p=(p>>1);
    a=(a*a)%m;
}
The a^p mod m algorithm is O(log p); it's the modular exponentiation algorithm.
As for the other one, n! mod m, the algorithm you proposed is clearly O(n), so obviously the first algorithm is faster.
The standard trick for computing a^p modulo m is successive squaring. The idea is to expand p in binary, say
p = e0 * 2^0 + e1 * 2^1 + ... + en * 2^n
where (e0,e1,...,en) are binary (0 or 1) and en = 1. Then use laws of exponents to get the following expansion for a^p
a^p = a^( e0 * 2^0 + e1 * 2^1 + ... + en * 2^n )
= a^(e0 * 2^0) * a^(e1 * 2^1) * ... * a^(en * 2^n)
= (a^(2^0))^e0 * (a^(2^1))^e1 * ... * (a^(2^n))^en
Remember that each ei is either 0 or 1, so these just tell you which numbers to take. So the only computations that you need are
a, a^2, a^4, a^8, ..., a^(2^n)
You can generate this sequence by squaring the previous term. Since you want to compute the answer mod m, you should do the modular arithmetic first. This means you want to compute the following
A0 = a mod m
Ai = (A(i-1))^2 mod m for i >= 1
The answer is then
a^p mod m = A0^e0 * A1^e1 * ... * An^en
Therefore the computation takes log(p) squares and calls to mod m.
I'm not certain whether or not there is an analog for factorials, but a good place to start looking would be at Wilson's Theorem. Also, you should put in a test for m <= n, in which case n! mod m = 0.
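Putting that m <= n test together with the straightforward loop, a Python sketch might look like this:

```python
def fact_mod(n, m):
    # once the product reaches the factor m, everything after is 0 mod m,
    # so n >= m short-circuits immediately
    if n >= m:
        return 0
    ans = 1
    for i in range(2, n + 1):
        ans = ans * i % m
    return ans
```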
For the first computation, you should only bother with the mod operator when ans >= m:
ans=1
for(int i=1;i<=n;i++) {
    ans *= i;
    if (ans >= m) ans %= m;
}
For the second computation, using (p & 1) != 0 will probably be a lot faster than using p%2!=0 (unless the compiler recognizes this special case and does it for you). Then the same comment applies about avoiding the % operator unless necessary.
I came across this problem of finding said probability, and my first attempt was the following algorithm, which counts the number of relatively prime pairs:
int rel = 0
int total = n * (n - 1) / 2
for i in [1, n)
    for j in [i+1, n)
        if gcd(i, j) == 1
            ++rel
return rel / total
which is O(n^2).
Here is my attempt to reducing complexity:
Observation (1): 1 is relatively prime to [2, n] so n - 1 pairs are trivial.
Observation (2): 2 is not relatively prime to the even numbers in the range [4, n], so only the remaining odd numbers are relatively prime to 2:
#Relatively prime pairs = (n / 2 - 1) if n is even
                        = (n / 2)     if n is odd
(that is the count of odd numbers in [3, n]).
So my improved algorithm would be:
int total = n * (n - 1) / 2
int rel = 0
if (n % 2) // n is odd
    rel = (n - 1) + n / 2
else // n is even
    rel = (n - 1) + n / 2 - 1
for i in [3, n)
    for j in [i+1, n)
        if gcd(i, j) == 1
            ++rel
return rel / total
With this approach I could reduce two loops, but worst case time complexity is still O(n^2).
Question: My question is can we exploit any mathematical properties other than above to find the desired probability in linear time?
Thanks.
You'll need to calculate Euler's totient function for all integers from 1 to n. Euler's totient (or phi) function, φ(n), is an arithmetic function that counts the number of positive integers less than or equal to n that are relatively prime to n.
To calculate the function efficiently, you can use a modified version of Sieve of Eratosthenes.
Here is a sample C++ code -
#include <stdio.h>

#define MAXN 10000000

int phi[MAXN+1];
bool isPrime[MAXN+1];

void calculate_phi() {
    int i, j;
    for (i = 1; i <= MAXN; i++) {
        phi[i] = i;
        isPrime[i] = true;
    }
    for (i = 2; i <= MAXN; i++) if (isPrime[i]) {
        for (j = i+i; j <= MAXN; j += i) {
            isPrime[j] = false;
            phi[j] = (phi[j] / i) * (i-1);  // apply the factor (1 - 1/i)
        }
    }
    for (i = 1; i <= MAXN; i++) {
        if (phi[i] == i) phi[i]--;  // i is prime (never hit above): phi(i) = i-1
    }
}

int main() {
    calculate_phi();
    return 0;
}
It uses Euler's product formula, described on the Wikipedia page of the totient function.
Calculating the complexity of this algorithm is a bit tricky, but it is about O(n log log n), much less than O(n^2). You can get results for n = 10^7 pretty quickly.
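To tie this back to the original question: for each i >= 2 there are exactly φ(i) integers j < i coprime to i, so summing φ(2)..φ(n) counts every coprime pair once. A compact Python sketch (the sieve and names are my own paraphrase of the C++ above):

```python
def coprime_probability(n):
    # phi sieve, then sum phi(2..n) to count the coprime pairs
    phi = list(range(n + 1))
    for i in range(2, n + 1):
        if phi[i] == i:  # i is prime: no smaller prime has touched it
            for j in range(i, n + 1, i):
                phi[j] -= phi[j] // i  # multiply by (1 - 1/i), exactly
    rel = sum(phi[2:])        # number of coprime pairs (j, i) with j < i
    total = n * (n - 1) // 2
    return rel / total
```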
The number of integers in the range 1 .. n that are coprime to n is the Euler totient function of n. You are computing the sum of such values, the so-called summatory totient function. Methods to compute this sum fast are, for example,
described here. You should easily get a method with better than quadratic complexity,
depending on how fast you implement the totient function.
Even better are the references listed in the encyclopedia of integer sequences: http://oeis.org/A002088, though many of the references require some math skills.
Using these formulas you can even get an implementation that is sublinear.
For each prime p, probability of it dividing a randomly picked number between 1 and n is
[n / p] / n
([x] being the biggest integer not greater than x). If n is large, this is approximately 1/p.
The probability of it dividing two such randomly picked numbers is
([n / p] / n)^2
Again, this is 1/p^2 for large n.
Two numbers are coprime if no prime divides both, so the probability in question is the product
Π over primes p <= n of (1 - ([n / p] / n)^2)
It is enough to calculate it for all primes less than or equal to n. As n goes to infinity, this product approaches 6/π^2.
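For a finite n, the product can be evaluated directly with a small prime sieve. A Python sketch (the function name is mine, and the product is an approximation of the true probability, since divisibility by different primes is only approximately independent):

```python
def coprime_prob_estimate(n):
    # product over primes p <= n of (1 - (floor(n/p)/n)^2),
    # enumerating the primes with a basic Eratosthenes sieve
    is_prime = [True] * (n + 1)
    prob = 1.0
    for p in range(2, n + 1):
        if is_prime[p]:
            for q in range(p * p, n + 1, p):
                is_prime[q] = False
            prob *= 1 - (n // p / n) ** 2
    return prob
```

For n = 1000 this already lands close to the limit 6/π^2 ≈ 0.6079.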
I'm not sure you can use the totient function directly, as described in the other answers.
I have this problem that I can't solve. What is the complexity of this foo algorithm?
int foo(char A[], int n, int m){
    int i, a=0;
    if (n>=m)
        return 0;
    for(i=n;i<m;i++)
        a+=A[i];
    return a + foo(A, n*2, m/2);
}
The foo function is called by:
foo(A,1,strlen(A));
so.. I guess it's log(n) * something for the internal for loop.. which I'm not sure if it's log(n) or what..
Could it be theta of log^2(n)?
This is a great application of the master theorem.
Rewrite in terms of n and X = m-n:
int foo(char A[], int n, int X){
    int i, a=0;
    if (X <= 0) return 0;
    for(i=0;i<X;i++)
        a+=A[i+n];
    return a + foo(A, n*2, (X-3*n)/2);
}
So the complexity is
T(X, n) = X + T((X - 3n)/2, n*2)
Noting that the penalty increases with X and decreases with n,
T(X, n) < X + T(X/2, n)
So we can consider the complexity
U(X) = X + U(X/2)
and plug this into the master theorem to find U(X) = O(X) --> the complexity is O(m-n)
I'm not sure if there's a 'quick and dirty' way, but you can use old good math. No fancy theorems, just simple equations.
On k-th level of recursion (k starts from zero), a loop will have ~ n/(2^k) - 2^k iterations. Therefore, the total amount of loop iterations will be S = sum(n/2^i) - sum(2^i) for 0 <= i <= l, where l is the depth of recursion.
The l will be approximately log(2, n)/2 (prove it).
Transforming each part in formula for S separately, we get.
S = (1 + 2 + .. + 2^l)*n/2^l - (2^(l + 1) - 1) ~= 2*n - 2^(l + 1) ~= 2*n - 2*sqrt(n)
Since every statement other than the loop is executed only l times, and we know that l ~= log(2, n)/2, it doesn't affect the complexity.
So, in the end we get O(n).
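The closed-form result is easy to sanity-check numerically. A small Python sketch that counts the loop iterations of foo(A, 1, m) without touching any real data:

```python
def count_iterations(m):
    # mirrors the recursion: at each level the loop runs m - n times,
    # then n doubles and m halves, until n >= m
    n, total = 1, 0
    while n < m:
        total += m - n
        n, m = n * 2, m // 2
    return total
```

The ratio count_iterations(m) / m approaches 2 as m grows, matching S ~= 2*n.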