if x:
    for i in range(a):
        for z in range(a):
            for k in range(z):
                for p in range(i):
                    c = (i * z) + (k * p)
else:
    for i in range(a):
        for z in range(a):
            for k in range(z):
                c = (i * z) + (k * p)
Would this be O(n^4)? Also, how many multiplications would occur?
EDIT: updated the code. Also, since a lower bound on the worst case captures the maximum number of steps some valid input will force, wouldn't big-Omega be n^4 as well?
Yes, the complexity is still O(n^4). To make things simple, here is the trick: rearrange your code as
for i in range(a):
    for p in range(i):
        f(i, p)
where f(i, p) is
for z in range(a):
    for k in range(z):
        c = (i * z) + (k * p)
In the first part, f(i, p) is executed roughly n^2/2 times to leading order (because of the summation sum_i i = n(n-1)/2; do the math yourself). Similarly, f(i, p) itself has complexity O(n^2/2), by the same summation over z.
So the combined order is O(n^4/4), which is O(n^4). And there are two multiplications in the innermost statement, so the number of multiplications is about n^4/2.
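If you want to sanity-check that count empirically, here is a quick sketch (innermost_count is a hypothetical helper, not part of the question's code); the exact count of innermost executions is (a(a-1)/2)^2, which tends to a^4/4:

def innermost_count(a):
    # Count how many times the innermost statement of the first block runs.
    count = 0
    for i in range(a):
        for z in range(a):
            for k in range(z):
                for p in range(i):
                    count += 1
    return count

for a in (10, 20, 40):
    print(a, innermost_count(a), a**4 // 4)   # the two counts converge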
The following code would only be O(n^4) if all the numbers a, z, and i were O(n).
for i in range(a):
    for z in range(a):
        for k in range(z):
            for p in range(i):
                c = (i * z) + (k * p)
As you've written it, all we know is that the code block is O(a^2 z i). Similarly, the total number of multiplications that would occur would be 2a^2 z i. And, again, if a, z, and i are all O(n), the number of multiplications would be O(n^4).
I'm not sure what you want to know about the second block of code.
Given X, p, a, b, we need to find out how many positive integers n (1 <= n <= X) satisfy the following condition:
n * a^n ≡ b (mod p)
Constraints:
2 <= p <= 10^6,
1 <= a,b< p,
1 <= X <= 10^12
I have no idea how to solve this question, any approach or proof will be highly helpful.
Thanks.
This assumes p is prime.
For each i from 1 upwards, compute a^i (mod p). At some point (call it q) you'll get to 1 and then you can stop; q is the multiplicative order of a. Then finding all n <= X such that n * a^n ≡ b (mod p) and a^n ≡ a^i (mod p) is a question of counting all solutions to n ≡ b * (a^i)^(-1) (mod p) and n ≡ i (mod q), which you can do using the Chinese remainder theorem.
This process enumerates all solutions exactly once, and if you're careful runs in O(p) time. (The care is needed to avoid O(p log p) if you calculate a^i (mod p) and (a^i)^-1 (mod p) from scratch each iteration).
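For concreteness, here is a Python sketch of that counting procedure (my own illustration of the approach, assuming p prime and 1 <= a, b < p; it does one modular exponentiation per class, so it is O(p log p) rather than the careful O(p), and pow(p, -1, q) needs Python 3.8+):

def count_solutions(X, p, a, b):
    # q = multiplicative order of a modulo p (the point where a^i first hits 1)
    q, x = 1, a % p
    while x != 1:
        x = x * a % p
        q += 1
    inv_p = pow(p, -1, q)                   # p^-1 (mod q), for the CRT step
    M = p * q
    total, ai = 0, 1
    for i in range(q):                      # exponent class: n ≡ i (mod q)
        if i > 0:
            ai = ai * a % p                 # a^i (mod p)
        t = b * pow(ai, p - 2, p) % p       # need n ≡ b * (a^i)^-1 (mod p)
        # CRT combine; gcd(p, q) = 1 since q < p and p is prime
        n0 = (t + p * ((i - t) % q * inv_p % q)) % M
        if n0 == 0:
            n0 = M                          # smallest positive representative
        if n0 <= X:
            total += (X - n0) // M + 1      # count of n ≡ n0 (mod M) in [1, X]
    return total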
How do I find C(n, r) mod k
where
0 < n, r < 10^5
k = 10^9 + 7 (a large prime number)
I have found links to solve this using Lucas' theorem here.
But this wouldn't help me in cases where my n, r, k are all large. The extension of this problem is:
Finding the sum of a series like:
(C(n,r) + C(n, r-2) + C(n, r-4) + ......) % k
Original constraints hold.
Thanks.
I know an algorithm with complexity O(r * log n).
First, look at an algorithm to compute C(n, r) without the mod k:
int res = 1;
for (int i = 1; i <= r; i++) {
    res *= (n + 1 - i);
    res /= i;   // exact: the running product is always divisible by i here
}
In your case, you can't divide, because you are using modular arithmetic. But you can multiply by the modular multiplicative inverse; you can find information about it at https://en.wikipedia.org/wiki/Modular_multiplicative_inverse.
Your code will then look like this:
long long res = 1;   // wide type: the product of two values below k can exceed 32 bits
for (int i = 1; i <= r; i++) {
    res *= (n + 1 - i);
    res %= k;
    res *= inverse(i, k);   // modular multiplicative inverse of i modulo k
    res %= k;
}
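For reference, a minimal Python rendering of the same loop (a sketch assuming k is prime, so the inverse can be taken with Fermat's little theorem via pow):

def nCr_mod(n, r, k):
    # Same loop as above; pow(i, k - 2, k) is i^-1 (mod k) for prime k.
    res = 1
    for i in range(1, r + 1):
        res = res * ((n + 1 - i) % k) % k
        res = res * pow(i, k - 2, k) % k
    return res

print(nCr_mod(10, 3, 10**9 + 7))   # 120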
This is a typical use case for dynamic programming. Pascal's triangle gives us
C(n, r) = C(n-1, r) + C(n-1, r-1)
Also we know
C(n, n) = 1
C(n, 0) = 1
C(n, 1) = n
You can apply modulus to each of the sub-results to avoid overflow.
Time and memory complexity are both O(n^2)
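A minimal Python sketch of this recurrence (my own illustration), building the triangle one row at a time; a single rolling row incidentally needs only O(n) memory:

def nCr_mod_dp(n, r, k):
    # Row t of Pascal's triangle holds C(t, 0..t); reduce mod k as we go.
    row = [1]
    for _ in range(n):
        row = [1] + [(row[i] + row[i + 1]) % k for i in range(len(row) - 1)] + [1]
    return row[r]

print(nCr_mod_dp(10, 3, 10**9 + 7))   # 120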
C(n,r) = n!/(r!(n-r)!) = (n(n-1)...(n-r+1))/r!
As k is a prime, for every r < k we can find its modular multiplicative inverse r^(-1) using the extended Euclidean algorithm in O(lg k).
So you may calculate (n(n-1)...(n-r+1)/r!) % k as (((n(n-1)...(n-r+1)) % k) * (r!)^(-1)) % k.
Do this over 1..r and you will get the result.
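For reference, a sketch of such an inverse in Python via the extended Euclidean algorithm (the same kind of modinv helper that the factorialMod code further down assumes exists):

def modinv(a, k):
    # Extended Euclidean algorithm: returns a^-1 mod k, assuming gcd(a, k) == 1.
    old_r, r = a % k, k
    old_s, s = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r   # gcd steps
        old_s, s = s, old_s - q * s   # track the coefficient of a
    return old_s % k                  # old_s * a ≡ old_r = 1 (mod k)

print(modinv(3, 7))   # 5, since 3 * 5 ≡ 1 (mod 7)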
I think the faster way will be to use the modular inverse.
The complexity can be as low as O(log n) per query (once the factorials are precomputed).
For example, ncr(x, y) % m will be:
a = fac(x) % m;
b = fac(y) % m;
c = fac(x-y) % m;
Now if you need to calculate (a / b) % m,
you can do (a % m) * (pow(b, m - 2) % m)   // using Fermat's little theorem
https://comeoncodeon.wordpress.com/2011/10/09/modular-multiplicative-inverse/
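Putting that together, a small Python sketch (my own; fac() is recomputed per call here, so precompute factorials up to n if you have many queries):

def ncr_mod(x, y, m):
    # C(x, y) % m for prime m; the inverse comes from Fermat's little theorem.
    def fac(t):
        f = 1
        for i in range(2, t + 1):
            f = f * i % m
        return f
    a, b, c = fac(x), fac(y), fac(x - y)
    return a * pow(b * c % m, m - 2, m) % m   # (b*c)^(m-2) ≡ b^-1 * c^-1

print(ncr_mod(10, 3, 10**9 + 7))   # 120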
I want to find a fast algorithm to evaluate an expression like the following, where P is prime.
A ^ B ^ C ^ D ^ E mod P
Example:
(9 ^ (3 ^ (15 ^ (3 ^ 15)))) mod 65537 = 16134
The problem is the intermediate results can grow much too large to handle.
Basically the problem reduces to computing a^T mod m for given a, m and a term T that is ridiculously huge. However, we are able to evaluate T mod n for a given modulus n much faster than T itself. So we ask: "Is there an integer n such that a^(T mod n) mod m = a^T mod m?"
Now if a and m are coprime, we know that n = phi(m) fulfills our condition according to Euler's theorem:
a^T (mod m)
= a^((T mod phi(m)) + k * phi(m)) (mod m) (for some k)
= a^(T mod phi(m)) * a^(k * phi(m)) (mod m)
= a^(T mod phi(m)) * (a^phi(m))^k (mod m)
= a^(T mod phi(m)) * 1^k (mod m)
= a^(T mod phi(m)) (mod m)
If we can compute phi(m) (which is easy to do for example in O(m^(1/2)) or if we know the prime factorization of m), we have reduced the problem to computing T mod phi(m) and a simple modular exponentiation.
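As an aside, here is a minimal O(m^(1/2)) totient by trial division (a sketch; the later snippets in this answer assume this helper):

def phi(m):
    # Euler's totient via the prime factorization of m, found by trial division.
    result, d = m, 2
    while d * d <= m:
        if m % d == 0:
            while m % d == 0:
                m //= d
            result -= result // d
        d += 1
    if m > 1:                  # leftover prime factor
        result -= result // m
    return result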
What if a and m are not coprime? The situation is not as pleasant as before, since there might not be a valid n with the property a^T mod m = a^(T mod n) mod m for all T. However, we can show that the sequence a^k mod m for k = 0, 1, 2, ... enters a cycle after some point, that is there exist x and C with x, C < m, such that a^y = a^(y + C) for all y >= x.
Example: For a = 2, m = 12, we get the sequence 2^0, 2^1, ... = 1, 2, 4, 8, 4, 8, ... (mod 12). We can see the cycle with parameters x = 2 and C = 2.
We can find the cycle length via brute-force, by computing the sequence elements a^0, a^1, ... until we find two indices X < Y with a^X = a^Y. Now we set x = X and C = Y - X. This gives us an algorithm with O(m) exponentiations per recursion.
What if we want to do better? Thanks to Jyrki Lahtonen from Math Exchange for providing the essentials for the following algorithm!
Let's evaluate the sequence d_k = gcd(a^k, m) until we find an x with d_x = d_{x+1}. This will take at most log(m) GCD computations, because x is bounded by the highest exponent in the prime factorization of m. Let C = phi(m / d_x). We can now prove that a^{k + C} = a^k for all k >= x, so we have found the cycle parameters in O(m^(1/2)) time.
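A sketch of that gcd-based search in Python (using phi from above; assumes m > 1):

from math import gcd

def cycle_params(a, m):
    # Find x and C such that a^(y + C) ≡ a^y (mod m) for all y >= x,
    # by scanning d_k = gcd(a^k, m) until it stabilizes.
    x, ak, d = 0, 1, 1                 # d_0 = gcd(a^0, m) = 1
    while True:
        ak = ak * a % m
        d_next = gcd(ak, m)            # gcd(a^(x+1), m) == gcd(a^(x+1) mod m, m)
        if d_next == d:
            return x, phi(m // d)
        d, x = d_next, x + 1

print(cycle_params(2, 12))   # (2, 2), matching the example above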
Let's assume we have found x and C and want to compute a^T mod m now.
If T < x, the task is trivial to perform with simple modular exponentiation. Otherwise, we have T >= x and can thus make use of the cycle:
a^T (mod m)
= a^(x + ((T - x) mod C)) (mod m)
= a^(x + (-x mod C) + (T mod C) + k*C) (mod m) (for some k)
= a^(x + (-x mod C) + k*C) * a^(T mod C) (mod m)
= a^(x + (-x mod C)) * a^(T mod C) (mod m)
Again, we have reduced the problem to a subproblem of the same form ("compute T mod C") and two simple modular exponentiations.
Since the modulus is reduced by at least 1 in every iteration, we get a pretty weak bound of O(P^(1/2) * min (P, n)) for the runtime of this algorithm, where n is the height of the stack. In practice we should get a lot better, since the moduli are expected to decrease exponentially. Of course this argument is a bit hand-wavy, maybe some more mathematically-inclined person can improve on it.
There are a few edge cases to consider that actually make your life a bit easier: you can stop immediately if m = 1 (the result is 0 in this case) or if a is a multiple of m (the result is 0 as well in this case).
EDIT: It can be shown that x = C = phi(m) is valid, so as a quick and dirty solution we can use the formula
a^T = a^(phi(m) + T mod phi(m)) (mod m)
for T >= phi(m) or even T >= log_2(m).
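In Python, that quick-and-dirty formula turns into a short recursive sketch (phi as above; mind the caveat that each tail of the tower must be at least log_2 of the current modulus, which holds for towers like the example but not for very small ones):

def tower_mod(exps, m):
    # exps[0] ^ (exps[1] ^ (exps[2] ^ ...)) mod m, using
    # a^T ≡ a^(phi(m) + T mod phi(m)) (mod m) for T >= log_2(m).
    if m == 1:
        return 0
    if len(exps) == 1:
        return exps[0] % m
    p = phi(m)
    t = tower_mod(exps[1:], p)         # T mod phi(m)
    return pow(exps[0], t + p, m)      # the +phi(m) keeps us past the cycle start

print(tower_mod([9, 3, 15, 3, 15], 65537))   # 16134, as in the example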
I was curious if there was a good way to do this. My current code is something like:
def factorialMod(n, modulus):
    ans = 1
    for i in range(1, n + 1):
        ans = ans * i % modulus
    return ans % modulus
But it seems quite slow!
I also can't calculate n! and then apply the prime modulus because sometimes n is so large that n! is just not feasible to calculate explicitly.
I also came across http://en.wikipedia.org/wiki/Stirling%27s_approximation and wonder if this can be used at all here in some way?
Or, how might I create a recursive, memoized function in C++?
n can be arbitrarily large
Well, n can't be arbitrarily large - if n >= m, then n! ≡ 0 (mod m) (because m is one of the factors, by the definition of factorial).
Assuming n << m and you need an exact value, your algorithm can't get any faster, to my knowledge. However, if n > m/2, you can use the following identity (Wilson's theorem, thanks @Daniel Fischer!) to cap the number of multiplications at about m-n:
(m-1)! ≡ -1 (mod m)
1 * 2 * 3 * ... * (n-1) * n * (n+1) * ... * (m-2) * (m-1) ≡ -1 (mod m)
n! * (n+1) * ... * (m-2) * (m-1) ≡ -1 (mod m)
n! ≡ -[(n+1) * ... * (m-2) * (m-1)]^(-1) (mod m)
This gives us a simple way to calculate n! (mod m) in m-n-1 multiplications, plus a modular inverse:
def factorialMod(n, modulus):
    ans = 1
    if n <= modulus // 2:
        # calculate the factorial normally (right argument of range() is exclusive)
        for i in range(1, n + 1):
            ans = (ans * i) % modulus
    else:
        # Fancypants method for large n
        for i in range(n + 1, modulus):
            ans = (ans * i) % modulus
        ans = modinv(ans, modulus)
        ans = -1 * ans + modulus
    return ans % modulus
We can rephrase the above equation in another way, which may or may not perform slightly faster. Using the identity x ≡ x - m (mod m), we can rephrase the equation as
n! ≡ -[(n+1) * ... * (m-2) * (m-1)]^(-1) (mod m)
n! ≡ -[(n+1-m) * ... * (m-2-m) * (m-1-m)]^(-1) (mod m)
(reverse order of terms)
n! ≡ -[(-1) * (-2) * ... * -(m-n-2) * -(m-n-1)]^(-1) (mod m)
n! ≡ -[(1) * (2) * ... * (m-n-2) * (m-n-1) * (-1)^(m-n-1)]^(-1) (mod m)
n! ≡ [(m-n-1)!]^(-1) * (-1)^(m-n) (mod m)
This can be written in Python as follows:
def factorialMod(n, modulus):
ans=1
if n <= modulus//2:
#calculate the factorial normally (right argument of range() is exclusive)
for i in range(1,n+1):
ans = (ans * i) % modulus
else:
#Fancypants method for large n
for i in range(1,modulus-n):
ans = (ans * i) % modulus
ans = modinv(ans, modulus)
#Since m is an odd-prime, (-1)^(m-n) = -1 if n is even, +1 if n is odd
if n % 2 == 0:
ans = -1*ans + modulus
return ans % modulus
If you don't need an exact value, life gets a bit easier - you can use Stirling's approximation to calculate an approximate value in O(log n) time (using exponentiation by squaring).
Finally, I should mention that if this is time-critical and you're using Python, try switching to C++. From personal experience, you should expect about an order-of-magnitude increase in speed or more, simply because this is exactly the sort of CPU-bound tight-loop that natively-compiled code excels at (also, for whatever reason, GMP seems much more finely-tuned than Python's Bignum).
Expanding my comment to an answer:
Yes, there are more efficient ways to do this. But they are extremely messy.
So unless you really need that extra performance, I don't suggest trying to implement them.
The key is to note that the modulus (which is essentially a division) is going to be the bottleneck operation. Fortunately, there are some very fast algorithms that allow you to perform modulus over the same number many times.
Division by Invariant Integers using Multiplication
Montgomery Reduction
These methods are fast because they essentially eliminate the modulus.
Those methods alone should give you a moderate speedup. To be truly efficient, you may need to unroll the loop to allow for better IPC:
Something like this:
ans0 = 1
ans1 = 1
for i in range(1, (n + 1) // 2):   # integer division
    ans0 = ans0 * (2*i + 0) % modulus
    ans1 = ans1 * (2*i + 1) % modulus
return ans0 * ans1 % modulus
but accounting for an odd number of iterations and combining it with one of the methods I linked to above.
Some may argue that loop-unrolling should be left to the compiler. I will counter-argue that compilers are currently not smart enough to unroll this particular loop. Have a closer look and you will see why.
Note that although my answer is language-agnostic, it is meant primarily for C or C++.
n! mod m can be computed in O(n^(1/2 + ε)) operations instead of the naive O(n). This requires use of FFT polynomial multiplication, and is only worthwhile for very large n, e.g. n > 10^4.
An outline of the algorithm and some timings can be seen here: http://fredrikj.net/blog/2012/03/factorials-mod-n-and-wilsons-theorem/
If we want to calculate M = a*(a+1) * ... * (b-1) * b (mod p), we can use the following approach, if we assume we can add, subtract and multiply fast (mod p), and get a running time complexity of O(sqrt(b-a) * polylog(b-a)).
For simplicity, assume (b-a+1) = k^2, is a square. Now, we can divide our product into k parts, i.e. M = [a*..*(a+k-1)] *...* [(b-k+1)*..*b]. Each of the factors in this product is of the form p(x)=x*..*(x+k-1), for appropriate x.
By using a fast multiplication algorithm of polynomials, such as Schönhage–Strassen algorithm, in a divide & conquer manner, one can find the coefficients of the polynomial p(x) in O( k * polylog(k) ). Now, apparently there is an algorithm for substituting k points in the same degree-k polynomial in O( k * polylog(k) ), which means, we can calculate p(a), p(a+k), ..., p(b-k+1) fast.
This algorithm of substituting many points into one polynomial is described in the book "Prime numbers" by C. Pomerance and R. Crandall. Eventually, when you have these k values, you can multiply them in O(k) and get the desired value.
Note that all of our operations were taken (mod p).
The exact running time is O(sqrt(b-a) * log(b-a)^2 * log(log(b-a))).
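To make the block decomposition concrete, here is a naive Python sketch (my own; it uses schoolbook polynomial multiplication and Horner evaluation, so it does not achieve the stated bound; FFT multiplication plus remainder-tree multipoint evaluation would be needed for that). It assumes b - a + 1 is a perfect square, as above:

def product_range_mod(a, b, p):
    # M = a*(a+1)*...*b (mod p), split into k blocks of k consecutive terms.
    k = int(round((b - a + 1) ** 0.5))
    # Coefficients of q(x) = x*(x+1)*...*(x+k-1) mod p, built factor by factor.
    coeffs = [1]
    for j in range(k):
        shifted = [0] + coeffs           # x * q(x)
        coeffs = [(shifted[t] + j * (coeffs[t] if t < len(coeffs) else 0)) % p
                  for t in range(len(shifted))]
    result = 1
    for block in range(k):               # evaluate q at a, a+k, ..., b-k+1
        x = (a + block * k) % p
        val = 0
        for c in reversed(coeffs):       # Horner's rule mod p
            val = (val * x + c) % p
        result = result * val % p
    return result

print(product_range_mod(2, 5, 101))      # 2*3*4*5 mod 101 = 19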
Expanding on my comment, this takes about 50% of the time for all n in [100, 100007] where m=(117 | 1117):
Function facmod(n As Integer, m As Integer) As Long
    ' f needs a wider type than Integer: f * i can overflow 32 bits before the Mod
    Dim f As Long = 1
    For i As Integer = 2 To n
        f = f * i
        If f > m Then
            f = f Mod m
        End If
    Next
    Return f
End Function
I found this following function on quora:
With f(n,m) = n! mod m;
function f(n, m: int64): int64;
begin
  if n <= 1 then f := 1   { base case n <= 1 also covers n = 0, so 0! = 1 }
  else f := ((n mod m) * (f(n - 1, m) mod m)) mod m;
end;
This probably beats using a time-consuming loop and multiplying large numbers stored in strings. Also, it is applicable to any integer modulus m.
The link where I found this function : https://www.quora.com/How-do-you-calculate-n-mod-m-where-n-is-in-the-1000s-and-m-is-a-very-large-prime-number-eg-n-1000-m-10-9+7
If n = m - 1 for a prime m, then by Wilson's theorem (http://en.wikipedia.org/wiki/Wilson's_theorem) n! mod m = m - 1.
Also, as has already been pointed out, n! mod m = 0 if n >= m.
Assuming that the "mod" operator of your chosen platform is sufficiently fast, you're bounded primarily by the speed at which you can calculate n! and the space you have available to compute it in.
Then it's essentially a 2-step operation:
Calculate n! (there are lots of fast algorithms so I won't repeat any here)
Take the mod of the result
There's no need to overcomplicate things, especially if speed is the critical component. In general, do as few operations inside the loop as you can.
If you need to calculate n! mod m repeatedly, then you may want to memoize the values coming out of the function doing the calculations. As always, it's the classic space/time tradeoff, but lookup tables are very fast.
Lastly, you can combine memoization with recursion (and trampolines as well if needed) to get things really fast.
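For illustration, a minimal Python sketch of the memoize-plus-recursion idea (my own; MODULUS is an arbitrary example value, and Python's default recursion limit makes this practical only for modest n, where a trampoline or an iterative fill would lift that):

from functools import lru_cache

MODULUS = 10**9 + 7          # hypothetical example modulus

@lru_cache(maxsize=None)
def fact_mod(n):
    # Recursive n! mod MODULUS; repeated calls hit the cache instantly.
    if n <= 1:
        return 1
    return n * fact_mod(n - 1) % MODULUS

print(fact_mod(10))          # 3628800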
I have this problem that I can't solve.. what is the complexity of this foo algorithm?
int foo(char A[], int n, int m){
    int i, a = 0;
    if (n >= m)
        return 0;
    for (i = n; i < m; i++)
        a += A[i];
    return a + foo(A, n*2, m/2);
}
the foo function is called by:
foo(A,1,strlen(A));
So... I guess it's log(n) * something for the internal for loop, but I'm not sure if it's log(n) or what.
Could it be Theta(log^2(n))?
This is a great application of the master theorem:
Rewrite in terms of n and X = m-n:
int foo(char A[], int n, int X){
    int i, a = 0;
    if (X <= 0) return 0;   /* n >= m is exactly X = m - n <= 0 */
    for (i = 0; i < X; i++)
        a += A[i+n];
    return a + foo(A, n*2, (X - 3*n)/2);
}
So the complexity is
T(X, n) = X + T((X - 3n)/2, n*2)
Noting that the penalty increases with X and decreases with n,
T(X, n) < X + T(X/2, n)
So we can consider the complexity
U(X) = X + U(X/2)
and plug this into master theorem to find U(X) = O(X) --> complexity is O(m-n)
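You can sanity-check that bound numerically by iterating the recurrence (a sketch; T here just accumulates the per-level loop costs):

def T(X, n):
    # Iterate T(X, n) = X + T((X - 3n)/2, 2n) until X drops to zero or below.
    total = 0
    while X > 0:
        total += X
        X, n = (X - 3 * n) // 2, 2 * n
    return total

for X in (10**3, 10**4, 10**5):
    print(X, T(X, 1))   # the ratio T(X, 1) / X stays below 2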
I'm not sure if there's a 'quick and dirty' way, but you can use old good math. No fancy theorems, just simple equations.
On k-th level of recursion (k starts from zero), a loop will have ~ n/(2^k) - 2^k iterations. Therefore, the total amount of loop iterations will be S = sum(n/2^i) - sum(2^i) for 0 <= i <= l, where l is the depth of recursion.
The depth l will be approximately log2(n)/2 (prove it).
Transforming each part of the formula for S separately, we get:
S = (1 + 2 + .. + 2^l)*n/2^l - (2^(l + 1) - 1) ~= 2*n - 2^(l + 1) ~= 2*n - sqrt(n)
Since every statement other than the loop is repeated only l ~= log2(n)/2 times, it won't affect the complexity.
So, in the end we get O(n).
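You can check this empirically by counting the loop iterations of foo(A, 1, n) directly (a sketch that mirrors the recursion without the array):

def loop_iterations(n):
    # Total for-loop iterations across all recursive calls of foo(A, 1, n).
    total, lo, hi = 0, 1, n
    while lo < hi:
        total += hi - lo
        lo, hi = lo * 2, hi // 2
    return total

for n in (10**4, 10**6):
    print(n, loop_iterations(n), 2 * n)   # total ~= 2n - O(sqrt(n))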