I've been thinking about this question for a while. What is the complexity of multiplication if you reduce modulo N after each step?
I know the complexity of multiplication is O(m^2) (where m is the number of digits). Would reducing mod N after each step multiply it by a factor of O(N^2)?
So the total complexity would be O(m^2)*O(N^2)?
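For concreteness, here is a minimal Python sketch of the setup I mean (modmul_chain is just a made-up name for illustration): the accumulator is reduced after every step, so each multiplication only ever sees operands of at most m digits.

```python
# Minimal sketch: multiply and reduce mod N after every step, so the
# running product never exceeds N and each multiplication works on
# operands of at most m digits (m = number of digits of N).

def modmul_chain(factors, N):
    acc = 1
    for f in factors:
        acc = (acc * f) % N  # reduce immediately; acc stays < N
    return acc

print(modmul_chain([7, 11, 13, 17], 1000))  # 17, same as (7*11*13*17) % 1000
```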
Related
I have an algorithm that first does something in O(n*log(n)) time and then does something else in O(n^2) time. Am I correct that the total complexity would be
O(n*log(n) + n^2)
= O(n*(log(n) + n))
= O(n^2)
since in log(n) + n the n term dominates?
The statement is correct, as O(n log n) is a subset of O(n^2); however, a formal proof would consist of choosing suitable constants.
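As a worked instance of choosing those constants: for all $n \ge 1$ we have $\log n \le n$, so

$$n \log n + n^2 \le n \cdot n + n^2 = 2n^2,$$

and $c = 2$, $n_0 = 1$ witness $n \log n + n^2 = O(n^2)$.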
If the call probability of both is equal, then you are right. But if the probabilities are not equal, you have to do an amortized analysis, where you spread rare expensive calls (n^2) over many fast calls (n log(n)).
For quicksort, for example (which generally takes n log(n), but rarely takes n^2), you can prove that the average running time is n log(n) by such an analysis.
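As a minimal illustration of that quicksort example (a sketch, not a proof: the random pivot is what makes the expected running time n log(n) despite the rare n^2 runs):

```python
import random

# Randomized quicksort: worst case O(n^2) on unlucky pivots, but the
# expected running time over the random pivot choices is O(n log n).
def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)
    return (quicksort([x for x in xs if x < pivot])
            + [x for x in xs if x == pivot]
            + quicksort([x for x in xs if x > pivot]))

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```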
One of the rules of complexity analysis is that you drop the terms with lower exponents or slower growth.
n log n vs n^2 (divide both by n)
log n vs n
log n is smaller than n, so you can remove it from the complexity expression.
So if the complexity is O(n log n + n^2), then when n is really big the value of n log n is insignificant compared to n^2, which is why you drop it and rewrite the complexity as O(n^2).
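A quick numeric check of that claim (a Python sketch; the base of the logarithm only changes a constant factor):

```python
import math

# The ratio (n log n) / n^2 shrinks toward zero as n grows.
for n in (10, 1_000, 1_000_000):
    print(n, (n * math.log2(n)) / n**2)
# 10       0.33...
# 1000     0.0099...
# 1000000  0.0000199...
```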
What is the time complexity of calculating 2^5000?
I approached it by recursion, but that leads to O(N), where N is the exponent. Is there any way to reduce this time complexity?
I think that you are interested in the general approach, not only in this specific example.
You can calculate the N-th integer power using O(log N) multiplications with the exponentiation-by-squaring approach.
But note that the number 2^N consists of about N binary digits (bits), so simply writing it to memory is an O(N) operation.
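A minimal sketch of exponentiation by squaring (Python's built-in pow already works this way); it uses O(log N) multiplications, though as noted above each multiplication on numbers this large is not a constant-time operation:

```python
def power(base, exp):
    """Exponentiation by squaring: O(log exp) multiplications."""
    result = 1
    while exp > 0:
        if exp & 1:        # low bit of the exponent is set
            result *= base
        base *= base       # square
        exp >>= 1
    return result

assert power(2, 5000) == 2 ** 5000
```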
Suppose I implement an algorithm that runs in O(n^4) at the current timestep and then O(n^2) at the next.
Is the complexity still max[O(n^4), O(n^2)]?
Is there a way to get a polynomial in the range [2, 4) for the complexity, i.e. something like O(n^2.83) on average?
How would I calculate the average runtime cost, amortized from t = 0 to infinity? Is it just [O(n^2) + O(n^4)] / 2?
O(n^2) is negligible next to O(n^4), since the quotient of the first over the second has limit zero as n grows indefinitely.
So your algorithm is just O(n^4).
Read the Wikipedia page on Big O notation and any good textbook on limits of polynomials.
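To answer the averaging question directly: assuming the two costs strictly alternate, the amortized cost per step is

$$\frac{n^4 + n^2}{2} = \Theta(n^4),$$

so averaging does not yield an intermediate exponent like $n^{2.83}$; the $n^4$ term still dominates.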
Suppose that I have two computational complexities:
O(k * M(n)) - the computational complexity of modular exponentiation, where k is the number of exponent bits, n is the number of digits, and M(n) is the computational complexity of Newton's division algorithm.
O(log^6(n)) - the computational complexity of an algorithm.
How can I determine which of these two complexities is less "expensive"? In fact, the notation M(n) is what confuses me most.
First, for any given fixed n, just put it in the runtime function (sans the Landau O, mind you) and compare.
Asymptotically, you can divide one function (resp. its Landau term) by the other and consider the quotient's limit as n goes to infinity. If it is zero, the function in the numerator grows asymptotically strictly slower than the other. If it is infinity, it grows strictly faster. In all other cases, they have the same asymptotic growth up to a constant factor (i.e. big Theta). If the quotient is 1 in the limit, they are asymptotically equal.
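As a worked instance of that quotient test:

$$\lim_{n \to \infty} \frac{n \log n}{n^2} = \lim_{n \to \infty} \frac{\log n}{n} = 0,$$

so $n \log n$ grows asymptotically strictly slower than $n^2$.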
Okay, according to the Wikipedia entry on applying Newton's method to division, you have to do O(lg(n)) steps to calculate n bits of the quotient. Every step involves a multiplication and a subtraction, so it has bit complexity O(n^2) if we use the simple "schoolbook" multiplication method.
So M(n) = O(lg(n) * n^2), and the complexity of the first approach, O(k * M(n)), is O(k * lg(n) * n^2). It's asymptotically slower than the second approach.
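A small sketch of the precision doubling behind that O(lg(n)) step count, using the standard Newton iteration x_{k+1} = x_k * (2 - d * x_k) for the reciprocal 1/d (floating point here purely for illustration; real implementations work in fixed point):

```python
d = 7.0
x = 0.1  # rough initial guess for 1/7
for step in range(5):
    x = x * (2 - d * x)          # Newton step for f(x) = 1/x - d
    print(step, abs(x - 1 / d))  # error roughly squares each iteration
```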
Definition:
O(k * M(n)) - the computational complexity of modular exponentiation,
where k is the number of exponent bits, n is the number of digits, and M(n) is the computational complexity of Newton's division algorithm.
How can I determine whether this computational complexity is polynomial?
In fact, the notation M(n) is what confuses me most.
Think about the division algorithm.
Does the division algorithm have complexity O(n)? If so, then modular exponentiation is O(k n).
Does the division algorithm have complexity O(n^c) for some constant c? If so, then modular exponentiation is O(k n^c).
Does the division algorithm have complexity O(log n)? If so, then modular exponentiation is O(k log n).
Etc.
The complexity of modular exponentiation is polynomial in the length of the exponent and the length of the modulus even with regular long division, so it is also polynomial with a faster division algorithm. M(n) is the complexity of multiplying two n-digit/bit numbers together.
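To make the O(k * M(n)) bound concrete, here is a minimal square-and-multiply sketch (Python's built-in three-argument pow(base, exp, mod) does the same job): a k-bit exponent costs at most about 2k multiplications, each followed by a reduction, so with any polynomial M(n) the total is polynomial in k and n.

```python
def modexp(base, exp, mod):
    """Square-and-multiply: O(k) multiplications for a k-bit exponent."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:
            result = (result * base) % mod  # multiply + reduce: O(M(n))
        base = (base * base) % mod          # square + reduce: O(M(n))
        exp >>= 1
    return result

assert modexp(5, 117, 19) == pow(5, 117, 19)
```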