Definition:
O(k * M(n)): the computational complexity of modular exponentiation,
where k is the number of exponent bits, n is the number of digits, and M(n) is the computational complexity of Newton's division algorithm.
How can I determine whether this computational complexity is polynomial?
In fact, the notation M(n) is what confuses me most.
Think about the division algorithm.
Does the division algorithm have complexity O(n)? If so, then modular exponentiation is O(k n).
Does the division algorithm have complexity O(n^c) for some constant c? If so, then modular exponentiation is O(k n^c).
Does the division algorithm have complexity O(log n)? If so, then modular exponentiation is O(k log n).
Etc.
The complexity of modular exponentiation is polynomial in the length of the exponent and the length of the modulus even with regular long division, so it is also polynomial with a faster division algorithm. M(n) is the complexity of multiplying two n-digit/bit numbers together (see here).
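To make the O(k * M(n)) shape concrete, here is a minimal square-and-multiply sketch in Python (the function name and test values are just illustrative); each of the k exponent bits costs at most a constant number of n-digit multiplications, each followed by a reduction modulo m:

```python
def mod_pow(base, exp, m):
    """Square-and-multiply modular exponentiation.

    The loop runs once per bit of exp (k iterations), and each iteration
    performs a constant number of multiplications and reductions of
    n-digit numbers, giving the O(k * M(n)) bound.
    """
    result = 1
    base %= m
    while exp > 0:
        if exp & 1:                       # current exponent bit is set
            result = (result * base) % m  # one multiply + one reduction: M(n)
        base = (base * base) % m          # one squaring + one reduction: M(n)
        exp >>= 1                         # next bit; k iterations in total
    return result

# Sanity check against Python's built-in three-argument pow
assert mod_pow(5, 117, 19) == pow(5, 117, 19)
```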
Related
For the equation: (n(n-1))/2
If you were to write an algorithm to represent this equation, what would be the complexity of such an algorithm?
The time complexity of (n*(n-1))/2 will be O(n^2).
The expression 1 + 2 + 3 + ... + (n-1) can be evaluated with an O(n) algorithm, but the time complexity of the "resulting formula" need not be the same (or linear). Big O notation cannot be applied like that.
If a function is represented as (n*(n-1))/2, its rate of growth is not linear. Big O notation describes the rate of growth of a function, so the time complexity of (n*(n-1))/2 is O(n^2).
To be more specific, when calculating n*(n-1) for a large value of n, you have to store every digit of the number in an array and then multiply them. With the schoolbook method this takes O(N^2) time, where N is the number of digits.
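As a rough sketch of that (digit arrays are little-endian here, and this is plain schoolbook multiplication, not what a real bignum library would do):

```python
def schoolbook_multiply(a_digits, b_digits):
    """Multiply two numbers given as little-endian decimal digit lists.

    The nested loop over every digit pair is what gives O(N^2),
    where N is the number of digits.
    """
    result = [0] * (len(a_digits) + len(b_digits))
    for i, a in enumerate(a_digits):          # N iterations
        carry = 0
        for j, b in enumerate(b_digits):      # N iterations each -> N^2 digit products
            total = result[i + j] + a * b + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b_digits)] += carry
    return result

# 12 * 34 = 408; little-endian digits: 12 -> [2, 1], 34 -> [4, 3]
print(schoolbook_multiply([2, 1], [4, 3]))    # [8, 0, 4, 0], i.e. 408
```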
(n*(n-1))/2 can be represented as (1 + 2 + ... + (n-1)). Finding the sum using this expanded expression would have O(n) time complexity and O(1) space complexity.
If you don't expand (n*(n-1))/2 into the sum expression, then computing (n*(n-1))/2 directly takes O(1) time.
Why O(n) for the expanded (sum) expression?
Because you do the addition by considering the (n-1) terms one by one.
And O(n-1) is considered the same as O(n).
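A small sketch of both versions, assuming n fits in a machine word so that each addition, multiplication, and division counts as O(1):

```python
def sum_expanded(n):
    """O(n): add 1 + 2 + ... + (n-1) term by term."""
    total = 0
    for i in range(1, n):   # n - 1 iterations
        total += i
    return total

def sum_closed_form(n):
    """O(1): evaluate n*(n-1)/2 directly."""
    return n * (n - 1) // 2

assert sum_expanded(10) == sum_closed_form(10) == 45
```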
Say you have an algorithm that completes in a polynomial number of steps for an input of size n, for example P(n) = 2n^2 + 4n + 3. The asymptotic tight bound for this algorithm is Θ(n^2).
Is it true to say that the Big-Theta notation for any algorithm is n to the power of the degree of the polynomial P(n), or are there any cases where that is not true?
The complexity of polynomial-time algorithms is bounded by O(n^k), where 0 < k < ∞. That doesn't mean all algorithms have polynomial time complexity. There are many algorithms with sub-polynomial complexity; examples include O(1) (constant complexity), O(n^(1/k)) (the k-th root of n, for k > 1), O(log n), O(log log n), etc. There are also algorithms with super-polynomial time complexity, for example O(k^n) with k > 1, and O(n!).
Strassen's algorithm is polynomially faster than n-cubed regular matrix multiplication. What does "polynomially faster" mean?
Your question has to do with the theoretical concept of "complexity".
As an example, regular matrix multiplication is said to have complexity O(n^3). This means that as the dimension n grows, the time it takes to run the algorithm, T(n), is guaranteed not to exceed the function n^3 (the cubic function) up to a positive constant factor.
Formally, this means:
There exists a positive threshold n_t such that for every n >= n_t, T(n) <= c * n^3, where c > 0 is some constant.
In your case, the Strassen algorithm has been shown to have complexity O(n^(log2 7)). Since log2 7 ≈ 2.807 < 3, it follows that the Strassen algorithm is guaranteed to run faster than the classical multiplication algorithm as n grows.
As a side-note, keep in mind that for very small values of n (i.e. when n < n_t above) this statement might not hold.
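A quick numerical illustration of that threshold effect (the constant factor 3 attached to Strassen below is an arbitrary assumption standing in for its higher overhead; real constants differ):

```python
import math

LOG2_7 = math.log2(7)   # Strassen's exponent, ≈ 2.807

for n in (4, 16, 64, 256, 1024, 4096):
    classical = n ** 3            # assume cost 1 * n^3
    strassen = 3 * n ** LOG2_7    # assume cost 3 * n^(log2 7)
    # The ratio rises past 1 only once n exceeds the threshold n_t,
    # and then keeps growing without bound.
    print(n, classical / strassen)
```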
Algorithms with complexity O(n^3) and O(n^2) are both polynomial, but the second is polynomially faster.
In this case, I assume it means that both algorithms have a polynomial run time, but the Strassen algorithm is faster.
That's only because the standard algorithm (even the cubic one) is polynomial.
Anyway, I don't think the term "polynomially faster" is a standard term.
So I'm computing the Fibonacci numbers using Binet's formula with the GNU MP library. I'm trying to work out the asymptotic runtime of the algorithm.
For Fib(n) I'm setting the variables to n bits of precision; thus I believe multiplying two numbers is n Log(n). The exponentiation is, I believe, n Log(n) multiplications; so I believe I have n Log(n) Log(n Log(n)). Is this correct, both in the assumptions (multiplying floating point numbers, and the number of multiplications in exponentiation with an integer exponent) and in the conclusion?
If my precision is high and I use precision g(n), then I think this reduces to g(n) Log(g(n)); however, I think g(n) should be g(n) = n Log(phi) + 1, which shouldn't have a real impact on the asymptotics.
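For concreteness, here is a minimal Python/mpmath sketch of the setup being described (the question uses GNU MP from C; mpmath, and the exact number of guard bits, are assumptions on my part, chosen to match g(n) ≈ n log2(phi) plus a little slack):

```python
from mpmath import mp, sqrt, power, nint

def fib_binet(n):
    """Fibonacci via Binet's formula at roughly n * log2(phi) bits of precision.

    Assumption: a few guard bits on top of n * log2(phi) are enough for
    nint() to round to the exact integer.
    """
    mp.prec = int(n * 0.6943) + 16       # log2(phi) ≈ 0.69424, plus guard bits
    phi = (1 + sqrt(5)) / 2
    return int(nint(power(phi, n) / sqrt(5)))

print(fib_binet(100))   # 354224848179261915075
```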
I don't agree with your evaluation.
The cost of long multiplies depends on the algorithm in use. It can be O(n^1.585) [Karatsuba], O(n·Log(n)·Log(Log(n))) [Schönhage–Strassen], or something else.
The cost of exponentiation can be made O(Log(n)) multiplies for exponent n.
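A small sketch of the repeated-squaring idea behind that O(Log(n)) claim; the multiplication counter is only there to make the count visible:

```python
def pow_by_squaring(x, n):
    """Compute x**n while counting multiplications.

    Repeated squaring needs roughly 2*log2(n) multiplications,
    not n of them.
    """
    result, mults = 1, 0
    while n > 0:
        if n & 1:
            result *= x
            mults += 1
        x *= x
        mults += 1
        n >>= 1
    return result, mults

value, mults = pow_by_squaring(3, 1000)
assert value == 3 ** 1000
print(mults)   # 16 multiplications here, versus 999 for a naive loop
```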
Suppose that I have two computational complexities:
O(k * M(n)): the computational complexity of modular exponentiation, where k is the number of exponent bits, n is the number of digits, and M(n) is the computational complexity of Newton's division algorithm.
O(log^6(n)): the computational complexity of an algorithm.
How can I determine which of these two complexities is less "expensive"? In fact, the notation M(n) is what confuses me most.
First, for any given fixed n, just put it in the runtime function (sans the Landau O, mind you) and compare.
Asymptotically, you can divide one function (resp. its Landau term) by the other and consider the limit of the quotient as n goes to infinity. If the limit is zero, the function in the numerator grows asymptotically strictly slower than the other. If it is infinity, it grows strictly faster. In all other cases they have the same asymptotic growth up to a constant factor (i.e. big Theta); if the quotient tends to 1, they are asymptotically equal.
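As a rough numerical check of that quotient for the two functions above (taking lg(n) * n^2 as the concrete form of k * M(n) with schoolbook arithmetic, as the next answer works out, and pretending all hidden constants are 1):

```python
import math

def first(n):    # assumed concrete form of k * M(n): lg(n) * n^2
    return math.log2(n) * n ** 2

def second(n):   # log^6(n)
    return math.log2(n) ** 6

for n in (10, 10**3, 10**6, 10**9):
    # second(n) / first(n) tends to 0, so log^6(n) grows asymptotically slower
    print(n, second(n) / first(n))
```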
Okay, according to this Wikipedia entry on applying Newton's method to division, you have to do O(lg(n)) steps to calculate n bits of the quotient. Every step involves a multiplication and a subtraction, so it has bit complexity O(n^2) if we use the simple "schoolbook" multiplication method.
So the complexity of the first approach is O(lg(n) * n^2), which is asymptotically slower than the second approach.
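For reference, a bare-bones floating-point sketch of the Newton iteration that entry describes (real implementations work in fixed point on n-bit numbers and double the number of correct bits each step; this version only shows the shape of one step, x <- x * (2 - d * x)):

```python
def newton_reciprocal(d, steps):
    """Approximate 1/d with Newton's iteration x <- x * (2 - d * x).

    Each step costs two multiplications and one subtraction, and the
    number of correct bits roughly doubles, so about log2(n) steps
    give n correct bits.
    """
    x = 0.1                      # crude initial guess; assumes 1 <= d < 10
    for _ in range(steps):
        x = x * (2 - d * x)
    return x

print(newton_reciprocal(7.0, 6))   # converges to 1/7 ≈ 0.142857...
```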