Say I need to calculate the time complexity of a function
1⁶ + 2⁶ + 3⁶ + ... + n⁶. I am pretty sure this would be O(n⁷), but I only figure that because I know that Σi from i=0 to n is in O(n²). I cannot find a simple closed-form formula for a summation of i^k. Can anyone provide more detail on how to actually calculate the time complexity?
Thanks!
An easy proof that it's Θ(n⁷) is to observe that:
1⁶+2⁶+3⁶+...+n⁶ <= n⁶+n⁶+...+n⁶ = n⁷
(replacing all numbers with n makes the sum larger).
and
1⁶+2⁶+3⁶+...+n⁶ >= (n/2+1)⁶+...+n⁶ >= (n/2)⁶+(n/2)⁶+...+(n/2)⁶ = n⁷/2⁷
(In the first step, we discard the terms less than or equal to n/2, and in the second step we replace all remaining numbers with n/2. Both steps can only reduce the sum.) (Note: I've assumed n is even, but you can extend to odd n with a bit of minor fiddling.)
Thus 1⁶+2⁶+3⁶+...+n⁶ is bounded above and below by a constant factor of n⁷ and so by definition is Θ(n⁷).
As David Eisenstat suggests in the comments, another proof is to consider the (continuous) graphs y=x⁶ and y=(x+1)⁶ from 0 to n. The areas under these curves bound the sum below and above, and are readily calculated via integrals: the first is n⁷/7 and the second is ((n+1)⁷-1)/7. This shows that the sum is n⁷/7 + o(n⁷).
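If you want a quick numerical sanity check of these bounds (my own illustration, not part of the original answer), a few lines of Python make the n⁷/7 behaviour visible; the helper name sum_of_sixth_powers is just for this sketch:

def sum_of_sixth_powers(n):
    # direct evaluation of 1^6 + 2^6 + ... + n^6
    return sum(i ** 6 for i in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, sum_of_sixth_powers(n) / n ** 7)
# the ratio approaches 1/7 ~ 0.142857, consistent with the integral bounds above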
Related
I asked myself whether one can compute the nth Fibonacci number in O(n) or even O(1) time, and why.
Can someone please explain?
Yes. It is called Binet's Formula, or sometimes, incorrectly, De Moivre's Formula (the real De Moivre's formula is a different one, but De Moivre did discover Binet's formula before Binet), and it involves the golden ratio Phi. The mathematical reasoning behind it (see link) is a bit involved, but doable:
While it is an approximate formula, Fibonacci numbers are integers -- so, once you achieve a high enough precision (which depends on n), you can just round the value given by Binet's formula to the closest integer.
Precision, however, depends on constants, so you basically have two versions, one with single-precision floats and one with double-precision numbers, with the second also running in constant time, but slightly slower. For large n you will need an arbitrary-precision number library, and those have processing times that do depend on the numbers involved; as observed by @MattTimmermans, you'll then probably end up with an O(log^2 n) algorithm. This should happen for values of n large enough that you'd be stuck with a large-number library no matter what (but I'd need to test this to be sure).
Otherwise, the Binet formula is mainly made up of two exponentiations and one division (the three sums and divisions by 2 are probably negligible), while the recursive formula mainly employs function calls and the iterative formula uses a loop. While the first formula is O(1) and the other two are O(n), the actual times are more like a, b*n + c and d*n + e, with values for a, b, c, d and e that depend on the hardware, compiler, implementation etc. With a modern CPU it is very likely that a is not much larger than b or d, which means that the O(1) formula should be faster for almost every n. But most implementations of the iterative algorithm start with
if (n < 2) {
return n;
}
which is very likely to be faster for n = 0 and n = 1. I feel confident that Binet's formula is faster for any n beyond the single digits.
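For concreteness, here is a minimal Python sketch of the rounding trick described above (the function name fib_binet is mine, and the n ≈ 70 cut-off is only a rough rule of thumb for double precision):

import math

def fib_binet(n):
    # Binet's formula with double-precision floats: F(n) ~ phi^n / sqrt(5),
    # rounded to the nearest integer. Precision is typically lost past n ~ 70.
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    return round(phi ** n / sqrt5)

print([fib_binet(n) for n in range(10)])  # 0 1 1 2 3 5 8 13 21 34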
Instead of thinking about the recursive method, think of building the sequence from the bottom up, starting at 1+1.
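As a sketch of that bottom-up idea (again just an illustration, with my own function name), you only ever need the last two values, giving O(n) additions and O(1) extra space:

def fib_bottom_up(n):
    a, b = 0, 1          # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b  # slide the pair one position forward
    return a

print([fib_bottom_up(n) for n in range(10)])  # 0 1 1 2 3 5 8 13 21 34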
You can also use a matrix m like this:
1 1
1 0
and calculate its nth power. Then output m^n[0,0].
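A possible sketch of that matrix method in Python is below (function names are mine). Note that with the convention F(0)=0, F(1)=1, M^n = [[F(n+1), F(n)], [F(n), F(n-1)]], so F(n) sits in the off-diagonal entry while m^n[0,0] is F(n+1); repeated squaring keeps this to O(log n) matrix multiplications:

def mat_mult(a, b):
    # multiply two 2x2 matrices
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def mat_pow(m, n):
    # fast exponentiation by repeated squaring: O(log n) multiplications
    result = [[1, 0], [0, 1]]  # identity
    while n > 0:
        if n & 1:
            result = mat_mult(result, m)
        m = mat_mult(m, m)
        n >>= 1
    return result

def fib_matrix(n):
    return mat_pow([[1, 1], [1, 0]], n)[0][1]  # F(n) for n >= 0

print([fib_matrix(n) for n in range(10)])  # 0 1 1 2 3 5 8 13 21 34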
I feel stupid for asking this question, but...
For the "closest pair of points" problem (see this if unfamiliar with it), why is the worst-case running time of the brute-force algorithm O(n^2)?
If say n = 4, then there would only be 12 possible pairs of points to compare in the search space, if we also consider comparing two points from either direction. If we don't compare two points twice, then it's going to be 6.
O(n^2) doesn't add up to me.
The actual number of comparisons is:
n * (n - 1) / 2, or (n^2 - n) / 2.
But, in big-O notation, you are only concerned with the dominant term. At very large values of n, the lower-order n term becomes less important, as does the constant coefficient on the n^2 term. So we just say it's O(n^2).
Big-O notation isn't meant to give you the exact formula for the time taken or number of steps. It only gives you the order of the complexity/time so you can get a sense of how it scales for large inputs.
Applying brute force, we are forced to check all the possible pairs. Assuming N points, for each point there are N-1 other points for which we need to calculate the distance. So the total number of distances calculated = N points * (N-1) other points. But in the process we double-counted distances: the distance between A and B is the same whether we compute it from A to B or from B to A. Hence N*(N-1)/2. Hence O(N^2).
In big-O notation, you can factor out multiplied constants, so:
O(k*(n^2)) = O(n^2)
The idea is that the constant (1/2 in the OP's example, since distance comparison is symmetric) doesn't really tell us anything new about the complexity. It still grows with the square of the input.
In the brute-force version of the algorithm you compare all possible pairs of points. For each of n points you have (n - 1) other points to compare against, and if we take every pair once we end up with (n * (n - 1)) / 2 comparisons. The pessimistic (worst-case) complexity of O(n^2) means that the number of operations is bounded by k * n^2 for some constant k. Big O notation can't tell you the exact number of operations, but rather gives a function to which that number is proportional as the size of the data (n) increases.
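To make the counting concrete, here is a rough brute-force sketch (my own code, not from any of the answers) that visits every unordered pair exactly once and reports how many distance computations it performed:

import math
from itertools import combinations

def closest_pair_brute_force(points):
    best_dist, best_pair, comparisons = math.inf, None, 0
    for p, q in combinations(points, 2):   # all n*(n-1)/2 unordered pairs
        comparisons += 1
        d = math.dist(p, q)
        if d < best_dist:
            best_dist, best_pair = d, (p, q)
    return best_pair, best_dist, comparisons

pts = [(0, 0), (3, 4), (1, 1), (5, 2)]
print(closest_pair_brute_force(pts))   # 6 comparisons for n = 4, matching the question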
I know that a certain algorithm I am using does 2Nk - 4k^2 operations, with parameters N and k. Now, the first derivative of that function is 2N - 8k (I know that N and k can only be positive integers here, but bear with me). That derivative is positive when k < N/4 and negative when k > N/4. So the operation count actually decreases if we increase k past a certain point. How would I express this in Big O notation? Also note that k <= (N - 1)/2, so there is an upper bound on k.
NOTE:
I know that a similar question has been asked here, but it does not consider the case where the first derivative changes sign if one of the variables reaches a certain point.
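To see the turning point numerically (this is just my own tabulation of the formula, with N chosen arbitrarily), one can print f(k) = 2Nk - 4k^2 over the allowed range of k:

N = 20
for k in range(1, (N - 1) // 2 + 1):
    print(k, 2 * N * k - 4 * k * k)
# the operation count rises up to k = N/4 (here k = 5) and falls afterwards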
I'm at a loss as to how to calculate the best-case running time, omega(f(n)), for this algorithm. In particular, I don't understand how there could be a "best case" - this seems like a very straightforward algorithm. Can someone give me some hints / a general approach for solving these types of questions?
for i = 0 to n do
for j = n to 0 do
for k = 1 to j-i do
print (k)
Have a look at this question and its answers: "Still sort of confused about Big O notation". It may help you sort out some of the confusion between worst case, best case, Big O, Big Omega, etc.
As to your specific problem, you are asked to find some (asymptotic) lower bound of the time complexity of the algorithm (you should also clarify whether you mean Big Omega or small omega).
Otherwise you are right, this problem is quite simple. You just have to think about how many times print (k) will be executed for a given n.
You can see that the first loop runs n times, and so does the second one.
For the third one, you see that if i = 1 and j = n-1, then k runs from 1 to n-1-1 = n-2, which makes me wonder whether your example is correct and whether there should be i-j instead.
In any case, for a constant fraction of the (i, j) pairs the innermost loop executes at least n/2 times. The bound comes from j-i: since j decreases while i increases, the difference is large when i is small and j is close to n, reaches 0 when both are around n/2, and then turns negative, at which point the innermost loop does not execute anymore.
Therefore print (k) executes at least a constant times n*n*n times, which is Omega(n^3), as you can easily verify from the definition.
Just beware, as @Yves points out in his answer, that this is all assuming that print(k) is done in constant time, which is probably what was meant in your exercise.
In the real world, it wouldn't be true because printing the number also takes time and the time increases with log(n) if print(k) prints in base 2 or base 10 (it would be n for base 1).
Otherwise, in this case the best case is the same as the worst case: there is just one input of size n, so you cannot find "some best instance" of size n and a "worst instance" of size n. There is just one instance of size n.
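If you want to double-check the cubic growth empirically (my own sketch, assuming print(k) costs O(1) and that the innermost loop simply doesn't run when j-i < 1), you can count the executions directly:

def count_prints(n):
    count = 0
    for i in range(0, n + 1):
        for j in range(n, -1, -1):
            count += max(0, j - i)   # number of values k takes from 1 to j-i
    return count

for n in (10, 100, 1000):
    print(n, count_prints(n), count_prints(n) / n ** 3)
# the ratio settles near a constant, so the count is Theta(n^3)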
I do not see anything in that function that varies between different executions except for n.
Aside from that, as far as I can tell, there is only one case, and it is both the best case and the worst case at the same time.
It's O(n^3), in case you're wondering.
If this is a sadistic exercise, the complexity is probably Theta(n^3 log n), because of the triple loop but also due to the fact that the number of digits to be printed increases with n.
As n is the only factor in the behavior of your algorithm, the best case is n being 0: one initialization, one test, and then done...
Edit:
The best case describes the algorithm's behavior under optimal conditions. You provided a code snippet which depends on n. What is the best case? n being zero.
Another example: what is the best case of performing a linear search on an array? It is either that the key matches the first element, or that the array is empty.
I am confused about calculating the time complexity of one problem; please help me with it.
Problem statement:
A 2-D matrix is given; you are at the bottom-left block, and you have to go to the top-right block. One constraint is given: from every point you can move only one step, either upwards or to the right.
How many such ways are there? Prove it mathematically.
Is the time complexity polynomial or exponential?
My effort:
If the matrix is of N*N size, then you have to move exactly 2N steps, out of which N steps are R and the remaining N steps are U.
So, if we simplify this, it's a permutation and combination problem: a string is given that contains only two letters, R and U; how many ways can you arrange it?
Answer is
( (2N) C (N) )*( (N) C (N) )*2
Question
Is my above logic correct? If not, please correct me.
Is the above formula polynomial or exponential?
Your idea is correct. However, the answer is a bit inaccurate.
The string only has letters R and U, but its length is 2N-2.
The problem is to arrange 2N-2 objects such that N-1 are of one type and N-1 are of the other type.
The number of possibilities = factorial(2N-2) / ( factorial(N-1) * factorial(N-1) )
If you consider the product of two numbers to be O(k), then calculating the above has a time complexity of O(N*k).
To get an idea of the order of multiplication for various algorithms, you can visit http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations
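As a quick illustration of the corrected formula (my own snippet; math.comb needs Python 3.8+), for a 4x4 grid the count is C(6, 3) = 20:

from math import comb, factorial

def lattice_paths(N):
    # choose which N-1 of the 2N-2 steps go right; the rest go up
    return factorial(2 * N - 2) // (factorial(N - 1) * factorial(N - 1))

print(lattice_paths(4), comb(2 * 4 - 2, 4 - 1))   # both print 20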
EDIT:
This number appears among the binomial coefficients in the expansion of (1+1)^(2N-2) = 2^(2N-2), so it is at most 2^(2N-2).
Hence we can safely say that "counting up to this number" is bounded by an exponential rate of growth, not a factorial one.