I have a piece of code which executes three nested for loops in the following way (written as language-agnostically as possible):
for i = 0 to n - 1
    for j = 0 to n - 1
        for k = 0 to n - 1
            [do computation in linear time]
My intuition says that this should result in N^3 complexity. I'd like to compare its complexity to the following nested loops:
for i = 0 to n - 1
    for j = i + 1 to n - 1
        for k = j + 1 to n - 1
            [do computation in linear time]
Logically, the second should be faster because, as i increases, the inner loops have less work to do; however, I've seen countless other threads on this site explaining that the latter is also N^3, which is confusing.
Is my assumption of N^3 for both correct, and if so, how do I quantify the differences, assuming there are any?
With big O notation, only the leading polynomial term is kept (i.e., in this case, N^3). The iteration count of the latter example you provide is a polynomial of the form a(N^3) + b(N^2) + c(N) + d, where a, b, c and d are constants, and therefore the latter might be significantly quicker in practice depending on how large n is. However, because you have 3 nested loops whose bounds grow with n, the big-O notation will still be N^3.
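To make the constant-factor difference concrete, here is a minimal Python sketch (my own illustration, not from the question) that counts the iterations of both loop structures:

def count_full(n):
    # All three loops run over the full range: exactly n^3 iterations.
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                count += 1
    return count

def count_triangular(n):
    # Each loop starts after the previous index: n*(n-1)*(n-2)/6 iterations.
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                count += 1
    return count

n = 20
print(count_full(n))        # 8000 = n^3
print(count_triangular(n))  # 1140 = n*(n-1)*(n-2)/6, roughly n^3/6

Both counts grow cubically; the second is smaller by roughly a constant factor of 6, which is exactly the kind of difference big-O discards.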
The time complexity of both pieces of code above is of order n^3, i.e., Big O(n^3). By the definition of Big O notation, T(n) ≤ k·f(n) for some constant k, so the smaller iteration count of the second piece of code is absorbed into the constant k.
Related
for i = 1 to n
    for j = 1 to i - 1
Is the runtime of this O(n^2)?
Is there a good way to visualize things when approaching these types of problems to find the right answer?
The inner loop executes
1 + 2 + 3 + ... + (n-1) = n*(n-1)/2 times,
using the formula for the sum of an arithmetic progression, so the overall complexity is O(n^2).
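If it helps to verify that count, here is a quick Python sketch (my own illustration) comparing the actual iteration count with n*(n-1)/2:

def count_iterations(n):
    count = 0
    for i in range(1, n + 1):   # for i = 1 to n
        for j in range(1, i):   # for j = 1 to i - 1
            count += 1
    return count

for n in (5, 10, 100):
    assert count_iterations(n) == n * (n - 1) // 2
    print(n, count_iterations(n))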
Each for loop is O(n); two nested for loops give O(n) * O(n) = O(n^2).
Check this link out. The author explains good ways to approach figuring out runtimes.
I'm under the impression that to find the big O of nested for loops, one multiplies the big O of each for loop by that of the next for loop. Would the big O for:
for i in range(n):
    for j in range(5):
        print(i*j)
be O(5n)? And if so, would the big O for:
for i in range(12345):
    for j in range(i**i**i):
        for y in range(j*i):
            print(i, j, y)
be O(12345*(i**i**i)*(j*i))? Or would it be O(n^3) because it's nested 3 times?
I'm so confused.
This is a bit simplified, but hopefully will get across the meaning of Big-O:
Big-O is about the question "how many times does my code do something?", answering it in algebra, and then asking "which term matters the most in the long run?"
For your first example, the print statement is called 5n times: n times in the outer loop, times 5 in the inner loop. What matters most in the long run? Only n, as the value of 5 never changes! So the overall Big-O complexity is O(n).
For your second example, the number of times the print statement is called is very large, but constant. The outer loop runs 12345 times; the inner loop runs 0 times, then once, then 16 times, then 7625597484987 times... all the way up to 12344^(12344^12344). The innermost loop grows in a similar fashion. What we notice is that all of these are constants! The number of times the print statement is called doesn't actually vary at all. When an algorithm runs in constant time, we represent this as O(1). Conceptually this is similar to the example above: just as 5n / 5 == n, 12345 / 12345 == 1.
The two examples you have chosen only involve stripping out the constant factors (which we always do in Big-O, they never change!). Another example would be:
def more_terms(n):
    for i in range(n):
        for j in range(n):
            print(n)
            print(n)
    for k in range(n):
        print(n)
        print(n)
        print(n)
For this example, the print statement is called 2n^2 + 3n times: for the first set of loops, n times for the outer loop, n times for the inner loop, and then 2 times inside the inner loop; for the second set, n times for the loop and 3 times each iteration. First we strip out the constants, leaving n^2 + n. Now, what matters in the long run? When n is 1, neither really matters. But the bigger n gets, the bigger the difference: n^2 grows much faster than n, so this function has complexity O(n^2).
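As a sanity check (my own addition, not part of the original answer), you can instrument the same loop structure to count the calls and compare against 2n^2 + 3n:

def count_prints(n):
    # Same shape as more_terms, but counting instead of printing.
    count = 0
    for i in range(n):
        for j in range(n):
            count += 2          # two prints in the inner loop body
    for k in range(n):
        count += 3              # three prints per iteration of the second loop
    return count

for n in (1, 10, 50):
    assert count_prints(n) == 2 * n**2 + 3 * n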
You are correct about O(n^3) for your second example. You can calculate big O like this:
Each additional nested loop adds a power of 1 to the exponent of n. So, if we have three nested loops, the big O would be O(n^3). For any number of loops, the big O is O(n^(number of loops)). One loop is just O(n). Any monomial of n, such as O(5n), is just O(n).
You misunderstand what O(n) means. It's hard to understand at first, so there's no shame in not getting it right away. O(n) means "this grows at most as fast as n". It has a rigorous mathematical definition, but it basically boils down to this: if f and g are both functions, f = O(g) means that you can pick some constant number C such that f(n) < C*g(n) for all sufficiently large n. Big O represents an upper bound, and it doesn't care about constant factors, so if f = O(5n), then f = O(n).
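As a concrete illustration of that definition (my own example): for f(n) = 5n you can pick C = 5, and then f(n) <= C*n holds for every n, which is exactly why O(5n) is just O(n):

def f(n):
    return 5 * n

# Witness constant for f = O(n): C = 5 works for all n >= 1.
C = 5
assert all(f(n) <= C * n for n in range(1, 10000))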
I have a question. I've got this algorithm:
procedure summation(A[1...n])
    s = 0
    for i = 1 to n do
        j = min{max{i, A[i]}, n^3}
        s = s + j
    return s
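In case it helps to experiment with it, here is a direct Python translation of the pseudocode (my own sketch; it assumes, as the answers below do, that the intended expression is min{max{i, A[i]}, n^3}):

def summation(A):
    n = len(A)
    s = 0
    for i in range(1, n + 1):
        j = min(max(i, A[i - 1]), n**3)  # A[i] in the pseudocode; Python is 0-indexed
        s += j
    return s

print(summation([4, 1, 7]))  # 4 + 2 + 7 = 13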
And I want to find the best-case and worst-case running time of this algorithm using asymptotic Θ notation.
Any ideas on how to do that?
What do I have to look at in an algorithm to understand its complexity?
If you want to know how big O notation or time complexity works, you might want to look at the following post: What is a plain English explanation of "Big O" notation?
For the pseudocode that you showed, the complexity is O(n), where n is the length of the array.
Often you can determine the complexity just by looking at how many nested loops the algorithm has. Of course this is not always the case, but it can be used as a rule of thumb.
In the following example:
procedure summation(A[B1[1...n], B2[1...n], ..., Bn[1...n]])
    s = 0
    for i = 1 to n do
        for j = 1 to m do
            t = min{max{i, A[i,j]}, n^3}
            s = s + t
    return s
the complexity would be O(n*m), where m is the length of the arrays B.
best or worst case
For the algorithm that you showed, there is no best or worst case. It always runs in the same time for a given array; the only influence on the running time is the length of the array.
An example where there is a best and a worst case is the following. Let's say you need to find the location of a specific number in an array. If your method is to go through the array from start to end, the best case is that the number is at the start, and the worst case is that the number is at the end.
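A minimal sketch of that linear search (my own illustration) makes the two cases visible:

def find_index(arr, target):
    # Walk the array from start to end.
    for idx, value in enumerate(arr):
        if value == target:
            return idx   # best case: target sits at the start, one step
    return -1            # worst case: target is at the end (or absent), n steps

arr = [7, 3, 9, 1]
print(find_index(arr, 7))  # best case: found immediately at index 0
print(find_index(arr, 1))  # worst case: the whole array is scanned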
For a more detailed explanation look at the link.
Cheers.
The best and the worst case are the same because the algorithm runs the "same way" every time, no matter the input. Based on that, we can calculate the time complexity of the algorithm using math:
T(n) = 1 + sum(i=1 to n)(3 + 2) + 1
     = 2 + 5 * sum(i=1 to n) 1
     = 2 + 5 * (n - 1 + 1)
     = 5n + 2
The (3 + 2) inside the summation is due to the fact that inside the loop we have 5 distinct and measurable actions:
j = min{max{i, A[i]}, n^3} counts as 3 actions, because we have 2 comparisons and a value assignment to the variable j.
s = s + j counts as 2 actions, because we have one addition and a value assignment to the variable s.
Asymptotically: Θ(n)
How we calculate Θ(n):
We look at the result, 5n + 2, and take out the constants, leaving n. Then we keep the "biggest" (fastest-growing) term, which here is n.
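If you want to check the 5n + 2 count empirically, here is a sketch (my own addition, using the same action-counting convention as the answer):

def count_actions(A):
    n = len(A)
    ops = 1                              # s = 0
    s = 0
    for i in range(1, n + 1):
        ops += 3                         # 2 comparisons + assignment to j
        j = min(max(i, A[i - 1]), n**3)
        ops += 2                         # 1 addition + assignment to s
        s += j
    ops += 1                             # return s
    return ops

for n in (1, 5, 100):
    assert count_actions(list(range(n))) == 5 * n + 2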
Other examples:
8n^3 + 5n + 2 -> Θ(n^3)
10*log(n) + n^4 + 7 -> Θ(n^4)
More info: http://bigocheatsheet.com/
In O() notation, write the complexity of the following code:

For i = 1 to x
    call funct(i)

funct(x):
    if (x <= 0)
        return some value
    else
        call funct(x - 1)

In O() notation, write the complexity of the following code:

For x = 1 to N
I'm really lost at solving these 2 big O notation complexity problems, please help!
They both appear to me to be O(N).
The first one subtracts 1 each time it calls itself; this means that, given N, it runs N times.
The second one divides N by 2, but Big-O is determined by the worst-case scenario, which means we must assume N is getting significantly larger. When you take that into account, dividing by 2 does not make much of a difference, so while it is originally O(N/2), it can be reduced to O(N).
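Since the first snippet is described as subtracting 1 each time it calls itself, a hedged Python reconstruction (the function name, base value and return values are my guesses from the answer) looks like this:

def funct(x):
    # Subtracts 1 on each recursive call: x calls in total, so O(N) for input N.
    if x <= 0:
        return 0      # "some value"
    else:
        return 1 + funct(x - 1)

print(funct(10))  # 10 recursive steps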
find the big-oh characterization
input: n

s <- 0
for i <- 1 to n^2 do
    for j <- 1 to i do
        s <- s + i
Both loops iterate n^2 times. I'm not sure if I should add or multiply the loops. Right now I think the answer is O(n^4).
The outer loop runs i from 1 to n^2, while the inner one runs j from 1 to i.
Thus, the inner loop has a complexity of i (it needs i steps, with i varying). Let us denote the runtime of the inner loop by IL(i) = i.
To get the total runtime, we see that the inner loop is executed n^2 times with varying "parameter" i. Thus, we obtain as total runtime:
sum(i=1 to n^2) IL(i) = sum(i=1 to n^2) i
That is, you have to sum up all numbers from 1 to n^2. There are many basic textbooks explaining how to evaluate this.
The result is (n^2)(n^2+1)/2, which leads to an overall complexity of O(n^4).
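A quick empirical check of that count (my own sketch):

def loop_count(n):
    count = 0
    for i in range(1, n**2 + 1):   # for i <- 1 to n^2
        for j in range(1, i + 1):  # for j <- 1 to i
            count += 1
    return count

for n in (2, 5, 10):
    assert loop_count(n) == n**2 * (n**2 + 1) // 2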
You are correct, the answer is O(n^4). First, the inner loop costs i. Then the outer loop costs sum(i=1 to p) i = p(p+1)/2, where p = n^2, which gives a cost of O(n^4).
Your argument is almost correct.
The number of loop iterations:
1 + 2 + ... + n^2
= (n^2+1)*(n^2)/2
= (n^4 + n^2)/2
So the answer is Θ(n^4) (as a side note it also is O(n^4)).
You can prove that formally by choosing three constants c1, c2 and n0 such that:
c1*n^4 <= (n^4 + n^2)/2 <= c2*n^4 for all n >= n0
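For instance (my own choice of constants), c1 = 1/2, c2 = 1 and n0 = 1 work: the left inequality holds because n^4/2 <= (n^4 + n^2)/2 for every n, and the right one because (n^4 + n^2)/2 <= n^4 is equivalent to n^2 <= n^4, which is true for all n >= 1.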
Since the maximum value of i is n^2, that gives us O(n^2) * O(n^2) which is equal to O(n^4).
You can simply check the basic sum formula. Your sum goes not to n but to n^2, which gives you
n^2(n^2+1)/2
and this is indeed O(n^4).