I'm trying to find the complexity of the following algorithm:
for (i = 1; i <= n; i++) {
    for (j = 1; j <= i; j++) {
        for (k = i; k <= j; k++) {
            // code
        }
    }
}
Since k starts at i and runs only while k <= j, and j never exceeds i, the innermost loop executes only when j = i, and then exactly once. Let's take an example and see: for i = 4, j goes from 1 to 4, and k runs zero times for j = 1, 2, 3 and exactly once for j = 4. So for each value of j the inner loop does O(1) work. The outer two loops take O(n^2) time, and the //code inside the innermost loop runs in O(1) time. Therefore, the time complexity of this algorithm is O(n^2).
What would the Big O notation be for two for loops that aren't nested?
Example:
for (int i = 0; i < n; i++) {
    System.out.println(i);
}
for (int j = 0; j < n; j++) {
    System.out.println(j);
}
Linear
O(n) + O(n) = 2*O(n) = O(n)
It does not matter how many non-nested loops you have (as long as that number is a constant and does not depend on n): the complexity is linear, and equals the maximum number of iterations among the loops.
Technically this algorithm still operates in O(n) time.
While the number of iterations increases by 2 for each increase in n, the time taken still increases at a linear rate, thus, in O(n) time.
It would be O(2n) because you run n+n = 2n iterations.
O(2n) is essentially equivalent to O(n) as 2 is a constant.
It will be O(n) + O(n) ==> Effectively O(n) since we don't keep constant values.
Assuming a scenario where each loop runs up to n:
We can say the complexity of each for loop is O(n), since each loop runs n times.
You specified that these loops are not nested, so in a linear sequence the first loop O(n) + the second loop O(n) + a third loop O(n) gives you 3*O(n).
Since we mostly concentrate on the part of the complexity that grows with n, we exclude the constant factor and say it is O(n) for this scenario.
But in a practical scenario, I suggest you keep in mind that the constant factor can also play a vital role, so don't always exclude it.
For example, consider the time complexity of finding the smallest integer in an integer array: anyone will say it's O(n). But finding the second largest or smallest of the same array with an extra pass takes about 2n steps, i.e. O(2n).
Most answers will say it's O(n), ignoring the constant.
But if the array is of 10-million size, that constant can't be ignored in practice.
I calculated it to be O(N^2), but my instructor marked it incorrect in the exam. The Correct answer was O(1). Can anyone help me, how did the time complexity come out to be O(1)?
The outer loop runs 2N times (int j = 2 * N, decremented by 1 each iteration).
And since N does not change and i is reset to N every time (int i = N), the inner loop always runs log N (base 2) times.
(Notice the way i changes i = i div 2)
Therefore, the complexity is O(NlogN)
Question: What happens when you repeatedly half input(or search space) ?(Like in Binary Search).
Answer: Well, you get log(N) complexity. (Reference : The Algorithm Design Manual by Steven S. Skiena)
See the inner loop in your algorithm, i = i div 2 makes it a log(N) complexity loop. Therefore the overall complexity will be N log(N).
Take this with a pinch of salt: whenever you divide your input (search space) by 2, 3, 4, or any other constant greater than 1, you get log(N) complexity.
P.S. : the complexity of your algorithm is nowhere near to O(1).
Our prof and various materials say Summation(n) = n(n+1)/2 and hence is theta(n^2). But intuitively, we just need one loop to find the sum of the first n terms, so it has to be theta(n). What am I missing here?
All of these answers are misunderstanding the problem just like the original question: The point is not to measure the runtime complexity of an algorithm for summing integers, it's talking about how to reason about the complexity of an algorithm which takes i steps during each pass for i in 1..n. Consider insertion sort: On each step i to insert one member of the original list the output list is i elements long, thus it takes i steps (on average) to perform the insert. What is the complexity of insertion sort? It's the sum of all of those steps, or the sum of i for i in 1..n. That sum is n(n+1)/2 which has an n^2 in it, thus insertion sort is O(n^2).
The running time of this code is Θ(1) (assuming addition/subtraction and multiplication are constant-time operations):
result = n*(n + 1)/2 // This statement executes once
The running time of the following pseudocode, which is what you described, is indeed Θ(n):
result = 0
for i from 1 up to n:
    result = result + i // This statement executes exactly n times
Here is another way to compute it which has a running time of Θ(n²):
result = 0
for i from 1 up to n:
    for j from i up to n:
        result = result + 1 // This statement executes exactly n*(n + 1)/2 times
All three of those code blocks compute the sum of the natural numbers from 1 to n.
This Θ(n²) loop is probably the type you are being asked to analyse. Whenever you have a loop of the form:
for i from 1 up to n:
    for j from i up to n:
        // Some statements that run in constant time
You have a running time complexity of Θ(n²), because those statements execute exactly summation(n) times.
I think the problem is that you're incorrectly assuming that the summation formula has time complexity theta(n^2).
The formula has an n^2 in it, but it doesn't require a number of computations or amount of time proportional to n^2.
Summing everything up to n in a loop would be theta(n), as you say, because you would have to iterate through the loop n times.
However, calculating the result of the equation n(n+1)/2 would just be theta(1) as it's a single calculation that is performed once regardless of how big n is.
Summation(n) being n(n+1)/2 refers to the sum of the numbers from 1 to n. That is a mathematical formula and can be evaluated without a loop, in O(1) time. If you iterate over an array to sum all its values, that is an O(n) algorithm.
Here is the fragment:
sum1 = 0;
for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
        sum1++;

sum2 = 0;
for (k = 1; k <= n; k *= 2)
    for (j = 1; j <= k; j++)
        sum2++;
Below is the answer:
2 assignment statements – O(1) each
1st nested loop – O(n^2)
2nd nested loop – O(n)
Running time complexity of code fragment = O(1) + O(n^2) + O(1) + O(n) = O(n^2)
But here is how I worked it out:
2 assignments:- O(1).
First nested loop: O(n*n)=O(n^2)
Second nested loop:
Outer loop runs n times..
Now the inner loop will be executed (1+2+3+.....+(n-1)+n) times
which gives n(n+1)/2 =O(n^2)
Total running time = O(n^2)+O(n^2)+O(1)=O(n^2)
And yes I've done some research and I came across the following:
In a loop if an index jumps by an increasing amount in each iteration the sequence has complexity log n.
In that case I suppose the second loop will have complexity (n-1)/2*logn...which will be equal to O(n*log n).
I'm really confused with the second loop whether it should be O(n)..O(n^2) or O(nlogn)..
HELP PLEASE
Since k doubles each time, your calculation is not correct. The inner loop body executes (1 + 2 + 4 + ... + n/2 + n) times over the iterations of
for(k=1;k<=n;k*=2)
and that geometric series sums to 2n - 1 when n is a power of two. So O(n) is right for the second loop, which matches the given answer.
for (int j = 0, k = 0; j < n; j++)
    for (double m = 1; m < n; m *= 2)
        k++;
I think it's O(n^2) but I'm not certain. I'm working on a practice problem and I have the following choices:
O(n^2)
O(2^n)
O(n!)
O(n log(n))
Hmmm... well, break it down.
It seems obvious that the outer loop is O(n). It is increasing by 1 each iteration.
The inner loop however, increases by a power of 2. Exponentials are certainly related (in fact inversely) to logarithms.
Why have you come to the O(n^2) solution? Prove it.
It's O(n log2 n). The code block runs n * log2 n times.
Suppose n = 16. Then the first loop runs 16 (= n) times, and the second loop runs 4 (= log2 n) times (m = 1, 2, 4, 8). So the inner statement k++ runs 64 = n * log2 n times.
Let's look at the worst-case behaviour. In the second loop the value doubles: 1, 2, 4, 8, ... Say n is 2^k for some k >= 0. In the worst case we keep doubling until we reach 2^k and realise we have overshot the target; at that point we know the target lies between 2^(k - 1) and 2^k, a range containing 2^(k - 1) elements (think about it for a second). The number of values examined so far is O(k), which is O(log n); the first loop is O(n) (too simple to need working out). So the order of the whole code is O(n log n).
A generic way to approach these sorts of problems is to consider the order of each loop, and because they are nested, you can multiply the "O" notations.
Some simple rules for big "O":
O(n)O(m) = O(nm)
O(n) + O(m) = O(n + m)
O(kn) = O(n) where 'k' is some constant
The 'j' loop iterates across n elements, so clearly it is O(n).
The 'm' loop iterates across log(n) elements, so it is O(log(n)).
Since the loops are nested, our final result would be O(n) * O(log(n)) = O(n*log(n)).