Complexity of a double for loop - algorithm

I am trying to figure out the complexity of a for loop using Big O notation. I have done this before in my other classes, but this one is more rigorous than the others because it is on the actual algorithm. The code is as follows:
for (cnt = 0, i = 1; i <= n; i++) // for any size n
{
    for (j = 1; j <= i; j++)
    {
        cnt++;
    }
}
AND
for (cnt = 0, i = 1; i <= n; i *= 2) // for any size n
{
    for (j = 1; j <= i; j++)
    {
        cnt++;
    }
}
I have concluded that the first loop is of O(n) complexity because it goes through the list n times. As for the second loop, I am a little lost. I believe that it goes through the loop i times for each n that is tested. I have (incorrectly) assumed that this means the loop is O(n*i) each time it is evaluated. Is there anything I'm missing in my assumption? I know that cnt++ is constant time.
Thank you for the help in the analysis. Each loop stands on its own; the two examples are not nested together.

The outer loop of the first example executes n times. For each iteration of the outer loop, the inner loop gets executed i times, so the overall complexity can be calculated as follows: one for the first iteration plus two for the second iteration plus three for the third iteration and so on, plus n for the n-th iteration.
1+2+3+4+5+...+n = (n*(n+1))/2 --> O(n^2)
The second example is trickier: since i doubles on every iteration, the outer loop executes only about Log2(n) times. Assuming that n is a power of 2, the total for the inner loop is
1+2+4+8+16+...+n
which is 2^(Log2(n)+1) - 1 = 2n - 1, for a complexity of O(n).
For values of n that are not powers of two, the exact number of iterations is 2^(floor(Log2(n))+1) - 1, which is still O(n):
1 -> 1
2..3 -> 3
4..7 -> 7
8..15 -> 15
16..31 -> 31
32..63 -> 63
and so on.
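If you want to check both closed forms numerically, here is a small counting harness (my own sketch, not part of the original question; n = 1024 is an arbitrary power of 2 chosen so that the second formula is exact):
#include <stdio.h>

int main(void) {
    int n = 1024; /* a power of 2, so 2*n - 1 is exact for the second loop */
    long cnt;
    int i, j;

    /* First example: the inner loop runs i times for i = 1..n. */
    for (cnt = 0, i = 1; i <= n; i++)
        for (j = 1; j <= i; j++)
            cnt++;
    printf("loop 1: cnt = %ld, n*(n+1)/2 = %ld\n", cnt, (long)n * (n + 1) / 2);

    /* Second example: i takes the values 1, 2, 4, ..., n. */
    for (cnt = 0, i = 1; i <= n; i *= 2)
        for (j = 1; j <= i; j++)
            cnt++;
    printf("loop 2: cnt = %ld, 2*n - 1 = %ld\n", cnt, 2L * n - 1);
    return 0;
}
Running it prints 524800 for the first loop (n*(n+1)/2) and 2047 for the second (2n-1), matching the quadratic and linear counts derived above.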

Hopefully this isn't homework, but I do see that you at least made an attempt here, so here's my take on this:
cnt is incremented n*(n+1)/2 times in the first example, which makes that pair of loops O(n^2). In the second example cnt is incremented 2n-1 times in total (for n a power of two), which is O(n).

The first example is O(n^2); "What is the Big-O of a nested loop, where number of iterations in the inner loop is determined by the current iteration of the outer loop?" is the question that answers it. The key is to note that the inner loop's number of rounds depends on the outer loop's counter.
For the second example it is tempting to say O(n log n): the outer loop is O(log n), the inner loop is bounded by O(n), and multiplying them gives O(n log n). That product is a valid upper bound, but it is not tight: across the outer iterations the inner loop does 1 + 2 + 4 + ... + n total work, a geometric series that sums to O(n), so the second example is O(n). (Binary search is the classic example of where a logarithmic factor, like this outer loop's, comes from.)

Related

What is the time complexity (big-O) of the following code? It is hard to analyze

int cnt = 0;
for (int i = 1; i < n; i++) {
    for (int j = i; j < n; j++) {
        for (int k = j * j; k < n; k++) {
            ++cnt;
        }
    }
}
I have no idea how to approach it.
How do I analyze its time complexity?
It's easy to see that the code is Omega(n²) (that is, is at least quadratic) - the two outer loops execute around n²/2 times.
The inner k loop executes zero times unless j is less than sqrt(n). Even though it executes zero times, it takes some computation to compute the conditions for the loop, so it's O(1) work in these cases.
When j is less than sqrt(n), i must also be less than sqrt(n), since by the construction of the loops, j is always greater than or equal to i. In these cases, the k loop does n-j² iterations. We can construct a bound for the total amount of work in this inner loop in these cases: both i and j are less than sqrt(n), and there's at worst O(n) work done in the k loop, so there's at most O(n²) (i.e. sqrt(n) * sqrt(n) * n) total work done in the inner loop.
There's also at most O(n²) total work done for the cases where the inner loop is trivial (i.e. when j > sqrt(n)).
This gives a proof that the runtime complexity of the code is Θ(n²).
Methods involving looking at nested loops individually and constructing big-O bounds for them do not in general give tight bounds, and this is an example question where such a method fails.
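To make the tightness concrete, here is a sketch (my addition, not from the original post) that counts the increments for growing n. The body of the k loop runs about n²/4 times, and the empty-loop condition checks add roughly another n²/2, so the measured ratio cnt/n² settling near 0.25 is consistent with the Θ(n²) bound:
#include <stdio.h>

int main(void) {
    for (int n = 1000; n <= 8000; n *= 2) {
        long cnt = 0;
        for (int i = 1; i < n; i++)
            for (int j = i; j < n; j++)
                for (int k = j * j; k < n; k++)  /* empty unless j*j < n */
                    ++cnt;
        printf("n = %5d  cnt = %10ld  cnt/n^2 = %.4f\n",
               n, cnt, (double)cnt / ((double)n * n));
    }
    return 0;
}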
The first approach would be to look at the loops separately, meaning that we have three O(.) factors connected by a product. Hence,
Complexity of the Algorithm = O(OuterLoop) * O(MiddleLoop) * O(InnerLoop)
Now look at each loop separately:
Outer loop: this is the simplest one, incrementing from 1 to n, resulting in O(n).
Middle loop: this is non-obvious. The terminating condition of the loop is still n, but the starting value is i, meaning that the larger i gets, the less time it takes to finish the loop. Asymptotically, though, that factor is only a constant, so the middle loop is still O(n), for O(n^2) up through the second loop.
Inner loop: we see that the starting point k = j*j increases quadratically. Since we again only look at the complexity asymptotically, we can treat j as rising linearly, and because k starts at j^2 and runs up to n, the loop only has work to do while j^2 < n, i.e. for about sqrt(n) values of j; this bounds the innermost loop's contribution by O(sqrt(n)).
Putting all these results together,
O(n * n * sqrt(n)) = O(n^2 * sqrt(n))
As the analysis above shows, this loop-by-loop product is only an upper bound; the tight bound is Θ(n^2).

How to calculate the Big-O complexity of the following algorithm?

I have been trying to calculate the Big-O of the following algorithm and it is coming out to be O(n^5) for me. I don't know what the correct answer is but most of my colleagues are getting O(n^3).
for (i = 1; i <= n; i++)
{
    for (j = 1; j <= i * i; j++)
    {
        for (k = 1; k <= n / 2; k++)
        {
            x = y + z;
        }
    }
}
What I did was start from the innermost loop. I calculated that the innermost loop will run n/2 times; then I went to the second nested for loop, which will run i^2 times; and the outermost loop will run i times as i varies from 1 to n. This would mean that the second nested for loop runs a total of Sigma(i^2) from i=1 to i=n, i.e. n*(n+1)*(2n+1)/6 times in all. So the total count came out to be on the order of n^5, and I concluded that the order must be O(n^5). Is there something wrong with this approach and the answer that I calculated?
I have just started with DSA and this was my first assignment so apologies for any basic mistakes that I might have made.
The inner loop always has the same number of iterations (n/2), since it is independent of i and j. On its own it has a complexity of O(n).
The two other loops result in a sum of sequence of squares (1 + 4 + 9 + ...) of executions of the inner part.
This sum of squares corresponds to the square pyramidal number, and has an order of O(n^3).
The inner loop has an order of O(n), so we get O(n^4).
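A quick counting sketch (my addition, not part of the original post) confirms this: the statement x = y + z executes exactly (n/2) * n*(n+1)*(2n+1)/6 times for even n, which grows like n^4/6, i.e. O(n^4) rather than O(n^3) or O(n^5):
#include <stdio.h>

int main(void) {
    for (long n = 20; n <= 160; n *= 2) {  /* even n, so n/2 is exact */
        long cnt = 0;
        for (long i = 1; i <= n; i++)
            for (long j = 1; j <= i * i; j++)
                for (long k = 1; k <= n / 2; k++)
                    cnt++; /* stands in for x = y + z */
        long formula = (n / 2) * (n * (n + 1) * (2 * n + 1) / 6);
        printf("n = %4ld  cnt = %12ld  formula = %12ld\n", n, cnt, formula);
    }
    return 0;
}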

What is the time complexity and space complexity of the following code

I want to know the time complexity of the following piece of code:
for (i = 0; i < n; i++) {
    for (j = i + 1; j < n; j++) {
        printf("hi");
    }
}
I think the following link is useful for this problem :)
Time complexity in loop: http://www.geeksforgeeks.org/analysis-of-algorithms-set-4-analysis-of-loops/
Time complexity is essentially the number of instructions executed by your program. Now, in your program you have two loops. The outer loop iterates from i=0 to i=N-1, a total of N iterations, which is O(N). For each i, the inner loop iterates from j=i+1 to j=N-1.
Hence, the time complexity will be O(N^2).
At each iteration of the outer loop, the work done by the inner loop is as follows:
n-1
n-2
n-3
....
...
0
So the total work done is (n-1)+(n-2)+(n-3)+...+1+0, which turns out to be the sum of the first n-1 natural numbers, i.e. n*(n-1)/2, so the time complexity is O(n^2). As no auxiliary space is used, the space complexity is O(1).
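Here is a small sketch (my addition, not from the original post) that verifies the count for n = 1000:
#include <stdio.h>

int main(void) {
    int n = 1000;
    long cnt = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            cnt++; /* stands in for printf("hi"); */
    printf("cnt = %ld, n*(n-1)/2 = %ld\n", cnt, (long)n * (n - 1) / 2);
    return 0;
}
Both numbers come out to 499500, i.e. n*(n-1)/2.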

Time complexity analysis inconsistency

I have this code :
int fun(int n)
{
    int count = 0;
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count += 1;
    return count;
}
The time complexity of this code can be thought of as O(n) because O(n+n/2+n/4+...) = O(n)
By that logic, the time complexity of this snippet can also be argued to be O(n):
for(i = 1; i < n; i *= 2)
//O(1) statements
Since O(1+2+4+...+n/4+n/2) = O(n). But since the loop runs log(n) times, it could be log(n) too.
Why is the former not log(n) iterations of the outer loop times log(n) iterations of the inner loop, i.e. log(n)*log(n)?
What am I doing wrong?
The first snippet has an outer loop that executes O(log n) times, and on each iteration the inner loop executes O(i) times. If you sum any number of terms of the form n / 2^k, you'll get O(n).
The second piece of code has O(log n) iterations of O(1) operations, and a sum of logarithmically many constants is still logarithmic.
In the first example, you don't have an O(1) statement inside your loop, since you have for (int j = 0; j < i; j++) count += 1. If you put that same inner loop into your second example, you are back to the same complexity. The first loop is not O(n*log(n)); this is easy to demonstrate because you can find an upper bound of O(2n), which is equivalent to O(n).
The time complexity of the 2nd one should not be calculated as a series O(1+2+4+..+n/4+n/2) = O(n), because it is not that series.
Notice the first one. It is calculated as a series because one counts the number of times the inner for loop executes and then adds all of those counts (a series) to get the final time complexity:
When i=n inner for loop executes n times
When i=(n/2) inner for loop executes n/2 times
When i=(n/4) inner for loop executes n/4 times
and so on..
But in the second one, there is no series to add. It just comes down to the formula 2^x = n, which solves to x = log(n).
The formula 2^x = n can be obtained by noticing that i starts at 1 and is multiplied by 2 on each iteration until it reaches n.
So one needs to find out how many times 1 must be multiplied by 2 to reach n.
Thus the formula 2^x = n, and then solve for x.
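A side-by-side count (my own sketch, not from the original post) shows the difference directly: fun(n) returns roughly 2n (the geometric series n + n/2 + n/4 + ...), while the doubling loop's body runs only about log2(n) times:
#include <stdio.h>

int fun(int n) {
    int count = 0;
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count += 1;
    return count;
}

int main(void) {
    int n = 1 << 20; /* 1048576 */
    int steps = 0;
    for (int i = 1; i < n; i *= 2)
        steps++; /* stands in for the O(1) statements */
    printf("fun(n) = %d (about 2n = %d)\n", fun(n), 2 * n);
    printf("doubling loop iterations = %d (log2(n) = 20)\n", steps);
    return 0;
}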

Finding Big-O of this code

I need some help finding the complexity, or Big-O, of this code. If someone could explain what the Big-O of each loop is, that would be great. I think the outer loop is just O(n), but I'm not sure about the inner loop. How does the j *= 2 affect it?
k = 1;
do
{
    j = 1;
    do
    {
        ...
        j *= 2;
    } while (j < n);
    k++;
} while (k < n);
The outer loop runs O(n) times, since k starts at 1 and needs to be incremented n-1 times to become equal to n.
The inner loop runs O(lg(n)) times. This is because on the m-th execution of the loop, j = 0.5 * 2^m.
The loop breaks when j reaches n, i.e. n = 0.5 * 2^m. Rearranging that, we get m = lg(2n) = O(lg(n)).
Putting the two loops together, the total code complexity is O(n lg(n)).
Logarithms can be tricky, but generally, whenever you see something being repeatedly being multiplied or divided by a constant factor, you can guess that the complexity of your algorithm involves a term that is either logarithmic or exponential.
That's why binary search, which repeatedly divides the size of the list it searches in half, is also O(lg(n)).
The inner loop always runs from j=1 to j=n.
For simplicity, let's assume that n is a power of 2 and that the inner loop runs k times.
The values of j for each of the k iterations are,
j = 1
j = 2
j = 4
j = 8
....
j = n
// breaks from the loop
which means that 2^k = n or k = lg(n)
So, each time, it runs for O(lg(n)) times.
Now, the outer loop is executed O(n) times, starting from k=1 to k=n.
Therefore, every time k is incremented, the inner loop runs O(lg(n)) times.
k=1 Innerloop runs for : lg(n)
k=2 Innerloop runs for : lg(n)
k=3 Innerloop runs for : lg(n)
...
k=n Innerloop runs for : lg(n)
// breaks from the loop
Therefore, the total time taken is n*lg(n).
Thus, the time complexity of this is O(nlg(n))
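To double-check the count, here is a sketch (my addition, not from the original question) that runs the two do-while loops and compares the total against n*lg(n); the inner body runs ceil(lg(n)) times per outer pass and the outer body runs n-1 times:
#include <stdio.h>
#include <math.h>

int main(void) {
    for (int n = 1024; n <= 8192; n *= 2) {
        long cnt = 0;
        int k = 1;
        do {
            int j = 1;
            do {
                cnt++; /* stands in for the loop body */
                j *= 2;
            } while (j < n);
            k++;
        } while (k < n);
        printf("n = %5d  cnt = %8ld  n*lg(n) = %8.0f\n", n, cnt, n * log2(n));
    }
    return 0;
}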
