How to calculate the Big-O complexity of the following algorithm? - algorithm

I have been trying to calculate the Big-O of the following algorithm and it is coming out to be O(n^5) for me. I don't know what the correct answer is but most of my colleagues are getting O(n^3).
for (i = 1; i <= n; i++)
{
    for (j = 1; j <= i*i; j++)
    {
        for (k = 1; k <= n/2; k++)
        {
            x = y + z;
        }
    }
}
What I did was start from the innermost loop. I calculated that the innermost loop will run n/2 times. Then I went to the second nested for loop, which runs i^2 times, and to the outermost loop, which runs n times as i varies from 1 to n. This means the second nested for loop runs a total of Sigma(i^2) from i=1 to i=n, that is, n*(n+1)*(2n+1)/6 times. So the total amount of work came out to be on the order of n^5, and I concluded that the order must be O(n^5). Is there something wrong with this approach and the answer that I calculated?
I have just started with DSA and this was my first assignment so apologies for any basic mistakes that I might have made.

The inner loop always has the same number of iterations (n/2), since it is independent of i and j. On its own it has a complexity of O(n).
The two other loops result in a sum of squares (1 + 4 + 9 + ...) of executions of the inner part.
This sum of squares corresponds to the square pyramidal number n(n+1)(2n+1)/6, which is of order O(n^3).
The inner loop contributes a factor of O(n), so we get O(n^4).

Determining the big-O of this loop question

I'm not sure about the big-O cost of this loop. Can anyone help me?
I will comment about what I think.
sum = 0; // O(1)
for (i = 0; i < N; i++)
    for (j = 0; j < i*i; j++)
        for (k = 0; k < j; k++) // run as 0+1²+2²+...+n² = n(n+1)(2n+1)/6
            sum++; // O(1)
My guess is O(N^3). Is that correct?
Your 0+1²+2²+...+n² estimate of the steps is wrong.
For each i, the middle loop's index j runs from 0 to i²-1, so the innermost loop runs 0+1+2+...+(i²-1) = i²(i²-1)/2 times. Summed over all i, this grows like the sum of fourth powers 0+1⁴+2⁴+...+(N-1)⁴, which equals n(n-1)(6(n-1)³+9(n-1)²+n-2)/30, a polynomial of degree five.
Thus the total cost is O(N⁵).
Here is another way of reaching the same result.
The outer loop runs exactly N times.
The middle loop runs up to i*i times (i.e. at most N² times) for each outer iteration.
The inner loop runs up to j times (i.e. at most N² times) for each middle iteration.
Multiplying the cost estimations, the total cost is O(N5).

Loop Analysis - Analysis of Algorithms

This question is based on this resource: http://algs4.cs.princeton.edu/14analysis.
Can someone break down why Exercise 6 letter b is linear? The outer loop seems to be increasing i by a factor of 2 each time, so I would assume it was logarithmic...
From the link:
int sum = 0;
for (int n = N; n > 0; n /= 2)
    for (int i = 0; i < n; i++)
        sum++;
This is a geometric series.
The inner loop runs n iterations per iteration of the outer loop, and n decreases by half each time.
So, summing it up gives you:
n + n/2 + n/4 + ... + 1
This is a geometric series with r = 1/2 and a = n; the corresponding infinite series converges to a/(1-r) = n/(1/2) = 2n, so:
T(n) <= 2n
And since 2n is in O(n) - the algorithm runs in linear time.
This is a perfect example to see that complexity is NOT achieved by multiplying the complexity of each nested loop (that would have got you O(nlogn)), but by actually analyzing how many iterations are needed.
Yes, it's simple.
See, the value of n is halved each time, and for each value the inner loop runs n times.
So the first time i goes from 0 to n,
the next time from 0 to n/2,
and hence from 0 to n/2^k on the k-th pass.
The outer loop therefore runs about log(n) times in total.
So the number of times i runs forms a GP
with terms
n, n/2, n/4, n/8, ..., 1
and we can find the sum of the GP:
(2^(log(n)+1) - 1) / (2 - 1)
Since 2^(log(n)+1) = 2n,
this gives 2n - 1 => O(n).

Need help understanding Big-O

I'm in a Data Structures class now, and we're covering Big-O as a means of algorithm analysis. Unfortunately after many hours of study, I'm still somewhat confused. I understand what Big-O is, and several good code examples I found online make sense. However I have a homework question I don't understand. Any explanation of the following would be greatly appreciated.
Determine how many times the output statement is executed in each of
the following fragments (give a number in terms of n). Then indicate
whether the algorithm is O(n) or O(n2):
for (int i = 0; i < n; i++)
    for (int j = 0; j < i; j++)
        if (j % i == 0)
            System.out.println(i + " " + j);
Suppose n = 5. Then the values of i are 0, 1, 2, 3, and 4, so the inner loop iterates 0, 1, 2, 3, and 4 times, respectively. Because of this, the total number of times the if comparison executes is 0+1+2+3+4. The mathematical formula for this sum is n*(n-1)/2. Expanded, this gives us n^2/2 - n/2.
Therefore, the algorithm itself is O(n^2).
For the number of times that something is printed, we need to look at when j % i == 0. Since j < i, the only time this can be true is when j = 0 and i is not 0. This means it is true exactly once in each iteration of the outer loop, except the first (when i = 0, the inner loop does not run at all).
Therefore, System.out.println is called n-1 times.
A simple way to look at it is:
a single loop has a complexity of O(n),
a loop within a loop has a complexity of O(n^2), and so on (assuming each loop's bound is proportional to n).
So the above loop has a complexity of O(n^2).
This function appears to execute in Quadratic Time - O(n^2).
Here's a trick for something like this. For each nested for loop add one to the exponent for n. If there was three loops this algorithm would run in cubic time O(n^3). If there is only one loop (no halving involved) then it would be linear O(n). If the array was halved each time (recursively or iteratively) it would be considered logarithmic time O(log n) -> base 2.
Hope that helps.

Lower-bound Runtime of this pseudo-code

for i = 0 to n do
    for j = n to 0 do
        for k = 1 to j-i do
            print(k)
I'm wondering about the lower-bound runtime of the above code. The notes I am reading explain the lower-bound runtime to be Omega(n^3), with the explanation:
To find the lower bound on the running time, consider the values of i, such that 0 <= i <= n/4 and values of j, such that 3n/4 <= j <= n. Note that for each of the n^2/16 different combinations of i and j, the innermost loop executes at least n/2 times.
Can someone explain where these numbers came from? They seem to be arbitrary to me.
There are n iterations of the first loop and for each of them n iterations of the second loop. In total these are n^2 iterations of the second loop.
Now if you only consider the lower quarter of possible values for i, then you have n^2/4 iterations of the inner loop left. If you also only consider the upper quarter of values for j then you have n^2/16 iterations of the inner loop left.
For each of these constrained combinations you have j-i >= 3n/4 - n/4 = n/2, and therefore the innermost loop is iterated at least n/2 times for each of these n^2/16 combinations of the outer loops. Therefore the full number of innermost iterations is at least (n^2/16)*(n/2).
Because we considered only some of the iterations, the actual number is higher, and this result is a lower bound. Therefore the algorithm is in Omega(n^3).
The values are arbitrary insofar as you could use many others. But these are simple ones that make the argument j-i >= 3n/4 - n/4 = n/2 possible. For example, if you took only the lower half of the i iterations and the upper half of the j iterations, you would get j-i >= n/2 - n/2 = 0, leading to Omega(0), which is not interesting. If you took something like the lower tenth and the upper tenth it would still work, but the numbers wouldn't be as nice.
I can't really explain the ranges from your book, but if you proceed via the methodology below, I hope it becomes clearer what you are looking for.
Rewriting the outer loop (index i) and the middle loop (index j) as follows keeps j - i >= 1, so that the innermost loop executes at least once in every case.
This decomposition works because the part of j's range from i down to 0 never reaches the innermost loop.
for (i = 0; i < n; i++) {
    for (j = n; j > i; j--) {
        for (k = 1; k <= j - i; k++) {
            printf("%d\n", k);
        }
    }
}
This algorithm's order of growth complexity T(n) is the double sum of (j - i) over 0 <= i < n and i < j <= n, which is Theta(n^3).
Hence, your original algorithm iterates even more than the algorithm above, since its j goes all the way from n down to 0 (though those extra iterations never enter the innermost loop).
Using sigma notation, the total cost is Sigma(i=0..n) Sigma(j=0..n) [c' + c * max(0, j - i)],
where c represents the execution cost of print(k), and c' is the execution cost of the loop iterations that don't reach the innermost loop.

Complexity of a double for loop

I am trying to figure out the complexity of a for loop using Big O notation. I have done this before in my other classes, but this one is more rigorous than the others because it is on the actual algorithm. The code is as follows:
for (cnt = 0, i = 1; i <= n; i++) //for any size n
{
    for (j = 1; j <= i; j++)
    {
        cnt++;
    }
}
AND
for (cnt = 0, i = 1; i <= n; i *= 2) //for any size n
{
    for (j = 1; j <= i; j++)
    {
        cnt++;
    }
}
I have concluded that the first loop is of O(n) complexity because it goes through the list n times. As for the second loop I am a little lost. I believe that it goes through the inner loop i times for each n that is tested. I have (incorrectly) assumed that this means the loop is O(n*i) for each time it is evaluated. Is there anything I'm missing in my assumption? I know that cnt++ is constant time.
Thank you for the help in the analysis. Each loop is in its own space, they are not together.
The outer loop of the first example executes n times. For each iteration of the outer loop, the inner loop gets executed i times, so the overall complexity can be calculated as follows: one for the first iteration plus two for the second iteration plus three for the third iteration and so on, plus n for the n-th iteration.
1+2+3+4+5+...+n = (n*(n+1))/2 --> O(n^2)
The second example is trickier: since i doubles every iteration, the outer loop executes only Log2(n) times. Assuming that n is a power of 2, the total for the inner loop is
1+2+4+8+16+...+n
which is 2^(Log2(n)+1)-1 = 2n-1, for a complexity of O(n).
For values of n that are not powers of two, the exact number of iterations is 2^(floor(Log2(n))+1)-1, which is still O(n):
1 -> 1
2..3 -> 3
4..7 -> 7
8..15 -> 15
16..31 -> 31
32..63 -> 63
and so on.
Hopefully this isn't homework, but I do see that you at least made an attempt here, so here's my take on this:
cnt is incremented n*(n+1)/2 times in the first example, which makes that pair of loops O(n^2). In the second example cnt is incremented fewer than 2n times in total, which is O(n).
The first example is O(n^2); see "What is the Big-O of a nested loop, where number of iterations in the inner loop is determined by the current iteration of the outer loop?" for the question that answers it, where the key is to note that the inner loop's number of rounds depends on the outer loop's current value.
For the second example it is tempting to say O(n log n), since the outer loop runs O(log n) times (look at binary search for an example of a logarithmic complexity case) and the inner loop looks O(n). But multiplying the two overestimates here: the inner iteration counts form the geometric series 1+2+4+...+n, which sums to less than 2n, so the second example is O(n).
