Big-Oh analysis of the running time?

For each of the following program fragments, give a Big-Oh analysis of the running time. I have two answers that I am not 100% sure are right. Can somebody help me?
Fragment 1:
for( int i = 0; i < n; i++ )
for( int j = 0; j < n * n; j++ )
for( int k = 0; k < j; k++ )
sum++;
Answer: O(n^5), though I'm not really sure about the n * n bound.
Fragment 2:
for( int i = 1; i <= n; i++ )
for( int j = 1; j <= i * i; j++ )
if (j % i == 0)
for( int k = 0; k < j; k++)
sum++;
Answer: O(n^4)

Decompose the problem space per loop. Start from the outermost loop. What are the loops really going up to?
For the first problem, we have the following pattern.
The outer loop will run n times.
The middle loop will run n^2 times; its bound does not depend on the outer loop's counter.
The innermost loop will run up to j times, so it is bounded by the current value of the middle loop's counter.
All of your steps are in linear chunks, meaning you will go from 0 to your ending condition in a linear fashion.
Here's what the summation actually looks like: Sigma(i=0..n-1) Sigma(j=0..n^2-1) j.
So, what would that translate into? You have to unroll the sums: the inner sum is about (n^2)^2 / 2 = n^4 / 2 for each of the n values of i, so it does come out to O(n^5).
For the second problem, we have the following pattern.
The outer loop runs up to and including n times.
The middle loop runs up to and including i^2 times.
The innermost loop runs up to j times, on the condition that j % i == 0. That means that the innermost loop isn't executed every time.
I'll leave this problem for you to work out. You do have to take the approach of unrolling the sums and reducing them to their algebraic counterparts.

For Fragment 1:
Let's say m = n^2. Then the count is
Sigma(i=0..n-1) Sigma(j=0..m-1) j = n * m(m-1)/2
= n * n^2(n^2-1)/2, whose leading term is n^5/2
Answer: O(n^5)
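That count can be double-checked empirically. The Python sketch below (my addition, with made-up function names) simulates Fragment 1 and compares it against the exact closed form n * m(m-1)/2 with m = n^2:

```python
def fragment1(n):
    """Simulate Fragment 1 and count how many times sum++ runs."""
    total = 0
    for i in range(n):              # outer loop: n iterations
        for j in range(n * n):      # middle loop: n^2 iterations
            for k in range(j):      # inner loop: j iterations
                total += 1
    return total

def fragment1_closed_form(n):
    """n * Sigma(j=0..m-1) j = n * m(m-1)/2, with m = n^2."""
    m = n * n
    return n * m * (m - 1) // 2

# The exact count matches the closed form, a degree-5 polynomial in n.
for n in range(1, 8):
    assert fragment1(n) == fragment1_closed_form(n)
```

Since the leading term of the closed form is n^5/2, the running time is O(n^5).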
For Fragment 2:
The test j % i == 0 passes only when j is a multiple of i, and in the range 1..i^2 there are exactly i such multiples: j = i, 2i, ..., i*i. The inner loop then runs j times, so the count is
Sigma(i=1..n) Sigma(m=1..i) m*i = Sigma(i=1..n) i * i(i+1)/2
which is approximately Sigma(i=1..n) i^3/2, or about n^4/8
Answer: O(n^4)
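Fragment 2 can also be checked numerically (my addition, with hypothetical helper names): the if passes only for the i multiples of i in [1, i^2], giving the exact count Sigma(i=1..n) (i^3 + i^2)/2, whose leading term is n^4/8, i.e. O(n^4):

```python
def fragment2(n):
    """Simulate Fragment 2 and count how many times sum++ runs."""
    total = 0
    for i in range(1, n + 1):            # i = 1..n
        for j in range(1, i * i + 1):    # j = 1..i^2
            if j % i == 0:               # only the i multiples of i pass
                for k in range(j):       # inner loop: j iterations
                    total += 1
    return total

def fragment2_closed_form(n):
    """Sigma(i=1..n) i*i*(i+1)/2 = (Sigma i^3 + Sigma i^2) / 2."""
    cubes = (n * (n + 1) // 2) ** 2           # Sigma i^3
    squares = n * (n + 1) * (2 * n + 1) // 6  # Sigma i^2
    return (cubes + squares) // 2

for n in range(1, 10):
    assert fragment2(n) == fragment2_closed_form(n)
```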


On understanding how to compute the Big-O of code snippets

I understand that simple statements like:
int x = 5; // is 1 or O(1)
And a while loop such as:
while (i < n) i++; // condition tested n+1 times, so O(n)
And the same goes for a single for loop (depending on its bounds).
With nested while or for loop such as:
for(int i = 0; i<n; i++){ // this is n + 1
for(int j = 0; j<n; j++){ // this is (n+1)*n, total = O(n^2)
}
}
Also, any time we have a doubling effect it's log_2(n), a tripling effect log_3(n), and so on. And if the control variable is being halved or quartered, that's also either log_2(n) or log_4(n).
But I am dealing with much more complicated examples. How would one figure these out? I have the answers; I just don't know how to work them out on paper come an examination.
Example1:
for (i = 1; i < (n*n+3*n+17) / 4 ; i += 1)
System.out.println("Sunshine");
Example2:
for (i = 0; i < n; i++)
if ( i % 2 == 0) // very confused by what mod would do to runtime
for (j = 0; j < n; j++)
System.out.print("Bacon");
else
for (j = 0; j < n * n; j++)
System.out.println("Ocean");
Example3:
for (i = 1; i <= 10000 * n; i *= 2)
x += 1;
Thank you
Example 1's loop is bounded by (n*n + 3*n + 17) / 4 and therefore should be O(n^2). The reason is that the largest, and therefore dominant, term in the expression is n^2, and the constant divisor 4 disappears in the Big-Oh.
The second example is a bit more tricky. The outer loop over i will iterate n times, but what executes on the inside depends on whether that value of i is odd or even. When even, an inner loop over n happens, but when odd an inner loop over n^2 happens. The odd case dominates the running time, so Example 2 should be O(n^3).
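A quick empirical check of that O(n^3) claim (my own sketch, not part of the original answer): for even n, exactly n/2 values of i take the n-print branch and n/2 take the n^2-print branch, so the total should be exactly (n/2)(n + n^2):

```python
def example2_prints(n):
    """Count the total number of prints made by Example 2."""
    count = 0
    for i in range(n):
        if i % 2 == 0:
            for j in range(n):          # "Bacon": n prints
                count += 1
        else:
            for j in range(n * n):      # "Ocean": n^2 prints
                count += 1
    return count

# For even n the count is exactly (n/2) * (n + n^2); the n^2 branch dominates.
for n in (2, 4, 10, 20):
    assert example2_prints(n) == n * (n + n * n) // 2
```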
The third example iterates until i exceeds 10000*n, but does so by doubling the loop counter i at each step. This will have O(lg n) performance, where lg means log base 2. To see why, imagine the bound were 32, starting at i = 1 and doubling each time: i takes the values 1, 2, 4, 8, 16, 32, i.e. lg(32) + 1 = 6 steps, and the step count grows as lg of the bound. The factor 10000 only adds the constant lg(10000) to that, so it drops out of the Big-Oh.
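To see the logarithmic growth concretely, here is a small Python sketch of mine that counts the iterations of Example 3; doubling n should add exactly one iteration, the hallmark of O(lg n):

```python
def example3_steps(n):
    """Count iterations of: for (i = 1; i <= 10000 * n; i *= 2)."""
    steps = 0
    i = 1
    while i <= 10000 * n:
        i *= 2
        steps += 1
    return steps

# Doubling n doubles the bound, which costs exactly one extra doubling step.
for n in (1, 2, 5, 100):
    assert example3_steps(2 * n) == example3_steps(n) + 1
```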

Please explain in simple terms why this code fragment has O(n^4) complexity?

Evaluate Big-Oh of the following code fragment:
sum = 0
for( i = 1; i < n; ++i )
for( j = 1; j < i * i; ++j )
if( j % i == 0 )
for( k = 0; k < j; ++k )
++sum
This is a homework problem in a textbook for my algorithms class. The answer as stated in the textbook is O(n^4). I've tried doing the problem many ways, but I am always getting O(n^5).
I'm using the summation method and mathematically evaluating from the innermost nested loop outward. The summations are not shown here because I don't know how to express my math in this space, but please follow my work below.
Here is my logic for the innermost loop:
for( k = 0; k < j; ++k )
My thinking is that the inner loop makes up to j iterations, and j can be as big as i*i, which (with i as big as n) can be as big as n^2, so this loop has an upper bound of O(n^2).
Here is my logic for the middle loop:
for( j = 1; j < i * i; ++j )
j iterates as many as i^2 times, and i itself can go as high as n, so this loop has an upper bound of O(n^2).
Here is my logic for the outer loop:
for( i = 1; i < n; ++i )
i iterates as high as n times, so the loop has an upper-bound of O(n).
O(n * n^2 * n^2) = O(n^5)
Again, the answer is O(n^4). Please help me, using mathematical loops to aid your answer. Please use simple language. I am still new to algorithm analysis.
The trick is in this line:
if( j % i == 0 )
What this does is ensures the inner loop only executes when j is an exact multiple of i; otherwise no work is done.
So one shortcut you could think about: the middle loop makes O(n^2) iterations, but only a 1/i = O(1/n) fraction of them reaches the inner loop, which gives O(n * n^2 * (1/n) * n^2) = O(n^4).
Another way you could think about it is that this is equivalent to writing:
sum = 0
for( i = 1; i < n; ++i )
for( j = i; j < i * i; j += i ) // j steps through the multiples of i only
for( k = 0; k < j; ++k )
++sum
which is O(n^4) by inspection.
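To verify that the guarded version and a version stepping j directly through the multiples of i really do the same work, here is a Python simulation of both (my check, not from the original answer); the two counts should be identical:

```python
def with_mod(n):
    """Inner loop guarded by j % i == 0, as in the original fragment."""
    total = 0
    for i in range(1, n):
        for j in range(1, i * i):
            if j % i == 0:
                for k in range(j):
                    total += 1
    return total

def with_stride(n):
    """j steps through the multiples of i directly."""
    total = 0
    for i in range(1, n):
        for j in range(i, i * i, i):   # j = i, 2i, ..., (i-1)*i
            for k in range(j):
                total += 1
    return total

for n in range(1, 12):
    assert with_mod(n) == with_stride(n)
```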

Big O Algorithm Analysis

I have to analyze the Big O complexity for the below code fragments:
a)
// loop 1
for(int i = 0; i < n; i++)
// loop 2
for(int j = i; j < n; j++)
sum++;
b)
// loop 1
for(int i = 0; i < n; i++)
// loop 2
for(int j = i + 1; j > i; j--)
// loop 3
for(int k = n; k > j; k--)
sum++;
I'm not sure how to do so any help provided will be greatly appreciated. Thanks.
To analyze Big-Oh complexity you have to count how many basic operations are performed by your code.
In your first loop:
for(int i = 0; i < n; i++)
for(int j = i; j < n; j++)
sum++;
How many times is sum++ called?
The outer loop runs n times, and on the i-th pass the inner loop runs n - i times.
That gives n + (n-1) + ... + 1 = n(n+1)/2 operations, which is equivalent to a complexity of O(n^2).
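A short Python check of mine confirms the count for snippet (a) is exactly the triangular number n(n+1)/2:

```python
def count_a(n):
    """Count sum++ executions in snippet (a)."""
    total = 0
    for i in range(n):
        for j in range(i, n):   # runs n - i times for this i
            total += 1
    return total

# Exactly n(n+1)/2, so the running time is O(n^2).
for n in range(1, 20):
    assert count_a(n) == n * (n + 1) // 2
```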
I'll let you work out the second one.
The first snippet is straightforward using the same tools, so I'll focus on the second, trickier one.
Big O notation gives an asymptotic upper bound on the number of ops the algorithm does.
Let's assume each inner iteration does 1 op, and let's neglect the counters and the overhead of looping.
Denote by T(n) the total number of ops done by the program.
It is clear that the program does NO MORE ops than:
// loop 1
for(int i = 0; i < n; i++)
// loop 2
for(int j = i+1; j > i; j--) //note a single op in here, see (1) for details
// loop 3
for(int k = n; k > 0; k--) //we change k > j to k > 0 - for details see (2)
sum++;
(1) Since j is initialized to i+1 and is decreased each iteration, after the first iteration of loop 2 you get j == i and the condition yields false - thus exactly one iteration is done.
(2) The original loop iterates NO MORE than n times (since j >= 0) - thus the "new program" is "no better" than the old one (in terms of upper bounds).
Complexity of the simplified program
The total complexity of the above program is O(n^2), since loop1 and loop3 repeat n times each, and loop2 repeats exactly once.
If we assume a single command is done in each inner iteration, the total number of commands executed is n^2.
Conclusion:
Since the new program does n^2 "ops" (under the assumptions) and the original is "no worse than the new", the original does T(n) <= n^2 steps.
From the definition of big O notation (with c = 1, for every n), you can conclude the program is O(n^2).
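The argument can also be checked empirically. This Python sketch of mine translates snippet (b) literally; loop 2's body runs exactly once per i, and the total comes out to n(n-1)/2, consistent with the O(n^2) bound:

```python
def count_b(n):
    """Count sum++ executions in snippet (b), translated literally."""
    total = 0
    for i in range(n):
        j = i + 1
        while j > i:            # loop 2: body runs exactly once
            k = n
            while k > j:        # loop 3: n - (i + 1) iterations
                total += 1
                k -= 1
            j -= 1
    return total

# Sigma(i=0..n-1) (n - i - 1) = n(n-1)/2, which is O(n^2).
for n in range(1, 20):
    assert count_b(n) == n * (n - 1) // 2
```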

Running time of for loop

I seem to understand the basic concept for easier loops like the one below: the outer loop runs in O(n), as does the inner loop, and because they're nested you multiply to get a total running time of O(n^2).
sum = 0;
for ( i = 0; i < n; i++ )
for( j = 0; j < n; j++ )
++sum;
Though when things start getting switched around, I get completely lost as to how to figure it out. Could someone explain to me how to figure out running time for both of the following? Also, any links to easy to understand references that could further help me improve is also appreciated. Thanks!
sum = 0;
for( i = 0; i < n; i += 2 )
for( j = 0; j < n; j++ )
++sum;
The only thing I can gather from this is that the inner loop runs in O(n). The i+=2 really throws me off in the outer loop.
sum = 0;
for( i = 1; i < n; i *= 2 )
for( j = 0; j < n; j++ )
++sum;
From my attempt...outer loop is O(log(n)), inner is O(n), so total is O(n log(n))?
A good way of thinking about Big-O performance is to pretend each element of the code is a mathematical function that takes in n items and returns the number of computations performed on those items.
For example, a single for loop like for ( i = 0; i < n; i++ ) would be equivalent to a function i(), where i(n) = n, indicating that one computation is performed for each of the n inputs.
If you have two nested loops, then the functional equivalent for
for ( i = 0; i < n; i++ )
for( j = 0; j < n; j++ )
would look like these two functions:
i(n) = n * j(n)
j(n) = n
Working these two functions out produces an end result of n*n = n^2, since j(n) can be substituted for n.
What this means is that as long as you can solve for the Big-O of any single loop, you can then apply those solutions to a group of nested loops.
For example, let's look at your second problem:
for( i = 0; i < n; i += 2 )
for( j = 0; j < n; j++ )
i += 2 means that for an input set of n items (n0, n1, n2, n3, n4) you're only touching every other element of that set. Assuming you initialize with i = 0, you only touch (n0, n2, n4). This halves the size of the data set actually processed, so the functional equivalents work out as:
i(n) = (n/2) * j(n)
j(n) = n
Solving these gets you (n/2) * n = (n^2) * (1/2). Since this is Big-O work, we drop the constant factor to produce a Big-O value of O(n^2).
The two key points to remember here:
Big-O math starts with a set of n data elements. If you're trying to determine the Big-O of a for loop that iterates through that set of n elements, your first step is to look at how the incrementor changes the number of data elements that the for routine actually touches.
Big-O math is math. If you can solve for each for expression individually, you can use those solutions to build up into your final answer, just like you can solve for a set of equations with common definitions.
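Both loops from this question can be checked numerically with a sketch like the following (mine, with made-up function names): the i += 2 loop does ceil(n/2) * n operations, about n^2/2, and an i *= 2 loop does n operations per outer pass for roughly lg(n) passes, confirming the O(n log n) guess:

```python
def stride2_count(n):
    """for (i = 0; i < n; i += 2) around for (j = 0; j < n; j++)."""
    total = 0
    for i in range(0, n, 2):
        for j in range(n):
            total += 1
    return total

def doubling_count(n):
    """for (i = 1; i < n; i *= 2) around for (j = 0; j < n; j++)."""
    total = 0
    i = 1
    while i < n:
        for j in range(n):
            total += 1
        i *= 2
    return total

assert stride2_count(10) == 5 * 10          # ceil(n/2) * n, i.e. O(n^2)
assert doubling_count(1024) == 10 * 1024    # lg(1024) = 10 outer passes
```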

Running time of for loop - part #2

This would be part # 2 of my question about analysis of for loop running time
http://faculty.simpson.edu/lydia.sinapova/www/cmsc250/LN250_Weiss/L03-BigOhSolutions.htm#PR4 contains solutions, and I have question about two particular "for" loops
Could someone explain to me how to figure out running time for both of them. Thanks !
1.
sum = 0;
for( i = 0; i < n; i++)
for( j = 0; j < i*i; j++)
for( k = 0; k < j; k++)
sum++;
2.
sum = 0;
for( i = 0; i < n; i++)
for( j = 0; j < i*i; j++)
if (j % i ==0)
for( k = 0; k < j; k++)
sum++;
The first snippet is O(n^5):
Top loop: i runs from 0 to n, so O(n) iterations.
Middle loop: j runs from 0 to i*i, so O(n^2) iterations.
Inner loop: k runs from 0 to j, so O(n^2) iterations.
Total: O(n) * O(n^2) * O(n^2) = O(n^5)
Here's the closed-form solution of the first snippet: (computed via Mathematica)
sum = -(1/10)*n + (1/4)*n^2 - (1/4)*n^4 + (1/10)*n^5
This is a 5th order polynomial, therefore it is: O(n^5)
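That closed form can be verified without Mathematica. The Python check below (my addition) simulates the first snippet and compares it to the polynomial, written over a common denominator as (2n^5 - 5n^4 + 5n^2 - 2n)/20:

```python
def snippet1(n):
    """Count sum++ executions in the first snippet."""
    total = 0
    for i in range(n):
        for j in range(i * i):
            for k in range(j):
                total += 1
    return total

def snippet1_poly(n):
    """(1/10)n^5 - (1/4)n^4 + (1/4)n^2 - (1/10)n, in integer arithmetic."""
    return (2 * n**5 - 5 * n**4 + 5 * n**2 - 2 * n) // 20

for n in range(1, 12):
    assert snippet1(n) == snippet1_poly(n)
```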
The second snippet is O(n^4):
Top loop: i runs from 0 to n, so O(n) iterations.
Middle loop: j runs from 0 to i*i, so O(n^2) iterations.
The if-statement passes only an O(1/n) fraction of the time.
Inner loop: k runs from 0 to j, so O(n^2) iterations.
Total: O(n) * O(n^2) * O(1/n) * O(n^2) = O(n^4)
Here's the closed-form solution of the second snippet: (computed via Mathematica)
sum = -(1/12)*n + (3/8)*n^2 - (5/12)*n^3 + (1/8)*n^4
This is a 4th order polynomial, therefore it is: O(n^4)
Further explanation of the effect of the if-statement:
The middle loop iterates j from 0 up to i*i. The if-statement then checks whether j is divisible by i, which is true exactly when j is a multiple of i.
How many times is j a multiple of i if 0 <= j < i*i? Exactly i times. Therefore only 1/i of the iterations of the middle loop will fall through to the inner-most loop.
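Here too the closed form can be reproduced in a few lines of Python (my addition); note that for i = 0 the middle loop body never runs, so j % i is never evaluated:

```python
def snippet2(n):
    """Count sum++ executions in the second snippet."""
    total = 0
    for i in range(n):
        for j in range(i * i):       # empty range when i == 0
            if j % i == 0:           # true for the i multiples of i below i*i
                for k in range(j):
                    total += 1
    return total

def snippet2_poly(n):
    """(1/8)n^4 - (5/12)n^3 + (3/8)n^2 - (1/12)n, in integer arithmetic."""
    return (3 * n**4 - 10 * n**3 + 9 * n**2 - 2 * n) // 24

for n in range(1, 12):
    assert snippet2(n) == snippet2_poly(n)
```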
More generally, the relationship between 'n' and the other variables in the second part of the for statement (..., x <= n, ...) defines how long the loop runs. Try to visualize a for loop as a race: the second statement says how many laps you make. For example, if the variable n = 1000, then you have to run the same lap 1000 times. Hope that gives you a better view of things.
