int n = 500;
for(int i = 0; i < n; i++)
    for(int j = 0; j < i; j++)
        sum++;
My guess is that this is simply O(N^2), but the j < i condition is giving me doubts.
int n = 500;
for(int i = 0; i < n; i++)
    for(int j = 0; j < i*i; j++)
        sum++;
Seems like O(N^3).
int n = 500;
for(int i = 0; i < n; i++)
    for(int j = 0; j < i*i; j++)
        if( j % i == 0 )
            for( int k = 0; k < j; k++ )
                sum++;
O(N^5)?
So for each iteration of the outer loop, j runs up to a different bound. If it were j < n*n, it would have been more straightforward, but this one is tricky, so please help. Thanks.
In the first case sum++ executes 0 + 1 + ... + (n-1) times. Applying the arithmetic progression formula gives n(n-1)/2, which is O(n^2).
In the second case we'll have 0 + 1 + 4 + 9 + ... + (n-1)^2, which is the sum of squares of the first n-1 numbers, and there's a formula for it too: (n-1) n (2n-1) / 6, which is O(n^3).
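If you want to double-check both closed forms, a quick brute-force count (my own sketch, not part of the original question) agrees with them:

public class SumFormulas {
    public static void main(String[] args) {
        int n = 500;

        // Case 1: the body runs 0 + 1 + ... + (n-1) times
        long sum1 = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < i; j++)
                sum1++;
        System.out.println(sum1 + " == " + (long) n * (n - 1) / 2);

        // Case 2: the body runs 0 + 1 + 4 + ... + (n-1)^2 times
        long sum2 = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < i * i; j++)
                sum2++;
        System.out.println(sum2 + " == " + (long) (n - 1) * n * (2L * n - 1) / 6);
    }
}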
The last one is interesting. You can see that the innermost for loop is entered only when j is a multiple of i, so you can rewrite the program as follows:
int n = 500;
for(int i = 0; i < n; i++) {
    for(int m = 0; m < i; m++) {
        int j = m * i;
        for( int k = 0; k < j; k++ )
            sum++;
    }
}
It's easier to work with math notation. The total count is

sum_{i=0}^{n-1} sum_{m=0}^{i-1} m*i = sum_{i=0}^{n-1} i * i(i-1)/2

The formula is derived from the code by analysis: sum++ gets called j = m*i times in the innermost loop, which runs i times (once per value of m), which in turn runs n times (once per value of i). In the end, the problem boils down to a sum of cubes of the first n numbers plus lower-order terms (which do not affect the asymptotics), so it is O(n^4).
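A quick empirical check (my own sketch; it adds j = m*i directly instead of running the k loop, which doesn't change the count) shows the ratio count/n^4 settling near a constant, 1/8:

public class CaseThreeCount {
    public static void main(String[] args) {
        for (int n = 125; n <= 1000; n *= 2) {
            long sum = 0;
            for (int i = 0; i < n; i++)
                for (int m = 0; m < i; m++)  // j = m * i runs over the multiples of i
                    sum += (long) m * i;     // the k loop would do j increments
            System.out.printf("n=%d  count=%d  count/n^4=%.4f%n", n, sum, sum / Math.pow(n, 4));
        }
    }
}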
One final note: it may look obvious, but I'd like to point out that, in general, the sum of the d-th powers of the first N natural numbers is Ω(N^(d+1)) (see Wikipedia for Big-Omega notation), that is, it grows no slower than that function. The same reasoning proves the stronger statement that it belongs to Θ(N^(d+1)), which combines both Ω and O.
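Here is the standard squeezing argument, for completeness: keep only the top half of the terms for the lower bound, and bound every term by the largest one for the upper bound:

1^d + 2^d + ... + N^d >= (N/2)^d + ... + N^d >= (N/2) * (N/2)^d = N^(d+1) / 2^(d+1)
1^d + 2^d + ... + N^d <= N * N^d = N^(d+1)

Both bounds are constant multiples of N^(d+1), which is exactly Θ(N^(d+1)).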
You are right for all but the last one, which has a tighter bound of O(n^4): note that the innermost for loop is only executed when j is a multiple of i. There are x / i multiples of i less than or equal to x, and i * i / i = i, so the innermost loop is executed for only i values of j out of the i * i.
Note that big-O gives an upper bound, so i*i vs n*n makes little difference. Strictly speaking, saying they are all O(n^2015) is also correct (because that is a valid upper bound), but it's hardly helpful, so in practice a tight bound is what's usually given.
IVlad already gave the correct answer.
I think what confuses you is the "Big Oh" definition.
N^2 is O(N^2)
(1/2)N^2 is O(N^2)
(1/2)N^2 + c*N + b is also O(N^2), given that c and b are constants independent of N
Check the Big-O definition here.
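For completeness, the definition in play: f(N) = O(g(N)) means there exist constants C > 0 and N0 such that f(N) <= C * g(N) for all N >= N0. For f(N) = (1/2)N^2 + c*N + b you can take C = 1/2 + |c| + |b| and N0 = 1, since N <= N^2 and 1 <= N^2 whenever N >= 1.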
I understand that simple statements like:
int x = 5; // is 1 or O(1)
And a while loop such as:
while (i < n) // is n+1 or O(n)
    i++;
And the same for a single for loop (depending on its bounds).
With a nested while or for loop such as:
for(int i = 0; i < n; i++){ // this is n + 1
    for(int j = 0; j < n; j++){ // this is (n+1)*n, total = O(n^2)
    }
}
Also, any time we have a doubling effect it's log_2(n), a tripling effect log_3(n), and so on. And if the control variable is being halved or quartered, that's also log_2(n) or log_4(n).
But I am dealing with much more complicated examples. How would one work these out? I have the answers; I just don't know how to derive them on paper come an examination.
Example 1:
for (i = 1; i < (n*n + 3*n + 17) / 4; i += 1)
    System.out.println("Sunshine");
Example 2:
for (i = 0; i < n; i++)
    if ( i % 2 == 0) // very confused by what mod would do to runtime
        for (j = 0; j < n; j++)
            System.out.print("Bacon");
    else
        for (j = 0; j < n * n; j++)
            System.out.println("Ocean");
Example 3:
for (i = 1; i <= 10000 * n; i *= 2)
    x += 1;
Thank you
Example 1 is bounded by the term (n*n + 3*n + 17) / 4 and is therefore O(n^2), because the largest, and therefore dominant, term in that expression is n^2.
The second example is a bit trickier. The outer loop over i iterates n times, but what executes inside depends on whether that value of i is odd or even. When it is even, a loop of n iterations runs; when it is odd, a loop of n^2 iterations runs. The odd case dominates the running time, so example 2 is O(n^3).
The third example iterates until hitting 10000 * n, but does so by doubling the loop counter i at each step. This gives O(lg n) performance, where lg means log base 2 (the constant 10000 only adds about lg(10000) extra steps, which doesn't affect the asymptotics). To see why, imagine we wanted to reach n = 32, starting at i = 1 and doubling each time: we would pass through 1, 2, 4, 8, 16, and 32, i.e. lg(32) + 1 = 6 steps, which grows as lg n.
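If you want to sanity-check examples 2 and 3 on a machine, a quick counter like this (my own sketch, with the print statements replaced by counters) shows the predicted growth:

public class LoopCounts {
    public static void main(String[] args) {
        for (int n = 1000; n <= 8000; n *= 2) {
            // Example 2: even i contributes n inner iterations, odd i contributes n*n
            long ops2 = 0;
            for (int i = 0; i < n; i++)
                ops2 += (i % 2 == 0) ? n : (long) n * n;
            // Example 3: count the doublings of i until it exceeds 10000 * n
            int ops3 = 0;
            for (long i = 1; i <= 10000L * n; i *= 2)
                ops3++;
            System.out.printf("n=%d  ex2=%d (n^3/2=%d)  ex3=%d (lg(10000n)=%.1f)%n",
                    n, ops2, (long) n * n * n / 2, ops3,
                    Math.log(10000.0 * n) / Math.log(2));
        }
    }
}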
Can someone please explain how the worst-case running time is O(N) and not O(N^2) in the following exercise? There is a double for loop where, for every value of i, the inner loop compares j to i, does sum++ and increments j, repeating until j reaches i, while i itself doubles until it reaches N.
What is the order of growth of the worst case running time of the following code fragment
as a function of N?
int sum = 0;
for (int i = 1; i <= N; i = i*2)
    for (int j = 0; j < i; j++)
        sum++;
Question Explanation
The answer is : N
The body of the inner loop is executed 1 + 2 + 4 + 8 + ... + N ~ 2N times.
I think you already stated the answer in your question -- the inner loop body is executed about 2N times, which is O(N). In asymptotic (big-O) notation any constant multiples are dropped, because for very, very large values the graph of 2N looks just like N, so the factor isn't considered significant. In this case, the complexity of the problem equals the number of times sum++ is called, because the algorithm is so simple. Does that make sense?
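To make the geometric series concrete, here is a small check (my own sketch) that counts how often sum++ runs and compares it with 2N:

public class GeometricCount {
    public static void main(String[] args) {
        for (int N = 1; N <= (1 << 20); N *= 4) {
            long count = 0;
            for (int i = 1; i <= N; i *= 2)   // i takes the values 1, 2, 4, ..., <= N
                for (int j = 0; j < i; j++)   // the body runs i times
                    count++;
            // 1 + 2 + 4 + ... + 2^floor(lg N) is always less than 2N
            System.out.println("N=" + N + "  count=" + count + "  2N=" + (2L * N));
        }
    }
}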
Complexity doesn't depend simply on the number of nested loops.

O(N^c): the time complexity of nested loops is equal to the number of times the innermost statement is executed. For example, the following sample loops both have O(N^2) time complexity:
for (int i = 1; i <= n; i += c) {
    for (int j = 1; j <= n; j += c) {
        // some O(1) expressions
    }
}
for (int i = n; i > 0; i -= c) {
    for (int j = i+1; j <= n; j += c) {
        // some O(1) expressions
    }
}
I have the following algorithm:
I analyzed this algorithm as follows:
Since the outer for loop runs up to n, it iterates at most n times, and the loop on j runs from i to n, which is also at most n times. Applying the same reasoning to the whole algorithm, we have 4 nested for loops, so the running time would be O(n^4).
But when I run this code for different input sizes, I get the following result:
As you can see, the growth is much closer to n^3. Can anyone explain why this happens, or what is wrong with my analysis that makes the bound loose?
Formally, you may proceed as follows, using sigma notation, to obtain the order of growth of your algorithm:

T(n) = sum_{i=0}^{n-1} sum_{j=i}^{n-1} sum_{k=0}^{j-1} sum_{h=0}^{i-1} 1
     = sum_{i=0}^{n-1} i * sum_{j=i}^{n-1} j
     = n^2 (n-1)^2 / 8 + n (n-1) (2n-1) / 12
     = O(n^4)

Moreover, the equation obtained gives the exact number of iterations executed inside the innermost loop:
#include <stdio.h>

int main(void) {
    int n = 10, sum = 0;
    for( int i = 0 ; i < n ; i++ )
        for( int j = i ; j < n ; j++ )
            for( int k = 0 ; k < j ; k++ )
                for( int h = 0 ; h < i ; h++ )
                    sum++;
    printf("\nsum = %d", sum);
    return 0;
}
With n = 10 the formula gives T(10) = 1155, and the program prints sum = 1155 as well.
I'm sure there's a conceptual way to see why, but you can prove by induction that the code shown in the edit below executes its innermost statement (n + 2) * (n + 1) * n * (n - 1) / 24 times. Proof left to the reader.
In other words, it is indeed O(n^4).
Edit: Your count increases too frequently. Simply try this code to count the number of iterations:
public class CountLoops {
    public static void main(String[] args) {
        for (int n = 0; n < 30; n++) {
            int sum = 0;
            for (int i = 0; i < n; i++) {
                for (int j = i; j < n; j++) {
                    for (int k = 0; k < j; k++) {
                        for (int h = k; h < i; h++) {
                            sum++;
                        }
                    }
                }
            }
            System.out.println(n + ": " + sum + " = " + (n + 2) * (n + 1) * n * (n - 1) / 24);
        }
    }
}
You have a rather complex algorithm. The number of operations is clearly less than n^4, but it isn't at all obvious how much less than n^4, or whether it is O(n^3) or not.

Checking the values n = 1 to 9 and making a guess based on the results is rather pointless.

To get a slightly better idea, assume that the number of steps is either c * n^3 or d * n^4, and make a table of the values of c and d for 1 <= n <= 1,000. That might give you a better idea. It's not a foolproof method; there are algorithms that change their behaviour dramatically much later than at n = 1,000.

The best method is of course a proof. Just remember that O(n^4) doesn't mean "approximately n^4 operations"; it means "at most c * n^4 operations, for some c". Sometimes c is small.
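As a sketch of that table idea (my own code, using the h = k variant from the answer above), dividing the measured step count by n^3 and by n^4 shows which ratio settles down:

public class RatioTable {
    public static void main(String[] args) {
        for (int n = 50; n <= 400; n *= 2) {
            long steps = 0;
            for (int i = 0; i < n; i++)
                for (int j = i; j < n; j++)
                    for (int k = 0; k < j; k++)
                        for (int h = k; h < i; h++)
                            steps++;
            double c = steps / Math.pow(n, 3);  // keeps growing if the true order is higher
            double d = steps / Math.pow(n, 4);  // settles near 1/24 if the order is n^4
            System.out.printf("n=%4d  steps=%12d  c=%.4f  d=%.6f%n", n, steps, c, d);
        }
    }
}

Because d converges to 1/24 (a small constant), the counts for small n can easily be mistaken for cubic growth.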
I'm asked to give big-O estimates for some pieces of code, but I'm having a little trouble.
int sum = 0;
for (int i = 0; i < n; i = i + 2) {
    for (int j = 0; j < 10; j++)
        sum = sum + i + j;
}
I'm thinking that the worst case would be O(n/2), because the outer loop runs from 0 up to n in steps of 2. However, I'm not sure whether the inner loop affects the big-O.
int sum = 0;
for (int i = n; i > n/2; i--) {
    for (int j = 0; j < n; j++)
        sum = sum + i + j;
}
For this one, I'm thinking it would be O(n^2/2), because the inner loop runs j up to n and the outer loop runs from n down to n/2, which gives me n * (n/2).
int sum = 0;
for (int i = n; i > n - 2; i--) {
    for (int j = 0; j < n; j += 5)
        sum = sum + i + j;
}
I'm pretty lost on this one. My guess is O(n^2-2/5)
Your reasoning for the first two examples is correct.

For the first example, the inner loop of course always executes 10 times, so the total running time is O(10 * n/2).

For the last example, the outer loop executes only twice, and the inner loop n/5 times, so the total running time is O(2 * n/5).
Note that, because of the way big-O complexity is defined, constant factors and asymptotically smaller terms are negligible, so your complexities can / should be simplified to:
O(n)
O(n^2)
O(n)
If you were to take into account constant factors (using something other than big-O notation of course - perhaps ~-notation), you may have to be explicit about what constitutes a unit of work - perhaps sum = sum + i + j constitutes 2 units of work instead of just 1, since there are 2 addition operations.
You're NOT running nested loops:

for (int i = 0; i < n; i = i + 2);
                                 ^----

That semicolon TERMINATES the loop statement, so the i loop just counts from 0 to n in steps of 2 without doing anything. The j loop that follows is completely independent of the i loop; both simply depend on n for their execution time.
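To see the difference, here is a hypothetical side-by-side (my own sketch):

public class SemicolonDemo {
    public static void main(String[] args) {
        int n = 100;

        // With the stray semicolon the for statement has an empty body,
        // and the j loop below runs exactly once, after i is finished:
        int sum1 = 0;
        for (int i = 0; i < n; i = i + 2);
        for (int j = 0; j < 10; j++)
            sum1++;

        // Without it, the j loop runs once per iteration of the i loop:
        int sum2 = 0;
        for (int i = 0; i < n; i = i + 2)
            for (int j = 0; j < 10; j++)
                sum2++;

        System.out.println(sum1 + " vs " + sum2);  // prints 10 vs 500
    }
}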
For the above algorithms, worst case and best case are the same.
In big-O notation, lower-order terms and the coefficient of the highest-order term can be ignored, since big-O is used to describe an asymptotic upper bound.
int sum = 0;
for (int i = 0; i < n; i = i + 2) {
    for (int j = 0; j < 10; j++)
        sum = sum + i + j;
}
The outer loop iterates n/2 times. For each iteration of the outer loop, the inner loop iterates 10 times, so the total number of inner-loop iterations is 10 * n/2 = 5n. So clearly it is O(n).
Now think about the remaining two programs and determine their time complexities on your own.
I have to analyze the Big O complexity for the below code fragments:
a)
// loop 1
for(int i = 0; i < n; i++)
    // loop 2
    for(int j = i; j < n; j++)
        sum++;
b)
// loop 1
for(int i = 0; i < n; i++)
    // loop 2
    for(int j = i + 1; j > i; j--)
        // loop 3
        for(int k = n; k > j; k--)
            sum++;
I'm not sure how to do so any help provided will be greatly appreciated. Thanks.
To analyze big-O complexity, you have to count how many basic operations your code performs.
In your first loop:
for(int i = 0; i < n; i++)
    for(int j = i; j < n; j++)
        sum++;
How many times is sum++ called?
The first loop happens n times, and in each one of these, the second loop happens around n times.
This gives you around n * n operations, which is equivalent to a complexity of O(n^2).
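For what it's worth, a quick check (my own sketch) shows the exact count for this first snippet is n(n+1)/2, which indeed is O(n^2):

public class PartACount {
    public static void main(String[] args) {
        for (int n = 1; n <= 16; n *= 2) {
            int sum = 0;
            for (int i = 0; i < n; i++)
                for (int j = i; j < n; j++)  // runs n - i times for each i
                    sum++;
            System.out.println("n=" + n + "  sum=" + sum + "  n(n+1)/2=" + n * (n + 1) / 2);
        }
    }
}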
I'll let you work out the second one.
The first one is straightforward (using the tools applied to the second snippet, which is a bit trickier), so I'll focus on the second snippet.
Big-O notation gives an asymptotic upper bound on the number of ops the algorithm does.

Let's assume each inner iteration does 1 op, and let's neglect the counters and the overhead of looping.

Denote by T(n) the total number of ops done by the program.

It is clear that the program does NO MORE ops than:
// loop 1
for(int i = 0; i < n; i++)
    // loop 2
    for(int j = i+1; j > i; j--) // note: a single iteration here, see (1) for details
        // loop 3
        for(int k = n; k > 0; k--) // we changed k > j to k > 0 - see (2) for details
            sum++;
(1) Since j is initialized to i+1 and decreased on each iteration, after the first iteration of loop 2 you get j == i and the condition yields false - thus a single iteration is done.

(2) The original loop 3 iterates NO MORE than n times (since j >= 0, it runs n - j <= n times) - thus the "new program" is "not better" than the old one (in terms of upper bounds).
Complexity of the simplified program:

The total complexity of the above program is O(n^2), since loop 1 and loop 3 repeat n times each, and loop 2 repeats exactly once. If we assume a single command is done in each inner iteration, the total number of commands executed is n * 1 * n = n^2.
Conclusion:

Since the new program does n^2 "ops" (according to the assumptions) and the original is "not worse than the new", the original does T(n) <= n^2 steps.

From the definition of big-O notation (with c = 1, for every N), you can conclude that the program is O(n^2).
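As a final sanity check (my own sketch), brute-forcing the second snippet shows the exact count is n(n-1)/2, comfortably within the O(n^2) bound derived above:

public class CheckPartB {
    public static void main(String[] args) {
        for (int n = 1; n <= 16; n *= 2) {
            int sum = 0;
            for (int i = 0; i < n; i++)
                for (int j = i + 1; j > i; j--)  // executes exactly once per i
                    for (int k = n; k > j; k--)  // executes n - (i + 1) times
                        sum++;
            System.out.println("n=" + n + "  sum=" + sum + "  n(n-1)/2=" + n * (n - 1) / 2);
        }
    }
}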