Can't understand big-O of a nested loop - algorithm

I am having trouble understanding the answers to the following question about analyzing the two algorithms below.
for (int i = n; i >= 1; i = i / 2) {
    for (int j = 1; j <= n; j++) {
        // do something
    }
}
The algorithm above has a complexity of O(n) according to the answers. Shouldn't it be lower, since the outer loop always halves the amount we have to go through? I thought that it should be something along the lines of O(n/2 * …)?
for (int j = 1; j <= n; j++) {
    for (int i = n; i >= j; i = i / 2) {
        // do something
    }
}
This one is O(n log n) if I am correct?

The first iteration will execute n steps, the second will execute n/2, the third will execute n/4, and so on.
If you compute the sum of n/(2^i) for i=0..log n you will get roughly 2n and that is why it is O(n).
If you take n out of the summation and sum only the 1/(2^i) part, you will get 2. Take a look at an example:
1 + 1/2 + 1/4 + 1/8 + 1/16 + ... = 1 + 0.5 + 0.25 + 0.125 + 0.0625 + ... = 2
Each successive term is half the previous one, so the sum will never exceed 2.
You are right with the second nested loop example - it is O(n log n).
EDIT:
After the comment from ringø I re-read the question and in fact the algorithm is different from what I understood. ringø is right, the algorithm as described in the question is O(n log n). However, judging from the context I think that the OP meant an algorithm where the inner loop is tied to i and not n.
This answer relates to the following algorithm:
for (int i = n; i >= 1; i = i / 2) {
    for (int j = 1; j <= i; j++) {
        // do something
    }
}
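As a quick sanity check on the 2n bound, here is a minimal Java fragment of my own (not from the original answer) that counts how many times the inner body runs:
for (int n = 1; n <= (1 << 20); n *= 2) {
    long count = 0;
    for (int i = n; i >= 1; i = i / 2) {
        for (int j = 1; j <= i; j++) {
            count++; // stands in for "do something"
        }
    }
    // count = n + n/2 + n/4 + ... + 1, which stays below 2n
    System.out.println("n = " + n + ": count = " + count + ", 2n = " + (2L * n));
}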

Related

Big O complexity on dependent nested loops

Can I get some help in understanding how to solve this tutorial question? I still do not understand my professor's explanation. I am unsure of how to count the big O for the third/innermost loop. She explains that the answer for this algorithm is O(n^2) and that the 2nd and 3rd loops have to be seen as one loop with a big O of O(n). Can someone please explain the big O notation for the 2nd/3rd loops in basic layman's terms?
Assuming n = 2^m
for (int i = n; i > 0; i--) {
    for (int j = 1; j < n; j *= 2) {
        for (int k = 0; k < j; k++) {
        }
    }
}
As far as I understand, the first loop has a big O notation of O(n)
Second loop = log(n)
Third loop = log(n) (since the number of times it will be looped has been reduced by log n) * 2^(2^m - 1) (to represent the increase in j?)
Let's add a print statement to the innermost loop.
for (int j = 1; j < n; j *= 2) {
    for (int k = 0; k < j; k++) {
        print(1)
    }
}
output for
j = 1: 1
j = 2: 1 1
j = 4: 1 1 1 1
...
j = n/2: 1 1 1 1 1 ... (n/2 times)
The question boils down to how many 1s this will print.
That number is
2^0 + 2^1 + 2^2 + ... + 2^(m-1)
= 1 + 2 + 4 + ... + n/2
= n - 1
= O(n),
since a geometric series sums to less than twice its largest term.
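A quick check of that count, a small Java sketch of my own (assuming n = 2^m, as the question does):
for (int m = 1; m <= 20; m++) {
    int n = 1 << m; // n = 2^m
    long count = 0;
    for (int j = 1; j < n; j *= 2) {
        for (int k = 0; k < j; k++) {
            count++; // one print per iteration
        }
    }
    // 1 + 2 + 4 + ... + n/2 = n - 1
    System.out.println("n = " + n + ": " + count + " prints, n - 1 = " + (n - 1));
}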
O-notation is an upper bound, so you can certainly say it is O(n^2). That bound is in fact tight here: the two inner loops together do 1 + 2 + 4 + ... + n/2 = n - 1 = Θ(n) work, and the outer loop repeats that n times, giving Θ(n^2).
It's because of the doubling of j: the innermost loop runs j times, and summing j over j = 1, 2, 4, ..., n/2 gives about n. That is why your teacher says to view the second and third loops as O(n) together.
If the second loop runs at most O(log(n)) times, with j doubling each time, then the second and third loops together do O(1 + 2 + 4 + ... + n/2) = O(n) work.
for (int i = n; i > 0; i--) {         // runs n times
    for (int j = 1; j < n; j *= 2) {  // runs at most log(n), i.e. m, times
        for (int k = 0; k < j; k++) { // runs j times; summed over all j this is O(n) per outer iteration
        }
    }
}
Hence the overall complexity is the outer count times the combined work of the two inner loops: n * O(n) = O(n^2).
Upper bounds can be loose or tight. Multiplying the worst cases of all three loops separately, as mentioned in the comments under the question, gives the looser O(n * log(n) * n) = O(n^2 log(n)); the geometric sum gives the tight Θ(n^2).
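For a concrete check of that tight bound, a small Java fragment of my own (again assuming n is a power of two); the total count comes out to exactly n * (n - 1):
for (int m = 1; m <= 12; m++) {
    int n = 1 << m;
    long count = 0;
    for (int i = n; i > 0; i--) {
        for (int j = 1; j < n; j *= 2) {
            for (int k = 0; k < j; k++) {
                count++;
            }
        }
    }
    // n outer iterations times (n - 1) inner iterations each
    System.out.println("n = " + n + ": " + count + " = n*(n-1) = " + ((long) n * (n - 1)));
}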

Trouble figuring out these tough Big-O examples

I'm trying to study for an upcoming quiz on Big-O notation. I've got a few examples here, but they're giving me trouble; they seem a little too advanced for the basic examples you find online to be of much help. Here are the problems I'm stuck on.
1.
for (i = 1; i <= n/2; i = i * 2) {
    sum = sum + product;
    for (j = 1; j < i*i*i; j = j + 2) {
        sum++;
        product += sum;
    }
}
For this one, the i = i * 2 in the outer loop implies O(log(n)), and I don't think the i <= n/2 condition changes anything, because of how we ignore constants; so the outer loop stays O(log(n)). The inner loop's condition j < i*i*i confuses me because it's in terms of i and not n. Would the big-O of this inner loop then be O(i^3), and thus the big-O for the entire problem O(i^3 * log(n))?
2.
for (i = n; i >= 1; i = i / 2) {
    sum = sum + product;
    for (j = 1; j < i*i; j = j + 2) {
        sum++;
        for (k = 1; k < i*i*j; k++)
            product *= i * j;
    }
}
For this one, the outermost loop implies O(log(n)). The middle loop implies, again I'm unsure, O(i^2)? And the innermost loop implies O(i^2 * j)? I've never seen examples like this before, so I'm almost guessing at this point. Would the big-O notation for this problem be O(i^4 * n * j)?
3.
for (i = 1; i < n*n; i = i * 2) {
    for (j = 0; j < i*i; j++) {
        sum++;
        for (k = i*j; k > 0; k = k - 2)
            product *= i * j;
    }
}
The outermost loop for this one has an n^2 condition, but also a logarithmic increment, so I think that cancels out to be just regular O(n). The middle loop is O(i^2), and the innermost loop is I think just O(n) and trying to trick you. So for this problem the Big-O notation would be O(n^2 * i^2)?
4.
int i = 1, j = 2;
while (i <= n) {
    sum += 1;
    i = i * j;
    j = j * 2;
}
For this one I did a few iterations to better see what was happening:
i = 1, j = 2
i = 2, j = 4
i = 8, j = 8
i = 64, j = 16
i = 1024, j = 32
So clearly, 'i' grows very quickly, and thus the condition is met very quickly. However I'm not sure just what kind of Big-O notation this is.
Any pointers or hints you can give are greatly appreciated, thanks guys.
You can't put i or j into O-notation; they must be converted to n.
For the first one:
Let k = log2(i).
Then the inner loop runs 2^(3k)/2 = 2^(3k-1) times for each iteration of the outer loop.
k goes from 1 to log2(n).
So the total number of iterations is the sum of 2^(3k-1) for k from 1 to log2(n), which is (4/7)(n^3 - 1) according to Wolfram Alpha; that is O(n^3).
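To verify the cubic growth empirically, here is a rough Java check of my own (small n only, since the count itself is cubic); the ratio count / n^3 settles toward a constant:
for (int e = 2; e <= 10; e += 2) {
    long n = 1L << e; // n = 4, 16, 64, 256, 1024
    long count = 0;
    for (long i = 1; i <= n / 2; i *= 2) {
        for (long j = 1; j < i * i * i; j += 2) {
            count++; // one execution of the inner loop body
        }
    }
    System.out.println("n = " + n + ": count/n^3 = " + (double) count / ((double) n * n * n));
}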
For the last one, i = j1 * j2 * j3 * ... * jk, where jm = 2^m:
i = 2^1 * 2^2 * ... * 2^k = 2^(1 + 2 + ... + k)
The loop stops once i exceeds n, so
1 + 2 + 3 + ... + k = log2(n)
k(k + 1)/2 = log2(n)
which gives k = O(sqrt(log n)).
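You can watch the iteration count grow proportionally to sqrt(log n) with a tiny sketch of my own (long arithmetic keeps i from overflowing at the sizes shown):
for (long n = 2; n <= (1L << 48); n *= 256) {
    long i = 1, j = 2;
    int steps = 0;
    while (i <= n) {
        i = i * j;
        j = j * 2;
        steps++;
    }
    int log2n = 63 - Long.numberOfLeadingZeros(n); // floor(log2(n))
    System.out.println("log2(n) = " + log2n + ": steps = " + steps
            + ", sqrt(log2(n)) = " + Math.sqrt(log2n));
}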
BTW, log(n^2) is not n; it is 2 log(n), so the outer loop in your third example is still O(log n).
This question is better asked on the Computer Science Stack Exchange than here.

Order Of Growth complicated for loops

For the following code fragment, what is the order of growth in terms of N?
int sum = 0;
for (int i = 1; i <= N; i = i * 2)
    for (int j = 1; j <= N; j = j * 2)
        for (int k = 1; k <= i; k++)
            sum++;
I have figured out that there is a lg N factor, but I am stuck on evaluating this part: lg N * (1 + 2 + 4 + 8 + 16 + ...). What will the last term of the sequence be? I need the last term to calculate the sum.
You have a geometric progression in your loops, so there is a closed form for the sum you are looking for:
1 + 2 + 4 + ... + 2^N = 2^(N+1) - 1
To be precise, your sum is
1 + ... + 2^(floor(ld(N)))
with ld denoting the logarithm to base 2.
The outer two loops are independent of each other, while the innermost loop depends only on i. There is a single operation (the increment) in the innermost loop, which means that the number of visits to the innermost loop equals the summation result.
\sum_i=1..( floor(ld(N)) ) {
\sum_j=1..( floor(ld(N)) ) {
\sum_k=1..2^i { 1 }
}
}
// adjust innermost summation bounds
= \sum_i=1..( floor(ld(N)) ) {
\sum_j=1..( floor(ld(N)) ) {
-1 + \sum_k=0..2^i { 1 }
}
}
// swap outer summations and resolve innermost summation
= \sum_j=1..( floor(ld(N)) ) {
\sum_i=1..( floor(ld(N)) ) {
2^i
}
}
// resolve inner summation
= \sum_j=1..( floor(ld(N)) ) {
2^(floor(ld(N)) + 1) - 2
}
// resolve outer summation
= floor(ld(N)) * (2^(floor(ld(N)) + 1) - 2) ≈ 2 * N * ld(N) - 2 * ld(N)
This amounts to O(N log N) in Big-Oh notation (the second term vanishes asymptotically with respect to the first).
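As an empirical cross-check of the closed form, here is a short Java fragment of my own comparing the exact count with 2 * N * ld(N) for N a power of two:
for (int m = 1; m <= 16; m++) {
    int N = 1 << m; // N = 2^m, so ld(N) = m
    long count = 0;
    for (int i = 1; i <= N; i = i * 2)
        for (int j = 1; j <= N; j = j * 2)
            for (int k = 1; k <= i; k++)
                count++;
    System.out.println("N = " + N + ": count = " + count + ", 2*N*ld(N) = " + (2L * N * m));
}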
To my understanding, the outer loop will take log N steps, the next loop will also take log N steps, and the innermost loop will take at most N steps (although this is a very rough bound). In total, the loop has a runtime complexity of at most ((log N)^2)*N, which can probably be improved.

running time of algorithm does not match the reality

I have the following algorithm:
I analyzed this algorithm as follows:
Since the outer for loop goes from 0 to n, it iterates at most n times, and the loop on j iterates from i to n, which we can again say is at most n times. If we do the same for the whole algorithm, we have 4 nested for loops, so the running time would be O(n^4).
But when I run this code for different input sizes, I get the following result:
As you can see, the result is much closer to n^3. Can anyone explain why this happens, or what is wrong with my analysis that gives me such a loose bound?
Formally, you may proceed as follows, using sigma notation, to obtain the order of growth of your algorithm:
Moreover, the resulting equation gives the exact number of iterations executed by the innermost loop:
int sum = 0;
for (i = 0; i < n; i++)
    for (j = i; j < n; j++)
        for (k = 0; k < j; k++)
            for (h = 0; h < i; h++)
                sum++;
printf("\nsum = %d", sum);
For n = 10, T(10) = 1155, and sum = 1155 as well.
I'm sure there's a conceptual way to see why, but you can prove by induction that there are (n + 2) * (n + 1) * n * (n - 1) / 24 iterations. Proof left to the reader.
In other words, it is indeed O(n^4).
Edit: Your count increases too frequently. Simply try this code to count the number of iterations:
for (int n = 0; n < 30; n++) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        for (int j = i; j < n; j++) {
            for (int k = 0; k < j; k++) {
                for (int h = k; h < i; h++) {
                    sum++;
                }
            }
        }
    }
    System.out.println(n + ": " + sum + " = " + (n + 2) * (n + 1) * n * (n - 1) / 24);
}
You have a rather complex algorithm. The number of operations is clearly less than n^4, but it isn't at all obvious how much less, or whether it is O(n^3) or not.
Checking the values n = 1 to 9 and making a guess based on the results is rather pointless.
To get a slightly better idea, assume that the number of steps is either c * n^3 or d * n^4, and make a table of the values of c and d for 1 <= n <= 1,000. It's not a foolproof method; there are algorithms whose behaviour changes dramatically much later than at n = 1,000.
The best method is of course a proof. Just remember that O(n^4) doesn't mean "approximately n^4 operations"; it means "at most c * n^4 operations, for some c". Sometimes c is small.
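Here is one way to build that c/d table, a rough Java harness of my own (using the h = k version of the loop nest from the edit above); whichever ratio levels off at a constant indicates the true order:
for (int n = 50; n <= 400; n *= 2) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        for (int j = i; j < n; j++)
            for (int k = 0; k < j; k++)
                for (int h = k; h < i; h++)
                    sum++;
    double c = sum / Math.pow(n, 3); // keeps growing if the true order is n^4
    double d = sum / Math.pow(n, 4); // levels off near 1/24 if the true order is n^4
    System.out.printf("n = %3d: c = %.4f, d = %.6f%n", n, c, d);
}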

Calculating the big-O complexity of this selection sort implementation?

I'm trying to compute the big-O time complexity of this selection sort implementation:
void selectionsort(int a[], int n)
{
    int i, j, minimum, index;
    for (i = 0; i < (n - 1); i++)
    {
        minimum = a[n - 1];
        index = (n - 1);
        for (j = i; j < (n - 1); j++)
        {
            if (a[j] < minimum)
            {
                minimum = a[j];
                index = j;
            }
        }
        if (i != index)
        {
            a[index] = a[i];
            a[i] = minimum;
        }
    }
}
How might I go about doing this?
Formally, you can obtain the exact number of iterations, along with the order of growth, using the methodology below:
Executing the following code fragment (a synthetic version of the original code), sum will equal the closed form of T(n):
sum = 0;
for (i = 0; i < (n - 1); i++) {
    for (j = i; j < (n - 1); j++) {
        sum++;
    }
}
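By my arithmetic, that closed form is T(n) = n(n - 1)/2; here is a tiny Java check of my own comparing the counted iterations against it:
for (int n = 1; n <= 10000; n *= 10) {
    long sum = 0;
    for (int i = 0; i < n - 1; i++)
        for (int j = i; j < n - 1; j++)
            sum++;
    System.out.println("n = " + n + ": sum = " + sum + ", n(n-1)/2 = " + ((long) n * (n - 1) / 2));
}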
Let's begin by looking at the inside of the outer loop. It does O(1) work with the initial assignments, then has a loop that runs n - i times, then does O(1) more work at the end to perform the swap. Therefore, the runtime is Θ(n - i).
If we sum up from i going from 0 up to n - 1, we get the following:
n + (n - 1) + (n - 2) + ... + 1
This famous sum works out to n(n + 1)/2 = Θ(n^2), so the runtime would be Θ(n^2), matching the known runtime of this algorithm.
Hope this helps!
