Runtime of a loop that decays exponentially? - algorithm

Here n, the input to the function, can be any positive integer.
    i = n, total = 0;
    while (i > 0) {
        for (j = 0; j < i; j++)
            for (k = 0; k < i; k++)
                total++;
        i = i / 4;
    }
What is the time complexity of this function?

One way to think about this is to look at the loops independently.
This inner loop nest:
    for (j = 0; j < i; j++)
        for (k = 0; k < i; k++)
            total++;
will execute a total of Θ(i^2) operations, since each loop independently runs i times.
Now, let's look at the outer loop:
    while (i > 0) {
        /* do Θ(i^2) work */
        i = i / 4;
    }
This loop will run at most 1 + log_4 n times, since i starts at n, each iteration divides i by 4, and this can only happen 1 + log_4 n times before i drops to zero (integer division rounds down). The question, then, is how much work is done in total.
One way to solve this is to write a simple recurrence relation for the total work done. We can think of the loop as a tail-recursive function, where each iteration does Θ(i^2) work and then recurses on a subproblem of one quarter the size. This gives the recurrence
T(n) = T(n / 4) + Θ(n^2).
Applying the Master Theorem, we see that a = 1, b = 4, and c = 2. Since log_b a = log_4 1 = 0 and 0 < c, the Master Theorem says (by Case 3) that the runtime solves to Θ(n^2). Therefore, the total work done is Θ(n^2).
Here's another way to think about this. The first iteration of the loop does n^2 work. The next does (n / 4)^2 = n^2 / 16 work. The one after that does (n / 16)^2 = n^2 / 256 work. In general, iteration x of the loop (counting from zero) does n^2 / 16^x work. Therefore, the total work done is
    n^2 (1 + 1/16 + 1/16^2 + 1/16^3 + ...)
    ≤ n^2 (1 / (1 - 1/16))
    = Θ(n^2)
(This uses the formula for the sum of an infinite geometric series).
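As a quick sanity check, here is a minimal C++ sketch (my own, not from the question) that tallies the increments per outer-loop pass and compares the total against the (16/15) n^2 bound:

    #include <iostream>

    int main() {
        for (long long n = 1; n <= 1000000; n *= 10) {
            long long total = 0;
            // Each pass of the outer loop does i*i increments of total.
            for (long long i = n; i > 0; i /= 4)
                total += i * i;
            std::cout << "n = " << n << ": total = " << total
                      << ", (16/15) n^2 = " << 16.0 * n * n / 15.0 << "\n";
        }
    }

The measured totals stay just below (16/15) n^2, matching the geometric-series bound.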
Hope this helps!

The running time is O(n^2), and the number of times that total is incremented is asymptotically n^2 / (1 - 1/16), which is about 1.067 n^2.
The recurrence is going to be
    T(n) = n^2 + T(n/4)
         = n^2 + n^2/16 + T(n/16)
         ≤ n^2 (1 + 1/16 + 1/16^2 + ...)
         = n^2 / (1 - 1/16)

This code fragment:
    i = n, total = 0;
    while (i > 0) {
        for (j = 0; j < i; j++)
            for (k = 0; k < i; k++)
                total++;
        i = i / 4;
    }
is equivalent to this one:
    for (i = n; i > 0; i = i / 4)
        for (j = 0; j < i; j++)
            for (k = 0; k < i; k++)
                total++;
Therefore, methodically (and empirically verified), you may obtain the following using Sigma notation, where t counts the iterations of the outer loop:

    total = Σ_{t=0}^{⌊log_4 n⌋} ⌊n / 4^t⌋^2 ≤ n^2 · Σ_{t=0}^{∞} (1/16)^t = (16/15) n^2 = Θ(n^2)

With many thanks to WolframAlpha.

Related

Time Complexity log(n) vs Big O (root n)

Trying to analyze the code snippet below.
Can the time complexity of this code be O(log n)? I am new to asymptotic analysis. The tutorial says it is O(√n).
    int p = 0;
    for (int i = 1; p <= n; i++) {
        p = p + i;
    }
Variable p is going to take the successive values 1, 1+2, 1+2+3, etc.
This sequence is called the sequence of triangular numbers; you can read more about it on Wikipedia or OEIS.
One thing to be noted is the formula:
1 + 2 + ... + i = i*(i+1)/2
Hence your code could be rewritten in the somewhat equivalent form:
    int p = 0;
    for (int i = 1; p <= n; i++)
    {
        p = i * (i + 1) / 2;
    }
Or, getting rid of p entirely:
    for (int i = 1; (i - 1) * i / 2 <= n; i++)
    {
    }
Hence your code runs while (i - 1) * i <= 2n. You can make the approximation (i - 1) * i ≈ i^2 to see that the loop runs for about sqrt(2n) iterations.
If you are not satisfied with this approximation, you can solve for i the quadratic equation
    i^2 - i - 2n = 0
You will find that the loop runs while
    i <= (1 + sqrt(1 + 8n)) / 2 = 0.5 + sqrt(2n + 0.25)
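As an empirical check (my own sketch, not from the original answer), the following C++ program compares the actual iteration count with the closed-form prediction:

    #include <cmath>
    #include <iostream>

    int main() {
        for (long long n = 10; n <= 10000000; n *= 100) {
            long long p = 0, iters = 0;
            for (long long i = 1; p <= n; i++) {  // the loop being analyzed
                p = p + i;
                iters++;
            }
            std::cout << "n = " << n << ": iterations = " << iters
                      << ", predicted " << 0.5 + std::sqrt(2.0 * n + 0.25) << "\n";
        }
    }

For every n, the measured count equals ⌊0.5 + sqrt(2n + 0.25)⌋, i.e. Θ(√n).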

Big O complexity on dependent nested loops

Can I get some help in understanding how to solve this tutorial question? I still do not understand my professor's explanation. I am unsure of how to count the big O for the third/innermost loop. She explains that the answer for this algorithm is O(n^2) and that the 2nd and 3rd loops have to be seen as one loop with a big O of O(n). Can someone please explain the big O notation for the 2nd/3rd loop in basic layman's terms?
Assuming n = 2^m:
    for (int i = n; i > 0; i--) {
        for (int j = 1; j < n; j *= 2) {
            for (int k = 0; k < j; k++) {
            }
        }
    }
As far as I understand, the first loop has a big O notation of O(n).
Second loop = log(n).
Third loop = log(n) (since the number of times it will be looped has been reduced by log n) * 2^(2^m - 1) (to represent the increase in j?)
Let's add a print statement to the innermost loop.
    for (int j = 1; j < n; j *= 2) {
        for (int k = 0; k < j; k++) {
            print(1)
        }
    }
The output is:
    j = 1:   1
    j = 2:   1 1
    j = 4:   1 1 1 1
    ...
    j = n/2: 1 1 1 1 ... (n/2 ones)
(For each j the inner loop prints j ones, and since n = 2^m and the condition is j < n, the last value of j is 2^(m-1) = n/2.)
The question boils down to how many 1s this will print. That number is
    2^0 + 2^1 + 2^2 + ... + 2^(m-1)
    = 2^m - 1
    = n - 1
    = O(n),
assuming you know why 1 + 2 + 4 + ... + n/2 = n - 1 (a geometric series sums to one less than twice its largest term).
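Here is a small C++ sketch (my own) that counts the 1s for several powers of two:

    #include <iostream>

    int main() {
        for (int n = 2; n <= (1 << 20); n <<= 4) {
            long long ones = 0;
            for (int j = 1; j < n; j *= 2)
                ones += j;  // the inner loop would print j ones
            std::cout << "n = " << n << ": ones = " << ones
                      << " (n - 1 = " << n - 1 << ")\n";
        }
    }

The count is exactly n - 1, so the two inner loops together are O(n); multiplying by the n iterations of the outer loop gives the professor's O(n^2).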
O-notation is an upper bound, so saying O(n^2) is correct. It is also the tight bound here: the two inner loops together do Θ(n) work (a geometric series in j), and the outer loop runs n times, so the whole thing is Θ(n^2), not Θ(n log^2 n).
The key point is that the innermost loop runs j times, and j doubles on every iteration of the middle loop. Summing over j = 1, 2, 4, ..., n/2 gives the geometric series 1 + 2 + 4 + ... + n/2 = n - 1. That is why your teacher says to view the second and third loops as O(n) together.
Put differently: although the middle loop runs only O(log n) times, its iterations do 2^0 + 2^1 + ... + 2^(log n - 1) = O(n) units of inner-loop work between them, so the second and third loops combined cost O(n).
    for (int i = n; i > 0; i--) {          // runs n times
        for (int j = 1; j < n; j *= 2) {   // runs at most log2(n) = m times
            for (int k = 0; k < j; k++) {  // runs j times; summed over all j, at most n - 1 times per i
            }
        }
    }
Hence the overall complexity is the outer loop count times the combined cost of the two inner loops: n * O(n) = O(n^2).
An upper bound can be loose or tight. O(n^2) happens to be the tight bound here; a bound such as O(n * m^2) = O(n log^2 n) would actually undercount the work, because the inner loop's trip count grows geometrically rather than staying near m.
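To convince yourself, here is a brute-force count (again, my own sketch) over powers of two:

    #include <iostream>

    int main() {
        for (int n = 2; n <= 2048; n <<= 2) {
            long long count = 0;
            for (int i = n; i > 0; i--)
                for (int j = 1; j < n; j *= 2)
                    for (int k = 0; k < j; k++)
                        count++;
            std::cout << "n = " << n << ": count = " << count
                      << ", n*(n-1) = " << (long long)n * (n - 1) << "\n";
        }
    }

The count is exactly n(n - 1), confirming Θ(n^2).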

running time of algorithm does not match the reality

I have the following algorithm (the code is reproduced in the answers below). I analyzed it as follows:
Since the outer for loop iterates at most n times,
and the loop on j goes from i to n, which is again at most n times,
and we can say the same for the remaining loops, we have 4 nested for loops, so the running time would be O(n^4).
But when I count the operations for different input sizes, the results look much closer to n^3. Can anyone explain why this happens, or what is wrong with my analysis that gives me a loose bound?
Formally, you may proceed as follows, using Sigma notation, to obtain the order of growth of your algorithm:

    T(n) = Σ_{i=0}^{n-1} Σ_{j=i}^{n-1} Σ_{k=0}^{j-1} Σ_{h=0}^{i-1} 1
         = Σ_{i=0}^{n-1} i · Σ_{j=i}^{n-1} j
         = Σ_{i=0}^{n-1} i · (n(n-1)/2 - i(i-1)/2)
         = n(n-1)(n+1)(3n-2)/24
         = Θ(n^4)

Moreover, the closed form tells the exact number of iterations executed inside the innermost loop:
    int sum = 0;
    for (i = 0; i < n; i++)
        for (j = i; j < n; j++)
            for (k = 0; k < j; k++)
                for (h = 0; h < i; h++)
                    sum++;
    printf("\nsum = %d", sum);
When n = 10, T(10) = 1155, and the program prints sum = 1155 as well.
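Here is a short C++ sketch (mine, not from the original answer) checking the closed form above against a brute-force count:

    #include <iostream>

    int main() {
        for (long long n = 1; n <= 12; n++) {
            long long sum = 0;
            for (long long i = 0; i < n; i++)
                for (long long j = i; j < n; j++)
                    for (long long k = 0; k < j; k++)
                        for (long long h = 0; h < i; h++)
                            sum++;
            // The product is always divisible by 24, so integer division is exact.
            std::cout << n << ": " << sum << " = "
                      << n * (n - 1) * (n + 1) * (3 * n - 2) / 24 << "\n";
        }
    }

Both columns agree for every n, including sum = 1155 at n = 10.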
I'm sure there's a conceptual way to see why, but you can prove by induction that the code below (note that it starts h at k rather than at 0) executes its innermost statement (n + 2) * (n + 1) * n * (n - 1) / 24 times. Proof left to the reader.
In other words, it is indeed O(n^4).
Edit: Your count increases too frequently. Simply try this code to count the number of loops:
    for (int n = 0; n < 30; n++) {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i; j < n; j++) {
                for (int k = 0; k < j; k++) {
                    for (int h = k; h < i; h++) {
                        sum++;
                    }
                }
            }
        }
        System.out.println(n + ": " + sum + " = " + (n + 2) * (n + 1) * n * (n - 1) / 24);
    }
You have a rather complex algorithm. The number of operations is clearly less than n^4, but it isn't at all obvious how much less than n^4, nor whether it is O(n^3).
Checking the values n = 1 to 9 and making a guess based on the results is rather pointless.
To get a slightly better idea, assume that the number of steps is either c * n^3 or d * n^4, and make a table of the values c and d for 1 <= n <= 1,000. That might give you a better idea. It's not a foolproof method; there are algorithms that change their behaviour dramatically much later than at n = 1,000.
Best method is of course a proof. Just remember that O (n^4) doesn't mean "approximately n^4 operations", it means "at most c * n^4 operations, for some c". Sometimes c is small.
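Here is what that experiment might look like (a minimal sketch of my own; I stop at n = 160 so the brute-force count stays fast):

    #include <iostream>

    int main() {
        for (long long n = 20; n <= 160; n *= 2) {
            long long count = 0;
            for (long long i = 0; i < n; i++)
                for (long long j = i; j < n; j++)
                    for (long long k = 0; k < j; k++)
                        for (long long h = 0; h < i; h++)
                            count++;
            // If the algorithm were c*n^3, c would settle; if d*n^4, d would settle.
            double c = (double)count / ((double)n * n * n);
            double d = (double)count / ((double)n * n * n * n);
            std::cout << "n = " << n << "  c = " << c << "  d = " << d << "\n";
        }
    }

As n doubles, c keeps doubling while d settles near 1/8, pointing at Θ(n^4) with a small constant.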

Instruction execution of a C++ code

Hello, I have an algorithm in C++ and I want to count the instructions executed. The code is below.
    cin >> n;
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            A[i][j] = 0;
    for (i = 1; i <= n; i++)
        A[i][i] = 1;
Now, after my calculation, I got T(n) = n^2 + 8n - 5. I just need someone else to verify whether I am correct. Thanks.
Ok, let's do an analysis step by step.
The first instruction
cin >> n
counts as one operation: 1.
Then the loop
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            A[i][j] = 0;
Let's go from the inside out. The j loop performs n array assignments (A[i][j] = 0), (n + 1) comparisons j <= n, and n increments j++. It also performs the assignment j = 1 once. In total this gives: n + (n + 1) + n + 1 = 3n + 2.
Then the outer i loop performs (n + 1) comparisons i <= n, n increments i++, and executes the j loop n times. It also performs the assignment i = 1 once. This results in: n + (n + 1) + n * (3n + 2) + 1 = 3n^2 + 4n + 2.
Finally, the last for loop performs n array assignments, (n + 1) comparisons i <= n, and n increments i++. It also performs the assignment i = 1 once. This results in: n + (n + 1) + n + 1 = 3n + 2.
Now, adding up the three parts, we get:
(1) + (3n^2 + 4n + 2) + (3n + 2) = 3n^2 + 7n + 5 = T(n) total operations.
The time function is equivalent, assuming that assignments, comparisons, additions and cin are all done in constant time. That yields an algorithm of complexity O(n^2).
This is of course assuming that n >= 1.
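If you want to check the count mechanically, here is an instrumented C++ sketch (my own; it assumes, as above, that each assignment, comparison, increment and the cin each cost one operation):

    #include <iostream>

    int main() {
        const int n = 5;                 // stand-in for cin >> n
        int A[n + 1][n + 1];
        long long ops = 1;               // the cin >> n operation
        ops++;                           // i = 1
        for (int i = 1; ; i++) {
            ops++;                       // comparison i <= n
            if (!(i <= n)) break;
            ops++;                       // j = 1
            for (int j = 1; ; j++) {
                ops++;                   // comparison j <= n
                if (!(j <= n)) break;
                A[i][j] = 0; ops++;      // assignment
                ops++;                   // j++
            }
            ops++;                       // i++
        }
        ops++;                           // i = 1 (second loop)
        for (int i = 1; ; i++) {
            ops++;                       // comparison i <= n
            if (!(i <= n)) break;
            A[i][i] = 1; ops++;          // assignment
            ops++;                       // i++
        }
        std::cout << ops << " vs 3n^2 + 7n + 5 = "
                  << 3 * n * n + 7 * n + 5 << "\n";
    }

Both print 115 for n = 5, so 3n^2 + 7n + 5 can be checked directly against whatever counting convention you adopt.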

worst case running time calculation

It's from homework, but I'm asking for a general method.
Calculate the following code's worst case running time.
    int sum = 0;
    for (int i = 0; i*i < N; i++)
        for (int j = 0; j < i*i; j++)
            sum++;
The answer is N^(3/2); could anyone help me through this?
Is there a general way to calculate this?
This is what I thought:
when i = 0, sum++ will be called 0 times
when i = 1, sum++ will be called 1 time
when i = 2, sum++ will be called 4 times
...
in general, sum++ will be called i^2 times for a given i
so the worst-case cost will be
0 + 1 + 4 + 9 + 16 + ... + i^2
but what next? I'm lost here...
You want to count how many times the innermost cycle will run.
The outer one will run from i = 0 up to i = sqrt(N) (since the condition is i*i < N).
For each iteration of the outer one the inner one will run i^2 times.
Thus the total number of times the inner one will run is:
1^2 + 2^2 + 3^2 + ... + sqrt(N)^2
There is a formula:
1^2 + 2^2 + ... + k^2 = k(k+1)(2k+1) / 6 = O(k^3).
In your case k = sqrt(N).
Thus the total complexity is O(sqrt(N)^3) = O(N^(3/2)).
Your algorithm can be converted to the following shape:
    int sum = 0;
    for (int i = 0; i < Math.sqrt(N); i++)
        for (int j = 0; j < i*i; j++)
            sum++;
Therefore, we may straightforwardly and formally calculate the total number of sum++ calls:
    Σ_{i=0}^{√N - 1} i^2 ≈ (√N)^3 / 3 = N^(3/2) / 3 = Θ(N^(3/2))
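A quick numerical check (my own sketch) that the count of sum++ calls grows like N^(3/2)/3:

    #include <cmath>
    #include <iostream>

    int main() {
        for (long long N = 100; N <= 100000000; N *= 100) {
            long long sum = 0;
            for (long long i = 0; i * i < N; i++)
                sum += i * i;            // the inner loop body runs i*i times
            std::cout << "N = " << N << ": sum = " << sum
                      << ", N^(3/2)/3 ~ " << std::pow((double)N, 1.5) / 3.0 << "\n";
        }
    }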
You are approaching this problem in the wrong way. To count the worst-case time, you need to find the maximum number of operations that will be performed. Because you have only a single operation in a double loop, it is enough to find out how many times the inner loop will run.
You can do this by examining the limits of your loops. For the outer loop it is:
    i^2 < N  =>  i < sqrt(N)
The limit for your inner loop is
    j < i^2
You can substitute into the second inequality to get j < N.
Because these are nested loops, you multiply their limits to get the final result:
    sqrt(N) * N = N^(3/2)
(an upper bound that happens to be tight here, as the other answers show).
