Time complexity analysis of algorithm

Hi, I'm trying to analyze the time complexity of this algorithm, but I'm having difficulty unraveling and counting how many times the final loop will execute. I realize that the first loop is log(n), but after that I can't seem to get to a sum which evaluates well. Here is the algorithm:
for (int i = 1; i <= n; i = 2*i) {
    for (int j = 1; j <= i; j = 2*j) {
        for (int k = 0; k <= j; k++) {
            // Some elementary operation here.
        }
    }
}
I cannot figure out how many times the k loop executes in general with respect to n.
Thanks for any help!

It is O(n).
1 + 2 + 4 + ... + 2^N == 2^(N + 1) - 1.
The last loop, for a specific j, executes j + 1 times (k runs from 0 to j), which is about j.
And for a specific i, the inner two loops together execute about 1 + 2 + 4 + ... + i times, which is about 2 * i.
So the total number of executions is about 2 * 1 + 2 * 2 + 2 * 4 + ... + 2 * n, which is about 4 * n.
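If you want to check this empirically, here is a small counting harness (a hypothetical driver, not part of the question) that tallies the elementary operation and compares the tally against the 4n estimate:

#include <cstdio>

int main() {
    for (long n = 1; n <= (1L << 24); n *= 16) {
        long count = 0;
        for (long i = 1; i <= n; i = 2 * i)
            for (long j = 1; j <= i; j = 2 * j)
                for (long k = 0; k <= j; k++)
                    count++;                 // stands in for the elementary operation
        printf("n = %10ld  count = %10ld  count/n = %.3f\n", n, count, (double)count / n);
    }
    return 0;
}

The ratio count/n approaches 4 as n grows, matching the estimate above.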

Related

I calculated this simple loop complexity incorrectly, how to calculate it correctly? [duplicate]

The loop is
int count = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < i; j++)
        count++;
I calculated the complexity as n * n(n+1)/2, so it would be n^3, but the answer is n^2. Why?
You just need to check that j goes from 0 to i - 1. As i goes from 0 to n - 1, the total count is:
0 + 1 + 2 + ... + (n - 1)
= n * (n - 1) / 2 = (n² - n)/2 = O(n²)
The extra n multiplying the answer in your calculation comes from nowhere: the sum n(n+1)/2 already accounts for every iteration of the outer loop, so there is nothing left to multiply by n.
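A tiny counting version (hypothetical) makes this concrete:

int n = 1000, count = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < i; j++)
        count++;
// count == n * (n - 1) / 2 == 499500 for n == 1000, i.e. Θ(n²), not Θ(n³)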

Big O complexity on dependent nested loops

Can I get some help in understanding how to solve this tutorial question? I still do not understand my professor's explanation, and I am unsure of how to count the big O for the third/innermost loop. She explains that the answer for this algorithm is O(n^2) and that the 2nd and 3rd loops have to be seen as one loop with a big O of O(n). Can someone please explain the big O notation for the 2nd/3rd loops in basic layman's terms?
Assuming n = 2^m:
for (int i = n; i > 0; i--) {
    for (int j = 1; j < n; j *= 2) {
        for (int k = 0; k < j; k++) {
        }
    }
}
As far as I understand, the first loop has a big O notation of O(n)
Second loop = log(n)
Third loop = log(n) (since the number of times it will be looped has been reduced by log n) * 2^(2^m - 1) (to represent the increase in j?)
Let's add a print statement to the innermost loop:
for (int j = 1; j < n; j *= 2) {
    for (int k = 0; k < j; k++) {
        print(1)
    }
}
The output is:
j = 1:   1
j = 2:   1 1
j = 4:   1 1 1 1
...
j = n/2: 1 1 1 ... (n/2 times)
(j stops at n/2, since j < n and j doubles each time.)
The question boils down to how many 1s this will print. That number is
2^0 + 2^1 + 2^2 + ... + 2^(m-1)
= 2^m - 1
= n - 1
= O(n)
assuming you know why 1 + 2 + 4 + ... + n/2 = n - 1 (it is a geometric sum).
O-notation is an upper bound, so it is correct to say the algorithm is O(n^2). In fact that bound is tight: the second and third loops together do Θ(n) work, and the outer loop runs n times, so the total is Θ(n^2).
The key is the geometric growth of j. The inner loop runs j times, and j doubles on every pass of the middle loop, so the total work of the two inner loops is dominated by the last term: 1 + 2 + 4 + ... + n/2 = n - 1. That is why your teacher says to view the second and third loops as O(n) together.
If the second loop runs log2(n) = m times, then the second and third loops together cost O(2^0 + 2^1 + ... + 2^(m-1)) = O(2^m - 1) = O(n). Note that the inner counts 1, 2, 4, ..., n/2 form a geometric series, not the arithmetic series 1 + 2 + 3 + ... + log(n).
for (int i = n; i > 0; i--) {          // This runs n times.
    for (int j = 1; j < n; j *= 2) {   // This runs at most log2(n) times, i.e. m times.
        for (int k = 0; k < j; k++) {  // This runs j times; summed over j = 1, 2, 4, ..., n/2 that is n - 1 times.
        }
    }
}
Hence, the overall complexity is the outer loop count times the total work of the two inner loops: n * Θ(n) = Θ(n^2).
The upper bound O(n^2) is tight here; a bound such as O(n * m^2) = O(n * log^2(n)) would undercount, because the innermost loop runs up to n/2 times, not m times.
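As a quick empirical check (a hypothetical driver, assuming n is a power of two as the question does), counting the innermost iterations gives exactly n(n - 1):

int m = 10, n = 1 << m;                // n = 2^m = 1024
long count = 0;
for (int i = n; i > 0; i--)
    for (int j = 1; j < n; j *= 2)
        for (int k = 0; k < j; k++)
            count++;
// count == (long)n * (n - 1) == 1047552, i.e. Θ(n^2)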

Time complexity of nested loops where k < j < i < n

I would like to know the time complexity of this algorithm and how it is calculated.
for (i = 1; i < 2*n; i++) {
    for (j = 1; j < i; j++) {
        for (k = 1; k < j; k++) {
            // do something
        }
    }
}
Assume the inner statement takes constant time.
The inner loop runs (j-1) times, hence its run time is
t_inner(j) = Sum {k from 1 to j-1} 1
= j-1
The middle loop runs i-1 times. Its run time is:
t_middle(i) = Sum { j from 1 to i-1 } t_inner(j)
= Sum { j from 1 to i-1 } (j-1)
= (i-1)*(i-2)/2
= 1/2 * (2 - 3*i + i^2)
The outer loop runs 2n-1 times. Its run time is:
t_outer(n) = Sum { i from 1 to 2n-1 } t_middle(i)
= Sum { i from 1 to 2n-1 } 1/2 * (2 - 3*i + i^2)
= 1/2 * (2*(2n-1) - 3n*(2n-1) + (2n-1)*(2n)*(4n-1)/6)
= 1/3 * (4n^3 - 12n^2 + 11n - 3)
From the last formula, we see that the time complexity is O(n^3).
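To double-check the closed form, a small counting harness (a hypothetical driver, not from the question) reproduces it exactly:

long n = 100, count = 0;
for (long i = 1; i < 2*n; i++)
    for (long j = 1; j < i; j++)
        for (long k = 1; k < j; k++)
            count++;
// count == (4*n*n*n - 12*n*n + 11*n - 3) / 3 == 1293699 for n == 100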

Running time of algorithm in worst case

What's the running time of the following algorithm in the worst-case, assuming that it takes a constant time c1 to do a comparison and another constant time c2 to swap two elements?
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < n - 1; j++)
    {
        if (array[j] > array[j+1])
        {
            swap(array[j], array[j+1]);
        }
    }
}
I get 2 + 4n^2. Here is how I calculate it (starting from the inner loop):
The inner loop runs (n-1) times.
The first time it runs, there is the initialisation of j and the comparison of j with (n-1) to know whether to enter the loop. This gives 2 instructions.
Each time it runs, there is the comparison of j with (n-1) to know whether to continue the loop, the increment of j, the array comparison and the swapping. This gives 4 instructions which run (n-1) times, therefore 4(n-1) instructions.
The inner loop thus contains 2+4(n-1) instructions.
The outer loop runs n times.
The first time the outer loop runs, there is the initialisation of i and the comparison of i with n. This gives 2 instructions.
Each time it runs, there is the comparison of i with n, the increment of i and the inner loop. This gives (2+(2+4(n-1)))n instructions.
Altogether, there are 2+(2+(2+4(n-1)))n instructions, which gives 2+4n^2.
Is it correct?
You forgot to account for the addition of j+1 for the index in the if statement and the swap call, and the n-1 calculation in the inner for loop will be an extra instruction.
Remember, every calculation counts as an instruction, which means that essentially every operator in your code adds an instruction, not just the comparisons, function calls, and loop control stuff.
for (int i = 0; i < n; i++)             // (1 + 1) + n(1 + 1 + innerCost): (init + comp) + numLoops(comp + inc + innerCost)
{
    for (int j = 0; j < n - 1; j++)     // (1 + 2) + (n-1)(1 + 1 + 1 + inner): (init + sub + comp) + numLoops(sub + comp + inc + innerCost)
    {
        if (array[j] > array[j+1])      // 1 + 1 (1 for the comparison, 1 for the + in j+1)
        {
            swap(array[j], array[j+1]); // 1 + 1 (1 for the function call, 1 for the + in j+1)
        }
    }
}
runtime = (1 + 1) + n(1 + 1 + (1 + 2) + (n-1)(1 + 1 + 1 + (1 + 1 + 1 + 1)))
runtime = 2 + n(2 + 3 + (n-1)(3 + 2 + 2))
runtime = 2 + n(5 + (n-1)(7))
runtime = 2 + n(5 + 7n - 7)
runtime = 2 + n(7n - 2)
runtime = 2 + 7n^2 - 2n = 7n^2 - 2n + 2
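If you want to sanity-check the closed form, you can tally the same per-operation charges with a counter (a hypothetical instrumentation of this answer's cost model, not real machine instructions):

int n = 50;
long cost = 2;                 // outer init + first outer comparison
for (int i = 0; i < n; i++) {
    cost += 2;                 // outer comparison + increment
    cost += 3;                 // inner init + first (n-1) subtraction + first inner comparison
    for (int j = 0; j < n - 1; j++) {
        cost += 3;             // (n-1) subtraction + inner comparison + increment
        cost += 4;             // j+1, array comparison, j+1 again, swap call (worst case: swap always runs)
    }
}
// cost == 7*n*n - 2*n + 2 == 17402 for n == 50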

Runtime of a loop that decays exponentially?

Here n, the input to the function, can be any integer.
i = n, total = 0;
while (i > 0) {
    for (j = 0; j < i; j++)
        for (k = 0; k < i; k++)
            total++;
    i = i / 4;
}
What is the time complexity of this function?
One way to think about this is to look at the loops independently.
This inner loop nest:
for (j = 0; j < i; j++)
    for (k = 0; k < i; k++)
        total++;
will execute a total of Θ(i²) operations, since each loop independently runs i times.
Now, let's look at the outer loop:
while (i > 0) {
    /* do Theta(i^2) work */
    i = i / 4;
}
This loop will run a total of at most 1 + log₄ n times, since on each iteration i is divided by 4, and this can only happen 1 + log₄ n times before i drops to zero. The question, then, is how much work will be done.
One way to solve this is to write a simple recurrence relation for the total work done. We can think of the loop as a tail-recursive function, where each iteration does Θ(i²) work and then makes a recursive call on a subproblem of size n / 4. This gives this recurrence:
T(n) = T(n / 4) + Θ(n²).
Applying the Master Theorem, we see that a = 1, b = 4, and c = 2. Since log_b(a) = log₄(1) = 0 and 0 < c, the Master Theorem says (by Case 3) that the runtime solves to Θ(n²). Therefore, the total work done is Θ(n²).
Here's another way to think about this. The first iteration of the loop does n² work. The next does (n / 4)² = n² / 16 work. The next does (n / 16)² = n² / 256 work. In fact, iteration x of the loop will do n² / 16^x work. Therefore, the total work done is given by
n²(1 + 1/16 + 1/16² + 1/16³ + ...)
≤ n²(1 / (1 - 1/16))
= Θ(n²)
(This uses the formula for the sum of an infinite geometric series.)
Hope this helps!
The running time is O(n^2), and the number of times that total is incremented is asymptotic to n^2 / (1 - 1/16) = (16/15) n^2, which is about 1.067 n^2.
The recurrence is going to be
T(n) = n^2 + T(n/4)
= n^2 + n^2/16 + T(n/16)
= n^2 (1 + 1/16 + 1/16^2 + ...)
= n^2 / (1 - 1/16)
This code fragment:
i = n, total = 0;
while (i > 0) {
    for (j = 0; j < i; j++)
        for (k = 0; k < i; k++)
            total++;
    i = i / 4;
}
is equivalent to this one:
for (i = n; i > 0; i = i / 4)
    for (j = 0; j < i; j++)
        for (k = 0; k < i; k++)
            total++;
Therefore, methodically (and empirically verified), you may obtain the following using Sigma notation:*
total = Σ_{t=0}^{⌊log₄ n⌋} ⌊n / 4^t⌋² ≈ n² Σ_{t≥0} (1/16)^t = (16/15) n² = Θ(n²)
* With many thanks to WolframAlpha.
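To see the 16/15 constant in practice, here is a small driver (hypothetical, with n chosen as a power of 4) that counts the increments of total:

#include <cstdio>

int main() {
    long long n = 1 << 10, total = 0;   // n = 1024 = 4^5
    for (long long i = n; i > 0; i = i / 4)
        for (long long j = 0; j < i; j++)
            for (long long k = 0; k < i; k++)
                total++;
    // total == 1118481, and total / n^2 == 1.0667, i.e. about 16/15
    printf("total = %lld, ratio = %.4f\n", total, (double)total / ((double)n * n));
    return 0;
}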
