How many times does a while loop get executed? - algorithm

Yesterday I applied for a computer engineering master's degree, and this was one of their questions. I could not solve it, so I am very curious.
...
i = 1;
while (i <= n)
{
    i = i * 2;
}
...
How many times will this while loop get executed? Please give your answer as a formula, e.g. log n...
Thanks

On the xth iteration of the loop, i equals 2^x (you can easily prove this by induction). Suppose the loop stops after X iterations, which means n < 2^X. This also means that on iteration X-1 the loop was still running, so 2^(X-1) ≤ n. In other words:
2^(X-1) ≤ n < 2^X
From there, finding X as a function of log2(n) is easy.
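As a quick sanity check, here is a minimal Python sketch (the helper name and test values are my own) that counts the loop's iterations and compares them against floor(log2(n)) + 1:

import math

def iterations(n):
    # count how many times the doubling loop runs
    i, count = 1, 0
    while i <= n:
        i = i * 2
        count += 1
    return count

for n in [1, 5, 16, 100, 1000]:
    print(n, iterations(n), math.floor(math.log2(n)) + 1)
    # the two counts agree for every n >= 1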

Related

How can we find the Time Complexity of the Algorithm

I want to know the time complexity of the code given below; I suspect it is O(sqrt(n)).
i = n, sum = 0
while (i >= 0) {
    i /= 2
    sum += i*i*i
}
I am really confused; can anyone help me out and explain?
If you're unsure, you can always use Python's time module to get a rough idea of the complexity. It would go something like this:
import time
start = time.time()  # put this before the loop
# ... the loop under test goes here ...
end = time.time()    # put this after the loop
print(end - start)   # this gives you the evaluation time of your loop
Find evaluation times for different n's and determine the complexity.
Just by looking at it, though: your loop gets executed roughly log2(n) times, and inside it you have two multiplications and one division (so nothing complex). Therefore, I would assume O(log(n)) is a reasonable guess.
As mentioned in the comments, I'm assuming that the code was meant to be written as
i = n, sum = 0
while (i > 0) { // <--- Change >= to >
    i /= 2
    sum += i*i*i
}
since otherwise the code would be an infinite loop. With this in mind, let's take a look at what this code is doing.
For starters, note that the sum variable isn't doing anything that impacts our time complexity. On each iteration, it gets bigger, but we're doing only O(1) work to update it. That means that the time complexity here is going to depend on how many times the loop runs. Notice that, across the different iterations of the loop, the value of i will take on the sequence
n, n / 2, n / 4, n / 8, n / 16, n / 32, ...
and, in particular, on the kth iteration of the loop the value of i will be equal to n / 2^k (ignoring rounding down, which we can safely do here). The question, then, is at what iteration of the loop we end up with n / 2^k < 1, which is when the loop will stop. Solving, we get that
n / 2^k < 1
n < 2^k
log2 n < k
So this loop will stop as soon as the number of loop iterations k is greater than log2 n. This means that we do Θ(log n) loop iterations, of which each iteration does O(1) work, so the total work done is Θ(log n).
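A quick empirical check (a small Python sketch; the function name is mine) shows the same count:

def halving_iterations(n):
    # count the iterations of the corrected halving loop
    i, count = n, 0
    while i > 0:
        i //= 2
        count += 1
    return count

print(halving_iterations(8))  # 4, i.e. floor(log2(8)) + 1
print(halving_iterations(5))  # 3, i.e. floor(log2(5)) + 1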

How to calculate time complexity of existing code in production [duplicate]

This question already has answers here:
Big O, how do you calculate/approximate it?
I have gone through a couple of examples of analyzing the time complexity of a program using the operation-count and step-count techniques, but those examples are small and straightforward. In real-world programming, however, if someone gives you production or live code and asks for its worst-case time complexity, how does one start analyzing it?
Is there any technique or rule of thumb for analyzing such a program? Say I have a function like the one below, with branching inside a loop. For some values of n it goes deep, but for others it does not.
So the maximum number of operations is the worst case, and the minimum number of operations is the best case.
How can we know when the maximum number of operations occurs if we have this kind of conditional statement? (Other functions can have much deeper conditionals, not just loops.)
int l = 0;
while (l <= n) {
    int m = l + (n-l)/2;
    if (arr[m] == x) return;
    if (arr[m] < x) l = m + 1;
    else n = m - 1;
}
How does one calculate the worst and best case by looking at the above code? Does one perform a step count for the program, substituting n = 1 through 20 to get some values, and then try to derive the function? I would like to know how people analyze the time complexity of existing code when it has this kind of branching.
A step-by-step analysis, or a set of steps to follow to solve the above problem, would be greatly helpful.
Since each iteration either moves l past the midpoint m or lowers n to m-1, the following scenario gives the maximum number of operations:
In each iteration, the else branch is taken, setting n to m-1.
Let's see what happens in this case. Since n is roughly halved each time while l stays at 0, after O(log(n)) iterations the condition l <= n fails.
Therefore, the time complexity of the loop is O(log(n)).
Notice that the other cases move l towards n even faster. For example, if l = m + 1 is executed first, then l becomes about (n-1)/2 + 1, and in the next iteration m lands near n-1; hence we reach the end of the loop in just a couple of iterations.
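To see the worst case concretely, here is a small Python sketch (the function name and test data are my own) that mirrors the loop and counts iterations when x is smaller than every element, so the else branch is taken every time:

def search_iterations(arr, x):
    # mirrors the binary-search loop above, counting iterations
    l, n = 0, len(arr) - 1
    count = 0
    while l <= n:
        count += 1
        m = l + (n - l) // 2
        if arr[m] == x:
            break
        if arr[m] < x:
            l = m + 1
        else:
            n = m - 1
    return count

# x = 0 is below every element, forcing the else branch each iteration
print(search_iterations(list(range(1, 1025)), 0))  # 10, i.e. log2(1024)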

Find the number of instructions of an algorithm

Given this algorithm (a > 0, b > 0):
while (a >= b) {
    k = 1;
    while (a >= k*b) {
        a = a - k*b;
        k++;
    }
}
My question: I have to find the time complexity of this algorithm, and to do so I must find the number of instructions, but I couldn't find it. Is there a way to find this number, and if not, how can I find its time complexity?
What I have done : First of all I tried to find the number of iterations of the first loop and I found a pattern : a_i = a - (i(i+1)/2)*b where i is the number of iterations. I've spent hours doing some manipulations on it but I couldn't find anything relevant (I've found weird results like q² <= a/b < q²+q where q is the number of iterations).
You have correctly calculated that the value of a after the i-th iteration of the inner loop is:
a_i = a_j0 - (i(i+1)/2) * b
where a_j0 is the value of a at the start of the j-th outer loop. The stopping condition for the inner loop is:
a_i < (i+1) * b
which can be solved as a quadratic inequality:
a_j0 - (i(i+1)/2) * b < (i+1) * b  =>  i ≈ sqrt(2 * a_j0 / b)
Therefore the inner loop is approximately O(sqrt(a_j0 / b)). The next starting value of a satisfies:
a_(j+1)0 = a_i < (i+1) * b ≈ sqrt(2b * a_j0)
scaling roughly as sqrt(2b * a_j0). It would be quite tedious to compute the time complexity exactly, so let's apply the above approximations from here on:
a_n ≈ sqrt(2b * a_(n-1)),   t_n ≈ sqrt(a_(n-1) / b)
where a_n replaces a_j0, and t_n is the run-time of the inner loop – and of course the total time complexity is just the sum of t_n. Note that the first term is given by n = 1, and that the input value of a is defined to be a_0.
Before directly solving this recurrence, note that since the second term t_2 is already proportional to the square root of the first term t_1, the first term dominates all the others in the sum.
The total time complexity is therefore just O(sqrt(a / b)).
Update: numerical tests.
Note that, since all changes in the value of a are proportional to b, and all loop conditions are also proportional to b, the function can be "normalized" by setting b = 1 and only varying a.
Javascript test function, which measures the number of times that the inner loop executes:
function T(n)
{
    let t = 0, k = 0;
    while (n >= 1) {
        k = 1;
        while (n >= k) {
            n -= k;
            k++; t++;
        }
    }
    return t;
}
Plot of sqrt(n) against T(n):
A convincing straight line, which confirms that the time complexity is indeed the half power, O(sqrt(n)).

time complexity of three codes where variables depend on each other

1) i = s = 1;
   while (s <= n)
   {
       i++;
       s = s + i;
   }
2) for (int i = 1; i <= n; i++)
       for (int j = 1; j <= n; j += i)
           cout << "*";
3) j = 1;
   for (int i = 1; i <= n; i++)
       for (j = j*i; j <= n; j = j + i)
           cout << "*";
Can someone explain the time complexity of these three pieces of code?
I know the answers, but I can't understand how they were derived.
1) To figure this out, we need to work out how large s is on the x'th iteration of the loop. Then we'll know how many iterations occur before the condition s > n is reached.
On the x'th iteration, the variable i has value x + 1
And the variable s has value equal to the sum of i for all previous values. So, on that iteration, s has value equal to
sum_{y = 1 .. x} (y+1) = O(x^2)
This means that we reach s > n on iteration x = O(sqrt(n)). So that's the running time of the loop.
If you aren't sure why the sum is O(x^2), I gave an answer to another question like this once, and the same technique applies. In this particular case you could also use the identity
sum_{y = 1 .. x} y = (x+1 choose 2) = x(x+1) / 2
This identity can easily be proved by induction on x.
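A quick empirical check for part 1 (a small Python sketch; names are mine) shows the square-root growth:

def loop1_iterations(n):
    # count iterations of: i = s = 1; while s <= n: i++; s += i
    i = s = 1
    count = 0
    while s <= n:
        i += 1
        s += i
        count += 1
    return count

for n in [100, 10_000, 1_000_000]:
    print(n, loop1_iterations(n), round((2 * n) ** 0.5))
    # the iteration count tracks sqrt(2n), i.e. O(sqrt(n))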
2) Try to analyze how long the inner loop runs, as a function of i and n. Since we start at one, end at n, and count up by i, it runs n/i times. So the total time for the outer loop is
sum_{i = 1 .. n} n/i = n * sum_{i = 1 .. n} 1 / i = O(n log n)
The series sum_{i = 1 .. n} 1/i is called the harmonic series. It is well known that its partial sums grow as O(log n). I can't give a simple proof here; it can be proved using calculus, and this is a series you just have to know. If you want to see an elementary argument, you can look on Wikipedia at the "comparison test". The argument there only shows the sum is >= log n, but the same technique can be used to show it is <= O(log n) as well.
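Another small Python check (names mine) makes the n log n growth of part 2 visible:

import math

def loop2_operations(n):
    # count inner-loop bodies of: for i in 1..n: for j in 1..n step i
    count = 0
    for i in range(1, n + 1):
        j = 1
        while j <= n:
            count += 1
            j += i
    return count

for n in [100, 1000, 10000]:
    print(n, loop2_operations(n), round(n * math.log(n)))
    # the count grows proportionally to n log n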
3) This looks like kind of a trick question. The inner loop body only runs during the first outer iteration (i = 1): once it exits with j = n + 1, we can never re-enter the inner loop, because no later line makes j <= n again. We will run j = j * i once per outer iteration, where i is a positive number, so j is going to end up at least as large as n!. For any significant value of n, this will cause an overflow in a language with fixed-width integers. Ignoring that possibility, the code performs O(n) operations in total.
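The following sketch for part 3 (Python, where j cannot overflow; names mine) confirms that all the inner-loop work happens on the first outer pass:

def loop3_operations(n):
    # count inner-loop bodies of: j = 1; for i in 1..n: for (j = j*i; j <= n; j += i)
    count = 0
    j = 1
    for i in range(1, n + 1):
        j = j * i
        while j <= n:
            count += 1
            j += i
    return count

print(loop3_operations(10))    # 10: the inner body runs n times, all with i = 1
print(loop3_operations(1000))  # 1000, for the same reason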

Theta Notation and Worst Case Running time nested loops

This is the code I need to analyse:
i = 1
while i < n do
    j = 0
    while j <= i do
        j = j + 1
    i = 2i
So, the outer loop should run log2(n) times, and the innermost loop should run log2(n) * (i + 1) times, but I'm pretty sure that's wrong.
How do I use theta notation to prove it?
An intuitive way to think about this is to see how much work your inner loop does for a fixed value of the outer loop variable i. It's clearly as much as i itself. Thus, if the value of i is 256, then you will do j = j + 1 that many times.
Thus, the total work done is the sum of the values that i takes over the outer loop's execution. That variable increases rapidly to catch up with n. Its contributions, as given by i = 2i (it should be i = 2*i), are going to look like 2, 4, 8, 16, ..., because we start with 2 iterations of the inner loop when i = 1. This is a geometric series a, ar, ar^2, ... with a = 2 and r = 2. The last term, as you figured out, will be about n, and there will be log2 n terms in the series. And that is a simple summation of a geometric series.
It doesn't make much sense to have a worst case or a best case for this algorithm, because there are no different permutations of the input, which is just a number n in this case. A best or worst case is relevant when a particular input (e.g. a particular sequence of numbers) affects the running time of the algorithm.
The running time then is the sum of the geometric series (a * (r^num_terms - 1) / (r - 1)):
T(n) = 2 + 4 + ... + 2^(log2 n)
     = 2 * (2^(log2 n) - 1)
     = 2 * (n - 1)
     ≤ 3n = O(n)
Thus, you can't be doing work that is more than some constant multiple of n. Hence, the running time of this algorithm is O(n).
You can't be doing work that is less than some (other) constant multiple of n either, since you have to go through the increments in the inner loop as shown above. Thus, the running time of this algorithm is also ≥ c*n, i.e. it is Ω(n).
Together, this means that running time of this algorithm is Θ(n).
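A short Python sketch (names mine) that counts the inner-loop increments supports the Θ(n) bound:

def total_inner_iterations(n):
    # total number of j = j + 1 steps across the whole run
    i, work = 1, 0
    while i < n:
        j = 0
        while j <= i:
            j += 1
            work += 1
        i = 2 * i
    return work

for n in [16, 1024, 2**20]:
    print(n, total_inner_iterations(n))
    # the work stays between n and 3n, i.e. Theta(n)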
You can't use i in your final expression; only n.
You can easily see that the inner loop executes i times each time it is reached. And it sounds like you've figured out the different values that i can have. So add up those values, and you have the total amount of work.
