How to determine the time complexity of this loop?

x = 1;
while (x < n)
{
    x = x + n/100;
}
I'm trying to figure out whether it's O(n) or O(1), because no matter what we put in n's place, I think the loop will run just 10 times.

Let's say n = 1.1: then the loop runs 10 times. If n = 1.2, it runs 17 times; if n = 2, it runs 50 times; and once n >= 101 the loop runs exactly 100 times, even for n = 10^10000. From there you can figure out the rest.
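If you want to check those numbers, here's a minimal Python sketch (the helper name is my own) that just counts the iterations:

def iterations(n):
    # count how many times the loop body x = x + n/100 executes
    x = 1.0
    count = 0
    while x < n:
        x = x + n / 100
        count += 1
    return count

for n in (1.1, 1.2, 2, 101, 10**10):
    print(n, iterations(n))  # prints 10, 17, 50, 100, 100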

Unfortunately you're wrong about it always running 10 times, and even a bit of manual calculation makes that clear: the iteration count changes with n (just look at n = 1, 2, 3, 4, 5), and it doesn't grow linearly, so it isn't Θ(n) either. Be careful with O(1), though: varying iteration counts alone don't rule it out, since O(1) only requires the count to be bounded, not constant. Examine the following short Python program:
def t(n):
    x = 1
    c = 0
    while x < n:
        c += 1
        x += n / 100
    return c

a = []
for i in range(10000):
    a += [i / 100 + 1]

with open("out.csv", "w") as f:
    for i in a:
        f.write(str(i) + "," + str(t(i)) + "\n")
Using Excel or some other application you can easily plot the number of iterations taken against n. [Chart: iteration count vs. n, rising steeply and flattening out at 100.] Any n <= 1 takes 0 iterations, and any n >= 101 takes exactly 100. In fact the iteration count is ceil(100 - 100/n), since x must climb from 1 to n in steps of n/100. So while Big-O notation wasn't my best subject, the right conclusion is that the count is bounded above by 100 for every n, which makes the loop O(1); the curve only looks logarithmic over the range 1 < n < 101, where 100(1 - 1/n) is still climbing.


How can we find the Time Complexity of the Algorithm

I want to know the time complexity of the given code below; I suspect it is O(√n):
i = n, sum = 0
while (i >= 0) {
    i /= 2
    sum += i*i*i
}
I am really confused. Can anyone help me out and explain?
If you're unsure, you can always use the "time" module to get a rough idea of the complexity. It would go something like this:
import time
start = time.time()  # put this before the loop
end = time.time()    # put this after the loop
print(end - start)   # this gives you the evaluation time of your loop
Find evaluation times for different n's and determine the complexity.
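For instance, a rough sketch along these lines (I've used > instead of >= so the loop terminates, as discussed in the answer below; the function name is just for illustration):

import time

def f(n):
    # the loop from the question, with the condition corrected so it terminates
    i, total = n, 0
    while i > 0:
        i //= 2
        total += i * i * i
    return total

for n in (10**3, 10**6, 10**9, 10**12):
    start = time.time()  # put this before the loop
    f(n)
    end = time.time()    # put this after the loop
    print(n, end - start)  # grows very slowly while n jumps by factors of 1000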
Just by looking at it, though, your loop gets executed roughly log2(n) times, and inside it you have two multiplications and one division (so nothing expensive). Therefore, I would say O(log n) is a reasonable guess.
As mentioned in the comments, I'm assuming that the code was meant to be written as
i = n, sum = 0
while (i > 0) {   // <--- Change >= to >
    i /= 2
    sum += i*i*i
}
since otherwise the code would be an infinite loop. With this in mind, let's take a look at what this code is doing.
For starters, note that the sum variable isn't doing anything that impacts our time complexity. On each iteration, it gets bigger, but we're doing only O(1) work to update it. That means that the time complexity here is going to depend on how many times the loop runs. Notice that, across the different iterations of the loop, the value of i will take on the sequence
n, n / 2, n / 4, n / 8, n / 16, n / 32, ...
and, in particular, on the kth iteration of the loop the value of i will be equal to n / 2^k (ignoring rounding down, which we can safely do here). The question, then, is at what iteration of the loop we end up with n / 2^k < 1, which is when the loop will stop. Solving, we get that
n / 2^k < 1
n < 2^k
log2 n < k
So this loop will stop as soon as the number of loop iterations k is greater than log2 n. This means that we do Θ(log n) loop iterations, of which each iteration does O(1) work, so the total work done is Θ(log n).
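If you'd like to check this empirically, here is a small sketch (names are mine) that counts the iterations and compares them to log2 n:

import math

def iterations(n):
    # the corrected loop, instrumented with a counter
    i, count = n, 0
    while i > 0:
        i //= 2  # integer halving, as in the original i /= 2
        count += 1
    return count

for n in (10, 1000, 10**6, 10**9):
    print(n, iterations(n), math.floor(math.log2(n)) + 1)  # the last two columns agree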

What is the time complexity of this code, where one loop grows geometrically and the other decays algebraically?

I have a guess, but I am not sure.
This is the problem:
for (a = 1; a < n; a = 2*a) do {
    for (b = n; b > 0; b = b - a) do {
    }
}
This is my first question on stackoverflow, so I hope the formatting was right.
Thank you very much.
There's a useful maxim for reasoning about big-O notation that goes like this:
When in doubt, work inside out!
More specifically, if you're trying to figure out the complexity of a loop nest, start with the innermost loop and keep replacing it with a simpler statement summarizing the amount of work done.
In your case, you have this loop nest:
for (a = 1; a < n; a = 2*a) do {
    for (b = n; b > 0; b = b - a) do {
    }
}
Let's begin with the inner loop:
for (b = n; b > 0; b = b - a) do {
}
How many times will this loop run? Well, we begin with b equal to n, and on each iteration b decreases by a. That means that the number of iterations of this loop is roughly n / a, so the complexity of this loop is Θ(n / a). We can therefore replace the inner loop with something to the effect of "do Θ(n / a) work" to get this simpler structure:
for (a = 1; a < n; a = 2*a) do {
    do Θ(n / a) work;
}
Now, let's think about how much work this loop does. Since the amount of work done inside the loop depends on the value of a, we're not going to multiply the number of iterations by the work done per iteration, since the work done per iteration isn't a constant. Instead, we'll add up how much work is done on each iteration of the loop.
Notice that the value of a increases as 1, 2, 4, 8, 16, 32, ..., until we overshoot n. Plugging those values into the work done inside the loop gives a runtime of
Θ(n / 1 + n / 2 + n / 4 + n / 8 + n / 16 + ... )
= n Θ(1/1 + 1/2 + 1/4 + 1/8 + 1/16 + ...)
You might recognize that the sum 1/1 + 1/2 + 1/4 + 1/8 + ... happens to converge to 2. (Do you see why?) As a result, we have that the runtime of this code is
n Θ(2)
= Θ(n).
So the overall work done here is Θ(n).
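If you want a sanity check, here's a quick sketch (my own instrumentation) that counts how many times the innermost loop body executes:

def total_work(n):
    # count executions of the inner loop body in the nest analyzed above
    count = 0
    a = 1
    while a < n:
        b = n
        while b > 0:
            b = b - a
            count += 1
        a = 2 * a
    return count

for n in (100, 1000, 10000):
    print(n, total_work(n), 2 * n)  # the count stays near 2n (rounding adds O(log n))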
The main techniques we used to determine this were the following:
Work from the inside out, replacing each loop with a summary of how much work it does.
If a loop counter increases by k on each iteration and stops at n, the loop runs for a total of Θ(n / k) iterations. (This applies equally well if we run it backwards and start at n, decreasing by k.)
The sum of the geometric series 1 + 1/2 + 1/4 + 1/8 + 1/16 + ... out to infinity is 2.
If the work done by a loop is constant per iteration, just multiply the work per iteration by the number of iterations. If the work done per iteration isn't, it's often easier to sum up the work across all loop iterations and then try to simplify the sum.
Hope this helps!

Time complexity of three code snippets where variables depend on each other

1) i = s = 1;
   while (s <= n)
   {
       i++;
       s = s + i;
   }

2) for (int i = 1; i <= n; i++)
       for (int j = 1; j <= n; j += i)
           cout << "*";

3) j = 1;
   for (int i = 1; i <= n; i++)
       for (j = j*i; j <= n; j = j + i)
           cout << "*";
Can someone explain the time complexity of these three snippets to me?
I know the answers, but I can't understand how they were derived.
1) To figure this out, we need to figure out how large s is on the x'th iteration of the loop. Then we'll know how many iterations occur until the condition s > n is reached.
On the x'th iteration, the variable i has value x + 1
And the variable s has value equal to the sum of i for all previous values. So, on that iteration, s has value equal to
sum_{y = 1 .. x} (y+1) = O(x^2)
This means that s exceeds n by the x = O(√n) iteration. So that's the running time of the loop.
If you aren't sure about why the sum is O(x^2), I gave an answer to another question like this once, and the same technique applies. In this particular case you could also use the identity
sum_{y = 1 .. x} y = ((x+1) choose 2) = x(x+1) / 2
This identity can be easily proved by induction on x.
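A quick empirical check of the O(√n) bound, as a sketch (the instrumentation is mine):

import math

def iterations(n):
    # snippet 1 from the question, with an added iteration counter
    i = s = 1
    count = 0
    while s <= n:
        i += 1
        s = s + i
        count += 1
    return count

for n in (100, 10000, 10**6):
    print(n, iterations(n), round(math.sqrt(2 * n)))  # counts track sqrt(2n)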
2) Try to analyze how long the inner loop runs, as a function of i and n. Since we start at one, end at n, and count up by i, it runs n/i times. So the total time for the outer loop is
sum_{i = 1 .. n} n/i = n * sum_{i = 1 .. n} 1 / i = O(n log n)
The series sum_{i = 1 .. n} 1/i is the harmonic series. It is well known that its partial sums grow as Θ(log n) (the infinite series itself diverges). I can't include a simple proof here, though one can be given using calculus; this is a series you just have to know. If you want to see an elementary argument, look at the "comparison test" on Wikipedia: the proof there only shows the sum is >= a constant times log n, but the same technique can be used to show it is O(log n) as well.
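You can also watch the n log n growth directly with a sketch like this (a Python transcription of snippet 2, with my own counter):

def stars(n):
    # snippet 2 from the question: count how many '*' would be printed
    count = 0
    for i in range(1, n + 1):
        j = 1
        while j <= n:
            count += 1
            j += i
    return count

for n in (100, 1000, 10000):
    harmonic = sum(1.0 / i for i in range(1, n + 1))
    print(n, stars(n), round(n * harmonic))  # the count tracks n * H_n = Θ(n log n)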
3) This looks like kind of a trick question. The inner loop body only runs during the first iteration of the outer loop: when i = 1 we get j = j*1 = 1, and the inner loop counts j up to n + 1. Once it exits with j = n + 1, we can never re-enter the inner loop, because every later j = j*i only makes j larger. Still, j = j*i executes once for each i, so j ends up at least as large as n!. For any significant value of n, that will overflow an int. Ignoring that possibility, the code performs O(n) operations in total: n increments when i = 1, plus O(1) work for each of the remaining n - 1 values of i.
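Since Python integers don't overflow, you can watch this behavior directly; a minimal sketch of snippet 3:

def inner_runs(n):
    # snippet 3 from the question: count executions of the inner loop body
    j = 1
    count = 0
    for i in range(1, n + 1):
        j = j * i  # after the first pass, this only pushes j further above n
        while j <= n:
            count += 1
            j = j + i
    return count

for n in (10, 100, 1000):
    print(n, inner_runs(n))  # exactly n each time: all the work happens at i = 1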

How to effectively calculate an algorithm's time complexity? [duplicate]

I'm studying algorithmic complexity and I'm still not able to determine the complexity of some algorithms... OK, I'm able to figure out basic O(N) and O(N^2) loops, but I'm having some difficulty with routines like this one:
// What is the time complexity of fun()?
int fun(int n)
{
    int count = 0;
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count += 1;
    return count;
}
OK, I know some people can calculate this with their eyes closed, but I would love to see a step-by-step approach if possible.
My first attempt to solve this would be to "simulate" an input and put the values in some sort of table, like below:
for n = 100
Step   i
1      100
2      50
3      25
4      12
5      6
6      3
7      1
OK, at this point I'm assuming that this loop is O(log n), but unfortunately, as I said, nobody has solved this problem step by step, so in the end I have no clue what was done...
In case of the inner loop I can build some sort of table like below:
for n = 100
Step   i     j
1      100   0..99
2      50    0..49
3      25    0..24
4      12    0..11
5      6     0..5
6      3     0..2
7      1     0..0
I can see that both loops are decreasing, and I suppose a formula can be derived from the data above...
Could someone clarify this problem? (The answer is O(n).)
Another simple way to probably look at it is:
Your outer loop initializes i at n and divides it by 2 after every iteration, so it executes log2(n) times: whenever you repeatedly divide a number by a base until it reaches 0, you perform that division a logarithmic number of times. Hence, the outer loop is O(log2 n).
Your inner loop iterates j from 0 to i on each pass of the outer loop. Since i takes a maximum value of n, the longest run of the inner loop is from 0 to n, so on its own it is O(n).
Now, your program runs like this:
Run 1: i = n, j = 0->n
Run 2: i = n/2, j = 0->n/2
Run 3: i = n/4, j = 0->n/4
.
.
.
Run x: i = n/(2^(x-1)), j = 0->[n/(2^(x-1))]
Now, it is tempting to multiply the two bounds, since running time often multiplies for nested loops, but here that would give O(log2 n) * O(n) = O(n log n), which is a valid upper bound yet not tight, because the inner loop shrinks on every run. Summing the actual work instead, n + n/2 + n/4 + ... < 2n, gives O(n) for your entire code.
Let's break this analysis up into a few steps.
First, start with the inner for loop. It is straightforward to see that this takes exactly i steps.
Next, think about which different values i will assume over the course of the algorithm. To start, consider the case where n is some power of 2. In this case, i starts at n, then n/2, then n/4, etc., until it reaches 1, and finally 0 and terminates. Because the inner loop takes i steps each time, then the total number of steps of fun(n) in this case is exactly n + n/2 + n/4 + ... + 1 = 2n - 1.
Lastly, convince yourself this generalizes to non-powers of 2. Given an input n, let m be the smallest power of 2 greater than n. Clearly n < m <= 2n, and since fun's step count only grows with its input, fun(n) takes at most 2m - 1 steps, which is at most 4n - 1. Thus fun(n) is O(n).
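To see the bound concretely, here is a direct Python transcription of fun (instrumented by me) compared against 2n:

def fun(n):
    count = 0
    i = n
    while i > 0:
        for j in range(i):  # the inner loop does exactly i steps
            count += 1
        i //= 2
    return count

for n in (100, 1000, 10**5):
    print(n, fun(n), 2 * n)  # fun(n) never exceeds 2n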

Theta notation and worst-case running time of nested loops

This is the code I need to analyse:
i = 1
while i < n do
    j = 0;
    while j <= i do
        j = j + 1
    i = 2i
So, the outer loop should run log2(n) times, and the innermost loop should run log2(n) * (i + 1) times, but I'm pretty sure that's wrong.
How do I use a theta notation to prove it?
An intuitive way to think about this is to see how much work your inner loop does for a fixed value of the outer loop variable i. It's clearly about as much as i itself: if the value of i is 256, you will execute j = j + 1 roughly that many times.
Thus, the total work done is the sum of the values that i takes over the outer loop's execution. That variable doubles rapidly to catch up with n: by i = 2i (it should be i = 2*i), the work per pass goes 2, 4, 8, 16, ..., since we already start with 2 iterations of the inner loop when i = 1. This is a geometric series a, ar, ar^2, ... with a = 2 and r = 2. The last term, as you figured out, will be about n, and there will be log2(n) terms in the series. And that leaves a simple summation of a geometric series.
It doesn't make much sense to have a worst case or a best case for this algorithm because there are no different permutations of the input which is just a number n in this case. Best case or worst case are relevant when a particular input (e.g. a particular sequence of numbers) affects the running time of the algorithm.
The running time is then the sum of the geometric series, a(r^k - 1)/(r - 1) with k = log2(n) terms:
T(n) = 2 + 4 + ... + 2^(log2 n)
     = 2 * (2^(log2 n) - 1)
     = 2 * (n - 1)
     ≤ 3n = O(n)
Thus, you can't be doing work that is more than some constant multiple of n. Hence, the running time of this algorithm is O(n).
Nor can you be doing less than some (other) constant multiple of n work, since the final pass of the inner loop alone performs on the order of n increments. Thus, the running time of this algorithm is also ≥ c·n, i.e., it is Ω(n).
Together, this means that running time of this algorithm is Θ(n).
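If you want to verify this empirically, here is a throwaway sketch (the counter is my own addition):

def work(n):
    # count executions of j = j + 1 in the loop nest from the question
    i = 1
    count = 0
    while i < n:
        j = 0
        while j <= i:
            j = j + 1
            count += 1
        i = 2 * i
    return count

for n in (10, 1000, 10**6):
    print(n, work(n), work(n) / n)  # the ratio stays bounded, consistent with Θ(n)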
You can't use i in your final expression; only n.
You can easily see that the inner loop executes i times each time it is reached. And it sounds like you've figured out the different values that i can have. So add up those values, and you have the total amount of work.
