How can we find the time complexity of this algorithm?

I want to know the time complexity of the given code below; I suspect it is O(sqrt(n)).
i = n, sum = 0
while (i >= 0) {
    i /= 2
    sum += i*i*i
}
I am really confused. Can anyone help me out and explain?

If you're unsure, you can always use the "time" module to get a rough idea of the complexity. It would go something like this.
import time

i, total = 10**6, 0    # example n; 'total' avoids shadowing the built-in sum
start = time.time()    # put this before the loop
while i > 0:           # use > 0 so the loop terminates
    i //= 2
    total += i * i * i
end = time.time()      # put this after the loop
print(end - start)     # this gives you the evaluation time of your loop
Find evaluation times for different n's and determine the complexity.
Just by looking at it, though, your loop executes roughly log2(n) times, and inside you have two multiplications and one division (nothing expensive). Therefore, I would say O(log(n)) is a reasonable guess.

As mentioned in the comments, I'm assuming that the code was meant to be written as
i = n, sum = 0
while (i > 0) {   // <--- Change >= to >
    i /= 2
    sum += i*i*i
}
since otherwise the code would be an infinite loop. With this in mind, let's take a look at what this code is doing.
For starters, note that the sum variable isn't doing anything that impacts our time complexity. On each iteration, it gets bigger, but we're doing only O(1) work to update it. That means that the time complexity here is going to depend on how many times the loop runs. Notice that, across the different iterations of the loop, the value of i will take on the sequence
n, n / 2, n / 4, n / 8, n / 16, n / 32, ...
and, in particular, on the kth iteration of the loop the value of i will be equal to n / 2^k (ignoring rounding down, which we can safely do here). The question, then, is at what iteration of the loop we end up with n / 2^k < 1, which is when the loop will stop. Solving, we get that
n / 2^k < 1
n < 2^k
log2 n < k
So this loop will stop as soon as the number of loop iterations k is greater than log2 n. This means that we do Θ(log n) loop iterations, of which each iteration does O(1) work, so the total work done is Θ(log n).
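If you want to sanity-check this empirically, here is a minimal Python sketch (the function name and test values are mine, not from the question):

import math

def halving_iterations(n):
    # count how many times the loop body runs for a given n
    i, count = n, 0
    while i > 0:      # the corrected loop condition from above
        i //= 2       # integer version of i /= 2
        count += 1
    return count

for n in (10, 1000, 10**6, 10**9):
    print(n, halving_iterations(n), math.floor(math.log2(n)) + 1)
    # the two counts agree: the loop runs floor(log2 n) + 1 times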

Related

What are the number of operations in this method?

I am currently learning about asymptotic analysis, however I am unsure of how many operations are in these nested loops. Would somebody be able to help me understand how to approach them? Also what would the Big-O notation be? (This is also my first time on stack overflow so please forgive any formatting errors in the code).
public static void primeFactors(int n){
    while( n % 2 == 0){
        System.out.print(2 + " ");
        n /= 2;
    }
    for(int i = 3; i <= Math.sqrt(n); i += 2){
        while(n % i == 0){
            System.out.print(i + " ");
            n /= i;
        }
    }
}
In the worst case, the first loop (while) is O(log(n)). In the second loop, the outer loop runs in O(sqrt(n)) and the inner loop runs in O(log_i(n)). Hence, the time complexity of the second loop (inner and outer in total) is:
O(sum_{i = 3}^{sqrt(n)} log_i(n))
Therefore, the time complexity of the mentioned algorithm is O(sqrt(n) log(n)).
Notice, however, that n is modified inside the inner loop, and that lowers the sqrt(n) bound in the outer loop, so the complexity of the second loop is O(sqrt(n)). Therefore, under this observation, the time complexity of the algorithm is O(sqrt(n)) + O(log(n)) = O(sqrt(n)).
First, we see that the first loop is really a special case of the inner loop that occurs in the other loop, with i equal to two. It was separated as a special case in order to be able to increase i with steps of 2 instead of 1. But from the point of view of asymptotic complexity the step by 2 makes no difference: it represents a constant coefficient, which we can ignore. And so for our analysis we can just rewrite the code to this:
public static void primeFactors(int n){
    for(int i = 2; i <= Math.sqrt(n); i += 1){ // note the change in start and increment value
        while(n % i == 0){
            System.out.print(i + " ");
            n /= i;
        }
    }
}
The number of times that n /= i is executed corresponds to the number of non-distinct prime divisors of n. According to this Q&A, that number is O(log log n) on average. It is not straightforward to derive this, so I had to look it up.
We should also consider the number of times the for loop iterates. The Math.sqrt(n) boundary for i can lower as the for loop iterates. The more divisions take place, the (much) fewer iterations the for loop has to make.
We can see that at the time that the loop exits, i has surpassed the square root of the greatest prime divisor of n. In the worst case that greatest prime divisor is n itself (when n is prime). So the for loop can iterate up to the square root of n, so O(√n) times. In that case the inner loop never iterates (no divisions).
We should thus see which term dominates, and we get O(√n + log log n). This is O(√n).
The first loop divides n by 2 until it is no longer divisible by 2. The maximum number of times this can happen is log2(n).
The second loop, at first sight, seems to run the inner loop sqrt(n) times, where the inner loop itself looks O(log(n)), but that is actually not the case. Every time the while condition of the second loop is satisfied, n decreases drastically, and since the condition of a for loop is evaluated on each iteration, sqrt(n) decreases too. The worst case actually happens when the while condition is never satisfied, i.e. when n is prime.
Hence the overall time complexity is O(sqrt(n)).
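To check the O(sqrt(n)) worst case empirically, here is a small Python port of primeFactors that counts loop steps instead of printing (the port and the step counter are mine; math.isqrt stands in for Java's Math.sqrt):

import math

def prime_factor_steps(n):
    # counts divisions and outer-loop increments instead of printing factors
    steps = 0
    while n % 2 == 0:
        n //= 2
        steps += 1
    i = 3
    while i <= math.isqrt(n):   # the bound shrinks as n is divided down
        while n % i == 0:
            n //= i
            steps += 1
        i += 2
        steps += 1
    return steps

print(prime_factor_steps(999_983))   # a prime: the worst case, ~sqrt(n)/2 steps
print(prime_factor_steps(2**20))     # a power of two: only 20 division steps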

time complexity of three codes where variables depend on each other

1) i = s = 1;
   while (s <= n)
   {
       i++;
       s = s + i;
   }

2) for (int i = 1; i <= n; i++)
       for (int j = 1; j <= n; j += i)
           cout << "*";

3) j = 1;
   for (int i = 1; i <= n; i++)
       for (j = j*i; j <= n; j = j + i)
           cout << "*";
Can someone explain the time complexity of these three codes?
I know the answers, but I can't understand how they were derived.
1) To figure this out, we need to figure out how large s is on the x'th iteration of the loop. Then we'll know how many iterations occur until the condition s > n is reached.
On the x'th iteration, the variable i has value x + 1
And the variable s has value equal to the sum of i for all previous values. So, on that iteration, s has value equal to
sum_{y = 1 .. x} (y+1) = O(x^2)
This means that we reach s > n on iteration x = O(sqrt(n)). So that's the running time of the loop.
If you aren't sure about why the sum is O(x^2), I gave an answer to another question like this once here and the same technique applies. In this particular case you could also use an identity
sum_{y = 1 .. x} y = (x+1 choose 2) = x(x+1) / 2
This identity can be easily proved by induction on x.
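You can see the square-root growth concretely with a small Python counter (the harness is mine):

import math

def loop1_iterations(n):
    i = s = 1
    count = 0
    while s <= n:
        i += 1
        s = s + i
        count += 1
    return count

for n in (100, 10_000, 1_000_000):
    print(n, loop1_iterations(n), round(math.sqrt(2 * n)))
    # the iteration count closely tracks sqrt(2n), i.e. O(sqrt(n))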
2) Try to analyze how long the inner loop runs, as a function of i and n. Since we start at one, end at n, and count up by i, it runs n/i times. So the total time for the outer loop is
sum_{i = 1 .. n} n/i = n * sum_{i = 1 .. n} 1 / i = O(n log n)
The series sum_{i = 1 .. n} 1 / i is called the harmonic series. It is well known that its partial sums grow like Θ(log n). I won't include a full proof here; it can be proved using calculus. This is a series you just have to know. If you want to see a simple proof, you can look on Wikipedia at the "comparison test". The proof there only shows the series is >= log n, but the same technique can be used to show it is <= O(log n) as well.
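The same kind of numeric check works here (the Python harness is mine):

import math

def loop2_iterations(n):
    count = 0
    for i in range(1, n + 1):
        j = 1
        while j <= n:     # inner loop counts up by i
            count += 1
            j += i
    return count

for n in (100, 1000, 10_000):
    print(n, loop2_iterations(n), round(n * math.log(n)))
    # the total grows like n * H_n, which is Theta(n log n)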
3) This looks like kind of a trick question. The inner loop is entered only once, on i = 1; once it exits with j = n + 1, we can never re-enter it, because no later line makes j <= n again. We still execute j = j * i on every outer iteration, where i is a positive number, so j ends up at least as large as n!. For any significant value of n, this will cause an overflow. Ignoring that possibility, the code performs O(n) operations in total.
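In Python, where integers never overflow, you can watch both effects directly (this counting harness is mine, not from the question):

def loop3_iterations(n):
    j = 1
    count = 0
    for i in range(1, n + 1):
        j = j * i               # after i = 1 finishes, j > n and only grows
        while j <= n:           # so this inner loop is entered only when i == 1
            count += 1
            j += i
    return count, j

for n in (10, 100, 1000):
    count, j = loop3_iterations(n)
    print(n, count, len(str(j)))   # ~n iterations in total; j ends up around n!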

How to determine the time complexity of this loop?

x = 1;
while (x < n)
{
    x = x + n/100;
}
I'm trying to figure out if it's O(n) or O(1), because no matter what we put in n's place, I think the loop runs only a bounded number of times.
Let's say n = 1.1: then it runs 10 times; if n = 1.2, the loop runs 17 times; if n = 2, it runs 50 times; and once n >= 101, the loop repeats 100 times, even if n = 10^10000.
It clearly can't be O(n): the iteration count doesn't grow linearly (look at n = 1, 2, 3, 4, 5). But be careful about ruling out O(1): the count varying for small n does not matter, because Big-O only requires that the count is eventually bounded by a constant, and here it is.
Even with a bit of manual calculation you can see clearly that it won't always run 10 times. Examine the following short Python program:
def t(n):
    x = 1
    c = 0
    while x < n:
        c += 1
        x += n/100
    return c

a = []
for i in range(10000):
    a += [i/100 + 1]

with open("out.csv", "w") as f:
    for i in a:
        f.write(str(i) + "," + str(t(i)) + "\n")
Using Excel or some other application you can plot the iteration counts from out.csv against n. The curve rises quickly and then flattens out: any n <= 1 takes 0 iterations, and the count approaches but never exceeds 100. That saturating shape can look logarithmic at first glance, but the loop stops at the smallest k with 1 + k*(n/100) >= n, i.e. k = ceil(100 - 100/n), which is at most 100 for every n. An iteration count bounded by a constant means the time complexity is O(1).
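To double-check, here is a quick Python verification of that closed form (the formula k = ceil(100 - 100/n) is derived above; the exact test values are my choice):

import math

def iterations(n):
    x, c = 1, 0
    while x < n:
        x += n / 100
        c += 1
    return c

for n in (3, 101, 10**6, 10**18):
    predicted = math.ceil(100 - 100 / n)
    print(n, iterations(n), predicted)
    # the two columns match, and neither ever exceeds 100: O(1)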

Inconsistencies in Big-O Analysis of a Basic "Algorithm"

I recently learned about formal Big-O analysis of algorithms; however, I don't see why these 2 algorithms, which do virtually the same thing, would have drastically different running times. The algorithms both print numbers 0 up to n. I will write them in pseudocode:
Algorithm 1:
def countUp(int n){
    for(int i = 0; i <= n; i++){
        print(n);
    }
}
Algorithm 2:
def countUp2(int n){
    for(int i = 0; i < 10; i++){
        for(int j = 0; j < 10; j++){
            ... // (continued so that this can print out all values 0 - Integer.MAX_VALUE)
            for(int z = 0; z < 10; z++){
                print("" + i + j + ... + z);
                if(("" + i + j + ... + z).stringToInt() == n){
                    quit();
                }
            }
        }
    }
}
So, the first algorithm runs in O(n) time, whereas the second algorithm (depending on the programming language) runs in something close to O(n^10). Is there anything about the code that causes this to happen, or is it simply the absurdity of my example that "breaks" the math?
In countUp, the loop hits all numbers in the range [0,n] once, thus resulting in a runtime of O(n).
In countUp2, you do essentially the same thing, a bunch of times. The bound on every loop is 10.
Say you have 3 loops running with a bound of 10. The outer loop body runs 10 times, the middle one 10x10 times, the innermost 10x10x10 times. So, worst case, your innermost loop runs 1000 times, which is essentially constant time. So, for k nested loops with bounds [0, 10), your runtime is 10^k which, again, can be called constant time, O(1), since it does not depend on n in the worst-case analysis.
Assuming you can write enough loops and that the size of n is not a factor, you would need a loop for every digit of n. The number of digits in n is int(math.floor(math.log10(n))) + 1; let's call this dig. So a stricter upper bound on the number of iterations would be 10^dig (which can be reduced to O(n); proof left to the reader as an exercise).
When analyzing the runtime of an algorithm, one key thing to look for is the loops. In algorithm 1, you have code that executes n times, making the runtime O(n). In algorithm 2, you have nested loops that each run 10 times, so you have a runtime of O(10^3). This is because your code runs the innermost loop 10 times for each run of the middle loop, which in turn runs 10 times for each run of the outermost loop. So the code runs 10x10x10 times. (This is purely an upper bound however, because your if-statement may end the algorithm before the looping is complete, depending on the value of n).
To count up to n in countUp2, then you need the same number of loops as the number of digits in n: so log(n) loops. Each loop can run 10 times, so the total number of iterations is 10^log(n) which is O(n).
The first runs in O(n log n) time, since print(n) outputs O(log n) digits.
The second program assumes an upper limit for n, so it is trivially O(1). When we do complexity analysis, we assume a more abstract version of the programming language where (usually) integers are unbounded but arithmetic operations still perform in O(1). In your example you're mixing up the actual programming language (which has bounded integers) with this more abstract model (which doesn't). If you rewrite the program[*] so that it has a dynamically adjustable number of loops depending on n (so if your number n has k digits, then there's k+1 nested loops), then it does one iteration of the innermost code for each number from 0 up to the next power of 10 after n. The inner loop does O(log n) work[**] as it constructs the string, so overall this program too is O(n log n).
[*] you can't use for loops and variables to do this; you'd have to use recursion or something similar, and an array instead of the variables i, j, k, ..., z.
[**] that's assuming your programming language optimizes the addition of k length-1 strings so that it runs in O(k) time. The obvious string concatenation implementation would be O(k^2) time, meaning your second program would run in O(n(log n)^2) time.
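For concreteness, here is one way the rewritten program from [*] might look in Python, using recursion in place of the hard-coded nested loops (this is my sketch of the idea, not code from the question):

def count_up_2(n):
    k = len(str(n))               # one "nested loop" per digit of n

    def nested(digits):
        if len(digits) == k:      # innermost level: digits form a full number
            value = int("".join(str(d) for d in digits))
            print(value)
            return value == n     # True signals "quit()": we reached n
        for d in range(10):       # one 0..9 loop, like i, j, ..., z
            if nested(digits + [d]):
                return True
        return False

    nested([])

count_up_2(25)   # prints 0 through 25 (with leading zeros collapsed), then stops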

Theta Notation and Worst Case Running time nested loops

This is the code I need to analyse:
i = 1
while i < n do
    j = 0
    while j <= i do
        j = j + 1
    i = 2i
So, the first loop should run log2(n) times and the innermost loop should run log2(n) * (i + 1) times, but I'm pretty sure that's wrong.
How do I use a theta notation to prove it?
An intuitive way to think about this is to see how much work your inner loop does for a fixed value of the outer loop variable i. It's clearly about as much as i itself: if the value of i is 256, then you will do j = j + 1 roughly that many times.
Thus, the total work done is the sum of the values that i takes on during the outer loop's execution. That variable increases rapidly to catch up with n. Its values, as given by i = 2i (it should be i = 2*i), are 1, 2, 4, 8, ...; the inner loop does i + 1 iterations for each of them, starting with 2 iterations when i = 1. Bounding i + 1 above by 2i gives the geometric series a, ar, ar^2, ... with a = 2 and r = 2. The last term, as you figured out, will be about n, and there will be log2 n terms in the series. And that is a simple summation of a geometric series.
It doesn't make much sense to have a worst case or a best case for this algorithm because there are no different permutations of the input which is just a number n in this case. Best case or worst case are relevant when a particular input (e.g. a particular sequence of numbers) affects the running time of the algorithm.
The running time then is the sum of the geometric series (a·(r^num_terms − 1)/(r − 1)):

T(n) = 2 + 4 + ... + 2^(log2 n)
     = 2 · (2^(log2 n) − 1)
     = 2 · (n − 1)
     ≤ 3n = O(n)
Thus, you can't be doing work that is more than some constant multiple of n. Hence, the running time of this algorithm is O(n).
You also can't be doing work that is less than some (other) constant multiple of n, since you have to go through the increments in the inner loop as shown above. Thus, the running time of this algorithm is also ≥ c·n, i.e. it is Ω(n).
Together, this means that running time of this algorithm is Θ(n).
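A short Python counter confirms the Θ(n) bound empirically (the harness is mine):

def doubling_loop_work(n):
    i = 1
    work = 0
    while i < n:
        j = 0
        while j <= i:      # inner loop does i + 1 increments
            j = j + 1
            work += 1
        i = 2 * i
    return work

for n in (100, 10_000, 1_000_000):
    print(n, doubling_loop_work(n))
    # the total stays within a small constant multiple of n, as derived above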
You can't use i in your final expression; only n.
You can easily see that the inner loop executes i times each time it is reached. And it sounds like you've figured out the different values that i can have. So add up those values, and you have the total amount of work.
