How to work out the complexity of an algorithm? - complexity-theory

I have two questions for algorithm analysis, and would like to know how to determine the complexity of the following two:
First:
for (int i = 2; i < n; i = i*i*i)
{
    // something O(1)
}
Second:
n/1 + n/2 + n/3 + ... + n/n

To the first:
With i starting at 1 it would be infinite, because 1*1*1 == 1, so i would stay 1 and never reach n.
The second one is not really an algorithm, but evaluating the sum term by term takes O(n) additions.

For the first Algorithm:
Suppose that the initial value of i is 2 (rather than 1, which would lead to an infinite loop, as @tschaefemedia remarked).
At the 1st iteration, i == 2
At the 2nd iteration, i == 2*2*2 == 2^3
At the 3rd iteration, i == 2^3 * 2^3 * 2^3 == 2^(3*3)
At the 4th iteration, i == 2^(3*3) * 2^(3*3) * 2^(3*3) == 2^(3*3*3)
...
At iteration k+1, i == 2^(3*3*...*3) == 2^(3^k), with k factors of 3
Suppose for simplicity that at iteration k+1, i becomes equal to n and the loop stops. Then:
n == 2^(3^k)
log2(n) == 3^k
log3(log2(n)) == k
So, the complexity is O(log3(log2(n))), i.e., O(log(log(n))), since the bases of the logarithms only change constant factors.
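As a quick empirical check (my own sketch, not from the thread), the following counts the loop's iterations for a few sample sizes; i is kept as a double so that the cubing cannot overflow for these values:

#include <stdio.h>

int main(void) {
    /* Count iterations of: for (i = 2; i < n; i = i*i*i).
       Expect roughly log3(log2(n)) iterations. */
    double samples[] = {1e2, 1e8, 1e16, 1e32, 1e64};
    for (int s = 0; s < 5; s++) {
        double n = samples[s];
        int iterations = 0;
        for (double i = 2; i < n; i = i * i * i)
            iterations++;
        printf("n = %.0e -> %d iterations\n", n, iterations);
    }
    return 0;
}

Even n = 1e64 only needs 5 iterations, which is the doubly-logarithmic growth derived above.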
As for the second question, I suppose that you are giving the complexity formula. So,
n/1 + n/2 + n/3 + ... + n/n = n (1+ 1/2 + 1/3 + ... + 1/n)
This is the harmonic series, and it is O(log(n)).
So, the overall complexity is O(n*log(n))
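And a similar sketch for the sum, comparing it against n*ln(n) (the small constant-factor gap is the Euler-Mascheroni term of the harmonic series):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Compare n/1 + n/2 + ... + n/n with n*ln(n). */
    for (long n = 1000; n <= 10000000; n *= 100) {
        double sum = 0.0;
        for (long i = 1; i <= n; i++)
            sum += (double)n / (double)i;
        printf("n = %8ld  sum = %12.0f  n*ln(n) = %12.0f\n",
               n, sum, (double)n * log((double)n));
    }
    return 0;
}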

Related

Complexity Analysis of the following loops

I have some complexity-analysis exercises on nested loops, and I don't know whether I'm doing them correctly.
for i = 1 to n do
j = i
while j < n do
j = 2*j
end while
end for
My answer on this is O(n^2), because the first loop is running O(n) times and the inner one is doing O(n/2) iterations for the "worst" iteration of the outer loop. So O(n) * O(n/2) = O(n^2).
Also, looking a bit further, I think I can say that the inner loop is doing a partial sum of O(n/2) + O(n-1) + ... + O(1), and that this is also O(n).
for i = 1 to n do
j = n
while i*i < j do
j = j − 1
end while
end for
Again the outer loop is O(n), and the inner loop is doing O(sqrt(n)) in the worst iteration, so here I think it's O(n*sqrt(n)) but I'm unsure about this one.
for i = 1 to n do
j = 2
while j < i do
j = j * j
end while
end for
Here the outer loop is O(n) and the inner loop is doing O(log(n)) work in the worst case. Hence I think this is O(n*log(n)).
i = 2
while (i*i < n) and (n mod i != 0) do
i = i + 1
end while
Finally, I don't know how to make sense of this one. Because of the modulus operator.
My questions are:
Did I do anything wrong in the first 3 examples?
Is the "worst-case approach" for the inner loops I'm doing correct?
How should I approach the last exercise?
First Question:
The inner loop runs about log(n/i) times (it doubles j from i until it reaches n), so an easy upper bound is O(log(n)) per outer iteration, for a total of O(n*log(n)). However, that bound is not tight. Summing exactly, log(n/1) + log(n/2) + ... + log(n/n) = log(n^n / n!), and by Stirling's approximation log(n!) = n*log(n) - Θ(n), so the whole sum is Θ(n). Intuitively, for every i > n/2 the inner loop runs just once, and only the very smallest values of i contribute Θ(log(n)) iterations each. The total running time is therefore Θ(n), and O(n*log(n)) is correct only as a loose upper bound.
Second Question:
The inner loop takes n - i^2 time (and O(1) if i^2 >= n). Notice that for i >= sqrt(n) the inner loop takes O(1) time, so we can run the outer loop only for i in 1..sqrt(n) and add O(n) to the result. An upper bound of n per inner loop gives a total time of O(n * sqrt(n) + n) = O(n^(3/2)). For a lower bound, take only the values i <= sqrt(n)/2 (so that i^2 < n/4 and n - i^2 > (3/4)*n); each of those inner loops does at least (3/4)*n work, giving a total of Ω(sqrt(n)/2 * (3/4)*n + n) = Ω(n^(3/2)). Thus the bound O(n * sqrt(n)) is indeed tight.
Third Question:
In this one, j starts from 2 and we square it until it reaches i. After t steps of the inner loop, j equals 2^(2^t). We reach i when j = 2^(log(i)) = 2^(2^(log(log(i)))), i.e., after t = log(log(i)) steps. We can again give matching upper and lower bounds as in the previous questions, and get the tight bound Θ(n * log(log(n))).
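Before the last loop, here is a small instrumented sketch (mine; the sample size is arbitrary) that counts the inner-loop iterations of the first three exercises against the bounds above:

#include <stdio.h>
#include <math.h>

int main(void) {
    long long n = 1 << 16;                 /* arbitrary sample size */
    long long c1 = 0, c2 = 0, c3 = 0;

    for (long long i = 1; i <= n; i++)     /* first loop: j doubles */
        for (long long j = i; j < n; j *= 2)
            c1++;

    for (long long i = 1; i <= n; i++)     /* second loop: j counts down */
        for (long long j = n; i * i < j; j--)
            c2++;

    for (long long i = 1; i <= n; i++)     /* third loop: j squares */
        for (long long j = 2; j < i; j *= j)
            c3++;

    printf("first:  %lld  (Theta(n): about 2n = %lld)\n", c1, 2 * n);
    printf("second: %lld  (Theta(n^(3/2)): about (2/3)*n^1.5 = %.0f)\n",
           c2, (2.0 / 3.0) * pow((double)n, 1.5));
    printf("third:  %lld  (Theta(n*log(log(n))))\n", c3);
    return 0;
}

For the first loop the count lands near 2n rather than n*log(n) (the ceilings in ceil(log(n/i)) roughly double the Θ(n) sum of exact logarithms), which matches the analysis above.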
Fourth Question:
The complexity can vary between 2 = O(1) and sqrt(n), depending on the factorization of n. The worst case is when the smallest factor of n other than 1 is at least sqrt(n), i.e., when n is prime or the square of a prime; then the loop runs all the way, giving a complexity of Θ(sqrt(n)).
To answer your questions at the end:
1. Yes, you have done some things wrong. You have reached wrong answers in 1 and 3, and in 2 your result is right but the reasoning is flawed: the inner loop is not O(sqrt(n)), as you have already seen in my analysis.
2. Considering the "worst case" for the inner loop is good, since it gives you an upper bound (which is usually accurate in this kind of question), but to establish a tight bound you must also show a lower bound, usually by keeping only the larger half of the terms and lowering each of them to the smallest, as I did in some of the examples. Another way to prove tight bounds is to use formulas for known series, such as 1 + ... + n = n * (n + 1) / 2, which gives an immediate Θ(n^2) instead of deriving the lower bound via 1 + ... + n >= n/2 + ... + n >= n/2 + ... + n/2 = n/2 * n/2 = n^2/4 = Ω(n^2).
3. Answered above.
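If you want to see the fourth loop's extremes concretely, here is a tiny sketch (my own; the sample values are arbitrary, and 999983 is the largest prime below one million):

#include <stdio.h>

/* Count iterations of: i = 2; while (i*i < n and n mod i != 0) i = i + 1. */
static long iterations(long n) {
    long count = 0;
    for (long i = 2; i * i < n && n % i != 0; i++)
        count++;
    return count;
}

int main(void) {
    printf("n = 1000000 (even):      %ld iterations\n", iterations(1000000));
    printf("n = 999999 (mod 3 == 0): %ld iterations\n", iterations(999999));
    printf("n = 999983 (prime):      %ld iterations (~sqrt(n))\n",
           iterations(999983));
    return 0;
}

The even and divisible-by-3 cases stop after 0 and 1 iterations, while the prime runs for about sqrt(n) iterations.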
For the first one, in the inner loop we have:
i, 2*i, 4*i, ..., (2^k)*i, where (2^k)*i < n. So k < log(n) - log(i). The outer loop repeats n times. In total we have this sum:
(log(n) - log(1)) + (log(n) - log(2)) + ... + (log(n) - log(n))
which equals
n*log(n) - log(n!)
Therefore I think the complexity should be O(n*log(n)). (By Stirling's approximation, log(n!) = n*log(n) - Θ(n), so this sum is in fact Θ(n), and O(n*log(n)) is only an upper bound.)
For the second one, the inner loop runs n - i^2 times while i^2 < n (and 0 times afterwards), so we get the sum (n - 1) + (n - 4) + (n - 9) + ... over i up to sqrt(n), which is Θ(n^(3/2)).
For the third one, the inner loop squares j, so it runs about log(log(i)) times for each i, and since log(log(i)) <= log(i), the total is at most log(2) + log(3) + ... + log(n) = log(n!).
So I think it should be O(log(n!)).
For the last one, if n is even it will be O(1), because n mod 2 == 0 and we never enter the loop. The worst case is when n has no divisor smaller than sqrt(n) (for example, when n is prime); then I think it should be O(sqrt(n)).

Time complexity (with respect to input n)

I was asked what the time complexity of this is:
What is the time complexity (with respect to n) of this algorithm?
k=0
for (i = n / 2; i < n; i++) {
    for (j = 0; j < i; j++)
        k = k + n / 2;
}
The choices were: a. O(n), b. O(n/2), c. O(n log(n)), and d. O(n^2).
It can have multiple answers.
I know the algorithm above is d. O(n^2), but I came up with a. O(n), since it is looking for the complexity with respect to n only.
If you were given this question, how would you answer it? I'm very curious about the answer.
The answer is O(n²).
This is easy to understand. I will try to make you understand it.
See, the outer for loop block is executed n - n/2 = n/2 times.
Of course it depends on whether n is even or odd. If it's even, the outer loop executes exactly n/2 times; if it's odd, integer division makes i start at (n-1)/2, so it executes (n+1)/2 times.
But for time complexity we don't consider this. We just assume that the outer for loop is executed n/2 times, where i starts at n/2 and ends at n - 1 (because the terminating condition is i < n, not i <= n).
For each iteration of the outer loop, the inner loop executes i times.
In each iteration of the outer loop, the inner loop runs from j = 0 to j = i - 1. This means it executes i times (not i - 1 times, because j starts from 0, not from 1).
Therefore, in the 1st iteration the inner loop executes i = n/2 times, in the 2nd iteration i = n/2 + 1 times, and so on up to i = n - 1 times.
Now, the total number of times the inner loop executes is n/2 + (n/2 + 1) + (n/2 + 2) + ... + (n - 2) + (n - 1). This is an arithmetic series of n/2 terms with average (n/2 + n - 1)/2, so it sums to 3n²/8 - n/4.
So, the time complexity becomes O(3n²/8 - n/4).
But we drop the lower-order n term, because n² dominates it, and the constant factors, because big-O notation ignores them.
Therefore, the final time complexity is O(n²).
Hope this helps you understand.
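If you want to double-check the arithmetic, here is a short sketch (mine; n is an arbitrary even sample) counting the inner-loop executions against the closed form:

#include <stdio.h>

int main(void) {
    long n = 1000;   /* arbitrary even sample size */
    long count = 0;
    for (long i = n / 2; i < n; i++)
        for (long j = 0; j < i; j++)
            count++;
    printf("counted: %ld, formula 3n^2/8 - n/4: %ld\n",
           count, 3 * n * n / 8 - n / 4);
    return 0;
}

For n = 1000 both come out to 374750.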

Complexity Algorithm Analysis with if

I have the following code. What time complexity does it have?
I have tried to write a recurrence relation for it, but I can't work out when the algorithm adds 1 to n and when it divides n by 4.
void T(int n) {
for (int i = 0; i < n; i++);   // empty loop: Θ(n) work per call
if (n == 1 || n == 0)
return;
else if (n%2 == 1)
T(n + 1);
else if (n%2 == 0)
T(n / 4);
}
You can view it like this: you always divide by four, and only when n is odd do you add 1 before the division. So you should count how many times 1 gets added. If there are no increments, you have log4(n) recursive calls. Let's assume the worst case, that you always have to add 1 before dividing. Then you can rewrite it like this:
void T(int n) {
for (int i = 0; i < n; i++);
if (n == 1 || n == 0)
return;
else if (n%2 == 0)
T(n / 4 + 1);
}
But n/4 + 1 < n/2, so the number of recursive calls is between log4(n) and log2(n). The base of the logarithm doesn't matter in big-O notation, because changing it is only a constant factor. So there are O(log(n)) recursive calls.
EDIT:
As ALB pointed out in a comment, there is a loop of length n inside each call. So, by the master theorem, the running time is Θ(n). You can also see it directly as the sum n * (1 + 1/2 + 1/4 + 1/8 + ...) = 2 * n.
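Here is an instrumented version of the function (my sketch; the counter, the long type, and the driver are additions) that makes the Θ(n) behavior visible:

#include <stdio.h>

static long total;   /* iterations of the empty loop, summed over all calls */

void T(long n) {
    for (long i = 0; i < n; i++)
        total++;                 /* stands in for the empty loop body */
    if (n == 1 || n == 0)
        return;
    else if (n % 2 == 1)
        T(n + 1);
    else
        T(n / 4);
}

int main(void) {
    for (long n = 1000; n <= 10000000; n *= 100) {
        total = 0;
        T(n);
        printf("n = %9ld  total = %9ld  2n = %9ld\n", n, total, 2 * n);
    }
    return 0;
}

The total stays below 2n for every sample, as the geometric-series argument predicts.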
Interesting question. Be aware that even though your for loop does nothing, we don't assume it is optimized away (see Dukeling's comment), so it counts toward your time complexity as if each iteration took real computing time.
First part
The first section is definitely O(n).
Second part
Let's suppose, for the sake of simplicity, that n is odd half the time and even the other half. Hence, half the time the recursion moves to (n+1) and the other half to (n/4).
Conclusion
Each time T(n) is called, it implicitly loops n times. When n is odd, the call T(n+1) adds only one extra pass of about n iterations before the division by 4 happens, so each division by 4 costs O(n) work in total, and the argument then shrinks geometrically.
For Big O notation, we care more about an upper bound than about precise behavior. O(n^2) is certainly an upper bound for this algorithm, but it is not tight: as the answer above shows, the geometric shrinkage brings the total down to Θ(n).

Why doesn't the time complexity of Sieve of Eratosthenes algorithm have the argument sqrt(n)?

I'm trying to understand the Sieve of Eratosthenes algorithm time complexity. Everywhere online, it says the time complexity is O(nloglog(n)), but I don't understand why.
Here is some pseudocode
factors = new int[n+1];
for i from 2 to n
factors[i] = 1; //true
for i from 2 to sqrt(n)
if(factors[i] == 1) //isPrime
{
for all multiples of i upto n
factors[multiple] = 0 //notPrime
}
return all indices of factors that have a value of 1
I think we can all agree that the time complexity of this function depends on the nested for loop, so let's analyze it. When i = 2, the inner loop runs n/2 times. When i = 3, the inner loop runs n/3 times. The next value of i for which the inner loop executes is the next prime, i = 5, so it runs n/5 times. Altogether the loop will run
n/2 + n/3 + n/5 + n/7 + ... times
This is
n(1/2 + 1/3 + 1/5 + 1/7 + ...)
The sum of the reciprocals of the primes up to n is O(log(log(n))).
Thus, the overall complexity is O(n*log(log(n))).
HOWEVER, as written in our pseudocode, the outer for loop only runs sqrt(n) times. Thus we are only summing the reciprocals of the primes up to sqrt(n), so the complexity should be O(n*log(log(sqrt(n)))), not what is stated above.
What is wrong with my analysis?
O(n*log(log(sqrt(n)))) is O(n*log(log(n))), because log(sqrt(n)) = log(n)/2, so log(log(sqrt(n))) = log(log(n)) - log(2), and the additive constant disappears in big-O.
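To see those counts in practice, here is a runnable version of the question's pseudocode (my sketch; the sample n is arbitrary) that tallies the inner-loop marking steps and compares them with n*ln(ln(n)):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void) {
    long n = 10000000;
    char *factors = malloc(n + 1);
    for (long i = 2; i <= n; i++)
        factors[i] = 1;                          /* assume prime */
    long marks = 0;                              /* inner-loop work */
    for (long i = 2; i * i <= n; i++)
        if (factors[i] == 1)
            for (long m = 2 * i; m <= n; m += i) {   /* all multiples of i */
                factors[m] = 0;
                marks++;
            }
    printf("n = %ld  marks = %ld  n*ln(ln(n)) = %.0f\n",
           n, marks, (double)n * log(log((double)n)));
    free(factors);
    return 0;
}

The two numbers agree up to a constant factor, consistent with the answer above.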

Time complexity of the following algorithm?

I'm learning Big-O notation right now and stumbled across this small algorithm in another thread:
i = n
while (i >= 1)
{
for j = 1 to i // NOTE: i instead of n here!
{
x = x + 1
}
i = i/2
}
According to the author of the post, the complexity is Θ(n), but I can't figure out how. I think the while loop's complexity is Θ(log(n)). From what I was thinking, the for loop's complexity would also be Θ(log(n)), because the number of iterations is halved each time.
So wouldn't the complexity of the whole thing be Θ(log(n) * log(n)), or am I doing something wrong?
Edit: the segment is in the best answer of this question: https://stackoverflow.com/questions/9556782/find-theta-notation-of-the-following-while-loop#=
Imagine for simplicity that n = 2^k. How many times does x get incremented? The increments form a geometric series:
2^k + 2^(k - 1) + 2^(k - 2) + ... + 1 = 2^(k + 1) - 1 = 2 * n - 1
So this part is Θ(n). Also, i gets halved k = log(n) times, which has no asymptotic effect on the Θ(n) bound.
The value of i in each iteration of the while loop, which is also the number of iterations the for loop performs, runs through n, n/2, n/4, ..., and the overall complexity is the sum of those. That puts it at roughly 2n, which gives you your Θ(n).
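A direct count (my own sketch, with arbitrary sample sizes) confirms this:

#include <stdio.h>

int main(void) {
    /* Count how many times x is incremented; expect roughly 2n. */
    for (long n = 1000; n <= 10000000; n *= 100) {
        long x = 0;
        for (long i = n; i >= 1; i /= 2)
            for (long j = 1; j <= i; j++)
                x++;
        printf("n = %8ld  x = %8ld  2n = %8ld\n", n, x, 2 * n);
    }
    return 0;
}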
