I have some exercises on complexity analysis of double loops, and I don't know if I'm doing them correctly.
for i = 1 to n do
j = i
while j < n do
j = 2∗j
end while
end for
My answer on this is O(n^2), because the first loop is running O(n) times and the inner one is doing O(n/2) iterations for the "worst" iteration of the outer loop. So O(n) * O(n/2) = O(n^2).
Also, looking a bit further, I think I can say that the inner loop is doing a partial sum that is O(n/2) + O(n-1) + ... + O(1), and this is also O(n).
for i = 1 to n do
j = n
while i∗i < j do
j = j − 1
end while
end for
Again the outer loop is O(n), and the inner loop is doing O(sqrt(n)) in the worst iteration, so here I think it's O(n*sqrt(n)) but I'm unsure about this one.
for i = 1 to n do
j = 2
while j < i do
j = j * j
end while
end for
Here the outer loop is O(n) and the inner loop is doing O(logn) work for the worst case. Hence I think this is O(nlogn)
i = 2
while (i∗i < n) and (n mod i != 0) do
i = i + 1
end while
Finally, I don't know how to make sense of this one, because of the modulus operator.
My questions are:
Did I do anything wrong in the first 3 examples?
Is the "worst-case approach" for the inner loops I'm doing correct?
How should I approach the last exercise?
First Question:
The inner loop takes about log(n/i) steps (it keeps doubling j from i until it reaches n). Taking the worst case O(log(n)) for every i gives the upper bound O(n*log(n)), but this bound is not tight: summing the actual work gives Sum(i = 1 to n) log(n/i) = log(n^n / n!) = Θ(n) by Stirling's approximation, and since the inner loop runs at least once for every i < n the total is also Ω(n). So the running time of the first snippet is Θ(n), not Θ(n*log(n)).
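If you want to check this empirically, here is a small Python sketch (my own, not part of the exercise) that counts the inner-loop iterations of the first snippet and compares them to n:

def count_loop1(n):
    # for i = 1..n: j = i; while j < n: j = 2*j
    total = 0
    for i in range(1, n + 1):
        j = i
        while j < n:
            j = 2 * j
            total += 1
    return total

for n in (10**3, 10**4, 10**5):
    print(n, count_loop1(n), count_loop1(n) / n)  # the ratio stays around 2, i.e. Theta(n)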
Second Question:
The inner loop takes n - i^2 time (and O(1) if i^2 >= n). Notice that for i >= sqrt(n) the inner loop takes O(1) time, so we can run the outer loop only for i in 1..sqrt(n) and add O(n) to the result. An upper bound of n per inner loop gives a total time of O(n * sqrt(n) + n) = O(n^(3/2)). For a lower bound, sum only over the i's up to sqrt(n)/2 (so that i^2 < n/4 and n - i^2 > 3/4 * n); each such i contributes at least 3/4 * n, giving Ω(sqrt(n)/2 * 3/4 * n + n) = Ω(n^(3/2)). Thus the bound O(n * sqrt(n)) is indeed tight.
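The same kind of empirical check (again a sketch of mine) shows the count growing like n^(3/2); the ratio below settles near the constant 2/3 predicted by the sum of the n - i^2 terms:

def count_loop2(n):
    # for i = 1..n: j = n; while i*i < j: j = j - 1
    total = 0
    for i in range(1, n + 1):
        j = n
        while i * i < j:
            j = j - 1
            total += 1
    return total

for n in (10**2, 10**3, 10**4):
    print(n, count_loop2(n) / n**1.5)  # approaches roughly 2/3, i.e. Theta(n^(3/2))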
Third Question:
In this one j starts from 2 and we square it until it reaches i. After t steps of the inner loop, j equals 2^(2^t). We reach i when j = 2^(log(i)) = 2^(2^(log(log(i)))), i.e., after t = log(log(i)) steps. We can again give an upper bound and a lower bound as in the previous questions, and we get the tight bound Θ(n * log(log(n))).
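Once more as a sketch of my own, counting the iterations of the third snippet and dividing by n * log(log n):

import math

def count_loop3(n):
    # for i = 1..n: j = 2; while j < i: j = j * j
    total = 0
    for i in range(1, n + 1):
        j = 2
        while j < i:
            j = j * j
            total += 1
    return total

for n in (10**3, 10**4, 10**5):
    print(n, count_loop3(n) / (n * math.log2(math.log2(n))))  # stays near a constant, i.e. Theta(n*log(log n))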
Fourth Question:
The complexity can vary between O(1) and about sqrt(n), depending on the factorization of n: the loop stops as soon as i reaches the smallest divisor of n greater than 1, or once i*i >= n, whichever comes first. In the worst case n has no divisor below its square root (for example, when n is prime), giving a complexity of Θ(sqrt(n)).
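To see the dependence on the factorization concretely, here is a quick sketch (mine, not part of the answer) of the fourth loop; 10007 is prime, 10006 is even, and 10005 is divisible by 3:

def count_loop4(n):
    i = 2
    steps = 0
    while i * i < n and n % i != 0:
        i = i + 1
        steps += 1
    return steps

print(count_loop4(10007))  # prime: about sqrt(n) iterations (99 here)
print(count_loop4(10006))  # even: the loop body never runs (0)
print(count_loop4(10005))  # divisible by 3: stops after 1 iteration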
To answer your questions at the end:
1. Yes, you have done some things wrong. You have reached wrong answers in 1 and 3 and in 2 your result is right but the reasoning is flawed; the inner loop is not O(sqrt(n)), as you have already seen in my analysis.
2. Considering the "worst case" for the inner loop is good, as it gives you an upper bound (which is usually accurate in this kind of question), but to establish a tight bound you must also show a lower bound, usually by keeping only the larger terms and lowering each of them to the smallest one, as I did in some of the examples. Another way to prove tight bounds is to use formulas for known series, such as 1 + ... + n = n * (n + 1) / 2, giving an immediate bound of Θ(n^2) instead of getting the lower bound by 1 + ... + n >= n/2 + ... + n >= n/2 + ... + n/2 = n/2 * n/2 = n^2/4 = Ω(n^2).
3. Answered above.
For the first one, in the inner loop we have:
i, 2*i, 4*i, ... , (2^k)*i, where (2^k)*i < n. So k < log n - log i. The outer loop, as you said, repeats n times. In total we have this sum:
Sum(i = 1 to n) (log n - log i)
which equals n*log n - log(n!).
Therefore I think the complexity should be O(n log n).
For the second one we have:
For the third one:
So I think it should be O(log(n!))
For the last one, if n is even, it will be O(1) because we don't enter the loop. But the worst case is when n is odd and not divisible by any number below its square root (for example, when n is prime); then I think it should be O(sqrt(n)).
I'm learning how to prove/disprove big-Oh, big-Omega, and little-oh, and I have the following algorithm f(n). However, I'm unsure how to handle this f(n), as it has an if statement, which I've never come across before. How can I prove, for example, that this f(n) is O(n^2)?
if n is even
    4 * sum(n/2, n)
else
    (2n - 1) * sum(n - 3, n)
where sum(j,k) is a ‘partial arithmetic sum’ of the integers from j up to k, that is sum(j,k)=
if j > k
    0
else
    j + (j+1) + (j+2) + ... + k
e.g. sum(3,4) = 3 + 4 = 7, etc.
Note that sum(j,k) = sum(1,k) – sum(1,j-1).
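For reference, here is that definition written as a runnable Python sketch (the helper names partial_sum, partial_sum_closed, and f are my own):

def partial_sum(j, k):
    # sum(j, k) = j + (j+1) + ... + k, and 0 when j > k
    if j > k:
        return 0
    return sum(range(j, k + 1))

def partial_sum_closed(j, k):
    # using sum(j, k) = sum(1, k) - sum(1, j-1) with sum(1, m) = m*(m+1)/2
    if j > k:
        return 0
    return k * (k + 1) // 2 - (j - 1) * j // 2

def f(n):
    if n % 2 == 0:
        return 4 * partial_sum(n // 2, n)
    return (2 * n - 1) * partial_sum(n - 3, n)

print(partial_sum(3, 4), partial_sum_closed(3, 4))  # both give 7, matching the example
print(f(10), f(11))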
OK, got it, no worries. I'll try to help you understand this.
Big O notation is used to define an upper limit on how much time a program will take in terms of its input size.
Let's try to see how much time each statement will take in this function
f(n) {
if n is even // O(1) .....#1
4 * sum(n/2,n) // O(n) .....#2
else // O(1) ................#3
(2n-1) * sum(n−3,n) // O(1) .......#4 (sum(n−3,n) adds only 4 terms)
}
if n is even
This can be done by a check like if ((n % 2) == 0). As you can see, this is a constant-time operation: no loop, just one comparison.
The sum(j, k) function is computed by iterating from j to k whenever j <= k, so it runs (k - j + 1) times. For sum(n/2, n) that is about n/2 iterations, i.e. O(n); for sum(n−3, n) it is only 4 iterations, i.e. O(1).
So, the total complexity will be the complexity of whichever block runs, the if block or the else block.
For analyzing complexity, we take the worst case.
Complexity of if block = #1 + #2 = O(1) + O(n) = O(n)
Similarly for the else block = #3 + #4 = O(1) + O(1) = O(1)
Max of both = max(O(n), O(1)) = O(n)
Thus, the overall complexity = O(n)
I was asked what the time complexity of this is:
What is the time complexity (with respect to n) of this algorithm:
k = 0
for (i = n / 2; i < n; i++) {
    for (j = 0; j < i; j++)
        k = k + n / 2
}
The choices were: a. O(n), b. O(n/2), c. O(n log(n)), and d. O(n^2).
It can have multiple answers.
I know the algorithm above is d. O(n^2), but I came up with a. O(n), since it is looking for the complexity in terms of n only?
If you were given this question, how would you answer it? I'm so curious about the answer.
The answer is O(n²).
This is easy to understand. I will try to make you understand it.
See, the outer for loop block is executed n - n/2 = n/2 times.
Of course it depends whether the number n is even or odd. If it's even then the outer loop is executed n/2 times. If it's odd then it's executed for (n-1)/2 times.
But for time complexity, we don't consider this. We just assume that the outer for loop is executed n/2 times where i starts from n/2 and ends at n - 1 (because the terminating condition is i < n and not i <= n).
For each iteration of the outer loop, the inner loop executes i times.
For example, for every iteration, inner loop starts with j = 0 to j = i - 1. This means that it executes i times (not i - 1 times because j starts from 0 and not from 1).
Therefore, for the 1st iteration the inner loop executes i = n/2 times, then i = n/2 + 1 times for the 2nd iteration, and so on up to i = n - 1 times.
Now, the total no. of times the inner loop executes is n/2 + (n/2 + 1) + (n/2 + 2) + ... + (n - 2) + (n - 1). It's simple math that this sums up to (3n² - 2n)/8 times.
So, the time complexity becomes O((3n² - 2n)/8).
But we ignore the n term, because n² dominates it, and we drop the constant factors, because they do not depend on n.
Therefore, the final time complexity is O(n²).
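If you want to double-check that count, here is a tiny sketch (mine, not part of the answer) comparing the actual number of inner-loop iterations with (3n² - 2n)/8 for even n:

def inner_iterations(n):
    count = 0
    for i in range(n // 2, n):
        for j in range(i):
            count += 1
    return count

for n in (100, 1000):
    print(n, inner_iterations(n), (3 * n * n - 2 * n) // 8)  # the two counts agree for even n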
Hope this helps you understand.
The algorithm below has runtime O(n) according to our professor, however I am confused as to why it is not
O(n log(n)), because the outer loop can run up to log(n) times and the inner loop can run up to n times.
Algorithm Loop5(n)
    i = 1
    while i ≤ n
        j = 1
        while j ≤ i
            j = j + 1
        i = i * 2
Your professor was right, the running time is O(n).
In the iteration of the outer while-loop where i = 2^k, for k = 0, 1, ..., floor(log n), the inner while-loop makes exactly i = 2^k iterations. (When I say log n I mean the base-2 logarithm log_2 n.)
The running time is O(1+2+2^2+2^3+...+2^k) for k=floor(log n). This sums to O(2^{k+1}) which is O(2^{log n}). (This follows from the formula for the partial sum of geometric series.)
Because 2^{log n} = n, the total running time is O(n).
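If a concrete check helps, here is a short sketch (my own) that counts the inner-loop iterations of Loop5 and shows they stay below 2n:

def loop5_iterations(n):
    count = 0
    i = 1
    while i <= n:
        j = 1
        while j <= i:
            j = j + 1
            count += 1
        i = i * 2
    return count

for n in (10, 100, 1000, 10**6):
    print(n, loop5_iterations(n), 2 * n)  # the count is always less than 2n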
For the interested, here's a proof that the powers of two really sum to what I claim they sum to. (This is a very special case of a more general result.)
Claim. For any natural k, we have 1+2+2^2+...+2^k = 2^{k+1}-1.
Proof. Note that (2-1)*(1+2+2^2+...+2^k) = (2 - 1) + (2^2 - 2) + ... + (2^{k+1} - 2^k), where every power 2^i for 1 <= i <= k appears once with a plus sign and once with a minus sign and cancels, leaving 2^{k+1} - 2^0 = 2^{k+1} - 1. QED.
I'm learning Big-O notation right now and stumbled across this small algorithm in another thread:
i = n
while (i >= 1)
{
for j = 1 to i // NOTE: i instead of n here!
{
x = x + 1
}
i = i/2
}
According to the author of the post, the complexity is Θ(n), but I can't figure out how. I think the while loop's complexity is Θ(log(n)). The for loop's complexity from what I was thinking would also be Θ(log(n)) because the number of iterations would be halved each time.
So, wouldn't the complexity of the whole thing be Θ(log(n) * log(n)), or am I doing something wrong?
Edit: the segment is in the best answer of this question: https://stackoverflow.com/questions/9556782/find-theta-notation-of-the-following-while-loop#=
Imagine for simplicity that n = 2^k. How many times does x get incremented? It is easy to see that this is a geometric series:
2^k + 2^(k - 1) + 2^(k - 2) + ... + 1 = 2^(k + 1) - 1 = 2 * n - 1
So this part is Θ(n). Also, i gets halved k = log n times, which does not affect the Θ(n) bound.
The value of i in each iteration of the while loop, which is also the number of iterations the for loop does, is n, n/2, n/4, ..., and the overall complexity is the sum of those. That puts it at roughly 2n, which gets you your Theta(n).
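Here is that sum made explicit with a small sketch (mine, with n fixed to a power of two for simplicity):

n = 1024
values = []
i = n
while i >= 1:
    values.append(i)  # the for loop does i iterations at this point
    i = i // 2
print(values)              # [1024, 512, 256, ..., 2, 1]
print(sum(values), 2 * n)  # 2047 vs 2048: the total work stays below 2n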
I'm trying to learn how to find the big-theta bounds of various algorithms, but I'm having a hard time understanding how to do it, even after reading a number of questions here and lectures and textbooks on the subject. So for example
procedure for array a {
    step = 1
    while (step <= n) {
        i = n
        while (i >= step) {
            a[i] = a[i-step] + a[i]
            i = i - 1
        }
        step = step * 2
    }
}
I want to figure out the big-theta bound on the number of additions this goes through in terms of n, the number of indices in array a. I can see that the outer loop itself goes through log(n) iterations, but I can't figure out how to express what happens in the inner loop. Does anyone have an explanation or perhaps even a resource I might try consulting?
Big Theta notation asks us to find 2 constants, k1 and k2, such that our function f(n) is between k1*g(n) and k2*g(n) for sufficiently large n. In other words, we need a function g(n) that, up to constant factors, eventually bounds f(n) both from below and from above.
To prove Big-Theta, we need to find g(n) such that f(n) is O(g(n)) (upper bound), and f(n) is Big-Omega(g(n)) (lower bound).
Prove Big-O
In terms of Big-O notation (where f(n) <= g(n)*k), your algorithm, f(n), is O(log(n)*n). In this case g(n) = log(n) * n.
Let's prove this:
Find Inner Loop Executions
How many times does the outer loop execute? Track the "step" variable:
Let's say that n is 100:
1
2
4
8
16
32
64
128 (do not execute this loop)
That's 7 executions for an input of 100. We can say that it executes about log(n) times (actually floor(log n) + 1 times, but log(n) is adequate).
Now let's look at the inner loop. Track the i variable, which starts at n and is decremented until it drops below step. Therefore, the inner while loop executes n - step + 1 times for each value of step; I'll round this to n - step below, which is close enough for the asymptotics.
For example, when n = 100
100 - 1 = 99 iterations
100 - 2 = 98 iterations
100 - 4 = 96 iterations
100 - 8 = 92 iterations
100 - 16 = 84 iterations
100 - 32 = 68 iterations
100 - 64 = 36 iterations
So what's our total iteration count of the inner loop?
(n-1)
(n-1) + (n-2)
(n-1) + (n-2) + (n-4)
(n-1) + (n-2) + (n-4) + (n-8)
etc.
How is this thing growing? Well, because we know the outer loop will iterate log(n) times, we can formulate this thing as a summation:
Sum(from i=0 to log(n)) (n - 2^i)
= (log(n) + 1)*n - Sum(from i=0 to log(n)) 2^i
= n*log(n) + n - (2^(log(n)+1) - 1)
= n*log(n) + n - (2n - 1)
= n*log(n) + 1 - n
So now our goal is to show that:
log(n)*n + 1 - n = O(log(n)*n)
Clearly, log(n)*n is O(log(n)*n), but what about the 1-n?
1-n = O(log(n)*n)?
1-n <= k*log(n)*n, for some k?
Let k = 1
1-n <= log(n)*n?
Add n to both sides
1 <= n*log(n) + n? YES
So we've shown that f(n) is O(n*log(n)).
Prove Big-Omega
Now that we have an upper bound on f(n) using log(n) * n, let's try to get a lower bound on f(n) also using log(n) * n.
For a lower bound we use Big Omega notation. Big Omega looks for a function g(n)*k <= f(n) for some positive constant k.
k(n*log(n)) <= n*log(n) + 1 - n?
let k = 1/10
n*log(n)/10 <= n*log(n) + 1 - n?
(n*log(n) - 10n*log(n)) / 10 <= 1 - n?
-9n*log(n)/10 <= 1 - n? Multiply through by 10
-9n*log(n) <= 10 - 10n? Multiply through by -1 (flipping inequality)
9n*log(n) >= 10n - 10? Divide through by 9
n*log(n) >= 10n/9 - 10/9? Divide through by n
log(n) >= 10/9 - 10/(9n)? Yes
Clearly, log(n) grows while the quantity (10/9 - 10/(9n)) only tends towards 10/9, and for n = 1 we get 0 >= 10/9 - 10/9 = 0, so the inequality holds for all n >= 1.
Prove Big-Theta
So now we've shown that f(n) is Big-Omega(n*log(n)). Combining this with the proof for f(n) is O(n*log(n)), and we've shown that f(n) is Big-Theta(n*log(n))! (the exclamation point is for excitement, not factorial...)
g(n) = n*log(n), and one valid set of constants is k1=1/10 (lower bound) and k2 = 1 (upper bound).
To prove big-O: there are floor(log2(n)) + 1 = O(log(n)) iterations through the outer loop, and the inner loop iterates O(n) times per, for a total of O(n * log(n)).
To prove big-Omega: there are floor(log2(n/2)) + 1 = Omega(log(n)) iterations through the outer loop during which step <= n/2. The inner loop iterates n + 1 - step times, which, for these outer iterations, is more than n/2 = Omega(n) per, for a total of Omega(n * log(n)).
Together, big-O and big-Omega prove big-Theta.
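As a numerical cross-check of both proofs (a sketch of mine, not part of either answer), you can count the additions directly and compare with n * log2(n):

import math

def additions(n):
    count = 0
    step = 1
    while step <= n:
        i = n
        while i >= step:
            count += 1  # one addition: a[i] = a[i-step] + a[i]
            i -= 1
        step *= 2
    return count

for n in (2**10, 2**13, 2**16):
    print(n, additions(n) / (n * math.log2(n)))  # the ratio climbs toward 1, i.e. Theta(n*log n)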
Simplifying the representation of your code as follows, we can translate it into sigma notation:
for (step = 1; step <= n; step = step * 2) {
    for (i = n; i >= step; i = i - 1) {
    }
}
Like this:
Sum(step in {1, 2, 4, ..., 2^floor(log n)}) Sum(i = step to n) 1 = Sum(k = 0 to floor(log n)) (n - 2^k + 1)