The algorithm below has runtime O(n) according to our professor, but I am confused as to why it is not
O(n log(n)): the outer loop can run up to log(n) times and the inner loop can run up to n times.
Algorithm Loop5(n)
    i = 1
    while i ≤ n
        j = 1
        while j ≤ i
            j = j + 1
        i = i * 2
Your professor was right: the running time is O(n).
In the outer while-loop iteration where i = 2^k, for k = 0, 1, ..., floor(log n), the inner while-loop makes i = 2^k iterations. (When I say log n I mean the base-2 logarithm, log_2 n.)
The running time is therefore O(1+2+2^2+2^3+...+2^k) for k = floor(log n). This sums to O(2^{k+1}), which is O(2^{log n}). (This follows from the formula for the partial sum of a geometric series.)
Because 2^{log n} = n, the total running time is O(n).
For the interested, here's a proof that the powers of two really sum to what I claim they sum to. (This is a very special case of a more general result.)
Claim. For any natural k, we have 1+2+2^2+...+2^k = 2^{k+1}-1.
Proof. Note that (2-1)*(1+2+2^2+...+2^k) = (2 - 1) + (2^2 - 2) + ... + (2^{k+1} - 2^k). Every power 2^i with 1 ≤ i ≤ k appears once with a plus sign and once with a minus sign, so everything cancels except -1 and 2^{k+1}, and we are left with 2^{k+1}-1. QED.
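If you want to see this concretely, here is a quick Python sketch (my own check, not part of the original answer; the name loop5_count is mine) that counts the inner-loop iterations and compares them to 2n:

# Count the inner-loop iterations of Loop5 for a given n.
def loop5_count(n):
    count = 0
    i = 1
    while i <= n:
        j = 1
        while j <= i:
            j = j + 1
            count += 1
        i = i * 2
    return count

for n in [10, 100, 1000, 10**6]:
    print(n, loop5_count(n))  # the count always stays below 2n, as the claim predicts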
Related
I have some exercises on complexity analysis of nested loops, and I don't know if I'm doing them correctly.
for i = 1 to n do
j = i
while j < n do
j = 2*j
end while
end for
My answer for this one is O(n^2), because the outer loop runs O(n) times and the inner one does O(n/2) iterations in the "worst" iteration of the outer loop, so O(n) * O(n/2) = O(n^2).
Also, looking a bit further, I think I can say that the inner loop is doing a partial sum that is O(n/2) + O(n-1) + ... + O(1), and that this is also O(n).
for i = 1 to n do
j = n
while i*i < j do
j = j − 1
end while
end for
Again the outer loop is O(n), and the inner loop is doing O(sqrt(n)) in the worst iteration, so here I think it's O(n*sqrt(n)) but I'm unsure about this one.
for i = 1 to n do
j = 2
while j < i do
j = j * j
end while
end for
Here the outer loop is O(n) and the inner loop does O(log n) work in the worst case, hence I think this is O(n log n).
i = 2
while (i*i < n) and (n mod i != 0) do
i = i + 1
end while
Finally, I don't know how to make sense of this one because of the modulus operator.
My questions are:
Did I do anything wrong in the first 3 examples?
Is the "worst-case approach" for the inner loops I'm doing correct?
How should I approach the last exercise?
First Question:
The inner loop runs about log(n/i) times, since j doubles from i until it reaches n. An easy upper bound is O(log n) per outer iteration, for a total of O(n log n). This upper bound is not tight, though: summing exactly, Sum_{i=1..n} log(n/i) = log(n^n / n!), and by Stirling's approximation log(n!) = n log n - Θ(n), so the whole sum is Θ(n). (For a matching lower bound, every i <= n/2 causes at least one inner iteration, which already gives Ω(n).) So the tight bound is Θ(n), and your O(n^2) is a valid but very loose upper bound.
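Here is a quick empirical check (a sketch I added; ex1_count is my own name) that the iteration count grows linearly rather than like n log n:

# Count the inner-loop iterations of the first exercise.
def ex1_count(n):
    count = 0
    for i in range(1, n + 1):
        j = i
        while j < n:
            j = 2 * j
            count += 1
    return count

for n in [10**3, 10**4, 10**5]:
    print(n, ex1_count(n), ex1_count(n) / n)  # the ratio stays near 2 instead of growing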
Second Question:
The inner loop takes n - i^2 time (and O(1) if i^2 >= n). Notice that for i >= sqrt(n) the inner loop takes O(1) time, so we can run the outer loop only for i in 1..sqrt(n) and add O(n) to the result. An upper bound of n per inner loop gives a total of O(n * sqrt(n) + n) = O(n^(3/2)). For a lower bound, sum only over the i's up to sqrt(n)/2 (so that i^2 < n/4 and n - i^2 > 3n/4); each such inner loop takes at least 3n/4 time, and we get a total of Ω(sqrt(n)/2 * 3n/4 + n) = Ω(n^(3/2)). Thus the bound O(n * sqrt(n)) is indeed tight.
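Again, a short Python sketch (my addition; ex2_count is my own name) confirms this; the ratio below settles near the constant 2/3 predicted by Sum_{i=1..sqrt(n)} (n - i^2) ≈ (2/3) n^(3/2):

# Count the inner-loop iterations of the second exercise.
def ex2_count(n):
    count = 0
    for i in range(1, n + 1):
        j = n
        while i * i < j:
            j = j - 1
            count += 1
    return count

for n in [100, 1000, 10000]:
    print(n, ex2_count(n) / n**1.5)  # tends to about 2/3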
Third Question:
In this one j starts from 2 and we square it until it reaches i. After t steps of the inner loop, j equals 2^(2^t). We reach i when j = 2^(log(i)) = 2^(2^(log(log(i)))), i.e., after t = log(log(i)) steps. We can again give an upper bound and a lower bound as in the previous questions, and get the tight bound Θ(n * log(log(n))).
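To see how slowly that bound grows, here is a small Python sketch (my addition; ex3_count is my own name) that counts the inner-loop steps:

# Count the inner-loop iterations of the third exercise.
def ex3_count(n):
    count = 0
    for i in range(1, n + 1):
        j = 2
        while j < i:
            j = j * j
            count += 1
    return count

for n in [10**2, 10**4, 10**6]:
    print(n, ex3_count(n) / n)  # the average steps per i creep up like log(log(n))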
Fourth Question:
The complexity can vary between O(1) and O(sqrt(n)), depending on the factorization of n: the loop stops at the smallest divisor of n, or when i*i reaches n. If n is even the loop body never runs. In the worst case n is prime (or the square of a prime), so no i with i*i < n divides n, and the loop runs about sqrt(n) times, giving a complexity of O(sqrt(n)).
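Put differently, this loop is just trial division searching for the smallest factor of n. A short Python sketch (my addition; ex4_count and the test values are my own) shows the two extremes:

# Count the iterations of the fourth exercise (trial division).
def ex4_count(n):
    count = 0
    i = 2
    while i * i < n and n % i != 0:
        i = i + 1
        count += 1
    return count

print(ex4_count(1000000))  # even n: 0 iterations, O(1)
print(ex4_count(999983))   # a prime: about sqrt(n) iterations
print(ex4_count(994009))   # 997^2, the square of a prime: also about sqrt(n) iterations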
To answer your questions at the end:
1. Yes, you have done some things wrong. You reached wrong answers in 1 and 3, and in 2 your result is right but the reasoning is flawed: the inner loop is not O(sqrt(n)), as you have already seen in my analysis.
2. Considering the "worst case" for the inner loop is fine, as it gives you an upper bound (which is often tight in this kind of question), but to establish a tight bound you must also show a matching lower bound, usually by keeping only the larger terms and lowering each of them to the smallest kept term, as I did in some of the examples. Another way to prove tight bounds is to use formulas for known series, such as 1 + ... + n = n * (n + 1) / 2, which gives an immediate Θ(n^2) instead of deriving the lower bound via 1 + ... + n >= n/2 + ... + n >= n/2 + ... + n/2 = n/2 * n/2 = n^2/4 = Ω(n^2).
3. Answered above.
For the first one, in the inner loop j takes the values
i, 2*i, 4*i, ..., (2^k)*i where (2^k)*i < n, so k < log n - log i. The outer loop, as you said, repeats n times. In total we have this sum:
Sum_{i=1..n} (log n - log i)
which equals
n * log n - log(n!)
Therefore I think the complexity should be O(n log n). (In fact, by Stirling's approximation log(n!) = n log n - Θ(n), so this sum is actually Θ(n) and O(n log n) is a correct but loose upper bound.)
For the second one we have: Sum_{i=1..sqrt(n)} (n - i^2), which is O(n * sqrt(n)).
For the third one: the inner loop makes at most log(i) steps for each i, so the total is at most Sum_{i=1..n} log(i) = log(n!). So I think it should be O(log(n!)), which is O(n log n). (Since j is squared rather than doubled, the inner loop actually makes only log(log(i)) steps, so the tighter Θ(n log log n) bound above also applies.)
For the last one, if n is even, it will be O(1) because we don't enter the loop. But the worst case is when n is not divisible by any i with i*i < n (for example, when n is prime), and then I think it should be O(sqrt(n)).
I was asked what the time complexity of this is. What is the time complexity (with respect to n) of this algorithm:
k=0
for(i = n / 2 ; i < n ; i++ ) {
for( j=0 ; j < i ; j++)
k = k + n / 2
}
The choices were: a. O(n), b. O(n/2), c. O(n log(n)), and d. O(n^2).
It can have multiple answers.
I know the algorithm above is d. O(n^2), but I came up with a. O(n) since it is asking for the complexity with respect to n only.
If you got this question, how would you answer it? I'm curious about the answer.
The answer is O(n²).
This is easy to understand. I will try to make you understand it.
See, the outer for loop block is executed n - n/2 = n/2 times.
Of course it depends on whether the number n is even or odd. If it's even then the outer loop is executed n/2 times. If it's odd then, with integer division, it's executed (n+1)/2 times.
But for time complexity, we don't consider this. We just assume that the outer for loop is executed n/2 times, where i starts from n/2 and ends at n - 1 (because the terminating condition is i < n and not i <= n).
For each iteration of the outer loop, the inner loop executes i times.
In every iteration, the inner loop runs from j = 0 to j = i - 1. This means that it executes i times (not i - 1 times, because j starts from 0 and not from 1).
Therefore, for the 1st iteration the inner loop is executed i = n/2 times, for the 2nd iteration i = n/2 + 1 times, and so on up to i = n - 1 times.
Now, the total number of times the inner loop executes is n/2 + (n/2 + 1) + (n/2 + 2) + ... + (n - 2) + (n - 1). This is an arithmetic series with n/2 terms, and it sums up to (3n^2 - 2n)/8 times.
So, the time complexity becomes O((3n^2 - 2n)/8).
But we ignore the n term because n^2 dominates it, and we drop the constant factor because it does not depend on n.
Therefore, the final time complexity is O(n²).
Hope this helps you understand.
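In case a concrete check helps, here is a small Python sketch (my addition; count_ops is my own name) that counts the inner-loop executions and compares them to the closed form above for even n:

# Count how many times the inner loop body runs.
def count_ops(n):
    count = 0
    for i in range(n // 2, n):
        for j in range(i):
            count += 1
    return count

for n in [100, 500, 2000]:
    print(n, count_ops(n), (3 * n * n - 2 * n) // 8)  # the two counts match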
I'm learning Big-O notation right now and stumbled across this small algorithm in another thread:
i = n
while (i >= 1)
{
for j = 1 to i // NOTE: i instead of n here!
{
x = x + 1
}
i = i/2
}
According to the author of the post, the complexity is Θ(n), but I can't figure out how. I think the while loop's complexity is Θ(log(n)), and I figured the for loop's complexity would also be Θ(log(n)) because the number of iterations is halved each time.
So, wouldn't the complexity of the whole thing be Θ(log(n) * log(n)), or am I doing something wrong?
Edit: the segment is in the best answer of this question: https://stackoverflow.com/questions/9556782/find-theta-notation-of-the-following-while-loop#=
Imagine for simplicity that n = 2^k. How many times does x get incremented? The counts form a geometric series:
2^k + 2^(k - 1) + 2^(k - 2) + ... + 1 = 2^(k + 1) - 1 = 2 * n - 1
So this part is Θ(n). Also, i gets halved k = log n times, and that has no effect on the Θ(n) bound.
The value of i in each iteration of the while loop, which is also the number of iterations of the for loop, runs through n, n/2, n/4, ..., and the overall complexity is the sum of those. That puts it at roughly 2n, which gets you your Θ(n).
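A quick Python sketch (my addition; count_increments is my own name) counts the increments directly:

# Count how many times x is incremented.
def count_increments(n):
    count = 0
    i = n
    while i >= 1:
        for j in range(i):  # the for loop runs i times
            count += 1
        i = i // 2
    return count

for n in [16, 1024, 10**6]:
    print(n, count_increments(n), 2 * n)  # exactly 2n - 1 for powers of two, never above 2n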
I am doing the exercise of Skiena's book on algorithms and I am stuck in this question:
I need to calculate the big O of the following algorithm:
function mystery()
    r = 0
    for i = 1 to n-1 do
        for j = i+1 to n do
            for k = 1 to j do
                r = r + 1
Here, the big O of the outermost loop will be O(n-1) and the middle loop will be O(n!). Please tell me if I am wrong here.
I am not able to calculate the big O of the innermost loop.
Can anyone please help me with this?
Here's a more rigorous way to approach solving this problem:
Define the run-time of the algorithm to be f(n) since n is our only input. The outer loop tells us this
f(n) = Sum(g(i,n), 1, n-1)
where Sum(expression, min, max) is the sum of expression from i = min to i = max. Notice that the expression in this case is an evaluation of g(i, n) with a fixed i (the summation index) and n (the input to f(n)). We can peel another layer and define g(i, n):
g(i, n) = Sum(h(j), i+1, n), where i < n
which is the sum of h(j) where j ranges of i+1 to n. Finally we can just define
h(j) = Sum(O(1), 1, j)
since we've assumed that r = r+1 takes time O(1).
Notice that at this point we haven't done any hand-waving, saying stuff like "oh, you can just multiply the loops together; the innermost operation is the only one that counts." That statement isn't even true for all algorithms. Here's an example:
for (i = 0; i < n; i++)
{
for (j = 0; j < n; j++)
{
Solve_An_NP_Complete_Problem(n);
for (k = 0; k < n; k++)
{
count++;
}
}
}
The above algorithm isn't O(n^3)... it's not even polynomial.
Anyways, now that I've established the superiority of a rigorous evaluation (:D) we need to work our way upwards so we can figure out what the upper bound is for f(n). First, it's easy to see that h(j) is O(j) (just use the definition of Big-Oh). From that we can now rewrite g(i, n) as:
g(i, n) = Sum(O(j), i+1, n)
=> g(i, n) = O(i+1) + O(i+2) + ... + O(n-1) + O(n)
=> g(i, n) = O(n^2 - i^2 - 2i - 1), because we can sum Big-Oh functions in this way using lemmas based on the definition of Big-Oh
=> g(i, n) = O(n^2), because g(i, n) is only defined for i < n. (Note that before this step we had a Theta bound, which is stronger!)
And so we can rewrite f(n) as:
f(n) = Sum(O(n^2), 1, n-1)
=> f(n) = (n-1)*O(n^2)
=> f(n) = O(n^3)
You might consider proving the lower bound to show that f(n) = Theta(n^3). The trick there is not simplifying g(i, n) to O(n^2) but keeping the tight bound when computing f(n). It requires some ugly algebra, but I'm pretty sure (i.e., I haven't actually done it) that you will be able to prove f(n) = Omega(n^3) as well (or just Theta(n^3) directly if you're really meticulous).
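As an empirical sanity check on Theta(n^3), here is a short Python sketch (my addition) that runs mystery() and compares r to the exact closed form (n-1)n(n+1)/3, which is about n^3/3:

# Count the increments performed by mystery() for a given n.
def mystery(n):
    r = 0
    for i in range(1, n):              # i = 1 .. n-1
        for j in range(i + 1, n + 1):  # j = i+1 .. n
            for k in range(1, j + 1):  # k = 1 .. j
                r = r + 1
    return r

for n in [10, 50, 100]:
    print(n, mystery(n), (n - 1) * n * (n + 1) // 3)  # the counts match exactly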
The analysis takes place as follows:
the outer loop goes over i = [1, n - 1], so the program would have linear complexity O(n) if, inside this loop, only constant operations were performed;
however, inside the first loop, for each i, some operation is executed up to n times, with j ranging over [i + 1, n]. This gives a total complexity of O(i * j) = O(n * n) = O(n^2);
finally, the innermost loop will be executed, for each i and for each j, k times, with k ranging over [1, j]. As j can be as large as n, k is asymptotically bounded by n, which gives you O(i * j * k) = O(n * n * n) = O(n^3).
The complexity of the program itself corresponds to the number of iterations of your innermost operations -- or the sums of complexities of innermost operations. If you had:
for i = 1:n
for j = 1:n
--> innermost operation (executed n^2 times)
for k = 1:n
--> innermost operation (executed n^3 times)
endfor
for l = 1:n
--> innermost operation (executed n^3 times)
endfor
endfor
endfor
The total complexity would be given as O(n^2) + O(n^3) + O(n^3), which is equal to max(O(n^2), O(n^3), O(n^3)), or O(n^3).
It cannot be factorial. For example, if you have two nested loops the big O will be n^2, not n^n. So three loops cannot give more than n^3. Keep digging ;)
I can probably figure out part (b) if you can help me do part (a). I've been looking at this and similar problems all day, and I'm just having trouble grasping what to do with nested loops. For the first loop there are n iterations, for the second there are n - 1, and for the third there are n - 1. Am I thinking about this correctly?
Consider the following algorithm,
which takes as input a sequence of n integers a1, a2, ..., an
and produces as output a matrix M = {mij}
where mij is the minimum term
in the sequence of integers ai, ai+1, ..., aj for j >= i, and mij = 0 otherwise.
initialize M so that mij = ai if j >= i and mij = 0 otherwise
for i:=1 to n do
for j:=i+1 to n do
for k:=i+1 to j do
m[i][j] := min(m[i][j], a[k])
end
end
end
return M = {m[i][j]}
(a) Show that this algorithm uses Big-O(n^3) comparisons to compute the matrix M.
(b) Show that this algorithm uses Big-Omega(n^3) comparisons to compute the matrix M.
Using this fact and part (a), conclude that the algorithm uses Big-Theta(n^3) comparisons.
In part A, you need to find an upper bound for the number of min ops.
In order to do so, it is clear that the above algorithm has fewer min ops than the following:
for i=1 to n
for j=1 to n //bigger range than your algorithm
for k=1 to n //bigger range than your algorithm
(something with min)
The above has exactly n^3 min ops - thus your algorithm performs fewer than n^3 min ops.
From this we can conclude: #minOps <= 1 * n^3 (for each n > 10, where 10 is arbitrary).
By definition of Big-O, this means the algorithm is O(n^3)
You said you can figure B alone, so I'll let you try it :)
hint: the middle loop has more iterations than for j=i+1 to n/2
For each iteration of the outer loop, the two inner nested loops do roughly (n - i)^2 work, and up to n^2 when i is small. The outer loop runs for i = 1 to n, so the total is bounded by a series like 1^2 + 2^2 + 3^2 + 4^2 + ... + n^2. This summation equals n(n+1)(2n+1)/6, and ignoring its lower-order terms the order is O(n^3).
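To tie this together, here is a small Python sketch (my addition; count_min_ops is my own name) that counts the min comparisons of the algorithm in the question; the count comes out to (n^3 - n)/6, i.e., Theta(n^3):

# Count the number of min comparisons performed by the algorithm.
def count_min_ops(n):
    count = 0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            for k in range(i + 1, j + 1):
                count += 1  # one min comparison per innermost iteration
    return count

for n in [10, 50, 100]:
    print(n, count_min_ops(n), (n**3 - n) // 6)  # the counts match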