Can someone confirm that the complexity of this algorithm is O(n^2)?
a = 0
b = 0
c = n
while (b <= c)
{
for (j = b; j<=c; j++)
{
a = j * j -2 * j + 3
}
b = b + 3
c = c + 2
}
The inner loop executes c - b + 1 times. Each execution of the inner-loop body a = j * j - 2 * j + 3 takes constant (bounded) time, assuming fixed-width integer types (otherwise the cost would depend on the multiplication algorithm used, and in principle on addition too, though it's hard for addition to be slower than multiplication). So one execution of the outer loop's body is O(d), and even Θ(d), where d = c - b + 1.
The updates of the variables controlling the outer loop
b = b + 3
c = c + 2
decrease the difference c - b by 1 in each execution of the outer loop's body, hence the outer loop is executed n+1 times, and you have a total of O(n²), since
∑[k=0,n] (n + 2k − 3k + 1) = ∑[j=1,n+1] j = (n+1)(n+2)/2
It is even Θ(n²), unless the compiler optimises the loops away and sets all variables to their final values directly.
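As a sanity check, here is a quick Python transcription of the loop with a counter added (the function name is just for this sketch), comparing the number of inner-body executions against the closed form (n+1)(n+2)/2:

```python
def count_inner_executions(n):
    # Direct transcription of the loop above, with a counter added.
    count = 0
    b, c = 0, n
    while b <= c:
        for j in range(b, c + 1):    # j = b .. c inclusive
            a = j * j - 2 * j + 3    # the constant-time body
            count += 1
        b = b + 3
        c = c + 2
    return count
```

Running it for small n confirms that the count matches (n+1)(n+2)/2 exactly, in line with the sum above.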
Answer for the original question, which had a typo:
The inner loop
for (j = b; j==c; j++)
will execute either once (when b == c) or not at all, so the body of the outer loop is O(1). The updates in the outer loop
b = b + 3
c = c + 2
mean that the difference c-b decreases by 1 each time the loop body is executed, so
b = 0
c = n
while (b <= c)
will execute n+1 times - total: O(n).
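A Python transcription of the typo version (hypothetical function name) confirms the linear behaviour: the outer loop runs n+1 times, and since the difference c - b decreases from n to 0 by exactly 1 per iteration, the inner body fires exactly once in total, on the iteration where b == c:

```python
def count_typo_version(n):
    # Transcription of the typo version: for (j = b; j == c; j++).
    outer = inner = 0
    b, c = 0, n
    while b <= c:
        outer += 1
        j = b
        while j == c:       # runs once when b == c, otherwise not at all
            inner += 1
            j += 1
        b = b + 3
        c = c + 2
    return outer, inner
```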
b = b + 3
c = c + 2
makes it so that b catches up to c by one each iteration of the outer loop. This implies the outer loop runs n+1 = O(n) times since they are initially n from each other.
The inner loop executes (c - b + 1) times. We know that they are initially n apart, and get closer by 1 each iteration of the outer loop.
Looking at the number of times the inner loop runs per iteration of the outer loop, we get (n+1, n, n-1, ..., 1), and in total
1 + 2 + ... + (n+1) = (n+1)(n+2)/2 = O(n^2)
Each time your outer loop
while(b <= c)
executes, b and c become closer by 1 than before. However, b and c start off at a distance n apart, so your inner for loop starts by executing n+1 times, then it executes n times, then it executes n-1 times, and so forth, until it finally executes 1 time and then your program finishes. Thus your running time is proportional to
(n+1) + n + (n-1) + (n-2) + ... + 1
and you can apply the formula for the sum of consecutive integers to see that this summation is equal to
(n+2)(n+1)/2 = O(n^2)
so your running time is O(n^2)
Consider finding the total running time as a function of n in these two loops:
(1)
q <- 1
while q <= n
    p <- 1
    while p <= q
        p <- p + 1
    q <- 2 * q
(2)
q,s <- 1, 1
while s < n
    for j <- 1 to s
        k <- 1
        while k <= j
            k <- 2 * k
    q <- q + 1
    s <- q * q
Going off of what I know, I believe:
(1) is theta(n * lg(n)) where n represents time for inner loop, and
lg(n) for the outer loop.
(2) is theta(n * lg(n) * sqrt(n)) where n represents time for the for loop,
sqrt(n) for the outer loop, and lg(n) for the inner while loop.
I am not sure if this is correct. Any advice would be appreciated.
(1):
This is not the correct way to view it: the inner while-loop does not do n cycles lg n times; it does q cycles, whatever that number happens to be on each iteration!
The correct way to analyze this is to say that the inner while-loop runs q times, where q takes the values 1, 2, 4, ..., n (and yes, the outer while-loop runs Θ(lg n) times).
Thus the whole running time is:
1 + 2 + 4 + ... + L, where L is the largest power of 2 not exceeding n; if n is not a power of 2, the sum stops at the largest power below it. Either way L = Θ(n).
Computing this as a geometric progression with Θ(lg n) terms gives:
1 + 2 + 4 + ... + n = Θ(n)
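To check this empirically, here is a Python transcription of snippet (1) that counts the inner while-loop steps (the function name is just for this sketch). The total is 2L − 1 with L the largest power of 2 not exceeding n, which is sandwiched between n and 2n, hence Θ(n):

```python
def count_loop1_steps(n):
    # Transcription of snippet (1), counting inner while-loop steps.
    steps = 0
    q = 1
    while q <= n:
        p = 1
        while p <= q:       # the inner loop runs q times
            p = p + 1
            steps += 1
        q = 2 * q           # q takes the values 1, 2, 4, ...
    return steps
```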
(2):
Not a final solution, but a hint/kickstart
Your analysis is wrong in saying the for-loop runs ~n times; this loop simply runs s times, and s changes with each and every iteration: on iteration t we have s = t^2.
The analysis goes as so:
The for-loop and its inner while-loop are correlated: j runs from 1 to s and the while-loop runs lg(j) times, and they are correlated because j changes on every iteration of the for-loop. But keep in mind that s changes as well, so the for-loop's bound takes the values s ∈ {1, 4, 9, ..., n}.
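To take the hint further empirically, here is a Python transcription of snippet (2) that counts the innermost while-loop steps (the function name is made up for this sketch); comparing its output for growing n against a candidate bound is a good way to finish the analysis:

```python
def count_loop2_steps(n):
    # Transcription of snippet (2), counting innermost while-loop steps.
    steps = 0
    q = s = 1
    while s < n:
        for j in range(1, s + 1):   # the for-loop runs s times
            k = 1
            while k <= j:           # runs about lg(j) + 1 times
                k = 2 * k
                steps += 1
        q = q + 1
        s = q * q                   # s takes the values 1, 4, 9, ...
    return steps
```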
If the first loop runs n+1 times, and the second loop runs n(n+1) times, how many times will the third loop run? I guess it has some relation with the second loop, but what about with the first one?
somefunction(n) {
c = 0
for (i = 1 to n*n)
for (j = 1 to n)
for (k = 1 to 2*j)
c = c+1
return c
}
The first loop has O(n**2) iterations.
The second loop has O(n) iterations.
The third loop has O(n) iterations as well, since j is steadily increasing towards n.
(It's a little easier to see if you sum up the number of times c = c + 1 executes for the two inner loops combined. The inner loop runs 2 times for j = 1, 4 for j = 2, ..., and 2*n times for j = n. 2 + 4 + .. + 2*n = O(n**2).)
You can then (loosely speaking) multiply the three values together to get a total bound of O(n**4).
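Since c simply counts iterations, the argument above can be checked directly in Python: each of the n² outer iterations contributes 2 + 4 + ... + 2n = n(n+1) increments, so c = n³(n+1), which is Θ(n⁴):

```python
def somefunction(n):
    # The triple loop from the question, transcribed to Python.
    c = 0
    for i in range(1, n * n + 1):
        for j in range(1, n + 1):
            for k in range(1, 2 * j + 1):
                c = c + 1
    return c
```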
I feel that even in the worst case, the condition is true only two times, when j = i or j = i^2, and then the loop runs for an extra i + i^2 times.
In the worst case, if we take the sum of the inner 2 loops, it will be theta(i^2) + i + i^2, which is equal to theta(i^2) itself;
summing theta(i^2) over the outer loop gives theta(n^3).
So, is the answer theta(n^3)?
I would say that the overall performance is theta(n^4). Here is your pseudo-code, given in text format:
for (i = 1 to n) do
for (j = 1 to i^2) do
if (j % i == 0) then
for (k = 1 to j) do
sum = sum + 1
Appreciate first that the j % i == 0 condition will only be true when j is a multiple of i. For a fixed i this occurs only i times, so the final inner for loop is reached only i times from the for loop in j. That final for loop requires up to i^2 steps when j is near the end of its range, but only about i steps near the start. So the overall performance is somewhere between O(n^3) and O(n^4), and theta(n^4) turns out to be valid.
For fixed i, the i integers 1 ≤ j ≤ i² such that j % i = 0 are {i, 2i, ..., i²}. It follows that the inner loop is executed i times with arguments i·m for 1 ≤ m ≤ i, and the guard is executed i² times. Thus, the complexity function T(n) ∈ Θ(n⁴) is given by:
T(n) = ∑[i=1,n] (∑[j=1,i²] 1 + ∑[m=1,i] ∑[k=1,i·m] 1)
= ∑[i=1,n] ∑[j=1,i²] 1 + ∑[i=1,n] ∑[m=1,i] ∑[k=1,i·m] 1
= n³/3 + n²/2 + n/6 + ∑[i=1,n] ∑[m=1,i] ∑[k=1,i·m] 1
= n³/3 + n²/2 + n/6 + n⁴/8 + 5n³/12 + 3n²/8 + n/12
= n⁴/8 + 3n³/4 + 7n²/8 + n/4
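The closed form can be verified against a brute-force count in Python, charging one unit per guard check and one per innermost-loop step, matching the definition of T(n) above:

```python
def T(n):
    # Brute-force count: one unit per guard check, one per innermost step.
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i * i + 1):
            count += 1                  # the j % i == 0 guard
            if j % i == 0:
                for k in range(1, j + 1):
                    count += 1          # sum = sum + 1
    return count
```

Multiplying the closed form by 8 gives n⁴ + 6n³ + 7n² + 2n, so the brute-force count should equal that polynomial divided by 8 for every n.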
For the following block of code, select the most appropriate run-time formula in terms of primitive operations needed for input of size n:
When resolving from the inside out, I get:
inner loop = 3n + 1
main loop + inner loop = 3 + (3n + 1) + logn = 4 + 3n + logn
extra steps + all loops = 4 + n(4 + 3n + logn) = 4 + 4n + 3n² + n·logn
This is the code to analyze:
def rate(n):
total= 0
i = 1
while i < n:
j = 0
while j < n:
total= i * j + total
j = j + 1
i = i * 2
return total
and the answer is supposed to be --> f(n) = 4 + 4log(n) + log(n)*(3n)
I am actually coming up with O(NlgN) here for the overall running time. Appreciate that the inner loop in j is not dependent on the outer loop in i. The following should be true:
The outer loop in i is O(lgN), because i is doubling at each iteration, which is exponential behavior.
The inner loop in j is O(N), because j cycles from 0 to N at each iteration, regardless of the value of i.
We may therefore multiply together these complexities to get the overall complexity.
Note that for N of arbitrarily large size, your expression:
4 + 4log(n) + log(n)*(3n)
is dominated by its last term and reduces to O(NlgN).
def rate(n):
total= 0
i = 1
while i < n:  # This outer loop runs O(log(n)) times
j = 0
while j < n:  # This inner loop runs O(n) times for each iteration of the outer loop
total= i * j + total
j = j + 1
i = i * 2
return total
Hence, the total runtime complexity for your implementation in big-O is = O(log(n)) * O(n) = O(nlog(n)).
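A quick instrumented count (hypothetical helper name) makes the n·log n behaviour concrete: the inner loop does exactly n steps per outer iteration, and for n > 1 the outer loop runs ⌈log₂ n⌉ times, since i takes the values 1, 2, 4, ... while i < n:

```python
def rate_steps(n):
    # Transcription of rate(n), counting inner-loop iterations.
    steps = 0
    i = 1
    while i < n:            # runs ceil(log2(n)) times for n > 1
        j = 0
        while j < n:        # runs n times per outer iteration
            steps += 1
            j = j + 1
        i = i * 2
    return steps
```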
Given the following pseudo code for an array A
x = 0
for i = 0 to n - 1
for j = i to n - 1
if A[i] > A[j]:
x = x + 1
return x
how do I determine the running time?
I would say it's (n - 1)*(n - 1) = n^2 - 2n + 1 = O(n^2).
I'm not quite sure how to work with the if loop though.
yes O(n^2), just sum the number of iterations in the inner loop:
n + (n - 1) + (n - 2) + ... 1 = [ n x (n + 1) ] / 2
And if is not a loop; it is a control structure. Generally you just count the number of times the condition is checked, without considering whether it holds. The condition may become important if there is another loop in the if body.
how to count the iterations of the inner loop:
when i = 0 the inner loop runs n times, then ends
then i = 1 the inner loop runs n - 1 times, then ends
then i = 2 the inner loop runs n - 2 times, then ends
....
when i = n - 2 the inner loop runs 2 times
when i = n - 1 the inner loop runs 1 time
so all we need to do is to add the number of iterations of the inner loop:
n + (n - 1) + (n - 2) + ... 1 = [ n x (n + 1) ] / 2
#perreal is totally right about the order:
n*(n+1)/2 => O(n^2)
About the "if" part, it doesn't really matter. (I write this to answer to this part)
Let's say checking the if condition takes c1 time, and executing x = x + 1 takes c2 time. Each iteration then costs either c1 or c1 + c2, so the total is at most
(c1 + c2) · n(n+1)/2
and since constant factors can be ignored in the order, this is
O(n^2)
Actually, saying "this algorithm has O(n^2) time complexity" implicitly suggests that it's a worst-case complexity.
In a naive way, you can just count the number of times each instruction is executed in the worst case. if A[i] > A[j]: can be counted as an instruction as well, so at first you don't need to ask when the condition is true.
2·(n-1)·(n-1) is an upper bound on the number of instructions executed by the innermost loop, and more precisely:
2(n + (n-1) + ... + 1) = n(n+1) = O(n^2)
Even though it's not important for the O-notation, there are several arrays for which the condition is always true (these are the worst cases for this algorithm). For example:
n n-1 n-2 ... 2 1
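As a final check, here is a Python transcription of the pseudo-code (the function name is just for this sketch): on the strictly decreasing array n, n-1, ..., 1 the condition holds for every pair with j > i, so x reaches its maximum of n(n-1)/2, confirming the Θ(n^2) worst case:

```python
def count_x(A):
    # Transcription of the pseudo-code: count pairs i <= j with A[i] > A[j].
    n = len(A)
    x = 0
    for i in range(n):
        for j in range(i, n):
            if A[i] > A[j]:
                x = x + 1
    return x
```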