What is the time complexity of this code?

int count = 0;
for (int i = N; i > 0; i = i / 2)
{
    for (int j = 0; j < i; j++)
    {
        count = count + 1;
    }
}

Please explain it clearly.
The inner loop does n iterations, then n/2, then n/4, and so on.

i takes the values n, n/2, n/4, n/8, ... for about log n outer iterations, and for each value of i the inner loop runs i times.

T(n) = n + n/2 + n/4 + n/8 + ... (log n terms)
     = n(1 + 1/2 + 1/4 + ...)    (a decreasing geometric series; the sum is less than 2)
     = O(n)

Therefore, the time complexity is O(n).
Here is an example. Let n = 10.

Initially i = 10 (outer loop). The inner condition j = 0 < 10 holds, so the inner loop iterates 10 times (j = 0 through 9).
After the nested loop finishes, i /= 2 executes, so i = 5 on the second outer iteration, and this time the inner loop iterates 5 times (j = 0 through 4).
Each outer iteration halves i, and the inner loop runs i times for the current value of i:

i      j
10     0-9  (10 times)
5      0-4  (5 times)
2      0-1  (2 times)
1      0    (1 time)

The per-pass counts start at n (here 10) and halve each time, forming a geometric progression, so the total is T(n) = O(n + n/2 + n/4 + ... + 1) = O(n).
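The halving argument can be checked empirically. Here is a quick sketch (count_iterations is a name I made up) that counts how many times the inner body runs and confirms the total stays under 2n:

```python
# Instrumented version of the loop above: count inner-body executions.
def count_iterations(N):
    count = 0
    i = N
    while i > 0:              # i = N, N/2, N/4, ..., 1
        for j in range(i):    # inner loop runs i times
            count += 1
        i //= 2
    return count
```

For n = 16 (a power of two) this reports 16 + 8 + 4 + 2 + 1 = 31 = 2*16 - 1 iterations.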
Related
I feel that even in the worst case the condition is true only twice, when j = i or j = i^2, so the loop runs for an extra i + i^2 times.
In the worst case, the sum of the two inner loops is theta(i^2) + i + i^2, which is equal to theta(i^2) itself;
summing theta(i^2) over the outer loop gives theta(n^3).
So, is the answer theta(n^3)?
I would say that the overall performance is theta(n^4). Here is your pseudo-code, given in text format:
for (i = 1 to n) do
for (j = 1 to i^2) do
if (j % i == 0) then
for (k = 1 to j) do
sum = sum + 1
Appreciate first that the j % i == 0 condition will only be true when j is a multiple of i. For a fixed i, this happens exactly i times (j = i, 2i, ..., i^2), so the final inner for loop is entered i times per pass of the j loop. That inner loop requires up to i^2 steps when j is near the end of the range, but only about i steps near the start. Summed over the outer loop, the overall work lands somewhere between the crude bounds O(n^3) and O(n^4), and theta(n^4) is in fact valid.
For fixed i, the i integers 1 <= j <= i^2 such that j % i == 0 are {i, 2i, ..., i^2}. It follows that the inner loop is executed i times, with arguments i*m for 1 <= m <= i, and the guard is executed i^2 times. Thus, the complexity function T(n) ∈ Θ(n^4) is given by:

T(n) = Σ[i=1..n] (Σ[j=1..i^2] 1 + Σ[m=1..i] Σ[k=1..i*m] 1)
     = Σ[i=1..n] Σ[j=1..i^2] 1 + Σ[i=1..n] Σ[m=1..i] Σ[k=1..i*m] 1
     = n^3/3 + n^2/2 + n/6 + Σ[i=1..n] Σ[m=1..i] Σ[k=1..i*m] 1
     = n^3/3 + n^2/2 + n/6 + n^4/8 + 5n^3/12 + 3n^2/8 + n/12
     = n^4/8 + 3n^3/4 + 7n^2/8 + n/4
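The closed form can be sanity-checked by brute force. The sketch below (T and closed_form are made-up names) counts one operation per guard evaluation plus one per innermost-loop step, mirroring the derivation above:

```python
# Brute-force the operation count of the pseudo-code and compare it with
# the closed form n^4/8 + 3n^3/4 + 7n^2/8 + n/4.
def T(n):
    ops = 0
    for i in range(1, n + 1):
        for j in range(1, i * i + 1):
            ops += 1          # one evaluation of the guard j % i == 0
            if j % i == 0:
                ops += j      # the innermost loop runs j times
    return ops

def closed_form(n):
    # n^4/8 + 3n^3/4 + 7n^2/8 + n/4, written over a common denominator of 8
    return (n**4 + 6 * n**3 + 7 * n**2 + 2 * n) // 8
```

For n = 2 both give 12: the guard runs 1 + 4 = 5 times and the innermost loop contributes 1 + (2 + 4) = 7 steps.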
For the following block of code, select the most appropriate run-time formula in terms of primitive operations needed for input of size n:
When resolving from the inside out, I get:
inner loop = 3n + 1
main loop + inner loop = 3 + (3n + 1) + log n = 4 + 3n + log n
extra steps + all loops = 4 + n(4 + 3n + log n) = 4 + 4n + 3n^2 + n*log n
This is the code to analyze:
def rate(n):
    total = 0
    i = 1
    while i < n:
        j = 0
        while j < n:
            total = i * j + total
            j = j + 1
        i = i * 2
    return total
and the answer is supposed to be --> f(n) = 4 + 4log(n) + log(n)*(3n)
I am actually coming up with O(NlgN) here for the overall running time. Appreciate that the inner loop in j is not dependent on the outer loop in i. The following should be true:
The outer loop in i is O(lgN), because i is doubling at each iteration, which is exponential behavior.
The inner loop in j is O(N), because j cycles from 0 to N at each iteration, regardless of the value of i.
We may therefore multiply together these complexities to get the overall complexity.
Note that for N of arbitrarily large size, your expression:
4 + 4log(n) + log(n)*(3n)
reduces to O(N lg N), since the log(n)*(3n) term dominates.
def rate(n):
    total = 0
    i = 1
    while i < n:       # this outer loop runs O(log(n)) times
        j = 0
        while j < n:   # this inner loop runs O(n) times per outer iteration
            total = i * j + total
            j = j + 1
        i = i * 2
    return total
Hence, the total runtime complexity of your implementation in big-O is O(log(n)) * O(n) = O(n log(n)).
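The independence of the two loops is easy to verify by counting. This sketch (iter_count is a made-up name) tallies executions of the inner-loop body in rate(n):

```python
# Count how many times the body of the inner loop in rate(n) executes.
def iter_count(n):
    count = 0
    i = 1
    while i < n:       # runs ceil(log2(n)) times for n > 1
        j = 0
        while j < n:   # runs n times on every outer pass, independent of i
            count += 1
            j += 1
        i *= 2
    return count
```

For n = 8 the outer loop sees i = 1, 2, 4 (three passes), so the count is 3 * 8 = 24, i.e. n * log2(n).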
This code snippet is supposed to have a complexity of O(n), yet I don't understand why.
sum = 0;
for (k = 1; k <= n; k *= 2) // For some arbitrary n
for (j = 1; j <= k; j++)
sum++;
Now, I understand that the outer loop by itself is O(log n), so why does adding the inner loop make this O(n)?
Let's assume that n is a power of 2 for a moment.
The final iteration of the inner loop will run n times. The iteration before that will run n/2 times, the second-to-last iteration n/4 times, and so on up until the first iteration which will run once. This forms a series which sums to 2n − 1 total iterations. This is O(n).
(For example, with n = 16, the inner loop runs 1 + 2 + 4 + 8 + 16 = 31 total times.)
Let m = floor(lg(n)). Then 2^m = C*n with 1/2 < C <= 1. The number k of steps in the inner loop goes like:

1, 2, 4, 8, ..., 2^m = 2^0, 2^1, ..., 2^m

Therefore the total number of operations is

2^0 + 2^1 + ... + 2^m = 2^(m+1) - 1    ; think binary
                      = 2*2^m - 1
                      = 2*C*n - 1      ; replace 2^m
                      = O(n)
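The same bound can be confirmed by simulation; this sketch (doubling_sum is a made-up name) computes the total number of sum++ increments:

```python
# Total number of inner-loop increments for the doubling outer loop.
def doubling_sum(n):
    total = 0
    k = 1
    while k <= n:        # k = 1, 2, 4, ..., 2^floor(lg n)
        total += k       # the inner loop runs k times for this k
        k *= 2
    return total
```

For n = 16 this gives 1 + 2 + 4 + 8 + 16 = 31, matching the worked example above, and the total never exceeds 2n - 1.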
I am stuck on a review question for my upcoming midterms, and any help is greatly appreciated.
Please see function below:
void george(int n) {
int m = n; //c1 - 1 step
while (m > 1) //c2 - log(n) steps
{
for (int i = 1; i < m; i++) //c3 - log(n)*<Stuck here>
int S = 1; //c4 - log(n)*<Stuck here>
m = m / 2; //c5 - (1)log(n) steps
}
}
I am stuck on the inner for loop since i is incrementing and m is being divided by 2 after every iteration.
If m = 100:
1st outer iteration, m = 100: the for loop runs 99 times (i = 1 to 99), plus 1 for the last check
2nd outer iteration, m = 50: the for loop runs 49 times, plus 1 for the last check
..... and so on
Would this also be considered log(n) since m is being divided by 2?
The outer loop executes log(n) times.
The inner loop executes n + n/2 + n/4 + ... + 1 ≈ 2n times in total (a geometric progression sum).
The overall time is O(n + log(n)) = O(n).
Note: if we replace i < m with i < n in the inner loop, we obtain O(n*log(n)) complexity, because in that case the inner loops perform n + n + n + ... + n operations, where the number of summands is log(n).
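The geometric-series bound for george(n) can also be checked by counting iterations directly (george_count is a made-up name):

```python
# Count the total number of inner for-loop iterations in george(n).
def george_count(n):
    count = 0
    m = n
    while m > 1:                # m = n, n/2, n/4, ..., 2
        for i in range(1, m):   # the for loop body runs m - 1 times
            count += 1
        m //= 2
    return count
```

For n = 100 the passes contribute 99 + 49 + 24 + 11 + 5 + 2 = 190 iterations, comfortably under 2n = 200.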
def mystery(L):
    sum1 = 0
    sum2 = 0
    bound = 1
    while bound <= len(L):
        i = 0
        while i < bound:
            j = 0
            while j < len(L):
                if L[j] > L[i]:
                    sum1 = sum1 + L[j]
                j = j + 2
            j = 1
            while j < len(L):
                sum2 = sum2 + L[j]
                j = j * 2
            i = i + 1
        bound = bound * 2
    return sum1 + sum2
I am having trouble finding the complexity of this function. I got to the i loop and don't know what to do.
It's a bit tricky to sort out how many times the middle-level while loop runs. The outer loop increases bound by a factor of two on each pass (up to len(L)), which means the i loop will run O(bound) times per pass for O(log(N)) passes (where N is len(L)). The tricky part is how to add up the bound values, since they're changing on each pass.
I think the easiest way to figure out the sum is to start with the largest bound, just before the loop quits. First, let's assume that N (aka len(L)) is a power of 2. Then the last bound value will be exactly N, the one before it (used on the next-to-last iteration) will be N/2, and the one before that N/4. Their sum will be:
N + N/2 + N/4 + N/8 + ... + 1
If we factor out N from each term, we'll get:
N*(1 + 1/2 + 1/4 + 1/8 + ... + 1/N)
You should recognize the sum in the parentheses: it's a simple geometric series (the sum of the powers of 1/2), which comes up often in mathematics and analysis. If the sum went on forever, it would add up to exactly 2. Since we quit a bit early, it is less than two by an amount equal to the last term (1/N). Multiplying the factor of N back in, the middle loop body runs 2*N - 1 times in total, so the loop is O(N).
The same Big-O bound works when N is not exactly a power of 2, since the values we added up in the analysis above will each serve as the upper bound for one of the actual bound values we will see in the loop.
So, the i loop runs O(N) times.
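The bound-summing argument above can be sketched directly (i_loop_count is a made-up name); it adds up how many times the i-loop body runs for a power-of-two N:

```python
# Sum the bound values seen by mystery's middle (i) loop.
def i_loop_count(N):
    count = 0
    bound = 1
    while bound <= N:     # bound = 1, 2, 4, ..., N (for N a power of 2)
        count += bound    # the i loop runs `bound` times on this pass
        bound *= 2
    return count
```

For N = 16 this gives 1 + 2 + 4 + 8 + 16 = 31 = 2*16 - 1, in line with the 2*N - 1 total derived above.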