Given the following code:
for (int i = 1; i <= N; i++)
    for (int j = 1; j <= N; j = j + i)
    {
        // Do something
    }
I know that the outer loop runs N times, and that across all outer iterations the inner loop averages about log(N) iterations. This is because for i = 1, 2, 3, ..., j runs roughly ceil(N), ceil(N/2), ceil(N/3) times and so on. This is just a rough calculation from which one can guess that the time complexity will be O(N log(N)).
How would I mathematically prove the same?
I know that for the i-th iteration of the outer loop, j increases in steps of i, so the inner loop runs about ceil(N/i) times.
The total number of iterations of the inner loop for each value of i will be
i = 1: j = 1, 2, 3, ..., n ---> total iterations = n
i = 2: j = 1, 3, 5, ..., n ---> total iterations = ceil(n/2) ≈ n/2
i = 3: j = 1, 4, 7, ..., n ---> total iterations = ceil(n/3) ≈ n/3
...
i = m: j = 1, 1 + m, ..., n ---> total iterations ≈ n/m
...
i = n: j = 1 ---> total iterations = 1
So approximately the total iterations will be (n + n/2 + n/3 + ... + 1).
That sum is the harmonic series, whose value is approximately ln(n) + C, so the total number of iterations is approximately n ln(n). Since all logarithms differ only by a constant factor, the total is O(n log n).
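As a sanity check (not part of the original answer), one can simply count the iterations and compare against n*ln(n). Here is a minimal Java sketch; the class name and the choice of n are only illustrative:

public class HarmonicLoopCheck {
    public static void main(String[] args) {
        int n = 1_000_000;                 // illustrative input size
        long iterations = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n; j += i) {
                iterations++;              // stands in for "Do something"
            }
        }
        // The measured count should be close to n * (ln(n) + C), i.e. the harmonic sum times n.
        System.out.println("measured  = " + iterations);
        System.out.println("n * ln(n) = " + (long) (n * Math.log(n)));
    }
}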
How can we find the time complexity of this loop?
int c = 0;
int j = 1;
while (j < n^3) {
    c += 1;
    System.out.println(c);
    j = j * 4;
}
Since j is multiplied by 4 on every iteration, after k iterations its value can be written as:
1, 4, 4^2, ..., 4^k
For the loop condition to become false, we need
4^k >= n^3
so
k >= log_4(n^3)
You can simplify this further: log_4(n^3) = 3*log_4(n), and we drop the constant 3 as well as the base of the logarithm, since all logarithms differ only by a constant factor. That leaves
k = O(log n)
This should be the complexity of your loop.
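To see this concretely, here is a small Java sketch (the class name and the example value of n are made up for illustration) that counts the iterations and compares them with 3*log_4(n):

public class LogLoopCheck {
    public static void main(String[] args) {
        long n = 1000;                     // illustrative input
        long cube = n * n * n;             // n^3 in the question means n cubed
        long j = 1;
        int iterations = 0;
        while (j < cube) {
            iterations++;
            j *= 4;
        }
        // Expect roughly log_4(n^3) = 3 * log_4(n) iterations.
        double expected = 3 * Math.log(n) / Math.log(4);
        System.out.println("measured = " + iterations + ", 3*log_4(n) = " + expected);
    }
}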
I want to know the time complexity of a for loop for (i = m; i <= n; i++) where m and n are both variables. I am thinking it will be O(n - m), as the loop depends on the values of both m and n.
Please guide me!
Explanation
Assuming your loop only executes statements that run in constant time, i.e. O(1), you simply have to count the amount of loop iterations.
Your loop head
for (i = m; i <= n; i++)
will generate iterations with i being m, m + 1, m + 2, m + 3, ... , n - 2, n - 1, n. So from m to n, both ends inclusive.
So exactly n - m + 1 iterations (simple example 2, 3, 4, 5 with 5 - 2 + 1 = 4).
Thus, the asymptotic time complexity is
O(n - m + 1) = O(n - m)
Like you said.
Yes indeed, it is O(n - m + 1), since the loop starts at m and runs until it reaches n.
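A tiny Java sketch (class name and values chosen only for illustration) confirms the count:

public class RangeLoopCheck {
    public static void main(String[] args) {
        int m = 2, n = 5;                  // the example bounds used above
        int iterations = 0;
        for (int i = m; i <= n; i++) {
            iterations++;
        }
        // Prints "4 == 4", i.e. exactly n - m + 1 iterations.
        System.out.println(iterations + " == " + (n - m + 1));
    }
}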
This code snippet is supposed to have a complexity of O(n), yet I don't understand why.
sum = 0;
for (k = 1; k <= n; k *= 2)    // For some arbitrary n
    for (j = 1; j <= k; j++)
        sum++;
Now, I understand that the outer loop by itself is O(log n), so why does adding the inner loop make this O(n)?
Let's assume that n is a power of 2 for a moment.
The final iteration of the inner loop will run n times. The iteration before that will run n/2 times, the second-to-last iteration n/4 times, and so on up until the first iteration which will run once. This forms a series which sums to 2n − 1 total iterations. This is O(n).
(For example, with n = 16, the inner loop runs 1 + 2 + 4 + 8 + 16 = 31 total times.)
Let m = floor(lg(n)). Then 2^m = C*n with 1/2 < C <= 1. The number k of steps in the inner loop goes like:
1, 2, 4, 8, ..., 2^m = 2^0, 2^1, ..., 2^m
Therefore the total number of operations is
2^0 + 2^1 + ... + 2^m = 2^(m+1) - 1    ; think binary
                      = 2*2^m - 1
                      = 2*C*n - 1      ; replace 2^m = C*n
                      <= 2*n - 1
                      = O(n)
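If it helps, here is a minimal Java sketch (the class name and the particular power of two are arbitrary) that runs the two loops and checks the 2*n - 1 total claimed above:

public class DoublingLoopCheck {
    public static void main(String[] args) {
        int n = 1 << 20;                   // a power of two, as assumed above
        long sum = 0;
        for (int k = 1; k <= n; k *= 2) {
            for (int j = 1; j <= k; j++) {
                sum++;
            }
        }
        // For n a power of two, the inner loop runs 1 + 2 + 4 + ... + n = 2*n - 1 times in total.
        System.out.println("sum = " + sum + ", 2*n - 1 = " + (2L * n - 1));
    }
}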
I am stuck on a review question for my upcoming midterms, and any help is greatly appreciated.
Please see function below:
void george(int n) {
    int m = n;                          // c1 - 1 step
    while (m > 1)                       // c2 - log(n) steps
    {
        for (int i = 1; i < m; i++)     // c3 - log(n) * <stuck here>
            int S = 1;                  // c4 - log(n) * <stuck here>
        m = m / 2;                      // c5 - log(n) steps
    }
}
I am stuck on the inner for loop, since i is incrementing while m is divided by 2 after every iteration of the outer loop.
If m = 100:
1st iteration, m = 100: the loop runs 100 times; i iterates 100 times + 1 for the last check
2nd iteration, m = 50: the loop runs 50 times; i iterates 50 times + 1 for the last check
... and so on
Would this also be considered log(n) since m is being divided by 2?
The outer loop executes log(n) times.
The inner loop executes n + n/2 + n/4 + ... + 1 ≈ 2*n times in total (sum of a geometric progression).
The overall time is O(n + log(n)) = O(n).
Note: if we replace i < m with i < n in the inner loop, we obtain O(n*log(n)) complexity, because in that case we have n + n + n + ... + n operations for the inner loops, with log(n) summands.
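As a rough check (not part of the original answer), the following Java sketch counts the inner-loop steps of george and compares them with 2*n; the class name and the value of n are only illustrative:

public class GeorgeCheck {
    public static void main(String[] args) {
        int n = 1_000_000;                 // illustrative input
        long innerSteps = 0;
        int m = n;
        while (m > 1) {
            for (int i = 1; i < m; i++) {
                innerSteps++;              // corresponds to the "int S = 1;" statement
            }
            m = m / 2;
        }
        // Expect roughly n + n/2 + n/4 + ... ≈ 2*n inner steps.
        System.out.println("inner steps = " + innerSteps + ", 2*n = " + (2L * n));
    }
}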
In the book Programming Interviews Exposed it says that the complexity of the program below is O(N), but I don't understand how this is possible. Can someone explain why this is?
int var = 2;
for (int i = 0; i < N; i++) {
    for (int j = i+1; j < N; j *= 2) {
        var += var;
    }
}
You need a bit of math to see that. The inner loop iterates Θ(1 + log [N/(i+1)]) times (the 1 + is necessary since for i >= N/2, [N/(i+1)] = 1 and the logarithm is 0, yet the loop iterates once). j takes the values (i+1)*2^k until it is at least as large as N, and
(i+1)*2^k >= N <=> 2^k >= N/(i+1) <=> k >= log_2 (N/(i+1))
using mathematical division. So the update j *= 2 is called ceiling(log_2 (N/(i+1))) times and the condition is checked 1 + ceiling(log_2 (N/(i+1))) times. Thus we can write the total work
∑_{i=0}^{N-1} (1 + log(N/(i+1))) = N + N*log N - ∑_{j=1}^{N} log j
                                 = N + N*log N - log N!
Now, Stirling's formula tells us
log N! = N*log N - N + O(log N)
so we find the total work done is indeed O(N).
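One can also check this empirically. The Java sketch below (the class name and the tested values of N are arbitrary) counts how often the inner-loop body runs; the ratio to N stays bounded by a small constant, which is what O(N) predicts:

public class InnerDoublingCheck {
    public static void main(String[] args) {
        for (int N = 1_000; N <= 1_000_000; N *= 10) {
            long bodyRuns = 0;
            for (int i = 0; i < N; i++) {
                for (long j = i + 1; j < N; j *= 2) {
                    bodyRuns++;            // corresponds to "var += var;"
                }
            }
            System.out.println("N = " + N + ", body runs = " + bodyRuns
                    + ", ratio = " + (double) bodyRuns / N);
        }
    }
}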
Outer loop runs n times. Now it all depends on the inner loop.
The inner loop now is the tricky one.
Let's follow:
i = 0 --> j = 1 ---> log(n) iterations
...
i = (n/2) - 1 --> j = n/2 ---> 1 iteration
i = (n/2) --> j = (n/2) + 1 ---> 1 iteration

Grouping the values of i by the number of inner-loop iterations they produce:
i > (n/2) ---> 1 iteration
(n/2) - 1 >= i > (n/4) ---> 2 iterations
(n/4) >= i > (n/8) ---> 3 iterations
(n/8) >= i > (n/16) ---> 4 iterations
(n/16) >= i > (n/32) ---> 5 iterations
Summing over all of these groups, the total number of inner-loop iterations is approximately
(n/2)*1 + (n/4)*2 + (n/8)*3 + (n/16)*4 + ... + [n/(2^i)]*i
n * ∑_{i=0}^{N-1} (i / 2^i) <= 2*n    (using the identity ∑_{i>=1} i/2^i = 2)
--> O(n)
@Daniel Fischer's answer is correct. I would only add that this algorithm's exact running time can be worked out along the same lines, not just the asymptotic bound, and it agrees with the O(N) result.