What will be the worst-case time complexity for this?

for (int i = 1; i < n; i *= 2)
    for (int j = 1; j < i; j *= 2)
Can anyone explain this to me?
I think it is log(n) * log(i). Is that correct?

Assuming
for (i = 1; i < n; i *= 2)
for (j = 1; j < i; j *= 2)
...stuff...
"stuff" will be run 1 + 2 + 3 + ... + log(n)-1 times. Since the sum of integers 1 to N is N * (N + 1) / 2, worse case run time is O(log(n) ^ 2).


nested for loop time complexity with function calls

for (i = 1; i < a; i++) {
    for (j = 1; j < b; j = j + 3) {
        if ((i + j) % 2 == 0)
            Func();
    }
}
In this case, I thought it is O(a*b) and Theta(a*b).
Did I analyze the Complexity correctly?
First of all, you probably mean
if ((i + j) % 2 == 0)
instead of
if (i + j % 2 == 0)
since when i is positive and j % 2 is non-negative, i + j % 2 is always positive and thus never equals zero: Func() would not run at all.
Your answer is correct: the complexity is
a *     // from the first loop
b / 3 * // from the second loop
1/2     // from the condition (it holds for about half of the (i, j) pairs)
So you have
Θ(a * b / 3 * 1/2) = Θ(ab)

Running Time of Nested Loops

I am sure the running time of this nested loop is O(N*log(N)): the running time of the inner loop is log(N), and the outer loop runs N times.
for (int i = 0; i < N; ++i) {
for (int j = 1; j <= i; j *= 2) {
}
}
What if I change j *= 2 to j *= 3 in the inner loop? How is the result going to change in this case?
@Kevin is completely right, but I thought I would show some experimental results. You can easily test this out by creating a counter that gets incremented inside each inner loop iteration and running for different values of N. Then a fit can be made of the form time = a * N * log(N). For the case j *= 2, we get a coefficient a = 1.28. For j *= 3, we get a = 0.839.
I generated this figure using the MATLAB script below:
clear
clc
close all
nmin = 10;
nmax = 1000;
count1 = zeros(nmax - nmin + 1, 1);
for n = nmin: nmax
k = 0;
for i = 0: n - 1
j = 1;
while (j <= i)
j = j * 2;
k = k + 1;
end
end
count1(n - nmin + 1) = k;
end
ns = (nmin: nmax)';
figure
hold on
plot(ns, count1, '--')
a1 = mean(count1 ./ (ns .* log(ns)))
fit1 = a1 * ns .* log(ns);
plot(ns, fit1)
%%
count2 = zeros(nmax - nmin + 1, 1);
for n = nmin: nmax
k = 0;
for i = 0: n - 1
j = 1;
while (j <= i)
j = j * 3;
k = k + 1;
end
end
count2(n - nmin + 1) = k;
end
ns = (nmin: nmax)';
plot(ns, count2, '-.')
a2 = mean(count2 ./ (ns .* log(ns)))
fit2 = a2 * ns .* log(ns);
plot(ns, fit2)
xlabel('N')
ylabel('Time complexity')
legend('j *= 2', 'j *= 2 fit', 'j *= 3', 'j *= 3 fit', 'Location', 'NorthWest')
It will still be logarithmic. However, it will be scaled by a constant factor (which is irrelevant in Big O analysis).
The effect is that the base of the logarithm changes (see https://en.wikipedia.org/wiki/Logarithm#Change_of_base).
---------- j = 2 * j, for j < i ----------
j = 2*1 = 2  => 2^1
    2*2 = 4  => 2^2
    2*4 = 8  => 2^3
    ...
The loop stops when 2^k = i.
Taking the log of both sides of 2^k = i:
log(2^k) = log(i)
k = log_2(i), where 2 is the base
---------- j = 3 * j, for j < i ----------
j = 3*1 = 3  => 3^1
    3*3 = 9  => 3^2
    3*9 = 27 => 3^3
    ...
The loop stops when 3^k = i.
Taking the log of both sides of 3^k = i:
log(3^k) = log(i)
k = log_3(i), where 3 is the base

Complexity of triple-nested Loop

I have the following algorithm to find all triples
for (int i = 0; i < N; i++)
for (int j = i+1; j < N; j++)
for (int k = j+1; k < N; k++)
if (a[i] + a[j] + a[k] == 0)
{ cnt++; }
I know that I have a triple loop that checks all triples. How can I show that the number of different triples that can be chosen from N items is precisely N*(N-1)*(N-2)/6?
If we have two loops
for (int i = 0; i < N; i++)
for (int j = i+1; j < N; j++)
...
when i = 0 we go to the second loop N-1 times
i = 1 => N-2 times
...
i = N-2 => 1 time
i = N-1 => 0 times
So 0 + 1 + 2 + 3 + ... + (N-2) + (N-1) = N(N-1)/2 = (N^2 - N)/2
But how to do the same proof with triples?
Ok, I'll do this step by step. It's more of a maths problem than anything.
Use a sample array for visualisation:
[1, 5, 7, 11, 6, 3, 2, 8, 5]
The first time the 3rd nested loop begins at 7, correct?
And the 2nd at 5, and the 1st loop at 1.
The 3rd nested loop is the important one.
It will loop through n-2 times. Then the 2nd loop increments.
At this point the 3rd loop loops through n-3 times.
We keep adding these until we get:
[(n-2) + (n-3) + ... + 2 + 1 + 0]
Then the 1st loop increments, so we begin at n-3 this time.
So we get:
[(n-3) + ... + 2 + 1 + 0]
Adding them all together we get:
[(n-2) + (n-3) + ... + 2 + 1 + 0] +
[(n-3) + ... + 2 + 1 + 0] +
[(n-4) + ... + 2 + 1 + 0] +
.
. <- (n-2 times)
.
[2 + 1 + 0] +
[1 + 0] +
[0]
We can rewrite this as:
(n-2)(1) + (n-3)(2) + (n-4)(3) + ... + (3)(n-4) + (2)(n-3) + (1)(n-2)
Which in maths notation we can write like this:
sum_{k=1}^{n-2} k * (n - 1 - k)
Make sure you look up the additive properties of summations. (Back to college maths!)
We have
sum_{k=1}^{n-2} k * (n - 1 - k)
= (n-1) * sum_{k=1}^{n-2} k - sum_{k=1}^{n-2} k^2
= (n-1) * (n-2)(n-1)/2 - (n-2)(n-1)(2n-3)/6
= (n-2)(n-1) * [3(n-1) - (2n-3)] / 6
= n(n-1)(n-2)/6
Which has a complexity of O(n^3).
One way is to realize that the number of such triples is equal to C(n, 3):
C(n, 3) = n! / (3! (n-3)!) = n(n-1)(n-2)/6
Another is to count what your loops do:
for (int i = 0; i < N; i++)
for (int j = i+1; j < N; j++)
for (int k = j+1; k < N; k++)
You've already shown that two loops from 0 to n-1 execute n(n-1)/2 operations.
For i = 0, the inner loops execute (n-1)(n-2)/2 operations.
For i = 1, the inner loops execute (n-2)(n-3)/2 operations.
...
For i = N - 1, the inner loops execute 0 operations.
We have:
(n-1)(n-2)/2 + (n-2)(n-3)/2 + ... =
= sum(i = 1 to n) {(n - i)(n - i - 1)/2}
= 1/2 sum(i = 1 to n) {n^2 - ni - n - ni + i^2 + i}
= 1/2 sum(i = 1 to n) {n^2} - sum{ni} - 1/2 sum{n} + 1/2 sum{i^2} + 1/2 sum{i}
= 1/2 n^3 - n^2(n+1)/2 - 1/2 n^2 + n(n+1)(2n+1)/12 + n(n+1)/4
Which reduces to the right formula, but it gets too ugly to continue here. You can check that it does on Wolfram.

How to calculate Running time of an algorithm

I have the following algorithm below :
for(i = 1; i < n; i++){
SmallPos = i;
Smallest = Array[SmallPos];
for(j = i + 1; j <= n; j++)
if (Array[j] < Smallest) {
SmallPos = j;
Smallest = Array[SmallPos];
}
Array[SmallPos] = Array[i];
Array[i] = Smallest;
}
Here is my calculation :
For the nested loop, I find a time complexity of
1 ("int i = 0") + n+1 ("i < n") + n ("i++")
* [1 ("j = i + 1") + n+1 ("j < n") + n ("j++")]
+ 3n (for the if-statement and the statements in it)
+ 4n (the 2 statements before the inner for-loop and the final 2 statements after the inner for-loop).
This is (1 + n + 1 + n)(1 + 1 + n + n) + 7n = (2 + 2n)(2 + 2n) + 7n = 4n^2 + 15n + 4.
But unfortunately, the text book got T(n) = 2n^2 +4n -5.
Please, anyone care to explain to me where I got it wrong?
Here is a formal manner to represent your algorithm mathematically (sigma notation):
T(n) = sum_{i=1}^{n-1} ( c + sum_{j=i+1}^{n} c' )
Replace c by the number of operations in the outer loop, and c' by the number of operations in the inner loop.

Instruction execution of a C++ code

Hello, I have an algorithm in C++ and I want to count the instructions executed. The code is below:
cin >> n;
for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
        A[i][j] = 0;
for (i = 1; i <= n; i++)
    A[i][i] = 1;
Now, after my calculation, I got T(n) = n^2 + 8n - 5. I just need someone else to verify whether I am correct. Thanks.
Ok, let's do an analysis step by step.
The first instruction
cin >> n
counts as one operation: 1.
Then the loop
for(i=1;i<=n;i++)
for (j = 1; j <= n; j ++)
A[i][j] = 0;
Let's go from the inside out. The j loop performs n array assignments (A[i][j] = 0), (n + 1) j <= n comparisons and n j++ increments. It also performs the assignment j = 1 once. So in total this gives: n + (n + 1) + n + 1 = 3n + 2.
Then the outer i loop performs (n + 1) i <= n comparisons, n i++ increments, and executes the j loop n times. It also performs one i = 1 assignment. This results in: n + (n + 1) + n * (3n + 2) + 1 = 3n^2 + 4n + 2.
Finally, the last for loop performs n array assignments, (n + 1) i <= n comparisons and n i++ increments. It also performs one assignment i = 1. This results in: n + (n + 1) + n + 1 = 3n + 2.
Now, adding up the three parts we get:
(1) + (3n^2 + 4n + 2) + (3n + 2) = 3n^2 + 7n + 5 = T(n) total operations.
The time function is equivalent, assuming that assignments, comparisons, additions and cin are all done in constant time. That would yield an algorithm of complexity O(n^2).
This is of course assuming that n >= 1.
