for (int i = 1; i < N; i *= 2) { ... }
Things like that are the signatures of logarithmic complexity.
But how do we get log(N)? Could you give a mathematical proof?
Useful reference on algorithmic complexity: http://en.wikipedia.org/wiki/Big_O_notation
On the nth iteration,
i = 2^n
The loop runs only while i < N, so on every iteration that actually executes,
2^n = i < N
Taking the base-2 logarithm of both sides,
log2 (2^n) < log2 N
n < log2 N
So the iteration count n is always less than log2 N; in other words, # iterations is O(log N).
QED. Logarithmic complexity.
Multiplying N by 2 adds one more iteration, regardless of the size of N. That's pretty much the definition of the log function -- it goes up by a constant amount every time you multiply N by a constant.
Your loop runs as long as i < N, and each step does i *= 2. We say a loop has logarithmic complexity if it runs log(N) + const times. Since 2^log(N) = N, after floor(log(N)) + 1 steps we have i > N and the loop stops.
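As a quick sanity check (a snippet of my own, not from the answers above), you can count the iterations for growing N and compare against log2(N):

public class DoublingLoop {
    public static void main(String[] args) {
        // Count iterations of the doubling loop for growing N and
        // compare against log2(N); the count equals ceil(log2(N)).
        for (int N = 10; N <= 1_000_000; N *= 10) {
            int iterations = 0;
            for (int i = 1; i < N; i *= 2) {
                iterations++;
            }
            double log2N = Math.log(N) / Math.log(2);
            System.out.printf("N=%-8d iterations=%-3d log2(N)=%.2f%n",
                    N, iterations, log2N);
        }
    }
}

Multiplying N by 10 adds only about 3.3 iterations each time, which is exactly the behavior the proof predicts.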
I found this example problem on the internet that I just cannot understand how the author came to their conclusion.
sum1 = 0;
for (k = 1; k <= n; k *= 2)    // Do log n times
    for (j = 1; j <= n; j++)   // Do n times
        sum1++;

sum2 = 0;
for (k = 1; k <= n; k *= 2)    // Do log n times
    for (j = 1; j <= k; j++)   // Do k times
        sum2++;
I understand that the running time of the first fragment is O(n log n), but the author claims that the second fragment runs in O(n).
Why is that?
The closest I can get to an answer is:
time = k * log(n)
k = 2^i
time = 2^i * log(n) ----> this is where I get stuck
I'm guessing some property of logarithms is used, but I can't figure out which one. Can someone point me in the right direction?
Thanks.
In the second example, the total number of operations of the inner loop is sum_j 2^j, summed over the values k = 2^j taken by the outer loop.
Since 2^j <= n, there are about log n terms.
The sum equals 2^{jmax+1} - 1, where 2^{jmax} is the largest power of 2 that is <= n.
So the total is at most 2n - 1, effectively a complexity of O(2n) = O(n).
sum2++ is executed 1+2+4+8+...+K times, where K is the largest power of 2 less than or equal to n. That sum is equal to 2K-1.
Since n/2 < K <= n (because K is the largest power of 2 less than or equal to n), the number of iterations is between n-1 and 2n-1. That's Theta(n) if you want to express it in asymptotic notation.
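To see the bound numerically, here is a small test of my own (not from the original thread) that runs the second fragment and compares the count of sum2++ with 2K - 1:

public class Sum2Count {
    public static void main(String[] args) {
        // Count executions of sum2++ and compare with 2K - 1, where K
        // is the largest power of 2 that is <= n.
        for (int n = 10; n <= 1_000_000; n *= 10) {
            long sum2 = 0;
            for (int k = 1; k <= n; k *= 2) {
                for (int j = 1; j <= k; j++) {
                    sum2++;
                }
            }
            int K = Integer.highestOneBit(n);  // largest power of 2 <= n
            System.out.println("n=" + n + "  sum2=" + sum2
                    + "  2K-1=" + (2L * K - 1));
        }
    }
}

The two columns match exactly, and both stay below 2n, which is the Theta(n) claim.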
The algorithm below has runtime O(n) according to our professor, but I am confused as to why it is not O(n log n), since the outer loop can run up to log(n) times and the inner loop can run up to n times.
Algorithm Loop5(n)
    i = 1
    while i ≤ n
        j = 1
        while j ≤ i
            j = j + 1
        i = i * 2
Your professor was right: the running time is O(n).
In the outer while-loop iteration where i = 2^k, for k = 0, 1, ..., log n, the inner while-loop makes O(i) = O(2^k) iterations. (When I say log n I mean the base-2 logarithm log_2 n.)
The running time is O(1+2+2^2+2^3+...+2^k) for k=floor(log n). This sums to O(2^{k+1}) which is O(2^{log n}). (This follows from the formula for the partial sum of geometric series.)
Because 2^{log n} = n, the total running time is O(n).
For the interested, here's a proof that the powers of two really sum to what I claim they sum to. (This is a very special case of a more general result.)
Claim. For any natural k, we have 1+2+2^2+...+2^k = 2^{k+1}-1.
Proof. Note that (2-1)*(1+2+2^2+...+2^k) = (2 - 1) + (2^2 - 2) + ... + (2^{k+1} - 2^k). All terms 2^i with 0 < i < k+1 cancel out, leaving only -2^0 = -1 and 2^{k+1}, so we are left with 2^{k+1} - 1. QED.
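If you'd rather check the identity than read the proof, here is a tiny verification sketch of my own:

public class PowerSumCheck {
    public static void main(String[] args) {
        // Verify 1 + 2 + 2^2 + ... + 2^k = 2^{k+1} - 1 for k = 0..20.
        long sum = 0;
        for (int k = 0; k <= 20; k++) {
            sum += 1L << k;                          // add 2^k
            long closedForm = (1L << (k + 1)) - 1;
            if (sum != closedForm) {
                throw new AssertionError("mismatch at k=" + k);
            }
        }
        System.out.println("1 + 2 + ... + 2^k = 2^{k+1} - 1 holds for k = 0..20");
    }
}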
I am trying to calculate Theta(n) of the following algorithm
for i = 1 -> n
    for j = 1 -> n
        B[i,j] = findMax(A, i, j)

findMax(A, i, j)
    if j < i
        return 0
    else
        max = A[i]
        for k = i + 1 -> j
            if max < A[k]
                max = A[k]
        return max
I know that O, Ω, and Θ roughly translate to
O ≈ ≤
Ω ≈ ≥
Θ ≈ =
For the algorithm I think Ω(n^2) and O(n^3), but I'm not sure what Θ would be. Any ideas?
If you calculate the number of times the code line
if max < A[k]
is executed as a function of n, you get Theta(n^3) executions. Thus the running time of your algorithm is Theta(n^3) as well.
Theta(n^3). There are n^2 iterations of the nested for loop. Approximately half of them, those with j < i, return immediately and run in O(1) time, contributing n^2/2 = Theta(n^2) work in total. The other half have an average difference j - i of about n/2, so on average each takes Theta(n) time; together they contribute about n^2/2 * n/2 = n^3/4 = Theta(n^3) work. Thus the total runtime is Theta(n^3).
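To pin down the constant (my own instrumentation, not part of either answer), you can count executions of the comparison line directly; the exact total is (n^3 - n)/6, which is Theta(n^3):

public class FindMaxCount {
    public static void main(String[] args) {
        // For each pair (i, j) with j >= i, the loop in findMax runs
        // j - i times; summing over all pairs gives (n^3 - n) / 6.
        for (int n = 10; n <= 1000; n *= 10) {
            long comparisons = 0;
            for (int i = 1; i <= n; i++) {
                for (int j = 1; j <= n; j++) {
                    if (j >= i) {
                        comparisons += j - i;
                    }
                }
            }
            long formula = ((long) n * n * n - n) / 6;
            System.out.println("n=" + n + "  comparisons=" + comparisons
                    + "  (n^3-n)/6=" + formula);
        }
    }
}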
What is the (a) worst case, (b) best case, and (c) average case complexity of the following function which does matrix multiplication
for i = 1 to n do
    for j = 1 to n do
        C[i,j] = 0
        for k = 1 to n do
            C[i,j] = C[i,j] + A[i,k]*B[k,j]
        end {for}
    end {for}
end {for}
How would you justify the complexity?
i, j and k all go from 1 to n.
Therefore the best, average, and worst cases are O(n * n * n) = O(n^3)
For each of the n possible values of i, there are n values of j, and for each of those there are n values of k.
That gives n * n * n executions of the inner loop.
O(n^3): each of the three nested loops runs over all n values, so the innermost statement executes n × n × n = n^3 times.
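A runnable Java translation (a sketch of my own) that counts the innermost statement makes the justification concrete; the count is exactly n^3 no matter what the matrices contain:

public class MatMulCount {
    public static void main(String[] args) {
        int n = 100;
        double[][] A = new double[n + 1][n + 1];  // 1-based indexing
        double[][] B = new double[n + 1][n + 1];
        double[][] C = new double[n + 1][n + 1];
        long innerOps = 0;

        // The triple loop from the question: its iteration count
        // depends only on n, never on the matrix contents, which is
        // why best, average, and worst cases coincide.
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n; j++) {
                C[i][j] = 0;
                for (int k = 1; k <= n; k++) {
                    C[i][j] = C[i][j] + A[i][k] * B[k][j];
                    innerOps++;
                }
            }
        }
        System.out.println("n=" + n + "  innerOps=" + innerOps
                + "  n^3=" + (long) n * n * n);
    }
}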
public void foo(int n, int m) {
    int i = m;
    while (i > 100) {
        i = i / 3;
    }
    for (int k = i; k >= 0; k--) {
        for (int j = 1; j < n; j *= 2) {
            System.out.print(k + "\t" + j);
        }
        System.out.println();
    }
}
I figured the complexity would be O(log n).
That comes from the inner loop; the outer loop will never execute more than about 100 times, so it can be omitted.
What I'm not sure about is the while loop: should it be incorporated into the Big-O complexity? For very large m it could make an impact. Or do arithmetic operations, regardless of scale, count as basic operations that can be omitted?
The while loop is O(log m) because you keep dividing m by 3 until it drops to 100 or below.
Since 100 is a constant in your case, it can be ignored, yes.
The inner loop is O(log n) as you said, because you multiply j by 2 until it exceeds n.
Therefore the total complexity is O(log n + log m).
Or do arithmetic operations, regardless of scale, count as basic operations that can be omitted?
Arithmetic operations can usually be omitted, yes. However, it also depends on the language. This looks like Java and it looks like you're using primitive types. In this case it's ok to consider arithmetic operations O(1), yes. But if you use big integers for example, that's not really ok anymore, as addition and multiplication are no longer O(1).
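For instance (an illustration of my own, not from the answer), timing java.math.BigInteger multiplication shows the cost growing with operand size, whereas a primitive multiplication takes constant time:

import java.math.BigInteger;

public class BigMulTiming {
    public static void main(String[] args) {
        // BigInteger multiplication cost grows with the operands'
        // bit length; primitive int/long multiplication does not.
        for (int bits = 10_000; bits <= 1_000_000; bits *= 10) {
            BigInteger a = BigInteger.ONE.shiftLeft(bits).subtract(BigInteger.ONE);
            long t0 = System.nanoTime();
            a.multiply(a);
            long t1 = System.nanoTime();
            System.out.println(bits + " bits: " + (t1 - t0) / 1_000 + " us");
        }
    }
}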
The complexity is O(log m + log n).
The while loop executes about log3(m) - log3(100) times, i.e., log3(m) minus a constant, which is O(log m). The outer for loop executes a constant number of times (at most about 101, since i <= 100 when it begins), and the inner loop executes about log2(n) times.
The while loop divides the value of m by 3 on each step, so the number of such operations is about log(base 3) m.
For the for loops you can write the number of operations as a double summation:
sum (k = 0 to i) [ sum (j = 0 to lg n) 1 ] = sum (k = 0 to i) [lg n + 1] = (i + 1)(lg n + 1)
Since i <= 100 by the time the for loops start, (i + 1) is a constant and the log term dominates.
That's why the complexity is O(log (base 3) m + lg n).
Here the lg refers to log to base 2.
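To check this empirically, here is an instrumented variant of my own that counts both kinds of loop steps:

public class FooCount {
    // whileSteps should track log3(m) minus the constant log3(100);
    // innerSteps is bounded by about 101 * log2(n).
    static long whileSteps, innerSteps;

    static void foo(int n, int m) {
        whileSteps = 0;
        innerSteps = 0;
        int i = m;
        while (i > 100) {
            i = i / 3;
            whileSteps++;
        }
        for (int k = i; k >= 0; k--) {
            for (int j = 1; j < n; j *= 2) {
                innerSteps++;
            }
        }
    }

    public static void main(String[] args) {
        int n = 1 << 20;
        int m = 1_000_000_000;
        foo(n, m);
        System.out.println("whileSteps=" + whileSteps
                + "  log3(m)=" + Math.log(m) / Math.log(3));
        System.out.println("innerSteps=" + innerSteps
                + "  log2(n)=" + Math.log(n) / Math.log(2));
    }
}

Doubling m changes whileSteps by at most one, and doubling n adds at most one inner step per outer iteration, consistent with O(log m + log n).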