I am trying to calculate the Theta complexity of the following algorithm:
for i = 1 -> n
    for j = 1 -> n
        B[i,j] = findMax(A, i, j)

findMax(A, i, j)
    if j < i
        return 0
    else
        max = A[i]
        for k = i + 1 -> j
            if max < A[k]
                max = A[k]
        return max
I know that O, Ω, and Θ roughly translate to
O ≈ ≤
Ω ≈ ≥
Θ ≈ =
For this algorithm I think that Ω = n^2 and O = n^3, but I'm not sure what Θ would be. Any ideas?
If you calculate the number of times the line

    if max < A[k]

is executed as a function of n, you get Θ(n^3) executions. Thus the running time of your algorithm is Θ(n^3) as well.
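Spelling that count out: for a fixed pair (i, j) with j >= i, the comparison runs j - i times, and pairs with j < i contribute nothing. So the total is

sum_{i=1..n} sum_{j=i..n} (j - i) = sum_{i=1..n} (n-i)(n-i+1)/2 = (n^3 - n)/6

which is Θ(n^3).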
Theta(n^3). There are n^2 iterations of the nested for loop. Roughly half of these iterations, those with j < i, run in O(1) time. The other half have on average a difference of about n/2 for j - i, so each of those takes Theta(n/2) time on average. Since about n^2/2 iterations take about n/2 time each, that half contributes n^2/2 * n/2 = n^3/4 = Theta(n^3) time. The j < i half contributes only n^2/2 = Theta(n^2) time in total. Thus the total runtime is Theta(n^3).
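To double-check this empirically, here is a small C harness (my own sketch; count_comparisons is just a helper name for it). The array contents never affect the count, only the loop bounds do, so no array is needed:

#include <stdio.h>

/* Counts how many times "if max < A[k]" would execute for a given n. */
long long count_comparisons(int n) {
    long long count = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++)
            if (j >= i)              /* findMax only loops when j >= i */
                count += j - i;      /* the k-loop runs j - i times */
    return count;
}

int main(void) {
    for (int n = 100; n <= 800; n *= 2)
        printf("n = %3d: %lld comparisons (about n^3/6 = %lld)\n",
               n, count_comparisons(n), (long long)n * n * n / 6);
    return 0;
}

Doubling n multiplies the count by roughly 8, which is the signature of cubic growth.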
I found this example problem on the internet, and I just cannot understand how the author came to their conclusion.
sum1 = 0;
for (k = 1; k <= n; k *= 2)    // Do log n times
    for (j = 1; j <= n; j++)   // Do n times
        sum1++;

sum2 = 0;
for (k = 1; k <= n; k *= 2)    // Do log n times
    for (j = 1; j <= k; j++)   // Do k times
        sum2++;
I understand that the running time of the first fragment is O(n log n), but the author claims that the second fragment runs in O(n).
Why is that?
The closest I can get to an answer is:
O(n) = k * log(n)
k = 2^i
O(n) = 2^i * log(n) ----> this is where I get stuck
I'm guessing some property of logarithms is used, but I can't figure out which one. Can someone point me in the right direction?
Thanks.
In the second example, the total number of inner-loop operations is sum_j 2^j, summed over the values k = 2^j taken by the outer loop.
Since 2^j <= n, there are about log n terms.
This sum equals 2^{jmax+1} - 1, where 2^{jmax} is roughly (at most) n.
So the complexity is effectively O(2n) = O(n).
sum2++ is executed 1+2+4+8+...+K times, where K is the largest power of 2 less than or equal to n. That sum is equal to 2K-1.
Since n/2 < K <= n (because K is the largest power of 2 less than or equal to n), the number of iterations is between n-1 and 2n-1. That's Theta(n) if you want to express it in asymptotic notation.
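A quick empirical check (a throwaway C snippet of mine, not from the original post):

#include <stdio.h>

int main(void) {
    for (long long n = 1000; n <= 1000000; n *= 10) {
        long long sum2 = 0;
        for (long long k = 1; k <= n; k *= 2)       /* log n outer passes */
            for (long long j = 1; j <= k; j++)      /* k inner passes */
                sum2++;
        printf("n = %7lld: sum2 = %lld (n-1 = %lld, 2n-1 = %lld)\n",
               n, sum2, n - 1, 2 * n - 1);
    }
    return 0;
}

For n = 1000 it prints sum2 = 1023 = 2*512 - 1, exactly the 2K - 1 from the answer, and the count always stays between n-1 and 2n-1.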
If evaluating f(n) is Θ(n):
i = 1;
sum = 0;
while (i <= n) do
    if (f(i) > k)
        then sum += f(i);
    i = 2 * i;
Would the running time of this be O(n^3), because of the n times the function is possibly being called, or would it be O(n)? Or is it something in terms of Θ, since that's the information we know? I am very lost on this...
The i variable doubles each time, so it reaches n after Log2(n) steps.
So f is evaluated Log2(n) times; if each evaluation cost O(n), that would give the bound O(n x Log n).
In fact, since computing f(i) has complexity O(i), the total time is:
1 + 2 + 4 + ... + 2^Log2(n) = 2n - 1 (a geometric series with Log2(n) terms) => O(n)
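Here is that accounting as a runnable C sketch (f itself is hypothetical, so I just charge i units per call to model the Θ(i) cost):

#include <stdio.h>

int main(void) {
    long long n = 1000000;
    long long cost = 0;          /* total work charged to calls of f */
    for (long long i = 1; i <= n; i *= 2)
        cost += i;               /* one call f(i) costs Theta(i) units */
    printf("n = %lld: total cost = %lld (between n-1 and 2n-1)\n", n, cost);
    return 0;
}

The total stays between n-1 and 2n-1 no matter how large n gets, i.e. Θ(n).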
I'm trying to figure out the exact big-O value of algorithms. I'll provide an example:
for (int i = 0; i < n; i++)        // 2N + 2
{
    for (int x = i; x < n; x++)    // N * 2N + 2 ?
    {
        sum += i;                  // N
    }
}                                  // Extra N?
So if I break some of this down, int i = 0 would be O(1), i < n is N+1, i++ is N, multiply the inner loop by N:
2N + 2 + N(1 + N + 1 + N) = 2N^2 + 2N + 2N + 2 = 2N^2 + 4N + 2
Add an N for the loop termination and the sum constant, = 3N^2 + 5N + 2...
Basically, I'm not 100% sure how to calculate the exact O notation for an algorithm, my guess is O(3N^2 + 5N + 2).
What do you mean by exact? Big O is an asymptotic upper bound, so it's by definition not exact.
Thinking about i=0 as O(1) and i<n as O(N+1) is not correct. Instead, think of the outer loop as doing something n times, and for every iteration of the outer loop, the inner loop is executed at most n times. The calculation inside the loop takes constant time (the calculation is not getting more complex as n gets bigger). So you end up with O(n*n*1) = O(n^2), quadratic complexity.
When asking about "exact", you're running the inner loop from 0 to n, then from 1 to n, then from 2 to n, ... , from (n-1) to n, each time doing a constant time operation. So you do n + (n-1) + (n-2) + ... + 1 = n*(n+1)/2 = n^2/2 + n/2 iterations. To get from the exact number of calculations to big O notation, omit constants and lower-order terms, and you'll end up with O(n^2) (the 1/2 and +n/2 are omitted).
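And to confirm the exact count, a quick C check (my snippet, not the asker's):

#include <stdio.h>

int main(void) {
    int n = 1000;
    long long iterations = 0;
    for (int i = 0; i < n; i++)
        for (int x = i; x < n; x++)
            iterations++;        /* one execution of the loop body */
    printf("n = %d: %lld iterations, n*(n+1)/2 = %lld\n",
           n, iterations, (long long)n * (n + 1) / 2);
    return 0;
}

For n = 1000 both numbers come out to 500500.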
Big O is an upper bound, so it has to cover the worst case.
Here the worst case is both loops running up to n times each, i.e. n * n iterations of the body.
So the complexity is O(n^2).
The algorithm below has runtime O(n) according to our professor; however, I am confused as to why it is not O(n log(n)), because the outer loop can run up to log(n) times and the inner loop can run up to n times.
Algorithm Loop5(n)
    i = 1
    while i ≤ n
        j = 1
        while j ≤ i
            j = j + 1
        i = i * 2
Your professor was right, the running time is O(n).
In the iteration of the outer while-loop in which i = 2^k, for k = 0, 1, ..., log n, the inner while-loop makes O(i) iterations. (When I say log n I mean the base-2 logarithm log_2 n.)
The running time is O(1+2+2^2+2^3+...+2^k) for k=floor(log n). This sums to O(2^{k+1}) which is O(2^{log n}). (This follows from the formula for the partial sum of geometric series.)
Because 2^{log n} = n, the total running time is O(n).
For the interested, here's a proof that the powers of two really sum to what I claim they sum to. (This is a very special case of a more general result.)
Claim. For any natural k, we have 1+2+2^2+...+2^k = 2^{k+1}-1.
Proof. Note that (2-1)*(1+2+2^2+...+2^k) = (2+2^2+...+2^{k+1}) - (1+2+2^2+...+2^k). All terms 2^i for 1 <= i <= k cancel out, and we are left with 2^{k+1}-1. QED.
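If you would like to see the numbers, here is a small C check (mine, not part of the proof) that instruments Loop5 and counts inner-loop iterations:

#include <stdio.h>

int main(void) {
    for (long long n = 1000; n <= 1000000; n *= 10) {
        long long steps = 0;
        long long i = 1;
        while (i <= n) {          /* Loop5, instrumented */
            long long j = 1;
            while (j <= i) {
                j = j + 1;
                steps++;          /* count inner-loop iterations */
            }
            i = i * 2;
        }
        printf("n = %7lld: %lld inner iterations (2n = %lld)\n",
               n, steps, 2 * n);
    }
    return 0;
}

The count is always below 2n, matching the O(2^{k+1}) = O(n) bound above.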
for (int i = 1; i < N; i *= 2) { ... }
Things like that are the signatures of logarithmic complexity.
But how do you get log(N)? Could you give a mathematical justification?
Useful reference on algorithmic complexity: http://en.wikipedia.org/wiki/Big_O_notation
On the nth iteration,
i = 2^n
We know the loop keeps running only while i < N. Therefore, on every iteration that executes,
2^n = i < N
Taking log2 of both sides:
log2(N) > log2(2^n)
log2(N) > n
So the number of iterations n is less than log2(N), i.e. # iterations is O(log N).
QED. Logarithmic complexity.
Multiplying N by 2 adds one more iteration, regardless of the size of N. That's pretty much the definition of the log function -- it goes up by a constant amount every time you multiply N by a constant.
Your code runs while i < N, and at each step i *= 2. We say the loop has logarithmic complexity if it runs log(N) + O(1) times. Since 2^log(N) = N, after floor(log(N)) + 1 doublings we have i ≥ N and the loop stops.
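Counting the iterations directly makes the same point (a quick C sketch of mine; compile with -lm for log2):

#include <stdio.h>
#include <math.h>

int main(void) {
    for (long long N = 10; N <= 100000000; N *= 10) {
        int iterations = 0;
        for (long long i = 1; i < N; i *= 2)
            iterations++;
        printf("N = %9lld: %2d iterations, log2(N) = %.2f\n",
               N, iterations, log2((double)N));
    }
    return 0;
}

Each tenfold increase in N adds about log2(10) ≈ 3.3 iterations, matching the "one more iteration per doubling of N" rule above.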