I am working through the exercises in Skiena's book on algorithms and I am stuck on this question:
I need to calculate the big O of the following algorithm:
function mystery()
    r = 0
    for i = 1 to n-1 do
        for j = i+1 to n do
            for k = 1 to j do
                r = r + 1
Here, I think the big O of the outermost loop is O(n-1) and the middle loop is O(n!). Please tell me if I am wrong here.
I am not able to calculate the big O of the innermost loop.
Can anyone please help me with this?
Here's a more rigorous way to approach solving this problem:
Define the run-time of the algorithm to be f(n) since n is our only input. The outer loop tells us this
f(n) = Sum(g(i,n), 1, n-1)
where Sum(expression, min, max) is the sum of expression from i = min to i = max. Notice that the expression in this case is an evaluation of g(i, n) with a fixed i (the summation index) and n (the input to f(n)). And we can peel off another layer and define g(i, n):
g(i, n) = Sum(h(j), i+1, n), where i < n
which is the sum of h(j) where j ranges from i+1 to n. Finally we can just define
h(j) = Sum(O(1), 1, j)
since we've assumed that r = r+1 takes time O(1).
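If it helps to see those definitions concretely, here is a small Java sketch (mine, not part of the original answer) that evaluates h, g, and f by direct summation, exactly as defined above, and checks that f(n) equals the value of r produced by the original triple loop:

// Direct evaluation of the nested sums f, g, h defined above,
// checked against the r computed by the original triple loop.
public class NestedSums {
    static long h(int j) {                 // Sum(O(1), 1, j): j units of work
        long s = 0;
        for (int k = 1; k <= j; k++) s++;
        return s;
    }

    static long g(int i, int n) {          // Sum(h(j), i+1, n)
        long s = 0;
        for (int j = i + 1; j <= n; j++) s += h(j);
        return s;
    }

    static long f(int n) {                 // Sum(g(i, n), 1, n-1)
        long s = 0;
        for (int i = 1; i <= n - 1; i++) s += g(i, n);
        return s;
    }

    static long mystery(int n) {           // the original triple loop
        long r = 0;
        for (int i = 1; i <= n - 1; i++)
            for (int j = i + 1; j <= n; j++)
                for (int k = 1; k <= j; k++)
                    r = r + 1;
        return r;
    }

    public static void main(String[] args) {
        int n = 50;
        System.out.println(f(n) + " == " + mystery(n)); // both print the same value
    }
}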
Notice at this point that we haven't done any hand-waving, saying stuff like "oh you can just multiply the loops together. The 'innermost operation' is the only one that counts." That statement isn't even true for all algorithms. Here's an example:
for (i = 0; i < n; i++)
{
    for (j = 0; j < n; j++)
    {
        Solve_An_NP_Complete_Problem(n);
        for (k = 0; k < n; k++)
        {
            count++;
        }
    }
}
The above algorithm isn't O(n^3)... it's not even polynomial.
Anyway, now that I've established the superiority of a rigorous evaluation (:D), we need to work our way upwards to figure out what the upper bound is for f(n). First, it's easy to see that h(j) is O(j) (just use the definition of Big-Oh). From that we can now rewrite g(i, n) as:
g(i, n) = Sum(O(j), i+1, n)
=> g(i, n) = O(i+1) + O(i+2) + ... + O(n-1) + O(n)
=> g(i, n) = O((n - i)(n + i + 1)/2)   (because we can sum Big-Oh functions in this way using lemmas based on the definition of Big-Oh; the sum of j from i+1 to n is (n - i)(n + i + 1)/2)
=> g(i, n) = O(n^2)   (because g(i, n) is only defined for i < n; also note that before this step we had a Theta bound, which is stronger!)
And so we can rewrite f(n) as:
f(n) = Sum(O(n^2), 1, n-1)
=> f(n) = (n-1)*O(n^2)
=> f(n) = O(n^3)
You might consider proving the lower bound to show that f(n) = Theta(n^3). The trick there is not simplifying g(i, n) to O(n^2) but keeping the tight bound when computing f(n). It requires some ugly algebra, but I'm pretty sure (i.e. I haven't actually done it) that you will be able to prove f(n) = Omega(n^3) as well (or just Theta(n^3) directly if you're really meticulous).
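If you'd rather check the growth rate empirically, here's a quick Java sketch (my own addition, not part of the answer) that runs the triple loop for a few values of n; the ratio r / n^3 settles near 1/3, and the count matches (n^3 - n)/3, a closed form I worked out separately, which is consistent with the Theta(n^3) claim:

// Empirical check that mystery() does Theta(n^3) work: r / n^3 approaches 1/3.
public class MysteryGrowth {
    public static void main(String[] args) {
        for (int n : new int[] {10, 100, 1000}) {
            long r = 0;
            for (int i = 1; i <= n - 1; i++)
                for (int j = i + 1; j <= n; j++)
                    for (int k = 1; k <= j; k++)
                        r = r + 1;
            double ratio = (double) r / ((double) n * n * n);
            long closedForm = ((long) n * n * n - n) / 3;   // my own closed form, verified here
            System.out.printf("n=%d  r=%d  (n^3-n)/3=%d  r/n^3=%.4f%n", n, r, closedForm, ratio);
        }
    }
}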
The analysis takes place as follows:
the outer loop goes from i = [1, n - 1], so the program would have linear complexity O(i) = O(n) if, inside this loop, only constant operations were performed;
however, inside the first loop, for each i, some operation is executed up to n times -- for j ranging over [i + 1, n]. This gives a total complexity of O(i * j) = O(n * n) = O(n^2);
finally, the innermost loop is executed, for each i and for each j, k times, with k ranging over [1, j]. Since j can be as large as n, the innermost loop is asymptotically O(n), which gives you O(i * j * k) = O(n * n * n) = O(n^3).
The complexity of the program itself corresponds to the number of iterations of your innermost operations -- or the sum of the complexities of the innermost operations. If you had:
for i = 1:n
    for j = 1:n
        --> innermost operation (executed n^2 times)
        for k = 1:n
            --> innermost operation (executed n^3 times)
        endfor
        for l = 1:n
            --> innermost operation (executed n^3 times)
        endfor
    endfor
endfor
The total complexity would be given as O(n^2) + O(n^3) + O(n^3), which is equal to max(O(n^2), O(n^3), O(n^3)), or O(n^3).
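To make the counting concrete, here's a short Java sketch (my own, mirroring the pseudocode above) that tallies how often each innermost operation executes; for a given n you should see n^2, n^3, and n^3, and the n^3 terms dominate:

// Counts how many times each "innermost operation" in the pseudocode runs.
public class LoopCounts {
    public static void main(String[] args) {
        int n = 200;
        long a = 0, b = 0, c = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n; j++) {
                a++;                                 // runs n^2 times
                for (int k = 1; k <= n; k++) b++;    // runs n^3 times
                for (int l = 1; l <= n; l++) c++;    // runs n^3 times
            }
        }
        System.out.println(a + " " + b + " " + c);   // 40000 8000000 8000000 for n = 200
    }
}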
It cannot be factorial. For example, if you have two nested loops, the big O will be n^2, not n^n. So three loops cannot give more than n^3. Keep digging ;)
The algorithm below has runtime O(n) according to our professor, however I am confused as to why it is not
O(n log(n)), because the outer loop can run up to log(n) times and the inner loop can run up to n times.
Algorithm Loop5(n)
    i = 1
    while i ≤ n
        j = 1
        while j ≤ i
            j = j + 1
        i = i ∗ 2
Your professor was right: the running time is O(n).
In the iteration of the outer while-loop where i = 2^k, for k = 0, 1, ..., log n, the inner while-loop makes O(i) = O(2^k) iterations. (When I say log n I mean the base-2 logarithm log_2 n.)
The running time is O(1+2+2^2+2^3+...+2^k) for k=floor(log n). This sums to O(2^{k+1}) which is O(2^{log n}). (This follows from the formula for the partial sum of geometric series.)
Because 2^{log n} = n, the total running time is O(n).
For the interested, here's a proof that the powers of two really sum to what I claim they sum to. (This is a very special case of a more general result.)
Claim. For any natural k, we have 1+2+2^2+...+2^k = 2^{k+1}-1.
Proof. Note that (2-1)*(1+2+2^2+...+2^k) = (2+2^2+...+2^{k+1}) - (1+2+2^2+...+2^k). All terms 2^i for 1 ≤ i ≤ k cancel out, and we are left with 2^{k+1}-1. QED.
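For the curious, here's a small Java sketch (mine, not from the answer) that counts the inner-loop steps of Loop5 and compares them with 2n; the count stays below 2n, as the geometric-series argument predicts:

// Counts inner-loop steps of Loop5 and compares them with 2n.
public class Loop5Count {
    static long loop5(long n) {
        long count = 0;
        long i = 1;
        while (i <= n) {
            long j = 1;
            while (j <= i) {
                j = j + 1;
                count++;            // one unit of inner-loop work
            }
            i = i * 2;
        }
        return count;
    }

    public static void main(String[] args) {
        for (long n : new long[] {10, 1000, 1000000}) {
            System.out.printf("n=%d  inner steps=%d  2n=%d%n", n, loop5(n), 2 * n);
        }
    }
}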
I'm given this algorithm and I'm told to find its big-theta complexity.
for (i = 1; i <= n; i++) {
    j = n;
    while (j >= 1) {
        j = j / 3;
    }
}
I know the outer for loop runs n times. The while loop is trickier, though: is it possibly log n (base 3)? That would make the total n * log3(n).
Is this correct?
There is an outer for loop of size n. It contributes a factor of n to the overall complexity.
Let's say the inner while loop runs m times. After the m-th iteration, the value of j will be n/(3^m). The loop keeps running as long as n/(3^m) >= 1. Therefore,
=> n/(3^m) ≈ 1
=> n ≈ 3^m
=> log(n) = m * log(3)
=> m = O(log(n))
So, for loop contributes to O(n) and while loop contributes to O(log(n)). Complexity of nested loop becomes O(n log(n)).
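A quick way to convince yourself (my own sketch, not part of the answer) is to count the while-loop iterations and compare them with log base 3 of n:

// Counts iterations of the j = j / 3 loop and compares them with log_3(n).
public class DivideByThree {
    public static void main(String[] args) {
        for (int n : new int[] {10, 1000, 1000000}) {
            int iterations = 0;
            int j = n;
            while (j >= 1) {
                j = j / 3;
                iterations++;
            }
            double log3n = Math.log(n) / Math.log(3);
            System.out.printf("n=%d  iterations=%d  log3(n)=%.2f%n", n, iterations, log3n);
        }
    }
}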
If you had code that looks like this, what would the big O be? I'm uncertain as to how if statements affect big O.
n = some arbitrary number
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        if (i <= j) {
            for (k = i; k <= j; k++) {
                // do some simple operation
                y = x + 1;
            }
        } else {
            // do some simple operation
            y = y + 1;
        }
    }
}
I'm not considering compiler optimizations. I know this is somewhere between O(n^2) and O(n^3) but am not sure as the if statement does not always execute the inner most loop.
O(N * N * N) which we can just say is O(N^3)
First Loop happens N times.
Second Loop happens N times.
Those multiply together to get O(N^2)
Out of all the possible N^2 (i, j) pairs, the third loop runs for about half of them, and it does on the order of N/2 iterations when it runs, which is equivalent to O(N).
And that's how you get O(N * N * N) or O(N^3)
In fact you can count (almost exactly) how many operations you do:
i: 0 to n-1 → N iterations
× j: 0 to n-1 → N iterations
× either the k loop (only when i <= j, running from i to j) or the other task, which is O(1)
The other task (the else branch) runs at most N × N times, so it contributes O(N × N).
For the k loop, it is easier if you invert the order: for every j (0 to n-1), and for every i from 0 to j, you do j - i + 1 operations, which is the same as doing 1 + 2 + ... + (j + 1) operations. That is (j + 1)(j + 2)/2 operations.
Then finally, you sum (j + 1)(j + 2)/2 over j from 0 to n-1, which is n(n + 1)(n + 2)/6 operations, so O(N^3).
Perhaps I forgot some +/-1.
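If you want to double-check those counts empirically, here's a small Java sketch (my own) that tallies the two simple operations and compares them with the closed forms n(n+1)(n+2)/6 and n(n-1)/2:

// Tallies the two "simple operations" from the question and compares them with closed forms.
public class IfElseCount {
    public static void main(String[] args) {
        int n = 100;
        long ifBranch = 0, elseBranch = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (i <= j) {
                    for (int k = i; k <= j; k++) ifBranch++;   // y = x + 1
                } else {
                    elseBranch++;                              // y = y + 1
                }
            }
        }
        long cubic = (long) n * (n + 1) * (n + 2) / 6;   // closed form for the k-loop body
        long quad  = (long) n * (n - 1) / 2;             // number of pairs with i > j
        System.out.println(ifBranch + " vs " + cubic);   // both 171700 for n = 100
        System.out.println(elseBranch + " vs " + quad);  // both 4950 for n = 100
    }
}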
You can analyse your algorithm using Sigma notation:
T(n) = Sum over i = 0..n-1 of [ Sum over j = i..n-1 of (j - i + 1) + Sum over j = 0..i-1 of 1 ] = n(n+1)(n+2)/6 + n(n-1)/2
From this, it's obvious the time complexity will depend on cubic n terms, and hence your algorithm is in O(n^3).
This is O(N^3).
Proof: http://www.wolframalpha.com/input/?i=sum+sum+%28j+-+i+%2B+1%29%2C+j+%3D+i+to+n+-+1%2C+i+%3D+0+to+n+-+1
The last loop runs (j - i + 1) times.
How do you find this sum manually?
It is not rocket science.
Try reading about https://en.wikipedia.org/wiki/Telescoping_series
To save time, it's easier to use Wolfram Alpha for that purpose.
Consider the following function:
int testFunc(int n) {
    if (n < 3) return 0;
    int num = 7;
    for (int j = 1; j <= n; j *= 2) num++;
    for (int k = n; k > 1; k--) num++;
    return testFunc(n / 3) + num;
}
I get that the first loop is O(log n) while the second loop is O(n), which gives a time complexity of O(n) in total. But due to the recursive calls I thought the time complexity would be O(n log n), but apparently it is only O(n). Can anyone explain why?
The recursive call pretty much gives the following for the complexity (denoting the complexity for input n by T(n)):
T(n) = log(n) + n + T(n/3)
The first observation, as you correctly noted, is that you can ignore the logarithm, as it is dominated by n. Now we are only left with T(n) = n + T(n/3). Try expanding this down to the base case, for instance. We have:
T(n) = n + n/3 + n/9+....
You can easily prove that the above sum is always less than 2*n. In fact, tighter bounds can be proven, but this one is enough to show that the overall complexity is O(n).
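To see the geometric decay concretely, here's a small instrumented Java sketch (my own) that counts the iterations of the second, linear loop across all recursive calls of testFunc and compares the total with 2n:

// Counts the iterations of the second (linear) loop in testFunc across all
// recursive calls, and compares the total with 2n.
public class RecursionCount {
    static long work = 0;

    static int testFunc(int n) {
        if (n < 3) return 0;
        int num = 7;
        for (int j = 1; j <= n; j *= 2) num++;           // O(log n), dominated by the next loop
        for (int k = n; k > 1; k--) { num++; work++; }   // O(n) per call
        return testFunc(n / 3) + num;
    }

    public static void main(String[] args) {
        for (int n : new int[] {100, 10000, 1000000}) {
            work = 0;
            testFunc(n);
            System.out.printf("n=%d  linear-loop iterations=%d  2n=%d%n", n, work, 2L * n);
        }
    }
}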
For procedures using a recursive algorithm such as the following:
procedure T(n : size of problem) defined as:
    if n < base_case then exit
    Do work of amount f(n)   // In this case, the O(n) for loop
    T(n/b)
    T(n/b)
    ... a times ...          // In this case, b = 3, and a = 1
    T(n/b)
end procedure
Applying the Master theorem to find the time complexity, the f(n) in this case is O(n) (due to the second for loop, like you said). This makes c = 1.
Now, log_b(a) = log_3(1) = 0 < c, so this is the third case of the theorem, according to which the time complexity is T(n) = Θ(f(n)) = Θ(n).
public void foo(int n, int m) {
int i = m;
while (i > 100) {
i = i / 3;
}
for (int k = i ; k >= 0; k--) {
for (int j = 1; j < n; j *= 2) {
System.out.print(k + "\t" + j);
}
System.out.println();
}
}
I figured the complexity would be O(log n).
That comes from the inner loop; the outer for loop will never execute more than about 100 times, so it can be omitted.
What I'm not sure about is the while loop: should it be incorporated into the Big-O complexity? For very large i values it could make an impact, or do arithmetic operations, no matter on what scale, count as basic operations and can be omitted?
The while loop is O(log m) because you keep dividing m by 3 until it is below or equal to 100.
Since 100 is a constant in your case, it can be ignored, yes.
The inner loop is O(log n) as you said, because you multiply j by 2 until it exceeds n.
Therefore the total complexity is O(log n + log m).
"Or do arithmetic operations, no matter on what scale, count as basic operations and can be omitted?"
Arithmetic operations can usually be omitted, yes. However, it also depends on the language. This looks like Java and it looks like you're using primitive types. In this case it's ok to consider arithmetic operations O(1), yes. But if you use big integers for example, that's not really ok anymore, as addition and multiplication are no longer O(1).
The complexity is O(log m + log n).
The while loop executes log3(m) minus a constant (log3(100)) times. The outer for loop executes a constant number of times (around 100), and the inner loop executes log2(n) times.
The while loop repeatedly divides the value of m by a factor of 3, so the number of such operations will be log (base 3) of m.
For the for loops, you can write the number of operations as a nested summation:
summation (k = 0 to i) [ summation (j = 0 to lg n) (1) ]
= summation (k = 0 to i) [ lg n + 1 ]
= (lg n + 1) * (i + 1)
This is the total number of operations; since i is at most a constant (about 100) after the while loop, the log term dominates.
That's why the complexity is O(log (base3) m + lg n)
Here the lg refers to log to base 2
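If you'd like to verify this empirically, here's an instrumented Java sketch (my own; the counters whileSteps and innerSteps are added purely for illustration) that tallies the loop iterations of foo and compares them with log base 3 of m and with (i + 1) * log base 2 of n:

// Instrumented version of foo(n, m): counts while-loop steps and inner-loop steps.
public class FooCount {
    public static void main(String[] args) {
        int n = 1_000_000;
        int m = 1_000_000_000;

        long whileSteps = 0, innerSteps = 0;
        int i = m;
        while (i > 100) {
            i = i / 3;
            whileSteps++;                       // about log_3(m) minus a constant
        }
        for (int k = i; k >= 0; k--) {          // at most about 101 iterations here
            for (int j = 1; j < n; j *= 2) {
                innerSteps++;                   // about log_2(n) steps per k
            }
        }

        double log3m = Math.log(m) / Math.log(3);
        double log2n = Math.log(n) / Math.log(2);
        System.out.printf("while steps=%d, log3(m)=%.1f%n", whileSteps, log3m);
        System.out.printf("inner steps=%d, (i+1)*log2(n)=%.1f%n", innerSteps, (i + 1) * log2n);
    }
}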