I am currently trying to determine the Big-Theta complexity of the recursive algorithm below. It is clear that the complexity is at least n^2 (because of the nested for-loop), but the recursion makes it hard for me to pin down the precise Big-Theta bound.
My guess is n^3, since the function runs the nested loops and then calls itself recursively, but I struggle to find a proof for that. Can anyone please tell me the complexity and how to determine it for recursive algorithms?
function F(n)
    if n < 1:
        return 1
    t = 0
    for i <- 0 to n:
        for j <- i to n:
            t = t + j
    return t + F(n-1)
The recursive aspect of this algorithm is fairly simple: it is tail recursion, so it can be rewritten as a loop:
function F(n)
    t = 0
    for k <- n to 1:
        for i <- 0 to k:
            for j <- i to k:
                t = t + j
    return t + 1
...where k takes the value that n would have in each recursive call.
The number of times t = t + j executes determines the time complexity (assuming addition is a Θ(1) step).
That count is a tetrahedral number: for a given k, the two inner loops perform roughly k²/2 additions, and summing k²/2 over k = 1..n gives about n³/6, which is Θ(n³).
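To double-check that empirically, here is a minimal Python sketch (my own addition, assuming the pseudocode's "to" bounds are inclusive; count_additions is just a name I made up) that counts how often t = t + j runs:

# Empirical check of the Theta(n^3) claim, using the iterative form of F(n)
# and assuming inclusive loop bounds.
def count_additions(n):
    count = 0
    for k in range(n, 0, -1):            # k plays the role of n in each recursive call
        for i in range(0, k + 1):
            for j in range(i, k + 1):
                count += 1               # one execution of "t = t + j"
    return count

for n in (10, 100, 400):
    c = count_additions(n)
    print(n, c, c / (n ** 3 / 6))        # the ratio approaches 1 as n grows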
I'm learning how to prove/disprove big-Oh, big-Omega, and little-oh bounds, and I have the following algorithm f(n). However, I'm unsure how to analyze it, because it contains an if statement, which I've never come across before. How can I prove, for example, that this f(n) is O(n^2)?
if n is even
    4 * sum(n/2, n)
else
    (2n-1) * sum(n−3, n)
where sum(j,k) is a ‘partial arithmetic sum’ of the integers from j up to k, that is sum(j,k) =
    if j > k
        0
    else
        j + (j+1) + (j+2) + ... + k
e.g. sum(3,4) = 3 + 4 = 7, etc.
Note that sum(j,k) = sum(1,k) - sum(1,j-1).
OK, got it, no worries. I'll try to help you understand this.
Big O notation is used to define an upper bound on how much time a program will take in terms of its input size.
Let's see how much time each statement in this function takes:
f(n) {
    if n is even            // O(1) .....#1
        4 * sum(n/2,n)      // O(n) .....#2
    else                    // O(1) .....#3
        (2n-1) * sum(n−3,n) // O(n) .....#4
}
if n is even
This can be done with a check like if ((n % 2) == 0). As you can see, this is a constant-time operation: no loop, just one computation.
The sum(j, k) function is computed by iterating from j to k whenever j <= k, so it runs (k - j + 1) times, which is linear time.
So the total complexity is the complexity of whichever branch runs, the if block or the else block.
For analyzing complexity, one needs to consider the worst-case time.
Complexity of if block = #1 + #2 = O(1) + O(n) = O(n)
Similarly for else block = #3 + #4 = O(1) + O(n) = O(n)
Max of both = maximum of (O(n), O(n)) = O(n)
Thus, the overall complexity = O(n)
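For what it's worth, here is a small Python sketch of the function as I read it (the names partial_sum and the counter are mine), which counts the iterations done inside sum(j, k):

# Sketch of f(n) with an explicit counter for the iterations done inside sum(j, k).
def partial_sum(j, k, counter):
    total = 0
    while j <= k:              # runs k - j + 1 times when j <= k
        total += j
        counter[0] += 1
        j += 1
    return total

def f(n, counter):
    if n % 2 == 0:                                          # O(1) parity check (#1)
        return 4 * partial_sum(n // 2, n, counter)          # about n/2 iterations (#2)
    else:
        return (2 * n - 1) * partial_sum(n - 3, n, counter) # only 4 iterations (#4)

for n in (10, 11, 1000, 1001):
    c = [0]
    f(n, c)
    print(n, c[0])   # grows linearly for even n

The even branch is the expensive one (about n/2 iterations), which is where the O(n) comes from.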
I'm studying for an exam, and I've come across the following question:
Provide a precise (Θ notation) bound for the running time as a function of n for the following function:
for i = 1 to n {
    j = i
    while j < n {
        j = j + 4
    }
}
I believe the answer would be O(n^2), although I'm certainly an amateur at the subject. My reasoning is that the outer loop takes O(n) and the inner loop takes O(n/4), resulting in O(n^2/4); since constant factors don't matter, that simplifies to O(n^2).
Any clarification would be appreciated.
If you proceed using Sigma notation and obtain an exact expression for T(n), then you get a Big Theta bound.
If you can only show T(n) is less than or equal to some expression, then it's Big O.
If you can only show T(n) is greater than or equal to some expression, then it's Big Omega.
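As a quick illustration of the Sigma approach (a Python check I added, not part of the question), you can count the inner steps exactly and compare them with n^2/8:

# Count how many times "j = j + 4" executes and compare against n^2 / 8.
def count_steps(n):
    count = 0
    for i in range(1, n + 1):
        j = i
        while j < n:
            j += 4
            count += 1
    return count

for n in (10, 100, 1000, 5000):
    c = count_steps(n)
    print(n, c, c / (n * n / 8))   # the ratio approaches 1 as n grows, so the bound is Theta(n^2)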
I am doing an exercise from Skiena's book on algorithms and I am stuck on this question:
I need to calculate the big O of the following algorithm:
function mystery()
    r = 0
    for i = 1 to n-1 do
        for j = i+1 to n do
            for k = 1 to j do
                r = r + 1
Here, the big O of the outermost loop will be O(n-1) and the middle loop will be O(n!). Please tell me if I am wrong here.
I am not able to calculate the big O of the innermost loop.
Can anyone please help me with this?
Here's a more rigorous way to approach solving this problem:
Define the run-time of the algorithm to be f(n) since n is our only input. The outer loop tells us this
f(n) = Sum(g(i,n), 1, n-1)
where Sum(expression, min, max) is the sum of expression from i = min to i = max. Notice that the expression in this case is an evaluation of g(i, n) with a fixed i (the summation index) and n (the input to f(n)). We can peel off another layer and define g(i, n):
g(i, n) = Sum(h(j), i+1, n), where i < n
which is the sum of h(j) where j ranges from i+1 to n. Finally, we can just define
h(j) = Sum(O(1), 1, j)
since we've assumed that r = r+1 takes time O(1).
Notice at this point that we haven't done any hand-waving, saying stuff like "oh, you can just multiply the loops together; the 'innermost operation' is the only one that counts." That statement isn't even true for all algorithms. Here's an example:
for (i = 0; i < n; i++)
{
    for (j = 0; j < n; j++)
    {
        Solve_An_NP_Complete_Problem(n);
        for (k = 0; k < n; k++)
        {
            count++;
        }
    }
}
The above algorithm isn't O(n^3)... it's not even polynomial.
Anyways, now that I've established the superiority of a rigorous evaluation (:D), we need to work our way upwards to figure out the upper bound for f(n). First, it's easy to see that h(j) is O(j) (just use the definition of Big-Oh). From that, we can rewrite g(i, n) as:
g(i, n) = Sum(O(j), i+1, n)
  => g(i, n) = O(i+1) + O(i+2) + ... + O(n-1) + O(n)
  => g(i, n) = O(n^2 - i^2 - 2i - 1)
       (because we can sum Big-Oh functions in this way, using lemmas based on the definition of Big-Oh)
  => g(i, n) = O(n^2)
       (because g(i, n) is only defined for i < n; also, note that before this step we had a Theta bound, which is stronger!)
And so we can rewrite f(n) as:
f(n) = Sum(O(n^2), 1, n-1)
=> f(n) = (n-1)*O(n^2)
=> f(n) = O(n^3)
You might consider proving the lower bound to show that f(n) = Theta(n^3). The trick there is not simplifying g(i, n) to O(n^2), but keeping the tight bound when computing f(n). It requires some ugly algebra, but I'm pretty sure (i.e. I haven't actually done it) that you will be able to prove f(n) = Omega(n^3) as well (or just Theta(n^3) directly if you're really meticulous).
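If you want a concrete sanity check of the O(n^3) bound (this is my own addition, not part of the rigorous argument above), you can just run the loops and compare the count of r = r + 1 with n^3/3:

# Run mystery() and compare the number of "r = r + 1" executions with n^3 / 3.
def mystery(n):
    r = 0
    for i in range(1, n):               # i = 1 .. n-1
        for j in range(i + 1, n + 1):   # j = i+1 .. n
            for k in range(1, j + 1):   # k = 1 .. j
                r += 1
    return r

for n in (10, 100, 300):
    r = mystery(n)
    print(n, r, r / (n ** 3 / 3))   # ratio tends to 1; the exact count is (n-1)n(n+1)/3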
The analysis takes place as follows:
the outer loop runs over i = [1, n - 1], so the program would have linear complexity, O(n), if only constant-time operations were performed inside it;
however, inside the first loop, for each i, the middle loop runs over j = [i + 1, n], which is up to n iterations. This gives a total of O(n * n) = O(n^2);
finally, the innermost loop is executed, for each i and for each j, over k = [1, j]. Since j can be as large as n, k is asymptotically n as well, which gives O(n * n * n) = O(n^3).
The complexity of the program itself corresponds to the number of iterations of your innermost operations -- or the sums of complexities of innermost operations. If you had:
for i = 1:n
    for j = 1:n
        --> innermost operation (executed n^2 times)
        for k = 1:n
            --> innermost operation (executed n^3 times)
        endfor
        for l = 1:n
            --> innermost operation (executed n^3 times)
        endfor
    endfor
endfor
The total complexity would be given as O(n^2) + O(n^3) + O(n^3), which is equal to max(O(n^2), O(n^3), O(n^3)), or O(n^3).
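If it helps, here is the same skeleton transcribed into Python (my transcription), with separate counters for the three innermost operations:

# Count each innermost operation separately to show the n^2 + n^3 + n^3 breakdown.
def count_innermost(n):
    a = b = c = 0
    for i in range(n):
        for j in range(n):
            a += 1                 # first innermost operation: runs n^2 times
            for k in range(n):
                b += 1             # second innermost operation: runs n^3 times
            for l in range(n):
                c += 1             # third innermost operation: runs n^3 times
    return a, b, c

print(count_innermost(50))         # (2500, 125000, 125000) -> total dominated by O(n^3)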
It cannot be factorial. For example, if you have two nested loops whose bounds are at most n, the big O will be n^2, not n^n. So three such loops cannot give more than n^3. Keep digging ;)
I need to build a recurrence relation for the following algorithm (T(n) stands for the number of elementary operations) and find its time complexity:
Alg(n)
{
    if (n < 3) return;
    for i=1 to n
    {
        for j=i to 2i
        {
            for k=j-i to j-i+100
                write (i, j, k);
        }
    }
    for i=1 to 7
        Alg(n-2);
}
I came up with this recurrence relation (I don't know if it's right):
T(n) = 1 if n < 3
T(n) = 7T(n-2) + 100n² otherwise.
I don't know how to get the time complexity, though.
Is my recurrence correct? What's the time complexity of this code?
Let's take a look at the code to see what the recurrence should be.
First, let's look at the loop:
for i=1 to n
{
    for j=i to 2i
    {
        for k=j-i to j-i+100
            write (i, j, k);
    }
}
How much work does this do? Well, let's begin by simplifying it. Rather than having j count up from i to 2i, let's define a new variable j' that counts up from 0 to i. This means that j' = j - i, and so we get this:
for i=1 to n
{
    for j' = 0 to i
    {
        for k=j' to j'+100
            write (i, j' + i, k);
    }
}
Ah, that's much better! Now, let's also rewrite k as k', where k' ranges from 1 to 100:
for i=1 to n
{
    for j' = 0 to i
    {
        for k' = 0 to 100
            write (i, j' + i, k' + j');
    }
}
From this, it's easier to see that this loop has time complexity Θ(n²), since the innermost loop does O(1) work, and the middle loop will run 1 + 2 + 3 + 4 + ... + n = Θ(n²) times. Notice that it's not exactly 100n² because the summation isn't exactly n², but it is close.
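If you want to sanity-check that per-call figure, here is a small Python count of the write() calls done by the loop nest alone, ignoring the recursion (my own sketch; I assume the "to" bounds are inclusive, which only affects the constant):

# Count write() calls made by one pass over the loops, ignoring the recursion.
def loop_work(n):
    writes = 0
    for i in range(1, n + 1):
        for j in range(i, 2 * i + 1):              # i + 1 values of j
            for k in range(j - i, j - i + 101):    # 101 values of k
                writes += 1
    return writes

for n in (50, 150, 400):
    w = loop_work(n)
    print(n, w, w / (n * n))   # ratio settles toward 50.5, i.e. roughly 101*n^2/2 = Theta(n^2)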
Now, let's look at the recursive part:
for i=1 to 7
    Alg(n-2);
For starters, this is just plain silly! There's no reason you'd ever want to do something like this. But, that said, we can say that this is 7 calls to the algorithm on an input of size n - 2.
Accordingly, we get this recurrence relation:
T(n) = 7T(n - 2) + Θ(n²) [if n ≥ 3]
T(n) = Θ(1) [otherwise]
Now that we have the recurrence, we can start to work out the time complexity. That ends up being a little bit tricky. If you think about how much work we'll end up doing, we get that
There's 1 call of size n.
There's 7 calls of size n - 2.
There's 49 calls of size n - 4.
There's 343 calls of size n - 6.
...
There's 7^k calls of size n - 2k.
From this, we immediately get a lower bound of Ω(7^(n/2)), since that's roughly the number of calls that will get made. Each call does O(n²) work, so we get an upper bound of O(n² * 7^(n/2)). The true value lies somewhere in there, though I honestly don't know how to figure out what it is. Sorry about that!
Hope this helps!
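If it's any consolation, here is a small numerical experiment (my own, with the hidden Θ-constant taken to be 1) that iterates the recurrence and watches T(n) against 7^(n/2):

# Iterate T(n) = 7*T(n-2) + n^2 (constant factor taken as 1) and watch T(n) / 7^(n/2).
def T(n):
    if n < 3:
        return 1
    return 7 * T(n - 2) + n * n

for n in range(10, 41, 2):
    print(n, T(n) / 7 ** (n / 2))   # settles near a constant (around 0.6 for even n)

The ratio leveling off suggests the Ω(7^(n/2)) side is the closer one here, but that's just an observation from the numbers, not a proof.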
A quicker, intuitive way to see it:
The prevailing order of growth can be read off the source code from the number of recursive calls and how much each call shrinks the input.
An algorithm that makes 2 recursive calls, each on an input reduced by 1, grows like 2^n; with 3 such calls it grows like 3^n, and so on. Here there are 7 calls, each on an input of size n - 2, so the growth is roughly 7^(n/2).
I am trying to prove the following worst-case scenario for the Quicksort algorithm but am having some trouble. Initially, we have an array of size n, where n = ij. The idea is that at every partition step of Quicksort, you end up with two sub-arrays where one is of size i and the other is of size i(j-1). i in this case is an integer constant greater than 0. I have drawn out the recursive tree of some examples and understand why this is a worst-case scenario and that the running time will be theta(n^2). To prove this, I've used the iteration method to solve the recurrence equation:
T(n) = T(ij) = m if j = 1
T(n) = T(ij) = T(i) + T(i(j-1)) + cn if j > 1
T(i) = m
T(2i) = m + m + c*2i = 2m + 2ci
T(3i) = m + 2m + 2ci + 3ci = 3m + 5ci
So it looks like the recurrence is:
T(n) = jm + ci * ((1 + 2 + ... + j) - 1)
At this point, I'm a bit lost as to what to do. It looks like the summation at the end will result in roughly j^2 when expanded out, but I need to show that it somehow equals n^2. Any explanation on how to continue with this would be appreciated.
Pay attention: the Quicksort worst-case scenario is when every partition produces two subproblems of size 0 and n-1. In this scenario, you have these recurrence equations, one per level of the tree (the linear term is the partitioning cost at that level):
T(n) = T(n-1) + T(0) + (n-1)     <-- at the first level of the tree
T(n-1) = T(n-2) + T(0) + (n-2)   <-- at the second level of the tree
T(n-2) = T(n-3) + T(0) + (n-3)   <-- at the third level of the tree
.
.
.
The sum of the costs at each level is an arithmetic series:
T(n) = sum of k for k = 1 to n-1 = n(n-1)/2 ~ n^2  (for n -> +inf)
It is O(n^2).
It's a problem of simple mathematics. The complexity, as you have calculated correctly, is
O(jm + ij^2)
What you have found is a parameterized complexity. The standard O(n^2) is contained in this as follows: assuming i = 1, you have a standard base case, so m = O(1) and j = n, and therefore we get O(n^2). If you substitute ij = n, you get O(nm/i + n^2/i). Now remember that m is a function of i, depending on which algorithm you use for the base case, so m = f(i); thus you are left with O(nf(i)/i + n^2/i). Note also that since there is no linear-time algorithm for general sorting, f(i) = Omega(i log i), which gives O(n log i + n^2/i). So you have only one degree of freedom, namely i. Check that for any value of i you cannot reduce this below n log n, which is the best bound for comparison-based sorting.
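As a quick numerical check of that parameterized bound (my own sketch, taking m and c to be 1 and iterating the recurrence for a fixed block size i), T(n)/n^2 levels off at roughly 1/(2i):

# Iterate T(ij) = T(i) + T(i(j-1)) + c*i*j for a fixed i, with m = c = 1.
def T(n, i, m=1, c=1):
    t = m                       # T(i) = m
    size = i
    while size < n:
        size += i
        t = m + t + c * size    # T(size) = T(i) + T(size - i) + c*size
    return t

for i in (1, 5, 50):
    for n in (1000, 10000, 100000):
        print(i, n, T(n, i) / n ** 2)   # about 1/(2i) once n >> i, i.e. Theta(n^2) for fixed i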
Now, what I am confused about is that you are doing a worst-case analysis of quicksort, and this is not the way it's done. When you say worst case, it implies you are using randomization, in which case the worst case will always be when i = 1, and hence the worst-case bound will be O(n^2). An elegant way to do this is explained in the randomized algorithms book by R. Motwani and P. Raghavan; alternatively, if you are a programmer, look at Cormen.