Algorithm complexity, solving a recurrence equation

I'm taking a Data Structures and Algorithms course and I'm stuck on this recurrence:
T(n) = logn*T(logn) + n
Obviously this can't be handled with the Master Theorem, so I was wondering if anybody has any ideas for solving this recurrence. I'm pretty sure it should be solved with a change of variables, like considering n to be 2^m, but I couldn't manage to find anything that works.

The answer is Theta(n). To prove something is Theta(n), you have to show it is both Omega(n) and O(n). Omega(n) is obvious in this case because T(n) >= n. To show that T(n) = O(n):
Pick a large finite value N such that log(n)^2 < n/100 for all n > N. This is possible because log(n)^2 = o(n).
Pick a constant C > 100 such that T(n) < Cn for all n <= N. This is possible because N is finite.
We will show inductively that T(n) < Cn for all n > N. Since log(n) < n, by the induction hypothesis we have:
T(n) = n + log(n)*T(log(n)) < n + log(n) * C*log(n)
= n + C log(n)^2
< n + (C/100) n
= C * (1/100 + 1/C) * n
< C/50 * n
< C*n
In fact, for this function it is even possible to show that T(n) = n + o(n) using a similar argument.
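As a quick numerical sanity check (not a proof), here is a small Python sketch of the recurrence. The base case T(n) = n for n <= 2 is my own assumption, chosen only so the recursion terminates, and the log is taken base 2:

import math

def T(n):
    # Assumed base case: treat T as the identity on very small inputs.
    if n <= 2:
        return n
    lg = math.log2(n)
    # The recurrence from the question: T(n) = log(n) * T(log(n)) + n.
    return lg * T(lg) + n

for n in (10, 100, 10**4, 10**6, 10**9):
    print(n, T(n) / n)

The printed ratios T(n)/n approach 1 as n grows, which matches the Theta(n) (indeed n + o(n)) bound argued above.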

This is by no means an official proof but I think it goes like this.
The key is the + n part. Because of this, T is bounded below by n, i.e. T(n) = Omega(n). So let's assume that T(n) = O(n) and have a go at that.
Substitute into the original relation
T(n) = (log n)O(log n) + n
= O(log^2(n)) + O(n)
= O(n)
So it still holds.

Related

Why does the substitution method need to prove the exact form when determining the time complexity of recursive functions?

When using the substitution method to find the time complexity of recursive functions, why do we have to prove the exact form rather than use the asymptotic notations as they are defined? An example from the book "Introduction to Algorithms", on page 86:
T(n) ≤ 2(c⌊n/2⌋) + n
≤ cn + n
= O(n) wrong!!
Why is this wrong?
From the definition of Big-O: O(g(n)) = {f(n): there exist positive constants c2 and n0 such that 0 ≤ f(n) ≤ c2*g(n) for all n ≥ n0}, the solution seems right.
Let's say f(n) = cn + n = n(c + 1), and g(n) = n. Then n(c + 1) ≤ c2n if c2 ≥ c + 1. => f(n) = O(n).
Is it wrong because f(n) actually is T(n) and not cn + n, and instead g(n) = cn + n??
I would really appreciate a helpful answer.
On page 2 of this paper the problem is described. The reason this solution is wrong is that the extra n in cn + n adds up at every recursive level. As the paper says, "over many recursive calls, those 'plus ones' add up".
Edit (now that I have more time to answer).
What I tried to do in the question was to solve the problem by induction. This means I show that the bound still holds at the next level of recursion, which would mean it holds for the level after that, and so on. However, this only works if the expression I end up with is at most my guess, in this case cn. Here cn + n is larger than cn, and my proof therefore fails: if I let the recursion go on one more step starting from cn + n, I get c(cn + n) + n = c^2*n + cn + n. This keeps growing at every recursive level and does not stay within O(n), so the induction fails. If, however, the expression had come out as cn or lower, then the next level would also be cn or lower, just like the one after it, and so on. This leads to O(n). In short, the calculated expression needs to be less than or equal to the guess.
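To see concretely why the conclusion O(n) cannot be right here, this small Python sketch (my own illustration, assuming the base case T(1) = 1 and floor division) tabulates the recurrence:

import math

def T(n, memo={1: 1}):
    # The example recurrence: T(n) = 2*T(floor(n/2)) + n, with an assumed base case T(1) = 1.
    if n not in memo:
        memo[n] = 2 * T(n // 2) + n
    return memo[n]

for k in (10, 15, 20, 25):
    n = 1 << k
    print(n, T(n) / n, T(n) / (n * math.log2(n)))

T(n)/n keeps growing while T(n)/(n*log2(n)) settles near 1: the true solution is Theta(n log n), which is exactly why the induction step must re-derive the form c*n and cannot stop at "cn + n = O(n)".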

Solve recurrence relation using Master's method -> T(n) = 2T(n/2) + n^2 when n is even and T(n) = 2T(n/2) + n^3 when n is odd

T(n) = 2T(n/2) + n^2   when n is even
T(n) = 2T(n/2) + n^3   when n is odd
I solved the two cases separately and I get Theta(n^2) if n is even and Theta(n^3) if n is odd, from case 3 of the master theorem. But I am not supposed to solve this problem separately.
How do I solve a recurrence relation like this as a whole? Is it solvable by the master theorem, or does the master theorem not apply?
Kindly help me with this.
Suppose n = 2^k for some integer k, so n in binary is 100...00. Then you can apply the master method to the even part of the recurrence and obtain Theta(n^2).
Now suppose there is also a 1 somewhere other than the most significant bit, e.g. 100100...00. Then you will have at least one level in your recursion tree whose nodes add up to n^3 times a constant, and by this you obtain Theta(n^3).
Thus, the answer is Theta(n^2) if n is a power of two and Theta(n^3) otherwise. But if the first odd n we encounter is already a base case, then it might not be cubic.
After some chatting with kelalaka it came to me that if the first 1 bit of n is the k-th from the right, then whenever k > (2/3)(1/lg 2)lg n we no longer care about the (n/2^k)^3 term; the total is still O(n^2).
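For intuition, here is a small Python sketch that evaluates the recurrence directly. The base case T(1) = 1 and the use of floor division for n/2 are my own assumptions:

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 2T(n/2) + n^2 for even n, 2T(n/2) + n^3 for odd n, assuming T(1) = 1.
    if n <= 1:
        return 1
    cost = n**2 if n % 2 == 0 else n**3
    return 2 * T(n // 2) + cost

for k in (8, 12, 16, 20):
    n = 1 << k      # a power of two: only the even rule ever fires
    m = n + 1       # odd right at the top
    print(n, T(n) / n**2, T(m) / m**3)

T(n)/n^2 stays bounded for the powers of two, while T(m)/m^3 stays bounded (and does not vanish) for the odd inputs, matching Theta(n^2) versus Theta(n^3).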

Get the complexity of T(n)=T(n/4)+T(3n/4)+c

In this recurrence relation, T(n) = T(n/4) + T(3n/4) + c, I am confused about what the recurrence has to do with best-case and worst-case analysis: we have to solve both sub-problems, of sizes n/4 and 3n/4, so what does "worst case" or "best case" even refer to here?
Moreover, should we use Theta(log n) or O(log n) here? Looking at the link below I found O(log n) more applicable, but I still couldn't see why we are not using Theta(log n).
How to solve the recursive complexity T(n) = T(n/4)+T(3n/4)+cn
T(n) = T(n/4) + T(3n/4) + CONST <= 2T(3n/4) + CONST
We will use case 1 of master theorem with:
a = 2, b = 4/3.
c = log_{4/3}(2) ~= 0.4
CONST is in O(n^0.4)
Thus, from the master theorem, one can derive that 2T(3n/4) + CONST is in Theta(log n), and since T(n) <= 2T(3n/4) + CONST, we can say that T(n) is in O(log n).
By following the same idea, but with lower bound:
T(n) >= T(3n/4) + CONST ...
And using master theorem again, we can tell that T(n) is also in Omega(logn).
Since T(n) is both O(logn) and Omega(logn), it is also Theta(logn).
As for your question, you can use either big-O or Theta notation, whatever you prefer. As you can see, proving Theta requires a bit more work, but it is also more informative, as it tells you the bound you found is tight.
These types of recurrences can be easily solved with the Akra-Bazzi theorem (and if you have looked at the question you linked, someone showed a solution to a similar problem there).
So (1/4)^p + (3/4)^p = 1 holds for p = 1. In your case g(u) = c, so the integral is
int from 1 to x of c/u^2 du = c*(1 - 1/x).
Multiplying by x^p = x, Akra-Bazzi gives T(x) = Theta(x*(1 + c*(1 - 1/x))) = Theta(x), so the complexity is Theta(n) and not O(log n) as other people suggested.
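A quick numerical check of the linear bound (my own sketch; the base case T(n) = 1 for n < 4, the choice c = 1, and floor division are assumptions, not part of the original question):

N = 1 << 20
T = [1, 1, 1, 1] + [0] * (N - 3)
for n in range(4, N + 1):
    # T(n) = T(n/4) + T(3n/4) + c, with c = 1 and floor division.
    T[n] = T[n // 4] + T[3 * n // 4] + 1

for k in (8, 11, 14, 17, 20):
    n = 1 << k
    print(n, T[n] / n)

The printed ratio T(n)/n stays bounded away from both 0 and infinity, consistent with Theta(n); if T were Theta(log n), it would shrink toward 0.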

Solving a Recurrence Relation: T(n)=T(n-1)+T(n/2)+n

Solve: T(n)=T(n-1)+T(n/2)+n.
I tried solving this using recursion trees. There are two branches, T(n-1) and T(n/2) respectively. T(n-1) will go to a greater depth, so we get O(2^n). Is this idea correct?
This is a very strange recurrence for a CS class. This is because, from one point of view, T(n) = T(n-1) + T(n/2) + n is bigger than T(n) = T(n-1) + n, which is Theta(n^2).
But from another point of view, the functional equation has an exact solution: T(n) = -2(n + 1). You can easily see that this is an exact solution by substituting it back into the equation: -2(n + 1) = -2n + (-(n + 2)) + n. I am not sure whether this is the only solution.
Here is how I got it: T(n) = T(n-1) + T(n/2) + n. Because you calculate things for very big n, n-1 is almost the same as n, so you can rewrite it as T(n) = T(n) + T(n/2) + n, which gives T(n/2) + n = 0, i.e. T(n) = -2n, so it is linear. This was counter-intuitive to me (the minus sign here), but armed with this solution, I tried T(n) = -2n + a and found the value of a.
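A tiny check (my own sketch) that T(n) = -2(n + 1) satisfies the functional equation exactly, using exact rational arithmetic so that n/2 needs no rounding:

from fractions import Fraction

def f(x):
    # Candidate exact solution of T(n) = T(n-1) + T(n/2) + n.
    return -2 * (x + 1)

for n in [Fraction(k) for k in (3, 10, 97, 1000)]:
    assert f(n) == f(n - 1) + f(n / 2) + n   # holds exactly
print("T(n) = -2(n + 1) satisfies the recurrence identically")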
I believe you are right. The recurrence relation will always split into two parts, namely T(n-1) and T(n/2). Looking at these two, it is clear that n-1 decreases in value more slowly than n/2, or in other words, you will have more branches from the n-1 portion of the tree. Despite this, when considering big-O it is useful to consider the worst-case scenario, which here is that both sides of the tree decrease by n-1 (since this decreases more slowly, giving more branches). In all, you would need to split the relation in two a total of n times, hence you are right to say O(2^n).
Your reasoning is correct, but you give away far too much. (For example, it is also correct to say that 2n^3+4=O(2^n), but that’s not as informative as 2n^3+4=O(n^3).)
The first thing we want to do is get rid of the inhomogeneous term n. This suggests that we may look for a solution of the form T(n)=an+b. Substituting that in, we find:
an+b = a(n-1)+b + an/2+b + n
which reduces to
0 = (a/2+1)n + (b-a)
implying that a=-2 and b=a=-2. Therefore, T(n)=-2n-2 is a solution to the equation.
We now want to find other solutions by subtracting off the solution we’ve already found. Let’s define U(n)=T(n)+2n+2. Then the equation becomes
U(n)-2n-2 = U(n-1)-2(n-1)-2 + U(n/2)-2(n/2)-2 + n
which reduces to
U(n) = U(n-1) + U(n/2).
U(n)=0 is an obvious solution to this equation, but how do the non-trivial solutions to this equation behave?
Let’s assume that U(n)∈Θ(n^k) for some k>0, so that U(n)=cn^k+o(n^k). This makes the equation
cn^k+o(n^k) = c(n-1)^k+o((n-1)^k) + c(n/2)^k+o((n/2)^k)
Now, (n-1)^k=n^k+Θ(n^{k-1}), so that the above becomes
cn^k+o(n^k) = cn^k+Θ(cn^{k-1})+o(n^k+Θ(n^{k-1})) + cn^k/2^k+o((n/2)^k)
Absorbing the lower order terms and subtracting the common cn^k, we arrive at
o(n^k) = cn^k/2^k
But this is false because the right hand side grows faster than the left. Therefore, U(n-1)+U(n/2) grows faster than U(n), which means that U(n) must grow faster than our assumed Θ(n^k). Since this is true for any k, U(n) must grow faster than any polynomial.
A good example of something that grows faster than any polynomial is an exponential function. Consequently, let’s assume that U(n)∈Θ(c^n) for some c>1, so that U(n)=ac^n+o(c^n). This makes the equation
ac^n+o(c^n) = ac^{n-1}+o(c^{n-1}) + ac^{n/2}+o(c^{n/2})
Rearranging and using some order of growth math, this becomes
c^n = o(c^n)
This is false (again) because the left hand side grows faster than the right. Therefore,
U(n) grows faster than U(n-1)+U(n/2), which means that U(n) must grow slower than our assumed Θ(c^n). Since this is true for any c>1, U(n) must grow more slowly than any exponential.
This puts us into the realm of quasi-polynomials, where ln U(n)∈O(log^c n), and subexponentials, where ln U(n)∈O(n^ε). Either of these mean that we want to look at L(n):=ln U(n), where the previous paragraphs imply that L(n)∈ω(ln n)∩o(n). Taking the natural log of our equation, we have
ln U(n) = ln( U(n-1) + U(n/2) ) = ln U(n-1) + ln(1+ U(n/2)/U(n-1))
or
L(n) = L(n-1) + ln( 1 + e^{-L(n-1)+L(n/2)} ) = L(n-1) + e^{-(L(n-1)-L(n/2))} + Θ(e^{-2(L(n-1)-L(n/2))})
So everything comes down to: how fast does L(n-1)-L(n/2) grow? We know that L(n-1)-L(n/2)→∞, since otherwise L(n)∈Ω(n). And it’s likely that L(n)-L(n/2) will be just as useful, since L(n)-L(n-1)∈o(1) is much smaller than L(n-1)-L(n/2).
Unfortunately, this is as far as I’m able to take the problem. I don’t see a good way to control how fast L(n)-L(n/2) grows (and I’ve been staring at this for months). The only thing I can end with is to quote another answer: “a very strange recursion for a CS class”.
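Out of curiosity, here is a small numerical sketch of the reduced recurrence U(n) = U(n-1) + U(n/2). The base case U(0) = U(1) = 1 and the use of floor division are my own choices, made only to get a concrete non-trivial solution:

import math

N = 1 << 16
U = [1, 1] + [0] * (N - 1)
for n in range(2, N + 1):
    U[n] = U[n - 1] + U[n // 2]

for k in range(4, 17, 2):
    n = 1 << k
    L = math.log(U[n])          # L(n) = ln U(n)
    print(n, round(L / math.log(n), 2), round(L / n, 6))

L(n)/ln(n) keeps growing while L(n)/n shrinks toward 0, which is consistent with the conclusion above that L(n) lies in ω(ln n) ∩ o(n).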
I think we can look at it this way:
T(n) = 2T(n/2) + n  <  T(n) = T(n-1) + T(n/2) + n  <  T(n) = 2T(n-1) + n
If we solve the two bounding recurrences (the left one by the master theorem), then:
Θ(n log n) < Θ(T(n)) < Θ(2^n)
Remember that T(n) = T(n-1) + T(n/2) + n being (asymptotically) bigger than T(n) = T(n-1) + n only applies for functions which are asymptotically positive. In that case, we have T = Ω(n^2).
Note that T(n) = -2(n + 1) is a solution to the functional equation, but it doesn't interest us, since it is not an asymptotically positive solution, so the O notations have no meaningful application to it.
You can also easily check that T(n) = O(2^n). (Refer to yyFred's solution, if needed.)
If you try using the definition of O for functions of the type n^a (lg n)^b, with a (>= 2) and b positive constants, you will see by the substitution method that these are not possible solutions either.
In fact, the only kind of bound that allows a proof with the substitution method is exponential, but we know that this recursion doesn't grow as fast as T(n) = 2T(n-1) + n, so if T(n) = O(a^n), we can have a < 2.
Assume that T(m) <= c(a^m), for some constant c, real and positive. Our hypothesis is that this relation is valid for all m < n. Trying to prove this for n, we get:
T(n) <= (1/a+1/a^(n/2))c(a^n) + n
we can get rid of the n easily by changing the hypothesis by a term of lower order. What is important here is that:
1/a+1/a^(n/2) <= 1
a^(n/2+1)-a^(n/2)-a >= 0
Changing variables:
a^(N+1)-a^N-a >= 0
We want to find a bound as tight as possible, so we are searching for the lowest a possible. The inequality we found above accepts solutions of a which are pretty close to 1, but is a allowed to get arbitrarily close to 1? The answer is no: let a be of the form a = (1 + 1/N). Substituting this a into the inequality and taking the limit N -> INF:
e-e-1 >= 0
which is absurd. Hence, the inequality above has some fixed number N* as its maximal solution, which can be found computationally. A quick Python program allowed me to find that a < 1 + 1e-45 (with a little extrapolation), so we can at least be sure that:
T(n) = o((1 + 1e-45)^n)
T(n) = T(n-1) + T(n/2) + n is essentially the same as T(n) = T(n) + T(n/2) + n when we are solving for extremely large values of n. That equation can only hold if T(n/2) + n = 0, which means T(n) = -2n, i.e. linear in magnitude.
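To see numerically where the true growth sits, here is a sketch that tabulates the recurrence directly; the base case T(0) = T(1) = 1 and floor division for n/2 are my own assumptions:

import math

N = 1 << 16
T = [1, 1] + [0] * (N - 1)
for n in range(2, N + 1):
    T[n] = T[n - 1] + T[n // 2] + n

for k in range(6, 17, 2):
    n = 1 << k
    print(n, T[n] / n**2, math.log2(T[n]) / n)

T(n)/n^2 keeps growing (so T is not Theta(n^2), nor any fixed polynomial), while log2(T(n))/n shrinks toward 0 (so T grows more slowly than c^n for any fixed c > 1), matching the superpolynomial-but-subexponential picture from the answers above.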

Worst Case Performance of Quicksort

I am trying to prove the following worst-case scenario for the Quicksort algorithm but am having some trouble. Initially, we have an array of size n, where n = ij. The idea is that at every partition step of Quicksort, you end up with two sub-arrays where one is of size i and the other is of size i(j-1). i in this case is an integer constant greater than 0. I have drawn out the recursive tree of some examples and understand why this is a worst-case scenario and that the running time will be theta(n^2). To prove this, I've used the iteration method to solve the recurrence equation:
T(n) = T(ij) = m if j = 1
T(n) = T(ij) = T(i) + T(i(j-1)) + cn if j > 1
T(i) = m
T(2i) = m + m + c*2i = 2m + 2ci
T(3i) = m + 2m + 2ci + 3ci = 3m + 5ci
So it looks like the recurrence is:
T(n) = jm + ci * (sum_{k=1}^{j} k - 1)
At this point, I'm a bit lost as to what to do. It looks like the summation at the end will result in roughly j^2 if expanded out, but I need to show that it somehow equals n^2. Any explanation of how to continue with this would be appreciated.
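A quick sketch to sanity-check that closed form against the recurrence (the concrete values m = 1 and c = 1 are my own choices; i and j are the parameters from the question):

def T(i, j, m=1, c=1):
    # T(ij) = m if j == 1, else T(i) + T(i(j-1)) + c*i*j, as in the question.
    if j == 1:
        return m
    return T(i, 1, m, c) + T(i, j - 1, m, c) + c * i * j

def closed_form(i, j, m=1, c=1):
    # jm + ci * (sum_{k=1}^{j} k - 1) = jm + ci * (j(j+1)/2 - 1)
    return j * m + c * i * (j * (j + 1) // 2 - 1)

for i in (1, 3, 7):
    for j in (1, 2, 5, 40):
        assert T(i, j) == closed_form(i, j)
print("closed form matches the recurrence for all tested (i, j)")

For constant i, m and c, the closed form is jm + Theta(i*j^2) = Theta(n^2 / i), i.e. quadratic in n, which the answers below discuss.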
Note that the quicksort worst-case scenario is when you have two subproblems of size 0 and n-1. In this scenario, you have these recurrence equations at each level:
T(n)   = T(n-1) + T(0)   <-- at the first level of the tree
T(n-1) = T(n-2) + T(0)   <-- at the second level of the tree
T(n-2) = T(n-3) + T(0)   <-- at the third level of the tree
...
The sum of the costs over all levels is an arithmetic series:
T(n) = sum_{k=1}^{n} k = n(n+1)/2 ~ n^2 (as n -> +inf)
It is O(n^2).
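As an illustration of that 0 / (n-1) split (my own sketch, not part of the original answer), a quicksort that always picks the first element as pivot performs an arithmetic series' worth of comparisons on already-sorted input:

import sys
sys.setrecursionlimit(10000)

def quicksort_comparisons(a):
    # Count comparisons for a quicksort that uses the first element as pivot,
    # charging one comparison per remaining element in each partition step.
    if len(a) <= 1:
        return 0
    pivot = a[0]
    left = [x for x in a[1:] if x < pivot]
    right = [x for x in a[1:] if x >= pivot]
    return (len(a) - 1) + quicksort_comparisons(left) + quicksort_comparisons(right)

for n in (100, 400, 1600):
    c = quicksort_comparisons(list(range(n)))   # sorted input: every split is 0 vs n-1
    print(n, c, n * (n - 1) // 2)               # c equals n(n-1)/2, i.e. Theta(n^2)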
It's a problem of simple mathematics. The complexity, as you have calculated correctly, is
O(jm + ij^2)
What you have found is a parameterized complexity. The standard O(n^2) is contained in it as follows: assuming i = 1 you have a standard base case, so m = O(1) and j = n, and therefore we get O(n^2). If you put ij = n you get O(nm/i + n^2/i). Now remember that m is a function of i, depending on what you use as the base-case algorithm, hence m = f(i), so you are left with O(n*f(i)/i + n^2/i). Note also that, since there is no linear-time algorithm for general sorting, f(i) = Omega(i log i), which gives O(n log i + n^2/i). So you have only one degree of freedom, namely i. Check that for any value of i you cannot reduce this below n log n, which is the best bound for comparison-based sorting.
What confuses me is that you are doing a worst-case analysis of quicksort; this is not the way it is usually done. When you say worst case, it implies you are using randomization, in which case the worst case will always be i = 1, hence the worst-case bound will be O(n^2). An elegant way to do this is explained in the Randomized Algorithms book by R. Motwani and Raghavan; alternatively, if you are a programmer, look at Cormen.
