What's the run time of: T(n) = 2T(n-1) + 3T(n-2) + 1

I understand that it is similar to the Fibonacci sequence that has an exponential running time. However this recurrence relation has more branches. What are the asymptotic bounds of T(n) = 2T(n-1) + 3T(n-2)+ 1?

Usually you will need to make some assumptions about T(0) and T(1), since the recursion bottoms out at them (a naive expansion reaches them exponentially many times) and their values may determine the functional form of T(n). In this case, however, it does not matter.
Then, recurrence relations of this form can be solved by finding their characteristic polynomials. I found this article: http://www.wikihow.com/Solve-Recurrence-Relations
The characteristic polynomial of the homogeneous part is x^2 - 2x - 3, with roots 3 and -1, and the constant +1 contributes a constant particular solution, so we guess the form T(n) = c_1*3^n + c_2*(-1)^n + c_3. In particular, T(n) = 1/2*3^n - 1/4 satisfies the recurrence relation, and we can verify this:
1/2*3^n - 1/4 = 2*T(n-1) + 3*T(n-2) + 1
= 2*(1/2*3^(n-1) - 1/4) + 3*(1/2*3^(n-2) - 1/4) + 1
= 3^(n-1) - 1/2 + 1/2*3^(n-1) - 3/4 + 1
= 3/2*3^(n-1) - 1/4
= 1/2*3^n - 1/4
Hence T(n) = Theta(3^n) for these base cases. The solution is not unique: other functions satisfying the recurrence depend on the values chosen for T(0) and T(1), but they are all O(3^n).
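As a quick sanity check (a sketch of my own; the base cases T(0) = 1/4 and T(1) = 5/4 are chosen to match this particular solution), we can iterate the recurrence exactly and compare against the closed form:

```python
# Verify that T(n) = (1/2)*3^n - 1/4 satisfies T(n) = 2T(n-1) + 3T(n-2) + 1.
# Base cases chosen to match this particular solution: T(0) = 1/4, T(1) = 5/4.
from fractions import Fraction

def closed_form(n):
    return Fraction(1, 2) * 3**n - Fraction(1, 4)

t = [closed_form(0), closed_form(1)]
for n in range(2, 20):
    t.append(2 * t[n - 1] + 3 * t[n - 2] + 1)

# The iterated values agree exactly with the closed form...
for n in range(20):
    assert t[n] == closed_form(n), n

# ...and the ratio of successive terms approaches 3, i.e. Theta(3^n) growth.
print(float(t[19] / t[18]))
```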

Recurrences of this type are called non-homogeneous recurrence relations; you first solve the associated homogeneous recurrence (the one without the constant at the end), then add a particular solution. If you are interested, read up on the math behind it.
An easy way: just type the equation into Wolfram Alpha. It returns a closed-form solution (screenshot omitted), which is clearly exponential: O(3^n).

Related

Recursive algorithm time complexity (maximum independent set)

I have an assignment to analyse and implement an algorithm for finding the maximum independent set of a graph, and I've come to the time complexity part of said algorithm. I've deduced it is as follows:
T(n) = T(n-1) + T(n-3) + O(n^3) + O(n^2)
I've figured some other similar recursive algorithms out before, but I got stuck several times with this one. I think it's supposed to turn out to be ~O(2^n).
Can someone help me get the solution?
Edit: I just double-checked, and the solution is supposed to be O(1.8^n). Any way to get to that solution?
Since O(n³) + O(n²) = O(n³) = O(f(n)) where f(n) = n³ − 12n² + 30n − 28,
there exists some constant γ > 0 such that we can overestimate the
original recurrence T as a concrete linear non-homogeneous recurrence
T′:
T′(n) = T′(n−1) + T′(n−3) + γf(n).
We follow the usual playbook, focusing on the homogeneous part first
(the same recurrence without the γf(n) term). The characteristic
polynomial is x^3 − x^2 − 1. Letting α be any zero of this polynomial,
the function α^n satisfies
α^n = α^(n−1) + α^(n−3).
Assuming that the base-case values are positive, the growth rate of the
homogeneous part will be Θ(α^n), where α = 1.46557… is the zero
of maximum modulus.
We also have to deal with the non-homogeneous part, but it turns out not
to matter asymptotically. By design we have f(n) = (n−1)^3 + (n−3)^3 − n^3,
so letting U(n) = β·α^n − γn^3, we can verify that
U(n−1) + U(n−3) + γf(n) = β(α^(n−1) + α^(n−3)) − γ((n−1)^3 + (n−3)^3 − f(n)) = β·α^n − γn^3 = U(n),
so U is a solution for T′, keeping us in the same asymptotic class Θ(α^n).
If we draw the recursion tree (image omitted), we can simplify by letting every node have 2 child nodes and letting the height of the tree be n down to the base case. A binary tree of height n has at most 2^n − 1 nodes,
so the complexity of T(n) = T(n−1) + T(n−3) is O(2^n).
The overall complexity is then O(2^n) + O(n^3) + O(n^2) = O(2^n). Note that this is only an upper bound; the sharper analysis above gives ≈ 1.4656^n.
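To see which bound is tight, here is a small sketch (my own; the base cases T(0) = T(1) = T(2) = 1 are arbitrary) that iterates T(n) = T(n−1) + T(n−3) + n^3 and watches the ratio of successive terms converge to the dominant characteristic root α ≈ 1.46557 rather than 2:

```python
# Iterate T(n) = T(n-1) + T(n-3) + n^3 and estimate the growth rate.
# Base cases T(0) = T(1) = T(2) = 1 are an arbitrary choice for illustration.
t = [1, 1, 1]
for n in range(3, 200):
    t.append(t[n - 1] + t[n - 3] + n**3)

# The ratio converges to the dominant root of x^3 - x^2 - 1 = 0 (~1.46557),
# not to 2, so the binary-tree argument is a loose upper bound.
print(t[-1] / t[-2])
```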

Calculate the time complexity of recurrence relation f(n) = f(n/2) + f(n/3)

How do I calculate the time complexity of the recurrence relation f(n) = f(n/2) + f(n/3)? We have base cases at n = 1 and n = 0.
How do I calculate the time complexity in the general case, i.e. f(n) = f(n/x) + f(n/y), where x < n and y < n?
Edit 1 (after first answer posted): every number considered is an integer.
Edit 2 (after first answer posted): I like the answer given by Mbo, but is it possible to answer this without using any fancy theorem like the master theorem, e.g. by building a tree? However, users are free to answer the way they like, and I will try to understand.
In "layman terms" you can bound it from above by the dependence with a larger coefficient:
T(n) = T(n/2) + T(n/2) + O(1)
Build the call tree for n = 2^k and see that the last tree level contains 2^k items, the level above it 2^(k−1) items, the next one 2^(k−2), and so on. The sum of the sequence (a geometric progression) is
2^k + 2^(k−1) + 2^(k−2) + ... + 1 = 2^(k+1) − 1 < 2n
so this larger dependence is linear, and the recurrence in question is therefore at most linear.
Dropping the second term entirely gives the smaller dependence
T(n) = T(n/2) + O(1)
which is Θ(log n), a lower bound.
So the complexity of the recurrence in question lies somewhere between logarithmic and linear; pinning it down exactly needs a sharper tool.
In the general case, recurrences with complex branching can be solved with the Akra–Bazzi method (a more general approach than the Master theorem).
I assume that the dependence is
T(n) = T(n/2) + T(n/3) + O(1)
In this case g(u) = O(1); to find p we should numerically solve
(1/2)^p + (1/3)^p = 1
and get p ≈ 0.788. Then we integrate:
T(x) = Theta(x^0.79 * (1 + Int[1..x]((1/u^1.79)*du)))
     = Theta(x^0.79 * (1 + (1/0.79)*(1 − x^(−0.79))))
     = Theta(x^0.79)
since the integral converges. So the complexity is Θ(n^0.79), which is sublinear; the linear figure from the simpler argument above is only an upper bound.
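The exponent can be found numerically; the sketch below (my own, assuming the g = O(1) form above) bisects for p and also estimates the observed growth exponent of the integer recurrence f(n) = f(n//2) + f(n//3) + 1:

```python
from functools import lru_cache
import math

# Solve (1/2)^p + (1/3)^p = 1 by bisection; the left side decreases in p.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if 0.5**mid + (1 / 3)**mid > 1:
        lo = mid
    else:
        hi = mid
p = (lo + hi) / 2
print(p)  # ~0.788

# Empirical check: iterate f(n) = f(n//2) + f(n//3) + 1 with f(0) = f(1) = 1
# and estimate the growth exponent from two widely spaced sample points.
@lru_cache(maxsize=None)
def f(n):
    if n <= 1:
        return 1
    return f(n // 2) + f(n // 3) + 1

est = math.log(f(10**7) / f(10**4)) / math.log(10**3)
print(est)  # close to p, and well below 1 (i.e. sublinear)
```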

Algorithms: Find recursive equation of divide and conquer algorithm

I have the following "divide and conquer" algorithm A1.
A1 divides a problem of size n into 4 sub-problems of size n/4.
Then it solves them and composes the solutions in 12n time.
How can I write the recurrence equation that gives the runtime of the algorithm?
Answering the question "How can I write the recurrence equation that gives the runtime of the algorithm":
You should write it this way:
Let T(n) denote the run time of your algorithm for input size of n
T(n) = 4*T(n/4) + 12*n;
Although the master theorem does give a shortcut to the answer, it is imperative to understand the derivation of the Big O runtime. Divide and conquer recurrence relations are written in the form T(n) = q * T(n/j) + cn, where q is the number of subproblems, j the amount we divide the data for each subproblem, and cn is the time it takes to divide/combine/manipulate each subproblem at each level. cn could also be cn^2 or c, whatever the runtime would be.
In your case, you have 4 subproblems of size n/4 with each level being solved in 12n time giving a recurrence relation of T(n) = 4 * T(n/4) + 12n. From this recurrence, we can then derive the runtime of the algorithm. Given it is a divide and conquer relation, we can assume that the base case is T(1) = 1.
To solve the recurrence, I will use a technique called substitution (repeated unrolling). We know that T(n) = 4*T(n/4) + 12n, so we substitute for T(n/4): T(n/4) = 4*T(n/16) + 12(n/4). Plugging this in gives T(n) = 4*(4*T(n/16) + 12n/4) + 12n, which simplifies to T(n) = 4^2 * T(n/16) + 2*12n.
We still have more levels to capture, so we substitute for T(n/16) and get T(n) = 4^3 * T(n/64) + 3*12n. The pattern emerges: after k substitutions, T(n) = 4^k * T(n/4^k) + k*12n. We want to go all the way down to the base case T(1), so we solve n/4^k = 1 for k and get k = log_4(n) substitutions.
Plugging that in: T(n) = 4^(log_4 n) * T(1) + log_4(n) * 12n = n*1 + 12n*log_4(n). Since this is Big O analysis and log_4(n) is O(log_2 n) by the change-of-base property of logarithms, we get T(n) = n + 12n*log(n), which means T(n) is in O(n log n).
The recurrence relation that best describes it is:
T(n) = 4*T(n/4) + 12n
where T(n) = run time of the given algorithm for input of size n, 4 = number of subproblems, and n/4 = size of each subproblem.
Using the Master theorem, the time complexity is Theta(n log n).
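For powers of 4 the unrolled formula is exact, which a short sketch can confirm (assuming the base case T(1) = 1 used in the derivation above):

```python
# Unroll T(n) = 4*T(n/4) + 12n for n a power of 4, with T(1) = 1,
# and compare against the closed form n + 12*n*log4(n).
def T(n):
    if n == 1:
        return 1
    return 4 * T(n // 4) + 12 * n

for k in range(1, 8):
    n = 4**k
    closed = n + 12 * n * k        # k = log4(n)
    assert T(n) == closed, (n, T(n), closed)

print(T(4**5))  # equals 4^5 + 12*4^5*5
```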

Get time complexity of the recursion: T(n)=4T(n-1) - 3T(n-2)

I have a recurrence relation given by:
T(n)=4T(n-1) - 3T(n-2)
How do I solve this?
A detailed explanation would be appreciated.
What I tried was that I substituted for T(n-1) on the right hand side using the relation and I got this:
=16T(n-2)-12T(n-3)-3T(n-2)
But I don't know where and how to end this.
Not only can you easily get the time complexity of this recursion, you can even solve it exactly, thanks to the well-developed theory of linear recurrence relations; the one you have here is a specific case of a homogeneous linear recurrence.
To solve it, write the characteristic polynomial t^2 - 4t + 3 and find its roots, which are t = 1 and t = 3. This means your solution is of the form:
T(n) = c1 + 3^n * c2.
You can get c1 and c2 if you have boundary conditions, but for your case it is enough to conclude O(3^n) time complexity.
While it's obviously O(4^n) (because T(n) <= 4*T(n-1)), a smaller bound can be proved:
T(n) = 4*T(n-1) - 3*T(n-2)
T(n) - T(n-1) = 3*T(n-1) - 3*T(n-2)
Define D(n) = T(n) - T(n-1). Then
D(n) = 3*D(n-1)
D(n) = 3^(n-1) * D(1)
If D(1) = 0, then T(n) is constant, i.e. O(1).
Otherwise, since the differences grow exponentially and T(n) is T(0) plus the sum of the differences, the resulting function is exponential as well:
T(n) = O(3^n)
NOTE: generally, recurrence relations of this kind (where recursive calls are repeated, e.g. the recurrence for the Fibonacci sequence at value n) evaluate to exponential time complexity.
First of all, your question is incomplete: it does not provide a termination condition (a condition at which the recurrence stops). I assume that it must be
T(n) = 1 for n = 1 and T(n) = 2 for n = 2
Based on this assumption I start expanding the recurrence. Substituting for T(n-1) and T(n-2) gives:
16T(n-2) - 24T(n-3) + 9T(n-4)
whose coefficients follow the binomial pattern at power 2:
(4^2)T(n-2) - 2*4*3*T(n-3) + (3^2)T(n-4)
Expanding once more we get:
64T(n-3) - 144T(n-4) + 108T(n-5) - 27T(n-6)
which follows the pattern at power 3.
Expanding down through n-1 steps, the leading term becomes
4^(n-1) * T(1) - ... (lower-order terms)
We can see that in this expansion all the remaining terms are dominated by 4^(n-1), so we can take the asymptotic bound:
O(4^n)
As an exercise, you can expand the polynomial a few more terms, or draw the recursion tree to find out what's actually happening. (Note that this is only an upper bound; the other answers show the tighter O(3^n).)
Trying T(n) = x^n gives you a quadratic equation: x^2 = 4x - 3. This has solutions x=1 and x=3, so the general form for T(n) is a + b*3^n. The exact values of a and b depend on the initial conditions (for example, the values of T(0) and T(1)).
Depending on the initial conditions, the solution is going to be O(1) or O(3^n).
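To make the "depends on the initial conditions" point concrete, here is a small sketch (the base values T(0) = 1, T(1) = 2 are arbitrary examples) that solves for a and b and checks the closed form against direct iteration:

```python
# Closed form for T(n) = 4T(n-1) - 3T(n-2): T(n) = a + b*3^n,
# with a, b determined from T(0) and T(1): a + b = T(0), a + 3b = T(1).
from fractions import Fraction

t0, t1 = 1, 2                      # arbitrary example initial conditions
b = Fraction(t1 - t0, 2)           # subtract the two linear equations
a = t0 - b

seq = [t0, t1]
for n in range(2, 15):
    seq.append(4 * seq[n - 1] - 3 * seq[n - 2])

# Direct iteration matches the closed form a + b*3^n at every index.
for n, v in enumerate(seq):
    assert v == a + b * 3**n, n
print(seq[:5])
```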

Recurrence related to master theorem T(n)=T(n^(1/2))+1

In the Master theorem we are given a "plug-in" formula to find the big O, provided the recurrence satisfies certain conditions.
However, what about problems like the ones below? Can anyone show me a step-by-step method? And what topics would help me learn more about these types of questions? Assume that the person asking knows nothing about induction.
T(n) = T(n^(1/2)) + 1
T(n) = T(n-1) + 1
T(n) = T(n-1) + n^c, where c is a natural number > 1
T(n) = T(n-1) + c^n, where c is a natural number > 1
You'll need to know a little math to do some of these. You can figure out what the recursion looks like when you expand it out all the way to the base case, e.g. for T(n) = T(n-1) + n^c you get T(n) = 1^c + 2^c + ... + n^c, but then you need to know some math in order to know that this is O(n^(c+1)). (The easiest way to see this is by bounding the sum above and below in terms of integrals of x^c). Similarly for T(n) = T(n-1) + c^n you easily get T(n) = c^1 + c^2 + ... + c^n but you again need to use some calculus or something to figure out that this is T(n) = O(c^n).
For T(n) = T(n^(1/2)) + 1 you need to count how many times you apply the recurrence before you get to the base case. Again math helps here. When you take square-root, the logarithm gets cut in half. So you want to know how many times you can cut the logarithm in half until you get to the base case. This is O(log log n).
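The "logarithm gets cut in half" argument is easy to check numerically; this sketch (my own, with the hypothetical base case n <= 2) counts square-root steps and compares against log2(log2 n):

```python
import math

# Count how many times we can take sqrt(n) before hitting the base case n <= 2.
def sqrt_depth(n):
    count = 0
    while n > 2:
        n = math.sqrt(n)
        count += 1
    return count

# For n = 2^(2^k) the count is exactly k, i.e. log2(log2(n)) steps,
# since each square root halves the exponent 2^k.
for k in range(1, 6):
    n = 2.0 ** (2**k)
    assert sqrt_depth(n) == k, k

print(sqrt_depth(2.0**64))  # 6, since log2(log2(2^64)) = 6
```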
You can expand upon the formula and work on it:
For example:
T(n) = T(n-1) + 1
T(n) = [T(n-2) + 1] + 1
...
T(n) = 1 + 1 + 1 ... (n times)
So T(n) = O(n).
