Recurrence Relation - algorithm

Why is the recurrence relation of recursive factorial algorithm this?
T(n)=1 for n=0
T(n)=1+T(n-1) for n>0
Why is it not this?
T(n)=1 for n=0
T(n)=n*T(n-1) for n>0
Plugging in values of n (1, 2, 3, 4, ...), the second recurrence relation holds (the factorials are calculated correctly), not the first one.

We generally use a recurrence relation to find the time complexity of an algorithm.
Here, the function T(n) is not calculating the value of the factorial; it is telling you about the time complexity of the factorial algorithm.
It means that finding the factorial of n takes one more operation than finding the factorial of n-1
(i.e. T(n) = T(n-1) + 1), and so on.
So the correct recurrence relation for a recursive factorial algorithm is
T(n) = 1 for n = 0
T(n) = 1 + T(n-1) for n > 0
not the one you mentioned second.
Similarly, the recurrence for the Tower of Hanoi is
T(n) = 2T(n-1) + 1 for n > 0.
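(With the usual base case T(0) = 0, that one unrolls to T(n) = 2^n - 1, i.e. exponential time.)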
Update:
Generally it does not have anything to do with the implementation.
But a recurrence can give an intuition about the programming paradigm; for example, T(n) = 2T(n/2) + n (merge sort) suggests divide and conquer, because we are dividing n into halves.
Also, if you solve the equation, it gives you a bound on the running time, e.g. in big-O notation.
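As a quick illustration (a minimal sketch of my own, not from the question), the following Python snippet computes the factorial while also counting the recursive calls; the returned value grows like n!, but the call count grows only like T(n) = n + 1:

def factorial_with_count(n):
    # returns (value, number_of_calls) for the naive recursive factorial
    if n == 0:
        return 1, 1                    # base case: one call, constant work
    value, calls = factorial_with_count(n - 1)
    return n * value, calls + 1        # one extra multiplication on top of the n-1 case

for n in (1, 2, 3, 4, 5):
    print(n, factorial_with_count(n))  # e.g. 5 -> (120, 6): value 5!, call count 5 + 1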

Looks like T(n) is the recurrence relation of the time complexity of the recursive factorial algorithm, assuming constant time multiplication. Perhaps you misread your source?

What he put was not the factorial recursion, but the time complexity of it.
Assuming this is the pseudocode for such a recurrence:
1. func factorial(n)
2.   if (n == 0)
3.     return 1
4.   return n * factorial(n - 1)
I am assuming that tail-recursion elimination is not involved.
Lines 2 and 3 cost constant time, c1 and c2.
Line 4 costs constant time as well. However, it calls factorial(n-1), which will take some time T(n-1). Also, the time it takes to multiply factorial(n-1) by n is constant and can be absorbed once T(n-1) is accounted for.
The time for the whole function is just the sum: T(n) = c1 + c2 + T(n-1).
In big-O terms, this reduces to T(n) = 1 + T(n-1).
This, as Diam has pointed out, is a flat recursion, therefore its running time is O(n). Its space complexity is O(n) as well, though, because the recursion goes n calls deep before unwinding.

I assume that you have bad information. The second recurrence relation you cite is the correct one, as you have observed. The first one just generates the natural numbers.

This question is very confusing... Your first formula is not factorial. It is simply T(n) = n + 1, for all n. The factorial of n is the product of the first n positive integers: factorial(1) = 1, factorial(n) = n * factorial(n-1). Your second formula is essentially correct.

T(n) = T(n-1) + 1 is the correct recurrence equation for the factorial of n.
This equation gives you the time to compute the factorial of n, NOT the value of the factorial of n.

Where did you find the first one? It's completely wrong.
It's only going to add 1 each time, whatever the value is.

First you have to identify a basic operation, and for this example it is multiplication. Multiplication happens once in every recursive call. So
T(n) = T(n-1) + 1
where the +1 is the basic operation (multiplication, in this example) and T(n-1) is the next recursive call.

TL;DR: The answer to your question actually depends on what sequence your recurrence relation is defining. That is, whether the sequence T(n) in your question represents the factorial function or the running-time cost of computing the factorial function.*
The factorial function
The recursive definition of the factorial of n, f(n), is:
f(n) = n · f(n-1) for n > 0, with f(0) = 1.
As you can see, the equation above is actually a recurrence relation, since it is an equation that, together with the initial term (i.e., f(0) = 1), recursively defines a sequence (i.e., the factorial function, f(n)).
Modelling the running-time cost of computing the factorial
Now, we are going to find a model for representing the running-time cost of computing the factorial of n. Let's call T(n) the running-time cost of computing f(n).
Looking at the definition above of the factorial function f(n), its running-time cost T(n) will consist of the running-time cost of computing f(n-1) (i.e., this cost is T(n-1)) plus the running-time cost of performing the multiplication between n and f(n-1). The multiplication is achieved in constant time. Therefore we could say that T(n) = T(n-1) + 1.
However, what is the value of T(0)? T(0) represents the running-time cost of computing f(0). Since the value of f(0) is initially known by definition, the running-time cost for computing f(0) is actually constant. Therefore, we could say that T(0) = 1.
Finally, what we obtain is:
T(n) = T(n-1) + 1 for n > 0, with T(0) = 1.
This equation above is also a recurrence relation. However, what it defines (together with the initial term) is a sequence that models the running-time cost of computing the factorial function.
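For example, unrolling this recurrence gives T(n) = T(n-1) + 1 = T(n-2) + 2 = ... = T(0) + n = n + 1, which is O(n).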
* Taking into account how the sequence in your recurrence relation is named (i.e., T(n)), I think it very likely represents the latter, i.e., the running-time cost of computing the factorial function.

Related

Recurrence relation and time complexity

What is the recurrence relation and time complexity for the following pseudo-code?
temp = 1
repeat
    for i = 1 to n
        temp = temp + 1
    n = n/2
until n < 1
When we are dealing with asymptotic notations like Big-O, Omega and Theta, we don't consider the constants. No doubt your time complexity will go like
n + n/2 + n/4 + ... + 1
but if you add up this decreasing geometric series you will get an exact answer equal to c*n, where c is some constant greater than 1. But in asymptotic notation, as I said earlier, constants don't matter, so whether the value of c is 2 or 50 or 100 or 10000, it will be O(n) only.
Another thing: try not to use the Master Theorem for solving recurrence relations; use the recursion tree method instead, as it is purely conceptual, will help you build up your understanding, and can be used in every case. The Master Theorem is like a shortcut, and it also has limitations.
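As a sanity check (a minimal sketch of my own, not part of the answer), the following Python snippet counts how many times the inner loop body of the pseudo-code above runs, and it indeed stays below 2*n:

def total_work(n):
    count = 0
    while n >= 1:
        for _ in range(n):   # inner loop does n units of work at this level
            count += 1
        n = n // 2           # the problem size is halved each round
    return count

for n in (16, 1000, 10**6):
    print(n, total_work(n))  # e.g. 16 -> 31, always less than 2*n, hence O(n)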

Forming recurrence relations

I have a question on forming recurrence relations and calculating the time complexity.
If we have a recurrence relation T(n) = 2T(n/2) + c, then it means that the constant amount of work c is divided into 2 parts, T(n/2) + T(n/2), when drawing the recursion tree.
Now consider the recurrence relation of factorial, which is T(n) = n*T(n-1) + c. If we follow the above method then we should divide the work into n instances, each of T(n-1), and then evaluate the time complexity. However, if we calculate it this way, the answer will be O(n^n), because we will have O(n^n) recursive calls, which is wrong.
So my question is: why can't we use the same approach of dividing the problem into subparts as in the first case?
Let a recurrence relation be T(n) = a * T(n/b) + O(n).
This recurrence implies that there is a recursive function which:
divides the original problem into a subproblems
the size of each subproblem will be n/b if the current problem size is n
when the subproblems are trivial (too easy to solve), no recursion is needed and they are solved directly (and this process will take O(n) time).
When we say that original problem is divided into a subproblems, we mean that there are a recursive calls in the function body.
So, for instance, if the function is:
int f(int n)
{
    if (n <= 1)
        return n;
    return f(n-1) + f(n-2);
}
we say that the problem (of size n) is divided into 2 subproblems, of sizes n-1 and n-2. The recurrence relation would be T(n) = T(n-1) + T(n-2) + c. This is because there are 2 recursive calls, with different arguments.
But, if the function is like:
int f(int n)
{
    if (n <= 2)
        return n;
    return n * f(n-1);
}
we say that the problem (of size n) is divided into only 1 subproblem, which is of size n-1. This is because there is only 1 recursive call.
So, the recurrence relation would be T(n) = T(n-1) + c.
If we multiplied T(n-1) by n, as might seem natural, we would be conveying that n recursive calls were made.
Remember, our main motive for forming recurrence relations is to perform (asymptotic) complexity analysis of recursive functions. Even though it may seem like n cannot be discarded from the relation, since it depends on the input size, it does not serve the same purpose in the recurrence as it does in the function itself.
But if you are talking about the value returned by the function, it would be f(n) = n * f(n-1). Here, we multiply by n because it is an actual value that will be used in the computation.
Now, coming to the c in T(n) = T(n-1) + c: it merely says that when we solve a problem of size n, we need to solve a smaller problem of size n-1 and also perform some constant-time work such as comparison, multiplication and returning values.
We can never divide a "constant amount of work c" into two parts T(n/2) and T(n/2), even using the recursion tree method. What we are in fact dividing is the problem, into two halves. The same amount of work c is needed in each recursive call at each level of the recursion tree.
If there were a recurrence relation like T(n) = 2T(n/2) + O(n), where the amount of work to be done depends on the input size, then the amount of work to be done at each level will be halved at the next level, just like you described.
But, if the recurrence relation were like T(n) = T(n-1) + O(n), we would not be dividing the amount of work into two halves in the next recursion level. We would just be reducing the amount of work by one at each successive level (n-sized problem becomes n-1 at next level).
To check how the amount of work changes with recursion, apply the substitution method to your recurrence relation.
I hope I have answered your question.
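As an illustration of the call counts (a minimal sketch of my own, not part of the answer), the following Python snippet counts the recursive calls made by analogues of the two functions above; the one with two recursive calls grows exponentially, while the one with a single recursive call grows linearly:

def fib_like(n, counter):
    counter[0] += 1                  # count this call
    if n <= 1:
        return n
    return fib_like(n - 1, counter) + fib_like(n - 2, counter)

def fact_like(n, counter):
    counter[0] += 1                  # count this call
    if n <= 2:
        return n
    return n * fact_like(n - 1, counter)

for n in (10, 20, 25):
    c1, c2 = [0], [0]
    fib_like(n, c1)
    fact_like(n, c2)
    print(n, c1[0], c2[0])           # exponential vs. linear number of calls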

calculate n for nlog(n) and n! when time is 1 second. (algorithm takes f(n) microseconds)

Given the following problem from the CLRS algorithms book:
For each function f (n) and time t in the following table, determine
the largest size n of a problem that can be solved in time t, assuming
that the algorithm to solve the problem takes f(n) microseconds.
how can one calculate n for f(n)=nlog(n) when time is 1 second?
how can one calculate n for f(n)=n! when time is 1 second?
It is mentioned that the algorithm takes f(n) microseconds. Then, one may consider the algorithm to consist of f(n) steps, each of which takes 1 microsecond.
The questions state that the relevant f(n) values are bounded by 1 second (i.e. 10^6 microseconds). Then, since you are looking for the largest n possible fulfilling those conditions, your questions boil down to the inequalities given below.
1) f(n) = n*log(n) <= 10^6
2) f(n) = n! <= 10^6
The rest, I believe, is mainly juggling with algebra and logarithmic equations to find the relevant values.
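For the n*log(n) case, a minimal numerical sketch (my own, assuming the logarithm is base 2, as is usual in this context) can find the largest n directly instead of solving the inequality analytically:

import math

def largest_n(limit=10**6):
    # largest n with n * log2(n) <= limit, found by simple linear search
    n = 1
    while (n + 1) * math.log2(n + 1) <= limit:
        n += 1
    return n

print(largest_n())   # largest problem size solvable in one second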
In the first case, you can refer to Newton's method for approximating roots (for example, an application of Newton's method to calculating a cube root) or to the Lambert W function; either may help to calculate the value of n. From what I have found, there is otherwise no simple analytical approach that can help.
In the second case, a Python script can help to calculate n with a manual approach.
def calFact(n):
    if n == 0 or n == 1:
        return 1
    return n * calFact(n - 1)

nVal = 1
while calFact(nVal) < 1000000:   # f(n) = n! microseconds; 1 second = 10^6 microseconds
    nVal = nVal + 1              # loop exits at the first n with n! >= 10^6
print(nVal - 1)                  # so the largest n with n! <= 10^6 is nVal - 1
So in this case we are trying to find the largest n such that n! is at most 10^6 (which turns out to be n = 9, since 9! = 362,880 but 10! = 3,628,800 already exceeds 10^6).

Get time complexity of the recursion: T(n)=4T(n-1) - 3T(n-2)

I have a recurrence relation given by:
T(n)=4T(n-1) - 3T(n-2)
How do I solve this?
Any detailed explanation would be appreciated.
What I tried was substituting for T(n-1) on the right-hand side using the relation, and I got this:
T(n) = 16T(n-2) - 12T(n-3) - 3T(n-2)
But I don't know where and how to end this.
Not only can you easily get the time complexity of this recursion, you can even solve it exactly. This is thanks to the exhaustive theory behind linear recurrence relations, and the one you have here is a specific case of a homogeneous linear recurrence.
To solve it you need to write the characteristic polynomial, t^2 - 4t + 3, and find its roots, which are t = 1 and t = 3. This means that your solution is of the form:
T(n) = c1 + 3^n * c2.
You can get c1 and c2 if you have boundary conditions, but for your case it is enough to claim O(3^n) time complexity.
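As a quick check (a minimal Python sketch of my own, with assumed boundary conditions T(0) = 1 and T(1) = 2 for illustration), you can solve for c1 and c2 and verify the closed form against the recurrence:

# c1 + c2 = T(0) and c1 + 3*c2 = T(1)  =>  c2 = (T(1) - T(0)) / 2, c1 = T(0) - c2
T0, T1 = 1, 2                    # assumed boundary conditions, for illustration only
c2 = (T1 - T0) / 2
c1 = T0 - c2

def T_rec(n):
    if n == 0:
        return T0
    if n == 1:
        return T1
    return 4 * T_rec(n - 1) - 3 * T_rec(n - 2)

for n in range(8):
    print(n, T_rec(n), c1 + c2 * 3 ** n)   # the recurrence and the closed form agree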
While it's obviously O(4^n) (because T(n) <= 4*T(n-1)), it looks like a tighter bound can be proved:
T(n) = 4*T(n-1) - 3*T(n-2)
T(n) - T(n-1) = 3*T(n-1) - 3*T(n-2)
D(n) = T(n) - T(n-1)
D(n) = 3*D(n-1)
D(n) = 3^(n-1) * D(1)
if D(1) = 0, T(n) = const = O(1)
otherwise since the difference is exponential, the resulting function will be exponential as well:
T(n) = O(3^n)
NOTE: Generally, these kinds of recurrence relations (where the recursive calls are repeated, e.g. the recurrence relation for the Fibonacci sequence) result in an exponential time complexity.
First of all, your question is incomplete. It does not provide a termination condition (a condition at which the recurrence terminates). I assume that it must be
T(n) = 1 for n = 1 and T(n) = 2 for n = 2.
Based on this assumption, I start breaking down the above recurrence relation.
On substituting for T(n-1) and T(n-2) using the recurrence, I get:
16T(n-2) - 24T(n-3) + 9T(n-4)
The coefficients follow a pattern of power 2:
(4^2)T(n-2) - (2·4·3)T(n-3) + (3^2)T(n-4)
Breaking the recurrence down further, we get:
64T(n-3) - 144T(n-4) + 108T(n-5) - 27T(n-6)
which follows a pattern of power 3.
On breaking the relation down for n-1 levels we will get:
(4^(n-1))T(1) - ... something like that
We can clearly see that in the above expansion all the remaining terms will be smaller than 4^(n-1), so we can take the asymptotic bound as:
O(4^n)
As an exercise you can expand the relation for a few more terms and also draw the recursion tree to find out what's actually happening.
Trying T(n) = x^n gives you a quadratic equation: x^2 = 4x - 3. This has solutions x=1 and x=3, so the general form for T(n) is a + b*3^n. The exact values of a and b depend on the initial conditions (for example, the values of T(0) and T(1)).
Depending on the initial conditions, the solution is going to be O(1) or O(3^n).

Complexity of recursive factorial program

What's the complexity of a recursive program to find factorial of a number n? My hunch is that it might be O(n).
If you take multiplication as O(1), then yes, O(N) is correct. However, note that multiplying two numbers of arbitrary length x is not O(1) on finite hardware -- as x tends to infinity, the time needed for multiplication grows (e.g. if you use Karatsuba multiplication, it's O(x ** 1.585)).
You can theoretically do better for sufficiently huge numbers with Schönhage-Strassen, but I confess I have no real-world experience with that one. x, the "length" or "number of digits" of N (in whatever base; it doesn't matter for big-O anyway), grows as O(log N), of course.
If you mean to limit your question to factorials of numbers short enough to be multiplied in O(1), then there's no way N can "tend to infinity" and therefore big-O notation is inappropriate.
Assuming you're talking about the most naive factorial algorithm ever:
factorial(n):
    if (n = 0) then return 1
    otherwise return n * factorial(n-1)
Yes, the algorithm is linear, running in O(n) time. This is the case because it executes once every time it decrements the value n, and it decrements the value n until it reaches 0, meaning the function is called recursively n times. This assumes, of course, that both decrementing and multiplication are constant-time operations.
Of course, if you implement factorial some other way (for example, using addition recursively instead of multiplication), you can end up with a much more time-complex algorithm. I wouldn't advise using such an algorithm, though.
When you express the complexity of an algorithm, it is always as a function of the input size. It is only valid to assume that multiplication is an O(1) operation if the numbers that you are multiplying are of fixed size. For example, if you wanted to determine the complexity of an algorithm that computes matrix products, you might assume that the individual components of the matrices were of fixed size. Then it would be valid to assume that multiplication of two individual matrix components was O(1), and you would compute the complexity according to the number of entries in each matrix.
However, when you want to figure out the complexity of an algorithm to compute N! you have to assume that N can be arbitrarily large, so it is not valid to assume that multiplication is an O(1) operation.
If you want to multiply an n-bit number with an m-bit number the naive algorithm (the kind you do by hand) takes time O(mn), but there are faster algorithms.
If you want to analyze the complexity of the easy algorithm for computing N!
factorial(N)
    f = 1
    for i = 2 to N
        f = f * i
    return f
then at the k-th step in the for loop, you are multiplying (k-1)! by k. The number of bits used to represent (k-1)! is O(k log k) and the number of bits used to represent k is O(log k). So the time required to multiply (k-1)! and k is O(k (log k)^2) (assuming you use the naive multiplication algorithm). Then the total amount of time taken by the algorithm is the sum of the time taken at each step:
sum k = 1 to N [k (log k)^2] <= (log N)^2 * (sum k = 1 to N [k]) =
O(N^2 (log N)^2)
You could improve this performance by using a faster multiplication algorithm, like Schönhage-Strassen which takes time O(n*log(n)*log(log(n))) for 2 n-bit numbers.
The other way to improve performance is to use a better algorithm to compute N!. The fastest one that I know of first computes the prime factorization of N! and then multiplies all the prime factors.
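To see why the multiplication cost matters here (a minimal sketch of my own, not from the answer), the following Python snippet runs the simple iterative factorial and prints the bit length of the result; the number of bits grows on the order of N*log2(N), so the later multiplications operate on ever larger numbers:

def factorial_iter(N):
    f = 1
    for i in range(2, N + 1):
        f = f * i            # the running product keeps getting wider
    return f

for N in (10, 100, 1000):
    print(N, factorial_iter(N).bit_length())   # bits of N!, on the order of N * log2(N)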
The time complexity of the recursive factorial can be derived from the code:
factorial(n) {
    if (n = 0)
        return 1
    else
        return n * factorial(n-1)
}
So, the time complexity of the recursion is:
T(n) = T(n-1) + 3   (the 3 accounts for the three constant-time operations in each
                     recursive call: checking the value of n, the subtraction, and the
                     multiplication)
     = T(n-2) + 6   (second recursive call)
     = T(n-3) + 9   (third recursive call)
     ...
     = T(n-k) + 3k
until k = n. Then,
     = T(n-n) + 3n
     = T(0) + 3n
     = 1 + 3n
In big-O notation, T(n) is directly proportional to n.
Therefore, the time complexity of the recursive factorial is O(n).
Each recursive call itself uses only constant extra space, but up to n calls sit on the call stack at once, so the space complexity is O(n).
