T (Running Time) vs T (Recurrence) - algorithm

I don't understand what the difference is between the T from a recurrence and the T from running time. I took some courses that taught me recursion and linear recurrences; for example, in this code:
factorial(n) {
    if (n == 0)
        return 1
    else
        return n * factorial(n-1)
}
Why is the time complexity O(n)?
I took a course about recurrences and I'm a little confused. I would analyze the code this way:
T(n) = 1 for n = 0
T(n) = n*T(n-1) for n > 0
So if we expand the recursion:
T(n) = n*(n-1)*T(n-2) = ...
so the recurrence grows like n! and the growth is O(n!). However, the actual analysis is different, so what am I doing wrong?
And then I have another, similar question. I took a linear recurrence course, where I learned how to solve recurrences such as f(n) = f(n-1) + f(n-2).
So, in the Fibonacci program:
def Fibonacci(n):
    if n < 0:
        print("Incorrect input")
    # First Fibonacci number is 0
    elif n == 0:
        return 0
    # Second Fibonacci number is 1
    elif n == 1:
        return 1
    else:
        return Fibonacci(n-1) + Fibonacci(n-2)
I would solve the Fibonacci linear recurrence with a closed form like this:
1/sqrt(5) * ((1+sqrt(5))/2) * ((1+sqrt(5))/2)**n - 1/sqrt(5) * ((1-sqrt(5))/2) * ((1-sqrt(5))/2)**n
Why would the order of growth not be O(1/sqrt(5) * ((1+sqrt(5))/2) * ((1+sqrt(5))/2)**n)?

There's a difference between the value of the function and the time to compute that value.
When you analyze your recurrence, you claim that the analysis is:
T(n) = 1 for n = 0
T(n) = n*T(n-1) for n > 0
But the computation involved in producing each next term is really just 1 (one multiplication), given the previous value, so it should be T(n) = 1 + T(n-1). When you rerun your analysis, the linear result will become clear.
A similar separation between the value and the runtime will help you analyze your second question.
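To see the separation concretely, here is a minimal sketch (my own, not from the original question) that counts the multiplications the recursive factorial performs; the count grows linearly in n even though the returned value grows like n!:
def factorial(n, counter):
    # counter[0] tallies the multiplications performed
    if n == 0:
        return 1
    counter[0] += 1                    # one multiplication per level
    return n * factorial(n - 1, counter)

for n in (5, 10, 20):
    counter = [0]
    value = factorial(n, counter)
    print(n, counter[0], value)        # n multiplications; value is n!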

Related

How to calculate time complexity of this recursion?

I'm interested in calculating the following code's time and space complexity, but I seem to be struggling a lot. I know that the deepest the recursion can go is n, so the space should be O(n). However, I have no idea how to calculate the time complexity; I don't know how to write the formula for recursions of this form, like f(f(n-1)).
If it were something like return f3(n-1) + f3(n-1), then I know it would be O(2^n), since T(n) = 2T(n-1), correct?
Here's the code:
int f3(int n)
{
    if (n <= 2)
        return 1;
    f3(1 + f3(n-2));
    return n - 1;
}
Thank you for your help!
Notice that f3(n) = n - 1 for all n ≥ 2, so in the line f3(1 + f3(n-2)), first f3(n-2) is computed, which returns n - 3, and then f3(1 + (n - 3)) = f3(n-2) is computed again!
So, f3(n) computes f3(n-2) twice, along with some O(1) overhead.
This gives the recurrence T(n) = 2T(n-2) + c for some constant c, where T(n) is the running time of f3(n).
Solving the recurrence, we get T(n) = O(2^(n/2)).
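If you want to sanity-check this bound, here is a small sketch (my Python port of the C code, with a call counter added); the count roughly doubles each time n grows by 2:
calls = 0

def f3(n):
    global calls
    calls += 1
    if n <= 2:
        return 1
    f3(1 + f3(n - 2))        # in effect evaluates f3(n-2) twice
    return n - 1

for n in (10, 12, 14, 16):
    calls = 0
    f3(n)
    print(n, calls)          # 31, 63, 127, 255: doubling per step of 2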

Time Complexity of Recursive Algorithm Array

I have a recursive algorithm like the following, which computes the smallest element in an array.
ALGORITHM F_min1(A[0..n-1])
//Input: An array A[0..n-1] of real numbers
If n = 1
    return A[0]
else
    temp ← F_min1(A[0..n-2])
    If temp ≤ A[n-1]
        return temp
    else
        return A[n-1]
I think the recurrence relation should be
T(n) = T(n-1) + n
However, I am not sure about the + n part. I want to be sure in which cases the recurrence is T(n) = T(n-1) + 1 and in which cases it is T(n) = T(n-1) + n.
The recurrence should be
T(1) = 1,
T(n) = T(n-1) + 1
because besides the recursive call on the smaller array, all computational effort (reading the last entry of A and doing the comparison) takes constant time in the unit-cost measure. The algorithm can be understood as divide and conquer, where the divide step splits the array into a prefix and the last entry; the conquer step, a single comparison, cannot take more than constant time here. In short, there is no case where linear work happens after the recursive call.
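To make the constant work per level concrete, here is a Python sketch of the pseudocode (my own translation, with a comparison counter added); it performs exactly n - 1 comparisons, matching T(n) = T(n-1) + 1:
comparisons = 0

def f_min(A, n):
    # smallest element among A[0..n-1]
    global comparisons
    if n == 1:
        return A[0]
    temp = f_min(A, n - 1)            # recurse on the prefix A[0..n-2]
    comparisons += 1                  # the single constant-time comparison
    return temp if temp <= A[n - 1] else A[n - 1]

A = [7, 3, 9, 1, 4, 8]
print(f_min(A, len(A)), comparisons)  # prints: 1 5  (n - 1 comparisons)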

Recurrence Relation based off Pseudo Code (Time complexity)

Consider the element uniqueness problem, in which we are given a range i, i+1, ..., j of indices for an array A, and we want to determine if the elements of this range, A[i], A[i+1], ..., A[j], are all unique, that is, there is no repeated element in this group of array entries. Consider the following (inefficient) recursive algorithm.
public static boolean isUnique(int[] A, int start, int end) {
    if (start >= end) return true;      // the range is too small for repeats
    // check recursively if first part of array A is unique
    if (!isUnique(A, start, end-1))     // there is a duplicate in A[start..end-1]
        return false;
    // check recursively if second part of array A is unique
    if (!isUnique(A, start+1, end))     // there is a duplicate in A[start+1..end]
        return false;
    return (A[start] != A[end]);        // check if first and last are different
}
Let n denote the number of entries under consideration, that is, let n = end − start + 1. What is an upper bound on the asymptotic running time of this code fragment for large n? Provide a brief and precise explanation.
(You lose marks if you do not explain.) To begin your explanation, you may say how many recursive calls the
algorithm will make before it terminates, and analyze the number of operations per invocation of this algorithm.
Alternatively, you may provide the recurrence characterizing the running time of this algorithm, and then solve it
using the iterative substitution technique.
This question is from a sample practice exam for an algorithms class. Below is my current answer; can someone please help verify that I'm on the right track?
Answer:
The recurrence equation:
T(n) = 1 if n = 1,
T(n) = 2T(n-1) if n > 1
After solving using iterative substitution, I got
T(n) = 2^k * T(n-k), which I solved to O(2^(n-1)) and simplified to O(2^n).
Your recurrence relation should be T(n) = 2T(n-1) + O(1) with T(1) = O(1). However, this doesn't change the asymptotics; the solution is still T(n) = O(2^n). To see this, you can expand the recurrence relation to get T(n) = O(1) + 2(O(1) + 2(O(1) + ...)), so you have T(n) = O(1) * (1 + 2 + 4 + ... + 2^n) = O(1) * (2^(n+1) - 1) = O(2^n).
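To confirm this empirically, here is a Python port of the method (my sketch, not part of the exam) with a call counter; on an all-unique input there are no early returns, so it makes exactly 2^n - 1 calls:
calls = 0

def is_unique(A, start, end):
    global calls
    calls += 1
    if start >= end:
        return True
    if not is_unique(A, start, end - 1):   # first part A[start..end-1]
        return False
    if not is_unique(A, start + 1, end):   # second part A[start+1..end]
        return False
    return A[start] != A[end]

for n in (4, 6, 8, 10):
    calls = 0
    is_unique(list(range(n)), 0, n - 1)
    print(n, calls)                        # 15, 63, 255, 1023 = 2^n - 1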

Running Time of Divide-and-Conquer Fibonacci Program

count = 0

def fibonacci(n):
    global count
    count = count + 1
    if not isinstance(n, int):
        print('Invalid Input')
        return None
    if n < 0:
        print('Invalid Input')
        return None
    if n == 0:
        return 0
    if n == 1:
        return 1
    fib = fibonacci(n-1) + fibonacci(n-2)
    return fib

fibonacci(8)
print(count)
I was trying to find the running time of this Fibonacci program. Can anyone help me solve the recurrence relation for it?
T(n) = T(n-1) + T(n-2)... What would the running-time calculation be from here?
Thanks... :)
I am assuming you meant 'fibonacci' where you said 'factorial'.
At each level, you have two calls to fibonacci(). This means your running time will be O(2^n). You can see this by drawing the recursion tree.
For a much better and more detailed explanation, please see Computational complexity of Fibonacci Sequence.
You can see the Wikipedia article, but here is a simple observation. As you wrote:
T(n) < 2T(n-1) = 2 * 2T(n-2) = ... = 2^(n-1)T(1) = 2^(n-1), so T(n) is in O(2^n).
In fact, you should solve the characteristic equation x^2 = x + 1, whose roots are phi1 = (1+sqrt(5))/2 and phi2 = (1-sqrt(5))/2, so the result is c1*phi1^n + c2*phi2^n; but because |phi2| is smaller than 1, for big n we can say T(n) = Θ(phi1^n).
Edit: You can also rewrite your current solution to take O(n) running time, with a loop that starts from the first elements.
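For reference, the O(n) loop version might look like this (a sketch of mine, not from the original answer):
def fib_iterative(n):
    # Iterative Fibonacci: O(n) time, O(1) space.
    a, b = 0, 1                  # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b          # one constant-time step per iteration
    return a

print([fib_iterative(i) for i in range(9)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21]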
Take a look at this, especially time.clock(). Call clock() before your function call and after; the difference is the elapsed time.
Btw: why so much code for Fibonacci?
def fib(n): return fib(n - 1) + fib(n - 2) if n > 1 else n
The runtime is 2F(n+1) - 1 calls, where F(n) is the nth Fibonacci number.
Here's a quick inductive proof:
As a base case, if n = 0 or n = 1, then we make exactly one call, and since F(1) = F(2) = 1, we have 2F(n+1) - 1 = 1.
For the inductive step, if n > 1, then we make as many calls as are necessary to evaluate the function on n-1 and n-2. By the inductive hypothesis, this takes 2F(n) - 1 + 2F(n-1) - 1 = 2F(n+1) - 2 recursive calls to complete. However, because we count the current function call as well, we add one to this to get 2F(n+1) - 1 as required.
Note that 2F(n+1) - 1 is an expression for the nth Leonardo number, where
L(0) = L(1) = 1
L(n+2) = L(n) + L(n+1) + 1
which grows as Θ(φ^n), as Saeed points out. However, this answer is mathematically exact.
This is more accurately the runtime you're interested in, since you need to account for the work done in each recursive call itself. If you leave off the +1 term, you just get back the Fibonacci series!
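Here is a quick sketch (mine, not part of the answer) that checks the formula against an instrumented version of the one-liner above:
calls = 0

def fib(n):
    global calls
    calls += 1
    return fib(n - 1) + fib(n - 2) if n > 1 else n

def F(n):                        # iterative Fibonacci, used only for the formula
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(10):
    calls = 0
    fib(n)
    print(n, calls, 2 * F(n + 1) - 1)   # the two counts always agree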

Recurrence Relation

Why is the recurrence relation of the recursive factorial algorithm this:
T(n)=1 for n=0
T(n)=1+T(n-1) for n>0
Why is it not this?
T(n)=1 for n=0
T(n)=n*T(n-1) for n>0
Putting in values of n, i.e., 1, 2, 3, 4, ..., the second recurrence relation holds (the factorials are correctly calculated), not the first one.
We generally use a recurrence relation to find the time complexity of an algorithm.
Here, the function T(n) is not actually calculating the value of the factorial; it is telling you the time complexity of the factorial algorithm.
It means that finding the factorial of n takes one more operation than finding the factorial of n-1
(i.e., T(n) = T(n-1) + 1), and so on.
So the correct recurrence relation for a recursive factorial algorithm is
T(n) = 1 for n = 0
T(n) = 1 + T(n-1) for n > 0
not the one you mentioned later.
Similarly, the recurrence for the Tower of Hanoi is
T(n) = 2T(n-1) + 1 for n > 0.
Update:
Generally, it does not have anything to do with the implementation.
But a recurrence can give an intuition about the programming paradigm; e.g., T(n) = 2T(n/2) + n (merge sort) suggests divide and conquer, because we are dividing n in half. Expanding it, there are about log2(n) levels with n work per level, giving the familiar O(n log n) bound.
Also, if you solve the equation, it will give you a bound on the running time, e.g., in big-O notation.
It looks like T(n) is the recurrence relation for the time complexity of the recursive factorial algorithm, assuming constant-time multiplication. Perhaps you misread your source?
What he put was not the factorial recursion, but the time complexity of it.
Assuming this is the pseudocode for such a recurrence:
1. func factorial(n)
2.     if (n == 0)
3.         return 1
4.     return n * factorial(n - 1)
I am assuming that tail-recursion elimination is not involved.
Lines 2 and 3 cost constant time, c1 and c2.
Line 4 costs constant time as well. However, it calls factorial(n-1), which takes some time T(n-1). The time it takes to multiply factorial(n-1) by n is constant and is absorbed into that constant term.
The time for the whole function is just the sum: T(n) = c1 + c2 + T(n-1).
In big-O notation, this reduces to T(n) = 1 + T(n-1).
This, as Diam has pointed out, is a flat recursion, so its running time is O(n). Its space complexity is large too, O(n), since each of the n nested calls keeps a stack frame alive.
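A small sketch (my own, not from the answer) that makes the O(n) space cost visible through Python's recursion limit:
import sys

def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(10))             # 3628800; ten nested stack frames were live
sys.setrecursionlimit(100)       # shrink the stack budget to make the point
try:
    factorial(200)               # needs ~200 frames, more than allowed
except RecursionError:
    print("RecursionError: recursion depth grows linearly with n")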
I assume that you have bad information. The second recurrence relation you cite is the correct one, as you have observed. The first one just generates the natural numbers.
This question is very confusing... Your first formula is not the factorial. It is simply T(n) = n + 1, for all n. The factorial of n is the product of the first n positive integers: factorial(1) = 1, factorial(n) = n * factorial(n-1). Your second formula is essentially correct.
T(n) = T(n-1) + 1 is the correct recurrence equation for the factorial of n.
This equation gives you the time to compute the factorial of n, NOT the value of the factorial of n.
Where did you find the first one? It's completely wrong.
It's only going to add 1 each time, whatever the value is.
First you have to find the basic operation, and for this example it is multiplication. Multiplication happens once in every recursion. So
T(n) = T(n-1) + 1
where the +1 is the basic operation (multiplication for this example)
and T(n-1) is the next recursive call.
TL;DR: The answer to your question actually depends on what sequence your recurrence relation is defining. That is, whether the sequence T(n) in your question represents the factorial function or the running-time cost of computing the factorial function.*
The factorial function
The recursive definition of the factorial of n, f(n), is:
f(n) = n * f(n-1) for n > 0, with f(0) = 1.
As you can see, the equation above is actually a recurrence relation, since it is an equation that, together with the initial term (i.e., f(0) = 1), recursively defines a sequence (i.e., the factorial function, f(n)).
Modelling the running-time cost of computing the factorial
Now, we are going to find a model for the running-time cost of computing the factorial of n. Let's call T(n) the running-time cost of computing f(n).
Looking at the definition above of the factorial function f(n), its running-time cost T(n) will consist of the running-time cost of computing f(n-1) (i.e., T(n-1)) plus the running-time cost of performing the multiplication between n and f(n-1). The multiplication is achieved in constant time. Therefore we could say that T(n) = T(n-1) + 1.
However, what is the value of T(0)? T(0) represents the running-time cost of computing f(0). Since the value of f(0) is known by definition, the running-time cost of computing f(0) is constant. Therefore, we could say that T(0) = 1.
Finally, what we obtain is:
T(n) = T(n-1) + 1 for n > 0, with T(0) = 1.
This equation is also a recurrence relation. However, what it defines (together with the initial term) is a sequence that models the running-time cost of computing the factorial function.
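Unrolling this last recurrence makes the linear bound explicit:
T(n) = T(n-1) + 1 = T(n-2) + 2 = ... = T(0) + n = n + 1,
so T(n) = O(n), while the value f(n) = n! is what grows factorially.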
* Taking into account what the sequence in your recurrence relation is called (i.e., T(n)), I think it very likely represents the latter, i.e., the running-time cost of computing the factorial function.
