Proving this recursive Fibonacci implementation runs in time O(2^n)?

I'm having difficulty proving that the 'bad' version of Fibonacci is O(2^n).
I.e., given the function
int fib(int x)
{
    if ( x == 1 || x == 2 )
    {
        return 1;
    }
    else
    {
        return ( fib( x - 1 ) + fib( x - 2 ) );
    }
}
Can I get help with the proof that this is O(2^n)?

Let's start off by writing a recurrence relation for the runtime:
T(1) = 1
T(2) = 1
T(n+2) = T(n) + T(n + 1) + 1
Now, let's take a guess that
T(n) ≤ 2^n
If we try to prove this by induction, the base cases check out:
T(1) = 1 ≤ 2 = 2^1
T(2) = 1 ≤ 4 = 2^2
Then, in the inductive step, we see this:
T(n + 2) = T(n) + T(n + 1) + 1
≤ 2^n + 2^(n+1) + 1
< 2^(n+1) + 2^(n+1)
= 2^(n+2)
Therefore, by induction, we can conclude that T(n) ≤ 2^n for any n, and therefore T(n) = O(2^n).
With a more precise analysis, you can prove that T(n) = 2F(n) - 1, where F(n) is the nth Fibonacci number. This proves, more accurately, that T(n) = Θ(φ^n), where φ is the Golden Ratio, which is approximately 1.618. Note that φ^n = o(2^n) (using little-o notation), so this is a much better bound.
Hope this helps!
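As a quick numerical check of both claims, here is a minimal Python sketch (assuming the base cases T(1) = T(2) = 1 and F(1) = F(2) = 1 from above) that tabulates the recurrence and compares it against 2^n and 2F(n) - 1:

T = {1: 1, 2: 1}
F = {1: 1, 2: 1}
for n in range(3, 30):
    T[n] = T[n - 2] + T[n - 1] + 1   # the runtime recurrence above
    F[n] = F[n - 2] + F[n - 1]       # the Fibonacci numbers

print(all(T[n] <= 2 ** n for n in T))        # True: T(n) <= 2^n
print(all(T[n] == 2 * F[n] - 1 for n in T))  # True: T(n) = 2F(n) - 1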

Try manually doing a few test cases like fib(5) and take note of how many times the method fib() is called.
A fat hint would be to notice that every time fib() is called (except when x is 1 or 2), fib() is called twice. Each of those calls fib() two more times each, and so on...

There's actually a pretty simple proof that the total number of calls to fib is going to be 2Fib(n) - 1, where Fib(n) is the nth Fibonacci number. It goes like this:
The set of calls to fib forms a binary tree, where each call is either a leaf (for x = 1 or x = 2) or else spawns two child calls (for x > 2).
Each leaf contributes exactly 1 to the total returned by the original call; therefore there are Fib(n) total leaves.
The total number of internal nodes in any binary tree is equal to L - 1, where L is the number of leaves, so the total number of nodes in this tree is 2L - 1.
This shows that the running time (measured in terms of total calls to fib) is
T(n) = 2Fib(n) - 1 = O(Fib(n))
and since Fib(n) = Θ(φ^n), where φ is the golden ratio
φ = (1 + sqrt(5))/2 = 1.618...
this proves that T(n) = Θ(1.618...^n), which is o(2^n).
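You can also confirm the count empirically; here is a minimal Python sketch that instruments the function from the question (base cases fib(1) = fib(2) = 1) and checks the total against 2*Fib(n) - 1:

calls = 0

def fib(x):
    global calls
    calls += 1                        # count every call, leaf or internal
    if x == 1 or x == 2:
        return 1
    return fib(x - 1) + fib(x - 2)

for n in range(1, 15):
    calls = 0
    value = fib(n)
    assert calls == 2 * value - 1     # total calls = 2*Fib(n) - 1
    print(n, value, calls)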

Using the Recursion Tree Method:

            T(n)
           ↙    ↘
      T(n-1)    T(n-2)
      ↙    ↘    ↙    ↘
T(n-2)  T(n-3)  T(n-3)  T(n-4)

Each node spawns calls for fib(x - 1) and fib(x - 2); if you expand the recursion tree in this manner, a branch stops when x = 1 or x = 2 (the base case). The tree above shows only three levels of the recursion tree. To solve this tree you need two important pieces of information: 1) the height of the tree, and 2) how much work is done at each node.
The height of this tree is at most n, and the work at each node is O(1). Level i contains at most 2^i nodes, so the total work is at most 2^0 + 2^1 + ... + 2^(n-1) < 2^n, which gives O(2^n).

Related

is the following way correct for determines the complexity of Fibonacci function? [duplicate]

I understand Big-O notation, but I don't know how to calculate it for many functions. In particular, I've been trying to figure out the computational complexity of the naive version of the Fibonacci sequence:
int Fibonacci(int n)
{
    if (n <= 1)
        return n;
    else
        return Fibonacci(n - 1) + Fibonacci(n - 2);
}
What is the computational complexity of the Fibonacci sequence and how is it calculated?
You model the time function to calculate Fib(n) as the sum of the time to calculate Fib(n-1), plus the time to calculate Fib(n-2), plus the time to add them together (O(1)). This assumes that repeated evaluations of the same Fib(n) take the same time - i.e. no memoization is used.
T(n<=1) = O(1)
T(n) = T(n-1) + T(n-2) + O(1)
You solve this recurrence relation (using generating functions, for instance) and you'll end up with the answer.
Alternatively, you can draw the recursion tree, which will have depth n, and intuitively figure out that this function is asymptotically O(2^n). You can then prove your conjecture by induction.
Base: n = 1 is obvious
Assume T(n-1) = O(2^(n-1)), therefore
T(n) = T(n-1) + T(n-2) + O(1) which is equal to
T(n) = O(2^(n-1)) + O(2^(n-2)) + O(1) = O(2^n)
However, as noted in a comment, this is not the tight bound. An interesting fact about this function is that the T(n) is asymptotically the same as the value of Fib(n) since both are defined as
f(n) = f(n-1) + f(n-2).
The leaves of the recursion tree will always return 1. The value of Fib(n) is the sum of all values returned by the leaves in the recursion tree, which is equal to the count of leaves. Since each leaf takes O(1) to compute, T(n) is equal to Fib(n) x O(1). Consequently, the tight bound for this function is the Fibonacci sequence itself (~Θ(1.6^n)). You can find this tight bound by using generating functions, as mentioned above.
Just ask yourself how many statements need to execute for F(n) to complete.
For F(1), the answer is 1 (the first part of the conditional).
For F(n), the answer is F(n-1) + F(n-2).
So what function satisfies these rules? Try a^n (a > 1):
a^n == a^(n-1) + a^(n-2)
Divide through by a^(n-2):
a^2 == a + 1
Solve for a and you get (1+sqrt(5))/2 = 1.6180339887, otherwise known as the golden ratio.
So it takes exponential time.
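As a quick sanity check, a small Python sketch (assuming the statement count satisfies S(1) = S(2) = 1) shows that the positive root of a^2 == a + 1 is the golden ratio and that consecutive counts grow by exactly that factor:

import math

a = (1 + math.sqrt(5)) / 2      # positive root of a^2 == a + 1
print(a)                        # 1.618033988749895

s1, s2 = 1, 1                   # statement counts S(1), S(2)
for n in range(3, 40):
    s1, s2 = s2, s1 + s2        # S(n) = S(n-1) + S(n-2)
print(s2 / s1)                  # the ratio converges to the golden ratio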
I agree with pgaur and rickerbh, recursive-fibonacci's complexity is O(2^n).
I came to the same conclusion by a rather simplistic but I believe still valid reasoning.
First, it's all about figuring out how many times recursive fibonacci function ( F() from now on ) gets called when calculating the Nth fibonacci number. If it gets called once per number in the sequence 0 to n, then we have O(n), if it gets called n times for each number, then we get O(n*n), or O(n^2), and so on.
So, when F() is called for a number n, the number of times F() is called for a given number between 0 and n-1 grows as we approach 0.
As a first impression, it seems to me that if we put it in a visual way, drawing a unit per time F() is called for a given number, we get a sort of pyramid shape (that is, if we center units horizontally). Something like this:
n *
n-1 **
n-2 ****
...
2 ***********
1 ******************
0 ***************************
Now, the question is, how fast is the base of this pyramid enlarging as n grows?
Let's take a real case, for instance F(6)
F(6) * <-- only once
F(5) * <-- only once too
F(4) **
F(3) ****
F(2) ********
F(1) **************** <-- 16
F(0) ******************************** <-- 32
We see F(0) gets called 32 times, which is 2^5, which for this sample case is 2^(n-1).
Now, we want to know how many times F(x) gets called at all, and we can see the number of times F(0) is called is only a part of that.
If we mentally move all the *'s from F(6) to F(2) lines into F(1) line, we see that F(1) and F(0) lines are now equal in length. Which means, total times F() gets called when n=6 is 2x32=64=2^6.
Now, in terms of complexity:
O( F(6) ) = O(2^6)
O( F(n) ) = O(2^n)
There's a very nice discussion of this specific problem over at MIT. On page 5, they make the point that, if you assume that an addition takes one computational unit, the time required to compute Fib(N) is very closely related to the result of Fib(N).
As a result, you can skip directly to the very close approximation of the Fibonacci series:
Fib(N) = (1/sqrt(5)) * 1.618^(N+1) (approximately)
and say, therefore, that the worst case performance of the naive algorithm is
O((1/sqrt(5)) * 1.618^(N+1)) = O(1.618^(N+1))
PS: There is a discussion of the closed form expression of the Nth Fibonacci number over at Wikipedia if you'd like more information.
You can expand it and have a visualization:
T(n) = T(n-1) + T(n-2)
     < T(n-1) + T(n-1)
     = 2*T(n-1)
     < 2*2*T(n-2)
     < 2*2*2*T(n-3)
     ...
     < 2^i * T(n-i)
     ...
     ==> O(2^n)
A recursive algorithm's time complexity can be better estimated by drawing the recursion tree. In this case the recurrence relation for drawing the recursion tree is T(n) = T(n-1) + T(n-2) + O(1).
Note that each step takes O(1), meaning constant time, since it does only one comparison to check the value of n in the if block. The recursion tree would look like:
n
(n-1)  (n-2)
(n-2) (n-3)  (n-3) (n-4)   ...and so on
Here let's say each level of the above tree is denoted by i; hence:
i = 0:  n
i = 1:  (n-1) (n-2)
i = 2:  (n-2) (n-3) (n-3) (n-4)
i = 3:  (n-3) (n-4) (n-4) (n-5) (n-4) (n-5) (n-5) (n-6)
Let's say the tree ends at a particular value of i; that case is when n - i = 1, hence i = n - 1, meaning that the height of the tree is n - 1.
Now let's see how much work is done at each of the n levels in the tree. Note that each step takes O(1) time, as stated in the recurrence relation.
i = 0:  2^0 = 1 node:   n
i = 1:  2^1 = 2 nodes:  (n-1) (n-2)
i = 2:  2^2 = 4 nodes:  (n-2) (n-3) (n-3) (n-4)
i = 3:  2^3 = 8 nodes:  (n-3) (n-4) (n-4) (n-5) (n-4) (n-5) (n-5) (n-6)  ...and so on
2^i nodes for the ith level
Since i = n - 1 is the height of the tree, the work done at each level is:
i      work
1      2^1
2      2^2
3      2^3  ...and so on
Hence the total work done is the sum of the work done at each level: 2^0 + 2^1 + 2^2 + 2^3 + ... + 2^(n-1), since i = n - 1.
This geometric series sums to 2^n - 1, hence the total time complexity here is O(2^n).
The proof answers are good, but I always have to do a few iterations by hand to really convince myself. So I drew out a small calling tree on my whiteboard, and started counting the nodes. I split my counts out into total nodes, leaf nodes, and interior nodes. Here's what I got:
IN | OUT | TOT | LEAF | INT
1 | 1 | 1 | 1 | 0
2 | 1 | 1 | 1 | 0
3 | 2 | 3 | 2 | 1
4 | 3 | 5 | 3 | 2
5 | 5 | 9 | 5 | 4
6 | 8 | 15 | 8 | 7
7 | 13 | 25 | 13 | 12
8 | 21 | 41 | 21 | 20
9 | 34 | 67 | 34 | 33
10 | 55 | 109 | 55 | 54
What immediately leaps out is that the number of leaf nodes is fib(n). What took a few more iterations to notice is that the number of interior nodes is fib(n) - 1. Therefore the total number of nodes is 2 * fib(n) - 1.
Since you drop the coefficients when classifying computational complexity, the final answer is Θ(fib(n)).
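If you'd rather skip the whiteboard, here is a minimal Python sketch (assuming base cases at n = 1 and n = 2, as in the question) that reproduces the TOT/LEAF/INT columns:

def count(n):
    if n <= 2:
        return (1, 1, 0)              # (total, leaves, interior): a lone leaf
    t1, l1, i1 = count(n - 1)
    t2, l2, i2 = count(n - 2)
    return (t1 + t2 + 1, l1 + l2, i1 + i2 + 1)   # +1 for this interior node

for n in range(1, 11):
    print(n, count(n))                # matches the table above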
It is bounded on the lower end by 2^(n/2) and on the upper end by 2^n (as noted in other comments). And an interesting fact of that recursive implementation is that it has a tight asymptotic bound of Fib(n) itself. These facts can be summarized:
T(n) = Ω(2^(n/2)) (lower bound)
T(n) = O(2^n) (upper bound)
T(n) = Θ(Fib(n)) (tight bound)
The tight bound can be reduced further using its closed form if you like.
It is simple to calculate by diagramming function calls. Simply add the function calls for each value of n and look at how the number grows.
The Big O is O(Z^n) where Z is the golden ratio or about 1.62.
Both the Leonardo numbers and the Fibonacci numbers approach this ratio as we increase n.
Unlike other Big O questions there is no variability in the input and both the algorithm and implementation of the algorithm are clearly defined.
There is no need for a bunch of complex math. Simply diagram out the function calls below and fit a function to the numbers.
Or if you are familiar with the golden ratio you will recognize it as such.
This answer is more correct than the accepted answer which claims that it will approach f(n) = 2^n. It never will. It will approach f(n) = golden_ratio^n.
2 (2 -> 1, 0)
4 (3 -> 2, 1) (2 -> 1, 0)
8 (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
14 (5 -> 4, 3) (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
(3 -> 2, 1) (2 -> 1, 0)
22 (6 -> 5, 4)
(5 -> 4, 3) (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
(3 -> 2, 1) (2 -> 1, 0)
(4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
The naive recursion version of Fibonacci is exponential by design due to repetition in the computation:
At the root you are computing:
F(n) depends on F(n-1) and F(n-2)
F(n-1) depends on F(n-2) again and F(n-3)
F(n-2) depends on F(n-3) again and F(n-4)
then at each level you have 2 recursive calls duplicating a lot of work, and the time function will look like this:
T(n) = T(n-1) + T(n-2) + C, with C constant
T(n-1) = T(n-2) + T(n-3) > T(n-2), then
T(n) > 2*T(n-2)
...
T(n) > 2^(n/2) * T(1), so T(n) = Ω(2^(n/2))
This is just a lower bound, which for the purposes of your analysis should be enough, but the real time function is a constant factor times the same Fibonacci formula, whose closed form is known to be exponential in the golden ratio.
In addition, you can find optimized versions of Fibonacci using dynamic programming like this:
static int fib(int n)
{
    /* memory: n+2 entries so f[1] exists even when n == 0 */
    int f[] = new int[n + 2];
    int i;
    /* Init */
    f[0] = 0;
    f[1] = 1;
    /* Fill */
    for (i = 2; i <= n; i++)
    {
        f[i] = f[i-1] + f[i-2];
    }
    return f[n];
}
That version is optimized and does only n steps, but it is also exponential, in the following sense.
Cost functions are defined from input size to the number of steps needed to solve the problem. When you see the dynamic version of Fibonacci (n steps to compute the table) or the easiest algorithm for deciding whether a number is prime (about sqrt(n) steps to check the valid divisors), you may think these algorithms are O(n) or O(sqrt(n)), but this is simply not true for the following reason:
The input to your algorithm is a number n; using binary notation, the input size for an integer n is log2(n). So, doing the variable change
m = log2(n) // your real input size
let's find out the number of steps as a function of the input size:
m = log2(n)
2^m = 2^log2(n) = n
then the cost of your algorithm as a function of the input size is:
T(m) = n steps = 2^m steps
and this is why the cost is an exponential.
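A small sketch of that variable change in Python (nothing more than arithmetic):

for n in [100, 10000, 1000000]:
    m = n.bit_length()     # input size in bits, roughly log2(n)
    print(n, m, 2 ** m)    # the n loop steps lie between 2^(m-1) and 2^m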
In my view it is O(2^n), since in this function only the recursion takes considerable time (divide and conquer). We see that the function above will continue in a tree until the leaves are reached, at the level F(n-(n-1)), i.e. F(1). So, jotting down the time complexity encountered at each depth of the tree, the summation series is:
1 + 2 + 4 + ... + 2^(n-1)
= 1 * ((2^n) - 1) / (2 - 1)
= 2^n - 1
which is of order 2^n [O(2^n)].
No answer emphasizes probably the fastest and most memory efficient way to calculate the sequence. There is a closed form exact expression for the Fibonacci sequence. It can be found by using generating functions or by using linear algebra as I will now do.
Let f_1, f_2, ... be the Fibonacci sequence with f_1 = f_2 = 1. Now consider the sequence of two-dimensional vectors
v_1 = (f_1, f_2),  v_2 = (f_2, f_3),  v_3 = (f_3, f_4),  ...
Observe that the next element v_{n+1} in the vector sequence is M.v_n, where M is the 2x2 matrix
M = [0 1]
    [1 1]
due to f_{n+1} = f_{n+1} and f_{n+2} = f_n + f_{n+1}.
M is diagonalizable over the complex numbers (in fact over the reals as well, though this is not always the case). There are two distinct eigenvectors of M, given by the column vectors
(1, x_1) and (1, x_2)
where x_1 = (1+sqrt(5))/2 and x_2 = (1-sqrt(5))/2 are the distinct solutions of the polynomial equation x*x - x - 1 = 0. The corresponding eigenvalues are x_1 and x_2. Think of M as a linear transformation and change your basis to see that it is equivalent to
D = [x_1  0 ]
    [ 0  x_2]
In order to find f_n, find v_n and look at the first coordinate. To find v_n, apply M to v_1 a total of n-1 times. But applying M n-1 times is easy: just think of it as D. Then, using linearity, one finds
f_n = 1/sqrt(5) * (x_1^n - x_2^n)
Since the norm of x_2 is smaller than 1, the corresponding term vanishes as n tends to infinity; therefore rounding (x_1^n)/sqrt(5) to the nearest integer is enough to find the answer exactly. By making use of the trick of repeated squaring, this can be done using only O(log2(n)) multiplication (and addition) operations.
Memory complexity is even more impressive, because the method can be implemented so that you always hold at most one number in memory, whose value is smaller than the answer. However, since this number is not a natural number, the memory complexity depends on whether you use fixed bits to represent each number (and hence do calculations with error), giving O(1) memory complexity, or use a better model like Turing machines, in which case some more analysis is needed.
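Here is a minimal integer-only Python sketch of the repeated-squaring idea; it applies M = [[0, 1], [1, 1]] in exact arithmetic (using the convention f_0 = 0, f_1 = 1), avoiding the floating-point issues of evaluating x_1^n directly:

def mat_mul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def fib(n):
    result = [[1, 0], [0, 1]]         # 2x2 identity
    M = [[0, 1], [1, 1]]
    while n > 0:                      # repeated squaring: O(log2(n)) multiplications
        if n & 1:
            result = mat_mul(result, M)
        M = mat_mul(M, M)
        n >>= 1
    return result[0][1]               # M^n = [[f_{n-1}, f_n], [f_n, f_{n+1}]]

print([fib(n) for n in range(10)])    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]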

Calculating Big O complexity of Recursive Algorithms

Somehow, I find that it is much harder to derive Big O complexities for recursive algorithms compared to iterative algorithms. Do provide some insight about how I should go about solving these 2 questions.
*assume that submethod has linear complexity
def myMethod(n)
    if (n > 0)
        submethod(n)
        myMethod(n/2)
    end
end
def myMethod(k, n)
    if (n > 0)
        submethod(k)
        myMethod(k, n/2)
    end
end
For your first problem, the recurrence will be:
T(n) = n + T(n/2)
T(n/2) = n/2 + T(n/4)
...
...
...
T(2) = 2 + T(1)
T(1) = 1 + T(0) // assuming 1/2 equals 0(integer division)
adding up we get:
T(n) = n + n/2 + n/4 + n/8 + ..... 1 + T(0)
= n(1 + 1/2 + 1/4 + 1/8 .....) + k // assuming k = T(0)
= n*1/(1 - 1/2) ( sum of geometric series a/(1-r) when n tends to infinity)
= 2n + k
Therefore, T(n) = O(n). Remember, I have assumed that n tends to infinity, because this is what we do in asymptotic analysis.
For your second problem, it's easy to see that we perform k primitive operations every time until n becomes 0. This happens log(n) times. Therefore, T(n) = O(k*log(n)).
All you need to do is count how many times a basic operation is executed. This is true for analysing any kind of algorithm. In your case, we will count the number of times submethod is called.
You could break down the running time of a call to myMethod(n) as 1 + myMethod(n / 2), which you can further break down to 1 + (1 + myMethod(n / 4)). At some point you will reach the base case, after about log(n) steps. That gives you an algorithm of log(n).
The second one is no different, since k is constant all the time, it will again take log(n) time, assuming submethod takes constant time regardless of its input.
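To see the geometric series from the first answer concretely, here is a small Python port of the first method that tallies submethod's cost, assuming submethod(n) costs exactly n units as the question states:

work = 0

def my_method(n):
    global work
    if n > 0:
        work += n            # submethod(n) has linear cost
        my_method(n // 2)

for n in [16, 1024, 2 ** 20]:
    work = 0
    my_method(n)
    print(n, work, work / n)    # the total stays below 2n, hence O(n)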

Why is the complexity of computing the Fibonacci series 2^n and not n^2?

I am trying to find the complexity of the Fibonacci series using a recursion tree. I concluded that the height of the tree is O(n) in the worst case and that the cost of each level is cn, hence the complexity is n*n = n^2.
How come it is O(2^n)?
The complexity of a naive recursive fibonacci is indeed 2ⁿ.
T(n) = T(n-1) + T(n-2) = T(n-2) + T(n-3) + T(n-3) + T(n-4) =
= T(n-3) + T(n-4) + T(n-4) + T(n-5) + T(n-4) + T(n-5) + T(n-5) + T(n-6) = ...
In each step you call T twice, which provides an eventual asymptotic upper bound of:
T(n) = 2⋅2⋅...⋅2 = 2ⁿ
Bonus: the best theoretical implementation of Fibonacci is actually a closed formula, using the golden ratio:
Fib(n) = (φⁿ – (–φ)⁻ⁿ)/sqrt(5) [where φ is the golden ratio]
(However, it suffers from precision errors in real life due to floating-point arithmetic, which is not exact.)
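To illustrate that caveat, a small Python sketch comparing Binet's formula in floating point against exact integer iteration (with IEEE doubles the two typically start to disagree around n ≈ 71):

import math

def fib_binet(n):
    phi = (1 + math.sqrt(5)) / 2
    return round((phi ** n - (-phi) ** (-n)) / math.sqrt(5))

def fib_exact(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in (10, 50, 80):
    print(n, fib_binet(n), fib_exact(n))   # equal, equal, then they diverge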
The recursion tree for fib(n) would be something like :
                 n
              /     \
           n-1       n-2        -------- maximum 2^1 additions
          /   \     /   \
       n-2   n-3  n-3   n-4     -------- maximum 2^2 additions
       / \
     n-3  n-4                   -------- maximum 2^3 additions
     ........
                                -------- maximum 2^(n-1) additions
Using n-1 in 2^(n-1) since for fib(5) we will eventually go down to fib(1)
Number of internal nodes = Number of leaves - 1 = 2^(n-1) - 1
Number of additions = Number of internal nodes + Number of leaves = (2^1 + 2^2 + 2^3 + ...) + 2^(n-1)
We can replace the number of internal nodes with 2^(n-1) - 1, because it will always be less than this value:
= 2^(n-1) - 1 + 2^(n-1)
~ 2^n
Look at it like this. Assume the complexity of calculating F(k), the kth Fibonacci number, by recursion is at most 2^k for k <= n. This is our induction hypothesis. Then the complexity of calculating F(n + 1) by recursion is
F(n + 1) = F(n) + F(n - 1)
which has complexity 2^n + 2^(n - 1). Note that
2^n + 2^(n - 1) = 2 * 2^n / 2 + 2^n / 2 = 3 * 2^n / 2 <= 2 * 2^n = 2^(n + 1).
We have shown by induction that the claim that calculating F(k) by recursion is at most 2^k is correct.
You are correct that the depth of the tree is O(n), but you are not doing O(n) work at each level. At each level, you do O(1) work per recursive call, but each recursive call then contributes two new recursive calls, one at the level below it and one at the level two below it. This means that as you get further and further down the recursion tree, the number of calls per level grows exponentially.
Interestingly, you can actually establish the exact number of calls necessary to compute F(n) as 2F(n + 1) - 1, where F(n) is the nth Fibonacci number. We can prove this inductively. As a base case, to compute F(0) or F(1), we need to make exactly one call to the function, which terminates without making any new calls. Let's say that L(n) is the number of calls necessary to compute F(n). Then we have that
L(0) = 1 = 2*1 - 1 = 2F(1) - 1 = 2F(0 + 1) - 1
L(1) = 1 = 2*1 - 1 = 2F(2) - 1 = 2F(1 + 1) - 1
Now, for the inductive step, assume that for all n' < n, with n ≥ 2, we have L(n') = 2F(n' + 1) - 1. Then to compute F(n), we need to make 1 call to the initial function that computes F(n), which in turn fires off calls to F(n-2) and F(n-1). By the inductive hypothesis we know that F(n-1) and F(n-2) can be computed in L(n-1) and L(n-2) calls. Thus the total runtime is
1 + L(n - 1) + L(n - 2)
= 1 + 2F((n - 1) + 1) - 1 + 2F((n - 2) + 1) - 1
= 2F(n) + 2F(n - 1) - 1
= 2(F(n) + F(n - 1)) - 1
= 2(F(n + 1)) - 1
= 2F(n + 1) - 1
Which completes the induction.
At this point, you can use Binet's formula to show that
L(n) = 2(1/√5)(((1 + √5) / 2)^n - ((1 - √5) / 2)^n) - 1
And thus L(n) = O(((1 + √5) / 2)^n). If we use the convention that
φ = (1 + √5) / 2 ≈ 1.6
We have that
L(n) = Θ(φ^n)
And since φ < 2, this is o(2^n) (using little-o notation).
Interestingly, I've chosen the name L(n) for this series because this series is called the Leonardo numbers. In addition to its use here, it arises in the analysis of the smoothsort algorithm.
Hope this helps!
T(n) = T(n-1) + T(n-2)
which can be solved through the tree method:
level 1:            T(n-1) + T(n-2)                        2^1 = 2
level 2:     T(n-2) + T(n-3)    T(n-3) + T(n-4)            2^2 = 4
level 3:          ...                 ...                  2^3 = 8
  ...
and similarly for the last level                           2^n
This makes the total time complexity 2 + 4 + 8 + ... + 2^n; after summing this geometric progression we get a time complexity of O(2^n).
The complexity of the Fibonacci series is O(F(k)), where F(k) is the kth Fibonacci number. This can be proved by induction. It is trivial for the base case. Assume that for all k <= n the complexity of computing F(k) is c*F(k) + o(F(k)). Then for k = n + 1, the complexity of computing F(n+1) is
c*F(n) + o(F(n)) + c*F(n-1) + o(F(n-1)) = c*(F(n) + F(n-1)) + o(F(n)) + o(F(n-1)) = O(F(n+1)).
The complexity of the recursive Fibonacci series is 2^n:
This is the recurrence relation for recursive Fibonacci:
T(n) = T(n-1) + T(n-2)                                number of terms: 2
Now solving this relation using the substitution method (substituting the values of T(n-1) and T(n-2)):
T(n) = T(n-2) + 2*T(n-3) + T(n-4)                     number of terms: 4 = 2^2
Substituting the terms above again, we get
T(n) = T(n-3) + 3*T(n-4) + 3*T(n-5) + T(n-6)          number of terms: 8 = 2^3
Continuing the expansion, after k substitutions we get
T(n) = T(n-k) + ... + T(n-2k)                         number of terms: 2^k    eq(3)
This implies that the number of recursive calls at any level is at most 2^n (the expansion stops after at most n levels). Each call in equation 3 does Θ(1) work, so the time complexity is 2^n * Θ(1) = O(2^n).
The O(2^n) complexity of Fibonacci number calculation only applies to the recursive approach. With a little extra space, you can achieve much better performance with O(n).
public static int fibonacci(int n) throws Exception {
    if (n < 0)
        throw new Exception("Can't be a negative integer");
    if (n <= 1)
        return n;
    int s = 0, s1 = 0, s2 = 1;
    for (int i = 2; i <= n; i++) {
        s = s1 + s2;
        s1 = s2;
        s2 = s;
    }
    return s;
}
I cannot resist the temptation of connecting a linear-time iterative algorithm for Fib to the exponential-time recursive one: if one reads Jon Bentley's wonderful little book "Writing Efficient Programs", I believe it is a simple case of "caching": whenever Fib(k) is calculated, store it in array FibCached[k]. Whenever Fib(j) is called, first check whether it is cached in FibCached[j]; if yes, return the value; if not, use recursion. (Look at the tree of calls now...)
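A minimal Python sketch of that caching idea (using a dict rather than an array, and the question's base cases fib(1) = fib(2) = 1):

fib_cached = {1: 1, 2: 1}

def fib(n):
    if n not in fib_cached:                      # compute each value once
        fib_cached[n] = fib(n - 1) + fib(n - 2)
    return fib_cached[n]

print(fib(90))    # instant: O(n) calls instead of exponentially many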

Running Time of Divide and Conquer Fibonacci Program

count = 0
def fibonacci(n):
    global count
    count = count + 1
    if not isinstance(n, int):
        print('Invalid Input')
        return None
    if n < 0:
        print('Invalid Input')
        return None
    if n == 0:
        return 0
    if n == 1:
        return 1
    fib = fibonacci(n-1) + fibonacci(n-2)
    return fib
fibonacci(8)
print(count)
I was trying to find the running time of this Fibonacci program. Can anyone help me solve the recurrence relation for it?
T(n) = T(n-1) + T(n-2)... What would the running time calculation be from here?
Thanks... :)
I am assuming you meant 'fibonacci' where you said 'factorial'.
At each level, you have two calls to fibonacci(). This means your running time will be O(2^n). You can see this by drawing the recursion tree.
For a much better and more detailed explanation, please see Computational complexity of Fibonacci Sequence.
You can look at the Wikipedia article, but a simple observation, as you wrote:
T(n) < 2T(n-1) = 2 * 2 T(n-2) = ... = 2^(n-1) T(1) = 2^(n-1). So T(n) is in O(2^n).
In fact you should solve x^2 = x + 1, so x will be phi1 = (1+sqrt(5))/2 or phi2 = (1-sqrt(5))/2, and the result is a linear combination of phi1^n and phi2^n; but because |phi2| is smaller than 1, for big n we can say T(n) ≈ phi1^n.
Edit: But you can change your current solution to take O(n) running time (with a loop starting from the first elements).
Take a look at this especially time.clock(). Call clock before your function call and after, calculate the difference and you got the elapsed time.
Btw: Why so much code for fibonacci?
def fib(n): return fib(n - 1) + fib(n - 2) if n > 1 else n
The runtime is 2F(n+1) - 1 calls, where F(n) is the nth Fibonacci number.
Here's a quick inductive proof:
As a base case, if n = 0 or n = 1, then we make exactly one call, and F(1) = F(2) = 1, and we have that 2F(n+1) - 1 = 1.
For the inductive step, if n > 1, then we make as many calls as are necessary to evaluate the function on n-1 and n-2. By the inductive hypothesis, this takes 2F(n) - 1 + 2F(n-1) - 1 = 2F(n+1) - 2 recursive calls to complete. However, because we count the current function call as well, we add one to this to get 2F(n+1) - 1 as required.
Note that 2F(n+1) - 1 is an expression for the nth Leonardo number, where
L(0) = L(1) = 1
L(n+2) = L(n) + L(n+1) + 1
This grows at Θ(φ^n), as Saeed points out. However, this answer is mathematically exact.
This is more accurately the runtime you're interested in, since you need to account for the work being done in each recursive call itself. If you leave off the +1 term, you just get back the Fibonacci series!
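A quick Python sketch checking this identity: the call count of the naive recursion (base cases n = 0 and n = 1) matches the Leonardo recurrence above:

def calls(n):
    if n <= 1:
        return 1                              # one call, no recursion
    return calls(n - 1) + calls(n - 2) + 1    # children plus this call

leo = [1, 1]                                  # L(0) = L(1) = 1
for n in range(2, 15):
    leo.append(leo[n - 2] + leo[n - 1] + 1)   # L(n+2) = L(n) + L(n+1) + 1

print(all(calls(n) == leo[n] for n in range(15)))   # True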

How to solve: T(n) = T(n/2) + T(n/4) + T(n/8) + (n)

I know how to do recurrence relations for algorithms that only call itself once, but I'm not sure how to do something that calls itself multiple times in one occurrence.
For example:
T(n) = T(n/2) + T(n/4) + T(n/8) + (n)
Use a recursion tree. See the last recursion-tree example in CLRS, "Introduction to Algorithms".
T(n) = T(n/2) + T(n/4) + T(n/8) + n. The root has cost n and splits into 3 recursive calls, so the recursion tree looks as follows:
level 0:  n                                                          cost n
level 1:  (n/2) (n/4) (n/8)                                          cost (7/8)n
level 2:  (n/4)(n/8)(n/16)  (n/8)(n/16)(n/32)  (n/16)(n/32)(n/64)    cost (49/64)n
...
Longest path: the leftmost left branch = n -> n/2 -> n/4 -> ... -> 1
Shortest branch: the rightmost right branch = n -> n/8 -> n/64 -> ... -> 1
The number of leaves (l): 3^log_8(n) < l < 3^log_2(n) => n^0.5 < l < n^1.585
Look at the tree: up to log_8(n) levels the tree is full, and then, as we go down, more and more internal nodes are absent. Using this we can give the bounds:
T(n) = Big-Oh (Summation j=0 to log_2(n)-1 (7/8)^j n) = ... => T(n) = O(n).
T(n) = Big-Omega (Summation j=0 to log_8(n)-1 (7/8)^j n) = ... => T(n) = Big-Omega(n).
Therefore, T(n) = Theta(n).
Here the key points are:
The T(n/2) path has the greatest length.
This is not a complete ternary tree: its height is log base 2 of n, and the number of leaves must be less than n.
Hopefully this is a way you can solve the problem.
[Image omitted.] Height of the tree: we take log(n) (base 2) because the n/2 branch makes the tree deeper than the n/4 and n/8 branches do, so our GP series runs until k = log2(n).
There are two ways of solving this. One is unrolling the recursion and finding similarities, which can require inventiveness and can be really hard. The other way is to use the Akra-Bazzi method.
In this case g(x) = x, a1 = a2 = a3 = 1, and b1 = 1/2, b2 = 1/4, b3 = 1/8. Solving the equation
1/2^p + 1/4^p + 1/8^p = 1, you get p ≈ 0.87915. Evaluating the Akra-Bazzi integral then shows that the complexity is O(n).
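If you want to check that exponent numerically, here is a small Python sketch using bisection (f is strictly decreasing in p, so the root is unique):

def f(p):
    return 0.5 ** p + 0.25 ** p + 0.125 ** p - 1

lo, hi = 0.0, 1.0            # f(0) = 2 > 0 and f(1) = -0.125 < 0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid

print((lo + hi) / 2)         # ~0.87915; since p < 1, T(n) = Theta(n)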
Just like coding the Fibonacci Sequence (the hard way) as an example, you would simply type something along the lines of:
long fib(long n){
    if(n <= 1) return n;
    else return fib(n-1) + fib(n-2);
}
Or, better yet, memoize it using a global variable to make it much quicker. Once again, with the Fibonacci Sequence:
static ArrayList<Long> fib_global = new ArrayList<>(List.of(0L, 1L));
// declare a global cache, seeded with fib(0) and fib(1), that can be appended to
long fib(int n){
    if(n >= fib_global.size()) fib_global.add(fib(n-1) + fib(n-2));
    return fib_global.get(n);
}
The code will only execute one of these calls at a time, most likely in the left-to-right order you typed them in, so you don't really need to worry about the number of times each value would otherwise be recomputed.
As CLRS has said, T(n) can be replaced by cn by mathematical induction. The inductive assumption is that the bound holds for all values below n; what we then need to prove is that it holds for the parameter value n itself. Therefore, as follows:
assume: T(m) <= c*m for all m below n;
conclude:
T(n) = T(n/2) + T(n/4) + T(n/8) + n
<= c/2*n + c/4*n + c/8*n + n
= (7/8*c + 1) * n
<= cn (when c >= 8)
that's all.
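A quick numerical check of that bound in Python, assuming integer division and T(0) = 0 (the recurrence itself does not pin down the base case):

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n == 0:
        return 0
    return T(n // 2) + T(n // 4) + T(n // 8) + n

print(all(T(n) <= 8 * n for n in range(1, 10000)))   # True: T(n) <= cn with c = 8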
