Big O Notation - Fibonacci - Cracking the Coding Interview example 14, page 53 - big-o

I'm reading the "Cracking the Coding Interview" book, chapter VI, Big O. On page 53, the author shows this code and asks: what is its time complexity?
void allFib(int n) {
    for (int i = 0; i < n; i++) {
        System.out.println(i + ": " + fib(i));
    }
}

int fib(int n) {
    if (n <= 0)
        return 0;
    else if (n == 1)
        return 1;
    return fib(n - 1) + fib(n - 2);
}
He says it's not O(n * 2^n) but O(2^n) because:
fib(1) --> 2^1 steps
fib(2) --> 2^2 steps
fib(3) --> 2^3 steps
...
fib(n) --> 2^n steps
and so, the total amount of work is 2^1 + 2^2 + 2^3 + ... + 2^n = 2^(n+1) - 2 = O(2^n).
My question is: why is fib(2) 4 steps? The way I see it, it's 5: fib(2) = 2 steps (the first two if statements) + fib(1) + fib(0) = 2 + 2 + 1 = 5.

Since n = 2, the following line of code will be executed:
return fib(n - 1) + fib(n - 2);
Executing that line takes 2 steps: one step for fib(2 - 1) and one step for fib(2 - 2), so the running total is 2 steps.
There are no more recursive calls left to make once both terms on that line have been resolved to actual values.
The values are now 1 and 0. For the value 1, the following line is executed:
else if (n == 1)
That adds one step, bringing the total to 3.
Now only the value 0 remains, and for it the following line is executed:
if (n <= 0)
That line takes one more step to execute, so the total is 3 + 1 = 4 steps.
So, to execute fib(2), the step count is O(2^2) = 4 steps.
I hope this helps you understand the execution of the steps.
Refer to the Recursive Fibonacci Method Explained for more details.

You will get lost if you focus too much on the details. You already know that fib(n) takes O(2^n) time. If we call allFib(5), for example, the for loop calls fib() with values 0 through 4, so to get the total runtime we have to add the runtimes of fib(0) + fib(1) + ... + fib(4).
Knowing that fib(n) takes O(2^n) time, replace n with the current for-loop index to get each runtime:
fib(0) - 2^0
fib(1) - 2^1
fib(2) - 2^2
fib(3) - 2^3
fib(4) - 2^4
Then add: (2^0) + (2^1) + (2^2) + (2^3) + (2^4) + ... + (2^n) = 2^(n+1) - 1 = O(2^n)
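If you want to verify the growth empirically, here is a minimal Java sketch (my own illustration, not from the book) that instruments fib with a call counter and sums the counts the way allFib's loop does; the per-call counts roughly double with each n, and the total stays O(2^n):

public class FibCallCounter {
    static long calls = 0; // incremented on every invocation of fib

    static int fib(int n) {
        calls++;
        if (n <= 0) return 0;
        else if (n == 1) return 1;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 10; i++) { // same loop structure as allFib(10)
            calls = 0;
            fib(i);
            total += calls;
            System.out.println("fib(" + i + ") made " + calls + " calls");
        }
        System.out.println("total calls across the loop: " + total);
    }
}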

Related

Recursive Find Big O time complexity

I am learning about time complexity and I am trying to figure out a relationship. My lecture notes describe a recursive find function as:
find(array A, item I)
    if (arrayEmpty(A)) return BAD;
    if (I == A[0]) return GOOD;
    return find(allButFirst(A), I);
Useful link: https://www.youtube.com/watch?v=_cG5KZSn1LE
My notes say that the starting recurrence for the time complexity is as follows:
T(n) = 1 + T(n-1) // I understand this
T(1) = 1 // only one computation, understandable
then we unroll T(n):
T(n) = 1 + 1 + T(n-2) // every recursive step adds 1 comparison plus a recursive call
T(n) = 1 + 1 + 1 + T(n-3)
T(n) = 1 + 1 + 1 + 1 + T(n-4)
...
T(n) = (n - 1) + T(n-(n-1)) // at this point I am lost as to how they got this generalisation
Could someone explain how the above relation was generalised to T(n) = (n - 1) + T(n-(n-1))? An example would be even better, for clarity.
For example, I want to try the above relationship with some values, so let's say A = {1, 2, 3} and I = 3.
Then here are the computations:
1 + T(n-1) // {2,3}
1 + 1 + T(n-2) // {3}
1 // {3} found
So for the above we had 3 comparisons in total and 2 recursive calls. So I would say the relationship is T(n) = n + T(n-(n-1)) = n + T(1) = n.
In each step of the unrolled pattern, if we are k steps into the recursion (k=1 for the first step, k=n for the last), there are k 1's.
In the expansion you posted, the last line, T(N) = (n - 1) + T(n-(n-1)) is the second-to-last step, since T(n-(n-1)) expands to T(1), so for that line, k=n-1.
So there are n-1 1's in that line, hence the (n-1) term.
Likewise, the parameter passed to T at the kth step is n minus the current step, since it's the number of steps remaining. For the first row, that's n-k = n-1, hence T(n-1). For the second-to-last step, it's n-k = n-(n-1), hence T(n-(n-1)) = T(1).
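For concreteness, here is a minimal Java sketch of the lecture's pseudocode (my own translation; arrayEmpty and allButFirst are simulated with a start index, and GOOD/BAD become booleans). Each call does O(1) work and recurses on a list one element shorter, which is exactly the T(n) = 1 + T(n-1) recurrence:

public class RecursiveFind {
    // returns true if item occurs in a[from..], mirroring find(allButFirst(A), I)
    static boolean find(int[] a, int from, int item) {
        if (from >= a.length) return false; // arrayEmpty(A) -> BAD
        if (a[from] == item) return true;   // I == A[0] -> GOOD
        return find(a, from + 1, item);     // recurse on the tail: the T(n-1) term
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        System.out.println(find(a, 0, 3)); // true, found after 3 comparisons as in the example
    }
}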

About the time complexity algorithm and asymptotic growth

I've got a question about the time complexity of an algorithm and its asymptotic growth.
The pseudocode in question is:
1: function NAIVE(x, A)
2:     answer = 0
3:     n = length of A
4:     for I from - to n do
5:         aux = 1
6:         for j from 1 to I do
7:             aux = aux * x
8:         answer = answer + aux * A[I]
9:     return answer
I have to find an upper bound with O-notation and a lower bound with Ω-notation.
I got the time complexity f(n) = 5n^2 + 4n + 8 and g(n) = n^2.
My question is that I'm not too sure about the running time of lines 6 to 8.
I got constant 2 and time n+1 for line 4, and constant 1 and time 1 for line 5.
I'm stuck after that. I tried it and got constant 2 and time n^2 + 1 for line 6, because it runs inside the outer for loop (a for loop within a for loop), so I thought it's n^2 + 1. Is that correct?
And for line 8, it has 3 constants and a running time of n^2. Is this correct? I'm not too sure about line 8. This is how I got f(n) = 5n^2 + 4n + 8!
Please help me complete this question! I wonder whether my work is right or not.
Thank you
Let's go through this step by step.
The complexity of line 7 is T7 = 1.
The complexity of the surrounding for loop is T6(I) = I * T7 = I.
The complexity of line 5 is T5 = 1.
The complexity of line 8 is T8 = 1.
The complexity of the surrounding for loop (assuming that - stands for 0) is
T4(n) = Sum{I from 0 to n} (T5 + T6(I) + T8)
      = Sum{I from 0 to n} (2 + I)
      = Sum{I from 0 to n} (2) + Sum{I from 0 to n} (I)
      = (n + 1) * 2 + (n + 1)/2 * (0 + n)
      = 2n + 2 + n^2/2 + n/2
      = (1/2)n^2 + (5/2)n + 2
The complexity of the remaining lines is T2 = T3 = T9 = 1.
The complexity of the entire algorithm is
T(n) = T2 + T3 + T4(n) + T9
     = 1 + 1 + (1/2)n^2 + (5/2)n + 2 + 1
     = (1/2)n^2 + (5/2)n + 5
This runtime is in the complexity classes O(n^2) and Ω(n^2).
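As a sanity check, here is a hedged Java rendering of the pseudocode (my own translation, assuming the dash means the loop starts at 0, as above, and that the loop runs over the valid indices of A, which holds polynomial coefficients evaluated at x). The inner loop recomputes x^I from scratch on every outer pass, which is where the quadratic term comes from; Horner's rule would evaluate the same polynomial in O(n).

public class Naive {
    // naive polynomial evaluation: answer = A[0] + A[1]*x + ... + A[n-1]*x^(n-1)
    static double naive(double x, double[] A) {
        double answer = 0;                 // line 2
        int n = A.length;                  // line 3
        for (int i = 0; i < n; i++) {      // line 4
            double aux = 1;                // line 5
            for (int j = 1; j <= i; j++) { // line 6: recompute x^i from scratch
                aux = aux * x;             // line 7
            }
            answer = answer + aux * A[i];  // line 8
        }
        return answer;                     // line 9
    }

    public static void main(String[] args) {
        double[] A = {1, 2, 3};            // represents 1 + 2x + 3x^2
        System.out.println(naive(2.0, A)); // 1 + 4 + 12 = 17.0
    }
}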

Complexity theory - sorting algorithm

I'm taking a course in complexity theory, and it needs some mathematical background that I have a problem with.
While trying to do some practice exercises, I got stuck on the example below:
1) for (i = 1; i < n; i++) {
2)     SmallPos = i;
3)     Smallest = Array[SmallPos];
4)     for (j = i+1; j <= n; j++)
5)         if (Array[j] < Smallest) {
6)             SmallPos = j;
7)             Smallest = Array[SmallPos];
           }
8)     Array[SmallPos] = Array[i];
9)     Array[i] = Smallest;
   }
Thus, the total computing time is:
T(n) = (n) + 4(n-1) + [n(n+1)/2 - 1] + 3[n(n-1)/2]
     = n + 4n - 4 + (n^2 + n)/2 - 1 + (3n^2 - 3n)/2
     = 5n - 5 + (4n^2 - 2n)/2
     = 5n - 5 + 2n^2 - n
     = 2n^2 + 4n - 5
     = O(n^2)
What I don't understand, or am confused about, is line 4 being analysed as n(n+1)/2 - 1, and line 5 as 3[n(n-1)/2].
I know that the sum of an arithmetic series is n(first + last)/2, but when I try to calculate it as I understand it, I get a different result.
For line 4, according to n(first + last)/2 it should be n((n-1) + 2)/2, but here it's n(n+1)/2 - 1.
The same goes for 3[n(n-1)/2]; I don't understand that either.
Here's what is written in the analysis; it may help if anyone can explain it to me:
Statement 1 is executed n times (n - 1 + 1); statements 2, 3, 8, and 9 (each representing O(1) time) are executed n - 1 times each, once on each pass through the outer loop. On the first pass through this loop with i = 1, statement 4 is executed n times; statement 5 is executed n - 1 times, and assuming a worst case where the elements of the array are in descending order, statements 6 and 7 (each O(1) time) are executed n - 1 times.
On the second pass through the outer loop with i = 2, statement 4 is executed n - 1 times and statements 5, 6, and 7 are executed n - 2 times, etc. Thus, statement 4 is executed (n) + (n-1) +... + 2 times and statements 5, 6, and 7 are executed (n-1) + (n-2) + ... + 2 + 1 times. The first sum is equal to n(n+1)/2 - 1, and the second is equal to n(n-1)/2.
Here's the link to the file containing this example:
http://www.google.com.eg/url?sa=t&rct=j&q=Consider+the+sorting+algorithm+shown+below.++Find+the+number+of+instructions+executed+&source=web&cd=1&cad=rja&ved=0CB8QFjAA&url=http%3A%2F%2Fgcu.googlecode.com%2Ffiles%2FAnalysis%2520of%2520Algorithms%2520I.doc&ei=3H5wUNiOINDLswaO3ICYBQ&usg=AFQjCNEBqgrtQldfp6eqdfSY_EFKOe76yg
Line 4: as the analysis says, it is executed n + (n-1) + ... + 2 times. This is a sum of n-1 terms. In the formula you use, n(first+last)/2, n represents the number of terms. If you apply the formula to your sequence of n-1 terms, you get (n-1)((n) + (2))/2 = (n² + n - 2)/2 = n(n+1)/2 - 1.
Line 5: the same formula can be used. As the analysis says, you have to calculate (n-1) + ... + 1. This is a sum of n-1 terms, with the first and last being n-1 and 1. The sum is given by (n-1)(n-1+1)/2 = n(n-1)/2. The factor 3 comes from the 3 lines (5, 6, 7) that are each executed at most n(n-1)/2 times.
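To make those counts concrete, here is a small hedged Java instrumentation of the numbered statements (my own, not from the linked document). It counts statements 4 and 5 exactly; statements 6 and 7 run at most as often as statement 5, which is where the factor 3 comes from in the worst case:

public class SelectionSortCounts {
    public static void main(String[] args) {
        int n = 6;
        int[] Array = {0, 6, 5, 4, 3, 2, 1}; // index 0 unused, to match the 1-based pseudocode
        long count4 = 0, count5 = 0;
        for (int i = 1; i < n; i++) {
            int smallPos = i;
            int smallest = Array[smallPos];
            for (int j = i + 1; j <= n; j++) {
                count4++;                  // statement 4: each passing inner-loop test
                count5++;                  // statement 5: the comparison runs every inner iteration
                if (Array[j] < smallest) { // statements 6 and 7 run at most this often
                    smallPos = j;
                    smallest = Array[smallPos];
                }
            }
            count4++;                      // the final, failing loop test also counts
            Array[smallPos] = Array[i];
            Array[i] = smallest;
        }
        System.out.println(count4); // 20 = n(n+1)/2 - 1 for n = 6
        System.out.println(count5); // 15 = n(n-1)/2 for n = 6
    }
}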

Calculating T(n) Time Complexity of an Algorithm

I am looking for some clarification in working out the time efficiency of an algorithm, specifically T(n). The algorithm below is not as efficient as it could be, though I believe it's a good example to learn from. I would appreciate a line-by-line confirmation of the sum of operations in the code:
Pseudo-code
1.  Input: array X of size n
2.  Let A = an empty array of size n
3.  For i = 0 to n-1
4.      Let s = X[0]
5.      For j = 0 to i
6.          Let sum = sum + X[j]
7.      End For
8.      Let A[i] = sum / (i+1)
9.  End For
10. Output: array A
My attempt at calculating T(n)
1. 1
2. n
3. n
4. n(2)
5. n(n-1)
6. n(5n)
7. -
8. n(6)
9. -
10. 1
T(n) = 1 + n + n + 2n + n^2 - n + 5n^2 + 6n + 1
= 6n^2 + 9n + 2
So, T(n) = 6n^2 + 9n + 2 is what I arrive at, and from this I derive a Big-O of O(n^2).
What errors, if any, have I made in my calculation...
Edit: ...in counting the primitive operations to derive T(n)?
Your result O(n^2) is correct, and it is given by the two nested loops. I would prefer the derivation
0 + 1 + 2 + ... + (n-1) = (n-1)n/2 = O(n^2)
which follows from observing the nested loops.
I'm not really sure about your methodology, but O(n^2) does seem to be correct. At each iteration through the outer loop you do a sub-loop over the previous elements. Therefore you're looking at 1 element the first time, 2 the second, then 3, ..., then n the final time. This is equivalent to the sum from 1 to n, which gives you a complexity of n^2.
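For reference, here is a runnable Java sketch of the pseudocode (my own translation, with the sum reset per outer iteration so the averages come out right). The nested loops make it O(n^2); carrying a running sum across the outer loop brings it down to O(n), which is why the original is not as efficient as it could be:

public class PrefixAverages {
    // quadratic version, mirroring the pseudocode: A[i] = (X[0] + ... + X[i]) / (i + 1)
    static double[] naive(double[] X) {
        int n = X.length;
        double[] A = new double[n];
        for (int i = 0; i < n; i++) {
            double sum = 0;
            for (int j = 0; j <= i; j++) {
                sum += X[j];
            }
            A[i] = sum / (i + 1);
        }
        return A;
    }

    // linear version: keep the running sum instead of recomputing it
    static double[] linear(double[] X) {
        int n = X.length;
        double[] A = new double[n];
        double sum = 0;
        for (int i = 0; i < n; i++) {
            sum += X[i];
            A[i] = sum / (i + 1);
        }
        return A;
    }

    public static void main(String[] args) {
        double[] X = {2, 4, 6, 8};
        System.out.println(java.util.Arrays.toString(naive(X)));  // [2.0, 3.0, 4.0, 5.0]
        System.out.println(java.util.Arrays.toString(linear(X))); // same result in O(n)
    }
}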

Why is the complexity of computing the Fibonacci series 2^n and not n^2?

I am trying to find the complexity of the Fibonacci series using a recursion tree, and concluded that the height of the tree is O(n) in the worst case and the cost of each level is cn, hence the complexity is n * n = n^2.
How come it is O(2^n)?
The complexity of a naive recursive fibonacci is indeed 2ⁿ.
T(n) = T(n-1) + T(n-2) = T(n-2) + T(n-3) + T(n-3) + T(n-4) =
= T(n-3) + T(n-4) + T(n-4) + T(n-5) + T(n-4) + T(n-5) + T(n-5) + T(n-6) = ...
In each step you call T twice, which gives an eventual asymptotic barrier of:
T(n) = 2⋅2⋅...⋅2 = 2ⁿ
Bonus: the best theoretical implementation of Fibonacci is actually a closed-form formula, using the golden ratio:
Fib(n) = (φⁿ – (–φ)⁻ⁿ)/√5 [where φ is the golden ratio]
(However, it suffers from precision errors in real life due to floating-point arithmetic, which is not exact.)
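Here is a quick Java sketch of that closed form (my own illustration); as the caveat above warns, the double arithmetic drifts for larger n:

public class BinetFib {
    // Binet's formula: Fib(n) = (phi^n - (-phi)^(-n)) / sqrt(5)
    static long fib(int n) {
        double phi = (1 + Math.sqrt(5)) / 2; // golden ratio, about 1.618
        return Math.round((Math.pow(phi, n) - Math.pow(-phi, -n)) / Math.sqrt(5));
    }

    public static void main(String[] args) {
        for (int i = 0; i <= 10; i++) {
            System.out.print(fib(i) + " "); // prints 0 1 1 2 3 5 8 13 21 34 55
        }
    }
}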
The recursion tree for fib(n) would be something like :
                      n
                   /     \
                n-1       n-2        --------- maximum 2^1 additions
               /   \     /   \
             n-2   n-3 n-3   n-4     --------- maximum 2^2 additions
            /   \
          n-3   n-4                  --------- maximum 2^3 additions
          ........
                                     --------- maximum 2^(n-1) additions
Using n-1 in 2^(n-1) since for fib(5) we will eventually go down to fib(1)
Number of internal nodes = Number of leaves - 1 = 2^(n-1) - 1
Number of additions = Number of internal nodes + Number of leaves = (2^1 + 2^2 + 2^3 + ...) + 2^(n-1)
We can replace the sum for the number of internal nodes with 2^(n-1) - 1 because it will always be less than this value:
= 2^(n-1) - 1 + 2^(n-1)
~ 2^n
Look at it like this. Assume the complexity of calculating F(k), the kth Fibonacci number, by recursion is at most 2^k for k <= n. This is our induction hypothesis. Then the complexity of calculating F(n + 1) by recursion is
F(n + 1) = F(n) + F(n - 1)
which has complexity 2^n + 2^(n - 1). Note that
2^n + 2^(n - 1) = 2 * 2^n / 2 + 2^n / 2 = 3 * 2^n / 2 <= 2 * 2^n = 2^(n + 1).
We have shown by induction that the claim that calculating F(k) by recursion is at most 2^k is correct.
You are correct that the depth of the tree is O(n), but you are not doing O(n) work at each level. At each level, you do O(1) work per recursive call, but each recursive call then contributes two new recursive calls, one at the level below it and one at the level two below it. This means that as you get further and further down the recursion tree, the number of calls per level grows exponentially.
Interestingly, you can actually establish the exact number of calls necessary to compute F(n) as 2F(n + 1) - 1, where F(n) is the nth Fibonacci number. We can prove this inductively. As a base case, to compute F(0) or F(1), we need to make exactly one call to the function, which terminates without making any new calls. Let's say that L(n) is the number of calls necessary to compute F(n). Then we have that
L(0) = 1 = 2*1 - 1 = 2F(1) - 1 = 2F(0 + 1) - 1
L(1) = 1 = 2*1 - 1 = 2F(2) - 1 = 2F(1 + 1) - 1
Now, for the inductive step, assume that for all n' < n, with n ≥ 2, we have L(n') = 2F(n' + 1) - 1. Then to compute F(n), we need to make 1 call to the initial function that computes F(n), which in turn fires off calls to F(n-2) and F(n-1). By the inductive hypothesis we know that F(n-1) and F(n-2) can be computed in L(n-1) and L(n-2) calls. Thus the total runtime is
1 + L(n - 1) + L(n - 2)
= 1 + 2F((n - 1) + 1) - 1 + 2F((n - 2) + 1) - 1
= 2F(n) + 2F(n - 1) - 1
= 2(F(n) + F(n - 1)) - 1
= 2(F(n + 1)) - 1
= 2F(n + 1) - 1
Which completes the induction.
At this point, you can use Binet's formula to show that
L(n) = 2(1/√5)(((1 + √5)/2)^n - ((1 - √5)/2)^n) - 1
And thus L(n) = O(((1 + √5)/2)^n). If we use the convention that
φ = (1 + √5)/2 ≈ 1.6
We have that
L(n) = Θ(φ^n)
And since φ < 2, this is o(2^n) (using little-o notation).
Interestingly, I've chosen the name L(n) for this series because this series is called the Leonardo numbers. In addition to its use here, it arises in the analysis of the smoothsort algorithm.
Hope this helps!
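If you want to check the 2F(n + 1) - 1 identity numerically, here is a small hedged Java snippet (my own, not from the answer) that counts the calls and compares them with the predicted value:

public class LeonardoCheck {
    static long calls;

    static long fib(int n) {
        calls++;                      // count every invocation
        if (n <= 1) return n;         // F(0) and F(1) each take exactly one call
        return fib(n - 1) + fib(n - 2);
    }

    // iterative Fibonacci, used only to compute the predicted call count
    static long fibIter(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) { long t = a + b; a = b; b = t; }
        return a;
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 15; n++) {
            calls = 0;
            fib(n);
            System.out.println("n=" + n + " calls=" + calls
                    + " 2F(n+1)-1=" + (2 * fibIter(n + 1) - 1));
        }
    }
}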
T(n) = T(n-1) + T(n-2)
which can be solved with the tree method:

                     T(n)
                   /      \
             T(n-1)        T(n-2)              2^1 = 2
             /    \        /    \
        T(n-2)  T(n-3) T(n-3)  T(n-4)          2^2 = 4
          ...                                  2^3 = 8
                                                ...
        (similarly for the last level)         2^n

This gives a total time complexity of 2 + 4 + 8 + ... + 2^n, and after solving this geometric progression we get a time complexity of O(2^n).
The complexity of the Fibonacci series is O(F(k)), where F(k) is the kth Fibonacci number. This can be proved by induction. It is trivial for the base case. Assume that for all k <= n, the complexity of computing F(k) is c*F(k) + o(F(k)). Then for k = n+1, the complexity of computing F(n+1) is c*F(n) + o(F(n)) + c*F(n-1) + o(F(n-1)) = c*(F(n) + F(n-1)) + o(F(n)) + o(F(n-1)) = O(F(n+1)).
The complexity of the recursive Fibonacci series is 2^n.
This is the recurrence relation for recursive Fibonacci:
T(n) = T(n-1) + T(n-2)                         number of terms: 2
Now, solving this relation using the substitution method (substituting the values of T(n-1) and T(n-2)):
T(n) = T(n-2) + 2*T(n-3) + T(n-4)              number of terms: 4 = 2^2
Substituting the values of the above terms again, we get:
T(n) = T(n-3) + 3*T(n-4) + 3*T(n-5) + T(n-6)   number of terms: 8 = 2^3
After expanding it completely, we get:
T(n) = {T(n-k) + ... }                         number of terms: 2^k    eq. (3)
This implies that the maximum number of recursive calls at any level is at most 2^n.
And since each of the recursive calls in equation (3) is ϴ(1), the time complexity is 2^n * ϴ(1) = 2^n.
The O(2^n) complexity of Fibonacci number calculation only applies to the recursive approach. With a little extra space, you can achieve much better performance, O(n):
public static int fibonacci(int n) throws Exception {
    if (n < 0)
        throw new Exception("Can't be a negative integer");
    if (n <= 1)
        return n;
    int s = 0, s1 = 0, s2 = 1;
    for (int i = 2; i <= n; i++) {
        s = s1 + s2;
        s1 = s2;
        s2 = s;
    }
    return s;
}
I cannot resist the temptation of connecting the linear-time iterative algorithm for Fib to the exponential-time recursive one: if one reads Jon Bentley's wonderful little book "Writing Efficient Programs", I believe it is a simple case of "caching": whenever Fib(k) is calculated, store it in an array FibCached[k]. Whenever Fib(j) is called, first check whether it is cached in FibCached[j]; if yes, return the value; if not, use recursion. (Look at the tree of calls now ...)
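A minimal Java sketch of that caching idea (the answer gives no code, so the names here are my own), which collapses the O(2^n) call tree to O(n):

public class MemoFib {
    static long[] fibCached; // fibCached[k] holds Fib(k) once computed; 0 means not yet cached

    static long fib(int n) {
        if (n <= 1) return n;
        if (fibCached[n] != 0) return fibCached[n]; // cache hit: constant time
        fibCached[n] = fib(n - 1) + fib(n - 2);     // computed once, then stored
        return fibCached[n];
    }

    public static void main(String[] args) {
        int n = 50;
        fibCached = new long[n + 1];
        System.out.println(fib(n)); // 12586269025, computed with only O(n) distinct calls
    }
}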
