Algorithm Analysis

I have an algorithm with the following pseudocode:
R(n)
    if (n = 1)
        return 1
    else
        return R(n-1) + 2*n + 1
I need to set up a recurrence relation for the number of multiplications carried out by this algorithm and solve it.
Is the following right?
R(1) = 0
R(n) = R(n-1) + n^2

You are performing only one multiplication per step. Therefore, the relation will be:
R(n) = R(n-1) + 1

In the algorithm as shown, R(n) is calculated by adding R(n-1) to 2*n+1. If 2*n is calculated using a multiplication, there will be one multiplication per level of recursion, thus n-1 multiplications in the calculation of R(n).
To compute that via a recurrence, let M(n) be the number of multiplications used to compute R(n). The recurrence boundary condition is M(1) = 0 and the recurrence relation is M(i) = M(i-1) + 1 for i>1.
Errors in writing “R(1) = 0; R(n) = R(n-1) + n^2” as the recurrence for the number of multiplications include:
• R() is already in use as the function being computed, so reusing R() for the number of multiplications is incorrect
• Each level of recursion in the algorithm adds one multiplication, not n² multiplications.
Note, R(n) = 1 + 5 + 7 + ... + (2n+1) = (1 + 3 + 5 + 7 + ... + (2n+1)) - 3 = (n+1)^2 - 3 = n^2 + 2n - 2; that is, function R(n) returns the value n^2 + 2n - 2.
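To make both counts concrete, here is a small Python sketch (not part of the original question) that returns the value together with the multiplication count, assuming 2*n costs exactly one multiplication per level:

```python
def R(n):
    """Return (value, multiplication count) for the pseudocode R(n)."""
    if n == 1:
        return 1, 0
    value, mults = R(n - 1)
    # one multiplication per recursion level: the 2 * n
    return value + 2 * n + 1, mults + 1

# M(n) = n - 1 multiplications, and R(n) returns (n + 1)**2 - 3 = n**2 + 2*n - 2
```

Running it for a few values confirms the boundary condition M(1) = 0 and the closed form above.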

Related

analyze algorithm of finding maximum number in array with n number

def maximum(array):
    max = array[0]
    counter = 0
    for i in array:
        counter += 1
        if i > max:
            max = i
    return max
I need to analyze this algorithm, which finds the maximum number in an array of n numbers. The only thing I want to know is how to get the recursive and general formula for the average case of this algorithm.
Not sure what you mean by "recursive and general formula for the average case of this algorithm". Your algorithm is not recursive, so how can there be a recursive formula?
Recursive way to find maximum in an array:
def findMax(Array, n):
    if n == 1:
        return Array[0]
    return max(Array[n - 1], findMax(Array, n - 1))
I guess you want Recurrence relation.
Let T(n) be the time taken to find the maximum of n elements. So, for the code written above:
T(n) = T(n-1) + 1 .... Equation I
In case you are interested to solve the recurrence relation:
T(n-1) = T((n-1)-1) + 1 = T(n-2) + 1 .... Equation II
If you substitute value of T(n-1) from Equation II into Equation I, you get:
T(n) = (T(n-2) + 1) + 1 = T(n-2) + 2
Similarly,
T(n) = T(n-3) + 3
T(n) = T(n-4) + 4
and so on..
Continuing the above for k times,
T(n) = T(n-k) + k
If n - k = 1, i.e. k = n - 1, the equation then becomes
T(n) = T(1) + (n - 1) = 1 + (n - 1) = n
Therefore, the recursive algorithm we came up with has time complexity O(n).
Hope it helped.
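To see that T(n) = T(n-1) + 1 really solves to n, here is a quick Python sketch of the recursive maximum with an added call counter (the `calls` list is an illustrative addition, not part of the original code):

```python
def find_max(arr, n, calls):
    """Recursive maximum of arr[0..n-1]; calls[0] counts invocations."""
    calls[0] += 1
    if n == 1:
        return arr[0]
    return max(arr[n - 1], find_max(arr, n - 1, calls))

calls = [0]
result = find_max([3, 1, 4, 1, 5, 9, 2, 6], 8, calls)
# exactly one call per element: calls[0] == 8, i.e. T(n) = n
```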

Algorithm Analysis: Expected Running Time of Recursive Function Based on a RNG

I am somewhat confused by the running time analysis of a program here, which makes recursive calls that depend on an RNG (randomly generated number).
Let's begin with the pseudo-code, and then I will go into what I have thought about so far related to this one.
Func1(A, i, j)
/* A is an array of at least j integers */
1 if (i ≥ j) then return (0);
2 n ← j − i + 1 ; /* n = number of elements from i to j */
3 k ← Random(n);
4 s ← 0; //Takes time of Arbitrary C
5 for r ← i to j do
6 A[r] ← A[r] − A[i] − A[j]; //Arbitrary C
7 s ← s + A[r]; //Arbitrary C
8 end
9 s ← s + Func1(A, i, i+k-1); //Recursive Call 1
10 s ← s + Func1(A, i+k, j); //Recursive Call 2
11 return (s);
Okay, now let's get into the math I have tried so far. I'll try not to be too pedantic here as it is just a rough, estimated analysis of expected run time.
First, let's consider the worst case. Note that k = Random(n) is at least 1 and at most n. Therefore, the worst case is when k = 1 is picked. This makes the total running time T(n) = cn + T(1) + T(n-1), which works out to roughly cn^2 overall (you can use Wolfram to solve recurrence relations if you are stuck or rusty, although this one is fairly simple).
Now, here is where I get somewhat confused. For the expected running time, we have to base our analysis on the probability distribution of the random number k. Therefore, we have to sum all the possible running times for different values of k, weighted by their individual probabilities. By lemma (hopefully intuitive logic), the probability of any one randomly generated k, with k between 1 and n, is 1/n.
Therefore, (in my opinion/analysis) the expected run time is:
ET(n) = cn + (1/n)*Summation(from k=1 to n-1) of (ET(k-1) + ET(n-k))
Let me explain a bit. The cn is simply for the loop which runs i to j. This is estimated by cn. The summation represents all of the possible values for k. The (1/n) multiplied by this summation is there because the probability of any one k is (1/n). The terms inside the summation represent the running times of the recursive calls of Func1. The first term on the left takes ET(k-1) because this recursive call is going to do a loop from i to k-1 (which is roughly ck), and then possibly call Func1 again. The second is a representation of the second recursive call, which would loop from i+k to j, which is also represented by n-k.
Upon expansion of the summation, we see that the overall function ET(n) is of the order n^2. However, as a test case, plugging in k = n/2 gives a total running time for Func1 of roughly n·log(n). This is why I am confused: how can this be, if the estimated running time is of the order n^2? Am I considering a "good" case by plugging in n/2 for k? Or am I thinking about k in the wrong sense in some way?
Expected time complexity is ET(n) = O(n log n). The following is a proof I derived myself; please point out any errors:
ET(n) = P(k=1)*(ET(1)+ET(n-1)) + P(k=2)*(ET(2)+ET(n-2)) + ... + P(k=n-1)*(ET(n-1)+ET(1)) + c*n
As the RNG is uniformly random, P(k=x) = 1/n for all x, hence
ET(n) = (1/n)*(2*ET(1) + 2*ET(2) + ... + 2*ET(n-1)) + c*n
ET(n) = (2/n)*sum(ET(i) for i in 1..n-1) + c*n
Similarly,
ET(n-1) = (2/(n-1))*sum(ET(i) for i in 1..n-2) + c*(n-1)
so sum(ET(i) for i in 1..n-2) = (ET(n-1) - c*(n-1))*(n-1)/2
Substituting back:
ET(n) = (2/n)*(sum(ET(i) for i in 1..n-2) + ET(n-1)) + c*n
ET(n) = (2/n)*((ET(n-1) - c*(n-1))*(n-1)/2 + ET(n-1)) + c*n
ET(n) = (2/n)*((n+1)/2*ET(n-1) - c*(n-1)^2/2) + c*n
ET(n) = (n+1)/n*ET(n-1) + c*n - c*(n-1)^2/n
ET(n) = (n+1)/n*ET(n-1) + c*(2n-1)/n ≤ (n+1)/n*ET(n-1) + 2c
Solving the recurrence by repeated substitution:
ET(n) ≤ (n+1)/2*ET(1) + 2c*((n+1)/(n+1) + (n+1)/n + (n+1)/(n-1) + ... + (n+1)/3)
ET(n) ≤ (n+1)/2*ET(1) + 2c*(n+1)*sum(1/i for i in 3..n+1)
Since sum(1/i for i in 1..n) = O(log n),
ET(n) = O(n) + O(n log n) = O(n log n)

Proving this recursive Fibonacci implementation runs in time O(2^n)?

I'm having difficulty proving that the 'bad' version of fibonacci is O(2^n).
Ie.
Given the function
int fib(int x)
{
    if (x == 1 || x == 2)
    {
        return 1;
    }
    else
    {
        return fib(x - 1) + fib(x - 2);
    }
}
Can I get help with the proof that this is O(2^n)?
Let's start off by writing a recurrence relation for the runtime:
T(1) = 1
T(2) = 1
T(n+2) = T(n) + T(n + 1) + 1
Now, let's take a guess that
T(n) ≤ 2^n
If we try to prove this by induction, the base cases check out:
T(1) = 1 ≤ 2 = 2^1
T(2) = 1 ≤ 4 = 2^2
Then, in the inductive step, we see this:
T(n + 2) = T(n) + T(n + 1) + 1
≤ 2^n + 2^(n+1) + 1
< 2^(n+1) + 2^(n+1)
= 2^(n+2)
Therefore, by induction, we can conclude that T(n) ≤ 2^n for any n, and therefore T(n) = O(2^n).
With a more precise analysis, you can prove that T(n) = 2F(n) - 1, where F(n) is the nth Fibonacci number. This proves, more accurately, that T(n) = Θ(φ^n), where φ is the Golden Ratio, approximately 1.618. Note that φ^n = o(2^n) (using little-o notation), so this is a much better bound.
Hope this helps!
Try manually doing a few test cases like f(5) and take note of how many times the method f() is called.
A fat hint would be to notice that every time the method f() is called (except for x is 1 or 2), f() is called twice. Each of those call f() two more times each, and so on...
There's actually a pretty simple proof that the total number of calls to the f is going to be 2Fib(n)-1, where Fib(n) is the n'th Fibonacci number. It goes like this:
The set of calls to f form a binary tree, where each call is either a leaf (for x=1 or x=2) or else the call spawns two child calls (for x>2).
Each leaf contributes exactly 1 to the total returned by the original call, therefore there are Fib(n) total leaves.
The total number of internal nodes in any binary tree is equal to L-1, where L is the number of leaves, so the total number of nodes in this tree is 2L-1.
This shows that the running time (measured in terms of total calls to f) is
T(n)=2Fib(n)-1=O(Fib(n))
and since Fib(n) = Θ(φ^n), where φ is the golden ratio
φ = (1 + sqrt(5))/2 = 1.618...
this proves that T(n) = Θ(1.618...^n), which is o(2^n).
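The claim that the total number of calls is 2·Fib(n) − 1 is easy to check empirically. This Python sketch counts calls to the naive recursion; the counter argument is an illustrative addition, not part of the original function:

```python
def fib(x, counter):
    """Naive recursive Fibonacci; counter[0] counts calls to fib."""
    counter[0] += 1
    if x == 1 or x == 2:
        return 1
    return fib(x - 1, counter) + fib(x - 2, counter)

c = [0]
value = fib(10, c)
# fib(10) == 55, and the call count is 2*55 - 1 == 109
```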
Using the Recursion Tree Method:

                T(n)
              /      \
        T(n-1)        T(n-2)
        /    \        /    \
  T(n-2)  T(n-3)  T(n-3)  T(n-4)

Each level of the tree corresponds to the calls fib(x - 1) and fib(x - 2); if you complete the recursion tree in this manner, you stop when x = 1 or x = 2 (the base cases). The tree above shows only three levels. To solve this tree you need two important pieces of information: 1) the height of the tree, and 2) how much work is done at each level.
The height of this tree is n, and the number of nodes doubles at each level, so the last level can hold up to 2^n nodes. With O(1) work per node, the total is dominated by the last level: 2^n * O(1) = O(2^n).

Calculating T(n) Time Complexity of an Algorithm

I am looking for some clarification in working out the time efficiency of an Algorithm, specifically T(n). The algorithm below is not as efficient as it could be, though it's a good example to learn from I believe. I would appreciate a line-by-line confirmation of the sum of operations in the code:
Pseudo-code
1. Input: array X of size n
2. Let A = an empty array of size n
3. For i = 0 to n-1
4. Let sum = 0
5. For j = 0 to i
6. Let sum = sum + x[j]
7. End For
8. Let A[i] = sum / (i+1)
9. End For
10. Output: Array A
My attempt at calculating T(n)
1. 1
2. n
3. n
4. n(2)
5. n(n-1)
6. n(5n)
7. -
8. n(6)
9. -
10. 1
T(n) = 1 + n + n + 2n + n^2 - n + 5n^2 + 6n + 1
= 6n^2 + 9n + 2
So, T(n) = 6n^2 + 9n + 2 is what I arrive at, from this I derive Big-O of O(n^2).
What errors, if any have I made in my calculation...
Edit: ...in counting the primitive operations to derive T(n)?
Your result O(n^2) is correct and follows from the two nested loops. I would prefer a derivation like
0 + 1 + 2 + ... + (n-1) = (n-1)n/2 = O(n^2)
which follows from observing the nested loops.
I'm not really sure about your methodology, but O(n^2) does seem to be correct. At each iteration through the outer loop you do a sub-loop over the previous elements. Therefore you're looking at 1 iteration the first time, 2 the second, then 3, and so on, up to n the final time. This is the sum from 1 to n, which gives a complexity of n^2.
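Counting the inner-loop iterations directly confirms the quadratic bound. Here is a Python sketch of the prefix-averages pseudocode (with `sum` initialized to 0 and an added iteration counter, both illustrative choices):

```python
def prefix_averages(x):
    """A[i] = average of x[0..i]; also returns the inner-loop iteration count."""
    n = len(x)
    a = [0.0] * n
    inner = 0
    for i in range(n):
        s = 0
        for j in range(i + 1):
            s += x[j]
            inner += 1               # one execution of the inner loop body
        a[i] = s / (i + 1)
    return a, inner

a, inner = prefix_averages(list(range(10)))
# inner == 1 + 2 + ... + 10 == 55 == n(n+1)/2 for n = 10
```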

Why is the complexity of computing the Fibonacci series 2^n and not n^2?

I am trying to find the complexity of the Fibonacci series using a recursion tree, and concluded that the height of the tree is O(n) in the worst case and the cost of each level is cn; hence complexity = n·n = n^2.
How come it is O(2^n)?
The complexity of a naive recursive fibonacci is indeed 2ⁿ.
T(n) = T(n-1) + T(n-2) = T(n-2) + T(n-3) + T(n-3) + T(n-4) =
= T(n-3) + T(n-4) + T(n-4) + T(n-5) + T(n-4) + T(n-5) + T(n-5) + T(n-6) = ...
In each step you call T twice, which eventually gives an asymptotic barrier of:
T(n) = 2⋅2⋅...⋅2 = 2ⁿ
bonus: The best theoretical implementation to fibonacci is actually a close formula, using the golden ratio:
Fib(n) = (φⁿ – (–φ)⁻ⁿ)/sqrt(5) [where φ is the golden ratio]
(However, it suffers from precision errors in real life due to floating point arithmetics, which are not exact)
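As an illustration of both the closed formula and its floating-point limits, a small Python sketch (the function names are mine, and the precision cutoff is an approximate observation):

```python
import math

def fib_binet(n):
    """Closed-form Fibonacci via the golden ratio; exact only while
    floating-point precision holds (roughly n < 70)."""
    phi = (1 + math.sqrt(5)) / 2
    return round((phi ** n - (-phi) ** (-n)) / math.sqrt(5))

def fib_iter(n):
    """Exact iterative Fibonacci for comparison."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# fib_binet agrees with the exact values for moderate n
```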
The recursion tree for fib(n) would be something like :
n
/ \
n-1 n-2 --------- maximum 2^1 additions
/ \ / \
n-2 n-3 n-3 n-4 -------- maximum 2^2 additions
/ \
n-3 n-4 -------- maximum 2^3 additions
........
-------- maximum 2^(n-1) additions
The exponent n-1 in 2^(n-1) is used because, e.g., for fib(5) the recursion eventually goes down to fib(1), so the deepest level has index n-1.
Number of internal nodes = number of leaves - 1, and the number of leaves is at most 2^(n-1), so:
Number of additions = number of internal nodes + number of leaves ≤ (2^(n-1) - 1) + 2^(n-1)
~ 2^n
Look at it like this. Assume the complexity of calculating F(k), the kth Fibonacci number, by recursion is at most 2^k for k <= n. This is our induction hypothesis. Then the complexity of calculating F(n + 1) by recursion is
F(n + 1) = F(n) + F(n - 1)
which has complexity 2^n + 2^(n - 1). Note that
2^n + 2^(n-1) = (3/2)·2^n ≤ 2·2^n = 2^(n+1).
We have shown by induction that the claim that calculating F(k) by recursion is at most 2^k is correct.
You are correct that the depth of the tree is O(n), but you are not doing O(n) work at each level. At each level, you do O(1) work per recursive call, but each recursive call then contributes two new recursive calls, one at the level below it and one at the level two below it. This means that as you get further and further down the recursion tree, the number of calls per level grows exponentially.
Interestingly, you can actually establish the exact number of calls necessary to compute F(n) as 2F(n + 1) - 1, where F(n) is the nth Fibonacci number. We can prove this inductively. As a base case, to compute F(0) or F(1), we need to make exactly one call to the function, which terminates without making any new calls. Let's say that L(n) is the number of calls necessary to compute F(n). Then we have that
L(0) = 1 = 2*1 - 1 = 2F(1) - 1 = 2F(0 + 1) - 1
L(1) = 1 = 2*1 - 1 = 2F(2) - 1 = 2F(1 + 1) - 1
Now, for the inductive step, assume that for all n' < n, with n ≥ 2, L(n') = 2F(n' + 1) - 1. Then to compute F(n), we need to make one call to the initial function that computes F(n), which in turn fires off calls to F(n - 2) and F(n - 1). By the inductive hypothesis we know that F(n - 1) and F(n - 2) can be computed in L(n - 1) and L(n - 2) calls. Thus the total number of calls is
1 + L(n - 1) + L(n - 2)
= 1 + 2F((n - 1) + 1) - 1 + 2F((n - 2) + 1) - 1
= 2F(n) + 2F(n - 1) - 1
= 2(F(n) + F(n - 1)) - 1
= 2(F(n + 1)) - 1
= 2F(n + 1) - 1
Which completes the induction.
At this point, you can use Binet's formula to show that
L(n) = 2·(1/√5)·(((1 + √5)/2)^(n+1) - ((1 - √5)/2)^(n+1)) - 1
And thus L(n) = O(((1 + √5)/2)^n). If we use the convention that
φ = (1 + √5)/2 ≈ 1.618
We have that
L(n) = Θ(φ^n)
And since φ < 2, this is o(2^n) (using little-o notation).
Interestingly, I've chosen the name L(n) for this series because this series is called the Leonardo numbers. In addition to its use here, it arises in the analysis of the smoothsort algorithm.
Hope this helps!
T(n) = T(n-1) + T(n-2)
which can be solved through the tree method:

                T(n)                      level 0: 2^0 = 1 call
              /      \
        T(n-1)        T(n-2)              level 1: 2^1 = 2 calls
        /    \        /    \
  T(n-2)  T(n-3)  T(n-3)  T(n-4)          level 2: 2^2 = 4 calls
   ...                                    ...
                                          level n: up to 2^n calls

so the total time complexity is at most 2 + 4 + 8 + ... + 2^n; after solving this geometric progression we get the time complexity O(2^n).
The complexity of the Fibonacci series is O(F(k)), where F(k) is the kth Fibonacci number. This can be proved by induction. It is trivial for the base case. Assume that for all k ≤ n, the complexity of computing F(k) is c·F(k) + o(F(k)); then for k = n + 1, the complexity of computing F(n+1) is c·F(n) + o(F(n)) + c·F(n-1) + o(F(n-1)) = c·(F(n) + F(n-1)) + o(F(n)) + o(F(n-1)) = O(F(n+1)).
The complexity of the recursive Fibonacci series is 2^n:
This is the recurrence relation for recursive Fibonacci:
T(n) = T(n-1) + T(n-2)                          (number of terms: 2)
Now, solving this relation using the substitution method (substituting the values of T(n-1) and T(n-2)):
T(n) = T(n-2) + 2*T(n-3) + T(n-4)               (number of terms: 4 = 2^2)
Again substituting the values of the above terms, we get
T(n) = T(n-3) + 3*T(n-4) + 3*T(n-5) + T(n-6)    (number of terms: 8 = 2^3)
Continuing this completely, after k substitutions we get
T(n) = T(n-k) + ... + T(n-2k)                   (number of terms: 2^k)    eq(3)
This implies that the number of recursive calls at any level is at most 2^n.
And each recursive call in equation (3) takes Θ(1), so the time complexity is 2^n * Θ(1) = O(2^n).
The O(2^n) complexity of Fibonacci number calculation only applies to the recursion approach. With a few extra space, you can achieve a much better performance with O(n).
public static int fibonacci(int n) throws Exception {
    if (n < 0)
        throw new Exception("Can't be a negative integer");
    if (n <= 1)
        return n;
    int s = 0, s1 = 0, s2 = 1;
    for (int i = 2; i <= n; i++) {
        s = s1 + s2;
        s1 = s2;
        s2 = s;
    }
    return s;
}
I cannot resist the temptation of connecting a linear-time iterative algorithm for Fib to the exponential-time recursive one: if one reads Jon Bentley's wonderful little book "Writing Efficient Programs", I believe it is a simple case of caching: whenever Fib(k) is calculated, store it in an array FibCached[k]. Whenever Fib(j) is called, first check whether it is cached in FibCached[j]; if yes, return the value; if not, use recursion. (Look at the tree of calls now...)
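The caching idea described above can be sketched in Python; using functools.lru_cache in place of a hand-rolled FibCached array is my substitution, but the effect is the same — each fib(k) is computed once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Cached recursive Fibonacci: each fib(k) is computed once,
    so the exponential call tree collapses to O(n) work."""
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)
```

With the cache in place, fib(50) returns instantly, whereas the uncached recursion would make billions of calls.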
