Recurrence relation and time complexity - algorithm

What is the recurrence relation and time complexity for the following pseudo-code?
temp = 1
repeat
    for i = 1 to n
        temp = temp + 1
    n = n / 2
until n < 1

When we are dealing with asymptotic notations like Big-O, Omega and Theta, we don't consider the constants. No doubt your time complexity will go like
n + n/2 + n/4 + ... + 1
but if you sum this decreasing geometric series you will get an exact answer equal to c*n, where c is some constant greater than 1. But in asymptotic notation, as I said earlier, constants don't matter, so whether the value of c is 2 or 50 or 100 or 10000, it is still O(n).
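If you want to convince yourself numerically, here is a minimal sketch (my own illustration, not part of the original answer) that counts how many times the inner loop body executes:

def count_ops(n):
    ops = 0
    while n >= 1:
        ops += n        # the inner "for i = 1 to n" loop runs n times
        n //= 2         # n = n / 2, using integer halving
    return ops

for n in (16, 1024, 10**6):
    print(n, count_ops(n), count_ops(n) / n)  # the ratio stays below 2, so O(n)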
One more thing: try not to use the Master theorem for solving recurrence relations; use the recursion-tree method instead, as it is purely conceptual, helps you build up your understanding, and can be used in every case. The Master theorem is a shortcut and also has limitations.

Related

Randomized selection complexity

After analyzing the algorithm's complexity I have a few questions:
For the best-case complexity, the recurrence relation is T(n) = T(n/2) + dn, which implies that the complexity is Θ(n).
By the master theorem I can clearly see why this is true, but when I draw the algorithm's recursive calls as a tree I don't fully understand the final result. (It seems like I have one branch of height log(n), and at each level I perform a partition costing O(n), so it should be n log(n).)
(Just as a side note: this is very similar to the best case of the mergesort algorithm, but here we ignore the unwanted sub-array after partitioning.)
Thanks!
It is as Yves Daoust wrote. Imagine it with real numbers, e.g. n = 1024:
T(n) = T(n/2) + dn
T(1024) = T(512) + 1024
T(512) = T(256) + 512
....
T(2) = T(1) + 2 -> this would be the last operation
Therefore you get 1024+512+256+...+1 <= 2048, which is 2n
You might think that dn is always as big as the original n, but in the recurrence n is not a global variable; it is local to the call being made.
So there are log(n) calls, but they do not each take n time; they take less and less time.
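A tiny sketch of the same unrolling (my addition; it assumes d = 1 and a base case T(1) = 1, matching the numbers above):

def T(n):
    if n <= 1:
        return 1          # assumed base case: T(1) = 1
    return T(n // 2) + n  # one partition pass of cost n, then recurse on half

print(T(1024))            # 2047, i.e. just under 2n = 2048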

Get time complexity of the recursion: T(n)=4T(n-1) - 3T(n-2)

I have a recurrence relation given by:
T(n)=4T(n-1) - 3T(n-2)
How do I solve this?
Any detailed explanation would be appreciated.
What I tried: I substituted for T(n-1) on the right-hand side using the relation, and got:
= 16T(n-2) - 12T(n-3) - 3T(n-2)
But I don't know where and how to end this.
Not only can you easily get the time complexity of this recursion, you can even solve it exactly. This is thanks to the exhaustive theory behind linear recurrence relations; the one you wrote here is a specific case of a homogeneous linear recurrence.
To solve it you need to write the characteristic polynomial, t^2 - 4t + 3, and find its roots, which are t = 1 and t = 3. This means that your solution is of the form:
T(n) = c1 + 3^n * c2.
You can get c1 and c2 if you have boundary conditions, but for your case it is enough to claim O(3^n) time complexity.
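For instance, under the assumed boundary conditions T(0) = 1 and T(1) = 2 (my choice, purely for illustration) you get c1 = c2 = 1/2, and a short script can confirm the closed form against the recurrence:

def T(n):
    if n == 0: return 1   # assumed boundary condition
    if n == 1: return 2   # assumed boundary condition
    return 4 * T(n - 1) - 3 * T(n - 2)

for n in range(8):
    print(n, T(n), 0.5 + 0.5 * 3 ** n)  # both columns agree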
While it's obviously O(4^n) (because T(n) <= 4*T(n-1)), a tighter bound can be proved:
T(n) = 4*T(n-1) - 3*T(n-2)
T(n) - T(n-1) = 3*T(n-1) - 3*T(n-2)
D(n) = T(n) - T(n-1)
D(n) = 3*D(n-1)
D(n) = D(0) * 3^n
If D(0) = 0, then T(n) is constant, i.e. O(1).
Otherwise, since the difference grows exponentially, the resulting function will be exponential as well:
T(n) = O(3^n)
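A quick numeric check of the difference trick (the seed values T(0) = 1, T(1) = 2 are my assumption, for illustration only):

T = [1, 2]                              # assumed seed values
for n in range(2, 8):
    T.append(4 * T[n - 1] - 3 * T[n - 2])
D = [T[i] - T[i - 1] for i in range(1, len(T))]
print(D)                                # [1, 3, 9, 27, 81, 243, 729]: each difference triples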
NOTE: Generally, recurrence relations of this kind (where the recursive calls branch, e.g. the recurrence for the Fibonacci sequence at value n) result in an exponential time complexity.
First of all, your question is incomplete. It does not provide a termination condition (a condition at which the recurrence terminates). I assume it must be
T(n) = 1 for n = 1, and T(n) = 2 for n = 2.
Based on this assumption I start breaking down the above recurrence relation.
On substituting for both T(n-1) and T(n-2) I get:
16T(n-2) - 24T(n-3) + 9T(n-4)
This forms a polynomial pattern of power 2:
(4^2)T(n-2) - 2*4*3*T(n-3) + (3^2)T(n-4)
Breaking the recurrence down further we get:
64T(n-3) - 144T(n-4) + 108T(n-5) - 27T(n-6)
which is a polynomial pattern of power 3.
Breaking the relation down for n-1 steps we will get:
4^(n-1) T(1) - ... something like that
We can clearly see that in the above expansion all the remaining terms will be less than 4^(n-1), so we can take the asymptotic bound as:
O(4^n)
As an exercise you can expand the polynomial for a few more terms and also draw the recursion tree to find out what's actually happening.
Trying T(n) = x^n gives you a quadratic equation: x^2 = 4x - 3. This has solutions x=1 and x=3, so the general form for T(n) is a + b*3^n. The exact values of a and b depend on the initial conditions (for example, the values of T(0) and T(1)).
Depending on the initial conditions, the solution is going to be O(1) or O(3^n).
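A sketch of that dependence on the initial conditions (the seed values are mine, chosen to exercise both cases):

def seq(t0, t1, count):
    t = [t0, t1]
    for _ in range(count):
        t.append(4 * t[-1] - 3 * t[-2])
    return t

print(seq(1, 1, 6))  # [1, 1, 1, 1, 1, 1, 1, 1]: the 3^n term vanishes, O(1)
print(seq(0, 1, 6))  # [0, 1, 4, 13, 40, 121, 364, 1093]: grows as (3^n - 1)/2, O(3^n)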

time complexity and size of the input

I'm studying for an exam which is mostly about time complexity. I've run into a problem while solving these four questions:
1) If we prove that an algorithm has a time complexity of Θ(n^2), is it possible that it runs in O(n) time for ALL inputs?
2) If we prove that an algorithm has a time complexity of Θ(n^2), is it possible that it runs in O(n) time for SOME inputs?
3) If we prove that an algorithm has a time complexity of O(n^2), is it possible that it runs in O(n) time for SOME inputs?
4) If we prove that an algorithm has a time complexity of O(n^2), is it possible that it runs in O(n) time for ALL inputs?
Can anyone tell me how to answer such questions? I'm mostly confused when they ask about "all" or "some" inputs.
Thanks
gkovacs90's answer provides a good link: WIKI
T(n) = O(n^3) means T(n) grows asymptotically no faster than n^3: a constant k > 0 exists such that for all n > N, T(n) < k*n^3.
T(n) = Θ(n^3) means T(n) grows asymptotically as fast as n^3: two constants k1, k2 > 0 exist such that for all n > N, k1*n^3 < T(n) < k2*n^3.
So if T(n) = n^3 + 2*n + 3
then T(n) = Θ(n^3) is more appropriate than T(n) = O(n^3), since we have more information about the way T(n) behaves asymptotically.
T(n) = Θ(n^3) means that for n > N the curve of T(n) will "approach" and "stay close to" the curve of k*n^3, with k > 0.
T(n) = O(n^3) means that for n > N the curve of T(n) will always stay under the curve of k*n^3, with k > 0.
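As a concrete illustration (mine, not the answerer's): T(n) = n^3 + 2n + 3 sits between 1*n^3 and 2*n^3 for every n > 2, witnessing T(n) = Θ(n^3) with k1 = 1, k2 = 2, N = 2.

def T(n):
    return n ** 3 + 2 * n + 3

for n in (3, 10, 100):
    print(n, n ** 3 < T(n) < 2 * n ** 3)  # True for all three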
1:No
2: Yes; as gkovacs90 says, for small values of n you can have an O(n) computation time, but I would say no for big enough inputs. The notations Theta and Big-O only mean something asymptotically.
3:Yes
4:Yes
Example for number 4 (dumb but still true): for an int array A, compute the sum of the values. Your algorithm will certainly be:
def array_sum(A):
    total = 0
    for a in A:           # one pass over the array
        total = total + a
    return total
If n is the length of the array A, the time complexity is T(n) = n. So T(n) = O(n^2), since T(n) does not grow faster than n^2. And still we have, for every array, a running time of O(n).
If you find such a result for a time (or memory) complexity, then you can (and certainly should) refine the Big-O / Theta bound of your function (here, obviously, we have Θ(n)).
Some last points :
T(n)=Θ(g(n)) implies T(n)=O(g(n)).
In computational complexity theory, the complexity is sometimes computed for best, worst and average cases.
A "barfoot" explanation:
Big O notation is for setting an upper bound. By definition, there is always an index(or an input-length) from wich the notation is correct. So below this index, anything can happen.
For example sorting an array(O(n^2)) with one element takes less time, than writing the elements to the output(O(n)). ( we don't sort, we know it is in the right order, so it takes 0 time ).
So the answers:
1: No
2: Yes
3: Yes
4: Yes
You can find a detailed, understandable description at WIKI.
And HERE you can find a simpler explanation.

Finding time complexity of relation T(n) = T(n-1) + T(n-2) + T(n-3) if n > 3

Somewhat similar to the Fibonacci sequence.
The running time of an algorithm is given by
T(n) = T(n-1) + T(n-2) + T(n-3) if n > 3
     = n otherwise
What is the order of this algorithm?
If calculated by the induction method, then:
T(n) = T(n-1) + T(n-2) + T(n-3)
Let us assume T(n) to be some function a^n.
Then a^n = a^(n-1) + a^(n-2) + a^(n-3)
=> a^3 = a^2 + a + 1
which also gives complex solutions. The roots of the above equation, according to my calculations, are
a = 1.839286755
a = 0.419643 - 0.606291i
a = 0.419643 + 0.606291i
Now, how can I proceed further, or is there any other method for this?
If I remember correctly, once you have determined the roots of the characteristic equation, T(n) can be written as a linear combination of the powers of those roots:
T(n) = A1*root1^n + A2*root2^n + A3*root3^n
So I guess the maximum complexity here will be (maxroot)^n, where maxroot is the root with the largest absolute value. For your case that is ~1.84^n.
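A quick numerical sanity check (my own sketch, using the base cases T(n) = n for n <= 3 from the question): the ratio of consecutive terms converges to the dominant root.

def table(N):
    t = [0, 1, 2, 3]                    # t[n] = T(n); T(n) = n for n <= 3
    for n in range(4, N + 1):
        t.append(t[-1] + t[-2] + t[-3])
    return t

t = table(30)
print(t[30] / t[29])                    # ~1.8393, the dominant root of a^3 = a^2 + a + 1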
Asymptotic analysis is done on the running times of programs; it tells us how the running time grows with the input.
For recurrence relations (like the one you mentioned), we use a two-step process:
1. Estimate the running time using the recursion-tree method.
2. Validate (confirm) the estimate using the substitution method.
You can find an explanation of these methods in any algorithms text (e.g., Cormen).
It can be bounded above by 3 + 9 + 27 + ... + 3^n, which is O(3^n) (a looser bound than the ~1.84^n derived above).

Recurrence Relation

Why is the recurrence relation of recursive factorial algorithm this?
T(n)=1 for n=0
T(n)=1+T(n-1) for n>0
Why is it not this?
T(n)=1 for n=0
T(n)=n*T(n-1) for n>0
Putting in values of n, i.e. 1, 2, 3, 4, ..., the second recurrence relation holds (the factorials are correctly calculated), not the first one.
We generally use a recurrence relation to find the time complexity of an algorithm.
Here, the function T(n) is not actually calculating the value of the factorial; it is telling you about the time complexity of the factorial algorithm.
It means that finding the factorial of n takes one more operation than finding the factorial of n-1
(i.e. T(n) = T(n-1) + 1), and so on.
So the correct recurrence relation for a recursive factorial algorithm is
T(n) = 1 for n = 0
T(n) = 1 + T(n-1) for n > 0
not the one you mentioned later.
Similarly, the recurrence for the Tower of Hanoi is
T(n) = 2T(n-1) + 1 for n > 0.
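As a side check (mine; it assumes the base case T(0) = 0, which the answer leaves implicit), the Hanoi recurrence solves to 2^n - 1:

def hanoi(n):
    return 0 if n == 0 else 2 * hanoi(n - 1) + 1  # assumed base case T(0) = 0

print([hanoi(n) for n in range(6)])  # [0, 1, 3, 7, 15, 31] = 2^n - 1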
Update:
It does not generally have anything to do with the implementation.
But a recurrence can give an intuition about the programming paradigm: for example, T(n) = 2*T(n/2) + n (merge sort) suggests divide and conquer, because we are dividing n into halves.
Also, if you solve the equation, it will give you a bound on the running time, e.g. a Big-O bound.
Looks like T(n) is the recurrence relation of the time complexity of the recursive factorial algorithm, assuming constant time multiplication. Perhaps you misread your source?
What he put was not the factorial recursion, but the time complexity of it.
Assuming this is the pseudocode for such a recurrence:
1. func factorial(n)
2.     if (n == 0)
3.         return 1
4.     return n * factorial(n - 1)
I am assuming that tail-recursion elimination is not involved.
Lines 2 and 3 cost constant time, c1 and c2.
Line 4 costs constant time as well. However, it calls factorial(n-1), which takes some time T(n-1). The time it takes to multiply factorial(n-1) by n can be folded into the constant once T(n-1) is accounted for.
The time for the whole function is just the sum: T(n) = c1 + c2 + T(n-1).
In Big-O notation this reduces to T(n) = 1 + T(n-1).
This, as Diam has pointed out, is a flat recursion, so its running time is O(n). Its space complexity is O(n) as well, since each of the n recursive calls remains on the stack.
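Here is a runnable version of that pseudocode, instrumented (my addition) with a call counter to make the T(n) = 1 + T(n-1) pattern visible:

calls = 0

def factorial(n):
    global calls
    calls += 1                     # one unit of work per recursive call
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(10), calls)        # 3628800 11: n + 1 calls for n = 10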
I assume that you have bad information. The second recurrence relation you cite is the correct one, as you have observed. The first one just generates the natural numbers.
This question is very confusing... Your first formula is not the factorial. It simply gives T(n) = n + 1 for all n. The factorial of n is the product of the first n positive integers: factorial(1) = 1, factorial(n) = n * factorial(n-1). Your second formula is essentially correct.
T(n) = T(n-1) + 1 is the correct recurrence equation for the factorial of n.
This equation gives you the time it takes to compute the factorial of n, NOT the value of the factorial of n.
Where did you find the first one? As a formula for the factorial value it's completely wrong.
It only adds 1 each time, whatever the value is.
First you have to identify the basic operation, which for this example is multiplication. Multiplication happens once in every recursive call. So
T(n) = T(n-1) + 1
where +1 is the basic operation (multiplication in this example)
and T(n-1) is the next recursive call.
TL;DR: The answer to your question actually depends on what sequence your recurrence relation is defining: whether the sequence T_n in your question represents the factorial function or the running-time cost of computing the factorial function.[*]
The factorial function
The recursive definition of the factorial of n, f_n, is:
f_n = n * f_{n-1} for n > 0, with f_0 = 1.
As you can see, the equation above is actually a recurrence relation, since it is an equation that, together with the initial term (i.e., f_0 = 1), recursively defines a sequence (i.e., the factorial function, f_n).
Modelling the running-time cost of computing the factorial
Now we are going to find a model representing the running-time cost of computing the factorial of n. Let's call T_n the running-time cost of computing f_n.
Looking at the definition of the factorial function f_n above, its running-time cost T_n consists of the running-time cost of computing f_{n-1} (i.e., T_{n-1}) plus the running-time cost of performing the multiplication between n and f_{n-1}. The multiplication is achieved in constant time. Therefore we can say that T_n = T_{n-1} + 1.
However, what is the value of T_0? T_0 represents the running-time cost of computing f_0. Since the value of f_0 is known by definition, the running-time cost of computing it is constant. Therefore, we can say that T_0 = 1.
Finally, what we obtain is:
T_n = T_{n-1} + 1 for n > 0, with T_0 = 1.
This equation is also a recurrence relation. However, what it defines (together with the initial term) is a sequence that models the running-time cost of computing the factorial function.
[*] Taking into account how the sequence in your recurrence relation is named (i.e., T_n), I think it very likely represents the latter, i.e., the running-time cost of computing the factorial function.
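To make the distinction concrete, here is a minimal sketch (mine) of the two sequences side by side, using the seeds from the answer (f_0 = 1, T_0 = 1):

def f(n):                              # the factorial value itself
    return 1 if n == 0 else n * f(n - 1)

def T(n):                              # the modelled running-time cost
    return 1 if n == 0 else T(n - 1) + 1

print([f(n) for n in range(6)])        # [1, 1, 2, 6, 24, 120]
print([T(n) for n in range(6)])        # [1, 2, 3, 4, 5, 6], i.e. T_n = n + 1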
