Time complexity of a recursive function

How can I calculate the time complexity and the T(n) equation of this recursive function?
Function CoeffBin(n, k)
  if (n = 1) or (k = 0) then return(1)
  else return(CoeffBin(n-1, k) + CoeffBin(n-1, k-1))

Let T(n, k) be the cost function and assume a unit cost for the statement "if (n=1) or (k=0) then return(1)".
Now, neglecting the cost of the addition, we have the recurrence
T(n, k) = 1 if n = 1 or k = 0 (that is, T(1, k) = T(n, 0) = 1)
T(n, k) = T(n-1, k) + T(n-1, k-1) otherwise
The solution is T(n, k) = C(n-1, 0) + C(n-1, 1) + ... + C(n-1, k), a partial row sum of Pascal's triangle, where C(p, j) denotes the binomial coefficient "p choose j" (for k >= n-1 the sum is exactly 2^(n-1)). So the table of costs is built just like Pascal's triangle, only with different boundary values!
For a more precise estimate, assume a cost of a when n = 1 or k = 0, and a cost of b on top of the two recursive calls otherwise. The recursion tree is a binary tree with T(n, k) leaves and T(n, k) - 1 internal nodes, so the total cost is then (a + b)·T(n, k) - b.
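To check the closed form against the recurrence, apply Pascal's rule term by term (using the convention that a binomial coefficient with a negative lower index is zero):

\sum_{j=0}^{k}\binom{n-1}{j}
  = \sum_{j=0}^{k}\left[\binom{n-2}{j} + \binom{n-2}{j-1}\right]
  = \sum_{j=0}^{k}\binom{n-2}{j} + \sum_{j=0}^{k-1}\binom{n-2}{j}
  = T(n-1, k) + T(n-1, k-1),

and the boundary cases follow from \sum_{j=0}^{k}\binom{0}{j} = 1 and \binom{n-1}{0} = 1.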

Note that, at the base level (that is, when not making recursive calls), the function always returns 1.
So, to produce an answer of X, the program ultimately needs to do X - 1 additions, and thus makes X calls that execute the base case in the first line and X - 1 calls that execute the second line, 2X - 1 calls in total.
So, whatever the intended result of the function call is -- perhaps choose(n, k) -- if you prove that it works, you automatically establish that the number of calls is proportional to that result.
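To see both observations concretely, here is a small Python sketch (the instrumentation and the closed-form check are mine) that counts the calls and compares them with the returned value and with the partial binomial sum:

from math import comb

calls = 0

def coeff_bin(n, k):
    """Direct transcription of CoeffBin, with a call counter added."""
    global calls
    calls += 1
    if n == 1 or k == 0:
        return 1
    return coeff_bin(n - 1, k) + coeff_bin(n - 1, k - 1)

for n in range(1, 9):
    for k in range(0, 9):
        calls = 0
        value = coeff_bin(n, k)
        # the returned value is the partial row sum of Pascal's triangle ...
        assert value == sum(comb(n - 1, j) for j in range(k + 1))
        # ... and the number of calls is 2 * value - 1 (value leaves, value - 1 additions)
        assert calls == 2 * value - 1
print("all counts match for n, k up to 8")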

Related

How do I find the space and time complexities for this code

fun root(n) =
  if n > 0 then
    let
      val x = root(n div 4)
    in
      if (2*x+1)*(2*x+1) > n then 2*x
      else 2*x+1
    end
  else 0;

fun isPrime(n, c) =
  if c <= root(n) then
    if n mod c = 0 then false
    else isPrime(n, c+1)
  else true;
The time complexity of the root(n) function here is O(log(n)): the number is divided by 4 at every step, and the code in the function itself is O(1). The time complexity of the isPrime function is O(sqrt(n)), as it runs iteratively from 1 to sqrt(n). The issue I face now is: what would be the order of both functions together? Would it just be O(sqrt(n)), or would it be O(sqrt(n)*log(n)), or something else altogether?
I'm new to big O notation in general. I have gone through multiple websites and YouTube videos trying to understand the concept, but I can't seem to calculate it with any confidence... If you could point me towards a few resources to help me practice calculating, it would be a great help.
root(n) is O(log₄(n)), yes.
isPrime(n,c) is O((√n - c) · log₄(n)):
You recompute root(n) in every step even though it never changes, causing the "... · log₄(n)".
You iterate c from some value up to root(n); while it is bounded above by root(n), it is not bounded below: c could start at 0, or at an arbitrarily large negative number, or at a positive number less than or equal to √n, or at a number greater than √n. If you assume that c starts at 0, then isPrime(n, c) is O(√n · log₄(n)).
You probably want to prove this using either induction or by reference to the Master Theorem. You may want to simplify isPrime so that it does not take c as an argument in its outer signature, and so that it does not recompute root(n) unnecessarily on every iteration.
For example:
fun isPrime n =
  let
    val sq = root n
    fun check c = c > sq orelse (n mod c <> 0 andalso check (c + 1))
  in
    check 2
  end
This isPrime(n) is O(√n + log₄(n)), or just O(√n) if we omit lower-order terms.
First it computes root n once at O(log₄(n)).
Then it loops from 2 up to root n once at O(√n).
Note that neither of us have proven anything formally at this point.
(Edit: Changed check (n, 0) to check (n, 2), since duh.)
(Edit: Removed n as argument from check since it never varies.)
(Edit: As you point out, Aryan, looping from 2 to root n is indeed O(√n) even though computing root n takes only O(log₄(n))!)
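To make the difference between the "· log₄(n)" and the "+ log₄(n)" versions concrete, here is a rough Python transcription (the names, the counter, and the test value 10007 are mine) that counts how often root is entered:

root_calls = 0

def root(n):
    """Integer square root via the n div 4 recursion from the question."""
    global root_calls
    root_calls += 1
    if n > 0:
        x = root(n // 4)
        return 2 * x if (2 * x + 1) * (2 * x + 1) > n else 2 * x + 1
    return 0

def is_prime_original(n, c=2):
    # recomputes root(n) on every step, as in the question
    if c <= root(n):
        return False if n % c == 0 else is_prime_original(n, c + 1)
    return True

def is_prime_once(n):
    # computes root(n) a single time, as in the simplified version above
    sq = root(n)
    c = 2
    while c <= sq:
        if n % c == 0:
            return False
        c += 1
    return True

for f in (is_prime_original, is_prime_once):
    root_calls = 0
    print(f.__name__, f(10007), "invocations of root:", root_calls)

On a prime input the first version enters root roughly √n times more often than the second, which is exactly the extra log₄(n) factor being multiplied in rather than added.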

Time complexity of naïve merge of two binary search trees

I saw a very short algorithm for merging two binary search trees. I was surprised by how simple and also how inefficient it is. But when I tried to work out its time complexity, I failed.
Let's say we have two immutable binary search trees (not balanced) that contain integers, and we want to merge them with the following recursive algorithm, given in pseudocode. The function insert is auxiliary:
function insert(Tree t, int elem) returns Tree:
  if elem < t.elem:
    return new Tree(t.elem, insert(t.leftSubtree, elem), t.rightSubtree)
  elseif elem > t.elem:
    return new Tree(t.elem, t.leftSubtree, insert(t.rightSubtree, elem))
  else:
    return t

function merge(Tree t1, Tree t2) returns Tree:
  if t1 or t2 is Empty:
    return chooseNonEmpty(t1, t2)
  else:
    return insert(merge(merge(t1.leftSubtree, t1.rightSubtree), t2), t1.elem)
I guess it's an exponential algorithm, but I cannot find an argument for that. What is the worst-case time complexity of this merge algorithm?
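For experimenting with the question, here is a rough Python transcription of the pseudocode (the Node class, the operation counter, and the build helper are mine):

import random

class Node:
    def __init__(self, elem, left=None, right=None):
        self.elem, self.left, self.right = elem, left, right

ops = 0  # counts every call to insert and merge

def insert(t, elem):
    global ops
    ops += 1
    if t is None:                # guard for the empty tree (not needed by merge, but safe)
        return Node(elem)
    if elem < t.elem:
        return Node(t.elem, insert(t.left, elem), t.right)
    if elem > t.elem:
        return Node(t.elem, t.left, insert(t.right, elem))
    return t

def merge(t1, t2):
    global ops
    ops += 1
    if t1 is None or t2 is None:
        return t1 if t2 is None else t2   # chooseNonEmpty
    return insert(merge(merge(t1.left, t1.right), t2), t1.elem)

def build(keys):
    t = None
    for k in keys:
        t = insert(t, k)
    return t

t1 = build(random.sample(range(1000), 12))
t2 = build(random.sample(range(1000), 12))
ops = 0
merge(t1, t2)
print("insert/merge calls for two 12-element trees:", ops)

Keep the trees small when playing with this; as the answers below argue, the operation count can blow up very quickly with the size of t1.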
Let's consider the worst case:
At each stage every tree is in the maximally imbalanced state, i.e. each node has at least one sub-tree of size 1.
In this extremal case the complexity of insert is easily shown to be Θ(n), where n is the number of elements in the tree, since the height is ~ n/2.
Based on the above constraint, we can deduce a recurrence relation for the time complexity of merge:
T(n, m) = T(n - 2, 1) + T(n - 1, m) + Θ(n + m)
where n, m are the sizes of t1, t2. It is assumed, without loss of generality, that the right sub-tree of t1 always contains a single element. The terms correspond to:
T(n - 2, 1): the inner call to merge on the sub-trees of t1
T(n - 1, m): the outer call to merge with t2
Θ(n + m): the final call to insert
To solve this, repeatedly substitute the recurrence into the T(n - 1, m) term and observe the pattern: only that chain of calls carries m, so after unrolling we are left with the sub-costs T(n - 2, 1), T(n - 3, 1), ..., T(0, 1), an insert cost of Θ((n - i) + m) at the i-th level of the unrolling, and a final T(1, m). The unrolling stops when the first argument reaches 1.
T(1, m) is just the insertion of an element into a tree of size m, which is obviously Θ(m) in our assumed setup.
Each sub-cost T(j, 1) satisfies T(j, 1) = T(j - 2, 1) + T(j - 1, 1) + Θ(j), a Fibonacci-like recurrence whose solution is Θ(φ^j), where φ = (1 + √5)/2 is the golden ratio. Summing everything up gives Θ(φ^n) for the sub-costs and Θ(n² + n·m) for the insert costs.
Therefore the absolute worst-case time complexity of merge is
T(n, m) = Θ(φ^n + n·m),
i.e. exponential in the size of the first tree.
Notes:
The order of the parameters matters. It is thus common to insert the smaller tree into the larger tree (in a manner of speaking).
Realistically you are extremely unlikely to have maximally imbalanced trees at every stage of the procedure. The average case will naturally involve semi-balanced trees.
The optimal case (i.e. always perfectly balanced trees) is much more complex (I am unsure that an analytical solution like the above exists; see gdelab's answer).
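To get a feel for these numbers, here is a small sketch that tabulates the recurrence above (with the simplifying assumptions that the hidden constant is 1, T(0, m) = 1 and T(1, m) = m):

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n, m):
    """Worst-case cost recurrence: T(n, m) = T(n-2, 1) + T(n-1, m) + (n + m)."""
    if n <= 0:
        return 1      # merging an empty tree is a single chooseNonEmpty (assumption)
    if n == 1:
        return m      # inserting one element into a tree of size m (assumption)
    return T(n - 2, 1) + T(n - 1, m) + (n + m)

for n in range(5, 31, 5):
    print(n, T(n, 1), T(n, n), round(T(n, 1) / T(n - 1, 1), 3))

Under these assumptions the T(n, 1) column behaves like a Fibonacci sequence (the printed ratio approaches the golden ratio, about 1.618), which is what makes the worst case exponential in the size of t1.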
EDIT: How to evaluate the exponential sum
Suppose we want to compute a sum whose terms have the form (a + b·i)·c^i, where a, b and c are positive constants. The trick is to change the base to e (the natural exponential constant): writing c^i = e^(i·ln c), we can treat ln c as a variable x, differentiate a geometric progression with respect to it, and then set x = ln c.
The geometric progression itself has a closed-form solution (a standard formula which is not difficult to derive), and differentiating that closed form with respect to x once per power of i gives an expression for each S_n; for the problem above we only need the first two powers.
The troublesome term then comes out exactly as Wolfram Alpha quotes it. As you can see, the basic idea behind this is simple, although the algebra is incredibly tedious.
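For reference, assuming the sum in question has the shape S = \sum_{i=0}^{n} (a + b\,i)\,c^i, the two closed forms that the differentiation trick produces are:

S_0 = \sum_{i=0}^{n} c^i = \frac{c^{\,n+1} - 1}{c - 1},
\qquad
S_1 = \sum_{i=0}^{n} i\,c^i = c\,\frac{dS_0}{dc} = \frac{n\,c^{\,n+2} - (n+1)\,c^{\,n+1} + c}{(c - 1)^2},

so that S = a\,S_0 + b\,S_1. (Writing c^i = e^{i\,x} with x = \ln c, differentiating \sum_i e^{i\,x} with respect to x brings down the factor of i.)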
It's quite hard to compute exactly, but it looks like it's not polynomially bounded in the worst case (this is not a complete proof however, you'd need a better one):
insert has complexity O(h) at worst, where h is the height of the tree (i.e. at least log(n), possibly n).
The complexity of merge() could then be of the form: T(n1, n2) = O(h) + T(n1 / 2, n1 / 2) + T(n1 - 1, n2)
Let's consider F(n) such that F(1) = T(1, 1) and F(n+1) = log(n) + F(n/2) + F(n-1). We can probably show that F(n) is smaller than T(n, n) (since F(n+1) contains T(n, n) instead of T(n, n+1)).
We have F(n)/F(n-1) = log(n)/F(n-1) + F(n/2)/F(n-1) + 1
Assume F(n)=Theta(n^k) for some k. Then F(n/2) / F(n-1) >= a / 2^k for some a>0 (that comes from the constants in the Theta).
Which means that (beyond a certain point n0) we always have F(n) / F(n-1) >= 1 + epsilon for some fixed epsilon > 0, which is not compatible with F(n)=O(n^k), hence a contradiction.
So F(n) is not Θ(n^k) for any k. Intuitively, you can see that the problem is probably not the Omega part but the big-O part, hence it's probably not O(n^k) either (though technically we used the Omega part here to get a). Since T(n, n) should be even bigger than F(n), T(n, n) should not be polynomial, and is maybe exponential...
But then again, this is not rigorous at all, so maybe I'm actually dead wrong...

Exponential complexity of a program

I have a program whose running time depends on two inputs of sizes m and n respectively. If T(m,n) is the running time of the program, it satisfies:
T(m,n)=T(m-1,n-1)+T(m-1,n)+T(m,n-1)+C
for a given constant C.
I was able to prove that the time complexity is in Omega(2^min(m,n)). However, it seems that it is actually Omega(2^max(m,n)) (this was just confirmed to me), but I can't find a formal proof. Does anyone have a trick?
Thanks in advance!
Off the top of my head:
I assume the recursion of T(m,n) stops when you reach T(0,x) or T(x,0).
You have 3 factors contributing to the complexity:
Factor 1: T(m-1, n-1) decreases both m and n, so its length until m or n becomes 0 is min(m,n) steps (see the note below).
Factor 2: T(m-1, n) decreases m only, so its length until m = 0 is m steps.
Factor 3: T(m, n-1) is the same, but until n = 0, which is n steps.
The overall complexity is the greatest of all the complexities, so it must be related to the maximum of
max(min(m,n) steps, m steps, n steps) = max(m,n)
rather than min(m,n).
I guess you can fill in the details. The constant C does not contribute, or more precisely contributes O(1), which is the lowest of all the complexities here.
Note about Factor 1: this factor also branches into m-1 down to 0 and n-1 down to 0, so strictly speaking its complexity will also be max(m,n).
First, you should define that, for all values of x, T(0, x) = T(x, 0) = 0 to have your recursion stop - or at least T(0, x) = T(x, 0) = C.
Second, "time complexity is in Omega(2^min(m,n))" must obviously be wrong. Set m=10000 and n=1. Now try to prove to me that the complexity is the same as m=1 and n=1. The T(m-1, n-1) and T(m, n-1) parts disappear quickly, but you still have to walk the T(m-1, n) part.
Third, this observation directly leads to 2^max(m, n). Try to find out the number of recursion steps for a few low values of m and n. Then try to make up a formula for the number of steps depending on m and n. (Hint: Fibonacci). When you have that formula, you're finished.
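Following that suggestion, here is a quick sketch that counts the recursive calls for small m and n (assuming the recursion bottoms out at T(0, x) and T(x, 0), each costing a single call):

from functools import lru_cache

@lru_cache(maxsize=None)
def calls(m, n):
    """Number of invocations of T(m, n), counting the current one."""
    if m == 0 or n == 0:
        return 1
    return 1 + calls(m - 1, n - 1) + calls(m - 1, n) + calls(m, n - 1)

for m in range(7):
    print([calls(m, n) for n in range(7)])

On the diagonal m = n the counts already grow faster than 2^m, which is the pattern to turn into a formula.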

Divide-and-conquer algorithms' property example

I'm having trouble with understanding the following property of divide-and-conquer algorithms.
A recursive method that divides a problem of size N into two independent
(nonempty) parts that it solves recursively calls itself less than N times.
The proof is
A recursive function that divides a problem of size N into two independent
(nonempty) parts that it solves recursively calls itself less than N times.
If the parts are one of size k and one of size N-k, then the total number of
recursive calls that we use is T(N) = T(k) + T(N-k) + 1, for N >= 1 with T(1) = 0.
The solution T(N) = N-1 is immediate by induction. If the sizes sum to a value
less than N, the proof that the number of calls is less than N-1 follows from the
same inductive argument.
I understand the formal proof above perfectly well. What I don't understand is how this property is connected to the examples that are usually used to demonstrate the divide-and-conquer idea, particularly the problem of finding the maximum:
static double max(double a[], int l, int r)
{
  if (l == r) return a[l];
  int m = (l+r)/2;
  double u = max(a, l, m);
  double v = max(a, m+1, r);
  if (u > v) return u; else return v;
}
In this case, when a consists of N = 2 elements, max(0,1) calls itself 2 more times, namely max(0,0) and max(1,1), which is equal to N. If N = 4, max(0,3) calls itself 2 times, and each of those calls in turn calls max 2 more times, so the total number of recursive calls is 6 > N. What am I missing?
You're not missing anything. The theorem and its proof are wrong. The error is here:
T(n) = T(k) + T(n-k) + 1
The constant term of 1 should be 2, because the function makes one recursive call for each of the two pieces into which it divides the problem. With that correction the recurrence solves to 2N - 2 recursive calls (2N - 1 invocations if you also count the initial call), rather than fewer than N. Hopefully, this error will be fixed in the next edition of your textbook, or at least in the errata.
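A quick way to convince yourself is to count the calls directly. A small sketch (the counter is added for instrumentation; max_rec is just the max function above):

calls = 0

def max_rec(a, l, r):
    """Return the maximum of a[l..r], counting every invocation."""
    global calls
    calls += 1
    if l == r:
        return a[l]
    m = (l + r) // 2
    u = max_rec(a, l, m)
    v = max_rec(a, m + 1, r)
    return u if u > v else v

for n in (2, 4, 8, 100):
    calls = 0
    max_rec(list(range(n)), 0, n - 1)
    print(n, calls)   # prints 2n - 1 invocations in total, i.e. 2n - 2 recursive calls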

How to calculate time complexity of the following algorithm

How do I calculate the time complexity of the following algorithm? I tried, but I am getting confused because of the recursive calls.
power(real x, positive integer n)
// comment: This algorithm returns x^n, taking x and n as input
{
  if n = 1 then
    return x;
  y = power(x, floor(n/2));
  if n is odd then
    return y*y*x   // comment: returning the product of y^2 and x
  else
    return y*y     // comment: returning y^2
}
Can someone explain this in simple steps?
To figure out the time complexity of a recursive function, you need to calculate the number of recursive calls that will be made in terms of some input variable N.
In this case, each call makes at most one recursive invocation. The number of invocations is on the order of O(log2 N), because each invocation halves N.
The rest of the body of the recursive function is O(1), because it does not depend on N. Therefore, your function has a time complexity of O(log2 N).
Each call is a constant-time operation, and the number of recursive calls equals the number of times you can halve n before reaching 1, which is at most log2(n). Therefore the worst-case running time is O(log2 n).
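For example, here is a direct Python transcription with a call counter added (floor(n/2) becomes integer division):

calls = 0

def power(x, n):
    """Compute x**n for a positive integer n by repeated squaring."""
    global calls
    calls += 1
    if n == 1:
        return x
    y = power(x, n // 2)
    if n % 2 == 1:
        return y * y * x   # n odd: x^n = (x^(n//2))^2 * x
    return y * y           # n even: x^n = (x^(n//2))^2

for n in (1, 2, 10, 1000, 10**6):
    calls = 0
    power(2, n)
    print(n, calls)   # roughly floor(log2(n)) + 1 calls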

Resources