Recursive function runtime - algorithm

1. Given that T(0) = 1 and T(n) = T([2n/3]) + c (where the brackets denote rounding 2n/3 down to an integer), what is the big-Θ bound for T(n)? Is it simply log base 3/2 of n? Please tell me how to get the result.
2. Given the code:
void mystery(int n) {
    if (n < 2)
        return;
    else {
        int i = 0;
        // four recursive calls (i = 1, 3, 5, 7), each on input of size n/3
        for (i = 1; i <= 8; i += 2) {
            mystery(n / 3);
        }
        // roughly n^2 iterations of constant-time work
        int count = 0;
        for (i = 1; i < n * n; i++) {
            count = count + 1;
        }
    }
}
According to the master theorem, the big-O bound is n^2, but my result is n^2 * log(n) (log base 3). I'm not sure of my result, and I don't really know how to handle the runtime of recursive functions in general. Is it always just a log factor?
In other words, is the recurrence for this code T(n) = 4*T(n/3) + n^2?
Cheers.

For (1), the recurrence solves to c log_{3/2}(n) + c, which is Θ(log n). To see this, you can use the iteration method to expand out a few terms of the recurrence and spot a pattern:
T(n) = T(2n/3) + c
     = T(4n/9) + 2c
     = T(8n/27) + 3c
     ...
     = T((2/3)^k * n) + kc
Assuming that T(1) = c and solving for the choice of k that makes the expression inside the parentheses equal to 1, we get that
1 = (2/3)^k * n
(3/2)^k = n
k = log_{3/2}(n)
Plugging in this choice of k into the above expression gives the final result.
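If you want a sanity check on that answer, here is a small C sketch (my addition, with c = 1 and T(1) = c assumed as in the derivation above) that iterates T(n) = T(floor(2n/3)) + c directly and compares it to c * log_{3/2}(n) + c:

#include <stdio.h>
#include <math.h>

/* Iterate T(n) = T(2n/3) + c down to T(1) = c, with c = 1 assumed. */
static double T(long n, double c) {
    double total = c;           /* accounts for T(1) = c */
    while (n > 1) {
        total += c;             /* one +c per level of the recursion */
        n = (2 * n) / 3;        /* integer division, i.e. floor(2n/3) */
    }
    return total;
}

int main(void) {
    double c = 1.0;
    for (long n = 10; n <= 1000000; n *= 10) {
        double predicted = c * (log((double)n) / log(1.5)) + c;
        printf("n=%8ld  T(n)=%6.0f  c*log_{3/2}(n)+c=%8.2f\n",
               n, T(n, c), predicted);
    }
    return 0;
}

The iterated value tracks the logarithmic estimate up to the effect of the floor, which only makes T(n) smaller.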
For (2), you have the recurrence relation
T(n) = 4T(n/3) + n^2
Using the master theorem with a = 4, b = 3, and d = 2, we see that log_b(a) = log_3(4) < d, so this solves to O(n^2). Here's one way to see this. At the top level, you do n^2 work. At the layer below that, you have four calls each doing n^2/9 work, so you do 4n^2/9 work, less than the top layer. The layer below that does 16 calls that each do n^2/81 work for a total of 16n^2/81 work, again much less work than the layer above. Overall, each layer does exponentially less work than the layer above it, so the top layer ends up dominating all the other ones asymptotically.
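To make the "each layer does exponentially less work" argument concrete, here is a small C sketch (my illustration; the root size n = 10^6 is an arbitrary choice) that sums the work per layer of the recursion tree for T(n) = 4T(n/3) + n^2:

#include <stdio.h>

int main(void) {
    double n = 1e6;                 /* problem size at the root (assumed) */
    double size = n, calls = 1.0, total = 0.0;
    for (int layer = 0; size >= 1.0; layer++) {
        double work = calls * size * size;   /* each call does size^2 work */
        total += work;
        printf("layer %2d: %10.0f calls, work = %.3g\n", layer, calls, work);
        calls *= 4.0;               /* 4 recursive calls per node */
        size /= 3.0;                /* each on a problem of size n/3 */
    }
    printf("total work = %.3g, n^2 = %.3g, ratio = %.3f\n",
           total, n * n, total / (n * n));
    return 0;
}

The total settles near (9/5) * n^2, i.e. a constant factor times the work of the top layer, which is exactly the geometric decay described above.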

Let's do some complexity analysis, and we'll find that the asymptotic behavior of T(n) depends on the constants of the recursion.
Given T(n) = A T(n*p) + C, with A, C > 0 and 0 < p < 1, let's first try to prove T(n) = O(n log n). We try to find D such that for large enough n
T(n) <= D(n * log(n))
This yields
A * D(n*p * log(n*p)) + C <= D*(n * log(n))
Looking at the higher order terms, this results in
A*D*p <= D
So, if A*p <= 1, this works, and T(n)=O(n log n).
In the special case that A<=1 we can do better, and prove that T(n)=O(log n):
T(n) <= D log(n)
Yields
A * D(log(n*p)) + C <= D*(log(n))
Looking at the higher order terms, this results in
A * D * log(n) + C + A * D * log(p) <= D * log(n)
Which is true for large enough D and n since A<=1 and log(p)<0.
Otherwise, if A*p>1, let's find the minimal value of q such that T(n)=O(n^q). We try to find the minimal q such that there exists D for which
T(n) <= D n^q
This yields
A * D p^q n^q + C <= D*n^q
Looking at the higher order terms, this results in
A*D*p^q <= D
The minimal q that satisfies this is defined by
A*p^q = 1
So we conclude that T(n)=O(n^q) for q = - log(A) / log(p).
Now, given T(n) = A T(n*p) + B n^a + C, with A, B, C > 0 and 0 < p < 1, let's try to prove that T(n) = O(n^q) for some q. We try to find the minimal q >= a such that for some D > 0,
T(n) <= D n^q
This yields
A * D n^q p^q + B n^a + C <= D n^q
Trying q==a, this will work only if
ADp^a + B <= D
I.e. T(n)=O(n^a) if Ap^a < 1.
Otherwise we get to Ap^q = 1 as before, which means T(n)=O(n^q) for q = - log(A) / log(p).
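If you want to see the exponent q = -log(A)/log(p) emerge numerically, the following C sketch (my addition; the constants A = 3, p = 1/2, B = a = C = 1 and the base case T(n) = C for n < 1 are arbitrary example assumptions) iterates the recurrence and prints the ratio T(2n)/T(n), which should approach 2^q:

#include <stdio.h>
#include <math.h>

/* Iterate T(n) = A*T(n*p) + B*n^a + C, with T(n) = C for n < 1 (assumed). */
static double T(double n, double A, double p, double B, double a, double C) {
    double total = 0.0, weight = 1.0;
    while (n >= 1.0) {
        total += weight * (B * pow(n, a) + C);
        weight *= A;
        n *= p;
    }
    return total + weight * C;      /* base case contribution */
}

int main(void) {
    double A = 3, p = 0.5, B = 1, a = 1, C = 1;   /* example constants */
    double q = -log(A) / log(p);                  /* predicted exponent */
    printf("predicted q = %.4f, so T(2n)/T(n) should approach %.4f\n",
           q, pow(2.0, q));
    for (double n = 1e3; n <= 1e9; n *= 100) {
        printf("n = %.0e  T(2n)/T(n) = %.4f\n",
               n, T(2 * n, A, p, B, a, C) / T(n, A, p, B, a, C));
    }
    return 0;
}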

Related

Time complexity of the following recurrence equation?

Hi all I'm having problems calculating the complexity of the following recurrence equation:
T(n) = O(1),                        if n <= 2
T(n) = 2*T(n^(1/2)) + O(log n),     if n > 2
I arrived at a tentative answer of O(2^n * n log n). If anyone has any clue I'd be happy. Thank you.
Suppose for now that n > 2 is a power of two, so that you can write n = 2^m. Also, let's write the constant in your O(log(n)) term explicitly as c*log2(n).
Then, unravelling the recursion gives us:
T(2^m) <= 2*T((2^m)^(1/2)) + c*log2(2^m)
= 2*T(2^(m/2)) + c*m
<= 2*( 2*T((2^(m/2))^(1/2)) + c*log2(2^(m/2)) ) + c*m
= 4*T(2^(m/4)) + 2*c*m
<= 4*( 2*T((2^(m/4))^(1/2)) + c*log2(2^(m/4)) ) + 2*c*m
= 8*T(2^(m/8)) + 3*c*m
<= ...
= (2^log2(m))*T(2^1) + log2(m)*c*m
= m*T(2) + c*m*log2(m)
= log2(n)*T(2) + c*log2(n)*log2(log2(n))
= O(log2(n)*log2(log2(n)))
The term log2(m) comes from the fact that we divide m by two at each new recursion level, and so it will take (at most) log2(m) divisions before m <= 1.
Now if n is not a power of two, you can notice that there exists some number r which is a power of two such that n <= r < 2*n. And you can then write T(n) <= T(r) = O(log2(r)*log2(log2(r))) = O(log2(2*n)*log2(log2(2*n))) = O(log2(n)*log2(log2(n))).
So the overall answer is
T(n) = O(log2(n)*log2(log2(n)))
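As a quick numerical check (not part of the original answer; the O(log n) term is taken to be exactly log2(n) and the base case T(n) = 1 for n <= 2 is assumed), this C sketch evaluates the recurrence directly and compares it against log2(n)*log2(log2(n)):

#include <stdio.h>
#include <math.h>

/* T(n) = 2*T(sqrt(n)) + log2(n) for n > 2, T(n) = 1 otherwise (assumed constants). */
static double T(double n) {
    if (n <= 2.0)
        return 1.0;
    return 2.0 * T(sqrt(n)) + log2(n);
}

int main(void) {
    for (double n = 16; n <= 1e18; n *= 1e3) {
        double bound = log2(n) * log2(log2(n));
        printf("n = %.0e  T(n) = %8.2f  log2(n)*log2(log2(n)) = %8.2f\n",
               n, T(n), bound);
    }
    return 0;
}

The two columns stay within a constant factor of each other, which is what the O-bound claims.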

Is it always possible to find a constant K to prove big O or big Omega?

So I have to figure out if n^(1/2) is Big Omega of log(n)^3. I am pretty sure that it is not, since n^(1/2) is not even in the bounds of log(n)^3; but I do not know how to prove it without limits. I know the definition without limits is
g(n) is big Omega of f(n) iff there is a constant c > 0 and an
integer constant n0 >= 1 such that f(n) >= c*g(n) for n >= n0
But can I really always find a constant c that will satisfy this?
For instance, for log(n)^3 >= c*n^(1/2): if c = 0.1 and n = 10, then we get 1 >= 0.316.
When comparing sqrt(n) with ln(n)^3 what happens is that
ln(n)^3 <= sqrt(n) ; for all n >= N0
How do I know? Because I printed out enough samples of both expressions to convince myself which one dominated the other.
To see this more formally, let's first assume that we have already found N0 (we will do that later) and let's prove by induction that if the inequality holds for n >= N0, it will also hold for n+1.
Note that I'm using ln in base e for the sake of simplicity.
Equivalently (taking cube roots, ln(n)^3 <= n^(1/2) holds iff ln(n) <= n^(1/6)), we have to show that
ln(n + 1) <= (n + 1)^(1/6)
Now
ln(n + 1) = ln(n + 1) - ln(n) + ln(n)
= ln(1 + 1/n) + ln(n)
<= ln(1 + 1/n) + n^(1/6) ; inductive hypothesis
From the definition of e we know
e = limit (1 + 1/n)^n
taking logarithms
1 = limit n*ln(1 + 1/n)
Therefore, there exists N0 such that
n*ln(1 + 1/n) <= 2 ; for all n >= N0
so
ln(1 + 1/n) <= 2/n
Using this above, we get
ln(n + 1) <= 2/n + n^(1/6)
<= (n + 1)^(1/6)
where the last inequality holds for all large enough n, because (n + 1)^(1/6) - n^(1/6) is at least 1/(6*(n + 1)^(5/6)), which exceeds 2/n once n is large enough (enlarge N0 if necessary).
as we wanted.
We are now left with the task of finding some N0 such that
ln(N0) <= N0^(1/6)
let's take N0 = e^(6k) for some value of k that we are about to find. We get
ln(N0) = 6k
N0^(1/6) = e^k
so, we only need to pick k such that 6k < e^k, which is possible because the right hand side grows much faster than the left.
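For completeness, here is the "print out sufficient samples" step as a tiny C sketch (my addition); it tabulates ln(n)^3 against sqrt(n) so you can see where the square root takes over:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Tabulate ln(n)^3 against sqrt(n); sqrt(n) eventually dominates. */
    for (double n = 10; n <= 1e12; n *= 100) {
        double lncubed = pow(log(n), 3.0);
        printf("n = %.0e   ln(n)^3 = %10.1f   sqrt(n) = %12.1f\n",
               n, lncubed, sqrt(n));
    }
    return 0;
}

The crossover happens somewhere in the tens of millions, which is one reason a table like this is more convincing than plugging in a few small values of n.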

what is the best way to argue about the big O or about theta?

We're asked to show that $n + 4\lfloor\sqrt{n}\rfloor = O(n)$ with a good argument and a logical build-up for it, but it's not said what a good argument would look like. I know that $2n + 4\sqrt{n}$ is always bigger for $n = 1$, but I wouldn't know how to argue it or how to build it logically, since I just thought about it and it happened to be true. Can someone help with this example so I would know how to do it?
You should look at the following page: https://en.wikipedia.org/wiki/Big_O_notation
For big-O notation we would say that a function such as X^3 + X^2 + 100X is O(X^3). The idea is that as X grows very large, the X^3 term becomes the dominant factor in the expression.
You can apply the same logic to your expression: which term becomes dominant?
If this is not clear, you should try to plot both terms and see how they scale. That may make it clearer.
A proof is a convincing, logical argument. When in doubt, a good way to write a convincing, logical argument is to use an accepted template for your argument. Then, others can simply check that you have used the template correctly and, if so, the validity of your argument follows.
A useful template for showing asymptotic bounds is mathematical induction. To use this, you show that what you are trying to prove is true for specific simple cases, called base cases, then you assume it is true in all cases up to a certain size (the induction hypothesis), and you finish the proof by showing the hypothesis implies the claim is true for cases of the very next size. If done correctly, you will have shown the claim (parameterized by a natural number n) is true for a fixed n and for all larger n. This is exactly what is required for proving asymptotic bounds.
In your case: we want to show that n + 4 * sqrt(n) = O(n). Recall that the (one?) formal definition of big-Oh is the following:
A function f is bound from above by a function g, written f(n) = O(g(n)), if there exist constants c > 0 and n0 > 0 such that for all n > n0, f(n) <= c * g(n).
Consider the case n = 0. We have n + 4 * sqrt(n) = 0 + 4 * 0 = 0 <= 0 = c * 0 = c * n for any constant c. If we now assume the claim is true for all n up to and including k, can we show it is true for n = k + 1? This would require (k + 1) + 4 * sqrt(k + 1) <= c * (k + 1). There are now two cases:
k + 1 is not a perfect square. Since we are doing analysis of algorithms it is implied that we are using integer math, so sqrt(k + 1) = sqrt(k) in this case. Therefore, (k + 1) + 4 * sqrt(k + 1) = (k + 4 * sqrt(k)) + 1 <= (c * k) + 1 <= c * (k + 1) by the induction hypothesis provided that c > 1.
k + 1 is a perfect square. Since we are doing analysis of algorithms it is implied that we are using integer math, so sqrt(k + 1) = sqrt(k) + 1 in this case. Therefore, (k + 1) + 4 * sqrt(k + 1) = (k + 4 * sqrt(k)) + 5 <= (c * k) + 5 <= c * (k + 1) by the induction hypothesis provided that c >= 5.
Because these two cases cover all possibilities and in each case the claim is true for n = k + 1 when we choose c >= 5, we see that n + 4 * sqrt(n) <= 5 * n for all n >= 0 = n0. This concludes the proof that n + 4 * sqrt(n) = O(n).
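If you want to double-check the constant from the proof, this C sketch (my addition) verifies n + 4*floor(sqrt(n)) <= 5*n over a range of n, using integer square roots as in the argument above:

#include <stdio.h>
#include <math.h>

/* Integer square root: largest m with m*m <= n. */
static long isqrt(long n) {
    long m = (long)sqrt((double)n);
    while (m * m > n) m--;
    while ((m + 1) * (m + 1) <= n) m++;
    return m;
}

int main(void) {
    int ok = 1;
    for (long n = 0; n <= 1000000; n++) {
        if (n + 4 * isqrt(n) > 5 * n) {      /* check f(n) <= c*n with c = 5 */
            printf("counterexample at n = %ld\n", n);
            ok = 0;
        }
    }
    if (ok)
        printf("n + 4*floor(sqrt(n)) <= 5*n holds for all n in [0, 1000000]\n");
    return 0;
}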

Recurrence relation for given algorithm?

int print4Subtree(struct Node *root) {
    if (root == NULL)
        return 0;
    int l = print4Subtree(root->left);
    int r = print4Subtree(root->right);
    if ((l + r + 1) == 4)
        printf("%d ", root->data);
    return (l + r + 1);
}
This algorithm/code finds the number of subtrees having exactly 4 nodes in a binary tree; it works in a bottom-up manner.
I know the time complexity of this code would be O(n), and the space complexity is O(log n), since it uses recursion.
What would the recurrence relation for this code be?
I tried T(n) = 2T(n-1) + 1, which is obviously wrong!
You can only talk about recurrence relations in terms of n alone in cases where you know more about the structure of the tree, for instance:
Case 1: Every node has only one child meaning
T(n) = T(0) + T(n-1) + k.
Case 2: Subtrees at any level are balanced so that
T(n) = 2 T((n-1)/2) + k.
Both of these will result in O(n), but these two cases are only a very select minority of possible trees. For a more universal approach you have to use a formula like T(n) = T(a) + T(b), where a and b are an arbitrary division into sub-problems resulting from the structure of your tree. You can still establish results from this kind of formula using strong induction.
The following is the exact formula and approach I would use:
T(n) = n*k + m_n*c, where m_n <= n + 1. (Note: I am using k for the overhead of recursive steps and c for the overhead of base/null steps.)
Base case (n = 0):
For a null node T(0) = c, so T(n) = k*n + m_n*c,
where m_n = 1 <= n + 1 = 1.
Inductive step (T(x) = x*k + m_x*c for all x < n):
The subtree of size n has two subtrees of sizes a and b (a or b may be 0) such that n = a + b + 1.
T(n) = T(a) + T(b) + k = a*k + m_a*c + b*k + m_b*c + k = (a + b + 1)*k + (m_a + m_b)*c = n*k + m_n*c,
where m_n = m_a + m_b <= (a + 1) + (b + 1) = n + 1.
The reason for using m_n is merely a formality to make the proof smoother, as the exact number of null cases is what is actually affected by the structure of the tree (in the former case 2, it is log n). So T(n) is at best O(n) because of the n*k term, and can be no worse than O(n) because of the bound on m_n*c.
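If it helps to see the function run, here is a minimal driver (my sketch; the struct Node layout is assumed from the fields used in the code, and it should be compiled together with the print4Subtree definition above) that builds a 7-node tree and calls the function:

#include <stdio.h>
#include <stdlib.h>

/* Node layout assumed from the usage root->left, root->right, root->data. */
struct Node {
    int data;
    struct Node *left, *right;
};

int print4Subtree(struct Node *root);   /* the function from the question */

static struct Node *newNode(int data) {
    struct Node *n = malloc(sizeof *n);
    n->data = data;
    n->left = n->right = NULL;
    return n;
}

int main(void) {
    /* A 7-node tree; subtrees with exactly 4 nodes get their root printed. */
    struct Node *root = newNode(1);
    root->left = newNode(2);
    root->right = newNode(3);
    root->left->left = newNode(4);
    root->left->right = newNode(5);
    root->left->left->left = newNode(6);
    root->right->right = newNode(7);
    print4Subtree(root);    /* the subtree rooted at 2 has 4 nodes: prints "2 " */
    printf("\n");
    return 0;
}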

Can not figure out complexity of this recurrence

I am refreshing on the Master Theorem a bit and I am trying to figure out the running time of an algorithm that solves a problem of size n by recursively solving 2 subproblems of size n-1 and combining the solutions in constant time.
So the formula is:
T(N) = 2T(N - 1) + O(1)
But I am not sure how to formulate the condition of the master theorem.
I mean, we don't have T(N/b), so is b of the master theorem formula b = N/(N-1) in this case?
If so, since obviously a > b^k (with k = 0), the bound would be O(N^z) where z = log of 2 in base N/(N-1). How can I make sense of this, assuming I am right so far?
Ah, enough with the hints. The solution is actually quite simple: z-transform both sides, group the terms, and then inverse z-transform to get the solution.
first, look at the problem as
x[n] = a x[n-1] + c
apply z transform to both sides (there are some technicalities with respect to the ROC, but let's ignore that for now)
X(z) = (a X(z) / z) + (c z / (z-1))
solve for X(z) to get
X(z) = c z^2 / [(z - 1) * (z-a)]
now observe that this formula can be re-written as:
X(z) = r z / (z-1) + s z / (z-a)
where r = c/(1-a) and s = - a c / (1-a)
Furthermore, observe that
X(z) = P(z) + Q(z)
where P(z) = r z / (z-1) = r / (1 - (1/z)), and Q(z) = s z / (z-a) = s / (1 - a (1/z))
apply inverse z-transform to get that:
p[n] = r u[n]
and
q[n] = s exp(log(a)n) u[n]
where log denotes the natural log and u[n] is the unit (Heaviside) step function (i.e. u[n]=1 for n>=0 and u[n]=0 for n<0).
Finally, by linearity of z-transform:
x[n] = (r + s exp(log(a) n))u[n]
where r and s are as defined above.
so relabeling back to your original problem,
T(n) = a T(n-1) + c
then
T(n) = (c/(a-1))(-1+a exp(log(a) n))u[n]
where exp(x) = e^x, log(x) is the natural log of x, and u[n] is the unit step function.
What does this tell you?
Unless I made a mistake, T grows exponentially with n. This is effectively an exponentially increasing function under the reasonable assumption that a > 1. The exponent is governed by a (more specifically, by the natural log of a).
One more simplification: note that exp(log(a) n) = exp(log(a))^n = a^n:
T(n) = (c/(a-1))(-1+a^(n+1))u[n]
so O(a^n) in big O notation.
And now here is the easy way:
put T(0) = 1
T(n) = a T(n-1) + c
T(1) = a * T(0) + c = a + c
T(2) = a * T(1) + c = a*a + a * c + c
T(3) = a * T(2) + c = a*a*a + a * a * c + a * c + c
....
note that this creates a pattern. Specifically:
T(n) = a^n + c * sum(a^j, j=0,...,n-1)
put c = 1 gives
T(n) = sum(a^j, j=0,...,n)
this is a geometric series, which evaluates to:
T(n) = (1-a^(n+1))/(1-a)
= (1/(1-a)) - (a/(1-a)) a^n
= (1/(a-1))(-1 + a^(n+1))
for n>=0.
Note that this formula is the same as given above for c=1 using the z-transform method. Again, O(a^n).
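As a concrete check of that closed form (my addition, with a = 2, c = 1 and T(0) = 1 as above), the following C sketch iterates T(n) = 2*T(n-1) + 1 and compares it with (1/(a-1))*(a^(n+1) - 1) = 2^(n+1) - 1:

#include <stdio.h>

int main(void) {
    /* a = 2, c = 1, T(0) = 1; the closed form is 2^(n+1) - 1. */
    unsigned long long t = 1;
    for (int n = 1; n <= 20; n++) {
        t = 2 * t + 1;                                /* T(n) = 2*T(n-1) + 1 */
        unsigned long long closed = (1ULL << (n + 1)) - 1;
        printf("n = %2d  T(n) = %10llu  2^(n+1) - 1 = %10llu\n", n, t, closed);
    }
    return 0;
}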
Don't even think about the Master Theorem. You can only use the Master Theorem when the recurrence has the general form T(n) = aT(n/b) + f(n) with b > 1.
Instead, think of it this way. At each level you make two recursive calls, each of which decrements the input size n by 1, and each call costs a constant O(1). The input size keeps decrementing until it reaches 1, so the recursion is n levels deep and the number of calls doubles at every level, giving about 2^n calls in total.
Adding up the constant cost of all of those calls, this would take O(2^n).
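You can also just count the calls. The sketch below (my addition) instruments a dummy recursion with the same shape as T(n) = 2T(n-1) + O(1) and shows the call count growing like 2^n:

#include <stdio.h>

static unsigned long long calls = 0;

/* Same recursion shape as T(n) = 2T(n-1) + O(1): two calls on size n-1. */
static void solve(int n) {
    calls++;
    if (n <= 1)
        return;
    solve(n - 1);
    solve(n - 1);
}

int main(void) {
    for (int n = 1; n <= 20; n++) {
        calls = 0;
        solve(n);
        printf("n = %2d  calls = %10llu\n", n, calls);   /* grows like 2^n */
    }
    return 0;
}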
Looks like you can't formulate this problem in terms of the Master Theorem.
A good start is to draw the recursion tree to understand the pattern, then prove it with the substitution method. You can also expand the formula a couple of times and see where it leads.
See also this question which solves 2 subproblems instead of a:
Time bound for recursive algorithm with constant combination time
Maybe you could think of it this way:
when
n = 1, T(1) = 1
n = 2, T(2) = 2
n = 3, T(3) = 4
n = 4, T(4) = 8
n = 5, T(5) = 16
It is easy to see that this is a geometric series 1 + 2 + 4 + 8 + 16 + ..., the sum of which is
first_term * (ratio^n - 1)/(ratio - 1). For this series it is
1 * (2^n - 1)/(2 - 1) = 2^n - 1.
The dominating term here is 2^n, therefore the function belongs to Theta(2^n). You can verify this by checking that lim(n->inf) [2^n / (2^n - 1)] is a positive constant.
Therefore the function belongs to Big Theta(2^n).
