So I'm preparing for the Algorithms exam and I don't know how to solve this recurrence: T(n) = T(6n/5) + 1. Since b = 5/6 < 1, the Master theorem cannot be applied.
I hope that someone can give me a hint on how to solve this. :)
Given just that recurrence relation (and no additional information like T(x) = 1 when x > 100), an algorithm with time complexity as described by the relation will never terminate, as the amount of work increases at each call.
T(n) = T(6n/5) + 1
= T(36n/25) + 2
= T(216n/125) + 3
= ...
You can see that the amount of work increases each call, and that it's not going to have a limit as to how much it increases by. As a result, there is no bound on the time complexity of the function.
We can even (informally) argue that such an algorithm cannot exist: increasing the size of the input by a factor of 1.2 each call requires producing at least 0.2n additional input, which is clearly Ω(n) work - but the actual cost at each step is claimed to be 1, i.e. O(1). So it's impossible for an algorithm described by this exact recurrence to exist (though it's fine for recurrences like T(n) = T(6n/5) + n).
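If you want to see the blow-up concretely, here is a small Python sketch (the cutoff of 10 expansion steps is just an arbitrary choice so the loop stops; nothing here terminates on its own):

def unroll(n, steps=10):
    # Expand T(n) = T(6n/5) + 1 a few times: the subproblem keeps growing,
    # while each expansion only adds 1 to the accumulated cost.
    cost = 0
    for _ in range(steps):
        print(f"cost {cost}: next call is T({n:.2f})")
        n = 6 * n / 5   # the argument *grows* by a factor of 1.2
        cost += 1       # ... but we only pay O(1) per call

unroll(10)

The printed arguments grow geometrically while the cost only grows by 1 per line, which is exactly the mismatch described above.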
Solve: T(n)=T(n-1)+T(n/2)+n.
I tried solving this using recursion trees. There are two branches, T(n-1) and T(n/2). The T(n-1) branch goes to a greater depth, so we get O(2^n). Is this idea correct?
This is a very strange recurrence for a CS class, because from one point of view T(n) = T(n-1) + T(n/2) + n is bigger than T(n) = T(n-1) + n, which is O(n^2).
But from another point of view, the functional equation has an exact solution: T(n) = -2(n + 1), that is, T(n) = -2n - 2. You can easily see that this is an exact solution by substituting it back into the equation: -2(n + 1) = -2n + (-(n + 2)) + n. I am not sure whether this is the only solution.
Here is how I got it: T(n) = T(n-1) + T(n/2) + n. Because you calculate things for very big n, n-1 is almost the same as n, so you can rewrite the equation as T(n) = T(n) + T(n/2) + n, which gives T(n/2) + n = 0, i.e. T(n) = -2n, so it is linear. This was counterintuitive to me (the minus sign), but armed with this guess, I tried T(n) = -2n + a and found a = -2.
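Since this is just a guess-and-check argument, a quick Python check (treating n/2 as a real number, with no rounding) confirms that a = -2 really does make T(n) = -2n - 2 an exact solution:

def T(n):
    return -2 * n - 2   # candidate exact solution: -2n + a with a = -2

for n in range(1, 1000):
    # T(n) = T(n-1) + T(n/2) + n should hold exactly
    assert T(n) == T(n - 1) + T(n / 2) + n
print("T(n) = -2n - 2 satisfies the equation for n = 1..999")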
I believe you are right. The recurrence relation always splits into two parts, namely T(n-1) and T(n/2). Looking at these two, it is clear that n-1 decreases in value more slowly than n/2; in other words, you will have more branches from the n-1 portion of the tree. Despite this, when considering big-O it is useful to consider the worst-case scenario, which here is that both sides of the tree decrease by n-1 (since this decreases more slowly, you get more branches). In all, you would need to split the relation in two a total of n times, hence you are right to say O(2^n).
Your reasoning is correct, but you give away far too much. (For example, it is also correct to say that 2n^3+4=O(2^n), but that’s not as informative as 2n^3+4=O(n^3).)
The first thing we want to do is get rid of the inhomogeneous term n. This suggests that we may look for a solution of the form T(n)=an+b. Substituting that in, we find:
an+b = a(n-1)+b + an/2+b + n
which reduces to
0 = (a/2+1)n + (b-a)
implying that a=-2 and b=a=-2. Therefore, T(n)=-2n-2 is a solution to the equation.
We now want to find other solutions by subtracting off the solution we’ve already found. Let’s define U(n)=T(n)+2n+2. Then the equation becomes
U(n)-2n-2 = U(n-1)-2(n-1)-2 + U(n/2)-2(n/2)-2 + n
which reduces to
U(n) = U(n-1) + U(n/2).
U(n)=0 is an obvious solution to this equation, but how do the non-trivial solutions to this equation behave?
Let’s assume that U(n)∈Θ(n^k) for some k>0, so that U(n)=cn^k+o(n^k). This makes the equation
cn^k+o(n^k) = c(n-1)^k+o((n-1)^k) + c(n/2)^k+o((n/2)^k)
Now, (n-1)^k=n^k+Θ(n^{k-1}), so that the above becomes
cn^k+o(n^k) = cn^k+Θ(cn^{k-1})+o(n^k+Θ(n^{k-1})) + cn^k/2^k+o((n/2)^k)
Absorbing the lower order terms and subtracting the common cn^k, we arrive at
o(n^k) = cn^k/2^k
But this is false because the right hand side grows faster than the left. Therefore, U(n-1)+U(n/2) grows faster than U(n), which means that U(n) must grow faster than our assumed Θ(n^k). Since this is true for any k, U(n) must grow faster than any polynomial.
A good example of something that grows faster than any polynomial is an exponential function. Consequently, let’s assume that U(n)∈Θ(c^n) for some c>1, so that U(n)=ac^n+o(c^n). This makes the equation
ac^n+o(c^n) = ac^{n-1}+o(c^{n-1}) + ac^{n/2}+o(c^{n/2})
Rearranging and using some order of growth math, this becomes
c^n = o(c^n)
This is false (again) because the left hand side grows faster than the right. Therefore,
U(n) grows faster than U(n-1)+U(n/2), which means that U(n) must grow slower than our assumed Θ(c^n). Since this is true for any c>1, U(n) must grow more slowly than any exponential.
This puts us into the realm of quasi-polynomials, where ln U(n)∈O(log^c n), and subexponentials, where ln U(n)∈O(n^ε). Either of these means that we want to look at L(n):=ln U(n), where the previous paragraphs imply that L(n)∈ω(ln n)∩o(n). Taking the natural log of our equation, we have
ln U(n) = ln( U(n-1) + U(n/2) ) = ln U(n-1) + ln(1+ U(n/2)/U(n-1))
or
L(n) = L(n-1) + ln( 1 + e^{-L(n-1)+L(n/2)} ) = L(n-1) + e^{-(L(n-1)-L(n/2))} + Θ(e^{-2(L(n-1)-L(n/2))})
So everything comes down to: how fast does L(n-1)-L(n/2) grow? We know that L(n-1)-L(n/2)→∞, since otherwise L(n)∈Ω(n). And it’s likely that L(n)-L(n/2) will be just as useful, since L(n)-L(n-1)∈o(1) is much smaller than L(n-1)-L(n/2).
Unfortunately, this is as far as I’m able to take the problem. I don’t see a good way to control how fast L(n)-L(n/2) grows (and I’ve been staring at this for months). The only thing I can end with is to quote another answer: “a very strange recursion for a CS class”.
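One thing that is cheap to do is to look at the recurrence numerically. The equation only makes sense on integers if we round n/2, so the sketch below uses U(n) = U(n-1) + U(floor(n/2)) with U(1) = 1; both the rounding and the base case are assumptions I'm making, so take the constants with a grain of salt:

import math

def u_table(limit):
    # U(n) = U(n-1) + U(floor(n/2)), with U(1) = 1
    U = [0, 1]   # U[0] is a dummy entry
    for n in range(2, limit + 1):
        U.append(U[n - 1] + U[n // 2])
    return U

U = u_table(1 << 16)
for k in range(4, 17, 4):
    n = 1 << k
    L = math.log(U[n])   # L(n) = ln U(n)
    # superpolynomial: L/ln n keeps growing; subexponential: L/n keeps shrinking
    print(f"n = 2^{k:2d}: ln U = {L:8.2f}, ln U / ln n = {L / math.log(n):6.2f}, ln U / n = {L / n:.6f}")

Consistent with the analysis above, ln U / ln n keeps increasing (faster than any polynomial) while ln U / n keeps decreasing (slower than any exponential).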
I think we can look at it this way:
T(n)=2T(n/2)+n < T(n)=T(n−1)+T(n/2)+n < T(n)=2T(n−1)+n
If we apply the Master theorem to the left-hand recurrence (the right-hand one simply unrolls to Θ(2^n)), then:
Θ(n log n) < Θ(T(n)) < Θ(2^n)
Remember that T(n) = T(n-1) + T(n/2) + n being (asymptotically) bigger than T(n) = T(n-1) + n only applies for functions which are asymptotically positive. In that case, we have T = Ω(n^2).
Note that T(n) = -2(n + 1) is an exact solution of the functional equation, but it doesn't interest us, since it is not asymptotically positive, so O-notation has no meaningful application to it.
You can also easily check that T(n) = O(2^n). (Refer to yyFred's solution, if needed.)
If you try using the definition of O with functions of the form n^a (lg n)^b, with constants a >= 2 and b > 0, you will see by the substitution method that these are not possible solutions either.
In fact, the only kind of function that allows a proof by the substitution method is an exponential; but we know that this recurrence doesn't grow as fast as T(n) = 2T(n-1) + n, so if T(n) = O(a^n), we should be able to take a < 2.
Assume that T(m) <= c(a^m), for some constant c, real and positive. Our hypothesis is that this relation is valid for all m < n. Trying to prove this for n, we get:
T(n) <= (1/a+1/a^(n/2))c(a^n) + n
We can get rid of the n term easily by strengthening the hypothesis with a lower-order term. What is important here is that:
1/a + 1/a^(n/2) <= 1
or equivalently, multiplying through by a^(n/2+1):
a^(n/2+1) - a^(n/2) - a >= 0
Changing variables (N = n/2):
a^(N+1)-a^N-a >= 0
We want to find as tight a bound as possible, so we are searching for the smallest a that works. The inequality above accepts solutions of a pretty close to 1, but is a allowed to get arbitrarily close to 1? The answer is no. Let a be of the form a = 1 + 1/N. Substituting this into the inequality and taking the limit N -> INF:
e-e-1 >= 0
which is absurd. Hence, the inequality above has some fixed number N* as its maximum solution, which can be found computationally. A quick Python program allowed me to find that a < 1 + 1e-45 (with a little extrapolation), so we can at least be sure that:
T(n) = o((1 + 1e-45)^n)
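The Python program isn't shown above, but something like the following bisection sketch would do the job (the bracket [1, 2] and the log-space rewrite are my own assumptions, the latter to avoid float overflow for huge N). It shows the smallest workable a creeping toward 1 as N grows, which is what the extrapolation is based on:

import math

def satisfied(a, N):
    # a^(N+1) - a^N - a >= 0  <=>  a^N (a-1) >= a  <=>  N ln a + ln(a-1) >= ln a
    return N * math.log(a) + math.log(a - 1) >= math.log(a)

def min_a(N, iters=100):
    # bisect for the smallest a in (1, 2] satisfying the inequality
    lo, hi = 1.0, 2.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if satisfied(mid, N):
            hi = mid
        else:
            lo = mid
    return hi

for N in (10, 1000, 10**6, 10**9):
    print(f"N = {N:>10}: smallest a ~ {min_a(N):.12f}")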
T(n) = T(n-1) + T(n/2) + n behaves like T(n) = T(n) + T(n/2) + n when we solve for extremely large values of n. T(n) = T(n) + T(n/2) + n can only be true if T(n/2) + n = 0, which means T(n) = -2n, i.e. the solution is linear in n.
Suppose I have the following algorithm:
procedure(n)
    if n == 1 then return
    R = generaterandom()
    procedure(n/2)
Now I understand that the complexity of this algorithm is O(log n), but does it make log(n) calls to the random generator, or log(n) - 1, since the generator is not called when n == 1?
Sorry if this is obvious, but I've been looking around and the exact answer isn't really stated anywhere.
There are ceil(log(n)) calls to the generator.
Proof Using induction:
Hypothesis:
There are ceil(log(k)) calls to the generator for each k < n.
Base:
log_2(1) = 0 => 0 calls
Step:
For arbitrary n > 1 there is one call, and then by the hypothesis ceil(log(n/2)) more calls in the recursion.
This gives us a total of ceil(log(n/2)) + 1 = ceil(log(n/2) + 1) = ceil(log(n/2) + log(2)) = ceil(log(n/2 * 2)) = ceil(log(n)) calls (the integer 1 can be moved inside the ceiling).
QED
Note: all logs here are base 2.
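One caveat: the pseudocode doesn't say how n/2 is rounded for odd n, and the proof above matches rounding up. Here is a quick counting sketch (Python, with (n + 1) // 2 standing in for the round-up assumption; with floor division you would get floor(log(n)) for non-powers of two instead):

import math

calls = 0

def procedure(n):
    global calls
    if n == 1:
        return
    calls += 1                # stands in for R = generaterandom()
    procedure((n + 1) // 2)   # n/2, rounded up (assumption)

for n in (1, 2, 3, 5, 7, 100, 1024):
    calls = 0
    procedure(n)
    print(n, calls, math.ceil(math.log2(n)))

The second and third columns agree for every n tried.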
By the Master theorem, your method can be written as T(n) = T(n/2) + O(1), since you divide n in half on every call, and this is exactly O(log n). I realize you are not asking for a complexity analysis, but as I mentioned, the idea is the same (finding the number of calls is equivalent to finding the complexity).
I have the homework question:
Let T(n) denote the number of times the statement x = x + 1 is
executed in the algorithm
example (n)
{
    if (n == 1)
    {
        return
    }
    for i = 1 to n
    {
        x = x + 1
    }
    example (n/2)
}
Find the theta notation for the number of times x = x + 1 is executed.
(10 points).
here is what I have done:
we have the following amount of work done:
- A constant amount of work for the base case check
- O(n) work for the loop that executes x = x + 1 a total of n times
- The work required for a recursive call to something half the size
We can express this as a recurrence relation:
T(1) = 1
T(n) = n + T(n/2)
Let’s see what this looks like. We can start expanding this out by noting that
T(n)=n+T(n/2)
=n+(n/2+T(n/4))
=n+n/2+T(n/4)
=n+n/2+(n/4+T(n/8))
=n+n/2+n/4+T(n/8)
We can start to see a pattern here. If we expand out the T(n/2) bit k times, we get:
T(n) = n + n/2 + n/4 + ⋯ + n/2^k + T(n/2^k)
Eventually, this stops when n/2^k = 1. When this happens, we have:
T(n)=n+n/2+n/4+n/8+⋯+1
What does this evaluate to? This sum is equal to 2n - 1, because the geometric series n + n/2 + n/4 + n/8 + ⋯ + 1 sums to 2n - 1 (the infinite series would sum to 2n). Consequently, this function is O(n).
I got that answer thanks to this question answered by @templatetypedef.
Now I am confused by the theta notation. Will the answer be the same? I know that theta notation is supposed to bound the function with both a lower and an upper limit; does that mean I need two functions?
Yes, the answer will be the same!
You can easily use the same tools to prove T(n) = Omega(n).
After proving T(n) is both O(n) [upper asymptotic bound] and Omega(n) [lower asymptotic bound], you know it is also Theta(n) [tight asymptotic bound]
In your example, it is easy to show that T(n) >= n - since it is homework, it is up to you to understand why.
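If it helps to see the sandwich numerically, here is a tiny check (assuming T(1) = 1 and restricting to powers of two so that n/2 stays an integer, as in your expansion):

def T(n):
    # T(1) = 1, T(n) = n + T(n/2)
    return 1 if n == 1 else n + T(n // 2)

for k in range(1, 11):
    n = 2 ** k
    assert n <= T(n) <= 2 * n          # the Omega(n) and O(n) bounds
    print(f"n = {n:5d}, T(n) = {T(n):5d}, T(n)/n = {T(n) / n:.3f}")

The ratio T(n)/n climbs toward 2, but n <= T(n) <= 2n always holds, which is exactly the Theta(n) sandwich.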
I have this exercise:
"use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n)=T(n/2)+n^2. Use a substitution method to verify your answer"
I have made this recursion tree
Where I have assumed that k -> infinity (in my book they often stop the recurrence when the input to T reaches 1, but I don't think that applies here, since I don't have any other information).
I have concluded that:
When I have used the substitution method I have assumed that T(n)=O(n^2)
And then done following steps:
When n > 0 and c > 0, I see that the following is true for some choice of c:
And therefore T(n)<=cn^2
And my question is: "Is this the right way to do it? "
First off, as Philip said, your recursion tree is wrong. It didn't affect the complexity in the end, but you got the constants wrong.
T(n) becomes
n^2 + T(n/2) becomes
n^2 + n^2/4 + T(n/4) becomes
...
n^2(1 + 1/4 + 1/16 + ...)
Stopping at one vs. stopping at infinity is mostly a matter of taste and of choosing what is more convenient. In this case, I'd do the same as you did and use the infinite sum, because then we can use the geometric series formula to get a good guess that T(n) <= (4/3)n^2.
The only thing that bothers me a bit is that your proof at the end tended towards the informal. It is very easy to get lost in informal proofs, so if I had to grade your assignment, I'd be more comfortable with a traditional proof by induction, like the following one:
statement to prove
We wish to prove that T(n) <= (4/3)*n^2, for n >= 1
Concrete values for c and n0 make the proof more believable, so put them in if you can. Often you will need to run the proof once to actually find the values and then come back and put them in, as if you had known them all along :) In this case, I'm hoping my 4/3 guess from the recursion tree turns out correct.
Proof by induction:
Base case (n = 1):
T(1) = 1
(You did not make the value of T(1) explicit, but I guess this should be in the original exercise)
T(1) = 1
<= 4/3
= (4/3)*1^2
T(1) <= (4/3)*1^2
As we wanted.
Inductive case (n > 1):
(Here we assume the inductive hypothesis T(n') <= 4/3*(n')^2 for all n' < n)
We know that
T(n) = n^2 + T(n/2)
By the inductive hypothesis:
T(n) <= n^2 + (4/3)(n/2)^2
Doing some algebra:
T(n) <= n^2 + (4/3)(n/2)^2
= n^2 + (1/3)n^2
= (4/3)n^2
T(n) <= (4/3)*n^2
As we wanted.
May look boring, but now I can be sure that I got the answer right!
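And if you want one more sanity check on the 4/3 constant, a few lines of Python (again assuming T(1) = 1 and powers of two, as in the proof) show the ratio approaching 4/3 from below:

def T(n):
    # T(1) = 1, T(n) = n^2 + T(n/2)
    return 1 if n == 1 else n * n + T(n // 2)

for k in range(1, 11):
    n = 2 ** k
    assert T(n) <= (4 / 3) * n * n     # the bound we just proved
    print(f"n = {n:5d}, T(n)/n^2 = {T(n) / n**2:.4f}")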
Your recursion tree is erroneous. It should look like this:
n^2
|
(n/2)^2
|
(n/4)^2
|
...
|
(n/2^k)^2
You can safely stop once T reaches 1, because you are usually counting discrete "steps", and there's no such thing as "0.34 steps".
Since you're dividing n by 2 in each iteration, k equals log2(n) here.