I've been pulling my hair out trying to solve this:
Σ(k=0,n) 3^k = O(3^n)
I've been looking through various resources online, but I still can't seem to solve it. I know it involves the formal definition of Big O, where
|f(x)| <= C*|g(x)| for all x >= k
Since both sides grow like powers of 3, I am assuming C is some value I have to find through induction to prove the original statement, and that k = 0.
Thanks for your help with this.
Σ(k=0,n) 3^k
= 3^0 + 3^1 + ... + 3^n
= (1 - 3^(n+1)) / (1 - 3)    ; sum of a geometric series
= (3/2)*3^n - 1/2
<= c*3^n    ; for any c >= 3/2
= O(3^n)
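If you want a quick numerical sanity check of that bound, here is a minimal Python sketch (a spot check, not a proof):

    # Check that sum(3^k, k=0..n) = (3^(n+1) - 1)/2 <= (3/2)*3^n for small n.
    for n in range(20):
        total = sum(3**k for k in range(n + 1))
        assert total == (3**(n + 1) - 1) // 2   # geometric-series closed form
        assert 2 * total <= 3 * 3**n            # i.e. total <= (3/2)*3^n
    print("bound holds for n = 0..19 with c = 3/2")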
Induction is not needed here; that sum is a geometric series and has a closed-form solution:
= 1 * (1 - 3^(n + 1))/(1 - 3) = (3^(n + 1) - 1)/2
= (3*3^n - 1)/2
Pick C = 3/2, with F = (3/2)*3^n - 1/2 and G = 3^n, and this satisfies the requirement for O(3^n). In practice, though it might be thought informal and sloppy, you don't really worry much about the exact constant, since any constant that works will do for satisfying Big-O.
You can rewrite it as 3^n * (1 + 1/3 + 1/9 + ... + 1/3^n).
There is an upper bound for the sum in parentheses: calculate the limit of the corresponding infinite series, which is 1/(1 - 1/3) = 3/2.
From there, it's easy to get a good C, e.g. C = 2.
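A quick illustration of that limit in Python, just to watch the partial sums approach 3/2:

    # Partial sums of 1 + 1/3 + 1/9 + ... approach 1/(1 - 1/3) = 3/2,
    # so any C >= 3/2 (e.g. C = 2) works as the Big-O constant.
    partial = 0.0
    for k in range(30):
        partial += (1.0 / 3.0) ** k
    print(partial)   # ~1.4999999999999998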
Related
How can I solve this recurrence:
f(n) = f(n-1) + f(n-3) + c
where c is a constant and f(1) = 1?
Please help me solve this.
We explicitly solve for the function. First, add c to both sides and regroup to get
f(n) + c = (f(n - 1) + c) + (f(n - 3) + c)
Define g(n) = f(n) + c. Then the recurrence for g is
g(n) = g(n - 1) + g(n - 3)
Note that the set of solutions to this equation forms a vector space of dimension 3, since g is determined by g(0), g(1), g(2). So it suffices to find 3 basis elements.
We try an Ansatz of g(n) = k^n for k nonzero. For such an Ansatz, we see that
k^n = k^(n - 1) + k^(n - 3)
In other words, dividing through by k^(n - 3), we see that
k^3 = k^2 + 1
This equation has no rational roots, so its roots are a bit nasty. It turns out that there are three roots, one real and two complex. The roots are approximately
k ≈ 1.47
k ≈ -0.233 +/- 0.793 i
So the three basic solutions are 1.47^n and (-0.233 +/- 0.793 i)^n (approximately).
You did not actually give enough information to find g, since we need to know g(0), g(1), and g(2). But for the basic solutions, it's easy to see that 1.47^n = O(1.47^n) and (-0.233 +/- 0.793 i)^n = O(1), since |(-0.233 +/- 0.793 i)| < 1.
Therefore, in almost all cases (except those where the coefficient of 1.47^n vanishes, i.e. where g(n) = a (-0.233 + 0.793 i)^n + b (-0.233 - 0.793 i)^n), we have g(n) = Theta(1.47^n) and hence f(n) = g(n) - c = Theta(1.47^n).
Keep in mind that 1.47 is, of course, an approximation. The true value is the unique real solution of k^3 = k^2 + 1.
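If you want to see this numerically, here is a small Python sketch. The base values f(1) = f(2) = f(3) = 1 and c = 5 are assumptions for illustration (the question only pins down f(1)), and the root is found by simple bisection:

    # Find the real root r of k^3 = k^2 + 1 and check f(n) = Theta(r^n).
    def real_root():
        lo, hi = 1.0, 2.0            # p(1) < 0 < p(2) for p(k) = k^3 - k^2 - 1
        for _ in range(60):
            mid = (lo + hi) / 2
            if mid**3 - mid**2 - 1 < 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    r = real_root()
    print(r)                          # ~1.4656, the "1.47" above

    c = 5                             # assumed constant for the demo
    f = {1: 1, 2: 1, 3: 1}            # assumed base values
    for n in range(4, 60):
        f[n] = f[n - 1] + f[n - 3] + c

    for n in (20, 40, 59):
        print(n, f[n] / r**n)         # the ratio settles to a constant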
I am learning algorithm analysis. In a book I read that 2^(2n) = O(2^n) is not true. I know that means we can't find a c such that 2^(2n) <= c*2^n. But if we put n = 5 and c = 10^6, the inequality holds. Can you please give me a hint? What am I doing wrong?
For 2^(2n) = O(2^n) to hold, you would have to find one c such that 2^(2n) <= c*2^n for all n > n0. Your example works only for small n: once n reaches a point where 2^n > c, the inequality no longer holds.
Let's just work out the math: 2^(2n) = (2^2)^n = 4^n, since a^(bc) = (a^b)^c.
The question is not what happens for specific values, but what is the effect when n gets one larger, or even twice as large.
If we replace n with n + 1, you get 2^(2(n+1)) = 2^(2n+2) = 2^2 * 2^(2n) = 4 * 2^(2n). So the result becomes 4 times larger, while 2^n merely doubles; no fixed constant c can absorb that widening gap.
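A throwaway Python check along these lines (the values of c and n are arbitrary, just to watch the inequality break):

    # 2^(2n) / 2^n = 2^n is unbounded, so no constant c can work forever.
    c = 10**6
    for n in (5, 10, 20, 30):
        print(n, 2**(2 * n) <= c * 2**n)
    # True for n = 5 and 10, False from n = 20 on (there 2^n > 10^6).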
We're getting into Big O in my CS degree and I am having a difficult time understanding it. There are two problems I'd like to post: one I tried to complete on my own, and another I'm not sure how to start. Could a member tell me whether my first one is correct or incorrect, and maybe point me in a direction for understanding the second one? Any help is greatly appreciated.
a)
E(n) ≤ 5n^2 + 9n^3, then E(n) = O(?)
Guess: O(n^3)
Proof:
9n^3 + 5n^2 <= c*n^3, where c = 10 and n > 1,
Therefore, E(n) = O(n^3)
b)
E(n) ≤ 8n*sqrt(n) + 100n log2(n), then E(n) = O(?) .
a)
For n = 2,
9*8 + 5*4 = 92 > 10*8 = 80, so "n > 1" is incorrect.
You should solve for an n explicitly.
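For instance, with c = 10 you can find the threshold by brute force (a throwaway sketch; algebraically, 9n^3 + 5n^2 <= 10n^3 reduces to 5 <= n):

    # Smallest n with 9n^3 + 5n^2 <= 10*n^3, i.e. 5n^2 <= n^3.
    for n in range(1, 10):
        if 9 * n**3 + 5 * n**2 <= 10 * n**3:
            print("bound holds from n =", n)   # prints 5
            break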
b)
Should be O(n^(3/2)). Check with a large number such as 2^50: log2(n) grows much more slowly than n^(1/2), so the 8n*sqrt(n) = 8n^(3/2) term dominates.
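Concretely, at n = 2^50 (a minimal Python spot check):

    # At n = 2^50 the n^(3/2) term dwarfs the n*log2(n) term.
    import math

    n = 2**50
    term1 = 8 * n * math.sqrt(n)      # 8 * n^(3/2), about 3.0e23
    term2 = 100 * n * math.log2(n)    # 100 * 50 * n, about 5.6e18
    print(term1, term2, term1 > term2)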
I have the following recurrence:
T(n) = c for n = 1.
T(n) = T(floor[n/2]) + T(ceil[n/2]) + n - 1 for n > 1.
It looks like merge sort to me, so I guess that the solution to the recurrence is Θ(n log n). According to the master method I have:
a) Θ(1) for n = 1 (constant time).
b) If we drop the floor and ceil we have (step 1):
T(n) = 2T(n/2) + n - 1 => a = 2, b = 2.
log_b(a) = log_2(2) = 1, so n^(log_b a) = n^1 = n.
Having a closer look, we are in case 2 of the master method:
if f(n) = Θ(n^(log_b a)), the solution to the recurrence is T(n) = Θ(n^(log_b a) * log n)
The solution is indeed T(n) = Θ(n log n), but we are off by the constant -1 in f(n) = n - 1.
My first question is:
At step 1 we dropped the ceil and floor. Is this correct? The second question is: how do I get rid of the constant -1? Do I just drop it? Or should I name it d and prove that n - 1 is still Θ(n) (and if so, how do I prove it)? Lastly, is it better to prove it with the substitution method?
Edit: if we use the substitution method we get:
We guess that the solution is O(n). We need to show that T(n) <= cn.
Substituting into the recurrence we obtain
T(n) <= c*floor(n/2) + c*ceil(n/2) + n - 1 = cn + n - 1
which is not <= cn. So is it not merge sort after all? What am I missing?
It was a long time ago, but here goes.
Step 1: we dropped the ceil and floor. Is this correct?
I would rather say
T(floor(n/2)) + T(floor(n/2)) <= T(floor(n/2)) + T(ceil(n/2))
T(floor(n/2)) + T(ceil(n/2)) <= T(ceil(n/2)) + T(ceil(n/2))
In case they are not equal, the arguments differ by at most 1 (and you can ignore any constant).
The second question is: how do I get rid of the constant -1?
You ignore it. The reasoning behind this: even if the constant is huge, say 10^100, it is eventually small compared to the dominant term as n grows. In real life you can't always ignore really big constants, but that is where real life and theory differ. In any case, a constant of 1 makes the smallest possible difference.
Lastly, is it better to prove it with the substitution method?
You can prove it however you like; some ways are just simpler. Simpler is usually better, but beyond that, "better" has no meaning. So my answer is no.
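If it helps to see the Θ(n log n) behaviour, here is a minimal sketch that evaluates the recurrence exactly, taking the base constant as T(1) = 0 for simplicity:

    # T(n) = T(floor(n/2)) + T(ceil(n/2)) + n - 1, with T(1) = 0 assumed.
    import math
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        if n == 1:
            return 0
        return T(n // 2) + T(n - n // 2) + n - 1   # n - n//2 == ceil(n/2)

    for n in (2**10, 2**15, 2**20):
        print(n, T(n) / (n * math.log2(n)))        # ratio approaches 1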
I am refreshing on the Master Theorem a bit, and I am trying to figure out the running time of an algorithm that solves a problem of size n by recursively solving 2 subproblems of size n - 1 and combining the solutions in constant time.
So the formula is:
T(N) = 2T(N - 1) + O(1)
But I am not sure how can I formulate the condition of master theorem.
I mean, we don't have T(N/b), so is the b of the Master Theorem formula in this case b = N/(N-1)?
If so, then since obviously a > b^k (here k = 0), the solution would be O(N^z) where z = log of 2 to the base N/(N-1). How can I make sense of that, assuming I am right so far?
Ah, enough with the hints. The solution is actually quite simple: z-transform both sides, group the terms, and then inverse z-transform to get the solution.
First, look at the problem as
x[n] = a x[n-1] + c
Apply the z-transform to both sides (there are some technicalities with respect to the ROC, but let's ignore those for now):
X(z) = (a X(z) / z) + (c z / (z-1))
Solve for X(z) to get
X(z) = c z^2 / [(z - 1) * (z-a)]
Now observe that this formula can be rewritten as:
X(z) = r z / (z-1) + s z / (z-a)
where r = c/(1-a) and s = - a c / (1-a)
Furthermore, observe that
X(z) = P(z) + Q(z)
where P(z) = r z / (z-1) = r / (1 - (1/z)), and Q(z) = s z / (z-a) = s / (1 - a (1/z))
Apply the inverse z-transform to get:
p[n] = r u[n]
and
q[n] = s exp(log(a)n) u[n]
where log denotes the natural log and u[n] is the unit (Heaviside) step function (i.e. u[n]=1 for n>=0 and u[n]=0 for n<0).
Finally, by linearity of z-transform:
x[n] = (r + s exp(log(a) n))u[n]
where r and s are as defined above.
So, relabeling back to your original problem: if
T(n) = a T(n-1) + c
then
T(n) = (c/(a-1))(-1+a exp(log(a) n))u[n]
where exp(x) = e^x, log(x) is the natural log of x, and u[n] is the unit step function.
What does this tell you?
Unless I made a mistake, T grows exponentially with n. This is effectively an exponentially increasing function under the reasonable assumption that a > 1. The exponent is governed by a (more specifically, by the natural log of a).
One more simplification: note that exp(log(a) n) = exp(log(a))^n = a^n, so
T(n) = (c/(a-1))(-1+a^(n+1))u[n]
So this is O(a^n) in big-O notation.
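A quick consistency check of that closed form against direct iteration (a = 2 and c = 3 are picked arbitrarily; the derivation's implicit base case is T(0) = c, since it assumed x[-1] = 0):

    # Iterate T(n) = a*T(n-1) + c and compare with (c/(a-1))*(a^(n+1) - 1).
    a, c = 2, 3
    T = c                                # T(0) = c, from x[-1] = 0
    for n in range(15):
        closed = (c / (a - 1)) * (a**(n + 1) - 1)
        assert abs(T - closed) < 1e-9, (n, T, closed)
        T = a * T + c                    # advance to T(n+1)
    print("closed form matches the iteration; growth is O(a^n)")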
And now here is the easy way:
Put T(0) = 1.
T(n) = a T(n-1) + c
T(1) = a * T(0) + c = a + c
T(2) = a * T(1) + c = a*a + a * c + c
T(3) = a * T(2) + c = a*a*a + a * a * c + a * c + c
....
Note that this creates a pattern; specifically:
T(n) = a^n + c * sum(a^j, j=0,...,n-1)
Putting c = 1 gives
T(n) = sum(a^j, j=0,...,n)
This is a geometric series, which evaluates to:
T(n) = (1-a^(n+1))/(1-a)
= (1/(1-a)) - (1/(1-a)) a^(n+1)
= (1/(a-1))(-1 + a^(n+1))
for n>=0.
Note that this formula is the same as given above for c=1 using the z-transform method. Again, O(a^n).
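The same check in code, for the c = 1, T(0) = 1 case (exact integer arithmetic):

    # Verify T(n) = (a^(n+1) - 1)/(a - 1) for T(n) = a*T(n-1) + 1, T(0) = 1.
    a = 3
    T = 1                                   # T(0)
    for n in range(12):
        assert T == (a**(n + 1) - 1) // (a - 1), n
        T = a * T + 1                       # c = 1
    print("matches the geometric-series closed form")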
Don't even think about the Master Theorem. You can only use the Master Theorem when the recurrence has the general form T(n) = aT(n/b) + f(n) with b > 1.
Instead, think of it this way. Each call on an input of size n makes two recursive calls on inputs of size n - 1, plus a constant O(1) amount of work of its own. The input size decrements until it reaches 1, so the recursion tree is a binary tree of depth n with about 2^n nodes. Adding up the constant cost over all those calls gives O(2^n).
Looks like you can't formulate this problem in terms of the Master Theorem.
A good start is to draw the recursion tree to understand the pattern, then prove it with the substitution method. You can also expand the formula a couple of times and see where it leads.
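Following that suggestion: expanding T(N) = 2T(N-1) + d (d being my label for the O(1) cost) gives 2^k T(N-k) + (2^k - 1)d after k steps, and at k = N-1 that is 2^(N-1) T(1) + (2^(N-1) - 1)d = Θ(2^N). A quick check of that pattern:

    # Unrolled form: T(n) = 2^(n-1)*T(1) + (2^(n-1) - 1)*d  (d = combine cost)
    d, T1 = 7, 1                    # arbitrary assumed constants
    T = T1
    for n in range(2, 20):
        T = 2 * T + d               # one step of the recurrence
        assert T == 2**(n - 1) * T1 + (2**(n - 1) - 1) * d
    print("unrolled pattern confirmed: Theta(2^n)")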
See also this question, which solves 2 subproblems instead of a general a:
Time bound for recursive algorithm with constant combination time
Maybe you could think of it this way:
when
n = 1, T(1) = 1
n = 2, T(2) = 2
n = 3, T(3) = 4
n = 4, T(4) = 8
n = 5, T(5) = 16
It is easy to see that this is a geometric series 1 + 2 + 4 + 8 + 16 ..., the sum of which is
(first term) * (ratio^n - 1)/(ratio - 1). For this series it is
1 * (2^n - 1)/(2 - 1) = 2^n - 1.
The dominating term here is 2^n, therefore the function belongs to Theta(2^n). You can verify this by checking that lim(n->inf) [2^n / (2^n - 1)] = 1, a positive constant.
Therefore the function belongs to Big Theta(2^n).
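Equivalently, folding the unit combine cost into the recurrence (with an assumed base case T(1) = 1):

    # T(n) = 2*T(n-1) + 1 with T(1) = 1 gives exactly T(n) = 2^n - 1.
    T = 1                        # T(1)
    for n in range(2, 20):
        T = 2 * T + 1
        assert T == 2**n - 1
    print("T(n) = 2^n - 1, hence Theta(2^n)")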