I am trying to prove that a function is in Ω(n^3), but the constant C is fixed at 3.
The function is:
f(n) = 3n^3 - 39n^2 + 360n + 20
In order to prove that f is Ω(n^3), we need constants C, n_0 > 0 such that
|f(n)| >= C|n^3|
for every n >= n_0.
When plugging in C = 3, you would get the inequality
3n^3 - 39n^2 + 360n + 20 >= 3n^3
which simplifies to
-39n^2 + 360n + 20 >= 0
I'm stuck here, because I can't find an n_0 such that this inequality holds for every n >= n_0.
Also, if C = 2.25 is fixed, how do I find the smallest integer n_0 that works?
How to prove it in general
Proving f in Ω(n^3) with f(n) = 3n^3 - 39n^2 + 360n + 20 is pretty simple.
The exact definition (from Wikipedia) is: f in Ω(g) if and only if there are constants c > 0 and n_0 > 0 such that f(n) >= c * g(n) for all n >= n_0.
In words: you need to find a constant c such that c * g is always at most f (from a given n_0 on). You are of course allowed to choose this c small and the n_0 big.
We first drop some unnecessary stuff in order to estimate f:
-39n^2 is at least -n^3 for all n >= 39
360š is obviously greater than 0
20 is also greater than 0
Okay, putting that together we get:
f(n) >= 3n^3 - n^3 + 0 + 0
= 2n^3
for n >= 39.
We choose C = 2 (or something smaller), n_0 = 39 (or something greater) and follow that
C * |g(n)| = 2 * |n^3| = |2n^3| <= |f(n)|
<=> C * |g(n)| <= |f(n)|
for all n >= n_0. By definition this means f in Ω(n^3).
Your specific scenario
For C = 3 fixed this is not possible, since 3n^3 is greater than 3n^3 - 39n^2 + 360n + 20 for all sufficiently large n. This is due to the second summand, -39n^2.
Plotting both functions shows that 3n^3 grows beyond f for all n >= n_0, with n_0 at about 9.286. The exact value is:
n_0 = (2 / 39) * (90 + sqrt(8295))
(You can verify this with Wolfram Alpha.)
But it is possible for a fixed C = 2.25: the inequality 3n^3 - 39n^2 + 360n + 20 >= 2.25n^3 holds for all integers n >= 40.
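A quick numeric check of all three constants (a Python sketch, not part of the proof):

```python
def f(n):
    # f(n) = 3n^3 - 39n^2 + 360n + 20
    return 3 * n**3 - 39 * n**2 + 360 * n + 20

# C = 2 works from n0 = 39 on.
assert all(f(n) >= 2 * n**3 for n in range(39, 10_000))

# C = 2.25 works from n0 = 40 on, and 40 is the smallest such integer.
assert f(39) < 2.25 * 39**3
assert all(f(n) >= 2.25 * n**3 for n in range(40, 10_000))

# C = 3 fails for every n >= 10, since -39n^2 + 360n + 20 < 0 there.
assert all(f(n) < 3 * n**3 for n in range(10, 10_000))
```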
Related
I was looking at this question:
Prove that 100n + 5 ∈ O(n^2) (that is, 100n + 5 is asymptotically upper-bounded by n^2):
f(n) <= c * g(n) for all n >= n0
so it becomes 100n + 5 <= c * n^2
The answer was:
n0 ≈ 25.05 (the point where the c * n^2 curve intercepts the 100n + 5 line) and c = 4, so that when n increases above 25.05, no matter what, 100n + 5 <= c * n^2 will still hold.
My question is: how do you derive n0 = 25.05 and c = 4? Is it a guess-and-trial method, or is there a proper way to get that particular answer? Or do you just start from 1 and work your way up to see if it works?
A good approach to tackling this kind of problem is to first fix c (let's take c = 4 in this example); then all you have to do is figure out n0 from a simple equation:
100n + 5 = 4n^2 <=> 4n^2 - 100n - 5 = 0 <=> n ≈ 25.05 or n ≈ -0.05
The two curves intersect at about -0.05 and 25.05. Since you want an n0 after which 100n + 5 is always below 4n^2, the negative root -0.05 is not the one you want, and the curves meet at 25.05, so n0 = 25.05.
Before fixing c and trying to figure out n0, you could try big numbers for n to get an idea of whether it's an upper bound or not.
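That computation can be sketched in a few lines of Python (quadratic formula plus a sanity check):

```python
import math

# Fix c = 4 and find where 100n + 5 meets 4n^2, i.e. solve 4n^2 - 100n - 5 = 0.
a, b, c = 4.0, -100.0, -5.0
disc = math.sqrt(b * b - 4 * a * c)
root_lo = (-b - disc) / (2 * a)   # ~ -0.05, discarded: n must be positive
root_hi = (-b + disc) / (2 * a)   # ~ 25.05, this is n0
assert root_lo < 0 < root_hi

# From the next integer on, 100n + 5 stays below 4n^2.
n0 = math.ceil(root_hi)           # 26
assert all(100 * n + 5 <= 4 * n * n for n in range(n0, 1000))
```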
There are infinitely many choices for n0 and c that can be used to prove this bound holds. We need to find n0 and c such that for n >= n0, f(n) <= c * g(n). In your case, we need 100n + 5 <= cn^2. We can rearrange this as follows using basic algebra:
cn^2 - 100n - 5 >= 0
We can use the quadratic formula to find the roots:
n1, n2 = [100 +- sqrt(10000 + 20c)] / (2c)
Because c is positive, the sqrt term is greater than 100 once evaluated, so the smaller root is negative; since we are only interested in n > 0, we can discard it and focus on:
n0 = [100 + sqrt(10000 + 20c)] / (2c)
We can simplify this a bit:
n0 = [100 + sqrt(10000 + 20c)] / (2c)
   = [100 + 2*sqrt(2500 + 5c)] / (2c)
   = [50 + sqrt(2500 + 5c)] / c
At this point, we can choose either a value for c or a value for n0, and solve for the other one. Your example chooses c = 4 and gets the approximate answer n0 ~ 25.05. If we'd prefer to choose n0 directly (say we want n0 = 10) then we calculate as follows:
10 = [50 + sqrt(2500 + 5c)]/c
10c = 50 + sqrt(2500 + 5c)
(10c - 50) = sqrt(2500 + 5c)
(100c^2 - 1000c + 2500) = (2500 + 5c)
100c^2 - 1005c = 0
c(100c - 1005) = 0
c = 0 or c = 1005/100 ~ 10.05
Because the solution c=0 is obviously no good, the solution c ~ 10.05 appears to work for our choice of n0 = 10. You can choose other n0 or c and find the corresponding constant in this way.
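As a sketch, the closed form above can be wrapped in a small helper (the function name is mine, not from the question):

```python
def c_for_n0(n0):
    # Rearranging n0 = (50 + sqrt(2500 + 5c)) / c as in the algebra above
    # gives n0^2 * c^2 - (100 * n0 + 5) * c = 0, whose nonzero root is:
    return (100 * n0 + 5) / n0 ** 2

# n0 = 10 gives c ~ 10.05, matching the derivation above.
assert abs(c_for_n0(10) - 10.05) < 1e-9

# n0 = 25 gives c ~ 4.008, consistent with the question's pair (c = 4, n0 ~ 25.05).
assert abs(c_for_n0(25) - 4.008) < 1e-9

# And the resulting constant really works from n0 on:
assert all(100 * n + 5 <= c_for_n0(10) * n * n for n in range(10, 1000))
```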
Here is an asymptotic notation problem:
Let g(n) = 27n^2 + 18n and let f(n) = 0.5n^2 - 100. Find positive constants n0, c1 and c2 such that c1 * f(n) <= g(n) <= c2 * f(n) for all n >= n0.
Is this solving for Theta? Do I prove 27n^2 + 18n = Ω(0.5n^2 - 100) and then prove 27n^2 + 18n = O(0.5n^2 - 100)?
In that case, wouldn't c1 and c2 be 1 and 56 respectively, and wouldn't n0 be the larger of the two n0 values that I find?
There are infinitely many solutions. We just need to fiddle with algebra to find one.
The first thing to note is that both g and f are positive for all n >= 15. In particular, g(15) = 6345 and f(15) = 12.5. (All smaller values of n make f < 0.) This implies n0 = 15 might work, as would any larger value.
Next note g'(n) = 54n + 18 and f'(n) = n.
Since f(15) < g(15) and f'(n) < g'(n) for all n >= 15, choose c1 = 1.
Proof that this is a good choice:
0.5n^2 - 100 <= 27n^2 + 18n <=> 26.5n^2 + 18n + 100 >= 0
...obviously true for all n >= 15.
What about c2? First, we want c2 * f(n) to grow at least as fast as g: c2 * f'(n) >= g'(n), i.e. c2 * n >= 54n + 18 for n >= 15. So choose c2 >= 56, which obviously makes this true.
Unfortunately, c2 = 56 doesn't quite work with n0 = 15. There's the other criterion to meet: c2 * f(15) >= g(15). For that, 56 isn't big enough: 56 * f(15) is only 700, while g(15) = 6345 is much bigger.
The second criterion forces c2 >= g(15) / f(15) = 6345 / 12.5 = 507.6, and indeed c2 = 508 does the trick.
Proof:
27n^2 + 18n <= 508 * (0.5n^2 - 100)
<=> 27n^2 + 18n <= 254n^2 - 50800
<=> 227n^2 - 18n - 50800 >= 0
At n = 15, this is true by simple substitution (the left-hand side equals 5). For all bigger values of n, note that the left-hand side's derivative, 454n - 18, is positive for all n >= 15, so the function is non-decreasing over that domain. That makes the relation true as well.
To summarize, we've shown that n0=15, c1=1, and c2=508 is one solution.
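A numeric sanity check of the claimed constants (a sketch, not part of the proof):

```python
def g(n): return 27 * n**2 + 18 * n
def f(n): return 0.5 * n**2 - 100

n0, c1, c2 = 15, 1, 508

# f turns positive exactly at n0 = 15.
assert f(14) < 0 < f(15)

# Both sides of the Theta sandwich hold from n0 on.
assert all(c1 * f(n) <= g(n) <= c2 * f(n) for n in range(n0, 5000))

# c2 = 56 is too small at n0 itself: 56 * 12.5 = 700 < 6345 = g(15).
assert 56 * f(15) < g(15)
```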
So I have to figure out if n^(1/2) is Big Omega of log(n)^3. I am pretty sure that it is not, since n^(1/2) is not even in the bounds of log(n)^3; but I do not know how to prove it without limits. I know the definition without limits is
g(n) is big-Omega of f(n) iff there is a constant c > 0 and an
integer constant n0 >= 1 such that f(n) >= c * g(n) for n >= n0
But can I really always find a constant c that will satisfy this?
For instance, for log(n)^3 >= c * n^(1/2): if c = 0.1 and n = 10, then we get 1 >= 0.316.
When comparing sqrt(n) with ln(n)^3 what happens is that
ln(n)^3 <= sqrt(n) ; for all n >= N0
How do I know? Because I printed out sufficient samples of both expressions to convince myself which dominated the other.
To see this more formally, let's first assume that we have already found N0 (we will do that later) and let's prove by induction that if the inequality holds for n >= N0, it will also hold for n+1.
Note that I'm using ln in base e for the sake of simplicity.
Taking cube roots, the inequality ln(n)^3 <= sqrt(n) is equivalent to ln(n) <= n^(1/6). So, for the inductive step, we have to show that
ln(n + 1) <= (n + 1)^(1/6)
Now
ln(n + 1) = ln(n + 1) - ln(n) + ln(n)
= ln(1 + 1/n) + ln(n)
<= ln(1 + 1/n) + n^(1/6) ; inductive hypothesis
From the definition of e we know
e = limit (1 + 1/n)^n
taking logarithms
1 = limit n*ln(1 + 1/n)
Therefore, there exists N0 such that
n*ln(1 + 1/n) <= 2 ; for all n >= N0
so
ln(1 + 1/n) <= 2/n
We also need 2/n to fit into the gap between n^(1/6) and (n + 1)^(1/6). By the mean value theorem,
(n + 1)^(1/6) - n^(1/6) >= (1/6) * (n + 1)^(-5/6)
and since 2/n = O(n^(-1)) shrinks faster than n^(-5/6), we can enlarge N0 if necessary so that
2/n <= (n + 1)^(1/6) - n^(1/6) ; for all n >= N0
Using this above, we get
ln(n + 1) <= 2/n + n^(1/6)
          <= (n + 1)^(1/6)
as we wanted.
We are now left with the task of finding some N0 such that
ln(N0) <= N0^(1/6)
let's take N0 = e^(6k) for some value of k that we are about to find. We get
ln(N0) = 6k
N0^(1/6) = e^k
so, we only need to pick k such that 6k < e^k, which is possible because the right-hand side grows much faster than the left. For instance, k = 3 works, since 18 < e^3 ≈ 20.1. The final N0 is then the larger of this value and the threshold from the inductive step.
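The "print out samples" step can be reproduced with a short script. It also shows why small samples are misleading: ln(n)^3 is still ahead at n = 10^6, and sqrt(n) only takes over near n = e^17 ≈ 2.4 * 10^7:

```python
import math

def ratio(n):
    # ln(n)^3 / sqrt(n): the claim ln(n)^3 <= sqrt(n) means ratio(n) <= 1.
    return math.log(n) ** 3 / math.sqrt(n)

assert ratio(10**6) > 1              # ln(n)^3 still dominates here
assert ratio(10**8) < 1              # ...but sqrt(n) has taken over by n = 10^8
assert ratio(10**12) < ratio(10**8)  # and the ratio keeps shrinking
```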
We're asked to show that $n + 4\lfloor\sqrt{n}\rfloor = O(n)$ with a good argumentation and a logical build-up for it, but it's not said what a good argumentation would look like. I know that $2n + 4\sqrt{n}$ is always bigger from n = 1 on, but I wouldn't know how to argue about it or how to logically build it up, since I just thought about it and it happened to be true. Can someone help out with this example so I would know how to do it?
You should look at the following site https://en.wikipedia.org/wiki/Big_O_notation
For the big-O notation we would say that a function like x^3 + x^2 + 100x is O(x^3). The idea is that as x grows very big, the x^3 term becomes the dominant factor in the expression.
You can apply the same logic to your expression: which term becomes dominant in it?
If this is not clear, you should try to plot both terms and see how they scale. That could be clarifying.
A proof is a convincing, logical argument. When in doubt, a good way to write a convincing, logical argument is to use an accepted template for your argument. Then, others can simply check that you have used the template correctly and, if so, the validity of your argument follows.
A useful template for showing asymptotic bounds is mathematical induction. To use this, you show that what you are trying to prove is true for specific simple cases, called base cases, then you assume it is true in all cases up to a certain size (the induction hypothesis) and you finish the proof by showing the hypothesis implies the claim is true for cases of the very next size. If done correctly, you will have shown the claim (parameterized by a natural number n) is true for a fixed n and for all larger n. This is exactly what is required for proving asymptotic bounds.
In your case: we want to show that n + 4 * sqrt(n) = O(n). Recall that the (one?) formal definition of big-Oh is the following:
A function f is bound from above by a function g, written f(n) = O(g(n)), if there exist constants c > 0 and n0 > 0 such that for all n > n0, f(n) <= c * g(n).
Consider the case n = 0. We have n + 4 * sqrt(n) = 0 + 4 * 0 = 0 <= 0 = c * 0 = c * n for any constant c. If we now assume the claim is true for all n up to and including k, can we show it is true for n = k + 1? This would require (k + 1) + 4 * sqrt(k + 1) <= c * (k + 1). There are now two cases:
k + 1 is not a perfect square. Since we are doing analysis of algorithms, it is implied that we are using integer math, so sqrt(k + 1) = sqrt(k) in this case. Therefore, (k + 1) + 4 * sqrt(k + 1) = (k + 4 * sqrt(k)) + 1 <= (c * k) + 1 <= c * (k + 1) by the induction hypothesis, provided that c >= 1.
k + 1 is a perfect square. Since we are doing analysis of algorithms it is implied that we are using integer math, so sqrt(k + 1) = sqrt(k) + 1 in this case. Therefore, (k + 1) + 4 * sqrt(k + 1) = (k + 4 * sqrt(k)) + 5 <= (c * k) + 5 <= c * (k + 1) by the induction hypothesis provided that c >= 5.
Because these two cases cover all possibilities and in each case the claim is true for n = k + 1 when we choose c >= 5, we see that n + 4 * sqrt(n) <= 5 * n for all n >= 0 = n0. This concludes the proof that n + 4 * sqrt(n) = O(n).
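The conclusion is easy to spot-check with integer square roots (math.isqrt matches the integer-math convention used in the proof):

```python
from math import isqrt

# n + 4*floor(sqrt(n)) <= 5n for all n >= 0, as proved by induction above.
assert all(n + 4 * isqrt(n) <= 5 * n for n in range(0, 100_000))

# c = 4 would already fail at n = 1: 1 + 4*1 = 5 > 4.
assert 1 + 4 * isqrt(1) > 4 * 1
```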
I'm trying to explain to my friend why 7n - 2 = O(n). I want to do so based on the definition of big-O.
Based on the definition of big O, f(n) = O(g(n)) if:
We can find a real value C and an integer value n0 >= 1 such that:
f(n) <= C * g(n) for all values of n >= n0.
In this case, is the following explanation correct?
7n - 2 <= C * n
-2 <= C * n - 7n
-2 <= n * (C - 7)
-2 / (C - 7) <= n
if we consider C = 7, mathematically, -2 / (C - 7) is equal to negative infinity, so
n >= (negative infinity)
It means that for all values of n >= (negative infinity) the following holds:
7n - 2 <= 7n
Now we have to pick n0 such that for all n >= n0 and n0 >= 1 the following holds:
7n - 2 <= 7n
Since for all values of n >= (negative infinity) the inequality holds, we can simply take n0 = 1.
You're on the right track here. Fundamentally, though, the logic you're using doesn't work. If you are trying to prove that there exist an n0 and a c such that f(n) <= c * g(n) for all n >= n0, then you can't start off by assuming that f(n) <= c * g(n), because that's ultimately what you're trying to prove!
Instead, see if you can start with the initial expression (7n - 2) and massage it into something upper-bounded by cn. Here's one way to do this: since 7n - 2 ā¤ 7n, we can (by inspection) just pick n0 = 0 and c = 7 to see that 7n - 2 ā¤ cn for all n ā„ n0.
For a more interesting case, let's try this with 7n + 2:
7n + 2
ā¤ 7n + 2n (for all n ā„ 1)
= 9n
So by inspection we can pick c = 9 and n0 = 1 and we have that 7n + 2 ā¤ cn for all n ā„ n0, so 7n + 2 = O(n).
Notice that at no point in this math did we assume the ultimate inequality, which means we never had to risk a divide-by-zero error.
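Both bounds are trivial to check numerically; a quick sketch:

```python
# 7n - 2 <= 7n for every n, so c = 7 with n0 = 1 works.
assert all(7 * n - 2 <= 7 * n for n in range(1, 10_000))

# 7n + 2 <= 9n once n >= 1 (because 2 <= 2n there), so c = 9 with n0 = 1 works.
assert all(7 * n + 2 <= 9 * n for n in range(1, 10_000))

# The bound 9n fails at n = 0, which is why n0 = 1 is needed: 7*0 + 2 = 2 > 0.
assert 7 * 0 + 2 > 9 * 0
```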