I'm trying to find the n0 (n naught) for a function with a big-Omega bound of n^3, where c = 2.25.
f(n) = 3n^3 - 39n^2 + 360n + 20. In order to prove that f(n) is Ω(n^3), we need constants c, n0 > 0 such that f(n) ≥ cn^3 for every n ≥ n0.
If c = 2.25, how do I find the smallest integer that works as n0?
My first thought was to plug in n=1, because n>0, and if the inequality worked, n=1 would be the smallest n (and therefore n0). But the inequality has to be satisfied for every n >= n0, and if I plug in, for example, n=15, the inequality doesn't hold.
You can solve this mathematically.
To make sure that I understand what you want, I will summarize what you are asking. You want to find the smallest integer n so that:
3n^3 - 39n^2 + 360n + 20 ≥ 2.25n^3 (1)
And every integer bigger than n must also satisfy inequality (1).
So here is my solution:
(1) <=> 0.75n^3 - 39n^2 + 360n + 20 ≥ 0
Let f(n) = 0.75n^3 - 39n^2 + 360n + 20
f(n) = 0 <=> n1 = -0.05522 or n2 = 12.079 or n3 = 39.976
If n < n1, f(n) < 0 (try this yourself)
If n1 < n < n2, f(n) > 0 (the sign will alternate)
If n2 < n < n3, f(n) < 0 (the sign will alternate, again)
If n > n3, f(n) > 0
So to satisfy your requirement, the smallest possible value of n0 is 40 (the first integer above n3).
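If you want a quick numerical sanity check (not a substitute for the algebra above), a few lines of C can scan the integers and report where f(n) = 0.75n^3 - 39n^2 + 360n + 20 last dips below zero; the cutoff of 100 is arbitrary, since beyond the largest root f only grows:

#include <stdio.h>

int main(void) {
    /* f(n) = 0.75n^3 - 39n^2 + 360n + 20 is the slack in inequality (1) */
    int last_negative = -1;
    for (int n = 0; n <= 100; n++) {
        double f = 0.75*n*n*n - 39.0*n*n + 360.0*n + 20.0;
        if (f < 0) last_negative = n;
    }
    /* prints 39, so n0 = 40 is the first integer from which (1) always holds */
    printf("largest n with f(n) < 0: %d\n", last_negative);
    return 0;
}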
Think about it like this. After a certain point, 3n^3 - 39n^2 + 360n + 20 will always be greater than or equal to 2.25n^3, for the simple reason that the 3n^3 term eventually beats out the -39n^2 term. So f(n) will never dip below 2.25n^3 again once n is large enough. You don't have to find the minimum n0; just choose a comfortably large number for n0, since the question only asks for some value of n after which the statement holds forever. Choose n0 to be, for example, some large number X, and then use an inductive proof with X as the base case.
I have an array of n random integers
I choose a random integer and partition by the chosen random integer (all integers smaller than the chosen integer will be on the left side, all bigger integers will be on the right side)
What will be the size of my left and right side in the average case, if we assume no duplicates in the array?
I can easily see that there is a 1/n chance that the array is split exactly in half, if we are lucky. Additionally, there is a 1/n chance that the array is split so that the left side has length n/2 - 1 and the right side has length n/2 + 1, and so on.
Could we derive from this observation the "average" case?
You can probably find a better explanation (and certainly the proper citations) in a textbook on randomized algorithms, but here's the gist of average-case QuickSort, in two different ways.
First way
Let C(n) be the expected number of comparisons required for a random permutation of 1...n. Since the expectation of the sum of the number of comparisons required for the two recursive calls equals the sum of the expectations, we can write a recurrence that averages over the n possible divisions:
C(0) = 0
C(n) = n-1 + (1/n) * sum_{i=0}^{n-1} (C(i) + C(n-1-i))
Rather than pull the exact solution out of a hat (or peek at the second way), I'll show you how I'd get an asymptotic bound.
First, I'd guess the asymptotic bound. Obviously I'm familiar with QuickSort and my reasoning here is fabricated, but since the best case is O(n log n) by the Master Theorem, that's a reasonable place to start.
Second, I'd guess an actual bound: 100 n log (n + 1). I use a big constant because why not? It doesn't matter for asymptotic notation and can only make my job easier. I use log (n + 1) instead of log n because log n is undefined for n = 0, and 0 log (0 + 1) = 0 covers the base case.
Third, let's try to verify the inductive step. Assuming that C(i) ≤ 100 i log(i + 1) for all i ∈ {0, ..., n-1},
C(n) = n-1 + (1/n) * sum_{i=0}^{n-1} (C(i) + C(n-1-i))          [by definition]
     = n-1 + (2/n) * sum_{i=0}^{n-1} C(i)                        [by symmetry]
     ≤ n-1 + (2/n) * sum_{i=0}^{n-1} 100 i log(i + 1)            [by the inductive hypothesis]
     ≤ n-1 + (2/n) * ∫_0^n 100 x log(x + 1) dx                   [upper Darboux sum]
     = n-1 + (2/n) * (50 (n^2 - 1) log(n + 1) - 25 (n - 2) n)    [WolframAlpha FTW, I forgot how to integrate]
     = n-1 + 100 (n - 1/n) log(n + 1) - 50 (n - 2)
     ≤ 100 (n - 1/n) log(n + 1) - 49 n + 100.
Well that's irritating. It's almost what we want, but that + 100 messes up the induction a little bit. We can extend the base cases to n = 1 and n = 2 by inspection and then assume that n ≥ 3 to finish the bound:
C(n) ≤ 100 (n - 1/n) log(n + 1) - 49 n + 100
     ≤ 100 n log(n + 1) - 49 n + 100
     ≤ 100 n log(n + 1). [since n ≥ 3 implies 49 n ≥ 100]
Once again, no one would publish such a messy derivation. I wanted to show how one could work it out formally without knowing the answer ahead of time.
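For reassurance (this is just a numeric spot check, not part of the proof), you can evaluate the recurrence directly and compare it against 100 n log(n + 1), with log meaning the natural logarithm as in the integral above; the cap of 2000 is an arbitrary cutoff:

#include <stdio.h>
#include <math.h>

#define N 2000

int main(void) {
    static double C[N + 1];
    double prefix = 0.0;                   /* running value of C(0) + ... + C(n-1) */
    C[0] = 0.0;
    for (int n = 1; n <= N; n++) {
        prefix += C[n - 1];
        C[n] = (n - 1) + 2.0 * prefix / n; /* C(n) = n-1 + (2/n) * sum C(i) */
    }
    for (int n = 1; n <= N; n *= 10)
        printf("n=%-5d C(n)=%10.1f   100 n ln(n+1)=%10.1f\n",
               n, C[n], 100.0 * n * log(n + 1.0));
    return 0;
}

Compile with something like cc check.c -lm; the bound is extremely loose, which is exactly why the big constant was harmless.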
Second way
How else can we derive how many comparisons QuickSort does in expectation? Another possibility is to exploit the linearity of expectation by summing, over each pair of elements, the probability that those elements are compared. What is that probability? We observe that a pair {i, j} is compared if and only if, at the leaf-most invocation where both i and j are still in the subarray, either i or j is chosen as the pivot. This happens with probability 2/(j+1 - i), since the pivot must be i, j, or one of the j - (i+1) elements whose values lie between them. Therefore,
C(n) = sum_{i=1}^{n} sum_{j=i+1}^{n} 2/(j+1-i)
     = sum_{i=1}^{n} sum_{d=2}^{n+1-i} 2/d
     = sum_{i=1}^{n} 2 (H(n+1-i) - 1)       [where H(k) is the k-th harmonic number]
     = 2 sum_{i=1}^{n} H(i) - 2n
     = 2 (n + 1) (H(n+1) - 1) - 2n.         [WolframAlpha FTW again]
Since H(n) is Θ(log n), this is Θ(n log n), as expected.
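As a cross-check of that closed form (again just numerics; n = 1000 is an arbitrary test value), you can compare it with the value produced by the recurrence from the first way:

#include <stdio.h>

int main(void) {
    int n = 1000;

    /* closed form: 2 (n + 1) (H(n+1) - 1) - 2n */
    double H = 0.0;
    for (int k = 1; k <= n + 1; k++) H += 1.0 / k;   /* H(n+1) */
    double closed = 2.0 * (n + 1) * (H - 1.0) - 2.0 * n;

    /* same quantity from the recurrence C(n) = n-1 + (2/n) * sum C(i) */
    double sum = 0.0, C = 0.0;
    for (int m = 1; m <= n; m++) {
        sum += C;                     /* now holds C(0) + ... + C(m-1) */
        C = (m - 1) + 2.0 * sum / m;
    }
    printf("closed form: %.6f   recurrence: %.6f\n", closed, C);
    return 0;
}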
Here is an asymptotic notation problem:
Let g(n) = 27n^2 + 18n and let f(n) = 0.5n^2 - 100. Find positive constants n0, c1 and c2 such that c1 f(n) ≤ g(n) ≤ c2 f(n) for all n ≥ n0.
Is this solving for theta? Do I prove 27n^2 + 18n = Ω(0.5n^2 - 100) and then prove 27n^2 + 18n = O(0.5n^2 - 100)?
In that case wouldn't c1 and c2 be 1 and 56 respectively, and wouldn't n0 be the larger of the two n0 values that I find?
There are infinitely many solutions. We just need to fiddle with algebra to find one.
The first thing to note is that both g and f are positive for all n ≥ 15. In particular, g(15) = 6345 and f(15) = 12.5. (All smaller non-negative values of n make f < 0.) This implies n0 = 15 might work fine, as would any larger value.
Next note g'(n) = 54n + 18 and f'(n) = n.
Since f(15) < g(15) and f'(n) < g'(n) for all n >= 15, choose c1 = 1.
Proof that this is a good choice:
0.5n^2 - 100 ≤ 27n^2 + 18n <=> 26.5n^2 + 18n + 100 ≥ 0
...obviously true for all n ≥ 15.
What about c2? First, we want c2·f(n) to grow at least as fast as g: c2·f'(n) ≥ g'(n), or c2·n ≥ 54n + 18 for n ≥ 15. So choose c2 ≥ 56, which obviously makes this true.
Unfortunately, c2 = 56 doesn't quite work with n0 = 15. There's the other criterion to meet: c2·f(15) ≥ g(15). For that, 56 isn't big enough: 56·f(15) is only 700, while g(15) is much bigger.
It turns out by substitution in the relation above and a bit more algebra that c2 = 508 does the trick.
Proof:
27n^2 + 18n ≤ 508 * (0.5n^2 - 100)
<=> 27n^2 + 18n ≤ 254n^2 - 50800
<=> 227n^2 - 18n - 50800 ≥ 0
At n = 15, this is true by simple substitution. For all bigger values of n, note that the derivative of the left-hand side, 454n - 18, is positive for all n ≥ 15, so the left-hand side is non-decreasing over that domain. That makes the relation hold for those n as well.
To summarize, we've shown that n0=15, c1=1, and c2=508 is one solution.
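If you want to double-check those constants numerically (a brute-force spot check only; the argument above is what covers all n), something like this will do, with the upper limit of one million chosen arbitrarily:

#include <stdio.h>

int main(void) {
    const double c1 = 1.0, c2 = 508.0;    /* constants derived above, n0 = 15 */
    for (int n = 15; n <= 1000000; n++) {
        double g = 27.0*n*n + 18.0*n;
        double f = 0.5*n*n - 100.0;
        if (!(c1*f <= g && g <= c2*f)) {
            printf("fails at n=%d\n", n);
            return 1;
        }
    }
    printf("c1*f(n) <= g(n) <= c2*f(n) holds for all 15 <= n <= 1000000\n");
    return 0;
}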
I have a few asymptotic notation problems I do not entirely grasp.
So when proving asymptotic complexity, I understand the process of finding a constant c and the n0 value beyond which the notation holds. So, for example:
Prove 7n + 4 = Ω(n)
In such a case we would pick a constant c lower than 7, since this is Big Omega. Picking 6 would result in
7n + 4 >= 6n
n + 4 >= 0
n >= -4
But since n0 cannot be negative, we pick a positive integer, so n0 = 1.
But what about a problem like this:
Prove that n^3 - 91n^2 - 7n - 14 = Ω(n^3).
I picked 1/2 as the constant, reaching
(1/2)n^3 - 91n^2 - 7n - 14 >= 0.
But I am unsure how to continue. Also, there is a problem like this, which I think concerns theta:
Let g(n) = 27n^2 + 18n and let f(n) = 0.5n^2 - 100. Find positive constants n0, c1 and c2 such
that c1 f(n) ≤ g(n) ≤ c2 f(n) for all n ≥ n0.
In such a case am I performing two separate operations here, one big O comparison and one Big Omega comparison, so that there is a theta relationship, or tight bound? If so, how would I go about that?
To show n^3 - 91n^2 - 7n - 14 is in Ω(n^3), we need to exhibit some numbers n0 and c such that, for all n ≥ n0:
n^3 - 91n^2 - 7n - 14 ≥ cn^3
You've chosen c = 0.5, so let's go with that. Rearranging gives:
n^3 - 0.5n^3 ≥ 91n^2 + 7n + 14
Multiplying both sides by 2 and simplifying:
182n^2 + 14n + 28 ≤ n^3
For all n ≥ 1, we have:
182n^2 + 14n + 28 ≤ 182n^2 + 14n^2 + 28n^2 = 224n^2
And when n ≥ 224, we have 224n^2 ≤ n^3. Therefore, the choice of n0 = 224 and c = 0.5 demonstrates that the original function is in Ω(n^3).
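Here's a quick brute-force confirmation of that choice (only a spot check up to an arbitrary cutoff; for larger n the n^3 term only pulls further ahead):

#include <stdio.h>

int main(void) {
    /* check n^3 - 91n^2 - 7n - 14 >= 0.5 n^3 for 224 <= n <= 100000 */
    for (int n = 224; n <= 100000; n++) {
        double lhs = (double)n*n*n - 91.0*n*n - 7.0*n - 14.0;
        double rhs = 0.5 * (double)n*n*n;
        if (lhs < rhs) {
            printf("fails at n=%d\n", n);
            return 1;
        }
    }
    printf("holds for every n in [224, 100000]\n");
    return 0;
}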
How do you work this out? Do you get c first, as the ratio of the two functions, and then use that ratio to find the range of n? How can you tell? Please explain, I'm really lost. Thanks.
Example 1: Prove that running time T(n) = n^3 + 20n + 1 is O(n^3)
Proof: by the Big-Oh definition,
T(n) is O(n^3) if there are constants c and n0 such that T(n) ≤ c·n^3 for all n ≥ n0.
Let us check this condition:
if n^3 + 20n + 1 ≤ c·n^3 then 1 + 20/n^2 + 1/n^3 ≤ c.
Therefore,
the Big-Oh condition holds for n ≥ n0 = 1 and c ≥ 22 (= 1 + 20 + 1). Larger values of n0 result in smaller factors c (e.g., for n0 = 10, c ≥ 1.201, and so on), but in any case the above statement is valid.
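One way to see the "larger n0 allows a smaller c" effect is to print the ratio (n^3 + 20n + 1)/n^3 = 1 + 20/n^2 + 1/n^3 for growing n; this is just a throwaway illustration:

#include <stdio.h>

int main(void) {
    for (int n = 1; n <= 100000; n *= 10) {
        double ratio = 1.0 + 20.0 / ((double)n * n) + 1.0 / ((double)n * n * n);
        printf("n=%-7d smallest usable c = %.6f\n", n, ratio);
    }
    return 0;
}

Every row gives a valid (c, n0) pair; they all witness the same O(n^3) bound.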
I think the issue you're running into is that you aren't thinking of LARGE numbers. Hence, let's take a counterexample:
T(n) = n^4 + n
and let's assume that we think it's O(N^3) instead of O(N^4). What you could see is
c = n + 1/n^2
which means that c, a constant, is actually c(n), a function dependent upon n. Taking N to a really big number shows that no matter what, c == c(n), a function of n, so it can't be O(N^3).
What you want is in the limit as N goes to infinity, everything but a constant remains:
c = 1 + 1/n^3
You could object that this is still c(n)! But as N gets really, really big, 1/n^3 goes to zero. Hence, with very large N, in the case of declaring T(n) to be O(N^4), c approaches 1, i.e., a constant!
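Numerically, the contrast looks like this (just a toy illustration of the two ratios; the cutoff is arbitrary):

#include <stdio.h>

int main(void) {
    /* T(n) = n^4 + n: for O(N^3) we'd need c >= n + 1/n^2 (unbounded),
       for O(N^4) we only need c >= 1 + 1/n^3 (approaches 1) */
    for (double n = 10.0; n <= 1e6; n *= 100.0)
        printf("n=%-9.0f c for N^3: %.3f   c for N^4: %.6f\n",
               n, n + 1.0 / (n * n), 1.0 + 1.0 / (n * n * n));
    return 0;
}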
Does that help?
I want to find out the time complexity of the program using recurrence equations.
That is:
int g(int x);   /* forward declaration: f and g are mutually recursive */

int f(int x)
{
    if (x < 1) return 1;              /* base case */
    else return f(x - 1) + g(x);
}

int g(int x)
{
    if (x < 2) return 1;              /* base case */
    else return f(x - 1) + g(x / 2);
}
I wrote its recurrence equation and tried to solve it, but it keeps getting more complex:
T(n) = T(n-1) + g(n) + c
     = T(n-2) + g(n-1) + g(n) + c + c
     = T(n-3) + g(n-2) + g(n-1) + g(n) + c + c + c
     = T(n-4) + g(n-3) + g(n-2) + g(n-1) + g(n) + c + c + c + c
     ...
After the k-th expansion:
     = kc + g(n) + g(n-1) + g(n-2) + ... + g(n-k+1) + T(n-k)
Let the argument reach 1 at the k-th step: then n - k = 1, so k = n - 1.
Now I end up with this:
T(n) = (n-1)c + g(n) + g(n-1) + g(n-2) + ... + g(2) + T(1)
I'm not able to solve it further.
Anyway, if we count the number of function calls in this program, it can easily be seen that the time complexity is exponential, but I want to prove it using the recurrence. How can that be done?
The explanation in Answer 1 looks correct; I did similar work.
The most difficult task for this code is writing its recurrence equation. I have drawn another diagram and identified some patterns; I think this diagram may help in finding the right recurrence.
And I came up with this equation, though I'm not sure whether it is right. Please help.
T(n) = 2*T(n-1) + c*log n
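In the meantime I measured the growth empirically: below is the same code with a global call counter added (otherwise unchanged), and the printed ratio shows the number of calls roughly doubling every time x grows by 1, which supports the exponential guess. The cutoff of 22 is arbitrary, just to keep the run fast.

#include <stdio.h>

static long long calls;               /* total calls to f and g */

int g(int x);                          /* forward declaration */

int f(int x)
{
    calls++;
    if (x < 1) return 1;
    else return f(x - 1) + g(x);
}

int g(int x)
{
    calls++;
    if (x < 2) return 1;
    else return f(x - 1) + g(x / 2);
}

int main(void)
{
    long long prev = 1;
    for (int x = 1; x <= 22; x++) {
        calls = 0;
        f(x);
        printf("x=%2d  calls=%10lld  ratio=%.3f\n", x, calls, (double)calls / prev);
        prev = calls;
    }
    return 0;
}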
Ok, I think I have been able to prove that f(x) = Theta(2^x) (note that the time complexity is the same). This also proves that g(x) = Theta(2^x) as f(x) > g(x) > f(x-1).
First as everyone noted, it is easy to prove that f(x) = Omega(2^x).
Now we have the relation that f(x) <= 2 f(x-1) + f(x/2) (since f(x) > g(x))
We will show that, for sufficiently large x, there is some constant K > 0 such that
f(x) <= K*H(x), where H(x) = (2 + 1/x)^x
This implies that f(x) = Theta(2^x), as H(x) = Theta(2^x), which itself follows from the fact that H(x)/2^x -> sqrt(e) as x-> infinity (wolfram alpha link of the limit).
Now (warning: heavier math, perhaps cs.stackexchange or math.stackexchange is better suited)
according to wolfram alpha (click the link and see series expansion near x = infinity),
H(x) = exp(x ln(2) + 1/2 + O(1/x))
And again, according to wolfram alpha (click the link (different from above) and see the series expansion for x = infinity), we have that
H(x) - 2H(x-1) = [1/2x + O(1/x^2)]exp(x ln(2) + 1/2 + O(1/x))
and so
[H(x) - 2H(x-1)]/H(x/2) -> infinity as x -> infinity
Thus, for sufficiently large x (say x > L) we have the inequality
H(x) >= 2H(x-1) + H(x/2)
Now there is some K (dependent only on L (for instance K = f(2L))) such that
f(x) <= K*H(x) for all x <= 2L
Now we proceed by (strong) induction (you can revert to natural numbers if you want to)
f(x+1) <= 2f(x) + f((x+1)/2)
By induction, the right side is
<= 2*K*H(x) + K*H((x+1)/2)
And we proved earlier that
2*H(x) + H((x+1)/2) <= H(x+1)
Thus f(x+1) <= K * H(x+1)
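If you'd rather not take WolframAlpha's word for that first limit, it's easy to eyeball numerically, since H(x)/2^x = (2 + 1/x)^x / 2^x = (1 + 1/(2x))^x, and sqrt(e) ≈ 1.6487 (a throwaway check, not part of the proof; compile with -lm):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* (1 + 1/(2x))^x -> sqrt(e) as x -> infinity */
    for (double x = 10.0; x <= 1e6; x *= 10.0)
        printf("x=%-9.0f (1 + 1/(2x))^x = %.6f\n", x, pow(1.0 + 1.0 / (2.0 * x), x));
    printf("sqrt(e)            = %.6f\n", sqrt(exp(1.0)));
    return 0;
}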
Using memoisation, both functions can easily be computed in O(n) time. But the program as written takes at least 2^n time, and thus is a very inefficient way of computing f(n) and g(n).
To prove that the program takes at most O((2+epsilon)^n) time for any epsilon > 0:
Let F(n) and G(n) be the number of function calls that are made in evaluating f(n) and g(n), respectively. Clearly (counting the addition as 1 function call):
F(0) = 1; F(n) = F(n-1) + G(n) + 1
G(1) = 1; G(n) = F(n-1) + G(n/2) + 1
Then one can prove:
F and G are monotonic
F > G
Define H(1) = 2; H(n) = 2 * H(n-1) + H(n/2) + 1
clearly, H > F
for all n, H(n) > 2 * H(n-1)
hence H(n/2) / H(n-1) -> 0 for sufficiently large n
hence H(n) < (2 + epsilon) * H(n-1) for all epsilon > 0 and sufficiently large n
hence H in O((2 + epsilon)^n) for any epsilon > 0
(Edit: originally I concluded here that the upper bound is O(2^n). That is incorrect, as nhahtdh pointed out, but see below.)
So this is the best I can prove this way. Because G < F < H, they are also in O((2 + epsilon)^n) for any epsilon > 0.
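For what it's worth, you can tabulate F and G straight from those recurrences (using integer division for n/2, as in the original code; doubles are used only to avoid overflow) and watch F(n)/F(n-1) creep down toward 2, which is exactly what the epsilon bookkeeping captures:

#include <stdio.h>

#define N 200

int main(void) {
    /* F(0)=1, F(n)=F(n-1)+G(n)+1 ; G(1)=1, G(n)=F(n-1)+G(n/2)+1 */
    static double F[N + 1], G[N + 1];
    F[0] = 1; G[1] = 1;
    F[1] = F[0] + G[1] + 1;
    for (int n = 2; n <= N; n++) {
        G[n] = F[n - 1] + G[n / 2] + 1;
        F[n] = F[n - 1] + G[n] + 1;
    }
    int samples[] = {10, 25, 50, 100, 200};
    for (int i = 0; i < 5; i++) {
        int n = samples[i];
        printf("n=%3d  F(n)/F(n-1) = %.6f\n", n, F[n] / F[n - 1]);
    }
    return 0;
}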
Postscript (after seeing Mr Knoothe's solution): Because, i.m.h.o., a good mathematical proof gives insight rather than lots of formulas, and SO exists for all those future generations (hi gals!):
For many algorithms, calculating f(n+1) involves twice (thrice, ...) the amount of work for f(n), plus something more. If this something more becomes relatively less with increasing n (which is often the case), using a fixed epsilon like above is not optimal.
Replacing the epsilon above by some decreasing function ε(n) of n will in many cases (if ε decreases fast enough, say ε(n) = 1/n) yield an upper bound O((2 + ε(n))^n) = O(2^n).
Let f(0)=0 and g(0)=0
From the function we have,
f(x) = f(x - 1) + g(x)
g(x) = f(x - 1) + g(x/2)
Substituting g(x) in f(x) we get,
f(x) = f(x-1) + f(x-1) + g(x/2)
∴ f(x) = 2f(x-1) + g(x/2)
Expanding this we get,
f(x) = 2f(x-1)+f(x/2-1)+f(x/4-1)+ ... + f(1)
Let s(x) be a function defined as follows,
s(x) = 2s(x-1)
Now clearly f(x) = Ω(s(x)).
And s(x) is Θ(2^x).
Therefore f(x) = Ω(2^x).
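To see the exponential growth concretely, here is a memoised computation of the values, using the base cases from the original code (f returns 1 for x < 1, g returns 1 for x < 2) rather than the f(0) = g(0) = 0 simplification above; the printed ratio f(x)/2^x stays bounded and levels off, which is consistent with growth on the order of 2^x:

#include <stdio.h>

#define N 60

int main(void) {
    /* memoised values of f and g from the original code:
       f(x) = 1 for x < 1, g(x) = 1 for x < 2, integer division in g(x/2) */
    static double f[N + 1], g[N + 1];
    f[0] = 1.0;
    for (int x = 1; x <= N; x++) {
        g[x] = (x < 2) ? 1.0 : f[x - 1] + g[x / 2];
        f[x] = f[x - 1] + g[x];
    }
    double pow2 = 1.0;
    for (int x = 1; x <= N; x++) {
        pow2 *= 2.0;
        if (x % 10 == 0)
            printf("x=%2d  f(x)/2^x = %.6f\n", x, f[x] / pow2);
    }
    return 0;
}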
I think it is clear that f(n) > 2^n, because f(n) > h(n) = 2h(n-1) = 2^n.
Now I claim that for every n there is an ε such that:
f(n) < (2+ε)^n. To see this, let's do it by induction, but to make it easier to follow I'll first use ε = 1 to show f(n) <= 3^n, and then extend it.
We will use strong induction: suppose for every m < n, f(m) < 3^m; then we have:
f(n) = 2f(n-1) + f(n/2 - 1) + f(n/4 - 1) + ... + f(1-1)
but for this part:
A = f(n/2 - 1) + f(n/4 - 1) + ... + f(1-1)
we have:
f(n/2) = 2f(n/2 - 1) + f(n/4 - 1) + ... + f(1-1)  ==>
A <= f(n/2) [1]
So we can rewrite f(n):
f(n) = 2f(n-1) + A <= 2f(n-1) + f(n/2),
Now let's get back to our claim:
f(n) < 2*3^(n-1) + 3^(n/2) ==>
f(n) < 2*3^(n-1) + 3^(n-1) ==>
f(n) < 3^n. [2]
By [2], the proof that f(n) ∈ O(3^n) is complete.
But if you want to extend this to the form (2+ε)^n, just use [1] in place of that inequality; then we will have
ε > 1/(2+ε)^(n/2 - 1)  ⇒  f(n) < (2+ε)^n. [3]
Also, by [3], you can say that for every n there is an ε such that f(n) < (2+ε)^n; in fact, there is a constant ε such that for n > n0, f(n) ∈ O((2+ε)^n). [4]
Now we can use WolframAlpha like @Knoothe, by setting ε = 1/n; then we have:
f(n) < (2 + 1/n)^n, which gives f(n) < e·2^n, and by our simple lower bound at the start we get f(n) ∈ Θ(2^n). [5]
P.S.: I didn't calculate epsilon exactly; you can do it simply with pen and paper. This epsilon may not be exactly right, but it is easy to find; if it turns out to be hard, tell me and I'll write it out.