What exactly does epsilon represent in the master theorem?

I understand the substitution method and recursion trees. I understand how to use the master theorem, but I don't understand its proof or an intuitive explanation of it. Specifically, I don't understand where the epsilon value in the theorem comes from.
The master theorem states: let a >= 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined by the recurrence T(n) = a*T(n/b) + f(n). Then:
1. If f(n) = O(n^(log_b(a) - ε)) for some constant ε > 0, then T(n) = Θ(n^log_b(a)).
2. If f(n) = Θ(n^log_b(a)), then T(n) = Θ(n^log_b(a) * lg(n)).
3. If f(n) = Ω(n^(log_b(a) + ε)) for some constant ε > 0, and if a*f(n/b) <= c*f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
I am studying from CLRS 3rd edition, page 97. I want to know what the epsilon value represents, how we come up with epsilon in the proof, and why we need it. Could someone also suggest another resource or book that covers this proof and the proof of the extended master theorem?

You don't need the epsilon to state the Master Theorem. Wikipedia gives another formulation:
With c_crit = log_b(a):
if f(n) = O(n^c) where c < c_crit, then T(n) = Θ(n^(c_crit));
if f(n) = Θ(n^(c_crit) * log(n)^k) for any k >= 0, then T(n) = Θ(n^(c_crit) * log(n)^(k+1));
if f(n) = Ω(n^c) where c > c_crit, and a*f(n/b) <= k*f(n) for some k < 1 and sufficiently large n, then T(n) = Θ(f(n)).
Essentially, the epsilon in your formulation is there to guarantee a strict separation from the critical exponent: log_b(a) - ε < log_b(a) and log_b(a) + ε > log_b(a) when ε > 0, which plays exactly the same role as requiring c < c_crit and c > c_crit in the Wikipedia formulation. In other words, f(n) must be polynomially smaller (case 1) or polynomially larger (case 3) than n^log_b(a); being smaller or larger by, say, only a logarithmic factor is not enough.
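To make the case boundaries concrete, here is a minimal Python sketch (my own illustration, not part of the original answer) that classifies a recurrence when the driving function has the special form f(n) = n^p * (log n)^q; the helper name master_case and its parameters are hypothetical.

    import math

    def master_case(a, b, p, q=0):
        # Classify T(n) = a*T(n/b) + f(n) for f(n) = n^p * (log n)^q,
        # using the extended (log-factor) version of case 2.
        c_crit = math.log(a, b)                      # critical exponent log_b(a)
        if math.isclose(p, c_crit):
            if q >= 0:
                return f"case 2: T(n) = Theta(n^{c_crit:g} * (log n)^{q + 1})"
            return "gap: f is below n^log_b(a) by only a log factor -> theorem is silent"
        if p < c_crit:
            # f is polynomially smaller; epsilon can be taken as c_crit - p
            return f"case 1: T(n) = Theta(n^{c_crit:g})"
        # p > c_crit: f is polynomially larger; epsilon can be taken as p - c_crit.
        # (For this family of f the regularity condition holds automatically.)
        return "case 3: T(n) = Theta(f(n))"

    print(master_case(9, 3, 1))      # T(n) = 9T(n/3) + n        -> case 1: Theta(n^2)
    print(master_case(4, 2, 2, -1))  # T(n) = 4T(n/2) + n^2/lg n -> gap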

Related

Can't understand why n0 isn't 1 in this big-O definition?

So I've been solving this exercise that asks to prove, by the big-O definition, that
2^(2log(n)) = O(n^2)
I realized that 2^(2log(n)) = n^2,
and I found that c = 1 and n0 = 1, because n^2 <= 1*n^2 holds from n >= 1.
But why did the teacher choose n0 = 2 in the answer?
Does it matter? Or can it be 1 as well?
Is there any trick to find c and n0 easily in these kinds of questions?
As you correctly noticed, the two sides are exactly equal (by the logarithm/power rules), so we can choose c = 1 and an arbitrary n0, because the definition only asks for some n0 > 0 such that for all n > n0:
n^2 <= c*n^2 = n^2. And of course this is true for every value of n0.
So it does not matter whether your teacher chose n0 = 2 or you chose n0 = 1; you are both correct by the definition.
According to the definition of Big O:
f(n) = O(g(n)) if there exists a positive integer n0 and a positive constant c, such that f(n) ≤ c*g(n) ∀ n≥n0
From your question, it is unclear what the base of the log function is.
Let f(n) = 2^(2log(n)) and g(n) = n^2.
Let us consider 3 following cases:
Case 1: base = 2
f(n) evaluates to n^2 and therefore it is clear that c=1 and n0=1.
Case 2: base = 10
f(n) = 2^(2log10(n)) ~ n^(0.602)
In this case, we can also say that c=1 and n0=1.
As a check, compare the functions x^2 and x^0.602: for every x >= 1 we have x^0.602 <= x^2, so f(n) <= 1*g(n) already holds from n = 1.
Case 3: base = e
f(n) = 2^(2loge(n)) ~ n^(1.3862)
In this case as well, we can say that c=1 and n0=1.
As a check, x^1.3862 <= x^2 for every x >= 1, so again f(n) <= 1*g(n) holds from n = 1.
Therefore, in all the cases, you are correct.
PS: There is a strong possibility that you and your professor are assuming different bases for the logarithm. But even in that case, as long as the base is >= 2, I don't see anything wrong with taking n0 = 1.
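As a quick numeric counterpart to the three cases above (my own check, not from the original answer; the sampling range is arbitrary):

    import math

    # f(n) = 2^(2*log_base(n)) equals n^(2*log_base(2)); for base >= 2 that
    # exponent is <= 2, so f(n) <= 1*n^2 already holds from n = 1.
    for base in (2, 10, math.e):
        exponent = 2 * math.log(2, base)
        ok = all(n**exponent <= 1 * n**2 for n in range(1, 10_000))
        print(f"base={base:8.5f}  f(n) = n^{exponent:.4f}  f(n) <= n^2 for n in [1, 10^4): {ok}")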

How to solve a problem on relative asymptotic growth (table) from CLRS?

I struggle to fill this table in, even though I took calculus recently and am good at math. The chapter only explains how to deal with lim(n^k/c^n), and I have no idea how to compare the other functions. I checked the solution manual and there is no explanation, only a table of answers, which provides little insight.
When I solve these I don't really think about limits -- I lean on a couple facts and some well-known properties of big-O notation.
Fact 1: for all functions f and g and all exponents p > 0, we have f(n) = O(g(n)) if and only if f(n)^p = O(g(n)^p), and likewise with o, Ω, ω, and Θ respectively. This has a straightforward proof from the definition; you just have to raise the constant c to the power p as well.
Fact 2: for all exponents ε > 0, the function lg(n) is o(n^ε). This follows from l'Hôpital's rule for limits: lim lg(n)/n^ε = lim (lg(e)/n)/(ε·n^(ε−1)) = (lg(e)/ε)·lim n^(−ε) = 0.
Fact 3:
If f(n) ≤ g(n) + O(1), then 2^f(n) = O(2^g(n)).
If f(n) ≤ g(n) − ω(1), then 2^f(n) = o(2^g(n)).
If f(n) ≥ g(n) − O(1), then 2^f(n) = Ω(2^g(n)).
If f(n) ≥ g(n) + ω(1), then 2^f(n) = ω(2^g(n)).
Fact 4: lg(n!) = Θ(n lg(n)). The proof uses Stirling's approximation.
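Not part of the original answer, but a quick numeric sanity check of Facts 2 and 4 (the exponent ε = 0.1 and the sample points are arbitrary):

    import math

    # Fact 2: lg(n) / n^eps -> 0 for any eps > 0 (very slowly for small eps).
    eps = 0.1
    for n in (10**3, 10**12, 10**48, 10**192):
        print(f"n = 10^{round(math.log10(n)):>3}:  lg(n)/n^eps = {math.log2(n) / n**eps:.4g}")

    # Fact 4: lg(n!) / (n*lg(n)) -> 1 by Stirling, so lg(n!) = Theta(n*lg(n)).
    for n in (10, 100, 10_000):
        ratio = math.lgamma(n + 1) / math.log(2) / (n * math.log2(n))
        print(f"n = {n:>6}:  lg(n!)/(n*lg(n)) = {ratio:.4f}")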
To solve (a), use Fact 1 to raise both sides to the power of 1/k and apply Fact 2.
To solve (b), rewrite n^k = 2^(k·lg(n)) and c^n = 2^(n·lg(c)), prove that n·lg(c) − k·lg(n) = ω(1), and apply Fact 3.
(c) is special. n^sin(n) oscillates between roughly n^(−1) and n^1 as sin(n) swings between −1 and 1. Since n^(−1) is o(√n) and n is ω(√n), no asymptotic relation holds: that's a solid row of NO.
To solve (d), observe that n ≥ n/2 + ω(1) and apply Fact 3.
To solve (e), rewrite n^lg(c) = 2^(lg(n)·lg(c)) = 2^(lg(c)·lg(n)) = c^lg(n).
To solve (f), use Fact 4 and find that lg(n!) = Θ(n·lg(n)) = Θ(lg(n^n)).
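And a tiny check of the identity used in (e) (my own; c = 3 is an arbitrary sample constant):

    import math

    # n^lg(c) and c^lg(n) are literally the same function (lg = log base 2).
    c = 3.0
    for n in (2.0, 16.0, 1024.0):
        print(f"n = {n:>6}:  n^lg(c) = {n ** math.log2(c):.6f}   c^lg(n) = {c ** math.log2(n):.6f}")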

Asymptotic Notation: what is n₀ in the formula, and how do we find the constant?

I was studying the topic of asymptotic notation. I reckon its formula is simple, yet it tells me little, and there are a couple of things I don't understand.
When we say
f(n) <= c.g(n) where n >= n₀
We don't know the values of c and n₀ at first, but by dividing f(n) or g(n) we get the value of c. (Here is where the confusion lies.)
First question: how do we decide which side to divide by, f(n) or g(n)?
Suppose we have to prove:
2n^2 = O(n^3)
Here f(n) = 2n^2 and g(n) = n^3,
which we write as:
2n^2 <= c·n^3
Now in the notes I have read, they divide by 2n^2 to get the value of c, and by doing that we get c = 1.
But if we divide by n^3 [which I don't know whether we are allowed to do or not], we get c = 2.
How do we know which one to divide by?
Second question: where does n₀ come from, and what is its task?
By the formula we know n >= n₀, which means whatever n we take has to be at least n₀.
But I am confused about where we actually use n₀. Why is it needed?
By just finding c and n, can't we conclude whether
n^2 = O(n^3) or not?
Would anyone like to address this? Many thanks in advance.
Please don't snub me or downvote if I ask anything stupid. Any useful link which covers all of this would be enough as well :3
I have gone through the following links before posting this question; this is what I understand so far, and here are those links:
http://openclassroom.stanford.edu/MainFolder/VideoPage.php?course=IntroToAlgorithms&video=CS161L2P8&speed=
http://faculty.cse.tamu.edu/djimenez/ut/utsa/cs3343/lecture3.html
https://sites.google.com/sites/algorithmss15
From the second url in your question:
Let's define big-Oh more formally:
O(g(n)) = { the set of all f such that there exist positive constants c and n0 satisfying 0 <= f(n) <= cg(n) for all n >= n0 }.
This means that for f(n) = 4*n*n + 135*n*log(n) + 1e8*n, the big-O is O(n*n),
because for a large enough c and n0 this is true:
4*n*n + 135*n*log(n) + 1e8*n = f(n) <= c*n*n
In this particular case [c, n0] can be, for example, [6, 1e8], because (this is of course not a valid mathematical proof, but I hope it is "obvious" from it, taking log base 10):
f(1e8) = 4*1e16 + 135*8*1e8 + 1e16 = 5*1e16 + 1080*1e8 <= 6*1e16 = 6*1e8*1e8 = 6*n*n. There are of course many more possible pairs [c, n0] for which f(n) <= c*n*n holds, but you only need to find one such pair to prove that f(n) is O(n*n).
As you can see, for n = 1 you would need quite a huge c (around 1e8), so at first glance f(n) may look much bigger than n*n, but in asymptotic notation you don't care about the first few initial values, only about the behaviour beyond some boundary. That boundary is some [c, n0]. If you can find such a boundary ([6, 1e8]), then QED: "f(n) has big-O of n*n".
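A quick numeric spot-check of that [6, 1e8] pair (my own addition, not in the original answer; log is taken base 10, as in the arithmetic above):

    import math

    c, n0 = 6, 10**8
    f = lambda n: 4*n*n + 135*n*math.log10(n) + 1e8*n
    # spot-check f(n) <= c*n*n at n0 and a few larger points
    for n in (n0, 10*n0, 1000*n0):
        print(f"n = {n:.0e}:  f(n) = {f(n):.3e}  c*n*n = {c*n*n:.3e}  ok = {f(n) <= c*n*n}")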
The n >= n₀ part means that whatever the statement claims may be false for the first finitely many values n' < n₀, but from n₀ onwards it is true for all the remaining (bigger) integers.
It says that you don't care about the first few integers ("first few" can be as "little" as 1e400, or 1e400000, ...etc... from the theory point of view); you only care about big enough n, namely n > n₀.
Ultimately it means that in big-O notation you usually write the simplest and slowest-growing function that has the same asymptotic behaviour as the examined f(n).
For example, for any polynomial f(n) = ∑ a_i·n^i with i = 0..k, we have O(f(n)) = O(n^k).
So I threw away all the lower powers n^0 .. n^(k−1), as they stand no chance against n^k in the long run (for large n), and the leading coefficient a_k gets absorbed into a big enough constant c.
In case you are lost in that i,k,...:
f(n) = 34·n^4 + 23920392·n^2 has O(n^4),
as for large enough n the n^4 term will "eclipse" any value created from n^2. And 34·n^4 is only 34 times bigger than n^4 => 34 is a constant (it relates to c) and can be omitted from the big-O notation too.
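And a numeric illustration of that last example (my own; c = 35 is an arbitrary choice just above the leading coefficient 34):

    # For large enough n the n^4 term dominates, so 34n^4 + 23920392n^2 <= 35*n^4
    # from some n0 onwards (but not for small n -- that is exactly what n0 is for).
    f = lambda n: 34*n**4 + 23920392*n**2
    for n in (10, 1000, 10_000):
        print(f"n = {n:>6}:  f(n)/n^4 = {f(n)/n**4:>12,.1f}   f(n) <= 35*n^4: {f(n) <= 35*n**4}")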

The master method - why can't it solve T(n) = 4T(n/2) + n^2/logn?

The master method - why can't it solve T(n) = 4*T(n/2) + (n^2)/logn?
I realize it can solve recurrences of type T(n) = aT(n/b) + f(n)
On MIT OCW they mentioned that it couldn't solve the above recurrence though. Can someone provide an explanation as to why?
Answer for T(n) = 4T(n/2) + (n^2)/logn:
Here a = 4 and b = 2, so n^(log_b(a)) = n^2, and f(n) = (n^2)/logn.
Case 1 does not apply because f(n) != O(n^(2−e)) for any positive e.
Case 2 does not apply because f(n) != Θ(n^2 · log^k(n)) for any k >= 0 (you would need k = −1).
Case 3 does not apply because f(n) != Ω(n^(2+e)) for any positive e; f(n) is in fact smaller than n^2. (The regularity condition a·f(n/b) <= c·f(n) also fails for every c < 1.)
So we can't apply any case. Beyond this I'm no good really - and again I'm not 100% on this answer.
The following was prior to this edit, and assumed the question was with regards to T(n) = T(n/2) + n^(2logn)
I'm fairly sure that it is case 3 of the theorem.
Case 1 does not apply because f(n) != O(n^-e) for any positive e.
Case 2 does not apply because f(n) != Θ(log^k(n)) for any k >= 0
Case 3 does apply:
f(n) = Ω(n^e) for e = 1, and
a·f(n/b) <= c·f(n) for some c < 1 (e.g. c = 0.5 works for large enough n).
I may be wrong so check it and let me know!
Compare f(n) = (n^2)/logn with n^(log(a)/log(b)) = n^2:
ratio = (n^2/logn) / (n^2) = 1/logn
The ratio is only logarithmic, not polynomial, so this recurrence relation falls into the gap between case 2 and case 3. That is why the master theorem is not applicable to it.
The master theorem is applicable only when f(n) and n^(log_b(a)) differ by at least a polynomial factor n^e.
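To see that gap numerically (my own illustration; ε = 0.1 is an arbitrary choice): n^2/log(n) eventually outgrows n^(2−ε) for every ε > 0, yet never catches up with n^(2+ε), so it is polynomially comparable to n^2 in neither direction.

    import math

    eps = 0.1
    f = lambda n: n**2 / math.log2(n)
    for n in (10**4, 10**8, 10**32, 10**128):
        below = f(n) / n**(2 - eps)   # grows without bound -> f(n) is not O(n^(2-eps))
        above = f(n) / n**(2 + eps)   # shrinks to zero     -> f(n) is not Omega(n^(2+eps))
        print(f"n = 10^{round(math.log10(n)):>3}:  f/n^(2-eps) = {below:.4g}   f/n^(2+eps) = {above:.3e}")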
It's because in none of the three cases can you justify the required polynomial separation: as n -> infinity, n^e = ω(lg(n)) for every e > 0, so n^2/lg(n) lies strictly between n^(2−e) and n^(2+e); it cannot be "contained" by any of the three cases.
It might seem to be the 3rd case since f(n) is close to n^2, but case 3 requires f(n) to be polynomially larger, i.e. f(n) = Ω(n^(2+e)), and it should also satisfy the regularity condition (a·f(n/b) <= c·f(n) for some c in (0,1)).
Here the function satisfies neither requirement, so the master method fails.

What is an easy way for finding C and N when proving the Big-Oh of an Algorithm?

I'm starting to learn about Big-Oh notation.
What is an easy way for finding C and N0 for a given function?
Say, for example:
(n+1)^5, or n^5 + 5n^4 + 10n^3 + 10n^2 + 5n + 1
I know the formal definition for Big-Oh is:
Let f(n) and g(n) be functions mapping
nonnegative integers to real numbers.
We say that f(n) is O(g(n)) if there
is a real constant c > 0 and an
integer constant N0 >= 1
such that f(n) <= c·g(n) for every integer n >= N0.
My question is, what is a good, sure-fire method for picking values for c and N0?
For the given polynomial above, (n+1)^5, I have to show that it is O(n^5). So, how should I pick my c and N0 so that I can make the above definition true without guessing?
You can pick a constant c by adding up the absolute values of the coefficients of your polynomial. Since, for n >= 1,
| n^5 + 5n^4 + 10n^3 + 10n^2 + 5n^1 + 1n^0 | <= | n^5 + 5n^5 + 10n^5 + 10n^5 + 5n^5 + 1n^5 |
you can simplify the right-hand side to get
| n^5 + 5n^4 + 10n^3 + 10n^2 + 5n + 1 | <= | 32n^5 |
So c = 32, and this will always hold true for any n >= 1.
It's almost always possible to find a lower c by raising N0, but this method works, and you can do it in your head.
(The absolute value operations around the polynomials are to account for negative coefficients.)
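A small sketch of that rule of thumb (my own helper, not from the answer): take c to be the sum of the absolute coefficients and N0 = 1, then spot-check the bound.

    def big_o_constants(coeffs):
        # coeffs[i] is the coefficient of n^i; returns (c, N0) with
        # |p(n)| <= c * n^deg for all n >= N0, using c = sum of |coefficients|.
        return sum(abs(a) for a in coeffs), 1

    # p(n) = (n+1)^5 = n^5 + 5n^4 + 10n^3 + 10n^2 + 5n + 1
    coeffs = [1, 5, 10, 10, 5, 1]                 # index i -> coefficient of n^i
    c, n0 = big_o_constants(coeffs)
    deg = len(coeffs) - 1
    p = lambda n: sum(a * n**i for i, a in enumerate(coeffs))
    print(c, n0, all(p(n) <= c * n**deg for n in range(n0, 1000)))   # 32 1 True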
Usually the proof is done without picking concrete C and N0. Instead of proving f(n) <= C·g(n), you prove that f(n)/g(n) <= C.
For example, to prove n^3 + n is O(n^3), you do the following:
(n^3 + n) / n^3 = 1 + (n / n^3) = 1 + (1 / n^2) <= 2 for any n >= 1. Here you can pick any C >= 2 with N0 = 1.
You can check what lim abs(f(n)/g(n)) is when n -> +infinity, and that will give you the constant (g(n) is n^5 in your example, f(n) is (n+1)^5).
Note that the meaning of Big-O for x->+infinity is that if f(x) = O(g(x)), then f(x) "grows no faster than g(x)", so you just need to prove that lim abs(f(x)/g(x)) exists and is less than +infinity.
It's going to depend greatly on the function you are considering. However, for a given class of functions, you may be able to come up with an algorithm.
For instance, polynomials: if you set C to any value greater than the leading coefficient of the polynomial, then you can solve for N0.
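For the polynomial case, a sketch of that idea (my own helper find_n0, a numeric scan rather than a proof): pick any C above the leading coefficient and search upward for an N0 from which p(n) <= C·n^deg holds.

    def find_n0(coeffs, C, limit=10**6):
        # coeffs[i] is the coefficient of n^i; returns the first n0 such that
        # p(n) <= C*n^deg holds on a window of sample points starting at n0.
        deg = len(coeffs) - 1
        p = lambda n: sum(a * n**i for i, a in enumerate(coeffs))
        for n0 in range(1, limit):
            if all(p(n) <= C * n**deg for n in range(n0, n0 + 100)):  # heuristic window, not a proof
                return n0
        return None

    # p(n) = (n+1)^5 with C = 2 (leading coefficient is 1): prints 7
    print(find_n0([1, 5, 10, 10, 5, 1], C=2))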
Once you understand the magic there, you should also realize that big-O is a notation: you do not have to hunt for these constants in every problem you solve once you understand what is going on behind the letters. You can just manipulate the symbols according to the notation's rules.
There's no easy generic rule to determine actual values of N0 and c; you should recall your calculus knowledge to solve it.
The definition of big-O is entangled with the definition of a limit. It makes c satisfy:
c > lim |f(n)/g(n)| as n approaches +infinity.
If the sequence |f(n)/g(n)| is bounded above, you can take c above its limit superior (which always exists); if it is unbounded, then f is not O(g). After you have picked a concrete c, you will have no problem finding an appropriate N0.

Resources