In Big O questions that explicitly say to use the definition to prove or disprove the claim, is what I am doing correct?
For example, say the question is whether g(n) = O(f(n)). To prove it I was doing the following:
g(n) <= C·f(n)
g(n)/f(n) <= C, then set n = 1 and solve for C, which proves it.
The contradiction that I run into when doing this is when I approach a question of disproving this stuff,
for example
g(n) = O(f(n)). To disprove it I would do
g(n) >= C·f(n) and solve for C again. However, this leads me to believe that big O can be proved and disproved at once, which is 100% wrong.
Using real numbers:
(Proving)
n^2 + 3 = O(n^2)
(n^2 + 3)/n^2 <= C; assume n = 1, then C >= 4
(Disproving)
n^2 + 3 = O(n^2)
(n^2 + 3)/n^2 >= C; assume n = 1, then C <= 4
Both of these say that at n = 1 and C = 4 the function is O(n^2) and is NOT O(n^2).
Can anyone help me clear up my confusion and teach me a good, systematic way of solving big O questions?
Neither of your techniques works. Let's start with the definition of big-O:
f is O(g) iff there exist C, N such that |f(x)| ≤ C |g(x)| for all x ≥ N
To prove "there exist" type statements, you need to show that, well, the things exist. In the case of big-O proofs, you usually find the things, though proofs of existence don't generally need to be constructive. To build a proof for a "for all" statement, pretend someone just handed you specific values. Be careful you make no implicit assumptions about their properties (you can explicitly state properties, such as N > 0).
In the case of proving big-O, you need to find the C and N. Showing |g(n)| ≤ C|f(n)| for a single n isn't sufficient.
For the example "n^2 + 3 is O(n^2)":
For n ≥ 2, we have:
n^2 ≥ 4 > 3
⇒ n^2 − 1 > 2
⇒ 2(n^2 − 1) > (n^2 − 1) + 2
⇒ 2n^2 > (n^2 − 1) + 4 = n^2 + 3
Thus n^2 + 3 is O(n^2) with C = 2, N = 2.
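If it helps, here's a quick numeric sanity check in Python (not part of the proof; it just exercises the constants C = 2, N = 2 found above):

# Sanity check (not a proof): verify n^2 + 3 <= 2 * n^2 for n = 2..1000,
# i.e. the constants C = 2, N = 2 derived above.
for n in range(2, 1001):
    assert n**2 + 3 <= 2 * n**2, n
print("n^2 + 3 <= 2*n^2 held for n = 2..1000")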
To disprove, you take the negation of the statement: show there is no C or N. In other words, show that for all C and N, there exists an n > N such that |f(n)| > C |g(n)|. In this case, the C and N are qualified "for all", so pretend they've been given to you. Since n is qualified "there exists", you have to find it. This is where you start with the equation you wish to prove and work backwards until you find a suitable n.
Suppose we want to prove that n is not O(ln n). Pretend we're given N and C, and we need to find an n ≥ N such that n > C ln n.
For all whole numbers C, N, let M = 1 + max(N, C) and n = e^M. Note n > N > 0 and M > 0.
Thus n = e^M > M^2 = M · ln(e^M) = M · ln n > C · ln n. QED.
Proofs of x > 0 ⇒ e^x > x^2 and "n is not O(ln n)" ⇒ "n is not O(log_b n)" are left as exercises.
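To see the construction concretely, here is a small Python sketch (purely illustrative, using a few arbitrary C and N values) that builds n = e^M with M = 1 + max(N, C) and checks n > C ln n:

import math

# Illustration of the construction in the proof: given C and N,
# take M = 1 + max(N, C) and n = e^M, then check n > N and n > C * ln(n).
for C, N in [(1, 1), (10, 5), (100, 50), (300, 200)]:
    M = 1 + max(N, C)
    n = math.exp(M)
    assert n > N and n > C * math.log(n), (C, N)
print("for each (C, N) tried, n = e^M satisfies n > C * ln(n)")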
A TA told me today that functions like log(n)^2, log(n)^3, ..., log(n)^m are all O(n), but I was unable to verify this by googling.
Is this true?
Claim
The function f(n) = log(n)^m, for any natural number m > 2 (m ∈ ℕ+) is in
O(n).
I.e. there exists a set of positive constants c and n0 such that
the following holds:
log(n)^m < c · n, for all n > n0, { m > 2 | m ∈ ℕ+ } (+)
Proof
Assume that (+) does not hold, and denote this assumption as (*).
I.e., given (*), there exists no set of positive constants c and n0 such that (+) holds for any value of m > 2. Under this assumption, the following holds: for all positive constants c and n0, there exists an n > n0 such that (thanks @Oriol):
log(n)^m ≥ c · n, { m > 2 | m ∈ ℕ+ } (++)
Now, if (++) holds, then the inequality in (++) will also hold after applying any monotonically increasing function to both sides of the inequality. One such function is, conveniently, the log function itself.
Hence, under the assumption that (++) holds, then, for all positive constants c and n0, there exists a n > n0 such that the following holds
log(log(n)^m) ≥ log(c · n), { m > 2 | m ∈ ℕ+ }
m · log(log(n)) ≥ log(c · n), { m > 2 | m ∈ ℕ+ } (+++)
However, (+++) is obviously a contradiction: since log(n) dominates (w.r.t. growth) over log(log(n)),
we can—for any given value of m > 2—always find a set of constants c and n0 such that (+++) (and hence (++)) is violated for all n > n0.
Hence, assumption (*) is proven false by contradiction, and hence, (+) holds.
=> for f(n) = log(n)^m, for any finite integer m > 2, it holds that f ∈ O(n).
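Independently of the proof, a quick numeric spot-check is easy to run. This Python sketch uses one arbitrary choice of constants (c = 1, n0 = 10^6, natural log) that happens to work for m up to 5:

import math

# Numeric spot-check (not a proof): log(n)^m <= c * n at a sample of n >= n0.
# The constants c = 1 and n0 = 10**6 are one arbitrary choice that works
# for m = 2..5 with the natural log.
c, n0 = 1, 10**6
for m in range(2, 6):
    for n in range(n0, 10 * n0, n0 // 2):
        assert math.log(n) ** m <= c * n, (m, n)
print("log(n)^m <= n held at all sampled n >= 10**6 for m = 2..5")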
Yes. If the function is f(n), it means m is a parameter and f does not depend on it. In fact, we have a different f_m function for each m.
f_m(n) = log(n)^m
Then it's easy. Given m ∈ ℕ, use L'Hôpital's rule repeatedly:
         f_m(n)            log(n)^m            m · log(n)^(m-1)
limsup ────────── = limsup ────────── = limsup ────────────────── =
 n→∞       n         n→∞       n         n→∞           n

            m(m-1) · log(n)^(m-2)                  m!
  = limsup ─────────────────────── = … = limsup ────── = 0
    n→∞              n                    n→∞      n
Therefore, f_m ∈ O(n).
Of course, it would be different if we had f(m,n) = log(n)^m. For example, taking m=n,
         f(n,n)            log(n)^n
limsup ────────── = limsup ────────── = ∞
 n→∞       n         n→∞       n
Then f ∉ O(n).
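A rough numeric illustration (floating-point only, not a proof) of the two limsup computations, using m = 3 for the fixed-m case and the logarithm of the ratio for the m = n case to avoid overflow:

import math

# Rough numeric illustration (not a proof) of the two limits above:
# for fixed m, log(n)^m / n -> 0, while log(n)^n / n -> infinity when m = n.
for n in (10, 10**3, 10**6, 10**9):
    fixed_m = math.log(n) ** 3 / n                     # m = 3, tends to 0
    tied_m = n * math.log(math.log(n)) - math.log(n)   # log of (log(n)^n / n), tends to +inf
    print(f"n={n:>10}: log(n)^3/n = {fixed_m:.3e}, log(log(n)^n / n) = {tied_m:.3e}")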
In many ways it is more intuitive that for any positive integer m we have:
x^m = O(e^x)
This says that exponential growth dominates polynomial growth (which is why exponential time algorithms are bad news in computer programming).
Assuming that this is true, simply take x = log(n) and use the fact that then x tends to infinity if and only if n tends to infinity and that e^x and log(x) are inverses:
log(n)^m = O(e^log(n)) = O(n)
Finally, since for any natural number m, the root function n => n^(1/m) is increasing, we can rewrite the result as
log(n) = O(n^(1/m))
This way of writing it says that log(n) grows slower than any root (square, cube, etc.) of n, which more obviously corresponds to e^n growing faster than any power of n.
On Edit: the above showed that log(n)^m = O(n) follows from the more familiar x^m = O(e^x). To make this a more self-contained proof, we can show the latter somewhat directly.
Start with the Taylor series for e^x:
e^x = 1 + x/1! + x^2/2! + x^3/3! + ... + x^n/n! + ...
This is known to converge for all real numbers x. If a positive integer m is given, let K = (m+1)!. Then, if x > K we have 1/x < 1/(m+1)!, hence
x^m = x^(m+1)/x < x^(m+1)/(m+1)! < e^x
which implies x^m = O(e^x). (The last inequality in the above is true since all terms in the expansion for e^x are strictly positive if x>0 and x^(m+1)/(m+1)! is just one of those terms.)
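A quick Python sanity check of the final chain of inequalities, under the stated assumption x > K = (m+1)!:

import math

# Sanity check (not a proof): for x > K = (m+1)!, verify
# x^m < x^(m+1)/(m+1)! < e^x, as in the derivation above.
for m in (1, 2, 3):
    K = math.factorial(m + 1)
    for x in (K + 1, 2 * K, 10 * K):
        middle = x ** (m + 1) / math.factorial(m + 1)
        assert x ** m < middle < math.exp(x), (m, x)
print("x^m < x^(m+1)/(m+1)! < e^x held at all sampled x > (m+1)!")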
f(n) = 4 * 2^n + 4^n + 20n^5
So, g(n) = 4^n
Now our f(n) = O(g(n))
4 * 2^n + 4^n + 20n^5 ≤ c*4^n
How do we do this? I know how to do it for simple cases, but this one is far more complex. Would it go along the lines of removing the constant 4 and the 20n^5 term, to then have 2^n + 4^n ≤ c*4^n?
Or would it be for any c > 4 * 2^n + 20n^5? That feels like a lame answer, so I'm going to assume I'm wrong. I would prefer if someone hinted at the idea of how to solve these problems rather than give me the answer, thank you.
Hint / preparations
In the context of asymptotic analysis, and Big-O notation specifically, when wanting to prove that an inequality such as
4 * 2^n + 4^n + 20n^5 ≤ c*4^n, (+)
for some constant c > 0,
for n larger than some constant n0; n > n0
holds, we approach the left-hand-side expression term by term. Since we're free to choose any constants c and n0 to show that (+) holds, we can always bound each lower-order term by the higher-order term (less than or equal to, ≤) by making n sufficiently large, e.g., by choosing the value of n0 as we see fit.
Solution (spoilers ahead!)
Below follows one way to show that (+) holds for some set of positive constants c and n0. Since you only asked for hints, I suggest you start with the section above, and return to this section in case you get stuck or want to verify the derivation you ended up using.
Term-by-term analysis (in terms of 4^n) of the left-hand-side expression of (+) follows.
Term 4 * 2^n:
4 * 2^n = 4^n <=> (2*2)*2^n = (2^2)^n <=> 2^(n+2) = 2^(2n)
<=> n+2 = 2n => n = 2
=> 4 * 2^n ≤ 4^n for n ≥ 2 (i)
Term 4^n: Trivial
Term 20n^5:
for which n is 20 * n^5 = 4^n?
Graphical solution:
=> 20 * n^5 ≤ 4^n for n ≥~ 10.7 (choose 11) (ii)
Inserting inequalities (i) and (ii) in the lhs of (+) yields:
4 * 2^n + 4^n + 20n^5 ≤ 4^n + 4^n + 4^n = 3*4^n, for n > max(2, 11) = 11
(the bound n > 11 is our choice of n0, and the factor 3 is our choice of c)
Hence, we have shown that (+) holds for the constants n0 = 11 and c = 3. Naturally, the choice of these constants is not unique (in fact, if such constants exist, infinitely many of them do). Consequently, the lhs of (+) is in O(4^n).
Now, I note that your title mentions Big-Θ (whereas your question covers only Big-O). For deriving that the lhs of (+) is Θ(4^n), we also need to find a lower asymptotic bound on the lhs of (+) in terms of 4^n. Since n > 0, this is, in this case, quite trivial:
Does 4 * 2^n + 4^n + 20n^5 ≥ c2*4^n hold for some c2 > 0 and all n > n0? (++)
=> 4 * 2^n + 4^n + 20n^5 ≥ 4^n, for n > 0
I.e., in addition to showing that (+) holds (which implies O(4^n)), we've shown that (++) holds for e.g. c2 = 1 and (re-using) n0 = 11, which implies that the lhs of (+) is Θ(4^n).
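A brute-force Python check of the derived constants (a sketch, not part of the derivation); it also confirms that the crossover of 20n^5 and 4^n sits at n = 11:

# Sanity check (not a proof) of the constants derived above: with n0 = 11 and
# c = 3, the sandwich 4^n <= 4*2^n + 4^n + 20*n^5 <= 3*4^n should hold, and the
# crossover where 20*n^5 first drops below 4^n should sit at n = 11.
crossover = next(n for n in range(1, 100) if 20 * n**5 <= 4**n)
print("20*n^5 <= 4^n first holds at n =", crossover)   # expected: 11

for n in range(11, 200):
    f = 4 * 2**n + 4**n + 20 * n**5
    assert 4**n <= f <= 3 * 4**n, n
print("4^n <= f(n) <= 3*4^n held for n = 11..199")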
One way to approach an asymptotic analysis of a function such as the lhs of (+) is the somewhat rigorous term-by-term analysis shown in this solution. In practice, however, we know that 4^n will quickly dominate the lower-order terms, so we could have just chosen a somewhat large n0 (say 100) and tested, term by term, whether each lower-order term can be bounded by the higher-order term (≤) for n > n0. Or, depending on the context in which we need our asymptotic bounds, we could just glance at the function and, without rigour, directly state that its asymptotic behaviour is naturally O(4^n), since that is the dominant term. This latter method should, imo, only be used after one has grasped how to formally analyse the asymptotic behaviour of functions and algorithms in the context of Big-O/-Omega and -Theta notation.
The formal definition is
O(g(n)) = {f(n) | ∃c, n₀ : 0 ≤ f(n) ≤ c g(n), ∀ n ≥ n₀}
But when you want to check if f(n) ∈ O(g(n)), you can use the simpler
          f(n)
lim sup ──────── ≤ c
 n → ∞    g(n)
In this case,
          4*2ⁿ + 4ⁿ + 20n⁵
lim sup ─────────────────── = 1
 n → ∞          4ⁿ
So yes, we can choose for example c = 1.
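Numerically the ratio does settle toward 1, as a quick Python look confirms (illustrative only):

# Quick numeric look (not a proof) at the ratio (4*2^n + 4^n + 20*n^5) / 4^n.
for n in (5, 10, 20, 40):
    ratio = (4 * 2**n + 4**n + 20 * n**5) / 4**n
    print(n, ratio)   # approaches 1 as n grows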
I came across two asymptotic function proofs.
1. f(n) = O(g(n)) implies 2^f(n) = O(2^g(n))
Given: f(n) ≤ C1 g(n)
So, 2^f(n) ≤ 2^C1 g(n) --(i)
Now, 2^f(n) = O(2^g(n)) → 2^f(n) ≤ C2 2^g(n) --(ii)
From (i), we find that (ii) will be true.
Hence 2^f(n) = O(2^g(n)) is TRUE.
Can you tell me if this proof is right? Is there any other way to solve this?
2. f(n) = O((f(n))^2)
How do I prove the second example? Here I consider two cases: one where f(n) < 1 and the other where f(n) > 1.
Note: None of them are homework questions.
The attempted proof for example 1 looks well-intentioned but is flawed. First, "2^f(n) ≤ 2^C1 g(n)" reads as 2^f(n) ≤ (2^C1)*g(n), which in general is false; it should have been written 2^f(n) ≤ 2^(C1*g(n)). In the line beginning with "Now", you should explicitly say C2 = 2^C1. Finally, the claim "(ii) will be true" is not justified: (ii) is exactly what you set out to prove, so you must show how it follows from (i) rather than simply assert it.
A function like f(n) = 1/n disproves the claim in example 2 because there are no constants N and C such that for all n > N, f(n) < C*(f(n))². Proof: Let some N and C be given. Choose n>N, n>C. f(n) = 1/n = n*(1/n²) > C*(1/n²) = C*(f(n))². Because N and C were arbitrarily chosen, this shows that there are no fixed values of N and C such that for all n > N, f(n) < C*(f(n))², QED.
Saying that “f(n) ≥ 1” is not enough to allow proving the second claim; but if you write “f(n) ≥ 1 for all n” or “f() ≥ 1” it is provable. For example, if f(n) = 1/n for odd n and 1+n for even n, we have f(n) > 1 for even n > 0, and less than 1 for odd n. To prove that f(n) = O((f(n))²) is false, use the same proof as in the previous paragraph but with the additional provision that n is even.
Actually, “f(n) ≥ 1 for all n” is stronger than necessary to ensure f(n) = O((f(n))²). Let ε be any fixed positive value. No matter how small ε is, “f(n) ≥ ε for all n > N'” ensures f(n) = O((f(n))²). To prove this, take C = max(1, 1/ε) and N=N'.
I've decided to try and do a problem about analyzing the worst possible runtime of an algorithm and to gain some practice.
Since I'm a beginner I only need help in expressing my answer in a right way.
I came across this problem in a book that uses the following algorithm:
Input: A set of n points (x1, y1), . . . , (xn, yn) with n ≥ 2.
Output: The squared distance of a closest pair of points.
ClosePoints
1. if n = 2 then return (x1 − x2)^2 + (y1 − y2)^2
2. else
3. d ← ∞
4. for i ← 1 to n − 1 do
5. for j ← i + 1 to n do
6. t ← (xi − xj)^2 + (yi − yj)^2
7. if t < d then
8. d ← t
9. return d
My question is: how can I offer a good proof that T(n) = O(n^2), T(n) = Ω(n^2) and T(n) = Θ(n^2), where T(n) represents the worst possible runtime?
I know that we say that f is O(g),
if and only if there is an n0 ∈ N and c > 0 in R such that for all
n ≥ n0 we have
f(n) ≤ cg(n).
And also we say that f is Ω(g) if there is an
n0 ∈ N and c > 0 in R such that for all n ≥ n0 we have
f(n) ≥ cg(n).
Now I know that the algorithm does c * n(n - 1) iterations, yielding T(n) = c*n^2 - c*n.
The first 3 lines are executed O(1) times, line 4 loops for n - 1 iterations which is O(n), and line 5 loops for n - i iterations which is also O(n). Does each line of the inner loop's body (lines 6-7) take (n-1)(n-i) time or just O(1), and why? The only variation is how many times line 8 (d ← t) is performed, but that must be at most O(n^2).
So, how should I write a good and complete proof that T(n) = O(n^2), T(n) = Ω(n^2) and T(n) = Θ(n^2)?
Thanks in advance
Count the number of times t changes its value. Since changing t is the innermost operation performed, finding how many times that happens will allow you to find the complexity of the entire algorithm.
i = 1 => j runs n - 1 times (t changes value n - 1 times)
i = 2 => j runs n - 2 times
...
i = n - 1 => j runs 1 time
So the number of times t changes is 1 + 2 + ... + (n - 1). This sum is equal to n(n - 1) / 2, which is dominated by 0.5 * n^2.
Now just find appropriate constants and you can prove that this is Ω(n^2), O(n^2), Θ(n^2).
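If you want to convince yourself of that count before hunting for constants, a direct Python simulation of the two loops (mirroring the pseudocode's loop bounds) takes only a few lines:

# Count how many times the innermost statement (line 6 of the pseudocode)
# executes for a few input sizes, and compare with n*(n-1)/2.
def inner_iterations(n):
    count = 0
    for i in range(1, n):              # i = 1 .. n-1
        for j in range(i + 1, n + 1):  # j = i+1 .. n
            count += 1
    return count

for n in (2, 5, 10, 100):
    assert inner_iterations(n) == n * (n - 1) // 2
print("inner loop body runs exactly n*(n-1)/2 times")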
T(n) = c*n^2 - c*n is bounded above by c*n^2 for all n ≥ 1, which is exactly what the definition of O(n^2) asks for.
If you observe the two for loops, each loop gives an O(n) factor because each is incrementing its counter in a linear fashion; hence, the two nested loops combined roughly give O(n^2) complexity. The whole point of big-oh is to find the dominating term: coefficients do not matter. I would strongly recommend formatting your pseudocode properly so that it is not ambiguous. In any case, the if/else branches do not affect the complexity of the algorithm.
Let's observe the various definitions:
Big-Oh
• f(n) is O(g(n)) if f(n) is asymptotically “less than or equal to” g(n)
Big-Omega
• f(n) is Ω(g(n)) if f(n) is asymptotically “greater than or equal to” g(n)
Big-Theta
• f(n) is Θ(g(n)) if f(n) is asymptotically “equal to” g(n)
So all you need to do is find constants that satisfy these definitions.
I'm starting to learn about Big-Oh notation.
What is an easy way for finding C and N0 for a given function?
Say, for example:
(n+1)^5, which expands to n^5 + 5n^4 + 10n^3 + 10n^2 + 5n + 1
I know the formal definition for Big-Oh is:
Let f(n) and g(n) be functions mapping
nonnegative integers to real numbers.
We say that f(n) is O(g(n)) if there
is a real constant c > 0 and an
integer constant N0 >= 1
such that f(n) <= cg(n) for every integer n >= N0.
My question is, what is a good, sure-fire method for picking values for c and N0?
For the given polynomial above, (n+1)^5, I have to show that it is O(n^5). So, how should I pick my c and N0 so that I can make the above definition true without guessing?
You can pick a constant c by adding the coefficients of each term in your polynomial. Since
| n^5 + 5n^4 + 10n^3 + 10n^2 + 5n^1 + 1n^0 | <= | n^5 + 5n^5 + 10n^5 + 10n^5 + 5n^5 + 1n^5 |
and you can simplify both sides to get
| n^5 + 5n^4 + 10n^3 + 10n^2 + 5n + 1 | <= | 32n^5 |
So c = 32, and this will always hold true for any n >= 1.
It's almost always possible to find a lower c by raising N0, but this method works, and you can do it in your head.
(The absolute value operations around the polynomials are to account for negative coefficients.)
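In code, the coefficient-sum trick looks like this (a small Python sketch; the helper p and the coefficient list are just illustrative):

# Sketch of the coefficient-sum trick: c = sum of |coefficients| works with n0 = 1.
# Coefficients of (n+1)^5 from the n^5 term down to the constant term.
coeffs = [1, 5, 10, 10, 5, 1]          # (n+1)^5 = n^5 + 5n^4 + 10n^3 + 10n^2 + 5n + 1
c = sum(abs(a) for a in coeffs)        # c = 32
degree = len(coeffs) - 1               # 5

def p(n):
    return sum(a * n**(degree - k) for k, a in enumerate(coeffs))

for n in range(1, 1000):
    assert abs(p(n)) <= c * n**degree, n
print(f"|p(n)| <= {c} * n^{degree} held for n = 1..999")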
Usually the proof is done without picking concrete C and N0. Instead of proving f(n) < C * g(n) you prove that f(n) / g(n) < C.
For example, to prove n^3 + n is O(n^3) you do the following:
(n^3 + n) / n^3 = 1 + (n / n^3) = 1 + (1 / n^2) ≤ 2 for any n >= 1. Here you can pick any C >= 2 with N0 = 1.
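A quick Python spot-check of that bound (illustrative only):

# Spot-check (not a proof): (n^3 + n) / n^3 = 1 + 1/n^2 <= 2 for n >= 1.
for n in range(1, 10_000):
    assert (n**3 + n) / n**3 <= 2, n
print("(n^3 + n)/n^3 <= 2 held for n = 1..9999")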
You can check what lim abs(f(n)/g(n)) is when n → +infinity, and that will give you the constant (g(n) is n^5 in your example, f(n) is (n+1)^5).
Note that the meaning of Big-O for x->+infinity is that if f(x) = O(g(x)), then f(x) "grows no faster than g(x)", so you just need to prove that lim abs(f(x)/g(x)) exists and is less than +infinity.
It's going to depend greatly on the function you are considering. However, for a given class of functions, you may be able to come up with an algorithm.
For instance, polynomials: if you set C to any value greater than the leading coefficient of the polynomial, then you can solve for N0.
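For example, here is a small Python sketch (the choice C = 2 and the scan range are just for illustration) that finds such an N0 for (n+1)^5 with C one larger than the leading coefficient:

# Sketch: for p(n) = (n+1)^5 and C = 2 (greater than the leading coefficient 1),
# find the first n0 from which p(n) <= C * n^5 holds on the tested range.
C = 2
def p(n):
    return (n + 1) ** 5

n0 = next(n for n in range(1, 1000) if all(p(k) <= C * k**5 for k in range(n, 1000)))
print("smallest working n0 on the tested range:", n0)   # expected: 7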
After you understand the magic there, you should also realize that big-O is just notation: once you have understood what is going on behind the letters, you do not have to hunt for these constants in every problem you solve; you can simply manipulate the symbols according to the rules of the notation.
There's no easy generic rule to determine actual values of N and c. You should recall your calculus knowledge to solve it.
The definition of big-O is closely tied to the definition of a limit. It makes c satisfy:
c > lim sup |f(n)/g(n)|, as n approaches +infinity.
If the sequence |f(n)/g(n)| is bounded above, this limit superior always exists (an ordinary limit may not); if it is unbounded, then f is not O(g). After you have picked a concrete c, you will have no problem finding an appropriate N.