Asymptotic proof examples - algorithm

I came across two asymptotic function proofs.
1. f(n) = O(g(n)) implies 2^f(n) = O(2^g(n))
Given: f(n) ≤ C1 g(n)
So, 2^f(n) ≤ 2^C1 g(n) --(i)
Now, 2^f(n) = O(2^g(n)) → 2^f(n) ≤ C2 2^g(n) --(ii)
From,(i) we find that (ii) will be true.
Hence 2^f(n) = O(2^g(n)) is TRUE.
Can you tell me if this proof is right? Is there any other way to solve this?
2. f(n) = O((f(n))^2)
How do I prove the second example? Here I considered two cases: one where f(n) < 1 and the other where f(n) > 1.
Note: None of them are homework questions.

The attempted proof for example 1 is well-intentioned but flawed. First, “2^f(n) ≤ 2^C1 g(n)” as written means 2^f(n) ≤ (2^C1)·g(n), which is false in general; what exponentiating f(n) ≤ C1·g(n) actually gives is 2^f(n) ≤ 2^(C1·g(n)). Second, the line beginning with “Now” merely restates what (ii) asks for; it never exhibits a constant C2. The claim “From (i) we find that (ii) will be true” is therefore unjustified: 2^(C1·g(n)) = (2^g(n))^C1, which is not of the form C2·2^g(n) once C1 > 1. In fact there is no way to repair the proof, because the implication in example 1 is false in general: take f(n) = 2n and g(n) = n, so that f(n) = O(g(n)), yet 2^f(n) = 4^n is not O(2^n), since 4^n / 2^n = 2^n is unbounded.
A function like f(n) = 1/n disproves the claim in example 2 because there are no constants N and C such that for all n > N, f(n) < C*(f(n))². Proof: Let some N and C be given. Choose n>N, n>C. f(n) = 1/n = n*(1/n²) > C*(1/n²) = C*(f(n))². Because N and C were arbitrarily chosen, this shows that there are no fixed values of N and C such that for all n > N, f(n) < C*(f(n))², QED.
Saying that “f(n) ≥ 1” (for some particular n) is not enough to allow proving the second claim; but if you write “f(n) ≥ 1 for all n” (i.e., f() ≥ 1 as a function), it is provable. For example, if f(n) = 1/n for odd n and 1 + n for even n, we have f(n) > 1 for even n > 0 and f(n) < 1 for odd n. To prove that f(n) = O((f(n))²) is false for this f, use the same proof as in the previous paragraph but with the additional provision that n is odd.
Actually, “f(n) ≥ 1 for all n” is stronger than necessary to ensure f(n) = O((f(n))²). Let ε be any fixed positive value. No matter how small ε is, “f(n) ≥ ε for all n > N'” ensures f(n) = O((f(n))²). To prove this, take C = max(1, 1/ε) and N = N': for every n > N we have (f(n))² ≥ ε·f(n), hence f(n) ≤ (1/ε)·(f(n))² ≤ C·(f(n))².
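The 1/n counterexample is easy to sanity-check numerically; here is a small sketch I'm adding (not part of the original answer), showing that f(n) ≤ C·(f(n))² fails as soon as n > C, for any C:

    # Sketch: for f(n) = 1/n, the inequality f(n) <= C * f(n)**2 is equivalent
    # to n <= C, so it fails for every n beyond C, no matter how C is chosen.
    def f(n):
        return 1.0 / n

    for C in (1, 10, 1000):
        n = C + 1
        print(C, n, f(n) <= C * f(n) ** 2)  # False in every case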

Related

Asymptotic bounds and Big Θ notation

Suppose that f(n) = 4^n and g(n) = n^n. Would it be right to conclude that f(n) = Θ(g(n))?
In my opinion it's a correct claim, but I'm not 100% sure.
It is incorrect. f(n) = Theta(g(n)) if and only if both f(n) = O(g(n)) and g(n) = O(f(n)). It is true that f(n) = O(g(n)). We will show that it is not the case that g(n) = O(f(n)).
Assume g(n) = O(f(n)). Then there exist a positive real constant c and a positive natural number n0 such that for all n > n0, g(n) <= c * f(n). For our functions, this implies n^n <= c * 4^n. If we take the nth root of both sides of this inequality we get n <= 4 * c^(1/n). We are free to assume c >= 1 and n0 >= 1, since if smaller values worked then larger values would work too. For all c >= 1 and n > 1, c^(1/n) <= c, so 4 * c^(1/n) <= 4c. But then if we choose n > 4c (and n > n0), the inequality n <= 4 * c^(1/n) is false. So, there cannot be an n0 such that the condition holds for all n at least n0. This is a contradiction; our initial assumption is disproven.
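A quick numerical sanity check of this (a sketch I'm adding, not part of the original answer): the ratio n^n / 4^n grows without bound, so no constant c can make n^n ≤ c·4^n hold for all large n.

    # Sketch: n**n / 4**n = (n/4)**n grows without bound, so n**n is not O(4**n).
    for n in (5, 10, 20, 40):
        print(n, n ** n / 4 ** n)
    # e.g. 40**40 / 4**40 = 10**40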

Finding the values in Big Oh

I am going through the asymptotic notations from here. I am reading this: f(n) ≤ c g(n).
For example, if f(n) = 2n + 2, can we satisfy f(n) = O(g(n)) in any way we like by adjusting the values of n and c? Or is there a specific rule or formula for selecting the values of c and n? Will n0 always be 1?
There is no formula per se. You can find the formal definition here:
f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n. (big-O notation).
What I understood from your question is that you are not getting the essence of big-O notation. If your complexity is, for example, O(n^2), then you are guaranteed that beyond some threshold k, f(n) will in no case exceed c·g(n).
Let's try to prove f(n) = 2n + 2 is O(n):
As the function itself suggests, you cannot set c equal to 2: if you plug in c = 2, you have to find a k such that f(n) ≤ c g(n), i.e. 2n + 2 ≤ 2n, for all n ≥ k, and clearly there is no n for which 2n ≥ 2n + 2. So, we move on to c = 3.
Now, let's find the value of k. So, we solve the equation 3n ≥ 2n + 2. Solving it:
3n ≥ 2n + 2
=> 3n - 2n ≥ 2
=> n ≥ 2
Therefore, for c = 3, we found the value k = 2 (the inequality holds for all n ≥ k).
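As a quick sanity check of these constants (a sketch I'm adding here, not part of the original answer), a few lines of Python confirm 2n + 2 ≤ 3n over a sampled range of n ≥ 2:

    # Sketch: check that f(n) = 2n + 2 <= 3n for every sampled n >= 2.
    f = lambda n: 2 * n + 2
    g = lambda n: n
    c, k = 3, 2
    assert all(f(n) <= c * g(n) for n in range(k, 10_000))
    print("f(n) <= 3 * g(n) holds for all checked n >= 2")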
You must also understand, your function isn't just O(n). It is also O(n^2), O(n^3), O(n^4) and so on. All because corresponding values of c and k exist for g(n) = n^2, g(n) = n^3 and g(n) = n^4.
Hope it helps.

f(n) = log(n)^m is O(n) for all natural numbers m?

A TA told me today that this is true, but I was unable to verify it by googling. This is saying that functions like log(n)^2, log(n)^3, ..., log(n)^m are all O(n).
Is this true?
Claim
The function f(n) = log(n)^m, for any fixed natural number m > 2 (m ∈ ℕ+), is in O(n).
I.e., there exist positive constants c and n0 such that the following holds for our fixed m:
log(n)^m < c · n, for all n > n0 (+)
Proof
Assume that (+) does not hold, and denote this assumption as (*).
I.e., given (*), there exist no positive constants c and n0 such that (+) holds for our fixed m > 2. Under this assumption, the following holds: for all positive constants c and n0, there exists an n > n0 such that (thanks @Oriol):
log(n)^m ≥ c · n (++)
Now, if (++) holds, then the inequality in (++) will also hold after applying any monotonically increasing function to both sides of the inequality. One such function is, conveniently, the log function itself.
Hence, under the assumption that (++) holds, for all positive constants c and n0 there exists an n > n0 such that the following holds:
log(log(n)^m) ≥ log(c · n)
m · log(log(n)) ≥ log(c · n) (+++)
However, (+++) leads to a contradiction: since log(n) dominates log(log(n)) with respect to growth, we can, for any given value of m > 2, find constants c and n0 (for instance c = 1 and n0 large enough that log(n)/log(log(n)) > m for all n > n0) such that (+++) (and hence (++)) is violated for all n > n0.
Hence, assumption (*) is proven false by contradiction, and hence, (+) holds.
=> for f(n) = log(n)^m, for any finite integer m > 2, it holds that f ∈ O(n).
Yes. If the function is f(n), it means m is a parameter and f does not depend on it. In fact, we have a different function f_m for each m.
f_m(n) = log(n)^m
Then it's easy. Given m ∈ ℕ, use L'Hôpital's rule repeatedly:
limsup_{n→∞} f_m(n)/n = limsup_{n→∞} log(n)^m / n = limsup_{n→∞} m·log(n)^(m-1) / n
= limsup_{n→∞} m·(m-1)·log(n)^(m-2) / n = … = limsup_{n→∞} m! / n = 0
Therefore, f_m ∈ O(n).
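For a concrete feel (a numerical sketch of my own, not from the original answer), the ratio log(n)^m / n does head to 0, although for larger m it first rises before falling:

    import math

    # Sketch: for fixed m, log(n)**m / n rises at first but eventually tends to 0.
    m = 5
    for n in (10, 10**3, 10**6, 10**12, 10**24):
        print(n, math.log(n) ** m / n)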
Of course, it would be different if we had f(m,n) = log(n)^m. For example, taking m=n,
limsup_{n→∞} f(n,n)/n = limsup_{n→∞} log(n)^n / n = ∞
Then f ∉ O(n).
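By contrast (again my own sketch), letting the exponent grow with n makes the ratio blow up:

    import math

    # Sketch: with m = n, the ratio log(n)**n / n grows without bound.
    for n in (10, 20, 40, 80):
        print(n, math.log(n) ** n / n)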
In many ways it is more intuitive that for any positive integer m we have:
x^m = O(e^x)
This says that exponential growth dominates polynomial growth (which is why exponential time algorithms are bad news in computer programming).
Assuming that this is true, simply take x = log(n) and use the fact that then x tends to infinity if and only if n tends to infinity and that e^x and log(x) are inverses:
log(n)^m = O(e^log(n)) = O(n)
Finally, since for any natural number m, the root function n => n^(1/m) is increasing, we can rewrite the result as
log(n) = O(n^(1/m))
This way of writing it says that log(n) grows slower than any root (square, cube, etc.) of n, which more obviously corresponds to e^n growing faster than any power of n.
On Edit: the above showed that log(n)^m = O(n) follows from the more familiar x^m = O(e^x). To convert it into a more self-contained proof, we can show the latter somewhat directly.
Start with the Taylor series for e^x:
e^x = 1 + x/1! + x^2/2! + x^3/3! + ... + x^n/n! + ...
This is known to converge for all real numbers x. If a positive integer m is given, let K = (m+1)!. Then, if x > K we have 1/x < 1/(m+1)!, hence
x^m = x^(m+1)/x < x^(m+1)/(m+1)! < e^x
which implies x^m = O(e^x). (The last inequality in the above is true since all terms in the expansion for e^x are strictly positive if x>0 and x^(m+1)/(m+1)! is just one of those terms.)
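A small numerical check of the final inequality (my own sketch; m = 3 is just an example, giving K = 4! = 24): comparing logarithms, x^m < e^x is equivalent to m·log(x) < x, and this indeed holds for every sampled x > K.

    import math

    # Sketch: x**m < e**x once x > K = (m+1)!, checked via logarithms
    # (x**m < e**x is equivalent to m*log(x) < x for x > 0).
    m = 3
    K = math.factorial(m + 1)          # K = 24
    for x in (K + 1, 2 * K, 10 * K, 100 * K):
        print(x, m * math.log(x) < x)  # True for each sampled x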

Prove that f(n) = Θ(g(n)) iff g(n) = Θ(f(n))

I have been given the problem:
f(n) and g(n) are asymptotically positive functions. Prove f(n) = Θ(g(n)) iff g(n) = Θ(f(n)).
Everything I have found points to this statement being invalid. For example an answer I've come across states:
f(n) = O(g(n)) implies g(n) = O(f(n))
f(n) = O(g(n)) means g(n) grows faster than f(n). It cannot imply that f(n) grows
faster than g(n). Hence not true.
Another states:
If f(n) = O(g(n)) then g(n) = O(f(n)). This is false. If f(n) = 1 and g(n) = n
for all natural numbers n, then f(n) <= g(n) for all natural numbers n, so
f(n) = O(g(n)). However, suppose g(n) = O(f(n)). Then there are a natural
number n0 and a constant c > 0 such that n = g(n) <= c·f(n) = c for all n >= n0,
which is impossible.
I understand that there are slight differences between my exact question and the examples I have found, but I've only been able to come up with solutions that do not prove it. Am I correct in thinking that it cannot be proved, or am I overlooking some detail?
You can start from here:
Formal Definition: f(n) = Θ (g(n)) means there are positive constants c1, c2, and k, such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ k.
Because you have that iff, you need to prove both directions: start from the left side and prove the right side, and then start from the right side and prove the left side.
Left -> right
We consider that:
f(n) = Θ(g(n))
and we want to prove that
g(n) = Θ(f(n))
So, we have some positive constants c1, c2 and k such that:
0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n), for all n ≥ k
The first relation between f and g is:
c1*g(n) ≤ f(n) => g(n) ≤ (1/c1)*f(n) (1)
The second relation between f and g is:
f(n) ≤ c2*g(n) => (1/c2)*f(n) ≤ g(n) (2)
If we combine (1) and (2), we obtain:
(1/c2)*f(n) ≤ g(n) ≤ (1/c1)*f(n)
If you consider c3 = 1/c2 and c4 = 1/c1, they exist and are positive (because the denominators are positive). And this is true for all n ≥ k (where k can be the same).
So, we have some positive constants c3, c4, k such that:
c3*f(n) ≤ g(n) ≤ c4*f(n), for all n ≥ k
which means that g(n) = Θ(f(n)).
Analogous for right -> left.
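To make the constant shuffling concrete, here is a small sketch of my own (the functions f(n) = 3n + 5 and g(n) = n, and the constants c1 = 3, c2 = 4, k = 5, are just an illustrative choice, not from the original answer):

    # Sketch: c1 = 3, c2 = 4, k = 5 witness f(n) = Theta(g(n)) for f(n) = 3n + 5,
    # g(n) = n, and the derived c3 = 1/c2, c4 = 1/c1 witness g(n) = Theta(f(n)).
    f = lambda n: 3 * n + 5
    g = lambda n: n
    c1, c2, k = 3, 4, 5
    c3, c4 = 1 / c2, 1 / c1
    for n in range(k, 10_000):
        assert c1 * g(n) <= f(n) <= c2 * g(n)    # f(n) = Theta(g(n))
        assert c3 * f(n) <= g(n) <= c4 * f(n)    # g(n) = Theta(f(n))
    print("both pairs of bounds hold on the sampled range")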

Either f(n) = O(g(n)) or g(n) = O(f(n))

I'm trying to prove that this is correct for any function f and g with domain and co-domain N. I have seen it proven using limits, but apparently you can also prove it without them.
Essentially what I'm trying to prove is: "If f(n) is not O(g(n)), then g(n) must be O(f(n))." What I'm having trouble with is understanding what "f is not O(g)" means.
According to the formal definition of big-O, f(n) = O(g(n)) means there are some N and a constant c such that n >= N implies f(n) <= c·g(n). If f(n) != O(g(n)), I think that means there is no c that fulfills this inequality for all values of n. Yet I don't see how to use that fact to prove g(n) = O(f(n)): it doesn't show that a c' exists with g(n) <= c'·f(n), which is what would prove the claim.
Not true. Let f(n) = 1 if n is odd and zero otherwise, and g(n) = 1 if n is even and zero otherwise.
To say that f is O(g) would say there is a constant C > 0 and N > 0 such that n > N implies f(n) <= C g(n). Let n = 2 * N + 1, so that n is odd. Then f(n) = 1 but g(n) = 0 so that f(n) <= C * g(n) is impossible. Thus, f is O(g) is not true.
Similarly, we can show that g is O(f) is not true.
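Here is that counterexample in runnable form (a sketch I'm adding, not part of the original answer); whatever C and N you try, each function takes the value 1 at points where the other is 0:

    # Sketch: f is 1 on odd n and 0 otherwise, g is 1 on even n and 0 otherwise;
    # past any N there are points where f(n) <= C*g(n) fails, and points where
    # g(n) <= C*f(n) fails, for every constant C.
    f = lambda n: 1 if n % 2 == 1 else 0
    g = lambda n: 1 if n % 2 == 0 else 0

    C, N = 1000, 50
    n_odd, n_even = 2 * N + 1, 2 * N + 2
    print(f(n_odd) <= C * g(n_odd))    # False: f(n_odd) = 1 but g(n_odd) = 0
    print(g(n_even) <= C * f(n_even))  # False: g(n_even) = 1 but f(n_even) = 0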
First of all, your reading of the definition of big-O is a little bit off. You say:
I think that means there is no c that fulfills this inequality for all values of n.
In actuality, f(n) = O(g(n)) requires a single pair of constants c and N that fulfills the inequality for every n >= N; so its negation says that no matter which c and N you pick, there is some n >= N that violates the inequality.
Anyway, to answer the question:
I don't believe the statement in the question is true... Let's see if we can think of a counter-example, where f(n) ≠ O(g(n)) and g(n) ≠ O(f(n)).
note: I'm going to use n and x interchangeably, since it's easier for me to think that way.
We'd have to come up with two functions that continually cross each other as they go towards infinity. Not only that, but they'd have to keep crossing each other regardless of the constant c that we multiply them by.
So that leaves me thinking that the functions will have to alternate between two different time complexities.
Let's look at a function that alternates between y = x and y = x^2:
f(x) = .2 (x * sin(x) + x^2 * (1 - sin(x)) )
Now, if we create a similar function with a slightly offset oscillation:
g(x) = .2 (x * cos(x) + x^2 * (1 - cos(x)) )
Then these two functions will continue to cross each others' paths out to infinity.
For any number N that you select, no matter how high, there will be an x1 greater than N at which f(x1) is on its x^2 branch while g(x1) is on its x branch. Similarly, there will be an x2 greater than N at which g(x2) is on its x^2 branch and f(x2) is on its x branch.
At these points, no choice of c1 or c2 can ensure that f(x1) < c1 * g(x1) or that g(x2) < c2 * f(x2) for every such point.
In conclusion, f(n) ≠ O(g(n)) does not imply g(n) = O(f(n)).
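For what it's worth, the oscillating pair behaves the same way numerically (my own sketch, sampling near the points where cos or sin equals 1); both ratios keep growing, so no constant works in either direction:

    import math

    # Sketch: near x1 = 2*pi*j (cos = 1), f sits on its ~x^2 branch while g sits
    # on its ~x branch; near x2 = pi/2 + 2*pi*j (sin = 1) the roles are reversed.
    f = lambda x: 0.2 * (x * math.sin(x) + x ** 2 * (1 - math.sin(x)))
    g = lambda x: 0.2 * (x * math.cos(x) + x ** 2 * (1 - math.cos(x)))

    for j in (10, 100, 1000):
        x1 = 2 * math.pi * j
        x2 = math.pi / 2 + 2 * math.pi * j
        print(f(x1) / g(x1), g(x2) / f(x2))  # both ratios grow roughly like x,
                                             # so neither f = O(g) nor g = O(f)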

Resources