Big O notation: g(n) ∈ O(f(n)) ⇒ (g(n))^2 ∈ O((f(n))^2)

My question is: is this true?
g(n) ∈ O(f(n)) ⇒ (g(n))^2 ∈ O((f(n))^2)
In the long run it should be true, but I have one example, (log n)^2, which is in O(sqrt n).
Is there a way to prove this without a graph?
Thanks

It is quite intuitive that, if a function g grows no faster than another function f, then the square of g grows no faster than the square of f.
Formally:
Statement #1. g(n) ∈ O(f(n)) means that, for at least one choice of a constant k > 0, you can find a constant a such that the inequality 0 ≤ g(n) ≤ k f(n) holds for all n > a.
Statement #2. g(n)^2 ∈ O(f(n)^2) means that, for at least one choice of a constant k > 0, you can find a constant a such that the inequality 0 ≤ g(n)^2 ≤ k f(n)^2 holds for all n > a.
Since we want to prove that g(n) ∈ O(f(n)) implies g(n)^2 ∈ O(f(n)^2), we want to reach the statement #2 starting from the statement #1.
Let us take two constants k and a such that the statement #1 is satisfied.
First, notice that:
0 ≤ k f(n) holds for all n > a (from the hypothesis);
⇒ 0 ≤ f(n) holds for all n > a (since k > 0). [Result #1]
Also notice that:
g(n) ≤ k f(n) holds for all n > a (from the hypothesis);
⇒ g(n)^2 ≤ (k f(n))^2 holds for all n > a (since, from the hypothesis and result #1, both g(n) and f(n) are non-negative for all n > a, so squaring preserves the ≤ sign);
⇒ g(n)^2 ≤ k^2 f(n)^2 holds for all n > a. [Result #2]
From results #1 and #2, statement #2 is satisfied, with k^2 playing the role of the constant k.
Q.E.D.
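To see the result in action, here is a small numeric sanity check in Python (a sketch only; it uses the asker's example g(n) = (log n)^2, f(n) = sqrt(n), and the constants k = 3, a = 1 are illustrative choices, since the ratio (log n)^2 / sqrt(n) peaks at about 2.17 near n = e^4):

```python
import math

# Sanity check of the theorem on g(n) = (log n)^2, f(n) = sqrt(n).
# Statement #1 holds with k = 3, a = 1 (illustrative constants);
# the proof predicts statement #2 with constant k^2 and the same a.
k, a = 3, 1

def g(n): return math.log(n) ** 2
def f(n): return math.sqrt(n)

for n in range(a + 1, 100_000):
    assert 0 <= g(n) <= k * f(n)                  # statement #1
    assert 0 <= g(n) ** 2 <= k**2 * f(n) ** 2     # statement #2

print("statements #1 and #2 hold for all tested n")
```

This also addresses the worry in the question: (log n)^2 ∈ O(sqrt n) squares to (log n)^4 ∈ O(n), exactly as the theorem predicts.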

Related

Proving order-of-complexity statements: if f(n) is Ω(n∗g(n)), then f(n) is not O(g(n))

Show that if f(n) is Ω(n∗g(n)), then f(n) is not O(g(n))
Assume f(n) is Ω(n ∗ g(n)) and f(n) is O(g(n)). Need to show a contradiction. The approach is to find a value of n that violates the definitions.
Proof: f(n) is Ω(n ∗ g(n)) implies there exists positive values C and k such that n > k implies f(n) ≥ C ∗ n ∗ g(n). f(n) is O(g(n)) implies there exists positive values C′ and k′ such that n > k′ implies f(n) ≤ C ∗ g(n).
So what value of n violates the definition and how can I show a contradiction?
Your approach to prove the statement by contradiction is possible. But first of all, you need to be a bit more precise:
f and g are positive non-decreasing functions on integers
C and C' are > 0
Your last implication should read C' * g(n) (as opposed to C * g(n)).
So we start with:
(a) There exist positive integers C, C', k, k' such that for all n > k and n' > k':
C * n * g(n) <= f(n) and f(n') <= C' g(n')
By chaining together your two implications and merging the two universal quantifiers into one (by noting that for all n > k and n' > k' implies for all n > max(k,k')), you immediately get:
(b) There exist positive integers C, C', k, k' such that for all n > max(k,k'):
C * n * g(n) <= C' g(n)
Dividing by g(n) on both sides, which is valid because g is positive (the first assumption above), yields the equivalent:
(c) There exist positive integers C, C', k, k' such that for all n > max(k,k'):
C * n <= C'
This is equivalent to:
(d) There exist positive integers C, C', k, k' such that for all n > max(k,k'):
n <= C'/C
The last statement is equivalent to false, since n grows beyond any fixed bound C'/C. This is a contradiction, and hence the original statement is true.
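To make the contradiction concrete, here is a short Python sketch (the function g(n) = n and the constants C = 1, C' = 100 are hypothetical values chosen just for illustration):

```python
# With f(n) = n * g(n), step (d) demands n <= C'/C for all large n,
# which any single n beyond C'/C refutes. g, C, C' are illustrative.
def g(n): return n
def f(n): return n * g(n)       # f(n) >= C * n * g(n) with C = 1

C, C_prime = 1, 100             # suppose f(n) <= C' * g(n) also held
n = C_prime // C + 1            # any n > C'/C works
print(f(n) <= C_prime * g(n))   # False: the assumed O(g(n)) bound fails
```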

Finding the values in Big Oh

I am going through the asymptotic notations from here. I am reading this: f(n) ≤ c g(n)
For example, if f(n) = 2n + 2, we can satisfy f(n) ∈ O(g(n)) in many ways by adjusting the values of n and c. Or is there any specific rule or formula for selecting the values of c and n? Will n0 always be 1?
There is no formula per se. You can find the formal definition here:
f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n. (big-O notation).
What I understood from your question is that you are not getting the essence of big-O notation. If your complexity is, for example, O(n^2), then you can guarantee that beyond some value k, f(n) will in no case exceed c g(n).
Let's try to prove f(n) = 2n + 2 is O(n):
As it seems from the function itself, you cannot set the value of c equal to 2 as you want to find f(n) ≤ c g(n). If you plug in c = 2, you have to find k such that f(n) ≤ c g(n) for n ≥ k. Clearly, there is no n for which 2n ≥ 2n + 2. So, we move on to c = 3.
Now, let's find the value of k. So, we solve the inequality 3n ≥ 2n + 2. Solving it:
3n ≥ 2n + 2
=> 3n - 2n ≥ 2
=> n ≥ 2
Therefore, for c = 3, we found the value k = 2 (the inequality holds for all n ≥ k).
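As a quick sanity check, this pair of constants can be verified numerically (a minimal Python sketch; the upper test bound 10^6 is arbitrary):

```python
# Check f(n) = 2n + 2 <= c * n for c = 3 and all n >= k = 2.
c, k = 3, 2
assert all(2 * n + 2 <= c * n for n in range(k, 10**6))
print("2n + 2 <= 3n holds for every tested n >= 2")
```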
You must also understand, your function isn't just O(n). It is also O(n^2), O(n^3), O(n^4) and so on. All because corresponding values of c and k exist for g(n) = n^2, g(n) = n^3 and g(n) = n^4.
Hope it helps.

Prove that f(n) = Θ(g(n)) iff g(n) = Θ(f(n))

I have been given the problem:
f(n) and g(n) are asymptotically positive functions. Prove f(n) = Θ(g(n)) iff g(n) = Θ(f(n)).
Everything I have found points to this statement being invalid. For example an answer I've come across states:
f(n) = O(g(n)) implies g(n) = O(f(n))
f(n) = O(g(n)) means g(n) grows at least as fast as f(n). It does not imply that f(n) grows
at least as fast as g(n). Hence not true.
Another states:
If f(n) = O(g(n)), then g(n) = O(f(n)). This is false. If f(n) = 1 and g(n) = n
for all natural numbers n, then f(n) <= g(n) for all natural numbers n, so
f(n) = O(g(n)). However, suppose g(n) = O(f(n)). Then there are a natural
number n0 and a constant c > 0 such that n = g(n) <= c*f(n) = c for all n >=
n0, which is impossible.
I understand that there are slight differences between my exact question and the examples I have found, but I've only been able to come up with solutions that do not prove it. Am I correct in thinking that it cannot be proved, or am I overlooking some detail?
You can start from here:
Formal Definition: f(n) = Θ(g(n)) means there are positive constants c1, c2, and k, such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ k.
Because you have that iff, you need to prove both directions: start from the left side and prove the right side, then start from the right side and prove the left side.
Left -> right
We consider that:
f(n) = Θ(g(n))
and we want to prove that
g(n) = Θ(f(n))
So, we have some positive constants c1, c2 and k such that:
0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n), for all n ≥ k
The first relation between f and g is:
c1*g(n) ≤ f(n) => g(n) ≤ 1/c1*f(n) (1)
The second relation between f and g is:
f(n) ≤ c2*g(n) => 1/c2*f(n) ≤ g(n) (2)
If we combine (1) and (2), we obtain:
1/c2*f(n) ≤ g(n) ≤ 1/c1*f(n)
If you consider c3 = 1/c2 and c4 = 1/c1, they exist and are positive (because the denominators are positive). And this is true for all n ≥ k (where k can be the same).
So, we have some positive constants c3, c4, k such that:
c3*f(n) ≤ g(n) ≤ c4*f(n), for all n ≥ k
which means that g(n) = Θ(f(n)).
Analogous for right -> left.
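To illustrate the left -> right direction on a concrete pair, here is a short Python sketch (f(n) = 3n^2 + n, g(n) = n^2, and the constants c1 = 1, c2 = 4, k = 1 are illustrative assumptions that satisfy the Θ definition):

```python
# f(n) = Theta(g(n)) with c1 = 1, c2 = 4, k = 1 (illustrative choices).
# The proof predicts g(n) = Theta(f(n)) with c3 = 1/c2 and c4 = 1/c1.
def f(n): return 3 * n * n + n
def g(n): return n * n

c1, c2, k = 1, 4, 1
c3, c4 = 1 / c2, 1 / c1

for n in range(k, 10_000):
    assert c1 * g(n) <= f(n) <= c2 * g(n)   # hypothesis: f = Theta(g)
    assert c3 * f(n) <= g(n) <= c4 * f(n)   # conclusion: g = Theta(f)

print("both Theta bounds hold with the derived constants")
```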

Asymptotic proof examples

I came across two asymptotic function proofs.
1. f(n) = O(g(n)) implies 2^f(n) = O(2^g(n))
Given: f(n) ≤ C1 g(n)
So, 2^f(n) ≤ 2^C1 g(n) --(i)
Now, 2^f(n) = O(2^g(n)) → 2^f(n) ≤ C2 2^g(n) --(ii)
From,(i) we find that (ii) will be true.
Hence 2^f(n) = O(2^g(n)) is TRUE.
Can you tell me if this proof is right? Is there any other way to solve this?
2. f(n) = O((f(n))^2)
How do I prove the second example? Here I consider two cases: one where f(n) < 1 and the other where f(n) > 1.
Note: None of them are homework questions.
The attempted proof for example 1 looks well-intentioned but is flawed. First, “2^f(n) ≤ 2^C1 g(n)” parses as 2^f(n) ≤ (2^C1)*g(n), which in general is false. It should have been written 2^f(n) ≤ 2^(C1*g(n)). In the line beginning with “Now”, you should explicitly say C2 = 2^C1. Finally, the claim “(ii) will be true” is not an argument: (ii) merely restates what has to be proved, and nothing in (i) establishes it.
A function like f(n) = 1/n disproves the claim in example 2 because there are no constants N and C such that for all n > N, f(n) < C*(f(n))². Proof: Let some N and C be given. Choose n>N, n>C. f(n) = 1/n = n*(1/n²) > C*(1/n²) = C*(f(n))². Because N and C were arbitrarily chosen, this shows that there are no fixed values of N and C such that for all n > N, f(n) < C*(f(n))², QED.
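The counterexample is easy to see numerically (a Python sketch; C = 1000 and N = 10 stand in for “any constants” and are otherwise arbitrary):

```python
# For f(n) = 1/n there are no constants C, N with f(n) <= C * f(n)**2
# for all n > N: any n > max(N, C) refutes the bound.
C, N = 1000, 10            # arbitrary illustrative constants
n = max(N, C) + 1
f = 1 / n
print(f <= C * f**2)       # False: 1/n > C/n**2 once n > C
```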
Saying that “f(n) ≥ 1” holds only for some n is not enough to prove the second claim; but if you require “f(n) ≥ 1 for all n” (that is, f ≥ 1 everywhere), it becomes provable. For example, if f(n) = 1/n for odd n and 1+n for even n, we have f(n) > 1 for even n > 0, and less than 1 for odd n. To prove that f(n) = O((f(n))²) is false for this f, use the same proof as in the previous paragraph, but with the additional provision that n is odd.
Actually, “f(n) ≥ 1 for all n” is stronger than necessary to ensure f(n) = O((f(n))²). Let ε be any fixed positive value. No matter how small ε is, “f(n) ≥ ε for all n > N'” ensures f(n) = O((f(n))²). To prove this, take C = max(1, 1/ε) and N=N'.
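That last claim can also be checked directly (a Python sketch; eps = 0.1 and f(n) = eps + 1/n are illustrative choices, with f bounded below by eps everywhere):

```python
# If f(n) >= eps for all n > N', then f(n) <= C * f(n)**2 with
# C = max(1, 1/eps), because then C * f(n) >= f(n)/eps >= 1.
eps = 0.1
C = max(1, 1 / eps)             # C = 10 here
def f(n): return eps + 1 / n    # stays >= eps for every n >= 1

assert all(f(n) <= C * f(n) ** 2 for n in range(1, 100_000))
print("f(n) <= C * f(n)^2 holds for all tested n")
```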

Exponential growth in big-o notation

I have a problem understanding the following question. It says:
Prove that exponential functions have different orders of growth for different
values of base.
It looks to me like, for example, if we consider a^n: if a = 3, its growth rate will be larger than when a = 2. It looks obvious. Is that really what the question wants? How can I do a formal proof of that?
Thanks in advance for your help.
f(n) ∈ O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
Let 1 < a < b without loss of generality, and suppose b^n ∈ O(a^n). This implies that there are positive constants c and k such that 0 ≤ b^n ≤ c·a^n for all n ≥ k, which is impossible:
b^n ≤ c·a^n for all n ≥ k implies (b/a)^n ≤ c for all n ≥ k,
which contradicts lim (b/a)^n = +∞, because b/a > 1.
If 1 < a < b, then b^n ∉ O(a^n), but a^n ∈ O(b^n), so O(a^n) ⊊ O(b^n).
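A quick numeric illustration with a = 2 and b = 3 (a Python sketch; the bases are arbitrary values with 1 < a < b):

```python
# For 1 < a < b, (b/a)**n grows without bound, so no fixed c can
# satisfy b**n <= c * a**n for all n >= k. Bases are illustrative.
a, b = 2, 3
for n in (10, 50, 100):
    print(n, (b / a) ** n)   # ~57.7, ~6.4e8, ~4.1e17: exceeds any fixed c
```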
