Big O notation constants

Given the following functions f(n) and g(n), is f in O(g(n)), is f in Θ(g(n)), or both? If true, specify a constant c and a point n0; if false, briefly explain why.
(a) f(n) = 2n, g(n) = 2^2n
(b) f(n) = n!, g(n) = 2^n
I understand that for (a), f(n) = O(g(n)) because g(n) upper-bounds f(n), and that for (b), g(n) = O(f(n)) by dominance, since n! > 2^n.
I have done some research but could not find much on how to compute the constants c and n0 for this type of question. Thanks for the reply :)

a) f(n) = 2n, g(n) = 2^(2n)
(I added parentheses.)
f(n) = O(g(n)) iff |f(n)| <= C*|g(n)| for some C > 0 and all n > n0
2n <= 1*(2^(2n)) for n > 1
Therefore 2n = O(2^(2n)). My constants are C = 1 and n0 = 1. But others work too.
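As a quick numeric sanity check (my addition, not part of the original answer), this Python snippet verifies the inequality with the chosen constants C = 1 and n0 = 1 over a range of n:

    # Sanity check: 2n <= C * 2^(2n) for the chosen C = 1 and all n > n0 = 1
    C, n0 = 1, 1
    assert all(2*n <= C * 2**(2*n) for n in range(n0 + 1, 100))
    print("2n <= C * 2^(2n) holds for all tested n > n0")

Such a check cannot prove the bound, but it is a cheap way to catch a wrongly chosen constant.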

Related

If f(n) is Θ(h(n)) and g(n) = O(h(n)), then f(n) + g(n) is Θ(h(n)). True or false?

I have been trying to prove or disprove the statement above. I have proved that if f(n) is Θ(h(n)) and g(n) = O(h(n)), then f(n) + g(n) is O(h(n)), but when I try to prove or disprove that f(n) + g(n) is also Ω(h(n)), I run into a problem. Below is my approach.
From the given statements,
there exist b, c > 0 such that b·h(n) <= f(n) <= c·h(n), and there exists a > 0 such that g(n) <= a·h(n).
I proved the O(h(n)) bound by adding the two inequalities above, but to prove or disprove the lower bound formally I am stuck, since I have a lower bound only for f(n) and no lower bound for g(n).
I am also getting confused about whether big-O notation always involves strict inequalities, e.g., if f(n) is Θ(h(n)), does the following statement hold:
There exist b, c > 0 such that b·h(n) <= f(n) <= c·h(n).
Thank you.
Assuming f and g are positive,
f + g >= f
and f = Ω(h(n)) (this follows from f = Θ(h(n))), so the same lower-bound constant works for the sum:
f + g = Ω(h(n)).
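To make this concrete, here is a small Python illustration with assumed functions f(n) = 3n², g(n) = n, h(n) = n² (my choice, not from the post), where f = Θ(h) and g = O(h):

    # Assumed instances: f(n) = 3n^2 is Theta(h(n)) with h(n) = n^2,
    # and g(n) = n is O(h(n)).
    f = lambda n: 3 * n**2
    g = lambda n: n
    h = lambda n: n**2
    b = 1  # lower-bound constant from f = Theta(h): b*h(n) <= f(n)
    # Since f + g >= f >= b*h, the same b witnesses f + g = Omega(h).
    assert all(f(n) + g(n) >= b * h(n) for n in range(1, 1000))
    print("f(n) + g(n) >= b*h(n) for all tested n")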

Asymptotic Growth: Understanding the specific proof of f(n) + o(f(n)) = Θ(f(n))?

I'm working through a proof that f(n) + o(f(n)) = Θ(f(n)), and I came across a step that I am having trouble understanding.
We let f(n) and g(n) be asymptotically positive functions and assume g(n) = o(f(n)).
The proof states that since f(n) + g(n) ≥ f(n) for all n, we can conclude that f(n) + g(n) = Ω(f(n)).
We can also conclude that f(n) + g(n) ≤ 2f(n) for all sufficiently large n; therefore f(n) + g(n) = O(f(n)).
I am having trouble understanding why f(n) + g(n) = Ω(f(n)) and f(n) + g(n) = O(f(n)) are true. How can we prove the tight bound specifically when we add g(n) to f(n)? What exactly are we concluding from the value of g(n)?
One way of proving that f(n) is Θ(g(n)) is to prove two separate statements: that f(n) is Ω(g(n)) and that f(n) is O(g(n)). That this suffices follows directly from the definitions of those notations.
In this problem, choose the constant c = 1 for the lower bound: for every n we have f(n) + g(n) >= c * f(n), which by definition shows that f(n) + g(n) is Ω(f(n)). For the O(f(n)) part, choose c = 2: we need some n0 such that f(n) + g(n) <= c * f(n) for every n > n0, which is equivalent to g(n) <= f(n) for every n > n0. That, in turn, follows from the assumption g(n) = o(f(n)) (take the constant 1 in the definition of little-o).
Hope this helps.
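As a hedged illustration of the squeeze between f(n) and 2f(n), take f(n) = n² and g(n) = n (so g = o(f)); these concrete functions are my choice, not from the question:

    # Assumed example: f(n) = n^2 and g(n) = n, so g = o(f).
    f = lambda n: n**2
    g = lambda n: n
    n0 = 1  # for n > n0 we already have g(n) <= f(n)
    for n in range(n0 + 1, 1000):
        s = f(n) + g(n)
        assert f(n) <= s <= 2 * f(n)  # c = 1 gives Omega(f), c = 2 gives O(f)
    print("f(n) <= f(n)+g(n) <= 2*f(n) for all tested n > n0")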

Asymptotic notation properties proofs?

I am trying to prove that if f(n) and g(n) are asymptotically positive functions, then:
f(n) = O((f(n))^2)
f(n) = O(g(n)) implies 2^(f(n)) = O(2^(g(n)))
f(n) = O(g(n)) implies g(n) = O(f(n))
1) Theorem: If f(n) is an asymptotically positive function from natural numbers to natural numbers, then f(n) = O((f(n))^2) (note I have added an extra, perhaps implied, assumption).
Proof: Because f(n) is an asymptotically positive function from natural numbers to natural numbers, it is guaranteed that for all natural numbers n greater than or equal to some natural number n0, f(n) > 0, hence f(n) >= 1. Because f(n) is positive, we are free to multiply both sides of this inequality by f(n) without changing its direction, giving f(n)^2 >= f(n). Therefore we can choose c = 1 and reuse the n0 from the assumption to show that f(n) = O((f(n))^2). (Recall that by the definition of Big-Oh, f(n) = O(g(n)) if and only if there exist constants c > 0 and n0 such that f(n) <= c * g(n) for all n >= n0.)
2) Theorem: if f(n) and g(n) are asymptotically positive functions from natural numbers to natural numbers and f(n) = O(g(n)), then it is not necessarily true that 2^(f(n)) = O(2^(g(n))).
Proof: The proof is by counterexample. It can be shown that 4n = O(2n), and 4n and 2n are both asymptotically positive functions from naturals to naturals. However, it can also be shown that 2^(4n) = 16^n is not O(2^(2n)) = O(4^n).
3) Theorem: if f(n) and g(n) are asymptotically positive functions from natural numbers to natural numbers and f(n) = O(g(n)), then it is not necessarily true that g(n) = O(f(n)).
Proof: The proof is by counterexample. It can be shown that n = O(n^2), and n and n^2 are both asymptotically positive functions from naturals to naturals. However, it can also be shown that n^2 is not O(n).
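For a quick numeric view of why both counterexamples work (my addition, not part of the proofs), note that the ratios 16^n / 4^n = 4^n and n^2 / n = n grow without bound, so no constant c can cap them:

    # 16^n / 4^n = 4^n and n^2 / n = n both grow without bound, so no
    # constant c can make 16^n <= c*4^n or n^2 <= c*n hold for all large n.
    for n in (1, 5, 10, 20):
        print(n, 16**n // 4**n, n**2 // n)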
f(n) = O((f(n))^2)
Any function is big-O of itself, i.e., we can pick a constant c such that f(n) <= c*f(n).
Thus,
if f(n) is less than or equal to c*f(n),
then f(n) will also be less than or equal to c*f(n)*f(n), provided f(n) >= 1 for all sufficiently large n (which holds, e.g., for asymptotically positive integer-valued f(n)).
Mathematically, f(n) = O(f(n)*f(n)) = O((f(n))^2) is true.
f(n) = O(g(n)) implies 2^(f(n)) = O(2^(g(n)))
Careful here: f(n) = O(g(n)) only gives f(n) <= c*g(n) for some constant c, so 2^(f(n)) <= 2^(c*g(n)) = (2^(g(n)))^c, and the constant ends up in the exponent.
The conclusion 2^(f(n)) = O(2^(g(n))) therefore only follows when we can take c = 1, i.e., when f(n) <= g(n) for all large n.
In general the implication is false, as the counterexample above (4n = O(2n) but 16^n is not O(4^n)) shows.
f(n) = O(g(n)) implies g(n) = O(f(n))
This one is wrong.
f(n) = O(g(n)) implies g(n) = Ω(f(n)).
If f(n) = O(g(n)), then f(n) is upper-bounded by g(n) (up to a constant factor), which means that g(n) is lower-bounded by f(n); therefore g(n) = Ω(f(n)).

Prove that f(n) = Θ(g(n)) iff g(n) = Θ(f(n))

I have been given the problem:
f(n) and g(n) are asymptotically positive functions. Prove f(n) = Θ(g(n)) iff g(n) = Θ(f(n)).
Everything I have found points to this statement being invalid. For example an answer I've come across states:
f(n) = O(g(n)) implies g(n) = O(f(n))
f(n) = O(g(n)) means g(n) grows at least as fast as f(n). It cannot imply that f(n) grows
faster than g(n). Hence not true.
Another states:
If f(n) = O(g(n)), then g(n) = O(f(n)). This is false. If f(n) = 1 and g(n) = n
for all natural numbers n, then f(n) <= g(n) for all natural numbers n, so
f(n) = O(g(n)). However, suppose g(n) = O(f(n)). Then there are a natural
number n0 and a constant c > 0 such that n = g(n) <= c·f(n) = c for all n >= n0,
which is impossible.
I understand that there are slight differences between my exact question and the examples I have found, but I have only been able to come up with solutions that do not prove it. Am I correct in thinking that it cannot be proved, or am I overlooking some detail?
You can start from here:
Formal Definition: f(n) = Θ (g(n)) means there are positive constants c1, c2, and k, such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ k.
Because the statement is an iff, you need to prove both directions: assume the left side and derive the right side, then assume the right side and derive the left side.
Left -> right
We consider that:
f(n) = Θ(g(n))
and we want to prove that
g(n) = Θ(f(n))
So, we have some positive constants c1, c2 and k such that:
0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n), for all n ≥ k
The first relation between f and g is:
c1*g(n) ≤ f(n) => g(n) ≤ (1/c1)*f(n) (1)
The second relation between f and g is:
f(n) ≤ c2*g(n) => (1/c2)*f(n) ≤ g(n) (2)
If we combine (1) and (2), we obtain:
(1/c2)*f(n) ≤ g(n) ≤ (1/c1)*f(n)
If you set c3 = 1/c2 and c4 = 1/c1, these constants exist and are positive (because c1 and c2 are positive), and the inequality holds for all n ≥ k (the same k works).
So, we have some positive constants c3, c4, k such that:
c3*f(n) ≤ g(n) ≤ c4*f(n), for all n ≥ k
which means that g(n) = Θ(f(n)).
The proof for right -> left is analogous.
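To see the construction in action, here is a sketch with assumed functions f(n) = 3n² and g(n) = n² (c1 = 2, c2 = 4, k = 1 are witnesses that f = Θ(g)); these instances are my choice, not from the answer:

    # Assumed example: f(n) = 3n^2 = Theta(g(n)) with g(n) = n^2,
    # witnessed by c1 = 2, c2 = 4, k = 1 (2n^2 <= 3n^2 <= 4n^2).
    f = lambda n: 3 * n**2
    g = lambda n: n**2
    c1, c2, k = 2, 4, 1
    c3, c4 = 1 / c2, 1 / c1  # the constants constructed in the proof
    assert all(c3 * f(n) <= g(n) <= c4 * f(n) for n in range(k, 1000))
    print("c3*f(n) <= g(n) <= c4*f(n), so g = Theta(f)")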

Comparing complexities

I have these three questions for an exam review:
If f(n) = 2n - 3, give two different functions g(n) and h(n) (so g(n) ≠ h(n)) such that f(n) = O(g(n)) and f(n) = O(h(n)).
Now do the same again with functions g'(n) and h'(n), but this time the functions should satisfy
g'(n) = Θ(f(n)) and f(n) = o(h'(n)).
Is it possible for a function to satisfy both f(n) = O(g(n)) and f(n) = Ω(g(n))?
I know that one function is big-O of another if it is less than or equal to the other function (asymptotically, up to a constant). So I think for 1. I could take g(n) = 2n² - 3 and h(n) = 2n² - 10.
I also know that one function is Θ of another if it is essentially equal to the other function (we can ignore constants), and little-o of it if it is strictly smaller asymptotically, so for 2. I think you could have g'(n) = 2n - 15 and h'(n) = 2n.
For 3.: it is possible for a function to be both O(g(n)) and Ω(g(n)), because both notations allow the function to be the same as the given function, so you could have a function g(n) that equals f(n) and satisfies the requirements for both O and Ω.
Can someone please tell me if this is correct?
Your answers are mostly right, but I would like to add some points:
Given is f(n) = 2n - 3.
With g(n) = 2n² - 3 and h(n) = 2n² - 10, f(n) is in O(g(n)) and in O(h(n)). But your g(n) and h(n) are basically the same; at the least, they are both in Θ(n²). Many other functions would also work, e.g.:
f(n) ∈ O(n) ⇒ g(n) = n
f(n) ∈ O(nᵏ) ⇒ g(n) = nᵏ ∀ k ≥ 1
f(n) ∈ O(2ⁿ) ⇒ g(n) = 2ⁿ
g'(n) = 2n - 15 reduces to g'(n) = n if we think in terms of complexity classes, and this is right. In fact, up to constant factors it is the only possible answer.
But f(n) ∈ o(h'(n)) does not hold for h'(n) = 2n. Little-o means that
lim(n → ∞) |f(n)/g(n)| = 0 ⇔ f(n) ∈ o(g(n))
So you can choose h'(n) = n², or more generally h'(n) = nᵏ ∀ k > 1, or h'(n) = cⁿ for a constant c > 1.
Yes, it is possible, and you can also take it as a definition of Θ(g(n)):
f(n) ∈ Θ(g(n)) ⇔ f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n))
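As a small numeric sketch of the little-o criterion quoted above (assuming the choice h'(n) = n² for part 2), the ratio f(n)/h'(n) visibly tends to 0:

    # Assumed choice h'(n) = n^2: the ratio f(n)/h'(n) = (2n - 3)/n^2
    # tends to 0, which is exactly the little-o criterion quoted above.
    f = lambda n: 2 * n - 3
    h = lambda n: n**2
    for n in (10, 100, 1000, 10**6):
        print(n, f(n) / h(n))  # values shrink toward 0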
