Questions on runtime analysis properties - Big-O

I am wondering if the following are true.
If f(n) is O(g(n)) and f(n) is also Ω(g(n)), does that mean f(n) is also Θ(g(n))? Also, if either of the two statements above is false, does that mean f(n) is not Θ(g(n))?

If f(n) is O(g(n)) and f(n) is also Ω(g(n)), does that mean f(n) is also Θ(g(n))?
Yes. That is the definition of big Theta.
Also, if either of the two statements above is false, does that mean f(n) is not Θ(g(n))?
Yes. The definition is an "if and only if": f(n) is Θ(g(n)) exactly when both the O and the Ω bounds hold, so if either fails, f(n) is not Θ(g(n)).
If we know f1(n) is Θ(g(n)) and f2(n) is Θ(g(n)), does that mean f1(n) + f2(n) is Θ(g(n))? Why?
Because there are constants with c1·g(n) ≤ f1(n) ≤ C1·g(n) and c2·g(n) ≤ f2(n) ≤ C2·g(n) for large enough n. Adding the inequalities gives (c1+c2)·g(n) ≤ f1(n) + f2(n) ≤ (C1+C2)·g(n), so the sum is still Θ(g(n)); any linear combination with positive coefficients preserves the relationship.

Since big-Omega is a lower bound, big-O is an upper bound, and big-Theta is both an upper and a lower bound, it stands to reason: yes.
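As a quick sanity check of the constants argument above, here is a minimal sketch (the functions f1, f2, g and all constants below are illustrative assumptions, not part of the original question; a finite loop is evidence, not a proof):

    # If f1 and f2 are both Theta(g), their sum stays squeezed between
    # (c1 + c2) * g(n) and (C1 + C2) * g(n).
    def f1(n): return 3 * n * n + 5   # Theta(n^2): 3*n^2 <= f1(n) <= 4*n^2 for n >= 3
    def f2(n): return 2 * n * n + n   # Theta(n^2): 2*n^2 <= f2(n) <= 3*n^2 for n >= 1
    def g(n):  return n * n

    for n in range(3, 10**4):
        s = f1(n) + f2(n)
        assert 5 * g(n) <= s <= 7 * g(n)  # (3+2)*g(n) <= f1+f2 <= (4+3)*g(n)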

Related

If f(n) is in O(g(n)), could it have the same growth as g(n)?

I know that f(n) grows slower than g(n), but could f(n) have the same growth rate as g(n), since the ≤ sign in the definition includes equality?
Based on the Big-O definition, yes. For example, n is in O(n) as well. In this case, f(n) = n and g(n) = n are even equal, which is a stronger relation than having the same growth rate.
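A slightly less trivial sketch (the functions and constants here are illustrative choices, not from the question): f(n) = 2n has the same growth rate as g(n) = n without being equal to it, and the definition is satisfied with c = 2, n0 = 1.

    # f(n) = 2n is O(n): pick c = 2, n0 = 1, then f(n) <= c * g(n) for all n >= n0.
    def f(n): return 2 * n
    def g(n): return n

    c, n0 = 2, 1
    assert all(f(n) <= c * g(n) for n in range(n0, 10**5))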

Why is f=O(g) if f(n) grows more slowly than g(n)?

If a function f(n) grows more slowly than a function g(n), why is f(n) = O(g(n))?
e.g. if f(n) is 4n^4 and g(n) is log((4n)^(n^4))
My book says f = O(g(n)) because g = n^4·log(4n) = n^4·(log n + log 4) = O(n^4·log n). I understand why g = O(n^4·log n), but I'm not sure how they reached the conclusion that f = O(g(n)) from the big-O of g.
I understand that f(n) grows more slowly than g(n) just by thinking about the graphs, but I'm having trouble understanding asymptotic behavior and why f = O(g(n)) in general.
The formal definition of big-O notation is that
f(n) = O(g(n)) if there are constants n0 and c such that for any n ≥ n0, we have f(n) ≤ c · g(n).
In other words, f(n) = O(g(n)) if for sufficiently large values of n, the value of f(n) is upper-bounded by some constant multiple of g(n).
Notice that this just says that f(n) is upper-bounded by a constant multiple of g(n), not that f(n)'s rate of growth is the same as g(n)'s rate of growth. In that sense, you can think of f(n) = O(g(n)) as akin to saying something like "f ≤ g": f doesn't grow faster than g, leaving open the possibility that g grows a lot faster than f does.
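To make this concrete for the book's example, here is a minimal numeric spot-check (a sketch under assumptions: log base 2 is chosen since the book does not fix a base, and c = 1, n0 = 4 are illustrative witnesses; a finite check illustrates the definition but does not prove it):

    import math

    # Book's example: f(n) = 4*n^4 and g(n) = log((4n)^(n^4)) = n^4 * log(4n).
    # With log base 2, f(n) <= 1 * g(n) as soon as log2(4n) >= 4, i.e. n >= 4.
    def f(n): return 4 * n**4
    def g(n): return n**4 * math.log2(4 * n)

    c, n0 = 1, 4
    assert all(f(n) <= c * g(n) for n in range(n0, 10**4))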

Upper bound intersecting unnecessarily?

I was going through the basics of Big-O notation.
f(n) = Ω(g(n)) means c·g(n) is a lower bound on f(n), such that f(n) is always ≥ c·g(n)
f(n) = O(g(n)) means c·g(n) is an upper bound on f(n), such that f(n) is always ≤ c·g(n)
for all n ≥ n0
The upper and lower bounds are clear in the graph above, but why are f(n) and the upper bound intersecting, when the definition above seems to rule that out? Does that intersection have meaning, or am I pointing it out unnecessarily?
Source: The Algorithm Design Manual by Skiena
Based on the first two definitions, there should not be an intersection, because of the word always:
f(n) = Ω(g(n)) means c·g(n) is a lower bound on f(n) such that f(n) is always ≥ c·g(n)
f(n) = O(g(n)) means c·g(n) is an upper bound on f(n) such that f(n) is always ≤ c·g(n)
These definitions are not exactly correct, because the idea behind big-O notation is to look at the number of operations when n is really big. In layman's terms, it means that you start checking the complexity only after some number that you consider big enough. This is shown in your picture:
Upper and lower bounds valid for n > n0 ...
and this is why the picture has a vertical line at n0. You do not care about anything before this line, because you consider only the numbers after n0 big enough.
To make these definitions exactly correct, just add "for n > n0" at the end of both of them.
The definition is simply inaccurate. Big-O notation is about asymptotic growth. As such, its properties are considered for "large enough N", which means they might not hold true for small N's.
In the chart, a "large enough N" is marked as N0, after which the limiting property is maintained.
In addition to what has already been said in other answers, the inequalities in the definition are incorrect as well; they should be reversed:
f(n) = Ω(g(n)) means c·g(n) is a lower bound on f(n) such that f(n) is always ≥ c·g(n)
f(n) = O(g(n)) means c·g(n) is an upper bound on f(n) such that f(n) is always ≤ c·g(n)
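A minimal sketch of why an intersection before n0 is harmless (f, g, and the constants below are illustrative assumptions, not from Skiena's figure): with f(n) = 3n and g(n) = n², the curves cross at n = 3, yet f(n) ≤ 1·g(n) holds for every n ≥ 3, so f(n) = O(g(n)) with c = 1 and n0 = 3.

    # f and g intersect at n = 3 (f(3) = g(3) = 9); before that, f > g.
    # The Big-O claim only requires f(n) <= c * g(n) for n >= n0.
    def f(n): return 3 * n
    def g(n): return n * n

    c, n0 = 1, 3
    assert f(1) > c * g(1) and f(2) > c * g(2)               # bound fails before n0
    assert all(f(n) <= c * g(n) for n in range(n0, 10**5))   # holds from n0 on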

What is actually meant by the Big-O graph

There is a lot of explanation about Big-O, but I'm really confused about this part.
According to the definition of Big-O, for a function
f(n) ≤ c·g(n), for n ≥ n0
"f(n) is big-Oh of g(n)."
But a description of a function in terms of big-O notation usually only provides an upper bound on the growth rate of the function.
So, for example, 34 is an upper bound for the set {5, 10, 34}.
So how, in this graph, is f(n) O(g(n))? If I take the upper bound of the g(n) function, its value would be different from what is mentioned here for n ≥ n0.
Beyond n0, f(n) will not grow faster than g(n): f(n)'s rate of growth as a function of n is at most g(n)'s.
g(n)'s rate of growth is said to be an upper bound on f(n)'s rate of growth if f(n) is Big-O of g(n).
The worst-case rate of growth of f(n) will be at most that of g(n), since f(n) is Big-O of g(n).
This is all about knowing just how big f(n) can grow relative to another known function.
For example, if f(n) = n^2, and g(n) is n^3, then trivially f(n) is Big-O of g(n) since n^2 will never grow faster than n^3.
"c" is used for mathematical proofs - it's just a linear scaling variable. We can't just go around and claim something is Big-O of something else. If we choose n0 and c for a given g(n), and this equation holds
f(n) ≤ c ·g(n), for n ≥ n0
then we can show that truly f(n) is Big-O of g(n).
Example:
f(n) = n^2;
g(n) = n^3;
We can choose n0 = 1, and c = 1 such that
f(n) ≤ 1·g(n), for n ≥ 1
which becomes
n^2 ≤ 1·n^3, for n ≥ 1
which always holds, thus f(n) is proven to be Big-O of g(n).
Proofs can get more complicated than this example, but this is the gist of it.
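The witness above can also be spot-checked numerically (a minimal sketch; a finite loop supports intuition but is not a substitute for the algebraic proof above):

    # Witness from the proof above: c = 1, n0 = 1 for f(n) = n^2, g(n) = n^3.
    c, n0 = 1, 1
    assert all(n**2 <= c * n**3 for n in range(n0, 10**4))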

Complexity analysis of logarithms

I have two functions, f(n) = log₂ n and g(n) = log₁₀ n. I am trying to decide whether f(n) is O(g(n)), Ω(g(n)), or Θ(g(n)). I think I should take the limit of f(n)/g(n) as n goes to infinity, and I think that limit is a constant, so f(n) must be Θ(g(n)).
Am I right?
log₂ n = log₁₀ n / log₁₀ 2 (by the change-of-base formula)
So f(n) = g(n) / log₁₀ 2
So f(n) and g(n) only differ by a constant factor (since log₁₀ 2 is a constant).
So, from the definitions of O(x), Ω(x) and Θ(x), I'd say:
f(n) ∈ O(g(n)),
f(n) ∈ Ω(g(n)),
f(n) ∈ Θ(g(n))
Yes, you are right. From a complexity point of view (at least from a big-O point of view) it doesn't matter whether it is log₂ or log₁₀.
Since f(n) is both O(g(n)) and Ω(g(n)), f(n) is Θ(g(n)).
As the limit is constant, you are right that f(n) ∈ Θ(g(n)). Also, of course, g(n) ∈ Θ(f(n)).
BTW: not only is the limit of the ratio constant, but log₂ n / log₁₀ n is always the same constant (log₂ 10).
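A minimal numeric sketch of that constant ratio (the sample points below are arbitrary):

    import math

    # log2(n) / log10(n) equals log2(10) ~= 3.3219 for every n > 1.
    for n in (2, 10, 10**3, 10**9):
        ratio = math.log2(n) / math.log10(n)
        assert abs(ratio - math.log2(10)) < 1e-9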
