upper-bound intersecting unnecessarily

I was going through the basics of Big-O notation.
f(n) = Ω(g(n)) means c.g(n) is a lower bound on f(n), such that f(n) is always ≥ c.g(n)
f(n) = O(g(n)) means c.g(n) is an upper bound on f(n), such that f(n) is always ≤ c.g(n)
for all n ≥ n0
The upper and lower bounds are clear in the graph above, but why do f(n) and the upper bound intersect, when the definition above says they shouldn't? Does that intersection have a meaning, or am I pointing it out unnecessarily?
Source: The Algorithm Design Manual by Skiena

Based on the first two definitions, there should not be an intersection, because of the word always:
f(n) = Ω(g(n)) means c.g(n) is a lower bound on f(n), such that f(n) is always ≥ c.g(n)
f(n) = O(g(n)) means c.g(n) is an upper bound on f(n), such that f(n) is always ≤ c.g(n)
These definitions are not exactly correct, because the idea behind big-O notation is to look at the number of operations when n is really big. In layman's terms, you start checking the complexity only after some number that you consider big enough. This is pointed out in your picture:
Upper and lower bounds valid for n > n0 ...
and this is why the picture has a vertical line at n0. You do not care about anything before this line, because only numbers after n0 are considered big enough.
To make these definitions exactly correct, just add "for n > n0" at the end of both of them.
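As a quick numerical illustration (the functions and constants here are my own pick, not from the book): with f(n) = n^2 + 50, g(n) = n^2 and c = 2, the curve c.g(n) sits below f(n) for small n, so the two graphs cross, yet the bound holds from n0 = 8 onward, which is all that O(.) asks for. A minimal sketch in Python:
# Hypothetical example: f(n) = n^2 + 50, g(n) = n^2, c = 2, n0 = 8.
# f(n) <= c*g(n) fails for n < 8 (the curves intersect there) and holds for n >= 8.
def f(n): return n**2 + 50
def g(n): return n**2

c, n0 = 2, 8
for n in range(1, 16):
    print(n, f(n), c * g(n), f(n) <= c * g(n))   # False up to n = 7, True from n = 8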

The definition is simply inaccurate. Big-O notation is about asymptotic growth. As such, its properties are considered for "large enough N", which means they might not hold for small N.
In the chart, a "large enough N" is marked as N0, after which the limiting property is maintained.

In addition to what has already been said in other answers, the inequalities in the definition are incorrect as well; they should be reversed:
f(n) = Ω(g(n)) means c.g(n) is a lower bound on f(n) such that f(n) is always ≥ c.g(n)
f(n) = O(g(n)) means c.g(n) is an upper bound on f(n) such that f(n) is always ≤ c.g(n)

Related

An f(n) and g(n) that aren't upper bounds of each other

As the title says, can someone name an f(n) and a g(n) that aren't upper bounds of each other? I had absolutely no idea and put down two random constants:
f(n) = 8
g(n) = 3
Still no idea
In your example, they are both O(1). I'd say they're both "equivalent" and both upper/lower bounds of each other.
I'm pretty sure
f(n) = sin(n)
g(n) = cos(n)
will work. If you take the limit as n approaches infinity, f(n)/g(n) does not converge, and neither does g(n)/f(n). Therefore, neither is an upper bound of the other.
Please post in a comment if you're not sure about why limits are being used here, and I can explain in greater depth.
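In case a numerical check helps (this sketch is my own, under the assumption that sampling integer n is convincing enough): the ratio |sin(n)/cos(n)| = |tan(n)| keeps taking both huge and tiny values as n runs over the integers, so no single constant can cap either f/g or g/f for all large n.
# Sample |sin(n)/cos(n)| at integer n; the running max keeps growing and the
# running min keeps shrinking, so neither ratio settles under a constant bound.
import math

largest, smallest = 0.0, float("inf")
for n in range(1, 200000):
    r = abs(math.tan(n))
    largest, smallest = max(largest, r), min(smallest, r)
print(largest, smallest)   # largest is very big, smallest is very close to 0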
Take f(n) to be any positive function of n, and
g(n) = n*f(n) if n is even, else f(n)/n
Then there is no constant A such that, for n large enough, g(n) <= A f(n), and
there is no constant B such that, for n large enough, f(n) <= B g(n). Thus g is not O(f) and f is not O(g).
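A tiny sketch of that construction (I picked f(n) = 1 for concreteness; any positive f behaves the same way): the ratio g(n)/f(n) equals n for even n and 1/n for odd n, so it is unbounded above and also drops toward 0, ruling out both g = O(f) and f = O(g).
# The construction with f(n) = 1: g(n)/f(n) alternates between n and 1/n.
def f(n): return 1.0
def g(n): return n * f(n) if n % 2 == 0 else f(n) / n

for n in (10, 11, 1000, 1001):
    print(n, g(n) / f(n))   # 10.0, 1/11, 1000.0, 1/1001: no constant bounds it either way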

Clarification of Theta notation in complexity analysis: Θ(g)

When we talk about Θ(g) are we referring to the highest order term of g(n) or are we referring to g(n) exactly as it is?
For example, if f(n) = n^3 and g(n) = 1000n^3 + n, does Θ(g) mean Θ(1000n^3 + n) or Θ(n^3)?
In this scenario, can we say that f(n) is Θ(g)?
Θ(g) yields a set of functions that are all in the same complexity class. Θ(1000n^3 + n) is equal to Θ(n^3) because both of these denote the same set.
For simplicity's sake one will usually drop the non-significant terms and multiplicative constants. The lower order additive terms don't change the complexity, nor do any multipliers, so there's no reason to write them out.
Since Θ(g) is a set, you would say that f(n) ∈ Θ(g).
NB: Many CS teachers, textbooks, and other resources muddy the waters by using imprecise notation. Lots of people say that f(n) = n^3 is O(n^3), rather than f(n) = n^3 is in O(n^3). They use = when they mean ∈.
Θ(g(n)) lies between O(g(n)) and Ω(g(n)).
if g(n) = 1000n^3 + n
First, let's find O(g(n)), the upper bound.
It could be n^3, n^4, or n^5, but we choose the tightest one, which is O(n^3).
O(n^3) is valid because we can find a constant c such that, for all n beyond some n0,
1000n^3 + n < c.n^3
Second, let's look at Ω(g(n)), the lower bound.
Ω says f(n) > c.g(n)
We can find a constant c such that
1000n^3 + n > c.n^3
Now we have an upper bound, O(n^3), and a lower bound, Ω(n^3).
Therefore we have Θ, which bounds the function above and below using the same g(n).
By the rule: if f(n) = O(g(n)) and f(n) = Ω(g(n)), then f(n) = Θ(g(n))
1000n^3 + n = Θ(n^3)
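To pin the constants down (the specific c and n0 here are my own choice; the answer only argues that such constants exist): c = 1000 works for the lower bound and c = 1001 for the upper bound, both from n0 = 1 on, because n ≤ n^3 once n ≥ 1.
# Check the sandwich 1000*n^3 <= 1000*n^3 + n <= 1001*n^3 for n >= 1,
# which is exactly the Theta(n^3) requirement with explicit constants.
def h(n): return 1000 * n**3 + n

assert all(1000 * n**3 <= h(n) <= 1001 * n**3 for n in range(1, 10000))
print("both bounds hold for n = 1 .. 9999")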

Big Theta Questions

I don't understand this situation.
f(n) ∈ O(g(n)),
g(n) ∈ Θ(f(n))
For these conditions, why are the following the correct answers?
f(n) <= g(n) for all n>1: neither always true nor always false
g(n) ∈ Ω(f(n)): always true
f(n) <= Θ(g(n)): always true
My logic is since g(n) ∈ Θ(f(n)), g(n) and f(n) would have to have the same highest power (for example: n=n, n^2=n^2). In that case, wouldn't all 3 statements be true?
I don't understand why the first one is neither always true nor always false, and why the third one is always true.
Big-O, Big-Ω and Big-Θ notation, in mathematics, describes the asymptotic behaviour of a function, as an upper bound, lower bound and a tight bound (upper and lower bound), respectively. On SE, in the context of programming, we usually use these notations to describe the asymptotic behaviour of algorithms, with regard to the size of the problem that the algorithm is intended to solve (typically this size is denoted n).
For reference, see e.g.
https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/asymptotic-notation
To answer your question, we shall treat this subject in the context of the asymptotic (limiting) behaviour of functions. Let's cover your two properties and three statements about f(n) and g(n) one by one.
Property i) f(n) ∈ O(g(n))
Given is that
f(n) ∈ O(g(n)).
For some constant k>0, (the dominant term(s) of) g(n) provides an upper bound to the asymptotic behaviour of f(n), i.e.,
f(n) < k*g(n), n sufficiently large. (*)
As a quick example of what I mean by dominant terms: if g(n) is some polynomial, we have
O(g(n)) = O(a_0 + a_1*n + ... + a_j*n^j) = O(n^j),
i.e., asymptotically dominant term n^j.
Property ii) g(n) ∈ Θ(f(n))
Given is that
g(n) ∈ Θ(f(n)).
For some constants k_1>0 and k_2>0, k_1*f(n) and k_2*f(n) provide lower and upper bounds, respectively, on the asymptotic behaviour of g(n), i.e.,
k_1*f(n) < g(n) < k_2*f(n), n sufficiently large. (**)
Again, as we describe asymptotic behaviour, it's really the dominant terms of g(n) and f(n) that are of interest.
Assume from now on that i) and ii) both hold, for all sufficiently large n.
We now move on to your three statements.
Statement a) f(n) <= g(n) for all n>1; neither always true nor always false
First of all, given i) and ii), we cannot draw any conclusion about the behaviour of f(n) and g(n) for "n smaller than sufficiently large", i.e., we cannot make any statement regarding f(n) and g(n) for all n>1. Properties i) and ii) only describe the asymptotic behaviour of f(n) and g(n). If we adjust the statement to
f(n) <= g(n) for all sufficiently large n (is it always true or always false?),
we can analyse it. Assume the following holds true (for n sufficiently large):
f(n) <= g(n). (a1)
For n sufficiently large, we also know that (*) and (**) hold, that is
(*) f(n) < k*g(n), for some constant k>0, (a2)
(**) f(n) < (1/k_1)*g(n), for some constant k_1>0, (a3)
g(n) < k_2*f(n), for some constant k_2>0, (a4)
Since (a1) holds, by assumption, we can consider (a2) and (a3) as redundant by choosing some k=(1/k_1)>1. This leaves us with
f(n) <= g(n) < k_2*f(n), for some constant k_2>0. (a1, a4)
This is simply property ii) above, g(n) ∈ Θ(f(n)), where we've found that the constant k_1 = 1 (or, strictly, k_1 very near 1) satisfies the left hand side of (**).
On the other hand, if we assume that f(n) <= g(n) is always false (for sufficiently large n), we arrive at the result
g(n) < f(n), (a1.false)
g(n) < k_2*f(n), for some constant k_2>0. (a4)
Which, naturally, holds (k_2=1).
To round off, statement a) is kind of weird.
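A concrete pair may help here (these functions are my own example, not from the question): f(n) = n, g(n) = 2n satisfies both i) and ii) with f(n) <= g(n) everywhere, while f(n) = 2n, g(n) = n also satisfies i) and ii) yet has f(n) > g(n) everywhere; so statement a) is neither always true nor always false.
# Both pairs satisfy f in O(g) and g in Theta(f), but disagree on f(n) <= g(n).
pairs = [
    ("f = n,  g = 2n", lambda n: n,     lambda n: 2 * n),   # f <= g for all n
    ("f = 2n, g = n",  lambda n: 2 * n, lambda n: n),       # f >  g for all n >= 1
]
for name, f, g in pairs:
    print(name, all(f(n) <= g(n) for n in range(1, 1000)))  # True, then False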
Statement b) g(n) ∈ Ω(f(n)), always true
Much like in i) (for upper bound), given that
g(n) ∈ Ω(f(n)),
then, for sufficiently large n,
k*f(n) < g(n), for some constant k>0, (b1)
holds. We already know this to be true from ii), as this is given in the left hand side of (**)
(**) k_1*f(n) < g(n), for some constant k_1>0. (b2)
Hence, given ii), it is trivial that g(n) ∈ Ω(f(n)) holds.
Statement c) f(n)<= Θ(g(n)), always true
Recall Big-Θ from ii); f(n) ∈ Θ(g(n)) can be described as
k_1*g(n) < f(n) < k_2*g(n), n sufficiently large, (c1)
for some constants k_1>0, k_2>0.
In this context, f(n) <= Θ(g(n)) does not make much sense, as Θ(g(n)) either describes a property of f(n) or the set of functions that satisfy that property (in the context of asymptotic behaviour). From the Wikipedia article on Big O notation (https://en.wikipedia.org/wiki/Big_O_notation):
"the notation O(g(n)) is also used to denote the set of all functions
f(n) that satisfy the relation f(n)=O(g(n)); f(n) ∈ Θ(g(n))"
Perhaps the leq operator "<=" has some special meaning in the context of Big-...-notation, but it's not something I have ever encountered myself.

What is actually meant by the big-O graph

There is a lot of explanation about big-O, but I'm really confused on this part.
According to the definition of Big-O, in this function
f(n) ≤ c·g(n), for n ≥ n0
“f(n) is big-Oh of g(n).”
But a description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.
So, for example, here 34 is an upper bound for the set {5, 10, 34}.
So in this graph, how is f(n) O(g(n))? If I take the upper bound of the g(n) function, its value would be different from what is mentioned here for n ≥ n0.
Beyond n0, f(n) will not grow faster than g(n). f(n)'s rate of growth as a function of n is at most that of g(n).
g(n)'s rate of growth is said to be an upper bound on f(n)'s rate of growth if f(n) is Big-O of g(n).
The worst-case rate of growth of f(n) will be at most that of g(n), since f(n) is Big-O of g(n).
This is all about knowing just how big f(n) can grow relative to another known function.
For example, if f(n) = n^2, and g(n) is n^3, then trivially f(n) is Big-O of g(n) since n^2 will never grow faster than n^3.
"c" is used for mathematical proofs - it's just a linear scaling variable. We can't just go around and claim something is Big-O of something else. If we choose n0 and c for a given g(n), and this equation holds
f(n) ≤ c ·g(n), for n ≥ n0
then we can show that truly f(n) is Big-O of g(n).
Example:
f(n) = n^2;
g(n) = n^3;
We can choose n0 = 1, and c = 1 such that
f(n) ≤ 1 ·g(n), for n ≥ 1
which becomes
n^2 ≤ 1 ·n^3, for n ≥ 1
which always holds, thus f(n) is proven to be Big-O of g(n).
Proofs can get more complicated, for instance this, but this is the gist of it.
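As a quick spot check of those constants (just a numerical sanity check over a sample range, not a substitute for the proof):
# Verify n^2 <= 1 * n^3 for a sample of n >= n0 = 1.
assert all(n**2 <= 1 * n**3 for n in range(1, 100000))
print("n^2 <= n^3 holds for n = 1 .. 99999")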

What does the big-O notation mean? [duplicate]

Possible Duplicate:
Plain English explanation of Big O
I need to figure out O(n) of the following:
f(n) = 10n^2 + 10n + 20
All I can come up with is 50, and I am just too embarrassed to state how I came up with that.
Can someone explain what it means and how I should calculate it for f(n) above?
Big-O notation is to do with complexity analysis. A function is O(g(n)) if, for all sufficiently large n, it is upper-bounded by some constant multiple of g(n). More formally:
f(n) is in O(g(n)) iff there exist constants n0 and c such that for all n >= n0, f(n) <= c.g(n)
In this case, f(n) = 10n^2 + 10n + 20, so f(n) is in O(n^2), O(n^3), O(n^4), etc. The tightest upper bound is O(n^2).
In layman's terms, what this means is that f(n) grows no worse than quadratically as n tends to infinity.
There's a corresponding Big-Omega notation which can be used to lower-bound functions in a similar manner. In this case, f(n) is also Omega(n^2): that is, it grows no better than quadratically as n tends to infinity.
Finally, there's a Big-Theta notation which combines the two, i.e. iff f(n) is in O(g(n)) and f(n) is in Omega(g(n)) then f(n) is in Theta(g(n)). In this case, f(n) is in Theta(n^2): that is, it grows exactly quadratically as n tends to infinity.
The point of all this is that as n gets big, the linear (10n) and constant (20) terms become essentially irrelevant, as the value of the function is far more affected by the quadratic term.
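For completeness, one concrete choice of constants for this f(n) (my own pick; many other choices work): c = 40 and n0 = 1, since 10n ≤ 10n^2 and 20 ≤ 20n^2 once n ≥ 1.
# Check 10n^2 + 10n + 20 <= 40 * n^2 for n >= 1, witnessing f(n) = O(n^2).
def f(n): return 10 * n**2 + 10 * n + 20

assert all(f(n) <= 40 * n**2 for n in range(1, 10000))
print("f(n) <= 40*n^2 holds for n = 1 .. 9999")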

Resources