Confused on Little O meaning - algorithm

So what I took from the little o page is that when we apply small O notation we have to check whether one rate of growth is faster than the other (small o focuses on the upper bound)?
In this case, when we apply small o:
2^n = o(3^n) will be false, as the upper bounds of 2^n and 3^n are equal in speed but not less than;
2n = o(n^2) is true, as the upper bound of n^2 is 2 and 2n does not have an upper bound.
Am I on the right track?

2^n is in o(3^n) (little o), since:
lim_{n→∞} (2^n / 3^n) = 0
Similarly, for 2n it is easy to show that it is in o(n^2).
An intuitive reading of "little o" is: it's an upper bound, but not a tight one. That is, a function f(n) is in o(g(n)) if f(n) is in O(g(n)) but not in Omega(g(n)).
In your example, 2^n is in O(3^n), but it is not in Omega(3^n), so we can say it is in o(3^n)
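To make the limit argument concrete, here is a minimal, purely illustrative C++ sketch (not part of the original answers) that evaluates the ratios 2^n/3^n and 2n/n^2 for growing n; both shrink toward 0, which is exactly the little-o condition.
#include <cmath>
#include <cstdio>

// Illustrative check of the little-o limits: both ratios tend to 0 as n grows.
int main() {
    for (int n = 10; n <= 100; n += 30) {
        double exp_ratio  = std::pow(2.0, n) / std::pow(3.0, n); // 2^n / 3^n
        double poly_ratio = (2.0 * n) / (double(n) * n);         // 2n / n^2
        std::printf("n = %3d   2^n/3^n = %.3e   2n/n^2 = %.3e\n",
                    n, exp_ratio, poly_ratio);
    }
    return 0;
}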

The only difference between big O and small o is that big O allows the function to grow at an equal pace, whereas small o states that g(x) has a strictly higher rate of growth, so the two can never be of the same order beyond some point x' (considering f(x) = o(g(x))).
Your assessment of the first example is wrong: small o states that, for f(x) = o(g(x)),
|f(x)| < C|g(x)| for all x > x', for every constant C > 0.
In the above case, where f(x) = 2^x and g(x) = 3^x, this condition is satisfied (for each C there is such an x') precisely because g(x) has the higher rate of growth; so 2^x = o(3^x) is true, not false.
The best way to define small o, if you understand big O, is:
a function is small o of g(x) if it is big O of g(x) but not big Omega of g(x)
-- this is because big Omega and big O only intersect in the case where the rates of growth of both functions are equal, so if we remove that specific case we get small o.
However, please remember that if f(x) is small o of g(x) it is also big O of g(x); the reverse implication does not hold.

Related

O-notation of a linear function has an infinite number of solutions? [duplicate]


Big O, Theta, and big Omega notation

Based on my understanding, big O is essentially similar to theta notation but can include anything bigger than the given function (e.g. n^3 = O(n^4), n^3 = O(n^5), etc.), and big Omega includes anything smaller than the given function (n^3 = Ω(n^2), etc.).
However, my professor said the other day that n^0.79 = Ω(n^0.8), while he was doing an exercise that involved the master theorem.
Why/how is this true when n^0.8 is larger than n^0.79?
You have big O and big Omega backwards. Big O is everything the "same" or smaller than the function.

Big O, Big Omega, Big Theta [duplicate]

Sometimes I see Θ(n) with the strange Θ symbol with something in the middle of it, and sometimes just O(n). Is it just laziness of typing because nobody knows how to type this symbol, or does it mean something different?
Short explanation:
If an algorithm is of Θ(g(n)), it means that the running time of the algorithm as n (input size) gets larger is proportional to g(n).
If an algorithm is of O(g(n)), it means that the running time of the algorithm as n gets larger is at most proportional to g(n).
Normally, even when people talk about O(g(n)) they actually mean Θ(g(n)) but technically, there is a difference.
More technically:
O(n) represents upper bound. Θ(n) means tight bound.
Ω(n) represents lower bound.
f(x) = Θ(g(x)) iff f(x) = O(g(x)) and f(x) = Ω(g(x))
Basically when we say an algorithm is of O(n), it's also O(n^2), O(n^1000000), O(2^n), ... but a Θ(n) algorithm is not Θ(n^2).
In fact, since f(n) = Θ(g(n)) means for sufficiently large values of n, f(n) can be bound within c1*g(n) and c2*g(n) for some values of c1 and c2, i.e. the growth rate of f is asymptotically equal to g: g can be a lower bound and an upper bound of f. This directly implies f can be a lower bound and an upper bound of g as well. Consequently,
f(x) = Θ(g(x)) iff g(x) = Θ(f(x))
Similarly, to show f(n) = Θ(g(n)), it's enough to show g is an upper bound of f (i.e. f(n) = O(g(n))) and f is a lower bound of g (i.e. f(n) = Ω(g(n)) which is the exact same thing as g(n) = O(f(n))). Concisely,
f(x) = Θ(g(x)) iff f(x) = O(g(x)) and g(x) = O(f(x))
There are also little-oh and little-omega (ω) notations representing loose upper and loose lower bounds of a function.
To summarize:
f(x) = O(g(x)) (big-oh) means that the growth rate of f(x) is asymptotically less than or equal to the growth rate of g(x).
f(x) = Ω(g(x)) (big-omega) means that the growth rate of f(x) is asymptotically greater than or equal to the growth rate of g(x).
f(x) = o(g(x)) (little-oh) means that the growth rate of f(x) is asymptotically less than the growth rate of g(x).
f(x) = ω(g(x)) (little-omega) means that the growth rate of f(x) is asymptotically greater than the growth rate of g(x).
f(x) = Θ(g(x)) (theta) means that the growth rate of f(x) is asymptotically equal to the growth rate of g(x).
For a more detailed discussion, you can read the definition on Wikipedia or consult a classic textbook like Introduction to Algorithms by Cormen et al.
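As a rough numerical companion to the summary above, the following small C++ sketch (an illustrative heuristic only, with an arbitrarily chosen pair f(n) = n*log(n) and g(n) = n^2) samples the ratio f(n)/g(n) at increasingly large n: a ratio heading to 0 suggests f = o(g), a ratio blowing up suggests f = ω(g), and a ratio settling near a positive constant suggests f = Θ(g).
#include <cmath>
#include <cstdio>

// Heuristic comparison of growth rates by sampling the ratio f(n)/g(n).
// Example pair: f(n) = n*log(n), g(n) = n^2, so the ratio should tend to 0.
double f(double n) { return n * std::log(n); }
double g(double n) { return n * n; }

int main() {
    for (double n = 1e2; n <= 1e8; n *= 100) {
        std::printf("n = %.0e   f(n)/g(n) = %.3e\n", n, f(n) / g(n));
    }
    return 0;
}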
There's a simple way (a trick, I guess) to remember which notation means what.
All of the Big-O notations can be considered to have a bar.
When looking at a Ω, the bar is at the bottom, so it is an (asymptotic) lower bound.
When looking at a Θ, the bar is obviously in the middle. So it is an (asymptotic) tight bound.
When handwriting O, you usually finish at the top, and draw a squiggle. Therefore O(n) is the upper bound of the function. To be fair, this one doesn't work with most fonts, but it is the original justification of the names.
one is Big "O"
one is Big Theta
http://en.wikipedia.org/wiki/Big_O_notation
Big O means your algorithm will execute in no more steps than the given expression (e.g. n^2).
Big Omega means your algorithm will execute in no fewer steps than the given expression (e.g. n^2).
When both conditions are true for the same expression, you can use the big theta notation.
Rather than provide a theoretical definition, which is beautifully summarized here already, I'll give a simple example:
Assume the run time of f(i) is O(1). Below is a code fragment whose asymptotic runtime is Θ(n). It always calls the function f(...) n times. Both the lower and the upper bound are n.
for (int i = 0; i < n; i++) {
    f(i);
}
The second code fragment below has the asymptotic runtime of O(n). It calls the function f(...) at most n times. The upper bound is n, but the lower bound could be Ω(1) or Ω(log(n)), depending on what happens inside f2(i).
for (int i = 0; i < n; i++) {
    if (f2(i)) break;
    f(i);
}
Theta is a shorthand way of referring to the special situation where the big O and the Omega are the same.
Thus, if one claims that the Theta is expression q, then one is also necessarily claiming that the big O is expression q and the Omega is expression q.
Rough analogy:
If:
Theta claims, "That animal has 5 legs."
then it follows that:
Big O is true ("That animal has less than or equal to 5 legs.")
and
Omega is true("That animal has more than or equal to 5 legs.")
It's only a rough analogy because the expressions aren't necessarily specific numbers, but instead functions of varying orders of magnitude such as log(n), n, n^2, (etc.).
A chart could make the previous answers easier to understand:
[Chart omitted: Θ-notation - same order | O-notation - upper bound]
In English,
On the left, note that there is an upper bound and a lower bound that are both of the same order of magnitude (i.e. g(n) ). Ignore the constants, and if the upper bound and lower bound have the same order of magnitude, one can validly say f(n) = Θ(g(n)) or f(n) is in big theta of g(n).
Starting with the right, the simpler example, it is saying the upper bound g(n) is simply the order of magnitude and ignores the constant c (just as all big O notation does).
f(n) belongs to O(n) if there exists a positive k such that f(n) <= k*n.
f(n) belongs to Θ(n) if there exist positive k1, k2 such that k1*n <= f(n) <= k2*n.
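For a concrete instance of that definition, take the (hypothetical) function f(n) = 3n + 5: the small C++ check below confirms that k1 = 3 and k2 = 4 sandwich f(n) between k1*n and k2*n once n >= 5, which is all that Θ(n) asks for.
#include <cstdio>

// Check k1*n <= f(n) <= k2*n for f(n) = 3n + 5 with k1 = 3, k2 = 4, n >= 5.
int main() {
    const long long k1 = 3, k2 = 4;
    bool ok = true;
    for (long long n = 5; n <= 1000000; n *= 10) {
        long long fn = 3 * n + 5;
        if (!(k1 * n <= fn && fn <= k2 * n)) ok = false;
    }
    std::printf("sandwich holds for sampled n >= 5: %s\n", ok ? "yes" : "no");
    return 0;
}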
Wikipedia article on Big O Notation
Using limits
Let's consider f(n) > 0 and g(n) > 0 for all n. It's fine to assume this, because the fastest real algorithm has at least one operation and completes its execution after it starts. This simplifies the calculations, because we can use the value f(n) instead of the absolute value |f(n)|.
f(n) = O(g(n))
General:
0 ≤ lim_{n➜∞} f(n)/g(n) < ∞
For g(n) = n:
0 ≤ lim_{n➜∞} f(n)/n < ∞
Examples:
Expression Value of the limit
------------------------------------------------
n = O(n) 1
1/2*n = O(n) 1/2
2*n = O(n) 2
n+log(n) = O(n) 1
n = O(n*log(n)) 0
n = O(n²) 0
n = O(nⁿ) 0
Counterexamples:
Expression Value of the limit
-------------------------------------------------
n ≠ O(log(n)) ∞
1/2*n ≠ O(sqrt(n)) ∞
2*n ≠ O(1) ∞
n+log(n) ≠ O(log(n)) ∞
f(n) = Θ(g(n))
General:
0 < lim_{n➜∞} f(n)/g(n) < ∞
For g(n) = n:
0 < lim_{n➜∞} f(n)/n < ∞
Examples:
Expression Value of the limit
------------------------------------------------
n = Θ(n) 1
1/2*n = Θ(n) 1/2
2*n = Θ(n) 2
n+log(n) = Θ(n) 1
Counterexamples:
Expression Value of the limit
-------------------------------------------------
n ≠ Θ(log(n)) ∞
1/2*n ≠ Θ(sqrt(n)) ∞
2*n ≠ Θ(1) ∞
n+log(n) ≠ Θ(log(n)) ∞
n ≠ Θ(n*log(n)) 0
n ≠ Θ(n²) 0
n ≠ Θ(nⁿ) 0
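Assuming a numerical illustration is helpful, this small C++ sketch samples a few of the ratios from the tables above at a large n; the printed values sit near the listed limits (1, 1/2, 0, or growing without bound), though sampling is of course only a heuristic, not a proof.
#include <cmath>
#include <cstdio>

// Sample ratios from the tables above at a large n; values near the listed
// limits (1, 1/2, 0, or growing without bound) illustrate each row.
int main() {
    double n = 1e6;
    std::printf("n/n         = %.4f\n", n / n);                        // 1
    std::printf("(n/2)/n     = %.4f\n", (0.5 * n) / n);                 // 1/2
    std::printf("(n+log n)/n = %.4f\n", (n + std::log(n)) / n);         // -> 1
    std::printf("n/(n*log n) = %.6f\n", n / (n * std::log(n)));         // -> 0
    std::printf("n/n^2       = %.6e\n", n / (n * n));                   // -> 0
    std::printf("n/log n     = %.2f (grows without bound)\n", n / std::log(n));
    return 0;
}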
Conclusion: in practice we treat big O, big Θ and big Ω as the same thing.
Why? I will explain below.
First, let me clear up one wrong statement: some people think that because we only care about the worst-case time complexity, we should always use big O instead of big Θ. That reasoning is nonsense. Upper and lower bounds describe a single function, not the case being analyzed: the worst-case time function has its own upper and lower bounds, and so does the best-case time function.
To explain the relation between big O and big Θ clearly, I will first explain the relation between big O and small o. From the definitions, small o is a subset of big O. For example:
For T(n) = n^2 + n, we can say T(n) = O(n^2), T(n) = O(n^3), T(n) = O(n^4). But T(n) = o(n^2) does not meet the definition of small o; only T(n) = o(n^3) and T(n) = o(n^4) are correct. So what is the leftover case, T(n) = O(n^2) but not o(n^2)? It's big Θ!
In everyday usage, we say the big O is O(n^2) and hardly ever say T(n) = O(n^3) or T(n) = O(n^4). Why? Because we subconsciously regard big O as big Θ.
Similarly, we also subconsciously regard big Ω as big Θ.
In short, big O, big Θ and big Ω are not the same thing by definition, but they are the same thing in everyday speech and thought.

Difference between Big-Theta and Big O notation in simple language

While trying to understand the difference between Theta and O notation I came across the following statement :
The Theta-notation asymptotically bounds a function from above and below. When
we have only an asymptotic upper bound, we use O-notation.
But I do not understand this. The book explains it mathematically, but it's too complex and gets really boring to read when I am really not understanding.
Can anyone explain the difference between the two using simple, yet powerful examples.
Big O is giving only upper asymptotic bound, while big Theta is also giving a lower bound.
Everything that is Theta(f(n)) is also O(f(n)), but not the other way around.
T(n) is said to be Theta(f(n)), if it is both O(f(n)) and Omega(f(n))
For this reason big-Theta is more informative than big-O notation, so if we can say something is big-Theta, it's usually preferred. However, it is harder to prove something is big Theta, than to prove it is big-O.
For example, merge sort is both O(n*log(n)) and Theta(n*log(n)), but it is also O(n^2), since n^2 is asymptotically "bigger" than it. However, it is NOT Theta(n^2), since the algorithm is NOT Omega(n^2).
Omega(n) is asymptotic lower bound. If T(n) is Omega(f(n)), it means that from a certain n0 on, there is a constant C1 such that T(n) >= C1 * f(n), whereas big-O says there is a constant C2 such that T(n) <= C2 * f(n).
All three (Omega, O, Theta) give only asymptotic information ("for large input"):
Big O gives upper bound
Big Omega gives lower bound and
Big Theta gives both lower and upper bounds
Note that this notation is not related to the best, worst and average cases analysis of algorithms. Each one of these can be applied to each analysis.
I will just quote from Knuth's TAOCP Volume 1 - page 110 (I have the Indian edition). I recommend reading pages 107-110 (section 1.2.11 Asymptotic representations)
People often confuse O-notation by assuming that it gives an exact order of growth; they use it as if it specifies a lower bound as well as an upper bound. For example, an algorithm might be called inefficient because its running time is O(n^2). But a running time of O(n^2) does not necessarily mean that the running time is not also O(n).
On page 107,
1^2 + 2^2 + 3^2 + ... + n^2 = O(n^4) and
1^2 + 2^2 + 3^2 + ... + n^2 = O(n^3) and
1^2 + 2^2 + 3^2 + ... + n^2 = (1/3) n^3 + O(n^2)
Big-Oh is for approximations. It allows you to replace ~ with an = sign. In the example above, for very large n, we can be sure that the quantity will stay below n^4, below n^3, and below (1/3)n^3 + n^2 [and not simply below n^2].
Big Omega is for lower bounds: an algorithm with a running time of Omega(n^2) will not be as efficient as one that is O(n log n) for large n. However, we do not know at what value of n this kicks in (in that sense we only know it approximately).
Big Theta is for the exact order of growth, both the lower and the upper bound.
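As a quick illustration of Knuth's point (a C++ sketch added here for illustration, not taken from TAOCP), the exact sum S(n) = 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6 can be compared against n^4, n^3 and (1/3)n^3 + n^2: S/n^4 tends to 0, S/n^3 tends to 1/3, and S - (1/3)n^3 stays within a constant times n^2.
#include <cstdio>

// Compare S(n) = 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6 against the bounds
// discussed above: n^4, n^3 and (1/3)n^3 + n^2.
int main() {
    for (long long n = 10; n <= 100000; n *= 10) {
        long double s  = (long double)n * (n + 1) * (2 * n + 1) / 6.0L;
        long double n3 = (long double)n * n * n;
        std::printf("n=%6lld  S/n^4=%.6Lf  S/n^3=%.6Lf  (S - n^3/3)/n^2=%.6Lf\n",
                    n, s / (n3 * n), s / n3,
                    (s - n3 / 3.0L) / ((long double)n * n));
    }
    return 0;
}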
I am going to use an example to illustrate the difference.
Let the function f(n) be defined as
if n is odd f(n) = n^3
if n is even f(n) = n^2
From CLRS
A function f(n) belongs to the set Θ(g(n)) if there exist positive
constants c1 and c2 such that it can be "sandwiched" between c1g(n)
and c2g(n), for sufficiently large n.
AND
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤
f(n) ≤ cg(n) for all n ≥ n0}.
AND
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤
cg(n) ≤ f(n) for all n ≥ n0}.
The upper bound on f(n) is n^3. So our function f(n) is clearly O(n^3).
But is it Θ(n^3)?
For f(n) to be in Θ(n^3) it has to be sandwiched between two functions, one forming the lower bound and the other the upper bound, both of which grow as n^3. While the upper bound is obvious, the lower bound cannot be n^3. The lower bound is in fact n^2; f(n) is Ω(n^2).
From CLRS
For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and
only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
Hence f(n) is not in Θ(n^3) while it is in O(n^3) and Ω(n^2)
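To see numerically why no single Θ bound works here, the following illustrative C++ sketch samples f(n)/n^3 and f(n)/n^2 on consecutive n: each ratio keeps swinging between 1 and a value that vanishes or grows, so f is neither Θ(n^3) nor Θ(n^2).
#include <cstdio>

// f(n) = n^3 for odd n, n^2 for even n: the ratios against n^3 and n^2
// oscillate instead of settling between two positive constants.
long double f(long long n) {
    return (n % 2 == 1) ? (long double)n * n * n : (long double)n * n;
}

int main() {
    for (long long n = 1001; n <= 1006; ++n) {
        long double n2 = (long double)n * n, n3 = n2 * n;
        std::printf("n=%lld  f/n^3=%.6Lf  f/n^2=%.1Lf\n", n, f(n) / n3, f(n) / n2);
    }
    return 0;
}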
If the running time is expressed in big-O notation, you know that the running time will not be slower than the given expression. It expresses the worst-case scenario.
But with Theta notation you also know that it will not be faster. That is, there is no best-case scenario where the algorithm returns faster.
This gives a more exact bound on the expected running time. However, for most purposes it is simpler to ignore the lower bound (the possibility of faster execution), since you are generally only concerned about the worst-case scenario.
Here's my attempt:
A function f(n) is O(g(n)) if and only if there exists a constant c such that f(n) <= c*g(n) for all sufficiently large n.
Using this definition, could we say that the function f(n) = 2^(n+1) is O(2^n)?
In other words, does a constant 'c' exist such that 2^(n+1) <= c*(2^n)? Note the second function (2^n) is the function after the Big O in the above problem. This confused me at first.
So, then use your basic algebra skills to simplify that inequality. 2^(n+1) breaks down to 2 * 2^n. Doing so, we're left with:
2 * 2^n <= c(2^n)
Now it's easy: the inequality holds for any value of c with c >= 2. So, yes, we can say that f(n) = 2^(n+1) is O(2^n).
Big Omega works the same way, except it evaluates f(n) >= c*g(n) for some constant 'c'.
So, simplifying the above functions the same way, we're left with (note the >= now):
2 * 2^n >= c(2^n)
So, the inequality holds for any c in the range 0 < c <= 2. So, we can say that f(n) = 2^(n+1) is Big Omega of (2^n).
Now, since BOTH of those hold, we can say the function is Big Theta of (2^n). If one of them did not hold for any constant 'c', then it would not be Big Theta.
The above example was taken from the Algorithm Design Manual by Skiena, which is a fantastic book.
Hope that helps. This really is a hard concept to simplify. Don't get hung up so much on what 'c' is, just break it down into simpler terms and use your basic algebra skills.
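For completeness, here is a tiny illustrative C++ check (a sketch added here, not from Skiena's book) that the ratio 2^(n+1) / 2^n is the constant 2 for every n, so c = 2 witnesses the big-O direction and any c <= 2 witnesses the big-Omega direction.
#include <cmath>
#include <cstdio>

// The ratio 2^(n+1) / 2^n is exactly 2 for every n, so the same constant
// witnesses both the upper (c >= 2) and lower (c <= 2) bounds.
int main() {
    for (int n = 1; n <= 50; n += 10) {
        double ratio = std::pow(2.0, n + 1) / std::pow(2.0, n);
        std::printf("n = %2d   2^(n+1)/2^n = %.1f\n", n, ratio);
    }
    return 0;
}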

What exactly does big Ө notation represent?

I'm really confused about the differences between big O, big Omega, and big Theta notation.
I understand that big O is the upper bound and big Omega is the lower bound, but what exactly does big Ө (theta) represent?
I have read that it means tight bound, but what does that mean?
First let's understand what big O, big Theta and big Omega are. They are all sets of functions.
Big O is giving upper asymptotic bound, while big Omega is giving a lower bound. Big Theta gives both.
Everything that is Ө(f(n)) is also O(f(n)), but not the other way around.
T(n) is said to be in Ө(f(n)) if it is both in O(f(n)) and in Omega(f(n)). In sets terminology, Ө(f(n)) is the intersection of O(f(n)) and Omega(f(n))
For example, merge sort's worst case is both O(n*log(n)) and Omega(n*log(n)) - and thus is also Ө(n*log(n)) - but it is also O(n^2), since n^2 is asymptotically "bigger" than it. However, it is not Ө(n^2), since the algorithm is not Omega(n^2).
A bit deeper mathematic explanation
O(n) is asymptotic upper bound. If T(n) is O(f(n)), it means that from a certain n0, there is a constant C such that T(n) <= C * f(n). On the other hand, big-Omega says there is a constant C2 such that T(n) >= C2 * f(n).
Do not confuse!
Not to be confused with worst, best and average case analysis: all three notations (Omega, O, Theta) are not tied to the best, worst and average case analysis of algorithms. Each one can be applied to each analysis.
We usually use it to analyze the complexity of algorithms (like the merge sort example above). When we say "Algorithm A is O(f(n))", what we really mean is "The algorithm's complexity under the worst-case¹ analysis is O(f(n))" - meaning it scales "similarly to" (or, formally, not worse than) the function f(n).
Why we care for the asymptotic bound of an algorithm?
Well, there are many reasons for it, but I believe the most important of them are:
It is much harder to determine the exact complexity function, thus we "compromise" on the big-O/big-Theta notations, which are informative enough theoretically.
The exact number of ops is also platform dependent. For example, if we have a vector (list) of 16 numbers, how many ops will it take? The answer is: it depends. Some CPUs allow vector additions, while others don't, so the answer varies between different implementations and different machines, which is an undesired property. The big-O notation, however, is much more constant between machines and implementations.
To demonstrate this issue, have a look at the following graphs:
It is clear that f(n) = 2*n is "worse" than f(n) = n. But the difference is not nearly as drastic as it is for the other functions. We can see that f(n) = log(n) quickly gets much lower than the other functions, and f(n) = n^2 quickly gets much higher than the others.
So - because of the reasons above, we "ignore" the constant factors (2* in the graphs example), and take only the big-O notation.
In the above example, f(n)=n, f(n)=2*n will both be in O(n) and in Omega(n) - and thus will also be in Theta(n).
On the other hand - f(n)=logn will be in O(n) (it is "better" than f(n)=n), but will NOT be in Omega(n) - and thus will also NOT be in Theta(n).
Symmetrically, f(n)=n^2 will be in Omega(n), but NOT in O(n), and thus - is also NOT Theta(n).
¹ Usually, though not always; when the analysis class (worst, average or best) is missing, we really mean the worst case.
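Since the graphs mentioned in that answer are not reproduced here, this small illustrative C++ sketch prints log n, n, 2n and n^2 at a few sizes; it shows the constant-factor gap between n and 2n staying modest while log n and n^2 pull away in opposite directions.
#include <cmath>
#include <cstdio>

// Print the four functions discussed above to show how their growth compares.
int main() {
    std::printf("%10s %12s %12s %12s %14s\n", "n", "log n", "n", "2n", "n^2");
    for (double n = 10; n <= 1e6; n *= 100) {
        std::printf("%10.0f %12.2f %12.0f %12.0f %14.0f\n",
                    n, std::log(n), n, 2 * n, n * n);
    }
    return 0;
}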
It means that the algorithm is both big-O and big-Omega in the given function.
For example, if it is Ө(n), then there is some constant k, such that your function (run-time, whatever), is larger than n*k for sufficiently large n, and some other constant K such that your function is smaller than n*K for sufficiently large n.
In other words, for sufficiently large n, it is sandwiched between two linear functions :
For k < K and n sufficiently large, n*k < f(n) < n*K
Theta(n): A function f(n) belongs to Theta(g(n)) if there exist positive constants c1 and c2 such that f(n) can be sandwiched between c1*g(n) and c2*g(n) for sufficiently large n, i.e. it gives both an upper and a lower bound.
Theta(g(n)) = { f(n) : there exist positive constants c1, c2 and n1 such that
0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n1 }
When f(n) can be squeezed like this between c1*g(n) and c2*g(n), we say the bound is asymptotically tight.
O(n): It gives only upper bound (may or may not be tight)
O(g(n)) = { f(n) : there exists positive constants c and n1 such that 0<=f(n)<=cg(n) for all n>=n1}
ex: The bound 2*(n^2) = O(n^2) is asymptotically tight, whereas the bound 2*n = O(n^2) is not asymptotically tight.
o(n): It gives only an upper bound (never a tight bound).
The notable difference between O and o is that in o(g(n)) the bound f(n) < c*g(n) must hold for every positive constant c (for all n >= some n1 depending on c), not just for some constant c as in O(g(n)).
ex: 2*n = o(n^2), but 2*(n^2) != o(n^2)
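A small illustrative C++ check of that distinction: for 2n versus n^2, even a deliberately tiny constant such as c = 0.001 eventually satisfies 2n <= c*n^2 (so 2n = o(n^2)), while for 2n^2 versus n^2 the same tiny c never works (2n^2 is O(n^2) but not o(n^2)).
#include <cstdio>

// For o(g) the bound must hold for EVERY positive c from some point on.
// Here we test a deliberately tiny c against 2n (works eventually) and
// against 2n^2 (never works), matching the examples above.
int main() {
    const double c = 0.001;
    for (double n = 1000; n <= 1e6; n *= 10) {
        bool lin  = (2 * n     <= c * n * n);   // 2n   <= c*n^2 ?
        bool quad = (2 * n * n <= c * n * n);   // 2n^2 <= c*n^2 ?
        std::printf("n=%8.0f   2n <= c*n^2: %s   2n^2 <= c*n^2: %s\n",
                    n, lin ? "yes" : "no", quad ? "yes" : "no");
    }
    return 0;
}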
I hope this is what you were looking for in the classic CLRS (page 66):
Big Theta notation:
Nothing to mess up buddy!!
If we have positive-valued functions f(n) and g(n) of a positive-valued argument n, then ϴ(g(n)) is defined as {f(n): there exist constants c1, c2 and n1 such that
c1*g(n) <= f(n) <= c2*g(n) for all n >= n1}.
Let's take an example:
let f(n)=5n^2+2n+1
g(n)=n^2
c1=5 and c2=8 and n1=1
Among all the notations, ϴ notation gives the best intuition about the rate of growth of a function, because it gives us a tight bound, unlike big-oh and big-omega, which give only the upper and lower bounds respectively.
ϴ tells us that g(n) is as close to f(n) as possible: the rate of growth of g(n) is as close to the rate of growth of f(n) as it can be.
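As a sanity check of those particular constants (an illustrative C++ sketch, not from any textbook), we can confirm that 5n^2 <= 5n^2 + 2n + 1 <= 8n^2 holds over a range of n >= 1.
#include <cstdio>

// Verify c1*g(n) <= f(n) <= c2*g(n) for f(n) = 5n^2 + 2n + 1, g(n) = n^2,
// with c1 = 5, c2 = 8 and n1 = 1, over a sample of n values.
int main() {
    bool ok = true;
    for (long long n = 1; n <= 100000; n = n * 3 + 1) {
        long long fn = 5 * n * n + 2 * n + 1;
        if (!(5 * n * n <= fn && fn <= 8 * n * n)) ok = false;
    }
    std::printf("5n^2 <= f(n) <= 8n^2 for sampled n >= 1: %s\n", ok ? "yes" : "no");
    return 0;
}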
First of all, theory
Big O = upper limit, O(n)
Theta = order function, Θ(n)
Omega = lower limit, Ω(n)
Why are people so confused?
In many blogs and books statements are emphasised like "This is Big O(n^3)" etc., and people often wonder whether
O(n) == Θ(n) == Ω(n)
What is worth keeping in mind is that these are just mathematical functions with the names O, Theta and Omega.
Let
f(n) = 2n^4 + 100n^2 + 10n + 50, and
g(n) = n^4, i.e. g(n) keeps only the term of f with the biggest power.
The same f(n) and g(n) are used in all the explanations below.
Big O - provides an upper bound
Take c*g(n) = 3n^4, because 3n^4 > 2n^4. The general condition is
0 ≤ f(n) ≤ c*g(n)
and putting in the values (for sufficiently large n),
0 ≤ 2n^4 + 100n^2 + 10n + 50 ≤ 3n^4
so 3n^4 is our upper bound.
Big Omega - provides a lower bound
Take c*g(n) = 2n^4, because 2n^4 ≤ our example f(n). The general condition is
0 ≤ c*g(n) ≤ f(n)
and putting in the values,
0 ≤ 2n^4 ≤ 2n^4 + 100n^2 + 10n + 50
so 2n^4 is our lower bound.
Theta - provides a tight bound
This is calculated to find out whether the lower bound has the same order of growth as the upper bound.
Case 1) The upper bound and the lower bound use the same constant.
Example: 2n^4 ≤ f(n) ≤ 2n^4; then Theta is exactly 2n^4.
Case 2) The upper bound and the lower bound use different constants (our case here).
In this case the constant is not fixed; Theta is the set of functions with the same order of growth as g(n).
Example: 2n^4 ≤ f(n) ≤ 3n^4; then Theta is c'*n^4 for some constant 2 ≤ c' ≤ 3.
Hope this explained it!
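A brief illustrative C++ check (added here, not part of the original answer) that both constants from this answer eventually work for f(n) = 2n^4 + 100n^2 + 10n + 50: the lower bound 2n^4 holds for all n, and the upper bound 3n^4 starts holding around n = 11.
#include <cstdio>

// Check 2n^4 <= f(n) <= 3n^4 for f(n) = 2n^4 + 100n^2 + 10n + 50.
// The lower bound always holds; the upper bound starts holding around n = 11.
int main() {
    for (long long n = 9; n <= 13; ++n) {
        long long n4 = n * n * n * n;
        long long fn = 2 * n4 + 100 * n * n + 10 * n + 50;
        std::printf("n=%2lld   2n^4 <= f(n): %s   f(n) <= 3n^4: %s\n",
                    n, (2 * n4 <= fn) ? "yes" : "no",
                    (fn <= 3 * n4) ? "yes" : "no");
    }
    return 0;
}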
I am not sure why there is no short, simple answer explaining big theta in plain English (which seems to be what was asked), so here it is:
Big Theta is the range of values, or the exact value (if big O and big Omega are equal), within which the number of operations needed by a function will grow.

Resources