Difference between Big-Theta and Big O notation in simple language - algorithm

While trying to understand the difference between Theta and O notation I came across the following statement :
The Theta-notation asymptotically bounds a function from above and below. When
we have only an asymptotic upper bound, we use O-notation.
But I do not understand this. The book explains it mathematically, but it's too complex and gets really boring to read when I am really not understanding.
Can anyone explain the difference between the two using simple, yet powerful examples.

Big O is giving only upper asymptotic bound, while big Theta is also giving a lower bound.
Everything that is Theta(f(n)) is also O(f(n)), but not the other way around.
T(n) is said to be Theta(f(n)), if it is both O(f(n)) and Omega(f(n))
For this reason big-Theta is more informative than big-O notation, so if we can say something is big-Theta, it's usually preferred. However, it is harder to prove something is big-Theta than to prove it is big-O.
For example, merge sort is both O(n*log(n)) and Theta(n*log(n)), but it is also O(n^2), since n^2 is asymptotically "bigger" than it. However, it is NOT Theta(n^2), since the algorithm is NOT Omega(n^2).
Omega(n) is an asymptotic lower bound. If T(n) is Omega(f(n)), it means that from a certain n0, there is a constant C1 such that T(n) >= C1 * f(n). Whereas big-O says there is a constant C2 such that T(n) <= C2 * f(n).
All three (Omega, O, Theta) give only asymptotic information ("for large input"):
Big O gives upper bound
Big Omega gives lower bound and
Big Theta gives both lower and upper bounds
Note that this notation is not related to the best, worst and average cases analysis of algorithms. Each one of these can be applied to each analysis.
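To make this concrete, here is a quick Python check of a merge-sort-shaped running time; the function T(n) and all the constants below are made up purely for illustration:

import math

# A made-up, merge-sort-shaped running time: T(n) = 3*n*log2(n) + 5*n
def T(n):
    return 3 * n * math.log2(n) + 5 * n

def f(n):
    return n * math.log2(n)

# Theta(n*log n) asserts a sandwich c1*f(n) <= T(n) <= c2*f(n) for all n >= n0.
c1, c2, n0 = 3, 4, 32
print(all(c1 * f(n) <= T(n) <= c2 * f(n) for n in range(n0, 100000, 1000)))  # True

# There is no such sandwich with n^2: T(n)/n^2 keeps shrinking,
# so T(n) is O(n^2) but not Omega(n^2), hence not Theta(n^2).
print(T(100) / 100**2, T(100000) / 100000**2)  # the ratio tends toward 0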

I will just quote from Knuth's TAOCP Volume 1 - page 110 (I have the Indian edition). I recommend reading pages 107-110 (section 1.2.11 Asymptotic representations)
People often confuse O-notation by assuming that it gives an exact order of growth; they use it as if it specifies a lower bound as well as an upper bound. For example, an algorithm might be called inefficient because its running time is O(n^2). But a running time of O(n^2) does not necessarily mean that the running time is not also O(n).
On page 107,
1^2 + 2^2 + 3^2 + ... + n^2 = O(n^4) and
1^2 + 2^2 + 3^2 + ... + n^2 = O(n^3) and
1^2 + 2^2 + 3^2 + ... + n^2 = (1/3) n^3 + O(n^2)
Big-Oh is for approximations. It allows you to replace ~ with an equals sign =. In the example above, for very large n, we can be sure that the quantity stays below a constant multiple of n^4, of n^3, and of (1/3)n^3 + n^2 [but not of simply n^2].
Big Omega is for lower bounds - an algorithm that is Omega(n^2) will not be as efficient as one that is O(N log N) for large N. However, we do not know at what value of N that happens (in that sense we only know it approximately).
Big Theta is for the exact order of growth, both lower and upper bound.
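A quick numeric check of the three statements above (the test values are my own; the closed form sum = n(n+1)(2n+1)/6 = (1/3)n^3 + (1/2)n^2 + (1/6)n is standard):

for n in (10, 100, 1000):
    s = sum(k * k for k in range(1, n + 1))
    assert s <= n ** 4                 # the sum is O(n^4) ...
    assert s <= n ** 3                 # ... and also O(n^3)
    assert s - n ** 3 / 3 <= n ** 2    # what is left after (1/3)n^3 is O(n^2)
print("all three bounds hold")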

I am going to use an example to illustrate the difference.
Let the function f(n) be defined as
if n is odd f(n) = n^3
if n is even f(n) = n^2
From CLRS
A function f(n) belongs to the set Θ(g(n)) if there exist positive
constants c1 and c2 such that it can be "sandwiched" between c1g(n)
and c2g(n), for sufficiently large n.
AND
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤
f(n) ≤ cg(n) for all n ≥ n0}.
AND
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤
cg(n) ≤ f(n) for all n ≥ n0}.
The upper bound on f(n) is n^3. So our function f(n) is clearly O(n^3).
But is it Θ(n^3)?
For f(n) to be in Θ(n^3) it has to be sandwiched between two functions, one forming the lower bound and the other the upper bound, both of which grow as n^3. While the upper bound is obvious, the lower bound cannot be n^3. The lower bound is in fact n^2; f(n) is Ω(n^2).
From CLRS
For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and
only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
Hence f(n) is not in Θ(n^3) while it is in O(n^3) and Ω(n^2)
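A small Python check of this example (the test values and the constant 0.5 are my own picks):

def f(n):
    return n ** 3 if n % 2 else n ** 2

for n in (10, 11, 1000, 1001):
    print(n, f(n) <= n ** 3, f(n) >= n ** 2, f(n) >= 0.5 * n ** 3)
# The first two columns are always True (f is O(n^3) and Omega(n^2)), but the last
# column is False for every even n, so no constant c makes f(n) >= c*n^3 hold for
# all large n - f is not Omega(n^3) and therefore not Theta(n^3).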

If the running time is expressed in big-O notation, you know that the running time will not be worse than the given expression. It expresses the worst-case scenario.
But with Theta notation you also know that it will not be faster. That is, there is no best-case scenario where the algorithm returns faster.
This gives a more exact bound on the expected running time. However, for most purposes it is simpler to ignore the lower bound (the possibility of faster execution), since you are generally only concerned about the worst-case scenario.

Here's my attempt:
A function f(n) is O(g(n)) if and only if there exists a constant c such that f(n) <= c*g(n) for all sufficiently large n.
Using this definition, could we say that the function f(n) = 2^(n+1) is O(2^n)?
In other words, does a constant 'c' exist such that 2^(n+1) <= c*(2^n)? Note that the second function (2^n) is the function after the Big O in the problem above. This confused me at first.
So, then use your basic algebra skills to simplify that inequality. 2^(n+1) breaks down to 2 * 2^n. Doing so, we're left with:
2 * 2^n <= c * 2^n
Now it's easy: the inequality holds for any value of c where c >= 2. So, yes, we can say that f(n) = 2^(n+1) is O(2^n).
Big Omega works the same way, except it evaluates f(n) >= c*g(n) for some constant 'c'.
So, simplifying the above functions the same way, we're left with (note the >= now):
2 * 2^n >= c * 2^n
So, the inequality holds for any c in the range 0 < c <= 2. So, we can say that f(n) = 2^(n+1) is Big Omega of 2^n.
Now, since BOTH of those hold, we can say the function is Big Theta of 2^n. If either one failed to hold for every constant 'c', then it would not be Big Theta.
The above example was taken from the Algorithm Design Manual by Skiena, which is a fantastic book.
Hope that helps. This really is a hard concept to simplify. Don't get hung up so much on what 'c' is, just break it down into simpler terms and use your basic algebra skills.
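If it helps, here is a one-line check of the algebra above in Python:

c = 2
print(all(2 ** (n + 1) <= c * 2 ** n for n in range(1, 64)))  # True, so 2^(n+1) is O(2^n)
print(all(2 ** (n + 1) >= c * 2 ** n for n in range(1, 64)))  # True, so 2^(n+1) is Omega(2^n)
# Both hold with the same g(n) = 2^n, so 2^(n+1) is Theta(2^n).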

Related

Clarification for Theta notation in complexity analysis. Θ(g)

When we talk about Θ(g) are we referring to the highest order term of g(n) or are we referring to g(n) exactly as it is?
For example, if f(n) = n^3 and g(n) = 1000n^3 + n, does Θ(g) mean Θ(1000n^3 + n) or Θ(n^3)?
In this scenario can we say that f(n) is Θ(g)?
Θ(g) yields sets of functions that are all of the same complexity class. Θ(1000n^3 + n) is equal to Θ(n^3) because both of these result in the same set.
For simplicity's sake one will usually drop the non-significant terms and multiplicative constants. The lower order additive terms don't change the complexity, nor do any multipliers, so there's no reason to write them out.
Since Θ(g) is a set, you would say that f(n) ∈ Θ(g).
NB: Many CS teachers, textbooks, and other resources muddy the waters by using imprecise notation. Lots of people say that f(n) = n^3 is O(n^3), rather than that f(n) = n^3 is in O(n^3). They use = when they mean ∈.
theta(g(n)) lies between O(g(n)) and omega(g(n))
if g(n) = 1000n^3 + n
first, let's find O(g(n)), the upper bound
It could be n^3, n^4, or n^5, but we choose the closest (smallest) one, which is O(n^3).
O(n^3) is valid because we can find a constant c such that, for all sufficiently large n,
1000n^3 + n <= c * n^3
second, let's see omega(g(n)), which is the lower bound
omega says f(n) >= c * g(n) for all sufficiently large n
we can find a constant c such that
1000n^3 + n >= c * n^3
Now we have the upper bound, which is O(n^3), and the lower bound, which is omega(n^3).
Therefore we have theta, which bounds from above and below using the same function.
By rule: if f(n) = O(g(n)) and f(n) = omega(g(n)), then f(n) = theta(g(n))
1000.n^3 + n = theta(n^3)
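A quick check with concrete constants (1000 and 1001 are my own picks; any valid pair would do):

print(all(1000 * n ** 3 <= 1000 * n ** 3 + n <= 1001 * n ** 3 for n in range(1, 10000)))
# True, so 1000n^3 + n is sandwiched between two multiples of n^3, i.e. theta(n^3).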

How can an algorithm that is O(n) also be O(n^2), O(n^1000000), O(2^n)?

So the answer to this question What is the difference between Θ(n) and O(n)?
states that "Basically when we say an algorithm is of O(n), it's also O(n2), O(n1000000), O(2n), ... but a Θ(n) algorithm is not Θ(n2)."
I understand Big O to represent upper bound or worst case with that I don't understand how O(n) is also O(n2) and the other cases worse than O(n).
Perhaps I have some fundamental misunderstandings. Please help me understand this as I have been struggling for a while.
Thanks.
It's helpful to think of what big-Oh means: if a function is O(n), then c*n, where c is some positive number, is an upper bound. If c*n is an upper bound then, for n >= 1, c*n^2 is also an upper bound, as are c*n^3, c*n^4, c*n^1000, etc.
[The original answer included a graph of these growth rates: each function shown is an upper bound of every function that grows more slowly than it.]
Suppose the running time of your algorithm is T(n) = 3n + 6 (i.e., an arbitrary polynomial of order 1).
It's true that T(n) = O(n) because 3n + 6 < 4n for all n > 6 (using the definition of big-oh notation). It's also true that T(n) = O(n^2) because 3n + 6 < n^2 for all n > 5 (using the definition again).
It's also true that T(n) = Θ(n) because, in addition to the proof that it is O(n), it is true that 3n + 6 > n for all n > 1. However, you cannot prove that 3n + 6 > c*n^2 for any fixed c > 0 and all arbitrarily large n. (Proof sketch: c*n^2 - 3n - 6 tends to +infinity as n -> infinity, so it is eventually positive.)
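A quick numeric confirmation of those inequalities (the ranges and the 0.01 in the last line are arbitrary choices of mine):

def T(n):
    return 3 * n + 6

print(all(T(n) < 4 * n for n in range(7, 10000)))            # True: T(n) = O(n)
print(all(T(n) < n * n for n in range(6, 10000)))            # True: T(n) = O(n^2)
print(all(T(n) > n for n in range(2, 10000)))                # True: T(n) = Omega(n)
print(any(T(n) >= 0.01 * n * n for n in range(1000, 2000)))  # False: no c keeps T(n) above c*n^2 for large n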
I understand Big O to represent an upper bound, or the worst case; given that, I don't understand how O(n) is also O(n^2) and the other cases worse than O(n).
Intuitively, an "upper bound of x" means that something will always be less than or equal to x. If something is less than or equal to x, it is also less than or equal to x^2 and x^1000, for large enough values of x. So x^2 and x^1000 can also be upper bounds.
This is what Big-oh represents: upper bounds.
When we say that f(n) = O(g(n)), we mean only that for all sufficiently large n, there exists a constant c such that f(n) <= cg(n). Note that if f(n) = O(g(n)), we can always choose a function h(n) bigger than g(n) and since g(n) is eventually less than h(n), we have f(n) <= cg(n) <= ch(n), so f(n) = O(h(n)) as well.
Note that an O bound need not be tight. The theta bound is the intersection of O(g(n)) and Omega(g(n)), where Omega gives the lower bound (it is like O, the upper bound, except it bounds from below). If f(n) is bounded below by g(n), and h(n) is bigger than g(n), it does not follow that f(n) is bounded below by h(n).

Asymptotic Notations: (an + b) ∈ O(n^2)

I was reading Intro to Algorithms, by Thomas H. Corman when I encountered this statement (in Asymptotic Notations)
when a > 0, any linear function an + b is in O(n^2), which is essentially verified by taking c = a + |b| and n0 = max(1, -b/a)
I can't understand why it is O(n^2) and not O(n). When will an O(n) upper bound fail?
For example, for 3n+2, according to the book
3n + 2 <= 5n^2 for n >= 1
but this also holds good
3n + 2 <= 5n for n >= 1
So why is the upper bound in terms of n^2?
Well I found the relevant part of the book. Indeed the excerpt comes from the chapter introducing big-O notation and relatives.
The formal definition of the big-O is that the function in question does not grow asymptotically faster than the comparison function. It does not say anything about whether the function grows asymptotically slower, so:
f(n) = n is in O(n), O(n^2) and also O(e^n) because n does not grow asymptotically faster than any of these. But n is not in O(1).
Any function in O(n) is also in O(n^2) and O(e^n).
If you want to describe the tight asymptotic bound, you would use the big-Θ notation, which is introduced just before the big-O notation in the book. f(n) ∊ Θ(g(n)) means that f(n) does not grow asymptotically faster than g(n) and the other way around. So f(n) ∊ Θ(g(n)) is equivalent to f(n) ∊ O(g(n)) and g(n) ∊ O(f(n)).
So f(n) = n is in Θ(n) but not in Θ(n^2) or Θ(e^n) or Θ(1).
Another example: f(n) = n^2 + 2 is in O(n^3) but not in Θ(n^3), it is in Θ(n^2).
You need to think of O(...) as a set (which is why the set theoretic "element-of"-symbol is used). O(g(n)) is the set of all functions that do not grow asymptotically faster than g(n), while Θ(g(n)) is the set of functions that neither grow asymptotically faster nor slower than g(n). So a logical consequence is that Θ(g(n)) is a subset of O(g(n)).
Often = is used instead of the ∊ symbol, which really is misleading. It is pure notation and does not share any properties with the actual =. For example 1 = O(1) and 2 = O(1), but not 1 = O(1) = 2. It would be better to avoid using = for the big-O notation. Nonetheless you will later see that the = notation is useful, for example if you want to express the complexity of remainder terms, e.g.: f(n) = 2*n^3 + 1/2*n - sqrt(n) + 3 = 2*n^3 + O(n), meaning that asymptotically the function behaves like 2*n^3 and the neglected part does not grow asymptotically faster than n.
All of this runs somewhat against the typical usage of big-O notation. You often find the time/memory complexity of an algorithm stated with it, when really it should be stated with big-Θ notation. For example, if you have an algorithm in O(n^2) and one in O(n), then the first one could actually still be asymptotically faster, because it might also be in Θ(1). The reason is sometimes that a tight Θ-bound does not exist or is not known for a given algorithm, so at least the big-O gives you a guarantee that things won't take longer than the given bound. By convention you always try to give the lowest known big-O bound, though this is not formally necessary.
The formal definition (from Wikipedia) of the big O notation says that:
f(x) = O(g(x)) as x → ∞
if and only if there is a positive constant M such that for all
sufficiently large values of x, f(x) is at most M multiplied by g(x)
in absolute value. That is, f(x) = O(g(x)) if and only if there exists
a positive real number M and a real number x0 such that
|f(x)| ≤ M|g(x)| for all x > x₀ (i.e., for x big enough)
In our case, we can easily show that
|an + b| < |an + n| (for n sufficiently big, i.e., when n > b)
Then |an + b| < (a+1)|n|
Since a+1 is constant (corresponds to M in the formal definition), definitely
an + b = O(n)
You were right to doubt.
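A quick numeric check of that argument, with made-up values a = 3 and b = 2:

a, b = 3, 2
print(all(abs(a * n + b) <= (a + 1) * n for n in range(b + 1, 10000)))  # True, so an + b = O(n)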

What exactly does big Ө notation represent?

I'm really confused about the differences between big O, big Omega, and big Theta notation.
I understand that big O is the upper bound and big Omega is the lower bound, but what exactly does big Ө (theta) represent?
I have read that it means tight bound, but what does that mean?
First let's understand what big O, big Theta and big Omega are. They are all sets of functions.
Big O is giving upper asymptotic bound, while big Omega is giving a lower bound. Big Theta gives both.
Everything that is Ө(f(n)) is also O(f(n)), but not the other way around.
T(n) is said to be in Ө(f(n)) if it is both in O(f(n)) and in Omega(f(n)). In sets terminology, Ө(f(n)) is the intersection of O(f(n)) and Omega(f(n))
For example, merge sort worst case is both O(n*log(n)) and Omega(n*log(n)) - and thus is also Ө(n*log(n)), but it is also O(n^2), since n^2 is asymptotically "bigger" than it. However, it is not Ө(n^2), since the algorithm is not Omega(n^2).
A slightly deeper mathematical explanation
O(n) is an asymptotic upper bound. If T(n) is O(f(n)), it means that from a certain n0, there is a constant C such that T(n) <= C * f(n). On the other hand, big-Omega says there is a constant C2 such that T(n) >= C2 * f(n).
Do not confuse!
Not to be confused with worst, best and average case analysis: all three notations (Omega, O, Theta) are unrelated to the best, worst and average case analysis of algorithms. Each of them can be applied to each analysis.
We usually use it to analyze the complexity of algorithms (like the merge sort example above). When we say "Algorithm A is O(f(n))", what we really mean is "The algorithm's complexity under the worst-case¹ analysis is O(f(n))" - meaning - it scales "similarly to" (or formally, not worse than) the function f(n).
Why do we care about the asymptotic bound of an algorithm?
Well, there are many reasons for it, but I believe the most important of them are:
It is much harder to determine the exact complexity function, so we "compromise" on the big-O/big-Theta notations, which are informative enough theoretically.
The exact number of ops is also platform dependent. For example, suppose we have a vector (list) of 16 numbers. How many ops will it take? The answer is: it depends. Some CPUs allow vector additions, while others don't, so the answer varies between different implementations and different machines, which is an undesirable property. The big-O notation, however, is much more constant between machines and implementations.
To demonstrate this issue, compare the growth of a few functions (the original answer showed graphs of them):
It is clear that f(n) = 2*n is "worse" than f(n) = n, but the difference is nowhere near as drastic as it is with the other functions. We can see that f(n) = log n quickly drops far below the other functions, and f(n) = n^2 quickly climbs far above the others.
So - because of the reasons above, we "ignore" the constant factors (2* in the graphs example), and take only the big-O notation.
In the above example, f(n)=n, f(n)=2*n will both be in O(n) and in Omega(n) - and thus will also be in Theta(n).
On the other hand - f(n)=logn will be in O(n) (it is "better" than f(n)=n), but will NOT be in Omega(n) - and thus will also NOT be in Theta(n).
Symmetrically, f(n)=n^2 will be in Omega(n), but NOT in O(n), and thus - is also NOT Theta(n).
¹ Usually, though not always; when the analysis class (worst, average or best) is missing, we really mean the worst case.
It means that the algorithm is both big-O and big-Omega in the given function.
For example, if it is Ө(n), then there is some constant k, such that your function (run-time, whatever), is larger than n*k for sufficiently large n, and some other constant K such that your function is smaller than n*K for sufficiently large n.
In other words, for sufficiently large n, it is sandwiched between two linear functions :
For k < K and n sufficiently large, n*k < f(n) < n*K
Theta(n): A function f(n) belongs to Theta(g(n)) if there exist positive constants c1 and c2 such that f(n) can be sandwiched between c1*g(n) and c2*g(n), i.e. it gives both an upper and a lower bound.
Theta(g(n)) = { f(n) : there exist positive constants c1, c2 and n1 such that
0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n1 }
When f(n) can be squeezed between c1*g(n) and c2*g(n) like this, the bound is asymptotically tight.
O(n): It gives only upper bound (may or may not be tight)
O(g(n)) = { f(n) : there exists positive constants c and n1 such that 0<=f(n)<=cg(n) for all n>=n1}
ex: The bound 2*(n^2) = O(n^2) is asymptotically tight, whereas the bound 2*n = O(n^2) is not asymptotically tight.
o(n): It gives only upper bound (never a tight bound)
the notable difference between O(n) and o(n) is that for O(g(n)) the bound f(n) <= c*g(n) needs to hold for some constant c > 0,
whereas for o(g(n)) the bound f(n) < c*g(n) must hold for every constant c > 0 (for all n >= n1).
ex: 2*n = o(n^2), but 2*(n^2) != o(n^2)
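One way to see the O vs o difference numerically (my own sketch) is to watch the ratio f(n)/g(n): if it tends to 0, f is o(g); if it merely stays bounded, f is O(g) but not necessarily o(g).

for n in (10, 1000, 100000):
    print(n, (2 * n) / n ** 2, (2 * n ** 2) / n ** 2)
# The first ratio shrinks toward 0, so 2n = o(n^2).
# The second stays at 2, so 2n^2 is O(n^2) but not o(n^2).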
I hope this is what you were looking for; it is taken from the classical CLRS (page 66).
Big Theta notation:
Nothing to mess up here, buddy!!
If we have positive-valued functions f(n) and g(n) of a positive-valued argument n, then ϴ(g(n)) is defined as
{ f(n) : there exist constants c1, c2 and n1 such that c1*g(n) <= f(n) <= c2*g(n) for all n >= n1 }
Let's take an example:
let f(n) = 5n^2 + 2n + 1 and g(n) = n^2;
then c1 = 5, c2 = 8 and n1 = 1 satisfy the definition.
Among all the notations, the ϴ notation gives the best intuition about the rate of growth of a function, because it gives a tight bound, unlike big-oh and big-omega,
which give only the upper and lower bounds respectively.
ϴ tells us that g(n) is as close to f(n) as possible: the rate of growth of g(n) is as close to the rate of growth of f(n) as it can be.
First of all, theory:
Big O = upper limit, O(n)
Theta = order of growth (tight bound), theta(n)
Omega = lower limit, Omega(n)
Why are people so confused?
In many blogs and books, statements like
"This is Big O(n^3)" etc.
are emphasized, and people often confuse the notations, as if
O(n) == theta(n) == Omega(n)
But what is worth keeping in mind is that these are just mathematical notations with the names O, Theta and Omega,
so below they are all illustrated with the same polynomial.
Let,
f(n) = 2n^4 + 100n^2 + 10n + 50, then
g(n) = n^4, i.e. g(n) is the term with the biggest power of n.
The same f(n) and g(n) are used in all the explanations below.
Big O - provides an upper bound
Big O(n^4): take c*g(n) = 3n^4, because 3n^4 ≥ 2n^4 + 100n^2 + 10n + 50 for large n.
(Here n^4 plays the role of x in a simple function like f(x) = 3x: the bound is just a constant times g(n).)
The general concept is
0 ≤ f(n) ≤ c*g(n), with c*g(n) = 3n^4
Putting in the values,
0 ≤ 2n^4 + 100n^2 + 10n + 50 ≤ 3n^4 (for sufficiently large n)
3n^4 is our upper bound.
Big Omega - provides a lower bound
Omega(n^4): take c*g(n) = 2n^4, because 2n^4 ≤ our example f(n).
2n^4 is the value for Omega(n^4),
so, 0 ≤ c*g(n) ≤ f(n)
0 ≤ 2n^4 ≤ 2n^4 + 100n^2 + 10n + 50
2n^4 is our lower bound.
Theta - provides a tight bound
This is calculated to find out whether the lower bound is of the same order as the upper bound.
Case 1) The upper bound is the same as the lower bound.
Example: 2n^4 ≤ f(n) ≤ 2n^4,
then f(n) is exactly 2n^4 and certainly Theta(n^4).
Case 2) The upper bound is not the same as the lower bound, but has the same order of growth - this is our actual case.
Example: 2n^4 ≤ f(n) ≤ 3n^4 for large n.
The constants differ, but both bounds are of order n^4, so f(n) is still Theta(n^4): Theta(g(n)) is the set of all functions with the same order of growth as g(n), and the constant factors (here between 2 and 3) do not matter.
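A quick check of the bounds above (the starting point n = 11 is my own choice; below that the upper bound does not hold yet):

def f(n):
    return 2 * n ** 4 + 100 * n ** 2 + 10 * n + 50

print(all(2 * n ** 4 <= f(n) <= 3 * n ** 4 for n in range(11, 10000)))  # True, so f(n) = Theta(n^4)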
Hope this explains it!!
I am not sure why there is no short, simple answer explaining big Theta in plain English (that seems to have been the question), so here it is:
Big Theta is the range of values, or the exact order of growth (if big O and big Omega agree), within which the number of operations needed by a function will grow.

How to calculate big-theta

Can someone provide me a real example of how to calculate big theta?
Is big theta something like the average case, (min-max)/2?
I mean (minimum time - big O)/2.
Please correct me if I am wrong, thanks.
Big-theta notation represents the following rule:
For any two functions f(n), g(n), if f(n)/g(n) and g(n)/f(n) are both bounded as n grows to infinity, then f = Θ(g) and g = Θ(f). In that case, g is both an upper bound and a lower bound on the growth of f.
Here's an example algorithm:
def find_minimum(values):
    minimum = float("inf")     # start with +infinity
    for value in values:
        if value < minimum:
            minimum = value
    return minimum
We wish to evaluate the cost function c(n) where n is the size of the input list. This algorithm will perform one comparison for every item in the list, so c(n) = n.
c(n)/n = 1, which remains bounded as n goes to infinity, so c(n) grows no faster than n. This is what is meant by big-O notation: c(n) = O(n). Conversely, n/c(n) = 1 also remains bounded, so c(n) grows no slower than n. Since it grows neither slower nor faster, it must grow at the same speed. This is what is meant by theta notation: c(n) = Θ(n).
Note that c(n)/n² is also bounded, so c(n) = O(n²) as well — big-O notation is merely an upper bound on the complexity, so any O(n) function is also O(n²), O(n³)...
However, since n²/c(n) = n is not bounded, then c(n) ≠ Θ(n²). This is the interesting property of big-theta notation: it's both an upper bound and a lower bound on the complexity.
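An instrumented version of the same idea (my own sketch) that counts the comparisons, so you can watch the ratio c(n)/n stay bounded:

def find_minimum_counted(values):
    minimum, comparisons = float("inf"), 0
    for value in values:
        comparisons += 1          # one comparison per element
        if value < minimum:
            minimum = value
    return minimum, comparisons

for n in (10, 100, 1000):
    _, c = find_minimum_counted(list(range(n)))
    print(n, c, c / n)            # c(n)/n is always 1, so c(n) = Theta(n)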
Big theta is a tight bound for a function T(n): if T(n) is both O(f(n)) and Omega(f(n)), then Theta(f(n)) is the tight bound for T(n).
In other words, Theta(f(n)) 'describes' a function T(n) if both O [big O] and Omega 'describe' the same T with the same f.
For example, a quicksort [with correct median choices] always takes at most O(n log n) time and at least Omega(n log n) time, so quicksort [with good median choices] is Theta(n log n).
EDIT:
added discussion in comments:
Searching an array is still Theta(n). The Theta notation does not indicate worst/best case, but the behaviour of whichever case is being analysed. E.g., when searching an array, let T(n) = number of ops in the worst case. Here, obviously T(n) <= O(n), but also T(n) >= n/2, because in the worst case you need to iterate over the whole array, so T(n) >= Omega(n), and therefore Theta(n) is the asymptotic bound.
From http://en.wikipedia.org/wiki/Big_O_notation#Related_asymptotic_notations, we learn that "Big O" denotes an upper bound, whereas "Big Theta" denotes an upper and lower bound, i.e. in the limit as n goes to infinity:
f(n) = O(g(n)) --> |f(n)| < k.g(n)
f(n) = Theta(g(n)) --> k1.g(n) < f(n) < k2.g(n)
So you cannot infer Big Theta from Big O.
Big-Theta (Θ) notation provides an asymptotic upper and lower bound on the growth rate of an algorithm's running time. To calculate the big-Theta notation of a function f(n), you need to find a non-negative function g(n) such that:
There exist positive constants c1, c2 and n0 such that 0 <= c1 * g(n) <= f(n) <= c2 * g(n) for all n >= n0.
f(n) and g(n) have the same asymptotic growth rate.
The big-Theta notation for the function f(n) is then written as Θ(g(n)). The purpose of this notation is to provide a rough estimate of the running time, ignoring lower order terms and constant factors.
For example, consider the function f(n) = 2n^2 + 3n + 1. To calculate its big-Theta notation, we can choose g(n) = n^2. Then, we can find c1 and c2 such that 0 <= c1 * n^2 <= 2n^2 + 3n + 1 <= c2 * n^2 for all n >= n0. For example, c1 = 1/2 and c2 = 6 work with n0 = 1. So, f(n) = Θ(n^2).
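A quick check of those constants (the test range is mine):

f = lambda n: 2 * n ** 2 + 3 * n + 1
print(all(0.5 * n ** 2 <= f(n) <= 6 * n ** 2 for n in range(1, 10000)))  # True, so f(n) = Theta(n^2)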
