Related
I know that f(n) grows more slowly than g(n), but could f(n) have the same growth rate as g(n), since there is an equality sign?
Based on the Big-O definition, yes. For example, n is in O(n) as well. In this case, f(n) = n and g(n) = n are even equal, a stronger relation than having the same growth rate.
Sometimes I see Θ(n) with the strange Θ symbol with something in the middle of it, and sometimes just O(n). Is it just laziness of typing because nobody knows how to type this symbol, or does it mean something different?
Short explanation:
If an algorithm is of Θ(g(n)), it means that the running time of the algorithm as n (input size) gets larger is proportional to g(n).
If an algorithm is of O(g(n)), it means that the running time of the algorithm as n gets larger is at most proportional to g(n).
Normally, even when people talk about O(g(n)) they actually mean Θ(g(n)) but technically, there is a difference.
More technically:
O(n) represents an upper bound. Θ(n) means a tight bound.
Ω(n) represents a lower bound.
f(x) = Θ(g(x)) iff f(x) = O(g(x)) and f(x) = Ω(g(x))
Basically when we say an algorithm is of O(n), it's also O(n²), O(n^1000000), O(2^n), ... but a Θ(n) algorithm is not Θ(n²).
In fact, since f(n) = Θ(g(n)) means that for sufficiently large values of n, f(n) can be bounded between c1*g(n) and c2*g(n) for some constants c1 and c2, i.e. the growth rate of f is asymptotically equal to that of g: g can be a lower bound and an upper bound of f. This directly implies f can be a lower bound and an upper bound of g as well. Consequently,
f(x) = Θ(g(x)) iff g(x) = Θ(f(x))
Similarly, to show f(n) = Θ(g(n)), it's enough to show g is an upper bound of f (i.e. f(n) = O(g(n))) and g is a lower bound of f (i.e. f(n) = Ω(g(n)), which is the exact same thing as g(n) = O(f(n))). Concisely,
f(x) = Θ(g(x)) iff f(x) = O(g(x)) and g(x) = O(f(x))
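For reference, the formal definitions behind these statements (standard textbook phrasing, added here for completeness; the constants c, c1, c2 and the threshold n0 are the usual quantifiers, not anything specific to this answer):
f(n) = O(g(n)) ⟺ there exist constants c > 0 and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0
f(n) = Ω(g(n)) ⟺ there exist constants c > 0 and n0 such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0
f(n) = Θ(g(n)) ⟺ there exist constants c1, c2 > 0 and n0 such that c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0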
There are also little-oh and little-omega (ω) notations representing loose upper and loose lower bounds of a function.
To summarize:
f(x) = O(g(x)) (big-oh) means that the growth rate of f(x) is asymptotically less than or equal to the growth rate of g(x).
f(x) = Ω(g(x)) (big-omega) means that the growth rate of f(x) is asymptotically greater than or equal to the growth rate of g(x).
f(x) = o(g(x)) (little-oh) means that the growth rate of f(x) is asymptotically less than the growth rate of g(x).
f(x) = ω(g(x)) (little-omega) means that the growth rate of f(x) is asymptotically greater than the growth rate of g(x).
f(x) = Θ(g(x)) (theta) means that the growth rate of f(x) is asymptotically equal to the growth rate of g(x).
For a more detailed discussion, you can read the definition on Wikipedia or consult a classic textbook like Introduction to Algorithms by Cormen et al.
There's a simple way (a trick, I guess) to remember which notation means what.
All of the Big-O notations can be considered to have a bar.
When looking at a Ω, the bar is at the bottom, so it is an (asymptotic) lower bound.
When looking at a Θ, the bar is obviously in the middle. So it is an (asymptotic) tight bound.
When handwriting O, you usually finish at the top, and draw a squiggle. Therefore O(n) is an (asymptotic) upper bound of the function. To be fair, this one doesn't work with most fonts, but it is the original justification of the names.
One is Big "O", the other is Big Theta:
http://en.wikipedia.org/wiki/Big_O_notation
Big O means your algorithm will execute in no more steps than the given expression (n^2).
Big Omega means your algorithm will execute in no fewer steps than the given expression (n^2).
When both conditions are true for the same expression, you can use the big Theta notation.
Rather than provide theoretical definitions, which are beautifully summarized here already, I'll give a simple example:
Assume the run time of f(i) is O(1). Below is a code fragment whose asymptotic runtime is Θ(n). It always calls the function f(...) n times. Both the lower and the upper bound are n.
for(int i=0; i<n; i++){
f(i);
}
The second code fragment below has an asymptotic runtime of O(n). It calls the function f(...) at most n times. The upper bound is n, but the lower bound could be Ω(1) or Ω(log(n)), depending on what happens inside f2(i); a concrete sketch of such an f2 follows the fragment.
for(int i=0; i<n; i++){
if( f2(i) ) break;
f(i);
}
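To make that concrete, here is a small self-contained sketch (my own illustration; the array, the target value, and the extra parameters passed to f2 are assumptions, not part of the answer's fragments). If f2 tests whether position i holds a sought value, the loop may stop after the very first iteration or only after all n, so its runtime is O(n) and Ω(1) but not Θ(n):

#include <cstdio>
#include <vector>

// Illustrative stand-ins for f(...) and f2(...); the names match the fragments
// above, but the linear-search behaviour is just an example.
void f(int i) { std::printf("processing %d\n", i); }      // O(1) work per call

bool f2(const std::vector<int>& a, int i, int target) {   // early-exit test
    return a[i] == target;                                 // "did we find it?"
}

int main() {
    std::vector<int> a = {7, 3, 9, 3, 5};
    int n = static_cast<int>(a.size());
    int target = 9;

    // Same shape as the fragment above: the upper bound is O(n), but the loop
    // may stop after the first test, so without knowing the input the best
    // lower bound we can state is Ω(1).
    for (int i = 0; i < n; i++) {
        if (f2(a, i, target)) break;
        f(i);
    }
    return 0;
}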
Theta is a shorthand way of referring to the special situation where the big O and Omega are the same.
Thus, if one claims that the Theta is expression q, then they are also necessarily claiming that Big O is expression q and Omega is expression q.
Rough analogy:
If:
Theta claims, "That animal has 5 legs."
then it follows that:
Big O is true ("That animal has 5 or fewer legs.")
and
Omega is true ("That animal has 5 or more legs.")
It's only a rough analogy because the expressions aren't necessarily specific numbers, but instead functions of varying orders of magnitude such as log(n), n, n^2, (etc.).
A chart could make the previous answers easier to understand:
[Chart: Θ-Notation (same order) on the left, O-Notation (upper bound) on the right]
In English,
On the left, note that there is an upper bound and a lower bound that are both of the same order of magnitude (i.e. g(n) ). Ignore the constants, and if the upper bound and lower bound have the same order of magnitude, one can validly say f(n) = Θ(g(n)) or f(n) is in big theta of g(n).
On the right, the simpler case, the chart says that the upper bound is simply the order of magnitude g(n), ignoring the constant c (just as all big O notation does).
f(n) belongs to O(n) if there exists a positive constant k such that f(n) <= k*n (for all sufficiently large n)
f(n) belongs to Θ(n) if there exist positive constants k1, k2 such that k1*n <= f(n) <= k2*n (for all sufficiently large n)
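As a concrete instance of the second line (the function 3n + 5 and the constants below are my own illustration, not part of the original answer): for f(n) = 3n + 5 we can choose k1 = 3 and k2 = 4, since 3*n <= 3n + 5 <= 4*n holds for all n >= 5, so f(n) belongs to Θ(n).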
Wikipedia article on Big O Notation
Using limits
Let's consider f(n) > 0 and g(n) > 0 for all n. This is a safe assumption, because even the fastest real algorithm performs at least one operation and takes some positive time to finish. It also simplifies the calculations, because we can use the value f(n) instead of the absolute value |f(n)|.
f(n) = O(g(n))
General (assuming the limit exists; in full generality one uses lim sup):
f(n)
0 ≤ lim ──────── < ∞
n➜∞ g(n)
For g(n) = n:
f(n)
0 ≤ lim ──────── < ∞
n➜∞ n
Examples:
Expression Value of the limit
------------------------------------------------
n = O(n) 1
1/2*n = O(n) 1/2
2*n = O(n) 2
n+log(n) = O(n) 1
n = O(n*log(n)) 0
n = O(n²) 0
n = O(nⁿ) 0
Counterexamples:
Expression Value of the limit
-------------------------------------------------
n ≠ O(log(n)) ∞
1/2*n ≠ O(sqrt(n)) ∞
2*n ≠ O(1) ∞
n+log(n) ≠ O(log(n)) ∞
f(n) = Θ(g(n))
General (assuming the limit exists):
f(n)
0 < lim ──────── < ∞
n➜∞ g(n)
For g(n) = n:
f(n)
0 < lim ──────── < ∞
n➜∞ n
Examples:
Expression Value of the limit
------------------------------------------------
n = Θ(n) 1
1/2*n = Θ(n) 1/2
2*n = Θ(n) 2
n+log(n) = Θ(n) 1
Counterexamples:
Expression Value of the limit
-------------------------------------------------
n ≠ Θ(log(n)) ∞
1/2*n ≠ Θ(sqrt(n)) ∞
2*n ≠ Θ(1) ∞
n+log(n) ≠ Θ(log(n)) ∞
n ≠ Θ(n*log(n)) 0
n ≠ Θ(n²) 0
n ≠ Θ(nⁿ) 0
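As a quick numerical sanity check of the tables above (this small program is my own addition, not part of the original answer; it just evaluates a few of the ratios f(n)/g(n) for growing n):

#include <cmath>
#include <cstdio>

int main() {
    // Evaluate a few of the ratios f(n)/g(n) from the tables above for growing n.
    // (n + log n)/n should head toward 1; n/(n*log n) and n/n^2 should head toward 0.
    for (double n = 10; n <= 1e9; n *= 1000) {
        double r1 = (n + std::log(n)) / n;   // n+log(n) = Θ(n): limit 1
        double r2 = n / (n * std::log(n));   // n = O(n*log(n)) but not Θ: limit 0
        double r3 = n / (n * n);             // n = O(n^2) but not Θ: limit 0
        std::printf("n=%.0e  (n+log n)/n=%.6f  n/(n log n)=%.6f  n/n^2=%.2e\n",
                    n, r1, r2, r3);
    }
    return 0;
}

The first ratio drifts toward 1 (a Θ relation), while the other two drift toward 0 (an O relation that is not Θ).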
Conclusion: we regard big O, big θ and big Ω as the same thing.
Why? I will explain below.
First, let me clarify one wrong statement: some people think that because we only care about the worst-case time complexity, we should always use big O instead of big θ. That reasoning is mistaken. Upper and lower bounds are used to describe a single function, not to distinguish worst-case from best-case complexity. The worst-case time function has its own upper and lower bounds, and the best-case time function has its own upper and lower bounds too.
In order to explain clearly the relation between big O and big θ, I will explain the relation between big O and small o first. From the definitions, it is easy to see that small o is a subset of big O. For example:
For T(n) = n^2 + n, we can say T(n) = O(n^2), T(n) = O(n^3), T(n) = O(n^4). But for small o, T(n) = o(n^2) does not meet the definition; only T(n) = o(n^3), T(n) = o(n^4), ... are correct. And the one big-O statement that small o excludes, T(n) = O(n^2), is exactly the tight bound: it's big θ!
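To see concretely why that is the tight bound (a quick check added here for illustration): for n >= 1 we have n^2 <= n^2 + n <= 2*n^2, so T(n) = Θ(n^2); and since (n^2 + n)/n^3 → 0 as n → ∞ we indeed have T(n) = o(n^3), while (n^2 + n)/n^2 → 1 ≠ 0, which is exactly why T(n) = o(n^2) fails.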
In everyday speech, we say the big O is O(n^2) and hardly ever say T(n) = O(n^3) or T(n) = O(n^4). Why? Because we subconsciously treat big O as big θ.
Similarly, we also subconsciously treat big Ω as big θ.
In short, big O, big θ and big Ω are not the same thing by definition, but in everyday speech and thought we treat them as if they were.
I've come across a sentence in CLRS (Introduction to Algorithms) which states
"Distinguishing asymptotic Upper Bounds from asymptotically tight bounds is standard in the algorithms literature"
While I understand the essence of what the text wants to convey, it would be better understood with an example illustrating the difference.
O-notation gives us an asymptotic upper bound.
Consider a function f(x). We can define a function g(x) such that f(x) = O(g(x)).
Here g(x) is an asymptotic upper bound of f(x), meaning that beyond some point x >= x0, f(x) grows at the same rate as or more slowly than g(x) as x increases.
Another thing to notice is that if h(x) is an asymptotic upper bound of g(x), then it can easily be concluded that h(x) is also an asymptotic upper bound of f(x). After all, if f(x) can only grow at an equal or smaller rate than g(x), it is bound to grow at an equal or smaller rate than h(x), since g(x) cannot grow any faster than h(x).
E.g. if f(x) = 10x + 2,
g(x) = 12x + 1 and h(x) = 2x^2.
We can safely say that f(x) = O(g(x)), g(x) = O(h(x)) and f(x) = O(h(x)).
Here g(x) is said to be an asymptotically tight upper bound of f(x), while h(x) is an asymptotic upper bound of f(x) that is not tight.
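Working out the constants for this example (the particular c values and thresholds below are my own choices for illustration): 10x + 2 <= 1*(12x + 1) for all x >= 1/2, so f(x) = O(g(x)) with c = 1; and 12x + 1 <= 7*(2x^2) for all x >= 1, so g(x) = O(h(x)) with c = 7; chaining the two gives f(x) = O(h(x)).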
Would Ω(n log n) be the same as saying n^2?
Extra: Can someone please explain to me clearly what big O, Θ and Ω mean illustratively?
It is not; it is Omega, which says "it is asymptotically the same or lower".
In "asymptotic" inequalities, it is the same as n^2 >= n log n.
Extra:
Standard notation || Text representation || Asymptotic inequality
f(x) = O(g(x)) || g(x) is asymptotically the same as or higher than f(x) || f(x) <= g(x)
f(x) = Θ(g(x)) || g(x) is asymptotically the same as f(x) || f(x) = g(x)
f(x) = Ω(g(x)) || g(x) is asymptotically the same as or lower than f(x) || f(x) >= g(x)
PS: Note also that if f(x) = O(g(x)), it also means that g(x) = Ω(f(x)); this is analogous to: if f(x) <= g(x), then g(x) >= f(x).
n^2 = Ω(n log n) is not an equality, but a relation between these functions.
You can read about it and see examples here.