Solving for Big Theta Notation

I'm having an issue solving for big theta notation. I understand that big O notation denotes the worst case and upper bound, while Omega notation denotes the best case and lower bound.
If I'm given an algorithm that runs in O(n*log(n)) time and Omega(n), how would I infer what Theta equals? I'm beginning to assume that a Theta bound exists if and only if O and Omega are equal. Is this true?

Assume your algorithm's running time is f(n).
f(n) = O(n*log(n)) means that for n large enough there is some constant c1 > 0 so that f(n) will always be smaller than c1*n*log(n).
f(n) = Omega(n) means that for n large enough there is some constant c2 > 0 so that f(n) will be bigger than c2*n.
So all you know now is that from a certain point onward (n big enough) your algorithm will run faster than c2*n and slower than c1*n*log(n).
f(n) = Theta(g(n)) means that for n big enough there are some c3 > 0 and c4 > 0 so that c3*g(n) <= f(n) <= c4*g(n); this means f(n) runs only a constant factor faster or slower than g(n). So indeed, f(n) = O(g(n)) and f(n) = Omega(g(n)).
Conclusion: with only O and Omega given, if they are not the same you cannot conclude what Theta is. If you have the algorithm itself, you could check whether the O bound was chosen too high, or the Omega bound too low.
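For a concrete picture, here is a minimal sketch (hypothetical code, assuming step(...) is some O(1) operation) whose operation count is roughly n when n is even and roughly n*log(n) when n is odd. Its running time is O(n*log(n)) and Omega(n), yet it is neither Theta(n) nor Theta(n*log(n)):

void work(int n) {
    if (n % 2 == 0) {
        // even n: about n constant-time steps -> linear
        for (int i = 0; i < n; i++) step(i);
    } else {
        // odd n: about n * log2(n) constant-time steps -> n*log(n)
        for (int i = 0; i < n; i++)
            for (int j = 1; j < n; j *= 2) step(j);
    }
}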

Related

O-notation of a linear function has an infinite number of solutions? [duplicate]

Sometimes I see Θ(n) with the strange Θ symbol with something in the middle of it, and sometimes just O(n). Is it just laziness of typing because nobody knows how to type this symbol, or does it mean something different?
Short explanation:
If an algorithm is of Θ(g(n)), it means that the running time of the algorithm as n (input size) gets larger is proportional to g(n).
If an algorithm is of O(g(n)), it means that the running time of the algorithm as n gets larger is at most proportional to g(n).
Normally, even when people talk about O(g(n)) they actually mean Θ(g(n)) but technically, there is a difference.
More technically:
O(n) represents upper bound. Θ(n) means tight bound.
Ω(n) represents lower bound.
f(x) = Θ(g(x)) iff f(x) = O(g(x)) and f(x) = Ω(g(x))
Basically when we say an algorithm is of O(n), it's also O(n^2), O(n^1000000), O(2^n), ... but a Θ(n) algorithm is not Θ(n^2).
In fact, since f(n) = Θ(g(n)) means that for sufficiently large values of n, f(n) can be bounded between c1*g(n) and c2*g(n) for some constants c1 and c2, i.e. the growth rate of f is asymptotically equal to that of g: g can be a lower bound and an upper bound of f. This directly implies f can be a lower bound and an upper bound of g as well. Consequently,
f(x) = Θ(g(x)) iff g(x) = Θ(f(x))
Similarly, to show f(n) = Θ(g(n)), it's enough to show g is an upper bound of f (i.e. f(n) = O(g(n))) and f is a lower bound of g (i.e. f(n) = Ω(g(n)) which is the exact same thing as g(n) = O(f(n))). Concisely,
f(x) = Θ(g(x)) iff f(x) = O(g(x)) and g(x) = O(f(x))
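For a quick concrete check of that sandwich, take f(n) = 3n^2 + 5n and g(n) = n^2: for every n >= 1 we have 3n^2 <= 3n^2 + 5n <= 3n^2 + 5n^2 = 8n^2, so c1 = 3 and c2 = 8 witness both bounds, and therefore 3n^2 + 5n = Θ(n^2).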
There are also little-oh and little-omega (ω) notations representing loose upper and loose lower bounds of a function.
To summarize:
f(x) = O(g(x)) (big-oh) means that the growth rate of f(x) is asymptotically less than or equal to the growth rate of g(x).
f(x) = Ω(g(x)) (big-omega) means that the growth rate of f(x) is asymptotically greater than or equal to the growth rate of g(x).
f(x) = o(g(x)) (little-oh) means that the growth rate of f(x) is asymptotically less than the growth rate of g(x).
f(x) = ω(g(x)) (little-omega) means that the growth rate of f(x) is asymptotically greater than the growth rate of g(x).
f(x) = Θ(g(x)) (theta) means that the growth rate of f(x) is asymptotically equal to the growth rate of g(x).
For a more detailed discussion, you can read the definition on Wikipedia or consult a classic textbook like Introduction to Algorithms by Cormen et al.
There's a simple way (a trick, I guess) to remember which notation means what.
All of the Big-O notations can be considered to have a bar.
When looking at a Ω, the bar is at the bottom, so it is an (asymptotic) lower bound.
When looking at a Θ, the bar is obviously in the middle. So it is an (asymptotic) tight bound.
When handwriting O, you usually finish at the top, and draw a squiggle. Therefore O(n) is the upper bound of the function. To be fair, this one doesn't work with most fonts, but it is the original justification of the names.
One is Big "O", the other is Big Theta: http://en.wikipedia.org/wiki/Big_O_notation
Big O means your algorithm will execute in no more steps than the given expression (e.g. n^2).
Big Omega means your algorithm will execute in no fewer steps than the given expression (e.g. n^2).
When both conditions are true for the same expression, you can use the big theta notation.
Rather than provide a theoretical definition, which is beautifully summarized here already, I'll give a simple example:
Assume the run time of f(i) is O(1). Below is a code fragment whose asymptotic runtime is Θ(n). It always calls the function f(...) n times. Both the lower and the upper bound are n.
for(int i=0; i<n; i++){
    f(i);
}
The second code fragment below has the asymptotic runtime of O(n). It calls the function f(...) at most n times. The upper bound is n, but the lower bound could be Ω(1) or Ω(log(n)), depending on what happens inside f2(i).
for(int i=0; i<n; i++){
    if( f2(i) ) break;
    f(i);
}
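By the same reasoning, a fragment that always runs a full doubly nested loop calls f(...) exactly n*n times, so (still assuming f(...) is O(1)) its asymptotic runtime is Θ(n^2); both the lower and the upper bound are n^2. A minimal sketch:

for(int i=0; i<n; i++){
    for(int j=0; j<n; j++){
        f(j);
    }
}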
Theta is a shorthand way of referring to the special situation where the big O and Omega are the same.
Thus, if one claims that the Theta is expression q, then they are also necessarily claiming that Big O is expression q and Omega is expression q.
Rough analogy:
If:
Theta claims, "That animal has 5 legs."
then it follows that:
Big O is true ("That animal has less than or equal to 5 legs.")
and
Omega is true ("That animal has more than or equal to 5 legs.")
It's only a rough analogy because the expressions aren't necessarily specific numbers, but instead functions of varying orders of magnitude such as log(n), n, n^2, (etc.).
A chart could make the previous answers easier to understand:
Θ-Notation - Same order | O-Notation - Upper bound
In English,
On the left, note that there is an upper bound and a lower bound that are both of the same order of magnitude (i.e. g(n)). Ignore the constants: if the upper bound and lower bound have the same order of magnitude, one can validly say f(n) = Θ(g(n)), or f(n) is in big theta of g(n).
On the right, the simpler example, the upper bound g(n) is simply the order of magnitude; the constant c is ignored (just as all big O notation does).
f(n) belongs to O(n) if there exists a positive constant k such that f(n) <= k*n (for all sufficiently large n).
f(n) belongs to Θ(n) if there exist positive constants k1, k2 such that k1*n <= f(n) <= k2*n (for all sufficiently large n).
Wikipedia article on Big O Notation
Using limits
Let's assume f(n) > 0 and g(n) > 0 for all n. This is safe to assume, because the fastest real algorithm performs at least one operation and finishes after it starts. It also simplifies the calculations, because we can use f(n) itself instead of the absolute value |f(n)|.
f(n) = O(g(n))
General:
    0 ≤ lim (n→∞) f(n)/g(n) < ∞
For g(n) = n:
    0 ≤ lim (n→∞) f(n)/n < ∞
Examples:
    Expression              Value of the limit
    ------------------------------------------
    n = O(n)                1
    1/2*n = O(n)            1/2
    2*n = O(n)              2
    n+log(n) = O(n)         1
    n = O(n*log(n))         0
    n = O(n²)               0
    n = O(nⁿ)               0
Counterexamples:
    Expression              Value of the limit
    ------------------------------------------
    n ≠ O(log(n))           ∞
    1/2*n ≠ O(sqrt(n))      ∞
    2*n ≠ O(1)              ∞
    n+log(n) ≠ O(log(n))    ∞
f(n) = Θ(g(n))
General:
    0 < lim (n→∞) f(n)/g(n) < ∞
For g(n) = n:
    0 < lim (n→∞) f(n)/n < ∞
Examples:
    Expression              Value of the limit
    ------------------------------------------
    n = Θ(n)                1
    1/2*n = Θ(n)            1/2
    2*n = Θ(n)              2
    n+log(n) = Θ(n)         1
Counterexamples:
    Expression              Value of the limit
    ------------------------------------------
    n ≠ Θ(log(n))           ∞
    1/2*n ≠ Θ(sqrt(n))      ∞
    2*n ≠ Θ(1)              ∞
    n+log(n) ≠ Θ(log(n))    ∞
    n ≠ Θ(n*log(n))         0
    n ≠ Θ(n²)               0
    n ≠ Θ(nⁿ)               0
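As a quick numerical sanity check (a rough sketch in Java, not a proof), you can print f(n)/g(n) for growing n and see whether the ratio settles at a finite, nonzero value. For f(n) = n + log(n) and g(n) = n the printed ratio approaches 1, consistent with n + log(n) = Θ(n):

for (long n = 10; n <= 1_000_000_000L; n *= 10) {
    double ratio = (n + Math.log(n)) / n;   // f(n)/g(n) with f(n) = n + log(n), g(n) = n
    System.out.println("n = " + n + "  f(n)/g(n) = " + ratio);
}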
Conclusion: in practice we often treat big O, big Θ and big Ω as the same thing.
Why? Here is the reason:
First, let me clear up one wrong statement: some people think that because we only care about the worst-case time complexity, we always use big O instead of big Θ. That claim is simply wrong. Upper and lower bounds are used to describe a single function, not to distinguish kinds of time complexity. The worst-case time function has its own upper and lower bounds, and the best-case time function has its own upper and lower bounds too.
To explain the relation between big O and big Θ clearly, I will first explain the relation between big O and little o. From the definitions, it is easy to see that little o is a subset of big O. For example:
For T(n) = n^2 + n, we can say T(n) = O(n^2), T(n) = O(n^3), T(n) = O(n^4). But T(n) = o(n^2) does not satisfy the definition of little o, so only T(n) = o(n^3) and T(n) = o(n^4) are correct. What is the leftover bound, T(n) = O(n^2)? It is exactly big Θ!
In everyday usage we say the big O is O(n^2), and we would hardly ever say T(n) = O(n^3) or T(n) = O(n^4). Why? Because we subconsciously treat big O as big Θ.
Similarly, we also subconsciously treat big Ω as big Θ.
In one word: big O, big Θ and big Ω are not the same thing by definition, but they are the same thing in our speech and in our heads.

Big O vs Small omega

Why is ω(n) smaller than O(n)?
I know what is little omega (for example, n = ω(log n)), but I can't understand why ω(n) is smaller than O(n).
Big Oh 'O' is an upper bound and little omega 'ω' is a strict (non-tight) lower bound.
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}
ω(g(n)) = { f(n): for all constants c > 0, there exists a constant n0 such that 0 ≤ cg(n) < f(n) for all n ≥ n0}.
ALSO: f(n) = ω(g(n)) means lim (n→∞) f(n)/g(n) = ∞.
n ∈ O(n) and n ∉ ω(n).
Alternatively:
n ∈ ω(log(n)) and n ∉ O(log(n))
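For instance, lim (n→∞) n/log(n) = ∞, which is exactly the little-omega condition, so n = ω(log(n)); whereas lim (n→∞) n/n = 1 is finite, so n = O(n) (indeed Θ(n)) but n ≠ ω(n).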
ω(n) and O(n) are at opposite ends of the spectrum.
For more details, see CSc 345 — Analysis of Discrete Structures (McCann), which also contains a compact representation of the definitions that makes them easy to remember.
I can't comment, so first of all let me say that n ≠ Θ(log(n)). Big Theta means that for some positive constants c1, c2, and k, for all values of n greater than k, c1*log(n) ≤ n ≤ c2*log(n), which is not true. As n approaches infinity, it will always be larger than log(n), no matter log(n)'s coefficient.
jesse34212 was correct in saying that n = ω(log(n)). n = ω(log(n)) means that n ≠ Θ(log(n)) AND n = Ω(log(n)). In other words, little or small omega is a loose lower bound, whereas big omega can be loose or tight.
Big O notation signifies a loose or tight upper bound. For instance, 12n = O(n) (tight upper bound, because it's as precise as you can get), and 12n = O(n^2) (loose upper bound, because you could be more precise).
12n ≠ ω(n) because n is a tight bound on 12n, and ω only applies to loose bounds. That's why 12n = ω(log(n)), or even 12n = ω(1). I keep using 12n, but that value of the constant does not affect the equality.
Technically, O(n) is a set of all functions that grow asymptotically equal to or slower than n, and the belongs character is most appropriate, but most people use "= O(n)" (instead of "∈ O(n)") as an informal way of writing it.
Algorithmic complexity has a mathematic definition.
If f and g are two functions, f = O(g) if you can find two constants c (> 0) and n0 such that f(x) < c * g(x) for every x > n0.
For Ω, it is the opposite: you can find constants such that f(x) > c * g(x).
f = Θ(g) if there are three constants c, d and n0 such that c * g(x) < f(x) < d * g(x) for every x > n0.
Then, O means your function is dominated, Θ means your function is equivalent to the other function, and Ω means your function has a lower limit.
So, when you are using Θ, your approximation is better, since you are "wrapping" your function between two edges; whereas O only sets a maximum. The same goes for Ω (a minimum).
To sum up:
O(n): in the worst case, your algorithm has a complexity of n
Ω(n): in the best case, your algorithm has a complexity of n
Θ(n): in every case, your algorithm has a complexity of n
To conclude: as you may know, n > log(n) when n is large, so n = Ω(log(n)) and in fact n = ω(log(n)); but n ≠ Θ(log(n)), according to the previous definitions.


Difference between Big-Theta and Big O notation in simple language

While trying to understand the difference between Theta and O notation I came across the following statement :
The Theta-notation asymptotically bounds a function from above and below. When
we have only an asymptotic upper bound, we use O-notation.
But I do not understand this. The book explains it mathematically, but it's too complex and gets really boring to read when I really don't understand it.
Can anyone explain the difference between the two using simple, yet powerful examples.
Big O is giving only upper asymptotic bound, while big Theta is also giving a lower bound.
Everything that is Theta(f(n)) is also O(f(n)), but not the other way around.
T(n) is said to be Theta(f(n)), if it is both O(f(n)) and Omega(f(n))
For this reason big-Theta is more informative than big-O notation, so if we can say something is big-Theta, it's usually preferred. However, it is harder to prove something is big-Theta than to prove it is big-O.
For example, merge sort is both O(n*log(n)) and Theta(n*log(n)), but it is also O(n^2), since n^2 is asymptotically "bigger" than it. However, it is NOT Theta(n^2), since the algorithm is NOT Omega(n^2).
Omega(n) is an asymptotic lower bound. If T(n) is Omega(f(n)), it means that from a certain n0 on, there is a constant C1 such that T(n) >= C1 * f(n), whereas big-O says there is a constant C2 such that T(n) <= C2 * f(n).
All three (Omega, O, Theta) give only asymptotic information ("for large input"):
Big O gives upper bound
Big Omega gives lower bound and
Big Theta gives both lower and upper bounds
Note that this notation is not related to the best-, worst- and average-case analysis of algorithms. Each of these notations can be applied to each kind of analysis.
I will just quote from Knuth's TAOCP Volume 1 - page 110 (I have the Indian edition). I recommend reading pages 107-110 (section 1.2.11 Asymptotic representations)
People often confuse O-notation by assuming that it gives an exact order of Growth; they use it as if it specifies a lower bound as well as an upper bound. For example, an algorithm might be called inefficient because its running time is O(n^2). But a running time of O(n^2) does not necessarily mean that running time is not also O(n)
On page 107,
1^2 + 2^2 + 3^2 + ... + n^2 = O(n^4) and
1^2 + 2^2 + 3^2 + ... + n^2 = O(n^3) and
1^2 + 2^2 + 3^2 + ... + n^2 = (1/3) n^3 + O(n^2)
Big-Oh is for approximations. It allows you to replace ~ with an equals = sign. In the example above, for very large n, we can be sure that the quantity will stay below n^4 and n^3 and (1/3)n^3 + n^2 [and not simply n^2]
Big Omega is for lower bounds - an algorithm with running time Omega(n^2) will not be as efficient as one with running time O(n*log(n)) for large n. However, we do not know for which values of n this holds (in that sense we only know it approximately).
Big Theta is for exact order of Growth, both lower and upper bound.
I am going to use an example to illustrate the difference.
Let the function f(n) be defined as
f(n) = n^3 if n is odd
f(n) = n^2 if n is even
From CLRS
A function f(n) belongs to the set Θ(g(n)) if there exist positive
constants c1 and c2 such that it can be "sandwiched" between c1g(n)
and c2g(n), for sufficiently large n.
AND
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤
f(n) ≤ cg(n) for all n ≥ n0}.
AND
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤
cg(n) ≤ f(n) for all n ≥ n0}.
The upper bound on f(n) is n^3. So our function f(n) is clearly O(n^3).
But is it Θ(n^3)?
For f(n) to be in Θ(n^3) it has to be sandwiched between two functions, one forming the lower bound and the other the upper bound, both of which grow as n^3. While the upper bound is obvious, the lower bound cannot be n^3. The lower bound is in fact n^2; f(n) is Ω(n^2).
From CLRS
For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and
only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
Hence f(n) is not in Θ(n^3) while it is in O(n^3) and Ω(n^2)
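A tiny sketch of this piecewise function in code (purely illustrative, assuming n >= 1 fits in a long):

static long f(long n) {
    // n^3 when n is odd, n^2 when n is even:
    // the growth is bounded above by n^3 but below only by n^2.
    return (n % 2 == 1) ? n * n * n : n * n;
}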
If the running time is expressed in big-O notation, you know that the running time will not be slower than the given expression. It expresses the worst-case scenario.
But with Theta notation you also know that it will not be faster. That is, there is no best-case scenario where the algorithm will return faster.
This gives a more exact bound on the expected running time. However, for most purposes it is simpler to ignore the lower bound (the possibility of faster execution), since you are generally only concerned with the worst-case scenario.
Here's my attempt:
A function f(n) is O(g(n)) if and only if there exists a constant c such that f(n) <= c*g(n) for all sufficiently large n.
Using this definition, could we say that the function f(n) = 2^(n+1) is O(2^n)?
In other words, does a constant 'c' exist such that 2^(n+1) <= c*(2^n)? Note the second function (2^n) is the function after the Big O in the above problem. This confused me at first.
So, then use your basic algebra skills to simplify that equation. 2^(n+1) breaks down to 2 * 2^n. Doing so, we're left with:
2 * 2^n <= c(2^n)
Now it's easy: the inequality holds for any value of c where c >= 2. So, yes, we can say that f(n) = 2^(n+1) is O(2^n).
Big Omega works the same way, except it evaluates f(n) >= c*g(n) for some constant 'c'.
So, simplifying the above functions the same way, we're left with (note the >= now):
2 * 2^n >= c(2^n)
So, the inequality holds for the range 0 < c <= 2. So, we can say that f(n) = 2^(n+1) is Big Omega of 2^n.
Now, since BOTH of those hold, we can say the function is Big Theta of 2^n. If one of them did not hold for any constant c, then it would not be Big Theta.
The above example was taken from the Algorithm Design Manual by Skiena, which is a fantastic book.
Hope that helps. This really is a hard concept to simplify. Don't get hung up so much on what 'c' is, just break it down into simpler terms and use your basic algebra skills.

How to calculate big-theta

Can someone provide me a real example of how to calculate big theta?
Is big theta something like the average case, (min-max)/2?
I mean (minimum time - big O)/2
Please correct me if I am wrong, thanks
Big-theta notation represents the following rule:
For any two functions f(n), g(n), if f(n)/g(n) and g(n)/f(n) are both bounded as n grows to infinity, then f = Θ(g) and g = Θ(f). In that case, g is both an upper bound and a lower bound on the growth of f.
Here's an example algorithm:
def find-minimum(List)
    min = +∞
    foreach value in List
        min = value if min > value
    return min
We wish to evaluate the cost function c(n) where n is the size of the input list. This algorithm will perform one comparison for every item in the list, so c(n) = n.
c(n)/n = 1, which remains bounded as n goes to infinity, so c(n) grows no faster than n. This is what is meant by big-O notation: c(n) = O(n). Conversely, n/c(n) = 1 also remains bounded, so c(n) grows no slower than n. Since it grows neither slower nor faster, it must grow at the same speed. This is what is meant by theta notation: c(n) = Θ(n).
Note that c(n)/n² is also bounded, so c(n) = O(n²) as well — big-O notation is merely an upper bound on the complexity, so any O(n) function is also O(n²), O(n³)...
However, since n²/c(n) = n is not bounded, then c(n) ≠ Θ(n²). This is the interesting property of big-theta notation: it's both an upper bound and a lower bound on the complexity.
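For reference, a minimal runnable rendering of the same loop (a sketch, assuming the input list is a plain int array):

static int findMinimum(int[] list) {
    int min = Integer.MAX_VALUE;          // plays the role of +∞
    for (int value : list) {
        if (min > value) min = value;     // one comparison per element, so c(n) = n
    }
    return min;
}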
Big theta is a tight bound for a function T(n): if T(n) is both O(f(n)) and Omega(f(n)), then Theta(f(n)) is the tight bound for T(n).
In other words, Theta(f(n)) 'describes' a function T(n) if both O [big O] and Omega 'describe' the same T with the same f.
For example, a quicksort [with correct median choices] always takes at most O(n*log(n)) and at least Omega(n*log(n)) steps, so quicksort [with good median choices] is Theta(n*log(n)).
EDIT:
added discussion in comments:
Searching an array is still Theta(n). The Theta function does not indicate the worst/best case, but the behavior of whichever case you are analyzing. E.g., when searching an array, let T(n) = the number of operations in the worst case. Here, obviously T(n) = O(n), but also T(n) >= n/2, because in the worst case you need to iterate over the whole array, so T(n) = Omega(n) and therefore Theta(n) is the asymptotic bound.
From http://en.wikipedia.org/wiki/Big_O_notation#Related_asymptotic_notations, we learn that "Big O" denotes an upper bound, whereas "Big Theta" denotes an upper and lower bound, i.e. in the limit as n goes to infinity:
f(n) = O(g(n)) --> |f(n)| < k*g(n)
f(n) = Theta(g(n)) --> k1*g(n) < f(n) < k2*g(n)
So you cannot infer Big Theta from Big O.
Big-Theta (Θ) notation provides an asymptotic upper and lower bound on the growth rate of an algorithm's running time. To calculate the big-Theta notation of a function f(n), you need to find a non-negative function g(n) such that:
There exist positive constants c1, c2 and n0 such that 0 <= c1 * g(n) <= f(n) <= c2 * g(n) for all n >= n0.
In other words, f(n) and g(n) have the same asymptotic growth rate.
The big-Theta notation for the function f(n) is then written as Θ(g(n)). The purpose of this notation is to provide a rough estimate of the running time, ignoring lower order terms and constant factors.
For example, consider the function f(n) = 2n^2 + 3n + 1. To calculate its big-Theta notation, we can choose g(n) = n^2. Then we need c1, c2 and n0 such that 0 <= c1 * n^2 <= 2n^2 + 3n + 1 <= c2 * n^2 for all n >= n0; for example, c1 = 1, c2 = 6 and n0 = 1 work, since n^2 <= 2n^2 + 3n + 1 <= 6n^2 for all n >= 1. So, f(n) = Θ(n^2).
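As an informal check of those constants (a quick sketch, not a proof), you can verify the sandwich numerically over a range of n:

for (long n = 1; n <= 1000; n++) {
    long f = 2*n*n + 3*n + 1;
    // check c1*n^2 <= f(n) <= c2*n^2 with c1 = 1, c2 = 6
    if (!(n*n <= f && f <= 6*n*n))
        System.out.println("sandwich fails at n = " + n);
}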

Resources