I keep reading how easy it is to estimate Big O: drop the less dominant terms and the constants. My question is: can we do the same for Big Omega?
I know input dependency has nothing to do with asymptotic analysis: we can have an upper bound (Big O) and a lower bound (Big Omega) for best-, average-, and worst-case analyses. But I am confused about how I can quickly estimate the Big Omega of my algorithm, in the worst case for example.
If you could provide examples to clarify my confusion I would truly appreciate it.
Just two examples should enlighten you.
n² + n is O(n²) (it is also O(n³)).
n² + n is Ω(n²) (it is also Ω(n)).
You consider the dominant term.
n + (n cos n)² is O(n²) and Ω(n).
Upper and lower bound: the (n cos n)² term oscillates between 0 and n², so here the tightest upper and lower bounds do not coincide.
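As a quick numeric sanity check (my own sketch, not part of the original answer), you can watch the dominant term take over in Python:

    # The ratio (n^2 + n) / n^2 tends to 1 as n grows, which is why the
    # lower-order term n can be dropped from both the O and Omega bounds.
    for n in (10, 1000, 100000):
        print(n, (n**2 + n) / n**2)
    # ratios: 1.1, then 1.001, then 1.00001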
Here it states that T(n) is O(n^4). But I want to know why it is not O(n^3). It contains n^3, and if we omit 20n and 1 it should be O(n^3), not O(n^4). Why is it like this?
It is in O(n^3), but O(n^4), O(n^5), etc. are supersets of O(n^3), so if something is in O(n^3) then it is also in O(n^100). The best answer, and the one used by convention, is the smallest big O class to which it belongs, which is O(n^3), but it's not the only correct one.
I think you are getting confused between the theta notation and the big O notation.
Theta notation gives a tight (two-sided) bound on the running time of an algorithm, whereas Big O notation gives only an upper bound.
The method you mentioned above is used to calculate the Theta, not the Big O. The Big O of the above-mentioned function could be O(n^3), O(n^4), O(n^5), and so on... all are correct values. But for Theta, only Theta(n^3) is correct.
A function like f(n) = 3n^2 + 2 is O(n^2) because 2 is the biggest exponent in the function. However, the function f(n) = n^3 is not O(n^2) because the biggest exponent is 3, not 2.
So in order to make a guess like this for Big Omega or Big Theta, what should we look for in the function? Can we do something analogous to what we did for Big O notation above?
For example, say the question asks us to find the Big Omega or Big Theta of the function f(n) = 3n^2 + 1. Is f(n) O(n), Big Omega(n), or Big Theta(n)? If I had to take an educated guess on whether this function is Big O(n), I would say no (because the biggest exponent of the function is 2, not 1), and I would then prove this more formally using induction.
So, can we do something analogous to what we did with Big O notation in the first example? What should I look for in the function to guess what the Big Omega and Theta will be, and to determine if the "educated guess" is correct?
Your examples use polynomials, so I will assume polynomials here.
Your polynomial is O(n^k) if k is greater than or equal to the order (the highest exponent) of your polynomial.
Your polynomial is Omega(n^k) if k is less than or equal to the order of your polynomial.
Your polynomial is Theta(n^k) if it is both O(n^k) and Omega(n^k), i.e. exactly when k equals the order.
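For the f(n) = 3n^2 + 1 from the question, here is a quick check of witness constants in Python (my own sketch; a real proof would fix the constants and an n0 and argue for all n, not just a finite range):

    # For n >= 1: 3n^2 <= 3n^2 + 1 <= 4n^2,
    # so f is Omega(n^2) and O(n^2), hence Theta(n^2).
    def f(n):
        return 3 * n**2 + 1

    assert all(3 * n**2 <= f(n) <= 4 * n**2 for n in range(1, 10000))

And since the order of f is 2, the same rules say f is Omega(n) but not O(n), which settles the educated guess in the question.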
So, can we do something analogous to what we did with Big O notation in the first example?
If you're looking for a way to eyeball whether something is Big Omega, Big O, or Big Theta for polynomials, you can use the Theorem of Polynomial Orders (pretty much what Patrick87 said).
Basically, the Theorem of Polynomial Orders lets you look solely at the highest-order term and use it as your bound for Big O, Big Omega, and Big Theta.
What should I look for in the function to guess what the Big Omega and Theta will be, and to determine if the "educated guess" is correct?
Ideally, your function would be a polynomial, as that makes the problem much simpler; but it could also be, for example, a logarithmic or an exponential function.
To determine whether the "educated guess" is correct, you first have to understand what kind of running time you are looking for. Ask yourself: am I looking for the worst-case running time of this algorithm? The best-case running time? Or the running time over all inputs in general?
If you are looking at the worst-case running time, you can prove Big Omega with a single example input (or with the theorem of polynomial orders, if the running-time function is a polynomial); one bad input is enough to show the worst case is at least that slow. However, you must analyze the algorithm over all inputs to prove Big O or Big Theta.
If you are looking at the best-case running time, it is the other way around: a single example input can prove Big O, but Big Omega and Big Theta can only be proved by analyzing the algorithm over all inputs.
Basically, an example on its own can only prove the least informative bound for the best-case or worst-case running time of the algorithm.
For the general running time of an algorithm, you have to make sure the running-time function you have been given holds for all inputs; a single example is never sufficient here. When you don't have a function valid for all inputs, you have to analyze the algorithm itself to prove any of the three (Big O, Big Omega, Big Theta).
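To make the best-case/worst-case distinction concrete, here is a linear search sketch in Python (my own illustration, not from the question):

    def linear_search(items, target):
        """Return the index of target in items, or -1 if it is absent."""
        for i, value in enumerate(items):
            if value == target:
                return i
        return -1

    # Best case: target is the first element -> one comparison, Theta(1).
    # Worst case: target is absent -> n comparisons, Theta(n).

A single absent-target input already forces all n comparisons, which is exactly the "an example proves Big Omega of the worst case" situation described above.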
What is Big O notation and why do we measure complexity of any algorithm in Big O notation?
An example would help.
You should check the Wikipedia article on Big O notation:
In mathematics, big O notation describes the limiting behavior of a function when the argument tends towards a particular value or infinity, usually in terms of simpler functions. It is a member of a larger family of notations that is called Landau notation, Bachmann–Landau notation (after Edmund Landau and Paul Bachmann), or asymptotic notation. In computer science, big O notation is used to classify algorithms by how they respond (e.g., in their processing time or working space requirements) to changes in input size. In analytic number theory, it is used to estimate the "error committed" while replacing the asymptotic size, or asymptotic mean size, of an arithmetical function, by the value, or mean value, it takes at a large finite argument. A famous example is the problem of estimating the remainder term in the prime number theorem.
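Since the question asks for an example, here is a small sketch of my own (not part of the quoted article) showing what "classify algorithms by how they respond to changes in input size" means:

    def total(items):
        """O(n): a single pass, so doubling the input roughly doubles the work."""
        s = 0
        for x in items:
            s += x
        return s

    def count_ordered_pairs(items):
        """O(n^2): nested passes, so doubling the input roughly quadruples the work."""
        n = len(items)
        pairs = 0
        for i in range(n):
            for j in range(n):
                if i != j:
                    pairs += 1
        return pairs

Big O captures exactly that scaling behavior while ignoring constant factors and lower-order terms.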
It seems to me like when people talk about algorithm complexity informally, they talk about big-oh. But in formal situations, I often see big-theta with the occasional big-oh thrown in.
I know mathematically what the difference is between the two, but in English, in what situation would using big-oh when you mean big-theta be incorrect, or vice versa (an example algorithm would be appreciated)?
Bonus: why do people seemingly always use big-oh when talking informally?
Big-O is an upper bound.
Big-Theta is a tight bound, i.e. upper and lower bound.
When people only worry about what's the worst that can happen, big-O is sufficient; i.e. it says that "it can't get much worse than this". The tighter the bound the better, of course, but a tight bound isn't always easy to compute.
See also
Wikipedia/Big O Notation
Related questions
What is the difference between Θ(n) and O(n)?
The following quote from Wikipedia also sheds some light:
Informally, especially in computer science, the Big O notation often is permitted to be somewhat abused to describe an asymptotic tight bound where using Big Theta notation might be more factually appropriate in a given context.
For example, when considering a function T(n) = 73n^3 + 22n^2 + 58, all of the following are generally acceptable, but tightness of bound (i.e., bullets 2 and 3 below) is usually strongly preferred over laxness of bound (i.e., bullet 1 below).
T(n) = O(n^100), which is identical to T(n) ∈ O(n^100)
T(n) = O(n^3), which is identical to T(n) ∈ O(n^3)
T(n) = Θ(n^3), which is identical to T(n) ∈ Θ(n^3)
The equivalent English statements are respectively:
T(n) grows asymptotically no faster than n^100
T(n) grows asymptotically no faster than n^3
T(n) grows asymptotically as fast as n^3.
So while all three statements are true, progressively more information is contained in each. In some fields, however, the Big O notation (bullet 2 in the lists above) would be used more commonly than the Big Theta notation (bullet 3 in the lists above), because functions that grow more slowly are more desirable.
I'm a mathematician and I have seen and needed big-O O(n), big-Theta Θ(n), and big-Omega Ω(n) notation time and again, and not just for complexity of algorithms. As people said, big-Theta is a two-sided bound. Strictly speaking, you should use it when you want to explain that that is how well an algorithm can do, and that either that algorithm can't do better or that no algorithm can do better. For instance, if you say "Sorting requires Θ(n log n) comparisons for worst-case input", then you're explaining that there is a sorting algorithm that uses O(n log n) comparisons for any input; and that for every sorting algorithm, there is an input that forces it to make Ω(n log n) comparisons.
Now, one narrow reason that people use O instead of Ω is to drop disclaimers about worst or average cases. If you say "sorting requires O(n log n) comparisons", then the statement still holds true for favorable input. Another narrow reason is that even if one algorithm to do X takes time Θ(f(n)), another algorithm might do better, so you can only say that the complexity of X itself is O(f(n)).
However, there is a broader reason that people informally use O. At a human level, it's a pain to always make two-sided statements when the converse side is "obvious" from context. Since I'm a mathematician, I would ideally always be careful to say "I will take an umbrella if and only if it rains" or "I can juggle 4 balls but not 5", instead of "I will take an umbrella if it rains" or "I can juggle 4 balls". But the other halves of such statements are often obviously intended or obviously not intended. It's just human nature to be sloppy about the obvious. It's confusing to split hairs.
Unfortunately, in a rigorous area such as math or theory of algorithms, it's also confusing not to split hairs. People will inevitably say O when they should have said Ω or Θ. Skipping details because they're "obvious" always leads to misunderstandings. There is no solution for that.
Because my keyboard has an O key.
It does not have a Θ or an Ω key.
I suspect most people are similarly lazy and use O when they mean Θ because it's easier to type.
One reason why big O gets used so much is kind of because it gets used so much. A lot of people see the notation and think they know what it means, then use it (wrongly) themselves. This happens a lot with programmers whose formal education only went so far - I was once guilty myself.
Another is because it's easier to type a big O on most non-Greek keyboards than a big theta.
But I think a lot of it is because of a kind of paranoia. I worked in defence-related programming for a bit (and knew very little about algorithm analysis at the time). In that scenario, the worst-case performance is always what people are interested in, because that worst case might just happen at the wrong time. It doesn't matter if the actual probability of that happening is, say, far less than the probability of every member of a ship's crew suffering a sudden fluke heart attack at the same moment - it could still happen.
Though of course a lot of algorithms have their worst case in very common circumstances - the classic example being inserting in-order into a binary tree to get what's effectively a singly-linked list. A "real" assessment of average performance needs to take into account the relative frequency of different kinds of input.
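To illustrate that classic example (a sketch of my own, not part of the original answer):

    class Node:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None

    def insert(root, key):
        """Insert key into an unbalanced binary search tree."""
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        else:
            root.right = insert(root.right, key)
        return root

    # Inserting already-sorted keys sends every insert down the right spine:
    # the "tree" degenerates into a chain of depth n, so building it costs
    # O(n^2) rather than the O(n log n) you'd expect on shuffled input.
    root = None
    for key in range(1, 101):
        root = insert(root, key)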
Bonus: why do people seemingly always use big-oh when talking informally?
Because in big-oh, this loop:
    count = 0
    for i in range(1, n + 1):
        count += 1  # some O(1) work that doesn't change n or i and doesn't jump
is O(n), O(n^2), O(n^3), O(n^1423424). Big-oh is just an upper bound, which makes it easier to calculate because you don't have to find a tight bound.
The above loop, however, is only big-theta(n).
What's the complexity of the sieve of Eratosthenes? If you said O(n log n) you wouldn't be wrong, but it wouldn't be the best answer either. If you said big-theta(n log n), you would be wrong, because the sieve's running time is actually big-theta(n log log n).
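For reference, a minimal sieve sketch in Python (my own illustration; the n log log n figure comes from the standard sum of n/p over primes p <= n):

    def sieve(n):
        """Return all primes <= n via the sieve of Eratosthenes."""
        is_prime = [True] * (n + 1)
        is_prime[0:2] = [False, False]
        for p in range(2, int(n ** 0.5) + 1):
            if is_prime[p]:
                # Crossing off the multiples of p costs about n/p steps; summed
                # over all primes p <= n this gives Theta(n log log n) in total.
                for multiple in range(p * p, n + 1, p):
                    is_prime[multiple] = False
        return [i for i in range(2, n + 1) if is_prime[i]]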
Because there are algorithms whose best case is quicker than their worst case, so the overall running time is technically only a big O, not a big Theta.
Big O is an upper bound; big Theta is a tight bound (and it induces an equivalence relation on growth rates).
There are a lot of good answers here, but I noticed something is missing. Most answers seem to imply that the reason people use Big O over Big Theta is difficulty, and in some cases that may be true: a proof that leads to a Big Theta result is often far more involved than one that yields Big O. But I do not believe that is the main reason for choosing one analysis over the other.
When talking about complexity we can say many things. Big O time complexity just tells us what an algorithm is guaranteed to run within: an upper bound. Big Omega, far less often discussed, tells us the minimum time an algorithm is guaranteed to take: a lower bound. Big Theta tells us that both of these bounds are in fact the same for a given analysis; the running time can then deviate only by a factor asymptotically smaller than our complexity. Many algorithms simply do not have upper and lower bounds that happen to be asymptotically equivalent.
So, as to your question: using Big O in place of Big Theta is technically always valid, while using Big Theta in place of Big O is only valid when Big O and Big Omega happen to be equal. For instance, insertion sort has a worst-case time complexity of Big O of n^2, but its best-case scenario puts its Big Omega at n. In this case it would not be correct to say that its time complexity is Big Theta of n or of n^2, as they are two different bounds and should be treated as such.
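Here is an insertion sort sketch (my own illustration) showing where the two bounds come from:

    def insertion_sort(a):
        """Sort the list a in place and return it."""
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            # On already-sorted input this inner loop never runs, so the whole
            # sort does about n comparisons: the Omega(n) best case. On
            # reverse-sorted input it runs i times for each i, so the sort
            # does about n^2/2 comparisons: the O(n^2) worst case.
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a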
I have seen Big Theta, and I'm pretty sure I was taught the difference in school. I had to look it up though. This is what Wikipedia says:
Big O is the most commonly used asymptotic notation for comparing functions, although in many cases Big O may be replaced with Big Theta Θ for asymptotically tighter bounds.
Source: Big O Notation#Related asymptotic notation
I don't know why people use Big-O when talking formally. Maybe it's because most people are more familiar with Big-O than Big-Theta? I had forgotten that Big-Theta even existed until you reminded me. Although now that my memory is refreshed, I may end up using it in conversation. :)