I heard somewhere that, for example, to say a function is big Theta of n it has to have complexity n in both its best and worst cases, so linear search would not be big Theta of n because its best case is O(1). But I doubt this information. So, given some code you want to analyse, when can you say that the code is big Theta of some function?
To understand big Theta, we should first review big O and big Omega. Big O gives an upper bound on the running time: the algorithm never performs asymptotically worse than that. Big Omega gives a lower bound: it never performs asymptotically better than that. Big Theta applies when the big O and big Omega bounds coincide, i.e. the running time is bounded above and below by the same time complexity, which happens when the worst case and the best case grow at the same rate. For example, when multiplying two n x n matrices with the standard (schoolbook) algorithm, the running time is O(n^3). It is also Ω(n^3). This is because in both the worst and best cases you multiply every entry of both matrices; you cannot "end early". Because the running time is bounded above and below by n^3, the algorithm is Θ(n^3).
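To make that concrete, here is a minimal sketch (my own illustration, not part of the answer above) of schoolbook multiplication of two n x n matrices; the triple loop performs n·n·n multiplications on every input, which is why the best and worst cases coincide.

```python
# Sketch of schoolbook multiplication of two n x n matrices A and B.
# The triply nested loop runs exactly n * n * n times for every input,
# so the best case and the worst case are both ~n^3 operations: Theta(n^3).
def matrix_multiply(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):          # n iterations
        for j in range(n):      # n iterations
            for k in range(n):  # n iterations
                C[i][j] += A[i][k] * B[k][j]
    return C

# Example: multiplying two 2 x 2 matrices.
print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```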
I saw in a video (https://www.youtube.com/watch?v=A03oI0znAoc&t=470s) that if f(n) = 2n + 3, then its big O is O(n).
Now my question is: if I am a developer and I am given O(n) as the upper bound of f(n), how do I know what the exact upper bound is? In 2n + 3 we drop the 2 (because it is a constant factor) and the 3 (because it is also a constant). So if I take my function at n = 1, I can't say that g(n) = n is an upper bound there:
g(1) = 1 cannot be an upper bound for f(1) = 5. I find this hard to understand.
I know this is a partial (and probably wrong) answer.
From Wikipedia,
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
In your example,
f(n) = 2n+3 has the same growth rate as g(n) = n.
If you plot both functions, you will see that they grow linearly at the same rate; as n -> infinity, the ratio between them stays bounded by a constant, which is all that big O cares about.
In Big O notation, f(n) = 2n+3 when n=1 means nothing; you need to look at the trend, not discrete values.
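As a quick illustrative check (my own addition, with the witnesses c = 3 and n0 = 3 chosen for this example), the definition of big O only asks for some constant c and some threshold n0 such that 2n + 3 <= c*n for all n >= n0:

```python
# Illustrative check that f(n) = 2n + 3 is O(n):
# pick the witnesses c = 3 and n0 = 3, then c*n >= f(n) for every n >= n0.
def f(n):
    return 2 * n + 3

c, n0 = 3, 3
assert all(c * n >= f(n) for n in range(n0, 10_000))
print("2n + 3 <= 3n holds for all sampled n >= 3")
```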
As a developer, you will use big O as a first indication when deciding which algorithm to use. If you have an algorithm that is, say, O(n^2), you will try to find out whether there is another one that is, say, O(n). If the problem is inherently O(n^2), then the big O notation will not provide further help and you will need other criteria for your decision. However, if the problem is not inherently O(n^2) but O(n), you should discard any algorithm that happens to be O(n^2) and find an O(n) one.
So the big O notation helps you classify the problem and then look for an algorithm whose complexity has the same big O. If you are lucky enough to find two or more algorithms with that complexity, you will need to weigh them using different criteria.
A function like f(n) = 3n^2 + 2 is O(n^2) because 2 is the largest exponent in the function. However, the function f(n) = n^3 is not O(n^2) because its largest exponent is 3, not 2.
So in order to make a guess like this on Big Omega or Big Theta, what should we look for in the function? Can we do something analogous to what we did for Big O notation above?
For example, let's say the question asks us to find the Big Omega or Big Theta of the function f(n) = 3n^2 + 1. Is f(n) O(n), Big Omega(n), or Big Theta(n)? If I were to take an educated guess on whether this function is Big O(n), I would say no (because the largest exponent of the function is 2, not 1). I would then prove this more formally using induction.
So, can we do something analogous to what we did with Big O notation in the first example? What should I look for in the function to guess what the Big Omega and Theta will be, and to determine if the "educated guess" is correct?
Your example uses polynomials, so I will assume that (a small illustrative sketch of these rules follows after the list below).
your polynomial is O(n^k) if k is greater than or equal to the order of your polynomial.
your polynomial is Omega(n^k) if k is less than or equal to the order of your polynomial.
your polynomial is Theta(n^k) if it is both O(n^k) and Omega(n^k).
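Here is the small sketch promised above, a hypothetical helper (the name polynomial_bounds is mine) that applies the three rules by comparing k with the order of the polynomial:

```python
# Hypothetical helper illustrating the rules above: given the order (degree) of a
# polynomial and a candidate exponent k, report which bounds n^k provides.
def polynomial_bounds(degree, k):
    bounds = []
    if k >= degree:
        bounds.append(f"O(n^{k})")       # k at least the order -> upper bound
    if k <= degree:
        bounds.append(f"Omega(n^{k})")   # k at most the order  -> lower bound
    if k == degree:
        bounds.append(f"Theta(n^{k})")   # both                 -> tight bound
    return bounds

print(polynomial_bounds(degree=2, k=2))  # f(n) = 3n^2 + 1: O, Omega and Theta of n^2
print(polynomial_bounds(degree=2, k=1))  # n^1 is only a lower bound: Omega(n)
print(polynomial_bounds(degree=2, k=3))  # n^3 is only an upper bound: O(n^3)
```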
So, can we do something analogous to what we did with Big O notation in the first example?
If you're looking for something that allows you to eyeball if something is Big Omega, Big O, or Big Theta for polynomials, you can use the Theorem of Polynomial Orders (pretty much what Patrick87 said).
Basically, the Theorem of Polynomial Orders allows you to solely look at the highest order term and use that as your desired bound for Big O, Big Omega, and Big Theta.
What should I look for in the function to guess what the Big Omega and Theta will be, and to determine if the "educated guess" is correct?
Ideally, your function would be a polynomial, as that would make the problem much simpler. But it can also be, for example, a logarithmic function or an exponential function.
To determine if the "educated guess" is correct, you have to first understand what kind of runtime you are looking for. Ask yourself: am I looking for the worst case running time for this algorithm? or am I looking for the best case running time for this algorithm? or am I looking for the general running time of the algorithm?
If you are looking at the worst-case running time of the algorithm, you can simply prove Big Omega using the theorem of polynomial order (if it's a polynomial function) or through an example. However, you must analyze the algorithm to be able to prove Big O and Big Theta.
If you are looking at the best-case running time of the algorithm, you can prove Big O using an example or through the theorem of polynomial order (if it's a polynomial). However, Big Omega and Big Theta can only be proved by analyzing the algorithm.
In other words, with a single example input you can only prove the less informative bound in each case: a lower bound (Big Omega) on the worst-case running time, or an upper bound (Big O) on the best-case running time.
For proving the general running time of an algorithm, you have to make sure that the function for the algorithm's running time you have been given is for all input - a single example is not sufficient in this case. When you don't have a function for all input, you have to analyze the algorithm to prove any of the three (Big O, Big Omega, Big Theta) for all inputs for the algorithm.
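As one concrete illustration of why best-case and worst-case analyses must be kept apart (my own sketch, going back to the linear-search example from the first question), counting comparisons shows a best case of 1 comparison and a worst case of n comparisons, so the algorithm as a whole is O(n) and Omega(1) but has no single Theta bound over all inputs:

```python
# Linear search with a comparison counter, to contrast best and worst cases.
def linear_search(items, target):
    comparisons = 0
    for index, value in enumerate(items):
        comparisons += 1
        if value == target:
            return index, comparisons
    return -1, comparisons

data = list(range(1000))
print(linear_search(data, 0))    # best case: target is first  -> 1 comparison
print(linear_search(data, 999))  # worst case: target is last  -> 1000 comparisons
```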
This is the graph I am expected to analyze. I have to find the gradient (slope) and from that deduce the time complexity.
I have found that the slope is approximately 1.91. If that is true, what should I do next?
The quotient of the logarithms is approximately 2. What does that mean once the logarithms are removed?
log(T(n)) / log(n) = 2
log(T(n)) = 2 * log(n)
log(T(n)) = log(n²)
T(n) = n²
T(n) denotes the algorithm's time complexity. Of course we are talking in asymptotic terms, i.e. using Big O notation we say that
T(n) ∈ O(n²).
You measured a value of about 2 for large inputs and you are assuming it will remain the same for all larger ones.
You can read more at a page by one of the tutors at University of Toronto. It uses basic calculus to explain how it works. Still, the idea behind all this is that logarithms make multiplicative constants from constant exponents and additive constants from multiplicative constants.
Also regarding interpretation of the plot, a similar question popped up here on Stack Overflow recently: Log-log plot/graph of algorithm time complexity
But note that this is really just an estimation of time complexity. You cannot prove time complexity of an algorithm by just running it on a finite set of inputs. This method can give you a good guess on what to try to prove using analysis of the algorithm, though.
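For what it's worth, here is a minimal sketch (my own, using made-up timing data) of how such a slope can be estimated numerically with a least-squares fit on the log-log values; a slope near 2 suggests roughly quadratic growth.

```python
import math

# Hypothetical measurements: input sizes and measured running times in seconds
# (made-up data that grows roughly quadratically).
sizes = [1000, 2000, 4000, 8000, 16000]
times = [0.012, 0.049, 0.195, 0.801, 3.190]

# Least-squares slope of log(T(n)) against log(n).
xs = [math.log(n) for n in sizes]
ys = [math.log(t) for t in times]
x_mean = sum(xs) / len(xs)
y_mean = sum(ys) / len(ys)
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))

print(f"estimated exponent: {slope:.2f}")  # close to 2, so T(n) looks roughly quadratic
```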
Big Omega is supposed to be the opposite of Big O, yet it seems they can always have the same value, because by definition Big O means:
there is a g(x) such that c*g(x) is greater than or equal to f(x),
and Big Omega means:
there is a g(x) such that c*g(x) is less than or equal to f(x).
The only thing that changes is the value of c. If c can be an arbitrary value (a value that we choose so that the inequality holds), then Big Omega and Big O will be the same. So what's the point of those two? What purpose do they serve?
Big O means the function is bounded above (up to a constant factor) asymptotically, while Big Omega means it is bounded below (up to a constant factor) asymptotically.
Mathematically speaking, f(x) = O(g(x)) (big O) means that the growth rate of f(x) is asymptotically less than or equal to the growth rate of g(x).
f(x) = Ω(g(x)) (big Omega) means that the growth rate of f(x) is asymptotically greater than or equal to the growth rate of g(x).
See the Wiki reference below:
Big O notation
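To see that the two bounds are genuinely different tools (my own sketch, not part of the answer above), count the comparisons insertion sort makes on a best-case and a worst-case input: its running time over all inputs is O(n^2) and Ω(n), and no single function bounds it both above and below up to constant factors.

```python
# Insertion sort with a comparison counter: its running time over all inputs
# is O(n^2) (already-reversed input) and Omega(n) (already-sorted input),
# so the upper and lower bounds are genuinely different functions here.
def insertion_sort_comparisons(items):
    a = list(items)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1          # one comparison per test against a[j - 1]
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return comparisons

n = 1000
print(insertion_sort_comparisons(range(n)))          # sorted input:   n - 1 comparisons
print(insertion_sort_comparisons(range(n, 0, -1)))   # reversed input: ~n^2 / 2 comparisons
```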
Sometimes you want to prove an upper bound (Big Oh), some other times you want to prove a lower bound (Big Omega).
http://en.wikipedia.org/wiki/Big_O_notation
You're correct when you assert that such a g exists, but that doesn't mean it's known.
In addition to talking about the complexity of algorithms you can also talk about the complexity of problems.
It's known, for example, that multiplication is Ω(n) and O(n log(n) log(log(n))) in the number of bits, but a precise characterization (denoted by Θ) is unknown. It's the same story with integer factorization and NP problems in general, which is what the whole P versus NP question is about.
Furthermore, there are algorithms, some even proven to be optimal, whose exact complexity is unknown. See http://en.wikipedia.org/wiki/User:Erel_Segal/Optimal_algorithms_with_unknown_runtime_complexity
I understand it in theory, I guess, but what I'm having trouble grasping is the application of the three.
In school, we always used Big O to denote the complexity of an algorithm. Bubble sort was O(n^2) for example.
Now, after reading some more theory, I get that Big O is not the only measure; there are at least two other interesting ones.
But here's my question:
Big O is the upper-bound, Big Omega is the lower bound, and Big Theta is a mix of the two. But what does that mean conceptually? I understand what it means on a graph; I've seen a million examples of that. But what does it mean for algorithm complexity? How does an "upper bound" or a "lower bound" mix with that?
I guess I just don't get its application. I understand that if, after some value n_0, f(x) is no greater than some constant c times g(x), then f(x) is considered O(g(x)). But what does that mean practically? Why would we be multiplying g(x) by some value c? Hell, I thought with Big O notation multiples didn't matter.
The big O notation, and its relatives, the big Theta, the big Omega, the small o and the small omega are ways of saying something about how a function behaves at a limit point (for example, when approaching infinity, but also when approaching 0, etc.) without saying much else about the function. They are commonly used to describe running space and time of algorithms, but can also be seen in other areas of mathematics regarding asymptotic behavior.
The semi-intuitive definition is as follows:
A function g(x) is said to be O(f(x)) if "from some point on", g(x) is lower than c*f(x), where c is some constant.
The other definitions are similar: Theta demands that g(x) be between two constant multiples of f(x), Omega demands g(x) > c*f(x) from some point on, and the small versions (little o and little omega) demand that the corresponding inequality holds for every positive constant c.
But why is it interesting to say, for example, that an algorithm has run time of O(n^2)?
It's interesting mainly because, in theoretical computer science, we are most interested in how algorithms behave for large inputs. This is true because on small inputs algorithm run times can vary greatly depending on implementation, compilation, hardware, and other such things that are not really interesting when analyzing an algorithm theoretically.
The rate of growth, however, usually depends on the nature of the algorithm, and to improve it you need deeper insights on the problem you're trying to solve. This is the case, for example, with sorting algorithms, where you can get a simple algorithm (Bubble Sort) to run in O(n^2), but to improve this to O(n log n) you need a truly new idea, such as that introduced in Merge Sort or Heap Sort.
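As a rough, hedged illustration of that gap (my own sketch; it counts comparisons rather than wall-clock time), the following compares how the work done by bubble sort and by merge sort grows as the input doubles:

```python
import random

# Count comparisons made by bubble sort (quadratic growth).
def bubble_sort_comparisons(items):
    a = list(items)
    comparisons = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return comparisons

# Count comparisons made by merge sort (n log n growth); returns (sorted, count).
def merge_sort_comparisons(items):
    a = list(items)
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, left_count = merge_sort_comparisons(a[:mid])
    right, right_count = merge_sort_comparisons(a[mid:])
    merged, comparisons = [], left_count + right_count
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comparisons

for n in (1_000, 2_000, 4_000):
    data = [random.random() for _ in range(n)]
    print(n, bubble_sort_comparisons(data), merge_sort_comparisons(data)[1])
# Doubling n roughly quadruples the bubble sort count, but only slightly more
# than doubles the merge sort count.
```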
On the other hand, if you have an algorithm that runs in exactly 5n seconds and another that runs in 1000n seconds (which is the difference between a long yawn and a lunch break for n=3, for example), then by the time you get to n=1000000000000 the constant factor matters far less than the growth rate. If you have an algorithm that takes O(log n), though, you'd have to wait log(1000000000000)=12 seconds, perhaps multiplied by some constant, instead of the roughly 158,000 years the 5n-second algorithm would need, which, no matter how big the constant is, is a completely different scale.
I hope this makes things a little clearer. Good luck with your studies!