Here it states that T(n) is O(n^4), but I want to know why it is not O(n^3). It contains an n^3 term, and if we omit 20n and 1 it should be O(n^3), not O(n^4). Why is it like this?
It is in O(n^3), but O(n^4), O(n^5), etc. are supersets of O(n^3), so if something is in O(n^3) then it is also in O(n^100). The best answer, and the one used by convention, is the smallest big O class to which it belongs, which is O(n^3), but it's not the only one.
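As a quick numeric sanity check (a sketch, assuming the T(n) = n^3 + 20n + 1 from the question), you can test both memberships: a single constant already makes n^3 dominate T(n), and the looser O(n^4) bound holds as well:

```python
# T(n) = n^3 + 20n + 1 is in O(n^3): T(n) <= c * n^3 for some constant c
# and all n >= n0. It is *also* in O(n^4), since O(n^4) contains
# everything that O(n^3) contains.
def T(n):
    return n**3 + 20*n + 1

assert all(T(n) <= 2 * n**3 for n in range(5, 10_000))  # O(n^3): c=2, n0=5
assert all(T(n) <= n**4 for n in range(4, 10_000))      # O(n^4): c=1, n0=4
print("Both bounds hold for all tested n")
```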
I think you are getting confused between the theta notation and the big O notation.
Theta notation gives a tight, two-sided estimate of the running time of an algorithm, whereas Big O notation gives only an upper bound (which is typically quoted for the worst case).
The method you mentioned above is used to calculate the theta, not big O. The big O of the above-mentioned function could be O(n^3), O(n^4), O(n^5), O(n^6) and so on... all are correct values. But for theta, only Theta(n^3) is correct.
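To make the theta claim concrete (a sketch with one choice of constants, again assuming T(n) = n^3 + 20n + 1): theta requires a sandwich from both sides, not just an upper bound:

```python
# Theta(n^3) is a two-sided (tight) bound: c1*n^3 <= T(n) <= c2*n^3 for
# all n >= n0. Here c1 = 1, c2 = 2, n0 = 5 work, so T(n) is Theta(n^3).
def T(n):
    return n**3 + 20*n + 1

assert all(n**3 <= T(n) <= 2 * n**3 for n in range(5, 10_000))
```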
I know what the Big O, Theta and Omega notations are, but, for example, if my algorithm is a for loop inside another for loop, each looping n times, my complexity would be O(n²). But why O(n²) instead of Θ(n²)? Since the complexity IS in fact O(n²) and Ω(n²), it is also Θ(n²), and I just can't see any reason not to use Θ(n²) instead of O(n²), since Θ(n²) restricts my complexity with both an upper and a lower bound, not only an upper bound as in the case of O(n²).
If f(n) = Θ(g(n)) then f(n) = O(g(n)). This is because Θ(g(n)) ⊆ O(g(n)).
In your specific case, if a loop runs exactly n^2 times, the time complexity is in both O(n^2) and Θ(n^2).
The main reason why big-O is typically enough is that we are more interested in the worst case time complexity when analyzing the algorithm's performance, and knowing the worst case scenario is usually enough.
Also, it is not always possible to find a tight bound.
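As a minimal sketch of the nested-loop case from the question (the constant-time loop body is hypothetical), counting iterations shows the exact n² behaviour that makes the bound tight:

```python
# A doubly nested loop over n items: the body executes exactly n * n
# times, so the running time is Theta(n^2) -- and therefore also O(n^2).
def count_pairs(n):
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1  # stand-in for constant-time work
    return count

for n in (10, 100, 1000):
    assert count_pairs(n) == n * n
```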
I saw in a video (https://www.youtube.com/watch?v=A03oI0znAoc&t=470s) that if f(n) = 2n + 3, then its Big O is O(n).
Now my question is: if I am a developer and I am given O(n) as the upper bound of f(n), how will I know what the exact upper bound is? Because in 2n + 3 we remove the 2 (as it is a constant factor) and the 3 (because it is also a constant). So if I evaluate my function at n = 1, I get f(1) = 5, while g(1) = 1, and I can't say g(n) is an upper bound there.
1 cannot be an upper bound for 5. I find this hard to understand.
I know this is a partial (and probably wrong) answer.
From Wikipedia,
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
In your example,
f(n) = 2n + 3 has the same growth rate as g(n) = n
If you plot the functions, you will see that both have the same linear growth; and as n -> infinity, the ratio between the two approaches a constant (the difference becomes negligible relative to n).
In Big O notation, evaluating f(n) = 2n + 3 at n = 1 means nothing; you need to look at the trend, not discrete values.
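A tiny sketch of that trend (the sample values are chosen arbitrarily):

```python
# The ratio (2n + 3) / n approaches the constant 2 as n grows, which is
# why f(n) = 2n + 3 and g(n) = n sit in the same O(n) class even though
# f(1) = 5 exceeds g(1) = 1.
for n in (1, 10, 100, 10_000, 1_000_000):
    print(f"n={n:>9}  (2n+3)/n = {(2*n + 3) / n:.6f}")
# n=1 prints 5.000000; n=1,000,000 prints 2.000003
```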
As a developer, you will consider big-O as a first indication for deciding which algorithm to use. If you have an algorithm that is, say, O(n^2), you will try to find out whether there is another one that is, say, O(n). If the problem is inherently O(n^2), then the big-O notation will not provide further help and you will need to use other criteria for your decision. However, if the problem is not inherently O(n^2), but O(n), you should discard any algorithm that happens to be O(n^2) and find an O(n) one.
So, the big-O notation helps you classify the problem and then look for an algorithm whose complexity has the same big-O. If you are lucky enough to find two or more algorithms with this complexity, you will need to compare them using a different criterion.
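As an illustration of that decision process (duplicate detection is a hypothetical task, not from the question): the same problem solved in O(n^2) and in O(n), where the big-O comparison alone tells you which to keep:

```python
# O(n^2): compare every pair of elements with two nested loops.
def has_duplicates_quadratic(items):
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

# O(n): a single pass using a set (assumes the items are hashable).
def has_duplicates_linear(items):
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

assert has_duplicates_quadratic([1, 2, 3, 2]) == has_duplicates_linear([1, 2, 3, 2]) == True
```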
When I took the Algorithms course on Coursera, I met a question about Big-O notation that says O(n2) = O(n). I checked some other answers on Stack Overflow, and some posts said that Big O notation means the "upper bound". Based on this definition, could I say O(n) = O(2^n) because O(n) <= O(2^n)?
In some cases, polynomials are considered pretty much equivalent when compared against anything exponential.
But most probably the 2 was a scalar: O(2*N) is the same as O(N), because constant factors are ignored in big O notation.
In any case, no, O(n) and O(2^n) are not equal; O(n) is, however, a subset of O(2^n) (and of any higher-order class).
It would be inappropriate to say O(n) equals O(n^2). Rather, say that O(n) falls within the bounds of O(n^2).
A function like f(n) = 3n^2 + 2 is O(n^2) because n^2 is the highest-order term in the function. However, the function f(n) = n^3 is not O(n^2), because its biggest exponent is 3, not 2.
So in order to make a guess like this on Big Omega or Big Theta, what should we look for in the function? Can we do something analogous to what we did for Big O notation above?
For example, let's say the question asks us to find the Big Omega or Big Theta of the function f(n) = 3n^2 + 1. Is f(n) O(n), Big Omega(n), or Big Theta(n)? If I were to take an educated guess on whether this function is Big O(n), I would say no (because the biggest exponent of the function is 2, not 1), and I would prove this more formally using induction.
So, can we do something analogous to what we did with Big O notation in the first example? What should I look for in the function to guess what the Big Omega and Theta will be, and to determine if the "educated guess" is correct?
Your example uses polynomials, so I will assume that.
- Your polynomial is O(n^k) if k is greater than or equal to the order of your polynomial.
- Your polynomial is Omega(n^k) if k is less than or equal to the order of your polynomial.
- Your polynomial is Theta(n^k) if it is both O(n^k) and Omega(n^k).
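A small sketch of these three rules in code (the helper name is made up for illustration; "order" means the polynomial's highest exponent):

```python
# Compare a candidate exponent k against the polynomial's order (its
# highest exponent) to read off all three bounds, per the rules above.
def polynomial_bounds(order, k):
    return {
        "O(n^k)":     k >= order,
        "Omega(n^k)": k <= order,
        "Theta(n^k)": k == order,
    }

# f(n) = 3n^2 + 1 from the question has order 2:
print(polynomial_bounds(2, k=1))  # not O(n), but Omega(n)
print(polynomial_bounds(2, k=2))  # O, Omega, and Theta of n^2
print(polynomial_bounds(2, k=3))  # O(n^3), but not Omega(n^3)
```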
"So, can we do something analogous to what we did with Big O notation in the first example?"
If you're looking for something that allows you to eyeball if something is Big Omega, Big O, or Big Theta for polynomials, you can use the Theorem of Polynomial Orders (pretty much what Patrick87 said).
Basically, the Theorem of Polynomial Orders allows you to look solely at the highest-order term and use that as your desired bound for Big O, Big Omega, and Big Theta.
"What should I look for in the function to guess what the Big Omega and Theta will be, and to determine if the 'educated guess' is correct?"
Ideally, your function would be a polynomial, as that would make the problem much simpler. But it can also be a logarithmic or an exponential function.
To determine if the "educated guess" is correct, you first have to understand what kind of running time you are looking for. Ask yourself: am I looking for the worst-case running time of this algorithm? The best-case running time? Or the general running time?
If you are looking at the worst-case running time of the algorithm, you can prove Big Omega through a single example (or via the theorem of polynomial orders, if it's a polynomial function). However, you must analyze the algorithm over all inputs to be able to prove Big O and Big Theta.
If you are looking at the best-case running time of the algorithm, you can prove Big O through a single example (or via the theorem of polynomial orders, if it's a polynomial). However, Big Omega and Big Theta can only be proved by analyzing the algorithm.
Basically, with a single example you can only prove one side of the bound, the less informative one, for the best-case and worst-case running times of the algorithm.
For proving the general running time of an algorithm, you have to make sure that the running-time function you have been given holds for all inputs; a single example is not sufficient in this case. When you don't have a function covering all inputs, you have to analyze the algorithm itself to prove any of the three (Big O, Big Omega, Big Theta) for all inputs to the algorithm.
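To make the "single example proves one side" idea concrete (a sketch using insertion sort, which is not mentioned in the question, purely for illustration): one reversed input already forces ~n²/2 shifts, which proves the worst case is Big Omega(n²); the matching Big O(n²) needs an argument over all inputs:

```python
# Insertion sort instrumented with a shift counter. Feeding it a single
# reversed list performs exactly n*(n-1)/2 shifts, which by itself
# proves the worst-case running time is Omega(n^2). Proving O(n^2)
# requires reasoning about every possible input, not one example.
def insertion_sort_shifts(data):
    a, shifts = list(data), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
            shifts += 1
        a[j + 1] = key
    return shifts

n = 200
assert insertion_sort_shifts(range(n, 0, -1)) == n * (n - 1) // 2
```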
What is a plain English explanation of Theta notation? With as little formal definition as possible and simple mathematics.
How is Theta notation different from Big O notation? Could anyone explain in plain English?
How are they used in algorithm analysis? I am confused.
If an algorithm's run time is Big Theta(f(n)), it is asymptotically bounded above and below by f(n). Big O is the same except that the bound is only above.
Intuitively, Big O(f(n)) says "we can be sure that, ignoring constant factors and terms, the run time never exceeds f(n)." In rough words, if you think of run time as "bad", then Big O is a worst case. Big Theta(f(n)) says "we can be sure that, ignoring constant factors and terms, the run time always varies as f(n)." In other words, Big Theta is a known tight bound: it's both worst case and best case.
A final try at intuition: Big O is "one-sided." An O(n) run time is also O(n^2) and O(2^n). This is not true with Big Theta. If you have an algorithm run time that's O(n), then you already have a proof that it's not Big Theta(n^2). It may or may not be Big Theta(n).
An example is comparison sorting. Information theory tells us sorting requires at least log2(n!) ≈ n log n comparisons, and we have actually invented O(n log n) algorithms (where n is the number of elements), so comparison sorting is Big Theta(n log n).
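A sketch that checks this empirically (merge sort stands in here for "an O(n log n) algorithm"; the comparison counter is added purely for illustration):

```python
import math
import random

# Merge sort with a comparison counter, to compare against n*log2(n).
def merge_sort(a, counter):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid], counter), merge_sort(a[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1  # one element-to-element comparison
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

for n in (1_000, 10_000):
    counter = [0]
    merge_sort([random.random() for _ in range(n)], counter)
    print(n, counter[0], round(n * math.log2(n)))
# the counts stay within a constant factor of n*log2(n)
```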
I have always wanted to put this down in simple words. Here is my try.
If an algorithm's time or space complexity is expressed in
Big O : Ex O(n) - means n is the upper limit. The actual growth could be less than or equal to n.
Big Omega : Ex Ω(n) - means n is the lower limit. The actual growth could be equal to or more than n.
Theta : Ex Θ(n) - means n is both the upper limit and the lower limit; it is the only possible growth rate.