Big O Notation Explained - big-o

When I took the Algorithms course on Coursera, I came across a question about Big-O notation that says O(n^2) = O(n). I checked some other answers on Stack Overflow, and some posts said that Big-O notation means the "upper bound". Based on this definition: could I say O(n) = O(2^n) because O(n) <= O(2^n)?

In some cases polynomials are considered pretty much equivalent when set against anything exponential.
But most probably the 2 was a scalar: O(2*n) is the same as O(n), because constant factors are ignored in big-O notation.
In any case, no: O(n) and O(2^n) are not equal. O(n) is, however, a subset of O(2^n), as it is of any higher-order class.

It would be inappropriate to say O(n) equals O(n^2). Rather, O(n) falls within the bounds of O(n^2).
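To see that "falls within the bounds of" relationship concretely, here is a minimal Python sketch (the helper name, constant c, and sample functions are ours, chosen for illustration) that tests the defining inequality f(n) <= c*g(n) at geometrically growing values of n:

    # Empirical check of the big-O inequality f(n) <= c*g(n) for n >= n0.
    # The constant c and threshold n0 are illustrative choices, not unique.
    def bounded_above(f, g, c=10, n0=16, n_max=10**6):
        n = n0
        while n <= n_max:
            if f(n) > c * g(n):
                return False  # the bound already fails at this n
            n *= 2  # sample geometrically growing values of n
        return True

    print(bounded_above(lambda n: 3 * n, lambda n: n * n))  # True: 3n is in O(n^2)
    print(bounded_above(lambda n: 2 ** n, lambda n: n))     # False: 2^n is not in O(n)

Of course, sampling cannot prove a bound; a real proof exhibits c and n0 from the definition.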

Related

The sum of Theta notation and Big-O notation

I was wondering: if I have an algorithm which has two parts with known runtimes of Θ(n log n) and O(n),
then the total runtime comes to Θ(n log n) + O(n).
To my knowledge, for the sum of two big-O terms, or the sum of two Theta terms, we always keep the larger of the two.
In this case, since the worst runtime of the O(n) part is still smaller than the Θ(n log n) part, can I assume the runtime of this algorithm is Θ(n log n)?
Thanks!
Yep, that’s correct. Regardless of whether the O(n) term is tight or not, it’s still a low-order term compared with Θ(n log n).
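As a concrete illustration (the example is ours, not from the original question), here is a two-part algorithm of exactly this shape; the sort, Θ(n log n) in the worst case, dominates the O(n) scan:

    # Part 1 sorts (O(n log n) comparisons in the worst case); part 2 is a
    # single O(n) pass. The sum is Theta(n log n) + O(n) = Theta(n log n),
    # since the linear scan is a lower-order term.
    def has_duplicate(values):
        ordered = sorted(values)                 # part 1: the sort
        for a, b in zip(ordered, ordered[1:]):   # part 2: linear scan
            if a == b:
                return True
        return False

    print(has_duplicate([3, 1, 4, 1, 5]))  # True: 1 appears twice
    print(has_duplicate([2, 7, 1, 8]))     # False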

Why is an algorithm's complexity given in Big O notation instead of Theta?

I know what the Big O, Theta and Omega notations are, but for example, if my algorithm is a for inside a for, each looping n times, my complexity would be O(n²). But why O(n²) instead of Θ(n²)? Since the complexity IS in fact O(n²) and Ω(n²), it would also be Θ(n²), and I just can't see any reason not to use Θ(n²) instead of O(n²), since Θ(n²) restricts my complexity with both an upper and a lower bound, not only an upper bound as in the case of O(n²).
If f(n) = Θ(g(n)) then f(n) = O(g(n)). This is because Θ(g(n)) ⊆ O(g(n)).
In your specific case, if a loop runs exactly n^2 times, the time complexity is in both O(n^2) and Θ(n^2).
The main reason why big-O is typically enough is that we are more interested in the worst-case time complexity when analyzing an algorithm's performance, and knowing the worst-case scenario is usually enough.
Also, it is not always possible to find a tight bound.
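For the nested-loop case from the question, counting the inner-body executions makes the tight bound visible (a minimal sketch of ours):

    # A for inside a for, each looping n times: the inner body executes
    # exactly n * n times, so the running time is Theta(n^2) (and hence
    # also O(n^2) and Omega(n^2)).
    def count_inner_steps(n):
        operations = 0
        for i in range(n):
            for j in range(n):
                operations += 1  # constant-time inner body
        return operations

    print(count_inner_steps(10))   # 100   == 10^2
    print(count_inner_steps(100))  # 10000 == 100^2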

How to understand Big O notation in a given example

Here it states that T(n), which is n^3 + 20n + 1, is O(n^4). But I want to know why it is not O(n^3)? It contains n^3, and if we omit 20n and 1, it should be O(n^3), not O(n^4). Why is it like this?
It is in O(n^3), but O(n^4), O(n^5), etc. are supersets of O(n^3), so if something is in O(n^3) then it is also in O(n^100). The best answer, and the one used by convention, is the smallest big-O class to which it belongs, which is O(n^3), but it's not the only one.
I think you are getting confused between the Theta notation and the big-O notation.
Theta notation gives a tight, two-sided estimate of the running time of an algorithm, whereas big-O notation gives only an upper bound on it.
The method you mentioned above is used to calculate the Theta, not the big O. The big O of the above-mentioned problem could be O(n^3), O(n^4), O(n^5), and so on... all are correct upper bounds. But for Theta, only Θ(n^3) is correct.
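To make the tight bound explicit, assuming T(n) = n^3 + 20n + 1 as the question suggests, the standard constant-finding argument goes like this (the constants below are one valid choice, not the only one): for all n >= 1,

    1*n^3  <=  n^3 + 20n + 1  <=  n^3 + 20n^3 + n^3  =  22*n^3

so the definition of Θ(n^3) is satisfied with c1 = 1, c2 = 22, and n0 = 1.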

When can big O, Omega, or Theta be an element of a set?

I'm trying to figure out the efficiency of my algorithms and I am a little bit confused.
I just need some expert input to check my answers, or a reference that explains "being an element of" in the context of asymptotic notation. (There are many resources, but I found nothing about being an element of a set.)
When we say O(n^2), which is for two loops, is it right to say:
n^2 is an element of O(n^3)
To my understanding, big O is the worst case and Omega is the best case. If we put them on a graph, all the cases of n^2 are part of O(n^3), so the first one is not right?
n^3 is an element of Omega(n^2)
And about the second one, it is not right, because some of the best cases of Omega(n^2) are not among the cases of n^3!
Finally, is
2^(n+1) an element of Theta(2^n)?
I have no idea how to measure that!
Big O, omega, theta in this context are all complexities. It's the functions with those complexities which form the sets you're thinking of.
Indeed, the set of functions with complexity O(n*n) is a subset of those with complexity O(n*n*n). Simply said, that's because O(n*n*n) means that the complexity is less than c*n*n*n as n goes to infinity, for some constant c. If a function has actual complexity 3*n*n + 7*n, then its complexity as n goes to infinity is obviously less than c*n*n*n, for any c.
Therefore, O(n*n*n) isn't just "three loops", it's "three loops or less".
Ω is the reverse. It's a lower bound for complexity, and c*n*n is a trivial lower bound for n*n*n as n goes to infinity.
The set of functions with complexity Θ(n*n) is the intersection of those with complexities O(n*n) and Ω(n*n). E.g. 3*n doesn't have complexity Θ(n*n) because it doesn't have complexity Ω(n*n), and 7*n*n*n doesn't have complexity Θ(n*n) because it doesn't have complexity O(n*n).
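A quick empirical version of those last two examples (our sketch; a real proof uses the definitions rather than sampling): a function is Θ(n*n) exactly when f(n)/(n*n) stays between two positive constants as n grows.

    # Watch f(n) / n^2 as n grows: it must stay between two positive
    # constants for f to be Theta(n^2).
    def ratio_to_n_squared(f, samples=(10, 100, 1000, 10000)):
        return [f(n) / (n * n) for n in samples]

    print(ratio_to_n_squared(lambda n: 3 * n))              # tends to 0: not Omega(n^2)
    print(ratio_to_n_squared(lambda n: 7 * n ** 3))         # grows unboundedly: not O(n^2)
    print(ratio_to_n_squared(lambda n: 3 * n * n + 7 * n))  # tends to 3: Theta(n^2)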
I will list the answers one by one.
1.) n^2 is an element of O(n^3)
True
2.) n^3 is an element of Omega(n^2)
True
3.) 2^(n+1) is an element of Theta(2^n)
True
By now you should know why this is right. (Hint: constant factor, since 2^(n+1) = 2 * 2^n.)
Please ask if you have any more questions.
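For completeness, each of the three claims can be checked against the definitions with explicit constants (one valid choice each, not the only one):

    n^2 <= 1 * n^3       for all n >= 1,  so n^2 is in O(n^3)
    n^3 >= 1 * n^2       for all n >= 1,  so n^3 is in Omega(n^2)
    2^(n+1) = 2 * 2^n    exactly,         so 2^(n+1) is in Theta(2^n) with c1 = c2 = 2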

Plain English explanation of Theta notation?

What is a plain English explanation of Theta notation? With as little formal definition as possible and simple mathematics.
How is Theta notation different from Big O notation? Could anyone explain in plain English?
How are they used in algorithm analysis? I am confused.
If an algorithm's run time is Big Theta(f(n)), it is asymptotically bounded above and below by f(n). Big O is the same except that the bound is only above.
Intuitively, Big O(f(n)) says "we can be sure that, ignoring constant factors and terms, the run time never exceeds f(n)." In rough words, if you think of run time as "bad", then Big O is a worst case. Big Theta(f(n)) says "we can be sure that, ignoring constant factors and terms, the run time always varies as f(n)." In other words, Big Theta is a known tight bound: it's both worst case and best case.
A final try at intuition: Big O is "one-sided." O(n) run time is also O(n^2) and O(2^n). This is not true with Big Theta. If you have an algorithm run time that's O(n), then you already have a proof that it's not Big Theta(n^2). It may or may not be Big Theta(n).
An example is comparison sorting. Information theory tells us sorting requires at least log2(n!) ≈ n log n comparisons, and we have actually invented O(n log n) algorithms (where n is the number of elements), so comparison sorting is Big Theta(n log n).
I have always wanted to put this down in simple words. Here is my try.
If an algorithm's time or space complexity is expressed in
Big O : e.g. O(n) - means n is the upper limit. The actual growth rate could be less than or equal to n (up to constant factors).
Big Omega : e.g. Ω(n) - means n is the lower limit. The actual growth rate could be equal to or greater than n.
Theta : e.g. Θ(n) - means n is the only possible growth rate (it is both the upper limit and the lower limit).
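For readers who do want the formal statements behind these plain-English limits, the standard definitions are (added here for reference; they are not part of the original answer):

    f(n) is in O(g(n))      if there exist c > 0 and n0 such that f(n) <= c*g(n) for all n >= n0
    f(n) is in Omega(g(n))  if there exist c > 0 and n0 such that f(n) >= c*g(n) for all n >= n0
    f(n) is in Theta(g(n))  if f(n) is in both O(g(n)) and Omega(g(n))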
