Big Omega is supposed to be the opposite of Big O, but it seems they can always have the same value, because by definition Big O means:
there is a g(x) such that cg(x) is greater than or equal to f(x)
and Big Omega means:
there is a g(x) such that cg(x) is less than or equal to f(x)
The only thing that changes is the value of c. If c is an arbitrary value (a value that we choose to make the inequality hold), then Big Omega and Big O would seem to be the same. So what's the point of those two notations? What purpose do they serve?
Big O means f(x) is bounded above by g(x) (up to a constant factor) asymptotically, while Big Omega means f(x) is bounded below by g(x) (up to a constant factor) asymptotically.
Mathematically speaking, f(x) = O(g(x)) (big-oh) means that the growth rate of f(x) is asymptotically less than or equal to the growth rate of g(x).
f(x) = Ω(g(x)) (big-omega) means that the growth rate of f(x) is asymptotically greater than or equal to the growth rate of g(x).
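For reference, here are the standard formal definitions; the constant c and the threshold n0 are exactly the knobs the question is asking about:

```latex
% Standard definitions (c > 0 and n_0 are constants we are free to choose):
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \ge c \cdot g(n) \ \text{for all}\ n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
```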
See the Wikipedia reference below:
Big O notation: http://en.wikipedia.org/wiki/Big_O_notation
Sometimes you want to prove an upper bound (Big O); at other times you want to prove a lower bound (Big Omega).
You're correct when you assert that such a g exists, but that doesn't mean it's known.
In addition to talking about the complexity of algorithms, you can also talk about the complexity of problems.
It's known, for example, that multiplication is Ω(n) and O(n log(n) log(log(n))) in the number of bits, but a precise characterization (denoted by Θ) is unknown. It's the same story with integer factorization and NP problems in general, which is what the whole P versus NP question is about.
Furthermore, there are algorithms, some even proven to be optimal, whose exact complexity is unknown. See http://en.wikipedia.org/wiki/User:Erel_Segal/Optimal_algorithms_with_unknown_runtime_complexity
I saw in a video (https://www.youtube.com/watch?v=A03oI0znAoc&t=470s) that if f(n) = 2n + 3, then Big O is O(n).
Now my question is: if I am a developer and I am given O(n) as the upper bound of f(n), how do I know what the exact value of the upper bound is? In 2n + 3 we drop the 2 (because it is a constant) and the 3 (because it is also a constant). So if my function is f(n) and n = 1, I can't say that g(n) is an upper bound at n = 1.
At n = 1, g(n) = 1 cannot be an upper bound for f(n) = 5. I find this hard to understand.
I know this is a partial (and possibly wrong) answer.
From Wikipedia,
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
In your example,
f(n) = 2n + 3 has the same growth rate as g(n) = n.
If you plot the functions, you will see that both have the same linear growth; and as n -> infinity, the constant terms become negligible compared to that growth.
In Big O notation, looking at f(n) = 2n + 3 at n = 1 means nothing; you need to look at the trend, not discrete values.
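As a rough numeric illustration (a minimal sketch; c = 3 and n0 = 3 are just one pair of constants that happens to work), you can check the defining inequality f(n) <= c*g(n) directly:

```python
# Check that f(n) = 2n + 3 is bounded above by c * g(n) = 3n for all n >= n0 = 3,
# which is exactly what f(n) = O(n) requires (one valid choice of constants).
def f(n):
    return 2 * n + 3

c, n0 = 3, 3  # assumed constants for this sketch
print(all(f(n) <= c * n for n in range(n0, 10_000)))  # True

# The ratio f(n) / n approaches the constant 2 as n grows, which is why
# the "+3" and the factor 2 are irrelevant to the growth rate.
for n in (1, 10, 100, 1000, 100000):
    print(n, f(n) / n)
```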
As a developer, you will use big-O as a first indication when deciding which algorithm to use. If you have an algorithm that is, say, O(n^2), you will try to find out whether there is another one that is, say, O(n). If the problem is inherently O(n^2), then big-O notation will not provide further help and you will need other criteria for your decision. However, if the problem is not inherently O(n^2) but O(n), you should discard any algorithm that happens to be O(n^2) and find an O(n) one.
So big-O notation helps you classify the problem and then try to solve it with an algorithm whose complexity has the same big-O. If you are lucky enough to find two or more algorithms with that complexity, you will need to weigh them using different criteria.
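As an illustration of that kind of choice, here is a minimal sketch with a made-up task (checking whether a list contains a duplicate); the nested-loop version is O(n^2) while the set-based version is O(n) on average:

```python
# Two algorithms for the same problem: does the list contain a duplicate?

def has_duplicate_quadratic(items):
    """O(n^2): compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    """O(n) on average: remember what we have seen in a hash set."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

data = [3, 1, 4, 1, 5]
print(has_duplicate_quadratic(data), has_duplicate_linear(data))  # True True
```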
I'm reading through my algorithms text and it states that f(n) = Θ(g(n)) implies f(n) = O(g(n)).
But shouldn't the converse be true as well, so long as the function g(n) is the same in both cases?
I understand why it wouldn't work for different functions, e.g. n^2 and n. But for the same function, couldn't an arbitrarily small and an arbitrarily large constant be used to sandwich g(n) and make the bound asymptotically tight?
The converse is not necessarily true, since O notation establishes only an upper bound on the function's growth, while Theta establishes both an upper and a lower bound.
For example, for f(x) = x² you can say f(x) = O(x³), since x³ is indeed an asymptotic upper bound for x², but you cannot say f(x) = Θ(x³), since x³ is not also an asymptotic lower bound for x².
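Spelled out, both directions of the definition for this example look like this:

```latex
% Upper bound holds: x^2 = O(x^3), witnessed by c = 1 and x_0 = 1.
x^2 \le 1 \cdot x^3 \quad \text{for all } x \ge 1
% Lower bound fails: for any c > 0, the inequality c\,x^3 \le x^2 forces x \le 1/c,
% so it cannot hold for all large x; hence x^2 \ne \Omega(x^3) and x^2 \ne \Theta(x^3).
c \, x^3 \le x^2 \iff x \le \frac{1}{c}
```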
Take a look here for the precise definitions of the O and Θ notations.
I read in a book that the expression O(2^n + n^100) reduces to O(2^n) when we drop the insignificant parts. I am confused because, as I understand it, if the value of n is 3 then the n^100 part contributes far more operations. What am I missing?
Big O notation is asymptotic in nature; that means we consider the expression as n tends to infinity.
You are right that for n = 3, n^100 is greater than 2^n, but once n is roughly 1000 or more, 2^n is always greater than n^100, so we can disregard n^100 in O(2^n + n^100) for large n.
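You can find the crossover numerically (a small sketch using exact integer arithmetic; nothing here is specific to any algorithm, just the two functions):

```python
# Find the smallest n at which 2**n overtakes n**100.
n = 2
while 2**n <= n**100:
    n += 1
print(n)  # around 1000; from this point on, 2**n stays ahead of n**100

# Spot-check a few values on either side of the crossover.
for k in (3, 100, 1000, 2000):
    print(k, 2**k > k**100)
```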
For a formal mathematical description of Big O notation, the Wikipedia article does a good job.
For a less mathematical description, this answer also does a good job:
What is a plain English explanation of "Big O" notation?
Big O notation is used to describe asymptotic complexity. The word asymptotic plays a significant role here: it basically means that your n is not going to be 3 or some other fixed integer. You should think of n as being arbitrarily large.
Even though n^100 grows faster in the beginning, there will be a point where 2^n will outgrow n^100.
You are missing the fact that big O describes asymptotic complexity. Speaking more strictly, you could calculate lim(2^n / n^100) as n -> infinity and you would see it equals infinity, which means that asymptotically 2^n grows faster than n^100.
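One short way to see that the limit is infinite is to compare logarithms rather than the functions themselves:

```latex
% The ratio diverges because its logarithm does:
\ln\!\left(\frac{2^n}{n^{100}}\right) = n \ln 2 - 100 \ln n \;\longrightarrow\; \infty
% since a linear term in n eventually dominates any multiple of \ln n, therefore
\lim_{n \to \infty} \frac{2^n}{n^{100}} = \infty
```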
When complexity is measured in terms of n, you should consider all possible values of n and not just one example; for large enough n the 2^n term always dominates, which is why n^100 is insignificant.
Hi, I have started learning algorithm analysis and I have a question about asymptotic analysis.
Let's say I have a function f(n) = 5n^3 + 2n^2 + 23.
Now I need to find the Big-O, Big-Omega and Theta notations for this function.
Big-Oh:
f(n) <= (5 + 2 + 23) n^3 for n >= 1 // raising all the terms to n^3 gives a value that is always at least f(n)
f(n) <= 30n^3
f(n) belongs to Big-Oh(n^3)
Big-Omega:
n^3 <= f(n)
f(n) belongs to Big-Omega(n^3)
Theta:
n^3 <= f(n) <= 30 n^3 for n >= 1
f(n) belongs to Theta ( n^3)
So here,
f(n) belongs to Big-Oh(n^3)
f(n) belongs to Big-Omega(n^3)
f(n) belongs to Theta(n^3)
Like this, for any polynomial the order of growth for the O, Omega and Theta notations is the same (in our case it is the order of n^3).
When the order of growth is the same for all the notations, what is the use of showing them with different notations, and where exactly are they used? Kindly give me a practical example if possible.
Big Theta (Θ) is when our upper bound (O) and lower bound (Ω) are the same; in other words, it's a tight bound. That's one reason to show both O and Ω (or indeed all three).
Why is this useful? Because Θ is a tight bound: it's much stronger than O. With O alone you can say that the function above is O(n^1000) and still be technically correct. A lot of the time O != Ω and you don't have that tight bound.
Why do we usually talk about O, then? Because in most cases we are interested in an upper bound on the worst case of our algorithm. Sometimes we simply don't know Θ and we go with O instead. It's also important to notice that many people misuse these symbols, don't fully understand them, and/or are simply not precise enough, using O in places where Θ could be used.
For instance, quicksort does not have a tight bound over all inputs (unless we talk specifically about the best/average case or the worst case), as it is Ω(n log n) in the best and average cases but O(n^2) in the worst case. On the other hand, mergesort is both Ω(n log n) and O(n log n), and therefore also Θ(n log n).
All in all this is largely theoretical, as quicksort in practice is in most cases faster: you usually don't hit the worst case, and the operations quicksort performs are cheaper.
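A rough experiment illustrating that gap (a sketch only; quicksort_comparisons is a hypothetical helper that counts element-to-pivot comparisons, and the naive first-element pivot is exactly what triggers the worst case on sorted input):

```python
import random
import sys

# Count comparisons made by a naive quicksort that always picks the first element
# as the pivot. On random input the count grows like n log n; on already-sorted
# input this naive pivot choice degrades to roughly n^2 / 2 comparisons.
def quicksort_comparisons(a):
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

sys.setrecursionlimit(10_000)  # the sorted input drives recursion depth to ~n

n = 2000
random_input = [random.random() for _ in range(n)]
sorted_input = list(range(n))

print("random input:", quicksort_comparisons(random_input))  # grows like n log n
print("sorted input:", quicksort_comparisons(sorted_input))  # grows like n^2
```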
In the example you gave, the execution time is known and is the same for every input. So it is in O(n^3), in Ω(n^3), and thus in Θ(n^3) in all cases.
However, for an algorithm the execution time may, and quite often does, depend on the input the algorithm is running on.
E.g.: searching for a key in a linked list requires going through all the list members in the worst case, and that's linear time.
In the best case, the key you're looking for happens to be at the very beginning of the list, and that's constant time.
So the algorithm is in O(n) and in Ω(1). There is no single f(n) for which the algorithm is Θ(f(n)) over all inputs.
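Here is a minimal sketch of that search (a plain Python list stands in for the linked list; linear_search is a hypothetical helper that returns the number of elements examined):

```python
# Linear search: worst case O(n) (key absent or at the end),
# best case Ω(1) (key is the very first element).
def linear_search(items, key):
    steps = 0
    for x in items:
        steps += 1
        if x == key:
            return steps  # found: number of elements examined
    return steps          # not found: examined all n elements

data = list(range(1, 1001))
print(linear_search(data, 1))     # best case: 1 step
print(linear_search(data, 1000))  # worst case (present): 1000 steps
print(linear_search(data, -5))    # worst case (absent): 1000 steps
```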
The running time of the above function can be computed in the following manner.
Is it Omega and Big O of n?
5n^3 + 2n^2 + 23 >= c*n
Dividing both sides by n gives 5n^2 + 2n + 23/n >= c. As n grows towards infinity, a constant c that stays at or below the left-hand side certainly exists (c = 1 works), so the function's running time is Omega of n.
5n^2 + 2n + 23/n <= c, on the other hand, does not hold for any constant c as n tends to infinity, because the left-hand side grows without bound, so the function's running time is not Big O of n.
Is it Omega and Big O of n^3?
5n^3 + 2n^2 + 23 >= c*n^3
Dividing both sides by n^3 gives 5 + 2/n + 23/n^3 >= c, which holds (c = 5 works), so it is Omega of n^3.
5 + 2/n + 23/n^3 <= c also holds for a large enough constant (c = 30 works for n >= 1), so it is Big O of n^3.
Since it is Omega and Big O of n^3, it is Theta of n^3 as well.
Similarly, it is Omega of n^2 but not Big O of n^2.
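A quick numeric sanity check of those bounds (a minimal sketch; the constants 1 and 30 are just convenient witnesses for the n^3 bounds):

```python
# f(n) = 5n^3 + 2n^2 + 23 is sandwiched between n^3 and 30*n^3 for n >= 1 (Theta(n^3)),
# while f(n) / n grows without bound, so no constant can give an O(n) upper bound.
def f(n):
    return 5 * n**3 + 2 * n**2 + 23

print(all(n**3 <= f(n) <= 30 * n**3 for n in range(1, 10_000)))  # True

for n in (1, 10, 100, 1000):
    print(n, f(n) / n)  # keeps growing -> f is not O(n)
```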
I know that in Big O notation we only consider the highest-order leading term, because we are placing a theoretical worst-case bound on compute-time complexity, but sometimes I get confused about when we can legally drop terms or treat them as constants. For example, if our algorithm ran in
O((n^3)/3) --> we pull out the "1/3" fraction, treat it as a constant, drop it, and then say our algorithm runs in O(n^3) time.
What about the case of O((n^3)/((log(n))^2))? Could we pull out the 1/((log(n))^2) term, treat it as a constant, drop it, and ultimately conclude our algorithm is O(n^3)? It does not look like we can, but what differentiates this case from the one above? Both factors are small relative to the leading polynomial term in the numerator, but in the second case the denominator really brings the bound down as n gets larger and larger.
At this point it is a good idea to go back and look at the formal definition of big O notation. Essentially, when we say that f(x) is O(g(x)) we mean that there exists a constant factor a and a starting input n0 such that f(x) <= a*g(x) for all x >= n0.
For a concrete example, to prove that 3*x^2 + 17 is O(x^2) we can use a = 20 and n0 = 1.
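A quick numerical check of that particular example (nothing special about a = 20 and n0 = 1; any sufficiently large witnesses would do):

```python
# Verify the defining inequality f(x) <= a * g(x) for f(x) = 3x^2 + 17, g(x) = x^2,
# using the witnesses a = 20 and n0 = 1 from the example above.
a, n0 = 20, 1
print(all(3 * x**2 + 17 <= a * x**2 for x in range(n0, 100_000)))  # True
```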
From this definition it becomes easy to see why the constant factors get dropped off: it's just a matter of adjusting a to compensate. As for your 1/log(n) question, if f(x) is O(g(x)) and g(x) is O(h(x)), then we also have f(x) is O(h(x)). So yes, 10*n^3/log(n) + n is O(n^3), but that is not a tight upper bound, and it is a weaker statement than saying that 10*n^3/log(n) + n is O(n^3/log(n)). For a tight bound you would want to use big-Theta notation instead.
Any value which is fixed and does not depend on the variable (like n) can be treated as a constant. You can separate out the constants, remove the lower-order terms, and classify the result as the complexity. Big O notation also states that if
f(x) <= c*g(x)
for some constant c and all sufficiently large x, then f(x) is O(g(x)). For example:
n^3 * 5 -> here 5 is a constant. Complexity is O(n^3)
4*(n^3)/((log(n))^2) + 7 -> here 4 and 7 are constants. Complexity is O(n^3/(log(n))^2)
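To see why the 1/((log(n))^2) factor cannot be dropped the way the 1/3 can, here is a small sketch: the ratio between n^3 and n^3/(log n)^2 keeps growing, while the ratio between n^3 and n^3/3 stays fixed at 3.

```python
import math

# Ratio of n^3 to n^3 / (log n)^2 is (log n)^2, which grows without bound,
# so no single constant c can make n^3 <= c * n^3 / (log n)^2 for all large n.
# Compare with n^3 versus n^3 / 3, where the ratio is always exactly 3.
for n in (10, 10**3, 10**6, 10**9):
    ratio_log = n**3 / (n**3 / math.log(n)**2)   # equals (log n)^2
    ratio_const = n**3 / (n**3 / 3)              # always 3
    print(n, round(ratio_log, 1), ratio_const)
```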