I read in a book that the expression O(2^n + n^100) reduces to O(2^n) when we drop the insignificant parts. I am confused because, as per my understanding, if the value of n is 3, then the n^100 part seems to have a higher count of executions. What am I missing?
Big O notation is asymptotic in nature; that means we consider the expression as n tends to infinity.
You are right that for n = 3, n^100 is greater than 2^n, but once n > 1000, 2^n is always greater than n^100, so we can disregard n^100 in O(2^n + n^100) for n much greater than 1000.
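If you want to see that crossover concretely, here is a quick Python sketch (my addition, not part of the original answer) that finds the smallest n at which 2^n catches up with n^100; Python's arbitrary-precision integers keep the comparison exact:

    # Find the first n where 2**n >= n**100 (exact big-integer arithmetic).
    n = 2
    while 2**n < n**100:
        n += 1
    print(n)  # 996: from here on, 2**n stays ahead of n**100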
For a formal mathematical description of Big O notation, the Wikipedia article does a good job.
For a less mathematical description, this answer also does a good job:
What is a plain English explanation of "Big O" notation?
Big O notation is used to describe asymptotic complexity. The word asymptotic plays a significant role: it means that n is not going to be 3 or some other small fixed value; you should think of n as being infinitely large.
Even though n^100 is larger at the beginning, there will be a point where 2^n outgrows it.
You are missing the fact that Big O describes asymptotic complexity. Speaking more strictly, you can calculate lim (2^n / n^100) as n -> infinity and you will see that it equals infinity (taking logarithms, compare n * log 2 against 100 * log n: the former eventually dwarfs the latter), which means that asymptotically 2^n grows faster than n^100.
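As a quick numeric illustration of that limit (my sketch, not the answer's), the integer ratio 2^n / n^100 explodes once n passes the crossover:

    # Exact integer ratios; Python's big ints avoid any overflow.
    for n in (100, 1000, 2000):
        print(n, 2**n // n**100)
    # 100 -> 0, 1000 -> 10, 2000 -> an integer with ~272 digits: the ratio diverges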
When complexity is measured in terms of n, you should consider all sufficiently large values of n, not just one example. For large enough n, 2^n dominates n^100, which is why n^100 is insignificant.
I saw in a video (https://www.youtube.com/watch?v=A03oI0znAoc&t=470s) that if f(n) = 2n + 3, then its Big O is O(n).
Now my question is: if I am a developer and I am given O(n) as the upper bound of f(n), how will I know what the exact value of the upper bound is? In 2n + 3 we drop the 2 (as it is a constant factor) and the 3 (because it is also a constant). So if my function is f(n) with n = 1, I can't say g(n) = n is an upper bound at n = 1.
g(1) = 1 cannot be an upper bound for f(1) = 5. I find this hard to understand.
I know this is a partial (and probably wrong) answer.
From Wikipedia,
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
In your example,
f(n) = 2n + 3 has the same growth rate as g(n) = n.
If you plot the functions, you will see that both have the same linear growth; as n -> infinity, the ratio (2n + 3) / n approaches the constant 2, so the difference in growth rate vanishes.
In Big O notation, evaluating f(n) = 2n + 3 at n = 1 means nothing; you need to look at the trend, not discrete values.
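To connect this to the formal definition (my example, not part of the answer): f(n) = O(g(n)) only requires some constant c and threshold n0 with f(n) <= c * g(n) for all n >= n0. For f(n) = 2n + 3 and g(n) = n, the pair c = 5, n0 = 1 works, as this quick check suggests:

    # Verify 2n + 3 <= 5 * n for a large range of n >= 1 (c = 5, n0 = 1).
    f = lambda n: 2 * n + 3
    g = lambda n: n
    c, n0 = 5, 1
    print(all(f(n) <= c * g(n) for n in range(n0, 10**6)))  # True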
As a developer, you will consider big-O as a first indication for deciding which algorithm to use. If you have an algorithm which is, say, O(n^2), you will try to find out whether there is another one which is, say, O(n). If the problem is inherently O(n^2), then the big-O notation will not provide further help and you will need to use other criteria for your decision. However, if the problem is not inherently O(n^2), but O(n), you should discard any algorithm that happens to be O(n^2) and find an O(n) one.
So the big-O notation helps you to classify the problem and then try to solve it with an algorithm whose complexity has the same big-O. If you are lucky enough to find two or more algorithms with this complexity, then you will need to weigh them using different criteria.
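As a toy illustration of that trade-off (mine, not the answer's), here are two hypothetical ways to detect a duplicate in a list; given the choice, you would discard the quadratic one:

    def has_duplicate_quadratic(xs):
        # O(n^2): compares every pair of elements
        return any(xs[i] == xs[j]
                   for i in range(len(xs))
                   for j in range(i + 1, len(xs)))

    def has_duplicate_linear(xs):
        # O(n): a single pass, remembering what we have seen
        seen = set()
        for x in xs:
            if x in seen:
                return True
            seen.add(x)
        return False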
The following question was on a recent assignment at university. I would have thought the answer would be n^2 + T(n-1), as I thought the n^2 term would make its asymptotic time complexity O(n^2), whereas with T(n/2) + 1 the asymptotic time complexity would be O(log2(n)).
The answers were returned, and it turns out the correct answer is T(n/2) + 1; however, I can't get my head around why this is the case.
Could someone possibly explain to me why that's the worst case time complexity of this algorithm? It's possible my understanding of time complexity is just wrong.
The asymptotic time complexity is about taking n large. In the case of your example, since the question specifies that k is fixed, the only complexity that matters is the last case. See the Wikipedia formal definition, specifically: f(x) = O(g(x)) if and only if there exist a constant M and a value x_0 such that |f(x)| ≤ M|g(x)| for all x ≥ x_0.
As n grows to infinity, the recursion that dominates is T(n) = T(n/2) + 1. You can prove this using the formal definition as well, basically by picking x_0 = 10 * k and showing that a finite M can be found using the first two cases. It should be clear that both log(n) and n^2 satisfy the definition, so the tighter bound, log(n), is the asymptotic complexity.
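To see the log2(n) behaviour directly, here is a small sketch of the dominant recurrence (my own simplification; I assume a base case T(1) = 1, since the question's full piecewise definition is not shown):

    import math

    def T(n):
        # T(n) = T(n/2) + 1 with an assumed base case T(1) = 1
        return 1 if n <= 1 else T(n // 2) + 1

    for n in (8, 1024, 10**6):
        print(n, T(n), round(math.log2(n), 1))
    # T(n) tracks log2(n) to within a constant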
What does O(f(n)) mean? It means the time is at most c * f(n), for some unknown and possibly large c.
kevmo claimed a complexity of O(log2 n). Well, you can check all the values n ≤ 10k and let the largest value of T(n) be X. X might be quite large (about 167 k^3 in this case, I think, but it doesn't actually matter). For larger n, the time needed is at most X + log2(n). Choose c = X, and this is always less than c * log2(n).
Of course, people usually assume that an O(log n) algorithm would be quick, and this one most certainly isn't if, say, k = 10,000. So you learned as well that O notation must be handled with care.
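Here is a sketch of that constant-choosing trick (my illustration; the value of X is a made-up placeholder, since the answer notes its exact size doesn't matter):

    import math

    X = 167_000.0  # placeholder for the worst time over all n <= 10k
    c = X          # the answer's choice of constant
    # X + log2(n) <= c * log2(n) holds once log2(n) >= X / (X - 1),
    # which is true for every n >= 4 whenever X >= 2.
    print(all(X + math.log2(n) <= c * math.log2(n) for n in range(4, 100_000)))  # True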
I'm studying for my computer science exams and I've come across a few questions on simplifying asymptotic complexity, and I'm unsure how far to take it. For example:
Give '2n log(n) + 3 log(n)' in its simplest form.
I would consider this to be n log n.
Is this correct, and is there a method for determining exactly how specific I should be?
Other questions given were:
4 log₄(n + 1)
2n log(n) + 1/2⋅n²
4n² + 3n
And my respective guesses:
log n
n²
n²
Rule of thumb: in Big-O notation, you only have to keep the fastest-growing part of the sum (if the sum has a constant number of summands), and you can strip off all constant factors.
So your guesses are correct.
In general you should study the mathematical definition of Big-O for more details.
If you are familiar with the math, you can simplify using whatever mathematical methods you know.
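As an illustrative check (my addition, not part of the answer): the ratio of the full expression to its simplified form settles toward a constant, which is exactly why dropping lower-order terms and constant factors is safe:

    import math

    for n in (10, 10**3, 10**6, 10**9):
        full = 2 * n * math.log(n) + 3 * math.log(n)
        simple = n * math.log(n)
        print(n, full / simple)  # tends to the constant factor 2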
Big Omega is supposed to be the opposite of Big O, yet they can always take the same value, because by definition Big O means:
there exists a g(x) such that c * g(x) is greater than or equal to f(x)
and Big Omega means
there exists a g(x) such that c * g(x) is less than or equal to f(x)
The only thing that changes is the value of c; if c is an arbitrary value (a value that we choose so the inequality holds), then Big Omega and Big O will be the same. So what's the point of those two? What purpose do they serve?
Big O means bounded above by g (up to a constant factor) asymptotically, while Big Omega means bounded below by g (up to a constant factor) asymptotically.
Mathematically speaking, f(x) = O(g(x)) (big-oh) means that the growth rate of f(x) is asymptotically less than or equal to the growth rate of g(x).
f(x) = Ω(g(x)) (big-omega) means that the growth rate of f(x) is asymptotically greater than or equal to the growth rate of g(x).
See the Wiki reference below:
Big O notation
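To see that the two bounds really differ (my sketch, not part of either answer): f(n) = n is O(n^2), but no positive constant c makes c * n^2 a lower bound for n, so f is not Ω(n^2):

    f = lambda n: n
    g = lambda n: n * n
    # Upper bound: 1 * g(n) >= f(n) for all n >= 1, so f is O(g).
    print(all(1 * g(n) >= f(n) for n in range(1, 10_000)))      # True
    # Lower-bound attempt with c = 0.001 fails as soon as n > 1000,
    # and any other c > 0 fails once n > 1/c, so f is not Omega(g).
    print(all(0.001 * g(n) <= f(n) for n in range(1, 10_000)))  # False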
Sometimes you want to prove an upper bound (Big O); at other times you want to prove a lower bound (Big Omega).
http://en.wikipedia.org/wiki/Big_O_notation
You're correct when you assert that such a g exists, but that doesn't mean it's known.
In addition to talking about the complexity of algorithms, you can also talk about the complexity of problems.
It's known, for example, that multiplication is Ω(n) and O(n log(n) log(log(n))) in the number of bits, but a precise characterization (denoted by Θ) is unknown. It's the same story with integer factorization and NP problems in general, which is what the whole P versus NP question is about.
Furthermore, there are apparently algorithms, some proven to be optimal no less, whose complexity is unknown. See http://en.wikipedia.org/wiki/User:Erel_Segal/Optimal_algorithms_with_unknown_runtime_complexity
What is a plain English explanation of Theta notation? With as little formal definition as possible and simple mathematics.
How is Theta notation different from Big O notation? Could anyone explain in plain English?
How are they used in algorithm analysis? I am confused.
If an algorithm's run time is Big Theta(f(n)), it is asymptotically bounded above and below by f(n). Big O is the same except that the bound is only above.
Intuitively, Big O(f(n)) says "we can be sure that, ignoring constant factors and terms, the run time never exceeds f(n)." In rough words, if you think of run time as "bad", then Big O is a worst case. Big Theta(f(n)) says "we can be sure that, ignoring constant factors and terms, the run time always varies as f(n)." In other words, Big Theta is a known tight bound: it's both worst case and best case.
A final try at intuition: Big O is "one-sided": an O(n) run time is also O(n^2) and O(2^n). This is not true of Big Theta. If you have an algorithm whose run time is O(n), you already have a proof that it is not Big Theta(n^2). It may or may not be Big Theta(n).
An example is comparison sorting. Information theory tells us any comparison sort requires at least log2(n!) ≈ n log n comparisons in the worst case, and we have actually invented O(n log n) algorithms (where n is the number of elements), so comparison sorting is Big Theta(n log n).
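A rough empirical companion to that claim (my sketch, not the answer's): count the comparisons a plain merge sort makes and set them beside the information-theoretic floor log2(n!):

    import math, random

    def merge_sort(xs, counter):
        # Sorts xs while counting element comparisons in counter[0].
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left = merge_sort(xs[:mid], counter)
        right = merge_sort(xs[mid:], counter)
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            counter[0] += 1
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    n = 1024
    counter = [0]
    merge_sort(random.sample(range(n), n), counter)
    print(counter[0], round(math.log2(math.factorial(n))))
    # both counts are on the order of n * log2(n)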
I have always wanted to put this down in simple words. Here is my try.
If an algorithm's time or space complexity is expressed in
Big O: e.g. O(n) - means n is the upper limit. The actual growth could be less than or equal to n.
Big Omega: e.g. Ω(n) - means n is the lower limit. The actual growth could be equal to or more than n.
Theta: e.g. Θ(n) - means n is both the upper and the lower limit: the growth is exactly n (up to constant factors).
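For instance (my example, not the answer's), a run time of the form 3n + 5 is sandwiched between two linear functions, so it is O(n), Ω(n), and therefore Θ(n):

    # 3n + 5 sits between 3n and 4n for every n >= 5, hence Theta(n).
    t = lambda n: 3 * n + 5
    print(all(3 * n <= t(n) <= 4 * n for n in range(5, 100_000)))  # True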