Some standard books on algorithms give this as the definition of big-O:
0 ≤ f(n) ≤ c⋅g(n) for all n > n0
Can anyone explain what this means, using a concrete example that helps me visualize and understand big-O more precisely?
Assume you have a function f(n) and you are trying to classify it: is it big O of some other function g(n)?
The definition basically says that f(n) is in O(g(n)) if there exist two constants c, N such that
f(n) <= c * g(n) for each n > N
Now, let's understand what it means.
Start with the n > N part: it means we do not "care" about low values of n, only about high values, and if some (finite number of) low values do not satisfy the criterion, we can silently ignore them by choosing N bigger than them.
Have a look at the following example:
Though for low values of n we have n^2 < 10n*log(n), n^2 quickly catches up, and (taking log base 10) after N = 10 we get that for all n > 10 the claim 10n*log(n) < n^2 holds, and thus 10n*log(n) is in O(n^2).
The constant c means we can also tolerate a constant-factor multiple and still accept it as the desired behavior. This is useful, for example, to show that 5n is O(n): without the constant we could never find an N such that for each n > N: 5n < n, but with the constant c we can use c = 6, show 5n < 6n, and conclude that 5n is in O(n).
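To make the definition concrete, here is a tiny Python sketch (my own, not from the answer) that checks the witness constants from the example above: with c = 1 and N = 10, the inequality 10*n*log10(n) <= c*n^2 holds for every n > N in the tested range.

```python
import math

# Check the big-O witness constants for the example above:
# f(n) = 10*n*log10(n), g(n) = n^2, with c = 1 and N = 10.
c, N = 1, 10
holds = all(10 * n * math.log10(n) <= c * n**2 for n in range(N + 1, 10**5))
print(holds)  # True: 10*n*log10(n) is in O(n^2)
```

The same loop with f(n) = 5n, g(n) = n and c = 6 passes as well, matching the 5n example.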
This question is a math problem, not an algorithmic one.
You can find a definition and a good example here: https://math.stackexchange.com/questions/259063/big-o-interpretation
As @Thomas pointed out, Wikipedia also has a good article on this: http://en.wikipedia.org/wiki/Big_O_notation
If you need more details, try to ask a more specific question.
In a data structures textbook, the author uses this to prove that O(log^c(n)) is effective because the complexity is very close to constant. I don't quite understand the equation.
The intuitive reason why this is true is that log is the inverse of e^x. Just as the exponential function grows faster than x^k for any k, its inverse must grow slower than x^(1/k) for any k. (Draw pictures and flip the x and y axes to get this intuition.)
However, intuition does not lead to a formal proof.
So first, convince yourself that log(log(n)) = o(log(n)).
From that, for any given c, there is an N such that for all n > N, log(log(n)) < c*log(n). Now take e^x of both sides and you find that for sufficiently large n, log(n) < n^c. Therefore log(n) = O(n^c) for any given c.
But that is big-O; we wanted little-o. Well, log(n) = O(n^(c/2)), which means that log(n) is actually in o(n^c). And now we're done.
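Here is a quick numeric sketch of the conclusion (my own choice of exponent, c = 0.1): the ratio log(n) / n^c shrinks toward 0 as n grows, which is what little-o demands.

```python
import math

# Ratio log(n) / n^c for a small fixed exponent c; it tends to 0 as n grows.
c = 0.1
for exp in (3, 10, 50, 100, 300):
    n = 10.0 ** exp
    print(f"n = 1e{exp}: {math.log(n) / n**c:.3e}")
```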
This question already has an answer here:
Show that g(n) is O(g(n)) for each of the following [closed]
(1 answer)
Closed 4 years ago.
I don't get how to show it. I take the log of both sides, and then what?
This question is to prove that f(n) is O(g(n)), which I know how to do for things that have the same base. Not so much for this:
2^(sqrt(log(n))) is O(n^(4/3))
For sufficiently large n, sqrt(log(n)) is positive and bounded from above by log(n). Since 2^x is monotonically increasing, 2^sqrt(log(n)) is bounded from above by 2^log(n) = n (taking log base 2). Moreover, for large n, n is clearly bounded from above by n^(4/3). Therefore the original function itself is bounded from above by n^(4/3) as well.
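A small numeric sketch of that chain of bounds (base-2 logs, my own check): once log2(n) >= 1, i.e. n >= 2, we have 2^sqrt(log2(n)) <= 2^log2(n) = n <= n^(4/3).

```python
import math

# Verify 2^sqrt(log2(n)) <= n <= n^(4/3) for a range of n >= 2.
for n in (2, 10, 10**3, 10**6, 10**9):
    lhs = 2 ** math.sqrt(math.log2(n))
    print(n, lhs <= n <= n ** (4 / 3))
```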
I know that for f(n) to be O(g(n)) we have to find a constant c > 0 and n0 such that f(n) ≤ c⋅g(n) whenever n ≥ n0.
So what I am thinking is that if we take c to be 2, for example, and n0 to be 1, it seems to me that ceil(n^0.5) is O(n^0.5). Am I right?
Your argument is correct, but it is easier to see what is going on if you make n0 very large, say n0 = 10^6. Then n^0.5 >= 1000, and the difference between n^0.5 and ceil(n^0.5) is <= 1, so it is obviously covered by c = 2, and in fact is obviously trivial. As Potatoswatter points out, as long as f(n) is increasing, you can make n large enough that a change by a constant is obviously trivial, no matter what the constant.
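A tiny sketch (my own check) of the questioner's constants: with c = 2 and n0 = 1, ceil(n^0.5) never exceeds 2*n^0.5, because rounding up adds less than 1.

```python
import math

# ceil(sqrt(n)) <= 2*sqrt(n) for every n >= 1 in the tested range.
print(all(math.ceil(math.sqrt(n)) <= 2 * math.sqrt(n) for n in range(1, 10**6)))
```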
Let f(n) = ((n^2 + 2n)/n + (1/1000)*n^(3/2)) * log(n).
The time complexity for this function could be both O(n^2*log(n)) and O(n^(3/2)*log(n)).
How is this possible? I thought the dominating term here was n^2 (*log(n)), and therefore it should be O(n^2*log(n)) only. The big O notation and time complexity measures feel so ambiguous.
Big O notation isn't that confusing. It defines an upper bound on the running time of an algorithm; hence, if O(f(n)) is a valid upper bound, every other O(g(n)) such that g(n) > f(n) is definitely valid too, since if your code runs in less than f(n), it will for sure run in less than g(n).
In your case, since O(n^2*log(n)) dominates O(n^(3/2)*log(n)), it's a valid upper bound too, even if it's less strict. Furthermore, you could say that your algorithm is O(n^3). The question is, which one of those big O notations gives us more information about the algorithm? The obvious answer is the lower one, and that's the reason why we usually indicate that.
To make things clear: let's say you can throw a ball 10 m up in the air. Then you can say that you can't throw it higher than 10 m, OR you could say you can't throw it higher than 15 m. The fact that the first one is a stricter upper bound doesn't make the second one a false statement.
"Big O notation" being applied on the sum always leaves dominant (the biggest ones) terms only. In case of one independent variable one term only will survive. In your case
O(n^2*log(n) + n^(3/2)*log(n)) = O(n^2*log(n))
since the first term is bigger than the second:
lim(term1/term2) = lim(n^2*log(n) / (n^(3/2)*log(n))) = lim(n^(1/2)) = inf
but it seems that you made an arithmetic error in your computations:
(n^2+2n)/n = n + 2, not n^2 + 2 * n
in that case
O(n*log(n) + 2*log(n) + n^(3/2)*log(n))
the last term, n^(3/2)*log(n), is the biggest one:
O(n*log(n) + 2*log(n) + n^(3/2)*log(n)) = O(n^(3/2)*log(n))
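A quick numeric sanity check (my own sketch): after simplifying (n^2 + 2n)/n to n + 2, the ratio f(n) / (n^(3/2)*log(n)) settles near the constant 1/1000, confirming that the n^(3/2)*log(n) term dominates.

```python
import math

def f(n):
    return ((n**2 + 2 * n) / n + n**1.5 / 1000) * math.log(n)

# The ratio approaches 1/1000, so f(n) is O(n^(3/2)*log(n)).
for n in (10**2, 10**4, 10**6, 10**8):
    print(n, f(n) / (n**1.5 * math.log(n)))
```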
WHY do logarithms grow slower than any polynomial? What is the (understandable) proof for this?
Similarly,
WHY do exponentials always grow faster than any polynomial?
EDIT: This answer is essentially doing what PengOne said.
We take the limit of
log_2(x) / x^p
for constant p > 0 and show that the limit is zero. Since both log_2(x) and x^p go to infinity as x grows without bound, we apply l'Hopital's rule. This means our limit is the same as the limit of
(1/(x * ln2)) / (p * x^(p-1))
Using simple rules of fractions, we reduce this to
1 / (p * x^p * ln2)
Since the denominator goes to infinity while the numerator is constant, we can evaluate the limit - it's zero, which means that log_2(x) grows asymptotically more slowly than x^p, regardless of the (positive) value of p.
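A short numeric sketch of that limit (my own choice of p = 0.5): the ratio log2(x) / x^p keeps shrinking toward 0 as x grows.

```python
import math

# log2(x) / x^p tends to 0 for any fixed p > 0; p = 0.5 here as an example.
p = 0.5
for x in (10.0, 1e3, 1e6, 1e9, 1e12):
    print(x, math.log2(x) / x**p)
```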
Given two (nonnegative) real-valued functions f and g, you want to compute
lim_{x -> infinity} f(x) / g(x)
This limit is:
0 if and only if f grows slower than g
infinity if and only if f grows faster than g
c for some constant 0 < c < infinity if and only if f and g grow at the same rate
Now you can take any examples you like and compute the limits to see which grows faster.
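For instance, here is a small sketch (my own example functions, not from the answer) that estimates such limits by evaluating the ratio at increasingly large x.

```python
import math

def ratio_trend(f, g, xs=(1e2, 1e4, 1e6, 1e8)):
    # Sample f(x)/g(x) at growing x to see where the limit is heading.
    return [f(x) / g(x) for x in xs]

# log grows slower than x^0.5: the ratios head toward 0.
print(ratio_trend(math.log, math.sqrt))
# 3x^2 and x^2 grow at the same rate: the ratios sit at the constant 3.
print(ratio_trend(lambda x: 3 * x**2, lambda x: x**2))
```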
You could consider the derivatives.
d(x^n)/dx = nx^(n-1)
d(ln x)/dx = 1/x
For n >= 1, n*x^(n-1) increases with x or stays the same, whereas 1/x decreases with x, so the polynomial grows quicker.
The logarithm of e^x is x, whereas the logarithm of x^n is n ln x, so using the above argument to compare these two logarithms, e^x grows quicker than x^n.
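A symbolic sketch of that derivative comparison (assuming SymPy is available; the exponent n = 3 is just an example): the ratio of the derivatives blows up, so the polynomial eventually dominates the logarithm.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
n = 3  # any fixed exponent n >= 1 works as an example

poly_derivative = sp.diff(x**n, x)       # 3*x**2
log_derivative = sp.diff(sp.log(x), x)   # 1/x

# The ratio of derivatives tends to infinity, so x**n outgrows log(x).
print(sp.limit(poly_derivative / log_derivative, x, sp.oo))  # oo
```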