I'm confused as to how f(n) can be O(g(n)), Theta(g(n)), and Omega(g(n)) at the same time. Could someone help explain?
In fact, every function that is Theta(g(n)) is also O(g(n)) and Omega(g(n)). The simplified definition is that f(n) is in Theta(g(n)) if it grows precisely as fast as g(n), while f(n) is in O(g(n)) if it grows no faster than g(n), and is in Omega(g(n)) if it grows no slower than g(n) (all these definitions hold for sufficiently large n). Thus when f(n) and g(n) grow at the same rate, the conditions for both Omega and O hold.
As for why f(n) is in Theta(g(n)): try dividing the two functions and analyzing the fraction as n grows to infinity.
The clearest and most straightforward way to answer such a question is the limit method: compute the limit of f(n)/g(n) as n goes to infinity; if the limit is a finite, non-zero constant, then f(n) is Theta(g(n)), and therefore also O(g(n)) and Omega(g(n)).
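As a small worked illustration of the limit method (the functions here are my own example, not taken from the question), SymPy can evaluate the limit symbolically:

```python
# A minimal sketch of the limit method, using my own example functions
# f(n) = 3n^2 + 5n and g(n) = n^2 (not from the question).
from sympy import symbols, limit, oo

n = symbols('n', positive=True)
f = 3*n**2 + 5*n
g = n**2

print(limit(f / g, n, oo))
# prints 3: a finite, non-zero limit, so f(n) is Theta(g(n)),
# and therefore also O(g(n)) and Omega(g(n)).
```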
In a data structures textbook, the author uses this to argue that O(log^c(n)) is effectively as good as constant, because the complexity grows so slowly. I don't quite understand the equation.
The intuitive reason why this is true is that log is the inverse of e^x. Just as the exponential function grows faster than x^k for any k, its inverse must grow slower than x^(1/k) for any k. (Draw the pictures and flip the x and y axes to get this intuition.)
However, intuition does not constitute a formal proof.
So first, convince yourself that log(log(n)) = o(log(n)).
From that, for any given c, there is an N such that for all n > N, log(log(n)) < c log(n). Now exponentiate both sides and you find that for sufficiently large n, log(n) < n^c. Therefore log(n) = O(n^c) for any given c.
But that is big-O, and we wanted little-o. Well, log(n) = O(n^(c/2)), which means that log(n) is actually in o(n^c). And now we're done.
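If it helps, here is a rough numerical illustration (not a proof; the sampled values of n are my own choice) of how slowly n^c overtakes log(n) for a small exponent. The crossover can happen very late, which is exactly what "for sufficiently large n" permits:

```python
# Compare log(n) with n**c for a small exponent c = 0.1 (natural log).
# The crossover only happens around n ~ 1e16, but it does happen.
import math

c = 0.1
for n in [10**3, 10**9, 10**15, 10**16, 10**18]:
    print(f"n = 1e{round(math.log10(n))}: log(n) = {math.log(n):.1f}, "
          f"n**c = {n**c:.1f}")
```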
I don't get how to show it. I take the log of both sides, and then what?
This question asks me to prove that f(n) is O(g(n)), which I know how to do when the functions have the same base, but not so much for this one.
2^(sqrt(log(n))) is O(n^(4/3))
For sufficiently large n, sqrt(log(n)) is positive and bounded from above by log(n). Since 2^x is monotonically increasing, 2^sqrt(log(n)) is bounded from above by 2^log(n) = n. Moreover, for large n, n is clearly bounded from above by n^(4/3). Therefore the original function itself is bounded from above by n^(4/3) as well.
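A quick numerical sanity check of that chain of bounds (using base-2 logarithms so that 2^log(n) = n, as in the argument above; the sampled values are my own and this is an illustration, not a proof):

```python
# Spot-check 2**sqrt(log2(n)) <= n <= n**(4/3) for a few powers of two.
import math

for n in [2**4, 2**8, 2**16, 2**32]:
    lhs = 2 ** math.sqrt(math.log2(n))
    print(f"n = 2**{int(math.log2(n))}: 2**sqrt(log2(n)) = {lhs:.1f}, "
          f"n = {n}, n**(4/3) = {n**(4/3):.1f}")
```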
I know that for f(n) to be O(g(n)) we have to find a constant c > 0 and n0 such that f(n) ≤ c⋅g(n)
whenever n ≥ n0
So what I am thinking is that if we take c to be 2, for example, and n0 to be 1, it seems to me that ceil(n^0.5) is O(n^0.5). Am I right?
Your argument is correct, but it is easier to see what is going on if you make n0 very large, say n0 = 10^6. Then n^0.5 >= 1000, and the difference between n^0.5 and ceil(n^0.5) is <= 1, so it is obviously covered by c = 2, and in fact is obviously trivial. As Potatoswatter points out, as long as f(n) is increasing, you can make n large enough that a change by a constant is obviously trivial, no matter what the constant is.
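A small empirical check of the bound with c = 2 (it only samples a handful of values that I picked, so it is an illustration rather than a proof):

```python
# Verify ceil(n**0.5) <= 2 * n**0.5 for a few values of n >= 1;
# the gap between ceil(n**0.5) and n**0.5 is always at most 1.
import math

for n in [1, 2, 10, 10**6, 10**12 + 7]:
    assert math.ceil(math.sqrt(n)) <= 2 * math.sqrt(n)
print("c = 2, n0 = 1 works for all sampled n")
```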
Let f(n)= ( (n^2+2n)/n + 1/1000*(n^(3/2)))*log(n)
The time complexity for this function could be both O(n²*log(n)) and O(n^(3/2)*log(n))
How is this possible? I thought the dominating term here was n²*log(n), and therefore it should be O(n²*log(n)) only. Big O notation and time complexity measures feel so ambiguous.
Big O notation isn't that confusing. It defines an upper bound on the running time of an algorithm; hence, if O(f(n)) is a valid upper bound, every other O(g(n)) such that g(n) > f(n) is definitely valid as well, since if your code runs in less than f(n), it will certainly run in less than g(n).
In your case, since O(n^2*log(n)) dominates O(n^(3/2)*log(n)), it's a valid upper bound too, even if it's less strict. Furthermore, you could say that your algorithm is O(n^3). The question is, which one of those Big O bounds gives us more information about the algorithm? The obvious answer is the lowest one, and that's the reason why we usually state that one.
To make things clear: let's say you can throw a ball 10 m up in the air. Then you can say that you can't throw it higher than 10 m, OR you could say you can't throw it higher than 15 m. The fact that the first one is a stricter upper bound doesn't make the second one a false statement.
"Big O notation" being applied on the sum always leaves dominant (the biggest ones) terms only. In case of one independent variable one term only will survive. In your case
O(n^2*log(n) + n^(3/2)*log(n)) = O(n^2*log(n))
since the first term is bigger than the second:
lim(term1/term2) = lim(n^2*log(n) / (n^(3/2)*log(n))) = lim(n^(1/2)) = inf
but it seems that you made an arithmetic error in your computations:
(n^2+2n)/n = n + 2, not n^2 + 2 * n
in that case
O(n*log(n) + 2*log(n) + n^(3/2)*log(n))
the last term, n^(3/2)*log(n), is the biggest one, so
O(n*log(n) + 2*log(n) + n^(3/2)*log(n)) = O(n^(3/2)*log(n))
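A quick numerical illustration of that dominance (the sampled values of n are my own; dividing the corrected f(n) by n^(3/2)*log(n) should approach the constant 1/1000):

```python
# The ratio f(n) / (n**1.5 * log(n)) tends to 1/1000 as n grows,
# so n^(3/2)*log(n) is the dominant term of the corrected f(n).
import math

def f(n):
    return ((n**2 + 2*n) / n + n**1.5 / 1000) * math.log(n)

for n in [10**2, 10**4, 10**6, 10**8]:
    print(n, f(n) / (n**1.5 * math.log(n)))
```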
Some standard books on algorithms give this:
0 ≤ f(n) ≤ c⋅g(n) for all n > n0
While defining big-O, can anyone explain to me what this means, using a strong example which can help me to visualize and understand big-O more precisely?
Assume you have a function f(n) and you are trying to classify it: is it big O of some other function g(n)?
The definition basically says that f(n) is in O(g(n)) if there exist two constants c, N such that
f(n) <= c * g(n) for each n > N
Now, let's understand what it means.
Start with the n > N part. It means we do not "care" about low values of n, we only care about high values, and if some (finite number of) low values do not satisfy the criterion, we can silently ignore them by choosing N bigger than them.
Have a look at the following example:
Though we can see that for low values of n we have n^2 < 10*n*log(n), n^2 quickly catches up, and after N = 10 we get that for all n > 10 the claim 10*n*log(n) < n^2 holds, and thus 10*n*log(n) is in O(n^2).
The constant c means we can also tolerate a constant multiplicative factor and still accept it as the desired behavior. (This is useful, for example, to show that 5*n is O(n): without the constant we could never find an N such that for each n > N, 5n < n, but with the constant c we can use c = 6, show that 5n < 6n, and conclude that 5n is in O(n).)
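To make the definition concrete, here is a tiny sketch that samples values of n and checks f(n) <= c*g(n) beyond a threshold N. The helper name, the sampling range, and the use of base-10 logarithms (so that the N = 10 stated in the example above works) are my own choices:

```python
# Empirically check f(n) <= c * g(n) for all sampled n > N.
# This only samples values, so it illustrates the definition rather
# than proving anything.
import math

def looks_bounded(f, g, c, N, upto=10**5):
    return all(f(n) <= c * g(n) for n in range(N + 1, upto))

# 5n is O(n): c = 6 (any c > 5 works), N = 1.
print(looks_bounded(lambda n: 5 * n, lambda n: n, c=6, N=1))
# 10*n*log(n) is O(n^2): c = 1, N = 10, with base-10 logs.
print(looks_bounded(lambda n: 10 * n * math.log10(n), lambda n: n**2, c=1, N=10))
```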
This question is a math problem, not an algorithmic one.
You can find a definition and a good example here: https://math.stackexchange.com/questions/259063/big-o-interpretation
As @Thomas pointed out, Wikipedia also has a good article on this: http://en.wikipedia.org/wiki/Big_O_notation
If you need more details, try to ask a more specific question.