How to prove this: log n = O(n^c) [closed]

In a data structures textbook, the author uses this fact to argue that an O(log^c(n)) algorithm is effective because its complexity is very close to constant. I don't quite understand the equation.

The intuitive reason this is true is that log is the inverse of e^x. Just as the exponential function grows faster than x^k for any k, its inverse must grow more slowly than x^(1/k) for any k. (Draw the pictures and flip the x and y axes to get this intuition.)
However, intuition is not a formal proof.
So first, convince yourself that log(log(n)) = o(log(n)).
From that, for any given c > 0, there is an N such that for all n > N, log(log(n)) < c log(n). Now take e^x of both sides, and you have found that for sufficiently large n, log(n) < n^c. Therefore log(n) = O(n^c) for any given c.
But that is big-O; we wanted little-o. Well, log(n) = O(n^(c/2)), so log(n)/n^c is bounded above by a constant times 1/n^(c/2), which tends to 0. Hence log(n) is in o(n^c), and now we're done.
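A quick numerical illustration of the little-o claim (a sketch, not a proof; c = 0.1 is an arbitrary choice):

import math

# A minimal sketch (c = 0.1 chosen arbitrarily): log(n)/n^c tends to 0,
# which is exactly what log(n) = o(n^c) says. Note the ratio first rises
# (it peaks around n = e^(1/c)) before slowly decaying, so the threshold
# N in the proof can be very large for small c.
c = 0.1
for n in [10, 10**3, 10**6, 10**9, 10**12, 10**15, 10**18]:
    print(f"n = 10^{round(math.log10(n)):<2}: log(n)/n^c = {math.log(n) / n**c:.3f}")

The very slow decay is why, in practice, log factors behave almost like constants, which is the textbook's point.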

Related

Solve Recurrence Relation by Master theorem? [closed]

Can someone please clarify this solution a little more?
T(n) = 2T(n^(1/2)) + log n
Solution:
Let k = log n, so that n = 2^k. Then
T(n) = T(2^k) = 2T(2^(k/2)) + k
Defining S(k) = T(2^k) and substituting, we get
S(k) = 2S(k/2) + k
This recurrence fits the master theorem, which tells us that
S(k) is O(k log k). Substituting back k = log n gives that T(n) is O(log n log log n).
How many times can you keep dividing n by 2? log_2(n) times, because log_2(n) is the power to which you must raise 2 to get n.
Similarly, log(log(n)) is how many times you can take the square root of n before reaching a constant, so if you know this, the substitution isn't strictly necessary.
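For a concrete check, here is a small sketch that evaluates the recurrence directly and compares it to the claimed bound (the base case T(n) = 1 for n <= 2 is an assumption, since the question doesn't give one):

import math

# Evaluate T(n) = 2*T(sqrt(n)) + log2(n) directly (assumed base case:
# T(n) = 1 for n <= 2) and compare against log(n) * log(log(n)).
def T(n):
    if n <= 2:
        return 1.0
    return 2 * T(math.sqrt(n)) + math.log2(n)

for k in [4, 16, 64, 256]:          # n = 2^k, so log2(n) = k
    n = 2.0 ** k
    ratio = T(n) / (k * math.log2(k))
    print(f"log2(n) = {k:>3}: T(n) / (log n * loglog n) = {ratio:.3f}")

The ratio drifts down toward a constant as n grows, consistent with the O(log n log log n) bound.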

Prove n^2 + 5 log(n) = O(n^2) [closed]

I am trying to prove that n^2 + 5 log(n) = O(n^2), O representing big-O notation. I am not great with proofs and any help would be appreciated.
Informally, big-O keeps only the fastest-growing term as n grows arbitrarily large. Since n^2 grows much faster than log(n), the claim should be intuitively clear.
More formally, if the limit of f(n)/g(n) as n approaches infinity exists and is finite, then f(n) = O(g(n)). So it suffices to show that lim(n->inf) (n^2 + 5 log(n))/n^2 = 1, which is even stronger: it says the two functions are asymptotically equivalent.
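A tiny sketch of that limit computation, just evaluating the ratio for growing n:

import math

# Watch (n^2 + 5*log(n)) / n^2 approach 1 as n grows, the limit the
# formal argument relies on.
for n in [10, 100, 10_000, 1_000_000]:
    ratio = (n**2 + 5 * math.log(n)) / n**2
    print(f"n = {n:>9}: ratio = {ratio:.10f}")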

Is (log n)^k = O(n^(1/2)) for k greater than or equal to 0? [closed]

In big-O notation, is O((log n)^k) = O(log n), where k is some constant? So what is happening with (log n)^k when k >= 0?
Perhaps this might be the source of the misunderstanding?
log(n^k) = k * log(n), but no such simplification applies to log(n)^k, which means (log(n))^k.
O((log n) * k) == O(log n), but (log n)^k is definitely not the same thing. I believe you're thinking of multiplication by a constant, which big-O notation does absorb. However, raising f(n) to a power changes the growth rate. This is the same concept as O(n) being different from O(n^2).
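A short sketch making the distinction concrete (k = 3 is an arbitrary choice):

import math

# log(n^k) = k*log(n) exactly, while (log n)^k grows much faster than
# k*log(n); the two expressions must not be conflated.
k = 3
for n in [10, 1000, 10**6]:
    print(f"n = {n:>7}: log(n^k) = {math.log(n**k):7.2f}  "
          f"k*log(n) = {k * math.log(n):7.2f}  (log n)^k = {math.log(n)**k:10.2f}")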

How to calculate growth rate using Big O? [closed]

I am just starting to learn about Big O notation and had a question about how to calculate the growth rate of an algorithm. Suppose I had an algorithm with O(√n log n) time complexity, and for n = 10 my algorithm takes 2 seconds. If I want to know how long it would take with n = 100, do I set up a ratio where 2/x = (√10 log 10)/(√100 log 100) and then solve for x? Or can I just say that my input is 10 times larger, so it will take 2*(√10 log 10) seconds?
The first method is right. Big O doesn't care about constant multiples, so you can determine the constant by solving for it with algebra:
c * (√10 * log(10)) = 2
c = 2 / (√10 * log(10))
x = √100 * log(100) * 2 / (√10 * log(10)) ≈ 12.6 seconds
However, keep in mind that big O also ignores lower-order terms, so constant overheads and other smaller-scaling terms make this calculation only asymptotically accurate. For example, an algorithm whose running time is governed by
t = √n log n + 1/n
is still O(√n log n), and the extra term will make your calculation less accurate for small values of n.
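The same extrapolation as code (this assumes, as the answer does, that the measured 2 seconds at n = 10 is entirely explained by the √n log n term):

import math

# Solve c * sqrt(10) * log(10) = 2 for the hidden constant c, then
# predict the running time at n = 100. The base of the logarithm cancels
# out in the ratio, so any base gives the same prediction.
def model(n, c):
    return c * math.sqrt(n) * math.log(n)

c = 2 / (math.sqrt(10) * math.log(10))
print(f"c = {c:.4f}")
print(f"predicted time at n = 100: {model(100, c):.2f} s")  # about 12.6 s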

Understanding the big O notation [closed]

Some standard books on algorithms give this definition:
0 ≤ f(n) ≤ c⋅g(n) for all n > n0
While defining big-O, can anyone explain what this means, using a strong example that can help me visualize and understand big-O more precisely?
Assume you have a function f(n) and you are trying to classify it: is it big O of some other function g(n)?
The definition says that f(n) is in O(g(n)) if there exist two constants c, N such that
f(n) <= c * g(n) for each n > N
Now, let's understand what that means.
Start with the n > N part: it means we do not "care" about low values of n, only about high values, and if some (finite number of) low values do not meet the criterion, we can silently ignore them by choosing N bigger than them.
Have a look at the following example:
For low values of n we have n^2 < 10n log(n), but the quadratic quickly catches up: with base-10 logarithms, for all n > 10 the claim 10n log(n) < n^2 holds, so N = 10 works and thus 10n log(n) is in O(n^2).
The constant c means we can also tolerate some constant-factor multiple and still accept it as the desired behavior. This is useful, for example, to show that 5n is O(n): without it we could never find an N such that 5n < n for each n > N, but with the constant c we can use c = 6, show 5n < 6n, and get that 5n is in O(n).
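A quick sanity check of both examples from this answer (a sketch that samples n rather than testing every value; base-10 logarithms are assumed so that N = 10 works for the first example):

import math

# Empirically check f(n) <= c * g(n) for all sampled n > N.
def is_bounded(f, g, c, N, upto=10**6, step=997):
    return all(f(n) <= c * g(n) for n in range(N + 1, upto, step))

# 10*n*log10(n) <= 1 * n^2 for n > 10, so 10*n*log(n) is in O(n^2).
print(is_bounded(lambda n: 10 * n * math.log10(n), lambda n: n * n, c=1, N=10))

# 5n <= 6 * n for n > 0, so 5n is in O(n) with c = 6.
print(is_bounded(lambda n: 5 * n, lambda n: n, c=6, N=0))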
This question is a math problem, not an algorithmic one.
You can find a definition and a good example here: https://math.stackexchange.com/questions/259063/big-o-interpretation
As @Thomas pointed out, Wikipedia also has a good article on this: http://en.wikipedia.org/wiki/Big_O_notation
If you need more details, try to ask a more specific question.
