Rates in algorithm analysis? [closed]

Why do logarithms grow more slowly than any polynomial? What is an (understandable) proof of this?
Similarly,
why do exponentials always grow faster than any polynomial?

EDIT: This answer is essentially doing what PengOne said.
We take the limit of
log_2(x) / x^p
for constant p > 0 and show that the limit is zero. Since both log_2(x) and x^p go to infinity as x grows without bound, we apply l'Hopital's rule. This means our limit is the same as the limit of
(1/(x*ln 2)) / (p*x^(p-1))
Using simple rules of fractions, we reduce this to
1 / (p * x^p * ln 2)
Since the denominator goes to infinity while the numerator is constant, we can evaluate the limit - it's zero, which means that log_2(x) grows asymptotically more slowly than x^p, regardless of the (positive) value of p.
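To see the limit concretely, here is a minimal Python sketch (p = 0.5 is one arbitrary choice of positive constant) showing the ratio shrink toward zero:

    import math

    # Watch log_2(x) / x^p shrink as x grows; p = 0.5 is an arbitrary positive constant.
    p = 0.5
    for k in range(2, 12, 2):
        x = 10.0 ** k
        print(f"x = 1e{k:2d}: log2(x) / x^p = {math.log2(x) / x ** p:.6f}")

Any other p > 0 shows the same trend, just with a slower or faster decay.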

Given two (nonnegative) real-valued functions f and g, you want to compute
lim_{x -> infinity} f(x) / g(x)
This limit is:
0 if and only if f grows slower than g
infinity if and only if f grows faster than g
c for some constant 0 < c < infinity if and only if f and g grow at the same rate
Now you can take any examples you like and compute the limits to see which grows faster.
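If you happen to have SymPy installed, it can evaluate such limits for you; a sketch with two example pairs:

    import sympy as sp

    x = sp.symbols('x', positive=True)

    # log grows more slowly than any polynomial: the ratio tends to 0 ...
    print(sp.limit(sp.log(x, 2) / sp.sqrt(x), x, sp.oo))  # 0

    # ... and an exponential outgrows any polynomial: the ratio tends to infinity.
    print(sp.limit(sp.exp(x) / x**100, x, sp.oo))          # oo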

You could consider the derivatives.
d(x^n)/dx = nx^(n-1)
d(ln x)/dx = 1/x
For n >= 1, nx^(n-1) increases with x (or stays constant), whereas 1/x decreases with x, so the polynomial grows more quickly.
The logarithm of e^x is x, whereas the logarithm of x^n is n ln x, so applying the above argument to x versus n ln x, e^x grows more quickly than x^n.
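You can confirm both derivatives with SymPy (a sketch; n = 3 stands in for any fixed exponent):

    import sympy as sp

    x = sp.symbols('x', positive=True)
    n = 3  # any fixed exponent n >= 1 makes the same point
    print(sp.diff(x**n, x))       # 3*x**2 -- keeps growing as x grows
    print(sp.diff(sp.log(x), x))  # 1/x    -- shrinks as x grows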

Related

How to prove this: log n = O(n^c) [closed]

In a data structures textbook, the author uses this to prove that O(log^c(n)) is effective because the complexity is very close to constant; I don't quite understand the equation.
The intuitive reason why this is true is that log is the inverse of e^x. Just as the exponential function grows faster than x^k for any k, its inverse must grow slower than x^(1/k) for any k. (Draw pictures and flip the x and y axes to get this intuition.)
However intuition does not lead to a formal proof.
So first, convince yourself that log(log(n)) = o(log(n)).
From that, for any given c, there is an N such that for all n > N, log(log(n)) < c log(n). Now exponentiate both sides, and you find that for sufficiently large n, log(n) < n^c. Therefore log(n) = O(n^c) for any given c.
But that is big-O, and we wanted little-o. Well, log(n) = O(n^(c/2)), which means that log(n) is actually in o(n^c). And now we're done.
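A numeric sanity check (a sketch; c = 0.1 is an arbitrarily small exponent) shows how late n^c overtakes log(n), which is exactly the "sufficiently large n" in the proof:

    import math

    c = 0.1  # a deliberately small exponent
    for k in (2, 10, 50, 100, 200):
        n = 10.0 ** k
        print(f"n = 1e{k:3d}: log(n) = {math.log(n):7.1f}   n^c = {n ** c:.3g}")

For small n, log(n) is actually the larger of the two; n^0.1 only wins somewhere past n = 1e50, but from there its lead grows without bound.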

Finding big-O notation of formulas [closed]

I'm trying to see if I'm correct in my work for finding the big-O notation of some formulas. Each of these formulas is the number of operations in some algorithm. These are the formulas:
Formulas
a.) n^2 + 5n
b.) 3n^2 + 5n
c.) (n + 7)(n - 2)
d.) 100n + 5
e.) 5n + 3n^2
f.) The number of digits in 2n
g.) The number of times that n can be divided by 10 before dropping below 1.0
My answers:
a.) O(n^2)
b.) O(n^2)
c.) O(n^2)
d.) O(n)
e.) O(n^2)
f.) O(n)
g.) O(n)
Am I correct on my analysis?
Let's go through this one at a time.
a.) n^2 + 5n. Your answer: O(n^2)
Yep! You're ignoring lower-order terms correctly.
b.) 3n^2 + 5n. Your answer: O(n^2).
Yep! Big-O eats constant factors for lunch.
c.) (n + 7)(n - 2). Your answer: O(n^2).
Yep! You could expand this out into n^2 + 5n - 14 and from there drop the low-order terms to get O(n^2), or you could realize that n + 7 = O(n) and n - 2 = O(n) to see that this is the product of two terms that are each O(n).
d.) 100n + 5. Your answer: O(n).
Yep! Again, dropping constants and lower-order terms.
e.) 5n + 3n2. Your answer: O(n2).
Yep! Order is irrelevant; 5n is still a low-order term.
f.) The number of digits in 2n. Your answer: O(n).
This one is technically correct but is not a good bound. Remember that big-O notation gives an upper bound: you are correct that the number 2n has O(n) digits, but only in the sense that the digit count grows asymptotically no faster than n. To see why this bound isn't very good, look at the numbers 10, 100, 1000, 10000, and 100000. These numbers have 2, 3, 4, 5, and 6 digits, respectively. In other words, growing the number by a factor of ten only adds one digit. If the O(n) bound were tight, you'd expect the number of digits to grow by a factor of ten every time you made the number ten times bigger, which isn't what happens.
As a hint for this one, if a number has d digits, then it's between 10^(d-1) and 10^d - 1. That means the numeric value of a d-digit number is exponential as a function of d. So, if you start with a number of digits, the numeric value is exponentially larger. Try running this backwards: if you have a numeric value that you know is exponentially larger than the number of digits, what does that mean about the number of digits as a function of the numeric value?
g.) The number of times that n can be divided by 10 before dropping below 1.0. Your answer: O(n)
This one is also technically correct but not a very good bound. Take the number 100,000, for example. You can divide it by 10 six times before you drop below 1.0, but giving a bound of O(n) says that the answer grows linearly as a function of n, so doubling n should double the number of times you can divide by ten... but is that actually the case?
As a hint, the number of times you can divide a number by ten before it drops below 1.0 is closely related to the number of digits in that number. If you can figure out this problem, you'll figure out part (f), and vice versa.
Good luck!
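To make parts (f) and (g) concrete, here is a small Python sketch (the helper names are mine, for illustration) that counts the digits of 2n and the divisions by ten. Both columns grow by one each time n grows tenfold, i.e. like log10(n):

    def digits(m):
        """Decimal digits of a positive integer."""
        return len(str(m))

    def divisions_by_ten(n):
        """Times n can be divided by 10 before the result drops below 1.0."""
        count = 0
        x = float(n)
        while x >= 1.0:
            x /= 10.0
            count += 1
        return count

    for n in (10, 100, 1000, 10**4, 10**5):
        print(f"n = {n:>6}: digits(2n) = {digits(2 * n)}, divisions = {divisions_by_ten(n)}")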

What is the smallest value of n such that an algorithm whose running time is 100n^2 runs faster than an algorithm whose running time is 2^n? [closed]

What is the smallest value of n such that an algorithm whose running time is 100n^2 runs faster than an algorithm whose running time is 2^n on the same machine?
The Scope
Although I am interested in the answer, I am more interested in how to find the answer step by step (so that I can repeat the process to compare any two given algorithms if at all possible).
From the MIT Press Algorithms book
You want the values of n where 100 × n^2 is less than 2 × n.
Which is the solution of 100 × n^2 − 2 × n < 0, which happens to be 0 < n < 0.02.
[Plot of 100 × n^2 − 2 × n, dipping below zero between n = 0 and n = 0.02, omitted.]
EDIT:
The original question talked about 2 × n, not 2^n (see comments).
For 2^n, head to https://math.stackexchange.com/questions/182156/multiplying-exponents-solving-for-n
The answer is 15.
The first thing you have to know, is what running time means. If we're talking about algorithms theoretically, the running time of an algorithm is the number of steps (or the amount of time) it takes to finish depending on the size of the input (where the size of the input is for example the number of bits, but also other measures are sometimes considered). In this sense, the algorithm which requires the least number of steps is the fastest.
So in your two formulas, n is the size of the input, and 100 * n^2 and 2^n are the number of steps the two algorithms run if given an input of size n.
At first sight, the 2^n algorithm looks much faster than the 100 * n^2 algorithm. For example, for n = 4, 100*4^2 = 1600 and 2^4 = 16.
However, 2^n is an exponential function, whereas 100 * n^2 is a polynomial function. That means that when n is large enough, it will be the case that 2^n > 100 * n^2. So you have to solve the inequality 100 * n^2 < 2^n. This is already the case for a fairly small n, so you can just start evaluating the functions, starting at n = 5, and you will reach the answer in a few minutes.
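A few lines of Python can do that evaluation for you (a minimal sketch of the brute-force search):

    n = 1
    # step until the quadratic cost exceeds the exponential one
    while 100 * n ** 2 >= 2 ** n:
        n += 1
    print(n)  # 15: 100*15^2 = 22500 < 2^15 = 32768, but 100*14^2 = 19600 > 2^14 = 16384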

Several big O notations for the same function [closed]

Let f(n)= ( (n^2+2n)/n + 1/1000*(n^(3/2)))*log(n)
The time complexity for this function could be both O(n²*log(n)) and O(n^(3/2)*log(n))
How is this possible? I thought the dominating term here was n^2 (times log(n)), and therefore it should be O(n^2*log(n)) only. Big-O notation and time complexity measures feel so ambiguous.
Big O notation isn't that confusing. It defines an upper bound on the running time of an algorithm; hence, if O(f(n)) is a valid upper bound, every O(g(n)) such that g(n) > f(n) is also valid, since if your code runs in less than f(n), it will certainly run in less than g(n).
In your case, since O(n^2*log(n)) dominates O(n^(3/2)*log(n)), it's a valid upper bound too, even if it's less strict. Furthermore, you could say that your algorithm is O(n^3). The question is: which of those big-O notations gives us more information about the algorithm? The obvious answer is the tightest one, and that's the reason we usually state it.
To make things clear: let's say you can throw a ball 10 m up in the air. Then you can say that you can't throw it higher than 10 m, OR you could say you can't throw it higher than 15 m. The fact that the first one is a stricter upper bound doesn't make the second one a false statement.
"Big O notation" being applied on the sum always leaves dominant (the biggest ones) terms only. In case of one independent variable one term only will survive. In your case
O(n^2*log(n) + n^(3/2)*log(n)) = O(n^2*log(n))
since 1-st term is bigger than the 2-nd:
lim(term1/term2) = lim(n^2*log(n) / (n^(3/2)*log(n))) = lim(n^(1/2)) = inf
but it seems that you made an arithmetic error in your computations:
(n^2+2n)/n = n + 2, not n^2 + 2 * n
in that case the sum becomes
O(n*log(n) + 2*log(n) + n^(3/2)*log(n))
and the last term, n^(3/2)*log(n), is the biggest one, so
O(n*log(n) + 2*log(n) + n^(3/2)*log(n)) = O(n^(3/2)*log(n))
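A quick numeric spot-check (a sketch) confirms that the n^(3/2)*log(n) term dominates the (n + 2)*log(n) term once n is large:

    import math

    # Ratio of the n^(3/2) term to the (n + 2) term; it grows like sqrt(n)/1000.
    for k in (2, 4, 6, 8, 10):
        n = 10.0 ** k
        t1 = (n + 2) * math.log(n)
        t2 = (n ** 1.5 / 1000) * math.log(n)
        print(f"n = 1e{k:2d}: term2 / term1 = {t2 / t1:.3g}")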

Understanding the big O notation [closed]

Some standard books on Algorithms produce this:
0 ≤ f(n) ≤ c⋅g(n) for all n > n0
While defining big-O, can anyone explain to me what this means, using a strong example which can help me to visualize and understand big-O more precisely?
Assume you have a function f(n) and you are trying to classify it - is it a big O of some other function g(n).
The definition basically says that f(n) is in O(g(n)) if there exist two constants c, N such that
f(n) <= c * g(n) for each n > N
Now, let's understand what it means.
Start with the n > N part. It means we do not "care" about low values of n, only about high ones; if some (finite number of) low values fail the criterion, we can silently ignore them by choosing N larger than all of them.
Have a look at the following example:
Though for low values of n we have n^2 < 10n*log(n) (taking log base 10), n^2 quickly catches up, and after N = 10 we get that for all n > 10 the claim 10n*log(n) < n^2 holds; thus 10n*log(n) is in O(n^2).
The constant c means we can also tolerate a constant multiplicative factor and still accept the behavior as desired. It is useful, for example, to show that 5n is O(n): without it we could never find an N such that for each n > N, 5n < n, but with the constant c we can use c = 6, show 5n < 6n, and conclude that 5n is in O(n).
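Here is a minimal Python sketch that checks the definition empirically for both examples above (bounded is a hypothetical helper name; a finite-range spot-check, not a proof):

    import math

    def bounded(f, g, c, N, upto=10**5):
        """Spot-check f(n) <= c*g(n) for all N < n <= upto (an empirical
        check of the definition, not a proof)."""
        return all(f(n) <= c * g(n) for n in range(N + 1, upto + 1))

    # 5n is O(n) with c = 6, N = 0, as in the example above.
    print(bounded(lambda n: 5 * n, lambda n: n, c=6, N=0))  # True

    # 10n*log10(n) is O(n^2) with c = 1, N = 10, matching the claim above.
    print(bounded(lambda n: 10 * n * math.log10(n), lambda n: n * n, c=1, N=10))  # True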
This question is a math problem, not an algorithmic one.
You can find a definition and a good example here: https://math.stackexchange.com/questions/259063/big-o-interpretation
As @Thomas pointed out, Wikipedia also has a good article on this: http://en.wikipedia.org/wiki/Big_O_notation
If you need more details, try to ask a more specific question.
