How to calculate growth rate using Big O? [closed]

I am just starting to learn about Big O notation and had a question about how to calculate the growth rate of an algorithm. Suppose I had an algorithm with O(√n log n) time complexity, and for n = 10 my algorithm takes 2 seconds. If I want to know how long it would take with n = 100, do I set up a ratio where 2/x = (√10 log 10)/(√100 log 100) and then solve for x? Or can I just say that my input is 10 times larger, so it will take 2*(√10 log 10) seconds?

The first method is right. Big O doesn't care about constant multiples, so you can determine the hidden constant c by solving for it with algebra:
c * (√10 * log(10)) = 2
c = 2 / (√10 * log(10))
x = c * (√100 * log(100)) = 2 * (√100 * log(100)) / (√10 * log(10)) ≈ 12.65 seconds
However, keep in mind that big O also doesn't care about 'smaller' terms, so constant overheads and lower-order terms make this calculation only asymptotically accurate. For example, an algorithm whose running time is governed by
√n log n + 1/n = t
is still O(√n log n), and the extra 1/n term will make your extrapolation less accurate for small values of n.
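If you want to check this numerically, here is a minimal Java sketch of the same extrapolation; the cost model is the one hypothesized in the question, and the class and variable names are just illustrative:

```java
public class GrowthEstimate {
    // Hypothesized cost model from the question: t(n) = c * sqrt(n) * log(n).
    static double model(double n) {
        return Math.sqrt(n) * Math.log(n);
    }

    public static void main(String[] args) {
        double knownN = 10, knownTime = 2.0;   // measured: n = 10 takes 2 s
        double c = knownTime / model(knownN);  // solve for the hidden constant
        double predicted = c * model(100);     // extrapolate to n = 100
        System.out.printf("Predicted time for n = 100: %.2f s%n", predicted);
        // Prints roughly 12.65 s, not 2 * sqrt(10) * log(10).
    }
}
```

Note that the answer is the same whatever base of logarithm you use, because the base cancels in the ratio.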

Related

N*2^N vs N*N Time complexity [closed]

Which time complexity is better, N*(2^N) or N^2, and why?
N*(2^N) is exponential.
If you take N = 10, for example, you get 10 * 2^10 = 10240.
N^2 is merely polynomial.
If you take N = 10, for example, you get 100.
Exponential is worse than polynomial for large N, and even for modest N in your case. To see this intuitively, imagine growing N by 1. In the polynomial case, the result grows by a factor of ((N+1) / N)^2, which approaches 1 as N grows. In the exponential case, growing N by 1 at least doubles the result.
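To see the gap concretely, here is a small Java sketch tabulating both functions (my own illustration, not from the original post):

```java
public class CompareGrowth {
    public static void main(String[] args) {
        System.out.println(" N      N^2        N*2^N");
        for (int n = 2; n <= 20; n += 2) {
            long poly = (long) n * n;
            long expo = (long) n * (1L << n);  // N * 2^N, safe for small N
            System.out.printf("%2d %8d %12d%n", n, poly, expo);
        }
        // The exponential column overtakes the polynomial one almost
        // immediately and then dwarfs it: at N = 20 it is 20971520 vs 400.
    }
}
```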

How to prove this: log n = O(n^c) [closed]

In a data structure textbook, the author uses this fact to argue that O(log^c(n)) algorithms are efficient because the complexity grows so slowly it is nearly constant. I don't quite understand the equation.
The intuitive reason why this is true is that log is the inverse of e^x. Just as the exponential function grows faster than x^k for any k, its inverse must grow slower than x^(1/k) for any k. (Draw pictures and flip the x and y axes to get this intuition.)
However, intuition does not constitute a formal proof.
So first, convince yourself that log(log(n)) = o(log(n)).
From that, for any given c > 0, there is an N such that for all n > N, log(log(n)) < c log(n). Now apply e^x to both sides (it is increasing, so the inequality is preserved) and you find that for sufficiently large n, log(n) < n^c. Therefore log(n) = O(n^c) for any given c > 0.
But that is big-O, and we wanted little-o. Well, log(n) = O(n^(c/2)), which means that log(n) is actually in o(n^c). And now we're done.
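For reference, here is the chain of implications as a LaTeX snippet (my own restatement of the argument above, not from the textbook):

```latex
% Fix any c > 0. Since \log\log n = o(\log n), there is an N such that
%   \log\log n < (c/2) \log n  for all n > N.
% Applying the increasing function e^x to both sides gives
%   \log n < n^{c/2}           for all n > N.
\[
  \log\log n < \tfrac{c}{2}\log n
  \;\Longrightarrow\;
  \log n < n^{c/2}
  \;\Longrightarrow\;
  \log n = O\!\left(n^{c/2}\right) \subseteq o\!\left(n^{c}\right).
\]
```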

Time Complexity and Big O Notation [closed]

I am stuck on a homework question. The question is as follows.
Consider four programs - A, B, C, and D - that have the following performances.
A: O(log n)
B: O(n)
C: O(n^2)
D: O(2^n)
If each program requires 10 seconds to solve a problem of size 1000, estimate the time required by each program when the size of its problem increases to 2000.
I am pretty sure that O(n) would just double to 20 seconds, since we are doubling the size; this would represent a loop in Java that iterates n times, so doubling n doubles the running time. But I am completely lost on programs A, C, and D.
I am not looking for direct answers to this question, but rather for someone to dumb down the way I can arrive at the answer. Maybe by explaining what each of these Big O notations is actually doing on the back end. If I understood the way that the algorithm is calculated and where all the elements fit into some sort of equation to solve for time, that would be awesome. Thank you in advance.
I have spent weeks combing through the textbook, but it is all written in a very complicated manner that I am having a hard time digesting. Videos online haven't been much help either.
Let's take an example that isn't in your list: O(n^3).
The ratio between the sizes of your problems is 2000/1000 = 2. Big-O notation gives you an estimate: if a problem of size n takes time proportional to n^3, then a problem of size 2n takes time proportional to (2n)^3 = 8n^3, that is, 8 times longer than the original task.
I hope that helps.
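Here is a minimal Java sketch of that scaling argument, sticking with the O(n^3) example so it doesn't hand over the homework answers; the helper name estimate and the choice of DoubleUnaryOperator are my own:

```java
import java.util.function.DoubleUnaryOperator;

public class ScalingEstimate {
    // Estimate the time at newN, given a measured time at oldN and a
    // hypothesized dominant growth function f (here: f(n) = n^3).
    static double estimate(DoubleUnaryOperator f,
                           double oldN, double oldTime, double newN) {
        return oldTime * f.applyAsDouble(newN) / f.applyAsDouble(oldN);
    }

    public static void main(String[] args) {
        DoubleUnaryOperator cubic = n -> n * n * n;
        // Measured 10 s at n = 1000; estimate for n = 2000:
        double t = estimate(cubic, 1000, 10.0, 2000);
        System.out.printf("O(n^3) estimate at n = 2000: %.0f s%n", t); // 80 s
    }
}
```

Plugging Math.log(n), n, n * n, or a power of two in for f gives the estimates for programs A through D. Be careful with the last one: the ratio 2^2000 / 2^1000 = 2^1000 is astronomically large.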

Prove n^2 + 5 log(n) = O(n^2) [closed]

I am trying to prove that n^2 + 5 log(n) = O(n^2), O representing big-O notation. I am not great with proofs and any help would be appreciated.
Informally, we take big-O to mean the fastest growing term as n grows arbitrarily large. Since n^2 grows much faster than log(n), that should be clear.
More formally, f(n) = O(g(n)) whenever the ratio f(n)/g(n) stays bounded as n approaches infinity; a finite limit is enough, and it does not have to equal 1 (although here it does). So it suffices to show that lim(n->inf)((n^2 + 5 log(n))/n^2) = 1.
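If you need the constants-and-threshold version of the proof, here is a sketch in LaTeX (my own restatement, using the standard fact that log(n) ≤ n for n ≥ 1):

```latex
% Since \log n \le n \le n^2 for all n \ge 1, we have
%   n^2 + 5\log n \le n^2 + 5n^2 = 6n^2.
% So the definition of big-O is satisfied with c = 6 and n_0 = 1:
\[
  n^2 + 5\log n \;\le\; 6n^2 \quad \text{for all } n \ge 1
  \;\Longrightarrow\;
  n^2 + 5\log n = O(n^2).
\]
```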

Log-log plot/graph of algorithm time complexity [closed]

I just wrote the quick and merge sort algorithms and I want to make a log-log plot of their run time vs size of array to sort.
As I have never done this, my question is: does it matter if I choose arbitrary numbers for the array length (size of input), or should I follow a pattern (something like 10^3, 10^4, 10^5, etc.)?
In general, you need to choose array lengths, for each method, that are large enough to display the expected O(n log n) or O(n^2) behavior.
If your n is too small, the run time may be dominated by other terms. For example, an algorithm with run time = 1000000*n + n^2 will look ~O(n) until n approaches 10^6, where the quadratic term takes over. For most algorithms this small-n behavior means that your log-log plot will initially be curved.
On the other hand, if your n is too large your algorithm may take too long to complete.
The best compromise may be to start with a small n and time runs for n, 2n, 4n, ..., or n, 3n, 9n, ..., and keep increasing until you can clearly see the log-log plots approaching straight lines.
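Here is a minimal Java harness for collecting those data points with geometrically spaced sizes; Arrays.sort stands in for your own quicksort or mergesort, and the size range is an arbitrary assumption:

```java
import java.util.Arrays;
import java.util.Random;

public class LogLogTiming {
    public static void main(String[] args) {
        Random rng = new Random(42);
        // Geometric spacing (n, 2n, 4n, ...) gives evenly spaced
        // points on a log-log plot.
        for (int n = 1 << 10; n <= 1 << 22; n <<= 1) {
            int[] data = rng.ints(n).toArray();
            long start = System.nanoTime();
            Arrays.sort(data);               // replace with your own sort
            long elapsed = System.nanoTime() - start;
            // Emit log10(n) and log10(seconds) for plotting.
            System.out.printf("%.3f %.3f%n",
                    Math.log10(n), Math.log10(elapsed / 1e9));
        }
    }
}
```

You may also want a few warm-up runs before timing, so JIT compilation doesn't distort the small-n points.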
