Big Oh Notation Confusion [closed] - complexity-theory

I'm not sure if this is a problem with my understanding, but this aspect of Big Oh notation seems strange to me. Say you have two algorithms: the first performs n^2 operations and the second performs n^2 - n operations. Because the quadratic term dominates, both algorithms have complexity O(n^2), yet the second algorithm will always be better than the first. That seems weird to me; Big Oh notation makes it seem like they are the same.

Big O is not about the time it takes to execute your algorithm; it is about how well the algorithm scales when presented with large data sets (large values of n).
When presented with a large data set, the n^2 term will quickly overshadow any linear term, so the linear term becomes insignificant.

When n grows towards infinity, n^2 will be much greater than n, so subtracting n won't make any significant difference to the outcome.
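A quick numeric check (a minimal sketch in Python, not part of either answer) makes this concrete: the relative difference between n^2 and n^2 - n vanishes as n grows.

    # Minimal sketch: the relative gap between n^2 and n^2 - n vanishes as n grows.
    for n in [10, 100, 10_000, 1_000_000]:
        ratio = (n * n - n) / (n * n)  # fraction of the n^2 operation count actually performed
        print(f"n={n:>9}: (n^2 - n) / n^2 = {ratio:.6f}")

By n = 1,000,000 the two operation counts agree to six decimal places, which is why big-O puts them in the same class.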

Related

Which algorithm would be the faster algorithm? [closed]

As per Big O notation, if the time complexity of one algorithm is O(2^n) and the other is O(n^1000), which one would be faster?
How to recognize the overall behavior in non-obvious cases: take the logarithm of both functions.
(Sometimes we can also take the ratio of the functions and evaluate the limit of that ratio for large n; here that approach is not convenient.)
log(2^n) = n*log(2)
log(n^1000) = 1000*log(n)
The first result is a straight line with positive slope. The second one's plot is a concave curve (its second derivative is negative), so the first function becomes larger at some large value of n.
[Plot: the line n*log(2) against the concave curve 1000*log(n); the line eventually crosses above the curve.]
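To see where that crossover actually happens, here is a small sketch (my addition, not from the answer) comparing the two logarithms above; numerically, 2^n overtakes n^1000 at roughly n ≈ 13,700.

    import math

    # Sketch: compare log(2^n) = n*log(2) with log(n^1000) = 1000*log(n).
    # Wherever n*log(2) exceeds 1000*log(n), 2^n exceeds n^1000.
    for n in [10, 1_000, 10_000, 13_747, 20_000]:
        exp_log = n * math.log(2)
        poly_log = 1000 * math.log(n)
        winner = "2^n" if exp_log > poly_log else "n^1000"
        print(f"n={n:>6}: n*ln(2) = {exp_log:9.1f}, 1000*ln(n) = {poly_log:9.1f} -> {winner} larger")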
O(n^1000) is in the same class as O(n^2) and O(n^777777777), namely polynomial time, whereas O(2^n) is exponential time, which is far slower than any polynomial for large n.
https://www.bigocheatsheet.com/

asymptotic bounding and control structures [closed]

So far in my study of algorithms, I have assumed that asymptotic bounds correspond directly to patterns in control structures.
So if an algorithm has n^2 time complexity, I was thinking that this automatically means it must use nested loops. But I see that this is not always correct (and the same holds for other time complexities, not just quadratic).
How should I approach this relationship between time complexity and control structure?
Thank you
Rice's theorem is a significant obstacle to making general statements about analyzing running time.
In practice there's a repertoire of techniques that get applied. A lot of algorithms have a nested-loop structure that's easy to analyze. When the bounds of one of those loops are data-dependent, you might need to do an amortized analysis. Divide and conquer algorithms can often be analyzed with the Master Theorem or Akra–Bazzi.
In some cases, though, the running time analysis can be very subtle. Take union-find, for example: getting the inverse Ackermann running time bound requires pages of proof. And then for things like the Collatz conjecture we have no idea how to even get a finite bound.
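Two tiny sketches (my own illustrations, not from the answer) show why control structure alone does not determine the bound: a single loop can be quadratic, and nested loops can be sub-quadratic.

    def single_loop_quadratic(n):
        # One loop, but inserting at the front of a Python list shifts every
        # existing element, so the total work is 1 + 2 + ... + n = O(n^2).
        result = []
        for i in range(n):
            result.insert(0, i)
        return result

    def nested_loops_linearithmic(n):
        # Two nested loops, but the inner loop halves j each time, so it runs
        # only O(log n) iterations; the total is O(n log n), not O(n^2).
        count = 0
        for _ in range(n):
            j = n
            while j > 1:
                j //= 2
                count += 1
        return count

What matters is the total amount of work performed, not the nesting depth of the loops.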

Time Complexity and Big O Notation [closed]

I am stuck on a homework question. The question is as follows.
Consider four programs - A, B, C, and D - that have the following performances.
A: O(log n)
B: O(n)
C: O(n^2)
D: O(2^n)
If each program requires 10 seconds to solve a problem of size 1000, estimate the time required by each program when the size of its problem increases to 2000.
I am pretty sure that O(n) would just double to 20 seconds, since we are doubling the size; this would correspond to a loop in Java that iterates n times, and doubling n doubles the running time. But I am completely lost on numbers 1, 3, and 4.
I am not looking for direct answers to this question, but rather for someone to dumb down the way I can arrive at the answer. Maybe by explaining what each of these Big O notations is actually doing on the back end. If I understood the way that the algorithm is calculated and where all the elements fit into some sort of equation to solve for time, that would be awesome. Thank you in advance.
I have spent weeks combing through the textbook, but it is all written in a very complicated manner that I am having a hard time digesting. Videos online haven't been much help either.
Let's take an example (one that is not in your list): O(n^3).
The ratio between the sizes of your problems is 2000/1000 = 2. Big-O notation gives you an estimate of how the running time scales: if a problem of size n takes time proportional to n^3, then a problem of size 2n takes time proportional to (2n)^3 = 8n^3, that is, 8 times more than the original task.
I hope that helps.
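Continuing the O(n^3) example as a sketch (reusing the 10-second baseline from the question, but for the cube case rather than your four programs):

    # Sketch: scaling estimate for a hypothetical O(n^3) program.
    n1, n2 = 1000, 2000
    t1 = 10.0                   # seconds measured at n = n1
    factor = (n2 / n1) ** 3     # (2n)^3 / n^3 = 8
    print(f"Estimated time at n = {n2}: {t1 * factor:.0f} seconds")  # 80 seconds

The same ratio trick applies to each of your four growth functions: form T(2000)/T(1000) from the growth function and multiply the measured 10 seconds by that factor.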

Prove n^2 + 5 log(n) = O(n^2) [closed]

I am trying to prove that n^2 + 5 log(n) = O(n^2), O representing big-O notation. I am not great with proofs and any help would be appreciated.
Informally, we take big-O to mean an upper bound given by the fastest-growing term as n grows arbitrarily large. Since n^2 grows much faster than log(n), the claim should be intuitively clear.
More formally, f(n) = O(g(n)) means there exist constants c > 0 and n0 such that f(n) <= c*g(n) for all n >= n0; equivalently, it is enough that the ratio f(n)/g(n) stays bounded as n approaches infinity. Here lim(n->inf)((n^2 + 5*log(n))/n^2) = 1 + lim(n->inf)(5*log(n)/n^2) = 1, which is certainly bounded, so n^2 + 5*log(n) = O(n^2).
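If you want explicit constants rather than a limit argument, here is a sketch of a direct proof (c = 6 and n0 = 1 are one workable choice of witnesses):

    \text{For all } n \ge 1,\quad \log n \le n \le n^2,
    \quad\text{so}\quad n^2 + 5\log n \;\le\; n^2 + 5n^2 \;=\; 6n^2 .

Taking c = 6 and n0 = 1 in the definition then gives n^2 + 5*log(n) <= c*n^2 for all n >= n0, which is exactly the statement n^2 + 5*log(n) = O(n^2).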

Log-log plot/graph of algorithm time complexity [closed]

I just wrote the quick and merge sort algorithms and I want to make a log-log plot of their run time vs size of array to sort.
As I have never done this, my question is: does it matter if I choose arbitrary numbers for the array length (the size of the input), or should I follow a pattern (something like 10^3, 10^4, 10^5, etc.)?
In general, you need to choose array lengths, for each method, that are large enough to display the expected O(n log n) or O(n^2) type behavior.
If your n is too small, the run time may be dominated by lower-order terms; for example, an algorithm with run time = 1000000*n + n^2 will look to be ~O(n) for n well below 1000000. For most algorithms this small-n behavior means that your log-log plot will initially be curved.
On the other hand, if your n is too large your algorithm may take too long to complete.
The best compromise may be to start with small n and time for n, 2n, 4n, ..., or n, 3n, 9n, ..., and keep increasing until you can clearly see the log-log plots asymptotically approaching straight lines.
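As a concrete starting point, here is a minimal timing sketch (my addition; the builtin `sorted` is a stand-in for whichever quicksort or mergesort function you wrote). It uses geometrically spaced sizes, which land at evenly spaced points on a log-log plot, and prints the log-log coordinates; the slope of the line they approach estimates the exponent.

    import math
    import random
    import time

    def best_time(sort_fn, n, trials=3):
        # Take the best of a few trials to reduce timing noise.
        best = float("inf")
        for _ in range(trials):
            data = [random.random() for _ in range(n)]
            start = time.perf_counter()
            sort_fn(data)
            best = min(best, time.perf_counter() - start)
        return best

    for n in [1000 * 2**k for k in range(8)]:  # 1000, 2000, ..., 128000
        t = best_time(sorted, n)  # replace `sorted` with your own sort
        print(f"n={n:>7}  t={t:.6f}s  "
              f"log10(n)={math.log10(n):.3f}  log10(t)={math.log10(t):.3f}")

A slope near 2 indicates quadratic behavior; a slope near 1 (bending slightly upward from the log factor) indicates n log n.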
