When would O(n*n) be quicker than O(log n)? - performance

I have this question on a practice test, and I'm not sure when code with O(n*n) complexity would run quicker than code with O(log n) complexity.

Big-O notation gives upper bounds, nothing more.
If algorithm A is O(n^2), it could require exactly n^2 steps.
If algorithm B is O(log n), it could require exactly 10000 * log n steps.
Algorithm A is a lot faster than algorithm B for small n.
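A minimal sketch of that comparison (the step counts are the hypothetical ones above, not measurements): it searches for the point where the n^2 algorithm stops being the cheaper one.

#include <cmath>
#include <cstdio>

// Hypothetical step counts from the answer above: algorithm A takes exactly
// n^2 steps, algorithm B takes exactly 10000 * log2(n) steps.
int main() {
    for (long n = 2; n <= 1000; ++n) {
        double stepsA = static_cast<double>(n) * n;
        double stepsB = 10000.0 * std::log2(static_cast<double>(n));
        if (stepsA > stepsB) {
            std::printf("B becomes cheaper than A at n = %ld\n", n);
            break;  // for every n below this, the O(n^2) algorithm wins
        }
    }
    return 0;
}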

Remember that Big-O is an upper bound. It's quite possible that, because of constant factors, the O(n^2) algorithm runs faster than the O(log n) one for smaller input sizes. It is even possible that the O(n^2) algorithm is faster in most cases, and only earns its n^2 label because certain input sets force it to do a lot of work.

I am retracting my previous answer of "never" because technically it is possible for an O(n*n) algorithm to be faster than an O(log n) algorithm, though it is highly improbable. See my discussion with Jesus under his answer for more details. The graph below shows that an algorithm that takes exactly log n time is always faster than an algorithm that takes exactly n*n time.

Related

Are O(n log n) algorithms always better than all O(n^2) algorithms?

When trying to properly understand Big-O, I am wondering whether it's true that O(n log n) algorithms are always better than all O(n^2) algorithms.
Are there any particular situations where O(n^2) would be better?
I've read multiple times that in sorting, for example, an O(n^2) algorithm like bubble sort can be particularly quick when the data is almost sorted, so would it be quicker than an O(n log n) algorithm, such as merge sort, in this case?
No, O(n log n) algorithms are not always better than O(n^2) ones.
The Big-O notation describes an upper bound of the asymptotic behavior of an algorithm, i.e. for n that tends towards infinity.
Given this definition, there are a few aspects to consider:
The Big-O notation is an upper bound on the algorithm's complexity, meaning that for some inputs (like the one you mentioned about sorting algorithms) an algorithm with a worse Big-O complexity may actually perform better (bubble sort runs in O(n) for an already sorted array, as the sketch after this list illustrates, while mergesort and quicksort always take at least O(n log n));
The Big-O notation only describes the complexity class, hiding all the constant factors that may be relevant in real-world scenarios. For example, an algorithm with cost 1000000x, which is in class O(n), performs worse than an algorithm with cost 0.5x^2 (class O(n^2)) for inputs smaller than 2000000. Basically, the Big-O notation tells you that for a big enough input n the O(n) algorithm will perform better than the O(n^2) one, but if you work with small inputs you may still prefer the latter.
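As a sketch of the bubble-sort case mentioned in the first point (an illustration, not taken from any library): with an early-exit flag, bubble sort finishes after a single O(n) pass over input that is already sorted, even though its worst case is O(n^2).

#include <utility>
#include <vector>

// Bubble sort with an early-exit flag. On an already sorted array the inner
// loop makes one pass, swaps nothing, and the function returns after O(n)
// comparisons; the worst case is still O(n^2).
void bubble_sort(std::vector<int>& a) {
    for (std::size_t end = a.size(); end > 1; --end) {
        bool swapped = false;
        for (std::size_t i = 1; i < end; ++i) {
            if (a[i - 1] > a[i]) {
                std::swap(a[i - 1], a[i]);
                swapped = true;
            }
        }
        if (!swapped) return;  // no swaps in a full pass: already sorted
    }
}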
O(n log n) is better than O(n^2) asymptotically.
Big-O, Big-Theta, Big-Omega, all those measure the asymptotic behavior of functions, i.e., how functions behave when their argument goes toward a certain limit.
O(n log n) functions grow more slowly than O(n^2) functions; that's what Big-O notation essentially says. However, this does not mean that O(n log n) is always faster. It merely means that, as n keeps rising, there is some point past which the O(n log n) function is always cheaper.
Picture the usual illustration of f(n) = O(g(n)): there is a range where f(n) is actually more costly than g(n), even though it is bounded asymptotically by g(n). However, when talking about limits, or asymptotics for that matter, f(n) outperforms g(n) "in the long run," so to speak.
In addition to #cadaniluk's answer:
If you restrict the inputs to the algorithms to a very special type, this can also affect the running time. E.g. if you run sorting algorithms only on already sorted lists, BubbleSort will run in linear time, but MergeSort will still need O(n log n).
There are also algorithms that have a bad worst-case complexity, but a good average case complexity. This means that there are bad input instances such that the algorithm is slow, but in total it is very unlikely that you have such a case.
Also, never forget that Big-O notation hides constants and additive lower-order terms. So an algorithm with worst-case complexity O(n log n) could actually take 2^10000 * n * log n steps, and your O(n^2) algorithm could actually run in (1/2^1000) * n^2 steps. So for n < 2^10000 you really want to use the "slower" algorithm.
Here is a practical example.
The GCC implementations of sorting functions have O(n log n) complexity. Still, they employ O(n^2) algorithms as soon as the size of the part being sorted is less than some small constant.
That's because for small sizes, they tend to be faster in practice.
See here for some of the internal implementation.
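The rough shape of that trick looks like the sketch below. This is not the actual libstdc++ code (see the link in the answer for that); it is just an illustration of handing small ranges to an O(n^2) insertion sort inside an O(n log n) sort, with an arbitrary cutoff of 16 elements.

#include <algorithm>
#include <vector>

// Small ranges go to insertion sort, which is O(n^2) on paper but fast in
// practice for tiny inputs; larger ranges keep the O(n log n) recursion.
const std::size_t kCutoff = 16;  // illustrative threshold, not GCC's value

void insertion_sort(std::vector<int>& a, std::size_t lo, std::size_t hi) {
    for (std::size_t i = lo + 1; i < hi; ++i) {
        int key = a[i];
        std::size_t j = i;
        while (j > lo && a[j - 1] > key) {
            a[j] = a[j - 1];
            --j;
        }
        a[j] = key;
    }
}

void hybrid_sort(std::vector<int>& a, std::size_t lo, std::size_t hi) {
    if (hi - lo <= kCutoff) {
        insertion_sort(a, lo, hi);  // small part: the "slower" algorithm wins
        return;
    }
    std::size_t mid = lo + (hi - lo) / 2;
    hybrid_sort(a, lo, mid);
    hybrid_sort(a, mid, hi);
    std::inplace_merge(a.begin() + lo, a.begin() + mid, a.begin() + hi);
}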

Is n or nlog(n) better than constant or logarithmic time?

In the Princeton tutorial on Coursera the lecturer explains the common order-of-growth functions that are encountered. He says that linear and linearithmic running times are "what we strive" for and his reasoning was that as the input size increases so too does the running time. I think this is where he made a mistake because I have previously heard him refer to a linear order-of-growth as unsatisfactory for an efficient algorithm.
While he was speaking he also showed a chart that plotted the different running times - constant and logarithmic running times looked to be more efficient. So was this a mistake or is this true?
It is a mistake when taken in the context that O(n) and O(n log n) functions have better complexity than O(1) and O(log n) functions. When looking at typical cases of complexity in big-O notation:
O(1) < O(log n) < O(n) < O(n log n) < O(n^2)
Notice that this doesn't necessarily mean that they will always be better performance-wise - we could have an O(1) function that takes a long time to execute even though its complexity is unaffected by element count. Such a function would look better in big O notation than an O(log n) function, but could actually perform worse in practice.
Generally speaking: a function with lower complexity (in big O notation) will outperform a function with greater complexity (in big O notation) when n is sufficiently high.
You're missing the broader context in which those statements must have been made. Different kinds of problems have different demands, and often even have theoretical lower bounds on how much work is absolutely necessary to solve them, no matter the means.
For operations like sorting or scanning every element of a simple collection, there is a hard lower bound of the number of elements in the collection, because the output depends on every element of the input. [1] Thus, O(n) or O(n*log(n)) are the best one can do.
For other kinds of operations, like accessing a single element of a hash table or linked list, or searching in a sorted set, the algorithm needn't examine all of the input. In those settings, an O(n) operation would be dreadfully slow.
[1] Others will note that sorting by comparisons also has an n*log(n) lower bound, from information-theoretic arguments. There are non-comparison based sorting algorithms that can beat this, for some types of input.
Generally speaking, what we strive for is the best we can manage to do. But depending on what we're doing, that might be O(1), O(log log N), O(log N), O(N), O(N log N), O(N^2), O(N^3), or (for certain algorithms) perhaps O(N!) or even O(2^N).
Just for example, when you're dealing with searching in a sorted collection, binary search borders on trivial and gives O(log N) complexity. If the distribution of items in the collection is reasonably predictable, we can typically do even better--around O(log log N). Knowing that, an algorithm that was O(N) or O(N^2) (for a couple of obvious examples) would probably be pretty disappointing.
On the other hand, sorting is generally quite a bit higher complexity--the "good" algorithms manage O(N log N), and the poorer ones are typically around O(N^2). Therefore, for sorting, an O(N) algorithm is actually very good (in fact, only possible for rather constrained types of inputs), and we can pretty much count on the fact that something like O(log log N) simply isn't possible.
Going even further, we'd be happy to manage a matrix multiplication in only O(N^2) instead of the usual O(N^3). We'd be ecstatic to get optimum, reproducible answers to the traveling salesman problem or subset sum problem in only O(N^3), given that optimal solutions to these normally require O(N!).
Algorithms with sublinear behavior like O(1) or O(Log(N)) are special in that they do not need to look at all the elements. In a way this is a fallacy, because if there really are N elements, it will take O(N) just to read or compute them.
Sublinear algorithms are often possible only after some preprocessing has been performed. Think of binary search in a sorted table, taking O(Log(N)). If the data is initially unsorted, it will cost O(N Log(N)) to sort it first. The cost of sorting can be amortized if you perform many searches, say K of them, on the same data set. Indeed, without the sort, the cost of the searches will be O(K N), and with pre-sorting O(N Log(N) + K Log(N)). You win if K >> Log(N).
This said, when no preprocessing is allowed, O(N) behavior is ideal, and O(N Log(N)) is quite comfortable as well (for a million elements, Lg(N) is only 20). You start screaming with O(N²) and worse.
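A small sketch of that trade-off (the function and data are made up for illustration): sort once for O(N Log(N)), then answer each of the K queries with an O(Log(N)) binary search instead of an O(N) scan.

#include <algorithm>
#include <vector>

// Answer K membership queries against the same data set. Sorting up front
// costs O(N log N); each query is then O(log N), so the total is
// O(N log N + K log N) instead of the O(K * N) cost of K linear scans.
int count_present(std::vector<int> data, const std::vector<int>& queries) {
    std::sort(data.begin(), data.end());  // one-time preprocessing
    int found = 0;
    for (int q : queries) {
        if (std::binary_search(data.begin(), data.end(), q)) {
            ++found;  // O(log N) per query
        }
    }
    return found;
}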
He said those algorithms are what we strive for, which is generally true. Many algorithms cannot possibly be improved better than logarithmic or linear time, and while constant time would be better in a perfect world, it's often unattainable.
constant time is always better because the time (or space) complexity doesn't depend on the problem size... isn't it a great feature? :-)
then we have O(N), and then O(N log(N))
did you know? problems with constant time complexity exist!
e.g.
let A[N] be an array of N integer values, with N > 3. Find an algorithm to tell whether the sum of the first three elements is positive or negative.
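A sketch of a solution to that toy problem (the function name is just an illustration): only three elements are ever read, so the running time is O(1) regardless of N.

#include <vector>

// Only the first three elements are touched, so the cost does not grow
// with N at all: this is a constant-time algorithm.
bool first_three_sum_is_positive(const std::vector<int>& a) {
    return a[0] + a[1] + a[2] > 0;  // assumes N > 3, as the problem states
}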
What we strive for is efficiency, in the sense of designing algorithms with a time (or space) complexity that does not exceed their theoretical lower bound.
For instance, using comparison-based algorithms, you can't find a value in a sorted array faster than Omega(Log(N)), and you cannot sort an array faster than Omega(N Log(N)) - in the worst case.
Thus, binary search O(Log(N)) and Heapsort O(N Log(N)) are efficient algorithms, while linear search O(N) and Bubblesort O(N²) are not.
The lower bound depends on the problem to be solved, not on the algorithm.
Yes, constant time, i.e. O(1), is better than linear time O(n), because the former does not depend on the input size of the problem. The order, from best to worst, is O(1) > O(log n) > O(n) > O(n log n).
Linear or linearithmic time is what we strive for because going for O(1) might not be realistic, since every sorting algorithm needs at least some comparisons; the professor shows this with his decision-tree comparison analysis, where he sorts three elements a, b, c and proves a lower bound of n log n. Check his "Complexity of Sorting" in the Mergesort lecture.

N log(N) or N clarification

Will performing an O(log N) algorithm N times give O(N log(N))? Or is it O(N)?
e.g. Inserting N elements into a self-balancing tree.
for (int i = 0; i < N; i++) {
    insert(itemsToInsert[i]);
}
It's definitely O(N log(N)). It COULD also be O(N), if you could show that the sequence of calls, as a total, grows slow enough (because while SOME calls are O(log N), enough others are fast enough, say O(1), to bring the total down).
Remember: O(f) means the algorithm is no SLOWER than f, but it can be faster (even if just in certain cases).
N times O(log(N)) leads to O(N log(N)).
Big-O notation describes the asymptotic behavior of the algorithm. Here the cost of each additional step is O(log N); for an O(N) algorithm, the cost of each additional step would have to be O(1), and asymptotically the bound on the cost function would be a straight line.
Therefore O(N) is too low of a bound; O(N log N) seems about right.
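For concreteness, a sketch of the situation in the question, using std::set as the self-balancing tree (the container choice is just for illustration):

#include <set>
#include <vector>

// Inserting N items one by one. Each insert costs O(log(current size)),
// so the total is log 1 + log 2 + ... + log N = Theta(N log N).
void insert_all(const std::vector<int>& itemsToInsert) {
    std::set<int> tree;
    for (int item : itemsToInsert) {
        tree.insert(item);
    }
}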
Yes and no.
Calculus really helps here. The first iteration costs log(1), the second iteration log(2), etc., until the Nth iteration, which costs log(N). Rather than thinking of the problem as a multiplication, think of it as an integral...
This happens to come out as O(N log(N)), but that is kind of a coincidence.
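To spell out the sum behind that integral (a sketch, not part of the original answer):

\sum_{i=1}^{N} \log i \;=\; \log(N!) \;\approx\; \int_{1}^{N} \log x \, dx \;=\; N \log N - N + 1 \;=\; \Theta(N \log N)

So summing the per-iteration costs gives the same Θ(N log N) bound as the simple "N times log N" estimate.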

Big oh notation for heaps

I am trying to understand big oh notations. Any help would be appreciated.
Say there is a program that creates a max heap and then pushes and removes items.
Say there are n items.
To create a heap, it takes O(n) to heapify, if you read the items into an array and then heapify it.
To push an item, it takes O(1), and to remove it, it takes O(1).
To restore the heap after that, it takes log n for each removal, and n log n for n items.
So is the big-O notation O(n + n log n)?
Or is it O(n log n) only, because we choose the biggest term?
The complexity of heapifying a new element into the heap is O(log N), not O(1) (unless you use a Fibonacci heap, which does not seem to be the case here).
Also, the notation O(N + N log N) is not used: since N log N grows faster than N, it is simply written as O(N log N).
EDIT: The big-O notation only describes the asymptotic behavior of a function, that is, how fast it grows. As you approach infinity, 2*f(x) and 11021392103*f(x) behave similarly, and that is why, when writing big-O notation, we ignore any constants in front of the function.
Formally speaking, O(N + N log N) is equivalent to O(N log N).
That said, it's assumed that there are coefficients buried in each of these, a la O(aN + bN log(cN)). If you have very large N values, these coefficients become unimportant and the algorithm is bounded only by its largest term, which, in this case, is N log(N).
But it doesn't mean the coefficients are entirely unimportant. This is why in discussions of graph algorithms you'll often see authors say something like "the Floyd-Warshall algorithm runs in O(N^3) time, but has small coefficients".
If we could somehow write O(0.5N^3) in this case, we would. But it turns out that the coefficients vary depending on how you implement an algorithm and which computer you run it on. Thus, we settle for asymptotic comparisons, not necessarily because it is the best way, but because there isn't really a good alternative.
You'll also see things like "Worst-case: O(N^2), Average case: O(N)". This is an attempt to capture how the behavior of the algorithm varies with the input. Often times, presorted or random inputs can give you that average case, whereas an evil villain can construct inputs that produce the worst case.
Ultimately, what I am saying is this: O(N + N log N)=O(N log N). This is true, and it's the right answer for your homework. But we use this big-O notation to communicate and, in the fullness of time, you may find situations where you feel that O(N + N log N) is more expressive, perhaps if your algorithm is generally used for small N. In this case, do not worry so much about the formalism - just be clear about what it is you are trying to convey with it.
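For a concrete picture of the costs being added up, here is a sketch using the standard library's heap helpers (the container and helper functions are illustrative choices, not required by the question):

#include <algorithm>
#include <vector>

// Contrast the costs mentioned in the question. Heapifying an existing
// array is O(N); pushing the N items one at a time costs O(log(size)) each,
// O(N log N) in total; popping all N items is also O(N log N), which
// dominates, so O(N + N log N) collapses to O(N log N).
void heap_costs(const std::vector<int>& items) {
    std::vector<int> a = items;
    std::make_heap(a.begin(), a.end());        // O(N) build

    std::vector<int> b;
    for (int x : items) {
        b.push_back(x);
        std::push_heap(b.begin(), b.end());    // O(log(current size)) per push
    }

    while (!a.empty()) {
        std::pop_heap(a.begin(), a.end());     // O(log N) per removal
        a.pop_back();
    }
}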

Is O(log n) always faster than O(n)

If there are 2 algorithms that calculate the same result with different complexities, will O(log n) always be faster? If so, please explain. BTW this is not an assignment question.
No. If one algorithm runs in N/100 and the other one in (log N)*100, then the second one will be slower for smaller input sizes. Asymptotic complexities are about the behavior of the running time as the input sizes go to infinity.
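A quick sketch of those two made-up cost functions at a few sizes (the constants 1/100 and 100 come from the answer above):

#include <cmath>
#include <cstdio>

// Cost of an O(N) algorithm taking N/100 steps versus an O(log N) algorithm
// taking 100 * log2(N) steps. The linear one is cheaper until N gets large.
int main() {
    for (double n : {100.0, 1000.0, 10000.0, 100000.0, 1000000.0}) {
        std::printf("N = %9.0f: N/100 = %9.1f, 100*log2(N) = %7.1f\n",
                    n, n / 100.0, 100.0 * std::log2(n));
    }
    return 0;
}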
No, it will not always be faster. BUT, as the problem size grows larger and larger, eventually you will always reach a point where the O(log n) algorithm is faster than the O(n) one.
In real-world situations, usually the point where the O(log n) algorithm would overtake the O(n) algorithm would come very quickly. There is a big difference between O(log n) and O(n), just like there is a big difference between O(n) and O(n^2).
If you ever have the chance to read Programming Pearls by Jon Bentley, there is an awesome chapter in there where he pits a O(n) algorithm against a O(n^2) one, doing everything possible to give O(n^2) the advantage. (He codes the O(n^2) algorithm in C on an Alpha, and the O(n) algorithm in interpreted BASIC on an old Z80 or something, running at about 1MHz.) It is surprising how fast the O(n) algorithm overtakes the O(n^2) one.
Occasionally, though, you may find a very complex algorithm which has complexity just slightly better than a simpler one. In such a case, don't blindly choose the algorithm with a better big-O -- you may find that it is only faster on extremely large problems.
For an input of size n, an algorithm of O(n) will perform a number of steps proportional to n, while another algorithm of O(log(n)) will perform roughly log(n) steps.
Clearly log(n) is smaller than n, hence the algorithm of complexity O(log(n)) is better, since it will be much faster.