Big O and Big Theta Equality

For example, suppose I am asked for the asymptotic complexity of building a binary heap (the specific algorithm is arbitrary). If I say an algorithm is Θ(log n), would it also be correct to say that it is O(n)?

As long as you're measuring the same quantity, anything that is Θ(log n) is also O(n). If the runtime is Θ(log n), then it's also O(log n) (that's part of the definition of Θ notation), and anything that's O(log n) is also O(n).
The case where you might have to be careful is if these are implicitly measuring different quantities. For example, if an algorithm's best-case runtime is Θ(log n), it doesn't necessarily follow that the algorithm's worst-case runtime will be O(n).
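To spell the chain out as a short worked step (using only the definitions of Θ and O):

if f(n) = Θ(log n), then there are constants c > 0 and n0 with f(n) <= c·log n for all n >= n0 (the upper-bound half of Θ);
since log n <= n for all n >= 1, it follows that f(n) <= c·n for all n >= max(n0, 1);
hence f(n) = O(log n) and also f(n) = O(n).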

Related

What is Big O of n^2 x logn?

Is it n^2 x log n or n^3? I know both of these answers act as upper bounds; I'm just torn between choosing a tighter but more complex bound (option 1) and a "worse" yet simpler bound (option 2).
Are there general rules for big-O expressions, such as that they can never be too complex or be a product of two functions?
You already seem to have an excellent understanding of the fact that big-O notation is an upper bound, and also that a function with runtime n^2 log n falls in both O(n^2 log n) and O(n^3), so I'll spare you the mathematics of that. It's immediately clear (from the fact that n^2 log n is in O(n^3)) that O(n^2 log n) is a subset of O(n^3), so the former is at least as good a bound. It turns out to be a strictly tighter bound (which can be seen with some basic algebra), which is a definite point in its favor. I do understand your concern about the complexity of bounds, but I wouldn't worry about that. Mathematically, it's best to favor accuracy over simplicity when the two are at odds, and n^2 log n is not that complex an expression. So in my mind, O(n^2 log n) is a much better bound to state.
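The "basic algebra" mentioned above amounts to a limit comparison:

lim_{n→∞} (n^2 log n) / n^3 = lim_{n→∞} (log n) / n = 0,

so n^2 log n grows strictly slower than n^3, and O(n^2 log n) is a proper subset of O(n^3).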
Other examples of similar or greater complexity:
As indicated in the comments, merge sort and quicksort have average time complexity O(n log n).
Interpolation search has an average time complexity of O(log log n).
The average case of Dijkstra's algorithm is stated on Wikipedia to be the absolute mouthful O(E + V log(E/V) log(V)).

Why O(n log n) is greater than O(n)?

I read that O(n log n) is greater than O(n); I need to know why this is so.
For instance taking n as 1, and solving O(n log n) will be O(1 log 1) = O(0). On the same hand O(n) will be O(1)?
Which actually contradicts O(n log n) > O(n)
Let us start by clarifying what Big O notation means in the current context. From (source) one can read:
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. (..) In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.
The following statement is not accurate:
For instance taking n as 1, solving O(n log n) will be O(1 log 1) = O(0). On the same hand O(n) will be O(1)?
One cannot simply evaluate "O(1 log 1)", since Big O notation does not represent a function but rather a set of functions with a certain asymptotic upper bound; as one can read from the source:
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
Informally, in computer-science time-complexity and space-complexity theory, one can think of Big O notation as a categorization of algorithms by a certain worst-case growth in time or space, respectively. For instance, O(n):
An algorithm is said to take linear time/space, or O(n) time/space, if its time/space complexity is O(n). Informally, this means that the running time/space increases at most linearly with the size of the input (source).
and O(n log n) as:
An algorithm is said to run in quasilinear time/space if T(n) = O(n log^k n) for some positive constant k; linearithmic time/space is the case k = 1 (source).
Mathematically speaking, the statement
I read that O(n log n) is greater than O(n) (..)
is not accurate, since, as mentioned before, Big O notation represents a set of functions. Hence, it would be more accurate to say that O(n log n) contains O(n). Nonetheless, such relaxed phrasing is commonly used to describe (for the worst-case scenario) how one class of algorithms behaves compared with another as the input size increases. To compare two classes of algorithms (e.g., O(n log n) and O(n)), instead of
For instance taking n as 1, solving O(n log n) will be O(1 log 1) = O(0). On the same hand O(n) will be O(1)?
Which actually contradicts O(n log n) > O(n)
you should analyze how both classes of algorithms behave as their input size (i.e., n) increases, for the worst-case scenario; that is, analyze what happens as n tends to infinity.
As @cem rightly points out, in the image "big-O denotes one of the asymptotically least upper-bounds of the plotted functions, and does not refer to the sets O(f(n))".
As you can see in the image, after a certain input size, n log n (green line) grows faster than n (yellow line). That is why (for the worst case) O(n) is more desirable than O(n log n): as the input size increases, the running time grows more slowly with the former than with the latter.
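Since the plotted image is not reproduced here, a minimal Python sketch (purely illustrative) tabulating both growth rates makes the same point:

import math

# The ratio (n log n) / n = log n keeps growing, so n log n eventually
# pulls away from n no matter what constant factors are involved.
for n in [10, 100, 1_000, 10_000, 100_000, 1_000_000]:
    print(f"n = {n:>9}   n log2 n = {n * math.log2(n):>12.0f}   ratio = {math.log2(n):.1f}")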
I'm going to give you the real answer, even though it seems to be more than one step away from the way you're currently thinking about it...
O(n) and O(n log n) are not numbers, or even functions, and it doesn't quite make sense to say that one is greater than the other. It's sloppy language, but there are actually two accurate statements that might be meant by saying that "O(n log n) is greater than O(n)".
Firstly, O(f(n)), for any function f(n) of n, is the infinite set of all functions that asymptotically grow no faster than f(n). A formal definition would be:
A function g(n) is in O(f(n)) if and only if there are constants n0 and C such that g(n) <= Cf(n) for all n > n0.
So O(n) is a set of functions and O(n log n) is a set of functions, and O(n log n) is a superset of O(n). Being a superset is kind of like being "greater", so if one were to say that "O(n log n) is greater than O(n)", they might be referring to the superset relationship between them.
Secondly, the definition of O(f(n)) makes f(n) an upper bound on the asymptotic growth of functions in the set. And the upper bound is greater for O(n log n) than it is for O(n). In more concrete terms, there is a constant n0 such that n log n > n, for all n > n0. The bounding function itself is asymptotically greater, and this is another thing that one might mean when saying "O(n log n) is greater than O(n)".
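As a short worked step (using base-2 logarithms; a different base only changes the threshold):

n log2 n > n  ⟺  log2 n > 1  ⟺  n > 2,

so n0 = 2 suffices.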
Finally, both of these things are mathematically equivalent. If g(n) is asymptotically greater than f(n), it follows mathematically that O(g(n)) is a superset of O(f(n)), and if O(g(n)) is a proper superset of O(f(n)), it follows mathematically that g(n) is asymptotically greater than f(n).
Therefore, even though the statement "O(n log n) is greater than O(n)" does not strictly make any sense, it has a clear and unambiguous meaning if you're willing to read it charitably.
The big O notation only has an asymptotic meaning; that is, it makes sense only when n goes to infinity.
For example, a time complexity of O(100000) just means your code runs in constant time, which is asymptotically faster (smaller) than O(log n).

Are O(n log n) algorithms always better than all O(n^2) algorithms?

When trying to properly understand Big-O, I am wondering whether it's true that O(n log n) algorithms are always better than all O(n^2) algorithms.
Are there any particular situations where O(n^2) would be better?
I've read multiple times that in sorting, for example, an O(n^2) algorithm like bubble sort can be particularly quick when the data is almost sorted, so would it be quicker than an O(n log n) algorithm, such as merge sort, in this case?
No, O(n log n) algorithms are not always better than O(n^2) ones.
The Big-O notation describes an upper bound of the asymptotic behavior of an algorithm, i.e. for n that tends towards infinity.
In this definition you have to consider some aspects:
The Big-O notation is an upper bound on the algorithm's complexity, meaning that for some inputs (like the one you mentioned about sorting algorithms) an algorithm with a worse Big-O complexity may actually perform better (bubble sort runs in O(n) for an already sorted array, while mergesort and quicksort always take at least O(n log n));
The Big-O notation only describes the class of complexity, hiding all the constant factors that in real-world scenarios may be relevant. For example, an algorithm with running time 1000000·n, which is in class O(n), performs worse than an algorithm with running time 0.5·n^2 (class O(n^2)) for inputs smaller than 2000000 (see the sketch below). Basically, the Big-O notation tells you that for big enough input n the O(n) algorithm will perform better than the O(n^2) one, but if you work with small inputs you may still prefer the latter.
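A minimal Python sketch of that constant-factor comparison (the cost functions are the illustrative ones from the answer, not measurements of real algorithms):

# Illustrative cost models: 1,000,000 * n (O(n)) vs. 0.5 * n^2 (O(n^2)).
def cost_linear(n):
    return 1_000_000 * n

def cost_quadratic(n):
    return 0.5 * n ** 2

for n in [1_000, 100_000, 1_999_999, 2_000_000, 2_000_001, 10_000_000]:
    cheaper = "O(n^2)" if cost_quadratic(n) < cost_linear(n) else "O(n)"
    print(f"n = {n:>10}: cheaper model is {cheaper}")
# Below n = 2,000,000 the quadratic model is cheaper; above it, the linear one wins.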
O(n log n) is better than O(n^2) asymptotically.
Big-O, Big-Theta, Big-Omega, all those measure the asymptotic behavior of functions, i.e., how functions behave when their argument goes toward a certain limit.
O(n log n) functions grow more slowly than O(n^2) functions; that's what Big-O notation essentially says. However, this does not mean that O(n log n) is always faster. It merely means that at some point, for ever-rising values of n, the O(n log n) function will always be cheaper.
In that image, f(n) = O(g(n)). Note that there is a range where f(n) is actually more costly than g(n), even though it is bounded asymptotically by g(n). However, when talking limits, or asymptotics for that matter, f(n) outperforms g(n) "in the long run," so to say.
In addition to @cadaniluk's answer:
If you restrict the inputs to the algorithms to a very special type, this can also affect the running time. E.g. if you run sorting algorithms only on already sorted lists, BubbleSort will run in linear time, but MergeSort will still need O(n log n) (see the sketch below).
There are also algorithms that have a bad worst-case complexity, but a good average case complexity. This means that there are bad input instances such that the algorithm is slow, but in total it is very unlikely that you have such a case.
Also never forget that Big-O notation hides constants and additive lower-order terms. So an algorithm with worst-case complexity O(n log n) could actually have a running time of 2^10000 · n · log n, and your O(n^2) algorithm could actually run in (1/2^1000) · n^2. So for n < 2^10000 you really want to use the "slower" algorithm.
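As a minimal Python sketch of the point about restricted inputs (a textbook bubble sort with the usual early-exit flag, not any particular library's implementation):

def bubble_sort(items):
    # Repeatedly bubble larger elements to the right; stop early if a full
    # pass makes no swaps. On an already sorted list the first pass makes
    # no swaps, so the whole call takes only O(n) comparisons.
    items = list(items)
    n = len(items)
    for end in range(n - 1, 0, -1):
        swapped = False
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:
            break
    return items

print(bubble_sort([3, 1, 2]))      # unsorted input: does real work
print(bubble_sort([1, 2, 3, 4]))   # already sorted: single O(n) pass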
Here is a practical example.
The GCC implementations of sorting functions have O(n log n) complexity. Still, they employ O(n^2) algorithms as soon as the size of the part being sorted is less than some small constant.
That's because for small sizes, they tend to be faster in practice.
See here for some of the internal implementation.
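A minimal Python sketch of that hybrid idea (the cutoff value and function names are illustrative, not GCC's actual implementation):

CUTOFF = 16  # illustrative; below this size the quadratic sort tends to win

def insertion_sort(items):
    # O(n^2) worst case, but very low constant factors on tiny inputs.
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items

def hybrid_merge_sort(items):
    # O(n log n) overall, but falls back to insertion sort on small parts.
    if len(items) <= CUTOFF:
        return insertion_sort(list(items))
    mid = len(items) // 2
    left = hybrid_merge_sort(items[:mid])
    right = hybrid_merge_sort(items[mid:])
    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(hybrid_merge_sort([5, 3, 8, 1, 9, 2, 7, 4, 6, 0]))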

N log(N) or N clarification

Will performing a O(log N) algorithm N times give O(N log(N))? Or is it O(N)?
e.g. Inserting N elements into a self-balancing tree.
int i = 0;
while (i < N) {
    insert(itemsToInsert[i]);   // each insert into the balanced tree is O(log N)
    i++;
}
It's definitely O(N log(N)). It COULD also be O(N), if you could show that the sequence of calls, as a total, grows slow enough (because while SOME calls are O(log N), enough others are fast enough, say O(1), to bring the total down).
Remember: O(f) means the algorithm is no SLOWER than f, but it can be faster (even if just in certain cases).
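This connects back to the binary-heap question at the top of the page: pushing N elements onto a heap one at a time costs O(N log N), whereas the bulk heapify algorithm builds the heap in O(N). A minimal Python sketch using the standard heapq module:

import heapq
import random

items = [random.random() for _ in range(100_000)]

# N separate inserts: each heappush is O(log N), so O(N log N) in total.
heap_by_pushes = []
for x in items:
    heapq.heappush(heap_by_pushes, x)

# One bulk build: heapify rearranges the list in place in O(N).
heap_by_heapify = list(items)
heapq.heapify(heap_by_heapify)

print(heap_by_pushes[0] == heap_by_heapify[0] == min(items))  # True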
N times O(log(N)) leads to O(N log(N)).
Big-O notation describes the asymptotic behavior of the algorithm. Here the cost of each additional step is O(log N); for an O(N) algorithm the cost of each additional step would have to be O(1), so that asymptotically the cost function is bounded by a straight line.
Therefore O(N) is too low a bound; O(N log N) seems about right.
Yes and no.
Calculus really helps here. The first iteration has complexity log(1), the second iteration log(2), etc., up to the Nth iteration, which is log(N). Rather than thinking of the problem as a multiplication, think of it as an integral...
This happens to come out as O(N log(N)), but that is kind of a coincidence.
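For completeness, the sum (equivalently, the integral) can be evaluated, and the bound agrees with the simple N × log N estimate:

log 1 + log 2 + ... + log N = log(N!) = Θ(N log N)

by Stirling's approximation (log(N!) ≈ N log N − N); likewise the integral of log x from 1 to N is N log N − N + 1.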

What is the big-O of the function (log n)^k

What is the big-O complexity of the function (log n)^k for any k?
Any function whose runtime has the form (log n)^k is O((log n)^k). This expression isn't reducible to any other primitive function using simple transformations, and it's fairly common to see algorithms with runtimes like O(n (log n)^2). Functions with this growth rate are called polylogarithmic.
By the way, (log n)^k is typically written as log^k n, so the above algorithm would have runtime O(n log^2 n). In your case, the function log^2 n + log n would be O(log^2 n).
However, any function with runtime of the form log(n^k) has runtime O(log n), assuming that k is a constant. This is because log(n^k) = k log n using logarithm identities, and k log n is O(log n) because k is a constant. You should be careful not to blindly conclude that an algorithm that is O(log(n^k)) is O(log n), though; if k is a parameter to the function or depends on n, the correct big-O computation would be O(k log n) in this case.
Depending on the context in which you're working, you sometimes see the notation Õ(f(n)) to mean O(f(n) log^k n) for some constant k. This is sometimes called "soft-O" and is used in contexts in which the logarithmic terms are irrelevant. In that case, you could say that both functions are Õ(1), though this usage is not common in simple algorithmic analysis (in fact, outside of Wikipedia, I have seen this used precisely once).
Hope this helps!
It will still be (log(n))^2. A logarithm raised to a power is already in the lowest/simplest form.
(log n)^k is:
O((log n)^k)
O(n^k)
O(n)
O(n log n)
O(n^1/2)
O(n^0.00000002)
etc. Which one is meaningful for you depends on the constants and the context.
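Every entry on that list is a valid upper bound because, for a constant k, a polylogarithm grows more slowly than any fixed positive power of n:

(log n)^k = o(n^ε) for every constant ε > 0,

which can be checked by substituting n = e^m: (log n)^k / n^ε = m^k / e^(εm) → 0 as m → ∞.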
log(n) is O((log(n))^2), so the entire expression is O((log(n))^2).
