What is the big-O of the function (log n)^k - algorithm

What is the big-O complexity of the function (log n)^k for any k?

Any function whose runtime has the form (log n)^k is O((log n)^k). This expression isn't reducible to any other primitive function using simple transformations, and it's fairly common to see algorithms with runtimes like O(n (log n)^2). Functions with this growth rate are called polylogarithmic.
By the way, (log n)^k is typically written as log^k n, so the above algorithm would have runtime O(n log^2 n). In your case, the function log^2 n + log n would be O(log^2 n).
However, any function with runtime of the form log(n^k) has runtime O(log n), assuming that k is a constant. This is because log(n^k) = k log n using logarithm identities, and k log n is O(log n) because k is a constant. You should be careful not to blindly conclude that an algorithm that is O(log(n^k)) is O(log n), though; if k is a parameter to the function or depends on n, the correct big-O computation would be O(k log n) in this case.
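As a quick numeric sanity check of that identity, here is a small Python sketch (the sample values of n and k are arbitrary):

    import math

    # log(n^k) == k * log(n) for any positive n and constant k
    # (arbitrary sample values, purely to illustrate the identity)
    for n in (10, 1_000, 1_000_000):
        for k in (2, 3, 10):
            lhs = math.log(n ** k)
            rhs = k * math.log(n)
            assert math.isclose(lhs, rhs), (n, k)
            print(f"n={n:>9}, k={k:>2}: log(n^k) = {lhs:.4f} = k*log(n)")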
Depending on the context in which you're working, you sometimes see the notation Õ(f(n)) to mean O(f(n) log^k n) for some constant k. This is sometimes called "soft-O" and is used in contexts in which the logarithmic terms are irrelevant. In that case, you could say that both functions are Õ(1), though this usage is not common in simple algorithmic analysis (in fact, outside of Wikipedia, I have seen this used precisely once).
Hope this helps!

It will still be (log(n))^2. A logarithm raised to a power is already in its lowest/simplest form.

(log n)^k is:
O((log n)^k)
O(n^k)
O(n)
O(n log n)
O(n^1/2)
O(n^0.00000002)
etc. Which one is meaningful for you depends on the constants and the context.
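To see why even a tiny polynomial exponent eventually dominates any fixed power of the logarithm, here is a small illustrative sketch (k = 2 and eps = 0.5 are arbitrary choices; tinier exponents just need far larger n before the ratio starts falling):

    import math

    # (log n)^k / n^eps -> 0 as n -> infinity for any fixed eps > 0,
    # so a polylog term is dominated by any polynomial, however small its exponent.
    k, eps = 2, 0.5
    for exp in (2, 4, 8, 12, 16):
        n = 10 ** exp
        ratio = math.log(n) ** k / n ** eps
        print(f"n=10^{exp:<2}  (log n)^{k} / n^{eps} = {ratio:.3e}")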

log(n) is O((log(n))^2), so the entire expression is O((log(n))^2).

Related

Why is O(n log n) greater than O(n)?

I read that O(n log n) is greater than O(n); I need to know why that is so.
For instance, taking n as 1 and solving, O(n log n) will be O(1 log 1) = O(0). On the same hand, O(n) will be O(1)?
Which actually contradicts O(n log n) > O(n).
Let us start by clarifying what Big O notation means in the current context. From (source) one can read:
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. (..) In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.
The following statement is not accurate:
For instance taking n as 1, solving O(n log n) will be O(1 log 1) = O(0). On the same hand O(n) will be O(1)?
One cannot simply evaluate "O(1 log 1)", since the Big O notation does not represent a function but rather a set of functions with a certain asymptotic upper bound; as one can read from the source:
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
Informally, in computer-science time-complexity and space-complexity theories, one can think of the Big O notation as a categorization of algorithms with a certain worst-case scenario concerning time and space, respectively. For instance, O(n):
An algorithm is said to take linear time/space, or O(n) time/space, if its time/space complexity is O(n). Informally, this means that the running time/space increases at most linearly with the size of the input (source).
and O(n log n) as:
An algorithm is said to run in quasilinear time/space if T(n) = O(n log^k n) for some positive constant k; linearithmic time/space is the case k = 1 (source).
Mathematically speaking, the statement
I read that O(n log n) is greater than O(n) (..)
is not accurate since, as mentioned before, Big O notation represents a set of functions. Hence, it would be more accurate to say that O(n log n) contains O(n). Nonetheless, such relaxed phrasing is typically used to quantify (for the worst-case scenario) how one set of algorithms behaves compared with another as their input sizes grow. To compare two classes of algorithms (e.g., O(n log n) and O(n)), instead of
For instance taking n as 1, solving O(n log n) will be O(1 log 1) = O(0). On the same hand O(n) will be O(1)?
Which actually contradicts O(n log n) > O(n)
you should analyze how both classes of algorithms behave as their input size (i.e., n) increases for the worst-case scenario, that is, by analyzing what happens as n tends to infinity.
As @cem rightly pointed out, in the image "big-O denotes one of the asymptotically least upper-bounds of the plotted functions, and does not refer to the sets O(f(n))".
As you can see in the image, after a certain input size O(n log n) (green line) grows faster than O(n) (yellow line). That is why (for the worst case) O(n) is more desirable than O(n log n): as the input size increases, the running time grows more slowly for the former than for the latter.
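Since the plot itself isn't reproduced here, the following rough sketch prints the same comparison numerically (the base-2 logarithm and the sample sizes are arbitrary):

    import math

    # Compare n (linear) with n*log2(n) (linearithmic) for growing n.
    # The ratio is log2(n), which is unbounded, so n log n eventually
    # exceeds c*n for any constant c.
    for n in (2, 16, 256, 4096, 1 << 20):
        print(f"n = {n:>8}   n log2 n = {n * math.log2(n):>12.0f}   ratio = {math.log2(n):.1f}")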
I'm going to give you the real answer, even though it seems to be more than one step away from the way you're currently thinking about it...
O(n) and O(n log n) are not numbers, or even functions, and it doesn't quite make sense to say that one is greater than the other. It's sloppy language, but there are actually two accurate statements that might be meant by saying that "O(n log n) is greater than O(n)".
Firstly, O(f(n)), for any function f(n) of n, is the infinite set of all functions that asymptotically grow no faster than f(n). A formal definition would be:
A function g(n) is in O(f(n)) if and only if there are constants n0 and C such that g(n) <= Cf(n) for all n > n0.
So O(n) is a set of functions and O(n log n) is a set of functions, and O(n log n) is a superset of O(n). Being a superset is kind of like being "greater", so if one were to say that "O(n log n) is greater than O(n)", they might be referring to the superset relationship between them.
Secondly, the definition of O(f(n)) makes f(n) an upper bound on the asymptotic growth of functions in the set. And the upper bound is greater for O(n log n) than it is for O(n). In more concrete terms, there is a constant n0 such that n log n > n for all n > n0. The bounding function itself is asymptotically greater, and this is another thing that one might mean when saying "O(n log n) is greater than O(n)".
Finally, both of these things are mathematically equivalent. If g(n) is asymptotically greater than f(n), it follows mathematically that O(g(n)) is a superset of O(f(n)), and if O(g(n)) is a proper superset of O(f(n)), it follows mathematically that g(n) is asymptotically greater than f(n).
Therefore, even though the statement "O(n log n) is greater than O(n)" does not strictly make any sense, it has a clear and unambiguous meaning if you're willing to read it charitably.
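Here is a small sketch that plays with the formal definition above: it checks the inequality g(n) <= C*f(n) over a finite range, given candidate witnesses C and n0 (finite evidence only, not a proof; the witnesses are illustrative):

    import math

    # Check g(n) <= C*f(n) for every integer n with n0 < n <= limit.
    def holds_up_to(g, f, C, n0, limit=10**6):
        return all(g(n) <= C * f(n) for n in range(n0 + 1, limit + 1))

    # Evidence that g(n) = n is in O(n log n): the witnesses C = 1, n0 = 2 work.
    print(holds_up_to(lambda n: n, lambda n: n * math.log2(n), C=1, n0=2))   # True

    # The reverse inclusion fails: with C = 10, n*log2(n) exceeds 10*n once n > 1024.
    print(holds_up_to(lambda n: n * math.log2(n), lambda n: n, C=10, n0=2))  # False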
The big O notation only has an asymptotic meaning, that is it makes sense only when n goes to infinity.
For example, a time complexity of O(100000) just means your code runs in constant time, which is asymptotically faster (smaller) than O(log n).

Prove O(n) is not a subset of O(n log n)

I saw a proof that O(2n) is the same as O(n) in this post => Which algorithm is faster O(N) or O(2N)?
Which means O(n) is the same as O(4n).
Can someone show me how O(n) is not a subset of O(n log n)?
Because, if n = 16 and base = 2, O(n log n) will be O(n * 4), which should make it O(n)?
I know the above statement is wrong, but I'm not sure which part. Kindly clarify.
Because, if n = 16 and base = 2, O(n log n) will be O(n * 4), which should make it O(n)?
This is a fundamental misunderstanding of what O(n log n) means.
O(n log n) is a set of functions. Intuitively, it is the set of all functions {g(n)} where g(n) is proportional to f(n) = n log n.
(There is a rigorous mathematical definition of what "proportional" means that deals with awkward edge cases, but you need to understand "limits" ... which is relatively advanced mathematics ... to comprehend the definition.)
You are substituting a value for the argument ... which is mathematically meaningless. Facially, you are evaluating O(n log n) as a function for some value of n. That might make sense if O(...) denoted a function. But it doesn't.
Big O is a mathematical notation for a set of functions that are related to a given function in a particular way. And (intuitively) the relationship is about what happens when n gets larger. You cannot substitute a specific value for n and still preserve the meaning of the notation.
(What you have done makes about as much mathematical sense as canceling out the x in:
d(x * x) / dx
... or one of those schoolboy "proofs" that one equals zero which hinge on division by zero.)
To gain a deeper understanding of why your substitution is meaningless, review the more formal definition of Big Oh notation, e.g. on Wikipedia, assuming you know what limits are.
You cannot say that n=16. Then you're treating it as a constant. n is a variable.
Look at O(n²). If n=16, then O(n²)=O(16*n)=O(256)=O(1)
It works for any complexity. Consider O(n!), as in the traveling salesman problem. If n=16, then O(n!)=O(16!)=O(huge constant)=O(1)
Besides, as chepner pointed out, O(n) IS a subset of O(n log n). Your real question is whether the sets O(n) and O(n log n) are equal, which they are not.
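A small sketch of that distinction: fix n and every expression collapses to a plain number, but let n vary and the ratio n log n / n grows without bound (base 2 and the sample values are arbitrary):

    import math

    # Fixing n = 16 collapses both expressions to constants ...
    n = 16
    print(n, n * math.log2(n))    # 16 and 64.0: two numbers, which say nothing about growth

    # ... but as n varies, n*log2(n) / n = log2(n) is unbounded, so no constant
    # factor ever keeps n*log2(n) below c*n for all large n.
    for n in (16, 1 << 10, 1 << 20, 1 << 30):
        print(f"n = 2^{int(math.log2(n)):>2}: (n log2 n) / n = {math.log2(n):.0f}")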

Big O and Big Theta Equality

For example, I am asked for the asymptotic complexity of building a binary heap (the exact type of algorithm is arbitrary). If I say an algorithm is Θ(log(n)), would it also be correct to say that it is O(n)?
As long as you're measuring the same quantity, anything that is Θ(log n) is also O(n). If the runtime is Θ(log n), then it's also O(log n) (that's part of the definition of Θ notation), and anything that's O(log n) is also O(n).
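That containment ultimately rests on the fact that log n <= n for all n >= 1, so any c * log n bound is also a c * n bound; a tiny finite check (base 2 chosen arbitrarily):

    import math

    # log2(n) <= n for every n >= 1, hence O(log n) is a subset of O(n).
    print(all(math.log2(n) <= n for n in range(1, 10_000)))   # True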
The case where you might have to be careful is if these are implicitly measuring different quantities. For example, if an algorithm's best-case runtime is Θ(log n), it doesn't necessarily follow that the algorithm's worst-case runtime will be O(n).

Complexity of algorithms - Competitive Analysis

For example we have an algorithm with the complexity O(n log n). An online algorithm for the same problem is 5-competitive. What is the complexity of the online algorithm?
In my opinion the result should be something like O(5 * n log n). Did I understand this correctly?
Big-O notation refers to the asymptotic complexity of a function. The simplest way to explain this is that no constant factors are included in the notation. That means that n log n, 5n log n, and even 10^6 * n log n all fall into the big-O class O(n log n).
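A minimal sketch of why the factor of 5 vanishes, using made-up cost functions rather than any particular algorithm:

    import math

    def online(n):  return 5 * n * math.log2(n)   # hypothetical 5-competitive online cost
    def offline(n): return n * math.log2(n)       # hypothetical optimal offline cost

    # The ratio stays at the constant 5 no matter how large n gets,
    # so both costs belong to the same class, O(n log n).
    for n in (10, 10**3, 10**6, 10**9):
        print(n, online(n) / offline(n))          # always 5.0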

I'm really confused about time complexity

I understand that an algorithm's time T(n) can be bounded by O(g(n)) by the definition:
T(n) is O(g(n)) iff there is a c > 0, n0 > 0, such that for all n >= n0:
for every input of size n, A takes at most c * g(n) steps.
T(n) is the time that is the longest out of all the inputs of size n.
However what I don't understand is the definition for Ω(g(n)). The definition is that for some input of size n, A takes at least c * g(n) steps.
But if that's the definition for Ω, then couldn't I find a lower bound for any algorithm that is the same as the upper bound? For instance, if sorting in the worst case takes O(n log n), then wouldn't I be able to show Ω(n log n) just as easily, seeing as there has to be at least one bad input for any size n that takes n log n steps? Let's assume that we're talking about heapsort.
I am really not sure what I'm missing here, because whenever I'm taught a new algorithm the time for a certain method is given as either Θ(g(n)) or O(g(n)), but no explanation is provided as to why it's Θ in one case and O in the other.
I hope what I said was clear enough; if not, ask away about what you didn't understand. I really need this confusion cleared up. Thank you.
O is an upper bound, meaning that we know an algorithm that's O(n lg n) takes, asymptotically, at most a constant times n lg n steps in the worst case.
Ω is a lower bound, meaning that we know it's not possible for an Ω(n lg n) algorithm to take asymptotically fewer than n lg n steps in the worst case.
Θ is a tight bound: for example, if an algorithm is Θ(n lg n), then we know it's both O(n lg n) (so it is at least as fast as n lg n) and Ω(n lg n) (so it's no faster than n lg n).
The reason your argument is flawed is that you're actually assuming you know Θ(n lg n), not just O(n lg n).
For example, we know there's an Ω(n lg n) general bound on comparison sorts. Once we proved O(n lg n) for mergesort, that therefore means that mergesort is Θ(n lg n). Note that mergesort is also O(n^2), because it's no slower than n^2. (That's not how people would typically describe it, but that is what the formal notation means.)
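That parenthetical claim rests on n lg n <= n^2 for all n >= 1; a one-line finite check (illustrative only):

    import math

    # n*log2(n) <= n*n for all n >= 1, so anything O(n log n) is also O(n^2).
    print(all(n * math.log2(n) <= n * n for n in range(1, 10_000)))   # True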
For some algorithms, we don't know tight bounds; the general 3SUM problem in simple models of computation is known to be Ω(n lg n) because it can be used to perform sorting, but we only have Θ(n^2) algorithms. The best algorithm for the problem lies between n lg n and n^2; we can say that it's O(n^2) and Ω(n lg n), but we don't know the Θ.
There's also o(f), which means strictly less than f, and ω(f), which means strictly greater than f.
The definition that I am familiar with is that T(n) is Ω(g(n)) if there exist constants n0 and k such that T(n) >= k * g(n) for all n > n0.
Then T(n) is Θ(g(n)) iff it is both O(g(n)) and Ω(g(n)).
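Putting the two bounds together, here is a small finite sanity check for a Θ claim, with a made-up cost function and illustrative witnesses (evidence, not a proof):

    import math

    def looks_big_o(T, g, C, n0, limit=10**5):
        return all(T(n) <= C * g(n) for n in range(n0 + 1, limit + 1))

    def looks_big_omega(T, g, k, n0, limit=10**5):
        return all(T(n) >= k * g(n) for n in range(n0 + 1, limit + 1))

    # A made-up cost T(n) = 3*n*log2(n) + 7*n looks Theta(n log n):
    T = lambda n: 3 * n * math.log2(n) + 7 * n
    g = lambda n: n * math.log2(n)
    print(looks_big_o(T, g, C=11, n0=2) and looks_big_omega(T, g, k=3, n0=2))  # True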
