I am developing an algorithm that takes O(log^3 n) time. (Note: take O as Big Theta here, though Big O would be fine too.)
I am unsure whether O(log^3 n), or even O(log^2 n), is considered more, less, or equally complex compared to O(n log n).
If I were to follow the rules mechanically, I'd say O(n log n) is the more complex one, but I still don't have any clue as to why or how.
I've done some research but I haven't been able to find an answer to this question.
Thank you very much.
Since log n grows more slowly than n, (log n)^3 = (log n)^2 * (log n) is eventually smaller than n * log n. Thus Θ(n log n) is "bigger" than Θ((log n)^3). This can easily be generalized to Θ((log n)^k) via induction.
If you graph the two functions together you can see that n log(n) grows faster than log^3 n.
To prove this, you need to show that n log n > log^3 n for all values of n greater than some constant c. Find such a c and you have your proof.
In fact, n log(n) grows faster than log^x n for any positive x.
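For a quick numerical sanity check (not a proof), here is a small Python sketch that tabulates both quantities; the specific values of n are arbitrary, and the choice of log base only changes a constant factor:

    import math

    # Compare n * log(n) against (log n)^3 for increasingly large n.
    for n in [10, 100, 10**4, 10**6, 10**9]:
        n_log_n = n * math.log(n)
        log_cubed = math.log(n) ** 3
        print(n, round(n_log_n, 1), round(log_cubed, 1), n_log_n > log_cubed)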
Related
I saw a proof that O(2n) is the same as O(n) in this post: Which algorithm is faster O(N) or O(2N)?
Which means O(n) is the same as O(4n).
Can someone show me how O(n) is not a subset of O(n log n)?
Because, if n = 16 and base = 2, O(n log n) will be O(n * 4), which should make it O(n)?
I know the above statement is wrong, but I'm not sure which part. Kindly clarify.
Because, if n = 16 and base = 2, O(n log n) will be O(n * 4), which should make it O(n)?
This is a fundamental misunderstanding of what O(n log n) means.
O(n log n) is a set of functions. Intuitively, it is the set of all functions {g(n)} where g(n) is proportional to f(n) = n log n.
(There is a rigorous mathematical definition of what "proportional" means that deals with awkward edge cases, but you need to understand "limits" ... which is relatively advanced mathematics ... to comprehend the definition.)
You are substituting a value for the argument ... which is mathematically meaningless. Facially, you are evaluating O(n log n) as a function for some value of n. That might make sense if O(...) denoted a function. But it doesn't.
Big O is a mathematical notation for a set of functions that are related to a given function in a particular way. And (intuitively) the relationship is about what happens when n gets larger. You cannot substitute a specific value for n and still preserve the meaning of the notation.
(What you have done makes about as much mathematical sense as canceling out the x in:
d(x.x)
------
dx
... or one of those schoolboy "proofs" that one is zero that entails division by zero.)
To gain a deeper understanding of why your substitution is meaningless, review the more formal definition of Big O notation, e.g. on Wikipedia ... if you know what limits are.
You cannot say that n=16. Then you're treating it as a constant. n is a variable.
Look at O(n²). If n=16, then O(n²)=O(16*n)=O(256)=O(1)
It works for any complexity. Consider O(n!), as it is for traveling salesman. If n=16, then O(n!)=O(16!)=O(huge constant)=O(1)
Besides, as chepner pointed out, O(n) IS a subset of O(n log n). Your real question is whether the sets O(n) and O(n log n) are equal, which they are not.
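To see why the containment is strict, here is a small illustrative Python sketch (not a proof): the ratio (n log n) / n is just log n, which grows without bound, so no single constant c can make n log n <= c * n for all large n.

    import math

    # The ratio (n * log2(n)) / n equals log2(n), which is unbounded,
    # so n log n is not O(n), even though every O(n) function is in O(n log n).
    for n in [16, 2**10, 2**20, 2**40]:
        print(n, (n * math.log2(n)) / n)  # prints log2(n): 4, 10, 20, 40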
I have to write a program with the running time mentioned above (n^2 log n). I know how I would get n log n (that would be binary search) or n^2 (which would just be two nested for loops), but I don't see how to make a combination of the two.
There's no single correct way to do this. Here are a few options:
Design an algorithm that inserts n^2 elements into a balanced BST. This does O(log n) work n^2 times. (This uses the fact that log(n^2) = 2 log n = O(log n).)
Sort n^2 elements with heapsort or mergesort. More generally, run any O(n log n)-time algorithm on an input of size n^2.
Write a recursive algorithm whose runtime is given by the recurrence T(n) ≤ 4T(n/2) + n^2. This does n^2 work per level and has O(log n) levels, and by the Master Theorem it solves to O(n^2 log n). (A sketch of this option appears below.)
Hopefully this gets you going in the right direction.
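As one illustration of the third option, here is a minimal Python sketch (the function name and the do-nothing counting loops are just placeholders for real work) whose running time follows the recurrence T(n) = 4T(n/2) + Θ(n^2), i.e. Θ(n^2 log n):

    # Dummy workload whose running time satisfies T(n) = 4*T(n/2) + Theta(n^2),
    # which the Master Theorem solves to Theta(n^2 log n).
    def busy(n):
        if n <= 1:
            return 1
        total = 0
        # Theta(n^2) work at the current level of the recursion.
        for i in range(n):
            for j in range(n):
                total += 1
        # Four recursive calls on inputs of half the size.
        for _ in range(4):
            total += busy(n // 2)
        return total

    print(busy(64))  # small demo call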
Posted elsewhere as a question: What is the maximum running time of sorting n strings, each having a length of n characters? (Hint: String comparison is not a trivial operation).
My question: is O(n^2) better or worse than O(n^2 log n)?
I don't know if there exists any algorithm with O(n^2 log n); this question comes from revising a past-year exam question.
The question asked:
Given four algorithms with the following time complexities, O(2n^2), O(n^2 log n), O(3n log n), and O(12n), arrange them in ascending order of growth rate.
In my opinion, O(n^2 log n) is better when log n < 1 and worse when log n > 1.
In conclusion, which of these two is better?
Thank you to anyone who views or answers this question.
Just plug in some values and check it mathematically. Take values of N from the set of positive integers, evaluate each expression, and compare the results.
When using this notation for the complexity of an algorithm you don't bother with small values of n; you only look at the large values, and any constant factors can be dropped.
Here they are listed from highest complexity to the lowest:
O(n^2 log n)
O(2n^2) = O(n^2)
O(n log n)
O(12n) = O(n)
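As an illustrative check (not a proof; base-2 logs and the specific values of n are arbitrary assumptions), evaluating the four expressions for a few large n shows the same ordering:

    import math

    # Printed in ascending order: 12n, 3n log n, 2n^2, n^2 log n.
    for n in [2**10, 2**20, 2**30]:
        lg = math.log2(n)
        print(n, 12 * n, 3 * n * lg, 2 * n * n, n * n * lg)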
Is O(n log n) in polynomial time? If so, could you explain why?
I am interested in a mathematical proof, but I would be grateful for any strong intuition as well.
Thanks!
Yes, O(n log n) is polynomial time.
From http://mathworld.wolfram.com/PolynomialTime.html,
An algorithm is said to be solvable in polynomial time if the number of steps required to complete the algorithm for a given input is O(n^m) for some nonnegative integer m, where n is the complexity of the input.
From http://en.wikipedia.org/wiki/Big_O_notation,
f is O(g) iff there exist a constant k > 0 and an n_0 such that f(n) <= k * g(n) for all n > n_0.
I will now prove that n log n is O(n^m) for some m, which means that n log n is polynomial time.
Indeed, take m = 2. (This means I will prove that n log n is O(n^2).)
For the proof, take k = 2. (It could be smaller, but it doesn't have to be.)
There exists an n_0 such that for all n > n_0 the following holds:
f(n) <= k * g(n)
Take n_0 = 1 (this is sufficient).
It is now easy to see that
n log n <= 2 * n * n
holds, because dividing both sides by n (valid since n > 0) reduces it to
log n <= 2n,
which is true for all n > 0.
This proof could be a lot nicer in LaTeX math mode, but I don't think Stack Overflow supports that.
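For reference, here is the same argument sketched in LaTeX (nothing beyond the inequalities already stated above):

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    Claim: $n \log n \in O(n^2)$, hence $n \log n$ is polynomial time.
    Take $k = 2$ and $n_0 = 1$. For all $n > n_0$,
    \[
      n \log n \le 2 n^2 \iff \log n \le 2n \quad (\text{dividing by } n > 0),
    \]
    and $\log n \le 2n$ holds for every $n > 0$, so $n \log n \in O(n^2)$.
    \end{document}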
It is, because it is upper-bounded by a polynomial (n^2, for example).
You could take a look at the graphs and go from there, but I can't formulate a mathematical proof other than that :P
EDIT: From the Wikipedia page, "An algorithm is said to be of polynomial time if its running time is upper bounded by a polynomial expression in the size of the input for the algorithm".
It is at least not worse than polynomial time, and still not better than linear: n < n log n < n*n for sufficiently large n.
Yes. What's the limit of n log n as n goes to infinity? Intuitively, for large n, n >> log n, so you can consider the product dominated by n; n log n behaves like n times a slowly growing factor, which is clearly polynomial time. A more rigorous proof uses the sandwich (squeeze) theorem, which Inspired did:
n^1 < n log n < n^2 (for sufficiently large n).
Hence n log n is bounded above (and below) by functions that are polynomial.
I understand that an algorithm's time T(n) can be bounded by O(g(n)) by the definition:
T(n) is O(g(n)) iff there is a c > 0, n0 > 0, such that for all n >= n0:
for every input of size n, A takes at most c * g(n) steps.
T(n) is the longest running time over all inputs of size n.
However what I don't understand is the definition for Ω(g(n)). The definition is that for some input of size n, A takes at least c * g(n) steps.
But if that's the definition of Ω, then couldn't I find a lower bound for any algorithm that is the same as the upper bound? For instance, if sorting in the worst case takes O(n log n), then wouldn't I be able to easily show Ω(n log n) as well, seeing as there has to be at least one bad input for any size n that would take n log n steps? Let's assume that we're talking about heapsort.
I am really not sure what I'm missing here, because whenever I'm taught a new algorithm the time for a certain method is either Θ(g(n)) or O(g(n)), but no explanation is provided as to why it's either Θ or O.
I hope what I said was clear enough; if not, ask away about whatever you didn't understand. I really need this confusion cleared up. Thank you.
O is an upper bound, meaning that we know an algorithm that's O(n lg n) takes, asymptotically, at most a constant times n lg n steps in the worst case.
Ω is a lower bound, meaning that we know it's not possible for an Ω(n lg n) algorithm to take asymptotically less than n lg n steps in the worst case.
Θ is a tight bound: for example, if an algorithm is Θ(n lg n) then we know it's both O(n lg n) (so it is at least as fast as n lg n) and Ω(n lg n) (so we know it's no faster than n lg n).
The reason your argument is flawed is that you're actually assuming you know Θ(n lg n), not just O(n lg n).
For example, we know there's an Ω(n lg n) general bound on comparison sorts. Once we have proved O(n lg n) for mergesort, that therefore means that mergesort is Θ(n lg n). Note that mergesort is also O(n^2), because it's no slower than n^2. (That's not how people would typically describe it, but that is what the formal notation means.)
For some algorithms, we don't know tight bounds; the general 3SUM problem in simple models of computation is known to be Ω(n lg n) because it can be used to perform sorting, but we only have Θ(n^2) algorithms. The best algorithm for the problem is between n lg n and n^2; we can say that it's O(n^2) and Ω(n lg n), but we don't know the Θ.
There's also o(f), which means strictly less than f, and ω(f), which means strictly greater than f.
The definition that I am familiar with is that T(n) is Ω(g(n)) if there are constants k > 0 and n0 such that T(n) >= k*g(n) for all n > n0.
Then something is Θ(g(n)) iff it is both O(g(n)) and Ω(g(n)).
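For reference, here is a sketch of the three definitions in LaTeX, matching the function-level statements above (the question's "for some input" subtlety concerns what T(n) measures, not these definitions):

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    \begin{align*}
      T(n) \in O(g(n))      &\iff \exists\, c > 0,\ n_0:\ \forall n \ge n_0,\ T(n) \le c \cdot g(n)\\
      T(n) \in \Omega(g(n)) &\iff \exists\, c > 0,\ n_0:\ \forall n \ge n_0,\ T(n) \ge c \cdot g(n)\\
      T(n) \in \Theta(g(n)) &\iff T(n) \in O(g(n)) \text{ and } T(n) \in \Omega(g(n))
    \end{align*}
    \end{document}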