Is O(n Log n) in polynomial time? If so, could you explain why?
I am interested in a mathematical proof, but I would be grateful for any strong intuition as well.
Thanks!
Yes, O(n log n) is polynomial time.
From http://mathworld.wolfram.com/PolynomialTime.html,
An algorithm is said to be solvable in polynomial time if the number
of steps required to complete the algorithm for a given input is
O(n^m) for some nonnegative integer m, where n is the complexity of
the input.
From http://en.wikipedia.org/wiki/Big_O_notation,
f is O(g) iff there exist a constant k > 0 and an n_0 such that f(n) <= k * g(n) for all n >= n_0.
I will now prove that n log n is O(n^m) for some m, which means that n log n is polynomial time.
Indeed, take m = 2 (so I will prove that n log n is O(n^2)).
For the proof, take k = 2 (this could be smaller, but it doesn't have to be) and n_0 = 1 (this is sufficient).
It is now easy to see that for all n >= n_0:
n log n <= 2n*n
because
log n <= 2n whenever n > 0.
This proof could be a lot nicer in LaTeX math mode, but I don't think Stack Overflow supports that.
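For readers who want it anyway, here is the same argument written out in LaTeX notation (just a transcription of the proof above, nothing new):

\text{Claim: } n \log n = O(n^2). \text{ Take } k = 2,\ n_0 = 1.
\text{For all } n \ge n_0:\quad \log n \le 2n \ \Longrightarrow\ n \log n \le 2n \cdot n = 2n^2 = k\, n^2,
\text{so } n \log n = O(n^2), \text{ and hence } n \log n \text{ is polynomial time.}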
It is, because it is upper-bounded by a polynomial (n^2).
You could take a look at the graphs and go from there, but I can't formulate a mathematical proof other than that :P
EDIT: From the wikipedia page, "An algorithm is said to be of polynomial time if its running time is upper bounded by a polynomial expression in the size of the input for the algorithm".
It is at least not worse than polynomial time. And still not better than linear: for large enough n, n < n log n < n*n.
Yes. What's the behaviour of n log n as n goes to infinity? Intuitively, for large n, n >> log n, so the product is dominated by the n factor and n log n grows only slightly faster than n itself, which suggests polynomial time. A more rigorous proof uses the Sandwich (squeeze) theorem, as Inspired did:
n^1 < n log n < n^2 (for sufficiently large n).
Hence n log n is bounded above (and below) by functions that run in polynomial time.
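As a quick numerical illustration of that sandwich (my own sketch, not from the answer; with base-2 logs the lower inequality needs n >= 2):

import math

# Check n <= n*log2(n) <= n*n for a few sample sizes.
for n in [2, 10, 100, 1000, 10**6]:
    lower = n
    middle = n * math.log2(n)
    upper = n * n
    print(n, lower <= middle <= upper)   # True for every n >= 2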
Related
I was reading about Big-O Notation
So, any algorithm that is O(N) is also O(N^2)?
It seems confusing to me; I know that Big-O gives an upper bound only.
But how can an O(N) algorithm also be an O(N^2) algorithm?
Are there any examples where that is the case?
I can't think of any.
Can anyone explain it to me?
"Upper bound" means the algorithm takes no longer than (i.e. <=) that long (as the input size tends to infinity, with relevant constant factors considered).
It does not mean it will ever actually take that long.
Something that's O(n) is also O(n log n), O(n^2), O(n^3), O(2^n) and also anything else that's asymptotically bigger than n.
If you're comfortable with the relevant mathematics, you can also see this from the formal definition.
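For reference, the formal definition alluded to here can be written as follows (the standard textbook definition, stated in LaTeX notation):

f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \text{ such that } f(n) \le c\, g(n) \text{ for all } n \ge n_0.
\text{If } f(n) \le c\, n \text{ for all } n \ge n_0 \ge 1, \text{ then also } f(n) \le c\, n^2 \text{ there, so } f = O(n) \Rightarrow f = O(n^2).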
O notation can be naively read as "less than".
With numbers, if I tell you x < 4, then obviously x < 5 and x < 6 and so on.
O(n) means that, if the input size of an algorithm is n (n could be the number of elements, or the size of an element or anything else that mathematically describes the size of the input) then the algorithm runs "about n iterations".
More formally it means that the number of steps x in the algorithm satisfies that:
x < k*n + C, where k and C are real positive numbers.
In other words, for all possible inputs, if the size of the input is n, then the algorithm executes no more than k*n + C steps.
O(n^2) is similar, except the bound is k*n^2 + C. Since n is a natural number, n^2 >= n, so the definition still holds: because x < k*n + C, it is also true that x < k*n^2 + C.
So an O(n) algorithm is an O(n^2) algorithm, and an O(n^3) algorithm, and an O(n^n) algorithm, and so on.
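To answer the "is there any example" part of the question concretely, here is a hypothetical sketch in Python (my illustration, not the answerer's): a linear search does at most n comparisons, so its step count is below 1*n and therefore also below 1*n^2 for n >= 1, making it both O(n) and O(n^2).

def linear_search(items, target):
    # Returns (index, number_of_comparisons); index is -1 if target is absent.
    steps = 0
    for i, x in enumerate(items):
        steps += 1                     # one comparison per element
        if x == target:
            return i, steps
    return -1, steps

data = list(range(50))
_, steps = linear_search(data, 49)
n = len(data)
print(steps <= n, steps <= n * n)      # True True: bounded by both n and n^2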
For something to be O(N), it means that for large N, it is less than the function f(N)=k*N for some fixed k. But it's also less than k*N^2. So O(N) implies O(N^2), or more generally, O(N^m) for all m>1.
*I assumed that N>=1, which is indeed the case for large N.
Big-O notation describes an upper bound, so it is not wrong to say that O(n) is also O(n^2). O(n) algorithms are a subset of O(n^2) algorithms. It's the same as how squares are a subset of rectangles, but not every rectangle is a square. So technically it is correct to say that an O(n) algorithm is an O(n^2) algorithm, even if it is not precise.
Definition of big-O:
A function f(x) is O(g(x)) iff there exist M > 0 and x0 such that |f(x)| <= M|g(x)| for all x >= x0.
Clearly if g1(x) <= g2(x) then |f(x)| <= M|g1(x)| <= M|g2(x)|.
An algorithm with just a single loop is typically O(n), and an algorithm with a nested loop is typically O(n^2).
Now consider the bubble sort algorithm, which uses a nested loop.
If we give an already sorted input to a bubble sort that checks whether any swaps happened, no swaps occur in the first pass, so it can stop after a single pass. For a scenario like that it does O(n) work, and for the other cases it is O(n^2). (A sketch of this variant follows below.)
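Here is a minimal sketch of that early-exit bubble sort in Python (my own illustration, assuming the usual "swapped" flag optimization the answer alludes to):

def bubble_sort(a):
    # O(n) on already-sorted input thanks to the early exit, O(n^2) in general.
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                # no swaps: the list is already sorted
            break
    return a

print(bubble_sort([1, 2, 3, 4, 5]))    # finishes after a single pass
print(bubble_sort([5, 4, 3, 2, 1]))    # does the full quadratic amount of work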
Chapter 1 of Steven Skiena's The Algorithm Design Manual has this exercise:
Let P be a problem. The worst-case time complexity of P is O(n^2) .
The worst-case time complexity of P is also Ω(n log n) . Let A be an
algorithm that solves P. Which subset of the following statements are
consistent with this information about the complexity of P?
A has worst-case time complexity O(n^2) .
A has worst-case time complexity O(n^(3/2)).
A has worst-case time complexity O(n).
A has worst-case time complexity Θ(n^2).
A has worst-case time complexity Θ(n^3).
How can an algorithm have two worst-case time complexities?
Is the author trying to say that for some value of n (say, 300) the upper bound for an algorithm solving P is of the order O(n^2), while for another value of n (say, 3000) the same algorithm's worst case is Ω(n log n)?
The answer to your specific question
is the author trying to say that for some value of n (say, 300) the upper bound for an algorithm solving P is of the order O(n^2), while for another value of n (say, 3000) the same algorithm's worst case is Ω(n log n)?
is no. That is not how complexity functions work. :) We don't talk about different complexity classes for different values of n. The complexity refers to the entire algorithm, not to the algorithm at specific sizes. An algorithm has a single time complexity function T(n), which computes how many steps are required to carry out the computation for an input size of n.
In the problem, you are given two pieces of information:
The worst case complexity is O(n^2)
The worst case complexity is Ω(n log n)
All this means is that we can pick constants c1, c2, N1, and N2, such that, for our algorithm's function T(n), we have
T(n) ≤ c1*n^2 for all n ≥ N1
T(n) ≥ c2*n log n for all n ≥ N2
In other words, our T(n) is "asymptotically bounded below" by some constant times n log n and "asymptotically bounded above" by some constant times n^2. It can itself be anything "between" an n log n style function and an n^2 style function. It can even be n log n (since that is bounded above by n^2) or it can be n^2 (since that's bounded below by n log n). It can be something in between, like n(log n)(log n).
It's not so much that an algorithm has "multiple worst case complexities" in the sense that it has different behaviors. What you are seeing is an upper bound and a lower bound! And these can, of course, be different.
Now it is possible that you have some "weird" function like this:
import math

def p(n):
    # assumes n >= 1
    if n % 2 == 0:
        stars = int(n * math.log2(n))   # ~ n log n work on even n
    else:
        stars = n * n                   # ~ n^2 work on odd n
    print("*" * stars)
This crazy algorithm does have the bounds specified in the problem from the Skiena book. And it has no Θ complexity. That might have been what you were thinking about, but do note that it is not necessary for a complexity function to be this weird in order for us to say the upper and lower bounds differ. The thing to remember is that upper and lower bounds are not tight unless explicitly stated to be so.
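To make that concrete, here is a sketch of the argument (mine, using base-2 logs) for T(n) = n log n when n is even and T(n) = n^2 when n is odd:

T(n) \ge n \log n \text{ for all } n \ge 2 \text{ (so } T = \Omega(n \log n)\text{)}, \quad T(n) \le n^2 \text{ for all } n \ge 1 \text{ (so } T = O(n^2)\text{)};
\text{but } T \ne \Theta(n^2), \text{ since on even } n,\ T(n)/n^2 = (\log n)/n \to 0,
\text{and } T \ne \Theta(n \log n), \text{ since on odd } n,\ T(n)/(n \log n) = n/\log n \to \infty.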
Of the following two execution times, which are polynomial and why?
I O(n^log n)
II O(log(n^n))
I believe only I is polynomial, as II looks to be logarithmic. Would that be a correct assertion?
By log properties, log(n^n) = n * log(n) which is less than n^2 for large n. Therefore, O(log(n^n)) is contained in O(n^2) and so is in polynomial time.
n^(log n) can't be bounded by c * n^k for any fixed c and k: since log n grows without bound, eventually log n > k + 1, and from then on n^(log n) > n^(k+1) > c * n^k for all large enough n. So clearly it cannot be in polynomial time. It is however smaller than 2^n for sufficiently large n (I'll leave this as an exercise to verify) and so is at most exponential.
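Both claims can be sketched in LaTeX notation as follows (my elaboration of the answer above, using base-2 logarithms):

\log(n^n) = n \log n \le n \cdot n = n^2 \quad (n \ge 1), \text{ so II is polynomial.}
n^{\log n} = 2^{(\log n)^2}; \text{ for any fixed } k,\ (\log n)^2 > k \log n \text{ once } \log n > k, \text{ so } n^{\log n} > n^k \text{ eventually: not polynomial.}
\text{And } (\log n)^2 < n \text{ for large } n, \text{ so } n^{\log n} = 2^{(\log n)^2} < 2^n \text{ eventually: at most exponential.}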
Talking about Big O notations, if one algorithm time complexity is O(N) and other's is O(2N), which one is faster?
The definition of big O is:
O(f(n)) = { g | there exist N and c > 0 such that g(n) < c * f(n) for all n > N }
In English, O(f(n)) is the set of all functions that have an eventual growth rate less than or equal to that of f.
So O(n) = O(2n). Neither is "faster" than the other in terms of asymptotic complexity. They represent the same growth rates - namely, the "linear" growth rate.
Proof:
O(n) is a subset of O(2n): Let g be a function in O(n). Then there are N and c > 0 such that g(n) < c * n for all n > N. So g(n) < (c / 2) * 2n for all n > N. Thus g is in O(2n).
O(2n) is a subset of O(n): Let g be a function in O(2n). Then there are N and c > 0 such that g(n) < c * 2n for all n > N. So g(n) < 2c * n for all n > N. Thus g is in O(n).
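As a concrete illustration of why the factor of 2 does not change the class (a hypothetical sketch, not part of the proof above), count the basic operations of a one-pass and a two-pass scan over the input; both stay within a constant multiple of n:

def ops_one_pass(items):
    ops = 0
    for _ in items:      # e.g. summing the elements
        ops += 1
    return ops           # exactly n operations

def ops_two_pass(items):
    ops = 0
    for _ in items:      # e.g. find the maximum...
        ops += 1
    for _ in items:      # ...then count how often it occurs
        ops += 1
    return ops           # exactly 2n operations

for n in [10, 1000, 100000]:
    data = list(range(n))
    print(ops_one_pass(data) / n, ops_two_pass(data) / n)   # 1.0 and 2.0 at every size
# Both op counts satisfy ops <= c*n for a fixed c (1 and 2), so both scans are in O(n) = O(2n).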
Typically, when people refer to an asymptotic complexity ("big O"), they refer to the canonical forms. For example:
logarithmic: O(log n)
linear: O(n)
linearithmic: O(n log n)
quadratic: O(n^2)
exponential: O(c^n) for some fixed c > 1
(Here's a fuller list: Table of common time complexities)
So usually you would write O(n), not O(2n); O(n log n), not O(3 n log n + 15 n + 5 log n).
Timothy Shield's answer is absolutely correct, that O(n) and O(2n) refer to the same set of functions, and so one is not "faster" than the other. It's important to note, though, that faster isn't a great term to apply here.
Wikipedia's article on "Big O notation" uses the term "slower-growing" where you might have used "faster", which is better practice. These algorithms are defined by how they grow as n increases.
One could easily imagine an O(n^2) algorithm that is faster than an O(n) one in practice, particularly when n is small or if the O(n) algorithm requires a complex transformation. The notation indicates that for twice as much input, one can expect the O(n^2) algorithm to take roughly 4 times as long as it did before, whereas the O(n) algorithm would take roughly twice as long as it did before.
It depends on the constants hidden by the asymptotic notation. For example, an algorithm that takes 3n + 5 steps is in the class O(n). So is an algorithm that takes 2 + n/1000 steps. But 2n is less than 3n + 5 and more than 2 + n/1000...
It's a bit like asking if 5 is less than some unspecified number between 1 and 10. It depends on the unspecified number. Just knowing that an algorithm runs in O(n) steps is not enough information to decide if an algorithm that takes 2n steps will complete faster or not.
Actually, it's even worse than that: you're asking if some unspecified number between 1 and 10 is larger than some other unspecified number between 1 and 10. The sets you pick from being the same doesn't mean the numbers you happen to pick will be equal! O(n) and O(2n) are sets of algorithms, and because the definition of Big-O cancels out multiplicative factors they are the same set. Individual members of the sets may be faster or slower than other members, but the sets are the same.
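To make the "unspecified number" point concrete, here is a small sketch (mine) using the step counts mentioned above, 3n + 5, 2 + n/1000 and 2n, all of which are in O(n):

def steps_a(n): return 3 * n + 5       # O(n)
def steps_b(n): return 2 + n / 1000    # O(n)
def steps_c(n): return 2 * n           # the "2n steps" algorithm, also O(n)

for n in [1, 10, 1000, 10**6]:
    print(n, steps_a(n), steps_b(n), steps_c(n))
# At n = 1 the 2n algorithm is the cheapest; for the larger n shown, 2 + n/1000 wins.
# Big-O by itself records none of this: all three are simply "linear".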
Theoretically O(N) and O(2N) are the same.
In practice, an algorithm that literally performs 2N steps takes roughly twice as long as one that performs N steps, but that constant factor does not change how the running time grows with N, which is what the notation describes.
N and 2N differ only by the constant coefficient 2. That factor is visible for small values of N, but as N increases it is the linear growth itself that dominates and the coefficient 2 becomes insignificant, so we state the algorithm's complexity as O(N).
Example:
Let's take this function
T(n) = 3n^2 + 8n + 2089
For n = 1 or 2 the constant 2089 is the dominant part of the function, but for larger values of n we can ignore the constant and the 8n term and concentrate on 3n^2, as it contributes most of the growth. As n grows further, the coefficient 3 also becomes insignificant, and we say the complexity is O(n^2).
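A quick numeric sketch of that claim (my own illustration):

def T(n):
    return 3 * n * n + 8 * n + 2089

for n in [1, 10, 100, 10000]:
    quadratic_share = 3 * n * n / T(n)
    print(n, T(n), round(quadratic_share, 3))
# At n = 1 the constant 2089 dominates (the 3n^2 term is about 0.1% of T(n));
# by n = 10000 the 3n^2 term accounts for essentially all of T(n).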
An algorithm taking n steps finishes sooner than one taking 2n steps, but you need to understand that when we talk about Big O, we are measuring the complexity of a function/algorithm, not its speed, and we measure that complexity asymptotically. In layman's terms, asymptotic analysis looks at immensely large values of n. If you plot the graphs of n and 2n, the two curves stay within a constant factor of each other for every value of n; they are much closer to each other than to the other canonical forms like O(n log n) or O(1), so by convention we reduce the complexity to the canonical form O(n).
I am developing an algorithm which takes O(log^3 n) time. (NOTE: Take O as Big Theta, though Big O would be fine too.)
I am unsure whether O(log^3 n), or even O(log^2 n), is considered more, less, or equally complex compared with O(n log n).
If I were to follow the rules straight away, I'd say O(n log n) is the more complex one, but I still don't have any clue as to why or how.
I've done some research but I haven't been able to find an answer to this question.
Thank you very much.
Dividing both expressions by log n, we are comparing n with (log n)^2, and since log n grows more slowly than any positive power of n, (log n)^2 is eventually smaller than n. Thus (n log n) is "bigger" than ((log n)^3). This can easily be generalized to ((log n)^k) via induction.
If you graph the two functions together you can see that n log(n) grows faster than log^3 n.
To prove this, you need to show that n log n > log^3 n for all values of n greater than some constant c. Find such a c and you have your proof.
In fact, n log(n) grows faster than any log^x n for any positive x.
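One way to carry out that proof is the substitution n = 2^m (a sketch of my own, with base-2 logs):

\text{Let } n = 2^m,\ m = \log n. \text{ Then } n \log n = m\, 2^m \text{ and } (\log n)^3 = m^3.
\text{Since } 2^m > m^2 \text{ for all } m \ge 5, \text{ we get } m\, 2^m > m^3 \text{ for all } m \ge 5, \text{ i.e. } n \log n > (\log n)^3 \text{ for all } n \ge 32.
\text{The same argument, with } 2^m > m^{x-1} \text{ for large } m, \text{ shows } n \log n > (\log n)^x \text{ eventually, for any fixed } x > 0.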