What is the standard way of writing O(max(n, m))?

What is the standard way of writing "the big-O of the greatest of m and n"?

It can be written as
O(m+n)
It might not look the same at first, but it is, since
max(m, n) <= m + n <= 2*max(m, n)
If you want, you can also just write O(max(m, n))
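For a concrete (hypothetical) illustration, here is a small Python function whose loop runs exactly max(m, n) times; by the inequality above, its running time can equally well be written O(max(m, n)) or O(m + n):

    def count_mismatches(a, b):
        # Compare two lists position by position, treating missing entries as None.
        # The loop runs max(len(a), len(b)) times, i.e. max(m, n) iterations.
        mismatches = 0
        for i in range(max(len(a), len(b))):
            x = a[i] if i < len(a) else None
            y = b[i] if i < len(b) else None
            if x != y:
                mismatches += 1
        return mismatches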


How is an O(N) algorithm also an O(N^2) algorithm?

I was reading about Big-O notation, and it said that any algorithm that is O(N) is also O(N^2).
This seems confusing to me; I know that Big-O gives an upper bound only.
But how can an O(N) algorithm also be an O(N^2) algorithm?
Are there any examples where this is the case?
I can't think of any.
Can anyone explain it to me?
"Upper bound" means the algorithm takes no longer than (i.e. <=) that long (as the input size tends to infinity, with relevant constant factors considered).
It does not mean it will ever actually take that long.
Something that's O(n) is also O(n log n), O(n^2), O(n^3), O(2^n) and also anything else that's asymptotically bigger than n.
If you're comfortable with the relevant mathematics, you can also see this from the formal definition.
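Since the question asks for an example: an ordinary linear search is one (a minimal Python sketch, with an illustrative name). Its step count is proportional to n, so it is O(n), and because n <= n^2 for n >= 1, the very same algorithm is also O(n^2):

    def linear_search(items, target):
        # One pass over the input: at most len(items) comparisons.
        for index, value in enumerate(items):
            if value == target:
                return index
        return -1

    # The loop does at most n iterations, so the step count is bounded by k*n + C
    # for small constants k and C; since n <= n^2 for n >= 1, it is also bounded
    # by k*n^2 + C, making the algorithm both O(n) and O(n^2).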
O notation can be naively read as "less than".
With ordinary numbers, if I tell you x < 4, then obviously x < 5 and x < 6 and so on.
O(n) means that, if the input size of an algorithm is n (n could be the number of elements, or the size of an element or anything else that mathematically describes the size of the input) then the algorithm runs "about n iterations".
More formally, it means that the number of steps x taken by the algorithm satisfies
x < k*n + C, where k and C are positive real constants.
In other words, for every possible input of size n, the algorithm executes no more than k*n + C steps.
O(n^2) is similar, except the bound is k*n^2 + C. Since n is a natural number, n^2 >= n, so the definition still holds: because x < k*n + C, it is also true that x < k*n^2 + C.
So an O(n) algorithm is also an O(n^2) algorithm, an O(n^3) algorithm, an O(n^n) algorithm, and so on.
For something to be O(N), it means that for large N it is less than the function f(N) = k*N for some fixed k. But it is then also less than k*N^2 (assuming N >= 1, which is certainly the case for large N). So O(N) implies O(N^2), or more generally O(N^m) for all m > 1.
Big-O notation describes an upper bound, so it is not wrong to say that O(n) is also O(n^2). O(n) algorithms are a subset of O(n^2) algorithms. It is the same way that squares are a subset of rectangles, but not every rectangle is a square. So technically it is correct to say that an O(n) algorithm is an O(n^2) algorithm, even if it is not precise.
Definition of big-O:
A function f(x) is O(g(x)) iff there exist constants M and x0 such that |f(x)| <= M|g(x)| for all x >= x0.
Clearly, if g1(x) <= g2(x), then |f(x)| <= M|g1(x)| <= M|g2(x)|.
An algorithm with just a single loop is typically O(n), and an algorithm with a nested loop is typically O(n^2).
Now consider the bubble sort algorithm, which uses a nested loop.
If we give an already sorted set of inputs to a bubble sort that stops early when no swaps occur, it makes a single pass without swapping anything and then stops, so for a scenario like this it takes O(n), while for the other cases it takes O(n^2).
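A minimal sketch of such an early-exit bubble sort (in Python; this assumes the usual swap-flag variant):

    def bubble_sort(items):
        n = len(items)
        for end in range(n - 1, 0, -1):
            swapped = False
            for i in range(end):
                if items[i] > items[i + 1]:
                    items[i], items[i + 1] = items[i + 1], items[i]
                    swapped = True
            if not swapped:
                # Already sorted input: one pass, no swaps, then stop -> O(n).
                break
        return items

    # Worst case (e.g. reverse-sorted input): the outer loop runs about n times -> O(n^2).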

Best running time to order n numbers

I have n numbers between 0 and (n^4 - 1); what is the fastest way I can sort them?
Of course, n log n is trivial, but I thought about the option of Radix Sort with base n, and then it will be linear time, but I am not sure because of the -1.
Thanks for the help!
I think you are misunderstanding the efficiency of Radix Sort. From Wikipedia:
Radix sort complexity is O(wn) for n keys which are integers of word size w. Sometimes w is presented as a constant, which would make radix sort better (for sufficiently large n) than the best comparison-based sorting algorithms, which all perform O(n log n) comparisons to sort n keys. However, in general w cannot be considered a constant: if all n keys are distinct, then w has to be at least log n for a random-access machine to be able to store them in memory, which gives at best a time complexity O(n log n).
I personally would implement quicksort choosing an intelligent pivot. Using this method you can achieve about 1.188 n log n efficiency.
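One common reading of "an intelligent pivot" is the median-of-three rule; here is a rough Python sketch (this is just one possible interpretation, not necessarily what the answer had in mind):

    def quicksort(items, lo=0, hi=None):
        # In-place quicksort using the median of the first, middle and last
        # elements as the pivot (the classic median-of-three heuristic).
        if hi is None:
            hi = len(items) - 1
        if lo >= hi:
            return items
        mid = (lo + hi) // 2
        pivot = sorted((items[lo], items[mid], items[hi]))[1]
        i, j = lo, hi
        while i <= j:
            while items[i] < pivot:
                i += 1
            while items[j] > pivot:
                j -= 1
            if i <= j:
                items[i], items[j] = items[j], items[i]
                i += 1
                j -= 1
        quicksort(items, lo, j)
        quicksort(items, i, hi)
        return items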
If we use Radix Sort in base n we get the desired linear time complexity; the -1 doesn't matter.
We represent the numbers in base n. Each number then has at most log_n(n^4 - 1) digits, and since log_n(n^4 - 1) < log_n(n^4) = 4, that is at most 4 digits.
The total work is therefore at most 4 * (n + n) = O(n): one n is for the n numbers, the other n is the span of a single digit (an overestimate of the bucket count), and 4 bounds the number of digit passes. Overall, linear time complexity.
Thanks for the help anyway! If I did something wrong please notify me!
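For concreteness, here is a minimal sketch of the base-n radix sort described above (assuming the keys really are integers in [0, n^4 - 1]; the function name is just illustrative):

    def radix_sort_base_n(nums):
        # Sort n integers from the range [0, n^4 - 1] with LSD radix sort in base n.
        n = len(nums)
        if n <= 1:
            return nums
        base = n
        for digit in range(4):  # at most log_n(n^4) = 4 digit passes
            buckets = [[] for _ in range(base)]  # stable bucketing per digit
            for x in nums:
                buckets[(x // base ** digit) % base].append(x)
            nums = [x for bucket in buckets for x in bucket]
        return nums

    # Each pass costs O(n + base) = O(n), and there are at most 4 passes, so the
    # total is O(n). Example: radix_sort_base_n([7, 0, 15, 3]) -> [0, 3, 7, 15].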

If algorithm time complexity is theta(n^2), is it possible that for one input it will run in O(n)?

If algorithm time complexity is theta(n^2), is it possible that for one input it will run in O(n)?
By the definition of theta, it seems that no input will run in O(n); however, some say that it's possible.
I really can't think of a scenario in which an algorithm that runs in theta(n^2) has one input that may run in O(n).
If it's true, can you please explain it to me and give me an example?
Thanks a lot!
I think your terminology is tripping you up.
An algorithm cannot be "Θ(n^2)." Theta notation describes the growth rates of functions. You can say that an algorithm's runtime is Θ(n^2), in which case the algorithm cannot run in time O(n) on any inputs, or you could say that an algorithm's worst-case runtime is Θ(n^2), in which case it could conceivably be possible that the algorithm will run in time O(n) for some inputs (take, for example, insertion sort).
Hope this helps!
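To make the insertion sort example concrete, here is a minimal sketch (in Python). Its worst-case runtime is Θ(n^2), but on an already sorted input the inner while loop never executes its body, so that particular run takes only O(n) time:

    def insertion_sort(items):
        for j in range(1, len(items)):
            key = items[j]
            i = j - 1
            # Shift larger elements right; on sorted input this loop never runs.
            while i >= 0 and items[i] > key:
                items[i + 1] = items[i]
                i -= 1
            items[i + 1] = key
        return items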
If algorithm time complexity is theta(n^2), is it possible that for one input it will run in O(n)?
No. Here's why. Let's say that the running time of your algorithm is f(n). Since f(n) = Θ(n^2), there are constants c0 > 0 and n0 > 0 such that c0*n^2 <= f(n) for every n >= n0. Let us suppose that f(n) = O(n). This would mean that for some constants c1 > 0 and n1 > 0 we would have f(n) <= c1*n for every n >= n1. Then for n >= max(n0, n1) we would have
c0*n^2 <= f(n) <= c1*n => c0*n <= c1, which is not true for n > c1/c0. Contradiction.
Informally, you can always think of O as <= and Θ as = (and of Ω as >=). So you can reformulate your problem as:
if something is equal to n^2 is it less than n?
My understanding is that while Big-Oh only asserts an upper bound, Big-Theta asserts both an upper bound and a lower bound. By definition, if something performs in theta(n^2), there is no input for which the performance is theta(n).
Note: these all refer to asymptotic complexity. Algorithms can perform differently on smaller inputs, i.e., an algorithm that runs in theta(n^2) might outperform (on smaller inputs) something that runs in theta(n) because of the hidden constant factors.

Is O(n Log n) in polynomial time?

Is O(n Log n) in polynomial time? If so, could you explain why?
I am interested in a mathematical proof, but I would be grateful for any strong intuition as well.
Thanks!
Yes, O(nlogn) is polynomial time.
From http://mathworld.wolfram.com/PolynomialTime.html,
An algorithm is said to be solvable in polynomial time if the number of steps required to complete the algorithm for a given input is O(n^m) for some nonnegative integer m, where n is the complexity of the input.
From http://en.wikipedia.org/wiki/Big_O_notation,
f is O(g) iff there exist a positive constant k and an n_0 such that |f(n)| <= k*|g(n)| for all n >= n_0.
I will now prove that n log n is O(n^m) for some m which means that n log n is polynomial time.
Indeed, take m=2. (this means I will prove that n log n is O(n^2))
For the proof, take k = 2. (It could be smaller, but it doesn't have to be.)
We need an n_0 such that for all larger n the following holds:
f(n) <= k * g(n), i.e. n log n <= 2 * n^2.
Take n_0 = 1 (this is sufficient).
It is now easy to see that
n log n <= 2*n*n
holds whenever
log n <= 2n,
which is true for every n > 0 (and in particular for all n >= n_0 = 1).
This proof could be a lot nicer in LaTeX math mode, but I don't think Stack Overflow supports that.
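For readers who do want the math notation, here is my transcription of the argument above in LaTeX:

    Since $\log n \le 2n$ for every $n > 0$, we have
    \[
      n \log n \;\le\; 2 n^2 \quad \text{for all } n \ge 1 .
    \]
    Hence, with $k = 2$ and $n_0 = 1$, $n \log n \le k \cdot n^2$ for all $n \ge n_0$,
    so $n \log n = O(n^2)$, i.e. it is bounded above by a polynomial.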
It is, because it is upper-bounded by a polynomial in n (n^2, for example).
You could take a look at the graphs and go from there, but I can't formulate a mathematical proof other than that :P
EDIT: From the wikipedia page, "An algorithm is said to be of polynomial time if its running time is upper bounded by a polynomial expression in the size of the input for the algorithm".
It is at least not worse than polynomial time, and still not better than it: n < n log n < n*n for large enough n.
Yes. What's the limit of n log n as n goes to infinity? Intuitively, for large n we have n >> log n, so the product is dominated by the n factor and n log n grows only slightly faster than n, which is clearly polynomial time. A more rigorous proof is by using the Sandwich theorem, which Inspired did:
n^1 < n log n < n^2 (for large enough n).
Hence n log n is bounded above (and below) by polynomial functions of n.

Algorithm complexity, log^k n vs n log n

I am developing some algorithm which takes O(log^3 n). (NOTE: Take O as Big Theta, though Big O would be fine too.)
I am unsure whether O(log^3 n), or even O(log^2 n), is considered to be more, less, or equally complex compared to O(n log n).
If I were to follow the rules straight away, I'd say O(n log n) is the more complex one, but still, I don't have any clue as to why or how.
I've done some research but I haven't been able to find an answer to this question.
Thank you very much.
Since (log n)^2 grows more slowly than n, for large enough n we have (log n)^2 < n; multiplying both sides by log n gives (log n)^3 < n log n. Thus (n log n) is "bigger" than ((log n)^3). This can easily be generalized to ((log n)^k) via induction.
If you graph the two functions together you can see that n log(n) grows faster than log3 n.
To prove this, you need to prove that n log n > log3 n for all values of n greater than some arbitrary number c. Find such a c and you have your proof.
In fact, n log(n) grows faster than log^x n for any positive x.
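As a quick numerical sanity check (not a proof), here is a small Python snippet comparing the two growth rates; the values below are approximate:

    import math

    # Compare n*log2(n) against (log2 n)^3 at a few input sizes.
    for n in [10, 100, 1000, 10**6]:
        n_log_n = n * math.log2(n)
        log_cubed = math.log2(n) ** 3
        print(n, round(n_log_n), round(log_cubed))

    # Approximate output:
    #   10        33        37
    #   100       664       293
    #   1000      9966      990
    #   1000000   19931569  7918
    # (log n)^3 wins only for tiny n; n log n dominates from moderate sizes on.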
