How do I express O(n) * O(n log n) - big-o

I'm writing a report where I need to present some results using Big O notation. Since I have not used Big O notation before, I'm a bit unsure how to write it.
I understand that O(n) * O(n) becomes O(n^2), for example a loop inside a loop.
And O(n) * O(log n) equals O(n log n), for example looping over a function that searches in a balanced binary tree.
But what if I have to loop over a function with time complexity O(n log n)?
How do I write O(n) * O(n log n) correctly?

It's just normal multiplication of whatever is inside the O.
n * (n log n) = n^2 log n
So it's:
O(n^2 log n)
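A minimal sketch of this rule: wrap an O(n log n) sort inside an O(n) loop and count comparisons. The counting wrapper below is illustrative, not part of any question above; it assumes Python's built-in `sorted`, whose comparison count on random data is roughly n log2 n, so the total should scale like n^2 log n.

```python
import random

def counted_sorts(n, rng):
    """Run an O(n log n) sort inside an O(n) loop and count comparisons.

    Total comparisons should grow like n * (n log n) = n^2 log n.
    """
    comparisons = 0

    class Item:
        __slots__ = ("v",)
        def __init__(self, v):
            self.v = v
        def __lt__(self, other):
            nonlocal comparisons
            comparisons += 1
            return self.v < other.v

    for _ in range(n):                        # O(n) outer loop
        data = [Item(rng.random()) for _ in range(n)]
        sorted(data)                          # O(n log n) per call
    return comparisons
```

Doubling n should multiply the count by roughly 4 * log(2n)/log(n), noticeably more than the factor 4 a plain O(n^2) algorithm would show.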

Related

Is the theta bound of an algorithm unique?

For example, the tightest bound for binary search is Θ(log n), but we can also say it is O(n^2) and Ω(1).
However, I'm confused about whether we can say something like "binary search has a Θ(n) bound", since Θ(n) lies between O(n^2) and Ω(1).
The worst-case execution of binary search on an array of size n uses Θ(log n) operations.
Any execution of binary search on an array of size n uses O(log n) operations.
Some "lucky" executions of binary search on an array of size n use O(1) operations.
The sentence "The complexity of binary search has a Θ(n) bound" is so ambiguous and misleading that most people would call it false. In general, I advise you not to use the word "bound" in the same sentence as one of the notations O( ), Θ( ), Ω( ).
It is true that log n < n.
It is false that log n = Θ(n).
The statement log n < Θ(n) is technically true, but so misleading that you should never write it.
It is true that log n = O(n).
The reasoning "since Θ(n) is between O(n^2) and Ω(1)" is wrong: Θ(n) is indeed compatible with O(n²) and Ω(1), but so is Θ(log n), so that argument cannot single out Θ(n).
In the case of binary (dichotomic) search, you can establish both bounds O(log n) and Ω(log n); together they are tight, and are summarized by Θ(log n).
You may not choose complexities "randomly", you have to prove them.
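The worst-case Θ(log n) and lucky-case O(1) executions described above can be made concrete by counting probes in a binary search. A minimal sketch (the probe-counting return value is added here for illustration; it is not part of the question):

```python
def binary_search(arr, target):
    """Classic binary search on a sorted list.

    Returns (index or None, number of probes). Each loop iteration
    probes one midpoint, so the worst case is about log2(n) probes
    and the lucky case (target at the first midpoint) is 1 probe.
    """
    lo, hi = 0, len(arr) - 1
    probes = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if arr[mid] == target:
            return mid, probes
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, probes
```

On an array of 1024 elements, searching for the first midpoint value takes 1 probe, while searching for the smallest element takes 10 = log2(1024) probes: both O(log n), but only the worst case is tight.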

Why Merge Sort time complexity is not O(N)?

Merge sort's time complexity is O(n log n). Here n dominates log n, so is merge sort O(n)?
Thanks
O(n log n) is the best you can get with traditional comparison-based sorting algorithms.
You can't say that O(n log n) == O(n), even though n dominates log n, because the terms are multiplied, not added.
If you had n + log n, where n dominates log n, then you could say the complexity is O(n).
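The difference between adding and multiplying can be seen numerically. In the sum, the smaller term is absorbed (the ratio to n approaches 1); in the product, the ratio to n is log n itself, which grows without bound. A quick sketch:

```python
import math

# If terms are ADDED, the smaller one is absorbed:
# (n + log n) / n -> 1, so n + log n is O(n).
def added(n):
    return (n + math.log2(n)) / n

# If terms are MULTIPLIED, nothing is absorbed:
# (n log n) / n = log n -> infinity, so n log n is NOT O(n).
def multiplied(n):
    return (n * math.log2(n)) / n
```

For n = 10^6, `added(n)` is already within 0.01% of 1, while `multiplied(n)` is about 20 and still climbing.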

How is the cost of suffix array generation O(n^2 log n)?

To build a suffix array on a string of n characters,
we first generate the n suffixes, O(n),
and then sort them, O(n log n).
The total time complexity appears to be O(n) + O(n log n) = O(n log n).
But I am reading that it is O(n^2 log n) and cannot understand how. Can someone please explain?
First of all, your sum is right as far as it goes: O(n) + O(n log n) = O(n log n), not O(n).
Second, and the reason why you are confused: comparing two suffixes is not a constant-time operation. Each suffix is a string of length up to n, so a single comparison costs O(n). Sorting n suffixes therefore costs O(n * log(n) * n) = O(n^2 * log(n)).
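A minimal sketch of the naive construction, assuming Python's `sorted` (which performs O(n log n) comparisons, each of which here compares strings of length up to n):

```python
def naive_suffix_array(s):
    """Naive suffix array: sort suffix start positions by suffix text.

    sorted() does O(n log n) comparisons, but each comparison of two
    suffixes inspects up to n characters, so the total cost is
    O(n log n) * O(n) = O(n^2 log n).
    """
    n = len(s)
    return sorted(range(n), key=lambda i: s[i:])
```

For "banana" this yields [5, 3, 1, 0, 4, 2] (suffixes "a", "ana", "anana", "banana", "na", "nana" in order). Linear-time constructions such as SA-IS exist when n is large enough for the quadratic factor to hurt.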

Which is complexity of set difference using quick sort and binary search?

We have two sets A and B and we want to compute the set difference A - B. We first sort the elements of B with quicksort, which has average complexity O(n * log n), and then search for each element of A in B with binary search, which has complexity O(log n). What complexity does the whole set-difference algorithm have, assuming we use quicksort and binary search? I tried to compute it as follows: O(n * log n) + O(log n) = O(n * log n + log n) = O(log n * (n + 1)) = O((n + 1) * log n). Is this correct?
First, constants do not count in O notation next to a term that grows faster: the + 1 is absorbed by n, so O((n + 1) * log n) is just O(n * log n).
Now the important issue: suppose A has m elements. You need to do m binary searches, each of complexity O(log n). So in total, the complexity is O(n * log n) + O(m * log n) = O((n + m) * log n).
O(n * log n) + O(log n) = O(n * log n)
http://en.wikipedia.org/wiki/Big_O_notation#Properties
If a function may be bounded by a polynomial in n, then as n tends to
infinity, one may disregard lower-order terms of the polynomial.
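The sort-then-binary-search algorithm described above can be sketched as follows. This is an illustrative implementation, not the asker's code; it uses the standard-library `bisect` module for the binary search:

```python
from bisect import bisect_left

def set_difference(A, B):
    """A - B: sort B once, O(n log n) with n = len(B), then binary-search
    each of the m = len(A) elements of A, O(m log n).

    Total: O(n log n) + O(m log n) = O((n + m) log n).
    """
    B_sorted = sorted(B)

    def in_B(x):
        i = bisect_left(B_sorted, x)          # O(log n) per lookup
        return i < len(B_sorted) and B_sorted[i] == x

    return [a for a in A if not in_B(a)]
```

For example, `set_difference([1, 2, 3, 4], [2, 4, 5])` gives `[1, 3]`, preserving the order of A.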

Big-O notation calculation, O(n) * O(log n) = O(n log n)

I need to design an algorithm that is able to do some calculations in a given O notation. It has been some time since I last worked with O notation, and I am a bit confused about how to combine different O notations.
O(n) * O(log n) = O(n log n)
O(n) + O(n) = O(2n) = O(n)
O(n) * O(log n) + O(n log n) = O(n log n) + O(n log n) = O(n log n)
Are these correct? What other rules have I overlooked?
The rule for multiplication is really simple:
O(f) * O(g) = O(f * g)
The sum of two O terms is harder to calculate if you want it to work for arbitrary functions.
However, if f ∈ O(g), then f + g ∈ O(g).
Therefore, your calculations are correct. Note, though, how addition differs from multiplication:
O(n) + O(log n) = O(n)
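These rules can be sanity-checked numerically: if f ∈ O(g), the ratio f(n)/g(n) stays bounded as n grows. The helper below is a rough sketch of that idea (evidence over a sample, not a proof); the function name and sample points are illustrative:

```python
import math

def ratio_growth(f, g, small=10, large=10 ** 9):
    """Ratio f/g at a large n divided by the ratio at a small n.

    A value near 1 suggests f grows like g (f in Theta(g)); a value
    that keeps growing with `large` suggests f is NOT in O(g).
    A numeric sanity check only, not a proof.
    """
    return (f(large) / g(large)) / (f(small) / g(small))
```

For O(n) + O(n) = O(n), the ratio of 2n to n is a flat 2, so `ratio_growth` is 1; for n log n against n, the ratio is log n, so `ratio_growth` keeps climbing, confirming that n log n is not O(n).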

Resources