Time complexity of an algorithm with multiple loops

Let us assume we have an algorithm with the following structure:
A for-loop with O(n) complexity.
Another for-loop with O(n) complexity.
Inside the second loop, a search algorithm with O(log n) complexity is executed in every iteration.
Now, what time complexity does this algorithm have? Is it O(n^2), O(n), O(n log n) or something else?

The solution is O(n + n log n), which is O(n log n): the first loop contributes O(n), while the second loop performs an O(log n) search in each of its n iterations, contributing O(n log n), which dominates.
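As a sketch of the structure described above (a minimal Python illustration; `run`, the item list, and the pre-sorted key list are made-up names for the example):

```python
from bisect import bisect_left

def run(items, sorted_keys):
    # First loop: O(n)
    total = 0
    for x in items:
        total += x

    # Second loop: O(n) iterations, each performing an O(log n) binary
    # search, so this loop alone costs O(n log n) and dominates the first.
    hits = 0
    for x in items:
        i = bisect_left(sorted_keys, x)
        if i < len(sorted_keys) and sorted_keys[i] == x:
            hits += 1
    return total, hits
```

Overall cost: O(n) + O(n log n) = O(n log n).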
If you want to learn about big-O notation, I recommend this book: Introduction to Algorithms
Link: https://mitpress.mit.edu/books/introduction-algorithms-third-edition

Related

Prove or refute: average, worst-case, and amortized time complexity

I'm trying to understand how to answer this question:
Given an algorithm A (some algorithm, no idea what it actually does) with average time complexity O(log n) and worst case complexity O(n), prove/refute the following:
There exists an algorithm B that executes the same program as A that has the time complexity O(log n) on average amortized.
There exists an algorithm C that executes the same program as A that has the amortized time complexity O(n).
There exists an algorithm D that executes the same program as A that has time complexity O(n^2) at worst.
It looks as if I am supposed to prove them, but I do not completely understand how I am supposed to do that mathematically, especially because I am not given the algorithm A and have no idea how it works.
For B I am confused, because how can it be both? Won't the average be the dominant thing, since we already know that A's complexity is O(log n) on average?
So confused about this whole question...

Are O(n log n) algorithms always better than all O(n^2) algorithms?

When trying to properly understand Big-O, I am wondering whether it's true that O(n log n) algorithms are always better than all O(n^2) algorithms.
Are there any particular situations where O(n^2) would be better?
I've read multiple times that in sorting, for example, an O(n^2) algorithm like bubble sort can be particularly quick when the data is almost sorted, so would it be quicker than an O(n log n) algorithm, such as merge sort, in this case?
No, O(n log n) algorithms are not always better than O(n^2) ones.
The Big-O notation describes an upper bound of the asymptotic behavior of an algorithm, i.e. for n that tends towards infinity.
In this definition you have to consider some aspects:
The Big-O notation is an upper bound on the algorithm's complexity, meaning that for some inputs (like the one you mentioned about sorting algorithms) an algorithm with a worse Big-O class may actually perform better: bubble sort runs in O(n) on an already sorted array, while merge sort and quicksort always take at least O(n log n).
The Big-O notation only describes the complexity class, hiding all the constant factors, which in real scenarios may be relevant. For example, an algorithm with cost 1000000·n (class O(n)) performs worse than an algorithm with cost 0.5·n^2 (class O(n^2)) for all inputs smaller than 2000000. Basically, the Big-O notation tells you that for big enough input n the O(n) algorithm will perform better than the O(n^2) one, but if you work with small inputs you may still prefer the latter solution.
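To make the constant-factor point concrete, here is a small Python check using the exact numbers above (`linear_cost` and `quadratic_cost` are illustrative cost functions, not real algorithms):

```python
def linear_cost(n):
    return 1_000_000 * n   # O(n), but with a huge constant factor

def quadratic_cost(n):
    return 0.5 * n * n     # O(n^2), with a tiny constant factor

# The costs cross at n = 2,000,000: below that the O(n^2) algorithm is
# cheaper, above it the O(n) algorithm wins.
def cheaper_algorithm(n):
    return "quadratic" if quadratic_cost(n) < linear_cost(n) else "linear"
```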
O(n log n) is better than O(n^2) asymptotically.
Big-O, Big-Theta, Big-Omega, all those measure the asymptotic behavior of functions, i.e., how functions behave when their argument goes toward a certain limit.
O(n log n) functions grow more slowly than O(n^2) functions; that is essentially what Big-O notation says. However, this does not mean that O(n log n) is always faster. It merely means that beyond some sufficiently large n, the O(n log n) function will always be cheaper.
The original answer illustrated this with a plot of f(n) = O(g(n)). Note that there can be a range where f(n) is actually more costly than g(n), even though f is bounded asymptotically by g. In terms of limits, or asymptotics for that matter, f(n) outperforms g(n) "in the long run," so to say.
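A quick numeric sketch makes the "long run" point concrete (the constant factors and function names here are made up for illustration):

```python
import math

def f(n):
    # hypothetical O(n log n) algorithm with a large constant factor
    return 100 * n * math.log2(n)

def g(n):
    # hypothetical O(n^2) algorithm
    return n * n

# For small n, f is more costly than g, even though f(n) = O(g(n));
# beyond the crossover, g pulls away for good.
```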
In addition to cadaniluk's answer:
If you restrict the inputs to the algorithms to a very special type, this can also affect the running time. E.g. if you run sorting algorithms only on already sorted lists, bubble sort will run in linear time, but merge sort will still need O(n log n).
There are also algorithms that have a bad worst-case complexity, but a good average case complexity. This means that there are bad input instances such that the algorithm is slow, but in total it is very unlikely that you have such a case.
Also, never forget that Big-O notation hides constants and additive lower-order terms. So an algorithm with worst-case complexity O(n log n) could actually run in 2^10000 · n · log n steps, and your O(n^2) algorithm could actually run in n^2 / 2^10000 steps. In that case, for n < 2^10000 you really want to use the "slower" algorithm.
Here is a practical example.
The GCC implementations of sorting functions have O(n log n) complexity. Still, they employ O(n^2) algorithms as soon as the size of the part being sorted is less than some small constant.
That's because for small sizes, they tend to be faster in practice.
See the libstdc++ source of std::sort for the internal implementation.
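As a sketch of that strategy (not GCC's actual code; the cutoff value here is an assumption), a merge sort that hands small subarrays to insertion sort might look like this:

```python
CUTOFF = 16  # assumed small-size threshold; real libraries pick similar constants

def insertion_sort(a):
    # O(n^2) worst case, but O(n) on already-sorted input and fast for tiny arrays
    for i in range(1, len(a)):
        x = a[i]
        j = i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def hybrid_sort(a):
    # O(n log n) merge sort that delegates small subarrays to insertion sort
    if len(a) <= CUTOFF:
        return insertion_sort(a)
    mid = len(a) // 2
    left, right = hybrid_sort(a[:mid]), hybrid_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```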

Do you know any example where space complexity is O(n log n)?

I just started to learn algorithms and I wasn't able to find the answer.
A skip list has (worst-case) space complexity of O(n log n).
There is a good summary of algorithms and data structures and their space/time complexity at http://bigocheatsheet.com/
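A rough Python sketch of the pointer-counting argument behind that bound (the function name and the worst-case assumption that every node reaches the top level are illustrative):

```python
import math

def worst_case_pointer_count(n):
    # A skip-list node can reach up to ~log2(n) levels. In the worst case,
    # every one of the n nodes does, storing ~log2(n) forward pointers each,
    # for Theta(n log n) pointers overall.
    levels = max(1, math.ceil(math.log2(n)))
    return n * levels
```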

Complexity of algorithms - Competitive Analysis

For example we have an algorithm with the complexity O(n log n). An online algorithm for the same problem is 5-competitive. What is the complexity of the online algorithm?
In my opinion the result should be something like O(5 * n log n). Did I understand this correctly?
Big-O notation refers to the asymptotic complexity of a function. The simplest way to explain this is that no constant factors are included in the notation. That means that n log n, 5·n log n, and even 10^6·n log n all fall into the big-O class O(n log n). So the 5-competitive online algorithm is still O(n log n).

N log(N) or N clarification

Will performing a O(log N) algorithm N times give O(N log(N))? Or is it O(N)?
e.g. Inserting N elements into a self-balancing tree.
int i = 0;
while (i < N) {
    insert(itemsToInsert[i]);  // one balanced-tree insert per element
    i++;
}
It's definitely O(N log(N)). It could also be O(N), if you could show that the sequence of calls, as a whole, grows slowly enough (because while some calls are O(log N), enough others are fast enough, say O(1), to bring the total down).
Remember: O(f) means the algorithm is no slower than f, but it can be faster (even if just in certain cases).
N times O(log(N)) leads to O(N log(N)).
Big-O notation describes the asymptotic behavior of the algorithm. The cost of each additional step here is O(log N); for an O(N) algorithm, the cost of each additional step would have to be O(1), with the cost function bounded asymptotically by a straight line.
Therefore O(N) is too low of a bound; O(N log N) seems about right.
Yes and no.
Calculus really helps here. The first iteration costs log(1), the second log(2), and so on up to the N-th iteration, which costs log(N). Rather than thinking of the problem as a multiplication, think of it as a sum (or an integral): the total is log(1) + log(2) + ... + log(N) = log(N!).
This comes out as O(N log(N)), since log(N!) = Θ(N log N) by Stirling's approximation.
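The sum-of-logs argument can be checked numerically; a small Python sketch (`total_insert_cost` is a made-up name for the example):

```python
import math

def total_insert_cost(N):
    # The i-th insertion into a balanced tree of size i-1 costs about log2(i),
    # so the total is log2(1) + log2(2) + ... + log2(N) = log2(N!).
    return sum(math.log2(i) for i in range(1, N + 1))

# Stirling's approximation gives log2(N!) = N*log2(N) - N/ln(2) + O(log N),
# which is Theta(N log N) -- so the total really is O(N log N), not O(N).
```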
