I have n numbers between 0 and (n^4 - 1). What is the fastest way I can sort them?
Of course, n log n is trivial, but I thought about the option of radix sort with base n, which would then be linear time, but I am not sure because of the -1.
Thanks for help!
I think you are misunderstanding the efficiency of Radix Sort. From Wikipedia:
Radix sort complexity is O(wn) for n keys which are integers of word size w. Sometimes w is presented as a constant, which would make radix sort better (for sufficiently large n) than the best comparison-based sorting algorithms, which all perform O(n log n) comparisons to sort n keys. However, in general w cannot be considered a constant: if all n keys are distinct, then w has to be at least log n for a random-access machine to be able to store them in memory, which gives at best a time complexity O(n log n).
I personally would implement quicksort with an intelligently chosen pivot. Using this method you can achieve an expected cost of about 1.188 n log n comparisons.
If we use Radix Sort in base n we get the desired linear time complexity, the -1 doesn't matter.
We will represent the numbers in base n:
Then we get: (number of passes) * (work per pass) <= log_n(n^4 - 1) * (n + n) <= 4 * 2n = O(n).
One n is for the n numbers processed in each pass, the other n is the span of the digits (an overestimate of the bucket work), and log_n(n^4 - 1) is less than log_n(n^4), which is 4, so at most 4 passes are needed. Overall: linear time complexity.
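To make that argument concrete, here is a minimal sketch of the base-n LSD radix sort described above. The function name and structure are illustrative, and it assumes all keys are integers in [0, n^4 - 1] where n is the number of keys:

import random

def radix_sort_base_n(nums: list) -> list:
    n = len(nums)
    if n <= 1:
        return nums
    # At most 4 base-n digits are needed, since every key is < n^4.
    for d in range(4):
        buckets = [[] for _ in range(n)]           # n buckets, one per digit value
        divisor = n ** d
        for x in nums:
            buckets[(x // divisor) % n].append(x)  # stable distribution by digit d
        nums = [x for b in buckets for x in b]     # collect: O(n + n) per pass
    return nums

print(radix_sort_base_n([14, 3, 255, 0, 77, 9]))   # [0, 3, 9, 14, 77, 255]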
Thanks for the help anyway! If I did something wrong please notify me!
I have a very basic hybrid implementation of merge sort and insertion sort, with a threshold below which insertion sort is used on sub-arrays of problem size n. The merge and insertion sorts themselves are the most basic, widely available versions:
def hybrid_sort(array: list, threshold: int = 10):
    if len(array) > 1:
        if len(array) > threshold:
            mid = len(array) // 2
            left = array[:mid]
            right = array[mid:]
            hybrid_sort(left, threshold)   # propagate the threshold
            hybrid_sort(right, threshold)
            merge(array, left, right)
        else:
            insertion_sort(array)
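For completeness, the snippet calls merge and insertion_sort without showing them. Here is one plausible pair of in-place helpers matching the signatures used above; this is a sketch of what such helpers typically look like, not necessarily the asker's actual code:

def insertion_sort(array: list) -> None:
    # Sort array in place: O(k^2) worst case on k elements.
    for i in range(1, len(array)):
        key = array[i]
        j = i - 1
        while j >= 0 and array[j] > key:
            array[j + 1] = array[j]
            j -= 1
        array[j + 1] = key

def merge(array: list, left: list, right: list) -> None:
    # Merge the two sorted halves back into array, in place.
    i = j = k = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            array[k] = left[i]
            i += 1
        else:
            array[k] = right[j]
            j += 1
        k += 1
    while i < len(left):
        array[k] = left[i]
        i += 1
        k += 1
    while j < len(right):
        array[k] = right[j]
        j += 1
        k += 1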
Unless I am completely misunderstanding then this would mean that we have a recurrence relation for this particular piece of code generalized as:
T(n) = 2T(n/2) + O(n^2)
The first term comes from merge sort, and the second from the insertion sort operations.
By the master theorem, n raised to log_b(a) would equal n in this case, because you'd have n raised to the log_2(2) which is 1, so n^1 = n.
Then, our f(n) = n^2, which is 'larger' than n, so by case 3 of the master theorem my algorithm above would be Θ(f(n)), i.e. O(n^2), because f(n) is bounded from below by n.
This doesn't seem right to me, considering we know merge sort is O(n log n), and I'm having a hard time wrapping my head around it. I think it's because I've not yet analyzed an algorithm that has a conditional 'if' check.
Can anyone illuminate this for me?
Unless the threshold itself depends on n, the insertion sort part does not matter at all. This has the same complexity as a normal merge sort.
Keep in mind that the time complexity of an algorithm that takes an input of size n is a function of n that is generally difficult to compute exactly, and so we focus on the asymptotic behavior of that function instead. This is where the big O notation comes into play.
In your case, as long as threshold is a constant, every insertion sort runs on a sub-array of constant size, so each one costs O(1); there are O(n) of them, so together they contribute only O(n) work. As n grows, that becomes insignificant next to the merging, and the overall complexity simplifies to O(n log n), the complexity of merge sort.
Here's a different perspective that might help give some visibility into what's happening.
Let's suppose that once the array size reaches k, you switch from merge sort to insertion sort. We want to work out the time complexity of this new approach. To do so, we'll imagine the "difference" between the old algorithm and the new algorithm. Specifically, if we didn't make any changes to the algorithm, merge sort would take time Θ(n log n) to complete. However, once we get to arrays of size k, we stop running mergesort and instead use insertion sort. Therefore, we'll make some observations:
There are Θ(n / k) subarrays of the original array of size k.
We are skipping calling mergesort on all these arrays. Therefore, we're avoiding doing Θ(k log k) work for each of Θ(n / k) subarrays, so we're avoiding doing Θ(n log k) work.
Instead, we're insertion-sorting each of those subarrays. Insertion sort, in the worst case, takes time O(k^2) when run on an array of size k. There are Θ(n / k) of those arrays, so we're adding in a factor of O(nk) total work.
Overall, this means that the work we're doing in this new variant is O(n log n) - O(n log k) + O(nk). Dialing k up or down will change the total amount of work done. If k is a fixed constant (that is, k = O(1)), this simplifies to
O(n log n) - O(n log k) + O(nk)
= O(n log n) - O(n) + O(n)
= O(n log n)
and the asymptotic runtime is the same as that of regular merge sort.
It's worth noting that as k gets larger, eventually the O(nk) term will dominate the O(n log k) term, so there's some crossover point past which increasing k starts increasing the runtime. You'd have to do some experimentation to fine-tune when to make the switch. But empirically, setting k to some modest value will indeed give you a big performance boost.
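If you want to find that crossover empirically, a rough timing harness is enough. A sketch, assuming the hybrid_sort(array, threshold) from the question is in scope (the sizes and thresholds here are arbitrary):

import random
import timeit

# Time hybrid_sort at several thresholds to locate the crossover point.
# Each timed call includes the cost of copying the input, which is fine
# for a rough comparison since it is the same for every threshold.
for k in (1, 8, 16, 32, 64, 128):
    data = [random.random() for _ in range(20_000)]
    t = timeit.timeit(lambda: hybrid_sort(data.copy(), k), number=5)
    print(f"threshold={k:4d}: {t:.3f}s")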
I just came across this weird discovery: in normal maths, n·log n should be less than n, because log n is usually less than 1.
So why is O(n log(n)) greater than O(n)? (i.e. why is n log n considered to take more time than n?)
Does Big-O follow a different system?
It turned out I had misunderstood log n to be less than 1.
When I asked a few of my seniors, I learned that if the value of n is large (which it usually is when we are considering Big O, i.e. the worst case), log n can be greater than 1.
So yeah,
O(1) < O(logn) < O(n) < O(nlogn) holds true.
(I thought this was a dumb question and was about to delete it, but then realised that no question is a dumb question and there might be others with the same confusion, so I left it here.)
...because log n is always less than 1.
This is a faulty premise. In fact, log_b(n) > 1 for all n > b. For example, log_2(32) = 5.
Colloquially, you can think of log n as the number of digits in n. If n is an 8-digit number then log n ≈ 8. Logarithms are usually bigger than 1 for most values of n, because most numbers have multiple digits.
Plot both graphs (on Desmos, https://www.desmos.com/calculator, or any other site) and look at the result for large values of n (y = f(n)). I say large values because for small values of n the program will not have a time issue. For convenience I have attached a graph below; you can try other bases for the log as well.
The red curve represents time = n and the blue curve represents time = n·log(n).
Here is a graph of the popular time complexities
n*log(n) is clearly greater than n for n>2 (log base 2)
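If plotting isn't handy, a few sample values make the same point; a quick sketch (the sizes chosen are arbitrary):

import math

# log2(n) passes 1 as soon as n > 2, so n*log2(n) outgrows n quickly.
for n in (2, 16, 1_000, 1_000_000):
    print(n, round(math.log2(n), 2), round(n * math.log2(n)))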
An easy way to remember might be to take an example.
Imagine the binary search algorithm, which has log N time complexity: O(log(N))
If, for each step of binary search, you had to iterate the array of N elements
The time complexity of that task would be O(N*log(N))
Which is more work than iterating the array once: O(N)
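A sketch of the analogy in code; the scan_each_step flag is my own illustrative addition, turning the O(log N) search into the O(N·log N) task described above:

def binary_search(a: list, target: int, scan_each_step: bool = False) -> int:
    # Plain binary search is O(log N). With scan_each_step=True it also
    # iterates the whole array once per step, giving O(N*log N) total.
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        if scan_each_step:
            _ = sum(a)          # the O(N) pass over all N elements
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3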
In algorithm analysis, it's log base 2 and not base 10. So log(2) is 1, and log(n) for n > 2 is a positive number greater than 1.
Only log(1) gives a value less than 1 (it equals 0); for every n >= 2 it is at least 1.
log_b(n) is greater than 1 whenever n is greater than the base b. But that alone doesn't answer your question of why O(n·log n) is greater than O(n).
In practice the base is a small constant (usually 2). So for larger values of n, n·log(n) becomes greater than n, and that is why O(n log n) > O(n).
This graph may help: n·log(n) rises faster than n, and log(n) is greater than 1 once n exceeds the logarithm's base. https://stackoverflow.com/a/7830804/11617347
No matter how two functions behave for small values of n, they are compared against each other when n is large enough. Formally, there is an N such that for every n > N, n·log n >= n. For example, if you choose N = 10, n·log n is always greater than n from that point on.
The assertion is not always accurate for small inputs: when n is small, constant factors can make an algorithm with worse asymptotic growth run faster in practice. But as n increases, n^2 grows dramatically, whereas log n grows far more slowly than both n and n^2, so for large inputs an O(log n) algorithm is far more efficient than an O(n^2) one. Growth rate, not behaviour at small n, is what Big-O compares.
For larger values of n, log n becomes greater than 1. Since Big-O analysis considers all sufficiently large values of n, we can say that log n is greater than 1 most of the time, hence O(n log n) > O(n) (assuming large values).
Remember "big O" is not about values, it is about the shape of the function I can have an O(n2) function that runs faster, even for values of n over a million than a O(1) function...
So I came upon this question:
We have to sort n numbers between 0 and n^3, the expected time complexity is O(n), and the author solved it this way:
First we convert the base of these numbers to n in O(n); now each number has at most 3 digits (because the values are below n^3).
Then we use radix sort, and therefore the time is O(n).
So I have three questions:
1. Is this correct? And is it the best time possible?
2. How is it possible to convert the base of n numbers in O(n), i.e. O(1) for each number? Some previous topics on this website said it's O(M(n) log(n))?!
3. And if this is true, does it mean we can sort any n numbers from 0 to n^m in O(n)?!
(I searched about converting the base of n numbers; some said it's O(log n) for each number and some said it's O(n) for n numbers, so I got confused about this too.)
1) Yes, it's correct. It is the best complexity possible, because any sort would have to at least look at the numbers and that is O(n).
2) Yes, each number is converted to base n in O(1). Simple ways to do this take O(m^2) in the number of digits m, under the usual assumption that you can do arithmetic operations on numbers up to O(n) in O(1) time; m is constant, so O(m^2) is O(1). But really this step is just to say that the radix you use in the radix sort is O(n). If you implemented this for real, you'd use the smallest power of 2 >= n, so you wouldn't need these conversions.
3) Yes, if m is constant. The simplest way takes m passes in an LSB-first radix sort with a radix of around n. Each pass takes O(n) time, and the algorithm requires O(n) extra memory (measured in words that can hold n).
So the author is correct. In practice, though, this is usually approached from the other direction. If you're going to write a function that sorts machine integers, then at some large input size it's going to be faster if you switch to a radix sort. If W is the maximum integer size, then this tradeoff point will be when n >= 2^(W/m) for some constant m. This says the same thing as your constraint, but makes it clear that we're thinking about large-sized inputs only.
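To make the "smallest power of 2 >= n" remark from (2) concrete, here is a sketch of an LSD radix sort whose radix is that power of two, so digits come out of shifts and masks rather than base conversions (the names and structure are illustrative):

def radix_sort_pow2(nums: list, max_val: int) -> list:
    # LSD radix sort with radix = smallest power of two >= len(nums).
    n = len(nums)
    bits = max(1, (n - 1).bit_length())          # radix = 2**bits >= n
    radix, mask, shift = 1 << bits, (1 << bits) - 1, 0
    while (max_val >> shift) > 0:
        buckets = [[] for _ in range(radix)]
        for x in nums:
            buckets[(x >> shift) & mask].append(x)   # stable per digit
        nums = [x for b in buckets for x in b]
        shift += bits
    return nums

print(radix_sort_pow2([5, 3, 9, 1, 7], max_val=9))   # [1, 3, 5, 7, 9]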
There is a wrong assumption here: radix sort is not O(n) in general.
As described on e.g. the Wikipedia page:
if all n keys are distinct, then w has to be at least log n for a random-access machine to be able to store them in memory, which gives at best a time complexity O(n log n).
So the answer is no: the "author implementation" is (at best) n log n. Also, converting these numbers can probably take more than O(n).
is this correct?
Yes it's correct. If n is used as the base, then it will take 3 radix sort passes, where 3 is a constant, and since time complexity ignores constant factors, it's O(n).
and the best time possible?
Not always. Depending on the maximum value of n, a larger base could be used so that the sort is done in 2 radix sort passes or 1 counting sort pass.
how is it possible to convert the base of n numbers in O(n)? like O(1) for each number?
O(1) just means constant time complexity, i.e. a fixed number of operations per number. It doesn't matter that the method chosen is not the fastest if only time complexity is being considered. For example, using a, b, c to represent the most to least significant digits and x as the number, then using integer math: a = x/(n^2), b = (x - a*n^2)/n, c = x%n (this assumes x >= 0). (Side note: if n is a constant, an optimizing compiler may convert the divisions into a multiply-and-shift sequence.)
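Those three digit formulas translate directly into code; a small sketch (the function name is mine):

def base_n_digits(x: int, n: int) -> tuple:
    # Extract the 3 base-n digits of x in [0, n^3), most significant first.
    a = x // (n * n)            # a = x / n^2
    b = (x - a * n * n) // n    # b = (x - a*n^2) / n
    c = x % n                   # c = x mod n
    return a, b, c

print(base_n_digits(123, 5))    # 123 = 4*25 + 4*5 + 3 -> (4, 4, 3)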
and if this is true, then it means we can sort any n numbers from 0 to n^m in O(n) ?!
Only if m is considered a constant. Otherwise it's O(m n).
I have been learning about Radix sort recently and one of the sources I have used is the Wikipedia page. At the moment there is the following paragraph there regarding the efficiency of the algorithm:
The topic of the efficiency of radix sort compared to other sorting algorithms is somewhat tricky and subject to quite a lot of misunderstandings. Whether radix sort is equally efficient, less efficient or more efficient than the best comparison-based algorithms depends on the details of the assumptions made. Radix sort complexity is O(wn) for n keys which are integers of word size w. Sometimes w is presented as a constant, which would make radix sort better (for sufficiently large n) than the best comparison-based sorting algorithms, which all perform O(n log n) comparisons to sort n keys. However, in general w cannot be considered a constant: if all n keys are distinct, then w has to be at least log n for a random-access machine to be able to store them in memory, which gives at best a time complexity O(n log n). That would seem to make radix sort at most equally efficient as the best comparison-based sorts (and worse if keys are much longer than log n).
The part in bold has regrettably become a bit of a block that I am unable to get past. I understand that in general Radix sort is O(wn), and through other sources have seen how O(n) can be achieved, but cannot quite understand why n distinct keys requires O(n log n) time for storage in a random-access machine. I'm fairly certain it comes down to some simple mathematics, but unfortunately a solid understanding remains just beyond my grasp.
My closest attempt is as follows:
Given a base B and a number N in that base, the maximum number of digits N can have is:
log_B(N) + 1.
If each number in a given list L is unique, then we have up to:
|L| * (log_B(N) + 1) possibilities
At which point I'm unsure how to progress.
Is anyone able to please expand on the section in bold above and break down why storing n distinct keys on a random-access machine requires a word size of at least log n?
Assuming MSB radix sort with constant m bins:
For an arbitrarily large data type which must accommodate at least n distinct values, the number of bits required is N = ceil(log_2(n))
Thus the amount of memory required to store each value is also O(log n); assuming sequential memory access, the time complexity of reading / writing a value is O(N) = O(log n), although can use pointers instead
The number of digits is O(N / log m) = O(log n)
Importantly, each consecutive digit must differ by a power-of-2, i.e. m must also be a power-of-2; assume this to be small enough for the HW platform, e.g. 4-bit digits = 16 bins
During sorting:
For each radix pass, of which there are O(log n):
Count each bucket: get the value of the current digit using bit operations - O(1) for all n values. Note that each counter must also be N bits, although increments by 1 are (amortized) O(1). If we had used non-power-of-2 digits, this would in general be O(log n log log n)
Make the bucket count array cumulative: must perform m - 1 additions, each of which is O(N) = O(log n) (unlike the increment special case)
Write the output array: loop through n values, determine the bin again, and write the pointer with the correct offset
Thus the total complexity is O(log n) * [ n * O(1) + m * O(log n) + n * O(1) ] = O(n log n).
I have read quite a bit on big-O notation and I have a basic understanding. This is a specific question that I hope will help me understand it better.
If I have an array of 100 integers (no duplicates, and randomly generated) and I use heapsort to sort it, I know that the big-O notation for heapsort is n lg n. For n = 100, this works out to 100 × 6.64, which is roughly 664.
While I know this is the upper bound on the number of comparisons and my count can be less than 664, if I am trying to figure out the number of comparisons for a heap sorted array of 100 random numbers, it should always be less than or equal to 664?
I am trying to add counters to my heapsort to get the big-O comparison count and am coming up with crazy numbers. I will continue to work it out, but wanted to just verify that I was thinking of the upper bound properly.
Thanks!
Big-O notation does not give you an exact upper bound on a function's runtime - instead, it tells you asymptotically how the function's runtime grows. If a function has runtime O(n log n), it means that the function grows at roughly the same rate as the function f(n) = n log n. That means, for example, that the actual runtime could be 23 n log n + 17 n, or it could be 0.05 n log n. Consequently, you can't use the fact that heapsort is O(n log n) to count the number of comparisons made. You'd need a more precise analysis.
It just so happens that you can get a very precise analysis of heapsort, but it requires you to do a more meticulous analysis of the algorithm. You can show, for example, that the number of comparisons required to call make-heap is at most 3n, and that the number of comparisons made during the repeated calls to extract-max is at most 2n log (n + 1) (the binary heap has log (n + 1) layers, and during each of the n extract-max's, at most two comparisons are made at each layer). This gives an overall number of comparisons upper-bounded by 2n log (n + 1) + 3n.
The famous Ω(n log n) sorting barrier can be used to get a matching lower bound. Any comparison-based sorting algorithm, of which heapsort is one, must make at least log n! = n log n - n + O(log n) (this is Stirling's approximation) comparisons on average, and so heapsort is required to make at least n log n - n comparisons in the worst-case. (Note that this is actually n log n, not some constant multiple of n log n. You can read over the proof of the Ω(n log n) barrier for why this is.)
Hope this helps!
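If you want to see real numbers against that bound, one way is to count comparisons directly. A sketch using Python's heapq as the heapsort (heapify, then n pops); the wrapper class is my own device, and I'm reading the logs above as base 2:

import heapq
import math
import random

comparisons = 0

class Counted:
    # Wrap a value so every '<' (the only comparison heapq uses) is counted.
    def __init__(self, v):
        self.v = v
    def __lt__(self, other):
        global comparisons
        comparisons += 1
        return self.v < other.v

n = 100
data = [Counted(random.random()) for _ in range(n)]
heapq.heapify(data)                              # make-heap: at most ~3n
out = [heapq.heappop(data) for _ in range(n)]    # n extractions

bound = 2 * n * math.log2(n + 1) + 3 * n         # 2n log(n+1) + 3n from above
print(comparisons, "<=", round(bound))           # count stays under the bound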
Let's say that you know that your algorithm requires O( n log_2 n ) comparisons when sorting n elements.
This tells you the following, and only the following: there exists a constant number C such that, as n approaches infinity, the algorithm never requires more than C * n * log_2 n comparisons.
It does not tell you anything about the specific number of comparisons that might be required for any value of n -- it tells you about how the number of comparisons required grows in the limit as the number of elements grows.
You can not use the Big-O complexity of your sorting algorithm to prove anything about the behaviour of a particular finite n, such as 100 elements. Sorting 100 elements might require 64 comparisons, or 664, or 664 million. The latter is clearly not reasonable, but Big-O simply provides no information here.