Why is there no shell sort with binary insertion? - sorting

Shell sort is, in general, an improved insertion sort. Another improvement of insertion sort is binary insertion sort.
Why is there no shell sort with binary insertion?
It can be done, and it should not be very difficult to code.
(I agree that getting the indexes right can take 2-3 days.)

Insertion sort can benefit significantly from using a binary search to find the insertion point because the sorted portion of the array is contiguous in memory.
Note that while binary search allows you to find the insertion point in O(log n) comparisons, it still takes O(n) time to shift the elements along by one space in order to do the insertion. However, if the subarray is contiguous, this shifting can be done very quickly (i.e. with a low constant factor) by moving the whole section of memory at once, e.g. with a single call to memmove. Moving contiguous sections of memory in bulk is fast because it's a common operation which the hardware is optimised for.
In contrast, shell sort's "sorted portions" are not contiguous; they are "every hth element" of the array, so you cannot perform an insertion using a bulk memory move. In this case a binary search to find the insertion point means you can do O(log n) comparisons, but you still need to do O(n) writes, and you don't have the benefit of a low constant factor for that n.
All of that said, this is a theoretical argument; you could try implementing it yourself, to see how much of a difference it makes in practice.
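For concreteness, here is a minimal sketch (in Java, my own illustration rather than code from the answer above) of binary insertion sort on a plain array: the binary search keeps comparisons at O(log i) per element, and System.arraycopy plays the role of memmove for the bulk shift.

```java
import java.util.Arrays;

// A minimal sketch of binary insertion sort on an int array: the binary search
// keeps comparisons at O(log i) per element, and System.arraycopy (the JVM's
// analogue of memmove) shifts the tail of the sorted prefix in one bulk move.
class BinaryInsertionSort {
    static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            // Binary search for the insertion point in the sorted prefix a[0..i).
            int lo = 0, hi = i;
            while (lo < hi) {
                int mid = (lo + hi) >>> 1;
                if (a[mid] <= key) lo = mid + 1;   // <= keeps the sort stable
                else hi = mid;
            }
            // Shift a[lo..i) one slot to the right in a single bulk move.
            System.arraycopy(a, lo, a, lo + 1, i - lo);
            a[lo] = key;
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 5, 6};
        sort(data);
        System.out.println(Arrays.toString(data));   // [1, 2, 5, 5, 6, 9]
    }
}
```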

Related

List with amortized constant or logarithmic insertion: how fast can lookup possibly be?

Everybody knows (or should know) that it is impossible to design a list data structure that supports both O(1) insertion in the middle and O(1) lookup.
For instance, linked lists support O(1) insertion but O(N) lookup, while arrays support O(1) lookup but O(N) insertion (possibly amortized O(1) for insertion at the beginning, the end, or both).
However, suppose you are willing to trade O(1) insertion for:
Amortized O(1) insertion
O(log(N)) insertion
Then what is the theoretical bound for lookup in each of these cases? Do you know existing data structures? What about memory complexity?
Tree-based data structures, like a rope or finger tree, can often provide logarithmic insertion time at arbitrary positions. The tradeoff is in access time, which tends to also be logarithmic except in special cases, like the ends of a finger tree.
Dynamic arrays can provide amortized constant insertion at the ends, but insertion in the middle requires copying part of the array, and is O(N) in time, as you mention.
It's probably possible to implement a data structure which supports amortized constant middle insertion. If adding to either end, treat as a dynamic array. If inserting in the middle, keep the old array and add a new array "above" it which contains the new "middle" of the list, using the old array for data which is left or right of the middle. Access time would be logarithmic after your first middle insertion, and keeping track of what data was in which layer would quickly get complicated.
This might be the 'tiered' dynamic array mentioned in the wikipedia article, I haven't researched it further.
I suspect the reason no one really uses a data structure like that is that inserting in the middle is rarely the operation you most need to optimise for, and logarithmic insertion (using trees) is good enough for most real-world cases.
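As a concrete illustration of the logarithmic-insertion point above, here is a sketch (my own, not from any answer here) of a positional balanced tree: a treap indexed by subtree sizes rather than keys, giving expected O(log n) insertion and lookup at arbitrary positions, at the cost of O(n) extra space for the nodes.

```java
import java.util.Random;

// A sketch (not a production structure) of a positional balanced tree: a treap
// ordered by position via subtree sizes. insert(i, x) and get(i) both take
// expected O(log n) time.
final class PositionalTreap<T> {
    private static final Random RNG = new Random();

    private static final class Node<T> {
        T value;
        int size = 1;
        final int priority = RNG.nextInt();
        Node<T> left, right;
        Node(T value) { this.value = value; }
    }

    private Node<T> root;

    public int size() { return size(root); }

    public void insert(int index, T value) {                 // expected O(log n)
        Node<T>[] parts = split(root, index);
        root = merge(merge(parts[0], new Node<>(value)), parts[1]);
    }

    public T get(int index) {                                // expected O(log n)
        if (index < 0 || index >= size(root)) throw new IndexOutOfBoundsException();
        Node<T> t = root;
        while (true) {
            int leftSize = size(t.left);
            if (index < leftSize) t = t.left;
            else if (index == leftSize) return t.value;
            else { index -= leftSize + 1; t = t.right; }
        }
    }

    private static int size(Node<?> n) { return n == null ? 0 : n.size; }

    private static <T> Node<T> update(Node<T> n) {
        n.size = 1 + size(n.left) + size(n.right);
        return n;
    }

    // Split t into the first i elements and the rest.
    private static <T> Node<T>[] split(Node<T> t, int i) {
        @SuppressWarnings("unchecked")
        Node<T>[] result = (Node<T>[]) new Node[2];
        if (t == null) return result;
        if (size(t.left) >= i) {
            Node<T>[] sub = split(t.left, i);
            t.left = sub[1];
            result[0] = sub[0];
            result[1] = update(t);
        } else {
            Node<T>[] sub = split(t.right, i - size(t.left) - 1);
            t.right = sub[0];
            result[0] = update(t);
            result[1] = sub[1];
        }
        return result;
    }

    private static <T> Node<T> merge(Node<T> a, Node<T> b) {
        if (a == null) return b;
        if (b == null) return a;
        if (a.priority < b.priority) { a.right = merge(a.right, b); return update(a); }
        b.left = merge(a, b.left);
        return update(b);
    }

    public static void main(String[] args) {
        PositionalTreap<String> list = new PositionalTreap<>();
        list.insert(0, "a");
        list.insert(1, "c");
        list.insert(1, "b");                                 // middle insertion
        for (int i = 0; i < list.size(); i++) System.out.print(list.get(i));  // prints "abc"
        System.out.println();
    }
}
```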
These are still open problems, but the best bounds that I am aware of are from Arne Andersson's Sublogarithmic searching without multiplications, which supports insertions, deletions, and lookups in O(sqrt(lg n)) time. However, this comes at a cost of 2^k additional space, where k is the number of bits in the integers being stored in the data structure, hence the reason that we're still using balanced binary trees instead of Andersson's data structure. A variant of the data structure allows O(1) lookups, but then the additional space increases to n*2^k, where n is the number of elements in the data structure. A randomized variant doesn't use any additional space, but then the sqrt(lg n) insertion/deletion/lookup times become average-case times instead of worst-case times.

Complexity of maintaining a sorted list vs inserting all values then sorting

Would the time and space complexity of maintaining a list of numbers in sorted order (i.e. start with the first one and insert it; when the second one comes along, insert it in sorted order; and so on) be the same as inserting them as they appear and then sorting after all insertions have been made?
How do I make this decision? Can you demonstrate it in terms of time and space complexity for n elements?
I was thinking in terms of a phonebook: what is the difference between storing the records in a set and presenting sorted data to the user each time a record is inserted, versus storing the phonebook records in sorted order in a TreeSet? What would it be for n elements?
Every time you insert into a sorted list and maintain its sortedness, it is O(log n) comparisons to find where to place it but O(n) movements to place it. Since we insert n elements, this is O(n^2). But I think that if you use a data structure that is designed for inserting sorted data into (such as a binary tree) and then do a pass at the end to turn it into a list/array, it is only O(n log n). On the other hand, using such a more complex data structure will use about O(n) additional space, whereas all the other approaches can be done in place and use no additional space.
Every time you insert into an unsorted list it is O(1). Sorting it all at the end is O(n log n). This means overall it is O(n log n).
However, if you are not going to make lists of many elements (1000 or fewer), it probably doesn't matter what the big-O is, and you should either focus on what runs faster for small data sets or not worry at all if it is not a performance issue.
It depends on what data structure you are inserting them into. If you are asking about inserting into an array, the answer is no. It takes O(n) space and time to store the n elements and then O(n log n) to sort them, so O(n log n) total, while each insertion into a sorted array may require you to move Ω(n) elements, so maintaining sorted order takes Θ(n^2). The same problem will be true with most "sequential" data structures. Sorry.
On the other hand, some priority queues, such as lazy leftist heaps, Fibonacci heaps, and Brodal queues, have O(1) insert. Meanwhile, a finger tree gives O(log n) insertion at arbitrary positions and amortized O(1) access at the ends (finger trees are as good as a linked list for what a linked list is good for, and as good as balanced binary search trees for what binary search trees are good for -- they are kind of amazing).
There are going to be application-specific trade-offs to algorithm selection. The reasons one might use an insertion sort rather than some kind of offline sorting algorithm are enumerated on the Insertion Sort wikipedia page.
The determining factor here is less likely to be asymptotic complexity and more likely to be what you know about your data (e.g., is it likely to be already sorted?)
I'd go further, but I'm not convinced that this isn't a homework question asked verbatim.
Option 1
Insert at correct position in sorted order.
Time taken to find the position for the (i+1)-th element: O(log i)
Time taken to insert and maintain order for the (i+1)-th element: O(i)
Space complexity: O(N)
Total time: (log 1 + log 2 + ... + log(N-1)) = O(N log N) comparisons, plus (1 + 2 + ... + (N-1)) = O(N^2) element movements in the worst case.
Understand that this is just the asymptotic complexity; the actual running time can be very different.
Option 2:
Insert element: O(1)
Sort elements: O(N log N)
Depending on the sort you employ, the space complexity varies, though you can use something like quicksort, which doesn't need much space anyway.
In conclusion, though both options perform O(N log N) comparisons, the bounds are weak, and mathematically you can come up with better ones. Also note that the worst case may never be encountered in practical situations; you will probably see only average cases all the time. If performance is such a vital issue in your application, you should test both sets of code on random samples. Do tell me which one works faster after your tests; my guess is Option 1.
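If you do want to race them, here is a rough, unscientific sketch (my own, in Java; class and method names are made up for illustration) of the two options side by side:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// A crude comparison of Option 1 (keep the list sorted on every insert)
// with Option 2 (append everything, sort once at the end).
public class SortedInsertVsSortAtEnd {
    // Option 1: binary search for the position, then an O(n) shifting insert.
    static List<Integer> keepSorted(int[] values) {
        List<Integer> list = new ArrayList<>();
        for (int v : values) {
            int pos = Collections.binarySearch(list, v);
            if (pos < 0) pos = -pos - 1;          // binarySearch returns -(insertion point) - 1
            list.add(pos, v);                     // shifts the tail: O(n) worst case
        }
        return list;
    }

    // Option 2: append in amortized O(1), then one O(n log n) sort.
    static List<Integer> sortAtEnd(int[] values) {
        List<Integer> list = new ArrayList<>();
        for (int v : values) list.add(v);
        Collections.sort(list);
        return list;
    }

    public static void main(String[] args) {
        int[] data = new Random(42).ints(50_000).toArray();

        long t0 = System.nanoTime();
        keepSorted(data);
        long t1 = System.nanoTime();
        sortAtEnd(data);
        long t2 = System.nanoTime();

        System.out.printf("keep sorted: %d ms, sort at end: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```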

Sorting in place

What is meant by to "sort in place"?
The idea of an in-place algorithm isn't unique to sorting, but sorting is probably the most important case, or at least the most well-known. The idea is about space efficiency - using the minimum amount of RAM, hard disk or other storage that you can get away with. This was especially relevant going back a few decades, when hardware was much more limited.
The idea is to produce an output in the same memory space that contains the input by successively transforming that data until the output is produced. This avoids the need to use twice the storage - one area for the input and an equal-sized area for the output.
Sorting is a fairly obvious case for this because sorting can be done by repeatedly exchanging items - sorting only re-arranges items. Exchanges aren't the only approach - the Insertion Sort, for example, uses a slightly different approach which is equivalent to doing a run of exchanges but faster.
Another example is matrix transposition - again, this can be implemented by exchanging items. Adding two very large numbers can also be done in-place (the result replacing one of the inputs) by starting at the least significant digit and propagating carries upwards.
Getting back to sorting, the advantages to re-arranging "in place" get even more obvious when you think of stacks of punched cards - it's preferable to avoid copying punched cards just to sort them.
Some algorithms for sorting allow this style of in-place operation whereas others don't.
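As a small sketch (my own) of what "in place" looks like in code, here is an insertion sort that rearranges the array where it sits, using only a couple of scalar working variables and never a second array:

```java
import java.util.Arrays;

// A minimal sketch of "in place": the array is rearranged where it sits, using
// only a couple of scalar working variables (key, j) and never a second array.
class InPlaceInsertionSort {
    static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];                    // the element being inserted
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];               // shift larger elements one slot right
                j--;
            }
            a[j + 1] = key;                    // a run of shifts instead of repeated swaps
        }
    }

    public static void main(String[] args) {
        int[] data = {3, 1, 4, 1, 5, 9, 2, 6};
        sort(data);
        System.out.println(Arrays.toString(data));   // [1, 1, 2, 3, 4, 5, 6, 9]
    }
}
```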
However, all algorithms require some additional storage for working variables. If the goal is simply to produce the output by successively modifying the input, it's fairly easy to define algorithms that do that by reserving a huge chunk of memory, using that to produce some auxiliary data structure, then using that to guide those modifications. You're still producing the output by transforming the input "in place", but you're defeating the whole point of the exercise - you're not being space-efficient.
For that reason, the normal definition of an in-place algorithm requires that you achieve some standard of space efficiency. It's absolutely not acceptable to use extra space proportional to the input (that is, O(n) extra space) and still call your algorithm "in-place".
The Wikipedia page on in-place algorithms currently claims that an in-place algorithm can only use a constant amount - O(1) - of extra space.
In computer science, an in-place algorithm (or in Latin in situ) is an algorithm which transforms input using a data structure with a small, constant amount of extra storage space.
There are some technicalities specified in the In Computational Complexity section, but the conclusion is still that e.g. Quicksort requires O(log n) space (true) and therefore is not in-place (which I believe is false).
O(log n) is much smaller than O(n) - for example the base 2 log of 16,777,216 is 24.
Quicksort and heapsort are both normally considered in-place, and heapsort can be implemented with O(1) extra space (I was mistaken about this earlier). Mergesort is more difficult to implement in-place, but the out-of-place version is very cache-friendly - I suspect real-world implementations accept the O(n) space overhead - RAM is cheap but memory bandwidth is a major bottleneck, so trading memory for cache-efficiency and speed is often a good deal.
[EDIT When I wrote the above, I assume I was thinking of in-place merge-sorting of an array. In-place merge-sorting of a linked list is very simple. The key difference is in the merge algorithm - doing a merge of two linked lists with no copying or reallocation is easy, doing the same with two sub-arrays of a larger array (and without O(n) auxiliary storage) AFAIK isn't.]
Quicksort is also cache-efficient, even in-place, but can be disqualified as an in-place algorithm by appealing to its worst-case behaviour. There is a degenerate case (in a non-randomized version, typically when the input is already sorted) where the run-time is O(n^2) rather than the expected O(n log n). In this case the extra space requirement is also increased to O(n). However, for large datasets and with some basic precautions (mainly randomized pivot selection) this worst-case behaviour becomes absurdly unlikely.
My personal view is that O(log n) extra space is acceptable for in-place algorithms - it's not cheating as it doesn't defeat the original point of working in-place.
However, my opinion is of course just my opinion.
One extra note - sometimes, people will call a function in-place simply because it has a single parameter for both the input and the output. It doesn't necessarily follow that the function was space efficient, that the result was produced by transforming the input, or even that the parameter still references the same area of memory. This usage isn't correct (or so the prescriptivists will claim), though it's common enough that it's best to be aware but not get stressed about it.
In-place sorting means sorting without any extra space requirement. According to Wikipedia:
an in-place algorithm is an algorithm which transforms input using a data structure with a small, constant amount of extra storage space.
Quicksort is one example of in-place sorting.
I don't think these terms are closely related:
Sort in place means to sort an existing list by modifying the element order directly within the list. The opposite is leaving the original list as is and create a new list with the elements in order.
Natural ordering is a term that describes how complete objects can somehow be ordered. You can, for instance, say that 0 is lower than 1 (natural ordering for integers) or that A comes before B in alphabetical order (natural ordering for strings). You can hardly say, though, that Bob is greater or less than Alice in general, as it heavily depends on specific attributes (alphabetically by name, by age, by income, ...). Therefore there is no natural ordering for people.
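A small sketch of that distinction in Java (the Person class here is hypothetical, purely for illustration): String has a natural ordering because it implements Comparable, whereas Person does not, so sorting people needs an explicit Comparator that picks one attribute.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Strings have a natural ordering; the hypothetical Person class below does not,
// so an explicit Comparator must choose one attribute to order by.
class NaturalOrderingDemo {
    static class Person {
        final String name;
        final int age;
        Person(String name, int age) { this.name = name; this.age = age; }
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("Bob", "Alice"));
        names.sort(null);                                          // null => natural (alphabetical) ordering, in place
        System.out.println(names);                                 // [Alice, Bob]

        List<Person> people = new ArrayList<>(Arrays.asList(new Person("Bob", 30), new Person("Alice", 40)));
        people.sort(Comparator.comparing((Person p) -> p.name));   // one possible ordering: by name
        people.forEach(p -> System.out.println(p.name + " " + p.age));
    }
}
```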
I'm not sure these concepts are similar enough to compare as suggested. Yes, they both involve sorting, but one is about a sort ordering that is human-understandable (natural), and the other describes sorting that is memory-efficient because it overwrites the existing structure instead of using an additional data structure (a bubble sort, for example, works this way).
It can be done by using a swap function: instead of making a whole new structure, we often implement that kind of algorithm without even knowing its name. :D

Fastest data structure for inserting/sorting

I need a data structure that can insert elements and sort itself as quickly as possible. I will be inserting a lot more than sorting. Deleting is not much of a concern, and neither is space. My specific implementation will additionally store nodes in an array, so lookup will be O(1), i.e. you don't have to worry about it.
If you're inserting a lot more than sorting, then it may be best to use an unsorted list/vector, and quicksort it when you need it sorted. This keeps inserts very fast. The one[1] drawback is that sorting is a comparatively lengthy operation, since it's not amortized over the many inserts. If you depend on relatively constant time, this can be bad.
[1] Come to think of it, there's a second drawback. If you underestimate your sort frequency, this could quickly end up being overall slower than a tree or a sorted list. If you sort after every insert, for instance, then the insert+quicksort cycle would be a bad idea.
Just use one of the self-balancing binary search trees, such as a red-black tree.
Use any of the Balanced binary trees like AVL trees. It should give O(lg N) time complexity for both of the operations you are looking for.
If you don't need random access into the array, you could use a Heap.
Worst and average time complexity:
O(log N) insertion
O(1) read largest value
O(log N) to remove the largest value
Can be reconfigured to give smallest value instead of largest. By repeatedly removing the largest/smallest value you get a sorted list in O(N log N).
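For instance, a minimal sketch in Java using the standard java.util.PriorityQueue (a binary min-heap): each insert is O(log N), and draining the heap yields the values in sorted order, O(N log N) overall.

```java
import java.util.PriorityQueue;

// A minimal sketch using java.util.PriorityQueue (a binary min-heap):
// O(log N) per insert, and repeatedly polling yields ascending order.
class HeapDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> heap = new PriorityQueue<>();   // min-heap by default
        for (int v : new int[] {5, 1, 4, 2, 3}) {
            heap.offer(v);                                     // O(log N) per insert
        }
        // Use new PriorityQueue<>(java.util.Collections.reverseOrder()) for a max-heap.
        while (!heap.isEmpty()) {
            System.out.print(heap.poll() + " ");               // prints: 1 2 3 4 5
        }
        System.out.println();
    }
}
```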
If you can do a lot of inserts before each sort, then obviously you should just append the items and sort no sooner than you need to. My favorite is merge sort: it is O(N log N), well behaved, and has a minimum of storage manipulation (new, malloc, tree balancing, etc.).
HOWEVER, if the values in the collection are integers and reasonably dense, you can use an O(N) sort, where you just use each value as an index into a big-enough array, and set a boolean TRUE at that index. Then you just scan the whole array and collect the indices that are TRUE.
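A sketch of that "big-enough array" idea (my own, assuming non-negative, reasonably dense integer values with no duplicates):

```java
import java.util.Arrays;

// The "big-enough array" idea: mark each value's index as seen, then one
// linear scan of the flag array yields the values in sorted order.
class FlagSort {
    static int[] sort(int[] values, int maxValue) {
        boolean[] seen = new boolean[maxValue + 1];
        for (int v : values) {
            seen[v] = true;                        // O(1) per "insert"
        }
        int[] sorted = new int[values.length];
        int out = 0;
        for (int i = 0; i <= maxValue; i++) {      // one O(maxValue) scan
            if (seen[i]) sorted[out++] = i;
        }
        return sorted;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[] {7, 2, 9, 4}, 10)));  // [2, 4, 7, 9]
    }
}
```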
You say you're storing items in an array where lookup is O(1). Unless you're using a hash table, that suggests your items may be dense integers, so I'm not sure if you even have a problem.
Regardless, memory allocating/deleting is expensive, and you should avoid it by pre-allocating or pooling if you can.
I have had good experience with a skip list for that kind of task.
At least in my case it was about 5 times faster compared to adding everything to a list first and then running a sort over it at the end.
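If you are on the JVM, you don't even have to write one: java.util.concurrent.ConcurrentSkipListSet is a ready-made skip list (note that, being a set, it silently drops duplicate values). A minimal sketch:

```java
import java.util.concurrent.ConcurrentSkipListSet;

// Inserts are expected O(log n), and iteration visits elements in sorted order.
class SkipListDemo {
    public static void main(String[] args) {
        ConcurrentSkipListSet<Integer> set = new ConcurrentSkipListSet<>();
        for (int v : new int[] {42, 7, 19, 3}) {
            set.add(v);                    // expected O(log n) per insert
        }
        System.out.println(set);           // [3, 7, 19, 42]
    }
}
```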

Is there ever a good reason to use Insertion Sort?

For general-purpose sorting, the answer appears to be no, as quick sort, merge sort and heap sort tend to perform better in the average- and worst-case scenarios. However, insertion sort appears to excel at incremental sorting, that is, adding elements to a list one at a time over an extended period of time while keeping the list sorted, especially if the insertion sort is implemented as a linked list (O(log n) average case vs. O(n)). However, a heap seems to be able to perform just (or nearly) as well for incremental sorting (adding or removing a single element from a heap has a worst-case scenario of O(log n)). So what exactly does insertion sort have to offer over other comparison-based sorting algorithms or heaps?
From http://www.sorting-algorithms.com/insertion-sort:
Although it is one of the elementary sorting algorithms with O(n^2) worst-case time, insertion sort is the algorithm of choice either when the data is nearly sorted (because it is adaptive) or when the problem size is small (because it has low overhead).
For these reasons, and because it is also stable, insertion sort is often used as the recursive base case (when the problem size is small) for higher-overhead divide-and-conquer sorting algorithms, such as merge sort or quick sort.
An important concept in the analysis of algorithms is asymptotic analysis. In the case of two algorithms with different asymptotic running times, such as one O(n^2) and one O(n log n), as is the case with insertion sort and quicksort respectively, it is not definite that one is faster than the other.
The important distinction with this sort of analysis is that for sufficiently large N, one algorithm will be faster than the other. When analyzing an algorithm down to a term like O(n log n), you drop constants. When realistically analyzing the running time of an algorithm, those constants will be important only for situations of small n.
So what does this mean? It means that for certain small n, some algorithms are faster. This article from EmbeddedGurus.net includes an interesting perspective on choosing different sorting algorithms for a system with limited space (16k) and limited memory. Of course, the article references only sorting a list of 20 integers, so larger orders of n are irrelevant. Shorter code and less memory consumption (as well as avoiding recursion) were ultimately the more important considerations.
Insertion sort has low overhead, it can be written fairly succinctly, and it has two key benefits: it is stable, and it has a fairly fast running case when the input is nearly sorted.
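Here is a sketch (my own, not taken from any particular library) of that base-case pattern: a merge sort that hands small subranges to insertion sort; the cutoff of 16 is an arbitrary illustrative choice.

```java
import java.util.Arrays;

// Merge sort that switches to insertion sort below a small cutoff.
class HybridSort {
    private static final int CUTOFF = 16;              // arbitrary illustrative threshold

    static void sort(int[] a) { sort(a, 0, a.length, new int[a.length]); }

    private static void sort(int[] a, int lo, int hi, int[] buf) {
        if (hi - lo <= CUTOFF) { insertionSort(a, lo, hi); return; }
        int mid = (lo + hi) >>> 1;
        sort(a, lo, mid, buf);
        sort(a, mid, hi, buf);
        merge(a, lo, mid, hi, buf);
    }

    private static void insertionSort(int[] a, int lo, int hi) {
        for (int i = lo + 1; i < hi; i++) {
            int key = a[i], j = i - 1;
            while (j >= lo && a[j] > key) { a[j + 1] = a[j]; j--; }
            a[j + 1] = key;
        }
    }

    private static void merge(int[] a, int lo, int mid, int hi, int[] buf) {
        System.arraycopy(a, lo, buf, lo, hi - lo);
        int i = lo, j = mid, k = lo;
        while (i < mid && j < hi) a[k++] = (buf[i] <= buf[j]) ? buf[i++] : buf[j++];  // <= keeps it stable
        while (i < mid) a[k++] = buf[i++];
        while (j < hi) a[k++] = buf[j++];
    }

    public static void main(String[] args) {
        int[] data = {9, 3, 7, 1, 8, 2, 6, 5, 4, 0};
        sort(data);
        System.out.println(Arrays.toString(data));
    }
}
```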
Yes, there is a reason to use either an insertion sort or one of its variants.
The sorting alternatives (quick sort, etc.) of the other answers here make the assumption that the data is already in memory and ready to go.
But if you are attempting to read in a large amount of data from a slower external source (say a hard drive), there is a large amount of time wasted as the bottleneck is clearly the data channel or the drive itself. It just cannot keep up with the CPU. A natural series of waits occur during any read. These waits are wasted CPU cycles unless you use them to sort as you go.
For instance, if you were to make your solution to this be the following:
Read a ton of data in a dedicated loop into memory
Sort that data
You would very likely take longer than if you did the following in two threads.
Thread A:
Read a datum
Place datum into FIFO queue
(Repeat until data exhausted from drive)
Thread B:
Get a datum from the FIFO queue
Insert it into the proper place in your sorted list
(repeat until queue empty AND Thread A says "done").
...the above will allow you to use the otherwise wasted time. Note: Thread B does not impede Thread A's progress.
By the time the data is fully read, it will have been sorted and ready for use.
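A rough sketch of that two-thread scheme in Java (all names are made up, and random numbers stand in for the slow external read): Thread A produces into a FIFO BlockingQueue, and Thread B drains it and inserts each datum into its sorted position as it arrives.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Thread A reads (here: generates) data into a FIFO queue; Thread B inserts
// each datum into a sorted list while Thread A is still reading.
class ReadAndSortConcurrently {
    private static final Integer POISON_PILL = Integer.MIN_VALUE;   // "Thread A says done"

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(1024);
        List<Integer> sorted = new ArrayList<>();

        Thread reader = new Thread(() -> {                 // Thread A: simulate a slow read
            try {
                for (int i = 0; i < 10_000; i++) {
                    queue.put((int) (Math.random() * 1_000_000));
                }
                queue.put(POISON_PILL);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread sorter = new Thread(() -> {                 // Thread B: insert in sorted position
            try {
                while (true) {
                    Integer datum = queue.take();
                    if (datum.equals(POISON_PILL)) break;
                    int pos = Collections.binarySearch(sorted, datum);
                    if (pos < 0) pos = -pos - 1;
                    sorted.add(pos, datum);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        reader.start();
        sorter.start();
        reader.join();
        sorter.join();
        System.out.println("sorted " + sorted.size() + " values");
    }
}
```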
Most sorting procedures will use quicksort and then insertion sort for very small data sets.
If you're talking about maintaining a sorted list, there is no advantage over some kind of tree, it's just slower.
Well, maybe it consumes less memory or is a simpler implementation.
Inserting into a sorted list will involve a scan, which means that each insert is O(n), therefore sorting n items becomes O(n^2)
Inserting into a container such as a balanced tree is typically O(log n); therefore the sort is O(n log n), which is of course better.
But for small lists it hardly makes any difference. You might use an insert sort if you have to write it yourself without any libraries, the lists are small and/or you don't care about performance.
YES,
Insertion sort is better than Quick Sort on short lists.
In fact, an optimized quicksort stops recursing once partitions fall below a size threshold, and the nearly sorted array is then finished off with insertion sort.
Also...
For maintaining a scoreboard, Binary Insertion Sort may be as good as it gets.
See this page.
For small array sizes, insertion sort outperforms quicksort.
Java 7 and Java 8 use dual-pivot quicksort to sort primitive data types.
Dual-pivot quicksort outperforms the typical single-pivot quicksort. According to the dual-pivot quicksort algorithm:
For small arrays (length < 27), use the insertion sort algorithm.
Choose two pivot...........
Definitely, insertion sort outperforms quicksort for smaller array sizes, and that is why the implementation switches to insertion sort for arrays of length less than 27. One reason could be that there is no recursion in insertion sort.
Source: http://codeblab.com/wp-content/uploads/2009/09/DualPivotQuicksort.pdf

Resources