Are "stable" and "in-place" the same? - algorithm

When talking about algorithms, I see descriptions of both in-place and stable sorting algorithms. Is saying an algorithm is stable the same as saying it is in-place? If not, what is the difference?

No.
A stable algorithm means that the relative ordering of 'equal' elements remains the same after the algorithm is executed.
For instance, if you have an array
{-2, 4, 5, -11, 9, -10}
and you want to sort it such that all negative elements come before the positive elements, while the relative ordering among the negative elements and among the positive elements remains the same:
{-2, -11, -10, 4, 5, 9}
This is the output of a stable algorithm
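As a small illustration (Python, whose built-in sorted() is guaranteed stable; the variable names are just for the example), sorting only by the sign of each element reproduces exactly this output:
    data = [-2, 4, 5, -11, 9, -10]
    # Sort only by sign: negatives (key False) before positives (key True).
    # Because sorted() is stable, elements with the same sign keep their
    # original relative order.
    result = sorted(data, key=lambda x: x >= 0)
    print(result)  # [-2, -11, -10, 4, 5, 9]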
As noted in the comments, an in-place algorithm is one that does not require additional space beyond the input data. The output occupies the same place in memory that was occupied by the input data, and the input data is overwritten (destroyed) in the process.

Stable means the order of input elements is unchanged except where change is required to satisfy the requirements. A stable sort applied to a sequence of equal elements will not change their order.
In-place means that the input and output occupy the same memory storage space. There is no copying of input to output, and the input ceases to exist unless you have made a backup copy. This is a property that often requires an imperative language to express, because pure functional languages do not have a notion of storage space or overwriting data.

No, it's not the same.
A stable sort is one that, for elements that compare equal, guarantees their relative position in the sorted output is the same as in the source. Contrast this with an unstable sort, in which items that compare equal may appear in the sorted result in an unpredictable order. This distinction is not important in simple cases (e.g. sorting integers), but it becomes important when the sort criterion is only part of the data that each item contains (e.g. sorting colored socks by size only).
An in-place sort is one that sorts the input without requiring additional space; it is also called a "destructive" sort in that after sorting you have lost the unsorted form of the input data (it has been replaced by sorted data).
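To make the in-place/destructive distinction concrete, here is a minimal Python sketch (illustrative only): list.sort() rearranges the list itself, while sorted() builds a new list and leaves the input untouched.
    data = [3, 1, 2]
    # Not in-place: a new sorted list is created; the original input survives.
    copy = sorted(data)
    print(copy, data)   # [1, 2, 3] [3, 1, 2]
    # In-place (destructive): the input itself is rearranged.
    data.sort()
    print(data)         # [1, 2, 3] -- the unsorted form is gone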

Related

Why make selection sort stable? [duplicate]


Are there algorithms to (approximately?) sort data that can change?

All sorting algorithms I know require exclusive access to the data structure they work on. Are there any that can handle data that can change at any time?
To make this possible at all, we can certainly assume:
The rate of change is low; that is, we will repeatedly have enough time to walk through the whole structure and verify that it is currently sorted.
All changes are atomic and do not violate integrity, i.e. we won't deal with accidentally lost pointers etc., and all changes can be assumed to perform additional actions to ensure the structure is still connected (let's say in at most O(log n) time).
I'm interested in any information, papers or implementations, also if they have more or less strict assumptions than those above.
Many, many data structures maintain data in sorted order. For example, any tree, skiplist, heap, etc., allows ordered access. In general, inserting, removing or updating a data item is O(log N) or better (N = number of items in the dataset). Therefore you can expect the cost of maintaining the sorted invariant for the dataset over a time interval to be O(M*log(N)), where M is the number of items you insert/delete/update in that time interval.
Some sorting algorithms (insertion sort for example) perform better when data is partially sorted. At best, the cost of running such an algorithm is O(N), but this happens only in very limited circumstances. On average you can expect it to be closer to O(N*log(N)).
Therefore, if the sorting invariant of the dataset needs to be maintained at all times you should use a data structure like an index or heap. However, if you only need to have the data sometimes, it might be more efficient to just buffer the updates in an array and re-sort the whole dataset whenever needed.
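As a minimal sketch of keeping data ordered as it arrives (Python; bisect.insort finds the insertion point in O(log N), though inserting into a Python list still costs O(N) shifting, whereas a tree or skiplist would give the O(log N) updates described above):
    import bisect
    sorted_data = []
    for value in [42, 7, 19, 3, 25]:
        # Insert each new value at the position that keeps the list sorted.
        bisect.insort(sorted_data, value)
    print(sorted_data)  # [3, 7, 19, 25, 42]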
Most comparison/exchange sorts should be able to mostly sort an array that is being modified. Insertion sort and Shell sort certainly can, as can Bubble sort and even selection sort. I'm not entirely sure about Quicksort. Seems like some implementations could go into an infinite loop if a data value got changed in the middle of the sort.
Consider the simple case of insertion sort. Start with the array [4, 7, 5, 3, 2].
After a few iterations you have: [3, 4, 5, 7, 2]. At this point somebody reaches in and changes that 4 to a 1, giving you [3, 1, 5, 7, 2]. Your sort is trying to place the last item, 2. It'll end up giving you [3, 1, 2, 5, 7], and since it's placed the last element, that's what your final array will look like.
In an array that changes infrequently, you'll likely have just a few items out of place and Insertion sort could quickly put things in order.
You have to be careful with your implementation, though. Because other threads could be modifying the array, you can't have a temporary variable holding the contents of an array item. If the items in the array are references that won't change (i.e. only the thing being referred to can change, not the element in the array itself), then holding that reference in a temporary is no problem. But if the array is, say, an array of integers, then all comparisons have to be done against the actual array elements rather than by holding a value in a temporary.
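Here is a hedged sketch of that idea (Python, purely illustrative; the function name is made up, and Python is not what you would use for real concurrent code). The swap-based variant of insertion sort re-reads the live array elements on every comparison instead of caching the value being inserted in a temporary:
    def insertion_sort_no_temp(arr):
        # Swap-based insertion sort: every comparison reads arr[...] directly,
        # so a value changed by another thread is seen on the next comparison.
        for i in range(1, len(arr)):
            j = i
            while j > 0 and arr[j - 1] > arr[j]:
                arr[j - 1], arr[j] = arr[j], arr[j - 1]  # swap neighbours in place
                j -= 1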
That said, such a thing is fairly unusual. Many ordered data structures can be coded to be lock-free such that multiple threads can be reading and/or writing concurrently. That removes the need to "approximately" sort anything, as the data structure maintains order at all times.

Sort in ascending or descending order (chosen arbitrarily; Prefer whichever is cheaper)

I have an array of elements. This array could be:
Randomly shuffled (about 20% of the time)
Nearly sorted* in ascending order (about 40% of the time)
Nearly sorted in descending order (about 40% of the time)
But I do not know (in advance) which of these cases applies. I would prefer to sort the array into the order which it is already close to.
It does not matter whether the output is ascending or descending, but it must be one or the other (so I can perform a binary search on it.)
The sort need not be stable.
Some background info: The process goes roughly like this:
Populate the array
Sort on some attribute A
Do some processing (compute quantiles, and some other minor stuff)
Sort on some other attribute B
Do more processing
Sort on attribute C
Do more processing
A and B are often correlated with each other (but may be positively or negatively.) Same applies to B and C. Occasionally A == C.
* "nearly sorted" here means most elements are close to their final positions. But rarely exactly at their final positions (there is a lot of additive noise, and not many long sorted subsequences.) Still, there are usually a few "outliers" at the start and end of the array which are poor predictors of the order for the next sort. 
Is there an algorithm that can take advantage of the fact that I have no preference for ascending vs. descending, to sort more cheaply (compared to the Timsort I am currently using)?
I'd continue using Timsort (however, a good alternative is Smoothsort*), but first probe the array to decide whether to sort in ascending or descending order. Look at the first and last elements and sort accordingly. If the array is unsorted, the choice is immaterial; if it is (partially) sorted, probing at a wide interval is more likely to correctly detect which way.
*Smoothsort has the same best, average, and worst case time as Timsort, and better space complexity. Like Timsort, it was specifically designed to take advantage of partially sorted data.
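A minimal sketch of that probe (Python; sorted() is a Timsort variant, the reverse flag picks the direction, and the function name is made up):
    def sort_cheapest_direction(arr):
        # Probe widely separated elements to guess the prevailing order,
        # then let Timsort exploit whatever partial order is present.
        descending = len(arr) > 1 and arr[0] > arr[-1]
        return sorted(arr, reverse=descending)
If the array is randomly shuffled the guess is immaterial; the binary search afterwards only needs to know which direction was chosen.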
Another possibility to consider:
Start doing a (hand-rolled) insertion sort
As you go, count the number of inversions you perform
After you have done some small fixed number of insertions, compare the number of inversions that you have counted, to the maximum number of inversions that would have occurred by that point if the data were reverse-sorted to begin with:
If the proportion is close to 0, then (probably) the data is nearly-sorted. Complete the insertion sort, which performs very well on nearly-sorted data. If you don't like the sound of "probably" then continue counting inversions as you go and be ready to fall back to Timsort if it falls under a threshold.
If the proportion is close to 1, then (probably) the data is nearly-reverse-sorted, and you have a small number of sorted elements at the start. Move them to the end, reverse them, and complete an insertion sort with reversed comparator.
Otherwise the data is random, use your favourite sorting algorithm. I'd say Timsort, but since that does well on nearly-sorted data there must be some other algorithm that does at least a tiny bit better than Timsort does on uniformly-shuffled data. Probably plain merge sort without the Tim.
The "small fixed number" can be a number for which insertion sort is fairly fast even in bad cases. I would guess 10-20 or so. It's possible to work out the probability of a false positive in uniformly shuffled data for any given number of insertions and any given threshold of "close to 0/1", but I'm too lazy.
You say the first and last few array elements typically buck the trend, in which case you could exclude them from the initial test insertion sort.
Obviously this approach is somewhat inspired by Timsort. But Timsort is fiendishly optimized for data that contains runs -- I have tried to fiendishly optimize only for data that's close to one big run (in either direction). Another feature of Timsort is that it's well tested, I don't claim to share that.
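A rough sketch of the inversion-counting probe described above (Python; the probe size and thresholds are unverified guesses, as the answer itself admits):
    def choose_direction(arr, probe=20):
        # Insertion-sort a copy of a short prefix, counting the swaps
        # (each swap removes exactly one inversion).
        prefix = list(arr[:probe])
        inversions = 0
        for i in range(1, len(prefix)):
            j = i
            while j > 0 and prefix[j - 1] > prefix[j]:
                prefix[j - 1], prefix[j] = prefix[j], prefix[j - 1]
                inversions += 1
                j -= 1
        # Maximum possible inversions in a reverse-sorted prefix of this size.
        max_inversions = len(prefix) * (len(prefix) - 1) // 2
        ratio = inversions / max_inversions if max_inversions else 0.0
        if ratio < 0.2:
            return "nearly-sorted"          # finish with insertion sort
        if ratio > 0.8:
            return "nearly-reverse-sorted"  # reverse, then insertion sort
        return "random"                     # fall back to Timsort / merge sort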

When is the appropriate time to use Radix Sort?

What are the constraints on your data for you to be able to use Radix sort?
If I'm sorting a large list of integers, would it be appropriate to use Radix sort? Why is Radix sort not used more?
It's great when you have a large set of data with keys that are somehow constrained. For example, when you need to sort an array of 1 million 64-bit numbers, you can sort by the 8 least significant bits, then by the next 8, and so on (applied 8 times). That way the array can be sorted in 8*1M operations, rather than roughly 1M*log(1M).
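A hedged sketch of that pass structure (Python, for non-negative 64-bit integers only; each pass is a stable bucket pass over one 8-bit digit, and the function name is made up):
    def radix_sort_u64(nums):
        # 8 passes over 8-bit digits, least significant digit first.
        for shift in range(0, 64, 8):
            buckets = [[] for _ in range(256)]
            for n in nums:
                buckets[(n >> shift) & 0xFF].append(n)
            # Concatenating the buckets in order makes each pass stable,
            # which is what lets later passes build on earlier ones.
            nums = [n for bucket in buckets for n in bucket]
        return nums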
If you know the range of the integer values, and it's not too large, maybe counting sort would be a better choice in your case.
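For reference, a minimal counting-sort sketch (Python, non-negative integer keys; illustrative only):
    def counting_sort(nums, max_value):
        # One counter per possible key value; only sensible when max_value
        # is small relative to len(nums).
        counts = [0] * (max_value + 1)
        for n in nums:
            counts[n] += 1
        result = []
        for value, count in enumerate(counts):
            result.extend([value] * count)
        return result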
One reason you might not see it as often as you'd think you would is that Radix sort is not as general purpose as comparison based sorts (quicksort/mergesort/heapsort). It requires that you can represent the items to be sorted as an integer, or something like an integer. When using a standard library, it is easy to define a comparison function that compares arbitrary objects. It might be harder to define an encoding that properly maps your arbitrary data type into an integer.
Bucket sorting is useful in situations where the number of discrete key values is small relative to the number of data items, and where the goal is to produce a re-sorted copy of a list without disturbing the original (so needing to maintain both the old and new versions of the list simultaneously is not a burden). If the number of possible keys is too large to handle in a single pass, one can extend bucket sort into radix sort by making multiple passes, but one loses much of the speed advantage that bucket sort could offer for small keys.
In some external-sorting scenarios, especially when the number of different key values is very small (e.g. two), a stable sort is required, and the I/O device can only operate efficiently with one sequential data stream, it may be useful to make K passes through the source data stream, where K is the number of key values. On the first pass, one copies all the items where the key is the minimum legitimate value and skips the rest, then copies all the items where the key is the next higher value, skipping the rest, and so on. This approach will obviously be horribly inefficient if there are very many different key values, but it will be quite good if there are two.
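A toy sketch of that K-pass idea (Python, with a re-readable in-memory list standing in for the sequential I/O stream; all names are made up). Each pass copies only the records with one key value, in their original order, so the overall result is stable:
    def k_pass_sort(read_stream, key_values, key):
        # key_values must be supplied in ascending order;
        # read_stream() must restart the stream from the beginning.
        for k in key_values:
            for record in read_stream():
                if key(record) == k:
                    yield record

    records = [("b", 2), ("a", 1), ("c", 2), ("d", 1)]
    out = list(k_pass_sort(lambda: iter(records), [1, 2], key=lambda r: r[1]))
    print(out)  # [('a', 1), ('d', 1), ('b', 2), ('c', 2)]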

What is stability in sorting algorithms and why is it important?

I'm very curious: why is stability important (or not important) in sorting algorithms?
A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in sorted output as they appear in the input array to be sorted. Some sorting algorithms are stable by nature like Insertion sort, Merge Sort, Bubble Sort, etc. And some sorting algorithms are not, like Heap Sort, Quick Sort, etc.
Background: a "stable" sorting algorithm keeps the items with the same sorting key in order. Suppose we have a list of 5-letter words:
peach
straw
apple
spork
If we sort the list by just the first letter of each word then a stable-sort would produce:
apple
peach
straw
spork
In an unstable sort algorithm, straw or spork may be interchanged, but in a stable one, they stay in the same relative positions (that is, since straw appears before spork in the input, it also appears before spork in the output).
We could sort the list of words using this algorithm: stable sorting by column 5, then 4, then 3, then 2, then 1.
In the end, it will be correctly sorted. Convince yourself of that. (By the way, that algorithm is called radix sort.)
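If you want to check that claim quickly, here is the column-by-column procedure as a short Python sketch (sorted() is stable, so each later pass preserves the ties left by the earlier ones):
    words = ["peach", "straw", "apple", "spork"]
    # Stable-sort by column 5, then 4, ..., then 1 (indices 4 down to 0).
    for col in range(4, -1, -1):
        words = sorted(words, key=lambda w: w[col])
    print(words)  # ['apple', 'peach', 'spork', 'straw']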
Now to answer your question, suppose we have a list of first and last names. We are asked to sort "by last name, then by first". We could first sort (stable or unstable) by the first name, then stable sort by the last name. After these sorts, the list is primarily sorted by the last name. However, where last names are the same, the first names are sorted.
You can't stack unstable sorts in the same fashion.
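For instance, a quick Python sketch of that two-pass idea (sorted() is guaranteed stable; the names are invented for the example):
    people = [("Jane", "Smith"), ("John", "Doe"), ("Adam", "Smith")]
    people = sorted(people, key=lambda p: p[0])  # by first name (stable or not, doesn't matter)
    people = sorted(people, key=lambda p: p[1])  # stable sort by last name
    print(people)  # [('John', 'Doe'), ('Adam', 'Smith'), ('Jane', 'Smith')]
The second pass orders by last name, and because it is stable the first-name order survives wherever last names are equal.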
A stable sorting algorithm is one that sorts identical elements in the same order as they appear in the input, whilst an unstable sort may not. - I thank my algorithms lecturer Didem Gozupek for providing insight into algorithms.
Stable Sorting Algorithms:
Insertion Sort
Merge Sort
Bubble Sort
Tim Sort
Counting Sort
Block Sort
Quadsort
Library Sort
Cocktail shaker Sort
Gnome Sort
Odd–even Sort
Unstable Sorting Algorithms:
Heap sort
Selection sort
Shell sort
Quick sort
Introsort (based on Quicksort)
Tree sort
Cycle sort
Smoothsort
Tournament sort (based on Heapsort)
Sorting stability means that records with the same key retain their relative order before and after the sort.
So stability matters if, and only if, the problem you're solving requires retention of that relative order.
If you don't need stability, you can use a fast, memory-sipping algorithm from a library, like heapsort or quicksort, and forget about it.
If you need stability, it's more complicated. Stable algorithms have higher big-O CPU and/or memory usage than unstable algorithms. So when you have a large data set, you have to pick between beating up the CPU or the memory. If you're constrained on both CPU and memory, you have a problem. A good compromise stable algorithm is a binary tree sort; the Wikipedia article has a pathetically easy C++ implementation based on the STL.
You can make an unstable algorithm into a stable one by adding the original record number as the last-place key for each record.
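A hedged sketch of that decoration trick (Python; the records are made up, and the appended index guarantees no two keys ever compare equal, so even an unstable algorithm cannot reorder 'equal' records):
    records = [("apple", 3), ("pear", 1), ("plum", 3)]
    # Decorate each record with its original position as the last-place key.
    decorated = [(rec[1], i, rec) for i, rec in enumerate(records)]
    decorated.sort()  # any correct sort, stable or not
    stable_result = [rec for _, _, rec in decorated]
    print(stable_result)  # [('pear', 1), ('apple', 3), ('plum', 3)]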
It depends on what you do.
Imagine you've got some people records with a first and a last name field. First you sort the list by first name. If you then sort the list with a stable algorithm by last name, you'll have a list sorted by first name AND last name.
There are a few reasons why stability can be important. One is that, if two records don't need to be swapped, swapping them anyway causes a memory update: a page is marked dirty and needs to be re-written to disk (or another slow medium).
A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in sorted output as they appear in the input unsorted array. Some sorting algorithms are stable by nature like Insertion sort, Merge Sort, Bubble Sort, etc. And some sorting algorithms are not, like Heap Sort, Quick Sort, etc.
However, any given sorting algorithm which is not stable can be modified to be stable. There can be algorithm-specific ways to make it stable, but in general, any comparison-based sorting algorithm which is not stable by nature can be modified to be stable by changing the key comparison operation so that the comparison of two keys considers position as a factor for objects with equal keys.
References:
http://www.math.uic.edu/~leon/cs-mcs401-s08/handouts/stability.pdf
http://en.wikipedia.org/wiki/Sorting_algorithm#Stability
I know there are many answers for this, but to me, this answer, by Robert Harvey, summarized it much more clearly:
A stable sort is one which preserves the original order of the input set, where the [unstable] algorithm does not distinguish between two or more items.
Some more examples of the reasons for wanting stable sorts. Databases are a common example. Take the case of a transaction database that includes last|first name, date|time of purchase, item number, and price. Say the database is normally sorted by date|time. When a query is made to produce a sorted copy of the database by last|first name, a stable sort preserves the original order: even though the comparison only involves last|first name, the transactions for each last|first name remain in date|time order.
A similar example is classic Excel, which limited sorts to 3 columns at a time. To sort 6 columns, a sort is done with the least significant 3 columns, followed by a sort with the most significant 3 columns.
A classic example of a stable radix sort is a card sorter, used to sort by a field of base-10 numeric columns. The cards are sorted from the least significant digit to the most significant digit. On each pass, a deck of cards is read and separated into 10 different bins according to the digit in that column. Then the 10 bins of cards are put back into the input hopper in order ("0" cards first, "9" cards last). Then another pass is done on the next column, until all columns are sorted. Actual card sorters have more than 10 bins, since there are 12 zones on a card, a column can be blank, and there is a mis-read bin. To sort letters, 2 passes per column are needed: a 1st pass for the digit and a 2nd pass for the 12 and 11 zones.
Later (1937) there were card collating (merging) machines that could merge two decks of cards by comparing fields. The input was two already sorted decks of cards, a master deck and an update deck. The collator merged the two decks into a new master bin and an archive bin; the archive bin was optionally used for master duplicates, so that the new master bin would only have update cards in case of duplicates. This was probably the basis for the idea behind the original (bottom-up) merge sort.
If you assume that what you are sorting are just numbers and only their values identify/distinguish them (i.e. elements with the same value are identical), then the stability issue of sorting is meaningless.
However, objects with the same priority in sorting may be distinct, and sometimes their relative order is meaningful information. In this case, an unstable sort generates problems.
For example, you have a list of data which contains the time cost [T] of each player to clear a maze of level [L] in a game.
Suppose we need to rank the players by how fast they clear the maze. However, an additional rule applies: players who clear a higher-level maze always have a higher rank, no matter how long the time cost is.
Of course you might try to map the paired value [T,L] to a real number [R] with some algorithm which follows the rules and then rank all players with [R] value.
However, if stable sorting is feasible, then you may simply sort the entire list by [T] (faster players first) and then by [L]. In this case, the relative order of players (by time cost) will not be changed after you group them by the level of maze they cleared.
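A small Python sketch of that double sort (the player tuples are invented for the example; sorted() is stable, and reverse=True keeps ties in their existing order):
    players = [("ann", 30, 2), ("bob", 25, 1), ("cat", 40, 2), ("dan", 20, 1)]  # (name, T, L)
    players = sorted(players, key=lambda p: p[1])                 # by time T, fastest first
    players = sorted(players, key=lambda p: p[2], reverse=True)   # stable, by level L, highest first
    print(players)
    # [('ann', 30, 2), ('cat', 40, 2), ('dan', 20, 1), ('bob', 25, 1)]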
PS: of course sorting twice is not the best solution to this particular problem, but it should be enough to explain the poster's question.
A stable sort will always return the same solution (permutation) on the same input.
For instance, [2,1,2] will be sorted using a stable sort as the permutation [2,1,3] (first index 2, then index 1, then index 3 in the sorted output). That means the output is always arranged the same way. Another unstable, but still correct, permutation is [2,3,1].
Quicksort is not a stable sort, and the permutation differences among equal elements depend on the algorithm for picking the pivot. Some implementations pick the pivot at random, and that can make quicksort yield different permutations on the same input using the same algorithm.
A stable sort algorithm is necessarily deterministic.

Resources