Sorting algorithm with no repeat comparisons? - sorting

I have a list of items to sort manually. My method is to show the user a dialog box that asks "Which item is better?"; they click on the better one, and that click equates to an a > b comparison in code.
Using .NET's ArrayList, which implements sorting with QuickSort, I'm finding that the same two items are often compared to one another.
Since each comparison is extremely slow (it's manual), is there an algorithm like quicksort that remembers the result of comparing two items?
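One way to get this behaviour with any off-the-shelf sort is to wrap the comparator in a cache, so no pair is ever asked about twice. Below is a minimal Java sketch of the idea (MemoizingComparator is an invented name, and the same wrapper ports directly to a .NET IComparer); it assumes the items have value-based equals/hashCode so they can serve as cache keys:

import java.util.AbstractMap.SimpleEntry;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;

// Wraps a slow comparator (here: one that asks the user) and caches every
// answer, so the same pair is never presented twice.
class MemoizingComparator<T> implements Comparator<T> {
    private final Comparator<T> slow;
    private final Map<Map.Entry<T, T>, Integer> cache = new HashMap<>();

    MemoizingComparator(Comparator<T> slow) { this.slow = slow; }

    @Override public int compare(T a, T b) {
        Integer hit = cache.get(new SimpleEntry<>(a, b));
        if (hit != null) return hit;
        Integer mirror = cache.get(new SimpleEntry<>(b, a));   // reuse the reversed question
        if (mirror != null) return -mirror;
        int result = Integer.signum(slow.compare(a, b));       // the one expensive, manual call
        cache.put(new SimpleEntry<>(a, b), result);
        return result;
    }
}

Sorting with list.sort(new MemoizingComparator<>(askUser)) then never repeats a question, even across repeated sorts of the same items.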

Related

Sorting based on fuzzy criteria OR Create an acceptable order with only n comparisons

I'm looking for an algorithm to sort a large number of items using the fewest comparisons. My specific case makes it unclear which of the obvious approaches is appropriate: the comparison function is slow and, because it's a human brain, non-deterministic and capable of errors.
In other words, I want to sort arbitrary items on my computer into a list from "best" to "worst" by comparing them two at a time. They could be images, strings, songs, anything. My program would display two things for me to compare. The program doesn't know anything about what is being compared; its job is just to decide which pairs to compare. So that gives the following criteria:
1. It's a comparison sort - the only time the user sees items is when comparing two of them.
2. It's an out-of-place sort - I don't want to move the actual files, so items can have placeholder values or metadata files.
3. Comparisons are slow - at least compared to a computer. Data locality won't have an effect, but comparing obvious disparities will be quick, while similar items will be slow.
4. Comparison is subjective - comparison results could vary slightly at different times.
5. Items don't have a total order - the desired outcome is an order that is "good enough" at runtime, which will vary depending on context.
6. Items will rarely be almost sorted - in fact, the goal is to get random data to an almost-sorted state.
7. Sets usually will contain runs - if every song on an album is a banger, it might be faster, because of (3), to compare them to the next album rather than to each other. Imagine a set {10.0, 10.2, 10.9, 5.0, 4.2, 6.9} where integer comparisons are fast but float comparisons are very slow.
There are many different ways to approach this problem. In addition to sorting algorithms, it's similar to creating tournament brackets and to voting systems, and there are countless ways to define and solve the problem based on various criteria. For this question I'm only interested in treating it as a sorting problem where the user is comparing two items at a time and choosing a preference. So what approach makes sense for either of the two following versions of the question?
How to choose pairs to get the best result in O(n) or fewer operations? (for example compare random pairs of items with n/2 operations, then use n/2 operations to spot check or fine-tune)
How to create the best order with additional operations but no additional comparisons (e.g. similar items are sorted into buckets or losers are removed, anything that doesn't increase the number of comparisons)
The representation of comparison results can be anything that makes the solution convenient - it can be dictionary keys corresponding to the final order, a "score" based on number of comparisons, a database, etc.
Edit: The comments have helped clarify the question: the goal is similar to something like bucket sort, samplesort, or the partitioning phase of quicksort. So the question could be rephrased as how to choose good partitions based on comparisons, but I'm also interested in any other ways of using the comparison results that wouldn't be applicable in a standard in-place comparison sort, like keeping a score for each item.
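To make the score idea concrete, here is a minimal, purely illustrative Java sketch of the first version of the question: spend an O(n) budget of random comparisons, credit each winner, then order by accumulated score. It yields a rough "good enough" order rather than a guaranteed sort, which matches criteria 5 and 6 above (ScoreRanker and the human-backed Comparator are assumptions, not part of the question):

import java.util.*;

// A sketch, not a full sort: spend an O(n) budget of random comparisons,
// credit each winner, then order best-to-worst by accumulated score.
class ScoreRanker {
    static <T> List<T> roughRank(List<T> items, Comparator<T> human, Random rnd) {
        Map<T, Integer> score = new HashMap<>();
        for (T item : items) score.put(item, 0);
        for (int i = 0; i < items.size(); i++) {           // the comparison budget
            T a = items.get(rnd.nextInt(items.size()));
            T b = items.get(rnd.nextInt(items.size()));
            if (a == b) continue;                          // don't waste a question
            T winner = human.compare(a, b) < 0 ? a : b;    // negative = first is "better"
            score.merge(winner, 1, Integer::sum);
        }
        List<T> out = new ArrayList<>(items);
        out.sort(Comparator.comparingInt(score::get).reversed());  // highest score first
        return out;
    }
}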

Why make selection sort stable? [duplicate]

I'm very curious: why is stability important or not important in sorting algorithms?
(This question is a duplicate of "What is stability in sorting algorithms and why is it important?" below; the answers appear there.)

Optimized Algorithm: Fastest Way to Derive Sets

I'm writing a program for a competition and I need to be faster than all the other competitors. For this I need a little algorithm help; ideally I'd be using the fastest algorithm.
For this problem I am given two things. The first is a list of tuples, each containing exactly two strings, where each string represents an item. The second is an integer giving how many unique items there are in total. For example:
# of items = 3
[("ball","chair"),("ball","box"),("box","chair"),("chair","box")]
(The same tuples can be repeated; they are not necessarily unique.) My program is supposed to figure out the maximum number of tuples that can "agree" when the items are sorted into two groups. That is, if all the items are broken into two ideal groups, group 1 and group 2, what is the maximum number of tuples that have their first item in group 1 and their second item in group 2?
For example, the answer to my earlier example would be 2, with "ball" in group 1 and "chair" and "box" in group 2, satisfying the first two tuples. I do not necessarily need to know which items go in which group; I just need to know the maximum number of satisfied tuples.
At the moment I'm trying a recursive approach, but it's running in O(n²), which is far too inefficient in my opinion. Does anyone have a method that could produce a faster algorithm?
Thanks!
Speed-up approaches for your task:
1. Use integers
Convert the strings to integers (store the strings in an array and use each string's position in the array to represent it in the tuples).
String[] words = {"ball", "chair", "box"};
In the tuples, ball now has number 0 (position 0 in the array), chair 1, box 2.
Comparing ints is faster than comparing Strings.
2. Avoid recursion
Recursion is slow due to the call overhead.
For example, look at the binary search algorithm in a recursive implementation, then look at how Java implements binarySearch() (with a while loop and iteration).
Recursion is helpful when problems are so complex that a non-recursive implementation is too complex for a human brain.
Iteration is faster, but not when you merely mimic recursive calls by implementing your own stack.
However, you can start with a recursive implementation; once it works and it is a suitable algorithm, then try to convert it to a non-recursive implementation.
3. If possible, avoid objects
If you want the fastest code, it now becomes ugly!
A tuple array can be stored either as an array of a class Point(x, y) or, probably faster, as a flat array of int.
Example:
(1,2), (2,3), (3,4) can be stored as the array: (1,2,2,3,3,4)
This needs much less memory, because an object needs at least 12 bytes (in Java).
Less memory means more speed: when the arrays are really big, the flat structure will hopefully fit in the processor cache, while the object array will not.
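A minimal sketch of that flat layout in Java, using the question's example data (FlatTuples is an invented name):

class FlatTuples {
    public static void main(String[] args) {
        String[] words = {"ball", "chair", "box"};   // index in this array = the item's integer id

        // ("ball","chair"), ("ball","box"), ("box","chair"), ("chair","box")
        // stored flat: pair k lives at positions 2k and 2k+1
        int[] tuples = {0, 1, 0, 2, 2, 1, 1, 2};

        for (int k = 0; k < tuples.length / 2; k++) {
            System.out.println(words[tuples[2 * k]] + " -> " + words[tuples[2 * k + 1]]);
        }
    }
}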
4. Programming language
In C it will be faster than in Java.
Maximum cut is a special case of your problem, so I doubt a quadratic algorithm for it exists. (Maximum cut is NP-complete; it corresponds to the case where every tuple (A,B) also appears in reverse as (B,A) the same number of times.)
The best strategy for you to try here is "branch and bound." It's a variant of the straightforward recursive search you've probably already coded up. You keep track of the value of the best solution you've found so far. In each recursive call, you check whether it's even possible to beat the best known solution with the choices you've fixed so far.
One thing that may help (or may hurt) is to "probe": for each as-yet-unfixed item, see if putting that item on one of the two sides leads only to suboptimal solutions; if so, you know that item needs to be on the other side.
Another useful trick is to recurse on items that appear frequently both as the first element and as the second element of your tuples.
You should pay particular attention to the "bound" step --- finding an upper bound on the best possible solution given the choices you've fixed.
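Here is a minimal Java sketch of the branch-and-bound skeleton (the bound simply counts tuples not yet definitely violated; a competitive entry would tighten it and add the probing trick above):

// Branch and bound for the tuple-partition problem: assign each item to
// group 1 or group 2, maximizing tuples (a, b) with a in group 1 and b in
// group 2.  Worst case is still exponential; the bound does the pruning.
class TuplePartition {
    private int n;          // number of items
    private int[][] pairs;  // pairs[k] = {first, second}, items encoded as ints
    private int best = 0;

    int solve(int n, int[][] pairs) {
        this.n = n;
        this.pairs = pairs;
        branch(new int[n], 0);
        return best;
    }

    // side[i]: 0 = undecided, 1 = group 1, 2 = group 2
    private void branch(int[] side, int item) {
        if (item == n) { best = Math.max(best, satisfied(side)); return; }
        if (upperBound(side) <= best) return;   // bound step: can't beat best, prune
        side[item] = 1; branch(side, item + 1);
        side[item] = 2; branch(side, item + 1);
        side[item] = 0;                         // undo before returning
    }

    private int satisfied(int[] side) {
        int s = 0;
        for (int[] p : pairs) if (side[p[0]] == 1 && side[p[1]] == 2) s++;
        return s;
    }

    // Optimistic bound: count every tuple not yet definitely violated.
    private int upperBound(int[] side) {
        int ub = 0;
        for (int[] p : pairs) if (side[p[0]] != 2 && side[p[1]] != 1) ub++;
        return ub;
    }
}

On the question's example (ball = 0, chair = 1, box = 2), new TuplePartition().solve(3, new int[][]{{0,1},{0,2},{2,1},{1,2}}) returns 2.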

What is stability in sorting algorithms and why is it important?

I'm very curious: why is stability important or not important in sorting algorithms?
A sorting algorithm is said to be stable if two objects with equal keys appear in the sorted output in the same order as they appear in the input array. Some sorting algorithms are stable by nature, like Insertion Sort, Merge Sort, and Bubble Sort, and some are not, like Heap Sort and Quick Sort.
Background: a "stable" sorting algorithm keeps the items with the same sorting key in order. Suppose we have a list of 5-letter words:
peach
straw
apple
spork
If we sort the list by just the first letter of each word, then a stable sort would produce:
apple
peach
straw
spork
In an unstable sort algorithm, straw or spork may be interchanged, but in a stable one, they stay in the same relative positions (that is, since straw appears before spork in the input, it also appears before spork in the output).
We could sort the list of words using this algorithm: stable sorting by column 5, then 4, then 3, then 2, then 1.
In the end, it will be correctly sorted. Convince yourself of that. (By the way, that algorithm is called radix sort.)
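Here is a minimal Java sketch of exactly those five stable passes over the word list above (List.sort is a stable merge sort in Java, so it serves as the per-column pass):

import java.util.*;

class WordRadixSort {
    public static void main(String[] args) {
        List<String> words = new ArrayList<>(List.of("peach", "straw", "apple", "spork"));
        // Stable-sort by column 5, then 4, 3, 2, 1 (0-based: 4 down to 0).
        for (int col = 4; col >= 0; col--) {
            final int c = col;
            words.sort(Comparator.comparingInt(w -> w.charAt(c)));
        }
        System.out.println(words);   // [apple, peach, spork, straw]
    }
}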
Now to answer your question, suppose we have a list of first and last names. We are asked to sort "by last name, then by first". We could first sort (stable or unstable) by the first name, then stable sort by the last name. After these sorts, the list is primarily sorted by the last name. However, where last names are the same, the first names are sorted.
You can't stack unstable sorts in the same fashion.
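As a concrete illustration of those two passes, a minimal Java sketch (names invented; List.sort is stable in Java):

import java.util.*;

class NameSort {
    record Person(String first, String last) {}

    public static void main(String[] args) {
        List<Person> people = new ArrayList<>(List.of(
                new Person("Carol", "Smith"),
                new Person("Alice", "Jones"),
                new Person("Bob", "Smith")));
        people.sort(Comparator.comparing(Person::first));  // pass 1: by first name
        people.sort(Comparator.comparing(Person::last));   // pass 2: stable, so the Smiths stay Bob, Carol
        people.forEach(System.out::println);
        // Alice Jones, then Bob Smith, then Carol Smith
    }
}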
A stable sorting algorithm is one that keeps identical elements in the same order as they appear in the input, whilst an unstable sort may not. (I thank my algorithms lecturer Didem Gozupek for providing insight into algorithms.)
Stable Sorting Algorithms:
Insertion Sort
Merge Sort
Bubble Sort
Tim Sort
Counting Sort
Block Sort
Quadsort
Library Sort
Cocktail shaker Sort
Gnome Sort
Odd–even Sort
Unstable Sorting Algorithms:
Heap sort
Selection sort
Shell sort
Quick sort
Introsort (unstable because of its Quicksort phase)
Tree sort
Cycle sort
Smoothsort
Tournament sort (unstable because of its Heapsort phase)
Sorting stability means that records with the same key retain their relative order before and after the sort.
So stability matters if, and only if, the problem you're solving requires retention of that relative order.
If you don't need stability, you can use a fast, memory-sipping algorithm from a library, like heapsort or quicksort, and forget about it.
If you need stability, it's more complicated. Stable algorithms have higher big-O CPU and/or memory usage than unstable algorithms. So when you have a large data set, you have to pick between beating up the CPU or the memory. If you're constrained on both CPU and memory, you have a problem. A good compromise stable algorithm is a binary tree sort; the Wikipedia article has a pathetically easy C++ implementation based on the STL.
You can make an unstable algorithm into a stable one by adding the original record number as the last-place key for each record.
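A minimal Java sketch of that trick, sorting an index array with the original position as the tiebreaker (illustrative; note that Java's built-in object sort is already stable, so this only matters when the underlying sort is not):

import java.util.*;

class Stabilize {
    // Sort positions, breaking key ties by original index, then permute.
    static <T> void stableSort(T[] a, Comparator<T> byKey) {
        Integer[] order = new Integer[a.length];
        for (int i = 0; i < a.length; i++) order[i] = i;
        Arrays.sort(order, (i, j) -> {
            int cmp = byKey.compare(a[i], a[j]);
            return cmp != 0 ? cmp : Integer.compare(i, j);  // original index as last-place key
        });
        T[] copy = a.clone();
        for (int i = 0; i < a.length; i++) a[i] = copy[order[i]];
    }
}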
It depends on what you do.
Imagine you've got some people records with a first and a last name field. First you sort the list by first name. If you then sort the list with a stable algorithm by last name, you'll have a list sorted by first name AND last name.
There are a few reasons why stability can be important. One is that, if two records don't need to be swapped, swapping them anyway causes a memory update: a page is marked dirty and needs to be re-written to disk (or another slow medium).
A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in sorted output as they appear in the input unsorted array. Some sorting algorithms are stable by nature like Insertion sort, Merge Sort, Bubble Sort, etc. And some sorting algorithms are not, like Heap Sort, Quick Sort, etc.
However, any given sorting algorithm which is not stable can be modified to be stable. There can be algorithm-specific ways to make it stable, but in general any comparison-based sorting algorithm which is not stable by nature can be modified to be stable by changing the key comparison operation so that the comparison of two keys considers position as a factor for objects with equal keys.
References:
http://www.math.uic.edu/~leon/cs-mcs401-s08/handouts/stability.pdf
http://en.wikipedia.org/wiki/Sorting_algorithm#Stability
I know there are many answers for this, but to me, this answer, by Robert Harvey, summarized it much more clearly:
A stable sort is one which preserves the original order of the input set, where the [unstable] algorithm does not distinguish between two or more items.
Some more examples of reasons for wanting stable sorts. Databases are a common example. Take the case of a transaction database that includes last|first name, date|time of purchase, item number, and price. Say the database is normally sorted by date|time. Then a query is made for a sorted copy of the database by last|first name. Since a stable sort preserves the original order, even though the query's comparison only involves last|first name, the transactions for each last|first name will remain in date|time order.
A similar example is classic Excel, which limited sorts to 3 columns at a time. To sort 6 columns, a sort is done with the least significant 3 columns, followed by a sort with the most significant 3 columns.
A classic example of a stable radix sort is a card sorter, used to sort by a field of base-10 numeric columns. The cards are sorted from the least significant digit to the most significant digit. On each pass, a deck of cards is read and separated into 10 different bins according to the digit in that column. Then the 10 bins of cards are put back into the input hopper in order ("0" cards first, "9" cards last). Then another pass is done on the next column, until all columns are sorted. Actual card sorters have more than 10 bins, since there are 12 zones on a card, a column can be blank, and there is a mis-read bin. To sort letters, two passes per column are needed: a first pass for the digit, and a second pass for the zone (12 / 11).
Later (1937), there were card collating (merging) machines that could merge two decks of cards by comparing fields. The input was two already-sorted decks of cards: a master deck and an update deck. The collator merged the two decks into a new master bin and an archive bin, which was optionally used for master duplicates so that the new master bin would only have update cards in case of duplicates. This was probably the basis for the idea behind the original (bottom-up) merge sort.
If you assume that what you are sorting are just numbers and that only their values identify/distinguish them (i.e. elements with the same value are identical), then the stability issue of sorting is meaningless.
However, objects with the same sorting priority may be distinct, and sometimes their relative order is meaningful information. In this case, an unstable sort generates problems.
For example, you have a list of data containing the time cost [T] of each player to clear a maze of level [L] in a game.
Suppose we need to rank the players by how fast they clear the maze. However, an additional rule applies: players who clear a higher-level maze always have a higher rank, no matter how long the time cost is.
Of course you might try to map the paired value [T, L] to a real number [R] with some algorithm which follows the rules, and then rank all players by their [R] value.
However, if stable sorting is feasible, you may simply sort the entire list by [T] (faster players first) and then stable-sort it by [L]. In this case, the relative order of the players (by time cost) will not be changed after you group them by the level of maze they cleared.
PS: of course the sort-twice approach is not the best solution to this particular problem, but to explain the poster's question it should be enough.
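To make the two passes concrete, a minimal Java sketch with invented sample data (List.sort in Java is stable, which is what keeps the within-level time order intact):

import java.util.*;

class MazeRanking {
    record Player(String name, int level, double time) {}

    public static void main(String[] args) {
        List<Player> players = new ArrayList<>(List.of(
                new Player("Ann", 2, 41.0),
                new Player("Bob", 3, 65.5),
                new Player("Cho", 3, 44.2),
                new Player("Dee", 2, 38.7)));
        players.sort(Comparator.comparingDouble(Player::time));           // pass 1: by [T], fastest first
        players.sort(Comparator.comparingInt(Player::level).reversed());  // pass 2: by [L]; stable, so ties keep time order
        players.forEach(System.out::println);
        // Cho (3, 44.2), Bob (3, 65.5), Dee (2, 38.7), Ann (2, 41.0)
    }
}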
A stable sort will always return the same solution (permutation) on the same input.
For instance, [2,1,2] will be sorted by a stable sort as the permutation [2,1,3] (first comes index 2, then index 1, then index 3 in the sorted output). That means the output is always shuffled the same way. Another non-stable but still correct permutation is [2,3,1].
Quick sort is not a stable sort, and the permutation of equal elements depends on the algorithm for picking the pivot. Some implementations pick it at random, and that can make quicksort yield different permutations on the same input using the same algorithm.
A stable sorting algorithm, in contrast, is necessarily deterministic.

Inserting items in a list that is frequently insertion sorted

I have a list that is frequently insertion sorted. Is there a good position (other than the end) for adding to this list to minimize the work that the insertion sort has to do?
The best place to insert would be where the element belongs in the sorted list. This would be similar to preemptively insertion sorting.
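With an already-sorted list, that position can be found with a binary search instead of a scan. A minimal Java sketch (Collections.binarySearch encodes a missing key's insertion point as (-(insertion point) - 1)):

import java.util.*;

class SortedInsert {
    // Insert x at its sorted position, so no later insertion sort is needed.
    static <T extends Comparable<? super T>> void insertSorted(List<T> sorted, T x) {
        int pos = Collections.binarySearch(sorted, x);
        if (pos < 0) pos = -pos - 1;   // decode the insertion point
        sorted.add(pos, x);
    }
}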
Your question doesn't quite make sense: if the list is kept insertion sorted, then by definition you can't simply append to the end; the element will still end up in the place where it belongs (otherwise the list wouldn't be sorted).
If you have to add lots of elements, then the best solution is to clone the list, add all elements, sort the new list once and then replace the first list with the clone.
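A minimal Java sketch of that batch strategy (names are illustrative):

import java.util.*;

// Batch strategy: collect the new elements, then do one sort.  A single
// O((n+k) log(n+k)) sort beats k separate sorted insertions, each of which
// shifts O(n) elements in an array-backed list.
class BatchedInsert {
    static <T extends Comparable<? super T>> List<T> addAllAndSort(List<T> sorted, List<T> pending) {
        List<T> merged = new ArrayList<>(sorted);  // clone the original list
        merged.addAll(pending);                    // append all new elements
        Collections.sort(merged);                  // sort the clone once
        return merged;                             // replace the original with this
    }
}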
[EDIT] In reply to your comments: after doing a couple of plain appends, you must sort the list before you can do the next sorted insertion. So the question isn't how to make the sorted insertion cheaper, but how to make the sort between appends and sorted insertions cheaper.
The answer is that most sorting algorithms do pretty well with partially sorted lists. The questions you need to ask are: which sorting algorithm is used, what properties does it have, and, most importantly, why should you care?
The last question means that you should measure performance before you do any kind of optimization, because unless it's based on actual numbers, there is a 90% chance that it will hurt more than it helps.
Back to the sorting. Java uses a version of quicksort to sort collections. Quicksort will select a pivot element to partition the collection. This selection is crucial for the performance of the algorithm. For best performance, the pivot element should be as close to the element in the middle of the result as possible. Usually, quicksort uses an element from the middle of the current partition as a pivot element. Also, quicksort will start processing the list with the small indexes.
So adding the new elements at the end might not give you good performance. It won't affect the pivot element selection but quicksort will look at the new elements after it has checked all the sorted elements already. Adding the new elements in the middle will affect the pivot selection and we can't really tell whether that will have an influence on the performance or not. My instinctive guess is that the pivot element will be better if quicksort finds sorted elements in the middle of the partitions.
That leaves adding new elements at the beginning. This way, quicksort will usually find a perfect pivot element (since the middle of the list will be sorted) and it will pick up the new elements first. The drawback is that you must copy the whole array for every insert. There are two ways to avoid that: a) as I said elsewhere, today's PCs copy huge amounts of RAM in almost no time at all, so you can just ignore this small performance hit; b) you can use a second ArrayList, put all the new elements into it, and then use addAll(). Java will do some optimizations internally for this case and move the existing elements only once.
[EDIT2] I completely misunderstood your question. For the insertion sort algorithm, the best place is probably somewhere in the middle. This should halve the chance that you have to move an element through the whole list. But since I'm not 100% sure, I suggest creating a couple of small tests to verify this.
